| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64 271-2.05k) | num_tokens_diff (int64 47-1.02k) |
|---|---|---|---|---|---|---|---|---|
gh_patches_debug_28035 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6293 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Build image pyramids not always working with other images
## Description
Using the *[Build image pyramids](https://scikit-image.org/docs/dev/auto_examples/transform/plot_pyramid.html)* example with a random image is not always working.
## Way to reproduce
### hand.jpg

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import pyramid_gaussian
import imageio as io
image = io.imread('hand.jpg') # data.astronaut()
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))
composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
composite_image[:rows, :cols, :] = pyramid[0]
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
i_row += n_rows
fig, ax = plt.subplots()
ax.imshow(composite_image)
plt.show()
```
## Version information
```python
3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.18362-SP0
scikit-image version: 0.16.1
numpy version: 1.17.2
```
```python
Traceback (most recent call last):
File "D:\Vincent\Bureau\Patern recongnition and image analysis\Patern recognition and patern analysis\LAB_1\plot_pyramid.py", line 44, in <module>
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
ValueError: could not broadcast input array from shape (2,2,3) into shape (1,2,3)
```
## Possible solution
I was able to make it works for the same RGB image but this code is not adapted for BW and RGBA.
```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import pyramid_gaussian
import imageio as io
image = io.imread('hand.jpg') # data.astronaut()
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))
composite_image = np.zeros((rows, cols + cols // 2, dim), dtype=np.double)
composite_image[:rows, :cols, :] = pyramid[0]
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
# Check the dimension before assignement
if(composite_image[i_row:i_row + n_rows, cols:cols + n_cols].shape==p.shape):
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
i_row += n_rows
else:
break
fig, ax = plt.subplots()
ax.imshow(composite_image)
plt.show()
```
### Result

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/examples/transform/plot_pyramid.py`
Content:
```
1 """
2 ====================
3 Build image pyramids
4 ====================
5
6 The ``pyramid_gaussian`` function takes an image and yields successive images
7 shrunk by a constant scale factor. Image pyramids are often used, e.g., to
8 implement algorithms for denoising, texture discrimination, and scale-invariant
9 detection.
10
11 """
12 import numpy as np
13 import matplotlib.pyplot as plt
14
15 from skimage import data
16 from skimage.transform import pyramid_gaussian
17
18
19 image = data.astronaut()
20 rows, cols, dim = image.shape
21 pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))
22
23 composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
24
25 composite_image[:rows, :cols, :] = pyramid[0]
26
27 i_row = 0
28 for p in pyramid[1:]:
29 n_rows, n_cols = p.shape[:2]
30 composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
31 i_row += n_rows
32
33 fig, ax = plt.subplots()
34 ax.imshow(composite_image)
35 plt.show()
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/examples/transform/plot_pyramid.py b/doc/examples/transform/plot_pyramid.py
--- a/doc/examples/transform/plot_pyramid.py
+++ b/doc/examples/transform/plot_pyramid.py
@@ -9,6 +9,8 @@
detection.
"""
+import math
+
import numpy as np
import matplotlib.pyplot as plt
@@ -20,10 +22,31 @@
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))
-composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
-
+#####################################################################
+# Generate a composite image for visualization
+# ============================================
+#
+# For visualization, we generate a composite image with the same number of rows
+# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then
+# have space to stack all of the dowsampled images to the right of the
+# original.
+#
+# Note: The sum of the number of rows in all dowsampled images in the pyramid
+# may sometimes exceed the original image size in cases when image.shape[0] is
+# not a power of two. We expand the number of rows in the composite slightly as
+# necessary to account for this. Expansion beyond the number of rows in the
+# original will also be necessary to cover cases where downscale < 2.
+
+# determine the total number of rows and columns for the composite
+composite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))
+composite_cols = cols + pyramid[1].shape[1]
+composite_image = np.zeros((composite_rows, composite_cols, 3),
+ dtype=np.double)
+
+# store the original to the left
composite_image[:rows, :cols, :] = pyramid[0]
+# stack all downsampled images in a column to the right of the original
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
| {"golden_diff": "diff --git a/doc/examples/transform/plot_pyramid.py b/doc/examples/transform/plot_pyramid.py\n--- a/doc/examples/transform/plot_pyramid.py\n+++ b/doc/examples/transform/plot_pyramid.py\n@@ -9,6 +9,8 @@\n detection.\n \n \"\"\"\n+import math\n+\n import numpy as np\n import matplotlib.pyplot as plt\n \n@@ -20,10 +22,31 @@\n rows, cols, dim = image.shape\n pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n \n-composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\n-\n+#####################################################################\n+# Generate a composite image for visualization\n+# ============================================\n+#\n+# For visualization, we generate a composite image with the same number of rows\n+# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then\n+# have space to stack all of the dowsampled images to the right of the\n+# original.\n+#\n+# Note: The sum of the number of rows in all dowsampled images in the pyramid\n+# may sometimes exceed the original image size in cases when image.shape[0] is\n+# not a power of two. We expand the number of rows in the composite slightly as\n+# necessary to account for this. Expansion beyond the number of rows in the\n+# original will also be necessary to cover cases where downscale < 2.\n+\n+# determine the total number of rows and columns for the composite\n+composite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))\n+composite_cols = cols + pyramid[1].shape[1]\n+composite_image = np.zeros((composite_rows, composite_cols, 3),\n+ dtype=np.double)\n+\n+# store the original to the left\n composite_image[:rows, :cols, :] = pyramid[0]\n \n+# stack all downsampled images in a column to the right of the original\n i_row = 0\n for p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n", "issue": "Build image pyramids not always working with other images\n## Description\r\nUsing the *[Build image pyramids](https://scikit-image.org/docs/dev/auto_examples/transform/plot_pyramid.html)* example with a random image is not always working.\r\n\r\n## Way to reproduce\r\n### hand.jpg\r\n\r\n```python\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nfrom skimage import data\r\nfrom skimage.transform import pyramid_gaussian\r\n\r\nimport imageio as io\r\n\r\nimage = io.imread('hand.jpg') # data.astronaut()\r\nrows, cols, dim = image.shape\r\npyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))\r\n\r\ncomposite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\r\n\r\ncomposite_image[:rows, :cols, :] = pyramid[0]\r\n\r\ni_row = 0\r\nfor p in pyramid[1:]:\r\n n_rows, n_cols = p.shape[:2]\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\n i_row += n_rows\r\n\r\nfig, ax = plt.subplots()\r\nax.imshow(composite_image)\r\nplt.show()\r\n```\r\n\r\n\r\n## Version information\r\n```python\r\n3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]\r\nWindows-10-10.0.18362-SP0\r\nscikit-image version: 0.16.1\r\nnumpy version: 1.17.2\r\n```\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"D:\\Vincent\\Bureau\\Patern recongnition and image analysis\\Patern recognition and patern analysis\\LAB_1\\plot_pyramid.py\", line 44, in <module>\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\nValueError: could not broadcast input array from shape (2,2,3) into shape (1,2,3)\r\n```\r\n## Possible solution\r\nI was able to make it works for the same RGB image but 
this code is not adapted for BW and RGBA.\r\n\r\n```python\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nfrom skimage import data\r\nfrom skimage.transform import pyramid_gaussian\r\nimport imageio as io\r\n\r\nimage = io.imread('hand.jpg') # data.astronaut()\r\n\r\nrows, cols, dim = image.shape\r\npyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))\r\n\r\ncomposite_image = np.zeros((rows, cols + cols // 2, dim), dtype=np.double)\r\n\r\ncomposite_image[:rows, :cols, :] = pyramid[0]\r\n\r\ni_row = 0\r\nfor p in pyramid[1:]:\r\n n_rows, n_cols = p.shape[:2]\r\n # Check the dimension before assignement\r\n if(composite_image[i_row:i_row + n_rows, cols:cols + n_cols].shape==p.shape):\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\n i_row += n_rows\r\n else:\r\n break\r\n \r\nfig, ax = plt.subplots()\r\nax.imshow(composite_image)\r\nplt.show()\r\n```\r\n### Result\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n====================\nBuild image pyramids\n====================\n\nThe ``pyramid_gaussian`` function takes an image and yields successive images\nshrunk by a constant scale factor. Image pyramids are often used, e.g., to\nimplement algorithms for denoising, texture discrimination, and scale-invariant\ndetection.\n\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.transform import pyramid_gaussian\n\n\nimage = data.astronaut()\nrows, cols, dim = image.shape\npyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n\ncomposite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\n\ncomposite_image[:rows, :cols, :] = pyramid[0]\n\ni_row = 0\nfor p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\n i_row += n_rows\n\nfig, ax = plt.subplots()\nax.imshow(composite_image)\nplt.show()\n", "path": "doc/examples/transform/plot_pyramid.py"}], "after_files": [{"content": "\"\"\"\n====================\nBuild image pyramids\n====================\n\nThe ``pyramid_gaussian`` function takes an image and yields successive images\nshrunk by a constant scale factor. Image pyramids are often used, e.g., to\nimplement algorithms for denoising, texture discrimination, and scale-invariant\ndetection.\n\n\"\"\"\nimport math\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.transform import pyramid_gaussian\n\n\nimage = data.astronaut()\nrows, cols, dim = image.shape\npyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n\n#####################################################################\n# Generate a composite image for visualization\n# ============================================\n#\n# For visualization, we generate a composite image with the same number of rows\n# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then\n# have space to stack all of the dowsampled images to the right of the\n# original.\n#\n# Note: The sum of the number of rows in all dowsampled images in the pyramid\n# may sometimes exceed the original image size in cases when image.shape[0] is\n# not a power of two. We expand the number of rows in the composite slightly as\n# necessary to account for this. 
Expansion beyond the number of rows in the\n# original will also be necessary to cover cases where downscale < 2.\n\n# determine the total number of rows and columns for the composite\ncomposite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))\ncomposite_cols = cols + pyramid[1].shape[1]\ncomposite_image = np.zeros((composite_rows, composite_cols, 3),\n dtype=np.double)\n\n# store the original to the left\ncomposite_image[:rows, :cols, :] = pyramid[0]\n\n# stack all downsampled images in a column to the right of the original\ni_row = 0\nfor p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\n i_row += n_rows\n\nfig, ax = plt.subplots()\nax.imshow(composite_image)\nplt.show()\n", "path": "doc/examples/transform/plot_pyramid.py"}]} | 1,404 | 450 |
gh_patches_debug_25326 | rasdani/github-patches | git_diff | mlflow__mlflow-12224 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] uc_volume_dataset_source only validates file paths, not folder paths
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Local machine
### Willingness to contribute
Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
### MLflow version
mlflow-2.12.2
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
- **Python version**:
- **yarn version, if running the dev UI**:
### Describe the problem
https://github.com/mlflow/mlflow/blob/72df4a2a0f44c52179dfbdc7d47ad10f58ceec39/mlflow/data/uc_volume_dataset_source.py#L28 doesn't verify folder paths, only file paths
### Tracking information
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```shell
REPLACE_ME
```
### Code to reproduce issue
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Stack trace
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Other info / logs
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/data/uc_volume_dataset_source.py`
Content:
```
1 import logging
2 from typing import Any, Dict
3
4 from mlflow.data.dataset_source import DatasetSource
5 from mlflow.exceptions import MlflowException
6
7 _logger = logging.getLogger(__name__)
8
9
10 class UCVolumeDatasetSource(DatasetSource):
11 """Represents the source of a dataset stored in Databricks Unified Catalog Volume.
12
13 If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`
14 instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful
15 when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want
16 to log the source of dataset to MLflow without loading the dataset.
17
18 Args:
19 path: the UC path of your data. It should be a valid UC path following the pattern
20 "/Volumes/{catalog}/{schema}/{volume}/{file_path}". For example,
21 "/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json".
22 """
23
24 def __init__(self, path: str):
25 self._verify_uc_path_is_valid(path)
26 self.path = path
27
28 def _verify_uc_path_is_valid(self, path):
29 """Verify if the path exists in Databricks Unified Catalog."""
30 try:
31 from databricks.sdk import WorkspaceClient
32
33 w = WorkspaceClient()
34 except ImportError:
35 _logger.warning(
36 "Cannot verify the path of `UCVolumeDatasetSource` because of missing"
37 "`databricks-sdk`. Please install `databricks-sdk` via "
38 "`pip install -U databricks-sdk`. This does not block creating "
39 "`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid."
40 )
41 return
42 except Exception:
43 _logger.warning(
44 "Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure "
45 "with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. "
46 "This does not block creating `UCVolumeDatasetSource`, but your "
47 "`UCVolumeDatasetSource` might be invalid."
48 )
49 return
50
51 try:
52 w.files.get_metadata(path)
53 except Exception:
54 raise MlflowException(f"{path} does not exist in Databricks Unified Catalog.")
55
56 @staticmethod
57 def _get_source_type() -> str:
58 return "uc_volume"
59
60 @staticmethod
61 def _can_resolve(raw_source: Any):
62 raise NotImplementedError
63
64 @classmethod
65 def _resolve(cls, raw_source: str):
66 raise NotImplementedError
67
68 def to_dict(self) -> Dict[Any, Any]:
69 return {"path": self.path}
70
71 @classmethod
72 def from_dict(cls, source_dict: Dict[Any, Any]) -> "UCVolumeDatasetSource":
73 return cls(**source_dict)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mlflow/data/uc_volume_dataset_source.py b/mlflow/data/uc_volume_dataset_source.py
--- a/mlflow/data/uc_volume_dataset_source.py
+++ b/mlflow/data/uc_volume_dataset_source.py
@@ -22,10 +22,10 @@
"""
def __init__(self, path: str):
- self._verify_uc_path_is_valid(path)
self.path = path
+ self._verify_uc_path_is_valid()
- def _verify_uc_path_is_valid(self, path):
+ def _verify_uc_path_is_valid(self):
"""Verify if the path exists in Databricks Unified Catalog."""
try:
from databricks.sdk import WorkspaceClient
@@ -49,9 +49,17 @@
return
try:
- w.files.get_metadata(path)
+ # Check if `self.path` points to a valid UC file.
+ w.files.get_metadata(self.path)
except Exception:
- raise MlflowException(f"{path} does not exist in Databricks Unified Catalog.")
+ try:
+ # Check if `self.path` points to a valid UC directory.
+ w.files.get_directory_metadata(self.path)
+ # Append a slash to `self.path` to indicate it's a directory.
+ self.path += "/" if not self.path.endswith("/") else ""
+ except Exception:
+ # Neither file nor directory exists, we throw an exception.
+ raise MlflowException(f"{self.path} does not exist in Databricks Unified Catalog.")
@staticmethod
def _get_source_type() -> str:
| {"golden_diff": "diff --git a/mlflow/data/uc_volume_dataset_source.py b/mlflow/data/uc_volume_dataset_source.py\n--- a/mlflow/data/uc_volume_dataset_source.py\n+++ b/mlflow/data/uc_volume_dataset_source.py\n@@ -22,10 +22,10 @@\n \"\"\"\n \n def __init__(self, path: str):\n- self._verify_uc_path_is_valid(path)\n self.path = path\n+ self._verify_uc_path_is_valid()\n \n- def _verify_uc_path_is_valid(self, path):\n+ def _verify_uc_path_is_valid(self):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n@@ -49,9 +49,17 @@\n return\n \n try:\n- w.files.get_metadata(path)\n+ # Check if `self.path` points to a valid UC file.\n+ w.files.get_metadata(self.path)\n except Exception:\n- raise MlflowException(f\"{path} does not exist in Databricks Unified Catalog.\")\n+ try:\n+ # Check if `self.path` points to a valid UC directory.\n+ w.files.get_directory_metadata(self.path)\n+ # Append a slash to `self.path` to indicate it's a directory.\n+ self.path += \"/\" if not self.path.endswith(\"/\") else \"\"\n+ except Exception:\n+ # Neither file nor directory exists, we throw an exception.\n+ raise MlflowException(f\"{self.path} does not exist in Databricks Unified Catalog.\")\n \n @staticmethod\n def _get_source_type() -> str:\n", "issue": "[BUG] uc_volume_dataset_source only validates file paths, not folder paths\n### Issues Policy acknowledgement\n\n- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)\n\n### Where did you encounter this bug?\n\nLocal machine\n\n### Willingness to contribute\n\nYes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.\n\n### MLflow version\n\nmlflow-2.12.2\n\n### System information\n\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:\r\n- **Python version**:\r\n- **yarn version, if running the dev UI**:\r\n\n\n### Describe the problem\n\nhttps://github.com/mlflow/mlflow/blob/72df4a2a0f44c52179dfbdc7d47ad10f58ceec39/mlflow/data/uc_volume_dataset_source.py#L28 doesn't verify folder paths, only file paths\n\n### Tracking information\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```shell\r\nREPLACE_ME\r\n```\r\n\n\n### Code to reproduce issue\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### Stack trace\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### Other info / logs\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### What component(s) does this bug affect?\n\n- [ ] `area/artifacts`: Artifact stores and artifact logging\n- [ ] `area/build`: Build and test infrastructure for MLflow\n- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations\n- [ ] `area/docs`: MLflow documentation pages\n- [ ] `area/examples`: Example code\n- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\n- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates\n- [ ] `area/projects`: MLproject format, project running backends\n- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs\n- [ ] `area/server-infra`: MLflow Tracking server backend\n- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging\n\n### What 
interface(s) does this bug affect?\n\n- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\n- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\n- [ ] `area/windows`: Windows support\n\n### What language(s) does this bug affect?\n\n- [ ] `language/r`: R APIs and clients\n- [ ] `language/java`: Java APIs and clients\n- [ ] `language/new`: Proposals for new client languages\n\n### What integration(s) does this bug affect?\n\n- [ ] `integrations/azure`: Azure and Azure ML integrations\n- [ ] `integrations/sagemaker`: SageMaker integrations\n- [ ] `integrations/databricks`: Databricks integrations\n", "before_files": [{"content": "import logging\nfrom typing import Any, Dict\n\nfrom mlflow.data.dataset_source import DatasetSource\nfrom mlflow.exceptions import MlflowException\n\n_logger = logging.getLogger(__name__)\n\n\nclass UCVolumeDatasetSource(DatasetSource):\n \"\"\"Represents the source of a dataset stored in Databricks Unified Catalog Volume.\n\n If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`\n instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful\n when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want\n to log the source of dataset to MLflow without loading the dataset.\n\n Args:\n path: the UC path of your data. It should be a valid UC path following the pattern\n \"/Volumes/{catalog}/{schema}/{volume}/{file_path}\". For example,\n \"/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json\".\n \"\"\"\n\n def __init__(self, path: str):\n self._verify_uc_path_is_valid(path)\n self.path = path\n\n def _verify_uc_path_is_valid(self, path):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n\n w = WorkspaceClient()\n except ImportError:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` because of missing\"\n \"`databricks-sdk`. Please install `databricks-sdk` via \"\n \"`pip install -U databricks-sdk`. This does not block creating \"\n \"`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid.\"\n )\n return\n except Exception:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure \"\n \"with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. 
\"\n \"This does not block creating `UCVolumeDatasetSource`, but your \"\n \"`UCVolumeDatasetSource` might be invalid.\"\n )\n return\n\n try:\n w.files.get_metadata(path)\n except Exception:\n raise MlflowException(f\"{path} does not exist in Databricks Unified Catalog.\")\n\n @staticmethod\n def _get_source_type() -> str:\n return \"uc_volume\"\n\n @staticmethod\n def _can_resolve(raw_source: Any):\n raise NotImplementedError\n\n @classmethod\n def _resolve(cls, raw_source: str):\n raise NotImplementedError\n\n def to_dict(self) -> Dict[Any, Any]:\n return {\"path\": self.path}\n\n @classmethod\n def from_dict(cls, source_dict: Dict[Any, Any]) -> \"UCVolumeDatasetSource\":\n return cls(**source_dict)\n", "path": "mlflow/data/uc_volume_dataset_source.py"}], "after_files": [{"content": "import logging\nfrom typing import Any, Dict\n\nfrom mlflow.data.dataset_source import DatasetSource\nfrom mlflow.exceptions import MlflowException\n\n_logger = logging.getLogger(__name__)\n\n\nclass UCVolumeDatasetSource(DatasetSource):\n \"\"\"Represents the source of a dataset stored in Databricks Unified Catalog Volume.\n\n If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`\n instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful\n when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want\n to log the source of dataset to MLflow without loading the dataset.\n\n Args:\n path: the UC path of your data. It should be a valid UC path following the pattern\n \"/Volumes/{catalog}/{schema}/{volume}/{file_path}\". For example,\n \"/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json\".\n \"\"\"\n\n def __init__(self, path: str):\n self.path = path\n self._verify_uc_path_is_valid()\n\n def _verify_uc_path_is_valid(self):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n\n w = WorkspaceClient()\n except ImportError:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` because of missing\"\n \"`databricks-sdk`. Please install `databricks-sdk` via \"\n \"`pip install -U databricks-sdk`. This does not block creating \"\n \"`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid.\"\n )\n return\n except Exception:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure \"\n \"with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. 
\"\n \"This does not block creating `UCVolumeDatasetSource`, but your \"\n \"`UCVolumeDatasetSource` might be invalid.\"\n )\n return\n\n try:\n # Check if `self.path` points to a valid UC file.\n w.files.get_metadata(self.path)\n except Exception:\n try:\n # Check if `self.path` points to a valid UC directory.\n w.files.get_directory_metadata(self.path)\n # Append a slash to `self.path` to indicate it's a directory.\n self.path += \"/\" if not self.path.endswith(\"/\") else \"\"\n except Exception:\n # Neither file nor directory exists, we throw an exception.\n raise MlflowException(f\"{self.path} does not exist in Databricks Unified Catalog.\")\n\n @staticmethod\n def _get_source_type() -> str:\n return \"uc_volume\"\n\n @staticmethod\n def _can_resolve(raw_source: Any):\n raise NotImplementedError\n\n @classmethod\n def _resolve(cls, raw_source: str):\n raise NotImplementedError\n\n def to_dict(self) -> Dict[Any, Any]:\n return {\"path\": self.path}\n\n @classmethod\n def from_dict(cls, source_dict: Dict[Any, Any]) -> \"UCVolumeDatasetSource\":\n return cls(**source_dict)\n", "path": "mlflow/data/uc_volume_dataset_source.py"}]} | 1,752 | 349 |
gh_patches_debug_24966 | rasdani/github-patches | git_diff | chainer__chainer-2721 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
resuming issue of LinearShift
Same issue as #2680
```
import chainer
from chainer import iterators
from chainer import optimizers
from chainer import training
from chainer.training import extensions
from chainer import serializers
class DummyModel(chainer.Chain):
def __call__(self, x):
return x
def setup_trainer(iteration):
model = DummyModel()
optimizer = optimizers.SGD()
optimizer.setup(model)
iterator = iterators.SerialIterator([1, 2, 3], 1)
updater = training.StandardUpdater(iterator, optimizer)
trainer = training.Trainer(updater, (iteration, 'iteration'), out='.')
trainer.extend(extensions.LogReport(trigger=(1, 'iteration')))
trainer.extend(extensions.observe_lr(), trigger=(1, 'iteration'))
trainer.extend(
extensions.PrintReport(['iteration', 'lr']),
trigger=(1, 'iteration'))
trainer.extend(
extensions.LinearShift('lr', (2, 1), (5, 15)),
trigger=(1, 'iteration'))
return trainer
trainer = setup_trainer(10)
trainer.run()
serializers.save_npz('tmp', trainer)
# iteration lr
# 1 2
# 2 2
# 3 2
# 4 2
# 5 2
# 6 2
# 7 1.9
# 8 1.8
# 9 1.7
# 10 1.6
resumed_trainer = setup_trainer(20)
serializers.load_npz('tmp', resumed_trainer)
resumed_trainer.run()
# iteration lr
# 1 2
# 2 2
# 3 2
# 4 2
# 5 2
# 6 2
# 7 1.9
# 8 1.8
# 9 1.7
# 10 1.6
# 11 1.4 (lr = 1.5 is skipped)
# 12 1.3
# 13 1.2
# 14 1.1
# 15 1
# 16 1
# 17 1
# 18 1
# 19 1
# 20 1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/training/extensions/linear_shift.py`
Content:
```
1 from __future__ import division
2
3 from chainer.training import extension
4
5
6 class LinearShift(extension.Extension):
7
8 """Trainer extension to change an optimizer attribute linearly.
9
10 This extension changes an optimizer attribute from the first value to the
11 last value linearly within a specified duration. The typical use case is
12 warming up of the momentum coefficient.
13
14 For example, suppose that this extension is called at every iteration, and
15 ``value_range == (x, y)`` and ``time_range == (i, j)``. Then, this
16 extension keeps the attribute to be ``x`` up to the ``i``-th iteration,
17 linearly shifts the value to ``y`` by the ``j``-th iteration, and then
18 keeps the value to be ``y`` after the ``j``-th iteration.
19
20 This extension is also called before the training loop starts by default.
21
22 Args:
23 attr (str): Name of the optimizer attribute to adjust.
24 value_range (tuple of float): The first and the last values of the
25 attribute.
26 time_range (tuple of ints): The first and last counts of calls in which
27 the attribute is adjusted.
28 optimizer (~chainer.Optimizer): Target optimizer object. If it is None,
29 the main optimizer of the trainer is used.
30
31 """
32 invoke_before_training = True
33
34 def __init__(self, attr, value_range, time_range, optimizer=None):
35 self._attr = attr
36 self._value_range = value_range
37 self._time_range = time_range
38 self._optimizer = optimizer
39 self._t = 0
40
41 def __call__(self, trainer):
42 optimizer = self._optimizer or trainer.updater.get_optimizer('main')
43 t1, t2 = self._time_range
44 v1, v2 = self._value_range
45
46 if self._t <= t1:
47 value = v1
48 elif self._t >= t2:
49 value = v2
50 else:
51 rate = (self._t - t1) / (t2 - t1)
52 value = v1 + rate * (v2 - v1)
53 setattr(optimizer, self._attr, value)
54
55 self._t += 1
56
57 def serialize(self, serializer):
58 self._t = serializer('_t', self._t)
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/training/extensions/linear_shift.py b/chainer/training/extensions/linear_shift.py
--- a/chainer/training/extensions/linear_shift.py
+++ b/chainer/training/extensions/linear_shift.py
@@ -36,23 +36,34 @@
self._value_range = value_range
self._time_range = time_range
self._optimizer = optimizer
- self._t = 0
+ self._t = 1
+ self._before_training = True
def __call__(self, trainer):
optimizer = self._optimizer or trainer.updater.get_optimizer('main')
+
+ if self._before_training:
+ self._before_training = False
+ value = self._compute_value(self._t - 1)
+ else:
+ value = self._compute_value(self._t)
+ self._t += 1
+
+ setattr(optimizer, self._attr, value)
+
+ def serialize(self, serializer):
+ self._t = serializer('_t', self._t)
+
+ def _compute_value(self, t):
t1, t2 = self._time_range
v1, v2 = self._value_range
- if self._t <= t1:
+ if t <= t1:
value = v1
- elif self._t >= t2:
+ elif t >= t2:
value = v2
else:
- rate = (self._t - t1) / (t2 - t1)
+ rate = (t - t1) / (t2 - t1)
value = v1 + rate * (v2 - v1)
- setattr(optimizer, self._attr, value)
- self._t += 1
-
- def serialize(self, serializer):
- self._t = serializer('_t', self._t)
+ return value
| {"golden_diff": "diff --git a/chainer/training/extensions/linear_shift.py b/chainer/training/extensions/linear_shift.py\n--- a/chainer/training/extensions/linear_shift.py\n+++ b/chainer/training/extensions/linear_shift.py\n@@ -36,23 +36,34 @@\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n- self._t = 0\n+ self._t = 1\n+ self._before_training = True\n \n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n+\n+ if self._before_training:\n+ self._before_training = False\n+ value = self._compute_value(self._t - 1)\n+ else:\n+ value = self._compute_value(self._t)\n+ self._t += 1\n+\n+ setattr(optimizer, self._attr, value)\n+\n+ def serialize(self, serializer):\n+ self._t = serializer('_t', self._t)\n+\n+ def _compute_value(self, t):\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n \n- if self._t <= t1:\n+ if t <= t1:\n value = v1\n- elif self._t >= t2:\n+ elif t >= t2:\n value = v2\n else:\n- rate = (self._t - t1) / (t2 - t1)\n+ rate = (t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n- setattr(optimizer, self._attr, value)\n \n- self._t += 1\n-\n- def serialize(self, serializer):\n- self._t = serializer('_t', self._t)\n+ return value\n", "issue": "resuming issue of LinearShift\nSame issue as #2680\r\n\r\n```\r\nimport chainer\r\nfrom chainer import iterators\r\nfrom chainer import optimizers\r\nfrom chainer import training\r\nfrom chainer.training import extensions\r\nfrom chainer import serializers\r\n\r\n\r\nclass DummyModel(chainer.Chain):\r\n\r\n def __call__(self, x):\r\n return x\r\n\r\n\r\ndef setup_trainer(iteration):\r\n model = DummyModel()\r\n optimizer = optimizers.SGD()\r\n optimizer.setup(model)\r\n\r\n iterator = iterators.SerialIterator([1, 2, 3], 1)\r\n\r\n updater = training.StandardUpdater(iterator, optimizer)\r\n trainer = training.Trainer(updater, (iteration, 'iteration'), out='.')\r\n\r\n trainer.extend(extensions.LogReport(trigger=(1, 'iteration')))\r\n trainer.extend(extensions.observe_lr(), trigger=(1, 'iteration'))\r\n trainer.extend(\r\n extensions.PrintReport(['iteration', 'lr']),\r\n trigger=(1, 'iteration'))\r\n\r\n trainer.extend(\r\n extensions.LinearShift('lr', (2, 1), (5, 15)),\r\n trigger=(1, 'iteration'))\r\n\r\n return trainer\r\n\r\n\r\ntrainer = setup_trainer(10)\r\ntrainer.run()\r\nserializers.save_npz('tmp', trainer)\r\n# iteration lr\r\n# 1 2\r\n# 2 2\r\n# 3 2\r\n# 4 2\r\n# 5 2\r\n# 6 2\r\n# 7 1.9\r\n# 8 1.8\r\n# 9 1.7\r\n# 10 1.6\r\n\r\nresumed_trainer = setup_trainer(20)\r\nserializers.load_npz('tmp', resumed_trainer)\r\nresumed_trainer.run()\r\n# iteration lr\r\n# 1 2\r\n# 2 2\r\n# 3 2\r\n# 4 2\r\n# 5 2\r\n# 6 2\r\n# 7 1.9\r\n# 8 1.8\r\n# 9 1.7\r\n# 10 1.6\r\n# 11 1.4 (lr = 1.5 is skipped)\r\n# 12 1.3\r\n# 13 1.2\r\n# 14 1.1\r\n# 15 1\r\n# 16 1\r\n# 17 1\r\n# 18 1\r\n# 19 1\r\n# 20 1\r\n```\n", "before_files": [{"content": "from __future__ import division\n\nfrom chainer.training import extension\n\n\nclass LinearShift(extension.Extension):\n\n \"\"\"Trainer extension to change an optimizer attribute linearly.\n\n This extension changes an optimizer attribute from the first value to the\n last value linearly within a specified duration. The typical use case is\n warming up of the momentum coefficient.\n\n For example, suppose that this extension is called at every iteration, and\n ``value_range == (x, y)`` and ``time_range == (i, j)``. 
Then, this\n extension keeps the attribute to be ``x`` up to the ``i``-th iteration,\n linearly shifts the value to ``y`` by the ``j``-th iteration, and then\n keeps the value to be ``y`` after the ``j``-th iteration.\n\n This extension is also called before the training loop starts by default.\n\n Args:\n attr (str): Name of the optimizer attribute to adjust.\n value_range (tuple of float): The first and the last values of the\n attribute.\n time_range (tuple of ints): The first and last counts of calls in which\n the attribute is adjusted.\n optimizer (~chainer.Optimizer): Target optimizer object. If it is None,\n the main optimizer of the trainer is used.\n\n \"\"\"\n invoke_before_training = True\n\n def __init__(self, attr, value_range, time_range, optimizer=None):\n self._attr = attr\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n self._t = 0\n\n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n\n if self._t <= t1:\n value = v1\n elif self._t >= t2:\n value = v2\n else:\n rate = (self._t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n setattr(optimizer, self._attr, value)\n\n self._t += 1\n\n def serialize(self, serializer):\n self._t = serializer('_t', self._t)\n", "path": "chainer/training/extensions/linear_shift.py"}], "after_files": [{"content": "from __future__ import division\n\nfrom chainer.training import extension\n\n\nclass LinearShift(extension.Extension):\n\n \"\"\"Trainer extension to change an optimizer attribute linearly.\n\n This extension changes an optimizer attribute from the first value to the\n last value linearly within a specified duration. The typical use case is\n warming up of the momentum coefficient.\n\n For example, suppose that this extension is called at every iteration, and\n ``value_range == (x, y)`` and ``time_range == (i, j)``. Then, this\n extension keeps the attribute to be ``x`` up to the ``i``-th iteration,\n linearly shifts the value to ``y`` by the ``j``-th iteration, and then\n keeps the value to be ``y`` after the ``j``-th iteration.\n\n This extension is also called before the training loop starts by default.\n\n Args:\n attr (str): Name of the optimizer attribute to adjust.\n value_range (tuple of float): The first and the last values of the\n attribute.\n time_range (tuple of ints): The first and last counts of calls in which\n the attribute is adjusted.\n optimizer (~chainer.Optimizer): Target optimizer object. 
If it is None,\n the main optimizer of the trainer is used.\n\n \"\"\"\n invoke_before_training = True\n\n def __init__(self, attr, value_range, time_range, optimizer=None):\n self._attr = attr\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n self._t = 1\n self._before_training = True\n\n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n\n if self._before_training:\n self._before_training = False\n value = self._compute_value(self._t - 1)\n else:\n value = self._compute_value(self._t)\n self._t += 1\n\n setattr(optimizer, self._attr, value)\n\n def serialize(self, serializer):\n self._t = serializer('_t', self._t)\n\n def _compute_value(self, t):\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n\n if t <= t1:\n value = v1\n elif t >= t2:\n value = v2\n else:\n rate = (t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n\n return value\n", "path": "chainer/training/extensions/linear_shift.py"}]} | 1,438 | 419 |
gh_patches_debug_502 | rasdani/github-patches | git_diff | google__flax-2827 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot import flax.training.checkpoints in 0.6.4
### System information
- OS Platform and Distribution: Ubuntu 22.04.1 LTS, also in Colab environment
- Flax, jax, jaxlib versions:
* flax 0.6.4
* jax 0.3.25
* jaxlib 0.3.25
- Python version: 3.10.6
- GPU/TPU model and memory: No Accelerator / 16GB
### Problem you have encountered:
With FLAX v0.6.4 I can't import `flax.training.checkpoints` module due to following error:
```
ImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)
```
This does not happen in v0.6.3.
### What you expected to happen:
The module should be imported.
### Logs, error messages, etc:
Error message from jupyter notebook:
```
ImportError Traceback (most recent call last)
[<ipython-input-3-9a234296e658>](https://localhost:8080/#) in <module>
1 import flax
----> 2 from flax.training import checkpoints
[/usr/local/lib/python3.8/dist-packages/flax/training/checkpoints.py](https://localhost:8080/#) in <module>
36 from flax import traverse_util
37 import jax
---> 38 from jax import monitoring
39 from jax import process_index
40 from jax import sharding
ImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)
```
### Steps to reproduce:
[Colab notebook](https://colab.research.google.com/drive/1ZLR1JSJPfaaoTmL7bow8oebqyhhxrqSo?usp=sharing)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """setup.py for Flax."""
16
17 import os
18 from setuptools import find_packages
19 from setuptools import setup
20
21 here = os.path.abspath(os.path.dirname(__file__))
22 try:
23 README = open(os.path.join(here, "README.md"), encoding="utf-8").read()
24 except OSError:
25 README = ""
26
27 install_requires = [
28 "numpy>=1.12",
29 "jax>=0.3.16",
30 "matplotlib", # only needed for tensorboard export
31 "msgpack",
32 "optax",
33 "orbax",
34 "tensorstore",
35 "rich>=11.1",
36 "typing_extensions>=4.1.1",
37 "PyYAML>=5.4.1",
38 ]
39
40 tests_require = [
41 "atari-py==0.2.5", # Last version does not have the ROMs we test on pre-packaged
42 "clu", # All examples.
43 "gym==0.18.3",
44 "jaxlib",
45 "jraph>=0.0.6dev0",
46 "ml-collections",
47 "mypy",
48 "opencv-python",
49 "pytest",
50 "pytest-cov",
51 "pytest-custom_exit_code",
52 "pytest-xdist==1.34.0", # upgrading to 2.0 broke tests, need to investigate
53 "pytype",
54 "sentencepiece", # WMT example.
55 "tensorflow_text>=2.4.0", # WMT example.
56 "tensorflow_datasets",
57 "tensorflow",
58 "torch",
59 ]
60
61 __version__ = None
62
63 with open("flax/version.py") as f:
64 exec(f.read(), globals())
65
66 setup(
67 name="flax",
68 version=__version__,
69 description="Flax: A neural network library for JAX designed for flexibility",
70 long_description="\n\n".join([README]),
71 long_description_content_type="text/markdown",
72 classifiers=[
73 "Development Status :: 3 - Alpha",
74 "Intended Audience :: Developers",
75 "Intended Audience :: Science/Research",
76 "License :: OSI Approved :: Apache Software License",
77 "Programming Language :: Python :: 3.7",
78 "Topic :: Scientific/Engineering :: Artificial Intelligence",
79 ],
80 keywords="",
81 author="Flax team",
82 author_email="[email protected]",
83 url="https://github.com/google/flax",
84 packages=find_packages(),
85 package_data={"flax": ["py.typed"]},
86 zip_safe=False,
87 install_requires=install_requires,
88 extras_require={
89 "testing": tests_require,
90 },
91 )
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
install_requires = [
"numpy>=1.12",
- "jax>=0.3.16",
+ "jax>=0.4.2",
"matplotlib", # only needed for tensorboard export
"msgpack",
"optax",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,7 @@\n \n install_requires = [\n \"numpy>=1.12\",\n- \"jax>=0.3.16\",\n+ \"jax>=0.4.2\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n", "issue": "Cannot import flax.training.checkpoints in 0.6.4\n### System information\r\n- OS Platform and Distribution: Ubuntu 22.04.1 LTS, also in Colab environment\r\n- Flax, jax, jaxlib versions:\r\n * flax 0.6.4\r\n * jax 0.3.25\r\n * jaxlib 0.3.25\r\n- Python version: 3.10.6\r\n- GPU/TPU model and memory: No Accelerator / 16GB\r\n\r\n### Problem you have encountered:\r\nWith FLAX v0.6.4 I can't import `flax.training.checkpoints` module due to following error:\r\n```\r\nImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)\r\n```\r\nThis does not happen in v0.6.3.\r\n\r\n### What you expected to happen:\r\nThe module should be imported.\r\n\r\n### Logs, error messages, etc:\r\nError message from jupyter notebook:\r\n```\r\nImportError Traceback (most recent call last)\r\n\r\n[<ipython-input-3-9a234296e658>](https://localhost:8080/#) in <module>\r\n 1 import flax\r\n----> 2 from flax.training import checkpoints\r\n\r\n[/usr/local/lib/python3.8/dist-packages/flax/training/checkpoints.py](https://localhost:8080/#) in <module>\r\n 36 from flax import traverse_util\r\n 37 import jax\r\n---> 38 from jax import monitoring\r\n 39 from jax import process_index\r\n 40 from jax import sharding\r\n\r\nImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)\r\n```\r\n\r\n### Steps to reproduce:\r\n[Colab notebook](https://colab.research.google.com/drive/1ZLR1JSJPfaaoTmL7bow8oebqyhhxrqSo?usp=sharing)\r\n\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept OSError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.3.16\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"orbax\",\n \"tensorstore\",\n \"rich>=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph>=0.0.6dev0\",\n \"ml-collections\",\n \"mypy\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-custom_exit_code\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n]\n\n__version__ = None\n\nwith 
open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept OSError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.4.2\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"orbax\",\n \"tensorstore\",\n \"rich>=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph>=0.0.6dev0\",\n \"ml-collections\",\n \"mypy\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-custom_exit_code\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n]\n\n__version__ = None\n\nwith open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": 
tests_require,\n },\n )\n", "path": "setup.py"}]} | 1,587 | 92 |
gh_patches_debug_26532 | rasdani/github-patches | git_diff | jazzband__pip-tools-733 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Command in autogenerated requirements.txt can be shortened
When I run `pip-compile`, my requirements.txt has
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements.txt requirements.in
#
```
But I think the `--output-file requirements.txt` can just be dropped (for brevity) when the written file itself is named `requirements.txt`.
I'm recommending this because `pip-compile` already goes ahead and modifies `requirements.txt` when no options are specified. Thoughts?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/writer.py`
Content:
```
1 import os
2 from itertools import chain
3
4 from ._compat import ExitStack
5 from .click import unstyle
6 from .io import AtomicSaver
7 from .logging import log
8 from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES
9
10
11 class OutputWriter(object):
12 def __init__(self, src_files, dst_file, dry_run, emit_header, emit_index,
13 emit_trusted_host, annotate, generate_hashes,
14 default_index_url, index_urls, trusted_hosts, format_control,
15 allow_unsafe):
16 self.src_files = src_files
17 self.dst_file = dst_file
18 self.dry_run = dry_run
19 self.emit_header = emit_header
20 self.emit_index = emit_index
21 self.emit_trusted_host = emit_trusted_host
22 self.annotate = annotate
23 self.generate_hashes = generate_hashes
24 self.default_index_url = default_index_url
25 self.index_urls = index_urls
26 self.trusted_hosts = trusted_hosts
27 self.format_control = format_control
28 self.allow_unsafe = allow_unsafe
29
30 def _sort_key(self, ireq):
31 return (not ireq.editable, str(ireq.req).lower())
32
33 def write_header(self):
34 if self.emit_header:
35 yield comment('#')
36 yield comment('# This file is autogenerated by pip-compile')
37 yield comment('# To update, run:')
38 yield comment('#')
39 custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')
40 if custom_cmd:
41 yield comment('# {}'.format(custom_cmd))
42 else:
43 params = []
44 if not self.emit_index:
45 params += ['--no-index']
46 if not self.emit_trusted_host:
47 params += ['--no-emit-trusted-host']
48 if not self.annotate:
49 params += ['--no-annotate']
50 if self.generate_hashes:
51 params += ["--generate-hashes"]
52 if self.allow_unsafe:
53 params += ["--allow-unsafe"]
54 params += ['--output-file', self.dst_file]
55 params += self.src_files
56 yield comment('# pip-compile {}'.format(' '.join(params)))
57 yield comment('#')
58
59 def write_index_options(self):
60 if self.emit_index:
61 for index, index_url in enumerate(dedup(self.index_urls)):
62 if index_url.rstrip('/') == self.default_index_url:
63 continue
64 flag = '--index-url' if index == 0 else '--extra-index-url'
65 yield '{} {}'.format(flag, index_url)
66
67 def write_trusted_hosts(self):
68 if self.emit_trusted_host:
69 for trusted_host in dedup(self.trusted_hosts):
70 yield '--trusted-host {}'.format(trusted_host)
71
72 def write_format_controls(self):
73 for nb in dedup(self.format_control.no_binary):
74 yield '--no-binary {}'.format(nb)
75 for ob in dedup(self.format_control.only_binary):
76 yield '--only-binary {}'.format(ob)
77
78 def write_flags(self):
79 emitted = False
80 for line in chain(self.write_index_options(),
81 self.write_trusted_hosts(),
82 self.write_format_controls()):
83 emitted = True
84 yield line
85 if emitted:
86 yield ''
87
88 def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,
89 primary_packages, markers, hashes):
90 for line in self.write_header():
91 yield line
92 for line in self.write_flags():
93 yield line
94
95 unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa
96 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
97
98 packages = sorted(packages, key=self._sort_key)
99
100 for ireq in packages:
101 line = self._format_requirement(
102 ireq, reverse_dependencies, primary_packages,
103 markers.get(key_from_req(ireq.req)), hashes=hashes)
104 yield line
105
106 if unsafe_requirements:
107 unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)
108 yield ''
109 yield comment('# The following packages are considered to be unsafe in a requirements file:')
110
111 for ireq in unsafe_requirements:
112 req = self._format_requirement(ireq,
113 reverse_dependencies,
114 primary_packages,
115 marker=markers.get(key_from_req(ireq.req)),
116 hashes=hashes)
117 if not self.allow_unsafe:
118 yield comment('# {}'.format(req))
119 else:
120 yield req
121
122 def write(self, results, unsafe_requirements, reverse_dependencies,
123 primary_packages, markers, hashes):
124 with ExitStack() as stack:
125 f = None
126 if not self.dry_run:
127 f = stack.enter_context(AtomicSaver(self.dst_file))
128
129 for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,
130 primary_packages, markers, hashes):
131 log.info(line)
132 if f:
133 f.write(unstyle(line).encode('utf-8'))
134 f.write(os.linesep.encode('utf-8'))
135
136 def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):
137 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
138
139 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
140
141 if not self.annotate or key_from_req(ireq.req) in primary_packages:
142 return line
143
144 # Annotate what packages this package is required by
145 required_by = reverse_dependencies.get(ireq.name.lower(), [])
146 if required_by:
147 annotation = ", ".join(sorted(required_by))
148 line = "{:24}{}{}".format(
149 line,
150 " \\\n " if ireq_hashes else " ",
151 comment("# via " + annotation))
152 return line
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -1,8 +1,9 @@
import os
+import sys
from itertools import chain
from ._compat import ExitStack
-from .click import unstyle
+from .click import unstyle, get_os_args
from .io import AtomicSaver
from .logging import log
from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES
@@ -40,20 +41,9 @@
if custom_cmd:
yield comment('# {}'.format(custom_cmd))
else:
- params = []
- if not self.emit_index:
- params += ['--no-index']
- if not self.emit_trusted_host:
- params += ['--no-emit-trusted-host']
- if not self.annotate:
- params += ['--no-annotate']
- if self.generate_hashes:
- params += ["--generate-hashes"]
- if self.allow_unsafe:
- params += ["--allow-unsafe"]
- params += ['--output-file', self.dst_file]
- params += self.src_files
- yield comment('# pip-compile {}'.format(' '.join(params)))
+ prog = os.path.basename(sys.argv[0])
+ args = ' '.join(get_os_args())
+ yield comment('# {prog} {args}'.format(prog=prog, args=args))
yield comment('#')
def write_index_options(self):
| {"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -1,8 +1,9 @@\n import os\n+import sys\n from itertools import chain\n \n from ._compat import ExitStack\n-from .click import unstyle\n+from .click import unstyle, get_os_args\n from .io import AtomicSaver\n from .logging import log\n from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n@@ -40,20 +41,9 @@\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n- params = []\n- if not self.emit_index:\n- params += ['--no-index']\n- if not self.emit_trusted_host:\n- params += ['--no-emit-trusted-host']\n- if not self.annotate:\n- params += ['--no-annotate']\n- if self.generate_hashes:\n- params += [\"--generate-hashes\"]\n- if self.allow_unsafe:\n- params += [\"--allow-unsafe\"]\n- params += ['--output-file', self.dst_file]\n- params += self.src_files\n- yield comment('# pip-compile {}'.format(' '.join(params)))\n+ prog = os.path.basename(sys.argv[0])\n+ args = ' '.join(get_os_args())\n+ yield comment('# {prog} {args}'.format(prog=prog, args=args))\n yield comment('#')\n \n def write_index_options(self):\n", "issue": "Command in autogenerated requirements.txt can be shortened\nWhen I run `pip-compile`, my requirements.txt has\r\n\r\n```\r\n#\r\n# This file is autogenerated by pip-compile\r\n# To update, run:\r\n#\r\n# pip-compile --output-file requirements.txt requirements.in\r\n#\r\n```\r\n\r\nBut I think the `--output-file requirements.txt` can just be dropped (for brevity) when the written file itself is named `requirements.txt`.\r\n\r\nI'm recommending this because `pip-compile` already goes ahead and modifies `requirements.txt` when no options are specified. Thoughts?\n", "before_files": [{"content": "import os\nfrom itertools import chain\n\nfrom ._compat import ExitStack\nfrom .click import unstyle\nfrom .io import AtomicSaver\nfrom .logging import log\nfrom .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n\n\nclass OutputWriter(object):\n def __init__(self, src_files, dst_file, dry_run, emit_header, emit_index,\n emit_trusted_host, annotate, generate_hashes,\n default_index_url, index_urls, trusted_hosts, format_control,\n allow_unsafe):\n self.src_files = src_files\n self.dst_file = dst_file\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment('#')\n yield comment('# This file is autogenerated by pip-compile')\n yield comment('# To update, run:')\n yield comment('#')\n custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n params = []\n if not self.emit_index:\n params += ['--no-index']\n if not self.emit_trusted_host:\n params += ['--no-emit-trusted-host']\n if not self.annotate:\n params += ['--no-annotate']\n if self.generate_hashes:\n params += [\"--generate-hashes\"]\n if self.allow_unsafe:\n params += [\"--allow-unsafe\"]\n params += ['--output-file', self.dst_file]\n params += self.src_files\n yield comment('# pip-compile 
{}'.format(' '.join(params)))\n yield comment('#')\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip('/') == self.default_index_url:\n continue\n flag = '--index-url' if index == 0 else '--extra-index-url'\n yield '{} {}'.format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield '--trusted-host {}'.format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield '--no-binary {}'.format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield '--only-binary {}'.format(ob)\n\n def write_flags(self):\n emitted = False\n for line in chain(self.write_index_options(),\n self.write_trusted_hosts(),\n self.write_format_controls()):\n emitted = True\n yield line\n if emitted:\n yield ''\n\n def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n for line in self.write_header():\n yield line\n for line in self.write_flags():\n yield line\n\n unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n packages = sorted(packages, key=self._sort_key)\n\n for ireq in packages:\n line = self._format_requirement(\n ireq, reverse_dependencies, primary_packages,\n markers.get(key_from_req(ireq.req)), hashes=hashes)\n yield line\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield ''\n yield comment('# The following packages are considered to be unsafe in a requirements file:')\n\n for ireq in unsafe_requirements:\n req = self._format_requirement(ireq,\n reverse_dependencies,\n primary_packages,\n marker=markers.get(key_from_req(ireq.req)),\n hashes=hashes)\n if not self.allow_unsafe:\n yield comment('# {}'.format(req))\n else:\n yield req\n\n def write(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n with ExitStack() as stack:\n f = None\n if not self.dry_run:\n f = stack.enter_context(AtomicSaver(self.dst_file))\n\n for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n log.info(line)\n if f:\n f.write(unstyle(line).encode('utf-8'))\n f.write(os.linesep.encode('utf-8'))\n\n def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate or key_from_req(ireq.req) in primary_packages:\n return line\n\n # Annotate what packages this package is required by\n required_by = reverse_dependencies.get(ireq.name.lower(), [])\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation))\n return line\n", "path": "piptools/writer.py"}], "after_files": [{"content": "import os\nimport sys\nfrom itertools import chain\n\nfrom ._compat import ExitStack\nfrom .click import unstyle, get_os_args\nfrom .io import AtomicSaver\nfrom .logging import log\nfrom .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n\n\nclass OutputWriter(object):\n def __init__(self, src_files, dst_file, dry_run, 
emit_header, emit_index,\n emit_trusted_host, annotate, generate_hashes,\n default_index_url, index_urls, trusted_hosts, format_control,\n allow_unsafe):\n self.src_files = src_files\n self.dst_file = dst_file\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment('#')\n yield comment('# This file is autogenerated by pip-compile')\n yield comment('# To update, run:')\n yield comment('#')\n custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n prog = os.path.basename(sys.argv[0])\n args = ' '.join(get_os_args())\n yield comment('# {prog} {args}'.format(prog=prog, args=args))\n yield comment('#')\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip('/') == self.default_index_url:\n continue\n flag = '--index-url' if index == 0 else '--extra-index-url'\n yield '{} {}'.format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield '--trusted-host {}'.format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield '--no-binary {}'.format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield '--only-binary {}'.format(ob)\n\n def write_flags(self):\n emitted = False\n for line in chain(self.write_index_options(),\n self.write_trusted_hosts(),\n self.write_format_controls()):\n emitted = True\n yield line\n if emitted:\n yield ''\n\n def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n for line in self.write_header():\n yield line\n for line in self.write_flags():\n yield line\n\n unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n packages = sorted(packages, key=self._sort_key)\n\n for ireq in packages:\n line = self._format_requirement(\n ireq, reverse_dependencies, primary_packages,\n markers.get(key_from_req(ireq.req)), hashes=hashes)\n yield line\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield ''\n yield comment('# The following packages are considered to be unsafe in a requirements file:')\n\n for ireq in unsafe_requirements:\n req = self._format_requirement(ireq,\n reverse_dependencies,\n primary_packages,\n marker=markers.get(key_from_req(ireq.req)),\n hashes=hashes)\n if not self.allow_unsafe:\n yield comment('# {}'.format(req))\n else:\n yield req\n\n def write(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n with ExitStack() as stack:\n f = None\n if not self.dry_run:\n f = stack.enter_context(AtomicSaver(self.dst_file))\n\n for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n log.info(line)\n if f:\n 
f.write(unstyle(line).encode('utf-8'))\n f.write(os.linesep.encode('utf-8'))\n\n def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate or key_from_req(ireq.req) in primary_packages:\n return line\n\n # Annotate what packages this package is required by\n required_by = reverse_dependencies.get(ireq.name.lower(), [])\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation))\n return line\n", "path": "piptools/writer.py"}]} | 1,960 | 341 |
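A minimal standalone sketch of the idea behind this patch: echo the invocation verbatim instead of re-deriving individual flags. It assumes plain `sys.argv`; pip-tools itself goes through its click-based `get_os_args()` helper, as the diff above shows.

```python
# Sketch only: rebuild the "To update, run:" header from the real command line.
# pip-tools uses get_os_args(); plain sys.argv is used here for a self-contained demo.
import os
import sys


def compile_command_comment(argv=None):
    """Return a '# prog args' comment that mirrors how the tool was invoked."""
    argv = sys.argv if argv is None else argv
    prog = os.path.basename(argv[0])      # e.g. 'pip-compile'
    args = " ".join(argv[1:])             # e.g. 'requirements.in'
    return "# {} {}".format(prog, args).rstrip()


print(compile_command_comment(["/usr/local/bin/pip-compile", "requirements.in"]))
# -> '# pip-compile requirements.in'
```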
gh_patches_debug_14738 | rasdani/github-patches | git_diff | crytic__slither-530 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Suicidal detector fails on external functions
If the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external`, the issue is no longer flagged.
```
pragma solidity ^0.5.0;
contract Suicidal{
function kill() external{
selfdestruct(msg.sender);
}
}
```
`slither --version`: 0.6.12
`solc --version`: 0.5.15
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/detectors/functions/suicidal.py`
Content:
```
1 """
2 Module detecting suicidal contract
3
4 A suicidal contract is an unprotected function that calls selfdestruct
5 """
6
7 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
8
9
10 class Suicidal(AbstractDetector):
11 """
12 Unprotected function detector
13 """
14
15 ARGUMENT = 'suicidal'
16 HELP = 'Functions allowing anyone to destruct the contract'
17 IMPACT = DetectorClassification.HIGH
18 CONFIDENCE = DetectorClassification.HIGH
19
20 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'
21
22
23 WIKI_TITLE = 'Suicidal'
24 WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'
25 WIKI_EXPLOIT_SCENARIO = '''
26 ```solidity
27 contract Suicidal{
28 function kill() public{
29 selfdestruct(msg.sender);
30 }
31 }
32 ```
33 Bob calls `kill` and destructs the contract.'''
34
35 WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'
36
37 @staticmethod
38 def detect_suicidal_func(func):
39 """ Detect if the function is suicidal
40
41 Detect the public functions calling suicide/selfdestruct without protection
42 Returns:
43 (bool): True if the function is suicidal
44 """
45
46 if func.is_constructor:
47 return False
48
49 if func.visibility != 'public':
50 return False
51
52 calls = [c.name for c in func.internal_calls]
53 if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):
54 return False
55
56 if func.is_protected():
57 return False
58
59 return True
60
61 def detect_suicidal(self, contract):
62 ret = []
63 for f in [f for f in contract.functions if f.contract_declarer == contract]:
64 if self.detect_suicidal_func(f):
65 ret.append(f)
66 return ret
67
68 def _detect(self):
69 """ Detect the suicidal functions
70 """
71 results = []
72 for c in self.contracts:
73 functions = self.detect_suicidal(c)
74 for func in functions:
75
76 info = [func, " allows anyone to destruct the contract\n"]
77
78 res = self.generate_result(info)
79
80 results.append(res)
81
82 return results
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/detectors/functions/suicidal.py b/slither/detectors/functions/suicidal.py
--- a/slither/detectors/functions/suicidal.py
+++ b/slither/detectors/functions/suicidal.py
@@ -46,7 +46,7 @@
if func.is_constructor:
return False
- if func.visibility != 'public':
+ if func.visibility not in ['public', 'external']:
return False
calls = [c.name for c in func.internal_calls]
@@ -60,7 +60,7 @@
def detect_suicidal(self, contract):
ret = []
- for f in [f for f in contract.functions if f.contract_declarer == contract]:
+ for f in contract.functions_declared:
if self.detect_suicidal_func(f):
ret.append(f)
return ret
| {"golden_diff": "diff --git a/slither/detectors/functions/suicidal.py b/slither/detectors/functions/suicidal.py\n--- a/slither/detectors/functions/suicidal.py\n+++ b/slither/detectors/functions/suicidal.py\n@@ -46,7 +46,7 @@\n if func.is_constructor:\n return False\n \n- if func.visibility != 'public':\n+ if func.visibility not in ['public', 'external']:\n return False\n \n calls = [c.name for c in func.internal_calls]\n@@ -60,7 +60,7 @@\n \n def detect_suicidal(self, contract):\n ret = []\n- for f in [f for f in contract.functions if f.contract_declarer == contract]:\n+ for f in contract.functions_declared:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n", "issue": "Suicidal detector fails on external functions\nIf the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external` the issue is no longer flagged.\r\n\r\n```\r\npragma solidity ^0.5.0;\r\ncontract Suicidal{\r\n function kill() external{\r\n selfdestruct(msg.sender);\r\n }\r\n}\r\n```\r\n\r\n`slither --version`: 0.6.12\r\n`solc --version`: 0.5.15\nSuicidal detector fails on external functions\nIf the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external` the issue is no longer flagged.\r\n\r\n```\r\npragma solidity ^0.5.0;\r\ncontract Suicidal{\r\n function kill() external{\r\n selfdestruct(msg.sender);\r\n }\r\n}\r\n```\r\n\r\n`slither --version`: 0.6.12\r\n`solc --version`: 0.5.15\n", "before_files": [{"content": "\"\"\"\nModule detecting suicidal contract\n\nA suicidal contract is an unprotected function that calls selfdestruct\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass Suicidal(AbstractDetector):\n \"\"\"\n Unprotected function detector\n \"\"\"\n\n ARGUMENT = 'suicidal'\n HELP = 'Functions allowing anyone to destruct the contract'\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'\n\n\n WIKI_TITLE = 'Suicidal'\n WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'\n WIKI_EXPLOIT_SCENARIO = '''\n```solidity\ncontract Suicidal{\n function kill() public{\n selfdestruct(msg.sender);\n }\n}\n```\nBob calls `kill` and destructs the contract.'''\n\n WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'\n\n @staticmethod\n def detect_suicidal_func(func):\n \"\"\" Detect if the function is suicidal\n\n Detect the public functions calling suicide/selfdestruct without protection\n Returns:\n (bool): True if the function is suicidal\n \"\"\"\n\n if func.is_constructor:\n return False\n\n if func.visibility != 'public':\n return False\n\n calls = [c.name for c in func.internal_calls]\n if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):\n return False\n\n if func.is_protected():\n return False\n\n return True\n\n def detect_suicidal(self, contract):\n ret = []\n for f in [f for f in contract.functions if f.contract_declarer == contract]:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n\n def _detect(self):\n \"\"\" Detect the suicidal functions\n \"\"\"\n results = []\n for c in self.contracts:\n functions = self.detect_suicidal(c)\n for func in functions:\n\n info = [func, \" allows anyone to destruct the contract\\n\"]\n\n res = 
self.generate_result(info)\n\n results.append(res)\n\n return results\n", "path": "slither/detectors/functions/suicidal.py"}], "after_files": [{"content": "\"\"\"\nModule detecting suicidal contract\n\nA suicidal contract is an unprotected function that calls selfdestruct\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass Suicidal(AbstractDetector):\n \"\"\"\n Unprotected function detector\n \"\"\"\n\n ARGUMENT = 'suicidal'\n HELP = 'Functions allowing anyone to destruct the contract'\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'\n\n\n WIKI_TITLE = 'Suicidal'\n WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'\n WIKI_EXPLOIT_SCENARIO = '''\n```solidity\ncontract Suicidal{\n function kill() public{\n selfdestruct(msg.sender);\n }\n}\n```\nBob calls `kill` and destructs the contract.'''\n\n WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'\n\n @staticmethod\n def detect_suicidal_func(func):\n \"\"\" Detect if the function is suicidal\n\n Detect the public functions calling suicide/selfdestruct without protection\n Returns:\n (bool): True if the function is suicidal\n \"\"\"\n\n if func.is_constructor:\n return False\n\n if func.visibility not in ['public', 'external']:\n return False\n\n calls = [c.name for c in func.internal_calls]\n if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):\n return False\n\n if func.is_protected():\n return False\n\n return True\n\n def detect_suicidal(self, contract):\n ret = []\n for f in contract.functions_declared:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n\n def _detect(self):\n \"\"\" Detect the suicidal functions\n \"\"\"\n results = []\n for c in self.contracts:\n functions = self.detect_suicidal(c)\n for func in functions:\n\n info = [func, \" allows anyone to destruct the contract\\n\"]\n\n res = self.generate_result(info)\n\n results.append(res)\n\n return results\n", "path": "slither/detectors/functions/suicidal.py"}]} | 1,144 | 194 |
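A compact sketch of the predicate this patch relaxes, written against a made-up `FunctionStub` rather than slither's real `Function` object; only the `visibility not in ('public', 'external')` check mirrors the actual change.

```python
# Sketch with a stand-in object, not slither's API: the key change is that a
# function reachable from outside may be declared 'public' OR 'external'.
from dataclasses import dataclass, field
from typing import List


@dataclass
class FunctionStub:
    visibility: str
    internal_call_names: List[str] = field(default_factory=list)
    is_constructor: bool = False
    is_protected: bool = False


def is_suicidal(func: FunctionStub) -> bool:
    if func.is_constructor or func.is_protected:
        return False
    if func.visibility not in ("public", "external"):   # was: != 'public'
        return False
    return any(c in ("suicide(address)", "selfdestruct(address)")
               for c in func.internal_call_names)


print(is_suicidal(FunctionStub("external", ["selfdestruct(address)"])))  # True
```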
gh_patches_debug_57650 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-1956 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Quickstart AttributeError: 'HogwildWorld' object has no attribute 'acts'
**Bug description**
When going through the ParlAI [quickstart](https://parl.ai/docs/tutorial_quick.html#install), I got the following error:
``` python
Traceback (most recent call last):
File "examples/interactive.py", line 18, in <module>
interactive(opt, print_parser=parser)
File "/root/ParlAI/parlai/scripts/interactive.py", line 68, in interactive
agent = create_agent(opt, requireModelExists=True)
File "/root/ParlAI/parlai/core/agents.py", line 683, in create_agent
model = load_agent_module(opt)
File "/root/ParlAI/parlai/core/agents.py", line 548, in load_agent_module
return model_class(new_opt)
File "/root/ParlAI/parlai/agents/memnn/memnn.py", line 86, in __init__
super().__init__(opt, shared)
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 135, in __init__
super().__init__(opt, shared)
File "/root/ParlAI/parlai/core/torch_agent.py", line 737, in __init__
self.set_interactive_mode(opt['interactive_mode'], shared)
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 206, in set_interactive_mode
path = self.get_task_candidates_path()
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 230, in get_task_candidates_path
build_cands(opt)
File "/root/ParlAI/parlai/scripts/build_candidates.py", line 47, in build_cands
acts = world.get_acts()[0]
File "/root/ParlAI/parlai/core/worlds.py", line 162, in get_acts
return self.acts
AttributeError: 'HogwildWorld' object has no attribute 'acts'
```
**While running**
```python
python examples/interactive.py -mf /tmp/babi_memnn -ecands vocab
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parlai/scripts/build_candidates.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 """Build the candidate responses for a retrieval model.
7
8 Examples
9 --------
10
11 .. code-block:: shell
12
13 python build_candidates.py -t convai2 --outfile /tmp/cands.txt
14 """
15
16 from parlai.core.params import ParlaiParser
17 from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent
18 from parlai.core.worlds import create_task
19 from parlai.core.utils import TimeLogger
20 import random
21 import tempfile
22
23
24 def build_cands(opt):
25 # create repeat label agent and assign it to the specified task
26 agent = RepeatLabelAgent(opt)
27 world = create_task(opt, agent)
28 if opt['outfile'] is None:
29 outfile = tempfile.mkstemp(
30 prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'
31 )[1]
32 else:
33 outfile = opt['outfile']
34
35 if opt.get('num_examples', -1) == -1:
36 num_examples = world.num_examples()
37 else:
38 num_examples = opt['num_examples']
39 log_timer = TimeLogger()
40
41 print('[ starting to build candidates from task.. (ex:' + str(num_examples) + ')]')
42 print('[ saving output to {} ]'.format(outfile))
43 cands = []
44 for _ in range(num_examples):
45 world.parley()
46 # We get the acts of the first agent, which is the teacher.
47 acts = world.get_acts()[0]
48 if isinstance(acts, dict):
49 # We turn into a batch of 1 example, in case batching is being used.
50 acts = [acts]
51 for a in acts:
52 candidate = a.get('labels', a.get('eval_labels', None))
53 if candidate is not None:
54 candidate = candidate[0]
55 cands.append(candidate)
56 if log_timer.time() > opt['log_every_n_secs']:
57 text, _log = log_timer.log(world.total_parleys, world.num_examples())
58 print(text)
59 if world.epoch_done():
60 print('EPOCH DONE')
61 break
62 fw = open(outfile, 'w')
63 fw.write('\n'.join(cands))
64 fw.close()
65
66
67 def main():
68 random.seed(42)
69 # Get command line arguments
70 parser = ParlaiParser()
71 parser.add_argument(
72 '-n',
73 '--num-examples',
74 default=-1,
75 type=int,
76 help='Total number of exs to convert, -1 to convert all examples',
77 )
78 parser.add_argument(
79 '-of',
80 '--outfile',
81 default=None,
82 type=str,
83 help='Output file where to save, by default will be created in /tmp',
84 )
85 parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)
86 parser.set_defaults(datatype='train:evalmode')
87 opt = parser.parse_args()
88 build_cands(opt)
89
90
91 if __name__ == '__main__':
92 main()
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parlai/scripts/build_candidates.py b/parlai/scripts/build_candidates.py
--- a/parlai/scripts/build_candidates.py
+++ b/parlai/scripts/build_candidates.py
@@ -23,6 +23,9 @@
def build_cands(opt):
# create repeat label agent and assign it to the specified task
+ if opt['numthreads'] > 1:
+ # Broken in hogwild mode. Just fall back to single processing mode
+ opt['numthreads'] = 1
agent = RepeatLabelAgent(opt)
world = create_task(opt, agent)
if opt['outfile'] is None:
| {"golden_diff": "diff --git a/parlai/scripts/build_candidates.py b/parlai/scripts/build_candidates.py\n--- a/parlai/scripts/build_candidates.py\n+++ b/parlai/scripts/build_candidates.py\n@@ -23,6 +23,9 @@\n \n def build_cands(opt):\n # create repeat label agent and assign it to the specified task\n+ if opt['numthreads'] > 1:\n+ # Broken in hogwild mode. Just fall back to single processing mode\n+ opt['numthreads'] = 1\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n", "issue": "Quickstart AttributeError: 'HogwildWorld' object has no attribute 'acts'\n**Bug description**\r\nWhen going through the ParlAI [quickstart](https://parl.ai/docs/tutorial_quick.html#install), I got the following error:\r\n\r\n``` python\r\nTraceback (most recent call last):\r\n File \"examples/interactive.py\", line 18, in <module>\r\n interactive(opt, print_parser=parser)\r\n File \"/root/ParlAI/parlai/scripts/interactive.py\", line 68, in interactive\r\n agent = create_agent(opt, requireModelExists=True)\r\n File \"/root/ParlAI/parlai/core/agents.py\", line 683, in create_agent\r\n model = load_agent_module(opt)\r\n File \"/root/ParlAI/parlai/core/agents.py\", line 548, in load_agent_module\r\n return model_class(new_opt)\r\n File \"/root/ParlAI/parlai/agents/memnn/memnn.py\", line 86, in __init__\r\n super().__init__(opt, shared)\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 135, in __init__\r\n super().__init__(opt, shared)\r\n File \"/root/ParlAI/parlai/core/torch_agent.py\", line 737, in __init__\r\n self.set_interactive_mode(opt['interactive_mode'], shared)\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 206, in set_interactive_mode\r\n path = self.get_task_candidates_path()\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 230, in get_task_candidates_path\r\n build_cands(opt)\r\n File \"/root/ParlAI/parlai/scripts/build_candidates.py\", line 47, in build_cands\r\n acts = world.get_acts()[0]\r\n File \"/root/ParlAI/parlai/core/worlds.py\", line 162, in get_acts\r\n return self.acts\r\nAttributeError: 'HogwildWorld' object has no attribute 'acts'\r\n```\r\n\r\n**While running**\r\n```python\r\npython examples/interactive.py -mf /tmp/babi_memnn -ecands vocab\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Build the candidate responses for a retrieval model.\n\nExamples\n--------\n\n.. code-block:: shell\n\n python build_candidates.py -t convai2 --outfile /tmp/cands.txt\n\"\"\"\n\nfrom parlai.core.params import ParlaiParser\nfrom parlai.agents.repeat_label.repeat_label import RepeatLabelAgent\nfrom parlai.core.worlds import create_task\nfrom parlai.core.utils import TimeLogger\nimport random\nimport tempfile\n\n\ndef build_cands(opt):\n # create repeat label agent and assign it to the specified task\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n outfile = tempfile.mkstemp(\n prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'\n )[1]\n else:\n outfile = opt['outfile']\n\n if opt.get('num_examples', -1) == -1:\n num_examples = world.num_examples()\n else:\n num_examples = opt['num_examples']\n log_timer = TimeLogger()\n\n print('[ starting to build candidates from task.. 
(ex:' + str(num_examples) + ')]')\n print('[ saving output to {} ]'.format(outfile))\n cands = []\n for _ in range(num_examples):\n world.parley()\n # We get the acts of the first agent, which is the teacher.\n acts = world.get_acts()[0]\n if isinstance(acts, dict):\n # We turn into a batch of 1 example, in case batching is being used.\n acts = [acts]\n for a in acts:\n candidate = a.get('labels', a.get('eval_labels', None))\n if candidate is not None:\n candidate = candidate[0]\n cands.append(candidate)\n if log_timer.time() > opt['log_every_n_secs']:\n text, _log = log_timer.log(world.total_parleys, world.num_examples())\n print(text)\n if world.epoch_done():\n print('EPOCH DONE')\n break\n fw = open(outfile, 'w')\n fw.write('\\n'.join(cands))\n fw.close()\n\n\ndef main():\n random.seed(42)\n # Get command line arguments\n parser = ParlaiParser()\n parser.add_argument(\n '-n',\n '--num-examples',\n default=-1,\n type=int,\n help='Total number of exs to convert, -1 to convert all examples',\n )\n parser.add_argument(\n '-of',\n '--outfile',\n default=None,\n type=str,\n help='Output file where to save, by default will be created in /tmp',\n )\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n parser.set_defaults(datatype='train:evalmode')\n opt = parser.parse_args()\n build_cands(opt)\n\n\nif __name__ == '__main__':\n main()\n", "path": "parlai/scripts/build_candidates.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Build the candidate responses for a retrieval model.\n\nExamples\n--------\n\n.. code-block:: shell\n\n python build_candidates.py -t convai2 --outfile /tmp/cands.txt\n\"\"\"\n\nfrom parlai.core.params import ParlaiParser\nfrom parlai.agents.repeat_label.repeat_label import RepeatLabelAgent\nfrom parlai.core.worlds import create_task\nfrom parlai.core.utils import TimeLogger\nimport random\nimport tempfile\n\n\ndef build_cands(opt):\n # create repeat label agent and assign it to the specified task\n if opt['numthreads'] > 1:\n # Broken in hogwild mode. Just fall back to single processing mode\n opt['numthreads'] = 1\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n outfile = tempfile.mkstemp(\n prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'\n )[1]\n else:\n outfile = opt['outfile']\n\n if opt.get('num_examples', -1) == -1:\n num_examples = world.num_examples()\n else:\n num_examples = opt['num_examples']\n log_timer = TimeLogger()\n\n print('[ starting to build candidates from task.. 
(ex:' + str(num_examples) + ')]')\n print('[ saving output to {} ]'.format(outfile))\n cands = []\n for _ in range(num_examples):\n world.parley()\n # We get the acts of the first agent, which is the teacher.\n acts = world.get_acts()[0]\n if isinstance(acts, dict):\n # We turn into a batch of 1 example, in case batching is being used.\n acts = [acts]\n for a in acts:\n candidate = a.get('labels', a.get('eval_labels', None))\n if candidate is not None:\n candidate = candidate[0]\n cands.append(candidate)\n if log_timer.time() > opt['log_every_n_secs']:\n text, _log = log_timer.log(world.total_parleys, world.num_examples())\n print(text)\n if world.epoch_done():\n print('EPOCH DONE')\n break\n fw = open(outfile, 'w')\n fw.write('\\n'.join(cands))\n fw.close()\n\n\ndef main():\n random.seed(42)\n # Get command line arguments\n parser = ParlaiParser()\n parser.add_argument(\n '-n',\n '--num-examples',\n default=-1,\n type=int,\n help='Total number of exs to convert, -1 to convert all examples',\n )\n parser.add_argument(\n '-of',\n '--outfile',\n default=None,\n type=str,\n help='Output file where to save, by default will be created in /tmp',\n )\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n parser.set_defaults(datatype='train:evalmode')\n opt = parser.parse_args()\n build_cands(opt)\n\n\nif __name__ == '__main__':\n main()\n", "path": "parlai/scripts/build_candidates.py"}]} | 1,619 | 143 |
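A tiny sketch of the guard this patch introduces — dropping back to a single thread before the world is built — using a plain dict in place of ParlAI's parsed options.

```python
# Sketch only: candidate building is not hogwild-safe, so force numthreads=1
# before create_task() would otherwise return a HogwildWorld (which has no .acts).
def force_single_thread(opt: dict) -> dict:
    if opt.get("numthreads", 1) > 1:
        opt["numthreads"] = 1
    return opt


opt = {"task": "babi:task10k:1", "datatype": "train:evalmode", "numthreads": 4}
print(force_single_thread(opt))
# -> {'task': 'babi:task10k:1', 'datatype': 'train:evalmode', 'numthreads': 1}
```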
gh_patches_debug_1156 | rasdani/github-patches | git_diff | facebookresearch__hydra-1531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `env` to Hydra's config group
This is a follow-up to #1441.
The `env` config group will allow users to manually change the env defaults value (such as providing default callbacks or updating run.dir).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/conf/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from omegaconf import MISSING
6
7 from hydra.core.config_store import ConfigStore
8
9
10 @dataclass
11 class HelpConf:
12 app_name: str = MISSING
13 header: str = MISSING
14 footer: str = MISSING
15 template: str = MISSING
16
17
18 @dataclass
19 class HydraHelpConf:
20 hydra_help: str = MISSING
21 template: str = MISSING
22
23
24 @dataclass
25 class RunDir:
26 dir: str = MISSING
27
28
29 @dataclass
30 class SweepDir:
31 dir: str = MISSING
32 subdir: str = MISSING
33
34
35 @dataclass
36 class OverridesConf:
37 # Overrides for the hydra configuration
38 hydra: List[str] = field(default_factory=lambda: [])
39 # Overrides for the task configuration
40 task: List[str] = field(default_factory=lambda: [])
41
42
43 # job runtime information will be populated here
44 @dataclass
45 class JobConf:
46 # Job name, populated automatically unless specified by the user (in config or cli)
47 name: str = MISSING
48
49 # Populated automatically by Hydra.
50 # Concatenation of job overrides that can be used as a part
51 # of the directory name.
52 # This can be configured via hydra.job.config.override_dirname
53 override_dirname: str = MISSING
54
55 # Job ID in underlying scheduling system
56 id: str = MISSING
57
58 # Job number if job is a part of a sweep
59 num: int = MISSING
60
61 # The config name used by the job
62 config_name: Optional[str] = MISSING
63
64 # Environment variables to set remotely
65 env_set: Dict[str, str] = field(default_factory=dict)
66 # Environment variables to copy from the launching machine
67 env_copy: List[str] = field(default_factory=list)
68
69 # Job config
70 @dataclass
71 class JobConfig:
72 @dataclass
73 # configuration for the ${hydra.job.override_dirname} runtime variable
74 class OverrideDirname:
75 kv_sep: str = "="
76 item_sep: str = ","
77 exclude_keys: List[str] = field(default_factory=list)
78
79 override_dirname: OverrideDirname = OverrideDirname()
80
81 config: JobConfig = JobConfig()
82
83
84 @dataclass
85 class RuntimeConf:
86 version: str = MISSING
87 cwd: str = MISSING
88
89
90 @dataclass
91 class HydraConf:
92 defaults: List[Any] = field(
93 default_factory=lambda: [
94 {"output": "default"},
95 {"launcher": "basic"},
96 {"sweeper": "basic"},
97 {"help": "default"},
98 {"hydra_help": "default"},
99 {"hydra_logging": "default"},
100 {"job_logging": "default"},
101 {"callbacks": None},
102 ]
103 )
104
105 # Elements to append to the config search path.
106 # Note: This can only be configured in the primary config.
107 searchpath: List[str] = field(default_factory=list)
108
109 # Normal run output configuration
110 run: RunDir = RunDir()
111 # Multi-run output configuration
112 sweep: SweepDir = SweepDir()
113 # Logging configuration for Hydra
114 hydra_logging: Any = MISSING
115 # Logging configuration for the job
116 job_logging: Any = MISSING
117
118 # Sweeper configuration
119 sweeper: Any = MISSING
120 # Launcher configuration
121 launcher: Any = MISSING
122 # Callbacks configuration
123 callbacks: Dict[str, Any] = field(default_factory=dict)
124
125 # Program Help template
126 help: HelpConf = HelpConf()
127 # Hydra's Help template
128 hydra_help: HydraHelpConf = HydraHelpConf()
129
130 # Output directory for produced configuration files and overrides.
131 # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging
132 # and extra context when looking at past runs.
133 # Setting to None will prevent the creation of the output subdir.
134 output_subdir: Optional[str] = ".hydra"
135
136 # Those lists will contain runtime overrides
137 overrides: OverridesConf = OverridesConf()
138
139 job: JobConf = JobConf()
140
141 # populated at runtime
142 runtime: RuntimeConf = RuntimeConf()
143
144 # Can be a boolean, string or a list of strings
145 # If a boolean, setting to true will set the log level for the root logger to debug
146 # If a string, it's interpreted as a the list [string]
147 # If a list, each element is interpreted as a logger to have logging level set to debug.
148 # Typical command lines to manipulate hydra.verbose:
149 # hydra.verbose=true
150 # hydra.verbose=[hydra,__main__]
151 # TODO: good use case for Union support in OmegaConf
152 verbose: Any = False
153
154 # Composition choices dictionary
155 choices: Dict[str, str] = field(default_factory=lambda: {})
156
157
158 cs = ConfigStore.instance()
159
160 cs.store(
161 group="hydra",
162 name="config",
163 node=HydraConf(),
164 provider="hydra",
165 )
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py
--- a/hydra/conf/__init__.py
+++ b/hydra/conf/__init__.py
@@ -99,6 +99,8 @@
{"hydra_logging": "default"},
{"job_logging": "default"},
{"callbacks": None},
+ # env specific overrides
+ {"env": "default"},
]
)
| {"golden_diff": "diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py\n--- a/hydra/conf/__init__.py\n+++ b/hydra/conf/__init__.py\n@@ -99,6 +99,8 @@\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n+ # env specific overrides\n+ {\"env\": \"default\"},\n ]\n )\n", "issue": "Add `env` to Hydra's config group\nThis is a follow up to #1441\r\n\r\nthe `env` config group will allows users to manually change the env defaults value. (such as provides default callbacks or update run.dir )\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass RuntimeConf:\n version: str = MISSING\n cwd: str = MISSING\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n {\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = 
MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n # Composition choices dictionary\n choices: Dict[str, str] = field(default_factory=lambda: {})\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass RuntimeConf:\n 
version: str = MISSING\n cwd: str = MISSING\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n {\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n # env specific overrides\n {\"env\": \"default\"},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n # Composition choices dictionary\n choices: Dict[str, str] = field(default_factory=lambda: {})\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}]} | 1,820 | 98 |
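An illustrative sketch of how an application could register its own node for the new `hydra/env` group added by this patch; the `OrgEnv` dataclass and the `org_defaults` name are invented for the example, and only the `cs.store(...)` call mirrors the pattern already used in `hydra/conf/__init__.py` above.

```python
# Illustration only: register a custom config under the new 'hydra/env' group so a
# primary config can select it instead of the stock {"env": "default"} entry.
from dataclasses import dataclass, field
from typing import Any, Dict

from hydra.core.config_store import ConfigStore


@dataclass
class OrgEnv:
    # hypothetical organisation-wide defaults, e.g. callbacks applied to every run
    callbacks: Dict[str, Any] = field(default_factory=dict)


cs = ConfigStore.instance()
cs.store(group="hydra/env", name="org_defaults", node=OrgEnv, provider="example")
# A primary config would then override the hydra/env default to pick 'org_defaults'.
```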
gh_patches_debug_20846 | rasdani/github-patches | git_diff | wagtail__wagtail-1147 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wagtail doesn't gracefully support session invalidation on password change
According to [Django's documentation](https://docs.djangoproject.com/en/1.7/topics/auth/default/#session-invalidation-on-password-change), SessionAuthenticationMiddleware is new in Django 1.7, enabled by default, and will be mandatory in Django 2.0.
Currently, when the middleware is loaded and the user changes their password, they are immediately kicked out to the sign-in screen. The user's session is most likely invalidated. This is very obtrusive, and the user is not informed whether their password was successfully updated. I believe the offending code is in
[account.py](https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/views/account.py#L26). I attempted to modify the code from the example to make it work, but the outcome was the same:
``` python
# ...
from django.contrib.auth import update_session_auth_hash # new code
# ...
def change_password(request):
can_change_password = request.user.has_usable_password()
if can_change_password:
if request.POST:
form = SetPasswordForm(request.user, request.POST)
if form.is_valid():
form.save()
update_session_auth_hash(request, form.user) # new code
messages.success(request, _("Your password has been changed successfully!"))
return redirect('wagtailadmin_account')
else:
form = SetPasswordForm(request.user)
else:
form = None
return render(request, 'wagtailadmin/account/change_password.html', {
'form': form,
'can_change_password': can_change_password,
})
```
I am currently a Django novice, so that's as far as I was able to get. Hope this is an easy fix!
--- END ISSUE ---
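A minimal regression-test sketch for the behavior described above, assuming Django's test client, the default user model, and Wagtail's `wagtailadmin_account_change_password` URL name (that URL name and the user details are illustrative assumptions, not taken from Wagtail's test suite):

``` python
from django.contrib.auth import get_user_model
from django.core.urlresolvers import reverse  # django.urls.reverse on newer Django
from django.test import TestCase


class ChangePasswordKeepsSessionTest(TestCase):
    def setUp(self):
        get_user_model().objects.create_superuser(
            username="admin", email="[email protected]", password="old-password"
        )
        self.client.login(username="admin", password="old-password")

    def test_session_survives_password_change(self):
        # With update_session_auth_hash() applied in the view, the session hash is
        # refreshed, so the user should remain authenticated after the change.
        response = self.client.post(
            reverse("wagtailadmin_account_change_password"),  # assumed URL name
            {"new_password1": "new-password", "new_password2": "new-password"},
        )
        self.assertEqual(response.status_code, 302)
        self.assertIn("_auth_user_id", self.client.session)
```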
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailadmin/views/account.py`
Content:
```
1 from django.conf import settings
2 from django.shortcuts import render, redirect
3 from django.contrib import messages
4 from django.contrib.auth.forms import SetPasswordForm
5 from django.contrib.auth.views import logout as auth_logout, login as auth_login
6 from django.utils.translation import ugettext as _
7 from django.views.decorators.debug import sensitive_post_parameters
8 from django.views.decorators.cache import never_cache
9
10 from wagtail.wagtailadmin import forms
11 from wagtail.wagtailusers.forms import NotificationPreferencesForm
12 from wagtail.wagtailusers.models import UserProfile
13 from wagtail.wagtailcore.models import UserPagePermissionsProxy
14
15
16 def account(request):
17 user_perms = UserPagePermissionsProxy(request.user)
18 show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()
19
20 return render(request, 'wagtailadmin/account/account.html', {
21 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),
22 'show_notification_preferences': show_notification_preferences
23 })
24
25
26 def change_password(request):
27 can_change_password = request.user.has_usable_password()
28
29 if can_change_password:
30 if request.POST:
31 form = SetPasswordForm(request.user, request.POST)
32
33 if form.is_valid():
34 form.save()
35
36 messages.success(request, _("Your password has been changed successfully!"))
37 return redirect('wagtailadmin_account')
38 else:
39 form = SetPasswordForm(request.user)
40 else:
41 form = None
42
43 return render(request, 'wagtailadmin/account/change_password.html', {
44 'form': form,
45 'can_change_password': can_change_password,
46 })
47
48
49 def notification_preferences(request):
50
51 if request.POST:
52 form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))
53
54 if form.is_valid():
55 form.save()
56 messages.success(request, _("Your preferences have been updated successfully!"))
57 return redirect('wagtailadmin_account')
58 else:
59 form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))
60
61 # quick-and-dirty catch-all in case the form has been rendered with no
62 # fields, as the user has no customisable permissions
63 if not form.fields:
64 return redirect('wagtailadmin_account')
65
66 return render(request, 'wagtailadmin/account/notification_preferences.html', {
67 'form': form,
68 })
69
70
71 @sensitive_post_parameters()
72 @never_cache
73 def login(request):
74 if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):
75 return redirect('wagtailadmin_home')
76 else:
77 from django.contrib.auth import get_user_model
78 return auth_login(request,
79 template_name='wagtailadmin/login.html',
80 authentication_form=forms.LoginForm,
81 extra_context={
82 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),
83 'username_field': get_user_model().USERNAME_FIELD,
84 },
85 )
86
87
88 def logout(request):
89 response = auth_logout(request, next_page='wagtailadmin_login')
90
91 # By default, logging out will generate a fresh sessionid cookie. We want to use the
92 # absence of sessionid as an indication that front-end pages are being viewed by a
93 # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.
94 response.delete_cookie(settings.SESSION_COOKIE_NAME,
95 domain=settings.SESSION_COOKIE_DOMAIN,
96 path=settings.SESSION_COOKIE_PATH)
97
98 # HACK: pretend that the session hasn't been modified, so that SessionMiddleware
99 # won't override the above and write a new cookie.
100 request.session.modified = False
101
102 return response
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py
--- a/wagtail/wagtailadmin/views/account.py
+++ b/wagtail/wagtailadmin/views/account.py
@@ -3,6 +3,7 @@
from django.contrib import messages
from django.contrib.auth.forms import SetPasswordForm
from django.contrib.auth.views import logout as auth_logout, login as auth_login
+from django.contrib.auth import update_session_auth_hash
from django.utils.translation import ugettext as _
from django.views.decorators.debug import sensitive_post_parameters
from django.views.decorators.cache import never_cache
@@ -32,6 +33,7 @@
if form.is_valid():
form.save()
+ update_session_auth_hash(request, form.user)
messages.success(request, _("Your password has been changed successfully!"))
return redirect('wagtailadmin_account')
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py\n--- a/wagtail/wagtailadmin/views/account.py\n+++ b/wagtail/wagtailadmin/views/account.py\n@@ -3,6 +3,7 @@\n from django.contrib import messages\n from django.contrib.auth.forms import SetPasswordForm\n from django.contrib.auth.views import logout as auth_logout, login as auth_login\n+from django.contrib.auth import update_session_auth_hash\n from django.utils.translation import ugettext as _ \n from django.views.decorators.debug import sensitive_post_parameters\n from django.views.decorators.cache import never_cache\n@@ -32,6 +33,7 @@\n \n if form.is_valid():\n form.save()\n+ update_session_auth_hash(request, form.user)\n \n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n", "issue": "Wagtail doesn't gracefully support session invalidation on password change\nAccording to [Django's documentation](https://docs.djangoproject.com/en/1.7/topics/auth/default/#session-invalidation-on-password-change), SessionAuthenticationMiddleware is new in Django 1.7, enabled by default, and will be mandatory in Django 2.0.\n\nCurrently, when the middleware is loaded and the user changes their password, they are immediately kicked out to the sign in screen. The user's session is most likely invalidated. This is very obtrusive and the user is not informed if their password was successfully updated. I believe the offending code is in\n[account.py](https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/views/account.py#L26) and attempted to modify the code from the example to make it work, but the outcome was the same:\n\n``` python\n# ...\nfrom django.contrib.auth import update_session_auth_hash # new code\n# ...\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n update_session_auth_hash(request, form.user) # new code\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n```\n\nI am, currently, a Django novice, so that's as far as I was able to get. 
Hope this is an easy fix!\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.views import logout as auth_logout, login as auth_login\nfrom django.utils.translation import ugettext as _ \nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.decorators.cache import never_cache\n\nfrom wagtail.wagtailadmin import forms\nfrom wagtail.wagtailusers.forms import NotificationPreferencesForm\nfrom wagtail.wagtailusers.models import UserProfile\nfrom wagtail.wagtailcore.models import UserPagePermissionsProxy\n\n\ndef account(request):\n user_perms = UserPagePermissionsProxy(request.user)\n show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()\n\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n 'show_notification_preferences': show_notification_preferences\n })\n\n\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\ndef notification_preferences(request):\n\n if request.POST:\n form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Your preferences have been updated successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))\n\n # quick-and-dirty catch-all in case the form has been rendered with no\n # fields, as the user has no customisable permissions\n if not form.fields:\n return redirect('wagtailadmin_account')\n\n return render(request, 'wagtailadmin/account/notification_preferences.html', {\n 'form': form,\n })\n\n\n@sensitive_post_parameters()\n@never_cache\ndef login(request):\n if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):\n return redirect('wagtailadmin_home')\n else:\n from django.contrib.auth import get_user_model\n return auth_login(request,\n template_name='wagtailadmin/login.html',\n authentication_form=forms.LoginForm,\n extra_context={\n 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n 'username_field': get_user_model().USERNAME_FIELD,\n },\n )\n\n\ndef logout(request):\n response = auth_logout(request, next_page='wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. 
We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.views import logout as auth_logout, login as auth_login\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.utils.translation import ugettext as _ \nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.decorators.cache import never_cache\n\nfrom wagtail.wagtailadmin import forms\nfrom wagtail.wagtailusers.forms import NotificationPreferencesForm\nfrom wagtail.wagtailusers.models import UserProfile\nfrom wagtail.wagtailcore.models import UserPagePermissionsProxy\n\n\ndef account(request):\n user_perms = UserPagePermissionsProxy(request.user)\n show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()\n\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n 'show_notification_preferences': show_notification_preferences\n })\n\n\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n update_session_auth_hash(request, form.user)\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\ndef notification_preferences(request):\n\n if request.POST:\n form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Your preferences have been updated successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))\n\n # quick-and-dirty catch-all in case the form has been rendered with no\n # fields, as the user has no customisable permissions\n if not form.fields:\n return redirect('wagtailadmin_account')\n\n return render(request, 'wagtailadmin/account/notification_preferences.html', {\n 'form': form,\n })\n\n\n@sensitive_post_parameters()\n@never_cache\ndef login(request):\n if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):\n return redirect('wagtailadmin_home')\n else:\n from django.contrib.auth import get_user_model\n return auth_login(request,\n template_name='wagtailadmin/login.html',\n authentication_form=forms.LoginForm,\n extra_context={\n 'show_password_reset': getattr(settings, 
'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n 'username_field': get_user_model().USERNAME_FIELD,\n },\n )\n\n\ndef logout(request):\n response = auth_logout(request, next_page='wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}]} | 1,623 | 193 |
gh_patches_debug_28162 | rasdani/github-patches | git_diff | Qiskit__qiskit-12069 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation of RVGate is incorrect
### Environment
N/A
### What is happening?
Received this in an email:
> Hi, I think I found some errors in the Qiskit documentation at
<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>
and I'm contacting you because you look like the two people who most recently edited the source file at
<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>
The matrix representation given in the documentation seems to be wrong. I compared it to the definition given in
<https://arxiv.org/pdf/2104.14875.pdf>
on page 4, equation 1, we see the definition of the rotation matrix. It almost matches the definition given in the documentation at
<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>
except for two mistakes: the "sinc" function should be "sin", and the angle should be divided by two. This can be compared to the source code at
<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>
at lines 86 and 87, where we see the angle divided by two, and we see the use of the sin and cos functions.
### How can we reproduce the issue?
N/A
### What should happen?
N/A
### Any suggestions?
_No response_
--- END ISSUE ---
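The discrepancy can be checked numerically by comparing `RVGate.to_matrix()` against `exp(-i (v·σ)/2)` built directly from the Pauli matrices. The following sketch assumes Qiskit and SciPy are installed:

```python
import numpy as np
from scipy.linalg import expm
from qiskit.circuit.library import RVGate

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

v = np.array([0.3, -0.7, 1.1])  # arbitrary rotation vector

expected = expm(-0.5j * (v[0] * sx + v[1] * sy + v[2] * sz))
actual = RVGate(*v).to_matrix()

# to_matrix() uses sin/cos of |v|/2, matching the paper's definition,
# so the two matrices agree; the docstring's "sinc" of |v| does not.
assert np.allclose(actual, expected)
```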
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/circuit/library/generalized_gates/rv.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2020
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Rotation around an arbitrary axis on the Bloch sphere."""
14
15 import numpy
16 from qiskit.circuit.gate import Gate
17 from qiskit.circuit.exceptions import CircuitError
18
19
20 class RVGate(Gate):
21 r"""Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is
22 angle of rotation in radians.
23
24 Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`
25 with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.
26
27 **Circuit symbol:**
28
29 .. parsed-literal::
30
31 ┌─────────────────┐
32 q_0: ┤ RV(v_x,v_y,v_z) ├
33 └─────────────────┘
34
35 **Matrix Representation:**
36
37 .. math::
38
39 \newcommand{\rotationangle}{|\vec{v}|}
40 \newcommand{\sinc}{\text{sinc}}
41 R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma}} =
42 \begin{pmatrix}
43 \cos\left(\rotationangle\right) -i v_z \sinc\left(\rotationangle\right)
44 & -(i v_x + v_y) \sinc\left(\rotationangle\right) \\
45 -(i v_x - v_y) \sinc\left(\rotationangle\right)
46 & \cos\left(\rotationangle\right) + i v_z \sinc\left(\rotationangle\right)
47 \end{pmatrix}
48 """
49
50 def __init__(self, v_x, v_y, v_z, basis="U"):
51 """Create new rv single-qubit gate.
52
53 Args:
54 v_x (float): x-component
55 v_y (float): y-component
56 v_z (float): z-component
57 basis (str, optional): basis (see
58 :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)
59 """
60 # pylint: disable=cyclic-import
61 from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer
62
63 super().__init__("rv", 1, [v_x, v_y, v_z])
64 self._decomposer = OneQubitEulerDecomposer(basis=basis)
65
66 def _define(self):
67 try:
68 self.definition = self._decomposer(self.to_matrix())
69 except TypeError as ex:
70 raise CircuitError(
71 f"The {self.name} gate cannot be decomposed with unbound parameters"
72 ) from ex
73
74 def inverse(self):
75 """Invert this gate."""
76 vx, vy, vz = self.params
77 return RVGate(-vx, -vy, -vz)
78
79 def to_matrix(self):
80 """Return a numpy.array for the R(v) gate."""
81 v = numpy.asarray(self.params, dtype=float)
82 angle = numpy.sqrt(v.dot(v))
83 if angle == 0:
84 return numpy.array([[1, 0], [0, 1]])
85 nx, ny, nz = v / angle
86 sin = numpy.sin(angle / 2)
87 cos = numpy.cos(angle / 2)
88 return numpy.array(
89 [
90 [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],
91 [(ny - 1j * nx) * sin, cos + 1j * nz * sin],
92 ]
93 )
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/circuit/library/generalized_gates/rv.py b/qiskit/circuit/library/generalized_gates/rv.py
--- a/qiskit/circuit/library/generalized_gates/rv.py
+++ b/qiskit/circuit/library/generalized_gates/rv.py
@@ -18,7 +18,7 @@
class RVGate(Gate):
- r"""Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is
+ r"""Rotation around arbitrary rotation axis :math:`\vec{v}` where :math:`\|\vec{v}\|_2` is
angle of rotation in radians.
Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`
@@ -36,14 +36,17 @@
.. math::
- \newcommand{\rotationangle}{|\vec{v}|}
- \newcommand{\sinc}{\text{sinc}}
- R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma}} =
+ \newcommand{\rotationangle}{\frac{\|\vec{v}\|_2}{2}}
+ R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma} / 2} =
\begin{pmatrix}
- \cos\left(\rotationangle\right) -i v_z \sinc\left(\rotationangle\right)
- & -(i v_x + v_y) \sinc\left(\rotationangle\right) \\
- -(i v_x - v_y) \sinc\left(\rotationangle\right)
- & \cos\left(\rotationangle\right) + i v_z \sinc\left(\rotationangle\right)
+ \cos\left(\rotationangle\right)
+ -i \frac{v_z}{\|\vec{v}\|_2} \sin\left(\rotationangle\right)
+ & -(i \frac{v_x}{\|\vec{v}\|_2}
+ + \frac{v_y}{\|\vec{v}\|_2}) \sin\left(\rotationangle\right) \\
+ -(i \frac{v_x}{\|\vec{v}\|_2}
+ - \frac{v_y}{\|\vec{v}\|_2}) \sin\left(\rotationangle\right)
+ & \cos\left(\rotationangle\right)
+ + i \frac{v_z}{\|\vec{v}\|_2} \sin\left(\rotationangle\right)
\end{pmatrix}
"""
| {"golden_diff": "diff --git a/qiskit/circuit/library/generalized_gates/rv.py b/qiskit/circuit/library/generalized_gates/rv.py\n--- a/qiskit/circuit/library/generalized_gates/rv.py\n+++ b/qiskit/circuit/library/generalized_gates/rv.py\n@@ -18,7 +18,7 @@\n \n \n class RVGate(Gate):\n- r\"\"\"Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is\n+ r\"\"\"Rotation around arbitrary rotation axis :math:`\\vec{v}` where :math:`\\|\\vec{v}\\|_2` is\n angle of rotation in radians.\n \n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n@@ -36,14 +36,17 @@\n \n .. math::\n \n- \\newcommand{\\rotationangle}{|\\vec{v}|}\n- \\newcommand{\\sinc}{\\text{sinc}}\n- R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma}} =\n+ \\newcommand{\\rotationangle}{\\frac{\\|\\vec{v}\\|_2}{2}}\n+ R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma} / 2} =\n \\begin{pmatrix}\n- \\cos\\left(\\rotationangle\\right) -i v_z \\sinc\\left(\\rotationangle\\right)\n- & -(i v_x + v_y) \\sinc\\left(\\rotationangle\\right) \\\\\n- -(i v_x - v_y) \\sinc\\left(\\rotationangle\\right)\n- & \\cos\\left(\\rotationangle\\right) + i v_z \\sinc\\left(\\rotationangle\\right)\n+ \\cos\\left(\\rotationangle\\right)\n+ -i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n+ & -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n+ + \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right) \\\\\n+ -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n+ - \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right)\n+ & \\cos\\left(\\rotationangle\\right)\n+ + i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n", "issue": "Documentation of RVGate is incorrect\n### Environment\n\nN/A\n\n### What is happening?\n\nReceived this in an email:\r\n>Hi, I think I found some errors in the Qiskit documentation at\r\n<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>\r\nand I'm contacting you because you look like the two people who most recently edited the source file at\r\n<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>\r\nThe matrix representation given in the documentation seems to be wrong. I compared it to the definition given in\r\n<https://arxiv.org/pdf/2104.14875.pdf>\r\non page 4, equation 1, we see the definition of the rotation matrix. It almost matches the definition given in the documentation at\r\n<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>\r\nexcept for two mistakes: the \"sinc\" function should be \"sin\", and the angle should be divided by two. This can be compared to the source code at\r\n<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>\r\nat lines 86 and 87, where we see the angle divided by two, and we see the use of the sin and cos functions.\n\n### How can we reproduce the issue?\n\nN/A\n\n### What should happen?\n\nN/A\n\n### Any suggestions?\n\n_No response_\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2020\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Rotation around an arbitrary axis on the Bloch sphere.\"\"\"\n\nimport numpy\nfrom qiskit.circuit.gate import Gate\nfrom qiskit.circuit.exceptions import CircuitError\n\n\nclass RVGate(Gate):\n r\"\"\"Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is\n angle of rotation in radians.\n\n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.\n\n **Circuit symbol:**\n\n .. parsed-literal::\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n q_0: \u2524 RV(v_x,v_y,v_z) \u251c\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n **Matrix Representation:**\n\n .. math::\n\n \\newcommand{\\rotationangle}{|\\vec{v}|}\n \\newcommand{\\sinc}{\\text{sinc}}\n R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma}} =\n \\begin{pmatrix}\n \\cos\\left(\\rotationangle\\right) -i v_z \\sinc\\left(\\rotationangle\\right)\n & -(i v_x + v_y) \\sinc\\left(\\rotationangle\\right) \\\\\n -(i v_x - v_y) \\sinc\\left(\\rotationangle\\right)\n & \\cos\\left(\\rotationangle\\right) + i v_z \\sinc\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n\n def __init__(self, v_x, v_y, v_z, basis=\"U\"):\n \"\"\"Create new rv single-qubit gate.\n\n Args:\n v_x (float): x-component\n v_y (float): y-component\n v_z (float): z-component\n basis (str, optional): basis (see\n :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)\n \"\"\"\n # pylint: disable=cyclic-import\n from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer\n\n super().__init__(\"rv\", 1, [v_x, v_y, v_z])\n self._decomposer = OneQubitEulerDecomposer(basis=basis)\n\n def _define(self):\n try:\n self.definition = self._decomposer(self.to_matrix())\n except TypeError as ex:\n raise CircuitError(\n f\"The {self.name} gate cannot be decomposed with unbound parameters\"\n ) from ex\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n vx, vy, vz = self.params\n return RVGate(-vx, -vy, -vz)\n\n def to_matrix(self):\n \"\"\"Return a numpy.array for the R(v) gate.\"\"\"\n v = numpy.asarray(self.params, dtype=float)\n angle = numpy.sqrt(v.dot(v))\n if angle == 0:\n return numpy.array([[1, 0], [0, 1]])\n nx, ny, nz = v / angle\n sin = numpy.sin(angle / 2)\n cos = numpy.cos(angle / 2)\n return numpy.array(\n [\n [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],\n [(ny - 1j * nx) * sin, cos + 1j * nz * sin],\n ]\n )\n", "path": "qiskit/circuit/library/generalized_gates/rv.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2020\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Rotation around an arbitrary axis on the Bloch sphere.\"\"\"\n\nimport numpy\nfrom qiskit.circuit.gate import Gate\nfrom qiskit.circuit.exceptions import CircuitError\n\n\nclass RVGate(Gate):\n r\"\"\"Rotation around arbitrary rotation axis :math:`\\vec{v}` where :math:`\\|\\vec{v}\\|_2` is\n angle of rotation in radians.\n\n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.\n\n **Circuit symbol:**\n\n .. parsed-literal::\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n q_0: \u2524 RV(v_x,v_y,v_z) \u251c\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n **Matrix Representation:**\n\n .. math::\n\n \\newcommand{\\rotationangle}{\\frac{\\|\\vec{v}\\|_2}{2}}\n R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma} / 2} =\n \\begin{pmatrix}\n \\cos\\left(\\rotationangle\\right)\n -i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n & -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n + \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right) \\\\\n -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n - \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right)\n & \\cos\\left(\\rotationangle\\right)\n + i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n\n def __init__(self, v_x, v_y, v_z, basis=\"U\"):\n \"\"\"Create new rv single-qubit gate.\n\n Args:\n v_x (float): x-component\n v_y (float): y-component\n v_z (float): z-component\n basis (str, optional): basis (see\n :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)\n \"\"\"\n # pylint: disable=cyclic-import\n from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer\n\n super().__init__(\"rv\", 1, [v_x, v_y, v_z])\n self._decomposer = OneQubitEulerDecomposer(basis=basis)\n\n def _define(self):\n try:\n self.definition = self._decomposer(self.to_matrix())\n except TypeError as ex:\n raise CircuitError(\n f\"The {self.name} gate cannot be decomposed with unbound parameters\"\n ) from ex\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n vx, vy, vz = self.params\n return RVGate(-vx, -vy, -vz)\n\n def to_matrix(self):\n \"\"\"Return a numpy.array for the R(v) gate.\"\"\"\n v = numpy.asarray(self.params, dtype=float)\n angle = numpy.sqrt(v.dot(v))\n if angle == 0:\n return numpy.array([[1, 0], [0, 1]])\n nx, ny, nz = v / angle\n sin = numpy.sin(angle / 2)\n cos = numpy.cos(angle / 2)\n return numpy.array(\n [\n [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],\n [(ny - 1j * nx) * sin, cos + 1j * nz * sin],\n ]\n )\n", "path": "qiskit/circuit/library/generalized_gates/rv.py"}]} | 1,667 | 592 |
gh_patches_debug_8362 | rasdani/github-patches | git_diff | getnikola__nikola-3036 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RSS_PATH doesn't work as advertised (is path and filename, excluding .xml)
* Python Version: 3.5.3
* Nikola Version: v7.8.14
* Operating System: Debian
A fresh config says:
```
# Final location for the blog main RSS feed is:
# output / TRANSLATION[lang] / RSS_PATH / rss.xml
```
which is in line with other `_PATH` variables.
But it seems `RSS_PATH` is actually path+filename (and `.xml` is appended).
With `RSS_PATH = "blog/"` I get `render_taxonomies:output/blog/.xml` (instead of `blog/rss.xml`).
With `RSS_PATH = "blog/index.xml"` I get `render_taxonomies:output/blog/index.xml.xml`.
--- END ISSUE ---
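To make the mismatch concrete, here is a rough sketch of the advertised composition versus the observed one, using plain `os.path.join` rather than Nikola's actual path-handler code:

```python
import os

OUTPUT_FOLDER = "output"
LANG_PREFIX = ""          # default language has no translation prefix
RSS_PATH = "blog"

# What the config comment advertises: RSS_PATH is a directory.
advertised = os.path.join(OUTPUT_FOLDER, LANG_PREFIX, RSS_PATH, "rss.xml")
print(advertised)  # output/blog/rss.xml

# What v7.8.14 appears to do: RSS_PATH is path + filename, with ".xml" appended.
observed = os.path.join(OUTPUT_FOLDER, LANG_PREFIX, RSS_PATH + ".xml")
print(observed)    # output/blog.xml
```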
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/task/indexes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2018 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Render the blog's main index."""
28
29
30 from nikola.plugin_categories import Taxonomy
31
32
33 class Indexes(Taxonomy):
34 """Classify for the blog's main index."""
35
36 name = "classify_indexes"
37
38 classification_name = "index"
39 overview_page_variable_name = None
40 more_than_one_classifications_per_post = False
41 has_hierarchy = False
42 show_list_as_index = True
43 template_for_single_list = "index.tmpl"
44 template_for_classification_overview = None
45 apply_to_posts = True
46 apply_to_pages = False
47 omit_empty_classifications = False
48 path_handler_docstrings = {
49 'index_index': False,
50 'index': """Link to a numbered index.
51
52 Example:
53
54 link://index/3 => /index-3.html""",
55 'index_atom': """Link to a numbered Atom index.
56
57 Example:
58
59 link://index_atom/3 => /index-3.atom""",
60 'index_rss': """A link to the RSS feed path.
61
62 Example:
63
64 link://rss => /blog/rss.xml""",
65 }
66
67 def set_site(self, site):
68 """Set Nikola site."""
69 # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin
70 site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))
71 site.path_handlers['rss'].__doc__ = """A link to the RSS feed path.
72
73 Example:
74
75 link://rss => /blog/rss.xml
76 """.strip()
77 return super(Indexes, self).set_site(site)
78
79 def get_implicit_classifications(self, lang):
80 """Return a list of classification strings which should always appear in posts_per_classification."""
81 return [""]
82
83 def classify(self, post, lang):
84 """Classify the given post for the given language."""
85 return [""]
86
87 def get_classification_friendly_name(self, classification, lang, only_last_component=False):
88 """Extract a friendly name from the classification."""
89 return self.site.config["BLOG_TITLE"](lang)
90
91 def get_path(self, classification, lang, dest_type='page'):
92 """Return a path for the given classification."""
93 if dest_type == 'rss':
94 return [self.site.config['RSS_PATH'](lang)], True
95 # 'page' (index) or 'feed' (Atom)
96 page_number = None
97 if dest_type == 'page':
98 # Interpret argument as page number
99 try:
100 page_number = int(classification)
101 except (ValueError, TypeError):
102 pass
103 return [self.site.config['INDEX_PATH'](lang)], 'always', page_number
104
105 def provide_context_and_uptodate(self, classification, lang, node=None):
106 """Provide data for the context and the uptodate list for the list of the given classifiation."""
107 kw = {
108 }
109 context = {
110 "title": self.site.config["INDEXES_TITLE"](lang) or self.site.config["BLOG_TITLE"](lang),
111 "description": self.site.config["BLOG_DESCRIPTION"](lang),
112 "pagekind": ["main_index", "index"],
113 }
114 kw.update(context)
115 return context, kw
116
117 def should_generate_classification_page(self, classification, post_list, lang):
118 """Only generates list of posts for classification if this function returns True."""
119 return not self.site.config["DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED"]
120
121 def should_generate_rss_for_classification_page(self, classification, post_list, lang):
122 """Only generates RSS feed for list of posts for classification if this function returns True."""
123 return not self.site.config["DISABLE_INDEXES_PLUGIN_RSS_FEED"]
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py
--- a/nikola/plugins/task/indexes.py
+++ b/nikola/plugins/task/indexes.py
@@ -91,7 +91,7 @@
def get_path(self, classification, lang, dest_type='page'):
"""Return a path for the given classification."""
if dest_type == 'rss':
- return [self.site.config['RSS_PATH'](lang)], True
+ return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'
# 'page' (index) or 'feed' (Atom)
page_number = None
if dest_type == 'page':
| {"golden_diff": "diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py\n--- a/nikola/plugins/task/indexes.py\n+++ b/nikola/plugins/task/indexes.py\n@@ -91,7 +91,7 @@\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n- return [self.site.config['RSS_PATH'](lang)], True\n+ return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n", "issue": "RSS_PATH doesn't work as advertised (is path and filename, excluding .xml)\n* Python Version: 3.5.3\r\n* Nikola Version: v7.8.14\r\n* Operating System: Debian\r\n\r\nA fresh config says:\r\n\r\n```\r\n# Final location for the blog main RSS feed is:\r\n# output / TRANSLATION[lang] / RSS_PATH / rss.xml\r\n```\r\n\r\nwhich is in line with other `_PATH` variables.\r\n\r\nBut it seems `RSS_PATH` is actually path+filename (and `.xml` is appended).\r\n\r\nWith `RSS_PATH = \"blog/`I get `render_taxonomies:output/blog/.xml` (instead of `blog/rss.xml`)\r\n\r\nWith `RSS_PATH = blog/index.xml` I get `render_taxonomies:output/blog/index.xml.xml`\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2018 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Render the blog's main index.\"\"\"\n\n\nfrom nikola.plugin_categories import Taxonomy\n\n\nclass Indexes(Taxonomy):\n \"\"\"Classify for the blog's main index.\"\"\"\n\n name = \"classify_indexes\"\n\n classification_name = \"index\"\n overview_page_variable_name = None\n more_than_one_classifications_per_post = False\n has_hierarchy = False\n show_list_as_index = True\n template_for_single_list = \"index.tmpl\"\n template_for_classification_overview = None\n apply_to_posts = True\n apply_to_pages = False\n omit_empty_classifications = False\n path_handler_docstrings = {\n 'index_index': False,\n 'index': \"\"\"Link to a numbered index.\n\nExample:\n\nlink://index/3 => /index-3.html\"\"\",\n 'index_atom': \"\"\"Link to a numbered Atom index.\n\nExample:\n\nlink://index_atom/3 => /index-3.atom\"\"\",\n 'index_rss': \"\"\"A link to the RSS feed path.\n\nExample:\n\nlink://rss => /blog/rss.xml\"\"\",\n }\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin\n site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))\n site.path_handlers['rss'].__doc__ = \"\"\"A link to the RSS feed path.\n\nExample:\n\n link://rss => /blog/rss.xml\n \"\"\".strip()\n return super(Indexes, self).set_site(site)\n\n def get_implicit_classifications(self, lang):\n \"\"\"Return a list of classification strings which should always appear in posts_per_classification.\"\"\"\n return [\"\"]\n\n def classify(self, post, lang):\n \"\"\"Classify the given post for the given language.\"\"\"\n return [\"\"]\n\n def get_classification_friendly_name(self, classification, lang, only_last_component=False):\n \"\"\"Extract a friendly name from the classification.\"\"\"\n return self.site.config[\"BLOG_TITLE\"](lang)\n\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n return [self.site.config['RSS_PATH'](lang)], True\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n # Interpret argument as page number\n try:\n page_number = int(classification)\n except (ValueError, TypeError):\n pass\n return [self.site.config['INDEX_PATH'](lang)], 'always', page_number\n\n def provide_context_and_uptodate(self, classification, lang, node=None):\n \"\"\"Provide data for the context and the uptodate list for the list of the given classifiation.\"\"\"\n kw = {\n }\n context = {\n \"title\": self.site.config[\"INDEXES_TITLE\"](lang) or self.site.config[\"BLOG_TITLE\"](lang),\n \"description\": self.site.config[\"BLOG_DESCRIPTION\"](lang),\n \"pagekind\": [\"main_index\", \"index\"],\n }\n kw.update(context)\n return context, kw\n\n def should_generate_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n return not self.site.config[\"DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED\"]\n\n def should_generate_rss_for_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates RSS feed for list of posts for classification if this function returns True.\"\"\"\n return not 
self.site.config[\"DISABLE_INDEXES_PLUGIN_RSS_FEED\"]\n", "path": "nikola/plugins/task/indexes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2018 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Render the blog's main index.\"\"\"\n\n\nfrom nikola.plugin_categories import Taxonomy\n\n\nclass Indexes(Taxonomy):\n \"\"\"Classify for the blog's main index.\"\"\"\n\n name = \"classify_indexes\"\n\n classification_name = \"index\"\n overview_page_variable_name = None\n more_than_one_classifications_per_post = False\n has_hierarchy = False\n show_list_as_index = True\n template_for_single_list = \"index.tmpl\"\n template_for_classification_overview = None\n apply_to_posts = True\n apply_to_pages = False\n omit_empty_classifications = False\n path_handler_docstrings = {\n 'index_index': False,\n 'index': \"\"\"Link to a numbered index.\n\nExample:\n\nlink://index/3 => /index-3.html\"\"\",\n 'index_atom': \"\"\"Link to a numbered Atom index.\n\nExample:\n\nlink://index_atom/3 => /index-3.atom\"\"\",\n 'index_rss': \"\"\"A link to the RSS feed path.\n\nExample:\n\nlink://rss => /blog/rss.xml\"\"\",\n }\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin\n site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))\n site.path_handlers['rss'].__doc__ = \"\"\"A link to the RSS feed path.\n\nExample:\n\n link://rss => /blog/rss.xml\n \"\"\".strip()\n return super(Indexes, self).set_site(site)\n\n def get_implicit_classifications(self, lang):\n \"\"\"Return a list of classification strings which should always appear in posts_per_classification.\"\"\"\n return [\"\"]\n\n def classify(self, post, lang):\n \"\"\"Classify the given post for the given language.\"\"\"\n return [\"\"]\n\n def get_classification_friendly_name(self, classification, lang, only_last_component=False):\n \"\"\"Extract a friendly name from the classification.\"\"\"\n return self.site.config[\"BLOG_TITLE\"](lang)\n\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n # Interpret argument as page number\n try:\n page_number = int(classification)\n 
except (ValueError, TypeError):\n pass\n return [self.site.config['INDEX_PATH'](lang)], 'always', page_number\n\n def provide_context_and_uptodate(self, classification, lang, node=None):\n \"\"\"Provide data for the context and the uptodate list for the list of the given classifiation.\"\"\"\n kw = {\n }\n context = {\n \"title\": self.site.config[\"INDEXES_TITLE\"](lang) or self.site.config[\"BLOG_TITLE\"](lang),\n \"description\": self.site.config[\"BLOG_DESCRIPTION\"](lang),\n \"pagekind\": [\"main_index\", \"index\"],\n }\n kw.update(context)\n return context, kw\n\n def should_generate_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n return not self.site.config[\"DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED\"]\n\n def should_generate_rss_for_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates RSS feed for list of posts for classification if this function returns True.\"\"\"\n return not self.site.config[\"DISABLE_INDEXES_PLUGIN_RSS_FEED\"]\n", "path": "nikola/plugins/task/indexes.py"}]} | 1,735 | 152 |
gh_patches_debug_416 | rasdani/github-patches | git_diff | automl__auto-sklearn-1361 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check if test requirement `flaky` can be removed
We currently have a test dependency [flaky](https://pypi.org/project/flaky/) used to annotate a test `KernelPCAComponentTest::test_default_configuration_classify()`. This is the only place it's used.
--- END ISSUE ---
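Before dropping the dependency, it is worth confirming that this really is the only reference. A small self-contained sketch that scans a test tree for remaining `flaky` mentions (the `test` directory name is an assumption):

```python
import pathlib


def find_references(root: str, needle: str = "flaky"):
    """Yield (path, line number, line) for every Python file mentioning `needle`."""
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if needle in line:
                yield path, line_no, line.strip()


if __name__ == "__main__":
    for path, line_no, line in find_references("test"):
        print(f"{path}:{line_no}: {line}")
```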
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2 import os
3 import sys
4 from setuptools import setup, find_packages
5
6
7 # Check if Auto-sklearn *could* run on the given system
8 if os.name != 'posix':
9 raise ValueError(
10 'Detected unsupported operating system: %s. Please check '
11 'the compability information of auto-sklearn: https://automl.github.io'
12 '/auto-sklearn/master/installation.html#windows-osx-compatibility' %
13 sys.platform
14 )
15
16 if sys.version_info < (3, 7):
17 raise ValueError(
18 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '
19 '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)
20 )
21
22 HERE = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(HERE, 'requirements.txt')) as fp:
24 install_reqs = [r.rstrip() for r in fp.readlines()
25 if not r.startswith('#') and not r.startswith('git+')]
26
27 extras_reqs={
28 "test": [
29 "pytest>=4.6",
30 "mypy",
31 "pytest-xdist",
32 "pytest-timeout",
33 "flaky",
34 "openml",
35 "pre-commit",
36 "pytest-cov",
37 ],
38 "examples": [
39 "matplotlib",
40 "jupyter",
41 "notebook",
42 "seaborn",
43 ],
44 "docs": [
45 "sphinx<4.3",
46 "sphinx-gallery",
47 "sphinx_bootstrap_theme",
48 "numpydoc",
49 "sphinx_toolbox",
50 "docutils==0.16"
51 ],
52 }
53
54 with open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:
55 version = fh.readlines()[-1].split()[-1].strip("\"'")
56
57
58 with open(os.path.join(HERE, 'README.md')) as fh:
59 long_description = fh.read()
60
61
62 setup(
63 name='auto-sklearn',
64 author='Matthias Feurer',
65 author_email='[email protected]',
66 description='Automated machine learning.',
67 long_description=long_description,
68 long_description_content_type='text/markdown',
69 version=version,
70 packages=find_packages(exclude=['test', 'scripts', 'examples']),
71 extras_require=extras_reqs,
72 install_requires=install_reqs,
73 include_package_data=True,
74 license='BSD3',
75 platforms=['Linux'],
76 classifiers=[
77 "Environment :: Console",
78 "Intended Audience :: Developers",
79 "Intended Audience :: Education",
80 "Intended Audience :: Science/Research",
81 "Intended Audience :: Information Technology",
82 "License :: OSI Approved :: BSD License",
83 "Natural Language :: English",
84 "Operating System :: OS Independent",
85 "Topic :: Scientific/Engineering :: Artificial Intelligence",
86 "Topic :: Scientific/Engineering :: Information Analysis",
87 'Programming Language :: Python :: 3.7',
88 'Programming Language :: Python :: 3.8',
89 'Programming Language :: Python :: 3.9',
90 ],
91 python_requires='>=3.7',
92 url='https://automl.github.io/auto-sklearn',
93 )
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,6 @@
"mypy",
"pytest-xdist",
"pytest-timeout",
- "flaky",
"openml",
"pre-commit",
"pytest-cov",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,7 +30,6 @@\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n- \"flaky\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n", "issue": "Check if test requirement `flaky` can be removed\nWe currently have a test dependancy [flaky](https://pypi.org/project/flaky/) used to annotate a test `KernelPCAComponentTest::test_default_configuration_classify()`. This is the only place it's used.\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\n\n# Check if Auto-sklearn *could* run on the given system\nif os.name != 'posix':\n raise ValueError(\n 'Detected unsupported operating system: %s. Please check '\n 'the compability information of auto-sklearn: https://automl.github.io'\n '/auto-sklearn/master/installation.html#windows-osx-compatibility' %\n sys.platform\n )\n\nif sys.version_info < (3, 7):\n raise ValueError(\n 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '\n '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)\n )\n\nHERE = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(HERE, 'requirements.txt')) as fp:\n install_reqs = [r.rstrip() for r in fp.readlines()\n if not r.startswith('#') and not r.startswith('git+')]\n\nextras_reqs={\n \"test\": [\n \"pytest>=4.6\",\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n \"flaky\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n ],\n \"examples\": [\n \"matplotlib\",\n \"jupyter\",\n \"notebook\",\n \"seaborn\",\n ],\n \"docs\": [\n \"sphinx<4.3\",\n \"sphinx-gallery\",\n \"sphinx_bootstrap_theme\",\n \"numpydoc\",\n \"sphinx_toolbox\",\n \"docutils==0.16\"\n ],\n}\n\nwith open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:\n version = fh.readlines()[-1].split()[-1].strip(\"\\\"'\")\n\n\nwith open(os.path.join(HERE, 'README.md')) as fh:\n long_description = fh.read()\n\n\nsetup(\n name='auto-sklearn',\n author='Matthias Feurer',\n author_email='[email protected]',\n description='Automated machine learning.',\n long_description=long_description,\n long_description_content_type='text/markdown',\n version=version,\n packages=find_packages(exclude=['test', 'scripts', 'examples']),\n extras_require=extras_reqs,\n install_requires=install_reqs,\n include_package_data=True,\n license='BSD3',\n platforms=['Linux'],\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires='>=3.7',\n url='https://automl.github.io/auto-sklearn',\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\n\n# Check if Auto-sklearn *could* run on the given system\nif os.name != 'posix':\n raise ValueError(\n 'Detected unsupported operating system: %s. 
Please check '\n 'the compability information of auto-sklearn: https://automl.github.io'\n '/auto-sklearn/master/installation.html#windows-osx-compatibility' %\n sys.platform\n )\n\nif sys.version_info < (3, 7):\n raise ValueError(\n 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '\n '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)\n )\n\nHERE = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(HERE, 'requirements.txt')) as fp:\n install_reqs = [r.rstrip() for r in fp.readlines()\n if not r.startswith('#') and not r.startswith('git+')]\n\nextras_reqs={\n \"test\": [\n \"pytest>=4.6\",\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n ],\n \"examples\": [\n \"matplotlib\",\n \"jupyter\",\n \"notebook\",\n \"seaborn\",\n ],\n \"docs\": [\n \"sphinx<4.3\",\n \"sphinx-gallery\",\n \"sphinx_bootstrap_theme\",\n \"numpydoc\",\n \"sphinx_toolbox\",\n \"docutils==0.16\"\n ],\n}\n\nwith open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:\n version = fh.readlines()[-1].split()[-1].strip(\"\\\"'\")\n\n\nwith open(os.path.join(HERE, 'README.md')) as fh:\n long_description = fh.read()\n\n\nsetup(\n name='auto-sklearn',\n author='Matthias Feurer',\n author_email='[email protected]',\n description='Automated machine learning.',\n long_description=long_description,\n long_description_content_type='text/markdown',\n version=version,\n packages=find_packages(exclude=['test', 'scripts', 'examples']),\n extras_require=extras_reqs,\n install_requires=install_reqs,\n include_package_data=True,\n license='BSD3',\n platforms=['Linux'],\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires='>=3.7',\n url='https://automl.github.io/auto-sklearn',\n)\n", "path": "setup.py"}]} | 1,186 | 71 |
gh_patches_debug_5141 | rasdani/github-patches | git_diff | scrapy__scrapy-2503 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scrapy view <url> raise exc in v1.3.0
````
(py35) wingyiu@mbp101:~$scrapy view http://www.scrapy.org
2017-01-19 22:13:54 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: scrapybot)
2017-01-19 22:13:54 [scrapy.utils.log] INFO: Overridden settings: {}
Traceback (most recent call last):
File "/Users/user/venv/py35/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/commands/fetch.py", line 58, in run
if not opts.no_redirect:
AttributeError: 'Values' object has no attribute 'no_redirect'
````
--- END ISSUE ---
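The traceback comes down to option parsing: `fetch.Command.run` reads `opts.no_redirect`, but the `view` subcommand registers its own, smaller option set, so the parsed `Values` object never gains that attribute. A minimal stdlib-only sketch of the same failure mode (illustrative, not Scrapy code):

```python
# Sketch: an optparse Values object only carries the options that were
# actually registered on the parser that produced it.
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--spider", dest="spider", help="use this spider")
opts, args = parser.parse_args([])

print(getattr(opts, "spider", None))   # None - registered, so the attribute exists
print(hasattr(opts, "no_redirect"))    # False - never registered on this parser
# Accessing opts.no_redirect directly raises AttributeError, matching the
# "'Values' object has no attribute 'no_redirect'" seen in the traceback.
```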
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/commands/view.py`
Content:
```
1 from scrapy.commands import fetch, ScrapyCommand
2 from scrapy.utils.response import open_in_browser
3
4 class Command(fetch.Command):
5
6 def short_desc(self):
7 return "Open URL in browser, as seen by Scrapy"
8
9 def long_desc(self):
10 return "Fetch a URL using the Scrapy downloader and show its " \
11 "contents in a browser"
12
13 def add_options(self, parser):
14 ScrapyCommand.add_options(self, parser)
15 parser.add_option("--spider", dest="spider",
16 help="use this spider")
17
18 def _print_response(self, response, opts):
19 open_in_browser(response)
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/commands/view.py b/scrapy/commands/view.py
--- a/scrapy/commands/view.py
+++ b/scrapy/commands/view.py
@@ -11,9 +11,8 @@
"contents in a browser"
def add_options(self, parser):
- ScrapyCommand.add_options(self, parser)
- parser.add_option("--spider", dest="spider",
- help="use this spider")
+ super(Command, self).add_options(parser)
+ parser.remove_option("--headers")
def _print_response(self, response, opts):
open_in_browser(response)
| {"golden_diff": "diff --git a/scrapy/commands/view.py b/scrapy/commands/view.py\n--- a/scrapy/commands/view.py\n+++ b/scrapy/commands/view.py\n@@ -11,9 +11,8 @@\n \"contents in a browser\"\n \n def add_options(self, parser):\n- ScrapyCommand.add_options(self, parser)\n- parser.add_option(\"--spider\", dest=\"spider\",\n- help=\"use this spider\")\n+ super(Command, self).add_options(parser)\n+ parser.remove_option(\"--headers\")\n \n def _print_response(self, response, opts):\n open_in_browser(response)\n", "issue": "scrapy view <url> raise exc in v1.3.0\n````\r\n(py35) wingyiu@mbp101:~$scrapy view http://www.scrapy.org\r\n2017-01-19 22:13:54 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: scrapybot)\r\n2017-01-19 22:13:54 [scrapy.utils.log] INFO: Overridden settings: {}\r\nTraceback (most recent call last):\r\n File \"/Users/user/venv/py35/bin/scrapy\", line 11, in <module>\r\n sys.exit(execute())\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 142, in execute\r\n _run_print_help(parser, _run_command, cmd, args, opts)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 88, in _run_print_help\r\n func(*a, **kw)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 149, in _run_command\r\n cmd.run(args, opts)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/commands/fetch.py\", line 58, in run\r\n if not opts.no_redirect:\r\nAttributeError: 'Values' object has no attribute 'no_redirect'\r\n````\r\n\n", "before_files": [{"content": "from scrapy.commands import fetch, ScrapyCommand\nfrom scrapy.utils.response import open_in_browser\n\nclass Command(fetch.Command):\n\n def short_desc(self):\n return \"Open URL in browser, as seen by Scrapy\"\n\n def long_desc(self):\n return \"Fetch a URL using the Scrapy downloader and show its \" \\\n \"contents in a browser\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"--spider\", dest=\"spider\",\n help=\"use this spider\")\n\n def _print_response(self, response, opts):\n open_in_browser(response)\n", "path": "scrapy/commands/view.py"}], "after_files": [{"content": "from scrapy.commands import fetch, ScrapyCommand\nfrom scrapy.utils.response import open_in_browser\n\nclass Command(fetch.Command):\n\n def short_desc(self):\n return \"Open URL in browser, as seen by Scrapy\"\n\n def long_desc(self):\n return \"Fetch a URL using the Scrapy downloader and show its \" \\\n \"contents in a browser\"\n\n def add_options(self, parser):\n super(Command, self).add_options(parser)\n parser.remove_option(\"--headers\")\n\n def _print_response(self, response, opts):\n open_in_browser(response)\n", "path": "scrapy/commands/view.py"}]} | 775 | 134 |
gh_patches_debug_15924 | rasdani/github-patches | git_diff | Kinto__kinto-119 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Using the _since querystring filter has no effect
I've tried using the `_since` querystring filter as explained in the tutorial, but it seems to have no effect.
`GET`ing any of those urls returns the exact same list (the full list of records)
```
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436094288171 -v --auth 'user:password'
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=foobar -v --auth 'user:password'
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=`date +%s` -v --auth 'user:password'
```
The last one uses the current timestamp as the value, which means it should return an empty list.
--- END ISSUE ---
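One way to read the symptom: the default-bucket view shown below reaches the records endpoint through an internal subrequest, and if that subrequest is built from `request.path` alone, everything after the `?` never reaches the storage filter. A small illustrative sketch of what gets lost (plain Python, hypothetical values):

```python
# Sketch: recovering the querystring from a full URL given only the path part.
url = "http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436094288171"
path = "/v1/buckets/default/collections/tasks/records"

querystring = url[url.index(path) + len(path):]
print(querystring)          # ?_since=1436094288171
print(path + querystring)   # a target path that preserves the filter
```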
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/buckets.py`
Content:
```
1 from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
2 from pyramid.security import NO_PERMISSION_REQUIRED
3 from pyramid.view import view_config
4
5 from cliquet import resource
6 from cliquet.utils import hmac_digest, build_request
7
8 from kinto.views import NameGenerator
9
10
11 def create_bucket(request, bucket_id):
12 """Create a bucket if it doesn't exists."""
13 bucket_put = (request.method.lower() == 'put' and
14 request.path.endswith('buckets/default'))
15
16 if not bucket_put:
17 subrequest = build_request(request, {
18 'method': 'PUT',
19 'path': '/buckets/%s' % bucket_id,
20 'body': {"data": {}},
21 'headers': {'If-None-Match': '*'.encode('utf-8')}
22 })
23
24 try:
25 request.invoke_subrequest(subrequest)
26 except HTTPPreconditionFailed:
27 # The bucket already exists
28 pass
29
30
31 def create_collection(request, bucket_id):
32 subpath = request.matchdict['subpath']
33 if subpath.startswith('/collections/'):
34 collection_id = subpath.split('/')[2]
35 collection_put = (request.method.lower() == 'put' and
36 request.path.endswith(collection_id))
37 if not collection_put:
38 subrequest = build_request(request, {
39 'method': 'PUT',
40 'path': '/buckets/%s/collections/%s' % (
41 bucket_id, collection_id),
42 'body': {"data": {}},
43 'headers': {'If-None-Match': '*'.encode('utf-8')}
44 })
45 try:
46 request.invoke_subrequest(subrequest)
47 except HTTPPreconditionFailed:
48 # The collection already exists
49 pass
50
51
52 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
53 def default_bucket(request):
54 if getattr(request, 'prefixed_userid', None) is None:
55 raise HTTPForbidden # Pass through the forbidden_view_config
56
57 settings = request.registry.settings
58 hmac_secret = settings['cliquet.userid_hmac_secret']
59 # Build the user unguessable bucket_id UUID from its user_id
60 bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
61 path = request.path.replace('default', bucket_id)
62
63 # Make sure bucket exists
64 create_bucket(request, bucket_id)
65
66 # Make sure the collection exists
67 create_collection(request, bucket_id)
68
69 subrequest = build_request(request, {
70 'method': request.method,
71 'path': path,
72 'body': request.body
73 })
74
75 return request.invoke_subrequest(subrequest)
76
77
78 @resource.register(name='bucket',
79 collection_methods=('GET',),
80 collection_path='/buckets',
81 record_path='/buckets/{{id}}')
82 class Bucket(resource.ProtectedResource):
83 permissions = ('read', 'write', 'collection:create', 'group:create')
84
85 def __init__(self, *args, **kwargs):
86 super(Bucket, self).__init__(*args, **kwargs)
87 self.collection.id_generator = NameGenerator()
88
89 def get_parent_id(self, request):
90 # Buckets are not isolated by user, unlike Cliquet resources.
91 return ''
92
93 def delete(self):
94 result = super(Bucket, self).delete()
95
96 # Delete groups.
97 storage = self.collection.storage
98 parent_id = '/buckets/%s' % self.record_id
99 storage.delete_all(collection_id='group', parent_id=parent_id)
100
101 # Delete collections.
102 deleted = storage.delete_all(collection_id='collection',
103 parent_id=parent_id)
104
105 # Delete records.
106 id_field = self.collection.id_field
107 for collection in deleted:
108 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
109 collection[id_field])
110 storage.delete_all(collection_id='record', parent_id=parent_id)
111
112 return result
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -59,6 +59,8 @@
# Build the user unguessable bucket_id UUID from its user_id
bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
path = request.path.replace('default', bucket_id)
+ querystring = request.url[(request.url.index(request.path) +
+ len(request.path)):]
# Make sure bucket exists
create_bucket(request, bucket_id)
@@ -68,7 +70,7 @@
subrequest = build_request(request, {
'method': request.method,
- 'path': path,
+ 'path': path + querystring,
'body': request.body
})
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -59,6 +59,8 @@\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n+ querystring = request.url[(request.url.index(request.path) +\n+ len(request.path)):]\n \n # Make sure bucket exists\n create_bucket(request, bucket_id)\n@@ -68,7 +70,7 @@\n \n subrequest = build_request(request, {\n 'method': request.method,\n- 'path': path,\n+ 'path': path + querystring,\n 'body': request.body\n })\n", "issue": "Using the _since querystring filter has no effect\nI've tried using the `_since` querystring filter as explained in the tutorial, but it seems to have no effect.\n\n`GET`ing any of those urls returns the exact same list (the full list of records)\n\n```\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436094288171 -v --auth 'user:password'\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=foobar -v --auth 'user:password'\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=`date +%s` -v --auth 'user:password'\n```\n\nThe last one uses the current timestamp as the value, which means it should return an empty list.\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = 
build_request(request, {\n 'method': request.method,\n 'path': path,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}], "after_files": [{"content": "from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 
'path': path + querystring,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]} | 1,528 | 187 |
gh_patches_debug_49039 | rasdani/github-patches | git_diff | facebookresearch__hydra-279 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Documentation inconsistency for `utils.get_original_cwd`
# 🐛 Bug
The tutorial for working directories has a few commands for setting the working directory [see here](https://cli.dev/docs/tutorial/working_directory), but the version of hydra on pip does not have these functions. Additionally, the install instructions do not include instructions on how to install from source (even if that's fairly trivial). The simple solution is to update the wheels on pip. Another alternative would be to put on the installation page that hydra is rapidly developing and suggest that one can install from source directly.
## System information
- 0.10.0 from pip
- python 3.7
- arch linux
## One more thing...
This is very minor, but the pip version is `0.10.0` and the GitHub master version is also `0.10.0`, yet they are not the same, as evidenced by this issue. You should probably bump the version on git master. Keep up the good work, I think this is a great idea.
--- END ISSUE ---
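A quick way to see which build is actually importable, and hence whether the docs match the installed package (assumes Hydra is installed in the active environment):

```python
# Sketch: a released wheel and a source checkout can report the same version
# string while shipping different code, so check both the version and the path.
import hydra

print(hydra.__version__)   # e.g. "0.10.0" from PyPI
print(hydra.__file__)      # site-packages path vs. a local clone
```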
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from . import utils
3 from .errors import MissingConfigException
4 from .main import main
5
6 # Source of truth for Hydra's version
7 __version__ = "0.10.0"
8
9 __all__ = ["__version__", "MissingConfigException", "main", "utils"]
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/__init__.py b/hydra/__init__.py
--- a/hydra/__init__.py
+++ b/hydra/__init__.py
@@ -4,6 +4,6 @@
from .main import main
# Source of truth for Hydra's version
-__version__ = "0.10.0"
+__version__ = "0.11.0-pre1"
__all__ = ["__version__", "MissingConfigException", "main", "utils"]
| {"golden_diff": "diff --git a/hydra/__init__.py b/hydra/__init__.py\n--- a/hydra/__init__.py\n+++ b/hydra/__init__.py\n@@ -4,6 +4,6 @@\n from .main import main\n \n # Source of truth for Hydra's version\n-__version__ = \"0.10.0\"\n+__version__ = \"0.11.0-pre1\"\n \n __all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "issue": "[Bug] Documentation inconsistency for `utils.get_original_cwd`\n# \ud83d\udc1b Bug\r\n\r\nThe tutorial for working directories has a few commands for setting the working directory [see here](https://cli.dev/docs/tutorial/working_directory), but the version of hydra on pip does not have these functions. Additionally, the install instructions do not include instructions on how to install from source (even if that's fairly trivial). The simple solution is to update the wheels on pip. Another alternative would be to put on the installation page that hydra is rapidly developing and suggest that one can install from source directly.\r\n\r\n## System information\r\n- 0.10.0 from pip\r\n- python 3.7\r\n- arch linux\r\n\r\n## One more thing...\r\nThis is very minor but the pip version is `0.10.0` and the github master version is also `0.10.0`, but they not the same as evidenced by this issue. You should probably bump the version of git master. Keep up the good work, I think this is a great idea.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom . import utils\nfrom .errors import MissingConfigException\nfrom .main import main\n\n# Source of truth for Hydra's version\n__version__ = \"0.10.0\"\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "path": "hydra/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom . import utils\nfrom .errors import MissingConfigException\nfrom .main import main\n\n# Source of truth for Hydra's version\n__version__ = \"0.11.0-pre1\"\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "path": "hydra/__init__.py"}]} | 574 | 114 |
gh_patches_debug_34543 | rasdani/github-patches | git_diff | UTNkar__moore-154 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Menu Translations
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
### Description
Not all menu pages are using `translated_title` when being added to the menu.
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
--- END ISSUE ---
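The long `hasattr` chain in the template tag below hints at the underlying issue: `parent.get_children()` yields base `Page` rows, and fields like `translated_title` live only on the specific subclass, so any page type missing from the chain silently falls back to the untranslated title. A hedged sketch of resolving this generically (assumes Wagtail's `Page.specific` and a `translated_title` field on the subclasses):

```python
# Sketch: resolve each menu entry to its specific page model, then read the
# translated title with a safe fallback to the plain title.
def menu_entries(parent):
    items = [page.specific for page in parent.get_children().live().in_menu()]
    return [
        (getattr(item, "translated_title", None) or item.title, item.url)
        for item in items
    ]
```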
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/website/templatetags/site_tags.py`
Content:
```
1 from django import template
2
3 register = template.Library()
4
5
6 @register.simple_tag(takes_context=True)
7 def get_site_root(context):
8 # NB this returns a core.Page, not the implementation-specific model used
9 # so object-comparison to self will return false as objects would differ
10 return context['request'].site.root_page
11
12
13 def has_menu_children(page):
14 return page.get_children().live().in_menu().exists()
15
16
17 # Retrieves the top menu items - the immediate children of the parent page
18 # The has_menu_children method is necessary because the bootstrap menu requires
19 # a dropdown class to be applied to a parent
20 @register.inclusion_tag('tags/menu.html', takes_context=True)
21 def menu_items(context, parent, calling_page=None, sidenav=False):
22 menuitems = parent.get_children().live().in_menu()
23 for menuitem in menuitems:
24 menuitem.show_dropdown = has_menu_children(menuitem)
25 # TODO: There has to be a better alternative!
26 if hasattr(menuitem, 'googleformindex'):
27 menuitem.translated_title = menuitem.googleformindex\
28 .translated_title
29 elif hasattr(menuitem, 'googleformpage'):
30 menuitem.translated_title = menuitem.googleformpage\
31 .translated_title
32 elif hasattr(menuitem, 'homepage'):
33 menuitem.translated_title = menuitem.homepage.translated_title
34 elif hasattr(menuitem, 'recruitmentpage'):
35 menuitem.translated_title = menuitem.recruitmentpage\
36 .translated_title
37 elif hasattr(menuitem, 'newsindexpage'):
38 menuitem.translated_title = menuitem.newsindexpage.translated_title
39 elif hasattr(menuitem, 'newspage'):
40 menuitem.translated_title = menuitem.newspage.translated_title
41 elif hasattr(menuitem, 'webpage'):
42 menuitem.translated_title = menuitem.webpage.translated_title
43 # We don't directly check if calling_page is None since the template
44 # engine can pass an empty string to calling_page
45 # if the variable passed as calling_page does not exist.
46 menuitem.active = (calling_page.url.startswith(menuitem.url)
47 if calling_page else False)
48 return {
49 'calling_page': calling_page,
50 'menuitems': menuitems,
51 'sidenav': sidenav,
52 # required by the pageurl tag that we want to use within this template
53 'request': context['request'],
54 }
55
56
57 # Retrieves the children of the top menu items for the drop downs
58 @register.inclusion_tag('tags/menu_children.html', takes_context=True)
59 def menu_children(context, parent, sidenav=False):
60 children = parent.get_children()
61 children = children.live().in_menu()
62 return {
63 'parent': parent,
64 'children': children,
65 'sidenav': sidenav,
66 # required by the pageurl tag that we want to use within this template
67 'request': context['request'],
68 }
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/website/templatetags/site_tags.py b/website/website/templatetags/site_tags.py
--- a/website/website/templatetags/site_tags.py
+++ b/website/website/templatetags/site_tags.py
@@ -20,26 +20,9 @@
@register.inclusion_tag('tags/menu.html', takes_context=True)
def menu_items(context, parent, calling_page=None, sidenav=False):
menuitems = parent.get_children().live().in_menu()
+ menuitems = [m.specific for m in menuitems]
for menuitem in menuitems:
menuitem.show_dropdown = has_menu_children(menuitem)
- # TODO: There has to be a better alternative!
- if hasattr(menuitem, 'googleformindex'):
- menuitem.translated_title = menuitem.googleformindex\
- .translated_title
- elif hasattr(menuitem, 'googleformpage'):
- menuitem.translated_title = menuitem.googleformpage\
- .translated_title
- elif hasattr(menuitem, 'homepage'):
- menuitem.translated_title = menuitem.homepage.translated_title
- elif hasattr(menuitem, 'recruitmentpage'):
- menuitem.translated_title = menuitem.recruitmentpage\
- .translated_title
- elif hasattr(menuitem, 'newsindexpage'):
- menuitem.translated_title = menuitem.newsindexpage.translated_title
- elif hasattr(menuitem, 'newspage'):
- menuitem.translated_title = menuitem.newspage.translated_title
- elif hasattr(menuitem, 'webpage'):
- menuitem.translated_title = menuitem.webpage.translated_title
# We don't directly check if calling_page is None since the template
# engine can pass an empty string to calling_page
# if the variable passed as calling_page does not exist.
@@ -57,8 +40,8 @@
# Retrieves the children of the top menu items for the drop downs
@register.inclusion_tag('tags/menu_children.html', takes_context=True)
def menu_children(context, parent, sidenav=False):
- children = parent.get_children()
- children = children.live().in_menu()
+ children = parent.get_children().live().in_menu()
+ children = [c.specific for c in children]
return {
'parent': parent,
'children': children,
| {"golden_diff": "diff --git a/website/website/templatetags/site_tags.py b/website/website/templatetags/site_tags.py\n--- a/website/website/templatetags/site_tags.py\n+++ b/website/website/templatetags/site_tags.py\n@@ -20,26 +20,9 @@\n @register.inclusion_tag('tags/menu.html', takes_context=True)\n def menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n+ menuitems = [m.specific for m in menuitems]\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n- # TODO: There has to be a better alternative!\n- if hasattr(menuitem, 'googleformindex'):\n- menuitem.translated_title = menuitem.googleformindex\\\n- .translated_title\n- elif hasattr(menuitem, 'googleformpage'):\n- menuitem.translated_title = menuitem.googleformpage\\\n- .translated_title\n- elif hasattr(menuitem, 'homepage'):\n- menuitem.translated_title = menuitem.homepage.translated_title\n- elif hasattr(menuitem, 'recruitmentpage'):\n- menuitem.translated_title = menuitem.recruitmentpage\\\n- .translated_title\n- elif hasattr(menuitem, 'newsindexpage'):\n- menuitem.translated_title = menuitem.newsindexpage.translated_title\n- elif hasattr(menuitem, 'newspage'):\n- menuitem.translated_title = menuitem.newspage.translated_title\n- elif hasattr(menuitem, 'webpage'):\n- menuitem.translated_title = menuitem.webpage.translated_title\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n@@ -57,8 +40,8 @@\n # Retrieves the children of the top menu items for the drop downs\n @register.inclusion_tag('tags/menu_children.html', takes_context=True)\n def menu_children(context, parent, sidenav=False):\n- children = parent.get_children()\n- children = children.live().in_menu()\n+ children = parent.get_children().live().in_menu()\n+ children = [c.specific for c in children]\n return {\n 'parent': parent,\n 'children': children,\n", "issue": "Menu Translations\n<!-- Do you want to ask a question? Are you looking for support? 
The system administrator can help you: [email protected] -->\r\n\r\n### Description\r\n\r\nNot all menu pages are using `translated_title` when being added to the menu.\r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\r\n\n", "before_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef get_site_root(context):\n # NB this returns a core.Page, not the implementation-specific model used\n # so object-comparison to self will return false as objects would differ\n return context['request'].site.root_page\n\n\ndef has_menu_children(page):\n return page.get_children().live().in_menu().exists()\n\n\n# Retrieves the top menu items - the immediate children of the parent page\n# The has_menu_children method is necessary because the bootstrap menu requires\n# a dropdown class to be applied to a parent\[email protected]_tag('tags/menu.html', takes_context=True)\ndef menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n # TODO: There has to be a better alternative!\n if hasattr(menuitem, 'googleformindex'):\n menuitem.translated_title = menuitem.googleformindex\\\n .translated_title\n elif hasattr(menuitem, 'googleformpage'):\n menuitem.translated_title = menuitem.googleformpage\\\n .translated_title\n elif hasattr(menuitem, 'homepage'):\n menuitem.translated_title = menuitem.homepage.translated_title\n elif hasattr(menuitem, 'recruitmentpage'):\n menuitem.translated_title = menuitem.recruitmentpage\\\n .translated_title\n elif hasattr(menuitem, 'newsindexpage'):\n menuitem.translated_title = menuitem.newsindexpage.translated_title\n elif hasattr(menuitem, 'newspage'):\n menuitem.translated_title = menuitem.newspage.translated_title\n elif hasattr(menuitem, 'webpage'):\n menuitem.translated_title = menuitem.webpage.translated_title\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n menuitem.active = (calling_page.url.startswith(menuitem.url)\n if calling_page else False)\n return {\n 'calling_page': calling_page,\n 'menuitems': menuitems,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n\n\n# Retrieves the children of the top menu items for the drop downs\[email protected]_tag('tags/menu_children.html', takes_context=True)\ndef menu_children(context, parent, sidenav=False):\n children = parent.get_children()\n children = children.live().in_menu()\n return {\n 'parent': parent,\n 'children': children,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n", "path": "website/website/templatetags/site_tags.py"}], "after_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef get_site_root(context):\n # NB this returns a core.Page, not the implementation-specific model used\n # so object-comparison to self will return false as objects would differ\n return context['request'].site.root_page\n\n\ndef has_menu_children(page):\n return page.get_children().live().in_menu().exists()\n\n\n# Retrieves the top menu items - the immediate children of the parent page\n# The 
has_menu_children method is necessary because the bootstrap menu requires\n# a dropdown class to be applied to a parent\[email protected]_tag('tags/menu.html', takes_context=True)\ndef menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n menuitems = [m.specific for m in menuitems]\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n menuitem.active = (calling_page.url.startswith(menuitem.url)\n if calling_page else False)\n return {\n 'calling_page': calling_page,\n 'menuitems': menuitems,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n\n\n# Retrieves the children of the top menu items for the drop downs\[email protected]_tag('tags/menu_children.html', takes_context=True)\ndef menu_children(context, parent, sidenav=False):\n children = parent.get_children().live().in_menu()\n children = [c.specific for c in children]\n return {\n 'parent': parent,\n 'children': children,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n", "path": "website/website/templatetags/site_tags.py"}]} | 1,087 | 522 |
gh_patches_debug_8815 | rasdani/github-patches | git_diff | CTFd__CTFd-2458 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upload to S3 Failing
- CTFd Version/Commit: 3.6.1
- Operating System: Linux (Docker container)
- Web Browser and Version: Chrome
**What happened?**
Upgrading CTFd resulted in S3 file uploads beginning to return 400 (bad request) status codes. I see one of the fixes for 3.6.1 was for S3, so perhaps a new bug was introduced.
Here are some additional facts which may be helpful:
 - The files are successfully making their way into S3, despite the error
- The timezone I have configured for my server is CST
I can also confirm that my deployment had working file upload before the upgrade to version 3.6.1 (file upload was working in 3.6.0).
**What did you expect to happen?**
File upload to continue working.
**How to reproduce your issue**
Deploy CTFd free version using version 3.6.1 with S3 file upload configured.
**Any associated stack traces or error logs**
The browser request returns error (400 status code):
```
{
"success": false,
"errors": {
"location": [
"I/O operation on closed file."
]
}
}
```
The backend error is:
```
[ERROR] Error handling request
Traceback (most recent call last):
File "/opt/venv/lib/python3.9/site-packages/gunicorn/workers/base_async.py", line 113, in handle_request
resp.write_file(respiter)
File "/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py", line 385, in write_file
if not self.sendfile(respiter):
File "/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py", line 375, in sendfile
self.sock.sendfile(respiter.filelike, count=nbytes)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 486, in sendfile
return self._sendfile_use_send(file, offset, count)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 416, in _sendfile_use_send
self._check_sendfile_params(file, offset, count)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 461, in _check_sendfile_params
raise ValueError(
ValueError: count must be a positive integer (got 0)
```
--- END ISSUE ---
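The `"I/O operation on closed file."` message suggests an ordering problem: an S3 upload via boto3 can leave the uploaded file object closed, so anything that still needs to read the stream (for example a checksum) has to run first. A minimal hedged sketch of that ordering (helper names are hypothetical):

```python
# Sketch: hash a binary stream before handing it to an uploader that may close it.
import hashlib

def hash_then_upload(fp, upload):
    sha1 = hashlib.sha1()  # nosec - integrity check only
    for chunk in iter(lambda: fp.read(1024), b""):
        sha1.update(chunk)
    fp.seek(0)             # rewind so the uploader sees the whole file
    upload(fp)             # the uploader is free to close fp after this point
    return sha1.hexdigest()
```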
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/uploads/__init__.py`
Content:
```
1 import hashlib
2 import shutil
3 from pathlib import Path
4
5 from CTFd.models import ChallengeFiles, Files, PageFiles, db
6 from CTFd.utils import get_app_config
7 from CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader
8
9 UPLOADERS = {"filesystem": FilesystemUploader, "s3": S3Uploader}
10
11
12 def get_uploader():
13 return UPLOADERS.get(get_app_config("UPLOAD_PROVIDER") or "filesystem")()
14
15
16 def upload_file(*args, **kwargs):
17 file_obj = kwargs.get("file")
18 challenge_id = kwargs.get("challenge_id") or kwargs.get("challenge")
19 page_id = kwargs.get("page_id") or kwargs.get("page")
20 file_type = kwargs.get("type", "standard")
21 location = kwargs.get("location")
22
23 # Validate location and default filename to uploaded file's name
24 parent = None
25 filename = file_obj.filename
26 if location:
27 path = Path(location)
28 if len(path.parts) != 2:
29 raise ValueError(
30 "Location must contain two parts, a directory and a filename"
31 )
32 # Allow location to override the directory and filename
33 parent = path.parts[0]
34 filename = path.parts[1]
35 location = parent + "/" + filename
36
37 model_args = {"type": file_type, "location": location}
38
39 model = Files
40 if file_type == "challenge":
41 model = ChallengeFiles
42 model_args["challenge_id"] = challenge_id
43 if file_type == "page":
44 model = PageFiles
45 model_args["page_id"] = page_id
46
47 uploader = get_uploader()
48 location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)
49
50 sha1sum = hash_file(fp=file_obj)
51
52 model_args["location"] = location
53 model_args["sha1sum"] = sha1sum
54
55 existing_file = Files.query.filter_by(location=location).first()
56 if existing_file:
57 for k, v in model_args.items():
58 setattr(existing_file, k, v)
59 db.session.commit()
60 file_row = existing_file
61 else:
62 file_row = model(**model_args)
63 db.session.add(file_row)
64 db.session.commit()
65 return file_row
66
67
68 def hash_file(fp, algo="sha1"):
69 fp.seek(0)
70 if algo == "sha1":
71 h = hashlib.sha1() # nosec
72 # https://stackoverflow.com/a/64730457
73 while chunk := fp.read(1024):
74 h.update(chunk)
75 fp.seek(0)
76 return h.hexdigest()
77 else:
78 raise NotImplementedError
79
80
81 def delete_file(file_id):
82 f = Files.query.filter_by(id=file_id).first_or_404()
83
84 uploader = get_uploader()
85 uploader.delete(filename=f.location)
86
87 db.session.delete(f)
88 db.session.commit()
89 return True
90
91
92 def rmdir(directory):
93 shutil.rmtree(directory, ignore_errors=True)
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/uploads/__init__.py b/CTFd/utils/uploads/__init__.py
--- a/CTFd/utils/uploads/__init__.py
+++ b/CTFd/utils/uploads/__init__.py
@@ -44,11 +44,12 @@
model = PageFiles
model_args["page_id"] = page_id
+ # Hash is calculated before upload since S3 file upload closes file object
+ sha1sum = hash_file(fp=file_obj)
+
uploader = get_uploader()
location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)
- sha1sum = hash_file(fp=file_obj)
-
model_args["location"] = location
model_args["sha1sum"] = sha1sum
| {"golden_diff": "diff --git a/CTFd/utils/uploads/__init__.py b/CTFd/utils/uploads/__init__.py\n--- a/CTFd/utils/uploads/__init__.py\n+++ b/CTFd/utils/uploads/__init__.py\n@@ -44,11 +44,12 @@\n model = PageFiles\n model_args[\"page_id\"] = page_id\n \n+ # Hash is calculated before upload since S3 file upload closes file object\n+ sha1sum = hash_file(fp=file_obj)\n+\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n \n- sha1sum = hash_file(fp=file_obj)\n-\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n", "issue": "Upload to S3 Failing\n- CTFd Version/Commit: 3.6.1\r\n- Operating System: Linux (Docker container)\r\n- Web Browser and Version: Chrome\r\n\r\n**What happened?**\r\n\r\nUpgrading CTFd resulting in S3 file uploads beginning to return 400 (bad request) status codes. I see one of the fixes for 3.6.1 was for S3, so perhaps a new bug was introduced.\r\n\r\nHere are some additional facts which may be helpful:\r\n\r\n - The files are successfully making there way into S3, despite the error\r\n - The timezone I have configured for my server is CST\r\n\r\nI can also confirm that my deployment had working file upload before upgrade to version 3.6.1 (file upload was working for 3.6.0).\r\n\r\n**What did you expect to happen?**\r\n\r\nFile upload to continue working.\r\n\r\n**How to reproduce your issue**\r\n\r\nDeploy CTFd free version using version 3.6.1 with S3 file upload configured.\r\n\r\n**Any associated stack traces or error logs**\r\n\r\nThe browser request returns error (400 status code):\r\n\r\n```\r\n{\r\n \"success\": false,\r\n \"errors\": {\r\n \"location\": [\r\n \"I/O operation on closed file.\"\r\n ]\r\n }\r\n}\r\n```\r\n\r\nThe backend error is:\r\n\r\n```\r\n[ERROR] Error handling request\r\nTraceback (most recent call last):\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/workers/base_async.py\", line 113, in handle_request\r\nresp.write_file(respiter)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py\", line 385, in write_file\r\nif not self.sendfile(respiter):\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py\", line 375, in sendfile\r\nself.sock.sendfile(respiter.filelike, count=nbytes)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 486, in sendfile\r\nreturn self._sendfile_use_send(file, offset, count)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 416, in _sendfile_use_send\r\nself._check_sendfile_params(file, offset, count)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 461, in _check_sendfile_params\r\nraise ValueError(\r\nValueError: count must be a positive integer (got 0)\r\n```\n", "before_files": [{"content": "import hashlib\nimport shutil\nfrom pathlib import Path\n\nfrom CTFd.models import ChallengeFiles, Files, PageFiles, db\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader\n\nUPLOADERS = {\"filesystem\": FilesystemUploader, \"s3\": S3Uploader}\n\n\ndef get_uploader():\n return UPLOADERS.get(get_app_config(\"UPLOAD_PROVIDER\") or \"filesystem\")()\n\n\ndef upload_file(*args, **kwargs):\n file_obj = kwargs.get(\"file\")\n challenge_id = kwargs.get(\"challenge_id\") or kwargs.get(\"challenge\")\n page_id = kwargs.get(\"page_id\") or kwargs.get(\"page\")\n file_type = kwargs.get(\"type\", \"standard\")\n location = kwargs.get(\"location\")\n\n # Validate location and default 
filename to uploaded file's name\n parent = None\n filename = file_obj.filename\n if location:\n path = Path(location)\n if len(path.parts) != 2:\n raise ValueError(\n \"Location must contain two parts, a directory and a filename\"\n )\n # Allow location to override the directory and filename\n parent = path.parts[0]\n filename = path.parts[1]\n location = parent + \"/\" + filename\n\n model_args = {\"type\": file_type, \"location\": location}\n\n model = Files\n if file_type == \"challenge\":\n model = ChallengeFiles\n model_args[\"challenge_id\"] = challenge_id\n if file_type == \"page\":\n model = PageFiles\n model_args[\"page_id\"] = page_id\n\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n\n sha1sum = hash_file(fp=file_obj)\n\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n\n existing_file = Files.query.filter_by(location=location).first()\n if existing_file:\n for k, v in model_args.items():\n setattr(existing_file, k, v)\n db.session.commit()\n file_row = existing_file\n else:\n file_row = model(**model_args)\n db.session.add(file_row)\n db.session.commit()\n return file_row\n\n\ndef hash_file(fp, algo=\"sha1\"):\n fp.seek(0)\n if algo == \"sha1\":\n h = hashlib.sha1() # nosec\n # https://stackoverflow.com/a/64730457\n while chunk := fp.read(1024):\n h.update(chunk)\n fp.seek(0)\n return h.hexdigest()\n else:\n raise NotImplementedError\n\n\ndef delete_file(file_id):\n f = Files.query.filter_by(id=file_id).first_or_404()\n\n uploader = get_uploader()\n uploader.delete(filename=f.location)\n\n db.session.delete(f)\n db.session.commit()\n return True\n\n\ndef rmdir(directory):\n shutil.rmtree(directory, ignore_errors=True)\n", "path": "CTFd/utils/uploads/__init__.py"}], "after_files": [{"content": "import hashlib\nimport shutil\nfrom pathlib import Path\n\nfrom CTFd.models import ChallengeFiles, Files, PageFiles, db\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader\n\nUPLOADERS = {\"filesystem\": FilesystemUploader, \"s3\": S3Uploader}\n\n\ndef get_uploader():\n return UPLOADERS.get(get_app_config(\"UPLOAD_PROVIDER\") or \"filesystem\")()\n\n\ndef upload_file(*args, **kwargs):\n file_obj = kwargs.get(\"file\")\n challenge_id = kwargs.get(\"challenge_id\") or kwargs.get(\"challenge\")\n page_id = kwargs.get(\"page_id\") or kwargs.get(\"page\")\n file_type = kwargs.get(\"type\", \"standard\")\n location = kwargs.get(\"location\")\n\n # Validate location and default filename to uploaded file's name\n parent = None\n filename = file_obj.filename\n if location:\n path = Path(location)\n if len(path.parts) != 2:\n raise ValueError(\n \"Location must contain two parts, a directory and a filename\"\n )\n # Allow location to override the directory and filename\n parent = path.parts[0]\n filename = path.parts[1]\n location = parent + \"/\" + filename\n\n model_args = {\"type\": file_type, \"location\": location}\n\n model = Files\n if file_type == \"challenge\":\n model = ChallengeFiles\n model_args[\"challenge_id\"] = challenge_id\n if file_type == \"page\":\n model = PageFiles\n model_args[\"page_id\"] = page_id\n\n # Hash is calculated before upload since S3 file upload closes file object\n sha1sum = hash_file(fp=file_obj)\n\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n\n existing_file = 
Files.query.filter_by(location=location).first()\n if existing_file:\n for k, v in model_args.items():\n setattr(existing_file, k, v)\n db.session.commit()\n file_row = existing_file\n else:\n file_row = model(**model_args)\n db.session.add(file_row)\n db.session.commit()\n return file_row\n\n\ndef hash_file(fp, algo=\"sha1\"):\n fp.seek(0)\n if algo == \"sha1\":\n h = hashlib.sha1() # nosec\n # https://stackoverflow.com/a/64730457\n while chunk := fp.read(1024):\n h.update(chunk)\n fp.seek(0)\n return h.hexdigest()\n else:\n raise NotImplementedError\n\n\ndef delete_file(file_id):\n f = Files.query.filter_by(id=file_id).first_or_404()\n\n uploader = get_uploader()\n uploader.delete(filename=f.location)\n\n db.session.delete(f)\n db.session.commit()\n return True\n\n\ndef rmdir(directory):\n shutil.rmtree(directory, ignore_errors=True)\n", "path": "CTFd/utils/uploads/__init__.py"}]} | 1,645 | 170 |
gh_patches_debug_29333 | rasdani/github-patches | git_diff | pex-tool__pex-322 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove pkg_resources.build_zipmanifest monkeypatching
This may involve increasing the minimum setuptools version. Another alternative is vendoring setuptools.
--- END ISSUE ---
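For context, the `monkeypatch_build_zipmanifest` helper shown below already guards itself with `hasattr(pkg_resources, 'build_zipmanifest')`, so on any setuptools release where that attribute has been removed the patch is a no-op. A quick environment check along these lines (illustrative only) shows whether the patch still does anything:

```python
# Sketch: if the attribute is gone, the memoization patch never applies.
import pkg_resources

if hasattr(pkg_resources, "build_zipmanifest"):
    print("old setuptools: the monkeypatch would still wrap build_zipmanifest")
else:
    print("build_zipmanifest is gone: the patch is dead code and can be removed")
```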
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.1.15'
5
6 SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'
7 WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'
8
```
Path: `pex/pex_bootstrapper.py`
Content:
```
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import contextlib
5 import os
6 import sys
7 import zipfile
8
9 __all__ = ('bootstrap_pex',)
10
11
12 def pex_info_name(entry_point):
13 """Return the PEX-INFO for an entry_point"""
14 return os.path.join(entry_point, 'PEX-INFO')
15
16
17 def is_compressed(entry_point):
18 return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))
19
20
21 def read_pexinfo_from_directory(entry_point):
22 with open(pex_info_name(entry_point), 'rb') as fp:
23 return fp.read()
24
25
26 def read_pexinfo_from_zip(entry_point):
27 with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:
28 return zf.read('PEX-INFO')
29
30
31 def read_pex_info_content(entry_point):
32 """Return the raw content of a PEX-INFO."""
33 if is_compressed(entry_point):
34 return read_pexinfo_from_zip(entry_point)
35 else:
36 return read_pexinfo_from_directory(entry_point)
37
38
39 def get_pex_info(entry_point):
40 """Return the PexInfo object for an entry point."""
41 from . import pex_info
42
43 pex_info_content = read_pex_info_content(entry_point)
44 if pex_info_content:
45 return pex_info.PexInfo.from_json(pex_info_content)
46 raise ValueError('Invalid entry_point: %s' % entry_point)
47
48
49 # TODO(wickman) Remove once resolved (#91):
50 # https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be
51 def monkeypatch_build_zipmanifest():
52 import pkg_resources
53 if not hasattr(pkg_resources, 'build_zipmanifest'):
54 return
55 old_build_zipmanifest = pkg_resources.build_zipmanifest
56 def memoized_build_zipmanifest(archive, memo={}):
57 if archive not in memo:
58 memo[archive] = old_build_zipmanifest(archive)
59 return memo[archive]
60 pkg_resources.build_zipmanifest = memoized_build_zipmanifest
61
62
63 def find_in_path(target_interpreter):
64 if os.path.exists(target_interpreter):
65 return target_interpreter
66
67 for directory in os.getenv('PATH', '').split(os.pathsep):
68 try_path = os.path.join(directory, target_interpreter)
69 if os.path.exists(try_path):
70 return try_path
71
72
73 def maybe_reexec_pex():
74 from .variables import ENV
75 if not ENV.PEX_PYTHON:
76 return
77
78 from .common import die
79 from .tracer import TRACER
80
81 target_python = ENV.PEX_PYTHON
82 target = find_in_path(target_python)
83 if not target:
84 die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)
85 if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):
86 TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)
87 ENV.delete('PEX_PYTHON')
88 os.execve(target, [target_python] + sys.argv, ENV.copy())
89
90
91 def bootstrap_pex(entry_point):
92 from .finders import register_finders
93 monkeypatch_build_zipmanifest()
94 register_finders()
95 maybe_reexec_pex()
96
97 from . import pex
98 pex.PEX(entry_point).execute()
99
100
101 def bootstrap_pex_env(entry_point):
102 """Bootstrap the current runtime environment using a given pex."""
103 from .environment import PEXEnvironment
104 from .finders import register_finders
105 from .pex_info import PexInfo
106
107 monkeypatch_build_zipmanifest()
108 register_finders()
109
110 PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/pex_bootstrapper.py b/pex/pex_bootstrapper.py
--- a/pex/pex_bootstrapper.py
+++ b/pex/pex_bootstrapper.py
@@ -46,20 +46,6 @@
raise ValueError('Invalid entry_point: %s' % entry_point)
-# TODO(wickman) Remove once resolved (#91):
-# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be
-def monkeypatch_build_zipmanifest():
- import pkg_resources
- if not hasattr(pkg_resources, 'build_zipmanifest'):
- return
- old_build_zipmanifest = pkg_resources.build_zipmanifest
- def memoized_build_zipmanifest(archive, memo={}):
- if archive not in memo:
- memo[archive] = old_build_zipmanifest(archive)
- return memo[archive]
- pkg_resources.build_zipmanifest = memoized_build_zipmanifest
-
-
def find_in_path(target_interpreter):
if os.path.exists(target_interpreter):
return target_interpreter
@@ -90,7 +76,6 @@
def bootstrap_pex(entry_point):
from .finders import register_finders
- monkeypatch_build_zipmanifest()
register_finders()
maybe_reexec_pex()
@@ -104,7 +89,6 @@
from .finders import register_finders
from .pex_info import PexInfo
- monkeypatch_build_zipmanifest()
register_finders()
PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -3,5 +3,5 @@
__version__ = '1.1.15'
-SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'
+SETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'
WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'
| {"golden_diff": "diff --git a/pex/pex_bootstrapper.py b/pex/pex_bootstrapper.py\n--- a/pex/pex_bootstrapper.py\n+++ b/pex/pex_bootstrapper.py\n@@ -46,20 +46,6 @@\n raise ValueError('Invalid entry_point: %s' % entry_point)\n \n \n-# TODO(wickman) Remove once resolved (#91):\n-# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be\n-def monkeypatch_build_zipmanifest():\n- import pkg_resources\n- if not hasattr(pkg_resources, 'build_zipmanifest'):\n- return\n- old_build_zipmanifest = pkg_resources.build_zipmanifest\n- def memoized_build_zipmanifest(archive, memo={}):\n- if archive not in memo:\n- memo[archive] = old_build_zipmanifest(archive)\n- return memo[archive]\n- pkg_resources.build_zipmanifest = memoized_build_zipmanifest\n-\n-\n def find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n@@ -90,7 +76,6 @@\n \n def bootstrap_pex(entry_point):\n from .finders import register_finders\n- monkeypatch_build_zipmanifest()\n register_finders()\n maybe_reexec_pex()\n \n@@ -104,7 +89,6 @@\n from .finders import register_finders\n from .pex_info import PexInfo\n \n- monkeypatch_build_zipmanifest()\n register_finders()\n \n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\ndiff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -3,5 +3,5 @@\n \n __version__ = '1.1.15'\n \n-SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'\n+SETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'\n WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "issue": "Remove pkg_resources.build_zipmanifest monkeypatching\nThis may involve increasing the minimum setuptools version. Another alternative is vendoring setuptools.\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.1.15'\n\nSETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "path": "pex/version.py"}, {"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport contextlib\nimport os\nimport sys\nimport zipfile\n\n__all__ = ('bootstrap_pex',)\n\n\ndef pex_info_name(entry_point):\n \"\"\"Return the PEX-INFO for an entry_point\"\"\"\n return os.path.join(entry_point, 'PEX-INFO')\n\n\ndef is_compressed(entry_point):\n return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))\n\n\ndef read_pexinfo_from_directory(entry_point):\n with open(pex_info_name(entry_point), 'rb') as fp:\n return fp.read()\n\n\ndef read_pexinfo_from_zip(entry_point):\n with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:\n return zf.read('PEX-INFO')\n\n\ndef read_pex_info_content(entry_point):\n \"\"\"Return the raw content of a PEX-INFO.\"\"\"\n if is_compressed(entry_point):\n return read_pexinfo_from_zip(entry_point)\n else:\n return read_pexinfo_from_directory(entry_point)\n\n\ndef get_pex_info(entry_point):\n \"\"\"Return the PexInfo object for an entry point.\"\"\"\n from . 
import pex_info\n\n pex_info_content = read_pex_info_content(entry_point)\n if pex_info_content:\n return pex_info.PexInfo.from_json(pex_info_content)\n raise ValueError('Invalid entry_point: %s' % entry_point)\n\n\n# TODO(wickman) Remove once resolved (#91):\n# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be\ndef monkeypatch_build_zipmanifest():\n import pkg_resources\n if not hasattr(pkg_resources, 'build_zipmanifest'):\n return\n old_build_zipmanifest = pkg_resources.build_zipmanifest\n def memoized_build_zipmanifest(archive, memo={}):\n if archive not in memo:\n memo[archive] = old_build_zipmanifest(archive)\n return memo[archive]\n pkg_resources.build_zipmanifest = memoized_build_zipmanifest\n\n\ndef find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n\n for directory in os.getenv('PATH', '').split(os.pathsep):\n try_path = os.path.join(directory, target_interpreter)\n if os.path.exists(try_path):\n return try_path\n\n\ndef maybe_reexec_pex():\n from .variables import ENV\n if not ENV.PEX_PYTHON:\n return\n\n from .common import die\n from .tracer import TRACER\n\n target_python = ENV.PEX_PYTHON\n target = find_in_path(target_python)\n if not target:\n die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)\n if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):\n TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)\n ENV.delete('PEX_PYTHON')\n os.execve(target, [target_python] + sys.argv, ENV.copy())\n\n\ndef bootstrap_pex(entry_point):\n from .finders import register_finders\n monkeypatch_build_zipmanifest()\n register_finders()\n maybe_reexec_pex()\n\n from . import pex\n pex.PEX(entry_point).execute()\n\n\ndef bootstrap_pex_env(entry_point):\n \"\"\"Bootstrap the current runtime environment using a given pex.\"\"\"\n from .environment import PEXEnvironment\n from .finders import register_finders\n from .pex_info import PexInfo\n\n monkeypatch_build_zipmanifest()\n register_finders()\n\n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\n", "path": "pex/pex_bootstrapper.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.1.15'\n\nSETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "path": "pex/version.py"}, {"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport contextlib\nimport os\nimport sys\nimport zipfile\n\n__all__ = ('bootstrap_pex',)\n\n\ndef pex_info_name(entry_point):\n \"\"\"Return the PEX-INFO for an entry_point\"\"\"\n return os.path.join(entry_point, 'PEX-INFO')\n\n\ndef is_compressed(entry_point):\n return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))\n\n\ndef read_pexinfo_from_directory(entry_point):\n with open(pex_info_name(entry_point), 'rb') as fp:\n return fp.read()\n\n\ndef read_pexinfo_from_zip(entry_point):\n with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:\n return zf.read('PEX-INFO')\n\n\ndef read_pex_info_content(entry_point):\n \"\"\"Return the raw content of a PEX-INFO.\"\"\"\n if is_compressed(entry_point):\n return read_pexinfo_from_zip(entry_point)\n else:\n return read_pexinfo_from_directory(entry_point)\n\n\ndef get_pex_info(entry_point):\n \"\"\"Return the 
PexInfo object for an entry point.\"\"\"\n from . import pex_info\n\n pex_info_content = read_pex_info_content(entry_point)\n if pex_info_content:\n return pex_info.PexInfo.from_json(pex_info_content)\n raise ValueError('Invalid entry_point: %s' % entry_point)\n\n\ndef find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n\n for directory in os.getenv('PATH', '').split(os.pathsep):\n try_path = os.path.join(directory, target_interpreter)\n if os.path.exists(try_path):\n return try_path\n\n\ndef maybe_reexec_pex():\n from .variables import ENV\n if not ENV.PEX_PYTHON:\n return\n\n from .common import die\n from .tracer import TRACER\n\n target_python = ENV.PEX_PYTHON\n target = find_in_path(target_python)\n if not target:\n die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)\n if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):\n TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)\n ENV.delete('PEX_PYTHON')\n os.execve(target, [target_python] + sys.argv, ENV.copy())\n\n\ndef bootstrap_pex(entry_point):\n from .finders import register_finders\n register_finders()\n maybe_reexec_pex()\n\n from . import pex\n pex.PEX(entry_point).execute()\n\n\ndef bootstrap_pex_env(entry_point):\n \"\"\"Bootstrap the current runtime environment using a given pex.\"\"\"\n from .environment import PEXEnvironment\n from .finders import register_finders\n from .pex_info import PexInfo\n\n register_finders()\n\n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\n", "path": "pex/pex_bootstrapper.py"}]} | 1,458 | 476 |
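
Before the next record, a brief illustration of the pattern the pex patch deletes: `monkeypatch_build_zipmanifest` memoised an expensive call through a mutable default argument, a workaround that newer setuptools makes redundant (hence the raised `setuptools>=5.7` floor). The sketch below is illustrative only and uses a stand-in function rather than `pkg_resources`.

```python
import functools


def build_manifest(archive):
    """Stand-in for the expensive call the old shim wrapped."""
    return {"archive": archive, "entries": 0}


def memoized_build_manifest(archive, _memo={}):  # intentionally mirrors the removed shim
    if archive not in _memo:
        _memo[archive] = build_manifest(archive)
    return _memo[archive]


@functools.lru_cache(maxsize=None)
def cached_build_manifest(archive):
    """The idiomatic modern equivalent of the hand-rolled memo dict."""
    return build_manifest(archive)


if __name__ == "__main__":
    assert memoized_build_manifest("a.zip") is memoized_build_manifest("a.zip")
    assert cached_build_manifest("a.zip") is cached_build_manifest("a.zip")
```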
gh_patches_debug_18001 | rasdani/github-patches | git_diff | mozilla__telemetry-analysis-service-258 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ATMO should pre-click my single SSH key
Would save me thousands of milliseconds every time I launch a cluster ;)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `atmo/clusters/views.py`
Content:
```
1 # This Source Code Form is subject to the terms of the Mozilla Public
2 # License, v. 2.0. If a copy of the MPL was not distributed with this
3 # file, you can obtain one at http://mozilla.org/MPL/2.0/.
4 from django.contrib import messages
5 from django.contrib.auth.decorators import login_required
6 from django.shortcuts import redirect, render
7 from django.utils.safestring import mark_safe
8
9 from allauth.account.utils import user_display
10
11 from .forms import NewClusterForm
12 from .models import Cluster
13 from ..decorators import view_permission_required, delete_permission_required
14
15
16 @login_required
17 def new_cluster(request):
18 if request.user.created_sshkeys.count() == 0:
19 messages.error(
20 request,
21 mark_safe(
22 '<h4>No SSH keys associated to you.</h4>'
23 'Please upload one below to be able to launch a cluster.'
24 'This is one-time step.'
25 )
26 )
27 return redirect('keys-new')
28 initial = {
29 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
30 'size': 1,
31 }
32 form = NewClusterForm(
33 request.user,
34 initial=initial,
35 )
36 if request.method == 'POST':
37 form = NewClusterForm(
38 request.user,
39 data=request.POST,
40 files=request.FILES,
41 initial=initial,
42 )
43 if form.is_valid():
44 cluster = form.save() # this will also magically spawn the cluster for us
45 return redirect(cluster)
46 context = {
47 'form': form,
48 }
49 return render(request, 'atmo/clusters/new.html', context)
50
51
52 @login_required
53 @delete_permission_required(Cluster)
54 def terminate_cluster(request, id):
55 cluster = Cluster.objects.get(id=id)
56 if not cluster.is_active:
57 return redirect(cluster)
58
59 if request.method == 'POST':
60 cluster.deactivate()
61 return redirect(cluster)
62
63 context = {
64 'cluster': cluster,
65 }
66 return render(request, 'atmo/clusters/terminate.html', context=context)
67
68
69 @login_required
70 @view_permission_required(Cluster)
71 def detail_cluster(request, id):
72 cluster = Cluster.objects.get(id=id)
73 context = {
74 'cluster': cluster,
75 }
76 return render(request, 'atmo/clusters/detail.html', context=context)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/atmo/clusters/views.py b/atmo/clusters/views.py
--- a/atmo/clusters/views.py
+++ b/atmo/clusters/views.py
@@ -15,7 +15,13 @@
@login_required
def new_cluster(request):
- if request.user.created_sshkeys.count() == 0:
+ initial = {
+ 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
+ 'size': 1,
+ }
+ ssh_key_count = request.user.created_sshkeys.count()
+
+ if ssh_key_count == 0:
messages.error(
request,
mark_safe(
@@ -25,10 +31,10 @@
)
)
return redirect('keys-new')
- initial = {
- 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
- 'size': 1,
- }
+ elif ssh_key_count == 1:
+ # If only 1 ssh key, make it pre-selected.
+ initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']
+
form = NewClusterForm(
request.user,
initial=initial,
| {"golden_diff": "diff --git a/atmo/clusters/views.py b/atmo/clusters/views.py\n--- a/atmo/clusters/views.py\n+++ b/atmo/clusters/views.py\n@@ -15,7 +15,13 @@\n \n @login_required\n def new_cluster(request):\n- if request.user.created_sshkeys.count() == 0:\n+ initial = {\n+ 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n+ 'size': 1,\n+ }\n+ ssh_key_count = request.user.created_sshkeys.count()\n+\n+ if ssh_key_count == 0:\n messages.error(\n request,\n mark_safe(\n@@ -25,10 +31,10 @@\n )\n )\n return redirect('keys-new')\n- initial = {\n- 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n- 'size': 1,\n- }\n+ elif ssh_key_count == 1:\n+ # If only 1 ssh key, make it pre-selected.\n+ initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']\n+\n form = NewClusterForm(\n request.user,\n initial=initial,\n", "issue": "ATMO should pre-click my single SSH key\nWould save me thousands of milliseconds every time I launch a cluster ;)\n", "before_files": [{"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import redirect, render\nfrom django.utils.safestring import mark_safe\n\nfrom allauth.account.utils import user_display\n\nfrom .forms import NewClusterForm\nfrom .models import Cluster\nfrom ..decorators import view_permission_required, delete_permission_required\n\n\n@login_required\ndef new_cluster(request):\n if request.user.created_sshkeys.count() == 0:\n messages.error(\n request,\n mark_safe(\n '<h4>No SSH keys associated to you.</h4>'\n 'Please upload one below to be able to launch a cluster.'\n 'This is one-time step.'\n )\n )\n return redirect('keys-new')\n initial = {\n 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n 'size': 1,\n }\n form = NewClusterForm(\n request.user,\n initial=initial,\n )\n if request.method == 'POST':\n form = NewClusterForm(\n request.user,\n data=request.POST,\n files=request.FILES,\n initial=initial,\n )\n if form.is_valid():\n cluster = form.save() # this will also magically spawn the cluster for us\n return redirect(cluster)\n context = {\n 'form': form,\n }\n return render(request, 'atmo/clusters/new.html', context)\n\n\n@login_required\n@delete_permission_required(Cluster)\ndef terminate_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n if not cluster.is_active:\n return redirect(cluster)\n\n if request.method == 'POST':\n cluster.deactivate()\n return redirect(cluster)\n\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/terminate.html', context=context)\n\n\n@login_required\n@view_permission_required(Cluster)\ndef detail_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/detail.html', context=context)\n", "path": "atmo/clusters/views.py"}], "after_files": [{"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import redirect, render\nfrom django.utils.safestring import mark_safe\n\nfrom allauth.account.utils import user_display\n\nfrom .forms import NewClusterForm\nfrom .models import Cluster\nfrom ..decorators import view_permission_required, delete_permission_required\n\n\n@login_required\ndef new_cluster(request):\n initial = {\n 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n 'size': 1,\n }\n ssh_key_count = request.user.created_sshkeys.count()\n\n if ssh_key_count == 0:\n messages.error(\n request,\n mark_safe(\n '<h4>No SSH keys associated to you.</h4>'\n 'Please upload one below to be able to launch a cluster.'\n 'This is one-time step.'\n )\n )\n return redirect('keys-new')\n elif ssh_key_count == 1:\n # If only 1 ssh key, make it pre-selected.\n initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']\n\n form = NewClusterForm(\n request.user,\n initial=initial,\n )\n if request.method == 'POST':\n form = NewClusterForm(\n request.user,\n data=request.POST,\n files=request.FILES,\n initial=initial,\n )\n if form.is_valid():\n cluster = form.save() # this will also magically spawn the cluster for us\n return redirect(cluster)\n context = {\n 'form': form,\n }\n return render(request, 'atmo/clusters/new.html', context)\n\n\n@login_required\n@delete_permission_required(Cluster)\ndef terminate_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n if not cluster.is_active:\n return redirect(cluster)\n\n if request.method == 'POST':\n cluster.deactivate()\n return redirect(cluster)\n\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/terminate.html', context=context)\n\n\n@login_required\n@view_permission_required(Cluster)\ndef detail_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/detail.html', context=context)\n", "path": "atmo/clusters/views.py"}]} | 921 | 262 |
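
A small, Django-free distillation of the ATMO change above: the form's `initial` mapping gains an `ssh_key` entry only when the user has exactly one key. The helper below is hypothetical (ATMO builds the dict inline in the view) but isolates the branch so it can be unit-tested without a request object.

```python
def build_initial(identifier, key_pks):
    """Return the form `initial` dict; pre-select the key when there is exactly one."""
    initial = {"identifier": identifier, "size": 1}
    if len(key_pks) == 1:
        initial["ssh_key"] = key_pks[0]
    return initial


if __name__ == "__main__":
    assert "ssh_key" not in build_initial("u-telemetry-analysis", [])
    assert build_initial("u-telemetry-analysis", [7])["ssh_key"] == 7
    assert "ssh_key" not in build_initial("u-telemetry-analysis", [7, 9])
```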
gh_patches_debug_40147 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-tf-671 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PRF evaluator: list index out of range
Hi!
I'm getting `list index out of range` when the prf evaluator is used.
**Config:**
Model: TransformerRelative
params:
beam_width: 1
train:
maximum_features_length: 50
maximum_labels_length: 50
save_summary_steps: 100
sample_buffer_size: 1000000
keep_checkpoint_max: 20
save_checkpoints_steps: 5000
max_step: 2000000
eval:
batch_size: 32
steps: 5000
export_on_best: bleu
external_evaluators: [ "bleu", "prf", "wer" ]
infer:
batch_size: 1024
**Full stack:**
W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
Traceback (most recent call last):
File "/home/dima/anaconda3/envs/tf/bin/onmt-main", line 8, in <module>
sys.exit(main())
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/bin/main.py", line 224, in main
hvd=hvd)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/runner.py", line 217, in train
moving_average_decay=train_config.get("moving_average_decay"))
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 118, in __call__
early_stop = self._evaluate(evaluator, step, moving_average=moving_average)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 140, in _evaluate
evaluator(step)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/evaluation.py", line 299, in __call__
score = scorer(self._labels_file, output_path)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/scorers.py", line 132, in __call__
precision_score, recall_score, fmeasure_score = fmeasure(ref_path, hyp_path)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/fmeasure.py", line 49, in fmeasure
if tag == classref[linecpt][tagcpt]:
IndexError: list index out of range
Can I help you with the issue? I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opennmt/utils/fmeasure.py`
Content:
```
1 """Hypotheses file scoring for Precision Recall and F-Measure."""
2
3 def fmeasure(ref_path,
4 hyp_path,
5 return_precision_only=False,
6 return_recall_only=False,
7 return_fmeasure_only=False):
8 """Compute Precision Recall and F-Measure between two files"""
9 with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
10 list_null_tags = ["X", "null", "NULL", "Null", "O"]
11 listtags = []
12 linecpt = 0
13 classref = []
14 classrandom = []
15 classhyp = []
16 nbrtagref = {}
17 nbrtaghyp = {}
18 nbrtagok = {}
19 for tag in listtags:
20 nbrtagref[tag] = 0
21 nbrtaghyp[tag] = 0
22 nbrtagok[tag] = 0
23 for line in ref_fp:
24 line = line.strip()
25 tabline = line.split(' ')
26 tagcpt = 0
27 lineref = []
28 for tag in tabline:
29 lineref.append(tag)
30 if tag in nbrtagref.keys() and tag not in list_null_tags:
31 nbrtagref[tag] = nbrtagref[tag]+1
32 else:
33 nbrtagref[tag] = 1
34 tagcpt = tagcpt+1
35 classref.append(lineref)
36 linecpt = linecpt+1
37 linecpt = 0
38 for line in hyp_fp:
39 line = line.strip()
40 tabline = line.split(' ')
41 tagcpt = 0
42 linehyp = []
43 linerandom = []
44 for tag in tabline:
45 linehyp.append(tag)
46 if tag not in listtags:
47 listtags.append(tag)
48 linerandom.append(tag)
49 if tag == classref[linecpt][tagcpt]:
50 if tag in nbrtagok.keys():
51 nbrtagok[tag] = nbrtagok[tag]+1
52 else:
53 nbrtagok[tag] = 1
54 tagcpt = tagcpt+1
55 if tag in nbrtaghyp.keys():
56 nbrtaghyp[tag] = nbrtaghyp[tag]+1
57 else:
58 nbrtaghyp[tag] = 1
59 classhyp.append(linehyp)
60 classrandom.append(linerandom)
61 linecpt = linecpt+1
62
63 tagcpt = 0
64 fullprecision = 0
65 fullrecall = 0
66 precision = {}
67 recall = {}
68 fulltagok = 0.00
69 fulltaghyp = 0.00
70 fulltagref = 0.00
71 for tag in listtags:
72 if tag not in nbrtagok:
73 nbrtagok[tag] = 0
74 if tag not in nbrtaghyp:
75 nbrtaghyp[tag] = 0
76 if tag not in nbrtagref:
77 nbrtagref[tag] = 0
78 if nbrtaghyp[tag] != 0:
79 precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]
80 else:
81 precision[tag] = 0
82 if nbrtagref[tag] != 0:
83 recall[tag] = nbrtagok[tag]/nbrtagref[tag]
84 else:
85 recall[tag] = 0
86 if tag not in list_null_tags:
87 fulltagok = fulltagok+nbrtagok[tag]
88 fulltaghyp = fulltaghyp+nbrtaghyp[tag]
89 fulltagref = fulltagref+nbrtagref[tag]
90 # fullprecision = fullprecision+precision[tag]
91 # fullrecall = fullrecall+recall[tag]
92 tagcpt = tagcpt+1
93 fullprecision = round(100*fulltagok/fulltaghyp, 2)/100
94 fullrecall = round(100*fulltagok/fulltagref, 2)/100
95 fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100
96 if return_precision_only:
97 return fullprecision
98 if return_recall_only:
99 return fullrecall
100 if return_fmeasure_only:
101 return fullfmeasure
102 return fullprecision, fullrecall, fullfmeasure
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opennmt/utils/fmeasure.py b/opennmt/utils/fmeasure.py
--- a/opennmt/utils/fmeasure.py
+++ b/opennmt/utils/fmeasure.py
@@ -9,21 +9,15 @@
with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
list_null_tags = ["X", "null", "NULL", "Null", "O"]
listtags = []
- linecpt = 0
classref = []
classrandom = []
classhyp = []
nbrtagref = {}
nbrtaghyp = {}
nbrtagok = {}
- for tag in listtags:
- nbrtagref[tag] = 0
- nbrtaghyp[tag] = 0
- nbrtagok[tag] = 0
for line in ref_fp:
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
lineref = []
for tag in tabline:
lineref.append(tag)
@@ -31,36 +25,29 @@
nbrtagref[tag] = nbrtagref[tag]+1
else:
nbrtagref[tag] = 1
- tagcpt = tagcpt+1
classref.append(lineref)
- linecpt = linecpt+1
- linecpt = 0
- for line in hyp_fp:
+ for line, lineref in zip(hyp_fp, classref):
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
linehyp = []
linerandom = []
- for tag in tabline:
+ for tagcpt, tag in enumerate(tabline):
linehyp.append(tag)
if tag not in listtags:
listtags.append(tag)
linerandom.append(tag)
- if tag == classref[linecpt][tagcpt]:
+ if tagcpt < len(lineref) and tag == lineref[tagcpt]:
if tag in nbrtagok.keys():
nbrtagok[tag] = nbrtagok[tag]+1
else:
nbrtagok[tag] = 1
- tagcpt = tagcpt+1
if tag in nbrtaghyp.keys():
nbrtaghyp[tag] = nbrtaghyp[tag]+1
else:
nbrtaghyp[tag] = 1
classhyp.append(linehyp)
classrandom.append(linerandom)
- linecpt = linecpt+1
- tagcpt = 0
fullprecision = 0
fullrecall = 0
precision = {}
@@ -87,12 +74,11 @@
fulltagok = fulltagok+nbrtagok[tag]
fulltaghyp = fulltaghyp+nbrtaghyp[tag]
fulltagref = fulltagref+nbrtagref[tag]
-# fullprecision = fullprecision+precision[tag]
-# fullrecall = fullrecall+recall[tag]
- tagcpt = tagcpt+1
- fullprecision = round(100*fulltagok/fulltaghyp, 2)/100
- fullrecall = round(100*fulltagok/fulltagref, 2)/100
- fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100
+ fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0
+ fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0
+ fullfmeasure = (
+ (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)
+ if (fullprecision + fullrecall) != 0 else 0)
if return_precision_only:
return fullprecision
if return_recall_only:
| {"golden_diff": "diff --git a/opennmt/utils/fmeasure.py b/opennmt/utils/fmeasure.py\n--- a/opennmt/utils/fmeasure.py\n+++ b/opennmt/utils/fmeasure.py\n@@ -9,21 +9,15 @@\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n- linecpt = 0\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n- for tag in listtags:\n- nbrtagref[tag] = 0\n- nbrtaghyp[tag] = 0\n- nbrtagok[tag] = 0\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n- tagcpt = 0\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n@@ -31,36 +25,29 @@\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n- tagcpt = tagcpt+1\n classref.append(lineref)\n- linecpt = linecpt+1\n- linecpt = 0\n- for line in hyp_fp:\n+ for line, lineref in zip(hyp_fp, classref):\n line = line.strip()\n tabline = line.split(' ')\n- tagcpt = 0\n linehyp = []\n linerandom = []\n- for tag in tabline:\n+ for tagcpt, tag in enumerate(tabline):\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n- if tag == classref[linecpt][tagcpt]:\n+ if tagcpt < len(lineref) and tag == lineref[tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n- tagcpt = tagcpt+1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n- linecpt = linecpt+1\n \n- tagcpt = 0\n fullprecision = 0\n fullrecall = 0\n precision = {}\n@@ -87,12 +74,11 @@\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n-# fullprecision = fullprecision+precision[tag]\n-# fullrecall = fullrecall+recall[tag]\n- tagcpt = tagcpt+1\n- fullprecision = round(100*fulltagok/fulltaghyp, 2)/100\n- fullrecall = round(100*fulltagok/fulltagref, 2)/100\n- fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100\n+ fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0\n+ fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0\n+ fullfmeasure = (\n+ (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)\n+ if (fullprecision + fullrecall) != 0 else 0)\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n", "issue": "PRF evaluator: list index out of range\nHi! 
\r\nI'm getting `list index out of range` when prf evaluator is used.\r\n\r\n**Config:**\r\nModel: TransformerRelative\r\nparams:\r\n beam_width: 1\r\n\r\ntrain:\r\n maximum_features_length: 50\r\n maximum_labels_length: 50\r\n save_summary_steps: 100\r\n sample_buffer_size: 1000000\r\n keep_checkpoint_max: 20\r\n save_checkpoints_steps: 5000\r\n max_step: 2000000\r\n\r\neval:\r\n batch_size: 32\r\n steps: 5000\r\n export_on_best: bleu\r\n external_evaluators: [ \"bleu\", \"prf\", \"wer\" ]\r\n\r\ninfer:\r\n batch_size: 1024\r\n\r\n**Full stack:**\r\nW tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled\r\nTraceback (most recent call last):\r\n File \"/home/dima/anaconda3/envs/tf/bin/onmt-main\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/bin/main.py\", line 224, in main\r\n hvd=hvd)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/runner.py\", line 217, in train\r\n moving_average_decay=train_config.get(\"moving_average_decay\"))\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py\", line 118, in __call__\r\n early_stop = self._evaluate(evaluator, step, moving_average=moving_average)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py\", line 140, in _evaluate\r\n evaluator(step)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/evaluation.py\", line 299, in __call__\r\n score = scorer(self._labels_file, output_path)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/scorers.py\", line 132, in __call__\r\n precision_score, recall_score, fmeasure_score = fmeasure(ref_path, hyp_path)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/fmeasure.py\", line 49, in fmeasure\r\n if tag == classref[linecpt][tagcpt]:\r\nIndexError: list index out of range\r\n\r\nCan I help you with the issue? 
I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.\r\n\n", "before_files": [{"content": "\"\"\"Hypotheses file scoring for Precision Recall and F-Measure.\"\"\"\n\ndef fmeasure(ref_path,\n hyp_path,\n return_precision_only=False,\n return_recall_only=False,\n return_fmeasure_only=False):\n \"\"\"Compute Precision Recall and F-Measure between two files\"\"\"\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n linecpt = 0\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n for tag in listtags:\n nbrtagref[tag] = 0\n nbrtaghyp[tag] = 0\n nbrtagok[tag] = 0\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n tagcpt = 0\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n if tag in nbrtagref.keys() and tag not in list_null_tags:\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n tagcpt = tagcpt+1\n classref.append(lineref)\n linecpt = linecpt+1\n linecpt = 0\n for line in hyp_fp:\n line = line.strip()\n tabline = line.split(' ')\n tagcpt = 0\n linehyp = []\n linerandom = []\n for tag in tabline:\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n if tag == classref[linecpt][tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n tagcpt = tagcpt+1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n linecpt = linecpt+1\n\n tagcpt = 0\n fullprecision = 0\n fullrecall = 0\n precision = {}\n recall = {}\n fulltagok = 0.00\n fulltaghyp = 0.00\n fulltagref = 0.00\n for tag in listtags:\n if tag not in nbrtagok:\n nbrtagok[tag] = 0\n if tag not in nbrtaghyp:\n nbrtaghyp[tag] = 0\n if tag not in nbrtagref:\n nbrtagref[tag] = 0\n if nbrtaghyp[tag] != 0:\n precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]\n else:\n precision[tag] = 0\n if nbrtagref[tag] != 0:\n recall[tag] = nbrtagok[tag]/nbrtagref[tag]\n else:\n recall[tag] = 0\n if tag not in list_null_tags:\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n# fullprecision = fullprecision+precision[tag]\n# fullrecall = fullrecall+recall[tag]\n tagcpt = tagcpt+1\n fullprecision = round(100*fulltagok/fulltaghyp, 2)/100\n fullrecall = round(100*fulltagok/fulltagref, 2)/100\n fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n return fullrecall\n if return_fmeasure_only:\n return fullfmeasure\n return fullprecision, fullrecall, fullfmeasure\n", "path": "opennmt/utils/fmeasure.py"}], "after_files": [{"content": "\"\"\"Hypotheses file scoring for Precision Recall and F-Measure.\"\"\"\n\ndef fmeasure(ref_path,\n hyp_path,\n return_precision_only=False,\n return_recall_only=False,\n return_fmeasure_only=False):\n \"\"\"Compute Precision Recall and F-Measure between two files\"\"\"\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n 
if tag in nbrtagref.keys() and tag not in list_null_tags:\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n classref.append(lineref)\n for line, lineref in zip(hyp_fp, classref):\n line = line.strip()\n tabline = line.split(' ')\n linehyp = []\n linerandom = []\n for tagcpt, tag in enumerate(tabline):\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n if tagcpt < len(lineref) and tag == lineref[tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n\n fullprecision = 0\n fullrecall = 0\n precision = {}\n recall = {}\n fulltagok = 0.00\n fulltaghyp = 0.00\n fulltagref = 0.00\n for tag in listtags:\n if tag not in nbrtagok:\n nbrtagok[tag] = 0\n if tag not in nbrtaghyp:\n nbrtaghyp[tag] = 0\n if tag not in nbrtagref:\n nbrtagref[tag] = 0\n if nbrtaghyp[tag] != 0:\n precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]\n else:\n precision[tag] = 0\n if nbrtagref[tag] != 0:\n recall[tag] = nbrtagok[tag]/nbrtagref[tag]\n else:\n recall[tag] = 0\n if tag not in list_null_tags:\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0\n fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0\n fullfmeasure = (\n (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)\n if (fullprecision + fullrecall) != 0 else 0)\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n return fullrecall\n if return_fmeasure_only:\n return fullfmeasure\n return fullprecision, fullrecall, fullfmeasure\n", "path": "opennmt/utils/fmeasure.py"}]} | 2,017 | 847 |
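
To make the OpenNMT-tf fix above concrete: the crash came from indexing the reference line with a position taken from a longer hypothesis line, and the patch adds a bounds check plus zero-division guards. Below is a compact reimplementation of the same precision/recall/F-measure idea for a single sentence pair; it is illustrative only, not the library's API.

```python
def prf(ref_tags, hyp_tags, null_tags=("X", "null", "NULL", "Null", "O")):
    """Precision/recall/F1 over one tag-sequence pair, tolerant of length mismatches."""
    ok = hyp = ref = 0
    for tag in ref_tags:
        if tag not in null_tags:
            ref += 1
    for i, tag in enumerate(hyp_tags):
        if tag in null_tags:
            continue
        hyp += 1
        if i < len(ref_tags) and tag == ref_tags[i]:  # guard: hyp may be longer than ref
            ok += 1
    precision = ok / hyp if hyp else 0.0
    recall = ok / ref if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # The extra hypothesis tag no longer raises IndexError.
    print(prf(["B", "O"], ["B", "O", "B"]))
```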
gh_patches_debug_2290 | rasdani/github-patches | git_diff | TheAlgorithms__Python-4779 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug with union in disjoint_set
https://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py
```python
def union_set(x, y):
"""
union two sets.
set with bigger rank should be parent, so that the
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
if x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
```
Here we need to check if `x == y`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `data_structures/disjoint_set/disjoint_set.py`
Content:
```
1 """
2 disjoint set
3 Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
4 """
5
6
7 class Node:
8 def __init__(self, data):
9 self.data = data
10
11
12 def make_set(x):
13 """
14 make x as a set.
15 """
16 # rank is the distance from x to its' parent
17 # root's rank is 0
18 x.rank = 0
19 x.parent = x
20
21
22 def union_set(x, y):
23 """
24 union two sets.
25 set with bigger rank should be parent, so that the
26 disjoint set tree will be more flat.
27 """
28 x, y = find_set(x), find_set(y)
29 if x.rank > y.rank:
30 y.parent = x
31 else:
32 x.parent = y
33 if x.rank == y.rank:
34 y.rank += 1
35
36
37 def find_set(x):
38 """
39 return the parent of x
40 """
41 if x != x.parent:
42 x.parent = find_set(x.parent)
43 return x.parent
44
45
46 def find_python_set(node: Node) -> set:
47 """
48 Return a Python Standard Library set that contains i.
49 """
50 sets = ({0, 1, 2}, {3, 4, 5})
51 for s in sets:
52 if node.data in s:
53 return s
54 raise ValueError(f"{node.data} is not in {sets}")
55
56
57 def test_disjoint_set():
58 """
59 >>> test_disjoint_set()
60 """
61 vertex = [Node(i) for i in range(6)]
62 for v in vertex:
63 make_set(v)
64
65 union_set(vertex[0], vertex[1])
66 union_set(vertex[1], vertex[2])
67 union_set(vertex[3], vertex[4])
68 union_set(vertex[3], vertex[5])
69
70 for node0 in vertex:
71 for node1 in vertex:
72 if find_python_set(node0).isdisjoint(find_python_set(node1)):
73 assert find_set(node0) != find_set(node1)
74 else:
75 assert find_set(node0) == find_set(node1)
76
77
78 if __name__ == "__main__":
79 test_disjoint_set()
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/data_structures/disjoint_set/disjoint_set.py b/data_structures/disjoint_set/disjoint_set.py
--- a/data_structures/disjoint_set/disjoint_set.py
+++ b/data_structures/disjoint_set/disjoint_set.py
@@ -26,7 +26,10 @@
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
- if x.rank > y.rank:
+ if x == y:
+ return
+
+ elif x.rank > y.rank:
y.parent = x
else:
x.parent = y
| {"golden_diff": "diff --git a/data_structures/disjoint_set/disjoint_set.py b/data_structures/disjoint_set/disjoint_set.py\n--- a/data_structures/disjoint_set/disjoint_set.py\n+++ b/data_structures/disjoint_set/disjoint_set.py\n@@ -26,7 +26,10 @@\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n- if x.rank > y.rank:\r\n+ if x == y:\r\n+ return\r\n+\r\n+ elif x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\n", "issue": "Bug with union in disjoint_set\nhttps://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py\r\n```python\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n```\r\n\r\nhere need check if x==y\r\n\nBug with union in disjoint_set\nhttps://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py\r\n```python\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n```\r\n\r\nhere need check if x==y\r\n\n", "before_files": [{"content": "\"\"\"\r\n disjoint set\r\n Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure\r\n\"\"\"\r\n\r\n\r\nclass Node:\r\n def __init__(self, data):\r\n self.data = data\r\n\r\n\r\ndef make_set(x):\r\n \"\"\"\r\n make x as a set.\r\n \"\"\"\r\n # rank is the distance from x to its' parent\r\n # root's rank is 0\r\n x.rank = 0\r\n x.parent = x\r\n\r\n\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n\r\n\r\ndef find_set(x):\r\n \"\"\"\r\n return the parent of x\r\n \"\"\"\r\n if x != x.parent:\r\n x.parent = find_set(x.parent)\r\n return x.parent\r\n\r\n\r\ndef find_python_set(node: Node) -> set:\r\n \"\"\"\r\n Return a Python Standard Library set that contains i.\r\n \"\"\"\r\n sets = ({0, 1, 2}, {3, 4, 5})\r\n for s in sets:\r\n if node.data in s:\r\n return s\r\n raise ValueError(f\"{node.data} is not in {sets}\")\r\n\r\n\r\ndef test_disjoint_set():\r\n \"\"\"\r\n >>> test_disjoint_set()\r\n \"\"\"\r\n vertex = [Node(i) for i in range(6)]\r\n for v in vertex:\r\n make_set(v)\r\n\r\n union_set(vertex[0], vertex[1])\r\n union_set(vertex[1], vertex[2])\r\n union_set(vertex[3], vertex[4])\r\n union_set(vertex[3], vertex[5])\r\n\r\n for node0 in vertex:\r\n for node1 in vertex:\r\n if find_python_set(node0).isdisjoint(find_python_set(node1)):\r\n assert find_set(node0) != find_set(node1)\r\n else:\r\n assert find_set(node0) == find_set(node1)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n test_disjoint_set()\r\n", "path": "data_structures/disjoint_set/disjoint_set.py"}], "after_files": [{"content": "\"\"\"\r\n disjoint set\r\n Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure\r\n\"\"\"\r\n\r\n\r\nclass Node:\r\n def __init__(self, data):\r\n self.data = data\r\n\r\n\r\ndef make_set(x):\r\n \"\"\"\r\n make x as a set.\r\n \"\"\"\r\n # rank is the 
distance from x to its' parent\r\n # root's rank is 0\r\n x.rank = 0\r\n x.parent = x\r\n\r\n\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x == y:\r\n return\r\n\r\n elif x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n\r\n\r\ndef find_set(x):\r\n \"\"\"\r\n return the parent of x\r\n \"\"\"\r\n if x != x.parent:\r\n x.parent = find_set(x.parent)\r\n return x.parent\r\n\r\n\r\ndef find_python_set(node: Node) -> set:\r\n \"\"\"\r\n Return a Python Standard Library set that contains i.\r\n \"\"\"\r\n sets = ({0, 1, 2}, {3, 4, 5})\r\n for s in sets:\r\n if node.data in s:\r\n return s\r\n raise ValueError(f\"{node.data} is not in {sets}\")\r\n\r\n\r\ndef test_disjoint_set():\r\n \"\"\"\r\n >>> test_disjoint_set()\r\n \"\"\"\r\n vertex = [Node(i) for i in range(6)]\r\n for v in vertex:\r\n make_set(v)\r\n\r\n union_set(vertex[0], vertex[1])\r\n union_set(vertex[1], vertex[2])\r\n union_set(vertex[3], vertex[4])\r\n union_set(vertex[3], vertex[5])\r\n\r\n for node0 in vertex:\r\n for node1 in vertex:\r\n if find_python_set(node0).isdisjoint(find_python_set(node1)):\r\n assert find_set(node0) != find_set(node1)\r\n else:\r\n assert find_set(node0) == find_set(node1)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n test_disjoint_set()\r\n", "path": "data_structures/disjoint_set/disjoint_set.py"}]} | 1,150 | 135 |
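
The disjoint-set record above boils down to one guard: if both elements already share a root, `union` must be a no-op, otherwise a self-union can bump the root's rank spuriously. A compact array-based version of the same union-by-rank logic with that guard follows; it is a sketch, not the repository's `Node`-based API.

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        x, y = self.find(x), self.find(y)
        if x == y:  # already in the same set: nothing to do (the bug fixed above)
            return
        if self.rank[x] > self.rank[y]:
            self.parent[y] = x
        else:
            self.parent[x] = y
            if self.rank[x] == self.rank[y]:
                self.rank[y] += 1


if __name__ == "__main__":
    ds = DisjointSet(3)
    ds.union(0, 1)
    ds.union(0, 1)  # repeated union is now a no-op
    assert ds.find(0) == ds.find(1) != ds.find(2)
```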
gh_patches_debug_15343 | rasdani/github-patches | git_diff | Pylons__pyramid-1131 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No way to add query parameters without a value
I occasionally need to put a hint in the query string for a URL, which is essentially a parameter without a value. This can be important to provide information to javascript or as a hint to GA. For example I may need to use `http://localhost/dashboard?new-user` as URL when I redirect a new user to the dashboard after completing registration.
Intuitively I expected this to work:
``` python
return HTTPFound(request.route_url('dashboard', _query={'new-user': None}))
```
but that returns `/dashboard?new-user=None` which is not very pretty.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/encode.py`
Content:
```
1 from pyramid.compat import (
2 text_type,
3 binary_type,
4 is_nonstr_iter,
5 url_quote as _url_quote,
6 url_quote_plus as quote_plus, # bw compat api (dnr)
7 )
8
9 def url_quote(s, safe=''): # bw compat api
10 return _url_quote(s, safe=safe)
11
12 def urlencode(query, doseq=True):
13 """
14 An alternate implementation of Python's stdlib `urllib.urlencode
15 function <http://docs.python.org/library/urllib.html>`_ which
16 accepts unicode keys and values within the ``query``
17 dict/sequence; all Unicode keys and values are first converted to
18 UTF-8 before being used to compose the query string.
19
20 The value of ``query`` must be a sequence of two-tuples
21 representing key/value pairs *or* an object (often a dictionary)
22 with an ``.items()`` method that returns a sequence of two-tuples
23 representing key/value pairs.
24
25 For minimal calling convention backwards compatibility, this
26 version of urlencode accepts *but ignores* a second argument
27 conventionally named ``doseq``. The Python stdlib version behaves
28 differently when ``doseq`` is False and when a sequence is
29 presented as one of the values. This version always behaves in
30 the ``doseq=True`` mode, no matter what the value of the second
31 argument.
32
33 See the Python stdlib documentation for ``urllib.urlencode`` for
34 more information.
35 """
36 try:
37 # presumed to be a dictionary
38 query = query.items()
39 except AttributeError:
40 pass
41
42 result = ''
43 prefix = ''
44
45 for (k, v) in query:
46 k = _enc(k)
47
48 if is_nonstr_iter(v):
49 for x in v:
50 x = _enc(x)
51 result += '%s%s=%s' % (prefix, k, x)
52 prefix = '&'
53 else:
54 v = _enc(v)
55 result += '%s%s=%s' % (prefix, k, v)
56
57 prefix = '&'
58
59 return result
60
61 def _enc(val):
62 cls = val.__class__
63 if cls is text_type:
64 val = val.encode('utf-8')
65 elif cls is not binary_type:
66 val = str(val).encode('utf-8')
67 return quote_plus(val)
68
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyramid/encode.py b/pyramid/encode.py
--- a/pyramid/encode.py
+++ b/pyramid/encode.py
@@ -32,6 +32,10 @@
See the Python stdlib documentation for ``urllib.urlencode`` for
more information.
+
+ .. versionchanged:: 1.5
+ In a key/value pair, if the value is ``None`` then it will be
+ dropped from the resulting output.
"""
try:
# presumed to be a dictionary
@@ -50,6 +54,8 @@
x = _enc(x)
result += '%s%s=%s' % (prefix, k, x)
prefix = '&'
+ elif v is None:
+ result += '%s%s=' % (prefix, k)
else:
v = _enc(v)
result += '%s%s=%s' % (prefix, k, v)
| {"golden_diff": "diff --git a/pyramid/encode.py b/pyramid/encode.py\n--- a/pyramid/encode.py\n+++ b/pyramid/encode.py\n@@ -32,6 +32,10 @@\n \n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n+\n+ .. versionchanged:: 1.5\n+ In a key/value pair, if the value is ``None`` then it will be\n+ dropped from the resulting output.\n \"\"\"\n try:\n # presumed to be a dictionary\n@@ -50,6 +54,8 @@\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n+ elif v is None:\n+ result += '%s%s=' % (prefix, k)\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n", "issue": "No way to add query parameters without a value\nI occasionally need to put a hint in the query string for a URL, which is essentially a parameter without a value. This can be important to provide information to javascript or as a hint to GA. For example I may need to use `http://localhost/dashboard?new-user` as URL when I redirect a new user to the dashboard after completing registration.\n\nIntuitively I expected this to work:\n\n``` python\nreturn HTTPFound(request.route_url('dashboard', _query={'new-user': None}))\n```\n\nbut that returns `/dashboard?new-user=None` which is not very pretty.\n\n", "before_files": [{"content": "from pyramid.compat import (\n text_type,\n binary_type,\n is_nonstr_iter,\n url_quote as _url_quote,\n url_quote_plus as quote_plus, # bw compat api (dnr)\n )\n\ndef url_quote(s, safe=''): # bw compat api\n return _url_quote(s, safe=safe)\n\ndef urlencode(query, doseq=True):\n \"\"\"\n An alternate implementation of Python's stdlib `urllib.urlencode\n function <http://docs.python.org/library/urllib.html>`_ which\n accepts unicode keys and values within the ``query``\n dict/sequence; all Unicode keys and values are first converted to\n UTF-8 before being used to compose the query string.\n\n The value of ``query`` must be a sequence of two-tuples\n representing key/value pairs *or* an object (often a dictionary)\n with an ``.items()`` method that returns a sequence of two-tuples\n representing key/value pairs.\n\n For minimal calling convention backwards compatibility, this\n version of urlencode accepts *but ignores* a second argument\n conventionally named ``doseq``. The Python stdlib version behaves\n differently when ``doseq`` is False and when a sequence is\n presented as one of the values. 
This version always behaves in\n the ``doseq=True`` mode, no matter what the value of the second\n argument.\n\n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n \"\"\"\n try:\n # presumed to be a dictionary\n query = query.items()\n except AttributeError:\n pass\n\n result = ''\n prefix = ''\n\n for (k, v) in query:\n k = _enc(k)\n\n if is_nonstr_iter(v):\n for x in v:\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n\n prefix = '&'\n\n return result\n\ndef _enc(val):\n cls = val.__class__\n if cls is text_type:\n val = val.encode('utf-8')\n elif cls is not binary_type:\n val = str(val).encode('utf-8')\n return quote_plus(val)\n\n", "path": "pyramid/encode.py"}], "after_files": [{"content": "from pyramid.compat import (\n text_type,\n binary_type,\n is_nonstr_iter,\n url_quote as _url_quote,\n url_quote_plus as quote_plus, # bw compat api (dnr)\n )\n\ndef url_quote(s, safe=''): # bw compat api\n return _url_quote(s, safe=safe)\n\ndef urlencode(query, doseq=True):\n \"\"\"\n An alternate implementation of Python's stdlib `urllib.urlencode\n function <http://docs.python.org/library/urllib.html>`_ which\n accepts unicode keys and values within the ``query``\n dict/sequence; all Unicode keys and values are first converted to\n UTF-8 before being used to compose the query string.\n\n The value of ``query`` must be a sequence of two-tuples\n representing key/value pairs *or* an object (often a dictionary)\n with an ``.items()`` method that returns a sequence of two-tuples\n representing key/value pairs.\n\n For minimal calling convention backwards compatibility, this\n version of urlencode accepts *but ignores* a second argument\n conventionally named ``doseq``. The Python stdlib version behaves\n differently when ``doseq`` is False and when a sequence is\n presented as one of the values. This version always behaves in\n the ``doseq=True`` mode, no matter what the value of the second\n argument.\n\n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n\n .. versionchanged:: 1.5\n In a key/value pair, if the value is ``None`` then it will be\n dropped from the resulting output.\n \"\"\"\n try:\n # presumed to be a dictionary\n query = query.items()\n except AttributeError:\n pass\n\n result = ''\n prefix = ''\n\n for (k, v) in query:\n k = _enc(k)\n\n if is_nonstr_iter(v):\n for x in v:\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n elif v is None:\n result += '%s%s=' % (prefix, k)\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n\n prefix = '&'\n\n return result\n\ndef _enc(val):\n cls = val.__class__\n if cls is text_type:\n val = val.encode('utf-8')\n elif cls is not binary_type:\n val = str(val).encode('utf-8')\n return quote_plus(val)\n\n", "path": "pyramid/encode.py"}]} | 1,032 | 208 |
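
Finally, the Pyramid record above changes `urlencode` so that a `None` value serialises as a bare `key=` instead of the literal string `None`. The stand-alone sketch below reproduces just that branch; it is not Pyramid's actual helper, which also handles sequences and bytes.

```python
from urllib.parse import quote_plus


def tiny_urlencode(query):
    """Minimal key/value encoder mirroring the patched branch: None -> 'key='."""
    parts = []
    for key, value in query.items():
        if value is None:
            parts.append(f"{quote_plus(str(key))}=")
        else:
            parts.append(f"{quote_plus(str(key))}={quote_plus(str(value))}")
    return "&".join(parts)


if __name__ == "__main__":
    assert tiny_urlencode({"new-user": None}) == "new-user="
    assert tiny_urlencode({"a": 1, "b": None}) == "a=1&b="
```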
gh_patches_debug_37438 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-934 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
B3 trace_id and span_id not handled correctly
These fields are not being handled correctly when an invalid value is passed for one or both of them. Fix that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing
16
17 import opentelemetry.trace as trace
18 from opentelemetry.context import Context
19 from opentelemetry.trace.propagation.httptextformat import (
20 Getter,
21 HTTPTextFormat,
22 HTTPTextFormatT,
23 Setter,
24 )
25
26
27 class B3Format(HTTPTextFormat):
28 """Propagator for the B3 HTTP header format.
29
30 See: https://github.com/openzipkin/b3-propagation
31 """
32
33 SINGLE_HEADER_KEY = "b3"
34 TRACE_ID_KEY = "x-b3-traceid"
35 SPAN_ID_KEY = "x-b3-spanid"
36 PARENT_SPAN_ID_KEY = "x-b3-parentspanid"
37 SAMPLED_KEY = "x-b3-sampled"
38 FLAGS_KEY = "x-b3-flags"
39 _SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
40
41 def extract(
42 self,
43 get_from_carrier: Getter[HTTPTextFormatT],
44 carrier: HTTPTextFormatT,
45 context: typing.Optional[Context] = None,
46 ) -> Context:
47 trace_id = format_trace_id(trace.INVALID_TRACE_ID)
48 span_id = format_span_id(trace.INVALID_SPAN_ID)
49 sampled = "0"
50 flags = None
51
52 single_header = _extract_first_element(
53 get_from_carrier(carrier, self.SINGLE_HEADER_KEY)
54 )
55 if single_header:
56 # The b3 spec calls for the sampling state to be
57 # "deferred", which is unspecified. This concept does not
58 # translate to SpanContext, so we set it as recorded.
59 sampled = "1"
60 fields = single_header.split("-", 4)
61
62 if len(fields) == 1:
63 sampled = fields[0]
64 elif len(fields) == 2:
65 trace_id, span_id = fields
66 elif len(fields) == 3:
67 trace_id, span_id, sampled = fields
68 elif len(fields) == 4:
69 trace_id, span_id, sampled, _ = fields
70 else:
71 return trace.set_span_in_context(trace.INVALID_SPAN)
72 else:
73 trace_id = (
74 _extract_first_element(
75 get_from_carrier(carrier, self.TRACE_ID_KEY)
76 )
77 or trace_id
78 )
79 span_id = (
80 _extract_first_element(
81 get_from_carrier(carrier, self.SPAN_ID_KEY)
82 )
83 or span_id
84 )
85 sampled = (
86 _extract_first_element(
87 get_from_carrier(carrier, self.SAMPLED_KEY)
88 )
89 or sampled
90 )
91 flags = (
92 _extract_first_element(
93 get_from_carrier(carrier, self.FLAGS_KEY)
94 )
95 or flags
96 )
97
98 options = 0
99 # The b3 spec provides no defined behavior for both sample and
100 # flag values set. Since the setting of at least one implies
101 # the desire for some form of sampling, propagate if either
102 # header is set to allow.
103 if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
104 options |= trace.TraceFlags.SAMPLED
105 return trace.set_span_in_context(
106 trace.DefaultSpan(
107 trace.SpanContext(
108 # trace an span ids are encoded in hex, so must be converted
109 trace_id=int(trace_id, 16),
110 span_id=int(span_id, 16),
111 is_remote=True,
112 trace_flags=trace.TraceFlags(options),
113 trace_state=trace.TraceState(),
114 )
115 )
116 )
117
118 def inject(
119 self,
120 set_in_carrier: Setter[HTTPTextFormatT],
121 carrier: HTTPTextFormatT,
122 context: typing.Optional[Context] = None,
123 ) -> None:
124 span = trace.get_current_span(context=context)
125
126 if span.get_context() == trace.INVALID_SPAN_CONTEXT:
127 return
128
129 sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0
130 set_in_carrier(
131 carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),
132 )
133 set_in_carrier(
134 carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)
135 )
136 if span.parent is not None:
137 set_in_carrier(
138 carrier,
139 self.PARENT_SPAN_ID_KEY,
140 format_span_id(span.parent.span_id),
141 )
142 set_in_carrier(carrier, self.SAMPLED_KEY, "1" if sampled else "0")
143
144
145 def format_trace_id(trace_id: int) -> str:
146 """Format the trace id according to b3 specification."""
147 return format(trace_id, "032x")
148
149
150 def format_span_id(span_id: int) -> str:
151 """Format the span id according to b3 specification."""
152 return format(span_id, "016x")
153
154
155 def _extract_first_element(
156 items: typing.Iterable[HTTPTextFormatT],
157 ) -> typing.Optional[HTTPTextFormatT]:
158 if items is None:
159 return None
160 return next(iter(items), None)
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
@@ -13,9 +13,11 @@
# limitations under the License.
import typing
+from re import compile as re_compile
import opentelemetry.trace as trace
from opentelemetry.context import Context
+from opentelemetry.sdk.trace import generate_span_id, generate_trace_id
from opentelemetry.trace.propagation.httptextformat import (
Getter,
HTTPTextFormat,
@@ -37,6 +39,8 @@
SAMPLED_KEY = "x-b3-sampled"
FLAGS_KEY = "x-b3-flags"
_SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
+ _trace_id_regex = re_compile(r"[\da-fA-F]{16}|[\da-fA-F]{32}")
+ _span_id_regex = re_compile(r"[\da-fA-F]{16}")
def extract(
self,
@@ -95,6 +99,18 @@
or flags
)
+ if (
+ self._trace_id_regex.fullmatch(trace_id) is None
+ or self._span_id_regex.fullmatch(span_id) is None
+ ):
+ trace_id = generate_trace_id()
+ span_id = generate_span_id()
+ sampled = "0"
+
+ else:
+ trace_id = int(trace_id, 16)
+ span_id = int(span_id, 16)
+
options = 0
# The b3 spec provides no defined behavior for both sample and
# flag values set. Since the setting of at least one implies
@@ -102,12 +118,13 @@
# header is set to allow.
if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
options |= trace.TraceFlags.SAMPLED
+
return trace.set_span_in_context(
trace.DefaultSpan(
trace.SpanContext(
# trace an span ids are encoded in hex, so must be converted
- trace_id=int(trace_id, 16),
- span_id=int(span_id, 16),
+ trace_id=trace_id,
+ span_id=span_id,
is_remote=True,
trace_flags=trace.TraceFlags(options),
trace_state=trace.TraceState(),
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n@@ -13,9 +13,11 @@\n # limitations under the License.\n \n import typing\n+from re import compile as re_compile\n \n import opentelemetry.trace as trace\n from opentelemetry.context import Context\n+from opentelemetry.sdk.trace import generate_span_id, generate_trace_id\n from opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n@@ -37,6 +39,8 @@\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n+ _trace_id_regex = re_compile(r\"[\\da-fA-F]{16}|[\\da-fA-F]{32}\")\n+ _span_id_regex = re_compile(r\"[\\da-fA-F]{16}\")\n \n def extract(\n self,\n@@ -95,6 +99,18 @@\n or flags\n )\n \n+ if (\n+ self._trace_id_regex.fullmatch(trace_id) is None\n+ or self._span_id_regex.fullmatch(span_id) is None\n+ ):\n+ trace_id = generate_trace_id()\n+ span_id = generate_span_id()\n+ sampled = \"0\"\n+\n+ else:\n+ trace_id = int(trace_id, 16)\n+ span_id = int(span_id, 16)\n+\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. Since the setting of at least one implies\n@@ -102,12 +118,13 @@\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n+\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n- trace_id=int(trace_id, 16),\n- span_id=int(span_id, 16),\n+ trace_id=trace_id,\n+ span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n", "issue": "B3 trace_id and span_id not handled correctly\nThese fields are not being handled correctly when an invalid value is passed for one or both of them. 
Fix that.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n HTTPTextFormatT,\n Setter,\n)\n\n\nclass B3Format(HTTPTextFormat):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n\n def extract(\n self,\n get_from_carrier: Getter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n trace_id = format_trace_id(trace.INVALID_TRACE_ID)\n span_id = format_span_id(trace.INVALID_SPAN_ID)\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n get_from_carrier(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n return trace.set_span_in_context(trace.INVALID_SPAN)\n else:\n trace_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.TRACE_ID_KEY)\n )\n or trace_id\n )\n span_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.SPAN_ID_KEY)\n )\n or span_id\n )\n sampled = (\n _extract_first_element(\n get_from_carrier(carrier, self.SAMPLED_KEY)\n )\n or sampled\n )\n flags = (\n _extract_first_element(\n get_from_carrier(carrier, self.FLAGS_KEY)\n )\n or flags\n )\n\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. 
Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=int(trace_id, 16),\n span_id=int(span_id, 16),\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n )\n )\n\n def inject(\n self,\n set_in_carrier: Setter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n if span.get_context() == trace.INVALID_SPAN_CONTEXT:\n return\n\n sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0\n set_in_carrier(\n carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),\n )\n set_in_carrier(\n carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)\n )\n if span.parent is not None:\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span.parent.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n\ndef format_trace_id(trace_id: int) -> str:\n \"\"\"Format the trace id according to b3 specification.\"\"\"\n return format(trace_id, \"032x\")\n\n\ndef format_span_id(span_id: int) -> str:\n \"\"\"Format the span id according to b3 specification.\"\"\"\n return format(span_id, \"016x\")\n\n\ndef _extract_first_element(\n items: typing.Iterable[HTTPTextFormatT],\n) -> typing.Optional[HTTPTextFormatT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\nfrom re import compile as re_compile\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.sdk.trace import generate_span_id, generate_trace_id\nfrom opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n HTTPTextFormatT,\n Setter,\n)\n\n\nclass B3Format(HTTPTextFormat):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n _trace_id_regex = re_compile(r\"[\\da-fA-F]{16}|[\\da-fA-F]{32}\")\n _span_id_regex = re_compile(r\"[\\da-fA-F]{16}\")\n\n def extract(\n self,\n get_from_carrier: Getter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n trace_id = format_trace_id(trace.INVALID_TRACE_ID)\n span_id = 
format_span_id(trace.INVALID_SPAN_ID)\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n get_from_carrier(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n return trace.set_span_in_context(trace.INVALID_SPAN)\n else:\n trace_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.TRACE_ID_KEY)\n )\n or trace_id\n )\n span_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.SPAN_ID_KEY)\n )\n or span_id\n )\n sampled = (\n _extract_first_element(\n get_from_carrier(carrier, self.SAMPLED_KEY)\n )\n or sampled\n )\n flags = (\n _extract_first_element(\n get_from_carrier(carrier, self.FLAGS_KEY)\n )\n or flags\n )\n\n if (\n self._trace_id_regex.fullmatch(trace_id) is None\n or self._span_id_regex.fullmatch(span_id) is None\n ):\n trace_id = generate_trace_id()\n span_id = generate_span_id()\n sampled = \"0\"\n\n else:\n trace_id = int(trace_id, 16)\n span_id = int(span_id, 16)\n\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=trace_id,\n span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n )\n )\n\n def inject(\n self,\n set_in_carrier: Setter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n if span.get_context() == trace.INVALID_SPAN_CONTEXT:\n return\n\n sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0\n set_in_carrier(\n carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),\n )\n set_in_carrier(\n carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)\n )\n if span.parent is not None:\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span.parent.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n\ndef format_trace_id(trace_id: int) -> str:\n \"\"\"Format the trace id according to b3 specification.\"\"\"\n return format(trace_id, \"032x\")\n\n\ndef format_span_id(span_id: int) -> str:\n \"\"\"Format the span id according to b3 specification.\"\"\"\n return format(span_id, \"016x\")\n\n\ndef _extract_first_element(\n items: typing.Iterable[HTTPTextFormatT],\n) -> typing.Optional[HTTPTextFormatT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py"}]} | 1,893 | 582 |
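The fix in this record hinges on validating the B3 header values before converting them: trace ids must be 16 or 32 hex characters and span ids exactly 16, otherwise new ids are generated and sampling is forced off. A self-contained sketch of that check, using the same regexes the patch introduces (the random fallbacks are only stand-ins for the SDK's `generate_trace_id()`/`generate_span_id()`):

```python
import random
from re import compile as re_compile

_TRACE_ID_RE = re_compile(r"[\da-fA-F]{16}|[\da-fA-F]{32}")
_SPAN_ID_RE = re_compile(r"[\da-fA-F]{16}")

def parse_b3_ids(trace_id_header, span_id_header):
    """Return (trace_id, span_id, headers_were_valid); regenerate ids on bad input."""
    if (_TRACE_ID_RE.fullmatch(trace_id_header) is None
            or _SPAN_ID_RE.fullmatch(span_id_header) is None):
        # Illustrative stand-ins for the SDK's id generators.
        return random.getrandbits(128), random.getrandbits(64), False
    return int(trace_id_header, 16), int(span_id_header, 16), True

print(parse_b3_ids("463ac35c9f6413ad48485a3953bb6124", "0020000000000001"))
print(parse_b3_ids("not-a-hex-id", "0020000000000001"))  # ids regenerated
```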
gh_patches_debug_25471 | rasdani/github-patches | git_diff | StackStorm__st2-5383 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Trigger name collision workaround
This addresses the Jinja trigger name collision noted in issue #4641
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `contrib/core/actions/inject_trigger.py`
Content:
```
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17
18 from st2common.runners.base_action import Action
19
20 __all__ = ["InjectTriggerAction"]
21
22
23 class InjectTriggerAction(Action):
24 def run(self, trigger, payload=None, trace_tag=None):
25 payload = payload or {}
26
27 datastore_service = self.action_service.datastore_service
28 client = datastore_service.get_api_client()
29
30 # Dispatch the trigger using the /webhooks/st2 API endpoint
31 # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection
32 # results in a TriggerInstanceDB database object creation or not. The object is created
33 # inside rulesengine service and could fail due to the user providing an invalid trigger
34 # reference or similar.
35 self.logger.debug(
36 'Injecting trigger "%s" with payload="%s"' % (trigger, str(payload))
37 )
38 result = client.webhooks.post_generic_webhook(
39 trigger=trigger, payload=payload, trace_tag=trace_tag
40 )
41
42 return result
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/contrib/core/actions/inject_trigger.py b/contrib/core/actions/inject_trigger.py
--- a/contrib/core/actions/inject_trigger.py
+++ b/contrib/core/actions/inject_trigger.py
@@ -21,7 +21,7 @@
class InjectTriggerAction(Action):
- def run(self, trigger, payload=None, trace_tag=None):
+ def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):
payload = payload or {}
datastore_service = self.action_service.datastore_service
@@ -32,6 +32,18 @@
# results in a TriggerInstanceDB database object creation or not. The object is created
# inside rulesengine service and could fail due to the user providing an invalid trigger
# reference or similar.
+
+ # Raise an error if both trigger and trigger_name are specified
+ if trigger and trigger_name:
+ raise ValueError(
+ "Parameters `trigger` and `trigger_name` are mutually exclusive."
+ )
+
+ # Raise an error if neither trigger nor trigger_name are specified
+ if not trigger and not trigger_name:
+ raise ValueError("You must include the `trigger_name` parameter.")
+
+ trigger = trigger if trigger else trigger_name
self.logger.debug(
'Injecting trigger "%s" with payload="%s"' % (trigger, str(payload))
)
| {"golden_diff": "diff --git a/contrib/core/actions/inject_trigger.py b/contrib/core/actions/inject_trigger.py\n--- a/contrib/core/actions/inject_trigger.py\n+++ b/contrib/core/actions/inject_trigger.py\n@@ -21,7 +21,7 @@\n \n \n class InjectTriggerAction(Action):\n- def run(self, trigger, payload=None, trace_tag=None):\n+ def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):\n payload = payload or {}\n \n datastore_service = self.action_service.datastore_service\n@@ -32,6 +32,18 @@\n # results in a TriggerInstanceDB database object creation or not. The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n+\n+ # Raise an error if both trigger and trigger_name are specified\n+ if trigger and trigger_name:\n+ raise ValueError(\n+ \"Parameters `trigger` and `trigger_name` are mutually exclusive.\"\n+ )\n+\n+ # Raise an error if neither trigger nor trigger_name are specified\n+ if not trigger and not trigger_name:\n+ raise ValueError(\"You must include the `trigger_name` parameter.\")\n+\n+ trigger = trigger if trigger else trigger_name\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n", "issue": "Trigger name collision workaround\nThis addresses the jinja trigger name collision noted in issue #4641\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2common.runners.base_action import Action\n\n__all__ = [\"InjectTriggerAction\"]\n\n\nclass InjectTriggerAction(Action):\n def run(self, trigger, payload=None, trace_tag=None):\n payload = payload or {}\n\n datastore_service = self.action_service.datastore_service\n client = datastore_service.get_api_client()\n\n # Dispatch the trigger using the /webhooks/st2 API endpoint\n # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection\n # results in a TriggerInstanceDB database object creation or not. 
The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n result = client.webhooks.post_generic_webhook(\n trigger=trigger, payload=payload, trace_tag=trace_tag\n )\n\n return result\n", "path": "contrib/core/actions/inject_trigger.py"}], "after_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2common.runners.base_action import Action\n\n__all__ = [\"InjectTriggerAction\"]\n\n\nclass InjectTriggerAction(Action):\n def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):\n payload = payload or {}\n\n datastore_service = self.action_service.datastore_service\n client = datastore_service.get_api_client()\n\n # Dispatch the trigger using the /webhooks/st2 API endpoint\n # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection\n # results in a TriggerInstanceDB database object creation or not. The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n\n # Raise an error if both trigger and trigger_name are specified\n if trigger and trigger_name:\n raise ValueError(\n \"Parameters `trigger` and `trigger_name` are mutually exclusive.\"\n )\n\n # Raise an error if neither trigger nor trigger_name are specified\n if not trigger and not trigger_name:\n raise ValueError(\"You must include the `trigger_name` parameter.\")\n\n trigger = trigger if trigger else trigger_name\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n result = client.webhooks.post_generic_webhook(\n trigger=trigger, payload=payload, trace_tag=trace_tag\n )\n\n return result\n", "path": "contrib/core/actions/inject_trigger.py"}]} | 722 | 300 |
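The essence of this change is an argument-validation step: `trigger` and `trigger_name` are mutually exclusive, at least one must be supplied, and whichever is present becomes the trigger reference. A minimal standalone sketch of that rule (the trigger reference in the usage line is just an example string):

```python
def resolve_trigger(trigger=None, trigger_name=None):
    """Return the trigger reference to dispatch, enforcing mutual exclusion."""
    if trigger and trigger_name:
        raise ValueError("Parameters `trigger` and `trigger_name` are mutually exclusive.")
    if not trigger and not trigger_name:
        raise ValueError("You must include the `trigger_name` parameter.")
    return trigger if trigger else trigger_name

print(resolve_trigger(trigger_name="examples.sample_trigger"))  # examples.sample_trigger
```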
gh_patches_debug_40529 | rasdani/github-patches | git_diff | nautobot__nautobot-1148 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove Custom Fields from Admin UI
### Proposed Changes
Remove custom fields from Admin UI. This should be as simple as deleting a bunch of code from `nautobot/extras/admin.py` that's no longer needed.
### Justification
Now that we have custom field management in the regular UI (#735, #997), the admin UI for custom field management is redundant.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/extras/admin.py`
Content:
```
1 from db_file_storage.form_widgets import DBAdminClearableFileInput
2 from django import forms
3 from django.contrib import admin, messages
4 from django.db import transaction
5 from django.db.models import ProtectedError
6
7 from .models import CustomField, CustomFieldChoice, FileProxy, JobResult
8
9
10 def order_content_types(field):
11 """
12 Order the list of available ContentTypes by application
13 """
14 queryset = field.queryset.order_by("app_label", "model")
15 field.choices = [(ct.pk, "{} > {}".format(ct.app_label, ct.name)) for ct in queryset]
16
17
18 #
19 # Custom fields
20 #
21
22
23 class CustomFieldForm(forms.ModelForm):
24 class Meta:
25 model = CustomField
26 exclude = []
27 widgets = {
28 "default": forms.TextInput(),
29 "validation_regex": forms.Textarea(
30 attrs={
31 "cols": 80,
32 "rows": 3,
33 }
34 ),
35 }
36
37 def __init__(self, *args, **kwargs):
38 super().__init__(*args, **kwargs)
39
40 order_content_types(self.fields["content_types"])
41
42
43 class CustomFieldChoiceAdmin(admin.TabularInline):
44 """
45 Defines the inline formset factory that handles choices for selection type custom fields.
46 The `extra` defines the default number of inline rows that appear in the UI.
47 """
48
49 model = CustomFieldChoice
50 extra = 5
51
52
53 @admin.register(CustomField)
54 class CustomFieldAdmin(admin.ModelAdmin):
55 """
56 Define the structure and composition of the custom field form in the admin panel.
57 """
58
59 actions = None
60 form = CustomFieldForm
61 inlines = [CustomFieldChoiceAdmin]
62 list_display = [
63 "name",
64 "models",
65 "type",
66 "required",
67 "filter_logic",
68 "default",
69 "weight",
70 "description",
71 ]
72 list_filter = [
73 "type",
74 "required",
75 "content_types",
76 ]
77 fieldsets = (
78 (
79 "Custom Field",
80 {
81 "fields": (
82 "type",
83 "name",
84 "weight",
85 "label",
86 "description",
87 "required",
88 "default",
89 "filter_logic",
90 )
91 },
92 ),
93 (
94 "Assignment",
95 {
96 "description": "A custom field must be assigned to one or more object types.",
97 "fields": ("content_types",),
98 },
99 ),
100 (
101 "Validation Rules",
102 {
103 "fields": (
104 "validation_minimum",
105 "validation_maximum",
106 "validation_regex",
107 ),
108 "classes": ("monospace",),
109 },
110 ),
111 )
112
113 def models(self, obj):
114 return ", ".join([ct.name for ct in obj.content_types.all()])
115
116 @transaction.atomic
117 def save_formset(self, request, form, formset, change):
118 # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...
119 if formset.model != CustomFieldChoice:
120 return super().save_formset(request, form, formset, change)
121 instances = formset.save(commit=False)
122 for instance in instances:
123 instance.save()
124 formset.save_m2m()
125 for obj in formset.deleted_objects:
126 try:
127 obj.delete()
128 except ProtectedError as e:
129 self.message_user(request, e, level=messages.ERROR)
130 raise e
131
132
133 #
134 # File attachments
135 #
136
137
138 class FileProxyForm(forms.ModelForm):
139 class Meta:
140 model = FileProxy
141 exclude = []
142 widgets = {
143 "file": DBAdminClearableFileInput,
144 }
145
146
147 @admin.register(FileProxy)
148 class FileProxyAdmin(admin.ModelAdmin):
149 form = FileProxyForm
150 list_display = ["name", "uploaded_at"]
151 list_filter = ["uploaded_at"]
152
153
154 #
155 # Job results (jobs, scripts, reports, Git repository sync, etc.)
156 #
157
158
159 @admin.register(JobResult)
160 class JobResultAdmin(admin.ModelAdmin):
161 list_display = [
162 "obj_type",
163 "name",
164 "created",
165 "completed",
166 "user",
167 "status",
168 ]
169 fields = [
170 "obj_type",
171 "name",
172 "created",
173 "completed",
174 "user",
175 "status",
176 "data",
177 "job_id",
178 ]
179 list_filter = [
180 "status",
181 ]
182 readonly_fields = fields
183
184 def has_add_permission(self, request):
185 return False
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/extras/admin.py b/nautobot/extras/admin.py
--- a/nautobot/extras/admin.py
+++ b/nautobot/extras/admin.py
@@ -1,10 +1,8 @@
from db_file_storage.form_widgets import DBAdminClearableFileInput
from django import forms
-from django.contrib import admin, messages
-from django.db import transaction
-from django.db.models import ProtectedError
+from django.contrib import admin
-from .models import CustomField, CustomFieldChoice, FileProxy, JobResult
+from .models import FileProxy, JobResult
def order_content_types(field):
@@ -15,121 +13,6 @@
field.choices = [(ct.pk, "{} > {}".format(ct.app_label, ct.name)) for ct in queryset]
-#
-# Custom fields
-#
-
-
-class CustomFieldForm(forms.ModelForm):
- class Meta:
- model = CustomField
- exclude = []
- widgets = {
- "default": forms.TextInput(),
- "validation_regex": forms.Textarea(
- attrs={
- "cols": 80,
- "rows": 3,
- }
- ),
- }
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- order_content_types(self.fields["content_types"])
-
-
-class CustomFieldChoiceAdmin(admin.TabularInline):
- """
- Defines the inline formset factory that handles choices for selection type custom fields.
- The `extra` defines the default number of inline rows that appear in the UI.
- """
-
- model = CustomFieldChoice
- extra = 5
-
-
[email protected](CustomField)
-class CustomFieldAdmin(admin.ModelAdmin):
- """
- Define the structure and composition of the custom field form in the admin panel.
- """
-
- actions = None
- form = CustomFieldForm
- inlines = [CustomFieldChoiceAdmin]
- list_display = [
- "name",
- "models",
- "type",
- "required",
- "filter_logic",
- "default",
- "weight",
- "description",
- ]
- list_filter = [
- "type",
- "required",
- "content_types",
- ]
- fieldsets = (
- (
- "Custom Field",
- {
- "fields": (
- "type",
- "name",
- "weight",
- "label",
- "description",
- "required",
- "default",
- "filter_logic",
- )
- },
- ),
- (
- "Assignment",
- {
- "description": "A custom field must be assigned to one or more object types.",
- "fields": ("content_types",),
- },
- ),
- (
- "Validation Rules",
- {
- "fields": (
- "validation_minimum",
- "validation_maximum",
- "validation_regex",
- ),
- "classes": ("monospace",),
- },
- ),
- )
-
- def models(self, obj):
- return ", ".join([ct.name for ct in obj.content_types.all()])
-
- @transaction.atomic
- def save_formset(self, request, form, formset, change):
- # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...
- if formset.model != CustomFieldChoice:
- return super().save_formset(request, form, formset, change)
- instances = formset.save(commit=False)
- for instance in instances:
- instance.save()
- formset.save_m2m()
- for obj in formset.deleted_objects:
- try:
- obj.delete()
- except ProtectedError as e:
- self.message_user(request, e, level=messages.ERROR)
- raise e
-
-
#
# File attachments
#
| {"golden_diff": "diff --git a/nautobot/extras/admin.py b/nautobot/extras/admin.py\n--- a/nautobot/extras/admin.py\n+++ b/nautobot/extras/admin.py\n@@ -1,10 +1,8 @@\n from db_file_storage.form_widgets import DBAdminClearableFileInput\n from django import forms\n-from django.contrib import admin, messages\n-from django.db import transaction\n-from django.db.models import ProtectedError\n+from django.contrib import admin\n \n-from .models import CustomField, CustomFieldChoice, FileProxy, JobResult\n+from .models import FileProxy, JobResult\n \n \n def order_content_types(field):\n@@ -15,121 +13,6 @@\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n \n \n-#\n-# Custom fields\n-#\n-\n-\n-class CustomFieldForm(forms.ModelForm):\n- class Meta:\n- model = CustomField\n- exclude = []\n- widgets = {\n- \"default\": forms.TextInput(),\n- \"validation_regex\": forms.Textarea(\n- attrs={\n- \"cols\": 80,\n- \"rows\": 3,\n- }\n- ),\n- }\n-\n- def __init__(self, *args, **kwargs):\n- super().__init__(*args, **kwargs)\n-\n- order_content_types(self.fields[\"content_types\"])\n-\n-\n-class CustomFieldChoiceAdmin(admin.TabularInline):\n- \"\"\"\n- Defines the inline formset factory that handles choices for selection type custom fields.\n- The `extra` defines the default number of inline rows that appear in the UI.\n- \"\"\"\n-\n- model = CustomFieldChoice\n- extra = 5\n-\n-\[email protected](CustomField)\n-class CustomFieldAdmin(admin.ModelAdmin):\n- \"\"\"\n- Define the structure and composition of the custom field form in the admin panel.\n- \"\"\"\n-\n- actions = None\n- form = CustomFieldForm\n- inlines = [CustomFieldChoiceAdmin]\n- list_display = [\n- \"name\",\n- \"models\",\n- \"type\",\n- \"required\",\n- \"filter_logic\",\n- \"default\",\n- \"weight\",\n- \"description\",\n- ]\n- list_filter = [\n- \"type\",\n- \"required\",\n- \"content_types\",\n- ]\n- fieldsets = (\n- (\n- \"Custom Field\",\n- {\n- \"fields\": (\n- \"type\",\n- \"name\",\n- \"weight\",\n- \"label\",\n- \"description\",\n- \"required\",\n- \"default\",\n- \"filter_logic\",\n- )\n- },\n- ),\n- (\n- \"Assignment\",\n- {\n- \"description\": \"A custom field must be assigned to one or more object types.\",\n- \"fields\": (\"content_types\",),\n- },\n- ),\n- (\n- \"Validation Rules\",\n- {\n- \"fields\": (\n- \"validation_minimum\",\n- \"validation_maximum\",\n- \"validation_regex\",\n- ),\n- \"classes\": (\"monospace\",),\n- },\n- ),\n- )\n-\n- def models(self, obj):\n- return \", \".join([ct.name for ct in obj.content_types.all()])\n-\n- @transaction.atomic\n- def save_formset(self, request, form, formset, change):\n- # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...\n- if formset.model != CustomFieldChoice:\n- return super().save_formset(request, form, formset, change)\n- instances = formset.save(commit=False)\n- for instance in instances:\n- instance.save()\n- formset.save_m2m()\n- for obj in formset.deleted_objects:\n- try:\n- obj.delete()\n- except ProtectedError as e:\n- self.message_user(request, e, level=messages.ERROR)\n- raise e\n-\n-\n #\n # File attachments\n #\n", "issue": "Remove Custom Fields from Admin UI\n### Proposed Changes\r\n\r\nRemove custom fields from Admin UI. 
This should be as simple as deleting a bunch of code from `nautobot/extras/admin.py` that's no longer needed.\r\n\r\n### Justification\r\n\r\nNow that we have custom field management in the regular UI (#735, #997), the admin UI for custom field management is redundant.\n", "before_files": [{"content": "from db_file_storage.form_widgets import DBAdminClearableFileInput\nfrom django import forms\nfrom django.contrib import admin, messages\nfrom django.db import transaction\nfrom django.db.models import ProtectedError\n\nfrom .models import CustomField, CustomFieldChoice, FileProxy, JobResult\n\n\ndef order_content_types(field):\n \"\"\"\n Order the list of available ContentTypes by application\n \"\"\"\n queryset = field.queryset.order_by(\"app_label\", \"model\")\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n\n\n#\n# Custom fields\n#\n\n\nclass CustomFieldForm(forms.ModelForm):\n class Meta:\n model = CustomField\n exclude = []\n widgets = {\n \"default\": forms.TextInput(),\n \"validation_regex\": forms.Textarea(\n attrs={\n \"cols\": 80,\n \"rows\": 3,\n }\n ),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n order_content_types(self.fields[\"content_types\"])\n\n\nclass CustomFieldChoiceAdmin(admin.TabularInline):\n \"\"\"\n Defines the inline formset factory that handles choices for selection type custom fields.\n The `extra` defines the default number of inline rows that appear in the UI.\n \"\"\"\n\n model = CustomFieldChoice\n extra = 5\n\n\[email protected](CustomField)\nclass CustomFieldAdmin(admin.ModelAdmin):\n \"\"\"\n Define the structure and composition of the custom field form in the admin panel.\n \"\"\"\n\n actions = None\n form = CustomFieldForm\n inlines = [CustomFieldChoiceAdmin]\n list_display = [\n \"name\",\n \"models\",\n \"type\",\n \"required\",\n \"filter_logic\",\n \"default\",\n \"weight\",\n \"description\",\n ]\n list_filter = [\n \"type\",\n \"required\",\n \"content_types\",\n ]\n fieldsets = (\n (\n \"Custom Field\",\n {\n \"fields\": (\n \"type\",\n \"name\",\n \"weight\",\n \"label\",\n \"description\",\n \"required\",\n \"default\",\n \"filter_logic\",\n )\n },\n ),\n (\n \"Assignment\",\n {\n \"description\": \"A custom field must be assigned to one or more object types.\",\n \"fields\": (\"content_types\",),\n },\n ),\n (\n \"Validation Rules\",\n {\n \"fields\": (\n \"validation_minimum\",\n \"validation_maximum\",\n \"validation_regex\",\n ),\n \"classes\": (\"monospace\",),\n },\n ),\n )\n\n def models(self, obj):\n return \", \".join([ct.name for ct in obj.content_types.all()])\n\n @transaction.atomic\n def save_formset(self, request, form, formset, change):\n # TODO(John): revisit this when custom fields are moved out of admin... 
there is a better way...\n if formset.model != CustomFieldChoice:\n return super().save_formset(request, form, formset, change)\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n for obj in formset.deleted_objects:\n try:\n obj.delete()\n except ProtectedError as e:\n self.message_user(request, e, level=messages.ERROR)\n raise e\n\n\n#\n# File attachments\n#\n\n\nclass FileProxyForm(forms.ModelForm):\n class Meta:\n model = FileProxy\n exclude = []\n widgets = {\n \"file\": DBAdminClearableFileInput,\n }\n\n\[email protected](FileProxy)\nclass FileProxyAdmin(admin.ModelAdmin):\n form = FileProxyForm\n list_display = [\"name\", \"uploaded_at\"]\n list_filter = [\"uploaded_at\"]\n\n\n#\n# Job results (jobs, scripts, reports, Git repository sync, etc.)\n#\n\n\[email protected](JobResult)\nclass JobResultAdmin(admin.ModelAdmin):\n list_display = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n ]\n fields = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n \"data\",\n \"job_id\",\n ]\n list_filter = [\n \"status\",\n ]\n readonly_fields = fields\n\n def has_add_permission(self, request):\n return False\n", "path": "nautobot/extras/admin.py"}], "after_files": [{"content": "from db_file_storage.form_widgets import DBAdminClearableFileInput\nfrom django import forms\nfrom django.contrib import admin\n\nfrom .models import FileProxy, JobResult\n\n\ndef order_content_types(field):\n \"\"\"\n Order the list of available ContentTypes by application\n \"\"\"\n queryset = field.queryset.order_by(\"app_label\", \"model\")\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n\n\n#\n# File attachments\n#\n\n\nclass FileProxyForm(forms.ModelForm):\n class Meta:\n model = FileProxy\n exclude = []\n widgets = {\n \"file\": DBAdminClearableFileInput,\n }\n\n\[email protected](FileProxy)\nclass FileProxyAdmin(admin.ModelAdmin):\n form = FileProxyForm\n list_display = [\"name\", \"uploaded_at\"]\n list_filter = [\"uploaded_at\"]\n\n\n#\n# Job results (jobs, scripts, reports, Git repository sync, etc.)\n#\n\n\[email protected](JobResult)\nclass JobResultAdmin(admin.ModelAdmin):\n list_display = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n ]\n fields = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n \"data\",\n \"job_id\",\n ]\n list_filter = [\n \"status\",\n ]\n readonly_fields = fields\n\n def has_add_permission(self, request):\n return False\n", "path": "nautobot/extras/admin.py"}]} | 1,737 | 875 |
gh_patches_debug_14993 | rasdani/github-patches | git_diff | PrefectHQ__prefect-1583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add example code block to `switch` docstring
I recently realized I hadn't touched the `switch` code in a long time, and I would've really appreciated an example to work off of. Instead, I ended up looking at our tests, which most users won't want to do. Relevant doc: https://docs.prefect.io/api/unreleased/tasks/control_flow.html#prefect-tasks-control-flow-conditional-switch
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/tasks/control_flow/conditional.py`
Content:
```
1 from typing import Any, Dict
2
3 import prefect
4 from prefect import Task
5 from prefect.engine import signals
6 from prefect.engine.result import NoResult
7
8 __all__ = ["switch", "ifelse"]
9
10
11 class Merge(Task):
12 def __init__(self, **kwargs) -> None:
13 if kwargs.setdefault("skip_on_upstream_skip", False):
14 raise ValueError("Merge tasks must have `skip_on_upstream_skip=False`.")
15 super().__init__(**kwargs)
16
17 def run(self, **task_results: Any) -> Any:
18 return next((v for v in task_results.values() if v != NoResult), None)
19
20
21 class CompareValue(Task):
22 """
23 This task stores a `value` at initialization and compares it to a `value` received at runtime.
24 If the values don't match, it raises a SKIP exception.
25
26 Args:
27 - value (Any): the value this task will attempt to match when it runs
28 - **kwargs: keyword arguments for the Task
29 """
30
31 def __init__(self, value: Any, **kwargs: Any):
32 self.value = value
33 kwargs.setdefault("name", 'CompareValue: "{}"'.format(value))
34 super().__init__(**kwargs)
35
36 def run(self, value: Any) -> None:
37 """
38 Raises a SKIP signal if the passed value does not match the task's match value;
39 succeeds silently otherwise.
40
41 Args:
42 - value (Any): the value that will be matched against the task's value.
43 """
44 if value != self.value:
45 raise signals.SKIP(
46 'Provided value "{}" did not match "{}"'.format(value, self.value)
47 )
48
49
50 def switch(condition: Task, cases: Dict[Any, Task]) -> None:
51 """
52 Adds a SWITCH to a workflow.
53
54 The condition task is evaluated and the result is compared to the keys of the cases
55 dictionary. The task corresponding to the matching key is run; all other tasks are
56 skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
57 `skip_on_upstream_skip=False`.
58
59 Args:
60 - condition (Task): a task whose result forms the condition for the switch
61 - cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
62 The value of the `condition` task will be compared to the keys of this dict, and
63 the matching task will be executed.
64
65 Raises:
66 - PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
67 then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
68 is passing a list of tasks as one of the cases, which adds the `List` task
69 to the switch condition but leaves the tasks themselves upstream.
70 """
71
72 with prefect.tags("switch"):
73 for value, task in cases.items():
74 task = prefect.utilities.tasks.as_task(task)
75 match_condition = CompareValue(value=value).bind(value=condition)
76 task.set_dependencies(upstream_tasks=[match_condition])
77
78
79 def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
80 """
81 Builds a conditional branch into a workflow.
82
83 If the condition evaluates True(ish), the true_task will run. If it
84 evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
85 all downstream tasks that don't set `skip_on_upstream_skip=False`.
86
87 Args:
88 - condition (Task): a task whose boolean result forms the condition for the ifelse
89 - true_task (Task): a task that will be executed if the condition is True
90 - false_task (Task): a task that will be executed if the condition is False
91 """
92
93 switch(condition=condition, cases={True: true_task, False: false_task})
94
95
96 def merge(*tasks: Task) -> Task:
97 """
98 Merges conditional branches back together.
99
100 A conditional branch in a flow results in one or more tasks proceeding and one or
101 more tasks skipping. It is often convenient to merge those branches back into a
102 single result. This function is a simple way to achieve that goal.
103
104 The merge will return the first real result it encounters, or `None`. If multiple
105 tasks might return a result, group them with a list.
106
107 Example:
108 ```python
109 with Flow("My Flow"):
110 true_branch = ActionIfTrue()
111 false_branch = ActionIfFalse()
112 ifelse(CheckCondition(), true_branch, false_branch)
113
114 merged_result = merge(true_branch, false_branch)
115 ```
116
117 Args:
118 - *tasks (Task): tasks whose results should be merged into a single result. The tasks are
119 assumed to all sit downstream of different `switch` branches, such that only
120 one of them will contain a result and the others will all be skipped.
121
122 Returns:
123 - Task: a Task representing the merged result.
124
125 """
126 return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py
--- a/src/prefect/tasks/control_flow/conditional.py
+++ b/src/prefect/tasks/control_flow/conditional.py
@@ -56,6 +56,24 @@
skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
`skip_on_upstream_skip=False`.
+ Example:
+ ```python
+ @task
+ def condition():
+ return "b" # returning 'b' will take the b_branch
+
+ @task
+ def a_branch():
+ return "A Branch"
+
+ @task
+ def b_branch():
+ return "B Branch"
+
+ with Flow("switch-flow") as flow:
+ switch(condition, dict(a=a_branch, b=b_branch))
+ ```
+
Args:
- condition (Task): a task whose result forms the condition for the switch
- cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
| {"golden_diff": "diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py\n--- a/src/prefect/tasks/control_flow/conditional.py\n+++ b/src/prefect/tasks/control_flow/conditional.py\n@@ -56,6 +56,24 @@\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n \n+ Example:\n+ ```python\n+ @task\n+ def condition():\n+ return \"b\" # returning 'b' will take the b_branch\n+\n+ @task\n+ def a_branch():\n+ return \"A Branch\"\n+\n+ @task\n+ def b_branch():\n+ return \"B Branch\"\n+\n+ with Flow(\"switch-flow\") as flow:\n+ switch(condition, dict(a=a_branch, b=b_branch))\n+ ```\n+\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n", "issue": "Add example code block to `switch` docstring\nI recently realized I hadn't touched the `switch` code in a long time, and I would've really appreciated an example to work off of. Instead, I ended up looking at our tests which most users won't want to do. Relevant doc: https://docs.prefect.io/api/unreleased/tasks/control_flow.html#prefect-tasks-control-flow-conditional-switch\n", "before_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\nfrom prefect.engine.result import NoResult\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next((v for v in task_results.values() if v != NoResult), None)\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. 
Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n switch(condition=condition, cases={True: true_task, False: false_task})\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. 
The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}], "after_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\nfrom prefect.engine.result import NoResult\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next((v for v in task_results.values() if v != NoResult), None)\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. 
The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n switch(condition=condition, cases={True: true_task, False: false_task})\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}]} | 1,724 | 238 |
gh_patches_debug_30157 | rasdani/github-patches | git_diff | xonsh__xonsh-3796 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bad documentation or bug: _.rtn does not work
[In the Documentation](https://xon.sh/bash_to_xsh.html) you write that `_.rtn` is the equivalent of the shell `$?` and that it `Returns the exit code, or status, of the previous command.`. Either I understand the documentation wrong or there is a bug:
```
#!/usr/bin/env xonsh
echo "abc"
print(_.rtn)
```
Outputs
```
abc
Traceback (most recent call last):
File "/home/volker/.local/bin/xonsh", line 8, in <module>
sys.exit(main())
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 426, in main
_failback_to_other_shells(args, err)
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 373, in _failback_to_other_shells
raise err
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 424, in main
sys.exit(main_xonsh(args))
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 471, in main_xonsh
run_script_with_cache(
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py", line 162, in run_script_with_cache
run_compiled_code(ccode, glb, loc, mode)
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py", line 67, in run_compiled_code
func(code, glb, loc)
File "./generateIso.xonsh", line 24, in <module>
print(_.rtn)
NameError: name '_' is not defined
```
--- END ISSUE ---
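As context for the issue above (and not part of the fix), the traceback suggests `_` is only bound in interactive sessions; in a script the return code can instead be captured explicitly with xonsh's `!()` operator. The sketch below is illustrative only and assumes standard xonsh capture semantics:
```
#!/usr/bin/env xonsh
p = !(echo "abc")   # capture the command instead of relying on `_`
print(p.rtn)        # exit status of the captured command
```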
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xontrib/bashisms.py`
Content:
```
1 """Bash-like interface extensions for xonsh."""
2 import shlex
3 import sys
4 import re
5 import builtins
6
7
8 __all__ = ()
9
10
11 @events.on_transform_command
12 def bash_preproc(cmd, **kw):
13 bang_previous = {
14 "!": lambda x: x,
15 "$": lambda x: shlex.split(x)[-1],
16 "^": lambda x: shlex.split(x)[0],
17 "*": lambda x: " ".join(shlex.split(x)[1:]),
18 }
19
20 def replace_bang(m):
21 arg = m.group(1)
22 inputs = __xonsh__.history.inps
23
24 # Dissect the previous command.
25 if arg in bang_previous:
26 try:
27 return bang_previous[arg](inputs[-1])
28 except IndexError:
29 print("xonsh: no history for '!{}'".format(arg))
30 return ""
31
32 # Look back in history for a matching command.
33 else:
34 try:
35 return next((x for x in reversed(inputs) if x.startswith(arg)))
36 except StopIteration:
37 print("xonsh: no previous commands match '!{}'".format(arg))
38 return ""
39
40 return re.sub(r"!([!$^*]|[\w]+)", replace_bang, cmd)
41
42
43 def alias(args, stdin=None):
44 ret = 0
45
46 if args:
47 for arg in args:
48 if "=" in arg:
49 # shlex.split to remove quotes, e.g. "foo='echo hey'" into
50 # "foo=echo hey"
51 name, cmd = shlex.split(arg)[0].split("=", 1)
52 aliases[name] = shlex.split(cmd)
53 elif arg in aliases:
54 print("{}={}".format(arg, aliases[arg]))
55 else:
56 print("alias: {}: not found".format(arg), file=sys.stderr)
57 ret = 1
58 else:
59 for alias, cmd in aliases.items():
60 print("{}={}".format(alias, cmd))
61
62 return ret
63
64
65 aliases["alias"] = alias
66 builtins.__xonsh__.env["THREAD_SUBPROCS"] = False
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xontrib/bashisms.py b/xontrib/bashisms.py
--- a/xontrib/bashisms.py
+++ b/xontrib/bashisms.py
@@ -64,3 +64,86 @@
aliases["alias"] = alias
builtins.__xonsh__.env["THREAD_SUBPROCS"] = False
+
+
+def _unset(args):
+ if not args:
+ print("Usage: unset ENV_VARIABLE", file=sys.stderr)
+
+ for v in args:
+ try:
+ __xonsh__.env.pop(v)
+ except KeyError:
+ print(f"{v} not found", file=sys.stderr)
+
+
+aliases["unset"] = _unset
+
+
+def _export(args):
+ if not args:
+ print("Usage: export ENV_VARIABLE=VALUE", file=sys.stderr)
+
+ for eq in args:
+ if "=" in eq:
+ name, val = shlex.split(eq)[0].split("=", 1)
+ __xonsh__.env[name] = val
+ else:
+ print(f"{eq} equal sign not found", file=sys.stderr)
+
+
+aliases["export"] = _export
+
+
+def _set(args):
+ arg = args[0]
+ if arg == "-e":
+ __xonsh__.env["RAISE_SUBPROC_ERROR"] = True
+ elif arg == "+e":
+ __xonsh__.env["RAISE_SUBPROC_ERROR"] = False
+ elif arg == "-x":
+ __xonsh__.env["XONSH_TRACE_SUBPROC"] = True
+ elif arg == "+x":
+ __xonsh__.env["XONSH_TRACE_SUBPROC"] = False
+ else:
+ print(
+ "Not supported in xontrib bashisms.\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py",
+ file=sys.stderr,
+ )
+
+
+aliases["set"] = _set
+
+
+def _shopt(args):
+
+ supported_shopt = ["DOTGLOB"]
+
+ args_len = len(args)
+ if args_len == 0:
+ for so in supported_shopt:
+ onoff = "on" if so in __xonsh__.env and __xonsh__.env[so] else "off"
+ print(f"dotglob\t{onoff}")
+ return
+ elif args_len < 2 or args[0] in ["-h", "--help"]:
+ print(f'Usage: shopt <-s|-u> <{"|".join(supported_shopt).lower()}>')
+ return
+
+ opt = args[0]
+ optname = args[1]
+
+ if opt == "-s" and optname == "dotglob":
+ __xonsh__.env["DOTGLOB"] = True
+ elif opt == "-u" and optname == "dotglob":
+ __xonsh__.env["DOTGLOB"] = False
+ else:
+ print(
+ "Not supported in xontrib bashisms.\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py",
+ file=sys.stderr,
+ )
+
+
+aliases["shopt"] = _shopt
+
+
+aliases["complete"] = "completer list"
| {"golden_diff": "diff --git a/xontrib/bashisms.py b/xontrib/bashisms.py\n--- a/xontrib/bashisms.py\n+++ b/xontrib/bashisms.py\n@@ -64,3 +64,86 @@\n \n aliases[\"alias\"] = alias\n builtins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n+\n+\n+def _unset(args):\n+ if not args:\n+ print(\"Usage: unset ENV_VARIABLE\", file=sys.stderr)\n+\n+ for v in args:\n+ try:\n+ __xonsh__.env.pop(v)\n+ except KeyError:\n+ print(f\"{v} not found\", file=sys.stderr)\n+\n+\n+aliases[\"unset\"] = _unset\n+\n+\n+def _export(args):\n+ if not args:\n+ print(\"Usage: export ENV_VARIABLE=VALUE\", file=sys.stderr)\n+\n+ for eq in args:\n+ if \"=\" in eq:\n+ name, val = shlex.split(eq)[0].split(\"=\", 1)\n+ __xonsh__.env[name] = val\n+ else:\n+ print(f\"{eq} equal sign not found\", file=sys.stderr)\n+\n+\n+aliases[\"export\"] = _export\n+\n+\n+def _set(args):\n+ arg = args[0]\n+ if arg == \"-e\":\n+ __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = True\n+ elif arg == \"+e\":\n+ __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = False\n+ elif arg == \"-x\":\n+ __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = True\n+ elif arg == \"+x\":\n+ __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = False\n+ else:\n+ print(\n+ \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n+ file=sys.stderr,\n+ )\n+\n+\n+aliases[\"set\"] = _set\n+\n+\n+def _shopt(args):\n+\n+ supported_shopt = [\"DOTGLOB\"]\n+\n+ args_len = len(args)\n+ if args_len == 0:\n+ for so in supported_shopt:\n+ onoff = \"on\" if so in __xonsh__.env and __xonsh__.env[so] else \"off\"\n+ print(f\"dotglob\\t{onoff}\")\n+ return\n+ elif args_len < 2 or args[0] in [\"-h\", \"--help\"]:\n+ print(f'Usage: shopt <-s|-u> <{\"|\".join(supported_shopt).lower()}>')\n+ return\n+\n+ opt = args[0]\n+ optname = args[1]\n+\n+ if opt == \"-s\" and optname == \"dotglob\":\n+ __xonsh__.env[\"DOTGLOB\"] = True\n+ elif opt == \"-u\" and optname == \"dotglob\":\n+ __xonsh__.env[\"DOTGLOB\"] = False\n+ else:\n+ print(\n+ \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n+ file=sys.stderr,\n+ )\n+\n+\n+aliases[\"shopt\"] = _shopt\n+\n+\n+aliases[\"complete\"] = \"completer list\"\n", "issue": "Bad documentation or bug: _.rtn does not work\n[In the Documentation](https://xon.sh/bash_to_xsh.html) you write that `_.rtn` is the equivalent of the shell `$?` and that it `Returns the exit code, or status, of the previous command.`. 
Either I understand the documentation wrong or there is a bug:\r\n```\r\n#!/usr/bin/env xonsh\r\necho \"abc\"\r\nprint(_.rtn)\r\n```\r\nOutputs\r\n```\r\nabc\r\nTraceback (most recent call last):\r\n File \"/home/volker/.local/bin/xonsh\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 426, in main\r\n _failback_to_other_shells(args, err)\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 373, in _failback_to_other_shells\r\n raise err\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 424, in main\r\n sys.exit(main_xonsh(args))\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 471, in main_xonsh\r\n run_script_with_cache(\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py\", line 162, in run_script_with_cache\r\n run_compiled_code(ccode, glb, loc, mode)\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py\", line 67, in run_compiled_code\r\n func(code, glb, loc)\r\n File \"./generateIso.xonsh\", line 24, in <module>\r\n print(_.rtn)\r\nNameError: name '_' is not defined\r\n```\n", "before_files": [{"content": "\"\"\"Bash-like interface extensions for xonsh.\"\"\"\nimport shlex\nimport sys\nimport re\nimport builtins\n\n\n__all__ = ()\n\n\[email protected]_transform_command\ndef bash_preproc(cmd, **kw):\n bang_previous = {\n \"!\": lambda x: x,\n \"$\": lambda x: shlex.split(x)[-1],\n \"^\": lambda x: shlex.split(x)[0],\n \"*\": lambda x: \" \".join(shlex.split(x)[1:]),\n }\n\n def replace_bang(m):\n arg = m.group(1)\n inputs = __xonsh__.history.inps\n\n # Dissect the previous command.\n if arg in bang_previous:\n try:\n return bang_previous[arg](inputs[-1])\n except IndexError:\n print(\"xonsh: no history for '!{}'\".format(arg))\n return \"\"\n\n # Look back in history for a matching command.\n else:\n try:\n return next((x for x in reversed(inputs) if x.startswith(arg)))\n except StopIteration:\n print(\"xonsh: no previous commands match '!{}'\".format(arg))\n return \"\"\n\n return re.sub(r\"!([!$^*]|[\\w]+)\", replace_bang, cmd)\n\n\ndef alias(args, stdin=None):\n ret = 0\n\n if args:\n for arg in args:\n if \"=\" in arg:\n # shlex.split to remove quotes, e.g. 
\"foo='echo hey'\" into\n # \"foo=echo hey\"\n name, cmd = shlex.split(arg)[0].split(\"=\", 1)\n aliases[name] = shlex.split(cmd)\n elif arg in aliases:\n print(\"{}={}\".format(arg, aliases[arg]))\n else:\n print(\"alias: {}: not found\".format(arg), file=sys.stderr)\n ret = 1\n else:\n for alias, cmd in aliases.items():\n print(\"{}={}\".format(alias, cmd))\n\n return ret\n\n\naliases[\"alias\"] = alias\nbuiltins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n", "path": "xontrib/bashisms.py"}], "after_files": [{"content": "\"\"\"Bash-like interface extensions for xonsh.\"\"\"\nimport shlex\nimport sys\nimport re\nimport builtins\n\n\n__all__ = ()\n\n\[email protected]_transform_command\ndef bash_preproc(cmd, **kw):\n bang_previous = {\n \"!\": lambda x: x,\n \"$\": lambda x: shlex.split(x)[-1],\n \"^\": lambda x: shlex.split(x)[0],\n \"*\": lambda x: \" \".join(shlex.split(x)[1:]),\n }\n\n def replace_bang(m):\n arg = m.group(1)\n inputs = __xonsh__.history.inps\n\n # Dissect the previous command.\n if arg in bang_previous:\n try:\n return bang_previous[arg](inputs[-1])\n except IndexError:\n print(\"xonsh: no history for '!{}'\".format(arg))\n return \"\"\n\n # Look back in history for a matching command.\n else:\n try:\n return next((x for x in reversed(inputs) if x.startswith(arg)))\n except StopIteration:\n print(\"xonsh: no previous commands match '!{}'\".format(arg))\n return \"\"\n\n return re.sub(r\"!([!$^*]|[\\w]+)\", replace_bang, cmd)\n\n\ndef alias(args, stdin=None):\n ret = 0\n\n if args:\n for arg in args:\n if \"=\" in arg:\n # shlex.split to remove quotes, e.g. \"foo='echo hey'\" into\n # \"foo=echo hey\"\n name, cmd = shlex.split(arg)[0].split(\"=\", 1)\n aliases[name] = shlex.split(cmd)\n elif arg in aliases:\n print(\"{}={}\".format(arg, aliases[arg]))\n else:\n print(\"alias: {}: not found\".format(arg), file=sys.stderr)\n ret = 1\n else:\n for alias, cmd in aliases.items():\n print(\"{}={}\".format(alias, cmd))\n\n return ret\n\n\naliases[\"alias\"] = alias\nbuiltins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n\n\ndef _unset(args):\n if not args:\n print(\"Usage: unset ENV_VARIABLE\", file=sys.stderr)\n\n for v in args:\n try:\n __xonsh__.env.pop(v)\n except KeyError:\n print(f\"{v} not found\", file=sys.stderr)\n\n\naliases[\"unset\"] = _unset\n\n\ndef _export(args):\n if not args:\n print(\"Usage: export ENV_VARIABLE=VALUE\", file=sys.stderr)\n\n for eq in args:\n if \"=\" in eq:\n name, val = shlex.split(eq)[0].split(\"=\", 1)\n __xonsh__.env[name] = val\n else:\n print(f\"{eq} equal sign not found\", file=sys.stderr)\n\n\naliases[\"export\"] = _export\n\n\ndef _set(args):\n arg = args[0]\n if arg == \"-e\":\n __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = True\n elif arg == \"+e\":\n __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = False\n elif arg == \"-x\":\n __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = True\n elif arg == \"+x\":\n __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = False\n else:\n print(\n \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n file=sys.stderr,\n )\n\n\naliases[\"set\"] = _set\n\n\ndef _shopt(args):\n\n supported_shopt = [\"DOTGLOB\"]\n\n args_len = len(args)\n if args_len == 0:\n for so in supported_shopt:\n onoff = \"on\" if so in __xonsh__.env and __xonsh__.env[so] else \"off\"\n print(f\"dotglob\\t{onoff}\")\n return\n elif args_len < 2 or args[0] in [\"-h\", \"--help\"]:\n print(f'Usage: shopt <-s|-u> <{\"|\".join(supported_shopt).lower()}>')\n return\n\n opt = args[0]\n optname = 
args[1]\n\n if opt == \"-s\" and optname == \"dotglob\":\n __xonsh__.env[\"DOTGLOB\"] = True\n elif opt == \"-u\" and optname == \"dotglob\":\n __xonsh__.env[\"DOTGLOB\"] = False\n else:\n print(\n \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n file=sys.stderr,\n )\n\n\naliases[\"shopt\"] = _shopt\n\n\naliases[\"complete\"] = \"completer list\"\n", "path": "xontrib/bashisms.py"}]} | 1,263 | 751 |
gh_patches_debug_13527 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spawner custom form validation
Are there ideas for allowing form validation for spawners that have a custom form?
I was thinking of raising an exception in `options_from_form()` and moving the `try` up by one line in [SpawnHandler](https://github.com/jupyter/jupyterhub/blob/master/jupyterhub/handlers/pages.py#L97).
--- END ISSUE ---
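A minimal sketch of the reordering proposed in the issue, assuming the `post()` body shown in the files below — moving `options_from_form()` inside the existing `try` so that a validation error re-renders the form instead of raising:
```python
try:
    options = user.spawner.options_from_form(form_options)
    yield self.spawn_single_user(user, options=options)
except Exception as e:
    self.log.error("Failed to spawn single-user server with form", exc_info=True)
    self.finish(self._render_form(str(e)))
    return
```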
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/handlers/pages.py`
Content:
```
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from tornado import web, gen
7
8 from .. import orm
9 from ..utils import admin_only, url_path_join
10 from .base import BaseHandler
11 from .login import LoginHandler
12
13
14 class RootHandler(BaseHandler):
15 """Render the Hub root page.
16
17 If logged in, redirects to:
18
19 - single-user server if running
20 - hub home, otherwise
21
22 Otherwise, renders login page.
23 """
24 def get(self):
25 user = self.get_current_user()
26 if user:
27 if user.running:
28 url = user.server.base_url
29 self.log.debug("User is running: %s", url)
30 else:
31 url = url_path_join(self.hub.server.base_url, 'home')
32 self.log.debug("User is not running: %s", url)
33 self.redirect(url)
34 return
35 url = url_path_join(self.hub.server.base_url, 'login')
36 self.redirect(url)
37
38
39 class HomeHandler(BaseHandler):
40 """Render the user's home page."""
41
42 @web.authenticated
43 def get(self):
44 html = self.render_template('home.html',
45 user=self.get_current_user(),
46 )
47 self.finish(html)
48
49
50 class SpawnHandler(BaseHandler):
51 """Handle spawning of single-user servers via form.
52
53 GET renders the form, POST handles form submission.
54
55 Only enabled when Spawner.options_form is defined.
56 """
57 def _render_form(self, message=''):
58 user = self.get_current_user()
59 return self.render_template('spawn.html',
60 user=user,
61 spawner_options_form=user.spawner.options_form,
62 error_message=message,
63 )
64
65 @web.authenticated
66 def get(self):
67 """GET renders form for spawning with user-specified options"""
68 user = self.get_current_user()
69 if user.running:
70 url = user.server.base_url
71 self.log.debug("User is running: %s", url)
72 self.redirect(url)
73 return
74 if user.spawner.options_form:
75 self.finish(self._render_form())
76 else:
77 # not running, no form. Trigger spawn.
78 url = url_path_join(self.base_url, 'user', user.name)
79 self.redirect(url)
80
81 @web.authenticated
82 @gen.coroutine
83 def post(self):
84 """POST spawns with user-specified options"""
85 user = self.get_current_user()
86 if user.running:
87 url = user.server.base_url
88 self.log.warning("User is already running: %s", url)
89 self.redirect(url)
90 return
91 form_options = {}
92 for key, byte_list in self.request.body_arguments.items():
93 form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
94 for key, byte_list in self.request.files.items():
95 form_options["%s_file"%key] = byte_list
96 options = user.spawner.options_from_form(form_options)
97 try:
98 yield self.spawn_single_user(user, options=options)
99 except Exception as e:
100 self.log.error("Failed to spawn single-user server with form", exc_info=True)
101 self.finish(self._render_form(str(e)))
102 return
103 self.set_login_cookie(user)
104 url = user.server.base_url
105 self.redirect(url)
106
107 class AdminHandler(BaseHandler):
108 """Render the admin page."""
109
110 @admin_only
111 def get(self):
112 available = {'name', 'admin', 'running', 'last_activity'}
113 default_sort = ['admin', 'name']
114 mapping = {
115 'running': '_server_id'
116 }
117 default_order = {
118 'name': 'asc',
119 'last_activity': 'desc',
120 'admin': 'desc',
121 'running': 'desc',
122 }
123 sorts = self.get_arguments('sort') or default_sort
124 orders = self.get_arguments('order')
125
126 for bad in set(sorts).difference(available):
127 self.log.warn("ignoring invalid sort: %r", bad)
128 sorts.remove(bad)
129 for bad in set(orders).difference({'asc', 'desc'}):
130 self.log.warn("ignoring invalid order: %r", bad)
131 orders.remove(bad)
132
133 # add default sort as secondary
134 for s in default_sort:
135 if s not in sorts:
136 sorts.append(s)
137 if len(orders) < len(sorts):
138 for col in sorts[len(orders):]:
139 orders.append(default_order[col])
140 else:
141 orders = orders[:len(sorts)]
142
143 # this could be one incomprehensible nested list comprehension
144 # get User columns
145 cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
146 # get User.col.desc() order objects
147 ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
148
149 users = self.db.query(orm.User).order_by(*ordered)
150 users = [ self._user_from_orm(u) for u in users ]
151 running = [ u for u in users if u.running ]
152
153 html = self.render_template('admin.html',
154 user=self.get_current_user(),
155 admin_access=self.settings.get('admin_access', False),
156 users=users,
157 running=running,
158 sort={s:o for s,o in zip(sorts, orders)},
159 )
160 self.finish(html)
161
162
163 default_handlers = [
164 (r'/', RootHandler),
165 (r'/home', HomeHandler),
166 (r'/admin', AdminHandler),
167 (r'/spawn', SpawnHandler),
168 ]
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -93,8 +93,8 @@
form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
for key, byte_list in self.request.files.items():
form_options["%s_file"%key] = byte_list
- options = user.spawner.options_from_form(form_options)
try:
+ options = user.spawner.options_from_form(form_options)
yield self.spawn_single_user(user, options=options)
except Exception as e:
self.log.error("Failed to spawn single-user server with form", exc_info=True)
| {"golden_diff": "diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -93,8 +93,8 @@\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n- options = user.spawner.options_from_form(form_options)\n try:\n+ options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n", "issue": "Spawner custom form validation\nAre there ideas for allowing form validation for spawners that have a custom form?\n\nI was thinking of raising an exception in `options_from_form()` and moving the `try` up by one line in [SpawnHandler](https://github.com/jupyter/jupyterhub/blob/master/jupyterhub/handlers/pages.py#L97).\n\n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web, gen\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\nfrom .login import LoginHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url)\n return\n url = url_path_join(self.hub.server.base_url, 'login')\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n \n GET renders the form, POST handles form submission.\n \n Only enabled when Spawner.options_form is defined.\n \"\"\"\n def _render_form(self, message=''):\n user = self.get_current_user()\n return self.render_template('spawn.html',\n user=user,\n spawner_options_form=user.spawner.options_form,\n error_message=message,\n )\n\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n self.finish(self._render_form())\n else:\n # not running, no form. 
Trigger spawn.\n url = url_path_join(self.base_url, 'user', user.name)\n self.redirect(url)\n \n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n options = user.spawner.options_from_form(form_options)\n try:\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n self.finish(self._render_form(str(e)))\n return\n self.set_login_cookie(user)\n url = user.server.base_url\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}], "after_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web, gen\n\nfrom .. 
import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\nfrom .login import LoginHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url)\n return\n url = url_path_join(self.hub.server.base_url, 'login')\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n \n GET renders the form, POST handles form submission.\n \n Only enabled when Spawner.options_form is defined.\n \"\"\"\n def _render_form(self, message=''):\n user = self.get_current_user()\n return self.render_template('spawn.html',\n user=user,\n spawner_options_form=user.spawner.options_form,\n error_message=message,\n )\n\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n self.finish(self._render_form())\n else:\n # not running, no form. Trigger spawn.\n url = url_path_join(self.base_url, 'user', user.name)\n self.redirect(url)\n \n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n try:\n options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n self.finish(self._render_form(str(e)))\n return\n self.set_login_cookie(user)\n url = user.server.base_url\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in 
sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}]} | 1,928 | 165 |
gh_patches_debug_30656 | rasdani/github-patches | git_diff | rucio__rucio-2150 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test reaper console script
Motivation
----------
The reaper console script `rucio-reaper` is not tested in the testsuite.
Modification
------------
- Add test for the reaper console script.
- Install the environment with `python setup.py develop` in the Docker env to have the generated console scripts available in Docker.
- Extend the reaper argparse method and the reaper tests to validate the argparse main method and console script.
--- END ISSUE ---
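A hedged sketch of how a test could exercise the console entry point once `main` accepts an explicit argument list (the exact signature is an assumption at this point in the issue):
```python
# hypothetical test body; assumes main(argv=None) parses the given list
from rucio.clis.daemons.reaper.reaper import main

def test_reaper_console_script():
    main(argv=['--run-once', '--chunk-size', '10'])
```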
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/clis/daemons/reaper/reaper.py`
Content:
```
1 # Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # Authors:
16 # - Vincent Garonne, <[email protected]>, 2012-2018
17 # - Wen Guan, <[email protected]>, 2014
18 # - Hannes Hansen, <[email protected]>, 2018
19
20 """
21 Reaper is a daemon to manage file deletion
22 """
23
24 import argparse
25 import signal
26
27 from rucio.daemons.reaper.reaper import run, stop
28
29
30 def get_parser():
31 """
32 Returns the argparse parser.
33 """
34 parser = argparse.ArgumentParser(description="The Reaper daemon is responsible for replica deletion. It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.", epilog='''
35 Upload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::
36
37 $ rucio upload --rse MOCK --scope mock --name file filename.txt
38 $ rucio add-rule mock:file 1 MOCK2 --lifetime 1
39 $ rucio-judge-cleaner --run-once
40
41 Check if the replica was created::
42
43 $ rucio list-file-replica mock:file
44 +---------+--------+------------+-----------+---------------------------------------------------------+
45 | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |
46 |---------+--------+------------+-----------+---------------------------------------------------------|
47 | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |
48 +---------+--------+------------+-----------+---------------------------------------------------------+
49
50 Run the daemon::
51
52 $ rucio-reaper --run-once
53
54 Check if the replica exists::
55
56 $ rucio list-file-replica mock:file
57 +---------+--------+------------+-----------+---------------------------------------------------------+
58 | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |
59 |---------+--------+------------+-----------+---------------------------------------------------------|
60 +---------+--------+------------+-----------+---------------------------------------------------------+
61 ''')
62 parser.add_argument("--run-once", action="store_true", default=False, help='One iteration only')
63 parser.add_argument("--total-workers", action="store", default=1, type=int, help='Total number of workers per process')
64 parser.add_argument("--threads-per-worker", action="store", default=None, type=int, help='Total number of threads created by each worker')
65 parser.add_argument("--chunk-size", action="store", default=10, type=int, help='Chunk size')
66 parser.add_argument("--scheme", action="store", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')
67 parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')
68 parser.add_argument('--exclude-rses', action="store", default=None, type=str, help='RSEs expression to exclude RSEs')
69 parser.add_argument('--include-rses', action="store", default=None, type=str, help='RSEs expression to include RSEs')
70 parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')
71 parser.add_argument('--delay-seconds', action="store", default=3600, type=int, help='Delay to retry failed deletion')
72 return parser
73
74
75 def main():
76
77 signal.signal(signal.SIGTERM, stop)
78 parser = get_parser()
79 args = parser.parse_args()
80 try:
81 run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,
82 once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,
83 exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)
84 except KeyboardInterrupt:
85 stop()
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/rucio/clis/daemons/reaper/reaper.py b/lib/rucio/clis/daemons/reaper/reaper.py
--- a/lib/rucio/clis/daemons/reaper/reaper.py
+++ b/lib/rucio/clis/daemons/reaper/reaper.py
@@ -1,4 +1,4 @@
-# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.
+# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,7 +13,7 @@
# limitations under the License.
#
# Authors:
-# - Vincent Garonne, <[email protected]>, 2012-2018
+# - Vincent Garonne, <[email protected]>, 2012-2019
# - Wen Guan, <[email protected]>, 2014
# - Hannes Hansen, <[email protected]>, 2018
@@ -23,6 +23,7 @@
import argparse
import signal
+import sys
from rucio.daemons.reaper.reaper import run, stop
@@ -72,11 +73,19 @@
return parser
-def main():
+def main(argv=None):
+ """
+ The main reaper method called by the command.
+ :param argv: Command-line arguments. Default to sys.argv if not set.
+ """
signal.signal(signal.SIGTERM, stop)
+
+ if argv is None:
+ argv = sys.argv[1:]
+
parser = get_parser()
- args = parser.parse_args()
+ args = parser.parse_args(argv)
try:
run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,
once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,
| {"golden_diff": "diff --git a/lib/rucio/clis/daemons/reaper/reaper.py b/lib/rucio/clis/daemons/reaper/reaper.py\n--- a/lib/rucio/clis/daemons/reaper/reaper.py\n+++ b/lib/rucio/clis/daemons/reaper/reaper.py\n@@ -1,4 +1,4 @@\n-# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n+# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -13,7 +13,7 @@\n # limitations under the License.\n #\n # Authors:\n-# - Vincent Garonne, <[email protected]>, 2012-2018\n+# - Vincent Garonne, <[email protected]>, 2012-2019\n # - Wen Guan, <[email protected]>, 2014\n # - Hannes Hansen, <[email protected]>, 2018\n \n@@ -23,6 +23,7 @@\n \n import argparse\n import signal\n+import sys\n \n from rucio.daemons.reaper.reaper import run, stop\n \n@@ -72,11 +73,19 @@\n return parser\n \n \n-def main():\n+def main(argv=None):\n+ \"\"\"\n+ The main reaper method called by the command.\n \n+ :param argv: Command-line arguments. Default to sys.argv if not set.\n+ \"\"\"\n signal.signal(signal.SIGTERM, stop)\n+\n+ if argv is None:\n+ argv = sys.argv[1:]\n+\n parser = get_parser()\n- args = parser.parse_args()\n+ args = parser.parse_args(argv)\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n", "issue": "Test reaper console script\nMotivation\r\n----------\r\n\r\nThe reaper console script `rucio-reaper` is not tested in the testsuite.\r\n\r\nModification\r\n------------\r\n- Add test for the reaper console script.\r\n- Install the environnement with `python setup.py develop` in the docker env to have the generated console scripts available in the docker.\r\n- Extend the reaper argparse method and the reaper tests to validate the argparse main method and console script.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne, <[email protected]>, 2012-2018\n# - Wen Guan, <[email protected]>, 2014\n# - Hannes Hansen, <[email protected]>, 2018\n\n\"\"\"\nReaper is a daemon to manage file deletion\n\"\"\"\n\nimport argparse\nimport signal\n\nfrom rucio.daemons.reaper.reaper import run, stop\n\n\ndef get_parser():\n \"\"\"\n Returns the argparse parser.\n \"\"\"\n parser = argparse.ArgumentParser(description=\"The Reaper daemon is responsible for replica deletion. 
It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.\", epilog='''\nUpload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::\n\n $ rucio upload --rse MOCK --scope mock --name file filename.txt\n $ rucio add-rule mock:file 1 MOCK2 --lifetime 1\n $ rucio-judge-cleaner --run-once\n\nCheck if the replica was created::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |\n +---------+--------+------------+-----------+---------------------------------------------------------+\n\nRun the daemon::\n\n $ rucio-reaper --run-once\n\nCheck if the replica exists::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n +---------+--------+------------+-----------+---------------------------------------------------------+\n ''')\n parser.add_argument(\"--run-once\", action=\"store_true\", default=False, help='One iteration only')\n parser.add_argument(\"--total-workers\", action=\"store\", default=1, type=int, help='Total number of workers per process')\n parser.add_argument(\"--threads-per-worker\", action=\"store\", default=None, type=int, help='Total number of threads created by each worker')\n parser.add_argument(\"--chunk-size\", action=\"store\", default=10, type=int, help='Chunk size')\n parser.add_argument(\"--scheme\", action=\"store\", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')\n parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')\n parser.add_argument('--exclude-rses', action=\"store\", default=None, type=str, help='RSEs expression to exclude RSEs')\n parser.add_argument('--include-rses', action=\"store\", default=None, type=str, help='RSEs expression to include RSEs')\n parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')\n parser.add_argument('--delay-seconds', action=\"store\", default=3600, type=int, help='Delay to retry failed deletion')\n return parser\n\n\ndef main():\n\n signal.signal(signal.SIGTERM, stop)\n parser = get_parser()\n args = parser.parse_args()\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)\n except KeyboardInterrupt:\n stop()\n", "path": "lib/rucio/clis/daemons/reaper/reaper.py"}], "after_files": [{"content": "# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is 
distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne, <[email protected]>, 2012-2019\n# - Wen Guan, <[email protected]>, 2014\n# - Hannes Hansen, <[email protected]>, 2018\n\n\"\"\"\nReaper is a daemon to manage file deletion\n\"\"\"\n\nimport argparse\nimport signal\nimport sys\n\nfrom rucio.daemons.reaper.reaper import run, stop\n\n\ndef get_parser():\n \"\"\"\n Returns the argparse parser.\n \"\"\"\n parser = argparse.ArgumentParser(description=\"The Reaper daemon is responsible for replica deletion. It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.\", epilog='''\nUpload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::\n\n $ rucio upload --rse MOCK --scope mock --name file filename.txt\n $ rucio add-rule mock:file 1 MOCK2 --lifetime 1\n $ rucio-judge-cleaner --run-once\n\nCheck if the replica was created::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |\n +---------+--------+------------+-----------+---------------------------------------------------------+\n\nRun the daemon::\n\n $ rucio-reaper --run-once\n\nCheck if the replica exists::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n +---------+--------+------------+-----------+---------------------------------------------------------+\n ''')\n parser.add_argument(\"--run-once\", action=\"store_true\", default=False, help='One iteration only')\n parser.add_argument(\"--total-workers\", action=\"store\", default=1, type=int, help='Total number of workers per process')\n parser.add_argument(\"--threads-per-worker\", action=\"store\", default=None, type=int, help='Total number of threads created by each worker')\n parser.add_argument(\"--chunk-size\", action=\"store\", default=10, type=int, help='Chunk size')\n parser.add_argument(\"--scheme\", action=\"store\", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')\n parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')\n parser.add_argument('--exclude-rses', action=\"store\", default=None, type=str, help='RSEs expression to exclude RSEs')\n parser.add_argument('--include-rses', action=\"store\", default=None, type=str, help='RSEs expression to include RSEs')\n parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')\n parser.add_argument('--delay-seconds', action=\"store\", default=3600, type=int, help='Delay to retry failed deletion')\n return parser\n\n\ndef main(argv=None):\n \"\"\"\n The main reaper method called by the command.\n\n :param argv: Command-line arguments. 
Default to sys.argv if not set.\n \"\"\"\n signal.signal(signal.SIGTERM, stop)\n\n if argv is None:\n argv = sys.argv[1:]\n\n parser = get_parser()\n args = parser.parse_args(argv)\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)\n except KeyboardInterrupt:\n stop()\n", "path": "lib/rucio/clis/daemons/reaper/reaper.py"}]} | 1,546 | 476 |
gh_patches_debug_57588 | rasdani/github-patches | git_diff | joke2k__faker-1043 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BBAN for en_GB too short
* Faker version: v2.0.3
* OS: linux
The numeric part of the en_GB BBAN needs to be 14 digits long; it currently only returns 13, which fails further validation.
### Steps to reproduce
Invoke `fake.iban()` or `fake.bban()` with the en_GB locale; an IBAN or BBAN with one digit missing is returned.
### Expected behavior
GB IBANs should be 22 characters long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom
--- END ISSUE ---
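A hedged check of the expected lengths once the en_GB `bban_format` carries 14 digits (`Faker`, `iban()` and `bban()` are the public calls already used in the report):
```python
from faker import Faker

fake = Faker('en_GB')
assert len(fake.bban()) == 18   # 4 letters + 14 digits
assert len(fake.iban()) == 22   # 'GB' + 2 check digits + 18-char BBAN
```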
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/bank/en_GB/__init__.py`
Content:
```
1 from .. import Provider as BankProvider
2
3
4 class Provider(BankProvider):
5 bban_format = '????#############'
6 country_code = 'GB'
7
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py
--- a/faker/providers/bank/en_GB/__init__.py
+++ b/faker/providers/bank/en_GB/__init__.py
@@ -2,5 +2,5 @@
class Provider(BankProvider):
- bban_format = '????#############'
+ bban_format = '????##############'
country_code = 'GB'
| {"golden_diff": "diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py\n--- a/faker/providers/bank/en_GB/__init__.py\n+++ b/faker/providers/bank/en_GB/__init__.py\n@@ -2,5 +2,5 @@\n \n \n class Provider(BankProvider):\n- bban_format = '????#############'\n+ bban_format = '????##############'\n country_code = 'GB'\n", "issue": "BBAN for en_GB too short\n* Faker version: v2.0.3\r\n* OS: linux\r\n\r\nNumeric part of the en_GB BBAN needs to be 14 digits long, it currently only returns 13, failing further validation.\r\n\r\n### Steps to reproduce\r\n\r\nInvoke `fake.iban()` or `fake.bban()` with the en_GB locale, an IBAN or BBAN with 1 digit missing is returned.\r\n\r\n### Expected behavior\r\n\r\nGB ibans should be 22 chars long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom\r\n\r\n\n", "before_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '????#############'\n country_code = 'GB'\n", "path": "faker/providers/bank/en_GB/__init__.py"}], "after_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '????##############'\n country_code = 'GB'\n", "path": "faker/providers/bank/en_GB/__init__.py"}]} | 432 | 102 |
gh_patches_debug_60487 | rasdani/github-patches | git_diff | mars-project__mars-284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Fuse operand's sparse value is wrong
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A fuse operand's sparseness should be the same as the tail node's, but it is not set correctly now.
**To Reproduce**
``` Python
In [1]: import scipy.sparse as sps
In [2]: import mars.tensor as mt
In [3]: data = sps.rand(10, 10, density=0.05)
In [4]: a = mt.tensor(data, chunk_size=3)
In [5]: b = (a * 2) * 2
In [6]: g = b.build_graph(tiled=True, compose=True)
In [7]: list(g)[0].op.sparse
Out[7]: False
In [8]: list(g)[0].op
Out[8]: <mars.tensor.expressions.fuse.core.TensorFuseChunk at 0xa208b7048>
In [9]: list(g)[0].composed[-1].op.sparse
Out[9]: True
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/tensor/expressions/fuse/core.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from .... import operands
18 from ....tiles import NotSupportTile
19 from ..core import TensorOperandMixin
20
21
22 class TensorFuseChunk(operands.Fuse, TensorOperandMixin):
23 def __init__(self, dtype=None, **kw):
24 super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)
25
26 def calc_shape(self, *inputs_shape):
27 in_shapes = inputs_shape
28 out_shape = None
29
30 # TODO: the logic will be changed when fusion is not only straight line
31 for c in self.outputs[0].composed:
32 out_shape = c.op.calc_shape(*in_shapes)
33 in_shapes = [out_shape]
34 return out_shape
35
36 @classmethod
37 def tile(cls, op):
38 raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')
39
40
41 class TensorFuseChunkMixin(TensorOperandMixin):
42 __slots__ = ()
43
44 @classmethod
45 def tile(cls, op):
46 raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')
47
48 def __call__(self, fuse_chunks):
49 head_chunk = fuse_chunks[0]
50 tail_chunk = fuse_chunks[-1]
51 setattr(self, '_operands', [c.op for c in fuse_chunks])
52 return self.new_chunk(head_chunk.inputs, tail_chunk.shape,
53 _composed=fuse_chunks, _key=tail_chunk.key)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mars/tensor/expressions/fuse/core.py b/mars/tensor/expressions/fuse/core.py
--- a/mars/tensor/expressions/fuse/core.py
+++ b/mars/tensor/expressions/fuse/core.py
@@ -20,8 +20,8 @@
class TensorFuseChunk(operands.Fuse, TensorOperandMixin):
- def __init__(self, dtype=None, **kw):
- super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)
+ def __init__(self, dtype=None, sparse=False, **kw):
+ super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)
def calc_shape(self, *inputs_shape):
in_shapes = inputs_shape
| {"golden_diff": "diff --git a/mars/tensor/expressions/fuse/core.py b/mars/tensor/expressions/fuse/core.py\n--- a/mars/tensor/expressions/fuse/core.py\n+++ b/mars/tensor/expressions/fuse/core.py\n@@ -20,8 +20,8 @@\n \n \n class TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n- def __init__(self, dtype=None, **kw):\n- super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)\n+ def __init__(self, dtype=None, sparse=False, **kw):\n+ super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)\n \n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n", "issue": "[BUG] Fuse operand's sparse value is wrong\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\nA fuse operand's sparseness should be the same as tail node's, it is not set correctly now.\r\n\r\n**To Reproduce**\r\n``` Python\r\nIn [1]: import scipy.sparse as sps \r\n\r\nIn [2]: import mars.tensor as mt \r\n\r\nIn [3]: data = sps.rand(10, 10, density=0.05) \r\n\r\nIn [4]: a = mt.tensor(data, chunk_size=3) \r\n\r\nIn [5]: b = (a * 2) * 2 \r\n\r\nIn [6]: g = b.build_graph(tiled=True, compose=True) \r\n\r\nIn [7]: list(g)[0].op.sparse \r\nOut[7]: False\r\n\r\nIn [8]: list(g)[0].op \r\nOut[8]: <mars.tensor.expressions.fuse.core.TensorFuseChunk at 0xa208b7048>\r\n\r\nIn [9]: list(g)[0].composed[-1].op.sparse \r\nOut[9]: True\r\n```\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .... 
import operands\nfrom ....tiles import NotSupportTile\nfrom ..core import TensorOperandMixin\n\n\nclass TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n def __init__(self, dtype=None, **kw):\n super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)\n\n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n out_shape = None\n\n # TODO: the logic will be changed when fusion is not only straight line\n for c in self.outputs[0].composed:\n out_shape = c.op.calc_shape(*in_shapes)\n in_shapes = [out_shape]\n return out_shape\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n\nclass TensorFuseChunkMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n def __call__(self, fuse_chunks):\n head_chunk = fuse_chunks[0]\n tail_chunk = fuse_chunks[-1]\n setattr(self, '_operands', [c.op for c in fuse_chunks])\n return self.new_chunk(head_chunk.inputs, tail_chunk.shape,\n _composed=fuse_chunks, _key=tail_chunk.key)\n", "path": "mars/tensor/expressions/fuse/core.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .... import operands\nfrom ....tiles import NotSupportTile\nfrom ..core import TensorOperandMixin\n\n\nclass TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n def __init__(self, dtype=None, sparse=False, **kw):\n super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)\n\n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n out_shape = None\n\n # TODO: the logic will be changed when fusion is not only straight line\n for c in self.outputs[0].composed:\n out_shape = c.op.calc_shape(*in_shapes)\n in_shapes = [out_shape]\n return out_shape\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n\nclass TensorFuseChunkMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n def __call__(self, fuse_chunks):\n head_chunk = fuse_chunks[0]\n tail_chunk = fuse_chunks[-1]\n setattr(self, '_operands', [c.op for c in fuse_chunks])\n return self.new_chunk(head_chunk.inputs, tail_chunk.shape,\n _composed=fuse_chunks, _key=tail_chunk.key)\n", "path": "mars/tensor/expressions/fuse/core.py"}]} | 1,092 | 176 |
gh_patches_debug_16846 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1691 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import intensity could fallback on yearly averages when missing/unknown
When a country, or area, is importing electricity from another country and the exporting country's production sources are unknown, it seems as if the intensity of the imported electricity is set to be equal to the intensity of the importing country. But this is hardly meaningful. Would it be possible to set the unknown intensity of imported electricity to an average or mean value from a historical period? E.g. the last month or the same month last year. Or to the last available dataset (depending on how old that is).
I can see that it happens quite often for Norway that "Data [is] temporarily unavailable". The intensity of the electricity exported to Sweden is low, while it is medium-high when exported to West Denmark.
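
As a sketch of what such a fallback could look like (names and the shape of the data are assumptions for illustration, not the project's actual code):

```python
def exchange_carbon_intensity(zone_key, live_mix, yearly_averages):
    """Carbon intensity to attribute to electricity exported by `zone_key`.

    Uses the live production mix when available, otherwise falls back to a
    precomputed yearly average (gCO2eq/kWh) instead of assuming the
    importer's own intensity.
    """
    if live_mix and live_mix.get('carbonIntensity') is not None:
        return live_mix['carbonIntensity']
    return yearly_averages.get(zone_key)


# e.g. yearly_averages = {'NO': 29.0, 'DK-DK1': 240.0}
```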
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/config.py`
Content:
```
1 import json
2 import os
3
4 def relative_path(script_reference_path, rel_path):
5 # __file__ should be passed as script_reference_path
6 script_path = os.path.abspath(
7 script_reference_path) # i.e. /path/to/dir/foobar.py
8 script_dir = os.path.split(script_path)[0] # i.e. /path/to/dir/
9 return os.path.join(script_dir, rel_path)
10
11
12 # Prepare zone bounding boxes
13 ZONE_BOUNDING_BOXES = {}
14
15 # Read parser import list from config jsons
16 ZONES_CONFIG = json.load(open(relative_path(
17 __file__, '../config/zones.json')))
18
19 # Read all zones
20 for zone_id, zone_config in ZONES_CONFIG.items():
21 if 'bounding_box' in zone_config:
22 ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']
23
24 # Read parser import list from config jsons
25 ZONES_CONFIG = json.load(open(relative_path(
26 __file__, '../config/zones.json')))
27 EXCHANGES_CONFIG = json.load(open(relative_path(
28 __file__, '../config/exchanges.json')))
29 ZONE_NEIGHBOURS = {}
30 for k, v in EXCHANGES_CONFIG.items():
31 zone_names = k.split('->')
32 pairs = [
33 (zone_names[0], zone_names[1]),
34 (zone_names[1], zone_names[0])
35 ]
36 for zone_name_1, zone_name_2 in pairs:
37 if zone_name_1 not in ZONE_NEIGHBOURS:
38 ZONE_NEIGHBOURS[zone_name_1] = set()
39 ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)
40 # we want neighbors to always be in the same order
41 for zone, neighbors in ZONE_NEIGHBOURS.items():
42 ZONE_NEIGHBOURS[zone] = sorted(neighbors)
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/config.py b/utils/config.py
--- a/utils/config.py
+++ b/utils/config.py
@@ -40,3 +40,22 @@
# we want neighbors to always be in the same order
for zone, neighbors in ZONE_NEIGHBOURS.items():
ZONE_NEIGHBOURS[zone] = sorted(neighbors)
+
+CO2EQ_PARAMETERS = json.load(open(relative_path(
+ __file__, '../config/co2eq_parameters.json')))
+
+def emission_factors(zone_key):
+ fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');
+ override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})
+ defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']
+ merged = {**defaults, **override}
+ if fallback_carbon_intensity:
+ merged['battery storage'] = {
+ 'value': fallback_carbon_intensity,
+ 'source': 'Annual carbon intensity'
+ }
+ merged['hydro storage'] = {
+ 'value': fallback_carbon_intensity,
+ 'source': 'Annual carbon intensity'
+ }
+ return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])
| {"golden_diff": "diff --git a/utils/config.py b/utils/config.py\n--- a/utils/config.py\n+++ b/utils/config.py\n@@ -40,3 +40,22 @@\n # we want neighbors to always be in the same order\n for zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n+\n+CO2EQ_PARAMETERS = json.load(open(relative_path(\n+ __file__, '../config/co2eq_parameters.json')))\n+\n+def emission_factors(zone_key):\n+ fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');\n+ override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})\n+ defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']\n+ merged = {**defaults, **override}\n+ if fallback_carbon_intensity:\n+ merged['battery storage'] = {\n+ 'value': fallback_carbon_intensity,\n+ 'source': 'Annual carbon intensity'\n+ }\n+ merged['hydro storage'] = {\n+ 'value': fallback_carbon_intensity,\n+ 'source': 'Annual carbon intensity'\n+ }\n+ return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])\n", "issue": "Import intensity could fallback on yearly averages when missing/unknown\nWhen a country, or area, is importing electricity from another country and the exporting country's production sources are unknown, it seems as if the intensity of the imported electricity is set to be equal to the intensity of the importing country. But this is hardly meaningful. Would it be possible to set the unknown intensity of imported electricity to an average or mean value from a historical period? E.g. the last month or the same month last year. Or to the last available dataset (depending on how old that is).\r\n\r\nI can see that it happens quite often for Norway, that \"Data [is] temporarily unavailable\". The intensity of the electricity exported to Sweden is low, while it is medium high when exported to West Denmark.\n", "before_files": [{"content": "import json\nimport os\n\ndef relative_path(script_reference_path, rel_path):\n # __file__ should be passed as script_reference_path\n script_path = os.path.abspath(\n script_reference_path) # i.e. /path/to/dir/foobar.py\n script_dir = os.path.split(script_path)[0] # i.e. 
/path/to/dir/\n return os.path.join(script_dir, rel_path)\n\n\n# Prepare zone bounding boxes\nZONE_BOUNDING_BOXES = {}\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\n\n# Read all zones\nfor zone_id, zone_config in ZONES_CONFIG.items():\n if 'bounding_box' in zone_config:\n ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\nEXCHANGES_CONFIG = json.load(open(relative_path(\n __file__, '../config/exchanges.json')))\nZONE_NEIGHBOURS = {}\nfor k, v in EXCHANGES_CONFIG.items():\n zone_names = k.split('->')\n pairs = [\n (zone_names[0], zone_names[1]),\n (zone_names[1], zone_names[0])\n ]\n for zone_name_1, zone_name_2 in pairs:\n if zone_name_1 not in ZONE_NEIGHBOURS:\n ZONE_NEIGHBOURS[zone_name_1] = set()\n ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)\n# we want neighbors to always be in the same order\nfor zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n", "path": "utils/config.py"}], "after_files": [{"content": "import json\nimport os\n\ndef relative_path(script_reference_path, rel_path):\n # __file__ should be passed as script_reference_path\n script_path = os.path.abspath(\n script_reference_path) # i.e. /path/to/dir/foobar.py\n script_dir = os.path.split(script_path)[0] # i.e. /path/to/dir/\n return os.path.join(script_dir, rel_path)\n\n\n# Prepare zone bounding boxes\nZONE_BOUNDING_BOXES = {}\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\n\n# Read all zones\nfor zone_id, zone_config in ZONES_CONFIG.items():\n if 'bounding_box' in zone_config:\n ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\nEXCHANGES_CONFIG = json.load(open(relative_path(\n __file__, '../config/exchanges.json')))\nZONE_NEIGHBOURS = {}\nfor k, v in EXCHANGES_CONFIG.items():\n zone_names = k.split('->')\n pairs = [\n (zone_names[0], zone_names[1]),\n (zone_names[1], zone_names[0])\n ]\n for zone_name_1, zone_name_2 in pairs:\n if zone_name_1 not in ZONE_NEIGHBOURS:\n ZONE_NEIGHBOURS[zone_name_1] = set()\n ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)\n# we want neighbors to always be in the same order\nfor zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n\nCO2EQ_PARAMETERS = json.load(open(relative_path(\n __file__, '../config/co2eq_parameters.json')))\n\ndef emission_factors(zone_key):\n fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');\n override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})\n defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']\n merged = {**defaults, **override}\n if fallback_carbon_intensity:\n merged['battery storage'] = {\n 'value': fallback_carbon_intensity,\n 'source': 'Annual carbon intensity'\n }\n merged['hydro storage'] = {\n 'value': fallback_carbon_intensity,\n 'source': 'Annual carbon intensity'\n }\n return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])\n", "path": "utils/config.py"}]} | 893 | 284 |
gh_patches_debug_34020 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1888 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Path is not mounted correctly when running Docker hooks from Docker
**Situation**:
- In our CI we want to run `pre-commit` inside Docker.
- Some of our hooks are `docker_image`
**Problem**
The problem is mostly this line: https://github.com/pre-commit/pre-commit/blob/528c7afd18dafa6e47ce73add2c8e1550d105674/pre_commit/languages/docker.py#L94
Currently `pre-commit` mounts the current directory to `/src` and uses the current directory name as the mount base.
However, this does not work when `pre-commit` is run inside a container on an already-mounted path, because mount points are relative to the host, not to the container.
Example:
```
/opt/my_code <- host, mounts /opt/my_code:/project
/project <- in Docker running pre-commit, pre-commit is doing mount /project:/src
/src <- (in Dockerized hook)
```
Currently pre-commit will try to mount it as `-v /project:/src,rw,Z`. Expected: to mount it as `-v /opt/my_code:/src`.
**Possible solution**:
When I replaced `os.getcwd()` in the code above with `translate_path(os.getcwd())`, where `translate_path` is taken from https://gist.github.com/dpfoose/f96d4e4b76c2e01265619d545b77987a, it worked perfectly. It does add an extra `docker` pip dependency, though.
**See also**: https://forums.docker.com/t/mounting-a-volume-not-working-with-running-docker-in-docker/25775/2
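
For illustration, a rough sketch of the same remapping done by shelling out to the `docker` CLI instead of adding the `docker` pip dependency (names and details here are assumptions, not pre-commit's actual code):

```python
import json
import os
import socket
import subprocess


def _looks_like_docker():
    # Heuristic: containers usually mention docker in the cgroup file.
    try:
        with open('/proc/1/cgroup', 'rb') as f:
            return b'docker' in f.read()
    except OSError:
        return False


def translate_mount_path(path):
    """Map a path inside this container back to the corresponding host path."""
    if not _looks_like_docker():
        return path
    # Inside a container the hostname is usually the container id.
    out = subprocess.check_output(('docker', 'inspect', socket.gethostname()))
    container, = json.loads(out)
    for mount in container['Mounts']:
        src, dst = mount['Source'], mount['Destination']
        if os.path.commonpath((path, dst)) == dst:
            return path.replace(dst, src, 1)
    return path  # not under any mount; fall back to the original path
```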
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/docker.py`
Content:
```
1 import hashlib
2 import os
3 from typing import Sequence
4 from typing import Tuple
5
6 import pre_commit.constants as C
7 from pre_commit.hook import Hook
8 from pre_commit.languages import helpers
9 from pre_commit.prefix import Prefix
10 from pre_commit.util import clean_path_on_failure
11
12 ENVIRONMENT_DIR = 'docker'
13 PRE_COMMIT_LABEL = 'PRE_COMMIT'
14 get_default_version = helpers.basic_get_default_version
15 healthy = helpers.basic_healthy
16
17
18 def md5(s: str) -> str: # pragma: win32 no cover
19 return hashlib.md5(s.encode()).hexdigest()
20
21
22 def docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover
23 md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()
24 return f'pre-commit-{md5sum}'
25
26
27 def build_docker_image(
28 prefix: Prefix,
29 *,
30 pull: bool,
31 ) -> None: # pragma: win32 no cover
32 cmd: Tuple[str, ...] = (
33 'docker', 'build',
34 '--tag', docker_tag(prefix),
35 '--label', PRE_COMMIT_LABEL,
36 )
37 if pull:
38 cmd += ('--pull',)
39 # This must come last for old versions of docker. See #477
40 cmd += ('.',)
41 helpers.run_setup_cmd(prefix, cmd)
42
43
44 def install_environment(
45 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
46 ) -> None: # pragma: win32 no cover
47 helpers.assert_version_default('docker', version)
48 helpers.assert_no_additional_deps('docker', additional_dependencies)
49
50 directory = prefix.path(
51 helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),
52 )
53
54 # Docker doesn't really have relevant disk environment, but pre-commit
55 # still needs to cleanup its state files on failure
56 with clean_path_on_failure(directory):
57 build_docker_image(prefix, pull=True)
58 os.mkdir(directory)
59
60
61 def get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover
62 try:
63 return ('-u', f'{os.getuid()}:{os.getgid()}')
64 except AttributeError:
65 return ()
66
67
68 def docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover
69 return (
70 'docker', 'run',
71 '--rm',
72 *get_docker_user(),
73 # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
74 # The `Z` option tells Docker to label the content with a private
75 # unshared label. Only the current container can use a private volume.
76 '-v', f'{os.getcwd()}:/src:rw,Z',
77 '--workdir', '/src',
78 )
79
80
81 def run_hook(
82 hook: Hook,
83 file_args: Sequence[str],
84 color: bool,
85 ) -> Tuple[int, bytes]: # pragma: win32 no cover
86 # Rebuild the docker image in case it has gone missing, as many people do
87 # automated cleanup of docker images.
88 build_docker_image(hook.prefix, pull=False)
89
90 entry_exe, *cmd_rest = hook.cmd
91
92 entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))
93 cmd = (*docker_cmd(), *entry_tag, *cmd_rest)
94 return helpers.run_xargs(hook, cmd, file_args, color=color)
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -1,5 +1,7 @@
import hashlib
+import json
import os
+import socket
from typing import Sequence
from typing import Tuple
@@ -8,6 +10,7 @@
from pre_commit.languages import helpers
from pre_commit.prefix import Prefix
from pre_commit.util import clean_path_on_failure
+from pre_commit.util import cmd_output_b
ENVIRONMENT_DIR = 'docker'
PRE_COMMIT_LABEL = 'PRE_COMMIT'
@@ -15,6 +18,34 @@
healthy = helpers.basic_healthy
+def _is_in_docker() -> bool:
+ try:
+ with open('/proc/1/cgroup', 'rb') as f:
+ return b'docker' in f.read()
+ except FileNotFoundError:
+ return False
+
+
+def _get_docker_path(path: str) -> str:
+ if not _is_in_docker():
+ return path
+ hostname = socket.gethostname()
+
+ _, out, _ = cmd_output_b('docker', 'inspect', hostname)
+
+ container, = json.loads(out)
+ for mount in container['Mounts']:
+ src_path = mount['Source']
+ to_path = mount['Destination']
+ if os.path.commonpath((path, to_path)) == to_path:
+ # So there is something in common,
+ # and we can proceed remapping it
+ return path.replace(to_path, src_path)
+ # we're in Docker, but the path is not mounted, cannot really do anything,
+ # so fall back to original path
+ return path
+
+
def md5(s: str) -> str: # pragma: win32 no cover
return hashlib.md5(s.encode()).hexdigest()
@@ -73,7 +104,7 @@
# https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
# The `Z` option tells Docker to label the content with a private
# unshared label. Only the current container can use a private volume.
- '-v', f'{os.getcwd()}:/src:rw,Z',
+ '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',
'--workdir', '/src',
)
| {"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -1,5 +1,7 @@\n import hashlib\n+import json\n import os\n+import socket\n from typing import Sequence\n from typing import Tuple\n \n@@ -8,6 +10,7 @@\n from pre_commit.languages import helpers\n from pre_commit.prefix import Prefix\n from pre_commit.util import clean_path_on_failure\n+from pre_commit.util import cmd_output_b\n \n ENVIRONMENT_DIR = 'docker'\n PRE_COMMIT_LABEL = 'PRE_COMMIT'\n@@ -15,6 +18,34 @@\n healthy = helpers.basic_healthy\n \n \n+def _is_in_docker() -> bool:\n+ try:\n+ with open('/proc/1/cgroup', 'rb') as f:\n+ return b'docker' in f.read()\n+ except FileNotFoundError:\n+ return False\n+\n+\n+def _get_docker_path(path: str) -> str:\n+ if not _is_in_docker():\n+ return path\n+ hostname = socket.gethostname()\n+\n+ _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n+\n+ container, = json.loads(out)\n+ for mount in container['Mounts']:\n+ src_path = mount['Source']\n+ to_path = mount['Destination']\n+ if os.path.commonpath((path, to_path)) == to_path:\n+ # So there is something in common,\n+ # and we can proceed remapping it\n+ return path.replace(to_path, src_path)\n+ # we're in Docker, but the path is not mounted, cannot really do anything,\n+ # so fall back to original path\n+ return path\n+\n+\n def md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n \n@@ -73,7 +104,7 @@\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n- '-v', f'{os.getcwd()}:/src:rw,Z',\n+ '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',\n '--workdir', '/src',\n )\n", "issue": "Path is not mounted correctly when running Docker hooks from Docker\n**Situation**:\r\n\r\n- In our CI we want to run `pre-commit` inside Docker.\r\n- Some of our hooks are `docker_image`\r\n\r\n**Problem**\r\nThis line mostly https://github.com/pre-commit/pre-commit/blob/528c7afd18dafa6e47ce73add2c8e1550d105674/pre_commit/languages/docker.py#L94\r\n\r\nCurrently `pre-commit` mounts the current directory to `/src` and uses current directory name as mount base.\r\nHowever this does not work when `pre-commit` is run inside the container on some mounted path already, because mount points are relative to the host, not to the container.\r\n\r\n Example: \r\n```\r\n/opt/my_code <- host, mounts /opt/my_code:/project\r\n/project <- in Docker running pre-commit, pre-commit is doing mount /project:/src\r\n/src <- (in Dockerized hook)\r\n```\r\n\r\nCurrently pre-commit will try to mount it as `-v /project:/src,rw,Z`. Expected - to mount it as `-v /opt/my_code:/src`\r\n\r\n**Possible solution**:\r\n\r\nWhen I replaced `os.getcwd()` from the code above to `translate_path(os.getcwd())` where `translate_path` is taken from https://gist.github.com/dpfoose/f96d4e4b76c2e01265619d545b77987a, it worked perfectly. 
It does add extra `docker` pip-dependency though.\r\n\r\n**See also**: https://forums.docker.com/t/mounting-a-volume-not-working-with-running-docker-in-docker/25775/2\n", "before_files": [{"content": "import hashlib\nimport os\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n\n\ndef docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return f'pre-commit-{md5sum}'\n\n\ndef build_docker_image(\n prefix: Prefix,\n *,\n pull: bool,\n) -> None: # pragma: win32 no cover\n cmd: Tuple[str, ...] = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup its state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover\n try:\n return ('-u', f'{os.getuid()}:{os.getgid()}')\n except AttributeError:\n return ()\n\n\ndef docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover\n return (\n 'docker', 'run',\n '--rm',\n *get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. 
Only the current container can use a private volume.\n '-v', f'{os.getcwd()}:/src:rw,Z',\n '--workdir', '/src',\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]: # pragma: win32 no cover\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n entry_exe, *cmd_rest = hook.cmd\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = (*docker_cmd(), *entry_tag, *cmd_rest)\n return helpers.run_xargs(hook, cmd, file_args, color=color)\n", "path": "pre_commit/languages/docker.py"}], "after_files": [{"content": "import hashlib\nimport json\nimport os\nimport socket\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef _is_in_docker() -> bool:\n try:\n with open('/proc/1/cgroup', 'rb') as f:\n return b'docker' in f.read()\n except FileNotFoundError:\n return False\n\n\ndef _get_docker_path(path: str) -> str:\n if not _is_in_docker():\n return path\n hostname = socket.gethostname()\n\n _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n\n container, = json.loads(out)\n for mount in container['Mounts']:\n src_path = mount['Source']\n to_path = mount['Destination']\n if os.path.commonpath((path, to_path)) == to_path:\n # So there is something in common,\n # and we can proceed remapping it\n return path.replace(to_path, src_path)\n # we're in Docker, but the path is not mounted, cannot really do anything,\n # so fall back to original path\n return path\n\n\ndef md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n\n\ndef docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return f'pre-commit-{md5sum}'\n\n\ndef build_docker_image(\n prefix: Prefix,\n *,\n pull: bool,\n) -> None: # pragma: win32 no cover\n cmd: Tuple[str, ...] = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. 
See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup its state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover\n try:\n return ('-u', f'{os.getuid()}:{os.getgid()}')\n except AttributeError:\n return ()\n\n\ndef docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover\n return (\n 'docker', 'run',\n '--rm',\n *get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',\n '--workdir', '/src',\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]: # pragma: win32 no cover\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n entry_exe, *cmd_rest = hook.cmd\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = (*docker_cmd(), *entry_tag, *cmd_rest)\n return helpers.run_xargs(hook, cmd, file_args, color=color)\n", "path": "pre_commit/languages/docker.py"}]} | 1,540 | 533 |
gh_patches_debug_37126 | rasdani/github-patches | git_diff | open-mmlab__mmdetection3d-69 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
iou3d failed when inference with gpu:1
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
Training on a single GPU: when using the default GPU (gpu:0), everything is OK.
Switching to gpu:1 reports `an illegal memory access was encountered mmdet3d/ops/iou3d/src/iou3d.cpp 121` during inference; training, however, is OK.
**Reproduction**
1. What command or script did you run?
```
python tools/train.py CONFIG_PATH --gpu-ids 1
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
3. What dataset did you use?
- kitti
**Environment**
1. Please run `python mmdet3d/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error traceback here.
```
A placeholder for traceback.
```
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
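
One plausible direction, sketched here as an assumption rather than the repository's confirmed fix: allocate result tensors on the input's device instead of implicitly on `cuda:0`, and pass that device index down to the CUDA kernel.

```python
from . import iou3d_cuda  # assumption: the same CUDA extension used by the repo


def boxes_iou_bev_device_safe(boxes_a, boxes_b):
    # new_zeros allocates on boxes_a's device (e.g. cuda:1), whereas
    # torch.cuda.FloatTensor always allocates on the current device (cuda:0
    # unless changed), which is one way to hit illegal memory accesses.
    ans_iou = boxes_a.new_zeros((boxes_a.shape[0], boxes_b.shape[0]))
    iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),
                                 ans_iou)
    return ans_iou
```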
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet3d/ops/iou3d/iou3d_utils.py`
Content:
```
1 import torch
2
3 from . import iou3d_cuda
4
5
6 def boxes_iou_bev(boxes_a, boxes_b):
7 """
8 :param boxes_a: (M, 5)
9 :param boxes_b: (N, 5)
10 :return:
11 ans_iou: (M, N)
12 """
13
14 ans_iou = torch.cuda.FloatTensor(
15 torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()
16
17 iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),
18 ans_iou)
19
20 return ans_iou
21
22
23 def nms_gpu(boxes, scores, thresh):
24 """
25 :param boxes: (N, 5) [x1, y1, x2, y2, ry]
26 :param scores: (N)
27 :param thresh:
28 :return:
29 """
30 # areas = (x2 - x1) * (y2 - y1)
31 order = scores.sort(0, descending=True)[1]
32
33 boxes = boxes[order].contiguous()
34
35 keep = torch.LongTensor(boxes.size(0))
36 num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)
37 return order[keep[:num_out].cuda()].contiguous()
38
39
40 def nms_normal_gpu(boxes, scores, thresh):
41 """
42 :param boxes: (N, 5) [x1, y1, x2, y2, ry]
43 :param scores: (N)
44 :param thresh:
45 :return:
46 """
47 # areas = (x2 - x1) * (y2 - y1)
48 order = scores.sort(0, descending=True)[1]
49
50 boxes = boxes[order].contiguous()
51
52 keep = torch.LongTensor(boxes.size(0))
53 num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)
54 return order[keep[:num_out].cuda()].contiguous()
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmdet3d/ops/iou3d/iou3d_utils.py b/mmdet3d/ops/iou3d/iou3d_utils.py
--- a/mmdet3d/ops/iou3d/iou3d_utils.py
+++ b/mmdet3d/ops/iou3d/iou3d_utils.py
@@ -4,15 +4,17 @@
def boxes_iou_bev(boxes_a, boxes_b):
- """
- :param boxes_a: (M, 5)
- :param boxes_b: (N, 5)
- :return:
- ans_iou: (M, N)
- """
+ """Calculate boxes IoU in the bird view.
- ans_iou = torch.cuda.FloatTensor(
- torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()
+ Args:
+ boxes_a (torch.Tensor): Input boxes a with shape (M, 5).
+ boxes_b (torch.Tensor): Input boxes b with shape (N, 5).
+
+ Returns:
+ ans_iou (torch.Tensor): IoU result with shape (M, N).
+ """
+ ans_iou = boxes_a.new_zeros(
+ torch.Size((boxes_a.shape[0], boxes_b.shape[0])))
iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),
ans_iou)
@@ -21,34 +23,41 @@
def nms_gpu(boxes, scores, thresh):
+ """Non maximum suppression on GPU.
+
+ Args:
+ boxes (torch.Tensor): Input boxes with shape (N, 5).
+ scores (torch.Tensor): Scores of predicted boxes with shape (N).
+ thresh (torch.Tensor): Threshold of non maximum suppression.
+
+ Returns:
+ torch.Tensor: Remaining indices with scores in descending order.
"""
- :param boxes: (N, 5) [x1, y1, x2, y2, ry]
- :param scores: (N)
- :param thresh:
- :return:
- """
- # areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
- keep = torch.LongTensor(boxes.size(0))
- num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)
- return order[keep[:num_out].cuda()].contiguous()
+ keep = boxes.new_zeros(boxes.size(0))
+ num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)
+ return order[keep[:num_out].cuda(boxes.device)].contiguous()
def nms_normal_gpu(boxes, scores, thresh):
+ """Normal non maximum suppression on GPU.
+
+ Args:
+ boxes (torch.Tensor): Input boxes with shape (N, 5).
+ scores (torch.Tensor): Scores of predicted boxes with shape (N).
+ thresh (torch.Tensor): Threshold of non maximum suppression.
+
+ Returns:
+ torch.Tensor: Remaining indices with scores in descending order.
"""
- :param boxes: (N, 5) [x1, y1, x2, y2, ry]
- :param scores: (N)
- :param thresh:
- :return:
- """
- # areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
- keep = torch.LongTensor(boxes.size(0))
- num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)
- return order[keep[:num_out].cuda()].contiguous()
+ keep = boxes.new_zeros(boxes.size(0))
+ num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,
+ boxes.device.index)
+ return order[keep[:num_out].cuda(boxes.device)].contiguous()
| {"golden_diff": "diff --git a/mmdet3d/ops/iou3d/iou3d_utils.py b/mmdet3d/ops/iou3d/iou3d_utils.py\n--- a/mmdet3d/ops/iou3d/iou3d_utils.py\n+++ b/mmdet3d/ops/iou3d/iou3d_utils.py\n@@ -4,15 +4,17 @@\n \n \n def boxes_iou_bev(boxes_a, boxes_b):\n- \"\"\"\n- :param boxes_a: (M, 5)\n- :param boxes_b: (N, 5)\n- :return:\n- ans_iou: (M, N)\n- \"\"\"\n+ \"\"\"Calculate boxes IoU in the bird view.\n \n- ans_iou = torch.cuda.FloatTensor(\n- torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()\n+ Args:\n+ boxes_a (torch.Tensor): Input boxes a with shape (M, 5).\n+ boxes_b (torch.Tensor): Input boxes b with shape (N, 5).\n+\n+ Returns:\n+ ans_iou (torch.Tensor): IoU result with shape (M, N).\n+ \"\"\"\n+ ans_iou = boxes_a.new_zeros(\n+ torch.Size((boxes_a.shape[0], boxes_b.shape[0])))\n \n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n@@ -21,34 +23,41 @@\n \n \n def nms_gpu(boxes, scores, thresh):\n+ \"\"\"Non maximum suppression on GPU.\n+\n+ Args:\n+ boxes (torch.Tensor): Input boxes with shape (N, 5).\n+ scores (torch.Tensor): Scores of predicted boxes with shape (N).\n+ thresh (torch.Tensor): Threshold of non maximum suppression.\n+\n+ Returns:\n+ torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n- :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n- :param scores: (N)\n- :param thresh:\n- :return:\n- \"\"\"\n- # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n \n boxes = boxes[order].contiguous()\n \n- keep = torch.LongTensor(boxes.size(0))\n- num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)\n- return order[keep[:num_out].cuda()].contiguous()\n+ keep = boxes.new_zeros(boxes.size(0))\n+ num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)\n+ return order[keep[:num_out].cuda(boxes.device)].contiguous()\n \n \n def nms_normal_gpu(boxes, scores, thresh):\n+ \"\"\"Normal non maximum suppression on GPU.\n+\n+ Args:\n+ boxes (torch.Tensor): Input boxes with shape (N, 5).\n+ scores (torch.Tensor): Scores of predicted boxes with shape (N).\n+ thresh (torch.Tensor): Threshold of non maximum suppression.\n+\n+ Returns:\n+ torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n- :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n- :param scores: (N)\n- :param thresh:\n- :return:\n- \"\"\"\n- # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n \n boxes = boxes[order].contiguous()\n \n- keep = torch.LongTensor(boxes.size(0))\n- num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)\n- return order[keep[:num_out].cuda()].contiguous()\n+ keep = boxes.new_zeros(boxes.size(0))\n+ num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,\n+ boxes.device.index)\n+ return order[keep[:num_out].cuda(boxes.device)].contiguous()\n", "issue": "iou3d failed when inference with gpu:1\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n1. I have searched related issues but cannot get the expected help.\r\n2. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\nTraining on single GPU, when using default gpu (gpu:0) , everything is ok. \r\nSwitch to gpu:1, report `an illegal memory access was encountered mmdet3d/ops/iou3d/src/iou3d.cpp 121` during inference, however training is ok.\r\n\r\n**Reproduction**\r\n1. What command or script did you run?\r\n```\r\npython tools/train.py CONFIG_PATH --gpu-ids 1\r\n```\r\n2. Did you make any modifications on the code or config? Did you understand what you have modified?\r\n3. 
What dataset did you use?\r\n- kitti\r\n\r\n**Environment**\r\n\r\n1. Please run `python mmdet3d/utils/collect_env.py` to collect necessary environment infomation and paste it here.\r\n2. You may add addition that may be helpful for locating the problem, such as\r\n - How you installed PyTorch [e.g., pip, conda, source]\r\n - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)\r\n\r\n**Error traceback**\r\nIf applicable, paste the error trackback here.\r\n```\r\nA placeholder for trackback.\r\n```\r\n\r\n**Bug fix**\r\nIf you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!\r\n\n", "before_files": [{"content": "import torch\n\nfrom . import iou3d_cuda\n\n\ndef boxes_iou_bev(boxes_a, boxes_b):\n \"\"\"\n :param boxes_a: (M, 5)\n :param boxes_b: (N, 5)\n :return:\n ans_iou: (M, N)\n \"\"\"\n\n ans_iou = torch.cuda.FloatTensor(\n torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()\n\n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n\n return ans_iou\n\n\ndef nms_gpu(boxes, scores, thresh):\n \"\"\"\n :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n :param scores: (N)\n :param thresh:\n :return:\n \"\"\"\n # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = torch.LongTensor(boxes.size(0))\n num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)\n return order[keep[:num_out].cuda()].contiguous()\n\n\ndef nms_normal_gpu(boxes, scores, thresh):\n \"\"\"\n :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n :param scores: (N)\n :param thresh:\n :return:\n \"\"\"\n # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = torch.LongTensor(boxes.size(0))\n num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)\n return order[keep[:num_out].cuda()].contiguous()\n", "path": "mmdet3d/ops/iou3d/iou3d_utils.py"}], "after_files": [{"content": "import torch\n\nfrom . 
import iou3d_cuda\n\n\ndef boxes_iou_bev(boxes_a, boxes_b):\n \"\"\"Calculate boxes IoU in the bird view.\n\n Args:\n boxes_a (torch.Tensor): Input boxes a with shape (M, 5).\n boxes_b (torch.Tensor): Input boxes b with shape (N, 5).\n\n Returns:\n ans_iou (torch.Tensor): IoU result with shape (M, N).\n \"\"\"\n ans_iou = boxes_a.new_zeros(\n torch.Size((boxes_a.shape[0], boxes_b.shape[0])))\n\n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n\n return ans_iou\n\n\ndef nms_gpu(boxes, scores, thresh):\n \"\"\"Non maximum suppression on GPU.\n\n Args:\n boxes (torch.Tensor): Input boxes with shape (N, 5).\n scores (torch.Tensor): Scores of predicted boxes with shape (N).\n thresh (torch.Tensor): Threshold of non maximum suppression.\n\n Returns:\n torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = boxes.new_zeros(boxes.size(0))\n num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)\n return order[keep[:num_out].cuda(boxes.device)].contiguous()\n\n\ndef nms_normal_gpu(boxes, scores, thresh):\n \"\"\"Normal non maximum suppression on GPU.\n\n Args:\n boxes (torch.Tensor): Input boxes with shape (N, 5).\n scores (torch.Tensor): Scores of predicted boxes with shape (N).\n thresh (torch.Tensor): Threshold of non maximum suppression.\n\n Returns:\n torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = boxes.new_zeros(boxes.size(0))\n num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,\n boxes.device.index)\n return order[keep[:num_out].cuda(boxes.device)].contiguous()\n", "path": "mmdet3d/ops/iou3d/iou3d_utils.py"}]} | 1,162 | 924 |
gh_patches_debug_5988 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4916 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
#6963 Too many codes in 1 package
URL: https://meinberlin-dev.liqd.net/dashboard/modules/burgerinnenhaushalt-3-phasen-21/download-codes/
user: admin, initiator
expected behaviour: Each code-package should contain a max. of 1.000.000 codes. ~~The wording of the helptext should have also the right number of 1.000.000 codes per package as each package should contain a maximum of 1.000.000 codes per excel-file.~~
behaviour: ~~the number in the wording of the helptext is "10.000.000" and~~ the packages can contain more than 1.000.000 codes.
important screensize: -
device & browser: mac ff
Comment/Question: I tried generating two million codes and they were all put into a single code package. I also couldn't download the package, probably because it was too big.
Linked: https://github.com/liqd/a4-meinberlin/issues/4907
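
For illustration, a sketch of how the generation count could be chunked so that no package exceeds the limit (constants and names here are assumptions, not the project's actual code):

```python
PACKAGE_SIZE = 1_000_000  # max codes per downloadable package


def split_into_packages(total_codes, package_size=PACKAGE_SIZE):
    """Yield (package_number, count) pairs, none larger than package_size."""
    package_number = 0
    while total_codes > 0:
        count = min(total_codes, package_size)
        yield package_number, count
        total_codes -= count
        package_number += 1


# e.g. list(split_into_packages(2_000_000)) == [(0, 1000000), (1, 1000000)]
```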
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/votes/tasks.py`
Content:
```
1 from background_task import background
2
3 from adhocracy4.modules.models import Module
4 from meinberlin.apps.votes.models import VotingToken
5 from meinberlin.apps.votes.models import get_token_12
6
7 # Number of tokens to insert into database per bulk_create
8 BATCH_SIZE = 1000000
9 # Max number of tokens in one download / package
10 PACKAGE_SIZE = 10000000
11
12
13 def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):
14 module = Module.objects.get(pk=module_id)
15 package_number = VotingToken.next_package_number(module)
16 module_name = module.name
17 project_id = module.project.id
18 project_name = module.project.name
19
20 number_to_generate = number_of_tokens
21 package_number_limit = 0
22 if number_of_tokens > PACKAGE_SIZE:
23 package_number_limit = number_of_tokens - PACKAGE_SIZE
24 while number_to_generate > 0:
25 if number_to_generate >= BATCH_SIZE:
26 generate_voting_tokens_batch(
27 module_id,
28 BATCH_SIZE,
29 package_number,
30 number_of_tokens,
31 module_name,
32 project_id,
33 project_name,
34 existing_tokens,
35 )
36 number_to_generate = number_to_generate - BATCH_SIZE
37 else:
38 generate_voting_tokens_batch(
39 module_id,
40 number_to_generate,
41 package_number,
42 number_of_tokens,
43 module_name,
44 project_id,
45 project_name,
46 existing_tokens,
47 )
48 number_to_generate = 0
49 if package_number_limit >= number_to_generate:
50 package_number += 1
51 package_number_limit - PACKAGE_SIZE
52
53
54 @background(schedule=1)
55 def generate_voting_tokens_batch(
56 module_id,
57 batch_size,
58 package_number,
59 number_of_tokens,
60 module_name,
61 project_id,
62 project_name,
63 existing_tokens,
64 ):
65 module = Module.objects.get(pk=module_id)
66 VotingToken.objects.bulk_create(
67 [get_token_and_hash(module, package_number) for i in range(batch_size)]
68 )
69
70
71 def get_token_and_hash(module, package_number):
72 token = get_token_12()
73 token_hash = VotingToken.hash_token(token, module)
74 return VotingToken(
75 token=token, token_hash=token_hash, module=module, package_number=package_number
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/votes/tasks.py b/meinberlin/apps/votes/tasks.py
--- a/meinberlin/apps/votes/tasks.py
+++ b/meinberlin/apps/votes/tasks.py
@@ -5,9 +5,9 @@
from meinberlin.apps.votes.models import get_token_12
# Number of tokens to insert into database per bulk_create
-BATCH_SIZE = 1000000
+BATCH_SIZE = 100000
# Max number of tokens in one download / package
-PACKAGE_SIZE = 10000000
+PACKAGE_SIZE = 1000000
def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):
| {"golden_diff": "diff --git a/meinberlin/apps/votes/tasks.py b/meinberlin/apps/votes/tasks.py\n--- a/meinberlin/apps/votes/tasks.py\n+++ b/meinberlin/apps/votes/tasks.py\n@@ -5,9 +5,9 @@\n from meinberlin.apps.votes.models import get_token_12\n \n # Number of tokens to insert into database per bulk_create\n-BATCH_SIZE = 1000000\n+BATCH_SIZE = 100000\n # Max number of tokens in one download / package\n-PACKAGE_SIZE = 10000000\n+PACKAGE_SIZE = 1000000\n \n \n def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n", "issue": "#6963 Too many codes in 1 package\nURL: https://meinberlin-dev.liqd.net/dashboard/modules/burgerinnenhaushalt-3-phasen-21/download-codes/\r\nuser: admin, initiator\r\nexpected behaviour: Each code-package should contain a max. of 1.000.000 codes. ~~The wording of the helptext should have also the right number of 1.000.000 codes per package as each package should contain a maximum of 1.000.000 codes per excel-file.~~\r\nbehaviour: ~~the number in the wording of the helptext is \"10.000.000\" and~~ the packages can contain more than 1.000.000 codes.\r\nimportant screensize: -\r\ndevice & browser: mac ff\r\nComment/Question: I tried it with generating two mill codes and the codes were put in only one code-package. I also couldn't download the package probably because it was too big.\r\n\r\nLinked: https://github.com/liqd/a4-meinberlin/issues/4907\r\n\n", "before_files": [{"content": "from background_task import background\n\nfrom adhocracy4.modules.models import Module\nfrom meinberlin.apps.votes.models import VotingToken\nfrom meinberlin.apps.votes.models import get_token_12\n\n# Number of tokens to insert into database per bulk_create\nBATCH_SIZE = 1000000\n# Max number of tokens in one download / package\nPACKAGE_SIZE = 10000000\n\n\ndef generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n module = Module.objects.get(pk=module_id)\n package_number = VotingToken.next_package_number(module)\n module_name = module.name\n project_id = module.project.id\n project_name = module.project.name\n\n number_to_generate = number_of_tokens\n package_number_limit = 0\n if number_of_tokens > PACKAGE_SIZE:\n package_number_limit = number_of_tokens - PACKAGE_SIZE\n while number_to_generate > 0:\n if number_to_generate >= BATCH_SIZE:\n generate_voting_tokens_batch(\n module_id,\n BATCH_SIZE,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = number_to_generate - BATCH_SIZE\n else:\n generate_voting_tokens_batch(\n module_id,\n number_to_generate,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = 0\n if package_number_limit >= number_to_generate:\n package_number += 1\n package_number_limit - PACKAGE_SIZE\n\n\n@background(schedule=1)\ndef generate_voting_tokens_batch(\n module_id,\n batch_size,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n):\n module = Module.objects.get(pk=module_id)\n VotingToken.objects.bulk_create(\n [get_token_and_hash(module, package_number) for i in range(batch_size)]\n )\n\n\ndef get_token_and_hash(module, package_number):\n token = get_token_12()\n token_hash = VotingToken.hash_token(token, module)\n return VotingToken(\n token=token, token_hash=token_hash, module=module, package_number=package_number\n )\n", "path": "meinberlin/apps/votes/tasks.py"}], "after_files": [{"content": "from background_task import 
background\n\nfrom adhocracy4.modules.models import Module\nfrom meinberlin.apps.votes.models import VotingToken\nfrom meinberlin.apps.votes.models import get_token_12\n\n# Number of tokens to insert into database per bulk_create\nBATCH_SIZE = 100000\n# Max number of tokens in one download / package\nPACKAGE_SIZE = 1000000\n\n\ndef generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n module = Module.objects.get(pk=module_id)\n package_number = VotingToken.next_package_number(module)\n module_name = module.name\n project_id = module.project.id\n project_name = module.project.name\n\n number_to_generate = number_of_tokens\n package_number_limit = 0\n if number_of_tokens > PACKAGE_SIZE:\n package_number_limit = number_of_tokens - PACKAGE_SIZE\n while number_to_generate > 0:\n if number_to_generate >= BATCH_SIZE:\n generate_voting_tokens_batch(\n module_id,\n BATCH_SIZE,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = number_to_generate - BATCH_SIZE\n else:\n generate_voting_tokens_batch(\n module_id,\n number_to_generate,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = 0\n if package_number_limit >= number_to_generate:\n package_number += 1\n package_number_limit - PACKAGE_SIZE\n\n\n@background(schedule=1)\ndef generate_voting_tokens_batch(\n module_id,\n batch_size,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n):\n module = Module.objects.get(pk=module_id)\n VotingToken.objects.bulk_create(\n [get_token_and_hash(module, package_number) for i in range(batch_size)]\n )\n\n\ndef get_token_and_hash(module, package_number):\n token = get_token_12()\n token_hash = VotingToken.hash_token(token, module)\n return VotingToken(\n token=token, token_hash=token_hash, module=module, package_number=package_number\n )\n", "path": "meinberlin/apps/votes/tasks.py"}]} | 1,148 | 165 |
gh_patches_debug_17265 | rasdani/github-patches | git_diff | netbox-community__netbox-2694 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add "White" as a cable color
### Environment
* Python version: 3.6
* NetBox version: 2.5.1
### Proposed Functionality
Add color white to the cable colors.
Optionally add:
* ~~slate~~(Dark Grey works, almost identical color)
* rose
* ~~violet~~ (Fuschia works, almost identical color)
* aqua
### Use Case
These fiber strand colors are missing
### Database Changes
None
### External Dependencies
None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/utilities/constants.py`
Content:
```
1 COLOR_CHOICES = (
2 ('aa1409', 'Dark red'),
3 ('f44336', 'Red'),
4 ('e91e63', 'Pink'),
5 ('ff66ff', 'Fuschia'),
6 ('9c27b0', 'Purple'),
7 ('673ab7', 'Dark purple'),
8 ('3f51b5', 'Indigo'),
9 ('2196f3', 'Blue'),
10 ('03a9f4', 'Light blue'),
11 ('00bcd4', 'Cyan'),
12 ('009688', 'Teal'),
13 ('2f6a31', 'Dark green'),
14 ('4caf50', 'Green'),
15 ('8bc34a', 'Light green'),
16 ('cddc39', 'Lime'),
17 ('ffeb3b', 'Yellow'),
18 ('ffc107', 'Amber'),
19 ('ff9800', 'Orange'),
20 ('ff5722', 'Dark orange'),
21 ('795548', 'Brown'),
22 ('c0c0c0', 'Light grey'),
23 ('9e9e9e', 'Grey'),
24 ('607d8b', 'Dark grey'),
25 ('111111', 'Black'),
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/utilities/constants.py b/netbox/utilities/constants.py
--- a/netbox/utilities/constants.py
+++ b/netbox/utilities/constants.py
@@ -2,6 +2,7 @@
('aa1409', 'Dark red'),
('f44336', 'Red'),
('e91e63', 'Pink'),
+ ('ffe4e1', 'Rose'),
('ff66ff', 'Fuschia'),
('9c27b0', 'Purple'),
('673ab7', 'Dark purple'),
@@ -10,6 +11,7 @@
('03a9f4', 'Light blue'),
('00bcd4', 'Cyan'),
('009688', 'Teal'),
+ ('00ffff', 'Aqua'),
('2f6a31', 'Dark green'),
('4caf50', 'Green'),
('8bc34a', 'Light green'),
@@ -23,4 +25,5 @@
('9e9e9e', 'Grey'),
('607d8b', 'Dark grey'),
('111111', 'Black'),
+ ('ffffff', 'White'),
)
| {"golden_diff": "diff --git a/netbox/utilities/constants.py b/netbox/utilities/constants.py\n--- a/netbox/utilities/constants.py\n+++ b/netbox/utilities/constants.py\n@@ -2,6 +2,7 @@\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n+ ('ffe4e1', 'Rose'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n@@ -10,6 +11,7 @@\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n+ ('00ffff', 'Aqua'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n@@ -23,4 +25,5 @@\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n+ ('ffffff', 'White'),\n )\n", "issue": "Add \"White\" as a cable color\n### Environment\r\n* Python version: 3.6\r\n* NetBox version: 2.5.1\r\n\r\n### Proposed Functionality\r\n\r\nAdd color white to the cable colors.\r\n\r\nOptionally add:\r\n\r\n* ~~slate~~(Dark Grey works, almost identical color)\r\n* rose\r\n* ~~violet~~ (Fuschia works, almost identical color)\r\n* aqua\r\n\r\n### Use Case\r\n\r\nThese fiber strand colors are missing\r\n\r\n### Database Changes\r\n\r\nNone\r\n\r\n### External Dependencies\r\n\r\nNone\n", "before_files": [{"content": "COLOR_CHOICES = (\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n ('3f51b5', 'Indigo'),\n ('2196f3', 'Blue'),\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n ('cddc39', 'Lime'),\n ('ffeb3b', 'Yellow'),\n ('ffc107', 'Amber'),\n ('ff9800', 'Orange'),\n ('ff5722', 'Dark orange'),\n ('795548', 'Brown'),\n ('c0c0c0', 'Light grey'),\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n)\n", "path": "netbox/utilities/constants.py"}], "after_files": [{"content": "COLOR_CHOICES = (\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n ('ffe4e1', 'Rose'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n ('3f51b5', 'Indigo'),\n ('2196f3', 'Blue'),\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n ('00ffff', 'Aqua'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n ('cddc39', 'Lime'),\n ('ffeb3b', 'Yellow'),\n ('ffc107', 'Amber'),\n ('ff9800', 'Orange'),\n ('ff5722', 'Dark orange'),\n ('795548', 'Brown'),\n ('c0c0c0', 'Light grey'),\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n ('ffffff', 'White'),\n)\n", "path": "netbox/utilities/constants.py"}]} | 708 | 282 |
gh_patches_debug_4763 | rasdani/github-patches | git_diff | pytorch__ignite-3199 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mean Absolute Percentage Error (MAPE)
## 🚀 Feature
I'd like to implement the mean absolute percentage error [(MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) in `ignite/metrics`.
It is a commonly used metric for regression problems and it would be really convenient to be able to use it directly with ignite evaluators.
For that, I would write a custom Metric class in a new file `mean_absolute_percentage_error.py` inheriting from the base `Metric` class in `ignite/metrics/metric.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/metrics/regression/mean_absolute_relative_error.py`
Content:
```
1 from typing import Tuple
2
3 import torch
4
5 from ignite.contrib.metrics.regression._base import _BaseRegression
6 from ignite.exceptions import NotComputableError
7 from ignite.metrics.metric import reinit__is_reduced, sync_all_reduce
8
9
10 class MeanAbsoluteRelativeError(_BaseRegression):
11 r"""Calculate Mean Absolute Relative Error.
12
13 .. math::
14 \text{MARE} = \frac{1}{n}\sum_{j=1}^n\frac{\left|A_j-P_j\right|}{\left|A_j\right|}
15
16 where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.
17
18 More details can be found in the reference `Botchkarev 2018`__.
19
20 - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
21 - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.
22
23 __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf
24
25 Parameters are inherited from ``Metric.__init__``.
26
27 Args:
28 output_transform: a callable that is used to transform the
29 :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
30 form expected by the metric. This can be useful if, for example, you have a multi-output model and
31 you want to compute the metric with respect to one of the outputs.
32 By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
33 device: specifies which device updates are accumulated on. Setting the
34 metric's device to be the same as your ``update`` arguments ensures the ``update`` method is
35 non-blocking. By default, CPU.
36
37 Examples:
38 To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.
39 The output of the engine's ``process_function`` needs to be in format of
40 ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.
41
42 .. include:: defaults.rst
43 :start-after: :orphan:
44
45 .. testcode::
46
47 metric = MeanAbsoluteRelativeError()
48 metric.attach(default_evaluator, 'mare')
49 y_true = torch.tensor([1., 2., 3., 4., 5.])
50 y_pred = y_true * 0.75
51 state = default_evaluator.run([[y_pred, y_true]])
52 print(state.metrics['mare'])
53
54 .. testoutput::
55
56 0.25...
57
58 .. versionchanged:: 0.4.5
59 - Works with DDP.
60 """
61 _state_dict_all_req_keys = ("_sum_of_absolute_relative_errors", "_num_samples")
62
63 @reinit__is_reduced
64 def reset(self) -> None:
65 self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)
66 self._num_samples = 0
67
68 def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:
69 y_pred, y = output[0].detach(), output[1].detach()
70 if (y == 0).any():
71 raise NotComputableError("The ground truth has 0.")
72 absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))
73 self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)
74 self._num_samples += y.size()[0]
75
76 @sync_all_reduce("_sum_of_absolute_relative_errors", "_num_samples")
77 def compute(self) -> float:
78 if self._num_samples == 0:
79 raise NotComputableError(
80 "MeanAbsoluteRelativeError must have at least one sample before it can be computed."
81 )
82 return self._sum_of_absolute_relative_errors.item() / self._num_samples
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
--- a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
+++ b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
@@ -8,7 +8,7 @@
class MeanAbsoluteRelativeError(_BaseRegression):
- r"""Calculate Mean Absolute Relative Error.
+ r"""Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).
.. math::
\text{MARE} = \frac{1}{n}\sum_{j=1}^n\frac{\left|A_j-P_j\right|}{\left|A_j\right|}
| {"golden_diff": "diff --git a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n--- a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n+++ b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n@@ -8,7 +8,7 @@\n \n \n class MeanAbsoluteRelativeError(_BaseRegression):\n- r\"\"\"Calculate Mean Absolute Relative Error.\n+ r\"\"\"Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).\n \n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n", "issue": "Mean Absolute Percentage Error (MAPE)\n## \ud83d\ude80 Feature\r\n\r\nI'd like to implement the mean absolute percentage error [(MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) in `ignite/metrics`.\r\n\r\nIt is a commonly used metric for regression problems and it would be really convenient to be able to use it directly with ignite evaluators.\r\n\r\nFor that, I would write a custom Metric class in a new file `mean_absolute_percentage_error.py` inheriting from the base `Metric` class in `ignite/metrics/metric.py`.\r\n\n", "before_files": [{"content": "from typing import Tuple\n\nimport torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import reinit__is_reduced, sync_all_reduce\n\n\nclass MeanAbsoluteRelativeError(_BaseRegression):\n r\"\"\"Calculate Mean Absolute Relative Error.\n\n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n\n where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.\n\n More details can be found in the reference `Botchkarev 2018`__.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.\n\n __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf\n\n Parameters are inherited from ``Metric.__init__``.\n\n Args:\n output_transform: a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n device: specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. By default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. testcode::\n\n metric = MeanAbsoluteRelativeError()\n metric.attach(default_evaluator, 'mare')\n y_true = torch.tensor([1., 2., 3., 4., 5.])\n y_pred = y_true * 0.75\n state = default_evaluator.run([[y_pred, y_true]])\n print(state.metrics['mare'])\n\n .. testoutput::\n\n 0.25...\n\n .. 
versionchanged:: 0.4.5\n - Works with DDP.\n \"\"\"\n _state_dict_all_req_keys = (\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)\n self._num_samples = 0\n\n def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:\n y_pred, y = output[0].detach(), output[1].detach()\n if (y == 0).any():\n raise NotComputableError(\"The ground truth has 0.\")\n absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))\n self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)\n self._num_samples += y.size()[0]\n\n @sync_all_reduce(\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n def compute(self) -> float:\n if self._num_samples == 0:\n raise NotComputableError(\n \"MeanAbsoluteRelativeError must have at least one sample before it can be computed.\"\n )\n return self._sum_of_absolute_relative_errors.item() / self._num_samples\n", "path": "ignite/contrib/metrics/regression/mean_absolute_relative_error.py"}], "after_files": [{"content": "from typing import Tuple\n\nimport torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import reinit__is_reduced, sync_all_reduce\n\n\nclass MeanAbsoluteRelativeError(_BaseRegression):\n r\"\"\"Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).\n\n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n\n where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.\n\n More details can be found in the reference `Botchkarev 2018`__.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.\n\n __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf\n\n Parameters are inherited from ``Metric.__init__``.\n\n Args:\n output_transform: a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n device: specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. By default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. testcode::\n\n metric = MeanAbsoluteRelativeError()\n metric.attach(default_evaluator, 'mare')\n y_true = torch.tensor([1., 2., 3., 4., 5.])\n y_pred = y_true * 0.75\n state = default_evaluator.run([[y_pred, y_true]])\n print(state.metrics['mare'])\n\n .. testoutput::\n\n 0.25...\n\n .. 
versionchanged:: 0.4.5\n - Works with DDP.\n \"\"\"\n _state_dict_all_req_keys = (\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)\n self._num_samples = 0\n\n def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:\n y_pred, y = output[0].detach(), output[1].detach()\n if (y == 0).any():\n raise NotComputableError(\"The ground truth has 0.\")\n absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))\n self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)\n self._num_samples += y.size()[0]\n\n @sync_all_reduce(\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n def compute(self) -> float:\n if self._num_samples == 0:\n raise NotComputableError(\n \"MeanAbsoluteRelativeError must have at least one sample before it can be computed.\"\n )\n return self._sum_of_absolute_relative_errors.item() / self._num_samples\n", "path": "ignite/contrib/metrics/regression/mean_absolute_relative_error.py"}]} | 1,432 | 171 |
gh_patches_debug_26538 | rasdani/github-patches | git_diff | speechbrain__speechbrain-304 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stats precision of FileTrainLogger
Currently, all stats logged by a FileTrainLogger are written with a precision of 2 after the decimal point. In some training scenarios, a precision of 2 is not enough for some stats. I suggest allowing users to choose the precision for each stat, or raising the precision to 4 or 5 uniformly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `speechbrain/utils/train_logger.py`
Content:
```
1 """
2 Loggers for experiment monitoring
3
4 Authors
5 * Peter Plantinga 2020
6 """
7 import logging
8 from speechbrain.utils.edit_distance import wer_summary
9
10 logger = logging.getLogger(__name__)
11
12
13 class TrainLogger:
14 """Abstract class defining an interface for training loggers."""
15
16 def log_stats(
17 self,
18 stats_meta,
19 train_stats=None,
20 valid_stats=None,
21 test_stats=None,
22 verbose=False,
23 ):
24 """Log the stats for one epoch.
25
26 Arguments
27 ---------
28 stats_meta : dict of str:scalar pairs
29 Meta information about the stats (e.g. epoch, learning-rate, etc.)
30 train_stats : dict of str:list pairs
31 Each loss type is represented with a str : list pair including
32 all the values for the training pass.
33 valid_stats : dict of str:list pairs
34 Each loss type is represented with a str : list pair including
35 all the values for the validation pass.
36 test_stats : dict of str:list pairs
37 Each loss type is represented with a str : list pair including
38 all the values for the test pass.
39 verbose : bool
40 Whether to also put logging information to the standard logger.
41 """
42 raise NotImplementedError
43
44
45 class FileTrainLogger(TrainLogger):
46 """Text logger of training information
47
48 Arguments
49 ---------
50 save_file : str
51 The file to use for logging train information.
52 summary_fns : dict of str:function pairs
53 Each summary function should take a list produced as output
54 from a training/validation pass and summarize it to a single scalar.
55 """
56
57 def __init__(self, save_file, summary_fns=None):
58 self.save_file = save_file
59 self.summary_fns = summary_fns or {}
60
61 def _item_to_string(self, key, value, dataset=None):
62 """Convert one item to string, handling floats"""
63 if isinstance(value, float) and 0.01 < value < 100.0:
64 value = f"{value:.2f}"
65 elif isinstance(value, float):
66 value = f"{value:.2e}"
67 if dataset is not None:
68 key = f"{dataset} {key}"
69 return f"{key}: {value}"
70
71 def _stats_to_string(self, stats, dataset=None):
72 """Convert all stats to a single string summary"""
73 return ", ".join(
74 [self._item_to_string(k, v, dataset) for k, v in stats.items()]
75 )
76
77 def log_stats(
78 self,
79 stats_meta,
80 train_stats=None,
81 valid_stats=None,
82 test_stats=None,
83 verbose=True,
84 ):
85 """See TrainLogger.log_stats()"""
86 string_summary = self._stats_to_string(stats_meta)
87 for dataset, stats in [
88 ("train", train_stats),
89 ("valid", valid_stats),
90 ("test", test_stats),
91 ]:
92 if stats is None:
93 continue
94 summary = {}
95 for stat, value_list in stats.items():
96 if stat in self.summary_fns:
97 summary[stat] = self.summary_fns[stat](value_list)
98 else:
99 summary[stat] = summarize_average(value_list)
100 string_summary += " - " + self._stats_to_string(summary, dataset)
101
102 with open(self.save_file, "a") as fout:
103 print(string_summary, file=fout)
104 if verbose:
105 logger.info(string_summary)
106
107
108 class TensorboardLogger(TrainLogger):
109 """Logs training information in the format required by Tensorboard.
110
111 Arguments
112 ---------
113 save_dir : str
114 A directory for storing all the relevant logs
115
116 Raises
117 ------
118 ImportError if Tensorboard is not installed.
119 """
120
121 def __init__(self, save_dir):
122 self.save_dir = save_dir
123
124 # Raises ImportError if TensorBoard is not installed
125 from torch.utils.tensorboard import SummaryWriter
126
127 self.writer = SummaryWriter(self.save_dir)
128 self.global_step = {"train": {}, "valid": {}, "meta": 0}
129
130 def log_stats(
131 self,
132 stats_meta,
133 train_stats=None,
134 valid_stats=None,
135 test_stats=None,
136 verbose=False,
137 ):
138 """See TrainLogger.log_stats()"""
139 self.global_step["meta"] += 1
140 for name, value in stats_meta.items():
141 self.writer.add_scalar(name, value, self.global_step["meta"])
142
143 for dataset, stats in [
144 ("train", train_stats),
145 ("valid", valid_stats),
146 ("test", test_stats),
147 ]:
148 if stats is None:
149 continue
150 for stat, value_list in stats.items():
151 if stat not in self.global_step[dataset]:
152 self.global_step[dataset][stat] = 0
153 tag = f"{stat}/{dataset}"
154 for value in value_list:
155 new_global_step = self.global_step[dataset][stat] + 1
156 self.writer.add_scalar(tag, value, new_global_step)
157 self.global_step[dataset][stat] = new_global_step
158
159
160 def summarize_average(stat_list):
161 return float(sum(stat_list) / len(stat_list))
162
163
164 def summarize_error_rate(stat_list):
165 summary = wer_summary(stat_list)
166 return summary["WER"]
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py
--- a/speechbrain/utils/train_logger.py
+++ b/speechbrain/utils/train_logger.py
@@ -49,21 +49,24 @@
---------
save_file : str
The file to use for logging train information.
+ precision : int
+ Number of decimal places to display. Default 2, example: 1.35e-5
summary_fns : dict of str:function pairs
Each summary function should take a list produced as output
from a training/validation pass and summarize it to a single scalar.
"""
- def __init__(self, save_file, summary_fns=None):
+ def __init__(self, save_file, precision=2, summary_fns=None):
self.save_file = save_file
+ self.precision = precision
self.summary_fns = summary_fns or {}
def _item_to_string(self, key, value, dataset=None):
"""Convert one item to string, handling floats"""
- if isinstance(value, float) and 0.01 < value < 100.0:
- value = f"{value:.2f}"
+ if isinstance(value, float) and 1.0 < value < 100.0:
+ value = f"{value:.{self.precision}f}"
elif isinstance(value, float):
- value = f"{value:.2e}"
+ value = f"{value:.{self.precision}e}"
if dataset is not None:
key = f"{dataset} {key}"
return f"{key}: {value}"
| {"golden_diff": "diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py\n--- a/speechbrain/utils/train_logger.py\n+++ b/speechbrain/utils/train_logger.py\n@@ -49,21 +49,24 @@\n ---------\n save_file : str\n The file to use for logging train information.\n+ precision : int\n+ Number of decimal places to display. Default 2, example: 1.35e-5\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n \n- def __init__(self, save_file, summary_fns=None):\n+ def __init__(self, save_file, precision=2, summary_fns=None):\n self.save_file = save_file\n+ self.precision = precision\n self.summary_fns = summary_fns or {}\n \n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n- if isinstance(value, float) and 0.01 < value < 100.0:\n- value = f\"{value:.2f}\"\n+ if isinstance(value, float) and 1.0 < value < 100.0:\n+ value = f\"{value:.{self.precision}f}\"\n elif isinstance(value, float):\n- value = f\"{value:.2e}\"\n+ value = f\"{value:.{self.precision}e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n", "issue": "Stats precision of FileTrainLogger\nNow, all the stats logged by a FileTrainLogger have the precision 2 after their decimal points. In some training scenarios, precision 2 is not enough for some stats. I suggest allowing users to decide precision for each stats or adding precision number to 4 or 5 uniformly.\n", "before_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. 
epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, summary_fns=None):\n self.save_file = save_file\n self.summary_fns = summary_fns or {}\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 0.01 < value < 100.0:\n value = f\"{value:.2f}\"\n elif isinstance(value, float):\n value = f\"{value:.2e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n if stat in self.summary_fns:\n summary[stat] = self.summary_fns[stat](value_list)\n else:\n summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in 
value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}], "after_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n precision : int\n Number of decimal places to display. 
Default 2, example: 1.35e-5\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, precision=2, summary_fns=None):\n self.save_file = save_file\n self.precision = precision\n self.summary_fns = summary_fns or {}\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 1.0 < value < 100.0:\n value = f\"{value:.{self.precision}f}\"\n elif isinstance(value, float):\n value = f\"{value:.{self.precision}e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n if stat in self.summary_fns:\n summary[stat] = self.summary_fns[stat](value_list)\n else:\n summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}]} | 1,842 | 362 |
gh_patches_debug_1757 | rasdani/github-patches | git_diff | mne-tools__mne-bids-1156 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MNE-BIDS 0.13 release
A release of MNE-BIDS has been requested: https://mne.discourse.group/t/mne-bids-0-13-release-date/7291/2
Our last release was in December 2022, so I feel cutting a release now is reasonable.
I'll migrate issues from the [0.13 milestone](https://github.com/mne-tools/mne-bids/milestone/14) to a new 0.14 milestone.
Please comment here if you need some particular thing to be fixed before the release.
cc @agramfort @hoechenberger @larsoner
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mne_bids/__init__.py`
Content:
```
1 """MNE software for easily interacting with BIDS compatible datasets."""
2
3 __version__ = "0.13.dev0"
4 from mne_bids import commands
5 from mne_bids.report import make_report
6 from mne_bids.path import (
7 BIDSPath,
8 get_datatypes,
9 get_entity_vals,
10 print_dir_tree,
11 get_entities_from_fname,
12 search_folder_for_text,
13 get_bids_path_from_fname,
14 find_matching_paths,
15 )
16 from mne_bids.read import get_head_mri_trans, read_raw_bids
17 from mne_bids.utils import get_anonymization_daysback
18 from mne_bids.write import (
19 make_dataset_description,
20 write_anat,
21 write_raw_bids,
22 mark_channels,
23 write_meg_calibration,
24 write_meg_crosstalk,
25 get_anat_landmarks,
26 anonymize_dataset,
27 )
28 from mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks
29 from mne_bids.inspect import inspect_dataset
30 from mne_bids.dig import (
31 template_to_head,
32 convert_montage_to_ras,
33 convert_montage_to_mri,
34 )
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mne_bids/__init__.py b/mne_bids/__init__.py
--- a/mne_bids/__init__.py
+++ b/mne_bids/__init__.py
@@ -1,6 +1,6 @@
"""MNE software for easily interacting with BIDS compatible datasets."""
-__version__ = "0.13.dev0"
+__version__ = "0.13"
from mne_bids import commands
from mne_bids.report import make_report
from mne_bids.path import (
| {"golden_diff": "diff --git a/mne_bids/__init__.py b/mne_bids/__init__.py\n--- a/mne_bids/__init__.py\n+++ b/mne_bids/__init__.py\n@@ -1,6 +1,6 @@\n \"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n \n-__version__ = \"0.13.dev0\"\n+__version__ = \"0.13\"\n from mne_bids import commands\n from mne_bids.report import make_report\n from mne_bids.path import (\n", "issue": "MNE-BIDS 0.13 release\nA release of MNE-BIDS has been requested: https://mne.discourse.group/t/mne-bids-0-13-release-date/7291/2\r\n\r\nOur last release has been in December 2022, so I feel like cutting a release now is reasonable.\r\n\r\nI'll migrate issues from the [0.13 milestone](https://github.com/mne-tools/mne-bids/milestone/14) to a new 0.14 milestone.\r\n\r\nPlease comment here if you need some particular thing to be fixed before the release.\r\n\r\ncc @agramfort @hoechenberger @larsoner \n", "before_files": [{"content": "\"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n\n__version__ = \"0.13.dev0\"\nfrom mne_bids import commands\nfrom mne_bids.report import make_report\nfrom mne_bids.path import (\n BIDSPath,\n get_datatypes,\n get_entity_vals,\n print_dir_tree,\n get_entities_from_fname,\n search_folder_for_text,\n get_bids_path_from_fname,\n find_matching_paths,\n)\nfrom mne_bids.read import get_head_mri_trans, read_raw_bids\nfrom mne_bids.utils import get_anonymization_daysback\nfrom mne_bids.write import (\n make_dataset_description,\n write_anat,\n write_raw_bids,\n mark_channels,\n write_meg_calibration,\n write_meg_crosstalk,\n get_anat_landmarks,\n anonymize_dataset,\n)\nfrom mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks\nfrom mne_bids.inspect import inspect_dataset\nfrom mne_bids.dig import (\n template_to_head,\n convert_montage_to_ras,\n convert_montage_to_mri,\n)\n", "path": "mne_bids/__init__.py"}], "after_files": [{"content": "\"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n\n__version__ = \"0.13\"\nfrom mne_bids import commands\nfrom mne_bids.report import make_report\nfrom mne_bids.path import (\n BIDSPath,\n get_datatypes,\n get_entity_vals,\n print_dir_tree,\n get_entities_from_fname,\n search_folder_for_text,\n get_bids_path_from_fname,\n find_matching_paths,\n)\nfrom mne_bids.read import get_head_mri_trans, read_raw_bids\nfrom mne_bids.utils import get_anonymization_daysback\nfrom mne_bids.write import (\n make_dataset_description,\n write_anat,\n write_raw_bids,\n mark_channels,\n write_meg_calibration,\n write_meg_crosstalk,\n get_anat_landmarks,\n anonymize_dataset,\n)\nfrom mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks\nfrom mne_bids.inspect import inspect_dataset\nfrom mne_bids.dig import (\n template_to_head,\n convert_montage_to_ras,\n convert_montage_to_mri,\n)\n", "path": "mne_bids/__init__.py"}]} | 713 | 118 |
gh_patches_debug_36782 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2006 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cfn-lint 0.49.1 does not catch `/` as an invalid character in a Mapping element name
*cfn-lint version: cfn-lint 0.49.1*
*cfn-lint did not catch `/` as an invalid character in a Mapping element name*
cfn-lint passed successfully with this mapping included in the template:
```yaml
Mappings:
NameServers:
10.90.0.0/16:
NameServer1: 10.90.0.10
NameServer2: 10.90.4.10
10.91.0.0/16:
NameServer1: 10.91.0.10
NameServer2: 10.91.4.10
```
However AWS rejected it:
> Template format error: Mappings element name '10.93.0.0/16' must be non-empty and can contain only alphanumerics, '-' or '.'

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/mappings/KeyName.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 import six
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9 from cfnlint.helpers import REGEX_ALPHANUMERIC
10
11
12 class KeyName(CloudFormationLintRule):
13 """Check if Mapping Keys are type string"""
14 id = 'E7003'
15 shortdesc = 'Mapping keys are strings and alphanumeric'
16 description = 'Check if Mappings keys are properly typed as strings and alphanumeric'
17 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'
18 tags = ['mappings']
19
20 def check_key(self, key, path, check_alphanumeric=True):
21 """ Check the key name for string and alphanumeric"""
22 matches = []
23 if not isinstance(key, six.string_types):
24 message = 'Mapping key ({0}) has to be a string.'
25 matches.append(RuleMatch(path[:], message.format(key)))
26 elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:
27 message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'
28 matches.append(RuleMatch(path[:], message.format(key)))
29
30 return matches
31
32 def match(self, cfn):
33 matches = []
34
35 mappings = cfn.template.get('Mappings', {})
36 for mapping_name, mapping_value in mappings.items():
37 if isinstance(mapping_value, dict):
38 for key_name, key_value in mapping_value.items():
39 matches.extend(self.check_key(
40 key_name, ['Mappings', mapping_name, key_name], False))
41 if isinstance(key_value, dict):
42 for sub_key_name, _ in key_value.items():
43 matches.extend(
44 self.check_key(
45 sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))
46
47 return matches
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py
--- a/src/cfnlint/rules/mappings/KeyName.py
+++ b/src/cfnlint/rules/mappings/KeyName.py
@@ -17,14 +17,26 @@
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'
tags = ['mappings']
- def check_key(self, key, path, check_alphanumeric=True):
+ def check_attribute(self, key, path):
+ """ Check the key name for string and alphanumeric"""
+ matches = []
+ if not isinstance(key, six.string_types):
+ message = 'Mapping attribute ({0}) has to be a string.'
+ matches.append(RuleMatch(path[:], message.format(key)))
+ elif not re.match(REGEX_ALPHANUMERIC, key):
+ message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'
+ matches.append(RuleMatch(path[:], message.format(key)))
+
+ return matches
+
+ def check_key(self, key, path):
""" Check the key name for string and alphanumeric"""
matches = []
if not isinstance(key, six.string_types):
message = 'Mapping key ({0}) has to be a string.'
matches.append(RuleMatch(path[:], message.format(key)))
- elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:
- message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'
+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):
+ message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \'-\' or \'.\''
matches.append(RuleMatch(path[:], message.format(key)))
return matches
@@ -37,11 +49,11 @@
if isinstance(mapping_value, dict):
for key_name, key_value in mapping_value.items():
matches.extend(self.check_key(
- key_name, ['Mappings', mapping_name, key_name], False))
+ key_name, ['Mappings', mapping_name, key_name]))
if isinstance(key_value, dict):
for sub_key_name, _ in key_value.items():
matches.extend(
- self.check_key(
+ self.check_attribute(
sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py\n--- a/src/cfnlint/rules/mappings/KeyName.py\n+++ b/src/cfnlint/rules/mappings/KeyName.py\n@@ -17,14 +17,26 @@\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n \n- def check_key(self, key, path, check_alphanumeric=True):\n+ def check_attribute(self, key, path):\n+ \"\"\" Check the key name for string and alphanumeric\"\"\"\n+ matches = []\n+ if not isinstance(key, six.string_types):\n+ message = 'Mapping attribute ({0}) has to be a string.'\n+ matches.append(RuleMatch(path[:], message.format(key)))\n+ elif not re.match(REGEX_ALPHANUMERIC, key):\n+ message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'\n+ matches.append(RuleMatch(path[:], message.format(key)))\n+\n+ return matches\n+\n+ def check_key(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n- elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:\n- message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'\n+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):\n+ message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \\'-\\' or \\'.\\''\n matches.append(RuleMatch(path[:], message.format(key)))\n \n return matches\n@@ -37,11 +49,11 @@\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n- key_name, ['Mappings', mapping_name, key_name], False))\n+ key_name, ['Mappings', mapping_name, key_name]))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n- self.check_key(\n+ self.check_attribute(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n \n return matches\n", "issue": "cfn-lint 0.49.1 does not catch `/` as an invalid character in a Mapping element name\n*cfn-lint version: cfn-lint 0.49.1*\r\n\r\n*cfn-lint did not catch `/` as an invalid character in a Mapping element name*\r\n\r\ncfn-lint passed successfully with this mapping included in the template:\r\n```yaml\r\nMappings:\r\n NameServers:\r\n 10.90.0.0/16:\r\n NameServer1: 10.90.0.10\r\n NameServer2: 10.90.4.10\r\n 10.91.0.0/16:\r\n NameServer1: 10.91.0.10\r\n NameServer2: 10.91.4.10\r\n```\r\n\r\nHowever AWS rejected it:\r\n> Template format error: Mappings element name '10.93.0.0/16' must be non-empty and can contain only alphanumerics, '-' or '.'\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_ALPHANUMERIC\n\n\nclass KeyName(CloudFormationLintRule):\n \"\"\"Check if Mapping Keys are type string\"\"\"\n id = 'E7003'\n shortdesc = 'Mapping keys are strings and alphanumeric'\n description = 'Check if Mappings keys are properly typed as strings and alphanumeric'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n\n def check_key(self, key, path, check_alphanumeric=True):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:\n message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n mappings = cfn.template.get('Mappings', {})\n for mapping_name, mapping_value in mappings.items():\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n key_name, ['Mappings', mapping_name, key_name], False))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n self.check_key(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n\n return matches\n", "path": "src/cfnlint/rules/mappings/KeyName.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_ALPHANUMERIC\n\n\nclass KeyName(CloudFormationLintRule):\n \"\"\"Check if Mapping Keys are type string\"\"\"\n id = 'E7003'\n shortdesc = 'Mapping keys are strings and alphanumeric'\n description = 'Check if Mappings keys are properly typed as strings and alphanumeric'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n\n def check_attribute(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping attribute ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match(REGEX_ALPHANUMERIC, key):\n message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def check_key(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):\n message = 'Mapping key ({0}) has invalid name. 
Name has to be alphanumeric, \\'-\\' or \\'.\\''\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n mappings = cfn.template.get('Mappings', {})\n for mapping_name, mapping_value in mappings.items():\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n key_name, ['Mappings', mapping_name, key_name]))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n self.check_attribute(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n\n return matches\n", "path": "src/cfnlint/rules/mappings/KeyName.py"}]} | 1,082 | 547 |
gh_patches_debug_8668 | rasdani/github-patches | git_diff | wright-group__WrightTools-1132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
shift supported Python 3 versions
Since users are increasingly relying on 3.10 and 3.11, I propose we move testing from 3.7-9 to 3.8-11.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python3
2
3 import os
4 from setuptools import setup, find_packages
5
6
7 here = os.path.abspath(os.path.dirname(__file__))
8
9
10 def read(fname):
11 with open(os.path.join(here, fname)) as f:
12 return f.read()
13
14
15 extra_files = {
16 "WrightTools": [
17 "datasets",
18 "datasets/*",
19 "datasets/*/*",
20 "datasets/*/*/*",
21 "datasets/*/*/*/*",
22 "CITATION",
23 "VERSION",
24 "WT5_VERSION",
25 ]
26 }
27
28 with open(os.path.join(here, "WrightTools", "VERSION")) as version_file:
29 version = version_file.read().strip()
30
31 docs_require = ["sphinx", "sphinx-gallery==0.8.2", "sphinx-rtd-theme"]
32
33 setup(
34 name="WrightTools",
35 packages=find_packages(exclude=("tests", "tests.*")),
36 package_data=extra_files,
37 python_requires=">=3.7",
38 install_requires=[
39 "h5py",
40 "imageio",
41 "matplotlib>=3.4.0",
42 "numexpr",
43 "numpy>=1.15.0",
44 "pint",
45 "python-dateutil",
46 "scipy",
47 "tidy_headers>=1.0.0",
48 ],
49 extras_require={
50 "docs": docs_require,
51 "dev": [
52 "black",
53 "pre-commit",
54 "pydocstyle",
55 "pytest",
56 "pytest-cov",
57 "databroker>=1.2",
58 "msgpack",
59 ]
60 + docs_require,
61 },
62 version=version,
63 description="Tools for loading, processing, and plotting multidimensional spectroscopy data.",
64 long_description=read("README.rst"),
65 author="WrightTools Developers",
66 license="MIT",
67 url="http://wright.tools",
68 keywords="spectroscopy science multidimensional visualization",
69 entry_points={"console_scripts": ["wt-tree=WrightTools.__main__:wt_tree"]},
70 classifiers=[
71 "Development Status :: 5 - Production/Stable",
72 "Intended Audience :: Science/Research",
73 "License :: OSI Approved :: MIT License",
74 "Framework :: Matplotlib",
75 "Natural Language :: English",
76 "Programming Language :: Python :: 3",
77 "Programming Language :: Python :: 3.7",
78 "Programming Language :: Python :: 3.8",
79 "Programming Language :: Python :: 3.9",
80 "Topic :: Scientific/Engineering",
81 ],
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -74,9 +74,10 @@
"Framework :: Matplotlib",
"Natural Language :: English",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering",
],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -74,9 +74,10 @@\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.10\",\n+ \"Programming Language :: Python :: 3.11\",\n \"Topic :: Scientific/Engineering\",\n ],\n )\n", "issue": "shift supported Python 3 versions\nSince users are increasingly relying on 3.10 and 3.11, I propose we move testing from 3.7-9 to 3.8-11.\r\n\n", "before_files": [{"content": "#! /usr/bin/env python3\n\nimport os\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(fname):\n with open(os.path.join(here, fname)) as f:\n return f.read()\n\n\nextra_files = {\n \"WrightTools\": [\n \"datasets\",\n \"datasets/*\",\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n \"CITATION\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n}\n\nwith open(os.path.join(here, \"WrightTools\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\ndocs_require = [\"sphinx\", \"sphinx-gallery==0.8.2\", \"sphinx-rtd-theme\"]\n\nsetup(\n name=\"WrightTools\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n package_data=extra_files,\n python_requires=\">=3.7\",\n install_requires=[\n \"h5py\",\n \"imageio\",\n \"matplotlib>=3.4.0\",\n \"numexpr\",\n \"numpy>=1.15.0\",\n \"pint\",\n \"python-dateutil\",\n \"scipy\",\n \"tidy_headers>=1.0.0\",\n ],\n extras_require={\n \"docs\": docs_require,\n \"dev\": [\n \"black\",\n \"pre-commit\",\n \"pydocstyle\",\n \"pytest\",\n \"pytest-cov\",\n \"databroker>=1.2\",\n \"msgpack\",\n ]\n + docs_require,\n },\n version=version,\n description=\"Tools for loading, processing, and plotting multidimensional spectroscopy data.\",\n long_description=read(\"README.rst\"),\n author=\"WrightTools Developers\",\n license=\"MIT\",\n url=\"http://wright.tools\",\n keywords=\"spectroscopy science multidimensional visualization\",\n entry_points={\"console_scripts\": [\"wt-tree=WrightTools.__main__:wt_tree\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python3\n\nimport os\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(fname):\n with open(os.path.join(here, fname)) as f:\n return f.read()\n\n\nextra_files = {\n \"WrightTools\": [\n \"datasets\",\n \"datasets/*\",\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n \"CITATION\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n}\n\nwith open(os.path.join(here, \"WrightTools\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\ndocs_require = [\"sphinx\", \"sphinx-gallery==0.8.2\", \"sphinx-rtd-theme\"]\n\nsetup(\n name=\"WrightTools\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n package_data=extra_files,\n python_requires=\">=3.7\",\n install_requires=[\n \"h5py\",\n \"imageio\",\n \"matplotlib>=3.4.0\",\n \"numexpr\",\n \"numpy>=1.15.0\",\n \"pint\",\n \"python-dateutil\",\n \"scipy\",\n \"tidy_headers>=1.0.0\",\n ],\n extras_require={\n \"docs\": docs_require,\n \"dev\": [\n \"black\",\n \"pre-commit\",\n \"pydocstyle\",\n \"pytest\",\n \"pytest-cov\",\n \"databroker>=1.2\",\n \"msgpack\",\n ]\n + docs_require,\n },\n version=version,\n description=\"Tools for loading, processing, and plotting multidimensional spectroscopy data.\",\n long_description=read(\"README.rst\"),\n author=\"WrightTools Developers\",\n license=\"MIT\",\n url=\"http://wright.tools\",\n keywords=\"spectroscopy science multidimensional visualization\",\n entry_points={\"console_scripts\": [\"wt-tree=WrightTools.__main__:wt_tree\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Scientific/Engineering\",\n ],\n)\n", "path": "setup.py"}]} | 1,002 | 133 |
gh_patches_debug_35416 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-1664 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] New yooksmart D10110 inverted with quirk
**Describe the bug**
I purchased a new yooksmart D10110 cover and paired with home assistant. The controls
seemed inverted and I had to move the bar twice in order to get it to move. I read reports
in the past with the suggestion to unpair and pair again, tried multiple times with no luck.
So I disabled the quirk (apologies for the brute force: moved the file to a different directory
and reloaded) and it works now. For completeness:
Before:
- buttons up and down wouldn't work
- available button would be inverted (e.g.: cover was all the way down and the down button was enabled)
- in order to control the cover I'd move the progress bar all the way to 0 or to 100, then the opposite way, to get it to work
After:
- buttons up and down work
- enabled button matches the direction of the cover: if open, it shows down button enabled
**To Reproduce**
Behavior is consistent across multiple pair/unpair cycles and full home assistant instance restarts
**Additional context**
Something that is possible, since the cover is new, is that they corrected the behavior in their firmware
and the quirk isn't needed anymore.
This device has: Firmware: 0x10013001
I can provide any debugging necessary. I'm using the homeassistant official virtual machine image and keeping
it up to date.
Edited: formatting
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zhaquirks/yooksmart/D10110blinds.py`
Content:
```
1 """Device handler for Yooksmart D10110 roller blinds."""
2 from zigpy.profiles import zha
3 from zigpy.quirks import CustomCluster, CustomDevice
4 from zigpy.zcl.clusters.closures import WindowCovering
5 from zigpy.zcl.clusters.general import (
6 Basic,
7 Groups,
8 Identify,
9 Ota,
10 PollControl,
11 PowerConfiguration,
12 Scenes,
13 )
14
15 from zhaquirks.const import (
16 DEVICE_TYPE,
17 ENDPOINTS,
18 INPUT_CLUSTERS,
19 MODELS_INFO,
20 OUTPUT_CLUSTERS,
21 PROFILE_ID,
22 )
23
24
25 class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):
26 """WindowCovering cluster implementation.
27
28 This implementation inverts the reported covering percent for non standard
29 devices that don't follow the reporting spec.
30 """
31
32 cluster_id = WindowCovering.cluster_id
33 CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008
34
35 def _update_attribute(self, attrid, value):
36 if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:
37 value = 100 - value
38 super()._update_attribute(attrid, value)
39
40
41 class D10110Blinds(CustomDevice):
42 """Custom device representing Yooksmart D10110 roller blinds."""
43
44 signature = {
45 # <SimpleDescriptor endpoint=1 profile=260 device_type=514
46 # device_version=1
47 # input_clusters=[0, 1, 3, 4, 5, 32, 258]
48 # output_clusters=[3, 25]>
49 MODELS_INFO: [
50 ("yooksmart", "D10110"),
51 ],
52 ENDPOINTS: {
53 1: {
54 PROFILE_ID: zha.PROFILE_ID,
55 DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
56 INPUT_CLUSTERS: [
57 Basic.cluster_id,
58 PowerConfiguration.cluster_id,
59 Identify.cluster_id,
60 Groups.cluster_id,
61 Scenes.cluster_id,
62 PollControl.cluster_id,
63 WindowCovering.cluster_id,
64 ],
65 OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
66 }
67 },
68 }
69
70 replacement = {
71 ENDPOINTS: {
72 1: {
73 PROFILE_ID: zha.PROFILE_ID,
74 DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
75 INPUT_CLUSTERS: [
76 Basic.cluster_id,
77 PowerConfiguration.cluster_id,
78 Identify.cluster_id,
79 Groups.cluster_id,
80 Scenes.cluster_id,
81 PollControl.cluster_id,
82 InvertedWindowCoveringCluster,
83 ],
84 OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
85 }
86 }
87 }
88
```
Path: `zhaquirks/yooksmart/__init__.py`
Content:
```
1 """Yooksmart module for custom device handlers."""
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zhaquirks/yooksmart/D10110blinds.py b/zhaquirks/yooksmart/D10110blinds.py
deleted file mode 100644
--- a/zhaquirks/yooksmart/D10110blinds.py
+++ /dev/null
@@ -1,87 +0,0 @@
-"""Device handler for Yooksmart D10110 roller blinds."""
-from zigpy.profiles import zha
-from zigpy.quirks import CustomCluster, CustomDevice
-from zigpy.zcl.clusters.closures import WindowCovering
-from zigpy.zcl.clusters.general import (
- Basic,
- Groups,
- Identify,
- Ota,
- PollControl,
- PowerConfiguration,
- Scenes,
-)
-
-from zhaquirks.const import (
- DEVICE_TYPE,
- ENDPOINTS,
- INPUT_CLUSTERS,
- MODELS_INFO,
- OUTPUT_CLUSTERS,
- PROFILE_ID,
-)
-
-
-class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):
- """WindowCovering cluster implementation.
-
- This implementation inverts the reported covering percent for non standard
- devices that don't follow the reporting spec.
- """
-
- cluster_id = WindowCovering.cluster_id
- CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008
-
- def _update_attribute(self, attrid, value):
- if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:
- value = 100 - value
- super()._update_attribute(attrid, value)
-
-
-class D10110Blinds(CustomDevice):
- """Custom device representing Yooksmart D10110 roller blinds."""
-
- signature = {
- # <SimpleDescriptor endpoint=1 profile=260 device_type=514
- # device_version=1
- # input_clusters=[0, 1, 3, 4, 5, 32, 258]
- # output_clusters=[3, 25]>
- MODELS_INFO: [
- ("yooksmart", "D10110"),
- ],
- ENDPOINTS: {
- 1: {
- PROFILE_ID: zha.PROFILE_ID,
- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
- INPUT_CLUSTERS: [
- Basic.cluster_id,
- PowerConfiguration.cluster_id,
- Identify.cluster_id,
- Groups.cluster_id,
- Scenes.cluster_id,
- PollControl.cluster_id,
- WindowCovering.cluster_id,
- ],
- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
- }
- },
- }
-
- replacement = {
- ENDPOINTS: {
- 1: {
- PROFILE_ID: zha.PROFILE_ID,
- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
- INPUT_CLUSTERS: [
- Basic.cluster_id,
- PowerConfiguration.cluster_id,
- Identify.cluster_id,
- Groups.cluster_id,
- Scenes.cluster_id,
- PollControl.cluster_id,
- InvertedWindowCoveringCluster,
- ],
- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
- }
- }
- }
diff --git a/zhaquirks/yooksmart/__init__.py b/zhaquirks/yooksmart/__init__.py
deleted file mode 100644
--- a/zhaquirks/yooksmart/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Yooksmart module for custom device handlers."""
| {"golden_diff": "diff --git a/zhaquirks/yooksmart/D10110blinds.py b/zhaquirks/yooksmart/D10110blinds.py\ndeleted file mode 100644\n--- a/zhaquirks/yooksmart/D10110blinds.py\n+++ /dev/null\n@@ -1,87 +0,0 @@\n-\"\"\"Device handler for Yooksmart D10110 roller blinds.\"\"\"\n-from zigpy.profiles import zha\n-from zigpy.quirks import CustomCluster, CustomDevice\n-from zigpy.zcl.clusters.closures import WindowCovering\n-from zigpy.zcl.clusters.general import (\n- Basic,\n- Groups,\n- Identify,\n- Ota,\n- PollControl,\n- PowerConfiguration,\n- Scenes,\n-)\n-\n-from zhaquirks.const import (\n- DEVICE_TYPE,\n- ENDPOINTS,\n- INPUT_CLUSTERS,\n- MODELS_INFO,\n- OUTPUT_CLUSTERS,\n- PROFILE_ID,\n-)\n-\n-\n-class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):\n- \"\"\"WindowCovering cluster implementation.\n-\n- This implementation inverts the reported covering percent for non standard\n- devices that don't follow the reporting spec.\n- \"\"\"\n-\n- cluster_id = WindowCovering.cluster_id\n- CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008\n-\n- def _update_attribute(self, attrid, value):\n- if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:\n- value = 100 - value\n- super()._update_attribute(attrid, value)\n-\n-\n-class D10110Blinds(CustomDevice):\n- \"\"\"Custom device representing Yooksmart D10110 roller blinds.\"\"\"\n-\n- signature = {\n- # <SimpleDescriptor endpoint=1 profile=260 device_type=514\n- # device_version=1\n- # input_clusters=[0, 1, 3, 4, 5, 32, 258]\n- # output_clusters=[3, 25]>\n- MODELS_INFO: [\n- (\"yooksmart\", \"D10110\"),\n- ],\n- ENDPOINTS: {\n- 1: {\n- PROFILE_ID: zha.PROFILE_ID,\n- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n- INPUT_CLUSTERS: [\n- Basic.cluster_id,\n- PowerConfiguration.cluster_id,\n- Identify.cluster_id,\n- Groups.cluster_id,\n- Scenes.cluster_id,\n- PollControl.cluster_id,\n- WindowCovering.cluster_id,\n- ],\n- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n- }\n- },\n- }\n-\n- replacement = {\n- ENDPOINTS: {\n- 1: {\n- PROFILE_ID: zha.PROFILE_ID,\n- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n- INPUT_CLUSTERS: [\n- Basic.cluster_id,\n- PowerConfiguration.cluster_id,\n- Identify.cluster_id,\n- Groups.cluster_id,\n- Scenes.cluster_id,\n- PollControl.cluster_id,\n- InvertedWindowCoveringCluster,\n- ],\n- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n- }\n- }\n- }\ndiff --git a/zhaquirks/yooksmart/__init__.py b/zhaquirks/yooksmart/__init__.py\ndeleted file mode 100644\n--- a/zhaquirks/yooksmart/__init__.py\n+++ /dev/null\n@@ -1 +0,0 @@\n-\"\"\"Yooksmart module for custom device handlers.\"\"\"\n", "issue": "[BUG] New yooksmart D10110 inverted with quirk\n**Describe the bug**\r\nI purchased a new yooksmart D10110 cover and paired with home assistant. The controls\r\nseemed inverted and I had to move the bar twice in order to get it to move. I read reports\r\nin the past with the suggestion to unpair and pair again, tried multiple times with no luck.\r\nSo I disabled the quirk (apologies for the brute force: moved the file to a different directory\r\nand reloaded) and it works now. 
For completeness:\r\nBefore:\r\n- buttons up and down wouldn't work\r\n- available button would be inverted (e.g.: cover was all the way down and the down button was enabled)\r\n- in order to control the cover I'd move the progress bar all the way to 0 or to 100 then the opposite in order to work\r\nAfter:\r\n- buttons up and down work\r\n- enabled button matches the direction of the cover: if open, it shows down button enabled\r\n\r\n**To Reproduce**\r\nBehavior is consistent across multiple pair/unpair cycles and full home assistant instance restarts\r\n\r\n**Additional context**\r\nSomething that is possible, since the cover is new, is that they corrected the behavior in their firmware\r\nand the quirk isn't needed anymore.\r\nThis device has: Firmware: 0x10013001\r\n\r\nI can provide any debugging necessary. I'm using homeassistant official virtual machine image and keeping\r\nit up to date.\r\n\r\nEditted: formatting\n", "before_files": [{"content": "\"\"\"Device handler for Yooksmart D10110 roller blinds.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomCluster, CustomDevice\nfrom zigpy.zcl.clusters.closures import WindowCovering\nfrom zigpy.zcl.clusters.general import (\n Basic,\n Groups,\n Identify,\n Ota,\n PollControl,\n PowerConfiguration,\n Scenes,\n)\n\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\n\n\nclass InvertedWindowCoveringCluster(CustomCluster, WindowCovering):\n \"\"\"WindowCovering cluster implementation.\n\n This implementation inverts the reported covering percent for non standard\n devices that don't follow the reporting spec.\n \"\"\"\n\n cluster_id = WindowCovering.cluster_id\n CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008\n\n def _update_attribute(self, attrid, value):\n if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:\n value = 100 - value\n super()._update_attribute(attrid, value)\n\n\nclass D10110Blinds(CustomDevice):\n \"\"\"Custom device representing Yooksmart D10110 roller blinds.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=514\n # device_version=1\n # input_clusters=[0, 1, 3, 4, 5, 32, 258]\n # output_clusters=[3, 25]>\n MODELS_INFO: [\n (\"yooksmart\", \"D10110\"),\n ],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n PollControl.cluster_id,\n WindowCovering.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n PollControl.cluster_id,\n InvertedWindowCoveringCluster,\n ],\n OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n }\n }\n }\n", "path": "zhaquirks/yooksmart/D10110blinds.py"}, {"content": "\"\"\"Yooksmart module for custom device handlers.\"\"\"\n", "path": "zhaquirks/yooksmart/__init__.py"}], "after_files": [{"content": null, "path": "zhaquirks/yooksmart/D10110blinds.py"}, {"content": null, "path": "zhaquirks/yooksmart/__init__.py"}]} | 1,387 | 825 |
gh_patches_debug_3138 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1231 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] [Authentication] updates to support Arlington
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/3734
# Changed projects
* Microsoft.Bot.Connector
* Microsoft.Bot.Connector.Tests
[R9]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botframework-connector/botframework/connector/auth/government_constants.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 from abc import ABC
4
5
6 class GovernmentConstants(ABC):
7
8 """
9 Government Channel Service property value
10 """
11
12 CHANNEL_SERVICE = "https://botframework.azure.us"
13
14 """
15 TO CHANNEL FROM BOT: Login URL
16 """
17 TO_CHANNEL_FROM_BOT_LOGIN_URL = (
18 "https://login.microsoftonline.us/"
19 "cab8a31a-1906-4287-a0d8-4eef66b95f6e/"
20 "oauth2/v2.0/token"
21 )
22
23 """
24 TO CHANNEL FROM BOT: OAuth scope to request
25 """
26 TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = "https://api.botframework.us/.default"
27
28 """
29 TO BOT FROM CHANNEL: Token issuer
30 """
31 TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = "https://api.botframework.us"
32
33 """
34 TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA
35 """
36 TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (
37 "https://login.botframework.azure.us/v1/.well-known/openidconfiguration"
38 )
39
40 """
41 TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA
42 """
43 TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (
44 "https://login.microsoftonline.us/"
45 "cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/"
46 ".well-known/openid-configuration"
47 )
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botframework-connector/botframework/connector/auth/government_constants.py b/libraries/botframework-connector/botframework/connector/auth/government_constants.py
--- a/libraries/botframework-connector/botframework/connector/auth/government_constants.py
+++ b/libraries/botframework-connector/botframework/connector/auth/government_constants.py
@@ -15,9 +15,7 @@
TO CHANNEL FROM BOT: Login URL
"""
TO_CHANNEL_FROM_BOT_LOGIN_URL = (
- "https://login.microsoftonline.us/"
- "cab8a31a-1906-4287-a0d8-4eef66b95f6e/"
- "oauth2/v2.0/token"
+ "https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us"
)
"""
| {"golden_diff": "diff --git a/libraries/botframework-connector/botframework/connector/auth/government_constants.py b/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n--- a/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n+++ b/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n@@ -15,9 +15,7 @@\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n- \"https://login.microsoftonline.us/\"\n- \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/\"\n- \"oauth2/v2.0/token\"\n+ \"https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us\"\n )\n \n \"\"\"\n", "issue": "[PORT] [Authentication] updates to support Arlington\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/3734\n\n\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Connector\r\n* Microsoft.Bot.Connector.Tests\r\n\r\n[R9]\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom abc import ABC\n\n\nclass GovernmentConstants(ABC):\n\n \"\"\"\n Government Channel Service property value\n \"\"\"\n\n CHANNEL_SERVICE = \"https://botframework.azure.us\"\n\n \"\"\"\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/\"\n \"oauth2/v2.0/token\"\n )\n\n \"\"\"\n TO CHANNEL FROM BOT: OAuth scope to request\n \"\"\"\n TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = \"https://api.botframework.us/.default\"\n\n \"\"\"\n TO BOT FROM CHANNEL: Token issuer\n \"\"\"\n TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = \"https://api.botframework.us\"\n\n \"\"\"\n TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (\n \"https://login.botframework.azure.us/v1/.well-known/openidconfiguration\"\n )\n\n \"\"\"\n TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/\"\n \".well-known/openid-configuration\"\n )\n", "path": "libraries/botframework-connector/botframework/connector/auth/government_constants.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\nfrom abc import ABC\n\n\nclass GovernmentConstants(ABC):\n\n \"\"\"\n Government Channel Service property value\n \"\"\"\n\n CHANNEL_SERVICE = \"https://botframework.azure.us\"\n\n \"\"\"\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n \"https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us\"\n )\n\n \"\"\"\n TO CHANNEL FROM BOT: OAuth scope to request\n \"\"\"\n TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = \"https://api.botframework.us/.default\"\n\n \"\"\"\n TO BOT FROM CHANNEL: Token issuer\n \"\"\"\n TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = \"https://api.botframework.us\"\n\n \"\"\"\n TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (\n \"https://login.botframework.azure.us/v1/.well-known/openidconfiguration\"\n )\n\n \"\"\"\n TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/\"\n \".well-known/openid-configuration\"\n )\n", "path": "libraries/botframework-connector/botframework/connector/auth/government_constants.py"}]} | 779 | 192 |
gh_patches_debug_272 | rasdani/github-patches | git_diff | cupy__cupy-1028 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cupy.copyto behaves differently from numpy.copyto when src is a python scalar
Code:
```python
import numpy
import cupy
def copyto_check(xp):
x = xp.zeros(3, dtype=numpy.float32)
# replace first and third items with 1.0
xp.copyto(x, 1.0, where=xp.asarray([True, False, True]))
print(x)
print('numpy', numpy.__version__)
copyto_check(numpy)
print('cupy', cupy.__version__)
copyto_check(cupy)
```
Output:
```
numpy 1.14.0
[1. 0. 1.]
cupy 2.2.0
[1. 1. 1.]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/manipulation/basic.py`
Content:
```
1 import numpy
2 import six
3
4 from cupy import core
5
6
7 def copyto(dst, src, casting='same_kind', where=None):
8 """Copies values from one array to another with broadcasting.
9
10 This function can be called for arrays on different devices. In this case,
11 casting, ``where``, and broadcasting is not supported, and an exception is
12 raised if these are used.
13
14 Args:
15 dst (cupy.ndarray): Target array.
16 src (cupy.ndarray): Source array.
17 casting (str): Casting rule. See :func:`numpy.can_cast` for detail.
18 where (cupy.ndarray of bool): If specified, this array acts as a mask,
19 and an element is copied only if the corresponding element of
20 ``where`` is True.
21
22 .. seealso:: :func:`numpy.copyto`
23
24 """
25
26 src_type = type(src)
27 src_is_python_scalar = (src_type in six.integer_types or
28 src_type in (bool, float, complex))
29 if src_is_python_scalar:
30 src_dtype = numpy.dtype(type(src))
31 can_cast = numpy.can_cast(src, dst.dtype, casting)
32 else:
33 src_dtype = src.dtype
34 can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)
35
36 if not can_cast:
37 raise TypeError('Cannot cast %s to %s in %s casting mode' %
38 (src_dtype, dst.dtype, casting))
39 if dst.size == 0:
40 return
41
42 if src_is_python_scalar:
43 dst.fill(src)
44 return
45
46 if where is None:
47 if _can_memcpy(dst, src):
48 dst.data.copy_from(src.data, src.nbytes)
49 else:
50 device = dst.device
51 with device:
52 if src.device != device:
53 src = src.copy()
54 core.elementwise_copy(src, dst)
55 else:
56 core.elementwise_copy_where(src, where, dst)
57
58
59 def _can_memcpy(dst, src):
60 c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous
61 f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous
62 return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \
63 dst.size == src.size
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/manipulation/basic.py b/cupy/manipulation/basic.py
--- a/cupy/manipulation/basic.py
+++ b/cupy/manipulation/basic.py
@@ -39,7 +39,7 @@
if dst.size == 0:
return
- if src_is_python_scalar:
+ if src_is_python_scalar and where is None:
dst.fill(src)
return
| {"golden_diff": "diff --git a/cupy/manipulation/basic.py b/cupy/manipulation/basic.py\n--- a/cupy/manipulation/basic.py\n+++ b/cupy/manipulation/basic.py\n@@ -39,7 +39,7 @@\n if dst.size == 0:\n return\n \n- if src_is_python_scalar:\n+ if src_is_python_scalar and where is None:\n dst.fill(src)\n return\n", "issue": "cupy.copyto behaves differently from numpy.copyto when src is a python scalar\nCode:\r\n```python\r\nimport numpy\r\nimport cupy\r\n\r\ndef copyto_check(xp):\r\n x = xp.zeros(3, dtype=numpy.float32)\r\n # replace first and third items with 1.0\r\n xp.copyto(x, 1.0, where=xp.asarray([True, False, True]))\r\n print(x)\r\n\r\nprint('numpy', numpy.__version__)\r\ncopyto_check(numpy)\r\nprint('cupy', cupy.__version__)\r\ncopyto_check(cupy)\r\n```\r\nOutput:\r\n```\r\nnumpy 1.14.0\r\n[1. 0. 1.]\r\ncupy 2.2.0\r\n[1. 1. 1.]\r\n```\n", "before_files": [{"content": "import numpy\nimport six\n\nfrom cupy import core\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = (src_type in six.integer_types or\n src_type in (bool, float, complex))\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n if dst.size == 0:\n return\n\n if src_is_python_scalar:\n dst.fill(src)\n return\n\n if where is None:\n if _can_memcpy(dst, src):\n dst.data.copy_from(src.data, src.nbytes)\n else:\n device = dst.device\n with device:\n if src.device != device:\n src = src.copy()\n core.elementwise_copy(src, dst)\n else:\n core.elementwise_copy_where(src, where, dst)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/manipulation/basic.py"}], "after_files": [{"content": "import numpy\nimport six\n\nfrom cupy import core\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. 
seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = (src_type in six.integer_types or\n src_type in (bool, float, complex))\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n if dst.size == 0:\n return\n\n if src_is_python_scalar and where is None:\n dst.fill(src)\n return\n\n if where is None:\n if _can_memcpy(dst, src):\n dst.data.copy_from(src.data, src.nbytes)\n else:\n device = dst.device\n with device:\n if src.device != device:\n src = src.copy()\n core.elementwise_copy(src, dst)\n else:\n core.elementwise_copy_where(src, where, dst)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/manipulation/basic.py"}]} | 1,028 | 91 |
gh_patches_debug_28700 | rasdani/github-patches | git_diff | meltano__meltano-6552 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature]: Collect telemetry data about how `send_anonymous_usage_stats` was configured
The project context (and its schema) should be updated to include the key `send_anonymous_usage_stats_source` with the value `ProjectSettingsService.get_with_metadata('send_anonymous_usage_stats')[1]['source'].value`, which can be one of the following strings:
- `auto`
- `config_override`
- `db`
- `default`
- `dotenv`
- `env`
- `inherited`
- `meltano_env`
- `meltano_yml`
CC @pnadolny13 @aaronsteers
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/core/tracking/contexts/project.py`
Content:
```
1 """Project context for the Snowplow tracker."""
2
3 from __future__ import annotations
4
5 import uuid
6 from enum import Enum, auto
7
8 from cached_property import cached_property
9 from snowplow_tracker import SelfDescribingJson
10 from structlog.stdlib import get_logger
11
12 from meltano.core.project import Project
13 from meltano.core.project_settings_service import ProjectSettingsService
14 from meltano.core.tracking.schemas import ProjectContextSchema
15 from meltano.core.utils import hash_sha256
16
17 logger = get_logger(__name__)
18
19
20 class ProjectUUIDSource(Enum):
21 """The source of the `project_uuid` used for telemetry."""
22
23 # The UUID was explicitly provided in the config as the `project_id`.
24 explicit = auto()
25
26 # The UUID was derived by hashing the `project_id` in the config.
27 derived = auto()
28
29 # The UUID was randomly generated (UUID v4) since no `project_id` was configured.
30 random = auto()
31
32
33 class ProjectContext(SelfDescribingJson):
34 """Tracking context for the Meltano project."""
35
36 def __init__(self, project: Project, client_id: uuid.UUID):
37 """Initialize a meltano tracking "project" context.
38
39 Args:
40 project: The Meltano project.
41 client_id: The client ID from `analytics.json`.
42 """
43 self.project = project
44 self.settings_service = ProjectSettingsService(project)
45 self.send_anonymous_usage_stats = self.settings_service.get(
46 "send_anonymous_usage_stats", True
47 )
48
49 super().__init__(
50 ProjectContextSchema.url,
51 {
52 "context_uuid": str(uuid.uuid4()),
53 "project_uuid": str(self.project_uuid),
54 "project_uuid_source": self.project_uuid_source.name,
55 "client_uuid": str(client_id),
56 "environment_name_hash": (
57 hash_sha256(self.project.active_environment.name)
58 if self.project.active_environment
59 else None
60 ),
61 },
62 )
63
64 @property
65 def project_uuid_source(self) -> ProjectUUIDSource:
66 """Obtain the source of the `project_uuid` used for telemetry.
67
68 Returns:
69 ProjectUUIDSource: The source of the `project_uuid` used for telemetry.
70 """
71 # Ensure the `project_uuid` has been generated
72 self.project_uuid # noqa: WPS428
73 return self._project_uuid_source
74
75 @cached_property
76 def project_uuid(self) -> uuid.UUID:
77 """Obtain the `project_id` from the project config file.
78
79 If it is not found (e.g. first time run), generate a valid v4 UUID, and and store it in the
80 project config file.
81
82 Returns:
83 The project UUID.
84 """
85 project_id_str = self.settings_service.get("project_id")
86
87 if project_id_str:
88 try:
89 # Project ID might already be a UUID
90 project_id = uuid.UUID(project_id_str)
91 except ValueError:
92 # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID
93 project_id = uuid.UUID(hash_sha256(project_id_str)[::2])
94 self._project_uuid_source = ProjectUUIDSource.derived
95 else:
96 self._project_uuid_source = ProjectUUIDSource.explicit
97 else:
98 project_id = uuid.uuid4()
99 self._project_uuid_source = ProjectUUIDSource.random
100
101 return project_id
102
```
Path: `src/meltano/core/tracking/schemas.py`
Content:
```
1 """Meltano Iglu schemas metadata & utilities."""
2
3 from __future__ import annotations
4
5 from dataclasses import dataclass
6
7 DEFAULT_VENDOR = "com.meltano"
8
9
10 @dataclass
11 class IgluSchema:
12 """Dataclass to store the name, version, vendor, and URL for an Iglu schema."""
13
14 name: str
15 version: str
16 vendor: str = DEFAULT_VENDOR
17
18 @property
19 def url(self) -> str:
20 """Construct an iglu schema URL.
21
22 Returns:
23 The URL to the schema.
24 """
25 return f"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}"
26
27
28 CliContextSchema = IgluSchema("cli_context", "1-1-0")
29 CliEventSchema = IgluSchema("cli_event", "1-0-1")
30 BlockEventSchema = IgluSchema("block_event", "1-0-0")
31 EnvironmentContextSchema = IgluSchema("environment_context", "1-0-0")
32 ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
33 ExitEventSchema = IgluSchema("exit_event", "1-0-0")
34 PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
35 ProjectContextSchema = IgluSchema("project_context", "1-0-0")
36 TelemetryStateChangeEventSchema = IgluSchema("telemetry_state_change_event", "1-0-0")
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/meltano/core/tracking/contexts/project.py b/src/meltano/core/tracking/contexts/project.py
--- a/src/meltano/core/tracking/contexts/project.py
+++ b/src/meltano/core/tracking/contexts/project.py
@@ -42,9 +42,10 @@
"""
self.project = project
self.settings_service = ProjectSettingsService(project)
- self.send_anonymous_usage_stats = self.settings_service.get(
- "send_anonymous_usage_stats", True
- )
+ (
+ send_anonymous_usage_stats,
+ send_anonymous_usage_stats_metadata,
+ ) = self.settings_service.get_with_metadata("send_anonymous_usage_stats")
super().__init__(
ProjectContextSchema.url,
@@ -58,6 +59,10 @@
if self.project.active_environment
else None
),
+ "send_anonymous_usage_stats": send_anonymous_usage_stats,
+ "send_anonymous_usage_stats_source": (
+ send_anonymous_usage_stats_metadata["source"].value
+ ),
},
)
diff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py
--- a/src/meltano/core/tracking/schemas.py
+++ b/src/meltano/core/tracking/schemas.py
@@ -32,5 +32,5 @@
ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
ExitEventSchema = IgluSchema("exit_event", "1-0-0")
PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
-ProjectContextSchema = IgluSchema("project_context", "1-0-0")
+ProjectContextSchema = IgluSchema("project_context", "1-1-0")
TelemetryStateChangeEventSchema = IgluSchema("telemetry_state_change_event", "1-0-0")
| {"golden_diff": "diff --git a/src/meltano/core/tracking/contexts/project.py b/src/meltano/core/tracking/contexts/project.py\n--- a/src/meltano/core/tracking/contexts/project.py\n+++ b/src/meltano/core/tracking/contexts/project.py\n@@ -42,9 +42,10 @@\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n- self.send_anonymous_usage_stats = self.settings_service.get(\n- \"send_anonymous_usage_stats\", True\n- )\n+ (\n+ send_anonymous_usage_stats,\n+ send_anonymous_usage_stats_metadata,\n+ ) = self.settings_service.get_with_metadata(\"send_anonymous_usage_stats\")\n \n super().__init__(\n ProjectContextSchema.url,\n@@ -58,6 +59,10 @@\n if self.project.active_environment\n else None\n ),\n+ \"send_anonymous_usage_stats\": send_anonymous_usage_stats,\n+ \"send_anonymous_usage_stats_source\": (\n+ send_anonymous_usage_stats_metadata[\"source\"].value\n+ ),\n },\n )\n \ndiff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py\n--- a/src/meltano/core/tracking/schemas.py\n+++ b/src/meltano/core/tracking/schemas.py\n@@ -32,5 +32,5 @@\n ExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\n ExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\n PluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\n-ProjectContextSchema = IgluSchema(\"project_context\", \"1-0-0\")\n+ProjectContextSchema = IgluSchema(\"project_context\", \"1-1-0\")\n TelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "issue": "[Feature]: Collect telemetry data about how `send_anonymous_usage_stats` was configured\nThe project context (and its schema) should be updated to include the key `send_anonymous_usage_stats_source` with the value `ProjectSettingService.get_with_metadata('send_anonymous_usage_stats')[1]['source'].value`, which can be one of the following strings:\r\n- `auto`\r\n- `config_override`\r\n- `db`\r\n- `default`\r\n- `dotenv`\r\n- `env`\r\n- `inherited`\r\n- `meltano_env`\r\n- `meltano_yml`\r\n\r\nCC @pnadolny13 @aaronsteers \n", "before_files": [{"content": "\"\"\"Project context for the Snowplow tracker.\"\"\"\n\nfrom __future__ import annotations\n\nimport uuid\nfrom enum import Enum, auto\n\nfrom cached_property import cached_property\nfrom snowplow_tracker import SelfDescribingJson\nfrom structlog.stdlib import get_logger\n\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\nfrom meltano.core.tracking.schemas import ProjectContextSchema\nfrom meltano.core.utils import hash_sha256\n\nlogger = get_logger(__name__)\n\n\nclass ProjectUUIDSource(Enum):\n \"\"\"The source of the `project_uuid` used for telemetry.\"\"\"\n\n # The UUID was explicitly provided in the config as the `project_id`.\n explicit = auto()\n\n # The UUID was derived by hashing the `project_id` in the config.\n derived = auto()\n\n # The UUID was randomly generated (UUID v4) since no `project_id` was configured.\n random = auto()\n\n\nclass ProjectContext(SelfDescribingJson):\n \"\"\"Tracking context for the Meltano project.\"\"\"\n\n def __init__(self, project: Project, client_id: uuid.UUID):\n \"\"\"Initialize a meltano tracking \"project\" context.\n\n Args:\n project: The Meltano project.\n client_id: The client ID from `analytics.json`.\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n self.send_anonymous_usage_stats = self.settings_service.get(\n \"send_anonymous_usage_stats\", True\n )\n\n 
super().__init__(\n ProjectContextSchema.url,\n {\n \"context_uuid\": str(uuid.uuid4()),\n \"project_uuid\": str(self.project_uuid),\n \"project_uuid_source\": self.project_uuid_source.name,\n \"client_uuid\": str(client_id),\n \"environment_name_hash\": (\n hash_sha256(self.project.active_environment.name)\n if self.project.active_environment\n else None\n ),\n },\n )\n\n @property\n def project_uuid_source(self) -> ProjectUUIDSource:\n \"\"\"Obtain the source of the `project_uuid` used for telemetry.\n\n Returns:\n ProjectUUIDSource: The source of the `project_uuid` used for telemetry.\n \"\"\"\n # Ensure the `project_uuid` has been generated\n self.project_uuid # noqa: WPS428\n return self._project_uuid_source\n\n @cached_property\n def project_uuid(self) -> uuid.UUID:\n \"\"\"Obtain the `project_id` from the project config file.\n\n If it is not found (e.g. first time run), generate a valid v4 UUID, and and store it in the\n project config file.\n\n Returns:\n The project UUID.\n \"\"\"\n project_id_str = self.settings_service.get(\"project_id\")\n\n if project_id_str:\n try:\n # Project ID might already be a UUID\n project_id = uuid.UUID(project_id_str)\n except ValueError:\n # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID\n project_id = uuid.UUID(hash_sha256(project_id_str)[::2])\n self._project_uuid_source = ProjectUUIDSource.derived\n else:\n self._project_uuid_source = ProjectUUIDSource.explicit\n else:\n project_id = uuid.uuid4()\n self._project_uuid_source = ProjectUUIDSource.random\n\n return project_id\n", "path": "src/meltano/core/tracking/contexts/project.py"}, {"content": "\"\"\"Meltano Iglu schemas metadata & utilities.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nDEFAULT_VENDOR = \"com.meltano\"\n\n\n@dataclass\nclass IgluSchema:\n \"\"\"Dataclass to store the name, version, vendor, and URL for an Iglu schema.\"\"\"\n\n name: str\n version: str\n vendor: str = DEFAULT_VENDOR\n\n @property\n def url(self) -> str:\n \"\"\"Construct an iglu schema URL.\n\n Returns:\n The URL to the schema.\n \"\"\"\n return f\"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}\"\n\n\nCliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\nCliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\nBlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\nEnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-0-0\")\nExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\nExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\nPluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\nProjectContextSchema = IgluSchema(\"project_context\", \"1-0-0\")\nTelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "path": "src/meltano/core/tracking/schemas.py"}], "after_files": [{"content": "\"\"\"Project context for the Snowplow tracker.\"\"\"\n\nfrom __future__ import annotations\n\nimport uuid\nfrom enum import Enum, auto\n\nfrom cached_property import cached_property\nfrom snowplow_tracker import SelfDescribingJson\nfrom structlog.stdlib import get_logger\n\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\nfrom meltano.core.tracking.schemas import ProjectContextSchema\nfrom meltano.core.utils import hash_sha256\n\nlogger = get_logger(__name__)\n\n\nclass ProjectUUIDSource(Enum):\n \"\"\"The source of the `project_uuid` used for telemetry.\"\"\"\n\n # The UUID was explicitly 
provided in the config as the `project_id`.\n explicit = auto()\n\n # The UUID was derived by hashing the `project_id` in the config.\n derived = auto()\n\n # The UUID was randomly generated (UUID v4) since no `project_id` was configured.\n random = auto()\n\n\nclass ProjectContext(SelfDescribingJson):\n \"\"\"Tracking context for the Meltano project.\"\"\"\n\n def __init__(self, project: Project, client_id: uuid.UUID):\n \"\"\"Initialize a meltano tracking \"project\" context.\n\n Args:\n project: The Meltano project.\n client_id: The client ID from `analytics.json`.\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n (\n send_anonymous_usage_stats,\n send_anonymous_usage_stats_metadata,\n ) = self.settings_service.get_with_metadata(\"send_anonymous_usage_stats\")\n\n super().__init__(\n ProjectContextSchema.url,\n {\n \"context_uuid\": str(uuid.uuid4()),\n \"project_uuid\": str(self.project_uuid),\n \"project_uuid_source\": self.project_uuid_source.name,\n \"client_uuid\": str(client_id),\n \"environment_name_hash\": (\n hash_sha256(self.project.active_environment.name)\n if self.project.active_environment\n else None\n ),\n \"send_anonymous_usage_stats\": send_anonymous_usage_stats,\n \"send_anonymous_usage_stats_source\": (\n send_anonymous_usage_stats_metadata[\"source\"].value\n ),\n },\n )\n\n @property\n def project_uuid_source(self) -> ProjectUUIDSource:\n \"\"\"Obtain the source of the `project_uuid` used for telemetry.\n\n Returns:\n ProjectUUIDSource: The source of the `project_uuid` used for telemetry.\n \"\"\"\n # Ensure the `project_uuid` has been generated\n self.project_uuid # noqa: WPS428\n return self._project_uuid_source\n\n @cached_property\n def project_uuid(self) -> uuid.UUID:\n \"\"\"Obtain the `project_id` from the project config file.\n\n If it is not found (e.g. 
first time run), generate a valid v4 UUID, and and store it in the\n project config file.\n\n Returns:\n The project UUID.\n \"\"\"\n project_id_str = self.settings_service.get(\"project_id\")\n\n if project_id_str:\n try:\n # Project ID might already be a UUID\n project_id = uuid.UUID(project_id_str)\n except ValueError:\n # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID\n project_id = uuid.UUID(hash_sha256(project_id_str)[::2])\n self._project_uuid_source = ProjectUUIDSource.derived\n else:\n self._project_uuid_source = ProjectUUIDSource.explicit\n else:\n project_id = uuid.uuid4()\n self._project_uuid_source = ProjectUUIDSource.random\n\n return project_id\n", "path": "src/meltano/core/tracking/contexts/project.py"}, {"content": "\"\"\"Meltano Iglu schemas metadata & utilities.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nDEFAULT_VENDOR = \"com.meltano\"\n\n\n@dataclass\nclass IgluSchema:\n \"\"\"Dataclass to store the name, version, vendor, and URL for an Iglu schema.\"\"\"\n\n name: str\n version: str\n vendor: str = DEFAULT_VENDOR\n\n @property\n def url(self) -> str:\n \"\"\"Construct an iglu schema URL.\n\n Returns:\n The URL to the schema.\n \"\"\"\n return f\"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}\"\n\n\nCliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\nCliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\nBlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\nEnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-0-0\")\nExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\nExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\nPluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\nProjectContextSchema = IgluSchema(\"project_context\", \"1-1-0\")\nTelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "path": "src/meltano/core/tracking/schemas.py"}]} | 1,733 | 422 |
gh_patches_debug_26978 | rasdani/github-patches | git_diff | PrefectHQ__prefect-2727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Write More Idioms
We should write some more idioms:
- [x] how to define conditional logic using the [new conditional api](https://github.com/PrefectHQ/prefect/pull/2443) and the "old" way
- [x] how to use `target`s (0.11.0+)
- [x] how to configure notifications (three options: write a downstream task, state handler, cloud hook)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/tasks/control_flow/conditional.py`
Content:
```
1 from typing import Any, Dict
2
3 import prefect
4 from prefect import Task
5 from prefect.engine import signals
6
7 __all__ = ["switch", "ifelse"]
8
9
10 class Merge(Task):
11 def __init__(self, **kwargs) -> None:
12 if kwargs.setdefault("skip_on_upstream_skip", False):
13 raise ValueError("Merge tasks must have `skip_on_upstream_skip=False`.")
14 kwargs.setdefault("trigger", prefect.triggers.not_all_skipped)
15 super().__init__(**kwargs)
16
17 def run(self, **task_results: Any) -> Any:
18 return next(
19 (v for k, v in sorted(task_results.items()) if v is not None), None,
20 )
21
22
23 class CompareValue(Task):
24 """
25 This task stores a `value` at initialization and compares it to a `value` received at runtime.
26 If the values don't match, it raises a SKIP exception.
27
28 Args:
29 - value (Any): the value this task will attempt to match when it runs
30 - **kwargs: keyword arguments for the Task
31 """
32
33 def __init__(self, value: Any, **kwargs: Any):
34 self.value = value
35 kwargs.setdefault("name", 'CompareValue: "{}"'.format(value))
36 super().__init__(**kwargs)
37
38 def run(self, value: Any) -> None:
39 """
40 Raises a SKIP signal if the passed value does not match the task's match value;
41 succeeds silently otherwise.
42
43 Args:
44 - value (Any): the value that will be matched against the task's value.
45 """
46 if value != self.value:
47 raise signals.SKIP(
48 'Provided value "{}" did not match "{}"'.format(value, self.value)
49 )
50
51
52 def switch(condition: Task, cases: Dict[Any, Task]) -> None:
53 """
54 Adds a SWITCH to a workflow.
55
56 The condition task is evaluated and the result is compared to the keys of the cases
57 dictionary. The task corresponding to the matching key is run; all other tasks are
58 skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
59 `skip_on_upstream_skip=False`.
60
61 Example:
62 ```python
63 @task
64 def condition():
65 return "b" # returning 'b' will take the b_branch
66
67 @task
68 def a_branch():
69 return "A Branch"
70
71 @task
72 def b_branch():
73 return "B Branch"
74
75 with Flow("switch-flow") as flow:
76 switch(condition, dict(a=a_branch, b=b_branch))
77 ```
78
79 Args:
80 - condition (Task): a task whose result forms the condition for the switch
81 - cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
82 The value of the `condition` task will be compared to the keys of this dict, and
83 the matching task will be executed.
84
85 Raises:
86 - PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
87 then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
88 is passing a list of tasks as one of the cases, which adds the `List` task
89 to the switch condition but leaves the tasks themselves upstream.
90 """
91
92 with prefect.tags("switch"):
93 for value, task in cases.items():
94 task = prefect.utilities.tasks.as_task(task)
95 match_condition = CompareValue(value=value).bind(value=condition)
96 task.set_dependencies(upstream_tasks=[match_condition])
97
98
99 def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
100 """
101 Builds a conditional branch into a workflow.
102
103 If the condition evaluates True(ish), the true_task will run. If it
104 evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
105 all downstream tasks that don't set `skip_on_upstream_skip=False`.
106
107 Args:
108 - condition (Task): a task whose boolean result forms the condition for the ifelse
109 - true_task (Task): a task that will be executed if the condition is True
110 - false_task (Task): a task that will be executed if the condition is False
111 """
112
113 @prefect.task
114 def as_bool(x):
115 return bool(x)
116
117 cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}
118 if cases:
119 switch(condition=as_bool(condition), cases=cases)
120
121
122 def merge(*tasks: Task) -> Task:
123 """
124 Merges conditional branches back together.
125
126 A conditional branch in a flow results in one or more tasks proceeding and one or
127 more tasks skipping. It is often convenient to merge those branches back into a
128 single result. This function is a simple way to achieve that goal. By default this
129 task will skip if all its upstream dependencies are also skipped.
130
131 The merge will return the first real result it encounters, or `None`. If multiple
132 tasks might return a result, group them with a list.
133
134 Example:
135 ```python
136 with Flow("My Flow"):
137 true_branch = ActionIfTrue()
138 false_branch = ActionIfFalse()
139 ifelse(CheckCondition(), true_branch, false_branch)
140
141 merged_result = merge(true_branch, false_branch)
142 ```
143
144 Args:
145 - *tasks (Task): tasks whose results should be merged into a single result. The tasks are
146 assumed to all sit downstream of different `switch` branches, such that only
147 one of them will contain a result and the others will all be skipped.
148
149 Returns:
150 - Task: a Task representing the merged result.
151
152 """
153 return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py
--- a/src/prefect/tasks/control_flow/conditional.py
+++ b/src/prefect/tasks/control_flow/conditional.py
@@ -4,7 +4,7 @@
from prefect import Task
from prefect.engine import signals
-__all__ = ["switch", "ifelse"]
+__all__ = ["switch", "ifelse", "merge"]
class Merge(Task):
@@ -119,7 +119,7 @@
switch(condition=as_bool(condition), cases=cases)
-def merge(*tasks: Task) -> Task:
+def merge(*tasks: Task, flow=None) -> Task:
"""
Merges conditional branches back together.
@@ -145,9 +145,13 @@
- *tasks (Task): tasks whose results should be merged into a single result. The tasks are
assumed to all sit downstream of different `switch` branches, such that only
one of them will contain a result and the others will all be skipped.
+ - flow (Flow, optional): The flow to use, defaults to the current flow
+ in context if no flow is specified
Returns:
- Task: a Task representing the merged result.
"""
- return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
+ return Merge().bind(
+ **{"task_{}".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow
+ )
| {"golden_diff": "diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py\n--- a/src/prefect/tasks/control_flow/conditional.py\n+++ b/src/prefect/tasks/control_flow/conditional.py\n@@ -4,7 +4,7 @@\n from prefect import Task\n from prefect.engine import signals\n \n-__all__ = [\"switch\", \"ifelse\"]\n+__all__ = [\"switch\", \"ifelse\", \"merge\"]\n \n \n class Merge(Task):\n@@ -119,7 +119,7 @@\n switch(condition=as_bool(condition), cases=cases)\n \n \n-def merge(*tasks: Task) -> Task:\n+def merge(*tasks: Task, flow=None) -> Task:\n \"\"\"\n Merges conditional branches back together.\n \n@@ -145,9 +145,13 @@\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n+ - flow (Flow, optional): The flow to use, defaults to the current flow\n+ in context if no flow is specified\n \n Returns:\n - Task: a Task representing the merged result.\n \n \"\"\"\n- return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n+ return Merge().bind(\n+ **{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow\n+ )\n", "issue": "Write More Idioms\nWe should write some more idioms:\r\n\r\n- [x] how to define conditional logic using the [new conditional api](https://github.com/PrefectHQ/prefect/pull/2443) and the \"old\" way\r\n- [x] how to use `target`s (0.11.0+)\r\n- [x] how to configure notifications (three options: write a downstream task, state handler, cloud hook)\n", "before_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n kwargs.setdefault(\"trigger\", prefect.triggers.not_all_skipped)\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next(\n (v for k, v in sorted(task_results.items()) if v is not None), None,\n )\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. 
Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n @prefect.task\n def as_bool(x):\n return bool(x)\n\n cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}\n if cases:\n switch(condition=as_bool(condition), cases=cases)\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal. By default this\n task will skip if all its upstream dependencies are also skipped.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. 
The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}], "after_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\n\n__all__ = [\"switch\", \"ifelse\", \"merge\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n kwargs.setdefault(\"trigger\", prefect.triggers.not_all_skipped)\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next(\n (v for k, v in sorted(task_results.items()) if v is not None), None,\n )\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. 
The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n @prefect.task\n def as_bool(x):\n return bool(x)\n\n cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}\n if cases:\n switch(condition=as_bool(condition), cases=cases)\n\n\ndef merge(*tasks: Task, flow=None) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal. By default this\n task will skip if all its upstream dependencies are also skipped.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n - flow (Flow, optional): The flow to use, defaults to the current flow\n in context if no flow is specified\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(\n **{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow\n )\n", "path": "src/prefect/tasks/control_flow/conditional.py"}]} | 1,981 | 347 |
gh_patches_debug_12227 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2041 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add PyInstaller indicator to `mitmproxy --version`
We currently cannot distinguish if users use our precompiled binaries or if they installed mitmproxy using pip/brew/$packagemanager. It would be very useful to output if we are running the precompiled PyInstaller binary.
https://pythonhosted.org/PyInstaller/runtime-information.html
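
For reference, the runtime check described on that page boils down to inspecting attributes that the PyInstaller bootloader sets on `sys`; a minimal sketch (the function name is invented here, not mitmproxy's actual implementation):

```python
import sys


def is_pyinstaller_bundle() -> bool:
    # PyInstaller's bootloader sets sys.frozen; bundles also expose the
    # unpack/bundle directory as sys._MEIPASS.
    return bool(getattr(sys, "frozen", False)) and hasattr(sys, "_MEIPASS")


binary_indicator = "Precompiled Binary" if is_pyinstaller_bundle() else ""
```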
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/utils/debug.py`
Content:
```
1 import gc
2 import os
3 import sys
4 import threading
5 import signal
6 import platform
7 import traceback
8 import subprocess
9
10 from mitmproxy import version
11 from mitmproxy import utils
12
13 from OpenSSL import SSL
14
15
16 def dump_system_info():
17 git_describe = 'release version'
18 with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))):
19 try:
20 c = ['git', 'describe', '--tags', '--long']
21 git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)
22 last_tag, tag_dist, commit = git_describe.decode().strip().rsplit("-", 2)
23
24 if last_tag.startswith('v'):
25 # remove the 'v' prefix
26 last_tag = last_tag[1:]
27 if commit.startswith('g'):
28 # remove the 'g' prefix added by recent git versions
29 commit = commit[1:]
30
31 # build the same version specifier as used for snapshots by rtool
32 git_describe = "{version}dev{tag:04}-0x{commit}".format(
33 version=last_tag,
34 tag=int(tag_dist),
35 commit=commit,
36 )
37 except:
38 pass
39
40 data = [
41 "Mitmproxy version: {} ({})".format(version.VERSION, git_describe),
42 "Python version: {}".format(platform.python_version()),
43 "Platform: {}".format(platform.platform()),
44 "SSL version: {}".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),
45 ]
46 d = platform.linux_distribution()
47 t = "Linux distro: %s %s %s" % d
48 if d[0]: # pragma: no cover
49 data.append(t)
50
51 d = platform.mac_ver()
52 t = "Mac version: %s %s %s" % d
53 if d[0]: # pragma: no cover
54 data.append(t)
55
56 d = platform.win32_ver()
57 t = "Windows version: %s %s %s %s" % d
58 if d[0]: # pragma: no cover
59 data.append(t)
60
61 return "\n".join(data)
62
63
64 def dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover
65 print("****************************************************", file=file)
66 print("Summary", file=file)
67 print("=======", file=file)
68
69 try:
70 import psutil
71 except:
72 print("(psutil not installed, skipping some debug info)", file=file)
73 else:
74 p = psutil.Process()
75 print("num threads: ", p.num_threads(), file=file)
76 if hasattr(p, "num_fds"):
77 print("num fds: ", p.num_fds(), file=file)
78 print("memory: ", p.memory_info(), file=file)
79
80 print(file=file)
81 print("Files", file=file)
82 print("=====", file=file)
83 for i in p.open_files():
84 print(i, file=file)
85
86 print(file=file)
87 print("Connections", file=file)
88 print("===========", file=file)
89 for i in p.connections():
90 print(i, file=file)
91
92 print(file=file)
93 print("Threads", file=file)
94 print("=======", file=file)
95 bthreads = []
96 for i in threading.enumerate():
97 if hasattr(i, "_threadinfo"):
98 bthreads.append(i)
99 else:
100 print(i.name, file=file)
101 bthreads.sort(key=lambda x: x._thread_started)
102 for i in bthreads:
103 print(i._threadinfo(), file=file)
104
105 print(file=file)
106 print("Memory", file=file)
107 print("=======", file=file)
108 gc.collect()
109 d = {}
110 for i in gc.get_objects():
111 t = str(type(i))
112 if "mitmproxy" in t:
113 d[t] = d.setdefault(t, 0) + 1
114 itms = list(d.items())
115 itms.sort(key=lambda x: x[1])
116 for i in itms[-20:]:
117 print(i[1], i[0], file=file)
118 print("****************************************************", file=file)
119
120 if not testing:
121 sys.exit(1)
122
123
124 def dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):
125 id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
126 code = []
127 for threadId, stack in sys._current_frames().items():
128 code.append(
129 "\n# Thread: %s(%d)" % (
130 id2name.get(threadId, ""), threadId
131 )
132 )
133 for filename, lineno, name, line in traceback.extract_stack(stack):
134 code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
135 if line:
136 code.append(" %s" % (line.strip()))
137 print("\n".join(code), file=file)
138 if not testing: # pragma: no cover
139 sys.exit(1)
140
141
142 def register_info_dumpers():
143 if os.name != "nt": # pragma: windows no cover
144 signal.signal(signal.SIGUSR1, dump_info)
145 signal.signal(signal.SIGUSR2, dump_stacks)
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/utils/debug.py b/mitmproxy/utils/debug.py
--- a/mitmproxy/utils/debug.py
+++ b/mitmproxy/utils/debug.py
@@ -37,8 +37,12 @@
except:
pass
+ bin_indicator = "" # PyInstaller builds indicator, if using precompiled binary
+ if getattr(sys, 'frozen', False):
+ bin_indicator = "Precompiled Binary"
+
data = [
- "Mitmproxy version: {} ({})".format(version.VERSION, git_describe),
+ "Mitmproxy version: {} ({}) {}".format(version.VERSION, git_describe, bin_indicator),
"Python version: {}".format(platform.python_version()),
"Platform: {}".format(platform.platform()),
"SSL version: {}".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),
| {"golden_diff": "diff --git a/mitmproxy/utils/debug.py b/mitmproxy/utils/debug.py\n--- a/mitmproxy/utils/debug.py\n+++ b/mitmproxy/utils/debug.py\n@@ -37,8 +37,12 @@\n except:\n pass\n \n+ bin_indicator = \"\" # PyInstaller builds indicator, if using precompiled binary\n+ if getattr(sys, 'frozen', False):\n+ bin_indicator = \"Precompiled Binary\"\n+\n data = [\n- \"Mitmproxy version: {} ({})\".format(version.VERSION, git_describe),\n+ \"Mitmproxy version: {} ({}) {}\".format(version.VERSION, git_describe, bin_indicator),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n", "issue": "Add PyInstaller indicator to `mitmproxy --version`\nWe currently cannot distinguish if users use our precompiled binaries or if they installed mitmproxy using pip/brew/$packagemanager. It would be very useful to output if we are running the precompiled PyInstaller binary. \r\n\r\nhttps://pythonhosted.org/PyInstaller/runtime-information.html\n", "before_files": [{"content": "import gc\nimport os\nimport sys\nimport threading\nimport signal\nimport platform\nimport traceback\nimport subprocess\n\nfrom mitmproxy import version\nfrom mitmproxy import utils\n\nfrom OpenSSL import SSL\n\n\ndef dump_system_info():\n git_describe = 'release version'\n with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))):\n try:\n c = ['git', 'describe', '--tags', '--long']\n git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)\n last_tag, tag_dist, commit = git_describe.decode().strip().rsplit(\"-\", 2)\n\n if last_tag.startswith('v'):\n # remove the 'v' prefix\n last_tag = last_tag[1:]\n if commit.startswith('g'):\n # remove the 'g' prefix added by recent git versions\n commit = commit[1:]\n\n # build the same version specifier as used for snapshots by rtool\n git_describe = \"{version}dev{tag:04}-0x{commit}\".format(\n version=last_tag,\n tag=int(tag_dist),\n commit=commit,\n )\n except:\n pass\n\n data = [\n \"Mitmproxy version: {} ({})\".format(version.VERSION, git_describe),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n ]\n d = platform.linux_distribution()\n t = \"Linux distro: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.mac_ver()\n t = \"Mac version: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.win32_ver()\n t = \"Windows version: %s %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n return \"\\n\".join(data)\n\n\ndef dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover\n print(\"****************************************************\", file=file)\n print(\"Summary\", file=file)\n print(\"=======\", file=file)\n\n try:\n import psutil\n except:\n print(\"(psutil not installed, skipping some debug info)\", file=file)\n else:\n p = psutil.Process()\n print(\"num threads: \", p.num_threads(), file=file)\n if hasattr(p, \"num_fds\"):\n print(\"num fds: \", p.num_fds(), file=file)\n print(\"memory: \", p.memory_info(), file=file)\n\n print(file=file)\n print(\"Files\", file=file)\n print(\"=====\", file=file)\n for i in p.open_files():\n print(i, file=file)\n\n print(file=file)\n print(\"Connections\", file=file)\n print(\"===========\", file=file)\n for i in p.connections():\n print(i, 
file=file)\n\n print(file=file)\n print(\"Threads\", file=file)\n print(\"=======\", file=file)\n bthreads = []\n for i in threading.enumerate():\n if hasattr(i, \"_threadinfo\"):\n bthreads.append(i)\n else:\n print(i.name, file=file)\n bthreads.sort(key=lambda x: x._thread_started)\n for i in bthreads:\n print(i._threadinfo(), file=file)\n\n print(file=file)\n print(\"Memory\", file=file)\n print(\"=======\", file=file)\n gc.collect()\n d = {}\n for i in gc.get_objects():\n t = str(type(i))\n if \"mitmproxy\" in t:\n d[t] = d.setdefault(t, 0) + 1\n itms = list(d.items())\n itms.sort(key=lambda x: x[1])\n for i in itms[-20:]:\n print(i[1], i[0], file=file)\n print(\"****************************************************\", file=file)\n\n if not testing:\n sys.exit(1)\n\n\ndef dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):\n id2name = dict([(th.ident, th.name) for th in threading.enumerate()])\n code = []\n for threadId, stack in sys._current_frames().items():\n code.append(\n \"\\n# Thread: %s(%d)\" % (\n id2name.get(threadId, \"\"), threadId\n )\n )\n for filename, lineno, name, line in traceback.extract_stack(stack):\n code.append('File: \"%s\", line %d, in %s' % (filename, lineno, name))\n if line:\n code.append(\" %s\" % (line.strip()))\n print(\"\\n\".join(code), file=file)\n if not testing: # pragma: no cover\n sys.exit(1)\n\n\ndef register_info_dumpers():\n if os.name != \"nt\": # pragma: windows no cover\n signal.signal(signal.SIGUSR1, dump_info)\n signal.signal(signal.SIGUSR2, dump_stacks)\n", "path": "mitmproxy/utils/debug.py"}], "after_files": [{"content": "import gc\nimport os\nimport sys\nimport threading\nimport signal\nimport platform\nimport traceback\nimport subprocess\n\nfrom mitmproxy import version\nfrom mitmproxy import utils\n\nfrom OpenSSL import SSL\n\n\ndef dump_system_info():\n git_describe = 'release version'\n with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))):\n try:\n c = ['git', 'describe', '--tags', '--long']\n git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)\n last_tag, tag_dist, commit = git_describe.decode().strip().rsplit(\"-\", 2)\n\n if last_tag.startswith('v'):\n # remove the 'v' prefix\n last_tag = last_tag[1:]\n if commit.startswith('g'):\n # remove the 'g' prefix added by recent git versions\n commit = commit[1:]\n\n # build the same version specifier as used for snapshots by rtool\n git_describe = \"{version}dev{tag:04}-0x{commit}\".format(\n version=last_tag,\n tag=int(tag_dist),\n commit=commit,\n )\n except:\n pass\n\n bin_indicator = \"\" # PyInstaller builds indicator, if using precompiled binary\n if getattr(sys, 'frozen', False):\n bin_indicator = \"Precompiled Binary\"\n\n data = [\n \"Mitmproxy version: {} ({}) {}\".format(version.VERSION, git_describe, bin_indicator),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n ]\n d = platform.linux_distribution()\n t = \"Linux distro: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.mac_ver()\n t = \"Mac version: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.win32_ver()\n t = \"Windows version: %s %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n return \"\\n\".join(data)\n\n\ndef dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover\n 
print(\"****************************************************\", file=file)\n print(\"Summary\", file=file)\n print(\"=======\", file=file)\n\n try:\n import psutil\n except:\n print(\"(psutil not installed, skipping some debug info)\", file=file)\n else:\n p = psutil.Process()\n print(\"num threads: \", p.num_threads(), file=file)\n if hasattr(p, \"num_fds\"):\n print(\"num fds: \", p.num_fds(), file=file)\n print(\"memory: \", p.memory_info(), file=file)\n\n print(file=file)\n print(\"Files\", file=file)\n print(\"=====\", file=file)\n for i in p.open_files():\n print(i, file=file)\n\n print(file=file)\n print(\"Connections\", file=file)\n print(\"===========\", file=file)\n for i in p.connections():\n print(i, file=file)\n\n print(file=file)\n print(\"Threads\", file=file)\n print(\"=======\", file=file)\n bthreads = []\n for i in threading.enumerate():\n if hasattr(i, \"_threadinfo\"):\n bthreads.append(i)\n else:\n print(i.name, file=file)\n bthreads.sort(key=lambda x: x._thread_started)\n for i in bthreads:\n print(i._threadinfo(), file=file)\n\n print(file=file)\n print(\"Memory\", file=file)\n print(\"=======\", file=file)\n gc.collect()\n d = {}\n for i in gc.get_objects():\n t = str(type(i))\n if \"mitmproxy\" in t:\n d[t] = d.setdefault(t, 0) + 1\n itms = list(d.items())\n itms.sort(key=lambda x: x[1])\n for i in itms[-20:]:\n print(i[1], i[0], file=file)\n print(\"****************************************************\", file=file)\n\n if not testing:\n sys.exit(1)\n\n\ndef dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):\n id2name = dict([(th.ident, th.name) for th in threading.enumerate()])\n code = []\n for threadId, stack in sys._current_frames().items():\n code.append(\n \"\\n# Thread: %s(%d)\" % (\n id2name.get(threadId, \"\"), threadId\n )\n )\n for filename, lineno, name, line in traceback.extract_stack(stack):\n code.append('File: \"%s\", line %d, in %s' % (filename, lineno, name))\n if line:\n code.append(\" %s\" % (line.strip()))\n print(\"\\n\".join(code), file=file)\n if not testing: # pragma: no cover\n sys.exit(1)\n\n\ndef register_info_dumpers():\n if os.name != \"nt\": # pragma: windows no cover\n signal.signal(signal.SIGUSR1, dump_info)\n signal.signal(signal.SIGUSR2, dump_stacks)\n", "path": "mitmproxy/utils/debug.py"}]} | 1,788 | 188 |
gh_patches_debug_28571 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-5219 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Poundland spider address parsing issue
The addr:street_address field returned by the poundland.py spider is sometimes broken, giving results such as:
`"addr:street_address": "5, 6, -, 5, 8, , T, a, f, f, , S, t, r, e, e, t"`
The problem is caused by line 20 in the code:
` item["street_address"] = ", ".join(filter(None, store["address"].get("line")))`
where it is assumed that "line" from the scraped JSON will be an array of values. But sometimes "line" is just a single string. When this happens, the string itself is split into individual characters, giving results like the one above.
I guess that before applying that code we should test whether "line" is a single string. I don't think I know enough python to know the best way to fix this, and a quick Google suggests there may be a difference between Python 2 and Python 3 (which would make it difficult for me to test any solutions).
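
For illustration, one way to make the join robust is to normalise the field first; this is only a sketch (the helper name is invented here), not necessarily the fix the spider should adopt:

```python
def join_address_lines(line):
    """Handle "line" being missing, a single string, or a list of strings."""
    if not line:
        return None
    if isinstance(line, str):
        return line
    return ", ".join(filter(None, line))


# inside PoundlandSpider.parse(), instead of the unconditional join:
item["street_address"] = join_address_lines(store["address"].get("line"))
```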
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/poundland.py`
Content:
```
1 import scrapy
2
3 from locations.dict_parser import DictParser
4 from locations.hours import OpeningHours
5
6
7 class PoundlandSpider(scrapy.Spider):
8 name = "poundland"
9 item_attributes = {"brand": "Poundland", "brand_wikidata": "Q1434528"}
10 start_urls = [
11 "https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000"
12 ]
13 custom_settings = {"DEFAULT_REQUEST_HEADERS": {"Accept": "application/json"}}
14
15 def parse(self, response):
16 # We may have to handle pagination at some point
17 for store in response.json()["locations"]:
18 item = DictParser.parse(store)
19
20 item["street_address"] = ", ".join(filter(None, store["address"].get("line")))
21
22 # "store_id" seems to be a better ref than "id"
23 item["ref"] = store.get("store_id")
24 item["website"] = "https://www.poundland.co.uk/store-finder/store_page/view/id/" + item["ref"] + "/"
25
26 oh = OpeningHours()
27 for rule in store["opening_hours"]:
28 if rule["hours"] == "Closed":
29 continue
30 open_time, close_time = rule["hours"].split(" - ")
31 oh.add_range(rule["day"][:2], open_time, close_time)
32
33 item["opening_hours"] = oh.as_opening_hours()
34
35 item["extras"] = {}
36 item["extras"]["atm"] = "yes" if store.get("atm") == "1" else "no"
37 item["extras"]["icestore"] = "yes" if store.get("icestore") == "1" else "no"
38
39 if store["is_pep_co_only"] == "1":
40 item["brand"] = "Pep&Co"
41 item["brand_wikidata"] = "Q24908166"
42 else:
43 if store.get("pepshopinshop") == "1":
44 # Pep and Poundland at this location
45 pep = item.copy()
46
47 pep["ref"] = pep["ref"] + "_pep"
48
49 pep["brand"] = "Pep&Co"
50 pep["brand_wikidata"] = "Q24908166"
51
52 pep["located_in"] = self.item_attributes["brand"]
53 pep["located_in_wikidata"] = self.item_attributes["brand_wikidata"]
54
55 yield pep
56
57 yield item
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/poundland.py b/locations/spiders/poundland.py
--- a/locations/spiders/poundland.py
+++ b/locations/spiders/poundland.py
@@ -1,7 +1,9 @@
import scrapy
+from locations.categories import Extras, apply_yes_no
from locations.dict_parser import DictParser
from locations.hours import OpeningHours
+from locations.spiders.vapestore_gb import clean_address
class PoundlandSpider(scrapy.Spider):
@@ -17,7 +19,7 @@
for store in response.json()["locations"]:
item = DictParser.parse(store)
- item["street_address"] = ", ".join(filter(None, store["address"].get("line")))
+ item["street_address"] = clean_address(store["address"].get("line"))
# "store_id" seems to be a better ref than "id"
item["ref"] = store.get("store_id")
@@ -30,10 +32,9 @@
open_time, close_time = rule["hours"].split(" - ")
oh.add_range(rule["day"][:2], open_time, close_time)
- item["opening_hours"] = oh.as_opening_hours()
+ item["opening_hours"] = oh
- item["extras"] = {}
- item["extras"]["atm"] = "yes" if store.get("atm") == "1" else "no"
+ apply_yes_no(Extras.ATM, item, store.get("atm") == "1")
item["extras"]["icestore"] = "yes" if store.get("icestore") == "1" else "no"
if store["is_pep_co_only"] == "1":
| {"golden_diff": "diff --git a/locations/spiders/poundland.py b/locations/spiders/poundland.py\n--- a/locations/spiders/poundland.py\n+++ b/locations/spiders/poundland.py\n@@ -1,7 +1,9 @@\n import scrapy\n \n+from locations.categories import Extras, apply_yes_no\n from locations.dict_parser import DictParser\n from locations.hours import OpeningHours\n+from locations.spiders.vapestore_gb import clean_address\n \n \n class PoundlandSpider(scrapy.Spider):\n@@ -17,7 +19,7 @@\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n \n- item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))\n+ item[\"street_address\"] = clean_address(store[\"address\"].get(\"line\"))\n \n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n@@ -30,10 +32,9 @@\n open_time, close_time = rule[\"hours\"].split(\" - \")\n oh.add_range(rule[\"day\"][:2], open_time, close_time)\n \n- item[\"opening_hours\"] = oh.as_opening_hours()\n+ item[\"opening_hours\"] = oh\n \n- item[\"extras\"] = {}\n- item[\"extras\"][\"atm\"] = \"yes\" if store.get(\"atm\") == \"1\" else \"no\"\n+ apply_yes_no(Extras.ATM, item, store.get(\"atm\") == \"1\")\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n \n if store[\"is_pep_co_only\"] == \"1\":\n", "issue": "Poundland spider address parsing issue\nThe addr:street_address field returned by the poundland.py spider is sometimes broken, giving results such as:\r\n`\"addr:street_address\": \"5, 6, -, 5, 8, , T, a, f, f, , S, t, r, e, e, t\"`\r\nThe problem is caused by line 20 in the code:\r\n` item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))`\r\nwhere is is assumed that \"line\" from the scraped JSON will be an array of values. But it is sometimes \"line\" is just a single string. When this happens, the string itself is split into individual characters, giving results like the one above.\r\n\r\nI guess that before applying that code we should test whether \"line\" is a single string. 
I don't think I know enough python to know the best way to fix this, and a quick Google suggests there may be a difference between Python 2 and Python 3 (which would make it difficult for me to test any solutions).\n", "before_files": [{"content": "import scrapy\n\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\n\n\nclass PoundlandSpider(scrapy.Spider):\n name = \"poundland\"\n item_attributes = {\"brand\": \"Poundland\", \"brand_wikidata\": \"Q1434528\"}\n start_urls = [\n \"https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000\"\n ]\n custom_settings = {\"DEFAULT_REQUEST_HEADERS\": {\"Accept\": \"application/json\"}}\n\n def parse(self, response):\n # We may have to handle pagination at some point\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n\n item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))\n\n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n item[\"website\"] = \"https://www.poundland.co.uk/store-finder/store_page/view/id/\" + item[\"ref\"] + \"/\"\n\n oh = OpeningHours()\n for rule in store[\"opening_hours\"]:\n if rule[\"hours\"] == \"Closed\":\n continue\n open_time, close_time = rule[\"hours\"].split(\" - \")\n oh.add_range(rule[\"day\"][:2], open_time, close_time)\n\n item[\"opening_hours\"] = oh.as_opening_hours()\n\n item[\"extras\"] = {}\n item[\"extras\"][\"atm\"] = \"yes\" if store.get(\"atm\") == \"1\" else \"no\"\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n\n if store[\"is_pep_co_only\"] == \"1\":\n item[\"brand\"] = \"Pep&Co\"\n item[\"brand_wikidata\"] = \"Q24908166\"\n else:\n if store.get(\"pepshopinshop\") == \"1\":\n # Pep and Poundland at this location\n pep = item.copy()\n\n pep[\"ref\"] = pep[\"ref\"] + \"_pep\"\n\n pep[\"brand\"] = \"Pep&Co\"\n pep[\"brand_wikidata\"] = \"Q24908166\"\n\n pep[\"located_in\"] = self.item_attributes[\"brand\"]\n pep[\"located_in_wikidata\"] = self.item_attributes[\"brand_wikidata\"]\n\n yield pep\n\n yield item\n", "path": "locations/spiders/poundland.py"}], "after_files": [{"content": "import scrapy\n\nfrom locations.categories import Extras, apply_yes_no\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\nfrom locations.spiders.vapestore_gb import clean_address\n\n\nclass PoundlandSpider(scrapy.Spider):\n name = \"poundland\"\n item_attributes = {\"brand\": \"Poundland\", \"brand_wikidata\": \"Q1434528\"}\n start_urls = [\n \"https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000\"\n ]\n custom_settings = {\"DEFAULT_REQUEST_HEADERS\": {\"Accept\": \"application/json\"}}\n\n def parse(self, response):\n # We may have to handle pagination at some point\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n\n item[\"street_address\"] = clean_address(store[\"address\"].get(\"line\"))\n\n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n item[\"website\"] = \"https://www.poundland.co.uk/store-finder/store_page/view/id/\" + item[\"ref\"] + \"/\"\n\n oh = OpeningHours()\n for rule in store[\"opening_hours\"]:\n if rule[\"hours\"] == \"Closed\":\n continue\n open_time, close_time = rule[\"hours\"].split(\" - \")\n 
oh.add_range(rule[\"day\"][:2], open_time, close_time)\n\n item[\"opening_hours\"] = oh\n\n apply_yes_no(Extras.ATM, item, store.get(\"atm\") == \"1\")\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n\n if store[\"is_pep_co_only\"] == \"1\":\n item[\"brand\"] = \"Pep&Co\"\n item[\"brand_wikidata\"] = \"Q24908166\"\n else:\n if store.get(\"pepshopinshop\") == \"1\":\n # Pep and Poundland at this location\n pep = item.copy()\n\n pep[\"ref\"] = pep[\"ref\"] + \"_pep\"\n\n pep[\"brand\"] = \"Pep&Co\"\n pep[\"brand_wikidata\"] = \"Q24908166\"\n\n pep[\"located_in\"] = self.item_attributes[\"brand\"]\n pep[\"located_in_wikidata\"] = self.item_attributes[\"brand_wikidata\"]\n\n yield pep\n\n yield item\n", "path": "locations/spiders/poundland.py"}]} | 1,164 | 378 |
gh_patches_debug_5530 | rasdani/github-patches | git_diff | urllib3__urllib3-2204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
is_connection_dropped checks against None but uses False as default value for getattr
I happened to read this line and the code looks fishy. I did not otherwise verify the potential bug.
See implementation of `is_connection_dropped(conn: socket.socket) -> bool`:
https://github.com/urllib3/urllib3/blob/287052a16a59bcaba5772387de36fa9a49eb8378/src/urllib3/util/connection.py#L19-L23
If there is no `sock` attribute on `conn`, then we will call `wait_for_read(False, timeout=0.0)`, which may, for example, end up putting `False` into the iterable passed to `select`.
Since this seemed to never have caused problems, the `sock = getattr(conn, "sock", False)` can probably be replaced with just `sock = conn.sock`.
Alternatives would be to replace the default (last argument of `getattr`) of `False` with `None` or replace the `if sock is None` with `if not sock`.
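
The first of those alternatives keeps the existing `is None` check and only changes the `getattr` default; roughly (a sketch of the function body, not a drop-in patch):

```python
from urllib3.util.wait import wait_for_read


def is_connection_dropped(conn) -> bool:
    sock = getattr(conn, "sock", None)  # None, not False, as the default
    if sock is None:  # connection already closed (such as by httplib)
        return True
    # Returns True if readable, which here means the connection was dropped
    return wait_for_read(sock, timeout=0.0)
```

This is also the direction the patch further down this row takes.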
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/urllib3/util/connection.py`
Content:
```
1 import socket
2 from typing import List, Optional, Tuple, Union
3
4 from urllib3.exceptions import LocationParseError
5
6 from .wait import wait_for_read
7
8 SOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore
9 SocketOptions = List[Tuple[int, int, Union[int, bytes]]]
10
11
12 def is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific
13 """
14 Returns True if the connection is dropped and should be closed.
15
16 :param conn:
17 :class:`http.client.HTTPConnection` object.
18 """
19 sock = getattr(conn, "sock", False)
20 if sock is None: # Connection already closed (such as by httplib).
21 return True
22 # Returns True if readable, which here means it's been dropped
23 return wait_for_read(sock, timeout=0.0)
24
25
26 # This function is copied from socket.py in the Python 2.7 standard
27 # library test suite. Added to its signature is only `socket_options`.
28 # One additional modification is that we avoid binding to IPv6 servers
29 # discovered in DNS if the system doesn't have IPv6 functionality.
30 def create_connection(
31 address: Tuple[str, int],
32 timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,
33 source_address: Optional[Tuple[str, int]] = None,
34 socket_options: Optional[SocketOptions] = None,
35 ) -> socket.socket:
36 """Connect to *address* and return the socket object.
37
38 Convenience function. Connect to *address* (a 2-tuple ``(host,
39 port)``) and return the socket object. Passing the optional
40 *timeout* parameter will set the timeout on the socket instance
41 before attempting to connect. If no *timeout* is supplied, the
42 global default timeout setting returned by :func:`socket.getdefaulttimeout`
43 is used. If *source_address* is set it must be a tuple of (host, port)
44 for the socket to bind as a source address before making the connection.
45 An host of '' or port 0 tells the OS to use the default.
46 """
47
48 host, port = address
49 if host.startswith("["):
50 host = host.strip("[]")
51 err = None
52
53 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
54 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
55 # The original create_connection function always returns all records.
56 family = allowed_gai_family()
57
58 try:
59 host.encode("idna")
60 except UnicodeError:
61 raise LocationParseError(f"'{host}', label empty or too long") from None
62
63 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
64 af, socktype, proto, canonname, sa = res
65 sock = None
66 try:
67 sock = socket.socket(af, socktype, proto)
68
69 # If provided, set socket level options before connecting.
70 _set_socket_options(sock, socket_options)
71
72 if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:
73 sock.settimeout(timeout)
74 if source_address:
75 sock.bind(source_address)
76 sock.connect(sa)
77 return sock
78
79 except OSError as e:
80 err = e
81 if sock is not None:
82 sock.close()
83 sock = None
84
85 if err is not None:
86 raise err
87
88 raise OSError("getaddrinfo returns an empty list")
89
90
91 def _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:
92 if options is None:
93 return
94
95 for opt in options:
96 sock.setsockopt(*opt)
97
98
99 def allowed_gai_family() -> socket.AddressFamily:
100 """This function is designed to work in the context of
101 getaddrinfo, where family=socket.AF_UNSPEC is the default and
102 will perform a DNS search for both IPv6 and IPv4 records."""
103
104 family = socket.AF_INET
105 if HAS_IPV6:
106 family = socket.AF_UNSPEC
107 return family
108
109
110 def _has_ipv6(host: str) -> bool:
111 """ Returns True if the system can bind an IPv6 address. """
112 sock = None
113 has_ipv6 = False
114
115 if socket.has_ipv6:
116 # has_ipv6 returns true if cPython was compiled with IPv6 support.
117 # It does not tell us if the system has IPv6 support enabled. To
118 # determine that we must bind to an IPv6 address.
119 # https://github.com/urllib3/urllib3/pull/611
120 # https://bugs.python.org/issue658327
121 try:
122 sock = socket.socket(socket.AF_INET6)
123 sock.bind((host, 0))
124 has_ipv6 = True
125 except Exception:
126 pass
127
128 if sock:
129 sock.close()
130 return has_ipv6
131
132
133 HAS_IPV6 = _has_ipv6("::1")
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/urllib3/util/connection.py b/src/urllib3/util/connection.py
--- a/src/urllib3/util/connection.py
+++ b/src/urllib3/util/connection.py
@@ -16,7 +16,7 @@
:param conn:
:class:`http.client.HTTPConnection` object.
"""
- sock = getattr(conn, "sock", False)
+ sock = getattr(conn, "sock", None)
if sock is None: # Connection already closed (such as by httplib).
return True
# Returns True if readable, which here means it's been dropped
| {"golden_diff": "diff --git a/src/urllib3/util/connection.py b/src/urllib3/util/connection.py\n--- a/src/urllib3/util/connection.py\n+++ b/src/urllib3/util/connection.py\n@@ -16,7 +16,7 @@\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n- sock = getattr(conn, \"sock\", False)\n+ sock = getattr(conn, \"sock\", None)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n", "issue": "is_connection_dropped checks against None but uses False as default value for getattr\nI happened to read this line and the code looks fishy. I did not otherwise verify the potential bug.\r\n\r\nSee implementation of `is_connection_dropped(conn: socket.socket) -> bool`:\r\n\r\nhttps://github.com/urllib3/urllib3/blob/287052a16a59bcaba5772387de36fa9a49eb8378/src/urllib3/util/connection.py#L19-L23\r\n\r\nIf there is no property `sock` on `conn`, then we will call `wait_for_read(False, timeout=0.0)`, which e.g. may end up putting the `False` into the iterable passed to `select`.\r\n\r\nSince this seemed to never have caused problems, the `sock = getattr(conn, \"sock\", False)` can probably be replaced with just `sock = conn.sock`.\r\n\r\nAlternatives would be to replace the default (last argument of `getattr`) of `False` with `None` or replace the `if sock is None` with `if not sock`.\n", "before_files": [{"content": "import socket\nfrom typing import List, Optional, Tuple, Union\n\nfrom urllib3.exceptions import LocationParseError\n\nfrom .wait import wait_for_read\n\nSOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore\nSocketOptions = List[Tuple[int, int, Union[int, bytes]]]\n\n\ndef is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific\n \"\"\"\n Returns True if the connection is dropped and should be closed.\n\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n sock = getattr(conn, \"sock\", False)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n return wait_for_read(sock, timeout=0.0)\n\n\n# This function is copied from socket.py in the Python 2.7 standard\n# library test suite. Added to its signature is only `socket_options`.\n# One additional modification is that we avoid binding to IPv6 servers\n# discovered in DNS if the system doesn't have IPv6 functionality.\ndef create_connection(\n address: Tuple[str, int],\n timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,\n source_address: Optional[Tuple[str, int]] = None,\n socket_options: Optional[SocketOptions] = None,\n) -> socket.socket:\n \"\"\"Connect to *address* and return the socket object.\n\n Convenience function. Connect to *address* (a 2-tuple ``(host,\n port)``) and return the socket object. Passing the optional\n *timeout* parameter will set the timeout on the socket instance\n before attempting to connect. If no *timeout* is supplied, the\n global default timeout setting returned by :func:`socket.getdefaulttimeout`\n is used. 
If *source_address* is set it must be a tuple of (host, port)\n for the socket to bind as a source address before making the connection.\n An host of '' or port 0 tells the OS to use the default.\n \"\"\"\n\n host, port = address\n if host.startswith(\"[\"):\n host = host.strip(\"[]\")\n err = None\n\n # Using the value from allowed_gai_family() in the context of getaddrinfo lets\n # us select whether to work with IPv4 DNS records, IPv6 records, or both.\n # The original create_connection function always returns all records.\n family = allowed_gai_family()\n\n try:\n host.encode(\"idna\")\n except UnicodeError:\n raise LocationParseError(f\"'{host}', label empty or too long\") from None\n\n for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n af, socktype, proto, canonname, sa = res\n sock = None\n try:\n sock = socket.socket(af, socktype, proto)\n\n # If provided, set socket level options before connecting.\n _set_socket_options(sock, socket_options)\n\n if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:\n sock.settimeout(timeout)\n if source_address:\n sock.bind(source_address)\n sock.connect(sa)\n return sock\n\n except OSError as e:\n err = e\n if sock is not None:\n sock.close()\n sock = None\n\n if err is not None:\n raise err\n\n raise OSError(\"getaddrinfo returns an empty list\")\n\n\ndef _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:\n if options is None:\n return\n\n for opt in options:\n sock.setsockopt(*opt)\n\n\ndef allowed_gai_family() -> socket.AddressFamily:\n \"\"\"This function is designed to work in the context of\n getaddrinfo, where family=socket.AF_UNSPEC is the default and\n will perform a DNS search for both IPv6 and IPv4 records.\"\"\"\n\n family = socket.AF_INET\n if HAS_IPV6:\n family = socket.AF_UNSPEC\n return family\n\n\ndef _has_ipv6(host: str) -> bool:\n \"\"\" Returns True if the system can bind an IPv6 address. \"\"\"\n sock = None\n has_ipv6 = False\n\n if socket.has_ipv6:\n # has_ipv6 returns true if cPython was compiled with IPv6 support.\n # It does not tell us if the system has IPv6 support enabled. To\n # determine that we must bind to an IPv6 address.\n # https://github.com/urllib3/urllib3/pull/611\n # https://bugs.python.org/issue658327\n try:\n sock = socket.socket(socket.AF_INET6)\n sock.bind((host, 0))\n has_ipv6 = True\n except Exception:\n pass\n\n if sock:\n sock.close()\n return has_ipv6\n\n\nHAS_IPV6 = _has_ipv6(\"::1\")\n", "path": "src/urllib3/util/connection.py"}], "after_files": [{"content": "import socket\nfrom typing import List, Optional, Tuple, Union\n\nfrom urllib3.exceptions import LocationParseError\n\nfrom .wait import wait_for_read\n\nSOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore\nSocketOptions = List[Tuple[int, int, Union[int, bytes]]]\n\n\ndef is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific\n \"\"\"\n Returns True if the connection is dropped and should be closed.\n\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n sock = getattr(conn, \"sock\", None)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n return wait_for_read(sock, timeout=0.0)\n\n\n# This function is copied from socket.py in the Python 2.7 standard\n# library test suite. 
Added to its signature is only `socket_options`.\n# One additional modification is that we avoid binding to IPv6 servers\n# discovered in DNS if the system doesn't have IPv6 functionality.\ndef create_connection(\n address: Tuple[str, int],\n timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,\n source_address: Optional[Tuple[str, int]] = None,\n socket_options: Optional[SocketOptions] = None,\n) -> socket.socket:\n \"\"\"Connect to *address* and return the socket object.\n\n Convenience function. Connect to *address* (a 2-tuple ``(host,\n port)``) and return the socket object. Passing the optional\n *timeout* parameter will set the timeout on the socket instance\n before attempting to connect. If no *timeout* is supplied, the\n global default timeout setting returned by :func:`socket.getdefaulttimeout`\n is used. If *source_address* is set it must be a tuple of (host, port)\n for the socket to bind as a source address before making the connection.\n An host of '' or port 0 tells the OS to use the default.\n \"\"\"\n\n host, port = address\n if host.startswith(\"[\"):\n host = host.strip(\"[]\")\n err = None\n\n # Using the value from allowed_gai_family() in the context of getaddrinfo lets\n # us select whether to work with IPv4 DNS records, IPv6 records, or both.\n # The original create_connection function always returns all records.\n family = allowed_gai_family()\n\n try:\n host.encode(\"idna\")\n except UnicodeError:\n raise LocationParseError(f\"'{host}', label empty or too long\") from None\n\n for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n af, socktype, proto, canonname, sa = res\n sock = None\n try:\n sock = socket.socket(af, socktype, proto)\n\n # If provided, set socket level options before connecting.\n _set_socket_options(sock, socket_options)\n\n if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:\n sock.settimeout(timeout)\n if source_address:\n sock.bind(source_address)\n sock.connect(sa)\n return sock\n\n except OSError as e:\n err = e\n if sock is not None:\n sock.close()\n sock = None\n\n if err is not None:\n raise err\n\n raise OSError(\"getaddrinfo returns an empty list\")\n\n\ndef _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:\n if options is None:\n return\n\n for opt in options:\n sock.setsockopt(*opt)\n\n\ndef allowed_gai_family() -> socket.AddressFamily:\n \"\"\"This function is designed to work in the context of\n getaddrinfo, where family=socket.AF_UNSPEC is the default and\n will perform a DNS search for both IPv6 and IPv4 records.\"\"\"\n\n family = socket.AF_INET\n if HAS_IPV6:\n family = socket.AF_UNSPEC\n return family\n\n\ndef _has_ipv6(host: str) -> bool:\n \"\"\" Returns True if the system can bind an IPv6 address. \"\"\"\n sock = None\n has_ipv6 = False\n\n if socket.has_ipv6:\n # has_ipv6 returns true if cPython was compiled with IPv6 support.\n # It does not tell us if the system has IPv6 support enabled. To\n # determine that we must bind to an IPv6 address.\n # https://github.com/urllib3/urllib3/pull/611\n # https://bugs.python.org/issue658327\n try:\n sock = socket.socket(socket.AF_INET6)\n sock.bind((host, 0))\n has_ipv6 = True\n except Exception:\n pass\n\n if sock:\n sock.close()\n return has_ipv6\n\n\nHAS_IPV6 = _has_ipv6(\"::1\")\n", "path": "src/urllib3/util/connection.py"}]} | 1,866 | 135 |
gh_patches_debug_30382 | rasdani/github-patches | git_diff | pytorch__audio-3 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need API for saving to file.
Currently we only have a load function, but after training a network it would be great if we could save the generated tensor to a file.
@soumith I think we can reuse a lot of code from this repo https://github.com/MattVitelli/GRUV
--- END ISSUE ---
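Before the code listing below, a minimal usage sketch of the API being requested. The `save` call mirrors the signature that the accepted patch further down introduces (`save(filepath, src, sample_rate)`); the file names are placeholders and the snippet assumes the patched package is installed.

```python
import torchaudio

# load an existing file: returns a samples-by-channels FloatTensor and the sample rate
signal, sample_rate = torchaudio.load("input.wav")

# ... run the tensor through a network, post-process it, etc. ...

# write the (possibly modified) tensor back out; the file extension selects the format
torchaudio.save("output.wav", signal, sample_rate)
```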
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchaudio/__init__.py`
Content:
```
1 import torch
2
3 from cffi import FFI
4 ffi = FFI()
5 from ._ext import th_sox
6
7 def load(filename, out=None):
8 if out is not None:
9 assert torch.is_tensor(out)
10 assert not out.is_cuda
11 else:
12 out = torch.FloatTensor()
13
14 if isinstance(out, torch.FloatTensor):
15 func = th_sox.libthsox_Float_read_audio_file
16 elif isinstance(out, torch.DoubleTensor):
17 func = th_sox.libthsox_Double_read_audio_file
18 elif isinstance(out, torch.ByteTensor):
19 func = th_sox.libthsox_Byte_read_audio_file
20 elif isinstance(out, torch.CharTensor):
21 func = th_sox.libthsox_Char_read_audio_file
22 elif isinstance(out, torch.ShortTensor):
23 func = th_sox.libthsox_Short_read_audio_file
24 elif isinstance(out, torch.IntTensor):
25 func = th_sox.libthsox_Int_read_audio_file
26 elif isinstance(out, torch.LongTensor):
27 func = th_sox.libthsox_Long_read_audio_file
28
29 sample_rate_p = ffi.new('int*')
30 func(bytes(filename), out, sample_rate_p)
31 sample_rate = sample_rate_p[0]
32 return out, sample_rate
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py
--- a/torchaudio/__init__.py
+++ b/torchaudio/__init__.py
@@ -1,32 +1,40 @@
+import os
+
import torch
from cffi import FFI
+
ffi = FFI()
from ._ext import th_sox
+
+def check_input(src):
+ if not torch.is_tensor(src):
+ raise TypeError('Expected a tensor, got %s' % type(src))
+ if not src.__module__ == 'torch':
+ raise TypeError('Expected a CPU based tensor, got %s' % type(src))
+
+
def load(filename, out=None):
if out is not None:
- assert torch.is_tensor(out)
- assert not out.is_cuda
+ check_input(out)
else:
out = torch.FloatTensor()
-
- if isinstance(out, torch.FloatTensor):
- func = th_sox.libthsox_Float_read_audio_file
- elif isinstance(out, torch.DoubleTensor):
- func = th_sox.libthsox_Double_read_audio_file
- elif isinstance(out, torch.ByteTensor):
- func = th_sox.libthsox_Byte_read_audio_file
- elif isinstance(out, torch.CharTensor):
- func = th_sox.libthsox_Char_read_audio_file
- elif isinstance(out, torch.ShortTensor):
- func = th_sox.libthsox_Short_read_audio_file
- elif isinstance(out, torch.IntTensor):
- func = th_sox.libthsox_Int_read_audio_file
- elif isinstance(out, torch.LongTensor):
- func = th_sox.libthsox_Long_read_audio_file
-
- sample_rate_p = ffi.new('int*')
+ typename = type(out).__name__.replace('Tensor', '')
+ func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))
+ sample_rate_p = ffi.new('int*')
func(bytes(filename), out, sample_rate_p)
sample_rate = sample_rate_p[0]
return out, sample_rate
+
+
+def save(filepath, src, sample_rate):
+ filename, extension = os.path.splitext(filepath)
+ if type(sample_rate) != int:
+ raise TypeError('Sample rate should be a integer')
+
+ check_input(src)
+ typename = type(src).__name__.replace('Tensor', '')
+ func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))
+
+ func(bytes(filepath), src, extension[1:], sample_rate)
| {"golden_diff": "diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py\n--- a/torchaudio/__init__.py\n+++ b/torchaudio/__init__.py\n@@ -1,32 +1,40 @@\n+import os\n+\n import torch\n \n from cffi import FFI\n+\n ffi = FFI()\n from ._ext import th_sox\n \n+\n+def check_input(src):\n+ if not torch.is_tensor(src):\n+ raise TypeError('Expected a tensor, got %s' % type(src))\n+ if not src.__module__ == 'torch':\n+ raise TypeError('Expected a CPU based tensor, got %s' % type(src))\n+\n+\n def load(filename, out=None):\n if out is not None:\n- assert torch.is_tensor(out)\n- assert not out.is_cuda\n+ check_input(out)\n else:\n out = torch.FloatTensor()\n-\n- if isinstance(out, torch.FloatTensor):\n- func = th_sox.libthsox_Float_read_audio_file\n- elif isinstance(out, torch.DoubleTensor):\n- func = th_sox.libthsox_Double_read_audio_file\n- elif isinstance(out, torch.ByteTensor):\n- func = th_sox.libthsox_Byte_read_audio_file\n- elif isinstance(out, torch.CharTensor):\n- func = th_sox.libthsox_Char_read_audio_file\n- elif isinstance(out, torch.ShortTensor):\n- func = th_sox.libthsox_Short_read_audio_file\n- elif isinstance(out, torch.IntTensor):\n- func = th_sox.libthsox_Int_read_audio_file\n- elif isinstance(out, torch.LongTensor):\n- func = th_sox.libthsox_Long_read_audio_file\n- \n- sample_rate_p = ffi.new('int*') \n+ typename = type(out).__name__.replace('Tensor', '')\n+ func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))\n+ sample_rate_p = ffi.new('int*')\n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n+\n+\n+def save(filepath, src, sample_rate):\n+ filename, extension = os.path.splitext(filepath)\n+ if type(sample_rate) != int:\n+ raise TypeError('Sample rate should be a integer')\n+\n+ check_input(src)\n+ typename = type(src).__name__.replace('Tensor', '')\n+ func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))\n+\n+ func(bytes(filepath), src, extension[1:], sample_rate)\n", "issue": "Need API for saving to file.\nCurrently we only have a load function. 
But after training the network it would be great if we can save the generated tensor to a file.\r\n\r\n@soumith I think we can reuse a lot of code from this repo https://github.com/MattVitelli/GRUV\n", "before_files": [{"content": "import torch\n\nfrom cffi import FFI\nffi = FFI()\nfrom ._ext import th_sox\n\ndef load(filename, out=None):\n if out is not None:\n assert torch.is_tensor(out)\n assert not out.is_cuda\n else:\n out = torch.FloatTensor()\n\n if isinstance(out, torch.FloatTensor):\n func = th_sox.libthsox_Float_read_audio_file\n elif isinstance(out, torch.DoubleTensor):\n func = th_sox.libthsox_Double_read_audio_file\n elif isinstance(out, torch.ByteTensor):\n func = th_sox.libthsox_Byte_read_audio_file\n elif isinstance(out, torch.CharTensor):\n func = th_sox.libthsox_Char_read_audio_file\n elif isinstance(out, torch.ShortTensor):\n func = th_sox.libthsox_Short_read_audio_file\n elif isinstance(out, torch.IntTensor):\n func = th_sox.libthsox_Int_read_audio_file\n elif isinstance(out, torch.LongTensor):\n func = th_sox.libthsox_Long_read_audio_file\n \n sample_rate_p = ffi.new('int*') \n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n", "path": "torchaudio/__init__.py"}], "after_files": [{"content": "import os\n\nimport torch\n\nfrom cffi import FFI\n\nffi = FFI()\nfrom ._ext import th_sox\n\n\ndef check_input(src):\n if not torch.is_tensor(src):\n raise TypeError('Expected a tensor, got %s' % type(src))\n if not src.__module__ == 'torch':\n raise TypeError('Expected a CPU based tensor, got %s' % type(src))\n\n\ndef load(filename, out=None):\n if out is not None:\n check_input(out)\n else:\n out = torch.FloatTensor()\n typename = type(out).__name__.replace('Tensor', '')\n func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))\n sample_rate_p = ffi.new('int*')\n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n\n\ndef save(filepath, src, sample_rate):\n filename, extension = os.path.splitext(filepath)\n if type(sample_rate) != int:\n raise TypeError('Sample rate should be a integer')\n\n check_input(src)\n typename = type(src).__name__.replace('Tensor', '')\n func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))\n\n func(bytes(filepath), src, extension[1:], sample_rate)\n", "path": "torchaudio/__init__.py"}]} | 654 | 579 |
gh_patches_debug_24978 | rasdani/github-patches | git_diff | chainer__chainer-310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
split_axis.backward fails on incomplete gradients
When one of the grad_outputs is None (i.e. the gradients are incomplete), split_axis fails to backpropagate the gradients that are present.
--- END ISSUE ---
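Before the file listing below, a minimal sketch of the failure mode, written against the 1.x-era API shown in this record (array shapes are arbitrary): backpropagating through only one of the split outputs leaves the other output's gradient as `None`, which is exactly the case `SplitAxis.backward` mishandles.

```python
import numpy as np
import chainer
import chainer.functions as F

x = chainer.Variable(np.arange(12, dtype=np.float32).reshape(3, 4))
y1, y2 = F.split_axis(x, 2, axis=1)      # two (3, 2) outputs

# Only y1 takes part in the "loss", so grad_outputs arrives as (ones, None)
y1.grad = np.ones_like(y1.data)
y1.backward()                            # fails inside SplitAxis.backward before the fix
print(x.grad)
```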
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/split_axis.py`
Content:
```
1 import collections
2
3 import numpy
4
5 from chainer import cuda
6 from chainer import function
7 from chainer.utils import type_check
8
9
10 _args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'
11 _preamble = '''
12 #define COPY(statement) \
13 int l = i / (rdim * cdimy); \
14 int c = i / rdim % cdimy + coffset; \
15 int r = i % rdim; \
16 int idx = r + rdim * (c + cdimx * l); \
17 statement;
18 '''
19
20
21 class SplitAxis(function.Function):
22
23 """Function that splits multiple arrays towards the specified axis."""
24
25 def __init__(self, indices_or_sections, axis):
26 if not isinstance(indices_or_sections, (int, collections.Iterable)):
27 raise TypeError('indices_or_sections must be integer or 1-D array')
28 self.indices_or_sections = indices_or_sections
29 self.axis = axis
30
31 def check_type_forward(self, in_types):
32 type_check.expect(in_types.size() == 1)
33 type_check.expect(in_types[0].ndim >= self.axis)
34
35 if isinstance(self.indices_or_sections, collections.Iterable):
36 max_index = type_check.Variable(
37 self.indices_or_sections[-1], 'max_index')
38 type_check.expect(in_types[0].shape[self.axis] > max_index)
39 else:
40 sections = type_check.Variable(
41 self.indices_or_sections, 'sections')
42 type_check.expect(in_types[0].shape[self.axis] % sections == 0)
43
44 def forward_cpu(self, x):
45 if isinstance(self.indices_or_sections, collections.Iterable):
46 cdimx = x[0].shape[self.axis]
47 ind = list(self.indices_or_sections)
48 ind.append(cdimx)
49 prev_i = 0
50 for i in ind:
51 cdimy = max(0, min(i, cdimx) - prev_i)
52 if cdimy == 0:
53 raise ValueError('Not support if shape contains 0')
54 prev_i = i
55 return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))
56
57 def forward_gpu(self, x):
58 xshape = x[0].shape
59 self.cdimx = xshape[self.axis]
60 self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)
61
62 if isinstance(self.indices_or_sections, collections.Iterable):
63 ind = list(self.indices_or_sections)
64 ind.append(self.cdimx)
65 else:
66 sec = self.indices_or_sections
67 if self.cdimx % sec:
68 raise ValueError(
69 'array split does not result in an equal division')
70 ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)
71 ys = []
72 kernel = cuda.elementwise(
73 _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)
74 prev_i = 0
75 for i in ind:
76 cdimy = max(0, min(i, self.cdimx) - prev_i)
77 s = list(xshape)
78 s[self.axis] = cdimy
79 y = cuda.empty(tuple(s), dtype=x[0].dtype)
80 if cdimy == 0:
81 raise ValueError('Not support if shape contains 0')
82 kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)
83 prev_i = i
84 ys.append(y)
85 return tuple(ys)
86
87 def backward_cpu(self, x, gys):
88 return numpy.concatenate(gys, axis=self.axis),
89
90 def backward_gpu(self, x, gys):
91 gx = cuda.empty_like(x[0])
92 coffset = 0
93 kernel = cuda.elementwise(
94 _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)
95 for gy in gys:
96 cdimy = gy.shape[self.axis]
97 if cdimy != 0:
98 kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)
99 coffset += cdimy
100 return gx,
101
102
103 def split_axis(x, indices_or_sections, axis):
104 """Splits given variables along an axis.
105
106 Args:
107 x (tuple of Variables): Variables to be split.
108 indices_or_sections (int or 1-D array): If this argument is an integer,
109 N, the array will be divided into N equal arrays along axis.
110 If it is a 1-D array of sorted integers, it
111 indicates the positions where the array is split.
112 axis (int): Axis that the input array is split along.
113
114 Returns:
115 ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects
116 if the number of outputs is more than 1 or
117 :class:`~chainer.Variable` otherwise.
118
119 .. note::
120 This function raises ``ValueError`` if at least
121 one of the outputs is splitted to zero-size
122 (i.e. `axis`-th value of its shape is zero).
123
124 """
125 return SplitAxis(indices_or_sections, axis)(x)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py
--- a/chainer/functions/split_axis.py
+++ b/chainer/functions/split_axis.py
@@ -1,6 +1,7 @@
import collections
import numpy
+import six
from chainer import cuda
from chainer import function
@@ -85,14 +86,25 @@
return tuple(ys)
def backward_cpu(self, x, gys):
- return numpy.concatenate(gys, axis=self.axis),
+ if any(gy is None for gy in gys):
+ gx = numpy.zeros_like(x[0])
+ gxs = numpy.split(gx, self.indices_or_sections, self.axis)
+ for gxi, gy in six.moves.zip(gxs, gys):
+ if gy is None:
+ continue
+ gxi[:] = gy
+ return gx,
+ else:
+ return numpy.concatenate(gys, axis=self.axis),
def backward_gpu(self, x, gys):
- gx = cuda.empty_like(x[0])
+ gx = cuda.zeros_like(x[0])
coffset = 0
kernel = cuda.elementwise(
_args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)
for gy in gys:
+ if gy is None:
+ continue
cdimy = gy.shape[self.axis]
if cdimy != 0:
kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)
| {"golden_diff": "diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py\n--- a/chainer/functions/split_axis.py\n+++ b/chainer/functions/split_axis.py\n@@ -1,6 +1,7 @@\n import collections\n \n import numpy\n+import six\n \n from chainer import cuda\n from chainer import function\n@@ -85,14 +86,25 @@\n return tuple(ys)\n \n def backward_cpu(self, x, gys):\n- return numpy.concatenate(gys, axis=self.axis),\n+ if any(gy is None for gy in gys):\n+ gx = numpy.zeros_like(x[0])\n+ gxs = numpy.split(gx, self.indices_or_sections, self.axis)\n+ for gxi, gy in six.moves.zip(gxs, gys):\n+ if gy is None:\n+ continue\n+ gxi[:] = gy\n+ return gx,\n+ else:\n+ return numpy.concatenate(gys, axis=self.axis),\n \n def backward_gpu(self, x, gys):\n- gx = cuda.empty_like(x[0])\n+ gx = cuda.zeros_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n+ if gy is None:\n+ continue\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n", "issue": "split_axis.backward fails on incomplete gradients\nWhen there is a None in the grad_outputs, split_axis fails to backprop the incomplete gradients.\n\n", "before_files": [{"content": "import collections\n\nimport numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\n_args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimy); \\\n int c = i / rdim % cdimy + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimx * l); \\\n statement;\n'''\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim >= self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward_cpu(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))\n\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n ind.append(self.cdimx)\n else:\n sec = self.indices_or_sections\n if self.cdimx % sec:\n raise ValueError(\n 'array split does not result in an equal division')\n ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)\n ys = []\n kernel = cuda.elementwise(\n _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)\n 
prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, self.cdimx) - prev_i)\n s = list(xshape)\n s[self.axis] = cdimy\n y = cuda.empty(tuple(s), dtype=x[0].dtype)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)\n prev_i = i\n ys.append(y)\n return tuple(ys)\n\n def backward_cpu(self, x, gys):\n return numpy.concatenate(gys, axis=self.axis),\n\n def backward_gpu(self, x, gys):\n gx = cuda.empty_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n coffset += cdimy\n return gx,\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises ``ValueError`` if at least\n one of the outputs is splitted to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/split_axis.py"}], "after_files": [{"content": "import collections\n\nimport numpy\nimport six\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\n_args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimy); \\\n int c = i / rdim % cdimy + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimx * l); \\\n statement;\n'''\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim >= self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward_cpu(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))\n\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)\n\n if 
isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n ind.append(self.cdimx)\n else:\n sec = self.indices_or_sections\n if self.cdimx % sec:\n raise ValueError(\n 'array split does not result in an equal division')\n ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)\n ys = []\n kernel = cuda.elementwise(\n _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, self.cdimx) - prev_i)\n s = list(xshape)\n s[self.axis] = cdimy\n y = cuda.empty(tuple(s), dtype=x[0].dtype)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)\n prev_i = i\n ys.append(y)\n return tuple(ys)\n\n def backward_cpu(self, x, gys):\n if any(gy is None for gy in gys):\n gx = numpy.zeros_like(x[0])\n gxs = numpy.split(gx, self.indices_or_sections, self.axis)\n for gxi, gy in six.moves.zip(gxs, gys):\n if gy is None:\n continue\n gxi[:] = gy\n return gx,\n else:\n return numpy.concatenate(gys, axis=self.axis),\n\n def backward_gpu(self, x, gys):\n gx = cuda.zeros_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n if gy is None:\n continue\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n coffset += cdimy\n return gx,\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises ``ValueError`` if at least\n one of the outputs is splitted to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/split_axis.py"}]} | 1,698 | 348 |
gh_patches_debug_61068 | rasdani/github-patches | git_diff | Mailu__Mailu-719 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Alternatives useless after podop
After updating to master to get all the up-to-date fixes, postfix now uses podop, and it seems to no longer support receiving external mail from alternative domains 😢

Sending internal mail between alternatives works as expected, but external mail does not: a "relay denied" message is shown in the logs, and looking at the postfix podop views it appears that alternatives are never consulted.
--- END ISSUE ---
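Before the view code below, a quick way to illustrate the behaviour. The route comes from the `postfix.py` view included in this record; the host name, URL prefix, and domain names are assumptions made for the example.

```python
# Hypothetical check run from inside the Docker network (names/domains invented).
# Before the fix, only primary domains resolve, so mail for an alternative domain
# is treated as relaying and rejected with "relay denied".
import requests

ADMIN = "http://admin/internal"   # assumed internal address + prefix of the admin API

print(requests.get(f"{ADMIN}/postfix/domain/example.com").status_code)      # 200: primary domain
print(requests.get(f"{ADMIN}/postfix/domain/alt.example.org").status_code)  # 404: alternative domain
```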
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/views/postfix.py`
Content:
```
1 from mailu import db, models
2 from mailu.internal import internal
3
4 import flask
5
6
7 @internal.route("/postfix/domain/<domain_name>")
8 def postfix_mailbox_domain(domain_name):
9 domain = models.Domain.query.get(domain_name) or flask.abort(404)
10 return flask.jsonify(domain.name)
11
12
13 @internal.route("/postfix/mailbox/<email>")
14 def postfix_mailbox_map(email):
15 user = models.User.query.get(email) or flask.abort(404)
16 return flask.jsonify(user.email)
17
18
19 @internal.route("/postfix/alias/<alias>")
20 def postfix_alias_map(alias):
21 localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)
22 alternative = models.Alternative.query.get(domain)
23 if alternative:
24 domain = alternative.domain_name
25 email = '{}@{}'.format(localpart, domain)
26 if localpart is None:
27 return flask.jsonify(domain)
28 else:
29 alias_obj = models.Alias.resolve(localpart, domain)
30 if alias_obj:
31 return flask.jsonify(",".join(alias_obj.destination))
32 user_obj = models.User.query.get(email)
33 if user_obj:
34 return flask.jsonify(user_obj.destination)
35 return flask.abort(404)
36
37
38 @internal.route("/postfix/transport/<email>")
39 def postfix_transport(email):
40 localpart, domain = email.split('@', 1) if '@' in email else (None, email)
41 relay = models.Relay.query.get(domain) or flask.abort(404)
42 return flask.jsonify("smtp:[{}]".format(relay.smtp))
43
44
45 @internal.route("/postfix/sender/<sender>")
46 def postfix_sender(sender):
47 """ Simply reject any sender that pretends to be from a local domain
48 """
49 localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)
50 domain = models.Domain.query.get(domain_name)
51 alternative = models.Alternative.query.get(domain_name)
52 if domain or alternative:
53 return flask.jsonify("REJECT")
54 return flask.abort(404)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py
--- a/core/admin/mailu/internal/views/postfix.py
+++ b/core/admin/mailu/internal/views/postfix.py
@@ -6,7 +6,9 @@
@internal.route("/postfix/domain/<domain_name>")
def postfix_mailbox_domain(domain_name):
- domain = models.Domain.query.get(domain_name) or flask.abort(404)
+ domain = models.Domain.query.get(domain_name) or \
+ models.Alternative.query.get(domain_name) or \
+ flask.abort(404)
return flask.jsonify(domain.name)
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py\n--- a/core/admin/mailu/internal/views/postfix.py\n+++ b/core/admin/mailu/internal/views/postfix.py\n@@ -6,7 +6,9 @@\n \n @internal.route(\"/postfix/domain/<domain_name>\")\n def postfix_mailbox_domain(domain_name):\n- domain = models.Domain.query.get(domain_name) or flask.abort(404)\n+ domain = models.Domain.query.get(domain_name) or \\\n+ models.Alternative.query.get(domain_name) or \\\n+ flask.abort(404)\n return flask.jsonify(domain.name)\n", "issue": "Alternatives useless after podop\nAfter updating to master to get all the up-to-date fixes it also moves postfix to use podop and it seems to no longer support receiving external mail from alternative domains \ud83d\ude22 \r\n\r\nSending internal mail between alternatives works as expected but not with external mail, a \"relay denied\" message is shown in the logs and when checking the postfix podop views it looks like alternative is never mentioned.\n", "before_files": [{"content": "from mailu import db, models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<alias>\")\ndef postfix_alias_map(alias):\n localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)\n alternative = models.Alternative.query.get(domain)\n if alternative:\n domain = alternative.domain_name\n email = '{}@{}'.format(localpart, domain)\n if localpart is None:\n return flask.jsonify(domain)\n else:\n alias_obj = models.Alias.resolve(localpart, domain)\n if alias_obj:\n return flask.jsonify(\",\".join(alias_obj.destination))\n user_obj = models.User.query.get(email)\n if user_obj:\n return flask.jsonify(user_obj.destination)\n return flask.abort(404)\n\n\[email protected](\"/postfix/transport/<email>\")\ndef postfix_transport(email):\n localpart, domain = email.split('@', 1) if '@' in email else (None, email)\n relay = models.Relay.query.get(domain) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/<sender>\")\ndef postfix_sender(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)\n domain = models.Domain.query.get(domain_name)\n alternative = models.Alternative.query.get(domain_name)\n if domain or alternative:\n return flask.jsonify(\"REJECT\")\n return flask.abort(404)\n", "path": "core/admin/mailu/internal/views/postfix.py"}], "after_files": [{"content": "from mailu import db, models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or \\\n models.Alternative.query.get(domain_name) or \\\n flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<alias>\")\ndef 
postfix_alias_map(alias):\n localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)\n alternative = models.Alternative.query.get(domain)\n if alternative:\n domain = alternative.domain_name\n email = '{}@{}'.format(localpart, domain)\n if localpart is None:\n return flask.jsonify(domain)\n else:\n alias_obj = models.Alias.resolve(localpart, domain)\n if alias_obj:\n return flask.jsonify(\",\".join(alias_obj.destination))\n user_obj = models.User.query.get(email)\n if user_obj:\n return flask.jsonify(user_obj.destination)\n return flask.abort(404)\n\n\[email protected](\"/postfix/transport/<email>\")\ndef postfix_transport(email):\n localpart, domain = email.split('@', 1) if '@' in email else (None, email)\n relay = models.Relay.query.get(domain) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/<sender>\")\ndef postfix_sender(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)\n domain = models.Domain.query.get(domain_name)\n alternative = models.Alternative.query.get(domain_name)\n if domain or alternative:\n return flask.jsonify(\"REJECT\")\n return flask.abort(404)\n", "path": "core/admin/mailu/internal/views/postfix.py"}]} | 898 | 140 |
gh_patches_debug_34169 | rasdani/github-patches | git_diff | conan-io__conan-center-index-253 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] catch2/2.9.2: Expected CMake scripts to be included in the package
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **catch2/2.9.2**
I expected to have access to cmake scripts that are installed with Catch2.
The helper scripts are set to be installed.
https://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L33
Then they are deleted during packaging.
https://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L51
Currently, I am using the older bincrafters package (catch2/2.5.0@bincrafters/stable) which still includes the CMake scripts. I would need to maintain my own conan package to use the newer version of Catch2.
--- END ISSUE ---
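For context on why the helper scripts matter downstream: once they are packaged and exposed through `cpp_info.builddirs` (which is what the accepted patch further down does), a consuming recipe can rely on CMake finding `Catch.cmake` / `ParseAndAddCatchTests.cmake` from the Conan cache. A rough consumer-side sketch follows; the recipe name and the CMake details in the comments are illustrative, not taken from this repository.

```python
from conans import ConanFile, CMake


class MyTestSuiteConan(ConanFile):
    # Hypothetical consumer recipe
    settings = "os", "compiler", "build_type", "arch"
    requires = "catch2/2.9.2"
    generators = "cmake"
    # With the cmake generator, conan_basic_setup() should put the package's
    # builddirs on CMAKE_MODULE_PATH, so the project's CMakeLists.txt can do
    #   include(Catch)
    #   catch_discover_tests(my_tests)

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()
```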
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/catch2/2.x.x/conanfile.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4
5 from conans import ConanFile, CMake, tools
6
7
8 class ConanRecipe(ConanFile):
9 name = "catch2"
10 description = "A modern, C++-native, header-only, framework for unit-tests, TDD and BDD"
11 topics = ("conan", "catch2", "header-only", "unit-test", "tdd", "bdd")
12 homepage = "https://github.com/catchorg/Catch2"
13 url = "https://github.com/conan-io/conan-center-index"
14 license = "BSL-1.0"
15
16 settings = "os", "compiler", "build_type", "arch"
17
18 generators = "cmake"
19
20 _source_subfolder = "source_subfolder"
21
22 def source(self):
23 tools.get(**self.conan_data["sources"][self.version])
24 extracted_dir = "Catch2-" + self.version
25 os.rename(extracted_dir, self._source_subfolder)
26
27 _build_subfolder = "build_subfolder"
28
29 def _configure_cmake(self):
30 cmake = CMake(self)
31 cmake.definitions["BUILD_TESTING"] = "OFF"
32 cmake.definitions["CATCH_INSTALL_DOCS"] = "OFF"
33 cmake.definitions["CATCH_INSTALL_HELPERS"] = "ON"
34 cmake.configure(
35 source_folder=self._source_subfolder,
36 build_folder=self._build_subfolder
37 )
38 return cmake
39
40 def build(self):
41 cmake = self._configure_cmake()
42 cmake.build()
43
44 def package(self):
45 self.copy(pattern="LICENSE.txt", dst="licenses",
46 src=self._source_subfolder)
47
48 cmake = self._configure_cmake()
49 cmake.install()
50
51 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
52 tools.rmdir(os.path.join(self.package_folder, "share"))
53
54 def package_id(self):
55 self.info.header_only()
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/catch2/2.x.x/conanfile.py b/recipes/catch2/2.x.x/conanfile.py
--- a/recipes/catch2/2.x.x/conanfile.py
+++ b/recipes/catch2/2.x.x/conanfile.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
import os
from conans import ConanFile, CMake, tools
@@ -12,20 +10,16 @@
homepage = "https://github.com/catchorg/Catch2"
url = "https://github.com/conan-io/conan-center-index"
license = "BSL-1.0"
-
settings = "os", "compiler", "build_type", "arch"
-
generators = "cmake"
-
_source_subfolder = "source_subfolder"
+ _build_subfolder = "build_subfolder"
def source(self):
tools.get(**self.conan_data["sources"][self.version])
extracted_dir = "Catch2-" + self.version
os.rename(extracted_dir, self._source_subfolder)
- _build_subfolder = "build_subfolder"
-
def _configure_cmake(self):
cmake = CMake(self)
cmake.definitions["BUILD_TESTING"] = "OFF"
@@ -42,14 +36,18 @@
cmake.build()
def package(self):
- self.copy(pattern="LICENSE.txt", dst="licenses",
- src=self._source_subfolder)
-
+ self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
cmake = self._configure_cmake()
cmake.install()
-
tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
tools.rmdir(os.path.join(self.package_folder, "share"))
+ for cmake_file in ["ParseAndAddCatchTests.cmake", "Catch.cmake"]:
+ self.copy(cmake_file,
+ src=os.path.join(self._source_subfolder, "contrib"),
+ dst=os.path.join("lib", "cmake", "Catch2"))
def package_id(self):
self.info.header_only()
+
+ def package_info(self):
+ self.cpp_info.builddirs = [os.path.join("lib", "cmake", "Catch2")]
| {"golden_diff": "diff --git a/recipes/catch2/2.x.x/conanfile.py b/recipes/catch2/2.x.x/conanfile.py\n--- a/recipes/catch2/2.x.x/conanfile.py\n+++ b/recipes/catch2/2.x.x/conanfile.py\n@@ -1,5 +1,3 @@\n-#!/usr/bin/env python\n-\n import os\n \n from conans import ConanFile, CMake, tools\n@@ -12,20 +10,16 @@\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n-\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n-\n generators = \"cmake\"\n-\n _source_subfolder = \"source_subfolder\"\n+ _build_subfolder = \"build_subfolder\"\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n \n- _build_subfolder = \"build_subfolder\"\n-\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n@@ -42,14 +36,18 @@\n cmake.build()\n \n def package(self):\n- self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\",\n- src=self._source_subfolder)\n-\n+ self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n-\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n+ for cmake_file in [\"ParseAndAddCatchTests.cmake\", \"Catch.cmake\"]:\n+ self.copy(cmake_file,\n+ src=os.path.join(self._source_subfolder, \"contrib\"),\n+ dst=os.path.join(\"lib\", \"cmake\", \"Catch2\"))\n \n def package_id(self):\n self.info.header_only()\n+\n+ def package_info(self):\n+ self.cpp_info.builddirs = [os.path.join(\"lib\", \"cmake\", \"Catch2\")]\n", "issue": "[package] catch2/2.9.2: Expected CMake scripts to be included in the package \n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **catch2/2.9.2**\r\n\r\nI expected to have access to cmake scripts that are installed with Catch2.\r\n\r\nThe helper scripts are set to be installed.\r\n\r\nhttps://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L33\r\n\r\nThen they are deleted during packaging.\r\n\r\nhttps://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L51\r\n\r\nCurrently, I am using the older bincrafters package (catch2/2.5.0@bincrafters/stable) which still includes the CMake scripts. 
I would need to maintain my own conan package to use the newer version of Catch2.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\n\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanRecipe(ConanFile):\n name = \"catch2\"\n description = \"A modern, C++-native, header-only, framework for unit-tests, TDD and BDD\"\n topics = (\"conan\", \"catch2\", \"header-only\", \"unit-test\", \"tdd\", \"bdd\")\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n\n generators = \"cmake\"\n\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n _build_subfolder = \"build_subfolder\"\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_DOCS\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_HELPERS\"] = \"ON\"\n cmake.configure(\n source_folder=self._source_subfolder,\n build_folder=self._build_subfolder\n )\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\",\n src=self._source_subfolder)\n\n cmake = self._configure_cmake()\n cmake.install()\n\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n\n def package_id(self):\n self.info.header_only()\n", "path": "recipes/catch2/2.x.x/conanfile.py"}], "after_files": [{"content": "import os\n\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanRecipe(ConanFile):\n name = \"catch2\"\n description = \"A modern, C++-native, header-only, framework for unit-tests, TDD and BDD\"\n topics = (\"conan\", \"catch2\", \"header-only\", \"unit-test\", \"tdd\", \"bdd\")\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n generators = \"cmake\"\n _source_subfolder = \"source_subfolder\"\n _build_subfolder = \"build_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_DOCS\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_HELPERS\"] = \"ON\"\n cmake.configure(\n source_folder=self._source_subfolder,\n build_folder=self._build_subfolder\n )\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n for cmake_file in [\"ParseAndAddCatchTests.cmake\", \"Catch.cmake\"]:\n self.copy(cmake_file,\n src=os.path.join(self._source_subfolder, \"contrib\"),\n dst=os.path.join(\"lib\", \"cmake\", \"Catch2\"))\n\n def package_id(self):\n self.info.header_only()\n\n def package_info(self):\n self.cpp_info.builddirs = [os.path.join(\"lib\", \"cmake\", 
\"Catch2\")]\n", "path": "recipes/catch2/2.x.x/conanfile.py"}]} | 1,065 | 508 |
gh_patches_debug_5 | rasdani/github-patches | git_diff | freedomofpress__securedrop-1117 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update kernel module blacklist
During an installation last week, we encountered an issue with the kernel module blacklist. The install was using the new generation of Intel NUCs ([NUC5i5RYK](http://www.amazon.com/dp/B00SD9ISIQ) and [NUC5i5RYH](http://www.amazon.com/dp/B00SD9IS1S/)). Unlike the previous generation of NUCs, which did not include wireless networking hardware by default, the new generation includes wireless networking hardware for Wifi and Bluetooth on the motherboard.
This means that Ubuntu running on the servers not only loaded the high-level kernel modules for wifi and bluetooth support (`iwlwifi` and `bluetooth`), it also loaded modules necessary for support on the specific (included) hardware: `iwlmvm` and `btusb`. When the `remove kernel modules` Ansible role ran, it failed with an error because it could not remove the top-level modules without removing their dependencies first.
A quickfix to get this working on the new hardware was to change `disabled_kernel_modules` in `group_vars/securedrop.yml` from:
``` yml
disabled_kernel_modules:
- bluetooth
- iwlwifi
```
to:
``` yml
disabled_kernel_modules:
- btusb
- bluetooth
- iwlmvm
- iwlwifi
```
The order of the modules is important! We need to make sure the dependencies are removed prior to the target modules that depend on them.
This list is also likely specific to the new generation of Intel NUCs. If we want to support a wider variety of hardware, we may want to try being smart about removing kernel modules and their dependencies, e.g. something akin to this technique from [Stack Exchange](https://askubuntu.com/questions/317230/how-can-i-temporarily-disable-a-kernel-module).
Finally, we need to make sure this updated module blacklist still works on the old hardware as well.
--- END ISSUE ---
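On the "being smart about removing kernel modules and their dependencies" point above, here is a rough sketch of how the removal order could be derived automatically from `/proc/modules` instead of being hard-coded per hardware generation. The function name and the choice to parse `/proc/modules` are illustrative, not something currently in the playbook.

```python
def module_removal_order(targets):
    """Return the given modules preceded by everything that depends on them."""
    # /proc/modules lines look like: name size refcount used_by state offset
    dependents = {}
    with open("/proc/modules") as f:
        for line in f:
            fields = line.split()
            used_by = [] if fields[3] == "-" else [m for m in fields[3].rstrip(",").split(",") if m]
            dependents[fields[0]] = used_by

    ordered = []

    def visit(module):
        for child in dependents.get(module, []):
            visit(child)
        if module not in ordered:
            ordered.append(module)

    for target in targets:
        visit(target)
    return ordered


# On the newer NUCs this should yield something like
# ['btusb', 'bluetooth', 'iwlmvm', 'iwlwifi'] (possibly with extra Bluetooth helpers).
print(module_removal_order(["bluetooth", "iwlwifi"]))
```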
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/version.py`
Content:
```
1 __version__ = '0.3.4'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = '0.3.4'
+__version__ = '0.3.5'
| {"golden_diff": "diff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = '0.3.4'\n+__version__ = '0.3.5'\n", "issue": "Update kernel module blacklist\nDuring an installation last week, we encountered an issue with the kernel module blacklist. The install was using the new generation of Intel NUCs ([NUC5i5RYK](http://www.amazon.com/dp/B00SD9ISIQ) and [NUC5i5RYH](http://www.amazon.com/dp/B00SD9IS1S/)). Unlike the previous generation of NUCs, which did not include wireless networking hardware by default, the new generation includes wireless networking hardware for Wifi and Bluetooth on the motherboard.\n\nThis means that Ubuntu running on the servers not only loaded the high-level kernel modules for wifi and bluetooth support (`iwlwifi` and `bluetooth`), it also loaded modules necessary for support on the specific (included) hardware: `iwlmvm` and `btusb`. When the `remove kernel modules` Ansible role ran, it failed with an error because it could not remove the top-level modules without removing their dependencies first.\n\nA quickfix to get this working on the new hardware was to change `disabled_kernel_modules` in `group_vars/securedrop.yml` from:\n\n``` yml\ndisabled_kernel_modules:\n - bluetooth\n - iwlwifi\n```\n\nto:\n\n``` yml\ndisabled_kernel_modules:\n - btusb\n - bluetooth\n - iwlmvm\n - iwlwifi\n```\n\nThe order of the modules is important! We need to make sure the the dependencies are removed prior to the target modules that depend on them.\n\nThis list is also likely specific to the new generation of Intel NUCs. If we want to support a wider variety of hardware, we may want to try being smart about removing kernel modules and their dependencies, e.g. something akin to this technique from [Stack Exchange](https://askubuntu.com/questions/317230/how-can-i-temporarily-disable-a-kernel-module).\n\nFinally, we need to make sure this updated module blacklist still works on the old hardware as well.\n\n", "before_files": [{"content": "__version__ = '0.3.4'\n", "path": "securedrop/version.py"}], "after_files": [{"content": "__version__ = '0.3.5'\n", "path": "securedrop/version.py"}]} | 688 | 62 |
gh_patches_debug_6718 | rasdani/github-patches | git_diff | getmoto__moto-556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix S3 issues with botocore 1.3.29
botocore 1.3.29 breaks S3 in tests.
--- END ISSUE ---
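For context, a sketch of the kind of test that started failing once botocore 1.3.29 changed its internal HTTP connection classes (the patch below works around exactly that). The bucket name and region are arbitrary; this is illustrative, not a test from the moto suite.

```python
import boto3
from moto import mock_s3


@mock_s3
def test_create_bucket():
    client = boto3.client("s3", region_name="us-east-1")
    client.create_bucket(Bucket="example-bucket")
    names = [b["Name"] for b in client.list_buckets()["Buckets"]]
    assert "example-bucket" in names


test_create_bucket()
```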
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `moto/__init__.py`
Content:
```
1 from __future__ import unicode_literals
2 import logging
3 logging.getLogger('boto').setLevel(logging.CRITICAL)
4
5 __title__ = 'moto'
6 __version__ = '0.4.22'
7
8 from .autoscaling import mock_autoscaling # flake8: noqa
9 from .awslambda import mock_lambda # flake8: noqa
10 from .cloudformation import mock_cloudformation # flake8: noqa
11 from .cloudwatch import mock_cloudwatch # flake8: noqa
12 from .datapipeline import mock_datapipeline # flake8: noqa
13 from .dynamodb import mock_dynamodb # flake8: noqa
14 from .dynamodb2 import mock_dynamodb2 # flake8: noqa
15 from .ec2 import mock_ec2 # flake8: noqa
16 from .ecs import mock_ecs # flake8: noqa
17 from .elb import mock_elb # flake8: noqa
18 from .emr import mock_emr # flake8: noqa
19 from .glacier import mock_glacier # flake8: noqa
20 from .iam import mock_iam # flake8: noqa
21 from .kinesis import mock_kinesis # flake8: noqa
22 from .kms import mock_kms # flake8: noqa
23 from .rds import mock_rds # flake8: noqa
24 from .rds2 import mock_rds2 # flake8: noqa
25 from .redshift import mock_redshift # flake8: noqa
26 from .s3 import mock_s3 # flake8: noqa
27 from .s3bucket_path import mock_s3bucket_path # flake8: noqa
28 from .ses import mock_ses # flake8: noqa
29 from .sns import mock_sns # flake8: noqa
30 from .sqs import mock_sqs # flake8: noqa
31 from .sts import mock_sts # flake8: noqa
32 from .route53 import mock_route53 # flake8: noqa
33 from .swf import mock_swf # flake8: noqa
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/moto/__init__.py b/moto/__init__.py
--- a/moto/__init__.py
+++ b/moto/__init__.py
@@ -31,3 +31,13 @@
from .sts import mock_sts # flake8: noqa
from .route53 import mock_route53 # flake8: noqa
from .swf import mock_swf # flake8: noqa
+
+
+try:
+ # Need to monkey-patch botocore requests back to underlying urllib3 classes
+ from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection
+except ImportError:
+ pass
+else:
+ HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection
+ HTTPConnectionPool.ConnectionCls = HTTPConnection
| {"golden_diff": "diff --git a/moto/__init__.py b/moto/__init__.py\n--- a/moto/__init__.py\n+++ b/moto/__init__.py\n@@ -31,3 +31,13 @@\n from .sts import mock_sts # flake8: noqa\n from .route53 import mock_route53 # flake8: noqa\n from .swf import mock_swf # flake8: noqa\n+\n+\n+try:\n+ # Need to monkey-patch botocore requests back to underlying urllib3 classes\n+ from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection\n+except ImportError:\n+ pass\n+else:\n+ HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection\n+ HTTPConnectionPool.ConnectionCls = HTTPConnection\n", "issue": "Fix S3 issues with botocore 1.3.29\nbotocore 1.3.29 breaks s3 in tests\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nimport logging\nlogging.getLogger('boto').setLevel(logging.CRITICAL)\n\n__title__ = 'moto'\n__version__ = '0.4.22'\n\nfrom .autoscaling import mock_autoscaling # flake8: noqa\nfrom .awslambda import mock_lambda # flake8: noqa\nfrom .cloudformation import mock_cloudformation # flake8: noqa\nfrom .cloudwatch import mock_cloudwatch # flake8: noqa\nfrom .datapipeline import mock_datapipeline # flake8: noqa\nfrom .dynamodb import mock_dynamodb # flake8: noqa\nfrom .dynamodb2 import mock_dynamodb2 # flake8: noqa\nfrom .ec2 import mock_ec2 # flake8: noqa\nfrom .ecs import mock_ecs # flake8: noqa\nfrom .elb import mock_elb # flake8: noqa\nfrom .emr import mock_emr # flake8: noqa\nfrom .glacier import mock_glacier # flake8: noqa\nfrom .iam import mock_iam # flake8: noqa\nfrom .kinesis import mock_kinesis # flake8: noqa\nfrom .kms import mock_kms # flake8: noqa\nfrom .rds import mock_rds # flake8: noqa\nfrom .rds2 import mock_rds2 # flake8: noqa\nfrom .redshift import mock_redshift # flake8: noqa\nfrom .s3 import mock_s3 # flake8: noqa\nfrom .s3bucket_path import mock_s3bucket_path # flake8: noqa\nfrom .ses import mock_ses # flake8: noqa\nfrom .sns import mock_sns # flake8: noqa\nfrom .sqs import mock_sqs # flake8: noqa\nfrom .sts import mock_sts # flake8: noqa\nfrom .route53 import mock_route53 # flake8: noqa\nfrom .swf import mock_swf # flake8: noqa\n", "path": "moto/__init__.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nimport logging\nlogging.getLogger('boto').setLevel(logging.CRITICAL)\n\n__title__ = 'moto'\n__version__ = '0.4.22'\n\nfrom .autoscaling import mock_autoscaling # flake8: noqa\nfrom .awslambda import mock_lambda # flake8: noqa\nfrom .cloudformation import mock_cloudformation # flake8: noqa\nfrom .cloudwatch import mock_cloudwatch # flake8: noqa\nfrom .datapipeline import mock_datapipeline # flake8: noqa\nfrom .dynamodb import mock_dynamodb # flake8: noqa\nfrom .dynamodb2 import mock_dynamodb2 # flake8: noqa\nfrom .ec2 import mock_ec2 # flake8: noqa\nfrom .ecs import mock_ecs # flake8: noqa\nfrom .elb import mock_elb # flake8: noqa\nfrom .emr import mock_emr # flake8: noqa\nfrom .glacier import mock_glacier # flake8: noqa\nfrom .iam import mock_iam # flake8: noqa\nfrom .kinesis import mock_kinesis # flake8: noqa\nfrom .kms import mock_kms # flake8: noqa\nfrom .rds import mock_rds # flake8: noqa\nfrom .rds2 import mock_rds2 # flake8: noqa\nfrom .redshift import mock_redshift # flake8: noqa\nfrom .s3 import mock_s3 # flake8: noqa\nfrom .s3bucket_path import mock_s3bucket_path # flake8: noqa\nfrom .ses import mock_ses # flake8: noqa\nfrom .sns import mock_sns # flake8: noqa\nfrom .sqs import mock_sqs # flake8: noqa\nfrom .sts import mock_sts # flake8: noqa\nfrom .route53 
import mock_route53 # flake8: noqa\nfrom .swf import mock_swf # flake8: noqa\n\n\ntry:\n # Need to monkey-patch botocore requests back to underlying urllib3 classes\n from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection\nexcept ImportError:\n pass\nelse:\n HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection\n HTTPConnectionPool.ConnectionCls = HTTPConnection\n", "path": "moto/__init__.py"}]} | 819 | 180 |
gh_patches_debug_4681 | rasdani/github-patches | git_diff | awslabs__gluonts-1159 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multiprocessing hangs when num_workers > len(dataset)
## Description
I'm trying to serialize a predictor trained on multiple cores. When calling the `serialize` method nothing happens.
Running the same code, but without specifying `num_workers`, it works as expected.
## To Reproduce
```python
from pathlib import Path
from typing import Optional

from gluonts.dataset.multivariate_grouper import MultivariateGrouper
from gluonts.dataset.common import TrainDatasets
from gluonts.model.gpvar import GPVAREstimator
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.mx.trainer import Trainer


def load_multivariate_dataset(dataset_name: str, target_dim: Optional[int] = None):
    ds = get_dataset(dataset_name)

    if target_dim is None:
        target_dim = len(ds.train)

    grouper = MultivariateGrouper(max_target_dim=target_dim)

    meta = ds.metadata
    meta.feat_static_cat[0].cardinality = target_dim

    return (TrainDatasets(
        metadata=meta,
        train=grouper(ds.train),
        test=grouper(ds.test)
    ), target_dim)


ds, target_dim = load_multivariate_dataset("exchange_rate")
metadata = ds.metadata

estimator = GPVAREstimator(
    prediction_length=metadata.prediction_length,
    freq=metadata.freq,
    target_dim=target_dim,
    trainer=Trainer(
        epochs=2,
        num_batches_per_epoch=10,
        batch_size=8,
    ),
)

predictor = estimator.train(training_data=ds.train, num_workers=2)

predictor.serialize(Path("/tmp"))
```
## Error message or code output
Nothing happens.
## Environment
- Operating system: Mac OSX 10.15.7
- Python version: 3.6.12
- GluonTS version: 0.6.0
- MXNet version: 1.7.0post1
--- END ISSUE ---
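The symptom lines up with a data-loading generator that never produces a batch: when more workers are requested than there are entries in the dataset, at least one worker ends up with an empty shard, and cycling over an empty iterable spins forever. A minimal sketch of that failure mode, reusing the `cyclic` helper listed below (`empty_shard` is a hypothetical stand-in for such a worker's data):
```python
import itertools


def cyclic(it):
    """Like `itertools.cycle`, but does not store the data."""
    while True:
        yield from it  # an empty iterable never yields, so this loop spins forever


empty_shard = []  # hypothetical: a worker that received no time series
one_batch = itertools.islice(cyclic(empty_shard), 1)
# list(one_batch)  # consuming the slice never returns -> the observed hang
```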
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/gluonts/itertools.py`
Content:
```
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 from typing import Iterable, Iterator, List, TypeVar
15 import itertools
16 import random
17
18 T = TypeVar("T")
19
20
21 def cyclic(it):
22 """Like `itertools.cycle`, but does not store the data."""
23
24 while True:
25 yield from it
26
27
28 def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:
29 """Groups elements from `iterable` into batches of size `batch_size`.
30
31 >>> list(batcher("ABCDEFG", 3))
32 [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]
33
34 Unlike the grouper proposed in the documentation of itertools, `batcher`
35 doesn't fill up missing values.
36 """
37 it: Iterator[T] = iter(iterable)
38
39 def get_batch():
40 return list(itertools.islice(it, batch_size))
41
42 # has an empty list so that we have a 2D array for sure
43 return iter(get_batch, [])
44
45
46 class cached(Iterable):
47 """
48 An iterable wrapper, which caches values in a list the first time it is iterated.
49
50 The primary use-case for this is to avoid re-computing the element of the sequence,
51 in case the inner iterable does it on demand.
52
53 This should be used to wrap deterministic iterables, i.e. iterables where the data
54 generation process is not random, and that yield the same elements when iterated
55 multiple times.
56 """
57
58 def __init__(self, iterable: Iterable) -> None:
59 self.iterable = iterable
60 self.cache = None
61
62 def __iter__(self):
63 if self.cache is None:
64 self.cache = []
65 for element in self.iterable:
66 yield element
67 self.cache.append(element)
68 else:
69 yield from self.cache
70
71
72 def pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):
73 """
74 An iterator that yields item from a given iterator in a pseudo-shuffled order.
75 """
76 shuffle_buffer = []
77
78 for element in iterator:
79 shuffle_buffer.append(element)
80 if len(shuffle_buffer) >= shuffle_buffer_length:
81 yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))
82
83 while shuffle_buffer:
84 yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/gluonts/itertools.py b/src/gluonts/itertools.py
--- a/src/gluonts/itertools.py
+++ b/src/gluonts/itertools.py
@@ -21,8 +21,13 @@
def cyclic(it):
"""Like `itertools.cycle`, but does not store the data."""
+ at_least_one = False
while True:
- yield from it
+ for el in it:
+ at_least_one = True
+ yield el
+ if not at_least_one:
+ break
def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:
| {"golden_diff": "diff --git a/src/gluonts/itertools.py b/src/gluonts/itertools.py\n--- a/src/gluonts/itertools.py\n+++ b/src/gluonts/itertools.py\n@@ -21,8 +21,13 @@\n def cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n \n+ at_least_one = False\n while True:\n- yield from it\n+ for el in it:\n+ at_least_one = True\n+ yield el\n+ if not at_least_one:\n+ break\n \n \n def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n", "issue": "Multiprocessing hangs when num_workers > len(dataset)\n## Description\r\nI'm trying to serialize a predictor trained on multiple cores. When calling the `serialize` method nothing happens.\r\nRunning the same code, but without specifying `num_workers`, it works as expected.\r\n\r\n## To Reproduce\r\n\r\n```python\r\nfrom pathlib import Path\r\nfrom typing import Optional\r\n\r\nfrom gluonts.dataset.multivariate_grouper import MultivariateGrouper\r\nfrom gluonts.dataset.common import TrainDatasets\r\nfrom gluonts.model.gpvar import GPVAREstimator\r\nfrom gluonts.dataset.repository.datasets import get_dataset\r\nfrom gluonts.mx.trainer import Trainer\r\n\r\n\r\ndef load_multivariate_dataset(dataset_name: str, target_dim: Optional[int] = None):\r\n ds = get_dataset(dataset_name)\r\n\r\n if target_dim is None:\r\n target_dim = len(ds.train)\r\n\r\n grouper = MultivariateGrouper(max_target_dim=target_dim)\r\n\r\n meta = ds.metadata\r\n meta.feat_static_cat[0].cardinality = target_dim\r\n\r\n return (TrainDatasets(\r\n metadata=meta,\r\n train=grouper(ds.train),\r\n test=grouper(ds.test)\r\n ), target_dim)\r\n\r\n\r\nds, target_dim = load_multivariate_dataset(\"exchange_rate\")\r\nmetadata = ds.metadata\r\n\r\nestimator = GPVAREstimator(\r\n prediction_length=metadata.prediction_length,\r\n freq=metadata.freq,\r\n target_dim=target_dim,\r\n trainer=Trainer(\r\n epochs=2,\r\n num_batches_per_epoch=10,\r\n batch_size=8,\r\n ),\r\n)\r\n\r\npredictor = estimator.train(training_data=ds.train, num_workers=2)\r\n\r\npredictor.serialize(Path(\"/tmp\"))\r\n\r\n```\r\n\r\n## Error message or code output\r\nNothing happens.\r\n\r\n\r\n## Environment\r\n- Operating system: Mac OSX 10.15.7\r\n- Python version: 3.6.12\r\n- GluonTS version: 0.6.0\r\n- MXNet version: 1.7.0post1\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom typing import Iterable, Iterator, List, TypeVar\nimport itertools\nimport random\n\nT = TypeVar(\"T\")\n\n\ndef cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n\n while True:\n yield from it\n\n\ndef batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n \"\"\"Groups elements from `iterable` into batches of size `batch_size`.\n\n >>> list(batcher(\"ABCDEFG\", 3))\n [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]\n\n Unlike the grouper proposed in the documentation of itertools, `batcher`\n doesn't fill up missing values.\n \"\"\"\n it: Iterator[T] = iter(iterable)\n\n def get_batch():\n return list(itertools.islice(it, batch_size))\n\n # has an empty list so that we have a 2D array for sure\n return iter(get_batch, [])\n\n\nclass cached(Iterable):\n \"\"\"\n An iterable wrapper, which caches values in a list the first time it is iterated.\n\n The primary use-case for this is to avoid re-computing the element of the sequence,\n in case the inner iterable does it on demand.\n\n This should be used to wrap deterministic iterables, i.e. iterables where the data\n generation process is not random, and that yield the same elements when iterated\n multiple times.\n \"\"\"\n\n def __init__(self, iterable: Iterable) -> None:\n self.iterable = iterable\n self.cache = None\n\n def __iter__(self):\n if self.cache is None:\n self.cache = []\n for element in self.iterable:\n yield element\n self.cache.append(element)\n else:\n yield from self.cache\n\n\ndef pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):\n \"\"\"\n An iterator that yields item from a given iterator in a pseudo-shuffled order.\n \"\"\"\n shuffle_buffer = []\n\n for element in iterator:\n shuffle_buffer.append(element)\n if len(shuffle_buffer) >= shuffle_buffer_length:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n\n while shuffle_buffer:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n", "path": "src/gluonts/itertools.py"}], "after_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom typing import Iterable, Iterator, List, TypeVar\nimport itertools\nimport random\n\nT = TypeVar(\"T\")\n\n\ndef cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n\n at_least_one = False\n while True:\n for el in it:\n at_least_one = True\n yield el\n if not at_least_one:\n break\n\n\ndef batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n \"\"\"Groups elements from `iterable` into batches of size `batch_size`.\n\n >>> list(batcher(\"ABCDEFG\", 3))\n [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]\n\n Unlike the grouper proposed in the documentation of itertools, `batcher`\n doesn't fill up missing values.\n \"\"\"\n it: Iterator[T] = iter(iterable)\n\n def get_batch():\n return list(itertools.islice(it, batch_size))\n\n # has an empty list so that we have a 2D array for sure\n return iter(get_batch, [])\n\n\nclass cached(Iterable):\n \"\"\"\n An iterable wrapper, which caches values in a list the first time it is iterated.\n\n The primary use-case for this is to avoid re-computing the element of the sequence,\n in case the inner iterable does it on demand.\n\n This should be used to wrap deterministic iterables, i.e. iterables where the data\n generation process is not random, and that yield the same elements when iterated\n multiple times.\n \"\"\"\n\n def __init__(self, iterable: Iterable) -> None:\n self.iterable = iterable\n self.cache = None\n\n def __iter__(self):\n if self.cache is None:\n self.cache = []\n for element in self.iterable:\n yield element\n self.cache.append(element)\n else:\n yield from self.cache\n\n\ndef pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):\n \"\"\"\n An iterator that yields item from a given iterator in a pseudo-shuffled order.\n \"\"\"\n shuffle_buffer = []\n\n for element in iterator:\n shuffle_buffer.append(element)\n if len(shuffle_buffer) >= shuffle_buffer_length:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n\n while shuffle_buffer:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n", "path": "src/gluonts/itertools.py"}]} | 1,467 | 151 |
gh_patches_debug_10230 | rasdani/github-patches | git_diff | streamlink__streamlink-925 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BBC iPlayer plugin cannot find VPID
### Checklist
- [x] This is a bug report.
- [ ] This is a feature request.
- [ ] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
The BBC iPlayer plugin cannot find the VPID for valid URLs.
### Reproduction steps / Explicit stream URLs to test
The following command:
`streamlink -l debug 'http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars' best`
produces this output:
```
[cli][info] Found matching plugin bbciplayer for URL http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
[plugin.bbciplayer][debug] Loading streams for episode: b013pnv4
[plugin.bbciplayer][debug] Looking for vpid on http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
[plugin.bbciplayer][error] Could not find VPID for episode b013pnv4
error: No playable streams found on this URL: http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
```
The same goes for any other valid iPlayer URL.
### Environment details
Operating system: arch linux
Streamlink and Python versions: streamlink-0.6.0 and python-3.6.1
### Comments, logs, screenshots, etc.
AFAICS, the page downloaded from the iPlayer URL does not contain the string "vpid".
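One way to confirm this is to fetch the page and try the plugin's pattern next to a candidate replacement; the `ident_id` key below is an assumption about what the current page embeds, not something taken from the plugin itself:
```python
import re

import requests

url = "http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars"
page = requests.get(url).text

print(re.search(r'"vpid"\s*:\s*"(\w+)"', page))      # None -> matches the report
print(re.search(r'"ident_id"\s*:\s*"(\w+)"', page))  # candidate key carrying the same id
```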
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/bbciplayer.py`
Content:
```
1 from __future__ import print_function
2
3 import base64
4 import re
5 from functools import partial
6 from hashlib import sha1
7
8 from streamlink.plugin import Plugin
9 from streamlink.plugin.api import http
10 from streamlink.plugin.api import validate
11 from streamlink.stream import HDSStream
12 from streamlink.stream import HLSStream
13 from streamlink.utils import parse_xml, parse_json
14
15
16 class BBCiPlayer(Plugin):
17 url_re = re.compile(r"""https?://(?:www\.)?bbc.co.uk/iplayer/
18 (
19 episode/(?P<episode_id>\w+)|
20 live/(?P<channel_name>\w+)
21 )
22 """, re.VERBOSE)
23 vpid_re = re.compile(r'"vpid"\s*:\s*"(\w+)"')
24 tvip_re = re.compile(r'event_master_brand=(\w+?)&')
25 swf_url = "http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf"
26 hash = base64.b64decode(b"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==")
27 api_url = ("http://open.live.bbc.co.uk/mediaselector/5/select/"
28 "version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/")
29 platforms = ("pc", "iptv-all")
30
31 mediaselector_schema = validate.Schema(
32 validate.transform(partial(parse_xml, ignore_ns=True)),
33 validate.union({
34 "hds": validate.xml_findall(".//media[@kind='video']//connection[@transferFormat='hds']"),
35 "hls": validate.xml_findall(".//media[@kind='video']//connection[@transferFormat='hls']")
36 }),
37 {validate.text: validate.all(
38 [validate.all(validate.getattr("attrib"), validate.get("href"))],
39 validate.transform(lambda x: list(set(x))) # unique
40 )}
41 )
42
43 @classmethod
44 def can_handle_url(cls, url):
45 return cls.url_re.match(url) is not None
46
47 @classmethod
48 def _hash_vpid(cls, vpid):
49 return sha1(cls.hash + str(vpid).encode("utf8")).hexdigest()
50
51 def find_vpid(self, url):
52 self.logger.debug("Looking for vpid on {0}", url)
53 res = http.get(url)
54 m = self.vpid_re.search(res.text)
55 return m and m.group(1)
56
57 def find_tvip(self, url):
58 self.logger.debug("Looking for tvip on {0}", url)
59 res = http.get(url)
60 m = self.tvip_re.search(res.text)
61 return m and m.group(1)
62
63 def mediaselector(self, vpid):
64 for platform in self.platforms:
65 url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)
66 stream_urls = http.get(url, schema=self.mediaselector_schema)
67 for surl in stream_urls.get("hls"):
68 for s in HLSStream.parse_variant_playlist(self.session, surl).items():
69 yield s
70 for surl in stream_urls.get("hds"):
71 for s in HDSStream.parse_manifest(self.session, surl).items():
72 yield s
73
74 def _get_streams(self):
75 m = self.url_re.match(self.url)
76 episode_id = m.group("episode_id")
77 channel_name = m.group("channel_name")
78
79 if episode_id:
80 self.logger.debug("Loading streams for episode: {0}", episode_id)
81 vpid = self.find_vpid(self.url)
82 if vpid:
83 self.logger.debug("Found VPID: {0}", vpid)
84 for s in self.mediaselector(vpid):
85 yield s
86 else:
87 self.logger.error("Could not find VPID for episode {0}", episode_id)
88 elif channel_name:
89 self.logger.debug("Loading stream for live channel: {0}", channel_name)
90 tvip = self.find_tvip(self.url)
91 if tvip:
92 self.logger.debug("Found TVIP: {0}", tvip)
93 for s in self.mediaselector(tvip):
94 yield s
95
96
97 __plugin__ = BBCiPlayer
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/bbciplayer.py b/src/streamlink/plugins/bbciplayer.py
--- a/src/streamlink/plugins/bbciplayer.py
+++ b/src/streamlink/plugins/bbciplayer.py
@@ -20,7 +20,7 @@
live/(?P<channel_name>\w+)
)
""", re.VERBOSE)
- vpid_re = re.compile(r'"vpid"\s*:\s*"(\w+)"')
+ vpid_re = re.compile(r'"ident_id"\s*:\s*"(\w+)"')
tvip_re = re.compile(r'event_master_brand=(\w+?)&')
swf_url = "http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf"
hash = base64.b64decode(b"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==")
| {"golden_diff": "diff --git a/src/streamlink/plugins/bbciplayer.py b/src/streamlink/plugins/bbciplayer.py\n--- a/src/streamlink/plugins/bbciplayer.py\n+++ b/src/streamlink/plugins/bbciplayer.py\n@@ -20,7 +20,7 @@\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n- vpid_re = re.compile(r'\"vpid\"\\s*:\\s*\"(\\w+)\"')\n+ vpid_re = re.compile(r'\"ident_id\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n", "issue": "BBC iPlayer plugin cannot find VPID\n### Checklist\r\n\r\n- [x] This is a bug report.\r\n- [ ] This is a feature request.\r\n- [ ] This is a plugin (improvement) request.\r\n- [ ] I have read the contribution guidelines.\r\n\r\n### Description\r\n\r\nThe BBC IPlayer plugin cannot find the VPID for valid urls.\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\nThe following command:\r\n\r\n`streamlink -l debug 'http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars' best`\r\n\r\nproduces this output:\r\n\r\n```\r\n[cli][info] Found matching plugin bbciplayer for URL http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n[plugin.bbciplayer][debug] Loading streams for episode: b013pnv4\r\n[plugin.bbciplayer][debug] Looking for vpid on http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n[plugin.bbciplayer][error] Could not find VPID for episode b013pnv4\r\nerror: No playable streams found on this URL: http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n\r\n```\r\n\r\nand the same goes for any other valid iplayer url.\r\n\r\n### Environment details\r\n\r\nOperating system: arch linux\r\nStreamlink and Python versions: streamlink-0.6.0 and python-3.6.1\r\n\r\n### Comments, logs, screenshots, etc.\r\n\r\nAFAICS, the page downloaded from the iplayer url does not contain the string \"vpid\".\r\n\n", "before_files": [{"content": "from __future__ import print_function\n\nimport base64\nimport re\nfrom functools import partial\nfrom hashlib import sha1\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HDSStream\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_xml, parse_json\n\n\nclass BBCiPlayer(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:www\\.)?bbc.co.uk/iplayer/\n (\n episode/(?P<episode_id>\\w+)|\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n vpid_re = re.compile(r'\"vpid\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n api_url = (\"http://open.live.bbc.co.uk/mediaselector/5/select/\"\n \"version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/\")\n platforms = (\"pc\", \"iptv-all\")\n\n mediaselector_schema = validate.Schema(\n validate.transform(partial(parse_xml, ignore_ns=True)),\n validate.union({\n \"hds\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hds']\"),\n \"hls\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hls']\")\n }),\n {validate.text: validate.all(\n [validate.all(validate.getattr(\"attrib\"), 
validate.get(\"href\"))],\n validate.transform(lambda x: list(set(x))) # unique\n )}\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n @classmethod\n def _hash_vpid(cls, vpid):\n return sha1(cls.hash + str(vpid).encode(\"utf8\")).hexdigest()\n\n def find_vpid(self, url):\n self.logger.debug(\"Looking for vpid on {0}\", url)\n res = http.get(url)\n m = self.vpid_re.search(res.text)\n return m and m.group(1)\n\n def find_tvip(self, url):\n self.logger.debug(\"Looking for tvip on {0}\", url)\n res = http.get(url)\n m = self.tvip_re.search(res.text)\n return m and m.group(1)\n\n def mediaselector(self, vpid):\n for platform in self.platforms:\n url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)\n stream_urls = http.get(url, schema=self.mediaselector_schema)\n for surl in stream_urls.get(\"hls\"):\n for s in HLSStream.parse_variant_playlist(self.session, surl).items():\n yield s\n for surl in stream_urls.get(\"hds\"):\n for s in HDSStream.parse_manifest(self.session, surl).items():\n yield s\n\n def _get_streams(self):\n m = self.url_re.match(self.url)\n episode_id = m.group(\"episode_id\")\n channel_name = m.group(\"channel_name\")\n\n if episode_id:\n self.logger.debug(\"Loading streams for episode: {0}\", episode_id)\n vpid = self.find_vpid(self.url)\n if vpid:\n self.logger.debug(\"Found VPID: {0}\", vpid)\n for s in self.mediaselector(vpid):\n yield s\n else:\n self.logger.error(\"Could not find VPID for episode {0}\", episode_id)\n elif channel_name:\n self.logger.debug(\"Loading stream for live channel: {0}\", channel_name)\n tvip = self.find_tvip(self.url)\n if tvip:\n self.logger.debug(\"Found TVIP: {0}\", tvip)\n for s in self.mediaselector(tvip):\n yield s\n\n\n__plugin__ = BBCiPlayer\n", "path": "src/streamlink/plugins/bbciplayer.py"}], "after_files": [{"content": "from __future__ import print_function\n\nimport base64\nimport re\nfrom functools import partial\nfrom hashlib import sha1\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HDSStream\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_xml, parse_json\n\n\nclass BBCiPlayer(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:www\\.)?bbc.co.uk/iplayer/\n (\n episode/(?P<episode_id>\\w+)|\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n vpid_re = re.compile(r'\"ident_id\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n api_url = (\"http://open.live.bbc.co.uk/mediaselector/5/select/\"\n \"version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/\")\n platforms = (\"pc\", \"iptv-all\")\n\n mediaselector_schema = validate.Schema(\n validate.transform(partial(parse_xml, ignore_ns=True)),\n validate.union({\n \"hds\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hds']\"),\n \"hls\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hls']\")\n }),\n {validate.text: validate.all(\n [validate.all(validate.getattr(\"attrib\"), validate.get(\"href\"))],\n validate.transform(lambda x: list(set(x))) # unique\n )}\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n @classmethod\n def 
_hash_vpid(cls, vpid):\n return sha1(cls.hash + str(vpid).encode(\"utf8\")).hexdigest()\n\n def find_vpid(self, url):\n self.logger.debug(\"Looking for vpid on {0}\", url)\n res = http.get(url)\n m = self.vpid_re.search(res.text)\n return m and m.group(1)\n\n def find_tvip(self, url):\n self.logger.debug(\"Looking for tvip on {0}\", url)\n res = http.get(url)\n m = self.tvip_re.search(res.text)\n return m and m.group(1)\n\n def mediaselector(self, vpid):\n for platform in self.platforms:\n url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)\n stream_urls = http.get(url, schema=self.mediaselector_schema)\n for surl in stream_urls.get(\"hls\"):\n for s in HLSStream.parse_variant_playlist(self.session, surl).items():\n yield s\n for surl in stream_urls.get(\"hds\"):\n for s in HDSStream.parse_manifest(self.session, surl).items():\n yield s\n\n def _get_streams(self):\n m = self.url_re.match(self.url)\n episode_id = m.group(\"episode_id\")\n channel_name = m.group(\"channel_name\")\n\n if episode_id:\n self.logger.debug(\"Loading streams for episode: {0}\", episode_id)\n vpid = self.find_vpid(self.url)\n if vpid:\n self.logger.debug(\"Found VPID: {0}\", vpid)\n for s in self.mediaselector(vpid):\n yield s\n else:\n self.logger.error(\"Could not find VPID for episode {0}\", episode_id)\n elif channel_name:\n self.logger.debug(\"Loading stream for live channel: {0}\", channel_name)\n tvip = self.find_tvip(self.url)\n if tvip:\n self.logger.debug(\"Found TVIP: {0}\", tvip)\n for s in self.mediaselector(tvip):\n yield s\n\n\n__plugin__ = BBCiPlayer\n", "path": "src/streamlink/plugins/bbciplayer.py"}]} | 1,814 | 233 |
gh_patches_debug_149 | rasdani/github-patches | git_diff | apache__tvm-6399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`import tvm` now requires pytest
With the merge of #6331, `import tvm` now requires pytest. I created this issue just to check whether this is something intentional or something that we want to fix.
The chain from `import tvm` to `import pytest` happens because of the `from . import testing` in `python/tvm/__init__.py`; nothing is actually done with that import.
https://github.com/apache/incubator-tvm/blob/a4ebb16ed76bfea4ce4eed7be7ea73d4a01027e2/python/tvm/__init__.py#L53-L56
Within `python/tvm/testing.py` there is then an `import pytest`. I was thinking that we might want to remove these lines from `__init__.py`, so that we don't load `tvm.testing` and only import it when required. I'm happy to submit a PR removing those lines, if there is agreement that it makes sense.
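The expectation behind that proposal can be written down as a small check; the module names are real, but the assertion describes the intended behaviour after such a change rather than what happens today:
```python
import sys

import tvm  # currently this pulls in tvm.testing and, through it, pytest

# After the proposed change this should hold, because `import tvm` would no
# longer touch tvm.testing at all:
assert "pytest" not in sys.modules, "importing tvm should not require pytest"

# Code that actually needs the helpers keeps working by importing them explicitly:
import tvm.testing  # noqa: F401  # this is the point where pytest gets loaded
```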
cc @tqchen
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/tvm/__init__.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 # pylint: disable=redefined-builtin, wildcard-import
18 """TVM: Open Deep Learning Compiler Stack."""
19 import multiprocessing
20 import sys
21 import traceback
22
23 # top-level alias
24 # tvm._ffi
25 from ._ffi.base import TVMError, __version__
26 from ._ffi.runtime_ctypes import DataTypeCode, DataType
27 from ._ffi import register_object, register_func, register_extension, get_global_func
28
29 # top-level alias
30 # tvm.runtime
31 from .runtime.object import Object
32 from .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl
33 from .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon
34 from .runtime import ndarray as nd
35
36 # tvm.error
37 from . import error
38
39 # tvm.ir
40 from .ir import IRModule
41 from .ir import transform
42 from .ir import container
43 from . import ir
44
45 # tvm.tir
46 from . import tir
47
48 # tvm.target
49 from . import target
50
51 # tvm.te
52 from . import te
53
54 # tvm.testing
55 from . import testing
56
57 # tvm.driver
58 from .driver import build, lower
59
60 # tvm.parser
61 from . import parser
62
63 # tvm tir hybrid script
64 from . import hybrid
65
66 # others
67 from . import arith
68
69 # support infra
70 from . import support
71
72 # Contrib initializers
73 from .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel
74
75
76 def tvm_wrap_excepthook(exception_hook):
77 """Wrap given excepthook with TVM additional work."""
78
79 def wrapper(exctype, value, trbk):
80 """Clean subprocesses when TVM is interrupted."""
81 exception_hook(exctype, value, trbk)
82 if hasattr(multiprocessing, 'active_children'):
83 # pylint: disable=not-callable
84 for p in multiprocessing.active_children():
85 p.terminate()
86
87 return wrapper
88
89
90 sys.excepthook = tvm_wrap_excepthook(sys.excepthook)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py
--- a/python/tvm/__init__.py
+++ b/python/tvm/__init__.py
@@ -51,9 +51,6 @@
# tvm.te
from . import te
-# tvm.testing
-from . import testing
-
# tvm.driver
from .driver import build, lower
| {"golden_diff": "diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py\n--- a/python/tvm/__init__.py\n+++ b/python/tvm/__init__.py\n@@ -51,9 +51,6 @@\n # tvm.te\n from . import te\n \n-# tvm.testing\n-from . import testing\n-\n # tvm.driver\n from .driver import build, lower\n", "issue": "`import tvm` now requires pytest\nWith the merge of #6331, `import tvm` now requires pytest. I created this issue just to check whether this is something intentional or something that we want to fix.\r\n\r\nThe chain from `import tvm` to `import pytest` happens due to the `from .import testing` on `python/tvm/__init__.py`. There is nothing actually done with that import.\r\n\r\nhttps://github.com/apache/incubator-tvm/blob/a4ebb16ed76bfea4ce4eed7be7ea73d4a01027e2/python/tvm/__init__.py#L53-L56\r\n\r\nWithin `python/tvm/testing.py` then there is the `import pytest`. I was thinking that we might want to remove these lines from `__init__.py`, so that we don't load `tvm.testing` and will only import it when required. I'm happy to submit a PR removing those lines, in case there is an understanding that it makes sense.\r\n\r\ncc @tqchen \n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=redefined-builtin, wildcard-import\n\"\"\"TVM: Open Deep Learning Compiler Stack.\"\"\"\nimport multiprocessing\nimport sys\nimport traceback\n\n# top-level alias\n# tvm._ffi\nfrom ._ffi.base import TVMError, __version__\nfrom ._ffi.runtime_ctypes import DataTypeCode, DataType\nfrom ._ffi import register_object, register_func, register_extension, get_global_func\n\n# top-level alias\n# tvm.runtime\nfrom .runtime.object import Object\nfrom .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl\nfrom .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon\nfrom .runtime import ndarray as nd\n\n# tvm.error\nfrom . import error\n\n# tvm.ir\nfrom .ir import IRModule\nfrom .ir import transform\nfrom .ir import container\nfrom . import ir\n\n# tvm.tir\nfrom . import tir\n\n# tvm.target\nfrom . import target\n\n# tvm.te\nfrom . import te\n\n# tvm.testing\nfrom . import testing\n\n# tvm.driver\nfrom .driver import build, lower\n\n# tvm.parser\nfrom . import parser\n\n# tvm tir hybrid script\nfrom . import hybrid\n\n# others\nfrom . import arith\n\n# support infra\nfrom . 
import support\n\n# Contrib initializers\nfrom .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel\n\n\ndef tvm_wrap_excepthook(exception_hook):\n \"\"\"Wrap given excepthook with TVM additional work.\"\"\"\n\n def wrapper(exctype, value, trbk):\n \"\"\"Clean subprocesses when TVM is interrupted.\"\"\"\n exception_hook(exctype, value, trbk)\n if hasattr(multiprocessing, 'active_children'):\n # pylint: disable=not-callable\n for p in multiprocessing.active_children():\n p.terminate()\n\n return wrapper\n\n\nsys.excepthook = tvm_wrap_excepthook(sys.excepthook)\n", "path": "python/tvm/__init__.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=redefined-builtin, wildcard-import\n\"\"\"TVM: Open Deep Learning Compiler Stack.\"\"\"\nimport multiprocessing\nimport sys\nimport traceback\n\n# top-level alias\n# tvm._ffi\nfrom ._ffi.base import TVMError, __version__\nfrom ._ffi.runtime_ctypes import DataTypeCode, DataType\nfrom ._ffi import register_object, register_func, register_extension, get_global_func\n\n# top-level alias\n# tvm.runtime\nfrom .runtime.object import Object\nfrom .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl\nfrom .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon\nfrom .runtime import ndarray as nd\n\n# tvm.error\nfrom . import error\n\n# tvm.ir\nfrom .ir import IRModule\nfrom .ir import transform\nfrom .ir import container\nfrom . import ir\n\n# tvm.tir\nfrom . import tir\n\n# tvm.target\nfrom . import target\n\n# tvm.te\nfrom . import te\n\n# tvm.driver\nfrom .driver import build, lower\n\n# tvm.parser\nfrom . import parser\n\n# tvm tir hybrid script\nfrom . import hybrid\n\n# others\nfrom . import arith\n\n# support infra\nfrom . import support\n\n# Contrib initializers\nfrom .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel\n\n\ndef tvm_wrap_excepthook(exception_hook):\n \"\"\"Wrap given excepthook with TVM additional work.\"\"\"\n\n def wrapper(exctype, value, trbk):\n \"\"\"Clean subprocesses when TVM is interrupted.\"\"\"\n exception_hook(exctype, value, trbk)\n if hasattr(multiprocessing, 'active_children'):\n # pylint: disable=not-callable\n for p in multiprocessing.active_children():\n p.terminate()\n\n return wrapper\n\n\nsys.excepthook = tvm_wrap_excepthook(sys.excepthook)\n", "path": "python/tvm/__init__.py"}]} | 1,283 | 87 |
gh_patches_debug_1023 | rasdani/github-patches | git_diff | pyca__cryptography-4037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in HKDF?
I think the computation of [`max_length`](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L70) in `src/cryptography/hazmat/primitives/kdf/hkdf.py` is wrong.
[RFC5869](https://tools.ietf.org/html/rfc5869) states on page 3 that the input `L` of the HKDF-Expand function describes the "length of output keying material in octets (<= 255*HashLen)".
An octet consists of 8 bits.
Currently, `max_length` is computed as:
```
max_length = 255 * (algorithm.digest_size // 8)
```
The problem is that `algorithm.digest_size` returns the size of the digest in bytes (there are 8 bits per byte). Therefore, the division by 8 is wrong, and thus `max_length` is unnecessarily small.
(same applies for the computation of `salt` as well ([line 33](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L33)), in the case where `salt is None`)
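A concrete example with SHA-256 (`digest_size == 32` bytes) shows the gap between the two readings:
```python
# Sketch comparing the current computation with the RFC 5869 limit for SHA-256.
from cryptography.hazmat.primitives import hashes

algorithm = hashes.SHA256()

current_max = 255 * (algorithm.digest_size // 8)  # 255 * 4  == 1020 octets
rfc_max = 255 * algorithm.digest_size             # 255 * 32 == 8160 octets

print(current_max, rfc_max)
```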
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/primitives/kdf/hkdf.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import six
8
9 from cryptography import utils
10 from cryptography.exceptions import (
11 AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons
12 )
13 from cryptography.hazmat.backends.interfaces import HMACBackend
14 from cryptography.hazmat.primitives import constant_time, hmac
15 from cryptography.hazmat.primitives.kdf import KeyDerivationFunction
16
17
18 @utils.register_interface(KeyDerivationFunction)
19 class HKDF(object):
20 def __init__(self, algorithm, length, salt, info, backend):
21 if not isinstance(backend, HMACBackend):
22 raise UnsupportedAlgorithm(
23 "Backend object does not implement HMACBackend.",
24 _Reasons.BACKEND_MISSING_INTERFACE
25 )
26
27 self._algorithm = algorithm
28
29 if not (salt is None or isinstance(salt, bytes)):
30 raise TypeError("salt must be bytes.")
31
32 if salt is None:
33 salt = b"\x00" * self._algorithm.digest_size
34
35 self._salt = salt
36
37 self._backend = backend
38
39 self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)
40
41 def _extract(self, key_material):
42 h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)
43 h.update(key_material)
44 return h.finalize()
45
46 def derive(self, key_material):
47 if not isinstance(key_material, bytes):
48 raise TypeError("key_material must be bytes.")
49
50 return self._hkdf_expand.derive(self._extract(key_material))
51
52 def verify(self, key_material, expected_key):
53 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
54 raise InvalidKey
55
56
57 @utils.register_interface(KeyDerivationFunction)
58 class HKDFExpand(object):
59 def __init__(self, algorithm, length, info, backend):
60 if not isinstance(backend, HMACBackend):
61 raise UnsupportedAlgorithm(
62 "Backend object does not implement HMACBackend.",
63 _Reasons.BACKEND_MISSING_INTERFACE
64 )
65
66 self._algorithm = algorithm
67
68 self._backend = backend
69
70 max_length = 255 * (algorithm.digest_size // 8)
71
72 if length > max_length:
73 raise ValueError(
74 "Can not derive keys larger than {0} octets.".format(
75 max_length
76 ))
77
78 self._length = length
79
80 if not (info is None or isinstance(info, bytes)):
81 raise TypeError("info must be bytes.")
82
83 if info is None:
84 info = b""
85
86 self._info = info
87
88 self._used = False
89
90 def _expand(self, key_material):
91 output = [b""]
92 counter = 1
93
94 while self._algorithm.digest_size * (len(output) - 1) < self._length:
95 h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)
96 h.update(output[-1])
97 h.update(self._info)
98 h.update(six.int2byte(counter))
99 output.append(h.finalize())
100 counter += 1
101
102 return b"".join(output)[:self._length]
103
104 def derive(self, key_material):
105 if not isinstance(key_material, bytes):
106 raise TypeError("key_material must be bytes.")
107
108 if self._used:
109 raise AlreadyFinalized
110
111 self._used = True
112 return self._expand(key_material)
113
114 def verify(self, key_material, expected_key):
115 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
116 raise InvalidKey
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py
--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py
+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py
@@ -67,7 +67,7 @@
self._backend = backend
- max_length = 255 * (algorithm.digest_size // 8)
+ max_length = 255 * algorithm.digest_size
if length > max_length:
raise ValueError(
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py\n+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n@@ -67,7 +67,7 @@\n \n self._backend = backend\n \n- max_length = 255 * (algorithm.digest_size // 8)\n+ max_length = 255 * algorithm.digest_size\n \n if length > max_length:\n raise ValueError(\n", "issue": "Bug in HKDF?\nI think the computation of [`max_length`](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L70) in `src/cryptography/hazmat/primitives/kdf/hkdf.py` is wrong.\r\n\r\n[RFC5869](https://tools.ietf.org/html/rfc5869) states on page 3 that the input `L` of the HKDF-Expand function describes the \"length of output keying material in octets (<= 255*HashLen)\".\r\nAn octet consists of 8 bit. \r\n\r\nCurrently, `max_length` is computed as:\r\n\r\n```\r\nmax_length = 255 * (algorithm.digest_size // 8)\r\n```\r\n\r\nThe problem is, that `algorithm.digest_size` returns the size of the digest in bytes. (There are 8 bits per byte). Therefore, the division by 8 is wrong, and thus, `max_length` is unnecessarily small.\r\n\r\n(same applies for the computation of `salt` as well ([line 33](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L33)), in the case where `salt is None`)\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.backends.interfaces import HMACBackend\nfrom cryptography.hazmat.primitives import constant_time, hmac\nfrom cryptography.hazmat.primitives.kdf import KeyDerivationFunction\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDF(object):\n def __init__(self, algorithm, length, salt, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n if not (salt is None or isinstance(salt, bytes)):\n raise TypeError(\"salt must be bytes.\")\n\n if salt is None:\n salt = b\"\\x00\" * self._algorithm.digest_size\n\n self._salt = salt\n\n self._backend = backend\n\n self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)\n\n def _extract(self, key_material):\n h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)\n h.update(key_material)\n return h.finalize()\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n return self._hkdf_expand.derive(self._extract(key_material))\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDFExpand(object):\n def __init__(self, algorithm, length, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n 
)\n\n self._algorithm = algorithm\n\n self._backend = backend\n\n max_length = 255 * (algorithm.digest_size // 8)\n\n if length > max_length:\n raise ValueError(\n \"Can not derive keys larger than {0} octets.\".format(\n max_length\n ))\n\n self._length = length\n\n if not (info is None or isinstance(info, bytes)):\n raise TypeError(\"info must be bytes.\")\n\n if info is None:\n info = b\"\"\n\n self._info = info\n\n self._used = False\n\n def _expand(self, key_material):\n output = [b\"\"]\n counter = 1\n\n while self._algorithm.digest_size * (len(output) - 1) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n h.update(six.int2byte(counter))\n output.append(h.finalize())\n counter += 1\n\n return b\"\".join(output)[:self._length]\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n if self._used:\n raise AlreadyFinalized\n\n self._used = True\n return self._expand(key_material)\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n", "path": "src/cryptography/hazmat/primitives/kdf/hkdf.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.backends.interfaces import HMACBackend\nfrom cryptography.hazmat.primitives import constant_time, hmac\nfrom cryptography.hazmat.primitives.kdf import KeyDerivationFunction\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDF(object):\n def __init__(self, algorithm, length, salt, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n if not (salt is None or isinstance(salt, bytes)):\n raise TypeError(\"salt must be bytes.\")\n\n if salt is None:\n salt = b\"\\x00\" * self._algorithm.digest_size\n\n self._salt = salt\n\n self._backend = backend\n\n self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)\n\n def _extract(self, key_material):\n h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)\n h.update(key_material)\n return h.finalize()\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n return self._hkdf_expand.derive(self._extract(key_material))\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDFExpand(object):\n def __init__(self, algorithm, length, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n self._backend = backend\n\n max_length = 255 * algorithm.digest_size\n\n if length > max_length:\n raise ValueError(\n \"Can not derive keys larger than {0} octets.\".format(\n max_length\n ))\n\n 
self._length = length\n\n if not (info is None or isinstance(info, bytes)):\n raise TypeError(\"info must be bytes.\")\n\n if info is None:\n info = b\"\"\n\n self._info = info\n\n self._used = False\n\n def _expand(self, key_material):\n output = [b\"\"]\n counter = 1\n\n while self._algorithm.digest_size * (len(output) - 1) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n h.update(six.int2byte(counter))\n output.append(h.finalize())\n counter += 1\n\n return b\"\".join(output)[:self._length]\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n if self._used:\n raise AlreadyFinalized\n\n self._used = True\n return self._expand(key_material)\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n", "path": "src/cryptography/hazmat/primitives/kdf/hkdf.py"}]} | 1,658 | 131 |
gh_patches_debug_17802 | rasdani/github-patches | git_diff | python-discord__bot-919 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use appropriate log level for exceptions from event listeners
From @SebastiaanZ:
> Finally, `discord.py` currently "hides" errors/tracebacks that happen in event listeners as we only have a custom error handler for commands. This isn't too bad locally, since `d.py` **prints** those exceptions to stderr, but it obviously means they'll never show up in Sentry, as they are **not actually logged** with the appropriate level.
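One way to surface those tracebacks is to override the bot's `on_error` hook so listener exceptions go through the logging setup; the sketch below assumes the `Bot` subclass from `bot/bot.py` and is illustrative rather than the project's actual handler:
```python
import logging

from discord.ext import commands

log = logging.getLogger('bot')


class Bot(commands.Bot):
    async def on_error(self, event: str, *args, **kwargs) -> None:
        """Log unhandled exceptions raised in event listeners."""
        # discord.py invokes on_error from inside the `except` block, so the
        # active exception is still available and log.exception records it.
        log.exception(f"Unhandled exception in {event}")
```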
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/bot.py`
Content:
```
1 import asyncio
2 import logging
3 import socket
4 import warnings
5 from typing import Optional
6
7 import aiohttp
8 import discord
9 from discord.ext import commands
10
11 from bot import DEBUG_MODE, api, constants
12 from bot.async_stats import AsyncStatsClient
13
14 log = logging.getLogger('bot')
15
16
17 class Bot(commands.Bot):
18 """A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client."""
19
20 def __init__(self, *args, **kwargs):
21 if "connector" in kwargs:
22 warnings.warn(
23 "If login() is called (or the bot is started), the connector will be overwritten "
24 "with an internal one"
25 )
26
27 super().__init__(*args, **kwargs)
28
29 self.http_session: Optional[aiohttp.ClientSession] = None
30 self.api_client = api.APIClient(loop=self.loop)
31
32 self._connector = None
33 self._resolver = None
34 self._guild_available = asyncio.Event()
35
36 statsd_url = constants.Stats.statsd_host
37
38 if DEBUG_MODE:
39 # Since statsd is UDP, there are no errors for sending to a down port.
40 # For this reason, setting the statsd host to 127.0.0.1 for development
41 # will effectively disable stats.
42 statsd_url = "127.0.0.1"
43
44 self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix="bot")
45
46 def add_cog(self, cog: commands.Cog) -> None:
47 """Adds a "cog" to the bot and logs the operation."""
48 super().add_cog(cog)
49 log.info(f"Cog loaded: {cog.qualified_name}")
50
51 def clear(self) -> None:
52 """
53 Clears the internal state of the bot and recreates the connector and sessions.
54
55 Will cause a DeprecationWarning if called outside a coroutine.
56 """
57 # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate
58 # our own stuff here too.
59 self._recreate()
60 super().clear()
61
62 async def close(self) -> None:
63 """Close the Discord connection and the aiohttp session, connector, statsd client, and resolver."""
64 await super().close()
65
66 await self.api_client.close()
67
68 if self.http_session:
69 await self.http_session.close()
70
71 if self._connector:
72 await self._connector.close()
73
74 if self._resolver:
75 await self._resolver.close()
76
77 if self.stats._transport:
78 await self.stats._transport.close()
79
80 async def login(self, *args, **kwargs) -> None:
81 """Re-create the connector and set up sessions before logging into Discord."""
82 self._recreate()
83 await self.stats.create_socket()
84 await super().login(*args, **kwargs)
85
86 def _recreate(self) -> None:
87 """Re-create the connector, aiohttp session, and the APIClient."""
88 # Use asyncio for DNS resolution instead of threads so threads aren't spammed.
89 # Doesn't seem to have any state with regards to being closed, so no need to worry?
90 self._resolver = aiohttp.AsyncResolver()
91
92 # Its __del__ does send a warning but it doesn't always show up for some reason.
93 if self._connector and not self._connector._closed:
94 log.warning(
95 "The previous connector was not closed; it will remain open and be overwritten"
96 )
97
98 # Use AF_INET as its socket family to prevent HTTPS related problems both locally
99 # and in production.
100 self._connector = aiohttp.TCPConnector(
101 resolver=self._resolver,
102 family=socket.AF_INET,
103 )
104
105 # Client.login() will call HTTPClient.static_login() which will create a session using
106 # this connector attribute.
107 self.http.connector = self._connector
108
109 # Its __del__ does send a warning but it doesn't always show up for some reason.
110 if self.http_session and not self.http_session.closed:
111 log.warning(
112 "The previous session was not closed; it will remain open and be overwritten"
113 )
114
115 self.http_session = aiohttp.ClientSession(connector=self._connector)
116 self.api_client.recreate(force=True, connector=self._connector)
117
118 async def on_guild_available(self, guild: discord.Guild) -> None:
119 """
120 Set the internal guild available event when constants.Guild.id becomes available.
121
122 If the cache appears to still be empty (no members, no channels, or no roles), the event
123 will not be set.
124 """
125 if guild.id != constants.Guild.id:
126 return
127
128 if not guild.roles or not guild.members or not guild.channels:
129 msg = "Guild available event was dispatched but the cache appears to still be empty!"
130 log.warning(msg)
131
132 try:
133 webhook = await self.fetch_webhook(constants.Webhooks.dev_log)
134 except discord.HTTPException as e:
135 log.error(f"Failed to fetch webhook to send empty cache warning: status {e.status}")
136 else:
137 await webhook.send(f"<@&{constants.Roles.admin}> {msg}")
138
139 return
140
141 self._guild_available.set()
142
143 async def on_guild_unavailable(self, guild: discord.Guild) -> None:
144 """Clear the internal guild available event when constants.Guild.id becomes unavailable."""
145 if guild.id != constants.Guild.id:
146 return
147
148 self._guild_available.clear()
149
150 async def wait_until_guild_available(self) -> None:
151 """
152 Wait until the constants.Guild.id guild is available (and the cache is ready).
153
154 The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE
155 gateway event before giving up and thus not populating the cache for unavailable guilds.
156 """
157 await self._guild_available.wait()
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/bot.py b/bot/bot.py
--- a/bot/bot.py
+++ b/bot/bot.py
@@ -7,6 +7,7 @@
import aiohttp
import discord
from discord.ext import commands
+from sentry_sdk import push_scope
from bot import DEBUG_MODE, api, constants
from bot.async_stats import AsyncStatsClient
@@ -155,3 +156,14 @@
gateway event before giving up and thus not populating the cache for unavailable guilds.
"""
await self._guild_available.wait()
+
+ async def on_error(self, event: str, *args, **kwargs) -> None:
+ """Log errors raised in event listeners rather than printing them to stderr."""
+ self.stats.incr(f"errors.event.{event}")
+
+ with push_scope() as scope:
+ scope.set_tag("event", event)
+ scope.set_extra("args", args)
+ scope.set_extra("kwargs", kwargs)
+
+ log.exception(f"Unhandled exception in {event}.")
| {"golden_diff": "diff --git a/bot/bot.py b/bot/bot.py\n--- a/bot/bot.py\n+++ b/bot/bot.py\n@@ -7,6 +7,7 @@\n import aiohttp\n import discord\n from discord.ext import commands\n+from sentry_sdk import push_scope\n \n from bot import DEBUG_MODE, api, constants\n from bot.async_stats import AsyncStatsClient\n@@ -155,3 +156,14 @@\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n+\n+ async def on_error(self, event: str, *args, **kwargs) -> None:\n+ \"\"\"Log errors raised in event listeners rather than printing them to stderr.\"\"\"\n+ self.stats.incr(f\"errors.event.{event}\")\n+\n+ with push_scope() as scope:\n+ scope.set_tag(\"event\", event)\n+ scope.set_extra(\"args\", args)\n+ scope.set_extra(\"kwargs\", kwargs)\n+\n+ log.exception(f\"Unhandled exception in {event}.\")\n", "issue": "Use appropriate log level for exceptions from event listeners\nFrom @SebastiaanZ:\r\n\r\n> Finally, `discord.py` currently \"hides\" errors/tracebacks that happen in event listeners as we only have a custom error handler for commands. This isn't too bad locally, since `d.py` **prints** those exceptions to stderr, but it obviously means they'll never show up in Sentry, as they are **not actually logged** with the appropriate level.\n", "before_files": [{"content": "import asyncio\nimport logging\nimport socket\nimport warnings\nfrom typing import Optional\n\nimport aiohttp\nimport discord\nfrom discord.ext import commands\n\nfrom bot import DEBUG_MODE, api, constants\nfrom bot.async_stats import AsyncStatsClient\n\nlog = logging.getLogger('bot')\n\n\nclass Bot(commands.Bot):\n \"\"\"A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client.\"\"\"\n\n def __init__(self, *args, **kwargs):\n if \"connector\" in kwargs:\n warnings.warn(\n \"If login() is called (or the bot is started), the connector will be overwritten \"\n \"with an internal one\"\n )\n\n super().__init__(*args, **kwargs)\n\n self.http_session: Optional[aiohttp.ClientSession] = None\n self.api_client = api.APIClient(loop=self.loop)\n\n self._connector = None\n self._resolver = None\n self._guild_available = asyncio.Event()\n\n statsd_url = constants.Stats.statsd_host\n\n if DEBUG_MODE:\n # Since statsd is UDP, there are no errors for sending to a down port.\n # For this reason, setting the statsd host to 127.0.0.1 for development\n # will effectively disable stats.\n statsd_url = \"127.0.0.1\"\n\n self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix=\"bot\")\n\n def add_cog(self, cog: commands.Cog) -> None:\n \"\"\"Adds a \"cog\" to the bot and logs the operation.\"\"\"\n super().add_cog(cog)\n log.info(f\"Cog loaded: {cog.qualified_name}\")\n\n def clear(self) -> None:\n \"\"\"\n Clears the internal state of the bot and recreates the connector and sessions.\n\n Will cause a DeprecationWarning if called outside a coroutine.\n \"\"\"\n # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate\n # our own stuff here too.\n self._recreate()\n super().clear()\n\n async def close(self) -> None:\n \"\"\"Close the Discord connection and the aiohttp session, connector, statsd client, and resolver.\"\"\"\n await super().close()\n\n await self.api_client.close()\n\n if self.http_session:\n await self.http_session.close()\n\n if self._connector:\n await self._connector.close()\n\n if self._resolver:\n await self._resolver.close()\n\n if self.stats._transport:\n await 
self.stats._transport.close()\n\n async def login(self, *args, **kwargs) -> None:\n \"\"\"Re-create the connector and set up sessions before logging into Discord.\"\"\"\n self._recreate()\n await self.stats.create_socket()\n await super().login(*args, **kwargs)\n\n def _recreate(self) -> None:\n \"\"\"Re-create the connector, aiohttp session, and the APIClient.\"\"\"\n # Use asyncio for DNS resolution instead of threads so threads aren't spammed.\n # Doesn't seem to have any state with regards to being closed, so no need to worry?\n self._resolver = aiohttp.AsyncResolver()\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self._connector and not self._connector._closed:\n log.warning(\n \"The previous connector was not closed; it will remain open and be overwritten\"\n )\n\n # Use AF_INET as its socket family to prevent HTTPS related problems both locally\n # and in production.\n self._connector = aiohttp.TCPConnector(\n resolver=self._resolver,\n family=socket.AF_INET,\n )\n\n # Client.login() will call HTTPClient.static_login() which will create a session using\n # this connector attribute.\n self.http.connector = self._connector\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self.http_session and not self.http_session.closed:\n log.warning(\n \"The previous session was not closed; it will remain open and be overwritten\"\n )\n\n self.http_session = aiohttp.ClientSession(connector=self._connector)\n self.api_client.recreate(force=True, connector=self._connector)\n\n async def on_guild_available(self, guild: discord.Guild) -> None:\n \"\"\"\n Set the internal guild available event when constants.Guild.id becomes available.\n\n If the cache appears to still be empty (no members, no channels, or no roles), the event\n will not be set.\n \"\"\"\n if guild.id != constants.Guild.id:\n return\n\n if not guild.roles or not guild.members or not guild.channels:\n msg = \"Guild available event was dispatched but the cache appears to still be empty!\"\n log.warning(msg)\n\n try:\n webhook = await self.fetch_webhook(constants.Webhooks.dev_log)\n except discord.HTTPException as e:\n log.error(f\"Failed to fetch webhook to send empty cache warning: status {e.status}\")\n else:\n await webhook.send(f\"<@&{constants.Roles.admin}> {msg}\")\n\n return\n\n self._guild_available.set()\n\n async def on_guild_unavailable(self, guild: discord.Guild) -> None:\n \"\"\"Clear the internal guild available event when constants.Guild.id becomes unavailable.\"\"\"\n if guild.id != constants.Guild.id:\n return\n\n self._guild_available.clear()\n\n async def wait_until_guild_available(self) -> None:\n \"\"\"\n Wait until the constants.Guild.id guild is available (and the cache is ready).\n\n The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n", "path": "bot/bot.py"}], "after_files": [{"content": "import asyncio\nimport logging\nimport socket\nimport warnings\nfrom typing import Optional\n\nimport aiohttp\nimport discord\nfrom discord.ext import commands\nfrom sentry_sdk import push_scope\n\nfrom bot import DEBUG_MODE, api, constants\nfrom bot.async_stats import AsyncStatsClient\n\nlog = logging.getLogger('bot')\n\n\nclass Bot(commands.Bot):\n \"\"\"A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client.\"\"\"\n\n def __init__(self, *args, 
**kwargs):\n if \"connector\" in kwargs:\n warnings.warn(\n \"If login() is called (or the bot is started), the connector will be overwritten \"\n \"with an internal one\"\n )\n\n super().__init__(*args, **kwargs)\n\n self.http_session: Optional[aiohttp.ClientSession] = None\n self.api_client = api.APIClient(loop=self.loop)\n\n self._connector = None\n self._resolver = None\n self._guild_available = asyncio.Event()\n\n statsd_url = constants.Stats.statsd_host\n\n if DEBUG_MODE:\n # Since statsd is UDP, there are no errors for sending to a down port.\n # For this reason, setting the statsd host to 127.0.0.1 for development\n # will effectively disable stats.\n statsd_url = \"127.0.0.1\"\n\n self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix=\"bot\")\n\n def add_cog(self, cog: commands.Cog) -> None:\n \"\"\"Adds a \"cog\" to the bot and logs the operation.\"\"\"\n super().add_cog(cog)\n log.info(f\"Cog loaded: {cog.qualified_name}\")\n\n def clear(self) -> None:\n \"\"\"\n Clears the internal state of the bot and recreates the connector and sessions.\n\n Will cause a DeprecationWarning if called outside a coroutine.\n \"\"\"\n # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate\n # our own stuff here too.\n self._recreate()\n super().clear()\n\n async def close(self) -> None:\n \"\"\"Close the Discord connection and the aiohttp session, connector, statsd client, and resolver.\"\"\"\n await super().close()\n\n await self.api_client.close()\n\n if self.http_session:\n await self.http_session.close()\n\n if self._connector:\n await self._connector.close()\n\n if self._resolver:\n await self._resolver.close()\n\n if self.stats._transport:\n await self.stats._transport.close()\n\n async def login(self, *args, **kwargs) -> None:\n \"\"\"Re-create the connector and set up sessions before logging into Discord.\"\"\"\n self._recreate()\n await self.stats.create_socket()\n await super().login(*args, **kwargs)\n\n def _recreate(self) -> None:\n \"\"\"Re-create the connector, aiohttp session, and the APIClient.\"\"\"\n # Use asyncio for DNS resolution instead of threads so threads aren't spammed.\n # Doesn't seem to have any state with regards to being closed, so no need to worry?\n self._resolver = aiohttp.AsyncResolver()\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self._connector and not self._connector._closed:\n log.warning(\n \"The previous connector was not closed; it will remain open and be overwritten\"\n )\n\n # Use AF_INET as its socket family to prevent HTTPS related problems both locally\n # and in production.\n self._connector = aiohttp.TCPConnector(\n resolver=self._resolver,\n family=socket.AF_INET,\n )\n\n # Client.login() will call HTTPClient.static_login() which will create a session using\n # this connector attribute.\n self.http.connector = self._connector\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self.http_session and not self.http_session.closed:\n log.warning(\n \"The previous session was not closed; it will remain open and be overwritten\"\n )\n\n self.http_session = aiohttp.ClientSession(connector=self._connector)\n self.api_client.recreate(force=True, connector=self._connector)\n\n async def on_guild_available(self, guild: discord.Guild) -> None:\n \"\"\"\n Set the internal guild available event when constants.Guild.id becomes available.\n\n If the cache appears to still be empty (no members, no channels, or no roles), the event\n 
will not be set.\n \"\"\"\n if guild.id != constants.Guild.id:\n return\n\n if not guild.roles or not guild.members or not guild.channels:\n msg = \"Guild available event was dispatched but the cache appears to still be empty!\"\n log.warning(msg)\n\n try:\n webhook = await self.fetch_webhook(constants.Webhooks.dev_log)\n except discord.HTTPException as e:\n log.error(f\"Failed to fetch webhook to send empty cache warning: status {e.status}\")\n else:\n await webhook.send(f\"<@&{constants.Roles.admin}> {msg}\")\n\n return\n\n self._guild_available.set()\n\n async def on_guild_unavailable(self, guild: discord.Guild) -> None:\n \"\"\"Clear the internal guild available event when constants.Guild.id becomes unavailable.\"\"\"\n if guild.id != constants.Guild.id:\n return\n\n self._guild_available.clear()\n\n async def wait_until_guild_available(self) -> None:\n \"\"\"\n Wait until the constants.Guild.id guild is available (and the cache is ready).\n\n The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n\n async def on_error(self, event: str, *args, **kwargs) -> None:\n \"\"\"Log errors raised in event listeners rather than printing them to stderr.\"\"\"\n self.stats.incr(f\"errors.event.{event}\")\n\n with push_scope() as scope:\n scope.set_tag(\"event\", event)\n scope.set_extra(\"args\", args)\n scope.set_extra(\"kwargs\", kwargs)\n\n log.exception(f\"Unhandled exception in {event}.\")\n", "path": "bot/bot.py"}]} | 1,974 | 230 |
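For readers unfamiliar with the `sentry_sdk` calls introduced by the patch above, the snippet below is a minimal, self-contained sketch of the same scope-tagging pattern; it assumes `sentry-sdk` is installed and that `sentry_sdk.init()` has been called elsewhere at startup (otherwise the code still runs, but nothing is shipped to Sentry). The event name and the deliberately failing expression are placeholders.

```python
import logging

from sentry_sdk import push_scope

log = logging.getLogger(__name__)

try:
    1 / 0  # stand-in for whatever the event listener was doing
except ZeroDivisionError:
    with push_scope() as scope:
        # Tags and extras set on this scope apply only to events captured inside the block.
        scope.set_tag("event", "example_event")
        scope.set_extra("args", ())
        log.exception("Unhandled exception in example_event.")
```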
gh_patches_debug_18475 | rasdani/github-patches | git_diff | getnikola__nikola-1957 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
handle include tag in mako templates
Currently templates used via include tags are not considered dependencies. It's not hard.
--- END ISSUE ---
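To make the gap concrete, the hypothetical probe below (not part of the repository) walks Mako's parse tree the same way the plugin's `get_deps` does. With the unpatched keyword-only check, the `<%include file="sidebar.tmpl"/>` tag is skipped because include tags surface as `parsetree.IncludeTag` nodes; the combined check shown here also picks it up. The template names are made up.

```python
from mako import lexer, parsetree

template_text = """\
<%inherit file="base.tmpl"/>
<%namespace file="helpers.tmpl" import="*"/>
<%include file="sidebar.tmpl"/>
Hello!
"""

lex = lexer.Lexer(text=template_text, filename="example.tmpl")
lex.parse()

deps = []
for node in lex.template.nodes:
    keyword = getattr(node, "keyword", None)
    # The unpatched plugin only checks the keyword, so IncludeTag nodes are missed.
    if keyword in ("inherit", "namespace") or isinstance(node, parsetree.IncludeTag):
        deps.append(node.attributes["file"])

print(deps)  # expected: ['base.tmpl', 'helpers.tmpl', 'sidebar.tmpl']
```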
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/template/mako.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Mako template handler."""
28
29 from __future__ import unicode_literals, print_function, absolute_import
30 import os
31 import shutil
32 import sys
33 import tempfile
34
35 from mako import util, lexer
36 from mako.lookup import TemplateLookup
37 from mako.template import Template
38 from markupsafe import Markup # It's ok, Mako requires it
39
40 from nikola.plugin_categories import TemplateSystem
41 from nikola.utils import makedirs, get_logger, STDERR_HANDLER
42
43 LOGGER = get_logger('mako', STDERR_HANDLER)
44
45
46 class MakoTemplates(TemplateSystem):
47
48 """Support for Mako templates."""
49
50 name = "mako"
51
52 lookup = None
53 cache = {}
54 filters = {}
55 directories = []
56 cache_dir = None
57
58 def get_deps(self, filename):
59 """Get dependencies for a template (internal function)."""
60 text = util.read_file(filename)
61 lex = lexer.Lexer(text=text, filename=filename)
62 lex.parse()
63
64 deps = []
65 for n in lex.template.nodes:
66 keyword = getattr(n, 'keyword', None)
67 if keyword in ["inherit", "namespace"]:
68 deps.append(n.attributes['file'])
69 # TODO: include tags are not handled
70 return deps
71
72 def set_directories(self, directories, cache_folder):
73 """Create a new template lookup with set directories."""
74 cache_dir = os.path.join(cache_folder, '.mako.tmp')
75 # Workaround for a Mako bug, Issue #825
76 if sys.version_info[0] == 2:
77 try:
78 os.path.abspath(cache_dir).decode('ascii')
79 except UnicodeEncodeError:
80 cache_dir = tempfile.mkdtemp()
81 LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))
82 if os.path.exists(cache_dir):
83 shutil.rmtree(cache_dir)
84 self.directories = directories
85 self.cache_dir = cache_dir
86 self.create_lookup()
87
88 def inject_directory(self, directory):
89 """Add a directory to the lookup and recreate it if it's not there yet."""
90 if directory not in self.directories:
91 self.directories.append(directory)
92 self.create_lookup()
93
94 def create_lookup(self):
95 """Create a template lookup."""
96 self.lookup = TemplateLookup(
97 directories=self.directories,
98 module_directory=self.cache_dir,
99 output_encoding='utf-8')
100
101 def set_site(self, site):
102 """Set the Nikola site."""
103 self.site = site
104 self.filters.update(self.site.config['TEMPLATE_FILTERS'])
105
106 def render_template(self, template_name, output_name, context):
107 """Render the template into output_name using context."""
108 context['striphtml'] = striphtml
109 template = self.lookup.get_template(template_name)
110 data = template.render_unicode(**context)
111 if output_name is not None:
112 makedirs(os.path.dirname(output_name))
113 with open(output_name, 'w+') as output:
114 output.write(data)
115 return data
116
117 def render_template_to_string(self, template, context):
118 """Render template to a string using context."""
119 context.update(self.filters)
120 return Template(template).render(**context)
121
122 def template_deps(self, template_name):
123 """Generate list of dependencies for a template."""
124 # We can cache here because dependencies should
125 # not change between runs
126 if self.cache.get(template_name, None) is None:
127 template = self.lookup.get_template(template_name)
128 dep_filenames = self.get_deps(template.filename)
129 deps = [template.filename]
130 for fname in dep_filenames:
131 deps += self.template_deps(fname)
132 self.cache[template_name] = tuple(deps)
133 return list(self.cache[template_name])
134
135
136 def striphtml(text):
137 """Strip HTML tags from text."""
138 return Markup(text).striptags()
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/template/mako.py b/nikola/plugins/template/mako.py
--- a/nikola/plugins/template/mako.py
+++ b/nikola/plugins/template/mako.py
@@ -32,7 +32,7 @@
import sys
import tempfile
-from mako import util, lexer
+from mako import util, lexer, parsetree
from mako.lookup import TemplateLookup
from mako.template import Template
from markupsafe import Markup # It's ok, Mako requires it
@@ -64,9 +64,8 @@
deps = []
for n in lex.template.nodes:
keyword = getattr(n, 'keyword', None)
- if keyword in ["inherit", "namespace"]:
+ if keyword in ["inherit", "namespace"] or isinstance(n, parsetree.IncludeTag):
deps.append(n.attributes['file'])
- # TODO: include tags are not handled
return deps
def set_directories(self, directories, cache_folder):
| {"golden_diff": "diff --git a/nikola/plugins/template/mako.py b/nikola/plugins/template/mako.py\n--- a/nikola/plugins/template/mako.py\n+++ b/nikola/plugins/template/mako.py\n@@ -32,7 +32,7 @@\n import sys\n import tempfile\n \n-from mako import util, lexer\n+from mako import util, lexer, parsetree\n from mako.lookup import TemplateLookup\n from mako.template import Template\n from markupsafe import Markup # It's ok, Mako requires it\n@@ -64,9 +64,8 @@\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n- if keyword in [\"inherit\", \"namespace\"]:\n+ if keyword in [\"inherit\", \"namespace\"] or isinstance(n, parsetree.IncludeTag):\n deps.append(n.attributes['file'])\n- # TODO: include tags are not handled\n return deps\n \n def set_directories(self, directories, cache_folder):\n", "issue": "handle include tag in mako templates\nCurrently templates used via include tags are not considered dependencies. It's not hard.\n\nhandle include tag in mako templates\nCurrently templates used via include tags are not considered dependencies. It's not hard.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Mako template handler.\"\"\"\n\nfrom __future__ import unicode_literals, print_function, absolute_import\nimport os\nimport shutil\nimport sys\nimport tempfile\n\nfrom mako import util, lexer\nfrom mako.lookup import TemplateLookup\nfrom mako.template import Template\nfrom markupsafe import Markup # It's ok, Mako requires it\n\nfrom nikola.plugin_categories import TemplateSystem\nfrom nikola.utils import makedirs, get_logger, STDERR_HANDLER\n\nLOGGER = get_logger('mako', STDERR_HANDLER)\n\n\nclass MakoTemplates(TemplateSystem):\n\n \"\"\"Support for Mako templates.\"\"\"\n\n name = \"mako\"\n\n lookup = None\n cache = {}\n filters = {}\n directories = []\n cache_dir = None\n\n def get_deps(self, filename):\n \"\"\"Get dependencies for a template (internal function).\"\"\"\n text = util.read_file(filename)\n lex = lexer.Lexer(text=text, filename=filename)\n lex.parse()\n\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n if keyword in [\"inherit\", \"namespace\"]:\n deps.append(n.attributes['file'])\n # TODO: include tags are not handled\n return deps\n\n def set_directories(self, directories, cache_folder):\n \"\"\"Create a new template lookup with set directories.\"\"\"\n cache_dir = os.path.join(cache_folder, '.mako.tmp')\n # Workaround for a Mako bug, Issue #825\n if sys.version_info[0] == 2:\n try:\n os.path.abspath(cache_dir).decode('ascii')\n except UnicodeEncodeError:\n cache_dir = tempfile.mkdtemp()\n LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))\n if os.path.exists(cache_dir):\n shutil.rmtree(cache_dir)\n self.directories = directories\n self.cache_dir = cache_dir\n self.create_lookup()\n\n def inject_directory(self, directory):\n \"\"\"Add a directory to the lookup and recreate it if it's not there yet.\"\"\"\n if directory not in self.directories:\n self.directories.append(directory)\n self.create_lookup()\n\n def create_lookup(self):\n \"\"\"Create a template lookup.\"\"\"\n self.lookup = TemplateLookup(\n directories=self.directories,\n module_directory=self.cache_dir,\n output_encoding='utf-8')\n\n def set_site(self, site):\n \"\"\"Set the Nikola site.\"\"\"\n self.site = site\n self.filters.update(self.site.config['TEMPLATE_FILTERS'])\n\n def render_template(self, template_name, output_name, context):\n \"\"\"Render the template into output_name using context.\"\"\"\n context['striphtml'] = striphtml\n template = self.lookup.get_template(template_name)\n data = template.render_unicode(**context)\n if output_name is not None:\n makedirs(os.path.dirname(output_name))\n with open(output_name, 'w+') as output:\n output.write(data)\n return data\n\n def render_template_to_string(self, template, context):\n \"\"\"Render template to a string using context.\"\"\"\n context.update(self.filters)\n return Template(template).render(**context)\n\n def template_deps(self, template_name):\n \"\"\"Generate list of dependencies for a template.\"\"\"\n # We can cache here because dependencies should\n # not change between runs\n if self.cache.get(template_name, None) is None:\n template = self.lookup.get_template(template_name)\n dep_filenames = self.get_deps(template.filename)\n deps = [template.filename]\n for fname in dep_filenames:\n deps += 
self.template_deps(fname)\n self.cache[template_name] = tuple(deps)\n return list(self.cache[template_name])\n\n\ndef striphtml(text):\n \"\"\"Strip HTML tags from text.\"\"\"\n return Markup(text).striptags()\n", "path": "nikola/plugins/template/mako.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Mako template handler.\"\"\"\n\nfrom __future__ import unicode_literals, print_function, absolute_import\nimport os\nimport shutil\nimport sys\nimport tempfile\n\nfrom mako import util, lexer, parsetree\nfrom mako.lookup import TemplateLookup\nfrom mako.template import Template\nfrom markupsafe import Markup # It's ok, Mako requires it\n\nfrom nikola.plugin_categories import TemplateSystem\nfrom nikola.utils import makedirs, get_logger, STDERR_HANDLER\n\nLOGGER = get_logger('mako', STDERR_HANDLER)\n\n\nclass MakoTemplates(TemplateSystem):\n\n \"\"\"Support for Mako templates.\"\"\"\n\n name = \"mako\"\n\n lookup = None\n cache = {}\n filters = {}\n directories = []\n cache_dir = None\n\n def get_deps(self, filename):\n \"\"\"Get dependencies for a template (internal function).\"\"\"\n text = util.read_file(filename)\n lex = lexer.Lexer(text=text, filename=filename)\n lex.parse()\n\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n if keyword in [\"inherit\", \"namespace\"] or isinstance(n, parsetree.IncludeTag):\n deps.append(n.attributes['file'])\n return deps\n\n def set_directories(self, directories, cache_folder):\n \"\"\"Create a new template lookup with set directories.\"\"\"\n cache_dir = os.path.join(cache_folder, '.mako.tmp')\n # Workaround for a Mako bug, Issue #825\n if sys.version_info[0] == 2:\n try:\n os.path.abspath(cache_dir).decode('ascii')\n except UnicodeEncodeError:\n cache_dir = tempfile.mkdtemp()\n LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))\n if os.path.exists(cache_dir):\n shutil.rmtree(cache_dir)\n self.directories = directories\n self.cache_dir = cache_dir\n self.create_lookup()\n\n def inject_directory(self, directory):\n \"\"\"Add a directory to the lookup and recreate it if it's not there yet.\"\"\"\n if directory not in self.directories:\n self.directories.append(directory)\n self.create_lookup()\n\n def create_lookup(self):\n \"\"\"Create a template lookup.\"\"\"\n self.lookup = TemplateLookup(\n directories=self.directories,\n 
module_directory=self.cache_dir,\n output_encoding='utf-8')\n\n def set_site(self, site):\n \"\"\"Set the Nikola site.\"\"\"\n self.site = site\n self.filters.update(self.site.config['TEMPLATE_FILTERS'])\n\n def render_template(self, template_name, output_name, context):\n \"\"\"Render the template into output_name using context.\"\"\"\n context['striphtml'] = striphtml\n template = self.lookup.get_template(template_name)\n data = template.render_unicode(**context)\n if output_name is not None:\n makedirs(os.path.dirname(output_name))\n with open(output_name, 'w+') as output:\n output.write(data)\n return data\n\n def render_template_to_string(self, template, context):\n \"\"\"Render template to a string using context.\"\"\"\n context.update(self.filters)\n return Template(template).render(**context)\n\n def template_deps(self, template_name):\n \"\"\"Generate list of dependencies for a template.\"\"\"\n # We can cache here because dependencies should\n # not change between runs\n if self.cache.get(template_name, None) is None:\n template = self.lookup.get_template(template_name)\n dep_filenames = self.get_deps(template.filename)\n deps = [template.filename]\n for fname in dep_filenames:\n deps += self.template_deps(fname)\n self.cache[template_name] = tuple(deps)\n return list(self.cache[template_name])\n\n\ndef striphtml(text):\n \"\"\"Strip HTML tags from text.\"\"\"\n return Markup(text).striptags()\n", "path": "nikola/plugins/template/mako.py"}]} | 1,684 | 217 |
gh_patches_debug_339 | rasdani/github-patches | git_diff | pyro-ppl__pyro-3164 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyTorch 2.0 compatibility: Explicit PyTorch 1.x check causing issues with packages that depend on PyTorch / pyro (e.g. BoTorch)
### Issue Description
The explicit check for PyTorch 1.x here (https://github.com/pyro-ppl/pyro/blob/dev/pyro/distributions/torch_patch.py#L10) is causing problems when another package has a dependency on PyTorch + Pyro, since PyTorch is now at 2.0. For example, it is causing BoTorch tests to fail here (https://github.com/pytorch/botorch/pull/1551).
Could this check be removed to allow for PyTorch 2.0?
### Environment
Mac OS 11.7.1
Python 3.10
PyTorch 2.0
Pyro 1.8.3
### Code Snippet
https://github.com/pytorch/botorch/actions/runs/3659534850/jobs/6185642011
--- END ISSUE ---
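The failure is purely the module-level assertion: importing `pyro.distributions` executes `assert torch.__version__.startswith("1.")`, which raises `AssertionError` as soon as PyTorch reports a 2.x version. Below is a hedged sketch of one possible alternative to simply deleting the check: warn, rather than assert, when the installed torch is older than some tested floor, using the `packaging` library. The minimum version chosen is illustrative, not Pyro's actual requirement.

```python
import warnings

import torch
from packaging.version import Version

MIN_TORCH = Version("1.11.0")  # hypothetical lower bound, for illustration only

installed = Version(torch.__version__.split("+")[0])  # drop local build tags like "+cu117"
if installed < MIN_TORCH:
    warnings.warn(
        f"torch {torch.__version__} is older than the tested minimum {MIN_TORCH}; "
        "the patches in this module may not apply cleanly."
    )
```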
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/distributions/torch_patch.py`
Content:
```
1 # Copyright (c) 2017-2019 Uber Technologies, Inc.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import functools
5 import math
6 import weakref
7
8 import torch
9
10 assert torch.__version__.startswith("1.")
11
12
13 def patch_dependency(target, root_module=torch):
14 parts = target.split(".")
15 assert parts[0] == root_module.__name__
16 module = root_module
17 for part in parts[1:-1]:
18 module = getattr(module, part)
19 name = parts[-1]
20 old_fn = getattr(module, name, None)
21 old_fn = getattr(old_fn, "_pyro_unpatched", old_fn) # ensure patching is idempotent
22
23 def decorator(new_fn):
24 try:
25 functools.update_wrapper(new_fn, old_fn)
26 except Exception:
27 for attr in functools.WRAPPER_ASSIGNMENTS:
28 if hasattr(old_fn, attr):
29 setattr(new_fn, attr, getattr(old_fn, attr))
30 new_fn._pyro_unpatched = old_fn
31 setattr(module, name, new_fn)
32 return new_fn
33
34 return decorator
35
36
37 # TODO: Move upstream to allow for pickle serialization of transforms
38 @patch_dependency("torch.distributions.transforms.Transform.__getstate__")
39 def _Transform__getstate__(self):
40 attrs = {}
41 for k, v in self.__dict__.items():
42 if isinstance(v, weakref.ref):
43 attrs[k] = None
44 else:
45 attrs[k] = v
46 return attrs
47
48
49 # TODO move upstream
50 @patch_dependency("torch.distributions.transforms.Transform.clear_cache")
51 def _Transform_clear_cache(self):
52 if self._cache_size == 1:
53 self._cached_x_y = None, None
54
55
56 # TODO move upstream
57 @patch_dependency("torch.distributions.TransformedDistribution.clear_cache")
58 def _TransformedDistribution_clear_cache(self):
59 for t in self.transforms:
60 t.clear_cache()
61
62
63 # TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream
64 @patch_dependency("torch.distributions.HalfCauchy.log_prob")
65 def _HalfCauchy_logprob(self, value):
66 if self._validate_args:
67 self._validate_sample(value)
68 value = torch.as_tensor(
69 value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device
70 )
71 log_prob = self.base_dist.log_prob(value) + math.log(2)
72 log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float("inf"))
73 return log_prob
74
75
76 # TODO fix batch_shape have an extra singleton dimension upstream
77 @patch_dependency("torch.distributions.constraints._PositiveDefinite.check")
78 def _PositiveDefinite_check(self, value):
79 matrix_shape = value.shape[-2:]
80 batch_shape = value.shape[:-2]
81 flattened_value = value.reshape((-1,) + matrix_shape)
82 return torch.stack(
83 [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]
84 ).view(batch_shape)
85
86
87 @patch_dependency("torch.distributions.constraints._CorrCholesky.check")
88 def _CorrCholesky_check(self, value):
89 row_norm = torch.linalg.norm(value.detach(), dim=-1)
90 unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)
91 return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm
92
93
94 # This adds a __call__ method to satisfy sphinx.
95 @patch_dependency("torch.distributions.utils.lazy_property.__call__")
96 def _lazy_property__call__(self):
97 raise NotImplementedError
98
99
100 __all__ = []
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py
--- a/pyro/distributions/torch_patch.py
+++ b/pyro/distributions/torch_patch.py
@@ -7,8 +7,6 @@
import torch
-assert torch.__version__.startswith("1.")
-
def patch_dependency(target, root_module=torch):
parts = target.split(".")
| {"golden_diff": "diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py\n--- a/pyro/distributions/torch_patch.py\n+++ b/pyro/distributions/torch_patch.py\n@@ -7,8 +7,6 @@\n \n import torch\n \n-assert torch.__version__.startswith(\"1.\")\n-\n \n def patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n", "issue": "PyTorch 2.0 compatibility: Explicit PyTorch 1.x check causing issues with packages that depend on PyTorch / pyro (e.g. BoTorch)\n### Issue Description\r\nThe explicit check for PyTorch 1.x here (https://github.com/pyro-ppl/pyro/blob/dev/pyro/distributions/torch_patch.py#L10) is causing problems when another package has a dependency on PyTorch + Pyro, since PyTorch is now at 2.0. For example, it is causing BoTorch tests to fail here (https://github.com/pytorch/botorch/pull/1551).\r\n\r\nCould this check be removed to allow for PyTorch 2.0?\r\n\r\n### Environment\r\nMac OS 11.7.1\r\nPython 3.10\r\nPyTorch 2.0\r\nPyro 1.8.3\r\n\r\n### Code Snippet\r\nhttps://github.com/pytorch/botorch/actions/runs/3659534850/jobs/6185642011\n", "before_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport functools\nimport math\nimport weakref\n\nimport torch\n\nassert torch.__version__.startswith(\"1.\")\n\n\ndef patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n assert parts[0] == root_module.__name__\n module = root_module\n for part in parts[1:-1]:\n module = getattr(module, part)\n name = parts[-1]\n old_fn = getattr(module, name, None)\n old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn) # ensure patching is idempotent\n\n def decorator(new_fn):\n try:\n functools.update_wrapper(new_fn, old_fn)\n except Exception:\n for attr in functools.WRAPPER_ASSIGNMENTS:\n if hasattr(old_fn, attr):\n setattr(new_fn, attr, getattr(old_fn, attr))\n new_fn._pyro_unpatched = old_fn\n setattr(module, name, new_fn)\n return new_fn\n\n return decorator\n\n\n# TODO: Move upstream to allow for pickle serialization of transforms\n@patch_dependency(\"torch.distributions.transforms.Transform.__getstate__\")\ndef _Transform__getstate__(self):\n attrs = {}\n for k, v in self.__dict__.items():\n if isinstance(v, weakref.ref):\n attrs[k] = None\n else:\n attrs[k] = v\n return attrs\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.transforms.Transform.clear_cache\")\ndef _Transform_clear_cache(self):\n if self._cache_size == 1:\n self._cached_x_y = None, None\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.TransformedDistribution.clear_cache\")\ndef _TransformedDistribution_clear_cache(self):\n for t in self.transforms:\n t.clear_cache()\n\n\n# TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream\n@patch_dependency(\"torch.distributions.HalfCauchy.log_prob\")\ndef _HalfCauchy_logprob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n value = torch.as_tensor(\n value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device\n )\n log_prob = self.base_dist.log_prob(value) + math.log(2)\n log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float(\"inf\"))\n return log_prob\n\n\n# TODO fix batch_shape have an extra singleton dimension upstream\n@patch_dependency(\"torch.distributions.constraints._PositiveDefinite.check\")\ndef _PositiveDefinite_check(self, value):\n matrix_shape = value.shape[-2:]\n batch_shape = value.shape[:-2]\n flattened_value = value.reshape((-1,) + matrix_shape)\n return 
torch.stack(\n [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]\n ).view(batch_shape)\n\n\n@patch_dependency(\"torch.distributions.constraints._CorrCholesky.check\")\ndef _CorrCholesky_check(self, value):\n row_norm = torch.linalg.norm(value.detach(), dim=-1)\n unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)\n return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm\n\n\n# This adds a __call__ method to satisfy sphinx.\n@patch_dependency(\"torch.distributions.utils.lazy_property.__call__\")\ndef _lazy_property__call__(self):\n raise NotImplementedError\n\n\n__all__ = []\n", "path": "pyro/distributions/torch_patch.py"}], "after_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport functools\nimport math\nimport weakref\n\nimport torch\n\n\ndef patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n assert parts[0] == root_module.__name__\n module = root_module\n for part in parts[1:-1]:\n module = getattr(module, part)\n name = parts[-1]\n old_fn = getattr(module, name, None)\n old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn) # ensure patching is idempotent\n\n def decorator(new_fn):\n try:\n functools.update_wrapper(new_fn, old_fn)\n except Exception:\n for attr in functools.WRAPPER_ASSIGNMENTS:\n if hasattr(old_fn, attr):\n setattr(new_fn, attr, getattr(old_fn, attr))\n new_fn._pyro_unpatched = old_fn\n setattr(module, name, new_fn)\n return new_fn\n\n return decorator\n\n\n# TODO: Move upstream to allow for pickle serialization of transforms\n@patch_dependency(\"torch.distributions.transforms.Transform.__getstate__\")\ndef _Transform__getstate__(self):\n attrs = {}\n for k, v in self.__dict__.items():\n if isinstance(v, weakref.ref):\n attrs[k] = None\n else:\n attrs[k] = v\n return attrs\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.transforms.Transform.clear_cache\")\ndef _Transform_clear_cache(self):\n if self._cache_size == 1:\n self._cached_x_y = None, None\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.TransformedDistribution.clear_cache\")\ndef _TransformedDistribution_clear_cache(self):\n for t in self.transforms:\n t.clear_cache()\n\n\n# TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream\n@patch_dependency(\"torch.distributions.HalfCauchy.log_prob\")\ndef _HalfCauchy_logprob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n value = torch.as_tensor(\n value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device\n )\n log_prob = self.base_dist.log_prob(value) + math.log(2)\n log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float(\"inf\"))\n return log_prob\n\n\n# TODO fix batch_shape have an extra singleton dimension upstream\n@patch_dependency(\"torch.distributions.constraints._PositiveDefinite.check\")\ndef _PositiveDefinite_check(self, value):\n matrix_shape = value.shape[-2:]\n batch_shape = value.shape[:-2]\n flattened_value = value.reshape((-1,) + matrix_shape)\n return torch.stack(\n [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]\n ).view(batch_shape)\n\n\n@patch_dependency(\"torch.distributions.constraints._CorrCholesky.check\")\ndef _CorrCholesky_check(self, value):\n row_norm = torch.linalg.norm(value.detach(), dim=-1)\n unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)\n return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm\n\n\n# This adds a __call__ method to satisfy 
sphinx.\n@patch_dependency(\"torch.distributions.utils.lazy_property.__call__\")\ndef _lazy_property__call__(self):\n raise NotImplementedError\n\n\n__all__ = []\n", "path": "pyro/distributions/torch_patch.py"}]} | 1,484 | 86 |
gh_patches_debug_30012 | rasdani/github-patches | git_diff | TheAlgorithms__Python-2443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dev sprint ideas: More tests, type hints and less complexity
Currently, some of the programs use static type checking, like this [program](https://github.com/TheAlgorithms/Python/blob/master/dynamic_programming/fast_fibonacci.py), but some of the programs do not use static typing.

It's good practice to use static typing, as it makes code clearer and more readable. Should we make it a standard for this repository? We can use [mypy](http://mypy-lang.org/) to check the code.

[more on static typing](https://medium.com/@ageitgey/learn-how-to-use-static-type-checking-in-python-3-6-in-10-minutes-12c86d72677b)

Thank you.
### Dev sprint ideas:
* [ ] [Add tests to Python files with <10% test coverage.](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645231020)
* [ ] [Add static typing to functions and methods.](https://github.com/TheAlgorithms/Python/issues/2128)
* [ ] [Set `flake8 --max-complexity=15`](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645190839) (Ensure files have strong tests ___before___ refactoring). Test results from #2139...
* [ ] ./boolean_algebra/quine_mc_cluskey.py:82:1: C901 'selection' is too complex (17)
* [ ] ./digital_image_processing/edge_detection/canny.py:20:1: C901 'canny' is too complex (17) @lighttxu
* [ ] ./graphs/minimum_spanning_tree_prims.py:5:1: C901 'PrimsAlgorithm' is too complex (21)
* [ ] Add doctests aligned with https://en.wikipedia.org/wiki/Prim%27s_algorithm
* [ ] In a ___separate___ PR reduce the McCabe complexity
* [ ] ./linear_algebra/src/polynom-for-points.py:1:1: C901 'points_to_polynomial' is too complex (23) @nic-dern
* [ ] ./machine_learning/linear_discriminant_analysis.py:251:1: C901 'main' is too complex (25)
* [x] ./hashes/hamming_code.py:71:1: C901 'emitterConverter' is too complex (16) #2140
* [x] ./hashes/hamming_code.py:153:1: C901 'receptorConverter' is too complex (20) #2140
* [x] ./project_euler/problem_551/sol1.py:20:1: C901 'next_term' is too complex (16) #2141
--- END ISSUE ---
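For contributors new to the tools named above, here is a small, illustrative function (not taken from the repository) showing the kind of annotation and doctest being asked for. With hints in place, `mypy path/to/file.py` flags call-site type mistakes, `flake8 --max-complexity=15 path/to/file.py` reports functions above the McCabe threshold, and `python3 -m doctest -v path/to/file.py` runs the embedded example, matching the convention the repository's existing files already follow.

```python
from typing import List


def moving_average(values: List[float], window: int) -> List[float]:
    """Return the simple moving average of ``values`` over the given window.

    >>> moving_average([1.0, 2.0, 3.0, 4.0], 2)
    [1.5, 2.5, 3.5]
    """
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```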
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searches/simple_binary_search.py`
Content:
```
1 """
2 Pure Python implementation of a binary search algorithm.
3
4 For doctests run following command:
5 python3 -m doctest -v simple_binary_search.py
6
7 For manual testing run:
8 python3 simple_binary_search.py
9 """
10 from __future__ import annotations
11
12
13 def binary_search(a_list: list[int], item: int) -> bool:
14 """
15 >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
16 >>> print(binary_search(test_list, 3))
17 False
18 >>> print(binary_search(test_list, 13))
19 True
20 >>> print(binary_search([4, 4, 5, 6, 7], 4))
21 True
22 >>> print(binary_search([4, 4, 5, 6, 7], -10))
23 False
24 >>> print(binary_search([-18, 2], -18))
25 True
26 >>> print(binary_search([5], 5))
27 True
28 >>> print(binary_search(['a', 'c', 'd'], 'c'))
29 True
30 >>> print(binary_search(['a', 'c', 'd'], 'f'))
31 False
32 >>> print(binary_search([], 1))
33 False
34 >>> print(binary_search([-.1, .1 , .8], .1))
35 True
36 >>> binary_search(range(-5000, 5000, 10), 80)
37 True
38 >>> binary_search(range(-5000, 5000, 10), 1255)
39 False
40 >>> binary_search(range(0, 10000, 5), 2)
41 False
42 """
43 if len(a_list) == 0:
44 return False
45 midpoint = len(a_list) // 2
46 if a_list[midpoint] == item:
47 return True
48 if item < a_list[midpoint]:
49 return binary_search(a_list[:midpoint], item)
50 else:
51 return binary_search(a_list[midpoint + 1 :], item)
52
53
54 if __name__ == "__main__":
55 user_input = input("Enter numbers separated by comma:\n").strip()
56 sequence = [int(item.strip()) for item in user_input.split(",")]
57 target = int(input("Enter the number to be found in the list:\n").strip())
58 not_str = "" if binary_search(sequence, target) else "not "
59 print(f"{target} was {not_str}found in {sequence}")
60
```
Path: `graphs/minimum_spanning_tree_kruskal.py`
Content:
```
1 if __name__ == "__main__":
2 num_nodes, num_edges = list(map(int, input().strip().split()))
3
4 edges = []
5
6 for i in range(num_edges):
7 node1, node2, cost = list(map(int, input().strip().split()))
8 edges.append((i, node1, node2, cost))
9
10 edges = sorted(edges, key=lambda edge: edge[3])
11
12 parent = list(range(num_nodes))
13
14 def find_parent(i):
15 if i != parent[i]:
16 parent[i] = find_parent(parent[i])
17 return parent[i]
18
19 minimum_spanning_tree_cost = 0
20 minimum_spanning_tree = []
21
22 for edge in edges:
23 parent_a = find_parent(edge[1])
24 parent_b = find_parent(edge[2])
25 if parent_a != parent_b:
26 minimum_spanning_tree_cost += edge[3]
27 minimum_spanning_tree.append(edge)
28 parent[parent_a] = parent_b
29
30 print(minimum_spanning_tree_cost)
31 for edge in minimum_spanning_tree:
32 print(edge)
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/graphs/minimum_spanning_tree_kruskal.py b/graphs/minimum_spanning_tree_kruskal.py
--- a/graphs/minimum_spanning_tree_kruskal.py
+++ b/graphs/minimum_spanning_tree_kruskal.py
@@ -1,13 +1,5 @@
-if __name__ == "__main__":
- num_nodes, num_edges = list(map(int, input().strip().split()))
-
- edges = []
-
- for i in range(num_edges):
- node1, node2, cost = list(map(int, input().strip().split()))
- edges.append((i, node1, node2, cost))
-
- edges = sorted(edges, key=lambda edge: edge[3])
+def kruskal(num_nodes, num_edges, edges):
+ edges = sorted(edges, key=lambda edge: edge[2])
parent = list(range(num_nodes))
@@ -20,13 +12,22 @@
minimum_spanning_tree = []
for edge in edges:
- parent_a = find_parent(edge[1])
- parent_b = find_parent(edge[2])
+ parent_a = find_parent(edge[0])
+ parent_b = find_parent(edge[1])
if parent_a != parent_b:
- minimum_spanning_tree_cost += edge[3]
+ minimum_spanning_tree_cost += edge[2]
minimum_spanning_tree.append(edge)
parent[parent_a] = parent_b
- print(minimum_spanning_tree_cost)
- for edge in minimum_spanning_tree:
- print(edge)
+ return minimum_spanning_tree
+
+
+if __name__ == "__main__": # pragma: no cover
+ num_nodes, num_edges = list(map(int, input().strip().split()))
+ edges = []
+
+ for _ in range(num_edges):
+ node1, node2, cost = [int(x) for x in input().strip().split()]
+ edges.append((node1, node2, cost))
+
+ kruskal(num_nodes, num_edges, edges)
diff --git a/searches/simple_binary_search.py b/searches/simple_binary_search.py
--- a/searches/simple_binary_search.py
+++ b/searches/simple_binary_search.py
@@ -42,7 +42,7 @@
if item < a_list[midpoint]:
return binary_search(a_list[:midpoint], item)
else:
- return binary_search(a_list[midpoint + 1:], item)
+ return binary_search(a_list[midpoint + 1 :], item)
if __name__ == "__main__":
| {"golden_diff": "diff --git a/graphs/minimum_spanning_tree_kruskal.py b/graphs/minimum_spanning_tree_kruskal.py\n--- a/graphs/minimum_spanning_tree_kruskal.py\n+++ b/graphs/minimum_spanning_tree_kruskal.py\n@@ -1,13 +1,5 @@\n-if __name__ == \"__main__\":\n- num_nodes, num_edges = list(map(int, input().strip().split()))\n-\n- edges = []\n-\n- for i in range(num_edges):\n- node1, node2, cost = list(map(int, input().strip().split()))\n- edges.append((i, node1, node2, cost))\n-\n- edges = sorted(edges, key=lambda edge: edge[3])\n+def kruskal(num_nodes, num_edges, edges):\n+ edges = sorted(edges, key=lambda edge: edge[2])\n \n parent = list(range(num_nodes))\n \n@@ -20,13 +12,22 @@\n minimum_spanning_tree = []\n \n for edge in edges:\n- parent_a = find_parent(edge[1])\n- parent_b = find_parent(edge[2])\n+ parent_a = find_parent(edge[0])\n+ parent_b = find_parent(edge[1])\n if parent_a != parent_b:\n- minimum_spanning_tree_cost += edge[3]\n+ minimum_spanning_tree_cost += edge[2]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n \n- print(minimum_spanning_tree_cost)\n- for edge in minimum_spanning_tree:\n- print(edge)\n+ return minimum_spanning_tree\n+\n+\n+if __name__ == \"__main__\": # pragma: no cover\n+ num_nodes, num_edges = list(map(int, input().strip().split()))\n+ edges = []\n+\n+ for _ in range(num_edges):\n+ node1, node2, cost = [int(x) for x in input().strip().split()]\n+ edges.append((node1, node2, cost))\n+\n+ kruskal(num_nodes, num_edges, edges)\ndiff --git a/searches/simple_binary_search.py b/searches/simple_binary_search.py\n--- a/searches/simple_binary_search.py\n+++ b/searches/simple_binary_search.py\n@@ -42,7 +42,7 @@\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n- return binary_search(a_list[midpoint + 1:], item)\n+ return binary_search(a_list[midpoint + 1 :], item)\n \n \n if __name__ == \"__main__\":\n", "issue": "Dev sprint ideas: More tests, type hints and less complexity\ncurrently, some of the programs use static type checking like this [program](https://github.com/TheAlgorithms/Python/blob/master/dynamic_programming/fast_fibonacci.py) but some of the programs did not use static typing.\r\n\r\nit's a good practice to use static typing as it makes code more clear and readable, should we make it a standard for this repository.we can use [mypy](http://mypy-lang.org/) for testing code \r\n\r\n[more on static typing](https://medium.com/@ageitgey/learn-how-to-use-static-type-checking-in-python-3-6-in-10-minutes-12c86d72677b)\r\n \r\nthank you\r\n\r\n### Dev sprint ideas:\r\n* [ ] [Add tests to Python files with <10% test coverage.](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645231020)\r\n* [ ] [Add static typing to functions and methods.](https://github.com/TheAlgorithms/Python/issues/2128)\r\n* [ ] [Set `flake8 --max-complexity=15`](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645190839) (Ensure files have strong tests ___before___ refactoring). 
Test results from #2139...\r\n * [ ] ./boolean_algebra/quine_mc_cluskey.py:82:1: C901 'selection' is too complex (17)\r\n * [ ] ./digital_image_processing/edge_detection/canny.py:20:1: C901 'canny' is too complex (17) @lighttxu\r\n * [ ] ./graphs/minimum_spanning_tree_prims.py:5:1: C901 'PrimsAlgorithm' is too complex (21)\r\n * [ ] Add doctests aligned with https://en.wikipedia.org/wiki/Prim%27s_algorithm\r\n * [ ] In a ___separate___ PR reduce the McCabe complexity\r\n * [ ] ./linear_algebra/src/polynom-for-points.py:1:1: C901 'points_to_polynomial' is too complex (23) @nic-dern\r\n * [ ] ./machine_learning/linear_discriminant_analysis.py:251:1: C901 'main' is too complex (25)\r\n * [x] ./hashes/hamming_code.py:71:1: C901 'emitterConverter' is too complex (16) #2140\r\n * [x] ./hashes/hamming_code.py:153:1: C901 'receptorConverter' is too complex (20) #2140\r\n * [x] ./project_euler/problem_551/sol1.py:20:1: C901 'next_term' is too complex (16) #2141\n", "before_files": [{"content": "\"\"\"\nPure Python implementation of a binary search algorithm.\n\nFor doctests run following command:\npython3 -m doctest -v simple_binary_search.py\n\nFor manual testing run:\npython3 simple_binary_search.py\n\"\"\"\nfrom __future__ import annotations\n\n\ndef binary_search(a_list: list[int], item: int) -> bool:\n \"\"\"\n >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]\n >>> print(binary_search(test_list, 3))\n False\n >>> print(binary_search(test_list, 13))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], 4))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], -10))\n False\n >>> print(binary_search([-18, 2], -18))\n True\n >>> print(binary_search([5], 5))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'c'))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'f'))\n False\n >>> print(binary_search([], 1))\n False\n >>> print(binary_search([-.1, .1 , .8], .1))\n True\n >>> binary_search(range(-5000, 5000, 10), 80)\n True\n >>> binary_search(range(-5000, 5000, 10), 1255)\n False\n >>> binary_search(range(0, 10000, 5), 2)\n False\n \"\"\"\n if len(a_list) == 0:\n return False\n midpoint = len(a_list) // 2\n if a_list[midpoint] == item:\n return True\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n return binary_search(a_list[midpoint + 1 :], item)\n\n\nif __name__ == \"__main__\":\n user_input = input(\"Enter numbers separated by comma:\\n\").strip()\n sequence = [int(item.strip()) for item in user_input.split(\",\")]\n target = int(input(\"Enter the number to be found in the list:\\n\").strip())\n not_str = \"\" if binary_search(sequence, target) else \"not \"\n print(f\"{target} was {not_str}found in {sequence}\")\n", "path": "searches/simple_binary_search.py"}, {"content": "if __name__ == \"__main__\":\n num_nodes, num_edges = list(map(int, input().strip().split()))\n\n edges = []\n\n for i in range(num_edges):\n node1, node2, cost = list(map(int, input().strip().split()))\n edges.append((i, node1, node2, cost))\n\n edges = sorted(edges, key=lambda edge: edge[3])\n\n parent = list(range(num_nodes))\n\n def find_parent(i):\n if i != parent[i]:\n parent[i] = find_parent(parent[i])\n return parent[i]\n\n minimum_spanning_tree_cost = 0\n minimum_spanning_tree = []\n\n for edge in edges:\n parent_a = find_parent(edge[1])\n parent_b = find_parent(edge[2])\n if parent_a != parent_b:\n minimum_spanning_tree_cost += edge[3]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n\n print(minimum_spanning_tree_cost)\n for edge in minimum_spanning_tree:\n 
print(edge)\n", "path": "graphs/minimum_spanning_tree_kruskal.py"}], "after_files": [{"content": "\"\"\"\nPure Python implementation of a binary search algorithm.\n\nFor doctests run following command:\npython3 -m doctest -v simple_binary_search.py\n\nFor manual testing run:\npython3 simple_binary_search.py\n\"\"\"\nfrom typing import List\n\n\ndef binary_search(a_list: List[int], item: int) -> bool:\n \"\"\"\n >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]\n >>> print(binary_search(test_list, 3))\n False\n >>> print(binary_search(test_list, 13))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], 4))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], -10))\n False\n >>> print(binary_search([-18, 2], -18))\n True\n >>> print(binary_search([5], 5))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'c'))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'f'))\n False\n >>> print(binary_search([], 1))\n False\n >>> print(binary_search([.1, .4 , -.1], .1))\n True\n \"\"\"\n if len(a_list) == 0:\n return False\n midpoint = len(a_list) // 2\n if a_list[midpoint] == item:\n return True\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n return binary_search(a_list[midpoint + 1 :], item)\n\n\nif __name__ == \"__main__\":\n user_input = input(\"Enter numbers separated by comma:\\n\").strip()\n sequence = [int(item.strip()) for item in user_input.split(\",\")]\n target = int(input(\"Enter the number to be found in the list:\\n\").strip())\n not_str = \"\" if binary_search(sequence, target) else \"not \"\n print(f\"{target} was {not_str}found in {sequence}\")\n", "path": "searches/simple_binary_search.py"}, {"content": "def kruskal(num_nodes, num_edges, edges):\n edges = sorted(edges, key=lambda edge: edge[2])\n\n parent = list(range(num_nodes))\n\n def find_parent(i):\n if i != parent[i]:\n parent[i] = find_parent(parent[i])\n return parent[i]\n\n minimum_spanning_tree_cost = 0\n minimum_spanning_tree = []\n\n for edge in edges:\n parent_a = find_parent(edge[0])\n parent_b = find_parent(edge[1])\n if parent_a != parent_b:\n minimum_spanning_tree_cost += edge[2]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n\n return minimum_spanning_tree\n\n\nif __name__ == \"__main__\": # pragma: no cover\n num_nodes, num_edges = list(map(int, input().strip().split()))\n edges = []\n\n for _ in range(num_edges):\n node1, node2, cost = [int(x) for x in input().strip().split()]\n edges.append((node1, node2, cost))\n\n kruskal(num_nodes, num_edges, edges)\n", "path": "graphs/minimum_spanning_tree_kruskal.py"}]} | 1,893 | 565 |
gh_patches_debug_20698 | rasdani/github-patches | git_diff | freqtrade__freqtrade-5530 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API Server under 2021.8
<!--
Have you searched for similar issues before posting it?
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: Windows Server 2019
* Python Version: Miniconda 3
* CCXT version: 1.55.56_____ (`pip freeze | grep ccxt`)
* Freqtrade Version: 2021.8 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question
This might be a bug; I am posting it as a question, since I am not 100% sure.
(OS and Miniconda configuration works fine for Freqtrade since 2020.12)
Trading works fine under Telegram with current version.
With the current version and the API Server activated, the system remains idle and does not begin to trade. The play button in the GUI is pushed. No trades are visible in the DB when it is opened in a SQLite explorer.
The API Server web GUI works excellently.
(Trading with API Server works fine under 2021.7)
*Ask the question you have not been able to find an answer in our [Documentation](https://www.freqtrade.io/en/latest/)*
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/rpc/api_server/uvicorn_threaded.py`
Content:
```
1 import contextlib
2 import threading
3 import time
4
5 import uvicorn
6
7
8 class UvicornServer(uvicorn.Server):
9 """
10 Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742
11
12 Removed install_signal_handlers() override based on changes from this commit:
13 https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6
14
15 Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:
16 https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638
17
18 Fix by overriding run() and forcing creation of new event loop if uvloop is available
19 """
20
21 def run(self, sockets=None):
22 import asyncio
23
24 """
25 Parent implementation calls self.config.setup_event_loop(),
26 but we need to create uvloop event loop manually
27 """
28 try:
29 import uvloop # noqa
30 except ImportError: # pragma: no cover
31 from uvicorn.loops.asyncio import asyncio_setup
32 asyncio_setup()
33 else:
34 asyncio.set_event_loop(uvloop.new_event_loop())
35 try:
36 loop = asyncio.get_event_loop()
37 except RuntimeError:
38 # When running in a thread, we'll not have an eventloop yet.
39 loop = asyncio.new_event_loop()
40 loop.run_until_complete(self.serve(sockets=sockets))
41
42 @contextlib.contextmanager
43 def run_in_thread(self):
44 self.thread = threading.Thread(target=self.run)
45 self.thread.start()
46 while not self.started:
47 time.sleep(1e-3)
48
49 def cleanup(self):
50 self.should_exit = True
51 self.thread.join()
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/rpc/api_server/uvicorn_threaded.py b/freqtrade/rpc/api_server/uvicorn_threaded.py
--- a/freqtrade/rpc/api_server/uvicorn_threaded.py
+++ b/freqtrade/rpc/api_server/uvicorn_threaded.py
@@ -5,6 +5,20 @@
import uvicorn
+def asyncio_setup() -> None: # pragma: no cover
+ # Set eventloop for win32 setups
+ # Reverts a change done in uvicorn 0.15.0 - which now sets the eventloop
+ # via policy.
+ import sys
+
+ if sys.version_info >= (3, 8) and sys.platform == "win32":
+ import asyncio
+ import selectors
+ selector = selectors.SelectSelector()
+ loop = asyncio.SelectorEventLoop(selector)
+ asyncio.set_event_loop(loop)
+
+
class UvicornServer(uvicorn.Server):
"""
Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742
@@ -28,7 +42,7 @@
try:
import uvloop # noqa
except ImportError: # pragma: no cover
- from uvicorn.loops.asyncio import asyncio_setup
+
asyncio_setup()
else:
asyncio.set_event_loop(uvloop.new_event_loop())
| {"golden_diff": "diff --git a/freqtrade/rpc/api_server/uvicorn_threaded.py b/freqtrade/rpc/api_server/uvicorn_threaded.py\n--- a/freqtrade/rpc/api_server/uvicorn_threaded.py\n+++ b/freqtrade/rpc/api_server/uvicorn_threaded.py\n@@ -5,6 +5,20 @@\n import uvicorn\n \n \n+def asyncio_setup() -> None: # pragma: no cover\n+ # Set eventloop for win32 setups\n+ # Reverts a change done in uvicorn 0.15.0 - which now sets the eventloop\n+ # via policy.\n+ import sys\n+\n+ if sys.version_info >= (3, 8) and sys.platform == \"win32\":\n+ import asyncio\n+ import selectors\n+ selector = selectors.SelectSelector()\n+ loop = asyncio.SelectorEventLoop(selector)\n+ asyncio.set_event_loop(loop)\n+\n+\n class UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n@@ -28,7 +42,7 @@\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n- from uvicorn.loops.asyncio import asyncio_setup\n+\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n", "issue": "API Server under 2021.8\n<!-- \r\nHave you searched for similar issues before posting it?\r\nDid you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there\r\n\r\nPlease do not use the question template to report bugs or to request new features.\r\n-->\r\n\r\n## Describe your environment\r\n\r\n * Operating system: Windows Server 2019\r\n * Python Version: Miniconda 3\r\n * CCXT version: 1.55.56_____ (`pip freeze | grep ccxt`)\r\n * Freqtrade Version: 2021.8 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)\r\n \r\n## Your question\r\nThis might be a bug, I post it as question, since I am nor sure for 100%.\r\n\r\n(OS and Miniconda configuration works fine for Freqtrade since 2020.12)\r\nTrading works fine under Telegram with current version.\r\nWith current version and activated API Server, the system remains idle and does not begin to trade. Play button in GUI is pushed. 
Even no trades visible in DB, opened in a SQlite explorer.\r\nAPI Server web GUI works excellent.\r\n(Trading with API Server works fine under 2021.7)\r\n\r\n*Ask the question you have not been able to find an answer in our [Documentation](https://www.freqtrade.io/en/latest/)*\r\n\n", "before_files": [{"content": "import contextlib\nimport threading\nimport time\n\nimport uvicorn\n\n\nclass UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n\n Removed install_signal_handlers() override based on changes from this commit:\n https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6\n\n Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:\n https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638\n\n Fix by overriding run() and forcing creation of new event loop if uvloop is available\n \"\"\"\n\n def run(self, sockets=None):\n import asyncio\n\n \"\"\"\n Parent implementation calls self.config.setup_event_loop(),\n but we need to create uvloop event loop manually\n \"\"\"\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n from uvicorn.loops.asyncio import asyncio_setup\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n try:\n loop = asyncio.get_event_loop()\n except RuntimeError:\n # When running in a thread, we'll not have an eventloop yet.\n loop = asyncio.new_event_loop()\n loop.run_until_complete(self.serve(sockets=sockets))\n\n @contextlib.contextmanager\n def run_in_thread(self):\n self.thread = threading.Thread(target=self.run)\n self.thread.start()\n while not self.started:\n time.sleep(1e-3)\n\n def cleanup(self):\n self.should_exit = True\n self.thread.join()\n", "path": "freqtrade/rpc/api_server/uvicorn_threaded.py"}], "after_files": [{"content": "import contextlib\nimport threading\nimport time\n\nimport uvicorn\n\n\ndef asyncio_setup() -> None: # pragma: no cover\n # Set eventloop for win32 setups\n # Reverts a change done in uvicorn 0.15.0 - which now sets the eventloop\n # via policy.\n import sys\n\n if sys.version_info >= (3, 8) and sys.platform == \"win32\":\n import asyncio\n import selectors\n selector = selectors.SelectSelector()\n loop = asyncio.SelectorEventLoop(selector)\n asyncio.set_event_loop(loop)\n\n\nclass UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n\n Removed install_signal_handlers() override based on changes from this commit:\n https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6\n\n Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:\n https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638\n\n Fix by overriding run() and forcing creation of new event loop if uvloop is available\n \"\"\"\n\n def run(self, sockets=None):\n import asyncio\n\n \"\"\"\n Parent implementation calls self.config.setup_event_loop(),\n but we need to create uvloop event loop manually\n \"\"\"\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n try:\n loop = asyncio.get_event_loop()\n except RuntimeError:\n # When running in a thread, we'll not have an eventloop yet.\n loop = asyncio.new_event_loop()\n loop.run_until_complete(self.serve(sockets=sockets))\n\n 
@contextlib.contextmanager\n def run_in_thread(self):\n self.thread = threading.Thread(target=self.run)\n self.thread.start()\n while not self.started:\n time.sleep(1e-3)\n\n def cleanup(self):\n self.should_exit = True\n self.thread.join()\n", "path": "freqtrade/rpc/api_server/uvicorn_threaded.py"}]} | 1,106 | 310 |
gh_patches_debug_25617 | rasdani/github-patches | git_diff | saleor__saleor-3169 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User type should be able to return `created` and `lastLogin` fields
There is no way to obtain information about when a user registered and when she/he last logged in.
--- END ISSUE ---
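For context, both timestamps the issue asks for already exist on Django's auth user model; the snippet below is a hypothetical illustration of reading them directly (it is not Saleor code and assumes a configured Django project).

```python
# Hypothetical illustration, not Saleor code: the underlying Django user model
# already tracks both timestamps requested in the issue.
from django.contrib.auth import get_user_model

User = get_user_model()
user = User.objects.first()
if user is not None:
    print(user.date_joined)  # when the account was created
    print(user.last_login)   # when the user last logged in (None if never)
```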
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/account/resolvers.py`
Content:
```
1 from django.db.models import Q
2 from i18naddress import get_validation_rules
3
4 from ...account import models
5 from ...core.utils import get_client_ip, get_country_by_ip
6 from ..utils import filter_by_query_param
7 from .types import AddressValidationData, ChoiceValue
8
9 USER_SEARCH_FIELDS = (
10 'email', 'default_shipping_address__first_name',
11 'default_shipping_address__last_name', 'default_shipping_address__city',
12 'default_shipping_address__country')
13
14
15 def resolve_customers(info, query):
16 qs = models.User.objects.filter(
17 Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))
18 ).prefetch_related('addresses')
19 return filter_by_query_param(
20 queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
21
22
23 def resolve_staff_users(info, query):
24 qs = models.User.objects.filter(is_staff=True)
25 return filter_by_query_param(
26 queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
27
28
29 def resolve_address_validator(info, input):
30 country_code = input['country_code']
31 if not country_code:
32 client_ip = get_client_ip(info.context)
33 country = get_country_by_ip(client_ip)
34 if country:
35 country_code = country.code
36 else:
37 return None
38 params = {
39 'country_code': country_code,
40 'country_area': input['country_area'],
41 'city_area': input['city_area']}
42 rules = get_validation_rules(params)
43
44 return AddressValidationData(
45 country_code=rules.country_code,
46 country_name=rules.country_name,
47 address_format=rules.address_format,
48 address_latin_format=rules.address_latin_format,
49 allowed_fields=rules.allowed_fields,
50 required_fields=rules.required_fields,
51 upper_fields=rules.upper_fields,
52 country_area_type=rules.country_area_type,
53 country_area_choices=[
54 ChoiceValue(area[0], area[1])
55 for area in rules.country_area_choices],
56 city_type=rules.city_type,
57 city_area_choices=[
58 ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],
59 postal_code_type=rules.postal_code_type,
60 postal_code_matchers=[
61 compiled.pattern for compiled in rules.postal_code_matchers],
62 postal_code_examples=rules.postal_code_examples,
63 postal_code_prefix=rules.postal_code_prefix
64 )
65
```
Path: `saleor/graphql/account/types.py`
Content:
```
1 import graphene
2 from django.contrib.auth import get_user_model
3 from graphene import relay
4
5 from ...account import models
6 from ...core.permissions import get_permissions
7 from ..core.types.common import (
8 CountableDjangoObjectType, CountryDisplay, PermissionDisplay)
9 from ..utils import format_permissions_for_display
10
11
12 class AddressInput(graphene.InputObjectType):
13 first_name = graphene.String(description='Given name.')
14 last_name = graphene.String(description='Family name.')
15 company_name = graphene.String(description='Company or organization.')
16 street_address_1 = graphene.String(description='Address.')
17 street_address_2 = graphene.String(description='Address.')
18 city = graphene.String(description='City.')
19 city_area = graphene.String(description='District.')
20 postal_code = graphene.String(description='Postal code.')
21 country = graphene.String(description='Country.')
22 country_area = graphene.String(description='State or province.')
23 phone = graphene.String(description='Phone number.')
24
25
26 class Address(CountableDjangoObjectType):
27 country = graphene.Field(
28 CountryDisplay, required=True, description='Default shop\'s country')
29
30 class Meta:
31 exclude_fields = ['user_set', 'user_addresses']
32 description = 'Represents user address data.'
33 interfaces = [relay.Node]
34 model = models.Address
35
36 def resolve_country(self, info):
37 return CountryDisplay(
38 code=self.country.code, country=self.country.name)
39
40
41 class User(CountableDjangoObjectType):
42 permissions = graphene.List(PermissionDisplay)
43
44 class Meta:
45 exclude_fields = [
46 'date_joined', 'password', 'is_superuser',
47 'OrderEvent_set', 'last_login']
48 description = 'Represents user data.'
49 interfaces = [relay.Node]
50 model = get_user_model()
51 filter_fields = ['is_staff']
52
53 def resolve_permissions(self, info, **kwargs):
54 if self.is_superuser:
55 permissions = get_permissions()
56 else:
57 permissions = self.user_permissions.prefetch_related(
58 'content_type').order_by('codename')
59 return format_permissions_for_display(permissions)
60
61
62 class AddressValidationInput(graphene.InputObjectType):
63 country_code = graphene.String()
64 country_area = graphene.String()
65 city_area = graphene.String()
66
67
68 class ChoiceValue(graphene.ObjectType):
69 raw = graphene.String()
70 verbose = graphene.String()
71
72
73 class AddressValidationData(graphene.ObjectType):
74 country_code = graphene.String()
75 country_name = graphene.String()
76 address_format = graphene.String()
77 address_latin_format = graphene.String()
78 allowed_fields = graphene.List(graphene.String)
79 required_fields = graphene.List(graphene.String)
80 upper_fields = graphene.List(graphene.String)
81 country_area_type = graphene.String()
82 country_area_choices = graphene.List(ChoiceValue)
83 city_type = graphene.String()
84 city_area_choices = graphene.List(ChoiceValue)
85 postal_code_type = graphene.String()
86 postal_code_matchers = graphene.List(graphene.String)
87 postal_code_examples = graphene.List(graphene.String)
88 postal_code_prefix = graphene.String()
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/account/resolvers.py b/saleor/graphql/account/resolvers.py
--- a/saleor/graphql/account/resolvers.py
+++ b/saleor/graphql/account/resolvers.py
@@ -16,14 +16,16 @@
qs = models.User.objects.filter(
Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))
).prefetch_related('addresses')
- return filter_by_query_param(
+ qs = filter_by_query_param(
queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
+ return qs.distinct()
def resolve_staff_users(info, query):
qs = models.User.objects.filter(is_staff=True)
- return filter_by_query_param(
+ qs = filter_by_query_param(
queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
+ return qs.distinct()
def resolve_address_validator(info, input):
diff --git a/saleor/graphql/account/types.py b/saleor/graphql/account/types.py
--- a/saleor/graphql/account/types.py
+++ b/saleor/graphql/account/types.py
@@ -42,9 +42,7 @@
permissions = graphene.List(PermissionDisplay)
class Meta:
- exclude_fields = [
- 'date_joined', 'password', 'is_superuser',
- 'OrderEvent_set', 'last_login']
+ exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']
description = 'Represents user data.'
interfaces = [relay.Node]
model = get_user_model()
| {"golden_diff": "diff --git a/saleor/graphql/account/resolvers.py b/saleor/graphql/account/resolvers.py\n--- a/saleor/graphql/account/resolvers.py\n+++ b/saleor/graphql/account/resolvers.py\n@@ -16,14 +16,16 @@\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))\n ).prefetch_related('addresses')\n- return filter_by_query_param(\n+ qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n+ return qs.distinct()\n \n \n def resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n- return filter_by_query_param(\n+ qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n+ return qs.distinct()\n \n \n def resolve_address_validator(info, input):\ndiff --git a/saleor/graphql/account/types.py b/saleor/graphql/account/types.py\n--- a/saleor/graphql/account/types.py\n+++ b/saleor/graphql/account/types.py\n@@ -42,9 +42,7 @@\n permissions = graphene.List(PermissionDisplay)\n \n class Meta:\n- exclude_fields = [\n- 'date_joined', 'password', 'is_superuser',\n- 'OrderEvent_set', 'last_login']\n+ exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n", "issue": "User type should be able to return `created` and `lastLogin` fields\nThere is no way to obtain information when a user was registered and when she/he logged last time.\n", "before_files": [{"content": "from django.db.models import Q\nfrom i18naddress import get_validation_rules\n\nfrom ...account import models\nfrom ...core.utils import get_client_ip, get_country_by_ip\nfrom ..utils import filter_by_query_param\nfrom .types import AddressValidationData, ChoiceValue\n\nUSER_SEARCH_FIELDS = (\n 'email', 'default_shipping_address__first_name',\n 'default_shipping_address__last_name', 'default_shipping_address__city',\n 'default_shipping_address__country')\n\n\ndef resolve_customers(info, query):\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))\n ).prefetch_related('addresses')\n return filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n\n\ndef resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n return filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n\n\ndef resolve_address_validator(info, input):\n country_code = input['country_code']\n if not country_code:\n client_ip = get_client_ip(info.context)\n country = get_country_by_ip(client_ip)\n if country:\n country_code = country.code\n else:\n return None\n params = {\n 'country_code': country_code,\n 'country_area': input['country_area'],\n 'city_area': input['city_area']}\n rules = get_validation_rules(params)\n\n return AddressValidationData(\n country_code=rules.country_code,\n country_name=rules.country_name,\n address_format=rules.address_format,\n address_latin_format=rules.address_latin_format,\n allowed_fields=rules.allowed_fields,\n required_fields=rules.required_fields,\n upper_fields=rules.upper_fields,\n country_area_type=rules.country_area_type,\n country_area_choices=[\n ChoiceValue(area[0], area[1])\n for area in rules.country_area_choices],\n city_type=rules.city_type,\n city_area_choices=[\n ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],\n postal_code_type=rules.postal_code_type,\n postal_code_matchers=[\n compiled.pattern for compiled in 
rules.postal_code_matchers],\n postal_code_examples=rules.postal_code_examples,\n postal_code_prefix=rules.postal_code_prefix\n )\n", "path": "saleor/graphql/account/resolvers.py"}, {"content": "import graphene\nfrom django.contrib.auth import get_user_model\nfrom graphene import relay\n\nfrom ...account import models\nfrom ...core.permissions import get_permissions\nfrom ..core.types.common import (\n CountableDjangoObjectType, CountryDisplay, PermissionDisplay)\nfrom ..utils import format_permissions_for_display\n\n\nclass AddressInput(graphene.InputObjectType):\n first_name = graphene.String(description='Given name.')\n last_name = graphene.String(description='Family name.')\n company_name = graphene.String(description='Company or organization.')\n street_address_1 = graphene.String(description='Address.')\n street_address_2 = graphene.String(description='Address.')\n city = graphene.String(description='City.')\n city_area = graphene.String(description='District.')\n postal_code = graphene.String(description='Postal code.')\n country = graphene.String(description='Country.')\n country_area = graphene.String(description='State or province.')\n phone = graphene.String(description='Phone number.')\n\n\nclass Address(CountableDjangoObjectType):\n country = graphene.Field(\n CountryDisplay, required=True, description='Default shop\\'s country')\n\n class Meta:\n exclude_fields = ['user_set', 'user_addresses']\n description = 'Represents user address data.'\n interfaces = [relay.Node]\n model = models.Address\n\n def resolve_country(self, info):\n return CountryDisplay(\n code=self.country.code, country=self.country.name)\n\n\nclass User(CountableDjangoObjectType):\n permissions = graphene.List(PermissionDisplay)\n\n class Meta:\n exclude_fields = [\n 'date_joined', 'password', 'is_superuser',\n 'OrderEvent_set', 'last_login']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n filter_fields = ['is_staff']\n\n def resolve_permissions(self, info, **kwargs):\n if self.is_superuser:\n permissions = get_permissions()\n else:\n permissions = self.user_permissions.prefetch_related(\n 'content_type').order_by('codename')\n return format_permissions_for_display(permissions)\n\n\nclass AddressValidationInput(graphene.InputObjectType):\n country_code = graphene.String()\n country_area = graphene.String()\n city_area = graphene.String()\n\n\nclass ChoiceValue(graphene.ObjectType):\n raw = graphene.String()\n verbose = graphene.String()\n\n\nclass AddressValidationData(graphene.ObjectType):\n country_code = graphene.String()\n country_name = graphene.String()\n address_format = graphene.String()\n address_latin_format = graphene.String()\n allowed_fields = graphene.List(graphene.String)\n required_fields = graphene.List(graphene.String)\n upper_fields = graphene.List(graphene.String)\n country_area_type = graphene.String()\n country_area_choices = graphene.List(ChoiceValue)\n city_type = graphene.String()\n city_area_choices = graphene.List(ChoiceValue)\n postal_code_type = graphene.String()\n postal_code_matchers = graphene.List(graphene.String)\n postal_code_examples = graphene.List(graphene.String)\n postal_code_prefix = graphene.String()\n", "path": "saleor/graphql/account/types.py"}], "after_files": [{"content": "from django.db.models import Q\nfrom i18naddress import get_validation_rules\n\nfrom ...account import models\nfrom ...core.utils import get_client_ip, get_country_by_ip\nfrom ..utils import filter_by_query_param\nfrom .types import 
AddressValidationData, ChoiceValue\n\nUSER_SEARCH_FIELDS = (\n 'email', 'default_shipping_address__first_name',\n 'default_shipping_address__last_name', 'default_shipping_address__city',\n 'default_shipping_address__country')\n\n\ndef resolve_customers(info, query):\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))\n ).prefetch_related('addresses')\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n return qs.distinct()\n\n\ndef resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n return qs.distinct()\n\n\ndef resolve_address_validator(info, input):\n country_code = input['country_code']\n if not country_code:\n client_ip = get_client_ip(info.context)\n country = get_country_by_ip(client_ip)\n if country:\n country_code = country.code\n else:\n return None\n params = {\n 'country_code': country_code,\n 'country_area': input['country_area'],\n 'city_area': input['city_area']}\n rules = get_validation_rules(params)\n\n return AddressValidationData(\n country_code=rules.country_code,\n country_name=rules.country_name,\n address_format=rules.address_format,\n address_latin_format=rules.address_latin_format,\n allowed_fields=rules.allowed_fields,\n required_fields=rules.required_fields,\n upper_fields=rules.upper_fields,\n country_area_type=rules.country_area_type,\n country_area_choices=[\n ChoiceValue(area[0], area[1])\n for area in rules.country_area_choices],\n city_type=rules.city_type,\n city_area_choices=[\n ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],\n postal_code_type=rules.postal_code_type,\n postal_code_matchers=[\n compiled.pattern for compiled in rules.postal_code_matchers],\n postal_code_examples=rules.postal_code_examples,\n postal_code_prefix=rules.postal_code_prefix\n )\n", "path": "saleor/graphql/account/resolvers.py"}, {"content": "import graphene\nfrom django.contrib.auth import get_user_model\nfrom graphene import relay\n\nfrom ...account import models\nfrom ...core.permissions import get_permissions\nfrom ..core.types.common import (\n CountableDjangoObjectType, CountryDisplay, PermissionDisplay)\nfrom ..utils import format_permissions_for_display\n\n\nclass AddressInput(graphene.InputObjectType):\n first_name = graphene.String(description='Given name.')\n last_name = graphene.String(description='Family name.')\n company_name = graphene.String(description='Company or organization.')\n street_address_1 = graphene.String(description='Address.')\n street_address_2 = graphene.String(description='Address.')\n city = graphene.String(description='City.')\n city_area = graphene.String(description='District.')\n postal_code = graphene.String(description='Postal code.')\n country = graphene.String(description='Country.')\n country_area = graphene.String(description='State or province.')\n phone = graphene.String(description='Phone number.')\n\n\nclass Address(CountableDjangoObjectType):\n country = graphene.Field(\n CountryDisplay, required=True, description='Default shop\\'s country')\n\n class Meta:\n exclude_fields = ['user_set', 'user_addresses']\n description = 'Represents user address data.'\n interfaces = [relay.Node]\n model = models.Address\n\n def resolve_country(self, info):\n return CountryDisplay(\n code=self.country.code, country=self.country.name)\n\n\nclass User(CountableDjangoObjectType):\n permissions = 
graphene.List(PermissionDisplay)\n\n class Meta:\n exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n filter_fields = ['is_staff']\n\n def resolve_permissions(self, info, **kwargs):\n if self.is_superuser:\n permissions = get_permissions()\n else:\n permissions = self.user_permissions.prefetch_related(\n 'content_type').order_by('codename')\n return format_permissions_for_display(permissions)\n\n\nclass AddressValidationInput(graphene.InputObjectType):\n country_code = graphene.String()\n country_area = graphene.String()\n city_area = graphene.String()\n\n\nclass ChoiceValue(graphene.ObjectType):\n raw = graphene.String()\n verbose = graphene.String()\n\n\nclass AddressValidationData(graphene.ObjectType):\n country_code = graphene.String()\n country_name = graphene.String()\n address_format = graphene.String()\n address_latin_format = graphene.String()\n allowed_fields = graphene.List(graphene.String)\n required_fields = graphene.List(graphene.String)\n upper_fields = graphene.List(graphene.String)\n country_area_type = graphene.String()\n country_area_choices = graphene.List(ChoiceValue)\n city_type = graphene.String()\n city_area_choices = graphene.List(ChoiceValue)\n postal_code_type = graphene.String()\n postal_code_matchers = graphene.List(graphene.String)\n postal_code_examples = graphene.List(graphene.String)\n postal_code_prefix = graphene.String()\n", "path": "saleor/graphql/account/types.py"}]} | 1,739 | 340 |
gh_patches_debug_25 | rasdani/github-patches | git_diff | Zeroto521__my-data-toolkit-543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PERF: `to_set` speeds up, especially for large data
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [x] whatsnew entry
| data | `set(s)` | `set(s.unique())` |
| -------------------- | ---------------- | ----------------- |
| small, `list(range(10))` | 1.83 µs ± 31.6 ns | 1.17 ms ± 144 µs |
| large, `list(range(10)) * 1000` | 9.67 µs ± 564 ns | 255 µs ± 14.9 µs |
--- END ISSUE ---
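A rough reproduction sketch for the timings quoted above is given below; absolute numbers are machine- and version-dependent, and the index sizes are the ones from the issue.

```python
# Rough benchmark sketch for set(index) vs. set(index.unique()); results will
# differ from the table above depending on hardware and pandas version.
import timeit

import pandas as pd

small = pd.Index(list(range(10)))
large = pd.Index(list(range(10)) * 1000)

for name, index in (("small", small), ("large", large)):
    t_plain = timeit.timeit(lambda: set(index), number=1_000)
    t_unique = timeit.timeit(lambda: set(index.unique()), number=1_000)
    print(f"{name}: set(index)={t_plain:.4f}s, set(index.unique())={t_unique:.4f}s")
```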
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dtoolkit/accessor/index/to_set.py`
Content:
```
1 import pandas as pd
2
3 from dtoolkit.accessor.register import register_index_method
4
5
6 @register_index_method
7 def to_set(index: pd.Index) -> set:
8 """
9 Return a :keyword:`set` of the values.
10
11 A sugary syntax wraps :keyword:`set`::
12
13 set(index)
14
15 Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.
16
17 Returns
18 -------
19 set
20
21 See Also
22 --------
23 pandas.Index.unique
24
25 Examples
26 --------
27 >>> import dtoolkit.accessor
28 >>> import pandas as pd
29 >>> i = pd.Index([1, 2, 2])
30 >>> i
31 Int64Index([1, 2, 2], dtype='int64')
32 >>> i.to_set()
33 {1, 2}
34 """
35
36 return set(index)
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dtoolkit/accessor/index/to_set.py b/dtoolkit/accessor/index/to_set.py
--- a/dtoolkit/accessor/index/to_set.py
+++ b/dtoolkit/accessor/index/to_set.py
@@ -33,4 +33,4 @@
{1, 2}
"""
- return set(index)
+ return set(index.unique())
| {"golden_diff": "diff --git a/dtoolkit/accessor/index/to_set.py b/dtoolkit/accessor/index/to_set.py\n--- a/dtoolkit/accessor/index/to_set.py\n+++ b/dtoolkit/accessor/index/to_set.py\n@@ -33,4 +33,4 @@\n {1, 2}\n \"\"\"\n \n- return set(index)\n+ return set(index.unique())\n", "issue": "PERF: `to_set` speeds up especial for large data\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [x] whatsnew entry\r\n\r\n\r\n| data | `set(s)` | `set(s.unique())` |\r\n| -------------------- | ---------------- | ----------------- |\r\n| small, `list(range(10)` | 1.83 \u00b5s \u00b1 31.6 ns | 1.17 ms \u00b1 144 \u00b5s |\r\n| large, `list(range(10)*1000` | 9.67 \u00b5s \u00b1 564 ns | 255 \u00b5s \u00b1 14.9 \u00b5s |\r\n\n", "before_files": [{"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_index_method\n\n\n@register_index_method\ndef to_set(index: pd.Index) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(index)\n\n Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Index.unique\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> i = pd.Index([1, 2, 2])\n >>> i\n Int64Index([1, 2, 2], dtype='int64')\n >>> i.to_set()\n {1, 2}\n \"\"\"\n\n return set(index)\n", "path": "dtoolkit/accessor/index/to_set.py"}], "after_files": [{"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_index_method\n\n\n@register_index_method\ndef to_set(index: pd.Index) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(index)\n\n Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Index.unique\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> i = pd.Index([1, 2, 2])\n >>> i\n Int64Index([1, 2, 2], dtype='int64')\n >>> i.to_set()\n {1, 2}\n \"\"\"\n\n return set(index.unique())\n", "path": "dtoolkit/accessor/index/to_set.py"}]} | 826 | 83 |
gh_patches_debug_13395 | rasdani/github-patches | git_diff | facebookresearch__xformers-57 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] Failing triton dropout test
# 🐛 Bug
See https://app.circleci.com/pipelines/github/facebookresearch/xformers/212/workflows/8988c71c-84f5-4bd0-bd59-ac7d293c2370/jobs/398
Not sure why this happens just now, looking into that
## Command
can repro locally with ` pytest tests -k dropout -x -v `
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xformers/triton/k_dropout.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # CREDITS: This comes almost as-is from the Triton dropout tutorial
8 # https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py
9
10 import triton
11 import triton.language as tl
12
13
14 # fmt: off
15 @triton.autotune(
16 configs=[
17 triton.Config({"BLOCK_SIZE" : 256}, num_warps=1),
18 triton.Config({"BLOCK_SIZE" : 512}, num_warps=2),
19 triton.Config({"BLOCK_SIZE" : 1024}, num_warps=4),
20 triton.Config({"BLOCK_SIZE" : 2048}, num_warps=8),
21 triton.Config({"BLOCK_SIZE" : 4096}, num_warps=8),
22 ],
23 key=["N"],
24 )
25 @triton.jit
26 def k_dropout(
27 Y, X, S,
28 stride,
29 N,
30 p,
31 **meta,
32 ):
33 """
34 Apply dropout on an input tensor
35 Y : Output (M, N)
36 X : Input (M, N)
37 S : Seeds (M,)
38 p : dropout probability
39 """
40 # fmt: on
41
42 # compute memory offsets of elements handled by this instance
43 BLOCK_SIZE = meta["BLOCK_SIZE"]
44 row = tl.program_id(axis=0)
45 col = tl.program_id(axis=1)
46 offsets = row * stride + col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
47 mask = col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) < N
48
49 # load data from x
50 x_ptrs = X + offsets
51 x = tl.load(x_ptrs, mask=mask)
52
53 # randomly prune it
54 seed = S + row
55 random = tl.rand(seed.to(tl.int32), offsets)
56 x_keep = random > p
57
58 # write-back
59 zero = 0.
60 zero = zero.to(x.dtype)
61 output = tl.where(x_keep, (x / (1 - p)).to(x.dtype), zero)
62 y_ptrs = Y + offsets
63 tl.store(y_ptrs, output, mask=mask)
64
```
Path: `xformers/triton/dropout.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # CREDITS: This comes almost as-is from the Triton dropout tutorial
8 # https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py
9
10 import torch
11 import triton
12 from torch.cuda.amp import custom_bwd, custom_fwd
13
14 from xformers.triton.k_dropout import k_dropout
15
16
17 # Helper to handle the SPMD launch grid and error cases
18 class _dropout(torch.autograd.Function):
19 @staticmethod
20 @custom_fwd(cast_inputs=torch.float16)
21 def forward(ctx, x, p):
22 # Soft-flatten an hypothetical 3rd dimension
23 x_ = x.reshape(-1, x.shape[-1])
24 y = torch.empty_like(x_)
25 _, N = x_.shape
26
27 assert y.stride(-1) == 1 and x_.stride(-1) == 1
28
29 # Generate one seed per sample
30 # seed max is int32 max for positive numbers: 2**16
31 seeds = torch.randint(65536, (x_.shape[0],), device=x.device).to(torch.int32)
32
33 # SPMD launch grid
34 def grid(meta):
35 return (
36 x_.shape[0],
37 triton.cdiv(x_.shape[1], meta["BLOCK_SIZE"]),
38 )
39
40 k_dropout[grid](y, x_, seeds, y.stride(0), N, p)
41
42 ctx.save_for_backward(seeds)
43 ctx.p = p
44
45 return y.reshape_as(x)
46
47 @staticmethod
48 @custom_bwd
49 def backward(ctx, grad_out):
50 (seeds,) = ctx.saved_tensors
51
52 # Soft-flatten an hypothetical 3rd dimension
53 grad_out_ = grad_out.reshape(-1, grad_out.shape[-1])
54 grad_in = torch.empty_like(grad_out_)
55 _, N = grad_out_.shape
56
57 assert grad_in.stride(-1) == 1 and grad_out_.stride(-1) == 1
58
59 # SPMD launch grid
60 def grid(meta):
61 return (
62 grad_out_.shape[0],
63 triton.cdiv(grad_out_.shape[1], meta["BLOCK_SIZE"]),
64 )
65
66 k_dropout[grid](grad_in, grad_out_, seeds, grad_out_.stride(0), N, ctx.p)
67
68 return grad_in.reshape_as(grad_out), None
69
70
71 def dropout(x: torch.Tensor, p: float):
72 return _dropout.apply(x, p)
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xformers/triton/dropout.py b/xformers/triton/dropout.py
--- a/xformers/triton/dropout.py
+++ b/xformers/triton/dropout.py
@@ -69,4 +69,7 @@
def dropout(x: torch.Tensor, p: float):
- return _dropout.apply(x, p)
+ if p > 0.0:
+ return _dropout.apply(x, p)
+
+ return x
diff --git a/xformers/triton/k_dropout.py b/xformers/triton/k_dropout.py
--- a/xformers/triton/k_dropout.py
+++ b/xformers/triton/k_dropout.py
@@ -24,7 +24,7 @@
)
@triton.jit
def k_dropout(
- Y, X, S,
+ Y, X, SEEDS,
stride,
N,
p,
@@ -51,7 +51,7 @@
x = tl.load(x_ptrs, mask=mask)
# randomly prune it
- seed = S + row
+ seed = SEEDS + row
random = tl.rand(seed.to(tl.int32), offsets)
x_keep = random > p
| {"golden_diff": "diff --git a/xformers/triton/dropout.py b/xformers/triton/dropout.py\n--- a/xformers/triton/dropout.py\n+++ b/xformers/triton/dropout.py\n@@ -69,4 +69,7 @@\n \n \n def dropout(x: torch.Tensor, p: float):\n- return _dropout.apply(x, p)\n+ if p > 0.0:\n+ return _dropout.apply(x, p)\n+\n+ return x\ndiff --git a/xformers/triton/k_dropout.py b/xformers/triton/k_dropout.py\n--- a/xformers/triton/k_dropout.py\n+++ b/xformers/triton/k_dropout.py\n@@ -24,7 +24,7 @@\n )\n @triton.jit\n def k_dropout(\n- Y, X, S,\n+ Y, X, SEEDS,\n stride,\n N,\n p,\n@@ -51,7 +51,7 @@\n x = tl.load(x_ptrs, mask=mask)\n \n # randomly prune it\n- seed = S + row\n+ seed = SEEDS + row\n random = tl.rand(seed.to(tl.int32), offsets)\n x_keep = random > p\n", "issue": "[bug] Failing triton dropout test \n# \ud83d\udc1b Bug\r\n\r\nSee https://app.circleci.com/pipelines/github/facebookresearch/xformers/212/workflows/8988c71c-84f5-4bd0-bd59-ac7d293c2370/jobs/398\r\n\r\nNot sure why this happens just now, looking into that\r\n\r\n## Command\r\ncan repro locally with ` pytest tests -k dropout -x -v `\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport triton\nimport triton.language as tl\n\n\n# fmt: off\[email protected](\n configs=[\n triton.Config({\"BLOCK_SIZE\" : 256}, num_warps=1),\n triton.Config({\"BLOCK_SIZE\" : 512}, num_warps=2),\n triton.Config({\"BLOCK_SIZE\" : 1024}, num_warps=4),\n triton.Config({\"BLOCK_SIZE\" : 2048}, num_warps=8),\n triton.Config({\"BLOCK_SIZE\" : 4096}, num_warps=8),\n ],\n key=[\"N\"],\n)\[email protected]\ndef k_dropout(\n Y, X, S,\n stride,\n N,\n p,\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n Y : Output (M, N)\n X : Input (M, N)\n S : Seeds (M,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n # compute memory offsets of elements handled by this instance\n BLOCK_SIZE = meta[\"BLOCK_SIZE\"]\n row = tl.program_id(axis=0)\n col = tl.program_id(axis=1)\n offsets = row * stride + col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)\n mask = col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) < N\n\n # load data from x\n x_ptrs = X + offsets\n x = tl.load(x_ptrs, mask=mask)\n\n # randomly prune it\n seed = S + row\n random = tl.rand(seed.to(tl.int32), offsets)\n x_keep = random > p\n\n # write-back\n zero = 0.\n zero = zero.to(x.dtype)\n output = tl.where(x_keep, (x / (1 - p)).to(x.dtype), zero)\n y_ptrs = Y + offsets\n tl.store(y_ptrs, output, mask=mask)\n", "path": "xformers/triton/k_dropout.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport torch\nimport triton\nfrom torch.cuda.amp import custom_bwd, custom_fwd\n\nfrom xformers.triton.k_dropout import k_dropout\n\n\n# Helper to handle the SPMD launch grid and error cases\nclass _dropout(torch.autograd.Function):\n @staticmethod\n @custom_fwd(cast_inputs=torch.float16)\n def forward(ctx, x, p):\n # Soft-flatten an hypothetical 3rd dimension\n x_ = x.reshape(-1, x.shape[-1])\n y = torch.empty_like(x_)\n _, N = x_.shape\n\n assert y.stride(-1) == 1 and x_.stride(-1) == 1\n\n # Generate one seed per sample\n # seed max is int32 max for positive numbers: 2**16\n seeds = torch.randint(65536, (x_.shape[0],), device=x.device).to(torch.int32)\n\n # SPMD launch grid\n def grid(meta):\n return (\n x_.shape[0],\n triton.cdiv(x_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](y, x_, seeds, y.stride(0), N, p)\n\n ctx.save_for_backward(seeds)\n ctx.p = p\n\n return y.reshape_as(x)\n\n @staticmethod\n @custom_bwd\n def backward(ctx, grad_out):\n (seeds,) = ctx.saved_tensors\n\n # Soft-flatten an hypothetical 3rd dimension\n grad_out_ = grad_out.reshape(-1, grad_out.shape[-1])\n grad_in = torch.empty_like(grad_out_)\n _, N = grad_out_.shape\n\n assert grad_in.stride(-1) == 1 and grad_out_.stride(-1) == 1\n\n # SPMD launch grid\n def grid(meta):\n return (\n grad_out_.shape[0],\n triton.cdiv(grad_out_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](grad_in, grad_out_, seeds, grad_out_.stride(0), N, ctx.p)\n\n return grad_in.reshape_as(grad_out), None\n\n\ndef dropout(x: torch.Tensor, p: float):\n return _dropout.apply(x, p)\n", "path": "xformers/triton/dropout.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport triton\nimport triton.language as tl\n\n\n# fmt: off\[email protected](\n configs=[\n triton.Config({\"BLOCK_SIZE\" : 256}, num_warps=1),\n triton.Config({\"BLOCK_SIZE\" : 512}, num_warps=2),\n triton.Config({\"BLOCK_SIZE\" : 1024}, num_warps=4),\n triton.Config({\"BLOCK_SIZE\" : 2048}, num_warps=8),\n triton.Config({\"BLOCK_SIZE\" : 4096}, num_warps=8),\n ],\n key=[\"N\"],\n)\[email protected]\ndef k_dropout(\n Y, X, SEEDS,\n stride,\n N,\n p,\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n Y : Output (M, N)\n X : Input (M, N)\n S : Seeds (M,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n # compute memory offsets of elements handled by this instance\n BLOCK_SIZE = meta[\"BLOCK_SIZE\"]\n row = tl.program_id(axis=0)\n col = tl.program_id(axis=1)\n offsets = row * stride + col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)\n mask = col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) < N\n\n # load data from x\n x_ptrs = X + offsets\n x = tl.load(x_ptrs, mask=mask)\n\n # randomly prune it\n seed = SEEDS + row\n random = tl.rand(seed.to(tl.int32), offsets)\n x_keep = random > p\n\n # write-back\n zero = 0.\n zero = zero.to(x.dtype)\n output = tl.where(x_keep, (x / (1 - p)).to(x.dtype), zero)\n y_ptrs = Y + offsets\n tl.store(y_ptrs, output, mask=mask)\n", "path": "xformers/triton/k_dropout.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport torch\nimport triton\nfrom torch.cuda.amp import custom_bwd, custom_fwd\n\nfrom xformers.triton.k_dropout import k_dropout\n\n\n# Helper to handle the SPMD launch grid and error cases\nclass _dropout(torch.autograd.Function):\n @staticmethod\n @custom_fwd(cast_inputs=torch.float16)\n def forward(ctx, x, p):\n # Soft-flatten an hypothetical 3rd dimension\n x_ = x.reshape(-1, x.shape[-1])\n y = torch.empty_like(x_)\n _, N = x_.shape\n\n assert y.stride(-1) == 1 and x_.stride(-1) == 1\n\n # Generate one seed per sample\n # seed max is int32 max for positive numbers: 2**16\n seeds = torch.randint(65536, (x_.shape[0],), device=x.device).to(torch.int32)\n\n # SPMD launch grid\n def grid(meta):\n return (\n x_.shape[0],\n triton.cdiv(x_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](y, x_, seeds, y.stride(0), N, p)\n\n ctx.save_for_backward(seeds)\n ctx.p = p\n\n return y.reshape_as(x)\n\n @staticmethod\n @custom_bwd\n def backward(ctx, grad_out):\n (seeds,) = ctx.saved_tensors\n\n # Soft-flatten an hypothetical 3rd dimension\n grad_out_ = grad_out.reshape(-1, grad_out.shape[-1])\n grad_in = torch.empty_like(grad_out_)\n _, N = grad_out_.shape\n\n assert grad_in.stride(-1) == 1 and grad_out_.stride(-1) == 1\n\n # SPMD launch grid\n def grid(meta):\n return (\n grad_out_.shape[0],\n triton.cdiv(grad_out_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](grad_in, grad_out_, seeds, grad_out_.stride(0), N, ctx.p)\n\n return grad_in.reshape_as(grad_out), None\n\n\ndef dropout(x: 
torch.Tensor, p: float):\n if p > 0.0:\n return _dropout.apply(x, p)\n\n return x\n", "path": "xformers/triton/dropout.py"}]} | 1,742 | 274 |
gh_patches_debug_25915 | rasdani/github-patches | git_diff | microsoft__AzureTRE-1653 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Health check endpoint should log all the service status as it queries
Currently the `/health` endpoint queries Cosmos / Service Bus / the RP - and returns the statuses. If any are not ok, the response is a 503.
There is currently no way to query that endpoint when the gateway has blocked access - so we at least need it to log the results so we can track back and see what service was down, when.
--- END ISSUE ---
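The fix being asked for amounts to emitting each per-service result through the standard `logging` module before the 503 is raised. A minimal sketch of that idea, with illustrative variable names (the real handler appears in the files below):

```python
import logging

def log_service_statuses(services):
    # Hypothetical helper: record every service's health result so an
    # operator can see which dependency was down even when the endpoint
    # itself is unreachable from outside the gateway.
    for svc in services:
        logging.error("health check: service=%s status=%s message=%s",
                      svc.service, svc.status, svc.message)
```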
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api_app/api/routes/health.py`
Content:
```
1 from fastapi import APIRouter
2 from models.schemas.status import HealthCheck, ServiceStatus, StatusEnum
3 from resources import strings
4 from services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status
5 from fastapi import HTTPException, status
6
7 router = APIRouter()
8
9
10 @router.get("/health", name=strings.API_GET_HEALTH_STATUS)
11 async def health_check() -> HealthCheck:
12 cosmos_status, cosmos_message = create_state_store_status()
13 sb_status, sb_message = await create_service_bus_status()
14 rp_status, rp_message = create_resource_processor_status()
15 services = [ServiceStatus(service=strings.COSMOS_DB, status=cosmos_status, message=cosmos_message),
16 ServiceStatus(service=strings.SERVICE_BUS, status=sb_status, message=sb_message),
17 ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]
18 health_check_result = HealthCheck(services=services)
19 if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:
20 raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())
21 return health_check_result
22
```
Path: `api_app/_version.py`
Content:
```
1 __version__ = "0.2.10"
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.2.10"
+__version__ = "0.2.11"
diff --git a/api_app/api/routes/health.py b/api_app/api/routes/health.py
--- a/api_app/api/routes/health.py
+++ b/api_app/api/routes/health.py
@@ -3,6 +3,7 @@
from resources import strings
from services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status
from fastapi import HTTPException, status
+import logging
router = APIRouter()
@@ -17,5 +18,8 @@
ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]
health_check_result = HealthCheck(services=services)
if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:
+ logging.error(f'Cosmos Status: {cosmos_status}, message: {cosmos_message}')
+ logging.error(f'Service Bus Status: {sb_status}, message: {sb_message}')
+ logging.error(f'Resource Processor Status: {rp_status}, message: {rp_message}')
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())
return health_check_result
| {"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.2.10\"\n+__version__ = \"0.2.11\"\ndiff --git a/api_app/api/routes/health.py b/api_app/api/routes/health.py\n--- a/api_app/api/routes/health.py\n+++ b/api_app/api/routes/health.py\n@@ -3,6 +3,7 @@\n from resources import strings\n from services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status\n from fastapi import HTTPException, status\n+import logging\n \n router = APIRouter()\n \n@@ -17,5 +18,8 @@\n ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]\n health_check_result = HealthCheck(services=services)\n if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:\n+ logging.error(f'Cosmos Status: {cosmos_status}, message: {cosmos_message}')\n+ logging.error(f'Service Bus Status: {sb_status}, message: {sb_message}')\n+ logging.error(f'Resource Processor Status: {rp_status}, message: {rp_message}')\n raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())\n return health_check_result\n", "issue": "Health check endpoint should log all the service status as it queries\nCurrently the `/health` endpoint queries Cosmos / Service Bus / the RP - and returns the statuses. If any are not ok, the response is a 503.\r\n\r\nThere is currently no way to query that endpoint when the gateway has blocked access - so we at least need it to log the results so we can track back and see what service was down, when.\n", "before_files": [{"content": "from fastapi import APIRouter\nfrom models.schemas.status import HealthCheck, ServiceStatus, StatusEnum\nfrom resources import strings\nfrom services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status\nfrom fastapi import HTTPException, status\n\nrouter = APIRouter()\n\n\[email protected](\"/health\", name=strings.API_GET_HEALTH_STATUS)\nasync def health_check() -> HealthCheck:\n cosmos_status, cosmos_message = create_state_store_status()\n sb_status, sb_message = await create_service_bus_status()\n rp_status, rp_message = create_resource_processor_status()\n services = [ServiceStatus(service=strings.COSMOS_DB, status=cosmos_status, message=cosmos_message),\n ServiceStatus(service=strings.SERVICE_BUS, status=sb_status, message=sb_message),\n ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]\n health_check_result = HealthCheck(services=services)\n if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:\n raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())\n return health_check_result\n", "path": "api_app/api/routes/health.py"}, {"content": "__version__ = \"0.2.10\"\n", "path": "api_app/_version.py"}], "after_files": [{"content": "from fastapi import APIRouter\nfrom models.schemas.status import HealthCheck, ServiceStatus, StatusEnum\nfrom resources import strings\nfrom services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status\nfrom fastapi import HTTPException, status\nimport logging\n\nrouter = APIRouter()\n\n\[email protected](\"/health\", name=strings.API_GET_HEALTH_STATUS)\nasync def health_check() -> HealthCheck:\n cosmos_status, cosmos_message = 
create_state_store_status()\n sb_status, sb_message = await create_service_bus_status()\n rp_status, rp_message = create_resource_processor_status()\n services = [ServiceStatus(service=strings.COSMOS_DB, status=cosmos_status, message=cosmos_message),\n ServiceStatus(service=strings.SERVICE_BUS, status=sb_status, message=sb_message),\n ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]\n health_check_result = HealthCheck(services=services)\n if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:\n logging.error(f'Cosmos Status: {cosmos_status}, message: {cosmos_message}')\n logging.error(f'Service Bus Status: {sb_status}, message: {sb_message}')\n logging.error(f'Resource Processor Status: {rp_status}, message: {rp_message}')\n raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())\n return health_check_result\n", "path": "api_app/api/routes/health.py"}, {"content": "__version__ = \"0.2.11\"\n", "path": "api_app/_version.py"}]} | 673 | 320 |
gh_patches_debug_26067 | rasdani/github-patches | git_diff | beeware__toga-543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A single column table leads to only showing first letter of value
## Expected Behavior
Table like this:
**Filename**
xxx
yyy
zzz
## Current Behavior
**Filename**
x
y
z
## Steps to reproduce
Use toga.Table(headings=['Filename'], data=['xxx', 'yyy', 'zzz'], on_select=self.handle_name_select)
## Analysis
The problem seems to be in the ListSource class.
```
def _create_row(self, data):
    if isinstance(data, dict):
        row = Row(**data)
    else:
        row = Row(**dict(zip(self._accessors, data)))
    row._source = self
    return row
```
In list_source.py line 56 it says:
`row = Row(**dict(zip(self._accessors, data)))`
but the data parameter is a string when using a list of strings as data, leading to the zipping of the individual characters. When passing in the data as [('xxx',), ('yyy',), ('zzz',)] the error does not occur.
So either the API should make it explicit that it expects a list of lists, or handle the data-is-a-list-of-strings case correctly
--- END ISSUE ---
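The character-zipping behaviour described in the analysis above is easy to reproduce in a plain Python session; the accessor name here is only illustrative:

```python
accessors = ["filename"]

# A bare string is itself an iterable of characters, so zip() pairs the
# single accessor with the first character only:
print(dict(zip(accessors, "xxx")))     # {'filename': 'x'}

# Wrapping each value in a one-element tuple yields the intended mapping:
print(dict(zip(accessors, ("xxx",))))  # {'filename': 'xxx'}
```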
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/core/toga/sources/list_source.py`
Content:
```
1 from .base import Source
2
3
4 class Row:
5 def __init__(self, **data):
6 self._attrs = list(data.keys())
7 self._source = None
8 for name, value in data.items():
9 setattr(self, name, value)
10
11 ######################################################################
12 # Utility wrappers
13 ######################################################################
14
15 def __setattr__(self, attr, value):
16 super().__setattr__(attr, value)
17 if attr in self._attrs:
18 if self._source is not None:
19 self._source._notify('change', item=self)
20
21
22 class ListSource(Source):
23 """A data source to store a list of multiple data values, in a row-like fashion.
24
25 Args:
26 data (`list`): The data in the list. Each entry in the list should have the
27 same number of entries as there are accessors.
28 accessors (`list`): A list of attribute names for accessing the value
29 in each column of the row.
30 """
31 def __init__(self, data, accessors):
32 super().__init__()
33 self._accessors = accessors
34 self._data = []
35 for value in data:
36 self._data.append(self._create_row(value))
37
38 ######################################################################
39 # Methods required by the ListSource interface
40 ######################################################################
41
42 def __len__(self):
43 return len(self._data)
44
45 def __getitem__(self, index):
46 return self._data[index]
47
48 ######################################################################
49 # Factory methods for new rows
50 ######################################################################
51
52 def _create_row(self, data):
53 if isinstance(data, dict):
54 row = Row(**data)
55 else:
56 row = Row(**dict(zip(self._accessors, data)))
57 row._source = self
58 return row
59
60 ######################################################################
61 # Utility methods to make ListSources more list-like
62 ######################################################################
63
64 def __setitem__(self, index, value):
65 row = self._create_row(value)
66 self._data[index] = row
67 self._notify('insert', index=index, item=row)
68
69 def __iter__(self):
70 return iter(self._data)
71
72 def clear(self):
73 self._data = []
74 self._notify('clear')
75
76 def insert(self, index, *values, **named):
77 # Coalesce values and data into a single data dictionary,
78 # and use that to create the data row. Explicitly named data override.
79 row = self._create_row(dict(zip(self._accessors, values), **named))
80 self._data.insert(index, row)
81 self._notify('insert', index=index, item=row)
82 return row
83
84 def prepend(self, *values, **named):
85 return self.insert(0, *values, **named)
86
87 def append(self, *values, **named):
88 return self.insert(len(self), *values, **named)
89
90 def remove(self, row):
91 self._data.remove(row)
92 self._notify('remove', item=row)
93 return row
94
95 def index(self, row):
96 return self._data.index(row)
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/core/toga/sources/list_source.py b/src/core/toga/sources/list_source.py
--- a/src/core/toga/sources/list_source.py
+++ b/src/core/toga/sources/list_source.py
@@ -24,7 +24,7 @@
Args:
data (`list`): The data in the list. Each entry in the list should have the
- same number of entries as there are accessors.
+ same number of entries as there are accessors.
accessors (`list`): A list of attribute names for accessing the value
in each column of the row.
"""
@@ -50,10 +50,20 @@
######################################################################
def _create_row(self, data):
+ """Create a Row object from the given data.
+ Args:
+ data (any): The type of `data` determines how it is handled
+ ``dict``: each key corresponds to a column accessor
+ iterables, except ``str`` and ``dict``: each item corresponds to a column
+ all else: `data` will fill the first column
+ """
+
if isinstance(data, dict):
row = Row(**data)
- else:
+ elif hasattr(data, '__iter__') and not isinstance(data, str):
row = Row(**dict(zip(self._accessors, data)))
+ else:
+ row = Row(**{self._accessors[0]: data})
row._source = self
return row
| {"golden_diff": "diff --git a/src/core/toga/sources/list_source.py b/src/core/toga/sources/list_source.py\n--- a/src/core/toga/sources/list_source.py\n+++ b/src/core/toga/sources/list_source.py\n@@ -24,7 +24,7 @@\n \n Args:\n data (`list`): The data in the list. Each entry in the list should have the\n- same number of entries as there are accessors.\n+ same number of entries as there are accessors. \n accessors (`list`): A list of attribute names for accessing the value\n in each column of the row.\n \"\"\"\n@@ -50,10 +50,20 @@\n ######################################################################\n \n def _create_row(self, data):\n+ \"\"\"Create a Row object from the given data.\n+ Args:\n+ data (any): The type of `data` determines how it is handled\n+ ``dict``: each key corresponds to a column accessor\n+ iterables, except ``str`` and ``dict``: each item corresponds to a column\n+ all else: `data` will fill the first column\n+ \"\"\"\n+\n if isinstance(data, dict):\n row = Row(**data)\n- else:\n+ elif hasattr(data, '__iter__') and not isinstance(data, str):\n row = Row(**dict(zip(self._accessors, data)))\n+ else:\n+ row = Row(**{self._accessors[0]: data})\n row._source = self\n return row\n", "issue": "A single column table leads to only showing first letter of value\n## Expected Behavior\r\n\r\nTable like this:\r\n**Filename**\r\nxxx\r\nyyy\r\nzzz\r\n\r\n## Current Behavior\r\n**Filename**\r\nx\r\ny\r\nz\r\n\r\n## Steps to reproduce\r\n\r\nUse toga.Table(headings=['Filename'], data=['xxx', 'yyy', 'zzz'], on_select=self.handle_name_select)\r\n\r\n## Analysis\r\nThe problem seems to be in the ListSource class.\r\n`def _create_row(self, data):\r\n if isinstance(data, dict):\r\n row = Row(**data)\r\n else:\r\n row = Row(**dict(zip(self._accessors, data)))\r\n row._source = self\r\n return row'\r\n\r\nIn list_source.py line 56 it says:\r\n\r\n`row = Row(**dict(zip(self._accessors, data)))`\r\n\r\nbut the data parameter is a string when using a list of strings as data, leading to the zipping of the individual characters. When passing in the data as [('xxx',), ('yyy',), ('zzz',)] the error does not occur.\r\n\r\nSo either the API should make it explicit that it expects a list of lists, or handle the data-is-a-list-of-strings case correctly\n", "before_files": [{"content": "from .base import Source\n\n\nclass Row:\n def __init__(self, **data):\n self._attrs = list(data.keys())\n self._source = None\n for name, value in data.items():\n setattr(self, name, value)\n\n ######################################################################\n # Utility wrappers\n ######################################################################\n\n def __setattr__(self, attr, value):\n super().__setattr__(attr, value)\n if attr in self._attrs:\n if self._source is not None:\n self._source._notify('change', item=self)\n\n\nclass ListSource(Source):\n \"\"\"A data source to store a list of multiple data values, in a row-like fashion.\n\n Args:\n data (`list`): The data in the list. 
Each entry in the list should have the\n same number of entries as there are accessors.\n accessors (`list`): A list of attribute names for accessing the value\n in each column of the row.\n \"\"\"\n def __init__(self, data, accessors):\n super().__init__()\n self._accessors = accessors\n self._data = []\n for value in data:\n self._data.append(self._create_row(value))\n\n ######################################################################\n # Methods required by the ListSource interface\n ######################################################################\n\n def __len__(self):\n return len(self._data)\n\n def __getitem__(self, index):\n return self._data[index]\n\n ######################################################################\n # Factory methods for new rows\n ######################################################################\n\n def _create_row(self, data):\n if isinstance(data, dict):\n row = Row(**data)\n else:\n row = Row(**dict(zip(self._accessors, data)))\n row._source = self\n return row\n\n ######################################################################\n # Utility methods to make ListSources more list-like\n ######################################################################\n\n def __setitem__(self, index, value):\n row = self._create_row(value)\n self._data[index] = row\n self._notify('insert', index=index, item=row)\n\n def __iter__(self):\n return iter(self._data)\n\n def clear(self):\n self._data = []\n self._notify('clear')\n\n def insert(self, index, *values, **named):\n # Coalesce values and data into a single data dictionary,\n # and use that to create the data row. Explicitly named data override.\n row = self._create_row(dict(zip(self._accessors, values), **named))\n self._data.insert(index, row)\n self._notify('insert', index=index, item=row)\n return row\n\n def prepend(self, *values, **named):\n return self.insert(0, *values, **named)\n\n def append(self, *values, **named):\n return self.insert(len(self), *values, **named)\n\n def remove(self, row):\n self._data.remove(row)\n self._notify('remove', item=row)\n return row\n\n def index(self, row):\n return self._data.index(row)\n", "path": "src/core/toga/sources/list_source.py"}], "after_files": [{"content": "from .base import Source\n\n\nclass Row:\n def __init__(self, **data):\n self._attrs = list(data.keys())\n self._source = None\n for name, value in data.items():\n setattr(self, name, value)\n\n ######################################################################\n # Utility wrappers\n ######################################################################\n\n def __setattr__(self, attr, value):\n super().__setattr__(attr, value)\n if attr in self._attrs:\n if self._source is not None:\n self._source._notify('change', item=self)\n\n\nclass ListSource(Source):\n \"\"\"A data source to store a list of multiple data values, in a row-like fashion.\n\n Args:\n data (`list`): The data in the list. Each entry in the list should have the\n same number of entries as there are accessors. 
\n accessors (`list`): A list of attribute names for accessing the value\n in each column of the row.\n \"\"\"\n def __init__(self, data, accessors):\n super().__init__()\n self._accessors = accessors\n self._data = []\n for value in data:\n self._data.append(self._create_row(value))\n\n ######################################################################\n # Methods required by the ListSource interface\n ######################################################################\n\n def __len__(self):\n return len(self._data)\n\n def __getitem__(self, index):\n return self._data[index]\n\n ######################################################################\n # Factory methods for new rows\n ######################################################################\n\n def _create_row(self, data):\n \"\"\"Create a Row object from the given data.\n Args:\n data (any): The type of `data` determines how it is handled\n ``dict``: each key corresponds to a column accessor\n iterables, except ``str`` and ``dict``: each item corresponds to a column\n all else: `data` will fill the first column\n \"\"\"\n\n if isinstance(data, dict):\n row = Row(**data)\n elif hasattr(data, '__iter__') and not isinstance(data, str):\n row = Row(**dict(zip(self._accessors, data)))\n else:\n row = Row(**{self._accessors[0]: data})\n row._source = self\n return row\n\n ######################################################################\n # Utility methods to make ListSources more list-like\n ######################################################################\n\n def __setitem__(self, index, value):\n row = self._create_row(value)\n self._data[index] = row\n self._notify('insert', index=index, item=row)\n\n def __iter__(self):\n return iter(self._data)\n\n def clear(self):\n self._data = []\n self._notify('clear')\n\n def insert(self, index, *values, **named):\n # Coalesce values and data into a single data dictionary,\n # and use that to create the data row. Explicitly named data override.\n row = self._create_row(dict(zip(self._accessors, values), **named))\n self._data.insert(index, row)\n self._notify('insert', index=index, item=row)\n return row\n\n def prepend(self, *values, **named):\n return self.insert(0, *values, **named)\n\n def append(self, *values, **named):\n return self.insert(len(self), *values, **named)\n\n def remove(self, row):\n self._data.remove(row)\n self._notify('remove', item=row)\n return row\n\n def index(self, row):\n return self._data.index(row)\n", "path": "src/core/toga/sources/list_source.py"}]} | 1,347 | 327 |
gh_patches_debug_37784 | rasdani/github-patches | git_diff | bokeh__bokeh-8738 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows phantomjs not killed on selenium termination
I reinstalled a fresh Python environment on Windows with Python 3.7 and pythonenv.
I use only pip for package dependencies.
When I tried to run tests, some failed because temp files were locked.
<img width="726" alt="win32error" src="https://user-images.githubusercontent.com/18531147/54091987-214f4580-4387-11e9-9584-6a117a356ad2.png">
<img width="257" alt="test_failures" src="https://user-images.githubusercontent.com/18531147/54091989-24e2cc80-4387-11e9-9c42-3573dabd1813.PNG">
When the driver terminates, phantomjs is not correctly killed:
<img width="294" alt="proc_pantomjs" src="https://user-images.githubusercontent.com/18531147/54092002-45128b80-4387-11e9-9967-bf74b1e41bd7.PNG">
--- END ISSUE ---
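A common workaround for orphaned PhantomJS children on Windows is to walk the driver's process tree and kill it explicitly before quitting; a rough sketch using the optional `psutil` package (function name illustrative):

```python
import psutil

def kill_child_processes(pid):
    # Kill every descendant of the given process, then wait for them to exit,
    # so temp files held by phantomjs are released.
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    for child in children:
        child.kill()
    psutil.wait_procs(children)
```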
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/io/webdriver.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 '''
8
9 '''
10
11 #-----------------------------------------------------------------------------
12 # Boilerplate
13 #-----------------------------------------------------------------------------
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import logging
17 log = logging.getLogger(__name__)
18
19 #-----------------------------------------------------------------------------
20 # Imports
21 #-----------------------------------------------------------------------------
22
23 # Standard library imports
24 import atexit
25 import signal
26 import warnings
27 from os.path import devnull
28
29 # External imports
30
31 # Bokeh imports
32 from ..util.dependencies import import_required, detect_phantomjs
33
34 #-----------------------------------------------------------------------------
35 # Globals and constants
36 #-----------------------------------------------------------------------------
37
38 __all__ = (
39 'create_phantomjs_webdriver',
40 'terminate_webdriver',
41 'webdriver_control',
42 )
43
44 #-----------------------------------------------------------------------------
45 # General API
46 #-----------------------------------------------------------------------------
47
48 #-----------------------------------------------------------------------------
49 # Dev API
50 #-----------------------------------------------------------------------------
51
52 def create_phantomjs_webdriver():
53 with warnings.catch_warnings():
54 warnings.filterwarnings("ignore", ".*", UserWarning, "selenium.webdriver.phantomjs.webdriver")
55
56 webdriver = import_required('selenium.webdriver',
57 'To use bokeh.io image export functions you need selenium ' +
58 '("conda install -c bokeh selenium" or "pip install selenium")')
59
60 phantomjs_path = detect_phantomjs()
61 return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)
62
63 def terminate_webdriver(driver):
64 if driver.name == "phantomjs":
65 # https://github.com/seleniumhq/selenium/issues/767
66 if driver.service.process:
67 driver.service.process.send_signal(signal.SIGTERM)
68
69 try:
70 driver.quit()
71 except (IOError, OSError): # IOError for Python 2.7
72 pass
73
74 #-----------------------------------------------------------------------------
75 # Private API
76 #-----------------------------------------------------------------------------
77
78 class _WebdriverState(object):
79 '''
80
81 '''
82
83 def __init__(self, reuse=True, kind="phantomjs"):
84 self.reuse = reuse
85 self.kind = kind
86 self.current = None
87
88 def reset(self):
89 if self.current is not None:
90 terminate_webdriver(self.current)
91 self.current = None
92
93 def get(self):
94 if not self.reuse or self.current is None:
95 if self.current is not None:
96 terminate_webdriver(self.current)
97 self.current = self.create()
98 return self.current
99
100 def create(self):
101 if self.kind == "phantomjs":
102 return create_phantomjs_webdriver()
103 raise ValueError("Unknown webdriver kind %r" % self.kind)
104
105 @property
106 def reuse(self):
107 return self._reuse
108
109 @reuse.setter
110 def reuse(self, value):
111 self._reuse = value
112
113 @property
114 def kind(self):
115 return self._kind
116
117 @kind.setter
118 def kind(self, value):
119 # TODO (bev) enum/value check when more are added
120 self._kind = value
121
122 #-----------------------------------------------------------------------------
123 # Code
124 #-----------------------------------------------------------------------------
125
126
127 webdriver_control = _WebdriverState()
128
129 atexit.register(lambda: webdriver_control.reset())
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py
--- a/bokeh/io/webdriver.py
+++ b/bokeh/io/webdriver.py
@@ -21,6 +21,7 @@
#-----------------------------------------------------------------------------
# Standard library imports
+import sys
import atexit
import signal
import warnings
@@ -29,7 +30,7 @@
# External imports
# Bokeh imports
-from ..util.dependencies import import_required, detect_phantomjs
+from ..util.dependencies import import_required, detect_phantomjs, import_optional
#-----------------------------------------------------------------------------
# Globals and constants
@@ -49,6 +50,20 @@
# Dev API
#-----------------------------------------------------------------------------
+
+def kill_proc_tree(pid, including_parent=True):
+ psutil = import_optional('psutil')
+ if psutil is not None:
+ parent = psutil.Process(pid)
+ children = parent.children(recursive=True)
+ for child in children:
+ child.kill()
+ psutil.wait_procs(children)
+ if including_parent:
+ parent.kill()
+ parent.wait(5)
+
+
def create_phantomjs_webdriver():
with warnings.catch_warnings():
warnings.filterwarnings("ignore", ".*", UserWarning, "selenium.webdriver.phantomjs.webdriver")
@@ -60,21 +75,25 @@
phantomjs_path = detect_phantomjs()
return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)
+
def terminate_webdriver(driver):
if driver.name == "phantomjs":
# https://github.com/seleniumhq/selenium/issues/767
if driver.service.process:
+ if sys.platform == 'win32':
+ kill_proc_tree(driver.service.process.pid, including_parent=False)
driver.service.process.send_signal(signal.SIGTERM)
try:
driver.quit()
- except (IOError, OSError): # IOError for Python 2.7
+ except (IOError, OSError): # IOError for Python 2.7
pass
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
+
class _WebdriverState(object):
'''
| {"golden_diff": "diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py\n--- a/bokeh/io/webdriver.py\n+++ b/bokeh/io/webdriver.py\n@@ -21,6 +21,7 @@\n #-----------------------------------------------------------------------------\n \n # Standard library imports\n+import sys\n import atexit\n import signal\n import warnings\n@@ -29,7 +30,7 @@\n # External imports\n \n # Bokeh imports\n-from ..util.dependencies import import_required, detect_phantomjs\n+from ..util.dependencies import import_required, detect_phantomjs, import_optional\n \n #-----------------------------------------------------------------------------\n # Globals and constants\n@@ -49,6 +50,20 @@\n # Dev API\n #-----------------------------------------------------------------------------\n \n+\n+def kill_proc_tree(pid, including_parent=True):\n+ psutil = import_optional('psutil')\n+ if psutil is not None:\n+ parent = psutil.Process(pid)\n+ children = parent.children(recursive=True)\n+ for child in children:\n+ child.kill()\n+ psutil.wait_procs(children)\n+ if including_parent:\n+ parent.kill()\n+ parent.wait(5)\n+\n+\n def create_phantomjs_webdriver():\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", \".*\", UserWarning, \"selenium.webdriver.phantomjs.webdriver\")\n@@ -60,21 +75,25 @@\n phantomjs_path = detect_phantomjs()\n return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)\n \n+\n def terminate_webdriver(driver):\n if driver.name == \"phantomjs\":\n # https://github.com/seleniumhq/selenium/issues/767\n if driver.service.process:\n+ if sys.platform == 'win32':\n+ kill_proc_tree(driver.service.process.pid, including_parent=False)\n driver.service.process.send_signal(signal.SIGTERM)\n \n try:\n driver.quit()\n- except (IOError, OSError): # IOError for Python 2.7\n+ except (IOError, OSError): # IOError for Python 2.7\n pass\n \n #-----------------------------------------------------------------------------\n # Private API\n #-----------------------------------------------------------------------------\n \n+\n class _WebdriverState(object):\n '''\n", "issue": "Windows phantomjs not killed on selenium termination\nI reinstalled a fresh python environment on windows with python 3.7 and pythonenv\r\nI use only pip for package dependencies\r\nWhen I tried to run tests some failed because temp files were locked.\r\n<img width=\"726\" alt=\"win32error\" src=\"https://user-images.githubusercontent.com/18531147/54091987-214f4580-4387-11e9-9584-6a117a356ad2.png\">\r\n<img width=\"257\" alt=\"test_failures\" src=\"https://user-images.githubusercontent.com/18531147/54091989-24e2cc80-4387-11e9-9c42-3573dabd1813.PNG\">\r\n\r\n\r\nWhen driver terminate phantomjs is not correctly killed:\r\n<img width=\"294\" alt=\"proc_pantomjs\" src=\"https://user-images.githubusercontent.com/18531147/54092002-45128b80-4387-11e9-9967-bf74b1e41bd7.PNG\">\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n'''\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport 
logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nimport atexit\nimport signal\nimport warnings\nfrom os.path import devnull\n\n# External imports\n\n# Bokeh imports\nfrom ..util.dependencies import import_required, detect_phantomjs\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'create_phantomjs_webdriver',\n 'terminate_webdriver',\n 'webdriver_control',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\ndef create_phantomjs_webdriver():\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", \".*\", UserWarning, \"selenium.webdriver.phantomjs.webdriver\")\n\n webdriver = import_required('selenium.webdriver',\n 'To use bokeh.io image export functions you need selenium ' +\n '(\"conda install -c bokeh selenium\" or \"pip install selenium\")')\n\n phantomjs_path = detect_phantomjs()\n return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)\n\ndef terminate_webdriver(driver):\n if driver.name == \"phantomjs\":\n # https://github.com/seleniumhq/selenium/issues/767\n if driver.service.process:\n driver.service.process.send_signal(signal.SIGTERM)\n\n try:\n driver.quit()\n except (IOError, OSError): # IOError for Python 2.7\n pass\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\nclass _WebdriverState(object):\n '''\n\n '''\n\n def __init__(self, reuse=True, kind=\"phantomjs\"):\n self.reuse = reuse\n self.kind = kind\n self.current = None\n\n def reset(self):\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = None\n\n def get(self):\n if not self.reuse or self.current is None:\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = self.create()\n return self.current\n\n def create(self):\n if self.kind == \"phantomjs\":\n return create_phantomjs_webdriver()\n raise ValueError(\"Unknown webdriver kind %r\" % self.kind)\n\n @property\n def reuse(self):\n return self._reuse\n\n @reuse.setter\n def reuse(self, value):\n self._reuse = value\n\n @property\n def kind(self):\n return self._kind\n\n @kind.setter\n def kind(self, value):\n # TODO (bev) enum/value check when more are added\n self._kind = value\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\n\nwebdriver_control = _WebdriverState()\n\natexit.register(lambda: webdriver_control.reset())\n", "path": "bokeh/io/webdriver.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this 
software.\n#-----------------------------------------------------------------------------\n'''\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nimport sys\nimport atexit\nimport signal\nimport warnings\nfrom os.path import devnull\n\n# External imports\n\n# Bokeh imports\nfrom ..util.dependencies import import_required, detect_phantomjs, import_optional\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'create_phantomjs_webdriver',\n 'terminate_webdriver',\n 'webdriver_control',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n\ndef kill_proc_tree(pid, including_parent=True):\n psutil = import_optional('psutil')\n if psutil is not None:\n parent = psutil.Process(pid)\n children = parent.children(recursive=True)\n for child in children:\n child.kill()\n psutil.wait_procs(children)\n if including_parent:\n parent.kill()\n parent.wait(5)\n\n\ndef create_phantomjs_webdriver():\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", \".*\", UserWarning, \"selenium.webdriver.phantomjs.webdriver\")\n\n webdriver = import_required('selenium.webdriver',\n 'To use bokeh.io image export functions you need selenium ' +\n '(\"conda install -c bokeh selenium\" or \"pip install selenium\")')\n\n phantomjs_path = detect_phantomjs()\n return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)\n\n\ndef terminate_webdriver(driver):\n if driver.name == \"phantomjs\":\n # https://github.com/seleniumhq/selenium/issues/767\n if driver.service.process:\n if sys.platform == 'win32':\n kill_proc_tree(driver.service.process.pid, including_parent=False)\n driver.service.process.send_signal(signal.SIGTERM)\n\n try:\n driver.quit()\n except (IOError, OSError): # IOError for Python 2.7\n pass\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n\nclass _WebdriverState(object):\n '''\n\n '''\n\n def __init__(self, reuse=True, kind=\"phantomjs\"):\n self.reuse = reuse\n self.kind = kind\n self.current = None\n\n def reset(self):\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = None\n\n def get(self):\n if not self.reuse or self.current is None:\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = self.create()\n return self.current\n\n def create(self):\n if self.kind == \"phantomjs\":\n return create_phantomjs_webdriver()\n raise ValueError(\"Unknown webdriver kind %r\" % self.kind)\n\n @property\n def reuse(self):\n return self._reuse\n\n @reuse.setter\n def reuse(self, 
value):\n self._reuse = value\n\n @property\n def kind(self):\n return self._kind\n\n @kind.setter\n def kind(self, value):\n # TODO (bev) enum/value check when more are added\n self._kind = value\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\n\nwebdriver_control = _WebdriverState()\n\natexit.register(lambda: webdriver_control.reset())\n", "path": "bokeh/io/webdriver.py"}]} | 1,529 | 473 |
gh_patches_debug_24271 | rasdani/github-patches | git_diff | ivy-llc__ivy-15738 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ihfft
--- END ISSUE ---
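The one-line issue is a request to add NumPy's `ihfft` to this frontend. With the default norm, `numpy.fft.ihfft` reduces to the conjugated `rfft` scaled by the signal length, which is essentially the identity the patch below builds on; a quick sanity check assuming standard NumPy semantics:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
ref = np.fft.ihfft(a)
# Equivalent formulation for the default ("backward") norm:
alt = np.conj(np.fft.rfft(a)) / len(a)
assert np.allclose(ref, alt)
```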
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py`
Content:
```
1 import ivy
2 from ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back
3 from ivy.func_wrapper import with_unsupported_dtypes
4
5
6 @to_ivy_arrays_and_back
7 def ifft(a, n=None, axis=-1, norm=None):
8 a = ivy.array(a, dtype=ivy.complex128)
9 if norm is None:
10 norm = "backward"
11 return ivy.ifft(a, axis, norm=norm, n=n)
12
13
14 @to_ivy_arrays_and_back
15 @with_unsupported_dtypes({"1.24.3 and below": ("float16",)}, "numpy")
16 def ifftshift(x, axes=None):
17 x = ivy.asarray(x)
18
19 if axes is None:
20 axes = tuple(range(x.ndim))
21 shift = [-(dim // 2) for dim in x.shape]
22 elif isinstance(
23 axes,
24 (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),
25 ):
26 shift = -(x.shape[axes] // 2)
27 else:
28 shift = [-(x.shape[ax] // 2) for ax in axes]
29
30 roll = ivy.roll(x, shift, axis=axes)
31
32 return roll
33
34
35 @to_ivy_arrays_and_back
36 def fft(a, n=None, axis=-1, norm=None):
37 return ivy.fft(ivy.astype(a, ivy.complex128), axis, norm=norm, n=n)
38
39
40 @to_ivy_arrays_and_back
41 @with_unsupported_dtypes({"1.24.3 and below": ("float16",)}, "numpy")
42 def fftshift(x, axes=None):
43 x = ivy.asarray(x)
44
45 if axes is None:
46 axes = tuple(range(x.ndim))
47 shift = [(dim // 2) for dim in x.shape]
48 elif isinstance(
49 axes,
50 (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),
51 ):
52 shift = x.shape[axes] // 2
53 else:
54 shift = [(x.shape[ax] // 2) for ax in axes]
55
56 roll = ivy.roll(x, shift, axis=axes)
57
58 return roll
59
60
61 @with_unsupported_dtypes({"1.9.0 and below": ("float16",)}, "torch")
62 @to_ivy_arrays_and_back
63 def rfft(a, n=None, axis=-1, norm=None):
64 if norm is None:
65 norm = "backward"
66 a = ivy.array(a, dtype=ivy.float64)
67 return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)
68
69
70 @with_unsupported_dtypes({"2.4.2 and below": ("int",)}, "paddle")
71 @to_ivy_arrays_and_back
72 def fftfreq(n, d=1.0):
73 if not isinstance(
74 n, (int, type(ivy.int8), type(ivy.int16), type(ivy.int32), type(ivy.int64))
75 ):
76 raise ValueError("n should be an integer")
77
78 N = (n - 1) // 2 + 1
79 val = 1.0 / (n * d)
80 results = ivy.empty(tuple([n]), dtype=int)
81
82 p1 = ivy.arange(0, N, dtype=int)
83 results[:N] = p1
84 p2 = ivy.arange(-(n // 2), 0, dtype=int)
85 results[N:] = p2
86
87 return results * val
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py
--- a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py
+++ b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py
@@ -3,6 +3,23 @@
from ivy.func_wrapper import with_unsupported_dtypes
+_SWAP_DIRECTION_MAP = {
+ None: "forward",
+ "backward": "forward",
+ "ortho": "ortho",
+ "forward": "backward",
+}
+
+
+def _swap_direction(norm):
+ try:
+ return _SWAP_DIRECTION_MAP[norm]
+ except KeyError:
+ raise ValueError(
+ f'Invalid norm value {norm}; should be "backward", "ortho" or "forward".'
+ ) from None
+
+
@to_ivy_arrays_and_back
def ifft(a, n=None, axis=-1, norm=None):
a = ivy.array(a, dtype=ivy.complex128)
@@ -67,6 +84,17 @@
return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes({"1.12.0 and below": ("float16",)}, "numpy")
+def ihfft(a, n=None, axis=-1, norm=None):
+ a = ivy.array(a, dtype=ivy.float64)
+ if n is None:
+ n = a.shape[axis]
+ norm = _swap_direction(norm)
+ output = ivy.conj(rfft(a, n, axis, norm=norm).ivy_array)
+ return output
+
+
@with_unsupported_dtypes({"2.4.2 and below": ("int",)}, "paddle")
@to_ivy_arrays_and_back
def fftfreq(n, d=1.0):
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py\n--- a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py\n+++ b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py\n@@ -3,6 +3,23 @@\n from ivy.func_wrapper import with_unsupported_dtypes\n \n \n+_SWAP_DIRECTION_MAP = {\n+ None: \"forward\",\n+ \"backward\": \"forward\",\n+ \"ortho\": \"ortho\",\n+ \"forward\": \"backward\",\n+}\n+\n+\n+def _swap_direction(norm):\n+ try:\n+ return _SWAP_DIRECTION_MAP[norm]\n+ except KeyError:\n+ raise ValueError(\n+ f'Invalid norm value {norm}; should be \"backward\", \"ortho\" or \"forward\".'\n+ ) from None\n+\n+\n @to_ivy_arrays_and_back\n def ifft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.complex128)\n@@ -67,6 +84,17 @@\n return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)\n \n \n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes({\"1.12.0 and below\": (\"float16\",)}, \"numpy\")\n+def ihfft(a, n=None, axis=-1, norm=None):\n+ a = ivy.array(a, dtype=ivy.float64)\n+ if n is None:\n+ n = a.shape[axis]\n+ norm = _swap_direction(norm)\n+ output = ivy.conj(rfft(a, n, axis, norm=norm).ivy_array)\n+ return output\n+\n+\n @with_unsupported_dtypes({\"2.4.2 and below\": (\"int\",)}, \"paddle\")\n @to_ivy_arrays_and_back\n def fftfreq(n, d=1.0):\n", "issue": "ihfft\n\n", "before_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\ndef ifft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.complex128)\n if norm is None:\n norm = \"backward\"\n return ivy.ifft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef ifftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [-(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = -(x.shape[axes] // 2)\n else:\n shift = [-(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n return ivy.fft(ivy.astype(a, ivy.complex128), axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef fftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = x.shape[axes] // 2\n else:\n shift = [(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@with_unsupported_dtypes({\"1.9.0 and below\": (\"float16\",)}, \"torch\")\n@to_ivy_arrays_and_back\ndef rfft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n a = ivy.array(a, dtype=ivy.float64)\n return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int\",)}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fftfreq(n, d=1.0):\n if not isinstance(\n n, (int, type(ivy.int8), type(ivy.int16), type(ivy.int32), type(ivy.int64))\n ):\n raise ValueError(\"n should be 
an integer\")\n\n N = (n - 1) // 2 + 1\n val = 1.0 / (n * d)\n results = ivy.empty(tuple([n]), dtype=int)\n\n p1 = ivy.arange(0, N, dtype=int)\n results[:N] = p1\n p2 = ivy.arange(-(n // 2), 0, dtype=int)\n results[N:] = p2\n\n return results * val\n", "path": "ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py"}], "after_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n_SWAP_DIRECTION_MAP = {\n None: \"forward\",\n \"backward\": \"forward\",\n \"ortho\": \"ortho\",\n \"forward\": \"backward\",\n}\n\n\ndef _swap_direction(norm):\n try:\n return _SWAP_DIRECTION_MAP[norm]\n except KeyError:\n raise ValueError(\n f'Invalid norm value {norm}; should be \"backward\", \"ortho\" or \"forward\".'\n ) from None\n\n\n@to_ivy_arrays_and_back\ndef ifft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.complex128)\n if norm is None:\n norm = \"backward\"\n return ivy.ifft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef ifftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [-(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = -(x.shape[axes] // 2)\n else:\n shift = [-(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n return ivy.fft(ivy.astype(a, ivy.complex128), axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef fftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = x.shape[axes] // 2\n else:\n shift = [(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@with_unsupported_dtypes({\"1.9.0 and below\": (\"float16\",)}, \"torch\")\n@to_ivy_arrays_and_back\ndef rfft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n a = ivy.array(a, dtype=ivy.float64)\n return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.12.0 and below\": (\"float16\",)}, \"numpy\")\ndef ihfft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.float64)\n if n is None:\n n = a.shape[axis]\n norm = _swap_direction(norm)\n output = ivy.conj(rfft(a, n, axis, norm=norm).ivy_array)\n return output\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int\",)}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fftfreq(n, d=1.0):\n if not isinstance(\n n, (int, type(ivy.int8), type(ivy.int16), type(ivy.int32), type(ivy.int64))\n ):\n raise ValueError(\"n should be an integer\")\n\n N = (n - 1) // 2 + 1\n val = 1.0 / (n * d)\n results = ivy.empty(tuple([n]), dtype=int)\n\n p1 = ivy.arange(0, N, dtype=int)\n results[:N] = p1\n p2 = ivy.arange(-(n // 2), 0, dtype=int)\n results[N:] = p2\n\n return results * val\n", "path": "ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py"}]} | 1,246 | 450 |
gh_patches_debug_25212 | rasdani/github-patches | git_diff | vyperlang__vyper-3030 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
can't declare variable `public` but can define custom getter
### Version Information
* vyper Version: 0.3.4+commit.f31f0ec
* OS: osx
* Python Version: 3.8.9
### What's your issue about?
This code fails to compile:
```
slates : public(HashMap[bytes32, DynArray[address, 15]])
```
but I can define my own getter for that type just fine:
```
slates : HashMap[bytes32, DynArray[address, 15]]
@external
def slate(sid :bytes32) -> DynArray[address, 15]:
    return self.slates[sid]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/ast/expansion.py`
Content:
```
1 import copy
2
3 from vyper import ast as vy_ast
4 from vyper.exceptions import CompilerPanic
5
6
7 def expand_annotated_ast(vyper_module: vy_ast.Module) -> None:
8 """
9 Perform expansion / simplification operations on an annotated Vyper AST.
10
11 This pass uses annotated type information to modify the AST, simplifying
12 logic and expanding subtrees to reduce the compexity during codegen.
13
14 Arguments
15 ---------
16 vyper_module : Module
17 Top-level Vyper AST node that has been type-checked and annotated.
18 """
19 generate_public_variable_getters(vyper_module)
20 remove_unused_statements(vyper_module)
21
22
23 def generate_public_variable_getters(vyper_module: vy_ast.Module) -> None:
24 """
25 Create getter functions for public variables.
26
27 Arguments
28 ---------
29 vyper_module : Module
30 Top-level Vyper AST node.
31 """
32
33 for node in vyper_module.get_children(vy_ast.VariableDecl, {"annotation.func.id": "public"}):
34 func_type = node._metadata["func_type"]
35 input_types, return_type = func_type.get_signature()
36 input_nodes = []
37
38 # use the annotation node as a base to build the input args and return type
39 # starting with `args[0]` to remove the surrounding `public()` call`
40 annotation = copy.copy(node.annotation.args[0])
41
42 # the base return statement is an `Attribute` node, e.g. `self.<var_name>`
43 # for each input type we wrap it in a `Subscript` to access a specific member
44 return_stmt: vy_ast.VyperNode = vy_ast.Attribute(
45 value=vy_ast.Name(id="self"), attr=func_type.name
46 )
47 return_stmt._metadata["type"] = node._metadata["type"]
48
49 for i, type_ in enumerate(input_types):
50 if not isinstance(annotation, vy_ast.Subscript):
51 # if we get here something has failed in type checking
52 raise CompilerPanic("Mismatch between node and input type while building getter")
53 if annotation.value.get("id") == "HashMap": # type: ignore
54 # for a HashMap, split the key/value types and use the key type as the next arg
55 arg, annotation = annotation.slice.value.elements # type: ignore
56 else:
57 # for other types, build an input arg node from the expected type
58 # and remove the outer `Subscript` from the annotation
59 arg = vy_ast.Name(id=type_._id)
60 annotation = annotation.value
61 input_nodes.append(vy_ast.arg(arg=f"arg{i}", annotation=arg))
62
63 # wrap the return statement in a `Subscript`
64 return_stmt = vy_ast.Subscript(
65 value=return_stmt, slice=vy_ast.Index(value=vy_ast.Name(id=f"arg{i}"))
66 )
67
68 # after iterating the input types, the remaining annotation node is our return type
69 return_node = annotation
70
71 # join everything together as a new `FunctionDef` node, annotate it
72 # with the type, and append it to the existing `Module` node
73 expanded = vy_ast.FunctionDef.from_node(
74 node.annotation,
75 name=func_type.name,
76 args=vy_ast.arguments(args=input_nodes, defaults=[]),
77 body=[vy_ast.Return(value=return_stmt)],
78 decorator_list=[vy_ast.Name(id="external"), vy_ast.Name(id="view")],
79 returns=return_node,
80 )
81 expanded._metadata["type"] = func_type
82 return_node.set_parent(expanded)
83 vyper_module.add_to_body(expanded)
84
85
86 def remove_unused_statements(vyper_module: vy_ast.Module) -> None:
87 """
88 Remove statement nodes that are unused after type checking.
89
90 Once type checking is complete, we can remove now-meaningless statements to
91 simplify the AST prior to IR generation.
92
93 Arguments
94 ---------
95 vyper_module : Module
96 Top-level Vyper AST node.
97 """
98
99 # constant declarations - values were substituted within the AST during folding
100 for node in vyper_module.get_children(vy_ast.VariableDecl, {"annotation.func.id": "constant"}):
101 vyper_module.remove_from_body(node)
102
103 # `implements: interface` statements - validated during type checking
104 for node in vyper_module.get_children(vy_ast.AnnAssign, {"target.id": "implements"}):
105 vyper_module.remove_from_body(node)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vyper/ast/expansion.py b/vyper/ast/expansion.py
--- a/vyper/ast/expansion.py
+++ b/vyper/ast/expansion.py
@@ -53,6 +53,9 @@
if annotation.value.get("id") == "HashMap": # type: ignore
# for a HashMap, split the key/value types and use the key type as the next arg
arg, annotation = annotation.slice.value.elements # type: ignore
+ elif annotation.value.get("id") == "DynArray":
+ arg = vy_ast.Name(id=type_._id)
+ annotation = annotation.slice.value.elements[0] # type: ignore
else:
# for other types, build an input arg node from the expected type
# and remove the outer `Subscript` from the annotation
@@ -66,7 +69,7 @@
)
# after iterating the input types, the remaining annotation node is our return type
- return_node = annotation
+ return_node = copy.copy(annotation)
# join everything together as a new `FunctionDef` node, annotate it
# with the type, and append it to the existing `Module` node
| {"golden_diff": "diff --git a/vyper/ast/expansion.py b/vyper/ast/expansion.py\n--- a/vyper/ast/expansion.py\n+++ b/vyper/ast/expansion.py\n@@ -53,6 +53,9 @@\n if annotation.value.get(\"id\") == \"HashMap\": # type: ignore\n # for a HashMap, split the key/value types and use the key type as the next arg\n arg, annotation = annotation.slice.value.elements # type: ignore\n+ elif annotation.value.get(\"id\") == \"DynArray\":\n+ arg = vy_ast.Name(id=type_._id)\n+ annotation = annotation.slice.value.elements[0] # type: ignore\n else:\n # for other types, build an input arg node from the expected type\n # and remove the outer `Subscript` from the annotation\n@@ -66,7 +69,7 @@\n )\n \n # after iterating the input types, the remaining annotation node is our return type\n- return_node = annotation\n+ return_node = copy.copy(annotation)\n \n # join everything together as a new `FunctionDef` node, annotate it\n # with the type, and append it to the existing `Module` node\n", "issue": "can't declare variable `public` but can define custom getter\n### Version Information\r\n\r\n* vyper Version: 0.3.4+commit.f31f0ec\r\n* OS: osx\r\n* Python Version: 3.8.9\r\n\r\n### What's your issue about?\r\n\r\nThis code fails to compile:\r\n\r\n```\r\nslates : public(HashMap[bytes32, DynArray[address, 15]])\r\n```\r\n\r\nbut I can define my own getter for that type just fine:\r\n\r\n```\r\nslates : HashMap[bytes32, DynArray[address, 15]]\r\n\r\n@external\r\ndef slate(sid :bytes32) -> DynArray[address, 15]:\r\n return self.slates[sid]\r\n```\r\n\n", "before_files": [{"content": "import copy\n\nfrom vyper import ast as vy_ast\nfrom vyper.exceptions import CompilerPanic\n\n\ndef expand_annotated_ast(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Perform expansion / simplification operations on an annotated Vyper AST.\n\n This pass uses annotated type information to modify the AST, simplifying\n logic and expanding subtrees to reduce the compexity during codegen.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node that has been type-checked and annotated.\n \"\"\"\n generate_public_variable_getters(vyper_module)\n remove_unused_statements(vyper_module)\n\n\ndef generate_public_variable_getters(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Create getter functions for public variables.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n for node in vyper_module.get_children(vy_ast.VariableDecl, {\"annotation.func.id\": \"public\"}):\n func_type = node._metadata[\"func_type\"]\n input_types, return_type = func_type.get_signature()\n input_nodes = []\n\n # use the annotation node as a base to build the input args and return type\n # starting with `args[0]` to remove the surrounding `public()` call`\n annotation = copy.copy(node.annotation.args[0])\n\n # the base return statement is an `Attribute` node, e.g. 
`self.<var_name>`\n # for each input type we wrap it in a `Subscript` to access a specific member\n return_stmt: vy_ast.VyperNode = vy_ast.Attribute(\n value=vy_ast.Name(id=\"self\"), attr=func_type.name\n )\n return_stmt._metadata[\"type\"] = node._metadata[\"type\"]\n\n for i, type_ in enumerate(input_types):\n if not isinstance(annotation, vy_ast.Subscript):\n # if we get here something has failed in type checking\n raise CompilerPanic(\"Mismatch between node and input type while building getter\")\n if annotation.value.get(\"id\") == \"HashMap\": # type: ignore\n # for a HashMap, split the key/value types and use the key type as the next arg\n arg, annotation = annotation.slice.value.elements # type: ignore\n else:\n # for other types, build an input arg node from the expected type\n # and remove the outer `Subscript` from the annotation\n arg = vy_ast.Name(id=type_._id)\n annotation = annotation.value\n input_nodes.append(vy_ast.arg(arg=f\"arg{i}\", annotation=arg))\n\n # wrap the return statement in a `Subscript`\n return_stmt = vy_ast.Subscript(\n value=return_stmt, slice=vy_ast.Index(value=vy_ast.Name(id=f\"arg{i}\"))\n )\n\n # after iterating the input types, the remaining annotation node is our return type\n return_node = annotation\n\n # join everything together as a new `FunctionDef` node, annotate it\n # with the type, and append it to the existing `Module` node\n expanded = vy_ast.FunctionDef.from_node(\n node.annotation,\n name=func_type.name,\n args=vy_ast.arguments(args=input_nodes, defaults=[]),\n body=[vy_ast.Return(value=return_stmt)],\n decorator_list=[vy_ast.Name(id=\"external\"), vy_ast.Name(id=\"view\")],\n returns=return_node,\n )\n expanded._metadata[\"type\"] = func_type\n return_node.set_parent(expanded)\n vyper_module.add_to_body(expanded)\n\n\ndef remove_unused_statements(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Remove statement nodes that are unused after type checking.\n\n Once type checking is complete, we can remove now-meaningless statements to\n simplify the AST prior to IR generation.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n # constant declarations - values were substituted within the AST during folding\n for node in vyper_module.get_children(vy_ast.VariableDecl, {\"annotation.func.id\": \"constant\"}):\n vyper_module.remove_from_body(node)\n\n # `implements: interface` statements - validated during type checking\n for node in vyper_module.get_children(vy_ast.AnnAssign, {\"target.id\": \"implements\"}):\n vyper_module.remove_from_body(node)\n", "path": "vyper/ast/expansion.py"}], "after_files": [{"content": "import copy\n\nfrom vyper import ast as vy_ast\nfrom vyper.exceptions import CompilerPanic\n\n\ndef expand_annotated_ast(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Perform expansion / simplification operations on an annotated Vyper AST.\n\n This pass uses annotated type information to modify the AST, simplifying\n logic and expanding subtrees to reduce the compexity during codegen.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node that has been type-checked and annotated.\n \"\"\"\n generate_public_variable_getters(vyper_module)\n remove_unused_statements(vyper_module)\n\n\ndef generate_public_variable_getters(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Create getter functions for public variables.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n for node in vyper_module.get_children(vy_ast.VariableDecl, 
{\"annotation.func.id\": \"public\"}):\n func_type = node._metadata[\"func_type\"]\n input_types, return_type = func_type.get_signature()\n input_nodes = []\n\n # use the annotation node as a base to build the input args and return type\n # starting with `args[0]` to remove the surrounding `public()` call`\n annotation = copy.copy(node.annotation.args[0])\n\n # the base return statement is an `Attribute` node, e.g. `self.<var_name>`\n # for each input type we wrap it in a `Subscript` to access a specific member\n return_stmt: vy_ast.VyperNode = vy_ast.Attribute(\n value=vy_ast.Name(id=\"self\"), attr=func_type.name\n )\n return_stmt._metadata[\"type\"] = node._metadata[\"type\"]\n\n for i, type_ in enumerate(input_types):\n if not isinstance(annotation, vy_ast.Subscript):\n # if we get here something has failed in type checking\n raise CompilerPanic(\"Mismatch between node and input type while building getter\")\n if annotation.value.get(\"id\") == \"HashMap\": # type: ignore\n # for a HashMap, split the key/value types and use the key type as the next arg\n arg, annotation = annotation.slice.value.elements # type: ignore\n elif annotation.value.get(\"id\") == \"DynArray\":\n arg = vy_ast.Name(id=type_._id)\n annotation = annotation.slice.value.elements[0] # type: ignore\n else:\n # for other types, build an input arg node from the expected type\n # and remove the outer `Subscript` from the annotation\n arg = vy_ast.Name(id=type_._id)\n annotation = annotation.value\n input_nodes.append(vy_ast.arg(arg=f\"arg{i}\", annotation=arg))\n\n # wrap the return statement in a `Subscript`\n return_stmt = vy_ast.Subscript(\n value=return_stmt, slice=vy_ast.Index(value=vy_ast.Name(id=f\"arg{i}\"))\n )\n\n # after iterating the input types, the remaining annotation node is our return type\n return_node = copy.copy(annotation)\n\n # join everything together as a new `FunctionDef` node, annotate it\n # with the type, and append it to the existing `Module` node\n expanded = vy_ast.FunctionDef.from_node(\n node.annotation,\n name=func_type.name,\n args=vy_ast.arguments(args=input_nodes, defaults=[]),\n body=[vy_ast.Return(value=return_stmt)],\n decorator_list=[vy_ast.Name(id=\"external\"), vy_ast.Name(id=\"view\")],\n returns=return_node,\n )\n expanded._metadata[\"type\"] = func_type\n return_node.set_parent(expanded)\n vyper_module.add_to_body(expanded)\n\n\ndef remove_unused_statements(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Remove statement nodes that are unused after type checking.\n\n Once type checking is complete, we can remove now-meaningless statements to\n simplify the AST prior to IR generation.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n # constant declarations - values were substituted within the AST during folding\n for node in vyper_module.get_children(vy_ast.VariableDecl, {\"annotation.func.id\": \"constant\"}):\n vyper_module.remove_from_body(node)\n\n # `implements: interface` statements - validated during type checking\n for node in vyper_module.get_children(vy_ast.AnnAssign, {\"target.id\": \"implements\"}):\n vyper_module.remove_from_body(node)\n", "path": "vyper/ast/expansion.py"}]} | 1,577 | 267 |
gh_patches_debug_28637 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1771 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add profile information to the verification admin
When manually reviewing verification requests, it would be helpful to have more information in the admin, such as the user's full name, location, department and website.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/verifications/admin.py`
Content:
```
1 from django.contrib import admin
2 from django.utils.timezone import now
3
4 from grandchallenge.verifications.models import Verification
5
6
7 def mark_verified(modeladmin, request, queryset):
8 queryset.update(is_verified=True, verified_at=now())
9
10
11 mark_verified.short_description = "Mark selected users as verified"
12 mark_verified.allowed_permissions = ("change",)
13
14
15 def mark_not_verified(modeladmin, request, queryset):
16 queryset.update(is_verified=False, verified_at=None)
17
18
19 mark_not_verified.short_description = "Mark selected users as not verified"
20 mark_not_verified.allowed_permissions = ("change",)
21
22
23 class VerificationAdmin(admin.ModelAdmin):
24 list_display = (
25 "user",
26 "created",
27 "signup_email",
28 "signup_email_activated",
29 "email",
30 "email_is_verified",
31 "is_verified",
32 "verified_at",
33 )
34 list_filter = ("email_is_verified", "is_verified")
35 readonly_fields = (
36 "created",
37 "modified",
38 "email_is_verified",
39 "email_verified_at",
40 "is_verified",
41 "verified_at",
42 )
43 search_fields = ("user__username", "email", "user__email")
44 actions = (mark_verified, mark_not_verified)
45 autocomplete_fields = ("user",)
46
47 def signup_email_activated(self, instance):
48 return instance.signup_email_activated
49
50 signup_email_activated.boolean = True
51
52 def get_readonly_fields(self, request, obj=None):
53 if obj:
54 return ("user", "email", *self.readonly_fields)
55 else:
56 return self.readonly_fields
57
58
59 admin.site.register(Verification, VerificationAdmin)
60
```
Path: `app/grandchallenge/verifications/models.py`
Content:
```
1 from allauth.account.signals import email_confirmed
2 from django.contrib.auth import get_user_model
3 from django.db import models
4 from django.utils.timezone import now
5 from pyswot import is_academic
6
7 from grandchallenge.subdomains.utils import reverse
8 from grandchallenge.verifications.tokens import (
9 email_verification_token_generator,
10 )
11
12
13 def email_is_trusted(*, email):
14 return is_academic(email)
15
16
17 class Verification(models.Model):
18 created = models.DateTimeField(auto_now_add=True)
19 modified = models.DateTimeField(auto_now=True)
20
21 user = models.OneToOneField(
22 get_user_model(), unique=True, on_delete=models.CASCADE,
23 )
24
25 email = models.EmailField(blank=True)
26 email_is_verified = models.BooleanField(default=False, editable=False)
27 email_verified_at = models.DateTimeField(
28 blank=True, null=True, editable=False
29 )
30
31 is_verified = models.BooleanField(default=None, null=True, editable=False)
32 verified_at = models.DateTimeField(blank=True, null=True, editable=False)
33
34 def __str__(self):
35 return f"Verification for {self.user}"
36
37 @property
38 def signup_email(self):
39 return self.user.email
40
41 @property
42 def signup_email_activated(self):
43 return self.user.emailaddress_set.filter(
44 verified=True, email=self.signup_email
45 ).exists()
46
47 @property
48 def signup_email_is_trusted(self):
49 return self.signup_email_activated and email_is_trusted(
50 email=self.signup_email
51 )
52
53 @property
54 def verification_email_is_trusted(self):
55 return self.email_is_verified and email_is_trusted(email=self.email)
56
57 @property
58 def token(self):
59 return email_verification_token_generator.make_token(self.user)
60
61 @property
62 def verification_url(self):
63 return reverse("verifications:confirm", kwargs={"token": self.token},)
64
65 def save(self, *args, **kwargs):
66 if self.signup_email_is_trusted or self.verification_email_is_trusted:
67 self.is_verified = True
68 self.verified_at = now()
69
70 super().save(*args, **kwargs)
71
72
73 def create_verification(email_address, *_, **__):
74 if (
75 email_is_trusted(email=email_address.email)
76 and not Verification.objects.filter(user=email_address.user).exists()
77 ):
78 Verification.objects.create(
79 user=email_address.user, email=email_address.email
80 )
81
82
83 email_confirmed.connect(create_verification)
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/verifications/admin.py b/app/grandchallenge/verifications/admin.py
--- a/app/grandchallenge/verifications/admin.py
+++ b/app/grandchallenge/verifications/admin.py
@@ -23,14 +23,15 @@
class VerificationAdmin(admin.ModelAdmin):
list_display = (
"user",
+ "user_info",
"created",
"signup_email",
- "signup_email_activated",
"email",
"email_is_verified",
"is_verified",
"verified_at",
)
+ list_select_related = ("user__user_profile",)
list_filter = ("email_is_verified", "is_verified")
readonly_fields = (
"created",
diff --git a/app/grandchallenge/verifications/models.py b/app/grandchallenge/verifications/models.py
--- a/app/grandchallenge/verifications/models.py
+++ b/app/grandchallenge/verifications/models.py
@@ -1,6 +1,7 @@
from allauth.account.signals import email_confirmed
from django.contrib.auth import get_user_model
from django.db import models
+from django.utils.html import format_html
from django.utils.timezone import now
from pyswot import is_academic
@@ -69,6 +70,17 @@
super().save(*args, **kwargs)
+ @property
+ def user_info(self):
+ return format_html(
+ "<span>{} <br/> {} <br/> {} <br/> {} <br/> {}</span>",
+ self.user.get_full_name(),
+ self.user.user_profile.institution,
+ self.user.user_profile.department,
+ self.user.user_profile.country,
+ self.user.user_profile.website,
+ )
+
def create_verification(email_address, *_, **__):
if (
| {"golden_diff": "diff --git a/app/grandchallenge/verifications/admin.py b/app/grandchallenge/verifications/admin.py\n--- a/app/grandchallenge/verifications/admin.py\n+++ b/app/grandchallenge/verifications/admin.py\n@@ -23,14 +23,15 @@\n class VerificationAdmin(admin.ModelAdmin):\n list_display = (\n \"user\",\n+ \"user_info\",\n \"created\",\n \"signup_email\",\n- \"signup_email_activated\",\n \"email\",\n \"email_is_verified\",\n \"is_verified\",\n \"verified_at\",\n )\n+ list_select_related = (\"user__user_profile\",)\n list_filter = (\"email_is_verified\", \"is_verified\")\n readonly_fields = (\n \"created\",\ndiff --git a/app/grandchallenge/verifications/models.py b/app/grandchallenge/verifications/models.py\n--- a/app/grandchallenge/verifications/models.py\n+++ b/app/grandchallenge/verifications/models.py\n@@ -1,6 +1,7 @@\n from allauth.account.signals import email_confirmed\n from django.contrib.auth import get_user_model\n from django.db import models\n+from django.utils.html import format_html\n from django.utils.timezone import now\n from pyswot import is_academic\n \n@@ -69,6 +70,17 @@\n \n super().save(*args, **kwargs)\n \n+ @property\n+ def user_info(self):\n+ return format_html(\n+ \"<span>{} <br/> {} <br/> {} <br/> {} <br/> {}</span>\",\n+ self.user.get_full_name(),\n+ self.user.user_profile.institution,\n+ self.user.user_profile.department,\n+ self.user.user_profile.country,\n+ self.user.user_profile.website,\n+ )\n+\n \n def create_verification(email_address, *_, **__):\n if (\n", "issue": "Add profile information to the verification admin\nWhen manually reviewing verification requests it would be helpful to have more information in the admin such as the users full name, location, department and website.\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.utils.timezone import now\n\nfrom grandchallenge.verifications.models import Verification\n\n\ndef mark_verified(modeladmin, request, queryset):\n queryset.update(is_verified=True, verified_at=now())\n\n\nmark_verified.short_description = \"Mark selected users as verified\"\nmark_verified.allowed_permissions = (\"change\",)\n\n\ndef mark_not_verified(modeladmin, request, queryset):\n queryset.update(is_verified=False, verified_at=None)\n\n\nmark_not_verified.short_description = \"Mark selected users as not verified\"\nmark_not_verified.allowed_permissions = (\"change\",)\n\n\nclass VerificationAdmin(admin.ModelAdmin):\n list_display = (\n \"user\",\n \"created\",\n \"signup_email\",\n \"signup_email_activated\",\n \"email\",\n \"email_is_verified\",\n \"is_verified\",\n \"verified_at\",\n )\n list_filter = (\"email_is_verified\", \"is_verified\")\n readonly_fields = (\n \"created\",\n \"modified\",\n \"email_is_verified\",\n \"email_verified_at\",\n \"is_verified\",\n \"verified_at\",\n )\n search_fields = (\"user__username\", \"email\", \"user__email\")\n actions = (mark_verified, mark_not_verified)\n autocomplete_fields = (\"user\",)\n\n def signup_email_activated(self, instance):\n return instance.signup_email_activated\n\n signup_email_activated.boolean = True\n\n def get_readonly_fields(self, request, obj=None):\n if obj:\n return (\"user\", \"email\", *self.readonly_fields)\n else:\n return self.readonly_fields\n\n\nadmin.site.register(Verification, VerificationAdmin)\n", "path": "app/grandchallenge/verifications/admin.py"}, {"content": "from allauth.account.signals import email_confirmed\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom 
django.utils.timezone import now\nfrom pyswot import is_academic\n\nfrom grandchallenge.subdomains.utils import reverse\nfrom grandchallenge.verifications.tokens import (\n email_verification_token_generator,\n)\n\n\ndef email_is_trusted(*, email):\n return is_academic(email)\n\n\nclass Verification(models.Model):\n created = models.DateTimeField(auto_now_add=True)\n modified = models.DateTimeField(auto_now=True)\n\n user = models.OneToOneField(\n get_user_model(), unique=True, on_delete=models.CASCADE,\n )\n\n email = models.EmailField(blank=True)\n email_is_verified = models.BooleanField(default=False, editable=False)\n email_verified_at = models.DateTimeField(\n blank=True, null=True, editable=False\n )\n\n is_verified = models.BooleanField(default=None, null=True, editable=False)\n verified_at = models.DateTimeField(blank=True, null=True, editable=False)\n\n def __str__(self):\n return f\"Verification for {self.user}\"\n\n @property\n def signup_email(self):\n return self.user.email\n\n @property\n def signup_email_activated(self):\n return self.user.emailaddress_set.filter(\n verified=True, email=self.signup_email\n ).exists()\n\n @property\n def signup_email_is_trusted(self):\n return self.signup_email_activated and email_is_trusted(\n email=self.signup_email\n )\n\n @property\n def verification_email_is_trusted(self):\n return self.email_is_verified and email_is_trusted(email=self.email)\n\n @property\n def token(self):\n return email_verification_token_generator.make_token(self.user)\n\n @property\n def verification_url(self):\n return reverse(\"verifications:confirm\", kwargs={\"token\": self.token},)\n\n def save(self, *args, **kwargs):\n if self.signup_email_is_trusted or self.verification_email_is_trusted:\n self.is_verified = True\n self.verified_at = now()\n\n super().save(*args, **kwargs)\n\n\ndef create_verification(email_address, *_, **__):\n if (\n email_is_trusted(email=email_address.email)\n and not Verification.objects.filter(user=email_address.user).exists()\n ):\n Verification.objects.create(\n user=email_address.user, email=email_address.email\n )\n\n\nemail_confirmed.connect(create_verification)\n", "path": "app/grandchallenge/verifications/models.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.utils.timezone import now\n\nfrom grandchallenge.verifications.models import Verification\n\n\ndef mark_verified(modeladmin, request, queryset):\n queryset.update(is_verified=True, verified_at=now())\n\n\nmark_verified.short_description = \"Mark selected users as verified\"\nmark_verified.allowed_permissions = (\"change\",)\n\n\ndef mark_not_verified(modeladmin, request, queryset):\n queryset.update(is_verified=False, verified_at=None)\n\n\nmark_not_verified.short_description = \"Mark selected users as not verified\"\nmark_not_verified.allowed_permissions = (\"change\",)\n\n\nclass VerificationAdmin(admin.ModelAdmin):\n list_display = (\n \"user\",\n \"user_info\",\n \"created\",\n \"signup_email\",\n \"email\",\n \"email_is_verified\",\n \"is_verified\",\n \"verified_at\",\n )\n list_select_related = (\"user__user_profile\",)\n list_filter = (\"email_is_verified\", \"is_verified\")\n readonly_fields = (\n \"created\",\n \"modified\",\n \"email_is_verified\",\n \"email_verified_at\",\n \"is_verified\",\n \"verified_at\",\n )\n search_fields = (\"user__username\", \"email\", \"user__email\")\n actions = (mark_verified, mark_not_verified)\n autocomplete_fields = (\"user\",)\n\n def signup_email_activated(self, instance):\n return 
instance.signup_email_activated\n\n signup_email_activated.boolean = True\n\n def get_readonly_fields(self, request, obj=None):\n if obj:\n return (\"user\", \"email\", *self.readonly_fields)\n else:\n return self.readonly_fields\n\n\nadmin.site.register(Verification, VerificationAdmin)\n", "path": "app/grandchallenge/verifications/admin.py"}, {"content": "from allauth.account.signals import email_confirmed\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom django.utils.html import format_html\nfrom django.utils.timezone import now\nfrom pyswot import is_academic\n\nfrom grandchallenge.subdomains.utils import reverse\nfrom grandchallenge.verifications.tokens import (\n email_verification_token_generator,\n)\n\n\ndef email_is_trusted(*, email):\n return is_academic(email)\n\n\nclass Verification(models.Model):\n created = models.DateTimeField(auto_now_add=True)\n modified = models.DateTimeField(auto_now=True)\n\n user = models.OneToOneField(\n get_user_model(), unique=True, on_delete=models.CASCADE,\n )\n\n email = models.EmailField(blank=True)\n email_is_verified = models.BooleanField(default=False, editable=False)\n email_verified_at = models.DateTimeField(\n blank=True, null=True, editable=False\n )\n\n is_verified = models.BooleanField(default=None, null=True, editable=False)\n verified_at = models.DateTimeField(blank=True, null=True, editable=False)\n\n def __str__(self):\n return f\"Verification for {self.user}\"\n\n @property\n def signup_email(self):\n return self.user.email\n\n @property\n def signup_email_activated(self):\n return self.user.emailaddress_set.filter(\n verified=True, email=self.signup_email\n ).exists()\n\n @property\n def signup_email_is_trusted(self):\n return self.signup_email_activated and email_is_trusted(\n email=self.signup_email\n )\n\n @property\n def verification_email_is_trusted(self):\n return self.email_is_verified and email_is_trusted(email=self.email)\n\n @property\n def token(self):\n return email_verification_token_generator.make_token(self.user)\n\n @property\n def verification_url(self):\n return reverse(\"verifications:confirm\", kwargs={\"token\": self.token},)\n\n def save(self, *args, **kwargs):\n if self.signup_email_is_trusted or self.verification_email_is_trusted:\n self.is_verified = True\n self.verified_at = now()\n\n super().save(*args, **kwargs)\n\n @property\n def user_info(self):\n return format_html(\n \"<span>{} <br/> {} <br/> {} <br/> {} <br/> {}</span>\",\n self.user.get_full_name(),\n self.user.user_profile.institution,\n self.user.user_profile.department,\n self.user.user_profile.country,\n self.user.user_profile.website,\n )\n\n\ndef create_verification(email_address, *_, **__):\n if (\n email_is_trusted(email=email_address.email)\n and not Verification.objects.filter(user=email_address.user).exists()\n ):\n Verification.objects.create(\n user=email_address.user, email=email_address.email\n )\n\n\nemail_confirmed.connect(create_verification)\n", "path": "app/grandchallenge/verifications/models.py"}]} | 1,446 | 393 |
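A side note on the patch above: the new `user_info` column is built with `django.utils.html.format_html` rather than plain string concatenation, so the interpolated profile fields are autoescaped before they reach the admin changelist. A small standalone sketch of that behaviour is shown below; it assumes Django is installed, and the values are made up.

```python
# Illustrates why the patch uses format_html for the admin column:
# interpolated arguments are escaped, so profile fields cannot inject markup.
from django.utils.html import format_html

snippet = format_html(
    "<span>{} <br/> {}</span>",
    "Jane Doe",            # hypothetical full name
    "<b>injected</b>",     # hypothetical malicious profile field
)
print(snippet)
# <span>Jane Doe <br/> &lt;b&gt;injected&lt;/b&gt;</span>
```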
gh_patches_debug_37750 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-2859 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FunnyOrDie extractor not finding thumbnails
`python -m youtube_dl -v --skip-download --write-info-json --no-playlist -f mp4 http://www.funnyordie.com/videos/e402820827/please-use-this-song-jon-lajoie` extracts the video properly but not a thumbnail. Here's the resulting JSON:
{"display_id": "e402820827", "extractor": "FunnyOrDie", "description": "Please use this to sell something", "format": "0 - unknown", "format_id": "0", "playlist_index": null, "stitle": "Please Use This Song (Jon Lajoie)", "playlist": null, "title": "Please Use This Song (Jon Lajoie)", "url": "http://vo.fod4.com/v/e402820827/v600.mp4", "extractor_key": "FunnyOrDie", "id": "e402820827", "ext": "mp4", "webpage_url": "http://www.funnyordie.com/videos/e402820827/please-use-this-song-jon-lajoie", "fulltitle": "Please Use This Song (Jon Lajoie)", "thumbnail": null, "webpage_url_basename": "please-use-this-song-jon-lajoie"}
FunnyOrDie's RSS feed entry for this page does contain a thumbnail:
`media:thumbnail url="http://t.fod4.com/t/e402820827/c480x270_50.jpg" width="464" height="348"`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/funnyordie.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import json
4 import re
5
6 from .common import InfoExtractor
7
8
9 class FunnyOrDieIE(InfoExtractor):
10 _VALID_URL = r'https?://(?:www\.)?funnyordie\.com/(?P<type>embed|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'
11 _TEST = {
12 'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',
13 'file': '0732f586d7.mp4',
14 'md5': 'f647e9e90064b53b6e046e75d0241fbd',
15 'info_dict': {
16 'description': ('Lyrics changed to match the video. Spoken cameo '
17 'by Obscurus Lupa (from ThatGuyWithTheGlasses.com). Based on a '
18 'concept by Dustin McLean (DustFilms.com). Performed, edited, '
19 'and written by David A. Scott.'),
20 'title': 'Heart-Shaped Box: Literal Video Version',
21 },
22 }
23
24 def _real_extract(self, url):
25 mobj = re.match(self._VALID_URL, url)
26
27 video_id = mobj.group('id')
28 webpage = self._download_webpage(url, video_id)
29
30 video_url = self._search_regex(
31 [r'type="video/mp4" src="(.*?)"', r'src="([^>]*?)" type=\'video/mp4\''],
32 webpage, 'video URL', flags=re.DOTALL)
33
34 if mobj.group('type') == 'embed':
35 post_json = self._search_regex(
36 r'fb_post\s*=\s*(\{.*?\});', webpage, 'post details')
37 post = json.loads(post_json)
38 title = post['name']
39 description = post.get('description')
40 thumbnail = post.get('picture')
41 else:
42 title = self._og_search_title(webpage)
43 description = self._og_search_description(webpage)
44 thumbnail = None
45
46 return {
47 'id': video_id,
48 'url': video_url,
49 'ext': 'mp4',
50 'title': title,
51 'description': description,
52 'thumbnail': thumbnail,
53 }
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/funnyordie.py b/youtube_dl/extractor/funnyordie.py
--- a/youtube_dl/extractor/funnyordie.py
+++ b/youtube_dl/extractor/funnyordie.py
@@ -8,18 +8,27 @@
class FunnyOrDieIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?funnyordie\.com/(?P<type>embed|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'
- _TEST = {
+ _TESTS = [{
'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',
- 'file': '0732f586d7.mp4',
'md5': 'f647e9e90064b53b6e046e75d0241fbd',
'info_dict': {
- 'description': ('Lyrics changed to match the video. Spoken cameo '
- 'by Obscurus Lupa (from ThatGuyWithTheGlasses.com). Based on a '
- 'concept by Dustin McLean (DustFilms.com). Performed, edited, '
- 'and written by David A. Scott.'),
+ 'id': '0732f586d7',
+ 'ext': 'mp4',
'title': 'Heart-Shaped Box: Literal Video Version',
+ 'description': 'md5:ea09a01bc9a1c46d9ab696c01747c338',
+ 'thumbnail': 're:^http:.*\.jpg$',
+ },
+ }, {
+ 'url': 'http://www.funnyordie.com/embed/e402820827',
+ 'md5': '0e0c5a7bf45c52b95cd16aa7f28be0b6',
+ 'info_dict': {
+ 'id': 'e402820827',
+ 'ext': 'mp4',
+ 'title': 'Please Use This Song (Jon Lajoie)',
+ 'description': 'md5:2ed27d364f5a805a6dba199faaf6681d',
+ 'thumbnail': 're:^http:.*\.jpg$',
},
- }
+ }]
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
@@ -31,23 +40,15 @@
[r'type="video/mp4" src="(.*?)"', r'src="([^>]*?)" type=\'video/mp4\''],
webpage, 'video URL', flags=re.DOTALL)
- if mobj.group('type') == 'embed':
- post_json = self._search_regex(
- r'fb_post\s*=\s*(\{.*?\});', webpage, 'post details')
- post = json.loads(post_json)
- title = post['name']
- description = post.get('description')
- thumbnail = post.get('picture')
- else:
- title = self._og_search_title(webpage)
- description = self._og_search_description(webpage)
- thumbnail = None
+ post_json = self._search_regex(
+ r'fb_post\s*=\s*(\{.*?\});', webpage, 'post details')
+ post = json.loads(post_json)
return {
'id': video_id,
'url': video_url,
'ext': 'mp4',
- 'title': title,
- 'description': description,
- 'thumbnail': thumbnail,
+ 'title': post['name'],
+ 'description': post.get('description'),
+ 'thumbnail': post.get('picture'),
}
| {"golden_diff": "diff --git a/youtube_dl/extractor/funnyordie.py b/youtube_dl/extractor/funnyordie.py\n--- a/youtube_dl/extractor/funnyordie.py\n+++ b/youtube_dl/extractor/funnyordie.py\n@@ -8,18 +8,27 @@\n \n class FunnyOrDieIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?funnyordie\\.com/(?P<type>embed|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'\n- _TEST = {\n+ _TESTS = [{\n 'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',\n- 'file': '0732f586d7.mp4',\n 'md5': 'f647e9e90064b53b6e046e75d0241fbd',\n 'info_dict': {\n- 'description': ('Lyrics changed to match the video. Spoken cameo '\n- 'by Obscurus Lupa (from ThatGuyWithTheGlasses.com). Based on a '\n- 'concept by Dustin McLean (DustFilms.com). Performed, edited, '\n- 'and written by David A. Scott.'),\n+ 'id': '0732f586d7',\n+ 'ext': 'mp4',\n 'title': 'Heart-Shaped Box: Literal Video Version',\n+ 'description': 'md5:ea09a01bc9a1c46d9ab696c01747c338',\n+ 'thumbnail': 're:^http:.*\\.jpg$',\n+ },\n+ }, {\n+ 'url': 'http://www.funnyordie.com/embed/e402820827',\n+ 'md5': '0e0c5a7bf45c52b95cd16aa7f28be0b6',\n+ 'info_dict': {\n+ 'id': 'e402820827',\n+ 'ext': 'mp4',\n+ 'title': 'Please Use This Song (Jon Lajoie)',\n+ 'description': 'md5:2ed27d364f5a805a6dba199faaf6681d',\n+ 'thumbnail': 're:^http:.*\\.jpg$',\n },\n- }\n+ }]\n \n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n@@ -31,23 +40,15 @@\n [r'type=\"video/mp4\" src=\"(.*?)\"', r'src=\"([^>]*?)\" type=\\'video/mp4\\''],\n webpage, 'video URL', flags=re.DOTALL)\n \n- if mobj.group('type') == 'embed':\n- post_json = self._search_regex(\n- r'fb_post\\s*=\\s*(\\{.*?\\});', webpage, 'post details')\n- post = json.loads(post_json)\n- title = post['name']\n- description = post.get('description')\n- thumbnail = post.get('picture')\n- else:\n- title = self._og_search_title(webpage)\n- description = self._og_search_description(webpage)\n- thumbnail = None\n+ post_json = self._search_regex(\n+ r'fb_post\\s*=\\s*(\\{.*?\\});', webpage, 'post details')\n+ post = json.loads(post_json)\n \n return {\n 'id': video_id,\n 'url': video_url,\n 'ext': 'mp4',\n- 'title': title,\n- 'description': description,\n- 'thumbnail': thumbnail,\n+ 'title': post['name'],\n+ 'description': post.get('description'),\n+ 'thumbnail': post.get('picture'),\n }\n", "issue": "FunnyOrDie extractor not finding thumbnails\npython -m youtube_dl -v --skip-download --write-info-json --no-playlist -f mp4 http://www.funnyordie.com/videos/e402820827/please-use-this-song-jon-lajoie extracts the video properly but not a thumbnail. 
Here's the resulting JSON:\n{\"display_id\": \"e402820827\", \"extractor\": \"FunnyOrDie\", \"description\": \"Please use this to sell something\", \"format\": \"0 - unknown\", \"format_id\": \"0\", \"playlist_index\": null, \"stitle\": \"Please Use This Song (Jon Lajoie)\", \"playlist\": null, \"title\": \"Please Use This Song (Jon Lajoie)\", \"url\": \"http://vo.fod4.com/v/e402820827/v600.mp4\", \"extractor_key\": \"FunnyOrDie\", \"id\": \"e402820827\", \"ext\": \"mp4\", \"webpage_url\": \"http://www.funnyordie.com/videos/e402820827/please-use-this-song-jon-lajoie\", \"fulltitle\": \"Please Use This Song (Jon Lajoie)\", \"thumbnail\": null, \"webpage_url_basename\": \"please-use-this-song-jon-lajoie\"}\n\nFunnyorDie's RSS feed entry for this page does contain a thumbnail:\nmedia:thumbnail url=\"http://t.fod4.com/t/e402820827/c480x270_50.jpg\" width=\"464\" height=\"348\"\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport json\nimport re\n\nfrom .common import InfoExtractor\n\n\nclass FunnyOrDieIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?funnyordie\\.com/(?P<type>embed|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'\n _TEST = {\n 'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',\n 'file': '0732f586d7.mp4',\n 'md5': 'f647e9e90064b53b6e046e75d0241fbd',\n 'info_dict': {\n 'description': ('Lyrics changed to match the video. Spoken cameo '\n 'by Obscurus Lupa (from ThatGuyWithTheGlasses.com). Based on a '\n 'concept by Dustin McLean (DustFilms.com). Performed, edited, '\n 'and written by David A. Scott.'),\n 'title': 'Heart-Shaped Box: Literal Video Version',\n },\n }\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n\n video_id = mobj.group('id')\n webpage = self._download_webpage(url, video_id)\n\n video_url = self._search_regex(\n [r'type=\"video/mp4\" src=\"(.*?)\"', r'src=\"([^>]*?)\" type=\\'video/mp4\\''],\n webpage, 'video URL', flags=re.DOTALL)\n\n if mobj.group('type') == 'embed':\n post_json = self._search_regex(\n r'fb_post\\s*=\\s*(\\{.*?\\});', webpage, 'post details')\n post = json.loads(post_json)\n title = post['name']\n description = post.get('description')\n thumbnail = post.get('picture')\n else:\n title = self._og_search_title(webpage)\n description = self._og_search_description(webpage)\n thumbnail = None\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'ext': 'mp4',\n 'title': title,\n 'description': description,\n 'thumbnail': thumbnail,\n }\n", "path": "youtube_dl/extractor/funnyordie.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport json\nimport re\n\nfrom .common import InfoExtractor\n\n\nclass FunnyOrDieIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?funnyordie\\.com/(?P<type>embed|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])'\n _TESTS = [{\n 'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version',\n 'md5': 'f647e9e90064b53b6e046e75d0241fbd',\n 'info_dict': {\n 'id': '0732f586d7',\n 'ext': 'mp4',\n 'title': 'Heart-Shaped Box: Literal Video Version',\n 'description': 'md5:ea09a01bc9a1c46d9ab696c01747c338',\n 'thumbnail': 're:^http:.*\\.jpg$',\n },\n }, {\n 'url': 'http://www.funnyordie.com/embed/e402820827',\n 'md5': '0e0c5a7bf45c52b95cd16aa7f28be0b6',\n 'info_dict': {\n 'id': 'e402820827',\n 'ext': 'mp4',\n 'title': 'Please Use This Song (Jon Lajoie)',\n 'description': 'md5:2ed27d364f5a805a6dba199faaf6681d',\n 'thumbnail': 're:^http:.*\\.jpg$',\n },\n }]\n\n def _real_extract(self, 
url):\n mobj = re.match(self._VALID_URL, url)\n\n video_id = mobj.group('id')\n webpage = self._download_webpage(url, video_id)\n\n video_url = self._search_regex(\n [r'type=\"video/mp4\" src=\"(.*?)\"', r'src=\"([^>]*?)\" type=\\'video/mp4\\''],\n webpage, 'video URL', flags=re.DOTALL)\n\n post_json = self._search_regex(\n r'fb_post\\s*=\\s*(\\{.*?\\});', webpage, 'post details')\n post = json.loads(post_json)\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'ext': 'mp4',\n 'title': post['name'],\n 'description': post.get('description'),\n 'thumbnail': post.get('picture'),\n }\n", "path": "youtube_dl/extractor/funnyordie.py"}]} | 1,248 | 899 |
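The fix above relies on the `fb_post` JSON object that FunnyOrDie embeds in both the `/videos/` and `/embed/` pages, which already carries the title, description and thumbnail. The sketch below reproduces that extraction step on its own, using a made-up page fragment in place of the real webpage downloaded by the extractor; the regex is the one from the patch, with `re.DOTALL` added only because this toy fragment spans lines.

```python
import json
import re

# Hypothetical excerpt standing in for the downloaded page; the real extractor
# runs the same regex against the full HTML returned by _download_webpage().
webpage = """<script>
  var fb_post = {"name": "Please Use This Song (Jon Lajoie)",
                 "description": "Please use this to sell something",
                 "picture": "http://t.fod4.com/t/e402820827/c480x270_50.jpg"};
</script>"""

post = json.loads(re.search(r"fb_post\s*=\s*(\{.*?\});", webpage, re.DOTALL).group(1))
print(post["name"])         # Please Use This Song (Jon Lajoie)
print(post.get("picture"))  # http://t.fod4.com/t/e402820827/c480x270_50.jpg
```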
gh_patches_debug_37748 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1123 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for OTEL_PROPAGATORS
The spec describes environment variables that should be supported to configure propagators; this feature request is to add support for them in the current implementation.
https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/sdk-environment-variables.md
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/propagators/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 API for propagation of context.
17
18 Example::
19
20 import flask
21 import requests
22 from opentelemetry import propagators
23
24
25 PROPAGATOR = propagators.get_global_textmap()
26
27
28 def get_header_from_flask_request(request, key):
29 return request.headers.get_all(key)
30
31 def set_header_into_requests_request(request: requests.Request,
32 key: str, value: str):
33 request.headers[key] = value
34
35 def example_route():
36 context = PROPAGATOR.extract(
37 get_header_from_flask_request,
38 flask.request
39 )
40 request_to_downstream = requests.Request(
41 "GET", "http://httpbin.org/get"
42 )
43 PROPAGATOR.inject(
44 set_header_into_requests_request,
45 request_to_downstream,
46 context=context
47 )
48 session = requests.Session()
49 session.send(request_to_downstream.prepare())
50
51
52 .. _Propagation API Specification:
53 https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/api-propagators.md
54 """
55
56 import typing
57
58 from opentelemetry.baggage.propagation import BaggagePropagator
59 from opentelemetry.context.context import Context
60 from opentelemetry.propagators import composite
61 from opentelemetry.trace.propagation import textmap
62 from opentelemetry.trace.propagation.tracecontext import (
63 TraceContextTextMapPropagator,
64 )
65
66
67 def extract(
68 get_from_carrier: textmap.Getter[textmap.TextMapPropagatorT],
69 carrier: textmap.TextMapPropagatorT,
70 context: typing.Optional[Context] = None,
71 ) -> Context:
72 """ Uses the configured propagator to extract a Context from the carrier.
73
74 Args:
75 get_from_carrier: a function that can retrieve zero
76 or more values from the carrier. In the case that
77 the value does not exist, return an empty list.
78 carrier: and object which contains values that are
79 used to construct a Context. This object
80 must be paired with an appropriate get_from_carrier
81 which understands how to extract a value from it.
82 context: an optional Context to use. Defaults to current
83 context if not set.
84 """
85 return get_global_textmap().extract(get_from_carrier, carrier, context)
86
87
88 def inject(
89 set_in_carrier: textmap.Setter[textmap.TextMapPropagatorT],
90 carrier: textmap.TextMapPropagatorT,
91 context: typing.Optional[Context] = None,
92 ) -> None:
93 """ Uses the configured propagator to inject a Context into the carrier.
94
95 Args:
96 set_in_carrier: A setter function that can set values
97 on the carrier.
98 carrier: An object that contains a representation of HTTP
99 headers. Should be paired with set_in_carrier, which
100 should know how to set header values on the carrier.
101 context: an optional Context to use. Defaults to current
102 context if not set.
103 """
104 get_global_textmap().inject(set_in_carrier, carrier, context)
105
106
107 _HTTP_TEXT_FORMAT = composite.CompositeHTTPPropagator(
108 [TraceContextTextMapPropagator(), BaggagePropagator()],
109 ) # type: textmap.TextMapPropagator
110
111
112 def get_global_textmap() -> textmap.TextMapPropagator:
113 return _HTTP_TEXT_FORMAT
114
115
116 def set_global_textmap(http_text_format: textmap.TextMapPropagator,) -> None:
117 global _HTTP_TEXT_FORMAT # pylint:disable=global-statement
118 _HTTP_TEXT_FORMAT = http_text_format
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-api/src/opentelemetry/propagators/__init__.py b/opentelemetry-api/src/opentelemetry/propagators/__init__.py
--- a/opentelemetry-api/src/opentelemetry/propagators/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/propagators/__init__.py
@@ -15,6 +15,21 @@
"""
API for propagation of context.
+The propagators for the
+``opentelemetry.propagators.composite.CompositeHTTPPropagator`` can be defined
+via configuration in the ``OTEL_PROPAGATORS`` environment variable. This
+variable should be set to a comma-separated string of names of values for the
+``opentelemetry_propagator`` entry point. For example, setting
+``OTEL_PROPAGATORS`` to ``tracecontext,baggage`` (which is the default value)
+would instantiate
+``opentelemetry.propagators.composite.CompositeHTTPPropagator`` with 2
+propagators, one of type
+``opentelemetry.trace.propagation.tracecontext.TraceContextTextMapPropagator``
+and other of type ``opentelemetry.baggage.propagation.BaggagePropagator``.
+Notice that these propagator classes are defined as
+``opentelemetry_propagator`` entry points in the ``setup.cfg`` file of
+``opentelemetry``.
+
Example::
import flask
@@ -54,14 +69,16 @@
"""
import typing
+from logging import getLogger
+
+from pkg_resources import iter_entry_points
-from opentelemetry.baggage.propagation import BaggagePropagator
+from opentelemetry.configuration import Configuration
from opentelemetry.context.context import Context
from opentelemetry.propagators import composite
from opentelemetry.trace.propagation import textmap
-from opentelemetry.trace.propagation.tracecontext import (
- TraceContextTextMapPropagator,
-)
+
+logger = getLogger(__name__)
def extract(
@@ -104,9 +121,25 @@
get_global_textmap().inject(set_in_carrier, carrier, context)
-_HTTP_TEXT_FORMAT = composite.CompositeHTTPPropagator(
- [TraceContextTextMapPropagator(), BaggagePropagator()],
-) # type: textmap.TextMapPropagator
+try:
+
+ propagators = []
+
+ for propagator in ( # type: ignore
+ Configuration().get("PROPAGATORS", "tracecontext,baggage").split(",") # type: ignore
+ ):
+
+ propagators.append( # type: ignore
+ next( # type: ignore
+ iter_entry_points("opentelemetry_propagator", propagator) # type: ignore
+ ).load()()
+ )
+
+except Exception: # pylint: disable=broad-except
+ logger.exception("Failed to load configured propagators")
+ raise
+
+_HTTP_TEXT_FORMAT = composite.CompositeHTTPPropagator(propagators) # type: ignore
def get_global_textmap() -> textmap.TextMapPropagator:
@@ -115,4 +148,4 @@
def set_global_textmap(http_text_format: textmap.TextMapPropagator,) -> None:
global _HTTP_TEXT_FORMAT # pylint:disable=global-statement
- _HTTP_TEXT_FORMAT = http_text_format
+ _HTTP_TEXT_FORMAT = http_text_format # type: ignore
| {"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/propagators/__init__.py b/opentelemetry-api/src/opentelemetry/propagators/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/propagators/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/propagators/__init__.py\n@@ -15,6 +15,21 @@\n \"\"\"\n API for propagation of context.\n \n+The propagators for the\n+``opentelemetry.propagators.composite.CompositeHTTPPropagator`` can be defined\n+via configuration in the ``OTEL_PROPAGATORS`` environment variable. This\n+variable should be set to a comma-separated string of names of values for the\n+``opentelemetry_propagator`` entry point. For example, setting\n+``OTEL_PROPAGATORS`` to ``tracecontext,baggage`` (which is the default value)\n+would instantiate\n+``opentelemetry.propagators.composite.CompositeHTTPPropagator`` with 2\n+propagators, one of type\n+``opentelemetry.trace.propagation.tracecontext.TraceContextTextMapPropagator``\n+and other of type ``opentelemetry.baggage.propagation.BaggagePropagator``.\n+Notice that these propagator classes are defined as\n+``opentelemetry_propagator`` entry points in the ``setup.cfg`` file of\n+``opentelemetry``.\n+\n Example::\n \n import flask\n@@ -54,14 +69,16 @@\n \"\"\"\n \n import typing\n+from logging import getLogger\n+\n+from pkg_resources import iter_entry_points\n \n-from opentelemetry.baggage.propagation import BaggagePropagator\n+from opentelemetry.configuration import Configuration\n from opentelemetry.context.context import Context\n from opentelemetry.propagators import composite\n from opentelemetry.trace.propagation import textmap\n-from opentelemetry.trace.propagation.tracecontext import (\n- TraceContextTextMapPropagator,\n-)\n+\n+logger = getLogger(__name__)\n \n \n def extract(\n@@ -104,9 +121,25 @@\n get_global_textmap().inject(set_in_carrier, carrier, context)\n \n \n-_HTTP_TEXT_FORMAT = composite.CompositeHTTPPropagator(\n- [TraceContextTextMapPropagator(), BaggagePropagator()],\n-) # type: textmap.TextMapPropagator\n+try:\n+\n+ propagators = []\n+\n+ for propagator in ( # type: ignore\n+ Configuration().get(\"PROPAGATORS\", \"tracecontext,baggage\").split(\",\") # type: ignore\n+ ):\n+\n+ propagators.append( # type: ignore\n+ next( # type: ignore\n+ iter_entry_points(\"opentelemetry_propagator\", propagator) # type: ignore\n+ ).load()()\n+ )\n+\n+except Exception: # pylint: disable=broad-except\n+ logger.exception(\"Failed to load configured propagators\")\n+ raise\n+\n+_HTTP_TEXT_FORMAT = composite.CompositeHTTPPropagator(propagators) # type: ignore\n \n \n def get_global_textmap() -> textmap.TextMapPropagator:\n@@ -115,4 +148,4 @@\n \n def set_global_textmap(http_text_format: textmap.TextMapPropagator,) -> None:\n global _HTTP_TEXT_FORMAT # pylint:disable=global-statement\n- _HTTP_TEXT_FORMAT = http_text_format\n+ _HTTP_TEXT_FORMAT = http_text_format # type: ignore\n", "issue": "Add support for OTEL_PROPAGATORS\nThe spec describes environment variables that should be supported to configure propagators, this feature request is to add support in the current implementation.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/sdk-environment-variables.md\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless 
required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAPI for propagation of context.\n\nExample::\n\n import flask\n import requests\n from opentelemetry import propagators\n\n\n PROPAGATOR = propagators.get_global_textmap()\n\n\n def get_header_from_flask_request(request, key):\n return request.headers.get_all(key)\n\n def set_header_into_requests_request(request: requests.Request,\n key: str, value: str):\n request.headers[key] = value\n\n def example_route():\n context = PROPAGATOR.extract(\n get_header_from_flask_request,\n flask.request\n )\n request_to_downstream = requests.Request(\n \"GET\", \"http://httpbin.org/get\"\n )\n PROPAGATOR.inject(\n set_header_into_requests_request,\n request_to_downstream,\n context=context\n )\n session = requests.Session()\n session.send(request_to_downstream.prepare())\n\n\n.. _Propagation API Specification:\n https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/api-propagators.md\n\"\"\"\n\nimport typing\n\nfrom opentelemetry.baggage.propagation import BaggagePropagator\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import composite\nfrom opentelemetry.trace.propagation import textmap\nfrom opentelemetry.trace.propagation.tracecontext import (\n TraceContextTextMapPropagator,\n)\n\n\ndef extract(\n get_from_carrier: textmap.Getter[textmap.TextMapPropagatorT],\n carrier: textmap.TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n) -> Context:\n \"\"\" Uses the configured propagator to extract a Context from the carrier.\n\n Args:\n get_from_carrier: a function that can retrieve zero\n or more values from the carrier. In the case that\n the value does not exist, return an empty list.\n carrier: and object which contains values that are\n used to construct a Context. This object\n must be paired with an appropriate get_from_carrier\n which understands how to extract a value from it.\n context: an optional Context to use. Defaults to current\n context if not set.\n \"\"\"\n return get_global_textmap().extract(get_from_carrier, carrier, context)\n\n\ndef inject(\n set_in_carrier: textmap.Setter[textmap.TextMapPropagatorT],\n carrier: textmap.TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n) -> None:\n \"\"\" Uses the configured propagator to inject a Context into the carrier.\n\n Args:\n set_in_carrier: A setter function that can set values\n on the carrier.\n carrier: An object that contains a representation of HTTP\n headers. Should be paired with set_in_carrier, which\n should know how to set header values on the carrier.\n context: an optional Context to use. 
Defaults to current\n context if not set.\n \"\"\"\n get_global_textmap().inject(set_in_carrier, carrier, context)\n\n\n_HTTP_TEXT_FORMAT = composite.CompositeHTTPPropagator(\n [TraceContextTextMapPropagator(), BaggagePropagator()],\n) # type: textmap.TextMapPropagator\n\n\ndef get_global_textmap() -> textmap.TextMapPropagator:\n return _HTTP_TEXT_FORMAT\n\n\ndef set_global_textmap(http_text_format: textmap.TextMapPropagator,) -> None:\n global _HTTP_TEXT_FORMAT # pylint:disable=global-statement\n _HTTP_TEXT_FORMAT = http_text_format\n", "path": "opentelemetry-api/src/opentelemetry/propagators/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAPI for propagation of context.\n\nThe propagators for the\n``opentelemetry.propagators.composite.CompositeHTTPPropagator`` can be defined\nvia configuration in the ``OTEL_PROPAGATORS`` environment variable. This\nvariable should be set to a comma-separated string of names of values for the\n``opentelemetry_propagator`` entry point. For example, setting\n``OTEL_PROPAGATORS`` to ``tracecontext,baggage`` (which is the default value)\nwould instantiate\n``opentelemetry.propagators.composite.CompositeHTTPPropagator`` with 2\npropagators, one of type\n``opentelemetry.trace.propagation.tracecontext.TraceContextTextMapPropagator``\nand other of type ``opentelemetry.baggage.propagation.BaggagePropagator``.\nNotice that these propagator classes are defined as\n``opentelemetry_propagator`` entry points in the ``setup.cfg`` file of\n``opentelemetry``.\n\nExample::\n\n import flask\n import requests\n from opentelemetry import propagators\n\n\n PROPAGATOR = propagators.get_global_textmap()\n\n\n def get_header_from_flask_request(request, key):\n return request.headers.get_all(key)\n\n def set_header_into_requests_request(request: requests.Request,\n key: str, value: str):\n request.headers[key] = value\n\n def example_route():\n context = PROPAGATOR.extract(\n get_header_from_flask_request,\n flask.request\n )\n request_to_downstream = requests.Request(\n \"GET\", \"http://httpbin.org/get\"\n )\n PROPAGATOR.inject(\n set_header_into_requests_request,\n request_to_downstream,\n context=context\n )\n session = requests.Session()\n session.send(request_to_downstream.prepare())\n\n\n.. 
_Propagation API Specification:\n https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/api-propagators.md\n\"\"\"\n\nimport typing\nfrom logging import getLogger\n\nfrom pkg_resources import iter_entry_points\n\nfrom opentelemetry.configuration import Configuration\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import composite\nfrom opentelemetry.trace.propagation import textmap\n\nlogger = getLogger(__name__)\n\n\ndef extract(\n get_from_carrier: textmap.Getter[textmap.TextMapPropagatorT],\n carrier: textmap.TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n) -> Context:\n \"\"\" Uses the configured propagator to extract a Context from the carrier.\n\n Args:\n get_from_carrier: a function that can retrieve zero\n or more values from the carrier. In the case that\n the value does not exist, return an empty list.\n carrier: and object which contains values that are\n used to construct a Context. This object\n must be paired with an appropriate get_from_carrier\n which understands how to extract a value from it.\n context: an optional Context to use. Defaults to current\n context if not set.\n \"\"\"\n return get_global_textmap().extract(get_from_carrier, carrier, context)\n\n\ndef inject(\n set_in_carrier: textmap.Setter[textmap.TextMapPropagatorT],\n carrier: textmap.TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n) -> None:\n \"\"\" Uses the configured propagator to inject a Context into the carrier.\n\n Args:\n set_in_carrier: A setter function that can set values\n on the carrier.\n carrier: An object that contains a representation of HTTP\n headers. Should be paired with set_in_carrier, which\n should know how to set header values on the carrier.\n context: an optional Context to use. Defaults to current\n context if not set.\n \"\"\"\n get_global_textmap().inject(set_in_carrier, carrier, context)\n\n\ntry:\n\n propagators = []\n\n for propagator in ( # type: ignore\n Configuration().get(\"PROPAGATORS\", \"tracecontext,baggage\").split(\",\") # type: ignore\n ):\n\n propagators.append( # type: ignore\n next( # type: ignore\n iter_entry_points(\"opentelemetry_propagator\", propagator) # type: ignore\n ).load()()\n )\n\nexcept Exception: # pylint: disable=broad-except\n logger.exception(\"Failed to load configured propagators\")\n raise\n\n_HTTP_TEXT_FORMAT = composite.CompositeHTTPPropagator(propagators) # type: ignore\n\n\ndef get_global_textmap() -> textmap.TextMapPropagator:\n return _HTTP_TEXT_FORMAT\n\n\ndef set_global_textmap(http_text_format: textmap.TextMapPropagator,) -> None:\n global _HTTP_TEXT_FORMAT # pylint:disable=global-statement\n _HTTP_TEXT_FORMAT = http_text_format # type: ignore\n", "path": "opentelemetry-api/src/opentelemetry/propagators/__init__.py"}]} | 1,462 | 773 |
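The patch recorded above wires propagator selection to the `OTEL_PROPAGATORS` environment variable: the comma-separated value is split into names, each name is resolved through the `opentelemetry_propagator` entry point group, and the loaded instances are handed to the composite propagator. The sketch below illustrates that lookup pattern in isolation. It reads the environment variable directly for brevity (the real patch goes through the project's `Configuration()` helper and logs before re-raising on failure), the default string mirrors the patch, and the function name is invented for the example; running it requires a package such as `opentelemetry-api` that actually registers those entry points.

```python
# Minimal sketch of entry-point driven propagator loading, following the patch above.
import os
from pkg_resources import iter_entry_points  # same lookup helper the patch uses


def load_propagators(default="tracecontext,baggage"):
    """Instantiate one propagator per name listed in OTEL_PROPAGATORS."""
    names = os.environ.get("OTEL_PROPAGATORS", default).split(",")
    loaded = []
    for name in names:
        # First entry point registered under this name in the
        # "opentelemetry_propagator" group; StopIteration here means a typo
        # or a missing package.
        entry_point = next(iter_entry_points("opentelemetry_propagator", name.strip()))
        loaded.append(entry_point.load()())  # load the class, then instantiate it
    return loaded
```

The instances returned by such a loader are what the patch feeds into `composite.CompositeHTTPPropagator`.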
gh_patches_debug_28935 | rasdani/github-patches | git_diff | pyload__pyload-1659 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
premiumize.me hook is broken
account says username and password is ok
but the log always shows:
4 01.08.2015 19:50:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry
5 01.08.2015 19:51:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry
6 01.08.2015 19:51:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry
7 01.08.2015 19:52:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry
8 01.08.2015 19:52:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry
9 01.08.2015 19:53:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry
10 01.08.2015 19:53:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry
so i guess the hook is broken
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `module/plugins/hoster/PremiumizeMe.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from module.common.json_layer import json_loads
4 from module.plugins.internal.MultiHoster import MultiHoster, create_getInfo
5
6
7 class PremiumizeMe(MultiHoster):
8 __name__ = "PremiumizeMe"
9 __type__ = "hoster"
10 __version__ = "0.19"
11 __status__ = "testing"
12
13 __pattern__ = r'^unmatchable$' #: Since we want to allow the user to specify the list of hoster to use we let MultiHoster.activate
14 __config__ = [("use_premium" , "bool", "Use premium account if available" , True),
15 ("revertfailed", "bool", "Revert to standard download if fails", True)]
16
17 __description__ = """Premiumize.me multi-hoster plugin"""
18 __license__ = "GPLv3"
19 __authors__ = [("Florian Franzen", "[email protected]")]
20
21
22 def handle_premium(self, pyfile):
23 #: In some cases hostsers do not supply us with a filename at download, so we
24 #: Are going to set a fall back filename (e.g. for freakshare or xfileshare)
25 pyfile.name = pyfile.name.split('/').pop() #: Remove everthing before last slash
26
27 #: Correction for automatic assigned filename: Removing html at end if needed
28 suffix_to_remove = ["html", "htm", "php", "php3", "asp", "shtm", "shtml", "cfml", "cfm"]
29 temp = pyfile.name.split('.')
30 if temp.pop() in suffix_to_remove:
31 pyfile.name = ".".join(temp)
32
33 #: Get account data
34 user, data = self.account.select()
35
36 #: Get rewritten link using the premiumize.me api v1 (see https://secure.premiumize.me/?show=api)
37 data = json_loads(self.load("http://api.premiumize.me/pm-api/v1.php", #@TODO: Revert to `https` in 0.4.10
38 get={'method' : "directdownloadlink",
39 'params[login]': user,
40 'params[pass]' : data['password'],
41 'params[link]' : pyfile.url}))
42
43 #: Check status and decide what to do
44 status = data['status']
45
46 if status == 200:
47 self.link = data['result']['location']
48 return
49
50 elif status == 400:
51 self.fail(_("Invalid link"))
52
53 elif status == 404:
54 self.offline()
55
56 elif status >= 500:
57 self.temp_offline()
58
59 else:
60 self.fail(data['statusmessage'])
61
62
63 getInfo = create_getInfo(PremiumizeMe)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/module/plugins/hoster/PremiumizeMe.py b/module/plugins/hoster/PremiumizeMe.py
--- a/module/plugins/hoster/PremiumizeMe.py
+++ b/module/plugins/hoster/PremiumizeMe.py
@@ -7,7 +7,7 @@
class PremiumizeMe(MultiHoster):
__name__ = "PremiumizeMe"
__type__ = "hoster"
- __version__ = "0.19"
+ __version__ = "0.20"
__status__ = "testing"
__pattern__ = r'^unmatchable$' #: Since we want to allow the user to specify the list of hoster to use we let MultiHoster.activate
@@ -31,13 +31,13 @@
pyfile.name = ".".join(temp)
#: Get account data
- user, data = self.account.select()
+ user, info = self.account.select()
#: Get rewritten link using the premiumize.me api v1 (see https://secure.premiumize.me/?show=api)
data = json_loads(self.load("http://api.premiumize.me/pm-api/v1.php", #@TODO: Revert to `https` in 0.4.10
get={'method' : "directdownloadlink",
'params[login]': user,
- 'params[pass]' : data['password'],
+ 'params[pass]' : info['login']['password'],
'params[link]' : pyfile.url}))
#: Check status and decide what to do
| {"golden_diff": "diff --git a/module/plugins/hoster/PremiumizeMe.py b/module/plugins/hoster/PremiumizeMe.py\n--- a/module/plugins/hoster/PremiumizeMe.py\n+++ b/module/plugins/hoster/PremiumizeMe.py\n@@ -7,7 +7,7 @@\n class PremiumizeMe(MultiHoster):\n __name__ = \"PremiumizeMe\"\n __type__ = \"hoster\"\n- __version__ = \"0.19\"\n+ __version__ = \"0.20\"\n __status__ = \"testing\"\n \n __pattern__ = r'^unmatchable$' #: Since we want to allow the user to specify the list of hoster to use we let MultiHoster.activate\n@@ -31,13 +31,13 @@\n pyfile.name = \".\".join(temp)\n \n #: Get account data\n- user, data = self.account.select()\n+ user, info = self.account.select()\n \n #: Get rewritten link using the premiumize.me api v1 (see https://secure.premiumize.me/?show=api)\n data = json_loads(self.load(\"http://api.premiumize.me/pm-api/v1.php\", #@TODO: Revert to `https` in 0.4.10\n get={'method' : \"directdownloadlink\",\n 'params[login]': user,\n- 'params[pass]' : data['password'],\n+ 'params[pass]' : info['login']['password'],\n 'params[link]' : pyfile.url}))\n \n #: Check status and decide what to do\n", "issue": "premiumize.me hook is broken\naccount says username and password is ok\n\nbut the log always shows:\n\n4 01.08.2015 19:50:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n5 01.08.2015 19:51:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n6 01.08.2015 19:51:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n7 01.08.2015 19:52:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n8 01.08.2015 19:52:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n9 01.08.2015 19:53:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n10 01.08.2015 19:53:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n\nso i guess the hook is broken\n\npremiumize.me hook is broken\naccount says username and password is ok\n\nbut the log always shows:\n\n4 01.08.2015 19:50:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n5 01.08.2015 19:51:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n6 01.08.2015 19:51:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n7 01.08.2015 19:52:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n8 01.08.2015 19:52:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n9 01.08.2015 19:53:05 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n10 01.08.2015 19:53:13 WARNING HOOK PremiumizeMe: 'password' | Waiting 1 minute and retry\n\nso i guess the hook is broken\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom module.common.json_layer import json_loads\nfrom module.plugins.internal.MultiHoster import MultiHoster, create_getInfo\n\n\nclass PremiumizeMe(MultiHoster):\n __name__ = \"PremiumizeMe\"\n __type__ = \"hoster\"\n __version__ = \"0.19\"\n __status__ = \"testing\"\n\n __pattern__ = r'^unmatchable$' #: Since we want to allow the user to specify the list of hoster to use we let MultiHoster.activate\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n (\"revertfailed\", \"bool\", \"Revert to standard download if fails\", True)]\n\n __description__ = \"\"\"Premiumize.me multi-hoster plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Florian Franzen\", \"[email protected]\")]\n\n\n def handle_premium(self, pyfile):\n #: In some cases hostsers do not supply us with a 
filename at download, so we\n #: Are going to set a fall back filename (e.g. for freakshare or xfileshare)\n pyfile.name = pyfile.name.split('/').pop() #: Remove everthing before last slash\n\n #: Correction for automatic assigned filename: Removing html at end if needed\n suffix_to_remove = [\"html\", \"htm\", \"php\", \"php3\", \"asp\", \"shtm\", \"shtml\", \"cfml\", \"cfm\"]\n temp = pyfile.name.split('.')\n if temp.pop() in suffix_to_remove:\n pyfile.name = \".\".join(temp)\n\n #: Get account data\n user, data = self.account.select()\n\n #: Get rewritten link using the premiumize.me api v1 (see https://secure.premiumize.me/?show=api)\n data = json_loads(self.load(\"http://api.premiumize.me/pm-api/v1.php\", #@TODO: Revert to `https` in 0.4.10\n get={'method' : \"directdownloadlink\",\n 'params[login]': user,\n 'params[pass]' : data['password'],\n 'params[link]' : pyfile.url}))\n\n #: Check status and decide what to do\n status = data['status']\n\n if status == 200:\n self.link = data['result']['location']\n return\n\n elif status == 400:\n self.fail(_(\"Invalid link\"))\n\n elif status == 404:\n self.offline()\n\n elif status >= 500:\n self.temp_offline()\n\n else:\n self.fail(data['statusmessage'])\n\n\ngetInfo = create_getInfo(PremiumizeMe)\n", "path": "module/plugins/hoster/PremiumizeMe.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom module.common.json_layer import json_loads\nfrom module.plugins.internal.MultiHoster import MultiHoster, create_getInfo\n\n\nclass PremiumizeMe(MultiHoster):\n __name__ = \"PremiumizeMe\"\n __type__ = \"hoster\"\n __version__ = \"0.20\"\n __status__ = \"testing\"\n\n __pattern__ = r'^unmatchable$' #: Since we want to allow the user to specify the list of hoster to use we let MultiHoster.activate\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n (\"revertfailed\", \"bool\", \"Revert to standard download if fails\", True)]\n\n __description__ = \"\"\"Premiumize.me multi-hoster plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Florian Franzen\", \"[email protected]\")]\n\n\n def handle_premium(self, pyfile):\n #: In some cases hostsers do not supply us with a filename at download, so we\n #: Are going to set a fall back filename (e.g. for freakshare or xfileshare)\n pyfile.name = pyfile.name.split('/').pop() #: Remove everthing before last slash\n\n #: Correction for automatic assigned filename: Removing html at end if needed\n suffix_to_remove = [\"html\", \"htm\", \"php\", \"php3\", \"asp\", \"shtm\", \"shtml\", \"cfml\", \"cfm\"]\n temp = pyfile.name.split('.')\n if temp.pop() in suffix_to_remove:\n pyfile.name = \".\".join(temp)\n\n #: Get account data\n user, info = self.account.select()\n\n #: Get rewritten link using the premiumize.me api v1 (see https://secure.premiumize.me/?show=api)\n data = json_loads(self.load(\"http://api.premiumize.me/pm-api/v1.php\", #@TODO: Revert to `https` in 0.4.10\n get={'method' : \"directdownloadlink\",\n 'params[login]': user,\n 'params[pass]' : info['login']['password'],\n 'params[link]' : pyfile.url}))\n\n #: Check status and decide what to do\n status = data['status']\n\n if status == 200:\n self.link = data['result']['location']\n return\n\n elif status == 400:\n self.fail(_(\"Invalid link\"))\n\n elif status == 404:\n self.offline()\n\n elif status >= 500:\n self.temp_offline()\n\n else:\n self.fail(data['statusmessage'])\n\n\ngetInfo = create_getInfo(PremiumizeMe)\n", "path": "module/plugins/hoster/PremiumizeMe.py"}]} | 1,617 | 348 |
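The root cause in this record is the shape of the account data: `self.account.select()` no longer hands back a flat mapping with a top-level `'password'` key, so `data['password']` raised the `KeyError: 'password'` seen in the hook log, and the patch reads `info['login']['password']` instead (plus a version bump from 0.19 to 0.20). The sketch below only illustrates the corrected lookup; the nested dictionary layout is inferred from the patch and the sample values are invented, so it is not the real pyload `Account` API.

```python
# Sketch of the corrected credential lookup for the premiumize.me API call.
def build_api_params(user, info, link):
    """Assemble the request parameters from the selected account data."""
    return {
        "method": "directdownloadlink",
        "params[login]": user,
        # the old code read the password from the top level (data['password'])
        # and raised KeyError on the newer nested account structure
        "params[pass]": info["login"]["password"],
        "params[link]": link,
    }


# Assumed nested layout, for illustration only:
user = "alice"
info = {"login": {"password": "s3cret"}}
print(build_api_params(user, info, "http://example.com/file"))
```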
gh_patches_debug_22076 | rasdani/github-patches | git_diff | netbox-community__netbox-16229 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User and group queries are not properly restricted via GraphQL API in v4.0.2 Re-Open
### Deployment Type
Self-hosted
### NetBox Version
v4.0.2
### Python Version
3.10
### Steps to Reproduce
This is to re-open #7814
Create New Group netbox-graphql. Don't add any permission to the group.
Add new user to the group
Login as new user
Access https://netbox/graphql
query {
user_list{
username
password
}
}
Username and hash in password returned.
### Expected Behavior
Empty result returned because the user is in a group without permission to the Group/User view.
### Observed Behavior
All Username and hash in Database returned.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/users/graphql/types.py`
Content:
```
1 from typing import List
2
3 import strawberry
4 import strawberry_django
5 from django.contrib.auth import get_user_model
6 from django.contrib.auth.models import Group
7 from strawberry import auto
8 from users import filtersets
9 from users.models import Group
10 from utilities.querysets import RestrictedQuerySet
11 from .filters import *
12
13 __all__ = (
14 'GroupType',
15 'UserType',
16 )
17
18
19 @strawberry_django.type(
20 Group,
21 fields=['id', 'name'],
22 filters=GroupFilter
23 )
24 class GroupType:
25 pass
26
27
28 @strawberry_django.type(
29 get_user_model(),
30 fields=[
31 'id', 'username', 'password', 'first_name', 'last_name', 'email', 'is_staff',
32 'is_active', 'date_joined', 'groups',
33 ],
34 filters=UserFilter
35 )
36 class UserType:
37 groups: List[GroupType]
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/users/graphql/types.py b/netbox/users/graphql/types.py
--- a/netbox/users/graphql/types.py
+++ b/netbox/users/graphql/types.py
@@ -1,13 +1,10 @@
from typing import List
-import strawberry
import strawberry_django
from django.contrib.auth import get_user_model
-from django.contrib.auth.models import Group
-from strawberry import auto
-from users import filtersets
+
+from netbox.graphql.types import BaseObjectType
from users.models import Group
-from utilities.querysets import RestrictedQuerySet
from .filters import *
__all__ = (
@@ -21,17 +18,16 @@
fields=['id', 'name'],
filters=GroupFilter
)
-class GroupType:
+class GroupType(BaseObjectType):
pass
@strawberry_django.type(
get_user_model(),
fields=[
- 'id', 'username', 'password', 'first_name', 'last_name', 'email', 'is_staff',
- 'is_active', 'date_joined', 'groups',
+ 'id', 'username', 'first_name', 'last_name', 'email', 'is_staff', 'is_active', 'date_joined', 'groups',
],
filters=UserFilter
)
-class UserType:
+class UserType(BaseObjectType):
groups: List[GroupType]
| {"golden_diff": "diff --git a/netbox/users/graphql/types.py b/netbox/users/graphql/types.py\n--- a/netbox/users/graphql/types.py\n+++ b/netbox/users/graphql/types.py\n@@ -1,13 +1,10 @@\n from typing import List\n \n-import strawberry\n import strawberry_django\n from django.contrib.auth import get_user_model\n-from django.contrib.auth.models import Group\n-from strawberry import auto\n-from users import filtersets\n+\n+from netbox.graphql.types import BaseObjectType\n from users.models import Group\n-from utilities.querysets import RestrictedQuerySet\n from .filters import *\n \n __all__ = (\n@@ -21,17 +18,16 @@\n fields=['id', 'name'],\n filters=GroupFilter\n )\n-class GroupType:\n+class GroupType(BaseObjectType):\n pass\n \n \n @strawberry_django.type(\n get_user_model(),\n fields=[\n- 'id', 'username', 'password', 'first_name', 'last_name', 'email', 'is_staff',\n- 'is_active', 'date_joined', 'groups',\n+ 'id', 'username', 'first_name', 'last_name', 'email', 'is_staff', 'is_active', 'date_joined', 'groups',\n ],\n filters=UserFilter\n )\n-class UserType:\n+class UserType(BaseObjectType):\n groups: List[GroupType]\n", "issue": "User and group queries are not properly restricted via GraphQL API in v4.0.2 Re-Open\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv4.0.2\n\n### Python Version\n\n3.10\n\n### Steps to Reproduce\n\nThis is is to re-opent #7814\r\n\r\nCreate New Group netbox-graphql. Don't add any permission to the group.\r\nAdd new user to the group\r\nLogin as new user\r\nAccess https://netbox/graphql\r\n\r\nquery {\r\n user_list{\r\n username\r\n password\r\n }\r\n }\r\n\r\nUsername and hash in password returned.\r\n\r\n\n\n### Expected Behavior\n\nEmpty result retured because the user in a group without permission to Group/User view.\n\n### Observed Behavior\n\nAll Username and hash in Database returned.\n", "before_files": [{"content": "from typing import List\n\nimport strawberry\nimport strawberry_django\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import Group\nfrom strawberry import auto\nfrom users import filtersets\nfrom users.models import Group\nfrom utilities.querysets import RestrictedQuerySet\nfrom .filters import *\n\n__all__ = (\n 'GroupType',\n 'UserType',\n)\n\n\n@strawberry_django.type(\n Group,\n fields=['id', 'name'],\n filters=GroupFilter\n)\nclass GroupType:\n pass\n\n\n@strawberry_django.type(\n get_user_model(),\n fields=[\n 'id', 'username', 'password', 'first_name', 'last_name', 'email', 'is_staff',\n 'is_active', 'date_joined', 'groups',\n ],\n filters=UserFilter\n)\nclass UserType:\n groups: List[GroupType]\n", "path": "netbox/users/graphql/types.py"}], "after_files": [{"content": "from typing import List\n\nimport strawberry_django\nfrom django.contrib.auth import get_user_model\n\nfrom netbox.graphql.types import BaseObjectType\nfrom users.models import Group\nfrom .filters import *\n\n__all__ = (\n 'GroupType',\n 'UserType',\n)\n\n\n@strawberry_django.type(\n Group,\n fields=['id', 'name'],\n filters=GroupFilter\n)\nclass GroupType(BaseObjectType):\n pass\n\n\n@strawberry_django.type(\n get_user_model(),\n fields=[\n 'id', 'username', 'first_name', 'last_name', 'email', 'is_staff', 'is_active', 'date_joined', 'groups',\n ],\n filters=UserFilter\n)\nclass UserType(BaseObjectType):\n groups: List[GroupType]\n", "path": "netbox/users/graphql/types.py"}]} | 681 | 287 |
gh_patches_debug_7311 | rasdani/github-patches | git_diff | freedomofpress__securedrop-4644 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
replace "hidden service" occurrences
## Status
ready for review
## Description of Changes
Changes Proposed:
- no longer refer to [Onion Services](https://2019.www.torproject.org/docs/onion-services.html.en) as hidden services;
- there are NO new images I added, it's just text;
- all changed content here is either just a comment (playbook, or shell script);
- changelog was kept as is.
## Testing
I followed the _(slightly outdated)_ [Documentation Guidelines](https://docs.securedrop.org/en/latest/development/documentation_guidelines.html), and all looked fine:
```
# make docs
```
Gave me the following:
```
...
| copying static files... done
| copying extra files... done
| dumping search index in English (code: en) ... done
| dumping object inventory... done
| build succeeded.
+--------------------------------------------------------------------------------
[I 190725 16:16:16 server:296] Serving on http://127.0.0.1:8000
[I 190725 16:16:16 handlers:62] Start watching changes
[I 190725 16:16:16 handlers:64] Start detecting changes
```
`make docs-linkcheck` returned an error, but that's not related to the changes made here. `docs-lint` ran just fine.
## Deployment
Any special considerations for deployment?
- AFAIK, no.
## Checklist
### If you made changes to the server application code:
- [ ] Linting (`make lint`) and tests (`make -C securedrop test`) pass in the development container
### If you made changes to `securedrop-admin`:
- [ ] Linting and tests (`make -C admin test`) pass in the admin development container
### If you made changes to the system configuration:
- [ ] [Configuration tests](https://docs.securedrop.org/en/latest/development/testing_configuration_tests.html) pass
### If you made non-trivial code changes:
- [ ] I have written a test plan and validated it for this PR
### If you made changes to documentation:
- [x] Doc linting (`make docs-lint`) passed locally
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install_files/ansible-base/roles/backup/files/0.3_collect.py`
Content:
```
1 #!/usr/bin/python2.7
2 """
3
4 This script should be copied to the App server and ran by the anisble
5 plabook. When run (as root), it collects all of the necessary information
6 to backup the 0.3 system and stores it in /tmp/sd-backup-0.3-TIME_STAMP.zip.gpg
7
8 """
9
10 import sys
11 import os
12 import io
13 import zipfile
14 from datetime import datetime
15 # Import the application config.py file
16 sys.path.append("/var/www/securedrop")
17 import config # noqa: F403
18 import gnupg # noqa: F403
19
20 TOR_SERVICES = "/var/lib/tor/services"
21 TOR_CONFIG = "/etc/tor/torrc"
22
23
24 def collect_config_file(zf):
25 config_file_path = os.path.join(config.SECUREDROP_ROOT, "config.py")
26 zf.write(config_file_path)
27
28
29 def collect_securedrop_data_root(zf):
30 # The store and key dirs are shared between both interfaces
31 for root, dirs, files in os.walk(config.SECUREDROP_DATA_ROOT):
32 for name in files:
33 zf.write(os.path.join(root, name))
34
35
36 def collect_custom_header_image(zf):
37 # The custom header image is copied over the deafult `static/i/logo.png`.
38 zf.write(os.path.join(config.SECUREDROP_ROOT, "static/i/logo.png"))
39
40
41 def collect_tor_files(zf):
42 # All of the tor hidden service private keys are stored in the THS specific
43 # subdirectory `/var/lib/tor/services` backing up this directory will back
44 # up all of the THS and ATHS required keys needed to restore all the hidden
45 # services on that system.
46 for root, dirs, files in os.walk(TOR_SERVICES):
47 for name in files:
48 zf.write(os.path.join(root, name))
49
50 # The tor config file has the ATHS client names required to restore
51 # the ATHS info. These names are also in the the specific client_key file
52 # but backing up this file makes it easier than parsing the files during a
53 # restore.
54 zf.write(TOR_CONFIG)
55
56
57 def encrypt_zip_file(zf_fn):
58 # Encrypt the backup zip file with the application's gpg public key
59 gpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)
60 e_fn = '{}.gpg'.format(zf_fn)
61
62 stream = io.open(zf_fn, "rb")
63 gpg.encrypt_file(stream, config.JOURNALIST_KEY, always_trust='True',
64 output=e_fn)
65
66
67 def main():
68 # name append a timestamp to the sd-backup zip filename
69 dt = str(datetime.utcnow().strftime("%Y-%m-%d--%H-%M-%S"))
70 zf_fn = 'sd-backup-{}.zip'.format(dt)
71 with zipfile.ZipFile(zf_fn, 'w') as zf:
72 collect_config_file(zf)
73 collect_securedrop_data_root(zf)
74 collect_custom_header_image(zf)
75 collect_tor_files(zf)
76 encrypt_zip_file(zf_fn)
77 print(zf_fn)
78
79
80 if __name__ == "__main__":
81 main()
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/install_files/ansible-base/roles/backup/files/0.3_collect.py b/install_files/ansible-base/roles/backup/files/0.3_collect.py
--- a/install_files/ansible-base/roles/backup/files/0.3_collect.py
+++ b/install_files/ansible-base/roles/backup/files/0.3_collect.py
@@ -39,7 +39,7 @@
def collect_tor_files(zf):
- # All of the tor hidden service private keys are stored in the THS specific
+ # All of the tor Onion Service private keys are stored in the THS specific
# subdirectory `/var/lib/tor/services` backing up this directory will back
# up all of the THS and ATHS required keys needed to restore all the hidden
# services on that system.
| {"golden_diff": "diff --git a/install_files/ansible-base/roles/backup/files/0.3_collect.py b/install_files/ansible-base/roles/backup/files/0.3_collect.py\n--- a/install_files/ansible-base/roles/backup/files/0.3_collect.py\n+++ b/install_files/ansible-base/roles/backup/files/0.3_collect.py\n@@ -39,7 +39,7 @@\n \n \n def collect_tor_files(zf):\n- # All of the tor hidden service private keys are stored in the THS specific\n+ # All of the tor Onion Service private keys are stored in the THS specific\n # subdirectory `/var/lib/tor/services` backing up this directory will back\n # up all of the THS and ATHS required keys needed to restore all the hidden\n # services on that system.\n", "issue": "replace \"hidden service\" occurrences\n## Status\r\n\r\nready for review\r\n\r\n## Description of Changes\r\n\r\nChanges Proposed:\r\n\r\n - no longer refer to [Onion Services](https://2019.www.torproject.org/docs/onion-services.html.en) as hidden services;\r\n - there are NO new images I added, it's just text;\r\n - all changed content here is either just a comment (playbook, or shell script);\r\n - changelog was kept as is.\r\n\r\n## Testing\r\n\r\nI followed the _(slightly outdated)_ [Documentation Guidelines](https://docs.securedrop.org/en/latest/development/documentation_guidelines.html), and all looked fine:\r\n\r\n```\r\n# make docs\r\n```\r\n\r\nGave me the following:\r\n\r\n```\r\n ...\r\n\r\n| copying static files... done\r\n| copying extra files... done\r\n| dumping search index in English (code: en) ... done\r\n| dumping object inventory... done\r\n| build succeeded.\r\n+--------------------------------------------------------------------------------\r\n\r\n[I 190725 16:16:16 server:296] Serving on http://127.0.0.1:8000\r\n[I 190725 16:16:16 handlers:62] Start watching changes\r\n[I 190725 16:16:16 handlers:64] Start detecting changes\r\n```\r\n`make docs-linkcheck` returned an error, but that's not related to the changes made here. `docs-lint` ran just fine.\r\n\r\n## Deployment\r\n\r\nAny special considerations for deployment?\r\n\r\n - AFAIK, no.\r\n\r\n## Checklist\r\n\r\n### If you made changes to the server application code:\r\n\r\n- [ ] Linting (`make lint`) and tests (`make -C securedrop test`) pass in the development container\r\n\r\n### If you made changes to `securedrop-admin`:\r\n\r\n- [ ] Linting and tests (`make -C admin test`) pass in the admin development container\r\n\r\n### If you made changes to the system configuration:\r\n\r\n- [ ] [Configuration tests](https://docs.securedrop.org/en/latest/development/testing_configuration_tests.html) pass\r\n\r\n### If you made non-trivial code changes:\r\n\r\n- [ ] I have written a test plan and validated it for this PR\r\n\r\n### If you made changes to documentation:\r\n\r\n- [x] Doc linting (`make docs-lint`) passed locally\r\n\n", "before_files": [{"content": "#!/usr/bin/python2.7\n\"\"\"\n\nThis script should be copied to the App server and ran by the anisble\nplabook. 
When run (as root), it collects all of the necessary information\nto backup the 0.3 system and stores it in /tmp/sd-backup-0.3-TIME_STAMP.zip.gpg\n\n\"\"\"\n\nimport sys\nimport os\nimport io\nimport zipfile\nfrom datetime import datetime\n# Import the application config.py file\nsys.path.append(\"/var/www/securedrop\")\nimport config # noqa: F403\nimport gnupg # noqa: F403\n\nTOR_SERVICES = \"/var/lib/tor/services\"\nTOR_CONFIG = \"/etc/tor/torrc\"\n\n\ndef collect_config_file(zf):\n config_file_path = os.path.join(config.SECUREDROP_ROOT, \"config.py\")\n zf.write(config_file_path)\n\n\ndef collect_securedrop_data_root(zf):\n # The store and key dirs are shared between both interfaces\n for root, dirs, files in os.walk(config.SECUREDROP_DATA_ROOT):\n for name in files:\n zf.write(os.path.join(root, name))\n\n\ndef collect_custom_header_image(zf):\n # The custom header image is copied over the deafult `static/i/logo.png`.\n zf.write(os.path.join(config.SECUREDROP_ROOT, \"static/i/logo.png\"))\n\n\ndef collect_tor_files(zf):\n # All of the tor hidden service private keys are stored in the THS specific\n # subdirectory `/var/lib/tor/services` backing up this directory will back\n # up all of the THS and ATHS required keys needed to restore all the hidden\n # services on that system.\n for root, dirs, files in os.walk(TOR_SERVICES):\n for name in files:\n zf.write(os.path.join(root, name))\n\n # The tor config file has the ATHS client names required to restore\n # the ATHS info. These names are also in the the specific client_key file\n # but backing up this file makes it easier than parsing the files during a\n # restore.\n zf.write(TOR_CONFIG)\n\n\ndef encrypt_zip_file(zf_fn):\n # Encrypt the backup zip file with the application's gpg public key\n gpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)\n e_fn = '{}.gpg'.format(zf_fn)\n\n stream = io.open(zf_fn, \"rb\")\n gpg.encrypt_file(stream, config.JOURNALIST_KEY, always_trust='True',\n output=e_fn)\n\n\ndef main():\n # name append a timestamp to the sd-backup zip filename\n dt = str(datetime.utcnow().strftime(\"%Y-%m-%d--%H-%M-%S\"))\n zf_fn = 'sd-backup-{}.zip'.format(dt)\n with zipfile.ZipFile(zf_fn, 'w') as zf:\n collect_config_file(zf)\n collect_securedrop_data_root(zf)\n collect_custom_header_image(zf)\n collect_tor_files(zf)\n encrypt_zip_file(zf_fn)\n print(zf_fn)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install_files/ansible-base/roles/backup/files/0.3_collect.py"}], "after_files": [{"content": "#!/usr/bin/python2.7\n\"\"\"\n\nThis script should be copied to the App server and ran by the anisble\nplabook. 
When run (as root), it collects all of the necessary information\nto backup the 0.3 system and stores it in /tmp/sd-backup-0.3-TIME_STAMP.zip.gpg\n\n\"\"\"\n\nimport sys\nimport os\nimport io\nimport zipfile\nfrom datetime import datetime\n# Import the application config.py file\nsys.path.append(\"/var/www/securedrop\")\nimport config # noqa: F403\nimport gnupg # noqa: F403\n\nTOR_SERVICES = \"/var/lib/tor/services\"\nTOR_CONFIG = \"/etc/tor/torrc\"\n\n\ndef collect_config_file(zf):\n config_file_path = os.path.join(config.SECUREDROP_ROOT, \"config.py\")\n zf.write(config_file_path)\n\n\ndef collect_securedrop_data_root(zf):\n # The store and key dirs are shared between both interfaces\n for root, dirs, files in os.walk(config.SECUREDROP_DATA_ROOT):\n for name in files:\n zf.write(os.path.join(root, name))\n\n\ndef collect_custom_header_image(zf):\n # The custom header image is copied over the deafult `static/i/logo.png`.\n zf.write(os.path.join(config.SECUREDROP_ROOT, \"static/i/logo.png\"))\n\n\ndef collect_tor_files(zf):\n # All of the tor Onion Service private keys are stored in the THS specific\n # subdirectory `/var/lib/tor/services` backing up this directory will back\n # up all of the THS and ATHS required keys needed to restore all the hidden\n # services on that system.\n for root, dirs, files in os.walk(TOR_SERVICES):\n for name in files:\n zf.write(os.path.join(root, name))\n\n # The tor config file has the ATHS client names required to restore\n # the ATHS info. These names are also in the the specific client_key file\n # but backing up this file makes it easier than parsing the files during a\n # restore.\n zf.write(TOR_CONFIG)\n\n\ndef encrypt_zip_file(zf_fn):\n # Encrypt the backup zip file with the application's gpg public key\n gpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)\n e_fn = '{}.gpg'.format(zf_fn)\n\n stream = io.open(zf_fn, \"rb\")\n gpg.encrypt_file(stream, config.JOURNALIST_KEY, always_trust='True',\n output=e_fn)\n\n\ndef main():\n # name append a timestamp to the sd-backup zip filename\n dt = str(datetime.utcnow().strftime(\"%Y-%m-%d--%H-%M-%S\"))\n zf_fn = 'sd-backup-{}.zip'.format(dt)\n with zipfile.ZipFile(zf_fn, 'w') as zf:\n collect_config_file(zf)\n collect_securedrop_data_root(zf)\n collect_custom_header_image(zf)\n collect_tor_files(zf)\n encrypt_zip_file(zf_fn)\n print(zf_fn)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install_files/ansible-base/roles/backup/files/0.3_collect.py"}]} | 1,626 | 180 |
gh_patches_debug_7166 | rasdani/github-patches | git_diff | pytorch__vision-7665 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug when using PIL backend in classification references
### 🐛 Describe the bug
When I try to train a model using the train.py script under references/classification with the PIL backend, I encounter an error:
```bash
ValueError: backend can be 'tensor' or 'pil', but got pil
```
To reproduce this issue, you can write:
```bash
git clone https://github.com/pytorch/vision && cd vision
conda create -n vision_env python=3.9
conda activate vision_env
pip install torch==1.13.1 torchvision
cd references/classification/
python train.py --data-path "path-to-dataset" --test-only --backend pil
```
### Versions
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==1.13.1
[pip3] torchvision==0.14.1
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchvision 0.14.1 pypi_0 pypi
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `references/classification/presets.py`
Content:
```
1 import torch
2 from torchvision.transforms import autoaugment, transforms
3 from torchvision.transforms.functional import InterpolationMode
4
5
6 class ClassificationPresetTrain:
7 def __init__(
8 self,
9 *,
10 crop_size,
11 mean=(0.485, 0.456, 0.406),
12 std=(0.229, 0.224, 0.225),
13 interpolation=InterpolationMode.BILINEAR,
14 hflip_prob=0.5,
15 auto_augment_policy=None,
16 ra_magnitude=9,
17 augmix_severity=3,
18 random_erase_prob=0.0,
19 backend="pil",
20 ):
21 trans = []
22 backend = backend.lower()
23 if backend == "tensor":
24 trans.append(transforms.PILToTensor())
25 elif backend != "pil":
26 raise ValueError(f"backend can be 'tensor' or 'pil', but got {backend}")
27
28 trans.append(transforms.RandomResizedCrop(crop_size, interpolation=interpolation, antialias=True))
29 if hflip_prob > 0:
30 trans.append(transforms.RandomHorizontalFlip(hflip_prob))
31 if auto_augment_policy is not None:
32 if auto_augment_policy == "ra":
33 trans.append(autoaugment.RandAugment(interpolation=interpolation, magnitude=ra_magnitude))
34 elif auto_augment_policy == "ta_wide":
35 trans.append(autoaugment.TrivialAugmentWide(interpolation=interpolation))
36 elif auto_augment_policy == "augmix":
37 trans.append(autoaugment.AugMix(interpolation=interpolation, severity=augmix_severity))
38 else:
39 aa_policy = autoaugment.AutoAugmentPolicy(auto_augment_policy)
40 trans.append(autoaugment.AutoAugment(policy=aa_policy, interpolation=interpolation))
41
42 if backend == "pil":
43 trans.append(transforms.PILToTensor())
44
45 trans.extend(
46 [
47 transforms.ConvertImageDtype(torch.float),
48 transforms.Normalize(mean=mean, std=std),
49 ]
50 )
51 if random_erase_prob > 0:
52 trans.append(transforms.RandomErasing(p=random_erase_prob))
53
54 self.transforms = transforms.Compose(trans)
55
56 def __call__(self, img):
57 return self.transforms(img)
58
59
60 class ClassificationPresetEval:
61 def __init__(
62 self,
63 *,
64 crop_size,
65 resize_size=256,
66 mean=(0.485, 0.456, 0.406),
67 std=(0.229, 0.224, 0.225),
68 interpolation=InterpolationMode.BILINEAR,
69 backend="pil",
70 ):
71 trans = []
72
73 backend = backend.lower()
74 if backend == "tensor":
75 trans.append(transforms.PILToTensor())
76 else:
77 raise ValueError(f"backend can be 'tensor' or 'pil', but got {backend}")
78
79 trans += [
80 transforms.Resize(resize_size, interpolation=interpolation, antialias=True),
81 transforms.CenterCrop(crop_size),
82 ]
83
84 if backend == "pil":
85 trans.append(transforms.PILToTensor())
86
87 trans += [
88 transforms.ConvertImageDtype(torch.float),
89 transforms.Normalize(mean=mean, std=std),
90 ]
91
92 self.transforms = transforms.Compose(trans)
93
94 def __call__(self, img):
95 return self.transforms(img)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/references/classification/presets.py b/references/classification/presets.py
--- a/references/classification/presets.py
+++ b/references/classification/presets.py
@@ -69,11 +69,10 @@
backend="pil",
):
trans = []
-
backend = backend.lower()
if backend == "tensor":
trans.append(transforms.PILToTensor())
- else:
+ elif backend != "pil":
raise ValueError(f"backend can be 'tensor' or 'pil', but got {backend}")
trans += [
| {"golden_diff": "diff --git a/references/classification/presets.py b/references/classification/presets.py\n--- a/references/classification/presets.py\n+++ b/references/classification/presets.py\n@@ -69,11 +69,10 @@\n backend=\"pil\",\n ):\n trans = []\n-\n backend = backend.lower()\n if backend == \"tensor\":\n trans.append(transforms.PILToTensor())\n- else:\n+ elif backend != \"pil\":\n raise ValueError(f\"backend can be 'tensor' or 'pil', but got {backend}\")\n \n trans += [\n", "issue": "bug when using PIL backend in classification references\n### \ud83d\udc1b Describe the bug\n\nWhen I try to train a model using the train.py script under references/classification with the PIL backend, I encounter an error:\r\n```bash\r\nValueError: backend can be 'tensor' or 'pil', but got pil\r\n```\r\n\r\nTo reproduce this issue, you can write:\r\n```bash\r\ngit clone https://github.com/pytorch/vision && cd vision\r\nconda create -n vision_env python=3.9\r\nconda activate vision_env\r\npip install torch==1.13.1 torchvision\r\ncd references/classification/\r\npython train.py --data-path \"path-to-dataset\" --test-only --backend pil\r\n```\n\n### Versions\n\n[pip3] mypy-extensions==1.0.0\r\n[pip3] numpy==1.24.3\r\n[pip3] torch==1.13.1\r\n[pip3] torchvision==0.14.1\r\n[conda] numpy 1.24.3 pypi_0 pypi\r\n[conda] torch 1.13.1 pypi_0 pypi\r\n[conda] torchvision 0.14.1 pypi_0 pypi\n", "before_files": [{"content": "import torch\nfrom torchvision.transforms import autoaugment, transforms\nfrom torchvision.transforms.functional import InterpolationMode\n\n\nclass ClassificationPresetTrain:\n def __init__(\n self,\n *,\n crop_size,\n mean=(0.485, 0.456, 0.406),\n std=(0.229, 0.224, 0.225),\n interpolation=InterpolationMode.BILINEAR,\n hflip_prob=0.5,\n auto_augment_policy=None,\n ra_magnitude=9,\n augmix_severity=3,\n random_erase_prob=0.0,\n backend=\"pil\",\n ):\n trans = []\n backend = backend.lower()\n if backend == \"tensor\":\n trans.append(transforms.PILToTensor())\n elif backend != \"pil\":\n raise ValueError(f\"backend can be 'tensor' or 'pil', but got {backend}\")\n\n trans.append(transforms.RandomResizedCrop(crop_size, interpolation=interpolation, antialias=True))\n if hflip_prob > 0:\n trans.append(transforms.RandomHorizontalFlip(hflip_prob))\n if auto_augment_policy is not None:\n if auto_augment_policy == \"ra\":\n trans.append(autoaugment.RandAugment(interpolation=interpolation, magnitude=ra_magnitude))\n elif auto_augment_policy == \"ta_wide\":\n trans.append(autoaugment.TrivialAugmentWide(interpolation=interpolation))\n elif auto_augment_policy == \"augmix\":\n trans.append(autoaugment.AugMix(interpolation=interpolation, severity=augmix_severity))\n else:\n aa_policy = autoaugment.AutoAugmentPolicy(auto_augment_policy)\n trans.append(autoaugment.AutoAugment(policy=aa_policy, interpolation=interpolation))\n\n if backend == \"pil\":\n trans.append(transforms.PILToTensor())\n\n trans.extend(\n [\n transforms.ConvertImageDtype(torch.float),\n transforms.Normalize(mean=mean, std=std),\n ]\n )\n if random_erase_prob > 0:\n trans.append(transforms.RandomErasing(p=random_erase_prob))\n\n self.transforms = transforms.Compose(trans)\n\n def __call__(self, img):\n return self.transforms(img)\n\n\nclass ClassificationPresetEval:\n def __init__(\n self,\n *,\n crop_size,\n resize_size=256,\n mean=(0.485, 0.456, 0.406),\n std=(0.229, 0.224, 0.225),\n interpolation=InterpolationMode.BILINEAR,\n backend=\"pil\",\n ):\n trans = []\n\n backend = backend.lower()\n if backend == \"tensor\":\n 
trans.append(transforms.PILToTensor())\n else:\n raise ValueError(f\"backend can be 'tensor' or 'pil', but got {backend}\")\n\n trans += [\n transforms.Resize(resize_size, interpolation=interpolation, antialias=True),\n transforms.CenterCrop(crop_size),\n ]\n\n if backend == \"pil\":\n trans.append(transforms.PILToTensor())\n\n trans += [\n transforms.ConvertImageDtype(torch.float),\n transforms.Normalize(mean=mean, std=std),\n ]\n\n self.transforms = transforms.Compose(trans)\n\n def __call__(self, img):\n return self.transforms(img)\n", "path": "references/classification/presets.py"}], "after_files": [{"content": "import torch\nfrom torchvision.transforms import autoaugment, transforms\nfrom torchvision.transforms.functional import InterpolationMode\n\n\nclass ClassificationPresetTrain:\n def __init__(\n self,\n *,\n crop_size,\n mean=(0.485, 0.456, 0.406),\n std=(0.229, 0.224, 0.225),\n interpolation=InterpolationMode.BILINEAR,\n hflip_prob=0.5,\n auto_augment_policy=None,\n ra_magnitude=9,\n augmix_severity=3,\n random_erase_prob=0.0,\n backend=\"pil\",\n ):\n trans = []\n backend = backend.lower()\n if backend == \"tensor\":\n trans.append(transforms.PILToTensor())\n elif backend != \"pil\":\n raise ValueError(f\"backend can be 'tensor' or 'pil', but got {backend}\")\n\n trans.append(transforms.RandomResizedCrop(crop_size, interpolation=interpolation, antialias=True))\n if hflip_prob > 0:\n trans.append(transforms.RandomHorizontalFlip(hflip_prob))\n if auto_augment_policy is not None:\n if auto_augment_policy == \"ra\":\n trans.append(autoaugment.RandAugment(interpolation=interpolation, magnitude=ra_magnitude))\n elif auto_augment_policy == \"ta_wide\":\n trans.append(autoaugment.TrivialAugmentWide(interpolation=interpolation))\n elif auto_augment_policy == \"augmix\":\n trans.append(autoaugment.AugMix(interpolation=interpolation, severity=augmix_severity))\n else:\n aa_policy = autoaugment.AutoAugmentPolicy(auto_augment_policy)\n trans.append(autoaugment.AutoAugment(policy=aa_policy, interpolation=interpolation))\n\n if backend == \"pil\":\n trans.append(transforms.PILToTensor())\n\n trans.extend(\n [\n transforms.ConvertImageDtype(torch.float),\n transforms.Normalize(mean=mean, std=std),\n ]\n )\n if random_erase_prob > 0:\n trans.append(transforms.RandomErasing(p=random_erase_prob))\n\n self.transforms = transforms.Compose(trans)\n\n def __call__(self, img):\n return self.transforms(img)\n\n\nclass ClassificationPresetEval:\n def __init__(\n self,\n *,\n crop_size,\n resize_size=256,\n mean=(0.485, 0.456, 0.406),\n std=(0.229, 0.224, 0.225),\n interpolation=InterpolationMode.BILINEAR,\n backend=\"pil\",\n ):\n trans = []\n backend = backend.lower()\n if backend == \"tensor\":\n trans.append(transforms.PILToTensor())\n elif backend != \"pil\":\n raise ValueError(f\"backend can be 'tensor' or 'pil', but got {backend}\")\n\n trans += [\n transforms.Resize(resize_size, interpolation=interpolation, antialias=True),\n transforms.CenterCrop(crop_size),\n ]\n\n if backend == \"pil\":\n trans.append(transforms.PILToTensor())\n\n trans += [\n transforms.ConvertImageDtype(torch.float),\n transforms.Normalize(mean=mean, std=std),\n ]\n\n self.transforms = transforms.Compose(trans)\n\n def __call__(self, img):\n return self.transforms(img)\n", "path": "references/classification/presets.py"}]} | 1,446 | 133 |
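The eval-preset bug in this record is just a missing branch: the backend check accepted `"tensor"` and rejected every other value, including the default `"pil"`, which is why `--backend pil` failed even though the train preset handled it. The patch mirrors the train preset's `elif backend != "pil"` logic. The sketch below reproduces only that branching, with placeholder strings standing in for real torchvision transforms so it runs without torchvision installed; the function name is invented for the example.

```python
# Corrected backend validation, mirroring ClassificationPresetTrain.
def leading_transforms(backend: str):
    backend = backend.lower()
    trans = []
    if backend == "tensor":
        trans.append("PILToTensor")  # placeholder for transforms.PILToTensor()
    elif backend != "pil":  # only 'tensor' and 'pil' are valid
        raise ValueError(f"backend can be 'tensor' or 'pil', but got {backend}")
    return trans


assert leading_transforms("pil") == []          # previously raised ValueError
assert leading_transforms("tensor") == ["PILToTensor"]
```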
gh_patches_debug_6577 | rasdani/github-patches | git_diff | rlworkgroup__garage-1927 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
On policy algos stop learning midway
Avg return either drops dramatically or the run stops completely due to NaN errors. Could affect off policy as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/garage/envs/normalized_env.py`
Content:
```
1 """An environment wrapper that normalizes action, observation and reward."""
2 import akro
3 import numpy as np
4
5 from garage import EnvStep, Wrapper
6
7
8 class NormalizedEnv(Wrapper):
9 """An environment wrapper for normalization.
10
11 This wrapper normalizes action, and optionally observation and reward.
12
13 Args:
14 env (Environment): An environment instance.
15 scale_reward (float): Scale of environment reward.
16 normalize_obs (bool): If True, normalize observation.
17 normalize_reward (bool): If True, normalize reward. scale_reward is
18 applied after normalization.
19 expected_action_scale (float): Assuming action falls in the range of
20 [-expected_action_scale, expected_action_scale] when normalize it.
21 flatten_obs (bool): Flatten observation if True.
22 obs_alpha (float): Update rate of moving average when estimating the
23 mean and variance of observations.
24 reward_alpha (float): Update rate of moving average when estimating the
25 mean and variance of rewards.
26
27 """
28
29 def __init__(
30 self,
31 env,
32 scale_reward=1.,
33 normalize_obs=False,
34 normalize_reward=False,
35 expected_action_scale=1.,
36 flatten_obs=True,
37 obs_alpha=0.001,
38 reward_alpha=0.001,
39 ):
40 super().__init__(env)
41
42 self._scale_reward = scale_reward
43 self._normalize_obs = normalize_obs
44 self._normalize_reward = normalize_reward
45 self._expected_action_scale = expected_action_scale
46 self._flatten_obs = flatten_obs
47
48 self._obs_alpha = obs_alpha
49 flat_obs_dim = self._env.observation_space.flat_dim
50 self._obs_mean = np.zeros(flat_obs_dim)
51 self._obs_var = np.ones(flat_obs_dim)
52
53 self._reward_alpha = reward_alpha
54 self._reward_mean = 0.
55 self._reward_var = 1.
56
57 def reset(self):
58 """Call reset on wrapped env.
59
60 Returns:
61 numpy.ndarray: The first observation conforming to
62 `observation_space`.
63 dict: The episode-level information.
64 Note that this is not part of `env_info` provided in `step()`.
65 It contains information of he entire episode, which could be
66 needed to determine the first action (e.g. in the case of
67 goal-conditioned or MTRL.)
68
69 """
70 first_obs, episode_info = self._env.reset()
71 if self._normalize_obs:
72 return self._apply_normalize_obs(first_obs), episode_info
73 else:
74 return first_obs, episode_info
75
76 def step(self, action):
77 """Call step on wrapped env.
78
79 Args:
80 action (np.ndarray): An action provided by the agent.
81
82 Returns:
83 EnvStep: The environment step resulting from the action.
84
85 Raises:
86 RuntimeError: if `step()` is called after the environment has been
87 constructed and `reset()` has not been called.
88
89 """
90 if isinstance(self.action_space, akro.Box):
91 # rescale the action when the bounds are not inf
92 lb, ub = self.action_space.low, self.action_space.high
93 if np.all(lb != -np.inf) and np.all(ub != -np.inf):
94 scaled_action = lb + (action + self._expected_action_scale) * (
95 0.5 * (ub - lb) / self._expected_action_scale)
96 scaled_action = np.clip(scaled_action, lb, ub)
97 else:
98 scaled_action = action
99 else:
100 scaled_action = action
101
102 es = self._env.step(scaled_action)
103 next_obs = es.observation
104 reward = es.reward
105
106 if self._normalize_obs:
107 next_obs = self._apply_normalize_obs(next_obs)
108 if self._normalize_reward:
109 reward = self._apply_normalize_reward(reward)
110
111 return EnvStep(env_spec=es.env_spec,
112 action=es.action,
113 reward=reward * self._scale_reward,
114 observation=next_obs,
115 env_info=es.env_info,
116 step_type=es.step_type)
117
118 def _update_obs_estimate(self, obs):
119 flat_obs = self._env.observation_space.flatten(obs)
120 self._obs_mean = (
121 1 - self._obs_alpha) * self._obs_mean + self._obs_alpha * flat_obs
122 self._obs_var = (
123 1 - self._obs_alpha) * self._obs_var + self._obs_alpha * np.square(
124 flat_obs - self._obs_mean)
125
126 def _update_reward_estimate(self, reward):
127 self._reward_mean = (1 - self._reward_alpha) * \
128 self._reward_mean + self._reward_alpha * reward
129 self._reward_var = (
130 1 - self._reward_alpha
131 ) * self._reward_var + self._reward_alpha * np.square(
132 reward - self._reward_mean)
133
134 def _apply_normalize_obs(self, obs):
135 """Compute normalized observation.
136
137 Args:
138 obs (np.ndarray): Observation.
139
140 Returns:
141 np.ndarray: Normalized observation.
142
143 """
144 self._update_obs_estimate(obs)
145 flat_obs = self._env.observation_space.flatten(obs)
146 normalized_obs = (flat_obs -
147 self._obs_mean) / (np.sqrt(self._obs_var) + 1e-8)
148 if not self._flatten_obs:
149 normalized_obs = self._env.observation_space.unflatten(
150 self._env.observation_space, normalized_obs)
151 return normalized_obs
152
153 def _apply_normalize_reward(self, reward):
154 """Compute normalized reward.
155
156 Args:
157 reward (float): Reward.
158
159 Returns:
160 float: Normalized reward.
161
162 """
163 self._update_reward_estimate(reward)
164 return reward / (np.sqrt(self._reward_var) + 1e-8)
165
166
167 normalize = NormalizedEnv
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/garage/envs/normalized_env.py b/src/garage/envs/normalized_env.py
--- a/src/garage/envs/normalized_env.py
+++ b/src/garage/envs/normalized_env.py
@@ -109,7 +109,7 @@
reward = self._apply_normalize_reward(reward)
return EnvStep(env_spec=es.env_spec,
- action=es.action,
+ action=action,
reward=reward * self._scale_reward,
observation=next_obs,
env_info=es.env_info,
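The one-line fix above matters because the wrapper rescales and clips the agent's action before forwarding it to the inner environment, and the inner environment's `EnvStep` echoes that transformed action back. Returning `es.action` therefore stores an action the policy never actually sampled, which biases likelihood ratios in on-policy updates. A minimal sketch (NumPy only, assuming a unit-Gaussian policy purely for illustration):

```python
import numpy as np

mu, sigma = 0.0, 1.0                     # assumed Gaussian policy parameters
rng = np.random.default_rng(0)

a = rng.normal(mu, sigma)                # action the policy actually sampled
a_env = np.clip(2.0 * a, -1.0, 1.0)      # rescaled + clipped action sent to the wrapped env

def log_prob(x):
    # log N(x; mu, sigma) for a scalar Gaussian
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# The two values differ, so a buffer that stores a_env no longer matches the
# distribution the policy sampled from.
print(log_prob(a), log_prob(a_env))
```

Storing the untransformed `action`, as the diff does, keeps the saved transition consistent with the policy that produced it.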
| {"golden_diff": "diff --git a/src/garage/envs/normalized_env.py b/src/garage/envs/normalized_env.py\n--- a/src/garage/envs/normalized_env.py\n+++ b/src/garage/envs/normalized_env.py\n@@ -109,7 +109,7 @@\n reward = self._apply_normalize_reward(reward)\n \n return EnvStep(env_spec=es.env_spec,\n- action=es.action,\n+ action=action,\n reward=reward * self._scale_reward,\n observation=next_obs,\n env_info=es.env_info,\n", "issue": "On policy algos stop learning midway\nAvg return either drops dramatically or the run stops completely due to NaN errors. Could affect off policy as well.\n", "before_files": [{"content": "\"\"\"An environment wrapper that normalizes action, observation and reward.\"\"\"\nimport akro\nimport numpy as np\n\nfrom garage import EnvStep, Wrapper\n\n\nclass NormalizedEnv(Wrapper):\n \"\"\"An environment wrapper for normalization.\n\n This wrapper normalizes action, and optionally observation and reward.\n\n Args:\n env (Environment): An environment instance.\n scale_reward (float): Scale of environment reward.\n normalize_obs (bool): If True, normalize observation.\n normalize_reward (bool): If True, normalize reward. scale_reward is\n applied after normalization.\n expected_action_scale (float): Assuming action falls in the range of\n [-expected_action_scale, expected_action_scale] when normalize it.\n flatten_obs (bool): Flatten observation if True.\n obs_alpha (float): Update rate of moving average when estimating the\n mean and variance of observations.\n reward_alpha (float): Update rate of moving average when estimating the\n mean and variance of rewards.\n\n \"\"\"\n\n def __init__(\n self,\n env,\n scale_reward=1.,\n normalize_obs=False,\n normalize_reward=False,\n expected_action_scale=1.,\n flatten_obs=True,\n obs_alpha=0.001,\n reward_alpha=0.001,\n ):\n super().__init__(env)\n\n self._scale_reward = scale_reward\n self._normalize_obs = normalize_obs\n self._normalize_reward = normalize_reward\n self._expected_action_scale = expected_action_scale\n self._flatten_obs = flatten_obs\n\n self._obs_alpha = obs_alpha\n flat_obs_dim = self._env.observation_space.flat_dim\n self._obs_mean = np.zeros(flat_obs_dim)\n self._obs_var = np.ones(flat_obs_dim)\n\n self._reward_alpha = reward_alpha\n self._reward_mean = 0.\n self._reward_var = 1.\n\n def reset(self):\n \"\"\"Call reset on wrapped env.\n\n Returns:\n numpy.ndarray: The first observation conforming to\n `observation_space`.\n dict: The episode-level information.\n Note that this is not part of `env_info` provided in `step()`.\n It contains information of he entire episode\uff0c which could be\n needed to determine the first action (e.g. 
in the case of\n goal-conditioned or MTRL.)\n\n \"\"\"\n first_obs, episode_info = self._env.reset()\n if self._normalize_obs:\n return self._apply_normalize_obs(first_obs), episode_info\n else:\n return first_obs, episode_info\n\n def step(self, action):\n \"\"\"Call step on wrapped env.\n\n Args:\n action (np.ndarray): An action provided by the agent.\n\n Returns:\n EnvStep: The environment step resulting from the action.\n\n Raises:\n RuntimeError: if `step()` is called after the environment has been\n constructed and `reset()` has not been called.\n\n \"\"\"\n if isinstance(self.action_space, akro.Box):\n # rescale the action when the bounds are not inf\n lb, ub = self.action_space.low, self.action_space.high\n if np.all(lb != -np.inf) and np.all(ub != -np.inf):\n scaled_action = lb + (action + self._expected_action_scale) * (\n 0.5 * (ub - lb) / self._expected_action_scale)\n scaled_action = np.clip(scaled_action, lb, ub)\n else:\n scaled_action = action\n else:\n scaled_action = action\n\n es = self._env.step(scaled_action)\n next_obs = es.observation\n reward = es.reward\n\n if self._normalize_obs:\n next_obs = self._apply_normalize_obs(next_obs)\n if self._normalize_reward:\n reward = self._apply_normalize_reward(reward)\n\n return EnvStep(env_spec=es.env_spec,\n action=es.action,\n reward=reward * self._scale_reward,\n observation=next_obs,\n env_info=es.env_info,\n step_type=es.step_type)\n\n def _update_obs_estimate(self, obs):\n flat_obs = self._env.observation_space.flatten(obs)\n self._obs_mean = (\n 1 - self._obs_alpha) * self._obs_mean + self._obs_alpha * flat_obs\n self._obs_var = (\n 1 - self._obs_alpha) * self._obs_var + self._obs_alpha * np.square(\n flat_obs - self._obs_mean)\n\n def _update_reward_estimate(self, reward):\n self._reward_mean = (1 - self._reward_alpha) * \\\n self._reward_mean + self._reward_alpha * reward\n self._reward_var = (\n 1 - self._reward_alpha\n ) * self._reward_var + self._reward_alpha * np.square(\n reward - self._reward_mean)\n\n def _apply_normalize_obs(self, obs):\n \"\"\"Compute normalized observation.\n\n Args:\n obs (np.ndarray): Observation.\n\n Returns:\n np.ndarray: Normalized observation.\n\n \"\"\"\n self._update_obs_estimate(obs)\n flat_obs = self._env.observation_space.flatten(obs)\n normalized_obs = (flat_obs -\n self._obs_mean) / (np.sqrt(self._obs_var) + 1e-8)\n if not self._flatten_obs:\n normalized_obs = self._env.observation_space.unflatten(\n self._env.observation_space, normalized_obs)\n return normalized_obs\n\n def _apply_normalize_reward(self, reward):\n \"\"\"Compute normalized reward.\n\n Args:\n reward (float): Reward.\n\n Returns:\n float: Normalized reward.\n\n \"\"\"\n self._update_reward_estimate(reward)\n return reward / (np.sqrt(self._reward_var) + 1e-8)\n\n\nnormalize = NormalizedEnv\n", "path": "src/garage/envs/normalized_env.py"}], "after_files": [{"content": "\"\"\"An environment wrapper that normalizes action, observation and reward.\"\"\"\nimport akro\nimport numpy as np\n\nfrom garage import EnvStep, Wrapper\n\n\nclass NormalizedEnv(Wrapper):\n \"\"\"An environment wrapper for normalization.\n\n This wrapper normalizes action, and optionally observation and reward.\n\n Args:\n env (Environment): An environment instance.\n scale_reward (float): Scale of environment reward.\n normalize_obs (bool): If True, normalize observation.\n normalize_reward (bool): If True, normalize reward. 
scale_reward is\n applied after normalization.\n expected_action_scale (float): Assuming action falls in the range of\n [-expected_action_scale, expected_action_scale] when normalize it.\n flatten_obs (bool): Flatten observation if True.\n obs_alpha (float): Update rate of moving average when estimating the\n mean and variance of observations.\n reward_alpha (float): Update rate of moving average when estimating the\n mean and variance of rewards.\n\n \"\"\"\n\n def __init__(\n self,\n env,\n scale_reward=1.,\n normalize_obs=False,\n normalize_reward=False,\n expected_action_scale=1.,\n flatten_obs=True,\n obs_alpha=0.001,\n reward_alpha=0.001,\n ):\n super().__init__(env)\n\n self._scale_reward = scale_reward\n self._normalize_obs = normalize_obs\n self._normalize_reward = normalize_reward\n self._expected_action_scale = expected_action_scale\n self._flatten_obs = flatten_obs\n\n self._obs_alpha = obs_alpha\n flat_obs_dim = self._env.observation_space.flat_dim\n self._obs_mean = np.zeros(flat_obs_dim)\n self._obs_var = np.ones(flat_obs_dim)\n\n self._reward_alpha = reward_alpha\n self._reward_mean = 0.\n self._reward_var = 1.\n\n def reset(self):\n \"\"\"Call reset on wrapped env.\n\n Returns:\n numpy.ndarray: The first observation conforming to\n `observation_space`.\n dict: The episode-level information.\n Note that this is not part of `env_info` provided in `step()`.\n It contains information of he entire episode\uff0c which could be\n needed to determine the first action (e.g. in the case of\n goal-conditioned or MTRL.)\n\n \"\"\"\n first_obs, episode_info = self._env.reset()\n if self._normalize_obs:\n return self._apply_normalize_obs(first_obs), episode_info\n else:\n return first_obs, episode_info\n\n def step(self, action):\n \"\"\"Call step on wrapped env.\n\n Args:\n action (np.ndarray): An action provided by the agent.\n\n Returns:\n EnvStep: The environment step resulting from the action.\n\n Raises:\n RuntimeError: if `step()` is called after the environment has been\n constructed and `reset()` has not been called.\n\n \"\"\"\n if isinstance(self.action_space, akro.Box):\n # rescale the action when the bounds are not inf\n lb, ub = self.action_space.low, self.action_space.high\n if np.all(lb != -np.inf) and np.all(ub != -np.inf):\n scaled_action = lb + (action + self._expected_action_scale) * (\n 0.5 * (ub - lb) / self._expected_action_scale)\n scaled_action = np.clip(scaled_action, lb, ub)\n else:\n scaled_action = action\n else:\n scaled_action = action\n\n es = self._env.step(scaled_action)\n next_obs = es.observation\n reward = es.reward\n\n if self._normalize_obs:\n next_obs = self._apply_normalize_obs(next_obs)\n if self._normalize_reward:\n reward = self._apply_normalize_reward(reward)\n\n return EnvStep(env_spec=es.env_spec,\n action=action,\n reward=reward * self._scale_reward,\n observation=next_obs,\n env_info=es.env_info,\n step_type=es.step_type)\n\n def _update_obs_estimate(self, obs):\n flat_obs = self._env.observation_space.flatten(obs)\n self._obs_mean = (\n 1 - self._obs_alpha) * self._obs_mean + self._obs_alpha * flat_obs\n self._obs_var = (\n 1 - self._obs_alpha) * self._obs_var + self._obs_alpha * np.square(\n flat_obs - self._obs_mean)\n\n def _update_reward_estimate(self, reward):\n self._reward_mean = (1 - self._reward_alpha) * \\\n self._reward_mean + self._reward_alpha * reward\n self._reward_var = (\n 1 - self._reward_alpha\n ) * self._reward_var + self._reward_alpha * np.square(\n reward - self._reward_mean)\n\n def _apply_normalize_obs(self, 
obs):\n \"\"\"Compute normalized observation.\n\n Args:\n obs (np.ndarray): Observation.\n\n Returns:\n np.ndarray: Normalized observation.\n\n \"\"\"\n self._update_obs_estimate(obs)\n flat_obs = self._env.observation_space.flatten(obs)\n normalized_obs = (flat_obs -\n self._obs_mean) / (np.sqrt(self._obs_var) + 1e-8)\n if not self._flatten_obs:\n normalized_obs = self._env.observation_space.unflatten(\n self._env.observation_space, normalized_obs)\n return normalized_obs\n\n def _apply_normalize_reward(self, reward):\n \"\"\"Compute normalized reward.\n\n Args:\n reward (float): Reward.\n\n Returns:\n float: Normalized reward.\n\n \"\"\"\n self._update_reward_estimate(reward)\n return reward / (np.sqrt(self._reward_var) + 1e-8)\n\n\nnormalize = NormalizedEnv\n", "path": "src/garage/envs/normalized_env.py"}]} | 1,940 | 126 |
gh_patches_debug_20997 | rasdani/github-patches | git_diff | microsoft__presidio-259 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
crypto_recognizer throws an exception
When calling the engine analyze API like
```
response = engine.analyze(correlation_id=0,
text=text_to_analyze,
language='en',
entities=[],
all_fields=True,
score_threshold=0.5)
```
and the value of 'text_to_analyze' is
"/boardingPass/v1/devices/34e7b5e1a0aa1d6f3d862b52a289cdb7/registrations/pass.apoc.wallet/"
The exception below is thrown
` File "/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/analyzer_engine.py", line 204, in analyze
current_results = recognizer.analyze(text, entities, nlp_artifacts)
File "/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/pattern_recognizer.py", line 61, in analyze
pattern_result = self.__analyze_patterns(text)
File "/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/pattern_recognizer.py", line 144, in __analyze_patterns
validation_result = self.validate_result(current_match)
File "/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/predefined_recognizers/crypto_recognizer.py", line 23, in validate_result
bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)
File "/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/predefined_recognizers/crypto_recognizer.py", line 33, in __decode_base58
n = n * 58 + digits58.index(char)`
ValueError: substring not found
--- END ISSUE ---
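The crash can be reproduced without Presidio: the pattern's character class admits `0`, but `0` is not part of the Base58 alphabet, so `str.index` raises the `ValueError` shown above. A minimal sketch using the hex fragment from the offending URL:

```python
DIGITS58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

candidate = '34e7b5e1a0aa1d6f3d862b52a289cdb7'   # device id picked up by the permissive regex
invalid = [c for c in candidate if c not in DIGITS58]
print(invalid)   # ['0'] -> DIGITS58.index('0') is what raises "substring not found"
```

Excluding `0` from the character class and catching `ValueError` inside the validator are the two obvious ways to keep such near-matches from crashing the analyzer.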
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py`
Content:
```
1 from hashlib import sha256
2 from analyzer import Pattern
3 from analyzer import PatternRecognizer
4
5 # Copied from:
6 # http://rosettacode.org/wiki/Bitcoin/address_validation#Python
7 REGEX = r'\b[13][a-km-zA-HJ-NP-Z0-9]{26,33}\b'
8 CONTEXT = ["wallet", "btc", "bitcoin", "crypto"]
9
10
11 class CryptoRecognizer(PatternRecognizer):
12 """
13 Recognizes common crypto account numbers using regex + checksum
14 """
15
16 def __init__(self):
17 patterns = [Pattern('Crypto (Medium)', REGEX, 0.5)]
18 super().__init__(supported_entity="CRYPTO", patterns=patterns,
19 context=CONTEXT)
20
21 def validate_result(self, pattern_text):
22 # try:
23 bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)
24 result = bcbytes[-4:] == sha256(sha256(bcbytes[:-4])
25 .digest()).digest()[:4]
26 return result
27
28 @staticmethod
29 def __decode_base58(bc, length):
30 digits58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
31 n = 0
32 for char in bc:
33 n = n * 58 + digits58.index(char)
34 return n.to_bytes(length, 'big')
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py b/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py
--- a/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py
+++ b/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py
@@ -4,7 +4,7 @@
# Copied from:
# http://rosettacode.org/wiki/Bitcoin/address_validation#Python
-REGEX = r'\b[13][a-km-zA-HJ-NP-Z0-9]{26,33}\b'
+REGEX = r'\b[13][a-km-zA-HJ-NP-Z1-9]{26,33}\b'
CONTEXT = ["wallet", "btc", "bitcoin", "crypto"]
@@ -19,11 +19,12 @@
context=CONTEXT)
def validate_result(self, pattern_text):
- # try:
- bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)
- result = bcbytes[-4:] == sha256(sha256(bcbytes[:-4])
- .digest()).digest()[:4]
- return result
+ try:
+ bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)
+ return bcbytes[-4:] == sha256(sha256(bcbytes[:-4])
+ .digest()).digest()[:4]
+ except ValueError:
+ return False
@staticmethod
def __decode_base58(bc, length):
| {"golden_diff": "diff --git a/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py b/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py\n--- a/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py\n+++ b/presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py\n@@ -4,7 +4,7 @@\n \n # Copied from:\n # http://rosettacode.org/wiki/Bitcoin/address_validation#Python\n-REGEX = r'\\b[13][a-km-zA-HJ-NP-Z0-9]{26,33}\\b'\n+REGEX = r'\\b[13][a-km-zA-HJ-NP-Z1-9]{26,33}\\b'\n CONTEXT = [\"wallet\", \"btc\", \"bitcoin\", \"crypto\"]\n \n \n@@ -19,11 +19,12 @@\n context=CONTEXT)\n \n def validate_result(self, pattern_text):\n- # try:\n- bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)\n- result = bcbytes[-4:] == sha256(sha256(bcbytes[:-4])\n- .digest()).digest()[:4]\n- return result\n+ try:\n+ bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)\n+ return bcbytes[-4:] == sha256(sha256(bcbytes[:-4])\n+ .digest()).digest()[:4]\n+ except ValueError:\n+ return False\n \n @staticmethod\n def __decode_base58(bc, length):\n", "issue": "crypto_recognizer throws an exception\n\r\nWhen calling the engine analyze API like\r\n\r\n```\r\n response = engine.analyze(correlation_id=0,\r\n text=text_to_analyze,\r\n language='en',\r\n entities=[],\r\n all_fields=True,\r\n score_threshold=0.5)\r\n```\r\n\r\nand the value of 'text_to_analyze' is \r\n\r\n\"/boardingPass/v1/devices/34e7b5e1a0aa1d6f3d862b52a289cdb7/registrations/pass.apoc.wallet/\"\r\n\r\nThe exception below is thrown\r\n\r\n\r\n` File \"/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/analyzer_engine.py\", line 204, in analyze\r\n current_results = recognizer.analyze(text, entities, nlp_artifacts)\r\n File \"/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/pattern_recognizer.py\", line 61, in analyze\r\n pattern_result = self.__analyze_patterns(text)\r\n File \"/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/pattern_recognizer.py\", line 144, in __analyze_patterns\r\n validation_result = self.validate_result(current_match)\r\n File \"/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/predefined_recognizers/crypto_recognizer.py\", line 23, in validate_result\r\n bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)\r\n File \"/home/folder_name/presidio_testing/my_venv/lib/python3.6/site-packages/analyzer/predefined_recognizers/crypto_recognizer.py\", line 33, in __decode_base58\r\n n = n * 58 + digits58.index(char)`\r\n\r\nValueError: substring not found\n", "before_files": [{"content": "from hashlib import sha256\nfrom analyzer import Pattern\nfrom analyzer import PatternRecognizer\n\n# Copied from:\n# http://rosettacode.org/wiki/Bitcoin/address_validation#Python\nREGEX = r'\\b[13][a-km-zA-HJ-NP-Z0-9]{26,33}\\b'\nCONTEXT = [\"wallet\", \"btc\", \"bitcoin\", \"crypto\"]\n\n\nclass CryptoRecognizer(PatternRecognizer):\n \"\"\"\n Recognizes common crypto account numbers using regex + checksum\n \"\"\"\n\n def __init__(self):\n patterns = [Pattern('Crypto (Medium)', REGEX, 0.5)]\n super().__init__(supported_entity=\"CRYPTO\", patterns=patterns,\n context=CONTEXT)\n\n def validate_result(self, pattern_text):\n # try:\n bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)\n result = bcbytes[-4:] == sha256(sha256(bcbytes[:-4])\n .digest()).digest()[:4]\n return result\n\n @staticmethod\n def __decode_base58(bc, length):\n digits58 = 
'123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'\n n = 0\n for char in bc:\n n = n * 58 + digits58.index(char)\n return n.to_bytes(length, 'big')\n", "path": "presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py"}], "after_files": [{"content": "from hashlib import sha256\nfrom analyzer import Pattern\nfrom analyzer import PatternRecognizer\n\n# Copied from:\n# http://rosettacode.org/wiki/Bitcoin/address_validation#Python\nREGEX = r'\\b[13][a-km-zA-HJ-NP-Z1-9]{26,33}\\b'\nCONTEXT = [\"wallet\", \"btc\", \"bitcoin\", \"crypto\"]\n\n\nclass CryptoRecognizer(PatternRecognizer):\n \"\"\"\n Recognizes common crypto account numbers using regex + checksum\n \"\"\"\n\n def __init__(self):\n patterns = [Pattern('Crypto (Medium)', REGEX, 0.5)]\n super().__init__(supported_entity=\"CRYPTO\", patterns=patterns,\n context=CONTEXT)\n\n def validate_result(self, pattern_text):\n try:\n bcbytes = CryptoRecognizer.__decode_base58(pattern_text, 25)\n return bcbytes[-4:] == sha256(sha256(bcbytes[:-4])\n .digest()).digest()[:4]\n except ValueError:\n return False\n\n @staticmethod\n def __decode_base58(bc, length):\n digits58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'\n n = 0\n for char in bc:\n n = n * 58 + digits58.index(char)\n return n.to_bytes(length, 'big')\n", "path": "presidio-analyzer/analyzer/predefined_recognizers/crypto_recognizer.py"}]} | 1,054 | 368 |
gh_patches_debug_10157 | rasdani/github-patches | git_diff | huggingface__transformers-193 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py error
AttributeError: 'BertForPreTraining' object has no attribute 'global_step'
--- END ISSUE ---
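The TensorFlow checkpoint stores bookkeeping variables such as `global_step` next to the model weights, so the conversion loop eventually calls `getattr(model, 'global_step')` on the PyTorch module and fails. The script already skips the optimizer slots `adam_v` and `adam_m`; the same filter, extended with `global_step`, avoids the error. A small self-contained sketch of that check (variable names here are illustrative):

```python
SKIP = ("adam_v", "adam_m", "global_step")   # checkpoint variables with no PyTorch counterpart

def should_skip(tf_variable_name):
    """Return True for TF checkpoint variables that must not be loaded into the model."""
    return any(part in SKIP for part in tf_variable_name.split("/"))

print(should_skip("global_step"))                       # True
print(should_skip("bert/embeddings/word_embeddings"))   # False
print(should_skip("bert/encoder/layer_0/adam_m"))       # True
```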
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2018 The HugginFace Inc. team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Convert BERT checkpoint."""
16
17 from __future__ import absolute_import
18 from __future__ import division
19 from __future__ import print_function
20
21 import os
22 import re
23 import argparse
24 import tensorflow as tf
25 import torch
26 import numpy as np
27
28 from .modeling import BertConfig, BertForPreTraining
29
30 def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):
31 config_path = os.path.abspath(bert_config_file)
32 tf_path = os.path.abspath(tf_checkpoint_path)
33 print("Converting TensorFlow checkpoint from {} with config at {}".format(tf_path, config_path))
34 # Load weights from TF model
35 init_vars = tf.train.list_variables(tf_path)
36 names = []
37 arrays = []
38 for name, shape in init_vars:
39 print("Loading TF weight {} with shape {}".format(name, shape))
40 array = tf.train.load_variable(tf_path, name)
41 names.append(name)
42 arrays.append(array)
43
44 # Initialise PyTorch model
45 config = BertConfig.from_json_file(bert_config_file)
46 print("Building PyTorch model from configuration: {}".format(str(config)))
47 model = BertForPreTraining(config)
48
49 for name, array in zip(names, arrays):
50 name = name.split('/')
51 # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v
52 # which are not required for using pretrained model
53 if any(n in ["adam_v", "adam_m"] for n in name):
54 print("Skipping {}".format("/".join(name)))
55 continue
56 pointer = model
57 for m_name in name:
58 if re.fullmatch(r'[A-Za-z]+_\d+', m_name):
59 l = re.split(r'_(\d+)', m_name)
60 else:
61 l = [m_name]
62 if l[0] == 'kernel' or l[0] == 'gamma':
63 pointer = getattr(pointer, 'weight')
64 elif l[0] == 'output_bias' or l[0] == 'beta':
65 pointer = getattr(pointer, 'bias')
66 elif l[0] == 'output_weights':
67 pointer = getattr(pointer, 'weight')
68 else:
69 pointer = getattr(pointer, l[0])
70 if len(l) >= 2:
71 num = int(l[1])
72 pointer = pointer[num]
73 if m_name[-11:] == '_embeddings':
74 pointer = getattr(pointer, 'weight')
75 elif m_name == 'kernel':
76 array = np.transpose(array)
77 try:
78 assert pointer.shape == array.shape
79 except AssertionError as e:
80 e.args += (pointer.shape, array.shape)
81 raise
82 print("Initialize PyTorch weight {}".format(name))
83 pointer.data = torch.from_numpy(array)
84
85 # Save pytorch-model
86 print("Save PyTorch model to {}".format(pytorch_dump_path))
87 torch.save(model.state_dict(), pytorch_dump_path)
88
89
90 if __name__ == "__main__":
91 parser = argparse.ArgumentParser()
92 ## Required parameters
93 parser.add_argument("--tf_checkpoint_path",
94 default = None,
95 type = str,
96 required = True,
97 help = "Path the TensorFlow checkpoint path.")
98 parser.add_argument("--bert_config_file",
99 default = None,
100 type = str,
101 required = True,
102 help = "The config json file corresponding to the pre-trained BERT model. \n"
103 "This specifies the model architecture.")
104 parser.add_argument("--pytorch_dump_path",
105 default = None,
106 type = str,
107 required = True,
108 help = "Path to the output PyTorch model.")
109 args = parser.parse_args()
110 convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path,
111 args.bert_config_file,
112 args.pytorch_dump_path)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py b/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py
--- a/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py
+++ b/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py
@@ -50,7 +50,7 @@
name = name.split('/')
# adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v
# which are not required for using pretrained model
- if any(n in ["adam_v", "adam_m"] for n in name):
+ if any(n in ["adam_v", "adam_m", "global_step"] for n in name):
print("Skipping {}".format("/".join(name)))
continue
pointer = model
| {"golden_diff": "diff --git a/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py b/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py\n--- a/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py\n+++ b/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py\n@@ -50,7 +50,7 @@\n name = name.split('/')\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\n # which are not required for using pretrained model\n- if any(n in [\"adam_v\", \"adam_m\"] for n in name):\n+ if any(n in [\"adam_v\", \"adam_m\", \"global_step\"] for n in name):\n print(\"Skipping {}\".format(\"/\".join(name)))\n continue\n pointer = model\n", "issue": "pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py error\nattributeError: 'BertForPreTraining' object has no attribute 'global_step'\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2018 The HugginFace Inc. team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Convert BERT checkpoint.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport re\nimport argparse\nimport tensorflow as tf\nimport torch\nimport numpy as np\n\nfrom .modeling import BertConfig, BertForPreTraining\n\ndef convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):\n config_path = os.path.abspath(bert_config_file)\n tf_path = os.path.abspath(tf_checkpoint_path)\n print(\"Converting TensorFlow checkpoint from {} with config at {}\".format(tf_path, config_path))\n # Load weights from TF model\n init_vars = tf.train.list_variables(tf_path)\n names = []\n arrays = []\n for name, shape in init_vars:\n print(\"Loading TF weight {} with shape {}\".format(name, shape))\n array = tf.train.load_variable(tf_path, name)\n names.append(name)\n arrays.append(array)\n\n # Initialise PyTorch model\n config = BertConfig.from_json_file(bert_config_file)\n print(\"Building PyTorch model from configuration: {}\".format(str(config)))\n model = BertForPreTraining(config)\n\n for name, array in zip(names, arrays):\n name = name.split('/')\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\n # which are not required for using pretrained model\n if any(n in [\"adam_v\", \"adam_m\"] for n in name):\n print(\"Skipping {}\".format(\"/\".join(name)))\n continue\n pointer = model\n for m_name in name:\n if re.fullmatch(r'[A-Za-z]+_\\d+', m_name):\n l = re.split(r'_(\\d+)', m_name)\n else:\n l = [m_name]\n if l[0] == 'kernel' or l[0] == 'gamma':\n pointer = getattr(pointer, 'weight')\n elif l[0] == 'output_bias' or l[0] == 'beta':\n pointer = getattr(pointer, 'bias')\n elif l[0] == 'output_weights':\n pointer = getattr(pointer, 'weight')\n else:\n pointer = getattr(pointer, l[0])\n if len(l) >= 2:\n num = int(l[1])\n pointer = pointer[num]\n if m_name[-11:] == '_embeddings':\n pointer = getattr(pointer, 'weight')\n elif m_name == 'kernel':\n array = np.transpose(array)\n try:\n 
assert pointer.shape == array.shape\n except AssertionError as e:\n e.args += (pointer.shape, array.shape)\n raise\n print(\"Initialize PyTorch weight {}\".format(name))\n pointer.data = torch.from_numpy(array)\n\n # Save pytorch-model\n print(\"Save PyTorch model to {}\".format(pytorch_dump_path))\n torch.save(model.state_dict(), pytorch_dump_path)\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n ## Required parameters\n parser.add_argument(\"--tf_checkpoint_path\",\n default = None,\n type = str,\n required = True,\n help = \"Path the TensorFlow checkpoint path.\")\n parser.add_argument(\"--bert_config_file\",\n default = None,\n type = str,\n required = True,\n help = \"The config json file corresponding to the pre-trained BERT model. \\n\"\n \"This specifies the model architecture.\")\n parser.add_argument(\"--pytorch_dump_path\",\n default = None,\n type = str,\n required = True,\n help = \"Path to the output PyTorch model.\")\n args = parser.parse_args()\n convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path,\n args.bert_config_file,\n args.pytorch_dump_path)\n", "path": "pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2018 The HugginFace Inc. team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Convert BERT checkpoint.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport re\nimport argparse\nimport tensorflow as tf\nimport torch\nimport numpy as np\n\nfrom .modeling import BertConfig, BertForPreTraining\n\ndef convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):\n config_path = os.path.abspath(bert_config_file)\n tf_path = os.path.abspath(tf_checkpoint_path)\n print(\"Converting TensorFlow checkpoint from {} with config at {}\".format(tf_path, config_path))\n # Load weights from TF model\n init_vars = tf.train.list_variables(tf_path)\n names = []\n arrays = []\n for name, shape in init_vars:\n print(\"Loading TF weight {} with shape {}\".format(name, shape))\n array = tf.train.load_variable(tf_path, name)\n names.append(name)\n arrays.append(array)\n\n # Initialise PyTorch model\n config = BertConfig.from_json_file(bert_config_file)\n print(\"Building PyTorch model from configuration: {}\".format(str(config)))\n model = BertForPreTraining(config)\n\n for name, array in zip(names, arrays):\n name = name.split('/')\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\n # which are not required for using pretrained model\n if any(n in [\"adam_v\", \"adam_m\", \"global_step\"] for n in name):\n print(\"Skipping {}\".format(\"/\".join(name)))\n continue\n pointer = model\n for m_name in name:\n if re.fullmatch(r'[A-Za-z]+_\\d+', m_name):\n l = re.split(r'_(\\d+)', m_name)\n else:\n l = [m_name]\n if l[0] == 'kernel' or l[0] == 'gamma':\n pointer = getattr(pointer, 'weight')\n elif l[0] == 'output_bias' or l[0] == 
'beta':\n pointer = getattr(pointer, 'bias')\n elif l[0] == 'output_weights':\n pointer = getattr(pointer, 'weight')\n else:\n pointer = getattr(pointer, l[0])\n if len(l) >= 2:\n num = int(l[1])\n pointer = pointer[num]\n if m_name[-11:] == '_embeddings':\n pointer = getattr(pointer, 'weight')\n elif m_name == 'kernel':\n array = np.transpose(array)\n try:\n assert pointer.shape == array.shape\n except AssertionError as e:\n e.args += (pointer.shape, array.shape)\n raise\n print(\"Initialize PyTorch weight {}\".format(name))\n pointer.data = torch.from_numpy(array)\n\n # Save pytorch-model\n print(\"Save PyTorch model to {}\".format(pytorch_dump_path))\n torch.save(model.state_dict(), pytorch_dump_path)\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n ## Required parameters\n parser.add_argument(\"--tf_checkpoint_path\",\n default = None,\n type = str,\n required = True,\n help = \"Path the TensorFlow checkpoint path.\")\n parser.add_argument(\"--bert_config_file\",\n default = None,\n type = str,\n required = True,\n help = \"The config json file corresponding to the pre-trained BERT model. \\n\"\n \"This specifies the model architecture.\")\n parser.add_argument(\"--pytorch_dump_path\",\n default = None,\n type = str,\n required = True,\n help = \"Path to the output PyTorch model.\")\n args = parser.parse_args()\n convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path,\n args.bert_config_file,\n args.pytorch_dump_path)\n", "path": "pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py"}]} | 1,480 | 180 |
gh_patches_debug_8109 | rasdani/github-patches | git_diff | pre-commit__pre-commit-204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash when /tmp is on a different device
```
Traceback (most recent call last):
File "/home/cameron/Workspace/hack16-llvm-lang/venv/bin/pre-commit", line 9, in <module>
load_entry_point('pre-commit==0.4.0', 'console_scripts', 'pre-commit')()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/main.py", line 136, in main
'Command {0} failed to exit with a returncode'.format(args.command)
File "/usr/lib64/python3.4/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/error_handler.py", line 41, in error_handler
traceback.format_exc(),
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/error_handler.py", line 24, in _log_and_exit
store.require_created()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py", line 97, in require_created
self._create()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py", line 90, in _create
self._write_sqlite_db()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py", line 82, in _write_sqlite_db
os.rename(tmpfile, self.db_path)
OSError: [Errno 18] Invalid cross-device link: '/tmp/tmpz1pkyqsm' -> '/home/cameron/.pre-commit/db.db'
```
--- END ISSUE ---
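`os.rename` cannot move a file across filesystems, so when `tempfile.mkstemp()` puts the temporary database under `/tmp` while `~/.pre-commit` lives on another device, the rename fails with errno 18 (`EXDEV`) as in the traceback. Creating the temporary file inside the destination directory avoids the problem; a minimal sketch (helper name and error handling are illustrative):

```python
import os
import tempfile

def atomic_write_text(path, data):
    # Create the temp file in the destination directory so the final
    # os.rename never has to cross a filesystem boundary.
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as tmp_file:
            tmp_file.write(data)
        os.rename(tmp_path, path)   # atomic when source and destination share a device
    except BaseException:
        os.remove(tmp_path)
        raise
```

The same idea applies to the sqlite file in the traceback: passing `dir=` pointing at the pre-commit home to `mkstemp` keeps the subsequent rename on a single device.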
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/store.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import logging
6 import os
7 import os.path
8 import sqlite3
9 import tempfile
10
11 from cached_property import cached_property
12
13 from pre_commit.prefixed_command_runner import PrefixedCommandRunner
14 from pre_commit.util import clean_path_on_failure
15 from pre_commit.util import cmd_output
16 from pre_commit.util import cwd
17
18
19 logger = logging.getLogger('pre_commit')
20
21
22 def _get_default_directory():
23 """Returns the default directory for the Store. This is intentionally
24 underscored to indicate that `Store.get_default_directory` is the intended
25 way to get this information. This is also done so
26 `Store.get_default_directory` can be mocked in tests and
27 `_get_default_directory` can be tested.
28 """
29 return os.environ.get(
30 'PRE_COMMIT_HOME',
31 os.path.join(os.path.expanduser('~'), '.pre-commit'),
32 )
33
34
35 class Store(object):
36 get_default_directory = staticmethod(_get_default_directory)
37
38 class RepoPathGetter(object):
39 def __init__(self, repo, sha, store):
40 self._repo = repo
41 self._sha = sha
42 self._store = store
43
44 @cached_property
45 def repo_path(self):
46 return self._store.clone(self._repo, self._sha)
47
48 def __init__(self, directory=None):
49 if directory is None:
50 directory = self.get_default_directory()
51
52 self.directory = directory
53 self.__created = False
54
55 def _write_readme(self):
56 with io.open(os.path.join(self.directory, 'README'), 'w') as readme:
57 readme.write(
58 'This directory is maintained by the pre-commit project.\n'
59 'Learn more: https://github.com/pre-commit/pre-commit\n'
60 )
61
62 def _write_sqlite_db(self):
63 # To avoid a race where someone ^Cs between db creation and execution
64 # of the CREATE TABLE statement
65 fd, tmpfile = tempfile.mkstemp()
66 # We'll be managing this file ourselves
67 os.close(fd)
68 # sqlite doesn't close its fd with its contextmanager >.<
69 # contextlib.closing fixes this.
70 # See: http://stackoverflow.com/a/28032829/812183
71 with contextlib.closing(sqlite3.connect(tmpfile)) as db:
72 db.executescript(
73 'CREATE TABLE repos ('
74 ' repo CHAR(255) NOT NULL,'
75 ' ref CHAR(255) NOT NULL,'
76 ' path CHAR(255) NOT NULL,'
77 ' PRIMARY KEY (repo, ref)'
78 ');'
79 )
80
81 # Atomic file move
82 os.rename(tmpfile, self.db_path)
83
84 def _create(self):
85 if os.path.exists(self.db_path):
86 return
87 if not os.path.exists(self.directory):
88 os.makedirs(self.directory)
89 self._write_readme()
90 self._write_sqlite_db()
91
92 def require_created(self):
93 """Require the pre-commit file store to be created."""
94 if self.__created:
95 return
96
97 self._create()
98 self.__created = True
99
100 def clone(self, url, sha):
101 """Clone the given url and checkout the specific sha."""
102 self.require_created()
103
104 # Check if we already exist
105 with sqlite3.connect(self.db_path) as db:
106 result = db.execute(
107 'SELECT path FROM repos WHERE repo = ? AND ref = ?',
108 [url, sha],
109 ).fetchone()
110 if result:
111 return result[0]
112
113 logger.info('Initializing environment for {0}.'.format(url))
114
115 dir = tempfile.mkdtemp(prefix='repo', dir=self.directory)
116 with clean_path_on_failure(dir):
117 cmd_output('git', 'clone', '--no-checkout', url, dir)
118 with cwd(dir):
119 cmd_output('git', 'checkout', sha)
120
121 # Update our db with the created repo
122 with sqlite3.connect(self.db_path) as db:
123 db.execute(
124 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',
125 [url, sha, dir],
126 )
127 return dir
128
129 def get_repo_path_getter(self, repo, sha):
130 return self.RepoPathGetter(repo, sha, self)
131
132 @cached_property
133 def cmd_runner(self):
134 return PrefixedCommandRunner(self.directory)
135
136 @cached_property
137 def db_path(self):
138 return os.path.join(self.directory, 'db.db')
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -62,7 +62,7 @@
def _write_sqlite_db(self):
# To avoid a race where someone ^Cs between db creation and execution
# of the CREATE TABLE statement
- fd, tmpfile = tempfile.mkstemp()
+ fd, tmpfile = tempfile.mkstemp(dir=self.directory)
# We'll be managing this file ourselves
os.close(fd)
# sqlite doesn't close its fd with its contextmanager >.<
| {"golden_diff": "diff --git a/pre_commit/store.py b/pre_commit/store.py\n--- a/pre_commit/store.py\n+++ b/pre_commit/store.py\n@@ -62,7 +62,7 @@\n def _write_sqlite_db(self):\n # To avoid a race where someone ^Cs between db creation and execution\n # of the CREATE TABLE statement\n- fd, tmpfile = tempfile.mkstemp()\n+ fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n # sqlite doesn't close its fd with its contextmanager >.<\n", "issue": "Crash when /tmp is on a different device\n```\nTraceback (most recent call last):\n File \"/home/cameron/Workspace/hack16-llvm-lang/venv/bin/pre-commit\", line 9, in <module>\n load_entry_point('pre-commit==0.4.0', 'console_scripts', 'pre-commit')()\n File \"/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/main.py\", line 136, in main\n 'Command {0} failed to exit with a returncode'.format(args.command)\n File \"/usr/lib64/python3.4/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/error_handler.py\", line 41, in error_handler\n traceback.format_exc(),\n File \"/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/error_handler.py\", line 24, in _log_and_exit\n store.require_created()\n File \"/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py\", line 97, in require_created\n self._create()\n File \"/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py\", line 90, in _create\n self._write_sqlite_db()\n File \"/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py\", line 82, in _write_sqlite_db\n os.rename(tmpfile, self.db_path)\nOSError: [Errno 18] Invalid cross-device link: '/tmp/tmpz1pkyqsm' -> '/home/cameron/.pre-commit/db.db'\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport os\nimport os.path\nimport sqlite3\nimport tempfile\n\nfrom cached_property import cached_property\n\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cwd\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory():\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. 
This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n return os.environ.get(\n 'PRE_COMMIT_HOME',\n os.path.join(os.path.expanduser('~'), '.pre-commit'),\n )\n\n\nclass Store(object):\n get_default_directory = staticmethod(_get_default_directory)\n\n class RepoPathGetter(object):\n def __init__(self, repo, sha, store):\n self._repo = repo\n self._sha = sha\n self._store = store\n\n @cached_property\n def repo_path(self):\n return self._store.clone(self._repo, self._sha)\n\n def __init__(self, directory=None):\n if directory is None:\n directory = self.get_default_directory()\n\n self.directory = directory\n self.__created = False\n\n def _write_readme(self):\n with io.open(os.path.join(self.directory, 'README'), 'w') as readme:\n readme.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n'\n )\n\n def _write_sqlite_db(self):\n # To avoid a race where someone ^Cs between db creation and execution\n # of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp()\n # We'll be managing this file ourselves\n os.close(fd)\n # sqlite doesn't close its fd with its contextmanager >.<\n # contextlib.closing fixes this.\n # See: http://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(tmpfile)) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo CHAR(255) NOT NULL,'\n ' ref CHAR(255) NOT NULL,'\n ' path CHAR(255) NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');'\n )\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n def _create(self):\n if os.path.exists(self.db_path):\n return\n if not os.path.exists(self.directory):\n os.makedirs(self.directory)\n self._write_readme()\n self._write_sqlite_db()\n\n def require_created(self):\n \"\"\"Require the pre-commit file store to be created.\"\"\"\n if self.__created:\n return\n\n self._create()\n self.__created = True\n\n def clone(self, url, sha):\n \"\"\"Clone the given url and checkout the specific sha.\"\"\"\n self.require_created()\n\n # Check if we already exist\n with sqlite3.connect(self.db_path) as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? 
AND ref = ?',\n [url, sha],\n ).fetchone()\n if result:\n return result[0]\n\n logger.info('Initializing environment for {0}.'.format(url))\n\n dir = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(dir):\n cmd_output('git', 'clone', '--no-checkout', url, dir)\n with cwd(dir):\n cmd_output('git', 'checkout', sha)\n\n # Update our db with the created repo\n with sqlite3.connect(self.db_path) as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [url, sha, dir],\n )\n return dir\n\n def get_repo_path_getter(self, repo, sha):\n return self.RepoPathGetter(repo, sha, self)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.directory)\n\n @cached_property\n def db_path(self):\n return os.path.join(self.directory, 'db.db')\n", "path": "pre_commit/store.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport os\nimport os.path\nimport sqlite3\nimport tempfile\n\nfrom cached_property import cached_property\n\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cwd\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory():\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n return os.environ.get(\n 'PRE_COMMIT_HOME',\n os.path.join(os.path.expanduser('~'), '.pre-commit'),\n )\n\n\nclass Store(object):\n get_default_directory = staticmethod(_get_default_directory)\n\n class RepoPathGetter(object):\n def __init__(self, repo, sha, store):\n self._repo = repo\n self._sha = sha\n self._store = store\n\n @cached_property\n def repo_path(self):\n return self._store.clone(self._repo, self._sha)\n\n def __init__(self, directory=None):\n if directory is None:\n directory = self.get_default_directory()\n\n self.directory = directory\n self.__created = False\n\n def _write_readme(self):\n with io.open(os.path.join(self.directory, 'README'), 'w') as readme:\n readme.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n'\n )\n\n def _write_sqlite_db(self):\n # To avoid a race where someone ^Cs between db creation and execution\n # of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n # sqlite doesn't close its fd with its contextmanager >.<\n # contextlib.closing fixes this.\n # See: http://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(tmpfile)) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo CHAR(255) NOT NULL,'\n ' ref CHAR(255) NOT NULL,'\n ' path CHAR(255) NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');'\n )\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n def _create(self):\n if os.path.exists(self.db_path):\n return\n if not os.path.exists(self.directory):\n os.makedirs(self.directory)\n self._write_readme()\n self._write_sqlite_db()\n\n def require_created(self):\n \"\"\"Require the pre-commit file store to be created.\"\"\"\n if self.__created:\n return\n\n self._create()\n self.__created = True\n\n def 
clone(self, url, sha):\n \"\"\"Clone the given url and checkout the specific sha.\"\"\"\n self.require_created()\n\n # Check if we already exist\n with sqlite3.connect(self.db_path) as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? AND ref = ?',\n [url, sha],\n ).fetchone()\n if result:\n return result[0]\n\n logger.info('Initializing environment for {0}.'.format(url))\n\n dir = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(dir):\n cmd_output('git', 'clone', '--no-checkout', url, dir)\n with cwd(dir):\n cmd_output('git', 'checkout', sha)\n\n # Update our db with the created repo\n with sqlite3.connect(self.db_path) as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [url, sha, dir],\n )\n return dir\n\n def get_repo_path_getter(self, repo, sha):\n return self.RepoPathGetter(repo, sha, self)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.directory)\n\n @cached_property\n def db_path(self):\n return os.path.join(self.directory, 'db.db')\n", "path": "pre_commit/store.py"}]} | 1,986 | 130 |
gh_patches_debug_57079 | rasdani/github-patches | git_diff | searx__searx-672 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Infinite scroll: answers are repeated on each page
How to reproduce: search for ["user agent"](https://searx.me/?q=user+agent) with Infinite scroll activated.
Should the answer be disabled except on the first page, or should Infinite Scroll hide the answer?
I vote for the first option: disable answers except on the first page, on the server side.
--- END ISSUE ---
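One way to implement the first option is a guard at the top of the plugin's `post_search` hook so the answer is only attached for the first result page; infinite scroll then simply never sees it again. A sketch of that guard:

```python
def post_search(request, ctx):
    # Infinite scroll fetches page 2, 3, ... of the same query; only the
    # first page should carry the self-info answer.
    if ctx['search'].pageno > 1:
        return True
    # ... the existing "ip" / "user agent" handling runs only for page 1 ...
    return True
```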
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/plugins/self_info.py`
Content:
```
1 '''
2 searx is free software: you can redistribute it and/or modify
3 it under the terms of the GNU Affero General Public License as published by
4 the Free Software Foundation, either version 3 of the License, or
5 (at your option) any later version.
6
7 searx is distributed in the hope that it will be useful,
8 but WITHOUT ANY WARRANTY; without even the implied warranty of
9 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 GNU Affero General Public License for more details.
11
12 You should have received a copy of the GNU Affero General Public License
13 along with searx. If not, see < http://www.gnu.org/licenses/ >.
14
15 (C) 2015 by Adam Tauber, <[email protected]>
16 '''
17 from flask_babel import gettext
18 import re
19 name = "Self Informations"
20 description = gettext('Displays your IP if the query is "ip" and your user agent if the query contains "user agent".')
21 default_on = True
22
23
24 # Self User Agent regex
25 p = re.compile('.*user[ -]agent.*', re.IGNORECASE)
26
27
28 # attach callback to the post search hook
29 # request: flask request object
30 # ctx: the whole local context of the pre search hook
31 def post_search(request, ctx):
32 if ctx['search'].query == 'ip':
33 x_forwarded_for = request.headers.getlist("X-Forwarded-For")
34 if x_forwarded_for:
35 ip = x_forwarded_for[0]
36 else:
37 ip = request.remote_addr
38 ctx['search'].result_container.answers.clear()
39 ctx['search'].result_container.answers.add(ip)
40 elif p.match(ctx['search'].query):
41 ua = request.user_agent
42 ctx['search'].result_container.answers.clear()
43 ctx['search'].result_container.answers.add(ua)
44 return True
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/plugins/self_info.py b/searx/plugins/self_info.py
--- a/searx/plugins/self_info.py
+++ b/searx/plugins/self_info.py
@@ -29,6 +29,8 @@
# request: flask request object
# ctx: the whole local context of the pre search hook
def post_search(request, ctx):
+ if ctx['search'].pageno > 1:
+ return True
if ctx['search'].query == 'ip':
x_forwarded_for = request.headers.getlist("X-Forwarded-For")
if x_forwarded_for:
| {"golden_diff": "diff --git a/searx/plugins/self_info.py b/searx/plugins/self_info.py\n--- a/searx/plugins/self_info.py\n+++ b/searx/plugins/self_info.py\n@@ -29,6 +29,8 @@\n # request: flask request object\n # ctx: the whole local context of the pre search hook\n def post_search(request, ctx):\n+ if ctx['search'].pageno > 1:\n+ return True\n if ctx['search'].query == 'ip':\n x_forwarded_for = request.headers.getlist(\"X-Forwarded-For\")\n if x_forwarded_for:\n", "issue": "Infinite scroll: answer are repeated on each page\nHow to reproduce : search for [\"user agent\"](https://searx.me/?q=user+agent) with Infinite scroll activated.\n\nShould the answer be disabled except the first page ? or should Infinite Scroll hide the answer ?\n\nI vote for the first option : disabled answers except on the first page on the server side. \n\n", "before_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2015 by Adam Tauber, <[email protected]>\n'''\nfrom flask_babel import gettext\nimport re\nname = \"Self Informations\"\ndescription = gettext('Displays your IP if the query is \"ip\" and your user agent if the query contains \"user agent\".')\ndefault_on = True\n\n\n# Self User Agent regex\np = re.compile('.*user[ -]agent.*', re.IGNORECASE)\n\n\n# attach callback to the post search hook\n# request: flask request object\n# ctx: the whole local context of the pre search hook\ndef post_search(request, ctx):\n if ctx['search'].query == 'ip':\n x_forwarded_for = request.headers.getlist(\"X-Forwarded-For\")\n if x_forwarded_for:\n ip = x_forwarded_for[0]\n else:\n ip = request.remote_addr\n ctx['search'].result_container.answers.clear()\n ctx['search'].result_container.answers.add(ip)\n elif p.match(ctx['search'].query):\n ua = request.user_agent\n ctx['search'].result_container.answers.clear()\n ctx['search'].result_container.answers.add(ua)\n return True\n", "path": "searx/plugins/self_info.py"}], "after_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. 
If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2015 by Adam Tauber, <[email protected]>\n'''\nfrom flask_babel import gettext\nimport re\nname = \"Self Informations\"\ndescription = gettext('Displays your IP if the query is \"ip\" and your user agent if the query contains \"user agent\".')\ndefault_on = True\n\n\n# Self User Agent regex\np = re.compile('.*user[ -]agent.*', re.IGNORECASE)\n\n\n# attach callback to the post search hook\n# request: flask request object\n# ctx: the whole local context of the pre search hook\ndef post_search(request, ctx):\n if ctx['search'].pageno > 1:\n return True\n if ctx['search'].query == 'ip':\n x_forwarded_for = request.headers.getlist(\"X-Forwarded-For\")\n if x_forwarded_for:\n ip = x_forwarded_for[0]\n else:\n ip = request.remote_addr\n ctx['search'].result_container.answers.clear()\n ctx['search'].result_container.answers.add(ip)\n elif p.match(ctx['search'].query):\n ua = request.user_agent\n ctx['search'].result_container.answers.clear()\n ctx['search'].result_container.answers.add(ua)\n return True\n", "path": "searx/plugins/self_info.py"}]} | 813 | 135 |
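The searx patch shown above amounts to one guard in `post_search`: bail out for any result page after the first, so the special IP / user-agent answer is attached only once. Below is a minimal, self-contained sketch of that behaviour; `FakeSearch` and the plain-string client IP are stand-ins for illustration, not searx's real request/search objects.

```python
# Minimal sketch of the page-number guard added by the patch above.
# FakeSearch is a stand-in for searx's search object, not the real class.

class FakeSearch:
    def __init__(self, query, pageno):
        self.query = query
        self.pageno = pageno
        self.answers = set()


def post_search(client_ip, ctx):
    search = ctx['search']
    if search.pageno > 1:        # new guard: pages after the first keep no answers
        return True
    if search.query == 'ip':
        search.answers.clear()
        search.answers.add(client_ip)
    return True


page2 = {'search': FakeSearch('ip', pageno=2)}
post_search('127.0.0.1', page2)
print(page2['search'].answers)   # set(): the answer is not repeated on page 2

page1 = {'search': FakeSearch('ip', pageno=1)}
post_search('127.0.0.1', page1)
print(page1['search'].answers)   # {'127.0.0.1'}: answer only on the first page
```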
gh_patches_debug_6064 | rasdani/github-patches | git_diff | benoitc__gunicorn-1441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Supporting newest version of python
Gunicorn currently doesn't run tests with python3.6.
Since 3.6 is released and some of us are preparing to use it in production, it would be great if gunicorn had confirmed support.
Also, the `setup.py` classifiers don't include 3.5 or 3.6.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 import os
7 import sys
8
9 from setuptools import setup, find_packages
10 from setuptools.command.test import test as TestCommand
11
12 from gunicorn import __version__
13
14
15 CLASSIFIERS = [
16 'Development Status :: 4 - Beta',
17 'Environment :: Other Environment',
18 'Intended Audience :: Developers',
19 'License :: OSI Approved :: MIT License',
20 'Operating System :: MacOS :: MacOS X',
21 'Operating System :: POSIX',
22 'Programming Language :: Python',
23 'Programming Language :: Python :: 2',
24 'Programming Language :: Python :: 2.6',
25 'Programming Language :: Python :: 2.7',
26 'Programming Language :: Python :: 3',
27 'Programming Language :: Python :: 3.2',
28 'Programming Language :: Python :: 3.3',
29 'Programming Language :: Python :: 3.4',
30 'Topic :: Internet',
31 'Topic :: Utilities',
32 'Topic :: Software Development :: Libraries :: Python Modules',
33 'Topic :: Internet :: WWW/HTTP',
34 'Topic :: Internet :: WWW/HTTP :: WSGI',
35 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
36 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']
37
38 # read long description
39 with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:
40 long_description = f.read()
41
42 # read dev requirements
43 fname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')
44 with open(fname) as f:
45 tests_require = [l.strip() for l in f.readlines()]
46
47 if sys.version_info[:2] < (3, 3):
48 tests_require.append('mock')
49 if sys.version_info[:2] < (2, 7):
50 tests_require.append('unittest2')
51
52 class PyTestCommand(TestCommand):
53 user_options = [
54 ("cov", None, "measure coverage")
55 ]
56
57 def initialize_options(self):
58 TestCommand.initialize_options(self)
59 self.cov = None
60
61 def finalize_options(self):
62 TestCommand.finalize_options(self)
63 self.test_args = ['tests']
64 if self.cov:
65 self.test_args += ['--cov', 'gunicorn']
66 self.test_suite = True
67
68 def run_tests(self):
69 import pytest
70 errno = pytest.main(self.test_args)
71 sys.exit(errno)
72
73 setup(
74 name='gunicorn',
75 version=__version__,
76
77 description='WSGI HTTP Server for UNIX',
78 long_description=long_description,
79 author='Benoit Chesneau',
80 author_email='[email protected]',
81 license='MIT',
82 url='http://gunicorn.org',
83
84 classifiers=CLASSIFIERS,
85 zip_safe=False,
86 packages=find_packages(exclude=['examples', 'tests']),
87 include_package_data=True,
88
89 tests_require=tests_require,
90 cmdclass={'test': PyTestCommand},
91
92 entry_points="""
93 [console_scripts]
94 gunicorn=gunicorn.app.wsgiapp:run
95 gunicorn_django=gunicorn.app.djangoapp:run
96 gunicorn_paster=gunicorn.app.pasterapp:run
97
98 [paste.server_runner]
99 main=gunicorn.app.pasterapp:paste_server
100 """
101 )
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -27,6 +27,8 @@
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
+ 'Programming Language :: Python :: 3.5',
+ 'Programming Language :: Python :: 3.6',
'Topic :: Internet',
'Topic :: Utilities',
'Topic :: Software Development :: Libraries :: Python Modules',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -27,6 +27,8 @@\n 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n+ 'Programming Language :: Python :: 3.5',\n+ 'Programming Language :: Python :: 3.6',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n", "issue": "Supporting newest version of python\nGunicorn currently doesn't run tests with python3.6.\r\n\r\nSince 3.6 is release and some of us are preparing to use it in production it would be great if gunicorn had confirmed support.\r\n\r\nAlso `setup.py` classifiers doesn't include 3.5 or 3.6.\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\nfrom gunicorn import __version__\n\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']\n\n# read long description\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:\n long_description = f.read()\n\n# read dev requirements\nfname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')\nwith open(fname) as f:\n tests_require = [l.strip() for l in f.readlines()]\n\nif sys.version_info[:2] < (3, 3):\n tests_require.append('mock')\nif sys.version_info[:2] < (2, 7):\n tests_require.append('unittest2')\n\nclass PyTestCommand(TestCommand):\n user_options = [\n (\"cov\", None, \"measure coverage\")\n ]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.cov = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['tests']\n if self.cov:\n self.test_args += ['--cov', 'gunicorn']\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\nsetup(\n name='gunicorn',\n version=__version__,\n\n description='WSGI HTTP Server for UNIX',\n long_description=long_description,\n author='Benoit Chesneau',\n author_email='[email protected]',\n license='MIT',\n url='http://gunicorn.org',\n\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n include_package_data=True,\n\n tests_require=tests_require,\n cmdclass={'test': PyTestCommand},\n\n entry_points=\"\"\"\n [console_scripts]\n gunicorn=gunicorn.app.wsgiapp:run\n gunicorn_django=gunicorn.app.djangoapp:run\n gunicorn_paster=gunicorn.app.pasterapp:run\n\n [paste.server_runner]\n main=gunicorn.app.pasterapp:paste_server\n 
\"\"\"\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\nfrom gunicorn import __version__\n\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']\n\n# read long description\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:\n long_description = f.read()\n\n# read dev requirements\nfname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')\nwith open(fname) as f:\n tests_require = [l.strip() for l in f.readlines()]\n\nif sys.version_info[:2] < (3, 3):\n tests_require.append('mock')\nif sys.version_info[:2] < (2, 7):\n tests_require.append('unittest2')\n\nclass PyTestCommand(TestCommand):\n user_options = [\n (\"cov\", None, \"measure coverage\")\n ]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.cov = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['tests']\n if self.cov:\n self.test_args += ['--cov', 'gunicorn']\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\nsetup(\n name='gunicorn',\n version=__version__,\n\n description='WSGI HTTP Server for UNIX',\n long_description=long_description,\n author='Benoit Chesneau',\n author_email='[email protected]',\n license='MIT',\n url='http://gunicorn.org',\n\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n include_package_data=True,\n\n tests_require=tests_require,\n cmdclass={'test': PyTestCommand},\n\n entry_points=\"\"\"\n [console_scripts]\n gunicorn=gunicorn.app.wsgiapp:run\n gunicorn_django=gunicorn.app.djangoapp:run\n gunicorn_paster=gunicorn.app.pasterapp:run\n\n [paste.server_runner]\n main=gunicorn.app.pasterapp:paste_server\n \"\"\"\n)\n", "path": "setup.py"}]} | 1,239 | 118 |
gh_patches_debug_20410 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-1692 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use `importlib.metadata` to get package version instead of `pkg_resources`
Now that we're using Python 3.8+, we should switch to using `importlib.metadata` to get our version at runtime in `plasmapy/__init__.py`. We're using `pkg_resources` right now, but that has a "[significant runtime cost](https://github.com/pypa/setuptools_scm/#retrieving-package-version-at-runtime)".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/__init__.py`
Content:
```
1 """
2 Welcome to the `plasmapy` package, an open source community-developed Python
3 package for the plasma community. Documentation is available in the docstrings
4 and online at https://docs.plasmapy.org (accessible also using the
5 :func:`~plasmapy.online_help` function).
6 """
7 __all__ = [
8 "online_help",
9 "analysis",
10 "diagnostics",
11 "dispersion",
12 "formulary",
13 "particles",
14 "plasma",
15 "simulation",
16 "utils",
17 "__version__",
18 "__citation__",
19 ]
20
21 # Enforce Python version check during package import.
22 # This is the same check as the one at the top of setup.py
23 import sys
24
25 if sys.version_info < (3, 8): # coverage: ignore
26 raise ImportError("PlasmaPy does not support Python < 3.8")
27
28 # Packages may add whatever they like to this file, but
29 # should keep this content at the top.
30 # ----------------------------------------------------------------------------
31 import pkg_resources
32
33 from plasmapy import (
34 analysis,
35 diagnostics,
36 dispersion,
37 formulary,
38 particles,
39 plasma,
40 simulation,
41 utils,
42 )
43
44 # define version
45 try:
46 # this places a runtime dependency on setuptools
47 #
48 # note: if there's any distribution metadata in your source files, then this
49 # will find a version based on those files. Keep distribution metadata
50 # out of your repository unless you've intentionally installed the package
51 # as editable (e.g. `pip install -e {plasmapy_directory_root}`),
52 # but then __version__ will not be updated with each commit, it is
53 # frozen to the version at time of install.
54 #
55 #: PlasmaPy version string
56 __version__ = pkg_resources.get_distribution("plasmapy").version
57 except pkg_resources.DistributionNotFound:
58 # package is not installed
59 fallback_version = "unknown"
60 try:
61 # code most likely being used from source
62 # if setuptools_scm is installed then generate a version
63 from setuptools_scm import get_version
64
65 __version__ = get_version(
66 root="..", relative_to=__file__, fallback_version=fallback_version
67 )
68 del get_version
69 warn_add = "setuptools_scm failed to detect the version"
70 except ModuleNotFoundError:
71 # setuptools_scm is not installed
72 __version__ = fallback_version
73 warn_add = "setuptools_scm is not installed"
74
75 if __version__ == fallback_version:
76 from warnings import warn
77
78 warn(
79 f"plasmapy.__version__ not generated (set to 'unknown'), PlasmaPy is "
80 f"not an installed package and {warn_add}.",
81 RuntimeWarning,
82 )
83
84 del warn
85 del fallback_version, warn_add
86
87 # ----------------------------------------------------------------------------
88 #: PlasmaPy citation instructions
89 __citation__ = (
90 "Instructions on how to cite and acknowledge PlasmaPy are provided in the "
91 "online documentation at: http://docs.plasmapy.org/en/stable/about/citation.html"
92 )
93
94
95 def online_help(query: str):
96 """
97 Open a webpage containing a search page in `PlasmaPy's documentation`_,
98 or another page that contains relevant online help.
99
100 This function requires an active internet connection, and will open
101 the page in the default web browser.
102
103 Parameters
104 ----------
105 query : str
106 The search query.
107 """
108 import webbrowser
109
110 from urllib.parse import urlencode
111
112 url = (
113 "http://docs.plasmapy.org/en/stable/search.html?"
114 "{}&check_keywords=yes&area=default"
115 ).format(urlencode({"q": query}))
116
117 if query.lower() in ("unit", "units", "quantity", "quantities"):
118 url = "http://docs.astropy.org/en/stable/units/"
119
120 webbrowser.open(url)
121
122
123 del pkg_resources, sys
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plasmapy/__init__.py b/plasmapy/__init__.py
--- a/plasmapy/__init__.py
+++ b/plasmapy/__init__.py
@@ -28,7 +28,7 @@
# Packages may add whatever they like to this file, but
# should keep this content at the top.
# ----------------------------------------------------------------------------
-import pkg_resources
+from importlib.metadata import PackageNotFoundError, version
from plasmapy import (
analysis,
@@ -53,8 +53,8 @@
# frozen to the version at time of install.
#
#: PlasmaPy version string
- __version__ = pkg_resources.get_distribution("plasmapy").version
-except pkg_resources.DistributionNotFound:
+ __version__ = version("plasmapy")
+except PackageNotFoundError:
# package is not installed
fallback_version = "unknown"
try:
@@ -120,4 +120,4 @@
webbrowser.open(url)
-del pkg_resources, sys
+del sys
| {"golden_diff": "diff --git a/plasmapy/__init__.py b/plasmapy/__init__.py\n--- a/plasmapy/__init__.py\n+++ b/plasmapy/__init__.py\n@@ -28,7 +28,7 @@\n # Packages may add whatever they like to this file, but\n # should keep this content at the top.\n # ----------------------------------------------------------------------------\n-import pkg_resources\n+from importlib.metadata import PackageNotFoundError, version\n \n from plasmapy import (\n analysis,\n@@ -53,8 +53,8 @@\n # frozen to the version at time of install.\n #\n #: PlasmaPy version string\n- __version__ = pkg_resources.get_distribution(\"plasmapy\").version\n-except pkg_resources.DistributionNotFound:\n+ __version__ = version(\"plasmapy\")\n+except PackageNotFoundError:\n # package is not installed\n fallback_version = \"unknown\"\n try:\n@@ -120,4 +120,4 @@\n webbrowser.open(url)\n \n \n-del pkg_resources, sys\n+del sys\n", "issue": "Use `importlib.metadata` to get package version instead of `pkg_resources`\nNow that we're using Python 3.8+, we should switch to using `importlib.metadata` to get our version at runtime in `plasmapy/__init__.py`. We're using `pkg_resources` right now, but that has a \"[significant runtime cost](https://github.com/pypa/setuptools_scm/#retrieving-package-version-at-runtime)\".\n", "before_files": [{"content": "\"\"\"\nWelcome to the `plasmapy` package, an open source community-developed Python\npackage for the plasma community. Documentation is available in the docstrings\nand online at https://docs.plasmapy.org (accessible also using the\n:func:`~plasmapy.online_help` function).\n\"\"\"\n__all__ = [\n \"online_help\",\n \"analysis\",\n \"diagnostics\",\n \"dispersion\",\n \"formulary\",\n \"particles\",\n \"plasma\",\n \"simulation\",\n \"utils\",\n \"__version__\",\n \"__citation__\",\n]\n\n# Enforce Python version check during package import.\n# This is the same check as the one at the top of setup.py\nimport sys\n\nif sys.version_info < (3, 8): # coverage: ignore\n raise ImportError(\"PlasmaPy does not support Python < 3.8\")\n\n# Packages may add whatever they like to this file, but\n# should keep this content at the top.\n# ----------------------------------------------------------------------------\nimport pkg_resources\n\nfrom plasmapy import (\n analysis,\n diagnostics,\n dispersion,\n formulary,\n particles,\n plasma,\n simulation,\n utils,\n)\n\n# define version\ntry:\n # this places a runtime dependency on setuptools\n #\n # note: if there's any distribution metadata in your source files, then this\n # will find a version based on those files. Keep distribution metadata\n # out of your repository unless you've intentionally installed the package\n # as editable (e.g. 
`pip install -e {plasmapy_directory_root}`),\n # but then __version__ will not be updated with each commit, it is\n # frozen to the version at time of install.\n #\n #: PlasmaPy version string\n __version__ = pkg_resources.get_distribution(\"plasmapy\").version\nexcept pkg_resources.DistributionNotFound:\n # package is not installed\n fallback_version = \"unknown\"\n try:\n # code most likely being used from source\n # if setuptools_scm is installed then generate a version\n from setuptools_scm import get_version\n\n __version__ = get_version(\n root=\"..\", relative_to=__file__, fallback_version=fallback_version\n )\n del get_version\n warn_add = \"setuptools_scm failed to detect the version\"\n except ModuleNotFoundError:\n # setuptools_scm is not installed\n __version__ = fallback_version\n warn_add = \"setuptools_scm is not installed\"\n\n if __version__ == fallback_version:\n from warnings import warn\n\n warn(\n f\"plasmapy.__version__ not generated (set to 'unknown'), PlasmaPy is \"\n f\"not an installed package and {warn_add}.\",\n RuntimeWarning,\n )\n\n del warn\n del fallback_version, warn_add\n\n# ----------------------------------------------------------------------------\n#: PlasmaPy citation instructions\n__citation__ = (\n \"Instructions on how to cite and acknowledge PlasmaPy are provided in the \"\n \"online documentation at: http://docs.plasmapy.org/en/stable/about/citation.html\"\n)\n\n\ndef online_help(query: str):\n \"\"\"\n Open a webpage containing a search page in `PlasmaPy's documentation`_,\n or another page that contains relevant online help.\n\n This function requires an active internet connection, and will open\n the page in the default web browser.\n\n Parameters\n ----------\n query : str\n The search query.\n \"\"\"\n import webbrowser\n\n from urllib.parse import urlencode\n\n url = (\n \"http://docs.plasmapy.org/en/stable/search.html?\"\n \"{}&check_keywords=yes&area=default\"\n ).format(urlencode({\"q\": query}))\n\n if query.lower() in (\"unit\", \"units\", \"quantity\", \"quantities\"):\n url = \"http://docs.astropy.org/en/stable/units/\"\n\n webbrowser.open(url)\n\n\ndel pkg_resources, sys\n", "path": "plasmapy/__init__.py"}], "after_files": [{"content": "\"\"\"\nWelcome to the `plasmapy` package, an open source community-developed Python\npackage for the plasma community. Documentation is available in the docstrings\nand online at https://docs.plasmapy.org (accessible also using the\n:func:`~plasmapy.online_help` function).\n\"\"\"\n__all__ = [\n \"online_help\",\n \"analysis\",\n \"diagnostics\",\n \"dispersion\",\n \"formulary\",\n \"particles\",\n \"plasma\",\n \"simulation\",\n \"utils\",\n \"__version__\",\n \"__citation__\",\n]\n\n# Enforce Python version check during package import.\n# This is the same check as the one at the top of setup.py\nimport sys\n\nif sys.version_info < (3, 8): # coverage: ignore\n raise ImportError(\"PlasmaPy does not support Python < 3.8\")\n\n# Packages may add whatever they like to this file, but\n# should keep this content at the top.\n# ----------------------------------------------------------------------------\nfrom importlib.metadata import PackageNotFoundError, version\n\nfrom plasmapy import (\n analysis,\n diagnostics,\n dispersion,\n formulary,\n particles,\n plasma,\n simulation,\n utils,\n)\n\n# define version\ntry:\n # this places a runtime dependency on setuptools\n #\n # note: if there's any distribution metadata in your source files, then this\n # will find a version based on those files. 
Keep distribution metadata\n # out of your repository unless you've intentionally installed the package\n # as editable (e.g. `pip install -e {plasmapy_directory_root}`),\n # but then __version__ will not be updated with each commit, it is\n # frozen to the version at time of install.\n #\n #: PlasmaPy version string\n __version__ = version(\"plasmapy\")\nexcept PackageNotFoundError:\n # package is not installed\n fallback_version = \"unknown\"\n try:\n # code most likely being used from source\n # if setuptools_scm is installed then generate a version\n from setuptools_scm import get_version\n\n __version__ = get_version(\n root=\"..\", relative_to=__file__, fallback_version=fallback_version\n )\n del get_version\n warn_add = \"setuptools_scm failed to detect the version\"\n except ModuleNotFoundError:\n # setuptools_scm is not installed\n __version__ = fallback_version\n warn_add = \"setuptools_scm is not installed\"\n\n if __version__ == fallback_version:\n from warnings import warn\n\n warn(\n f\"plasmapy.__version__ not generated (set to 'unknown'), PlasmaPy is \"\n f\"not an installed package and {warn_add}.\",\n RuntimeWarning,\n )\n\n del warn\n del fallback_version, warn_add\n\n# ----------------------------------------------------------------------------\n#: PlasmaPy citation instructions\n__citation__ = (\n \"Instructions on how to cite and acknowledge PlasmaPy are provided in the \"\n \"online documentation at: http://docs.plasmapy.org/en/stable/about/citation.html\"\n)\n\n\ndef online_help(query: str):\n \"\"\"\n Open a webpage containing a search page in `PlasmaPy's documentation`_,\n or another page that contains relevant online help.\n\n This function requires an active internet connection, and will open\n the page in the default web browser.\n\n Parameters\n ----------\n query : str\n The search query.\n \"\"\"\n import webbrowser\n\n from urllib.parse import urlencode\n\n url = (\n \"http://docs.plasmapy.org/en/stable/search.html?\"\n \"{}&check_keywords=yes&area=default\"\n ).format(urlencode({\"q\": query}))\n\n if query.lower() in (\"unit\", \"units\", \"quantity\", \"quantities\"):\n url = \"http://docs.astropy.org/en/stable/units/\"\n\n webbrowser.open(url)\n\n\ndel sys\n", "path": "plasmapy/__init__.py"}]} | 1,469 | 230 |
gh_patches_debug_41922 | rasdani/github-patches | git_diff | spack__spack-851 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
spack can't bootstrap from release tarball
Spack release tarballs don't include `.git` in the top directory like a clone of the repo would. The bootstrap relies on this to bootstrap a copy from GitHub:
```
[jawestlu@master4-centos71 spack-0.8.17]$ ./bin/spack bootstrap /tmp/
==> Error: command '/bin/git --git-dir=/mnt/lustre/jawestlu/rpmbuild/BUILD/spack-0.8.17/.git config --get remote.origin.url' returned error code 1
[jawestlu@master4-centos71 spack-0.8.17]$ ls -la /mnt/lustre/jawestlu/rpmbuild/BUILD/spack-0.8.17/
total 52
drwxr-xr-x 6 jawestlu jawestlu 4096 Jan 13 15:21 .
drwxr-xr-x 14 jawestlu jawestlu 4096 Jan 13 15:16 ..
-rw-r--r-- 1 jawestlu jawestlu 106 Mar 24 2015 .gitignore
-rw-r--r-- 1 jawestlu jawestlu 20309 Mar 24 2015 LICENSE
-rw-r--r-- 1 jawestlu jawestlu 2894 Mar 24 2015 README.md
drwxr-xr-x 2 jawestlu jawestlu 4096 Mar 24 2015 bin
drwxr-xr-x 3 jawestlu jawestlu 4096 Mar 24 2015 lib
drwxr-xr-x 3 jawestlu jawestlu 4096 Mar 24 2015 share
drwxr-xr-x 3 jawestlu jawestlu 4096 Mar 24 2015 var
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/spack/spack/cmd/bootstrap.py`
Content:
```
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the LICENSE file for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 import os
26 from subprocess import check_call
27
28 import llnl.util.tty as tty
29 from llnl.util.filesystem import join_path, mkdirp
30
31 import spack
32 from spack.util.executable import which
33
34 description = "Create a new installation of spack in another prefix"
35
36 def setup_parser(subparser):
37 subparser.add_argument('prefix', help="names of prefix where we should install spack")
38
39
40 def get_origin_url():
41 git_dir = join_path(spack.prefix, '.git')
42 git = which('git', required=True)
43 origin_url = git(
44 '--git-dir=%s' % git_dir, 'config', '--get', 'remote.origin.url',
45 output=str)
46 return origin_url.strip()
47
48
49 def bootstrap(parser, args):
50 origin_url = get_origin_url()
51 prefix = args.prefix
52
53 tty.msg("Fetching spack from origin: %s" % origin_url)
54
55 if os.path.isfile(prefix):
56 tty.die("There is already a file at %s" % prefix)
57
58 mkdirp(prefix)
59
60 if os.path.exists(join_path(prefix, '.git')):
61 tty.die("There already seems to be a git repository in %s" % prefix)
62
63 files_in_the_way = os.listdir(prefix)
64 if files_in_the_way:
65 tty.die("There are already files there! Delete these files before boostrapping spack.",
66 *files_in_the_way)
67
68 tty.msg("Installing:",
69 "%s/bin/spack" % prefix,
70 "%s/lib/spack/..." % prefix)
71
72 os.chdir(prefix)
73 git = which('git', required=True)
74 git('init', '--shared', '-q')
75 git('remote', 'add', 'origin', origin_url)
76 git('fetch', 'origin', 'master:refs/remotes/origin/master', '-n', '-q')
77 git('reset', '--hard', 'origin/master', '-q')
78
79 tty.msg("Successfully created a new spack in %s" % prefix,
80 "Run %s/bin/spack to use this installation." % prefix)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/spack/spack/cmd/bootstrap.py b/lib/spack/spack/cmd/bootstrap.py
--- a/lib/spack/spack/cmd/bootstrap.py
+++ b/lib/spack/spack/cmd/bootstrap.py
@@ -23,7 +23,6 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
import os
-from subprocess import check_call
import llnl.util.tty as tty
from llnl.util.filesystem import join_path, mkdirp
@@ -31,26 +30,49 @@
import spack
from spack.util.executable import which
+_SPACK_UPSTREAM = 'https://github.com/llnl/spack'
+
description = "Create a new installation of spack in another prefix"
+
def setup_parser(subparser):
- subparser.add_argument('prefix', help="names of prefix where we should install spack")
+ subparser.add_argument(
+ '-r', '--remote', action='store', dest='remote',
+ help="name of the remote to bootstrap from", default='origin')
+ subparser.add_argument(
+ 'prefix',
+ help="names of prefix where we should install spack")
-def get_origin_url():
+def get_origin_info(remote):
git_dir = join_path(spack.prefix, '.git')
git = which('git', required=True)
- origin_url = git(
- '--git-dir=%s' % git_dir, 'config', '--get', 'remote.origin.url',
- output=str)
- return origin_url.strip()
+ try:
+ branch = git('symbolic-ref', '--short', 'HEAD', output=str)
+ except ProcessError:
+ branch = 'develop'
+ tty.warn('No branch found; using default branch: %s' % branch)
+ if remote == 'origin' and \
+ branch not in ('master', 'develop'):
+ branch = 'develop'
+ tty.warn('Unknown branch found; using default branch: %s' % branch)
+ try:
+ origin_url = git(
+ '--git-dir=%s' % git_dir,
+ 'config', '--get', 'remote.%s.url' % remote,
+ output=str)
+ except ProcessError:
+ origin_url = _SPACK_UPSTREAM
+ tty.warn('No git repository found; '
+ 'using default upstream URL: %s' % origin_url)
+ return (origin_url.strip(), branch.strip())
def bootstrap(parser, args):
- origin_url = get_origin_url()
+ origin_url, branch = get_origin_info(args.remote)
prefix = args.prefix
- tty.msg("Fetching spack from origin: %s" % origin_url)
+ tty.msg("Fetching spack from '%s': %s" % (args.remote, origin_url))
if os.path.isfile(prefix):
tty.die("There is already a file at %s" % prefix)
@@ -62,7 +84,8 @@
files_in_the_way = os.listdir(prefix)
if files_in_the_way:
- tty.die("There are already files there! Delete these files before boostrapping spack.",
+ tty.die("There are already files there! "
+ "Delete these files before boostrapping spack.",
*files_in_the_way)
tty.msg("Installing:",
@@ -73,8 +96,10 @@
git = which('git', required=True)
git('init', '--shared', '-q')
git('remote', 'add', 'origin', origin_url)
- git('fetch', 'origin', 'master:refs/remotes/origin/master', '-n', '-q')
- git('reset', '--hard', 'origin/master', '-q')
+ git('fetch', 'origin', '%s:refs/remotes/origin/%s' % (branch, branch),
+ '-n', '-q')
+ git('reset', '--hard', 'origin/%s' % branch, '-q')
+ git('checkout', '-B', branch, 'origin/%s' % branch, '-q')
tty.msg("Successfully created a new spack in %s" % prefix,
"Run %s/bin/spack to use this installation." % prefix)
| {"golden_diff": "diff --git a/lib/spack/spack/cmd/bootstrap.py b/lib/spack/spack/cmd/bootstrap.py\n--- a/lib/spack/spack/cmd/bootstrap.py\n+++ b/lib/spack/spack/cmd/bootstrap.py\n@@ -23,7 +23,6 @@\n # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n ##############################################################################\n import os\n-from subprocess import check_call\n \n import llnl.util.tty as tty\n from llnl.util.filesystem import join_path, mkdirp\n@@ -31,26 +30,49 @@\n import spack\n from spack.util.executable import which\n \n+_SPACK_UPSTREAM = 'https://github.com/llnl/spack'\n+\n description = \"Create a new installation of spack in another prefix\"\n \n+\n def setup_parser(subparser):\n- subparser.add_argument('prefix', help=\"names of prefix where we should install spack\")\n+ subparser.add_argument(\n+ '-r', '--remote', action='store', dest='remote',\n+ help=\"name of the remote to bootstrap from\", default='origin')\n+ subparser.add_argument(\n+ 'prefix',\n+ help=\"names of prefix where we should install spack\")\n \n \n-def get_origin_url():\n+def get_origin_info(remote):\n git_dir = join_path(spack.prefix, '.git')\n git = which('git', required=True)\n- origin_url = git(\n- '--git-dir=%s' % git_dir, 'config', '--get', 'remote.origin.url',\n- output=str)\n- return origin_url.strip()\n+ try:\n+ branch = git('symbolic-ref', '--short', 'HEAD', output=str)\n+ except ProcessError:\n+ branch = 'develop'\n+ tty.warn('No branch found; using default branch: %s' % branch)\n+ if remote == 'origin' and \\\n+ branch not in ('master', 'develop'):\n+ branch = 'develop'\n+ tty.warn('Unknown branch found; using default branch: %s' % branch)\n+ try:\n+ origin_url = git(\n+ '--git-dir=%s' % git_dir,\n+ 'config', '--get', 'remote.%s.url' % remote,\n+ output=str)\n+ except ProcessError:\n+ origin_url = _SPACK_UPSTREAM\n+ tty.warn('No git repository found; '\n+ 'using default upstream URL: %s' % origin_url)\n+ return (origin_url.strip(), branch.strip())\n \n \n def bootstrap(parser, args):\n- origin_url = get_origin_url()\n+ origin_url, branch = get_origin_info(args.remote)\n prefix = args.prefix\n \n- tty.msg(\"Fetching spack from origin: %s\" % origin_url)\n+ tty.msg(\"Fetching spack from '%s': %s\" % (args.remote, origin_url))\n \n if os.path.isfile(prefix):\n tty.die(\"There is already a file at %s\" % prefix)\n@@ -62,7 +84,8 @@\n \n files_in_the_way = os.listdir(prefix)\n if files_in_the_way:\n- tty.die(\"There are already files there! Delete these files before boostrapping spack.\",\n+ tty.die(\"There are already files there! \"\n+ \"Delete these files before boostrapping spack.\",\n *files_in_the_way)\n \n tty.msg(\"Installing:\",\n@@ -73,8 +96,10 @@\n git = which('git', required=True)\n git('init', '--shared', '-q')\n git('remote', 'add', 'origin', origin_url)\n- git('fetch', 'origin', 'master:refs/remotes/origin/master', '-n', '-q')\n- git('reset', '--hard', 'origin/master', '-q')\n+ git('fetch', 'origin', '%s:refs/remotes/origin/%s' % (branch, branch),\n+ '-n', '-q')\n+ git('reset', '--hard', 'origin/%s' % branch, '-q')\n+ git('checkout', '-B', branch, 'origin/%s' % branch, '-q')\n \n tty.msg(\"Successfully created a new spack in %s\" % prefix,\n \"Run %s/bin/spack to use this installation.\" % prefix)\n", "issue": "spack can't bootstrap from release tarball\nSpack release tarballs don't include `.git` in the top directory like a clone of the repo would. 
The bootstrap relies on this to bootrstrap a copy from github:\n\n```\n[jawestlu@master4-centos71 spack-0.8.17]$ ./bin/spack bootstrap /tmp/\n==> Error: command '/bin/git --git-dir=/mnt/lustre/jawestlu/rpmbuild/BUILD/spack-0.8.17/.git config --get remote.origin.url' returned error code 1\n[jawestlu@master4-centos71 spack-0.8.17]$ ls -la /mnt/lustre/jawestlu/rpmbuild/BUILD/spack-0.8.17/\ntotal 52\ndrwxr-xr-x 6 jawestlu jawestlu 4096 Jan 13 15:21 .\ndrwxr-xr-x 14 jawestlu jawestlu 4096 Jan 13 15:16 ..\n-rw-r--r-- 1 jawestlu jawestlu 106 Mar 24 2015 .gitignore\n-rw-r--r-- 1 jawestlu jawestlu 20309 Mar 24 2015 LICENSE\n-rw-r--r-- 1 jawestlu jawestlu 2894 Mar 24 2015 README.md\ndrwxr-xr-x 2 jawestlu jawestlu 4096 Mar 24 2015 bin\ndrwxr-xr-x 3 jawestlu jawestlu 4096 Mar 24 2015 lib\ndrwxr-xr-x 3 jawestlu jawestlu 4096 Mar 24 2015 share\ndrwxr-xr-x 3 jawestlu jawestlu 4096 Mar 24 2015 var\n```\n\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nimport os\nfrom subprocess import check_call\n\nimport llnl.util.tty as tty\nfrom llnl.util.filesystem import join_path, mkdirp\n\nimport spack\nfrom spack.util.executable import which\n\ndescription = \"Create a new installation of spack in another prefix\"\n\ndef setup_parser(subparser):\n subparser.add_argument('prefix', help=\"names of prefix where we should install spack\")\n\n\ndef get_origin_url():\n git_dir = join_path(spack.prefix, '.git')\n git = which('git', required=True)\n origin_url = git(\n '--git-dir=%s' % git_dir, 'config', '--get', 'remote.origin.url',\n output=str)\n return origin_url.strip()\n\n\ndef bootstrap(parser, args):\n origin_url = get_origin_url()\n prefix = args.prefix\n\n tty.msg(\"Fetching spack from origin: %s\" % origin_url)\n\n if os.path.isfile(prefix):\n tty.die(\"There is already a file at %s\" % prefix)\n\n mkdirp(prefix)\n\n if os.path.exists(join_path(prefix, '.git')):\n tty.die(\"There already seems to be a git repository in %s\" % prefix)\n\n files_in_the_way = os.listdir(prefix)\n if files_in_the_way:\n tty.die(\"There are already files there! 
Delete these files before boostrapping spack.\",\n *files_in_the_way)\n\n tty.msg(\"Installing:\",\n \"%s/bin/spack\" % prefix,\n \"%s/lib/spack/...\" % prefix)\n\n os.chdir(prefix)\n git = which('git', required=True)\n git('init', '--shared', '-q')\n git('remote', 'add', 'origin', origin_url)\n git('fetch', 'origin', 'master:refs/remotes/origin/master', '-n', '-q')\n git('reset', '--hard', 'origin/master', '-q')\n\n tty.msg(\"Successfully created a new spack in %s\" % prefix,\n \"Run %s/bin/spack to use this installation.\" % prefix)\n", "path": "lib/spack/spack/cmd/bootstrap.py"}], "after_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nimport os\n\nimport llnl.util.tty as tty\nfrom llnl.util.filesystem import join_path, mkdirp\n\nimport spack\nfrom spack.util.executable import which\n\n_SPACK_UPSTREAM = 'https://github.com/llnl/spack'\n\ndescription = \"Create a new installation of spack in another prefix\"\n\n\ndef setup_parser(subparser):\n subparser.add_argument(\n '-r', '--remote', action='store', dest='remote',\n help=\"name of the remote to bootstrap from\", default='origin')\n subparser.add_argument(\n 'prefix',\n help=\"names of prefix where we should install spack\")\n\n\ndef get_origin_info(remote):\n git_dir = join_path(spack.prefix, '.git')\n git = which('git', required=True)\n try:\n branch = git('symbolic-ref', '--short', 'HEAD', output=str)\n except ProcessError:\n branch = 'develop'\n tty.warn('No branch found; using default branch: %s' % branch)\n if remote == 'origin' and \\\n branch not in ('master', 'develop'):\n branch = 'develop'\n tty.warn('Unknown branch found; using default branch: %s' % branch)\n try:\n origin_url = git(\n '--git-dir=%s' % git_dir,\n 'config', '--get', 'remote.%s.url' % remote,\n output=str)\n except ProcessError:\n origin_url = _SPACK_UPSTREAM\n tty.warn('No git repository found; '\n 'using default upstream URL: %s' % origin_url)\n return (origin_url.strip(), branch.strip())\n\n\ndef bootstrap(parser, args):\n origin_url, branch = get_origin_info(args.remote)\n prefix = args.prefix\n\n tty.msg(\"Fetching spack from '%s': %s\" % (args.remote, origin_url))\n\n if os.path.isfile(prefix):\n tty.die(\"There is already a file at %s\" % prefix)\n\n mkdirp(prefix)\n\n if os.path.exists(join_path(prefix, '.git')):\n tty.die(\"There already seems to be a git 
repository in %s\" % prefix)\n\n files_in_the_way = os.listdir(prefix)\n if files_in_the_way:\n tty.die(\"There are already files there! \"\n \"Delete these files before boostrapping spack.\",\n *files_in_the_way)\n\n tty.msg(\"Installing:\",\n \"%s/bin/spack\" % prefix,\n \"%s/lib/spack/...\" % prefix)\n\n os.chdir(prefix)\n git = which('git', required=True)\n git('init', '--shared', '-q')\n git('remote', 'add', 'origin', origin_url)\n git('fetch', 'origin', '%s:refs/remotes/origin/%s' % (branch, branch),\n '-n', '-q')\n git('reset', '--hard', 'origin/%s' % branch, '-q')\n git('checkout', '-B', branch, 'origin/%s' % branch, '-q')\n\n tty.msg(\"Successfully created a new spack in %s\" % prefix,\n \"Run %s/bin/spack to use this installation.\" % prefix)\n", "path": "lib/spack/spack/cmd/bootstrap.py"}]} | 1,630 | 949 |
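The reworked `get_origin_info` above boils down to: ask git for the remote URL, and fall back to the public upstream (and a default branch) when that fails, which is exactly the release-tarball case with no `.git` directory. The snippet below imitates only the URL fallback, using plain `subprocess` instead of spack's `which('git')` wrapper; it is an illustration of the shape of the fix, not spack code.

```python
# Rough illustration of the "fall back to the upstream URL" behaviour
# introduced by the patch above; uses subprocess instead of spack's helpers.
import subprocess

_SPACK_UPSTREAM = 'https://github.com/llnl/spack'   # default used in the patch


def origin_url(git_dir, remote='origin'):
    try:
        return subprocess.check_output(
            ['git', '--git-dir=%s' % git_dir,
             'config', '--get', 'remote.%s.url' % remote],
            stderr=subprocess.DEVNULL, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        # No git, no repository, or no such remote: behave like a tarball.
        return _SPACK_UPSTREAM


print(origin_url('/path/with/no/.git'))   # prints the upstream fallback
```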
gh_patches_debug_15365 | rasdani/github-patches | git_diff | uclapi__uclapi-1028 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Medium Articles Bug
Bug in getting medium articles on the homepage
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/uclapi/dashboard/app_helpers.py`
Content:
```
1 from binascii import hexlify
2 from random import SystemRandom
3
4 from common.helpers import generate_api_token
5 from uclapi.settings import (
6 MEDIUM_ARTICLE_QUANTITY,
7 REDIS_UCLAPI_HOST
8 )
9
10 import os
11 import redis
12 import textwrap
13 import validators
14
15
16 def get_articles():
17 r = redis.Redis(host=REDIS_UCLAPI_HOST)
18 pipe = r.pipeline()
19 articles = []
20 for i in range(0, MEDIUM_ARTICLE_QUANTITY):
21 articles.append({})
22 redis_key_url = "Blog:item:{}:url".format(i)
23 redis_key_title = "Blog:item:{}:title".format(i)
24 pipe.get(redis_key_url)
25 pipe.get(redis_key_title)
26 redis_response = pipe.execute()
27 for i in range(0, MEDIUM_ARTICLE_QUANTITY):
28 articles[i]['url'] = redis_response[i*2].decode("utf-8")
29 articles[i]['title'] = redis_response[i*2+1].decode("utf-8")
30 return articles
31
32
33 def generate_temp_api_token():
34 return generate_api_token("temp")
35
36
37 def get_temp_token():
38 r = redis.Redis(host=REDIS_UCLAPI_HOST)
39
40 token = generate_temp_api_token()
41 # We initialise a new temporary token and set it to 1
42 # as it is generated at its first usage.
43 r.set(token, 1, 600)
44 return token
45
46
47 def generate_app_id():
48 key = hexlify(os.urandom(5)).decode()
49 final = "A" + key
50
51 return final
52
53
54 def generate_app_client_id():
55 sr = SystemRandom()
56
57 client_id = '{}.{}'.format(
58 ''.join(str(sr.randint(0, 9)) for _ in range(16)),
59 ''.join(str(sr.randint(0, 9)) for _ in range(16))
60 )
61
62 return client_id
63
64
65 def generate_app_client_secret():
66 client_secret = hexlify(os.urandom(32)).decode()
67 return client_secret
68
69
70 def is_url_safe(url):
71 if not url.startswith("https://"):
72 return False
73
74 if not validators.url(url, public=True):
75 return False
76
77 whitelist_urls = os.environ["WHITELISTED_CALLBACK_URLS"].split(';')
78 if url in whitelist_urls:
79 return True
80
81 forbidden_urls = os.environ["FORBIDDEN_CALLBACK_URLS"].split(';')
82 for furl in forbidden_urls:
83 if furl in url:
84 return False
85
86 return True
87
88
89 def generate_secret():
90 key = hexlify(os.urandom(30)).decode()
91 dashed = '-'.join(textwrap.wrap(key, 15))
92
93 return dashed
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/uclapi/dashboard/app_helpers.py b/backend/uclapi/dashboard/app_helpers.py
--- a/backend/uclapi/dashboard/app_helpers.py
+++ b/backend/uclapi/dashboard/app_helpers.py
@@ -4,9 +4,10 @@
from common.helpers import generate_api_token
from uclapi.settings import (
MEDIUM_ARTICLE_QUANTITY,
- REDIS_UCLAPI_HOST
+ REDIS_UCLAPI_HOST,
+ DEBUG
)
-
+from django.core.management import call_command
import os
import redis
import textwrap
@@ -15,6 +16,11 @@
def get_articles():
r = redis.Redis(host=REDIS_UCLAPI_HOST)
+ if not r.exists("Blog:item:1:url"):
+ if DEBUG:
+ call_command('update_medium')
+ else:
+ return []
pipe = r.pipeline()
articles = []
for i in range(0, MEDIUM_ARTICLE_QUANTITY):
| {"golden_diff": "diff --git a/backend/uclapi/dashboard/app_helpers.py b/backend/uclapi/dashboard/app_helpers.py\n--- a/backend/uclapi/dashboard/app_helpers.py\n+++ b/backend/uclapi/dashboard/app_helpers.py\n@@ -4,9 +4,10 @@\n from common.helpers import generate_api_token\n from uclapi.settings import (\n MEDIUM_ARTICLE_QUANTITY,\n- REDIS_UCLAPI_HOST\n+ REDIS_UCLAPI_HOST,\n+ DEBUG\n )\n-\n+from django.core.management import call_command\n import os\n import redis\n import textwrap\n@@ -15,6 +16,11 @@\n \n def get_articles():\n r = redis.Redis(host=REDIS_UCLAPI_HOST)\n+ if not r.exists(\"Blog:item:1:url\"):\n+ if DEBUG:\n+ call_command('update_medium')\n+ else:\n+ return []\n pipe = r.pipeline()\n articles = []\n for i in range(0, MEDIUM_ARTICLE_QUANTITY):\n", "issue": "Medium Articles Bug\nBug in getting medium articles on the homepage\n", "before_files": [{"content": "from binascii import hexlify\nfrom random import SystemRandom\n\nfrom common.helpers import generate_api_token\nfrom uclapi.settings import (\n MEDIUM_ARTICLE_QUANTITY,\n REDIS_UCLAPI_HOST\n)\n\nimport os\nimport redis\nimport textwrap\nimport validators\n\n\ndef get_articles():\n r = redis.Redis(host=REDIS_UCLAPI_HOST)\n pipe = r.pipeline()\n articles = []\n for i in range(0, MEDIUM_ARTICLE_QUANTITY):\n articles.append({})\n redis_key_url = \"Blog:item:{}:url\".format(i)\n redis_key_title = \"Blog:item:{}:title\".format(i)\n pipe.get(redis_key_url)\n pipe.get(redis_key_title)\n redis_response = pipe.execute()\n for i in range(0, MEDIUM_ARTICLE_QUANTITY):\n articles[i]['url'] = redis_response[i*2].decode(\"utf-8\")\n articles[i]['title'] = redis_response[i*2+1].decode(\"utf-8\")\n return articles\n\n\ndef generate_temp_api_token():\n return generate_api_token(\"temp\")\n\n\ndef get_temp_token():\n r = redis.Redis(host=REDIS_UCLAPI_HOST)\n\n token = generate_temp_api_token()\n # We initialise a new temporary token and set it to 1\n # as it is generated at its first usage.\n r.set(token, 1, 600)\n return token\n\n\ndef generate_app_id():\n key = hexlify(os.urandom(5)).decode()\n final = \"A\" + key\n\n return final\n\n\ndef generate_app_client_id():\n sr = SystemRandom()\n\n client_id = '{}.{}'.format(\n ''.join(str(sr.randint(0, 9)) for _ in range(16)),\n ''.join(str(sr.randint(0, 9)) for _ in range(16))\n )\n\n return client_id\n\n\ndef generate_app_client_secret():\n client_secret = hexlify(os.urandom(32)).decode()\n return client_secret\n\n\ndef is_url_safe(url):\n if not url.startswith(\"https://\"):\n return False\n\n if not validators.url(url, public=True):\n return False\n\n whitelist_urls = os.environ[\"WHITELISTED_CALLBACK_URLS\"].split(';')\n if url in whitelist_urls:\n return True\n\n forbidden_urls = os.environ[\"FORBIDDEN_CALLBACK_URLS\"].split(';')\n for furl in forbidden_urls:\n if furl in url:\n return False\n\n return True\n\n\ndef generate_secret():\n key = hexlify(os.urandom(30)).decode()\n dashed = '-'.join(textwrap.wrap(key, 15))\n\n return dashed\n", "path": "backend/uclapi/dashboard/app_helpers.py"}], "after_files": [{"content": "from binascii import hexlify\nfrom random import SystemRandom\n\nfrom common.helpers import generate_api_token\nfrom uclapi.settings import (\n MEDIUM_ARTICLE_QUANTITY,\n REDIS_UCLAPI_HOST,\n DEBUG\n)\nfrom django.core.management import call_command\nimport os\nimport redis\nimport textwrap\nimport validators\n\n\ndef get_articles():\n r = redis.Redis(host=REDIS_UCLAPI_HOST)\n if not r.exists(\"Blog:item:1:url\"):\n if DEBUG:\n call_command('update_medium')\n else:\n return 
[]\n pipe = r.pipeline()\n articles = []\n for i in range(0, MEDIUM_ARTICLE_QUANTITY):\n articles.append({})\n redis_key_url = \"Blog:item:{}:url\".format(i)\n redis_key_title = \"Blog:item:{}:title\".format(i)\n pipe.get(redis_key_url)\n pipe.get(redis_key_title)\n redis_response = pipe.execute()\n for i in range(0, MEDIUM_ARTICLE_QUANTITY):\n articles[i]['url'] = redis_response[i*2].decode(\"utf-8\")\n articles[i]['title'] = redis_response[i*2+1].decode(\"utf-8\")\n return articles\n\n\ndef generate_temp_api_token():\n return generate_api_token(\"temp\")\n\n\ndef get_temp_token():\n r = redis.Redis(host=REDIS_UCLAPI_HOST)\n\n token = generate_temp_api_token()\n # We initialise a new temporary token and set it to 1\n # as it is generated at its first usage.\n r.set(token, 1, 600)\n return token\n\n\ndef generate_app_id():\n key = hexlify(os.urandom(5)).decode()\n final = \"A\" + key\n\n return final\n\n\ndef generate_app_client_id():\n sr = SystemRandom()\n\n client_id = '{}.{}'.format(\n ''.join(str(sr.randint(0, 9)) for _ in range(16)),\n ''.join(str(sr.randint(0, 9)) for _ in range(16))\n )\n\n return client_id\n\n\ndef generate_app_client_secret():\n client_secret = hexlify(os.urandom(32)).decode()\n return client_secret\n\n\ndef is_url_safe(url):\n if not url.startswith(\"https://\"):\n return False\n\n if not validators.url(url, public=True):\n return False\n\n whitelist_urls = os.environ[\"WHITELISTED_CALLBACK_URLS\"].split(';')\n if url in whitelist_urls:\n return True\n\n forbidden_urls = os.environ[\"FORBIDDEN_CALLBACK_URLS\"].split(';')\n for furl in forbidden_urls:\n if furl in url:\n return False\n\n return True\n\n\ndef generate_secret():\n key = hexlify(os.urandom(30)).decode()\n dashed = '-'.join(textwrap.wrap(key, 15))\n\n return dashed\n", "path": "backend/uclapi/dashboard/app_helpers.py"}]} | 1,043 | 211 |
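The uclapi fix above guards `get_articles` against an empty Redis cache: return an empty list in production, repopulate via the `update_medium` management command in debug. The toy version below reproduces that control flow with a plain dict so it runs anywhere; every name in it is invented for illustration.

```python
# Toy reproduction of the empty-cache guard added in the patch above,
# using a dict instead of Redis; all names here are illustrative only.
cache = {}


def refresh_articles():
    # Stands in for the `update_medium` management command.
    cache["Blog:item:1:url"] = "https://medium.com/@example/post"
    cache["Blog:item:1:title"] = "Example article"


def get_articles(debug=False):
    if "Blog:item:1:url" not in cache:   # cache never populated
        if debug:
            refresh_articles()           # development: fetch on demand
        else:
            return []                    # production: fail soft, no crash
    return [{"url": cache["Blog:item:1:url"],
             "title": cache["Blog:item:1:title"]}]


print(get_articles(debug=False))  # [] instead of raising
print(get_articles(debug=True))   # repopulates, then returns one article
```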
gh_patches_debug_1224 | rasdani/github-patches | git_diff | projectmesa__mesa-826 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Push new Mesa release
We are overdue for an official release. Before I push one, does anyone have anything they really want to try to get in, or should I just tag and release?
Discuss.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesa/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Mesa Agent-Based Modeling Framework
4
5 Core Objects: Model, and Agent.
6
7 """
8 import datetime
9
10 from .model import Model
11 from .agent import Agent
12
13
14 __all__ = ["Model", "Agent"]
15
16 __title__ = "mesa"
17 __version__ = "0.8.6"
18 __license__ = "Apache 2.0"
19 __copyright__ = "Copyright %s Project Mesa Team" % datetime.date.today().year
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mesa/__init__.py b/mesa/__init__.py
--- a/mesa/__init__.py
+++ b/mesa/__init__.py
@@ -14,6 +14,6 @@
__all__ = ["Model", "Agent"]
__title__ = "mesa"
-__version__ = "0.8.6"
+__version__ = "0.8.7"
__license__ = "Apache 2.0"
__copyright__ = "Copyright %s Project Mesa Team" % datetime.date.today().year
| {"golden_diff": "diff --git a/mesa/__init__.py b/mesa/__init__.py\n--- a/mesa/__init__.py\n+++ b/mesa/__init__.py\n@@ -14,6 +14,6 @@\n __all__ = [\"Model\", \"Agent\"]\n \n __title__ = \"mesa\"\n-__version__ = \"0.8.6\"\n+__version__ = \"0.8.7\"\n __license__ = \"Apache 2.0\"\n __copyright__ = \"Copyright %s Project Mesa Team\" % datetime.date.today().year\n", "issue": "Push new Mesa release\nWee are overdue for an official release. Before I push one, does anyone have anything that really want to try to get in or should I just tag and release? \r\n\r\nDiscuss. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMesa Agent-Based Modeling Framework\n\nCore Objects: Model, and Agent.\n\n\"\"\"\nimport datetime\n\nfrom .model import Model\nfrom .agent import Agent\n\n\n__all__ = [\"Model\", \"Agent\"]\n\n__title__ = \"mesa\"\n__version__ = \"0.8.6\"\n__license__ = \"Apache 2.0\"\n__copyright__ = \"Copyright %s Project Mesa Team\" % datetime.date.today().year\n", "path": "mesa/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMesa Agent-Based Modeling Framework\n\nCore Objects: Model, and Agent.\n\n\"\"\"\nimport datetime\n\nfrom .model import Model\nfrom .agent import Agent\n\n\n__all__ = [\"Model\", \"Agent\"]\n\n__title__ = \"mesa\"\n__version__ = \"0.8.7\"\n__license__ = \"Apache 2.0\"\n__copyright__ = \"Copyright %s Project Mesa Team\" % datetime.date.today().year\n", "path": "mesa/__init__.py"}]} | 438 | 121 |
gh_patches_debug_27345 | rasdani/github-patches | git_diff | internetarchive__openlibrary-5001 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Is there a way to limit the page-size of search API requests?
### Question
Is there a way to limit the page-size of search API requests?
The default Search-API page-size is 100 items: http://openlibrary.org/search.json?q=the+lord+of+the+rings
I would like to reduce the page-size (limit) for Search API calls, since the user can just 'page' through the results if he/she wants. Fetching more results also requires more processing on the client-side.
Side notes:
- The number is 20 for the search-inside API: http://openlibrary.org/search/inside.json?q=thanks%20for%20all%20the%20fish
- I think both default page-sizes should probably be the same (20 seems like a reasonable number to me).
- The Archive.org API has the "limit" parameter to do this.
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/plugins/inside/code.py`
Content:
```
1 from time import time
2
3 import json
4 import web
5
6 from infogami.utils import delegate
7 from infogami.utils.view import render_template
8
9 from openlibrary.core.fulltext import fulltext_search
10
11 RESULTS_PER_PAGE = 20
12
13
14 class search_inside(delegate.page):
15
16 path = '/search/inside'
17
18 def GET(self):
19 search_start = time() # should probably use a @timeit decorator
20 i = web.input(q='', page=1)
21 query = i.q
22 page = int(i.page)
23 results = fulltext_search(query, page=page, limit=RESULTS_PER_PAGE)
24 search_time = time() - search_start
25
26 return render_template('search/inside.tmpl', query, results, search_time,
27 page=page, results_per_page=RESULTS_PER_PAGE)
28 page.v2 = True # page is mobile-first
29 return page
30
31
32 class search_inside_json(delegate.page):
33 path = "/search/inside"
34 encoding = "json"
35
36 def GET(self):
37 i = web.input(q='', page=1, limit=RESULTS_PER_PAGE)
38 limit = min(i.limit, RESULTS_PER_PAGE) if i.limit else RESULTS_PER_PAGE
39 query = i.q
40 page = int(i.page)
41 results = fulltext_search(query, page=page, limit=limit, js=True)
42 web.header('Content-Type', 'application/json')
43 return delegate.RawText(json.dumps(results, indent=4))
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openlibrary/plugins/inside/code.py b/openlibrary/plugins/inside/code.py
--- a/openlibrary/plugins/inside/code.py
+++ b/openlibrary/plugins/inside/code.py
@@ -4,7 +4,7 @@
import web
from infogami.utils import delegate
-from infogami.utils.view import render_template
+from infogami.utils.view import render_template, safeint
from openlibrary.core.fulltext import fulltext_search
@@ -12,7 +12,6 @@
class search_inside(delegate.page):
-
path = '/search/inside'
def GET(self):
@@ -25,8 +24,6 @@
return render_template('search/inside.tmpl', query, results, search_time,
page=page, results_per_page=RESULTS_PER_PAGE)
- page.v2 = True # page is mobile-first
- return page
class search_inside_json(delegate.page):
@@ -35,7 +32,7 @@
def GET(self):
i = web.input(q='', page=1, limit=RESULTS_PER_PAGE)
- limit = min(i.limit, RESULTS_PER_PAGE) if i.limit else RESULTS_PER_PAGE
+ limit = min(safeint(i.limit, RESULTS_PER_PAGE), RESULTS_PER_PAGE)
query = i.q
page = int(i.page)
results = fulltext_search(query, page=page, limit=limit, js=True)
| {"golden_diff": "diff --git a/openlibrary/plugins/inside/code.py b/openlibrary/plugins/inside/code.py\n--- a/openlibrary/plugins/inside/code.py\n+++ b/openlibrary/plugins/inside/code.py\n@@ -4,7 +4,7 @@\n import web\n \n from infogami.utils import delegate\n-from infogami.utils.view import render_template\n+from infogami.utils.view import render_template, safeint\n \n from openlibrary.core.fulltext import fulltext_search\n \n@@ -12,7 +12,6 @@\n \n \n class search_inside(delegate.page):\n-\n path = '/search/inside'\n \n def GET(self):\n@@ -25,8 +24,6 @@\n \n return render_template('search/inside.tmpl', query, results, search_time,\n page=page, results_per_page=RESULTS_PER_PAGE)\n- page.v2 = True # page is mobile-first\n- return page\n \n \n class search_inside_json(delegate.page):\n@@ -35,7 +32,7 @@\n \n def GET(self):\n i = web.input(q='', page=1, limit=RESULTS_PER_PAGE)\n- limit = min(i.limit, RESULTS_PER_PAGE) if i.limit else RESULTS_PER_PAGE\n+ limit = min(safeint(i.limit, RESULTS_PER_PAGE), RESULTS_PER_PAGE)\n query = i.q\n page = int(i.page)\n results = fulltext_search(query, page=page, limit=limit, js=True)\n", "issue": "Is there a way to limit the page-size of search API requests?\n### Question\r\nIs there a way to limit the page-size of search API requests?\r\n\r\nThe default Search-API page-size is 100 items: http://openlibrary.org/search.json?q=the+lord+of+the+rings\r\n\r\nI would like to reduce the page-size (limit) for Search API calls, since the user can just 'page' through the results if he/she wants. Fetching more results also requires more processing on the client-side.\r\n\r\nSide notes:\r\n- The number is 20 for the search-inside API: http://openlibrary.org/search/inside.json?q=thanks%20for%20all%20the%20fish\r\n- I think both default page-sizes should probably be the same (20 seems like a reasonable number to me).\r\n- The Archive.org API has the \"limit\" parameter to do this.\r\n\r\nThanks!\r\n\r\n\n", "before_files": [{"content": "from time import time\n\nimport json\nimport web\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import render_template\n\nfrom openlibrary.core.fulltext import fulltext_search\n\nRESULTS_PER_PAGE = 20\n\n\nclass search_inside(delegate.page):\n\n path = '/search/inside'\n\n def GET(self):\n search_start = time() # should probably use a @timeit decorator\n i = web.input(q='', page=1)\n query = i.q\n page = int(i.page)\n results = fulltext_search(query, page=page, limit=RESULTS_PER_PAGE)\n search_time = time() - search_start\n\n return render_template('search/inside.tmpl', query, results, search_time,\n page=page, results_per_page=RESULTS_PER_PAGE)\n page.v2 = True # page is mobile-first\n return page\n\n\nclass search_inside_json(delegate.page):\n path = \"/search/inside\"\n encoding = \"json\"\n\n def GET(self):\n i = web.input(q='', page=1, limit=RESULTS_PER_PAGE)\n limit = min(i.limit, RESULTS_PER_PAGE) if i.limit else RESULTS_PER_PAGE\n query = i.q\n page = int(i.page)\n results = fulltext_search(query, page=page, limit=limit, js=True)\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(results, indent=4))\n", "path": "openlibrary/plugins/inside/code.py"}], "after_files": [{"content": "from time import time\n\nimport json\nimport web\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import render_template, safeint\n\nfrom openlibrary.core.fulltext import fulltext_search\n\nRESULTS_PER_PAGE = 20\n\n\nclass search_inside(delegate.page):\n path = '/search/inside'\n\n def 
GET(self):\n search_start = time() # should probably use a @timeit decorator\n i = web.input(q='', page=1)\n query = i.q\n page = int(i.page)\n results = fulltext_search(query, page=page, limit=RESULTS_PER_PAGE)\n search_time = time() - search_start\n\n return render_template('search/inside.tmpl', query, results, search_time,\n page=page, results_per_page=RESULTS_PER_PAGE)\n\n\nclass search_inside_json(delegate.page):\n path = \"/search/inside\"\n encoding = \"json\"\n\n def GET(self):\n i = web.input(q='', page=1, limit=RESULTS_PER_PAGE)\n limit = min(safeint(i.limit, RESULTS_PER_PAGE), RESULTS_PER_PAGE)\n query = i.q\n page = int(i.page)\n results = fulltext_search(query, page=page, limit=limit, js=True)\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(results, indent=4))\n", "path": "openlibrary/plugins/inside/code.py"}]} | 852 | 307 |
gh_patches_debug_20619 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`merge_type` `types` type hint
https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/tools/merge_types.py#L9
The current `Tuple[Type]` produces:
```
*.py:15:5: error: Argument 2 to "merge_types" has incompatible type "Tuple[Type[QueryA], Type[QueryB], Type[QueryC]]"; expected "Tuple[Type[Any]]" [arg-type]
```
According to [mypy](https://mypy.readthedocs.io/en/stable/kinds_of_types.html#tuple-types), we should either change it to `Tuple[Type, ...]` or follow mypy's suggestion and go with a generic `Sequence`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/tools/merge_types.py`
Content:
```
1 import warnings
2 from collections import Counter
3 from itertools import chain
4 from typing import Tuple, Type
5
6 import strawberry
7
8
9 def merge_types(name: str, types: Tuple[Type]) -> Type:
10 """Merge multiple Strawberry types into one
11
12 For example, given two queries `A` and `B`, one can merge them into a
13 super type as follows:
14
15 merge_types("SuperQuery", (B, A))
16
17 This is essentially the same as:
18
19 class SuperQuery(B, A):
20 ...
21 """
22
23 if not types:
24 raise ValueError("Can't merge types if none are supplied")
25
26 fields = chain(*(t._type_definition.fields for t in types))
27 counter = Counter(f.name for f in fields)
28 dupes = [f for f, c in counter.most_common() if c > 1]
29 if dupes:
30 warnings.warn("{} has overridden fields: {}".format(name, ", ".join(dupes)))
31
32 return strawberry.type(type(name, types, {}))
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/tools/merge_types.py b/strawberry/tools/merge_types.py
--- a/strawberry/tools/merge_types.py
+++ b/strawberry/tools/merge_types.py
@@ -1,12 +1,12 @@
import warnings
from collections import Counter
from itertools import chain
-from typing import Tuple, Type
+from typing import Tuple
import strawberry
-def merge_types(name: str, types: Tuple[Type]) -> Type:
+def merge_types(name: str, types: Tuple[type, ...]) -> type:
"""Merge multiple Strawberry types into one
For example, given two queries `A` and `B`, one can merge them into a
@@ -23,7 +23,9 @@
if not types:
raise ValueError("Can't merge types if none are supplied")
- fields = chain(*(t._type_definition.fields for t in types))
+ fields = chain(
+ *(t._type_definition.fields for t in types) # type: ignore[attr-defined]
+ )
counter = Counter(f.name for f in fields)
dupes = [f for f, c in counter.most_common() if c > 1]
if dupes:
| {"golden_diff": "diff --git a/strawberry/tools/merge_types.py b/strawberry/tools/merge_types.py\n--- a/strawberry/tools/merge_types.py\n+++ b/strawberry/tools/merge_types.py\n@@ -1,12 +1,12 @@\n import warnings\n from collections import Counter\n from itertools import chain\n-from typing import Tuple, Type\n+from typing import Tuple\n \n import strawberry\n \n \n-def merge_types(name: str, types: Tuple[Type]) -> Type:\n+def merge_types(name: str, types: Tuple[type, ...]) -> type:\n \"\"\"Merge multiple Strawberry types into one\n \n For example, given two queries `A` and `B`, one can merge them into a\n@@ -23,7 +23,9 @@\n if not types:\n raise ValueError(\"Can't merge types if none are supplied\")\n \n- fields = chain(*(t._type_definition.fields for t in types))\n+ fields = chain(\n+ *(t._type_definition.fields for t in types) # type: ignore[attr-defined]\n+ )\n counter = Counter(f.name for f in fields)\n dupes = [f for f, c in counter.most_common() if c > 1]\n if dupes:\n", "issue": "`merge_type` `types` type hint\nhttps://github.com/strawberry-graphql/strawberry/blob/main/strawberry/tools/merge_types.py#L9\r\n\r\nThe current `Tuple[Type]` produces:\r\n```\r\n*.py:15:5: error: Argument 2 to \"merge_types\" has incompatible type \"Tuple[Type[QueryA], Type[QueryB], Type[QueryC]]\"; expected \"Tuple[Type[Any]]\" [arg-type]\r\n```\r\n\r\nAccording to [mypy](https://mypy.readthedocs.io/en/stable/kinds_of_types.html#tuple-types), we should either change it to `Tuple[Type, ...]` or follow mypy's suggestion and go with a generic `Sequence`.\r\n\r\n\n", "before_files": [{"content": "import warnings\nfrom collections import Counter\nfrom itertools import chain\nfrom typing import Tuple, Type\n\nimport strawberry\n\n\ndef merge_types(name: str, types: Tuple[Type]) -> Type:\n \"\"\"Merge multiple Strawberry types into one\n\n For example, given two queries `A` and `B`, one can merge them into a\n super type as follows:\n\n merge_types(\"SuperQuery\", (B, A))\n\n This is essentially the same as:\n\n class SuperQuery(B, A):\n ...\n \"\"\"\n\n if not types:\n raise ValueError(\"Can't merge types if none are supplied\")\n\n fields = chain(*(t._type_definition.fields for t in types))\n counter = Counter(f.name for f in fields)\n dupes = [f for f, c in counter.most_common() if c > 1]\n if dupes:\n warnings.warn(\"{} has overridden fields: {}\".format(name, \", \".join(dupes)))\n\n return strawberry.type(type(name, types, {}))\n", "path": "strawberry/tools/merge_types.py"}], "after_files": [{"content": "import warnings\nfrom collections import Counter\nfrom itertools import chain\nfrom typing import Tuple\n\nimport strawberry\n\n\ndef merge_types(name: str, types: Tuple[type, ...]) -> type:\n \"\"\"Merge multiple Strawberry types into one\n\n For example, given two queries `A` and `B`, one can merge them into a\n super type as follows:\n\n merge_types(\"SuperQuery\", (B, A))\n\n This is essentially the same as:\n\n class SuperQuery(B, A):\n ...\n \"\"\"\n\n if not types:\n raise ValueError(\"Can't merge types if none are supplied\")\n\n fields = chain(\n *(t._type_definition.fields for t in types) # type: ignore[attr-defined]\n )\n counter = Counter(f.name for f in fields)\n dupes = [f for f, c in counter.most_common() if c > 1]\n if dupes:\n warnings.warn(\"{} has overridden fields: {}\".format(name, \", \".join(dupes)))\n\n return strawberry.type(type(name, types, {}))\n", "path": "strawberry/tools/merge_types.py"}]} | 700 | 269 |
gh_patches_debug_18567 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-945 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update and unify a number of metrics in `torchmetrics` docs
## 📚 Documentation
Before the next feature release, it'd be nice to update the number of implemented metrics and unify this number over all occurrences within the docs/pages.
**Additional context:** It looks like we've already had almost 80 metrics, so it'd be pity to underestimate these before another feature release O:]
```bash
$ grep -w docs/source/references/functional.rst -e "func" | wc -l
78
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchmetrics/__about__.py`
Content:
```
1 __version__ = "0.8.0dev"
2 __author__ = "PyTorchLightning et al."
3 __author_email__ = "[email protected]"
4 __license__ = "Apache-2.0"
5 __copyright__ = f"Copyright (c) 2020-2022, {__author__}."
6 __homepage__ = "https://github.com/PyTorchLightning/metrics"
7 __docs__ = "PyTorch native Metrics"
8 __docs_url__ = "https://torchmetrics.readthedocs.io/en/stable/"
9 __long_doc__ = """
10 Torchmetrics is a metrics API created for easy metric development and usage in both PyTorch and
11 [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/). It was originally a part of
12 Pytorch Lightning, but got split off so users could take advantage of the large collection of metrics
13 implemented without having to install Pytorch Lightning (even though we would love for you to try it out).
14 We currently have around 60+ metrics implemented and we continuously are adding more metrics, both within
15 already covered domains (classification, regression ect.) but also new domains (object detection ect.).
16 We make sure that all our metrics are rigorously tested such that you can trust them.
17 """
18
19 __all__ = [
20 "__author__",
21 "__author_email__",
22 "__copyright__",
23 "__docs__",
24 "__homepage__",
25 "__license__",
26 "__version__",
27 ]
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchmetrics/__about__.py b/torchmetrics/__about__.py
--- a/torchmetrics/__about__.py
+++ b/torchmetrics/__about__.py
@@ -11,7 +11,7 @@
[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/). It was originally a part of
Pytorch Lightning, but got split off so users could take advantage of the large collection of metrics
implemented without having to install Pytorch Lightning (even though we would love for you to try it out).
-We currently have around 60+ metrics implemented and we continuously are adding more metrics, both within
+We currently have around 80+ metrics implemented and we continuously are adding more metrics, both within
already covered domains (classification, regression ect.) but also new domains (object detection ect.).
We make sure that all our metrics are rigorously tested such that you can trust them.
"""
| {"golden_diff": "diff --git a/torchmetrics/__about__.py b/torchmetrics/__about__.py\n--- a/torchmetrics/__about__.py\n+++ b/torchmetrics/__about__.py\n@@ -11,7 +11,7 @@\n [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/). It was originally a part of\n Pytorch Lightning, but got split off so users could take advantage of the large collection of metrics\n implemented without having to install Pytorch Lightning (even though we would love for you to try it out).\n-We currently have around 60+ metrics implemented and we continuously are adding more metrics, both within\n+We currently have around 80+ metrics implemented and we continuously are adding more metrics, both within\n already covered domains (classification, regression ect.) but also new domains (object detection ect.).\n We make sure that all our metrics are rigorously tested such that you can trust them.\n \"\"\"\n", "issue": "Update and unify a number of metrics in `torchmetrics` docs\n## \ud83d\udcda Documentation\r\n\r\nBefore the next feature release, it'd be nice to update the number of implemented metrics and unify this number over all occurrences within the docs/pages.\r\n\r\n**Additional context:** It looks like we've already had almost 80 metrics, so it'd be pity to underestimate these before another feature release O:]\r\n\r\n```bash\r\n$ grep -w docs/source/references/functional.rst -e \"func\" | wc -l\r\n 78\r\n```\r\n\n", "before_files": [{"content": "__version__ = \"0.8.0dev\"\n__author__ = \"PyTorchLightning et al.\"\n__author_email__ = \"[email protected]\"\n__license__ = \"Apache-2.0\"\n__copyright__ = f\"Copyright (c) 2020-2022, {__author__}.\"\n__homepage__ = \"https://github.com/PyTorchLightning/metrics\"\n__docs__ = \"PyTorch native Metrics\"\n__docs_url__ = \"https://torchmetrics.readthedocs.io/en/stable/\"\n__long_doc__ = \"\"\"\nTorchmetrics is a metrics API created for easy metric development and usage in both PyTorch and\n[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/). It was originally a part of\nPytorch Lightning, but got split off so users could take advantage of the large collection of metrics\nimplemented without having to install Pytorch Lightning (even though we would love for you to try it out).\nWe currently have around 60+ metrics implemented and we continuously are adding more metrics, both within\nalready covered domains (classification, regression ect.) but also new domains (object detection ect.).\nWe make sure that all our metrics are rigorously tested such that you can trust them.\n\"\"\"\n\n__all__ = [\n \"__author__\",\n \"__author_email__\",\n \"__copyright__\",\n \"__docs__\",\n \"__homepage__\",\n \"__license__\",\n \"__version__\",\n]\n", "path": "torchmetrics/__about__.py"}], "after_files": [{"content": "__version__ = \"0.8.0dev\"\n__author__ = \"PyTorchLightning et al.\"\n__author_email__ = \"[email protected]\"\n__license__ = \"Apache-2.0\"\n__copyright__ = f\"Copyright (c) 2020-2022, {__author__}.\"\n__homepage__ = \"https://github.com/PyTorchLightning/metrics\"\n__docs__ = \"PyTorch native Metrics\"\n__docs_url__ = \"https://torchmetrics.readthedocs.io/en/stable/\"\n__long_doc__ = \"\"\"\nTorchmetrics is a metrics API created for easy metric development and usage in both PyTorch and\n[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/). 
It was originally a part of\nPytorch Lightning, but got split off so users could take advantage of the large collection of metrics\nimplemented without having to install Pytorch Lightning (even though we would love for you to try it out).\nWe currently have around 80+ metrics implemented and we continuously are adding more metrics, both within\nalready covered domains (classification, regression ect.) but also new domains (object detection ect.).\nWe make sure that all our metrics are rigorously tested such that you can trust them.\n\"\"\"\n\n__all__ = [\n \"__author__\",\n \"__author_email__\",\n \"__copyright__\",\n \"__docs__\",\n \"__homepage__\",\n \"__license__\",\n \"__version__\",\n]\n", "path": "torchmetrics/__about__.py"}]} | 742 | 203 |
gh_patches_debug_454 | rasdani/github-patches | git_diff | Textualize__textual-2755 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A lone `Static` results in a `TooManyMatches` error when using `query_one`
I've not dived into this beyond knocking up this example to isolate what I saw (about to head out of the door but wanted to record this as a reminder). With 0.27.0 (perhaps before too, just noting the version here for the record), this code:
```python
from textual.app import App, ComposeResult
from textual.widgets import Static
class OneStatic( App[ None ] ):
def compose( self ) -> ComposeResult:
yield Static()
def on_mount( self ) -> None:
self.query_one( Static ).update( "Hello, World!" )
if __name__ == "__main__":
OneStatic().run()
```
results in a `TooManyMatches` error being raised from the `query_one`. With very early testing this only seems to be the case with `Static` (at least, I tested with `Label` and `Button` and they're fine).
I think most people would rightly find this surprising.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/_tooltip.py`
Content:
```
1 from __future__ import annotations
2
3 from textual.widgets import Static
4
5
6 class Tooltip(Static):
7 DEFAULT_CSS = """
8 Tooltip {
9 layer: _tooltips;
10 margin: 1 2;
11 padding: 1 2;
12 background: $panel;
13 width: auto;
14 height: auto;
15 constrain: inflect;
16 max-width: 40;
17 display: none;
18 }
19 """
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/textual/widgets/_tooltip.py b/src/textual/widgets/_tooltip.py
--- a/src/textual/widgets/_tooltip.py
+++ b/src/textual/widgets/_tooltip.py
@@ -3,7 +3,7 @@
from textual.widgets import Static
-class Tooltip(Static):
+class Tooltip(Static, inherit_css=False):
DEFAULT_CSS = """
Tooltip {
layer: _tooltips;
| {"golden_diff": "diff --git a/src/textual/widgets/_tooltip.py b/src/textual/widgets/_tooltip.py\n--- a/src/textual/widgets/_tooltip.py\n+++ b/src/textual/widgets/_tooltip.py\n@@ -3,7 +3,7 @@\n from textual.widgets import Static\n \n \n-class Tooltip(Static):\n+class Tooltip(Static, inherit_css=False):\n DEFAULT_CSS = \"\"\"\n Tooltip {\n layer: _tooltips;\n", "issue": "A lone `Static` results in a `TooManyMatches` error when using `query_one`\nI've not dived into this beyond knocking up this example to isolate what I saw (about to head out of the door but wanted to record this as a reminder). With 0.27.0 (perhaps before too, just noting the version here for the record), this code:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.widgets import Static\r\n\r\nclass OneStatic( App[ None ] ):\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Static()\r\n\r\n def on_mount( self ) -> None:\r\n self.query_one( Static ).update( \"Hello, World!\" )\r\n\r\nif __name__ == \"__main__\":\r\n OneStatic().run()\r\n```\r\n\r\nresults in a `TooManyMatches` error being raised from the `query_one`. With very early testing this only seems to be the case with `Static` (at least, I tested with `Label` and `Button` and they're fine).\r\n\r\nI think most people would rightly find this surprising.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom textual.widgets import Static\n\n\nclass Tooltip(Static):\n DEFAULT_CSS = \"\"\"\n Tooltip {\n layer: _tooltips;\n margin: 1 2;\n padding: 1 2;\n background: $panel;\n width: auto;\n height: auto;\n constrain: inflect;\n max-width: 40;\n display: none;\n }\n \"\"\"\n", "path": "src/textual/widgets/_tooltip.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom textual.widgets import Static\n\n\nclass Tooltip(Static, inherit_css=False):\n DEFAULT_CSS = \"\"\"\n Tooltip {\n layer: _tooltips;\n margin: 1 2;\n padding: 1 2;\n background: $panel;\n width: auto;\n height: auto;\n constrain: inflect;\n max-width: 40;\n display: none;\n }\n \"\"\"\n", "path": "src/textual/widgets/_tooltip.py"}]} | 612 | 88 |
gh_patches_debug_22794 | rasdani/github-patches | git_diff | ultrabug__py3status-2007 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clock terminated with Exception
With the latest Manjaro Testing Update, I received version 3.32 with Python 3.9.1.
All modules still work except the clock module which is terminated. The journal simply says
```Exception in `i3pystatus clock` post_config_hook().```
The config didn't change and works with 3.31:
```
clock {
format = "{Local}"
format_time = "{icon} %a, %d.%m.%Y %H:%M"
}
```
Downgrading to 3.31 works. What else information do you need?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py3status/storage.py`
Content:
```
1 import os
2 import time
3
4 from pathlib import Path
5 from pickle import dump, load
6 from tempfile import NamedTemporaryFile
7
8
9 class Storage:
10
11 data = {}
12 initialized = False
13
14 def init(self, py3_wrapper):
15 self.py3_wrapper = py3_wrapper
16 self.config = py3_wrapper.config
17 py3_config = self.config.get("py3_config", {})
18
19 # check for legacy storage cache
20 legacy_storage_path = self.get_legacy_storage_path()
21
22 # cutting edge storage cache
23 storage_config = py3_config.get("py3status", {}).get("storage")
24 if storage_config:
25 storage_file = os.path.expandvars(storage_config.expanduser())
26 if "/" in storage_file:
27 storage_dir = None
28 else:
29 storage_dir = os.environ.get("XDG_CACHE_HOME")
30 else:
31 storage_dir = os.environ.get("XDG_CACHE_HOME")
32 storage_file = Path("py3status_cache.data")
33
34 if not storage_dir:
35 storage_dir = Path("~/.cache").expanduser()
36 self.storage_path = storage_dir / storage_file
37
38 # move legacy storage cache to new desired / default location
39 if legacy_storage_path:
40 self.py3_wrapper.log(
41 "moving legacy storage_path {} to {}".format(
42 legacy_storage_path, self.storage_path
43 )
44 )
45 legacy_storage_path.rename(self.storage_path)
46
47 try:
48 with self.storage_path.open("rb") as f:
49 self.data = load(f, encoding="bytes")
50 except OSError:
51 pass
52
53 self.py3_wrapper.log(f"storage_path: {self.storage_path}")
54 if self.data:
55 self.py3_wrapper.log(f"storage_data: {self.data}")
56 self.initialized = True
57
58 def get_legacy_storage_path(self):
59 """
60 Detect and return existing legacy storage path.
61 """
62 config_dir = Path(
63 self.py3_wrapper.config.get("i3status_config_path", "/tmp")
64 ).parent
65 storage_path = config_dir / "py3status.data"
66 if storage_path.exists():
67 return storage_path
68 else:
69 return None
70
71 def save(self):
72 """
73 Save our data to disk. We want to always have a valid file.
74 """
75 with NamedTemporaryFile(dir=self.storage_path.parent, delete=False) as f:
76 # we use protocol=2 for python 2/3 compatibility
77 dump(self.data, f, protocol=2)
78 f.flush()
79 os.fsync(f.fileno())
80 tmppath = Path(f.name)
81 tmppath.rename(self.storage_path)
82
83 def storage_set(self, module_name, key, value):
84 if key.startswith("_"):
85 raise ValueError('cannot set keys starting with an underscore "_"')
86
87 if self.data.get(module_name, {}).get(key) == value:
88 return
89
90 if module_name not in self.data:
91 self.data[module_name] = {}
92 self.data[module_name][key] = value
93 ts = time.time()
94 if "_ctime" not in self.data[module_name]:
95 self.data[module_name]["_ctime"] = ts
96 self.data[module_name]["_mtime"] = ts
97 self.save()
98
99 def storage_get(self, module_name, key):
100 return self.data.get(module_name, {}).get(key, None)
101
102 def storage_del(self, module_name, key=None):
103 if module_name in self.data and key in self.data[module_name]:
104 del self.data[module_name][key]
105 self.save()
106
107 def storage_keys(self, module_name):
108 return list(self.data.get(module_name, {}))
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py3status/storage.py b/py3status/storage.py
--- a/py3status/storage.py
+++ b/py3status/storage.py
@@ -22,18 +22,18 @@
# cutting edge storage cache
storage_config = py3_config.get("py3status", {}).get("storage")
if storage_config:
- storage_file = os.path.expandvars(storage_config.expanduser())
+ storage_file = os.path.expandvars(os.path.expanduser(storage_config))
if "/" in storage_file:
storage_dir = None
else:
storage_dir = os.environ.get("XDG_CACHE_HOME")
else:
storage_dir = os.environ.get("XDG_CACHE_HOME")
- storage_file = Path("py3status_cache.data")
+ storage_file = "py3status_cache.data"
if not storage_dir:
storage_dir = Path("~/.cache").expanduser()
- self.storage_path = storage_dir / storage_file
+ self.storage_path = Path(storage_dir, storage_file)
# move legacy storage cache to new desired / default location
if legacy_storage_path:
| {"golden_diff": "diff --git a/py3status/storage.py b/py3status/storage.py\n--- a/py3status/storage.py\n+++ b/py3status/storage.py\n@@ -22,18 +22,18 @@\n # cutting edge storage cache\n storage_config = py3_config.get(\"py3status\", {}).get(\"storage\")\n if storage_config:\n- storage_file = os.path.expandvars(storage_config.expanduser())\n+ storage_file = os.path.expandvars(os.path.expanduser(storage_config))\n if \"/\" in storage_file:\n storage_dir = None\n else:\n storage_dir = os.environ.get(\"XDG_CACHE_HOME\")\n else:\n storage_dir = os.environ.get(\"XDG_CACHE_HOME\")\n- storage_file = Path(\"py3status_cache.data\")\n+ storage_file = \"py3status_cache.data\"\n \n if not storage_dir:\n storage_dir = Path(\"~/.cache\").expanduser()\n- self.storage_path = storage_dir / storage_file\n+ self.storage_path = Path(storage_dir, storage_file)\n \n # move legacy storage cache to new desired / default location\n if legacy_storage_path:\n", "issue": "Clock terminated with Exception\nWith the latest Manjaro Testing Update, I received version 3.32 with Python 3.9.1.\r\n\r\nAll modules still work except the clock module which is terminated. The journal simply says\r\n```Exception in `i3pystatus clock` post_config_hook().```\r\n\r\nThe config didn't change and works with 3.31:\r\n```\r\nclock {\r\n format = \"{Local}\"\r\n format_time = \"{icon} %a, %d.%m.%Y %H:%M\"\r\n}\r\n```\r\n\r\nDowngrading to 3.31 works. What else information do you need?\n", "before_files": [{"content": "import os\nimport time\n\nfrom pathlib import Path\nfrom pickle import dump, load\nfrom tempfile import NamedTemporaryFile\n\n\nclass Storage:\n\n data = {}\n initialized = False\n\n def init(self, py3_wrapper):\n self.py3_wrapper = py3_wrapper\n self.config = py3_wrapper.config\n py3_config = self.config.get(\"py3_config\", {})\n\n # check for legacy storage cache\n legacy_storage_path = self.get_legacy_storage_path()\n\n # cutting edge storage cache\n storage_config = py3_config.get(\"py3status\", {}).get(\"storage\")\n if storage_config:\n storage_file = os.path.expandvars(storage_config.expanduser())\n if \"/\" in storage_file:\n storage_dir = None\n else:\n storage_dir = os.environ.get(\"XDG_CACHE_HOME\")\n else:\n storage_dir = os.environ.get(\"XDG_CACHE_HOME\")\n storage_file = Path(\"py3status_cache.data\")\n\n if not storage_dir:\n storage_dir = Path(\"~/.cache\").expanduser()\n self.storage_path = storage_dir / storage_file\n\n # move legacy storage cache to new desired / default location\n if legacy_storage_path:\n self.py3_wrapper.log(\n \"moving legacy storage_path {} to {}\".format(\n legacy_storage_path, self.storage_path\n )\n )\n legacy_storage_path.rename(self.storage_path)\n\n try:\n with self.storage_path.open(\"rb\") as f:\n self.data = load(f, encoding=\"bytes\")\n except OSError:\n pass\n\n self.py3_wrapper.log(f\"storage_path: {self.storage_path}\")\n if self.data:\n self.py3_wrapper.log(f\"storage_data: {self.data}\")\n self.initialized = True\n\n def get_legacy_storage_path(self):\n \"\"\"\n Detect and return existing legacy storage path.\n \"\"\"\n config_dir = Path(\n self.py3_wrapper.config.get(\"i3status_config_path\", \"/tmp\")\n ).parent\n storage_path = config_dir / \"py3status.data\"\n if storage_path.exists():\n return storage_path\n else:\n return None\n\n def save(self):\n \"\"\"\n Save our data to disk. 
We want to always have a valid file.\n \"\"\"\n with NamedTemporaryFile(dir=self.storage_path.parent, delete=False) as f:\n # we use protocol=2 for python 2/3 compatibility\n dump(self.data, f, protocol=2)\n f.flush()\n os.fsync(f.fileno())\n tmppath = Path(f.name)\n tmppath.rename(self.storage_path)\n\n def storage_set(self, module_name, key, value):\n if key.startswith(\"_\"):\n raise ValueError('cannot set keys starting with an underscore \"_\"')\n\n if self.data.get(module_name, {}).get(key) == value:\n return\n\n if module_name not in self.data:\n self.data[module_name] = {}\n self.data[module_name][key] = value\n ts = time.time()\n if \"_ctime\" not in self.data[module_name]:\n self.data[module_name][\"_ctime\"] = ts\n self.data[module_name][\"_mtime\"] = ts\n self.save()\n\n def storage_get(self, module_name, key):\n return self.data.get(module_name, {}).get(key, None)\n\n def storage_del(self, module_name, key=None):\n if module_name in self.data and key in self.data[module_name]:\n del self.data[module_name][key]\n self.save()\n\n def storage_keys(self, module_name):\n return list(self.data.get(module_name, {}))\n", "path": "py3status/storage.py"}], "after_files": [{"content": "import os\nimport time\n\nfrom pathlib import Path\nfrom pickle import dump, load\nfrom tempfile import NamedTemporaryFile\n\n\nclass Storage:\n\n data = {}\n initialized = False\n\n def init(self, py3_wrapper):\n self.py3_wrapper = py3_wrapper\n self.config = py3_wrapper.config\n py3_config = self.config.get(\"py3_config\", {})\n\n # check for legacy storage cache\n legacy_storage_path = self.get_legacy_storage_path()\n\n # cutting edge storage cache\n storage_config = py3_config.get(\"py3status\", {}).get(\"storage\")\n if storage_config:\n storage_file = os.path.expandvars(os.path.expanduser(storage_config))\n if \"/\" in storage_file:\n storage_dir = None\n else:\n storage_dir = os.environ.get(\"XDG_CACHE_HOME\")\n else:\n storage_dir = os.environ.get(\"XDG_CACHE_HOME\")\n storage_file = \"py3status_cache.data\"\n\n if not storage_dir:\n storage_dir = Path(\"~/.cache\").expanduser()\n self.storage_path = Path(storage_dir, storage_file)\n\n # move legacy storage cache to new desired / default location\n if legacy_storage_path:\n self.py3_wrapper.log(\n \"moving legacy storage_path {} to {}\".format(\n legacy_storage_path, self.storage_path\n )\n )\n legacy_storage_path.rename(self.storage_path)\n\n try:\n with self.storage_path.open(\"rb\") as f:\n self.data = load(f, encoding=\"bytes\")\n except OSError:\n pass\n\n self.py3_wrapper.log(f\"storage_path: {self.storage_path}\")\n if self.data:\n self.py3_wrapper.log(f\"storage_data: {self.data}\")\n self.initialized = True\n\n def get_legacy_storage_path(self):\n \"\"\"\n Detect and return existing legacy storage path.\n \"\"\"\n config_dir = Path(\n self.py3_wrapper.config.get(\"i3status_config_path\", \"/tmp\")\n ).parent\n storage_path = config_dir / \"py3status.data\"\n if storage_path.exists():\n return storage_path\n else:\n return None\n\n def save(self):\n \"\"\"\n Save our data to disk. 
We want to always have a valid file.\n \"\"\"\n with NamedTemporaryFile(dir=self.storage_path.parent, delete=False) as f:\n # we use protocol=2 for python 2/3 compatibility\n dump(self.data, f, protocol=2)\n f.flush()\n os.fsync(f.fileno())\n tmppath = Path(f.name)\n tmppath.rename(self.storage_path)\n\n def storage_set(self, module_name, key, value):\n if key.startswith(\"_\"):\n raise ValueError('cannot set keys starting with an underscore \"_\"')\n\n if self.data.get(module_name, {}).get(key) == value:\n return\n\n if module_name not in self.data:\n self.data[module_name] = {}\n self.data[module_name][key] = value\n ts = time.time()\n if \"_ctime\" not in self.data[module_name]:\n self.data[module_name][\"_ctime\"] = ts\n self.data[module_name][\"_mtime\"] = ts\n self.save()\n\n def storage_get(self, module_name, key):\n return self.data.get(module_name, {}).get(key, None)\n\n def storage_del(self, module_name, key=None):\n if module_name in self.data and key in self.data[module_name]:\n del self.data[module_name][key]\n self.save()\n\n def storage_keys(self, module_name):\n return list(self.data.get(module_name, {}))\n", "path": "py3status/storage.py"}]} | 1,383 | 240 |
gh_patches_debug_1684 | rasdani/github-patches | git_diff | geopandas__geopandas-2398 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop Python 3.7
We should consider dropping support for Python 3.7. We are roughly following numpy model (#1457) and numpy itself is 3.8+ now. Same applies to pyproj, which requires 3.8 (and causes some macOS CI failures because of some conda issues).
I forgot about Python versions when doing #2358 and bumped only packages.
@jorisvandenbossche if you're fine with that, I'll update CI matrix and related things.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env/python
2 """Installation script
3
4 """
5
6 import os
7
8 try:
9 from setuptools import setup
10 except ImportError:
11 from distutils.core import setup
12
13 import versioneer
14
15 LONG_DESCRIPTION = """GeoPandas is a project to add support for geographic data to
16 `pandas`_ objects.
17
18 The goal of GeoPandas is to make working with geospatial data in
19 python easier. It combines the capabilities of `pandas`_ and `shapely`_,
20 providing geospatial operations in pandas and a high-level interface
21 to multiple geometries to shapely. GeoPandas enables you to easily do
22 operations in python that would otherwise require a spatial database
23 such as PostGIS.
24
25 .. _pandas: http://pandas.pydata.org
26 .. _shapely: http://shapely.readthedocs.io/en/latest/
27 """
28
29 if os.environ.get("READTHEDOCS", False) == "True":
30 INSTALL_REQUIRES = []
31 else:
32 INSTALL_REQUIRES = [
33 "pandas >= 1.0.0",
34 "shapely >= 1.7",
35 "fiona >= 1.8",
36 "pyproj >= 2.6.1.post1",
37 "packaging",
38 ]
39
40 # get all data dirs in the datasets module
41 data_files = []
42
43 for item in os.listdir("geopandas/datasets"):
44 if not item.startswith("__"):
45 if os.path.isdir(os.path.join("geopandas/datasets/", item)):
46 data_files.append(os.path.join("datasets", item, "*"))
47 elif item.endswith(".zip"):
48 data_files.append(os.path.join("datasets", item))
49
50 data_files.append("tests/data/*")
51
52
53 setup(
54 name="geopandas",
55 version=versioneer.get_version(),
56 description="Geographic pandas extensions",
57 license="BSD",
58 author="GeoPandas contributors",
59 author_email="[email protected]",
60 url="http://geopandas.org",
61 project_urls={
62 "Source": "https://github.com/geopandas/geopandas",
63 },
64 long_description=LONG_DESCRIPTION,
65 packages=[
66 "geopandas",
67 "geopandas.io",
68 "geopandas.tools",
69 "geopandas.datasets",
70 "geopandas.tests",
71 "geopandas.tools.tests",
72 ],
73 package_data={"geopandas": data_files},
74 python_requires=">=3.7",
75 install_requires=INSTALL_REQUIRES,
76 cmdclass=versioneer.get_cmdclass(),
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -71,7 +71,7 @@
"geopandas.tools.tests",
],
package_data={"geopandas": data_files},
- python_requires=">=3.7",
+ python_requires=">=3.8",
install_requires=INSTALL_REQUIRES,
cmdclass=versioneer.get_cmdclass(),
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -71,7 +71,7 @@\n \"geopandas.tools.tests\",\n ],\n package_data={\"geopandas\": data_files},\n- python_requires=\">=3.7\",\n+ python_requires=\">=3.8\",\n install_requires=INSTALL_REQUIRES,\n cmdclass=versioneer.get_cmdclass(),\n )\n", "issue": "Drop Python 3.7\nWe should consider dropping support for Python 3.7. We are roughly following numpy model (#1457) and numpy itself is 3.8+ now. Same applies to pyproj, which requires 3.8 (and causes some macOS CI failures because of some conda issues). \r\n\r\nI forgot about Python versions when doing #2358 and bumped only packages.\r\n\r\n@jorisvandenbossche if you're fine with that, I'll update CI matrix and related things.\n", "before_files": [{"content": "#!/usr/bin/env/python\n\"\"\"Installation script\n\n\"\"\"\n\nimport os\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"GeoPandas is a project to add support for geographic data to\n`pandas`_ objects.\n\nThe goal of GeoPandas is to make working with geospatial data in\npython easier. It combines the capabilities of `pandas`_ and `shapely`_,\nproviding geospatial operations in pandas and a high-level interface\nto multiple geometries to shapely. GeoPandas enables you to easily do\noperations in python that would otherwise require a spatial database\nsuch as PostGIS.\n\n.. _pandas: http://pandas.pydata.org\n.. _shapely: http://shapely.readthedocs.io/en/latest/\n\"\"\"\n\nif os.environ.get(\"READTHEDOCS\", False) == \"True\":\n INSTALL_REQUIRES = []\nelse:\n INSTALL_REQUIRES = [\n \"pandas >= 1.0.0\",\n \"shapely >= 1.7\",\n \"fiona >= 1.8\",\n \"pyproj >= 2.6.1.post1\",\n \"packaging\",\n ]\n\n# get all data dirs in the datasets module\ndata_files = []\n\nfor item in os.listdir(\"geopandas/datasets\"):\n if not item.startswith(\"__\"):\n if os.path.isdir(os.path.join(\"geopandas/datasets/\", item)):\n data_files.append(os.path.join(\"datasets\", item, \"*\"))\n elif item.endswith(\".zip\"):\n data_files.append(os.path.join(\"datasets\", item))\n\ndata_files.append(\"tests/data/*\")\n\n\nsetup(\n name=\"geopandas\",\n version=versioneer.get_version(),\n description=\"Geographic pandas extensions\",\n license=\"BSD\",\n author=\"GeoPandas contributors\",\n author_email=\"[email protected]\",\n url=\"http://geopandas.org\",\n project_urls={\n \"Source\": \"https://github.com/geopandas/geopandas\",\n },\n long_description=LONG_DESCRIPTION,\n packages=[\n \"geopandas\",\n \"geopandas.io\",\n \"geopandas.tools\",\n \"geopandas.datasets\",\n \"geopandas.tests\",\n \"geopandas.tools.tests\",\n ],\n package_data={\"geopandas\": data_files},\n python_requires=\">=3.7\",\n install_requires=INSTALL_REQUIRES,\n cmdclass=versioneer.get_cmdclass(),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env/python\n\"\"\"Installation script\n\n\"\"\"\n\nimport os\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"GeoPandas is a project to add support for geographic data to\n`pandas`_ objects.\n\nThe goal of GeoPandas is to make working with geospatial data in\npython easier. It combines the capabilities of `pandas`_ and `shapely`_,\nproviding geospatial operations in pandas and a high-level interface\nto multiple geometries to shapely. 
GeoPandas enables you to easily do\noperations in python that would otherwise require a spatial database\nsuch as PostGIS.\n\n.. _pandas: http://pandas.pydata.org\n.. _shapely: http://shapely.readthedocs.io/en/latest/\n\"\"\"\n\nif os.environ.get(\"READTHEDOCS\", False) == \"True\":\n INSTALL_REQUIRES = []\nelse:\n INSTALL_REQUIRES = [\n \"pandas >= 1.0.0\",\n \"shapely >= 1.7\",\n \"fiona >= 1.8\",\n \"pyproj >= 2.6.1.post1\",\n \"packaging\",\n ]\n\n# get all data dirs in the datasets module\ndata_files = []\n\nfor item in os.listdir(\"geopandas/datasets\"):\n if not item.startswith(\"__\"):\n if os.path.isdir(os.path.join(\"geopandas/datasets/\", item)):\n data_files.append(os.path.join(\"datasets\", item, \"*\"))\n elif item.endswith(\".zip\"):\n data_files.append(os.path.join(\"datasets\", item))\n\ndata_files.append(\"tests/data/*\")\n\n\nsetup(\n name=\"geopandas\",\n version=versioneer.get_version(),\n description=\"Geographic pandas extensions\",\n license=\"BSD\",\n author=\"GeoPandas contributors\",\n author_email=\"[email protected]\",\n url=\"http://geopandas.org\",\n project_urls={\n \"Source\": \"https://github.com/geopandas/geopandas\",\n },\n long_description=LONG_DESCRIPTION,\n packages=[\n \"geopandas\",\n \"geopandas.io\",\n \"geopandas.tools\",\n \"geopandas.datasets\",\n \"geopandas.tests\",\n \"geopandas.tools.tests\",\n ],\n package_data={\"geopandas\": data_files},\n python_requires=\">=3.8\",\n install_requires=INSTALL_REQUIRES,\n cmdclass=versioneer.get_cmdclass(),\n)\n", "path": "setup.py"}]} | 1,056 | 92 |
gh_patches_debug_5914 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-1585 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
profiling/line2def does not handle empty filenames
### Which version of dd-trace-py are you using?
We're not running dd-trace - we're running the profiler by importing `ddtrace.profiling.auto`.
### Which version of the libraries are you using?
ddtrace: 0.40.0
datadog: 0.38.0
You can copy/paste the output of `pip freeze` here.
### How can we reproduce your problem?
I'm unsure - this appears to happen sporadically.
### What is the result that you get?
First, ddtrace runs into a KeyError in `_to_Location`, line 90:
```
def _to_Location(self, filename, lineno, funcname=None):
try:
return self._locations[(filename, lineno, funcname)]
```
`filename` is '', `lineno` is 1, `funcname` is None.
Next, in `filename_and_lineno_to_def`, line 63, we get an IndexError:
```
def filename_and_lineno_to_def(filename, lineno):
if filename[0] == "<" and filename[-1] == ">":
return default_def(filename, lineno)
```
Since the filename is an empty string, this complains.
### What is the result that you expected?
Not an error.
If you need more information, please let me know!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/profiling/_line2def.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2 import ast
3
4 import intervaltree
5
6
7 try:
8 from functools import lru_cache
9 except ImportError:
10 # This is for Python 2 but Python 2 does not use this module.
11 # It's just useful for unit tests.
12 def lru_cache(maxsize):
13 def w(f):
14 return f
15
16 return w
17
18
19 try:
20 # Python 2 does not have this.
21 from tokenize import open as source_open
22 except ImportError:
23 source_open = open
24
25 from ddtrace.vendor import six
26
27
28 def _compute_interval(node):
29 min_lineno = node.lineno
30 max_lineno = node.lineno
31 for node in ast.walk(node):
32 if hasattr(node, "lineno"):
33 min_lineno = min(min_lineno, node.lineno)
34 max_lineno = max(max_lineno, node.lineno)
35 return (min_lineno, max_lineno + 1)
36
37
38 if six.PY3:
39 _DEFS = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)
40 else:
41 _DEFS = (ast.FunctionDef, ast.ClassDef)
42
43
44 @lru_cache(maxsize=256)
45 def file_to_tree(filename):
46 # Use tokenize.open to detect encoding
47 with source_open(filename) as f:
48 parsed = ast.parse(f.read(), filename=filename)
49 tree = intervaltree.IntervalTree()
50 for node in ast.walk(parsed):
51 if isinstance(node, _DEFS):
52 start, end = _compute_interval(node)
53 tree[start:end] = node
54 return tree
55
56
57 def default_def(filename, lineno):
58 return filename + ":" + str(lineno)
59
60
61 @lru_cache(maxsize=8192)
62 def filename_and_lineno_to_def(filename, lineno):
63 if filename[0] == "<" and filename[-1] == ">":
64 return default_def(filename, lineno)
65
66 try:
67 matches = file_to_tree(filename)[lineno]
68 except (IOError, OSError, SyntaxError):
69 return default_def(filename, lineno)
70 if matches:
71 return min(matches, key=lambda i: i.length()).data.name
72
73 return default_def(filename, lineno)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/profiling/_line2def.py b/ddtrace/profiling/_line2def.py
--- a/ddtrace/profiling/_line2def.py
+++ b/ddtrace/profiling/_line2def.py
@@ -55,12 +55,12 @@
def default_def(filename, lineno):
- return filename + ":" + str(lineno)
+ return str(filename) + ":" + str(lineno)
@lru_cache(maxsize=8192)
def filename_and_lineno_to_def(filename, lineno):
- if filename[0] == "<" and filename[-1] == ">":
+ if not filename or (filename[0] == "<" and filename[-1] == ">"):
return default_def(filename, lineno)
try:
| {"golden_diff": "diff --git a/ddtrace/profiling/_line2def.py b/ddtrace/profiling/_line2def.py\n--- a/ddtrace/profiling/_line2def.py\n+++ b/ddtrace/profiling/_line2def.py\n@@ -55,12 +55,12 @@\n \n \n def default_def(filename, lineno):\n- return filename + \":\" + str(lineno)\n+ return str(filename) + \":\" + str(lineno)\n \n \n @lru_cache(maxsize=8192)\n def filename_and_lineno_to_def(filename, lineno):\n- if filename[0] == \"<\" and filename[-1] == \">\":\n+ if not filename or (filename[0] == \"<\" and filename[-1] == \">\"):\n return default_def(filename, lineno)\n \n try:\n", "issue": "profiling/line2def does not handle empty filenames\n### Which version of dd-trace-py are you using?\r\nWe're not running dd-trace - we're running the profiler by importing `ddtrace.profiling.auto`.\r\n\r\n### Which version of the libraries are you using?\r\nddtrace: 0.40.0\r\ndatadog: 0.38.0\r\n\r\nYou can copy/paste the output of `pip freeze` here.\r\n\r\n### How can we reproduce your problem?\r\nI'm unsure - this appears to happen sporadically.\r\n\r\n### What is the result that you get?\r\nFirst, ddtrace runs into a KeyError in `_to_Location`, line 90:\r\n```\r\ndef _to_Location(self, filename, lineno, funcname=None):\r\n try:\r\n return self._locations[(filename, lineno, funcname)]\r\n```\r\n`filename` is '', `lineno` is 1, `funcname` is None.\r\n\r\nNext, in `filename_and_lineno_to_def`, line 63, we get an IndexError:\r\n```\r\ndef filename_and_lineno_to_def(filename, lineno):\r\n if filename[0] == \"<\" and filename[-1] == \">\":\r\n return default_def(filename, lineno)\r\n```\r\nSince the filename is an empty string, this complains.\r\n\r\n\r\n\r\n### What is the result that you expected?\r\nNot an error.\r\n\r\nIf you need more information, please let me know!\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\nimport ast\n\nimport intervaltree\n\n\ntry:\n from functools import lru_cache\nexcept ImportError:\n # This is for Python\u00a02 but Python\u00a02 does not use this module.\n # It's just useful for unit tests.\n def lru_cache(maxsize):\n def w(f):\n return f\n\n return w\n\n\ntry:\n # Python\u00a02 does not have this.\n from tokenize import open as source_open\nexcept ImportError:\n source_open = open\n\nfrom ddtrace.vendor import six\n\n\ndef _compute_interval(node):\n min_lineno = node.lineno\n max_lineno = node.lineno\n for node in ast.walk(node):\n if hasattr(node, \"lineno\"):\n min_lineno = min(min_lineno, node.lineno)\n max_lineno = max(max_lineno, node.lineno)\n return (min_lineno, max_lineno + 1)\n\n\nif six.PY3:\n _DEFS = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)\nelse:\n _DEFS = (ast.FunctionDef, ast.ClassDef)\n\n\n@lru_cache(maxsize=256)\ndef file_to_tree(filename):\n # Use tokenize.open to detect encoding\n with source_open(filename) as f:\n parsed = ast.parse(f.read(), filename=filename)\n tree = intervaltree.IntervalTree()\n for node in ast.walk(parsed):\n if isinstance(node, _DEFS):\n start, end = _compute_interval(node)\n tree[start:end] = node\n return tree\n\n\ndef default_def(filename, lineno):\n return filename + \":\" + str(lineno)\n\n\n@lru_cache(maxsize=8192)\ndef filename_and_lineno_to_def(filename, lineno):\n if filename[0] == \"<\" and filename[-1] == \">\":\n return default_def(filename, lineno)\n\n try:\n matches = file_to_tree(filename)[lineno]\n except (IOError, OSError, SyntaxError):\n return default_def(filename, lineno)\n if matches:\n return min(matches, key=lambda i: i.length()).data.name\n\n return default_def(filename, lineno)\n", 
"path": "ddtrace/profiling/_line2def.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\nimport ast\n\nimport intervaltree\n\n\ntry:\n from functools import lru_cache\nexcept ImportError:\n # This is for Python\u00a02 but Python\u00a02 does not use this module.\n # It's just useful for unit tests.\n def lru_cache(maxsize):\n def w(f):\n return f\n\n return w\n\n\ntry:\n # Python\u00a02 does not have this.\n from tokenize import open as source_open\nexcept ImportError:\n source_open = open\n\nfrom ddtrace.vendor import six\n\n\ndef _compute_interval(node):\n min_lineno = node.lineno\n max_lineno = node.lineno\n for node in ast.walk(node):\n if hasattr(node, \"lineno\"):\n min_lineno = min(min_lineno, node.lineno)\n max_lineno = max(max_lineno, node.lineno)\n return (min_lineno, max_lineno + 1)\n\n\nif six.PY3:\n _DEFS = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)\nelse:\n _DEFS = (ast.FunctionDef, ast.ClassDef)\n\n\n@lru_cache(maxsize=256)\ndef file_to_tree(filename):\n # Use tokenize.open to detect encoding\n with source_open(filename) as f:\n parsed = ast.parse(f.read(), filename=filename)\n tree = intervaltree.IntervalTree()\n for node in ast.walk(parsed):\n if isinstance(node, _DEFS):\n start, end = _compute_interval(node)\n tree[start:end] = node\n return tree\n\n\ndef default_def(filename, lineno):\n return str(filename) + \":\" + str(lineno)\n\n\n@lru_cache(maxsize=8192)\ndef filename_and_lineno_to_def(filename, lineno):\n if not filename or (filename[0] == \"<\" and filename[-1] == \">\"):\n return default_def(filename, lineno)\n\n try:\n matches = file_to_tree(filename)[lineno]\n except (IOError, OSError, SyntaxError):\n return default_def(filename, lineno)\n if matches:\n return min(matches, key=lambda i: i.length()).data.name\n\n return default_def(filename, lineno)\n", "path": "ddtrace/profiling/_line2def.py"}]} | 1,159 | 175 |
gh_patches_debug_17993 | rasdani/github-patches | git_diff | modin-project__modin-1532 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update Ray to 0.8.5
Ray 0.8.5 was released, we should test and update.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 import versioneer
3 import os
4 from setuptools.dist import Distribution
5
6 try:
7 from wheel.bdist_wheel import bdist_wheel
8
9 HAS_WHEEL = True
10 except ImportError:
11 HAS_WHEEL = False
12
13 with open("README.md", "r") as fh:
14 long_description = fh.read()
15
16 if HAS_WHEEL:
17
18 class ModinWheel(bdist_wheel):
19 def finalize_options(self):
20 bdist_wheel.finalize_options(self)
21 self.root_is_pure = False
22
23 def get_tag(self):
24 _, _, plat = bdist_wheel.get_tag(self)
25 py = "py3"
26 abi = "none"
27 return py, abi, plat
28
29
30 class ModinDistribution(Distribution):
31 def __init__(self, *attrs):
32 Distribution.__init__(self, *attrs)
33 if HAS_WHEEL:
34 self.cmdclass["bdist_wheel"] = ModinWheel
35
36 def is_pure(self):
37 return False
38
39
40 dask_deps = ["dask>=2.1.0", "distributed>=2.3.2"]
41 ray_deps = ["ray==0.8.4", "pyarrow<0.17"]
42 if "SETUP_PLAT_NAME" in os.environ:
43 if "win" in os.environ["SETUP_PLAT_NAME"]:
44 all_deps = dask_deps
45 else:
46 all_deps = dask_deps + ray_deps
47 else:
48 all_deps = dask_deps if os.name == "nt" else dask_deps + ray_deps
49
50 setup(
51 name="modin",
52 version=versioneer.get_version(),
53 cmdclass=versioneer.get_cmdclass(),
54 distclass=ModinDistribution,
55 description="Modin: Make your pandas code run faster by changing one line of code.",
56 packages=find_packages(),
57 license="Apache 2",
58 url="https://github.com/modin-project/modin",
59 long_description=long_description,
60 long_description_content_type="text/markdown",
61 install_requires=["pandas==1.0.3", "packaging"],
62 extras_require={
63 # can be installed by pip install modin[dask]
64 "dask": dask_deps,
65 "ray": ray_deps,
66 "all": all_deps,
67 },
68 python_requires=">=3.5",
69 )
70
```
Path: `modin/__init__.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 import os
15 import sys
16 import warnings
17 from packaging import version
18
19 from ._version import get_versions
20
21
22 def custom_formatwarning(msg, category, *args, **kwargs):
23 # ignore everything except the message
24 return "{}: {}\n".format(category.__name__, msg)
25
26
27 warnings.formatwarning = custom_formatwarning
28 # Filter numpy version warnings because they are not relevant
29 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
30 warnings.filterwarnings("ignore", message="Large object of size")
31 warnings.filterwarnings(
32 "ignore",
33 message="The pandas.datetime class is deprecated and will be removed from pandas in a future version. "
34 "Import from datetime module instead.",
35 )
36
37
38 def get_execution_engine():
39 # In the future, when there are multiple engines and different ways of
40 # backing the DataFrame, there will have to be some changed logic here to
41 # decide these things. In the meantime, we will use the currently supported
42 # execution engine + backing (Pandas + Ray).
43 if "MODIN_ENGINE" in os.environ:
44 # .title allows variants like ray, RAY, Ray
45 return os.environ["MODIN_ENGINE"].title()
46 else:
47 if "MODIN_DEBUG" in os.environ:
48 return "Python"
49 else:
50 if sys.platform != "win32":
51 try:
52 import ray
53
54 except ImportError:
55 pass
56 else:
57 if version.parse(ray.__version__) != version.parse("0.8.4"):
58 raise ImportError(
59 "Please `pip install modin[ray]` to install compatible Ray version."
60 )
61 return "Ray"
62 try:
63 import dask
64 import distributed
65
66 except ImportError:
67 raise ImportError(
68 "Please `pip install {}modin[dask]` to install an engine".format(
69 "modin[ray]` or `" if sys.platform != "win32" else ""
70 )
71 )
72 else:
73 if version.parse(dask.__version__) < version.parse(
74 "2.1.0"
75 ) or version.parse(distributed.__version__) < version.parse("2.3.2"):
76 raise ImportError(
77 "Please `pip install modin[dask]` to install compatible Dask version."
78 )
79 return "Dask"
80
81
82 def get_partition_format():
83 # See note above about engine + backing.
84 return os.environ.get("MODIN_BACKEND", "Pandas").title()
85
86
87 __version__ = "0.6.3"
88 __execution_engine__ = get_execution_engine()
89 __partition_format__ = get_partition_format()
90
91 # We don't want these used outside of this file.
92 del get_execution_engine
93 del get_partition_format
94
95 __version__ = get_versions()["version"]
96 del get_versions
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modin/__init__.py b/modin/__init__.py
--- a/modin/__init__.py
+++ b/modin/__init__.py
@@ -54,7 +54,7 @@
except ImportError:
pass
else:
- if version.parse(ray.__version__) != version.parse("0.8.4"):
+ if version.parse(ray.__version__) != version.parse("0.8.5"):
raise ImportError(
"Please `pip install modin[ray]` to install compatible Ray version."
)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -38,7 +38,7 @@
dask_deps = ["dask>=2.1.0", "distributed>=2.3.2"]
-ray_deps = ["ray==0.8.4", "pyarrow<0.17"]
+ray_deps = ["ray==0.8.5", "pyarrow<0.17"]
if "SETUP_PLAT_NAME" in os.environ:
if "win" in os.environ["SETUP_PLAT_NAME"]:
all_deps = dask_deps
| {"golden_diff": "diff --git a/modin/__init__.py b/modin/__init__.py\n--- a/modin/__init__.py\n+++ b/modin/__init__.py\n@@ -54,7 +54,7 @@\n except ImportError:\n pass\n else:\n- if version.parse(ray.__version__) != version.parse(\"0.8.4\"):\n+ if version.parse(ray.__version__) != version.parse(\"0.8.5\"):\n raise ImportError(\n \"Please `pip install modin[ray]` to install compatible Ray version.\"\n )\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -38,7 +38,7 @@\n \n \n dask_deps = [\"dask>=2.1.0\", \"distributed>=2.3.2\"]\n-ray_deps = [\"ray==0.8.4\", \"pyarrow<0.17\"]\n+ray_deps = [\"ray==0.8.5\", \"pyarrow<0.17\"]\n if \"SETUP_PLAT_NAME\" in os.environ:\n if \"win\" in os.environ[\"SETUP_PLAT_NAME\"]:\n all_deps = dask_deps\n", "issue": "Update Ray to 0.8.5\nRay 0.8.5 was released, we should test and update.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\nimport os\nfrom setuptools.dist import Distribution\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n HAS_WHEEL = True\nexcept ImportError:\n HAS_WHEEL = False\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nif HAS_WHEEL:\n\n class ModinWheel(bdist_wheel):\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n self.root_is_pure = False\n\n def get_tag(self):\n _, _, plat = bdist_wheel.get_tag(self)\n py = \"py3\"\n abi = \"none\"\n return py, abi, plat\n\n\nclass ModinDistribution(Distribution):\n def __init__(self, *attrs):\n Distribution.__init__(self, *attrs)\n if HAS_WHEEL:\n self.cmdclass[\"bdist_wheel\"] = ModinWheel\n\n def is_pure(self):\n return False\n\n\ndask_deps = [\"dask>=2.1.0\", \"distributed>=2.3.2\"]\nray_deps = [\"ray==0.8.4\", \"pyarrow<0.17\"]\nif \"SETUP_PLAT_NAME\" in os.environ:\n if \"win\" in os.environ[\"SETUP_PLAT_NAME\"]:\n all_deps = dask_deps\n else:\n all_deps = dask_deps + ray_deps\nelse:\n all_deps = dask_deps if os.name == \"nt\" else dask_deps + ray_deps\n\nsetup(\n name=\"modin\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n distclass=ModinDistribution,\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n license=\"Apache 2\",\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==1.0.3\", \"packaging\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n \"ray\": ray_deps,\n \"all\": all_deps,\n },\n python_requires=\">=3.5\",\n)\n", "path": "setup.py"}, {"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport os\nimport sys\nimport warnings\nfrom packaging import version\n\nfrom ._version import get_versions\n\n\ndef custom_formatwarning(msg, category, *args, **kwargs):\n # ignore everything except the message\n return \"{}: {}\\n\".format(category.__name__, msg)\n\n\nwarnings.formatwarning = custom_formatwarning\n# Filter numpy version warnings because they are not relevant\nwarnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\nwarnings.filterwarnings(\"ignore\", message=\"Large object of size\")\nwarnings.filterwarnings(\n \"ignore\",\n message=\"The pandas.datetime class is deprecated and will be removed from pandas in a future version. \"\n \"Import from datetime module instead.\",\n)\n\n\ndef get_execution_engine():\n # In the future, when there are multiple engines and different ways of\n # backing the DataFrame, there will have to be some changed logic here to\n # decide these things. In the meantime, we will use the currently supported\n # execution engine + backing (Pandas + Ray).\n if \"MODIN_ENGINE\" in os.environ:\n # .title allows variants like ray, RAY, Ray\n return os.environ[\"MODIN_ENGINE\"].title()\n else:\n if \"MODIN_DEBUG\" in os.environ:\n return \"Python\"\n else:\n if sys.platform != \"win32\":\n try:\n import ray\n\n except ImportError:\n pass\n else:\n if version.parse(ray.__version__) != version.parse(\"0.8.4\"):\n raise ImportError(\n \"Please `pip install modin[ray]` to install compatible Ray version.\"\n )\n return \"Ray\"\n try:\n import dask\n import distributed\n\n except ImportError:\n raise ImportError(\n \"Please `pip install {}modin[dask]` to install an engine\".format(\n \"modin[ray]` or `\" if sys.platform != \"win32\" else \"\"\n )\n )\n else:\n if version.parse(dask.__version__) < version.parse(\n \"2.1.0\"\n ) or version.parse(distributed.__version__) < version.parse(\"2.3.2\"):\n raise ImportError(\n \"Please `pip install modin[dask]` to install compatible Dask version.\"\n )\n return \"Dask\"\n\n\ndef get_partition_format():\n # See note above about engine + backing.\n return os.environ.get(\"MODIN_BACKEND\", \"Pandas\").title()\n\n\n__version__ = \"0.6.3\"\n__execution_engine__ = get_execution_engine()\n__partition_format__ = get_partition_format()\n\n# We don't want these used outside of this file.\ndel get_execution_engine\ndel get_partition_format\n\n__version__ = get_versions()[\"version\"]\ndel get_versions\n", "path": "modin/__init__.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\nimport os\nfrom setuptools.dist import Distribution\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n HAS_WHEEL = True\nexcept ImportError:\n HAS_WHEEL = False\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nif HAS_WHEEL:\n\n class ModinWheel(bdist_wheel):\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n self.root_is_pure = False\n\n def get_tag(self):\n _, _, plat = bdist_wheel.get_tag(self)\n py = \"py3\"\n abi = \"none\"\n return py, abi, plat\n\n\nclass ModinDistribution(Distribution):\n def __init__(self, *attrs):\n Distribution.__init__(self, *attrs)\n if HAS_WHEEL:\n self.cmdclass[\"bdist_wheel\"] = ModinWheel\n\n def is_pure(self):\n return False\n\n\ndask_deps = [\"dask>=2.1.0\", \"distributed>=2.3.2\"]\nray_deps = [\"ray==0.8.5\", \"pyarrow<0.17\"]\nif \"SETUP_PLAT_NAME\" in os.environ:\n if \"win\" in 
os.environ[\"SETUP_PLAT_NAME\"]:\n all_deps = dask_deps\n else:\n all_deps = dask_deps + ray_deps\nelse:\n all_deps = dask_deps if os.name == \"nt\" else dask_deps + ray_deps\n\nsetup(\n name=\"modin\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n distclass=ModinDistribution,\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n license=\"Apache 2\",\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==1.0.3\", \"packaging\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n \"ray\": ray_deps,\n \"all\": all_deps,\n },\n python_requires=\">=3.5\",\n)\n", "path": "setup.py"}, {"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport os\nimport sys\nimport warnings\nfrom packaging import version\n\nfrom ._version import get_versions\n\n\ndef custom_formatwarning(msg, category, *args, **kwargs):\n # ignore everything except the message\n return \"{}: {}\\n\".format(category.__name__, msg)\n\n\nwarnings.formatwarning = custom_formatwarning\n# Filter numpy version warnings because they are not relevant\nwarnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\nwarnings.filterwarnings(\"ignore\", message=\"Large object of size\")\nwarnings.filterwarnings(\n \"ignore\",\n message=\"The pandas.datetime class is deprecated and will be removed from pandas in a future version. \"\n \"Import from datetime module instead.\",\n)\n\n\ndef get_execution_engine():\n # In the future, when there are multiple engines and different ways of\n # backing the DataFrame, there will have to be some changed logic here to\n # decide these things. 
In the meantime, we will use the currently supported\n # execution engine + backing (Pandas + Ray).\n if \"MODIN_ENGINE\" in os.environ:\n # .title allows variants like ray, RAY, Ray\n return os.environ[\"MODIN_ENGINE\"].title()\n else:\n if \"MODIN_DEBUG\" in os.environ:\n return \"Python\"\n else:\n if sys.platform != \"win32\":\n try:\n import ray\n\n except ImportError:\n pass\n else:\n if version.parse(ray.__version__) != version.parse(\"0.8.5\"):\n raise ImportError(\n \"Please `pip install modin[ray]` to install compatible Ray version.\"\n )\n return \"Ray\"\n try:\n import dask\n import distributed\n\n except ImportError:\n raise ImportError(\n \"Please `pip install {}modin[dask]` to install an engine\".format(\n \"modin[ray]` or `\" if sys.platform != \"win32\" else \"\"\n )\n )\n else:\n if version.parse(dask.__version__) < version.parse(\n \"2.1.0\"\n ) or version.parse(distributed.__version__) < version.parse(\"2.3.2\"):\n raise ImportError(\n \"Please `pip install modin[dask]` to install compatible Dask version.\"\n )\n return \"Dask\"\n\n\ndef get_partition_format():\n # See note above about engine + backing.\n return os.environ.get(\"MODIN_BACKEND\", \"Pandas\").title()\n\n\n__version__ = \"0.6.3\"\n__execution_engine__ = get_execution_engine()\n__partition_format__ = get_partition_format()\n\n# We don't want these used outside of this file.\ndel get_execution_engine\ndel get_partition_format\n\n__version__ = get_versions()[\"version\"]\ndel get_versions\n", "path": "modin/__init__.py"}]} | 1,867 | 254 |
gh_patches_debug_40179 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-3464 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`reduction` for `ContrastiveLoss`
**Describe the bug**
the error messages and docstring should be consistent
https://github.com/Project-MONAI/MONAI/blob/a7bc4a3cbaeaa3c505a25ca2ddf6922bda8ea7dc/monai/losses/contrastive.py#L89-L91
https://github.com/Project-MONAI/MONAI/blob/a7bc4a3cbaeaa3c505a25ca2ddf6922bda8ea7dc/monai/losses/contrastive.py#L58
**Expected behavior**
implementing the option `reduction="none"`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/losses/contrastive.py`
Content:
```
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from typing import Union
13
14 import torch
15 from torch.nn import functional as F
16 from torch.nn.modules.loss import _Loss
17
18 from monai.utils import LossReduction
19
20
21 class ContrastiveLoss(_Loss):
22
23 """
24 Compute the Contrastive loss defined in:
25
26 Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International
27 conference on machine learning. PMLR, 2020. (http://proceedings.mlr.press/v119/chen20j.html)
28
29 Adapted from:
30 https://github.com/Sara-Ahmed/SiT/blob/1aacd6adcd39b71efc903d16b4e9095b97dda76f/losses.py#L5
31
32 """
33
34 def __init__(
35 self, temperature: float = 0.5, batch_size: int = 1, reduction: Union[LossReduction, str] = LossReduction.SUM
36 ) -> None:
37 """
38 Args:
39 temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.
40
41 Raises:
42 AssertionError: When an input of dimension length > 2 is passed
43 AssertionError: When input and target are of different shapes
44
45 """
46 super().__init__(reduction=LossReduction(reduction).value)
47
48 self.batch_size = batch_size
49 self.temperature = temperature
50
51 def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
52 """
53 Args:
54 input: the shape should be B[F].
55 target: the shape should be B[F].
56
57 Raises:
58 ValueError: When ``self.reduction`` is not one of ["sum", "none"].
59 """
60 if len(target.shape) > 2 or len(input.shape) > 2:
61 raise AssertionError(
62 f"Either target or input has dimensions greater than 2 where target "
63 f"shape is ({target.shape}) and input shape is ({input.shape})"
64 )
65
66 if target.shape != input.shape:
67 raise AssertionError(f"ground truth has differing shape ({target.shape}) from input ({input.shape})")
68
69 temperature_tensor = torch.tensor(self.temperature).to(input.device)
70
71 norm_i = F.normalize(input, dim=1)
72 norm_j = F.normalize(target, dim=1)
73
74 negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)
75 negatives_mask = torch.tensor(negatives_mask, dtype=torch.float)
76 negatives_mask = torch.clone(torch.as_tensor(negatives_mask)).to(input.device)
77
78 repr = torch.cat([norm_i, norm_j], dim=0)
79 sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)
80 sim_ij = torch.diag(sim_matrix, self.batch_size)
81 sim_ji = torch.diag(sim_matrix, -self.batch_size)
82
83 positives = torch.cat([sim_ij, sim_ji], dim=0)
84 nominator = torch.exp(positives / temperature_tensor)
85 denominator = negatives_mask * torch.exp(sim_matrix / temperature_tensor)
86
87 loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))
88
89 if self.reduction == LossReduction.SUM.value:
90 return torch.sum(loss_partial) / (2 * self.batch_size)
91 raise ValueError(f"Unsupported reduction: {self.reduction}, " f'available options are ["mean", "sum", "none"].')
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/monai/losses/contrastive.py b/monai/losses/contrastive.py
--- a/monai/losses/contrastive.py
+++ b/monai/losses/contrastive.py
@@ -9,13 +9,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from typing import Union
-
import torch
from torch.nn import functional as F
from torch.nn.modules.loss import _Loss
-from monai.utils import LossReduction
+from monai.utils import deprecated_arg
class ContrastiveLoss(_Loss):
@@ -31,19 +29,23 @@
"""
- def __init__(
- self, temperature: float = 0.5, batch_size: int = 1, reduction: Union[LossReduction, str] = LossReduction.SUM
- ) -> None:
+ @deprecated_arg(name="reduction", since="0.8", msg_suffix="`reduction` is no longer supported.")
+ def __init__(self, temperature: float = 0.5, batch_size: int = 1, reduction="sum") -> None:
"""
Args:
temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.
+ batch_size: The number of samples.
Raises:
- AssertionError: When an input of dimension length > 2 is passed
- AssertionError: When input and target are of different shapes
+ ValueError: When an input of dimension length > 2 is passed
+ ValueError: When input and target are of different shapes
+
+ .. deprecated:: 0.8.0
+
+ `reduction` is no longer supported.
"""
- super().__init__(reduction=LossReduction(reduction).value)
+ super().__init__()
self.batch_size = batch_size
self.temperature = temperature
@@ -53,18 +55,15 @@
Args:
input: the shape should be B[F].
target: the shape should be B[F].
-
- Raises:
- ValueError: When ``self.reduction`` is not one of ["sum", "none"].
"""
if len(target.shape) > 2 or len(input.shape) > 2:
- raise AssertionError(
+ raise ValueError(
f"Either target or input has dimensions greater than 2 where target "
f"shape is ({target.shape}) and input shape is ({input.shape})"
)
if target.shape != input.shape:
- raise AssertionError(f"ground truth has differing shape ({target.shape}) from input ({input.shape})")
+ raise ValueError(f"ground truth has differing shape ({target.shape}) from input ({input.shape})")
temperature_tensor = torch.tensor(self.temperature).to(input.device)
@@ -86,6 +85,4 @@
loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))
- if self.reduction == LossReduction.SUM.value:
- return torch.sum(loss_partial) / (2 * self.batch_size)
- raise ValueError(f"Unsupported reduction: {self.reduction}, " f'available options are ["mean", "sum", "none"].')
+ return torch.sum(loss_partial) / (2 * self.batch_size)
| {"golden_diff": "diff --git a/monai/losses/contrastive.py b/monai/losses/contrastive.py\n--- a/monai/losses/contrastive.py\n+++ b/monai/losses/contrastive.py\n@@ -9,13 +9,11 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-from typing import Union\n-\n import torch\n from torch.nn import functional as F\n from torch.nn.modules.loss import _Loss\n \n-from monai.utils import LossReduction\n+from monai.utils import deprecated_arg\n \n \n class ContrastiveLoss(_Loss):\n@@ -31,19 +29,23 @@\n \n \"\"\"\n \n- def __init__(\n- self, temperature: float = 0.5, batch_size: int = 1, reduction: Union[LossReduction, str] = LossReduction.SUM\n- ) -> None:\n+ @deprecated_arg(name=\"reduction\", since=\"0.8\", msg_suffix=\"`reduction` is no longer supported.\")\n+ def __init__(self, temperature: float = 0.5, batch_size: int = 1, reduction=\"sum\") -> None:\n \"\"\"\n Args:\n temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.\n+ batch_size: The number of samples.\n \n Raises:\n- AssertionError: When an input of dimension length > 2 is passed\n- AssertionError: When input and target are of different shapes\n+ ValueError: When an input of dimension length > 2 is passed\n+ ValueError: When input and target are of different shapes\n+\n+ .. deprecated:: 0.8.0\n+\n+ `reduction` is no longer supported.\n \n \"\"\"\n- super().__init__(reduction=LossReduction(reduction).value)\n+ super().__init__()\n \n self.batch_size = batch_size\n self.temperature = temperature\n@@ -53,18 +55,15 @@\n Args:\n input: the shape should be B[F].\n target: the shape should be B[F].\n-\n- Raises:\n- ValueError: When ``self.reduction`` is not one of [\"sum\", \"none\"].\n \"\"\"\n if len(target.shape) > 2 or len(input.shape) > 2:\n- raise AssertionError(\n+ raise ValueError(\n f\"Either target or input has dimensions greater than 2 where target \"\n f\"shape is ({target.shape}) and input shape is ({input.shape})\"\n )\n \n if target.shape != input.shape:\n- raise AssertionError(f\"ground truth has differing shape ({target.shape}) from input ({input.shape})\")\n+ raise ValueError(f\"ground truth has differing shape ({target.shape}) from input ({input.shape})\")\n \n temperature_tensor = torch.tensor(self.temperature).to(input.device)\n \n@@ -86,6 +85,4 @@\n \n loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))\n \n- if self.reduction == LossReduction.SUM.value:\n- return torch.sum(loss_partial) / (2 * self.batch_size)\n- raise ValueError(f\"Unsupported reduction: {self.reduction}, \" f'available options are [\"mean\", \"sum\", \"none\"].')\n+ return torch.sum(loss_partial) / (2 * self.batch_size)\n", "issue": "`reduction` for `ContrastiveLoss`\n**Describe the bug**\r\nthe error messages and docstring should be consistent\r\nhttps://github.com/Project-MONAI/MONAI/blob/a7bc4a3cbaeaa3c505a25ca2ddf6922bda8ea7dc/monai/losses/contrastive.py#L89-L91\r\n\r\nhttps://github.com/Project-MONAI/MONAI/blob/a7bc4a3cbaeaa3c505a25ca2ddf6922bda8ea7dc/monai/losses/contrastive.py#L58\r\n\r\n**Expected behavior**\r\nimplementing the option `reduction=\"none\"`?\r\n\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, 
software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Union\n\nimport torch\nfrom torch.nn import functional as F\nfrom torch.nn.modules.loss import _Loss\n\nfrom monai.utils import LossReduction\n\n\nclass ContrastiveLoss(_Loss):\n\n \"\"\"\n Compute the Contrastive loss defined in:\n\n Chen, Ting, et al. \"A simple framework for contrastive learning of visual representations.\" International\n conference on machine learning. PMLR, 2020. (http://proceedings.mlr.press/v119/chen20j.html)\n\n Adapted from:\n https://github.com/Sara-Ahmed/SiT/blob/1aacd6adcd39b71efc903d16b4e9095b97dda76f/losses.py#L5\n\n \"\"\"\n\n def __init__(\n self, temperature: float = 0.5, batch_size: int = 1, reduction: Union[LossReduction, str] = LossReduction.SUM\n ) -> None:\n \"\"\"\n Args:\n temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.\n\n Raises:\n AssertionError: When an input of dimension length > 2 is passed\n AssertionError: When input and target are of different shapes\n\n \"\"\"\n super().__init__(reduction=LossReduction(reduction).value)\n\n self.batch_size = batch_size\n self.temperature = temperature\n\n def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Args:\n input: the shape should be B[F].\n target: the shape should be B[F].\n\n Raises:\n ValueError: When ``self.reduction`` is not one of [\"sum\", \"none\"].\n \"\"\"\n if len(target.shape) > 2 or len(input.shape) > 2:\n raise AssertionError(\n f\"Either target or input has dimensions greater than 2 where target \"\n f\"shape is ({target.shape}) and input shape is ({input.shape})\"\n )\n\n if target.shape != input.shape:\n raise AssertionError(f\"ground truth has differing shape ({target.shape}) from input ({input.shape})\")\n\n temperature_tensor = torch.tensor(self.temperature).to(input.device)\n\n norm_i = F.normalize(input, dim=1)\n norm_j = F.normalize(target, dim=1)\n\n negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)\n negatives_mask = torch.tensor(negatives_mask, dtype=torch.float)\n negatives_mask = torch.clone(torch.as_tensor(negatives_mask)).to(input.device)\n\n repr = torch.cat([norm_i, norm_j], dim=0)\n sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)\n sim_ij = torch.diag(sim_matrix, self.batch_size)\n sim_ji = torch.diag(sim_matrix, -self.batch_size)\n\n positives = torch.cat([sim_ij, sim_ji], dim=0)\n nominator = torch.exp(positives / temperature_tensor)\n denominator = negatives_mask * torch.exp(sim_matrix / temperature_tensor)\n\n loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))\n\n if self.reduction == LossReduction.SUM.value:\n return torch.sum(loss_partial) / (2 * self.batch_size)\n raise ValueError(f\"Unsupported reduction: {self.reduction}, \" f'available options are [\"mean\", \"sum\", \"none\"].')\n", "path": "monai/losses/contrastive.py"}], "after_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under 
the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport torch\nfrom torch.nn import functional as F\nfrom torch.nn.modules.loss import _Loss\n\nfrom monai.utils import deprecated_arg\n\n\nclass ContrastiveLoss(_Loss):\n\n \"\"\"\n Compute the Contrastive loss defined in:\n\n Chen, Ting, et al. \"A simple framework for contrastive learning of visual representations.\" International\n conference on machine learning. PMLR, 2020. (http://proceedings.mlr.press/v119/chen20j.html)\n\n Adapted from:\n https://github.com/Sara-Ahmed/SiT/blob/1aacd6adcd39b71efc903d16b4e9095b97dda76f/losses.py#L5\n\n \"\"\"\n\n @deprecated_arg(name=\"reduction\", since=\"0.8\", msg_suffix=\"`reduction` is no longer supported.\")\n def __init__(self, temperature: float = 0.5, batch_size: int = 1, reduction=\"sum\") -> None:\n \"\"\"\n Args:\n temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.\n batch_size: The number of samples.\n\n Raises:\n ValueError: When an input of dimension length > 2 is passed\n ValueError: When input and target are of different shapes\n\n .. deprecated:: 0.8.0\n\n `reduction` is no longer supported.\n\n \"\"\"\n super().__init__()\n\n self.batch_size = batch_size\n self.temperature = temperature\n\n def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Args:\n input: the shape should be B[F].\n target: the shape should be B[F].\n \"\"\"\n if len(target.shape) > 2 or len(input.shape) > 2:\n raise ValueError(\n f\"Either target or input has dimensions greater than 2 where target \"\n f\"shape is ({target.shape}) and input shape is ({input.shape})\"\n )\n\n if target.shape != input.shape:\n raise ValueError(f\"ground truth has differing shape ({target.shape}) from input ({input.shape})\")\n\n temperature_tensor = torch.tensor(self.temperature).to(input.device)\n\n norm_i = F.normalize(input, dim=1)\n norm_j = F.normalize(target, dim=1)\n\n negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)\n negatives_mask = torch.tensor(negatives_mask, dtype=torch.float)\n negatives_mask = torch.clone(torch.as_tensor(negatives_mask)).to(input.device)\n\n repr = torch.cat([norm_i, norm_j], dim=0)\n sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)\n sim_ij = torch.diag(sim_matrix, self.batch_size)\n sim_ji = torch.diag(sim_matrix, -self.batch_size)\n\n positives = torch.cat([sim_ij, sim_ji], dim=0)\n nominator = torch.exp(positives / temperature_tensor)\n denominator = negatives_mask * torch.exp(sim_matrix / temperature_tensor)\n\n loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))\n\n return torch.sum(loss_partial) / (2 * self.batch_size)\n", "path": "monai/losses/contrastive.py"}]} | 1,507 | 738 |
gh_patches_debug_20192 | rasdani/github-patches | git_diff | certbot__certbot-7163 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update SSL session cache size to match Mozilla recommendations
This is a followup from the research issue at #6903.
Ideally, https://github.com/mozilla/server-side-tls/issues/198 is resolved and Mozilla updates their recommendations. If not, I think we should update our value in https://github.com/certbot/certbot/blob/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf.
Exactly what these values should be is up for discussion, however, nginx's default timeout of 5 minutes seems like a reasonable place to start to me. I don't know of the top of my head how I think the cache should be configured.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot-nginx/certbot_nginx/constants.py`
Content:
```
1 """nginx plugin constants."""
2 import platform
3
4 FREEBSD_DARWIN_SERVER_ROOT = "/usr/local/etc/nginx"
5 LINUX_SERVER_ROOT = "/etc/nginx"
6
7 if platform.system() in ('FreeBSD', 'Darwin'):
8 server_root_tmp = FREEBSD_DARWIN_SERVER_ROOT
9 else:
10 server_root_tmp = LINUX_SERVER_ROOT
11
12 CLI_DEFAULTS = dict(
13 server_root=server_root_tmp,
14 ctl="nginx",
15 )
16 """CLI defaults."""
17
18
19 MOD_SSL_CONF_DEST = "options-ssl-nginx.conf"
20 """Name of the mod_ssl config file as saved in `IConfig.config_dir`."""
21
22 UPDATED_MOD_SSL_CONF_DIGEST = ".updated-options-ssl-nginx-conf-digest.txt"
23 """Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`."""
24
25 SSL_OPTIONS_HASHES_NEW = [
26 '63e2bddebb174a05c9d8a7cf2adf72f7af04349ba59a1a925fe447f73b2f1abf',
27 ]
28 """SHA256 hashes of the contents of versions of MOD_SSL_CONF_SRC for nginx >= 1.5.9"""
29
30 ALL_SSL_OPTIONS_HASHES = [
31 '0f81093a1465e3d4eaa8b0c14e77b2a2e93568b0fc1351c2b87893a95f0de87c',
32 '9a7b32c49001fed4cff8ad24353329472a50e86ade1ef9b2b9e43566a619612e',
33 'a6d9f1c7d6b36749b52ba061fff1421f9a0a3d2cfdafbd63c05d06f65b990937',
34 '7f95624dd95cf5afc708b9f967ee83a24b8025dc7c8d9df2b556bbc64256b3ff',
35 '394732f2bbe3e5e637c3fb5c6e980a1f1b90b01e2e8d6b7cff41dde16e2a756d',
36 '4b16fec2bcbcd8a2f3296d886f17f9953ffdcc0af54582452ca1e52f5f776f16',
37 ] + SSL_OPTIONS_HASHES_NEW
38 """SHA256 hashes of the contents of all versions of MOD_SSL_CONF_SRC"""
39
40 def os_constant(key):
41 # XXX TODO: In the future, this could return different constants
42 # based on what OS we are running under. To see an
43 # approach to how to handle different OSes, see the
44 # apache version of this file. Currently, we do not
45 # actually have any OS-specific constants on Nginx.
46 """
47 Get a constant value for operating system
48
49 :param key: name of cli constant
50 :return: value of constant for active os
51 """
52 return CLI_DEFAULTS[key]
53
54 HSTS_ARGS = ['\"max-age=31536000\"', ' ', 'always']
55
56 HEADER_ARGS = {'Strict-Transport-Security': HSTS_ARGS}
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/certbot-nginx/certbot_nginx/constants.py b/certbot-nginx/certbot_nginx/constants.py
--- a/certbot-nginx/certbot_nginx/constants.py
+++ b/certbot-nginx/certbot_nginx/constants.py
@@ -24,6 +24,7 @@
SSL_OPTIONS_HASHES_NEW = [
'63e2bddebb174a05c9d8a7cf2adf72f7af04349ba59a1a925fe447f73b2f1abf',
+ '2901debc7ecbc10917edd9084c05464c9c5930b463677571eaf8c94bffd11ae2',
]
"""SHA256 hashes of the contents of versions of MOD_SSL_CONF_SRC for nginx >= 1.5.9"""
@@ -34,6 +35,7 @@
'7f95624dd95cf5afc708b9f967ee83a24b8025dc7c8d9df2b556bbc64256b3ff',
'394732f2bbe3e5e637c3fb5c6e980a1f1b90b01e2e8d6b7cff41dde16e2a756d',
'4b16fec2bcbcd8a2f3296d886f17f9953ffdcc0af54582452ca1e52f5f776f16',
+ 'c052ffff0ad683f43bffe105f7c606b339536163490930e2632a335c8d191cc4',
] + SSL_OPTIONS_HASHES_NEW
"""SHA256 hashes of the contents of all versions of MOD_SSL_CONF_SRC"""
| {"golden_diff": "diff --git a/certbot-nginx/certbot_nginx/constants.py b/certbot-nginx/certbot_nginx/constants.py\n--- a/certbot-nginx/certbot_nginx/constants.py\n+++ b/certbot-nginx/certbot_nginx/constants.py\n@@ -24,6 +24,7 @@\n \n SSL_OPTIONS_HASHES_NEW = [\n '63e2bddebb174a05c9d8a7cf2adf72f7af04349ba59a1a925fe447f73b2f1abf',\n+ '2901debc7ecbc10917edd9084c05464c9c5930b463677571eaf8c94bffd11ae2',\n ]\n \"\"\"SHA256 hashes of the contents of versions of MOD_SSL_CONF_SRC for nginx >= 1.5.9\"\"\"\n \n@@ -34,6 +35,7 @@\n '7f95624dd95cf5afc708b9f967ee83a24b8025dc7c8d9df2b556bbc64256b3ff',\n '394732f2bbe3e5e637c3fb5c6e980a1f1b90b01e2e8d6b7cff41dde16e2a756d',\n '4b16fec2bcbcd8a2f3296d886f17f9953ffdcc0af54582452ca1e52f5f776f16',\n+ 'c052ffff0ad683f43bffe105f7c606b339536163490930e2632a335c8d191cc4',\n ] + SSL_OPTIONS_HASHES_NEW\n \"\"\"SHA256 hashes of the contents of all versions of MOD_SSL_CONF_SRC\"\"\"\n", "issue": "Update SSL session cache size to match Mozilla recommendations\nThis is a followup from the research issue at #6903.\r\n\r\nIdeally, https://github.com/mozilla/server-side-tls/issues/198 is resolved and Mozilla updates their recommendations. If not, I think we should update our value in https://github.com/certbot/certbot/blob/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf.\r\n\r\nExactly what these values should be is up for discussion, however, nginx's default timeout of 5 minutes seems like a reasonable place to start to me. I don't know of the top of my head how I think the cache should be configured.\n", "before_files": [{"content": "\"\"\"nginx plugin constants.\"\"\"\nimport platform\n\nFREEBSD_DARWIN_SERVER_ROOT = \"/usr/local/etc/nginx\"\nLINUX_SERVER_ROOT = \"/etc/nginx\"\n\nif platform.system() in ('FreeBSD', 'Darwin'):\n server_root_tmp = FREEBSD_DARWIN_SERVER_ROOT\nelse:\n server_root_tmp = LINUX_SERVER_ROOT\n\nCLI_DEFAULTS = dict(\n server_root=server_root_tmp,\n ctl=\"nginx\",\n)\n\"\"\"CLI defaults.\"\"\"\n\n\nMOD_SSL_CONF_DEST = \"options-ssl-nginx.conf\"\n\"\"\"Name of the mod_ssl config file as saved in `IConfig.config_dir`.\"\"\"\n\nUPDATED_MOD_SSL_CONF_DIGEST = \".updated-options-ssl-nginx-conf-digest.txt\"\n\"\"\"Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`.\"\"\"\n\nSSL_OPTIONS_HASHES_NEW = [\n '63e2bddebb174a05c9d8a7cf2adf72f7af04349ba59a1a925fe447f73b2f1abf',\n]\n\"\"\"SHA256 hashes of the contents of versions of MOD_SSL_CONF_SRC for nginx >= 1.5.9\"\"\"\n\nALL_SSL_OPTIONS_HASHES = [\n '0f81093a1465e3d4eaa8b0c14e77b2a2e93568b0fc1351c2b87893a95f0de87c',\n '9a7b32c49001fed4cff8ad24353329472a50e86ade1ef9b2b9e43566a619612e',\n 'a6d9f1c7d6b36749b52ba061fff1421f9a0a3d2cfdafbd63c05d06f65b990937',\n '7f95624dd95cf5afc708b9f967ee83a24b8025dc7c8d9df2b556bbc64256b3ff',\n '394732f2bbe3e5e637c3fb5c6e980a1f1b90b01e2e8d6b7cff41dde16e2a756d',\n '4b16fec2bcbcd8a2f3296d886f17f9953ffdcc0af54582452ca1e52f5f776f16',\n] + SSL_OPTIONS_HASHES_NEW\n\"\"\"SHA256 hashes of the contents of all versions of MOD_SSL_CONF_SRC\"\"\"\n\ndef os_constant(key):\n # XXX TODO: In the future, this could return different constants\n # based on what OS we are running under. To see an\n # approach to how to handle different OSes, see the\n # apache version of this file. 
Currently, we do not\n # actually have any OS-specific constants on Nginx.\n \"\"\"\n Get a constant value for operating system\n\n :param key: name of cli constant\n :return: value of constant for active os\n \"\"\"\n return CLI_DEFAULTS[key]\n\nHSTS_ARGS = ['\\\"max-age=31536000\\\"', ' ', 'always']\n\nHEADER_ARGS = {'Strict-Transport-Security': HSTS_ARGS}\n", "path": "certbot-nginx/certbot_nginx/constants.py"}], "after_files": [{"content": "\"\"\"nginx plugin constants.\"\"\"\nimport platform\n\nFREEBSD_DARWIN_SERVER_ROOT = \"/usr/local/etc/nginx\"\nLINUX_SERVER_ROOT = \"/etc/nginx\"\n\nif platform.system() in ('FreeBSD', 'Darwin'):\n server_root_tmp = FREEBSD_DARWIN_SERVER_ROOT\nelse:\n server_root_tmp = LINUX_SERVER_ROOT\n\nCLI_DEFAULTS = dict(\n server_root=server_root_tmp,\n ctl=\"nginx\",\n)\n\"\"\"CLI defaults.\"\"\"\n\n\nMOD_SSL_CONF_DEST = \"options-ssl-nginx.conf\"\n\"\"\"Name of the mod_ssl config file as saved in `IConfig.config_dir`.\"\"\"\n\nUPDATED_MOD_SSL_CONF_DIGEST = \".updated-options-ssl-nginx-conf-digest.txt\"\n\"\"\"Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`.\"\"\"\n\nSSL_OPTIONS_HASHES_NEW = [\n '63e2bddebb174a05c9d8a7cf2adf72f7af04349ba59a1a925fe447f73b2f1abf',\n '2901debc7ecbc10917edd9084c05464c9c5930b463677571eaf8c94bffd11ae2',\n]\n\"\"\"SHA256 hashes of the contents of versions of MOD_SSL_CONF_SRC for nginx >= 1.5.9\"\"\"\n\nALL_SSL_OPTIONS_HASHES = [\n '0f81093a1465e3d4eaa8b0c14e77b2a2e93568b0fc1351c2b87893a95f0de87c',\n '9a7b32c49001fed4cff8ad24353329472a50e86ade1ef9b2b9e43566a619612e',\n 'a6d9f1c7d6b36749b52ba061fff1421f9a0a3d2cfdafbd63c05d06f65b990937',\n '7f95624dd95cf5afc708b9f967ee83a24b8025dc7c8d9df2b556bbc64256b3ff',\n '394732f2bbe3e5e637c3fb5c6e980a1f1b90b01e2e8d6b7cff41dde16e2a756d',\n '4b16fec2bcbcd8a2f3296d886f17f9953ffdcc0af54582452ca1e52f5f776f16',\n 'c052ffff0ad683f43bffe105f7c606b339536163490930e2632a335c8d191cc4',\n] + SSL_OPTIONS_HASHES_NEW\n\"\"\"SHA256 hashes of the contents of all versions of MOD_SSL_CONF_SRC\"\"\"\n\ndef os_constant(key):\n # XXX TODO: In the future, this could return different constants\n # based on what OS we are running under. To see an\n # approach to how to handle different OSes, see the\n # apache version of this file. Currently, we do not\n # actually have any OS-specific constants on Nginx.\n \"\"\"\n Get a constant value for operating system\n\n :param key: name of cli constant\n :return: value of constant for active os\n \"\"\"\n return CLI_DEFAULTS[key]\n\nHSTS_ARGS = ['\\\"max-age=31536000\\\"', ' ', 'always']\n\nHEADER_ARGS = {'Strict-Transport-Security': HSTS_ARGS}\n", "path": "certbot-nginx/certbot_nginx/constants.py"}]} | 1,331 | 497 |
gh_patches_debug_10834 | rasdani/github-patches | git_diff | getredash__redash-6561 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The 'Create your first Dashboard' newbie link will not dispear even I create dashboards
### Issue Summary
The 'Create your first Dashboard' newbie link will not dispear even I create dashboards. Other newbie link works fine. I tried a completely new Redash instance, this issue still exists. I remember there is a commit related to the newbie link recently, but I cannot find which. This issue does not exists in the previous Docker preview image, so I assume that it should be related to recent commits.
### Steps to Reproduce
1. Create new dashboards.
2. The link still there.
<img width="280" alt="image" src="https://github.com/getredash/redash/assets/8188177/19555165-b2df-4b07-89cf-7443858ca704">
### Technical details:
* Redash Version: 23.10.0-dev (dev)
* Browser/OS: Chrome 118
* How did you install Redash: Docker
The 'Create your first Dashboard' newbie link will not dispear even I create dashboards
### Issue Summary
The 'Create your first Dashboard' newbie link will not dispear even I create dashboards. Other newbie link works fine. I tried a completely new Redash instance, this issue still exists. I remember there is a commit related to the newbie link recently, but I cannot find which. This issue does not exists in the previous Docker preview image, so I assume that it should be related to recent commits.
### Steps to Reproduce
1. Create new dashboards.
2. The link still there.
<img width="280" alt="image" src="https://github.com/getredash/redash/assets/8188177/19555165-b2df-4b07-89cf-7443858ca704">
### Technical details:
* Redash Version: 23.10.0-dev (dev)
* Browser/OS: Chrome 118
* How did you install Redash: Docker
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/handlers/organization.py`
Content:
```
1 from flask_login import current_user, login_required
2
3 from redash import models
4 from redash.authentication import current_org
5 from redash.handlers import routes
6 from redash.handlers.base import json_response, org_scoped_rule
7
8
9 @routes.route(org_scoped_rule("/api/organization/status"), methods=["GET"])
10 @login_required
11 def organization_status(org_slug=None):
12 counters = {
13 "users": models.User.all(current_org).count(),
14 "alerts": models.Alert.all(group_ids=current_user.group_ids).count(),
15 "data_sources": models.DataSource.all(current_org, group_ids=current_user.group_ids).count(),
16 "queries": models.Query.all_queries(current_user.group_ids, current_user.id, include_drafts=True).count(),
17 "dashboards": models.Dashboard.query.filter(
18 models.Dashboard.org == current_org, models.Dashboard.is_archived is False
19 ).count(),
20 }
21
22 return json_response(dict(object_counters=counters))
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/handlers/organization.py b/redash/handlers/organization.py
--- a/redash/handlers/organization.py
+++ b/redash/handlers/organization.py
@@ -15,7 +15,7 @@
"data_sources": models.DataSource.all(current_org, group_ids=current_user.group_ids).count(),
"queries": models.Query.all_queries(current_user.group_ids, current_user.id, include_drafts=True).count(),
"dashboards": models.Dashboard.query.filter(
- models.Dashboard.org == current_org, models.Dashboard.is_archived is False
+ models.Dashboard.org == current_org, models.Dashboard.is_archived.is_(False)
).count(),
}
| {"golden_diff": "diff --git a/redash/handlers/organization.py b/redash/handlers/organization.py\n--- a/redash/handlers/organization.py\n+++ b/redash/handlers/organization.py\n@@ -15,7 +15,7 @@\n \"data_sources\": models.DataSource.all(current_org, group_ids=current_user.group_ids).count(),\n \"queries\": models.Query.all_queries(current_user.group_ids, current_user.id, include_drafts=True).count(),\n \"dashboards\": models.Dashboard.query.filter(\n- models.Dashboard.org == current_org, models.Dashboard.is_archived is False\n+ models.Dashboard.org == current_org, models.Dashboard.is_archived.is_(False)\n ).count(),\n }\n", "issue": "The 'Create your first Dashboard' newbie link will not dispear even I create dashboards\n### Issue Summary\r\n\r\nThe 'Create your first Dashboard' newbie link will not dispear even I create dashboards. Other newbie link works fine. I tried a completely new Redash instance, this issue still exists. I remember there is a commit related to the newbie link recently, but I cannot find which. This issue does not exists in the previous Docker preview image, so I assume that it should be related to recent commits.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create new dashboards.\r\n2. The link still there.\r\n\r\n<img width=\"280\" alt=\"image\" src=\"https://github.com/getredash/redash/assets/8188177/19555165-b2df-4b07-89cf-7443858ca704\">\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 23.10.0-dev (dev)\r\n* Browser/OS: Chrome 118\r\n* How did you install Redash: Docker\r\n\nThe 'Create your first Dashboard' newbie link will not dispear even I create dashboards\n### Issue Summary\r\n\r\nThe 'Create your first Dashboard' newbie link will not dispear even I create dashboards. Other newbie link works fine. I tried a completely new Redash instance, this issue still exists. I remember there is a commit related to the newbie link recently, but I cannot find which. This issue does not exists in the previous Docker preview image, so I assume that it should be related to recent commits.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create new dashboards.\r\n2. 
The link still there.\r\n\r\n<img width=\"280\" alt=\"image\" src=\"https://github.com/getredash/redash/assets/8188177/19555165-b2df-4b07-89cf-7443858ca704\">\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 23.10.0-dev (dev)\r\n* Browser/OS: Chrome 118\r\n* How did you install Redash: Docker\r\n\n", "before_files": [{"content": "from flask_login import current_user, login_required\n\nfrom redash import models\nfrom redash.authentication import current_org\nfrom redash.handlers import routes\nfrom redash.handlers.base import json_response, org_scoped_rule\n\n\[email protected](org_scoped_rule(\"/api/organization/status\"), methods=[\"GET\"])\n@login_required\ndef organization_status(org_slug=None):\n counters = {\n \"users\": models.User.all(current_org).count(),\n \"alerts\": models.Alert.all(group_ids=current_user.group_ids).count(),\n \"data_sources\": models.DataSource.all(current_org, group_ids=current_user.group_ids).count(),\n \"queries\": models.Query.all_queries(current_user.group_ids, current_user.id, include_drafts=True).count(),\n \"dashboards\": models.Dashboard.query.filter(\n models.Dashboard.org == current_org, models.Dashboard.is_archived is False\n ).count(),\n }\n\n return json_response(dict(object_counters=counters))\n", "path": "redash/handlers/organization.py"}], "after_files": [{"content": "from flask_login import current_user, login_required\n\nfrom redash import models\nfrom redash.authentication import current_org\nfrom redash.handlers import routes\nfrom redash.handlers.base import json_response, org_scoped_rule\n\n\[email protected](org_scoped_rule(\"/api/organization/status\"), methods=[\"GET\"])\n@login_required\ndef organization_status(org_slug=None):\n counters = {\n \"users\": models.User.all(current_org).count(),\n \"alerts\": models.Alert.all(group_ids=current_user.group_ids).count(),\n \"data_sources\": models.DataSource.all(current_org, group_ids=current_user.group_ids).count(),\n \"queries\": models.Query.all_queries(current_user.group_ids, current_user.id, include_drafts=True).count(),\n \"dashboards\": models.Dashboard.query.filter(\n models.Dashboard.org == current_org, models.Dashboard.is_archived.is_(False)\n ).count(),\n }\n\n return json_response(dict(object_counters=counters))\n", "path": "redash/handlers/organization.py"}]} | 962 | 156 |