problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64 271-2.05k) | num_tokens_diff (int64 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_14394 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1728 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Annotation answers get parsed incorrectly in csv export
For annotation type answers, the csv export looks like this currently:

It appears the annotation json gets part as part of the export. We should probably add some escaping.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/core/renderers.py`
Content:
```
1 from rest_framework_csv.renderers import CSVRenderer
2
3
4 class PaginatedCSVRenderer(CSVRenderer):
5 results_field = "results"
6
7 def render(self, data, *args, **kwargs):
8 if self.results_field in data:
9 data = data[self.results_field]
10
11 return super().render(data, *args, **kwargs)
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/core/renderers.py b/app/grandchallenge/core/renderers.py
--- a/app/grandchallenge/core/renderers.py
+++ b/app/grandchallenge/core/renderers.py
@@ -1,3 +1,5 @@
+import json
+
from rest_framework_csv.renderers import CSVRenderer
@@ -9,3 +11,19 @@
data = data[self.results_field]
return super().render(data, *args, **kwargs)
+
+ def flatten_data(self, data):
+ """
+ Create a dictionary that is 1 level deep, with nested values serialized
+ as json. This means that the header rows are now consistent.
+ """
+ for row in data:
+ flat_row = {k: self._flatten_value(v) for k, v in row.items()}
+ yield flat_row
+
+ @staticmethod
+ def _flatten_value(value):
+ if isinstance(value, (dict, list)):
+ return json.dumps(value)
+ else:
+ return value
| {"golden_diff": "diff --git a/app/grandchallenge/core/renderers.py b/app/grandchallenge/core/renderers.py\n--- a/app/grandchallenge/core/renderers.py\n+++ b/app/grandchallenge/core/renderers.py\n@@ -1,3 +1,5 @@\n+import json\n+\n from rest_framework_csv.renderers import CSVRenderer\n \n \n@@ -9,3 +11,19 @@\n data = data[self.results_field]\n \n return super().render(data, *args, **kwargs)\n+\n+ def flatten_data(self, data):\n+ \"\"\"\n+ Create a dictionary that is 1 level deep, with nested values serialized\n+ as json. This means that the header rows are now consistent.\n+ \"\"\"\n+ for row in data:\n+ flat_row = {k: self._flatten_value(v) for k, v in row.items()}\n+ yield flat_row\n+\n+ @staticmethod\n+ def _flatten_value(value):\n+ if isinstance(value, (dict, list)):\n+ return json.dumps(value)\n+ else:\n+ return value\n", "issue": "Annotation answers get parsed incorrectly in csv export\nFor annotation type answers, the csv export looks like this currently:\r\n\r\n\r\nIt appears the annotation json gets part as part of the export. We should probably add some escaping.\n", "before_files": [{"content": "from rest_framework_csv.renderers import CSVRenderer\n\n\nclass PaginatedCSVRenderer(CSVRenderer):\n results_field = \"results\"\n\n def render(self, data, *args, **kwargs):\n if self.results_field in data:\n data = data[self.results_field]\n\n return super().render(data, *args, **kwargs)\n", "path": "app/grandchallenge/core/renderers.py"}], "after_files": [{"content": "import json\n\nfrom rest_framework_csv.renderers import CSVRenderer\n\n\nclass PaginatedCSVRenderer(CSVRenderer):\n results_field = \"results\"\n\n def render(self, data, *args, **kwargs):\n if self.results_field in data:\n data = data[self.results_field]\n\n return super().render(data, *args, **kwargs)\n\n def flatten_data(self, data):\n \"\"\"\n Create a dictionary that is 1 level deep, with nested values serialized\n as json. This means that the header rows are now consistent.\n \"\"\"\n for row in data:\n flat_row = {k: self._flatten_value(v) for k, v in row.items()}\n yield flat_row\n\n @staticmethod\n def _flatten_value(value):\n if isinstance(value, (dict, list)):\n return json.dumps(value)\n else:\n return value\n", "path": "app/grandchallenge/core/renderers.py"}]} | 474 | 228 |
gh_patches_debug_14311 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-6264 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Monitoring: where is CallOptions on monitoring API example?
[OS] macOS Sierra 10.12.6
[Versions]
- Python 3.6.1
```
google-api-core==1.2.1
google-api-python-client==1.7.3
google-auth==1.5.0
google-auth-httplib2==0.0.3
google-cloud-monitoring==0.30.0
googleapis-common-protos==1.5.3
```
----
## CallOptions class was not found!
Hi. I'm new to GCP and Stackdriver. I wanted to use Google Kubernetes Engine and its auto scaling by custom metrics. Then, it is required to export the metrics to Stackdriver Monitoring, so I am trying to do it.
But, After installing above-mentioned libraries, the example code on monitoring API README document failed. The pit hole is that `CallOptions` was not found, thus I've searched it in this repository and some other repositories.
And finally, I couldn't find it...
`CallOptions` is defined in gax.python, but the package is currently deprecated and moved to google-api-core. So I guess that also the dependency is currently corrupted or the some examples are out-of-date.
Please tell me how handle this problem.
_Thank you for the great package and platform._
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vision/google/cloud/vision_helpers/__init__.py`
Content:
```
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 import io
17
18 from google.api_core import protobuf_helpers as protobuf
19
20
21 class VisionHelpers(object):
22 """A set of convenience methods to make the Vision GAPIC easier to use.
23
24 This class should be considered abstract; it is used as a superclass
25 in a multiple-inheritance construction alongside the applicable GAPIC.
26 See the :class:`~google.cloud.vision_v1.ImageAnnotatorClient`.
27 """
28 def annotate_image(self, request, retry=None, timeout=None):
29 """Run image detection and annotation for an image.
30
31 Example:
32 >>> from google.cloud.vision_v1 import ImageAnnotatorClient
33 >>> client = ImageAnnotatorClient()
34 >>> request = {
35 ... 'image': {
36 ... 'source': {'image_uri': 'https://foo.com/image.jpg'},
37 ... },
38 ... }
39 >>> response = client.annotate_image(request)
40
41 Args:
42 request (:class:`~.vision_v1.types.AnnotateImageRequest`)
43 options (:class:`google.gax.CallOptions`): Overrides the default
44 settings for this call, e.g, timeout, retries, etc.
45
46 Returns:
47 :class:`~.vision_v1.types.AnnotateImageResponse` The API response.
48 """
49 # If the image is a file handler, set the content.
50 image = protobuf.get(request, 'image')
51 if hasattr(image, 'read'):
52 img_bytes = image.read()
53 protobuf.set(request, 'image', {})
54 protobuf.set(request, 'image.content', img_bytes)
55 image = protobuf.get(request, 'image')
56
57 # If a filename is provided, read the file.
58 filename = protobuf.get(image, 'source.filename', default=None)
59 if filename:
60 with io.open(filename, 'rb') as img_file:
61 protobuf.set(request, 'image.content', img_file.read())
62 protobuf.set(request, 'image.source', None)
63
64 # This method allows features not to be specified, and you get all
65 # of them.
66 protobuf.setdefault(request, 'features', self._get_all_features())
67 r = self.batch_annotate_images([request], retry=retry, timeout=timeout)
68 return r.responses[0]
69
70 def _get_all_features(self):
71 """Return a list of all features.
72
73 Returns:
74 list: A list of all available features.
75 """
76 return [
77 {'type': feature}
78 for feature in self.enums.Feature.Type if feature != 0]
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vision/google/cloud/vision_helpers/__init__.py b/vision/google/cloud/vision_helpers/__init__.py
--- a/vision/google/cloud/vision_helpers/__init__.py
+++ b/vision/google/cloud/vision_helpers/__init__.py
@@ -40,8 +40,12 @@
Args:
request (:class:`~.vision_v1.types.AnnotateImageRequest`)
- options (:class:`google.gax.CallOptions`): Overrides the default
- settings for this call, e.g, timeout, retries, etc.
+ retry (Optional[google.api_core.retry.Retry]): A retry object used
+ to retry requests. If ``None`` is specified, requests will not
+ be retried.
+ timeout (Optional[float]): The amount of time, in seconds, to wait
+ for the request to complete. Note that if ``retry`` is
+ specified, the timeout applies to each individual attempt.
Returns:
:class:`~.vision_v1.types.AnnotateImageResponse` The API response.
| {"golden_diff": "diff --git a/vision/google/cloud/vision_helpers/__init__.py b/vision/google/cloud/vision_helpers/__init__.py\n--- a/vision/google/cloud/vision_helpers/__init__.py\n+++ b/vision/google/cloud/vision_helpers/__init__.py\n@@ -40,8 +40,12 @@\n \n Args:\n request (:class:`~.vision_v1.types.AnnotateImageRequest`)\n- options (:class:`google.gax.CallOptions`): Overrides the default\n- settings for this call, e.g, timeout, retries, etc.\n+ retry (Optional[google.api_core.retry.Retry]): A retry object used\n+ to retry requests. If ``None`` is specified, requests will not\n+ be retried.\n+ timeout (Optional[float]): The amount of time, in seconds, to wait\n+ for the request to complete. Note that if ``retry`` is\n+ specified, the timeout applies to each individual attempt.\n \n Returns:\n :class:`~.vision_v1.types.AnnotateImageResponse` The API response.\n", "issue": "Monitoring: where is CallOptions on monitoring API example?\n[OS] macOS Sierra 10.12.6\r\n[Versions]\r\n\r\n- Python 3.6.1\r\n\r\n```\r\ngoogle-api-core==1.2.1\r\ngoogle-api-python-client==1.7.3\r\ngoogle-auth==1.5.0\r\ngoogle-auth-httplib2==0.0.3\r\ngoogle-cloud-monitoring==0.30.0\r\ngoogleapis-common-protos==1.5.3\r\n```\r\n\r\n----\r\n\r\n## CallOptions class was not found!\r\n\r\nHi. I'm new to GCP and Stackdriver. I wanted to use Google Kubernetes Engine and its auto scaling by custom metrics. Then, it is required to export the metrics to Stackdriver Monitoring, so I am trying to do it.\r\n\r\nBut, After installing above-mentioned libraries, the example code on monitoring API README document failed. The pit hole is that `CallOptions` was not found, thus I've searched it in this repository and some other repositories.\r\n\r\nAnd finally, I couldn't find it...\r\n\r\n`CallOptions` is defined in gax.python, but the package is currently deprecated and moved to google-api-core. So I guess that also the dependency is currently corrupted or the some examples are out-of-date.\r\n\r\nPlease tell me how handle this problem.\r\n\r\n_Thank you for the great package and platform._\n", "before_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nimport io\n\nfrom google.api_core import protobuf_helpers as protobuf\n\n\nclass VisionHelpers(object):\n \"\"\"A set of convenience methods to make the Vision GAPIC easier to use.\n\n This class should be considered abstract; it is used as a superclass\n in a multiple-inheritance construction alongside the applicable GAPIC.\n See the :class:`~google.cloud.vision_v1.ImageAnnotatorClient`.\n \"\"\"\n def annotate_image(self, request, retry=None, timeout=None):\n \"\"\"Run image detection and annotation for an image.\n\n Example:\n >>> from google.cloud.vision_v1 import ImageAnnotatorClient\n >>> client = ImageAnnotatorClient()\n >>> request = {\n ... 'image': {\n ... 'source': {'image_uri': 'https://foo.com/image.jpg'},\n ... },\n ... 
}\n >>> response = client.annotate_image(request)\n\n Args:\n request (:class:`~.vision_v1.types.AnnotateImageRequest`)\n options (:class:`google.gax.CallOptions`): Overrides the default\n settings for this call, e.g, timeout, retries, etc.\n\n Returns:\n :class:`~.vision_v1.types.AnnotateImageResponse` The API response.\n \"\"\"\n # If the image is a file handler, set the content.\n image = protobuf.get(request, 'image')\n if hasattr(image, 'read'):\n img_bytes = image.read()\n protobuf.set(request, 'image', {})\n protobuf.set(request, 'image.content', img_bytes)\n image = protobuf.get(request, 'image')\n\n # If a filename is provided, read the file.\n filename = protobuf.get(image, 'source.filename', default=None)\n if filename:\n with io.open(filename, 'rb') as img_file:\n protobuf.set(request, 'image.content', img_file.read())\n protobuf.set(request, 'image.source', None)\n\n # This method allows features not to be specified, and you get all\n # of them.\n protobuf.setdefault(request, 'features', self._get_all_features())\n r = self.batch_annotate_images([request], retry=retry, timeout=timeout)\n return r.responses[0]\n\n def _get_all_features(self):\n \"\"\"Return a list of all features.\n\n Returns:\n list: A list of all available features.\n \"\"\"\n return [\n {'type': feature}\n for feature in self.enums.Feature.Type if feature != 0]\n", "path": "vision/google/cloud/vision_helpers/__init__.py"}], "after_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nimport io\n\nfrom google.api_core import protobuf_helpers as protobuf\n\n\nclass VisionHelpers(object):\n \"\"\"A set of convenience methods to make the Vision GAPIC easier to use.\n\n This class should be considered abstract; it is used as a superclass\n in a multiple-inheritance construction alongside the applicable GAPIC.\n See the :class:`~google.cloud.vision_v1.ImageAnnotatorClient`.\n \"\"\"\n def annotate_image(self, request, retry=None, timeout=None):\n \"\"\"Run image detection and annotation for an image.\n\n Example:\n >>> from google.cloud.vision_v1 import ImageAnnotatorClient\n >>> client = ImageAnnotatorClient()\n >>> request = {\n ... 'image': {\n ... 'source': {'image_uri': 'https://foo.com/image.jpg'},\n ... },\n ... }\n >>> response = client.annotate_image(request)\n\n Args:\n request (:class:`~.vision_v1.types.AnnotateImageRequest`)\n retry (Optional[google.api_core.retry.Retry]): A retry object used\n to retry requests. If ``None`` is specified, requests will not\n be retried.\n timeout (Optional[float]): The amount of time, in seconds, to wait\n for the request to complete. 
Note that if ``retry`` is\n specified, the timeout applies to each individual attempt.\n\n Returns:\n :class:`~.vision_v1.types.AnnotateImageResponse` The API response.\n \"\"\"\n # If the image is a file handler, set the content.\n image = protobuf.get(request, 'image')\n if hasattr(image, 'read'):\n img_bytes = image.read()\n protobuf.set(request, 'image', {})\n protobuf.set(request, 'image.content', img_bytes)\n image = protobuf.get(request, 'image')\n\n # If a filename is provided, read the file.\n filename = protobuf.get(image, 'source.filename', default=None)\n if filename:\n with io.open(filename, 'rb') as img_file:\n protobuf.set(request, 'image.content', img_file.read())\n protobuf.set(request, 'image.source', None)\n\n # This method allows features not to be specified, and you get all\n # of them.\n protobuf.setdefault(request, 'features', self._get_all_features())\n r = self.batch_annotate_images([request], retry=retry, timeout=timeout)\n return r.responses[0]\n\n def _get_all_features(self):\n \"\"\"Return a list of all features.\n\n Returns:\n list: A list of all available features.\n \"\"\"\n return [\n {'type': feature}\n for feature in self.enums.Feature.Type if feature != 0]\n", "path": "vision/google/cloud/vision_helpers/__init__.py"}]} | 1,362 | 233 |
gh_patches_debug_21937 | rasdani/github-patches | git_diff | TheAlgorithms__Python-9228 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Concatenate/consolidate all algorithms with different implementations
### Feature description
There are lots of algorithms with the same concept but different implementations/methods in different files. All these should be moved into one file
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `maths/miller_rabin.py`
Content:
```
1 import random
2
3 from .binary_exp_mod import bin_exp_mod
4
5
6 # This is a probabilistic check to test primality, useful for big numbers!
7 # if it's a prime, it will return true
8 # if it's not a prime, the chance of it returning true is at most 1/4**prec
9 def is_prime_big(n, prec=1000):
10 """
11 >>> from maths.prime_check import is_prime
12 >>> # all(is_prime_big(i) == is_prime(i) for i in range(1000)) # 3.45s
13 >>> all(is_prime_big(i) == is_prime(i) for i in range(256))
14 True
15 """
16 if n < 2:
17 return False
18
19 if n % 2 == 0:
20 return n == 2
21
22 # this means n is odd
23 d = n - 1
24 exp = 0
25 while d % 2 == 0:
26 d /= 2
27 exp += 1
28
29 # n - 1=d*(2**exp)
30 count = 0
31 while count < prec:
32 a = random.randint(2, n - 1)
33 b = bin_exp_mod(a, d, n)
34 if b != 1:
35 flag = True
36 for _ in range(exp):
37 if b == n - 1:
38 flag = False
39 break
40 b = b * b
41 b %= n
42 if flag:
43 return False
44 count += 1
45 return True
46
47
48 if __name__ == "__main__":
49 n = abs(int(input("Enter bound : ").strip()))
50 print("Here's the list of primes:")
51 print(", ".join(str(i) for i in range(n + 1) if is_prime_big(i)))
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/maths/miller_rabin.py b/maths/miller_rabin.py
deleted file mode 100644
--- a/maths/miller_rabin.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import random
-
-from .binary_exp_mod import bin_exp_mod
-
-
-# This is a probabilistic check to test primality, useful for big numbers!
-# if it's a prime, it will return true
-# if it's not a prime, the chance of it returning true is at most 1/4**prec
-def is_prime_big(n, prec=1000):
- """
- >>> from maths.prime_check import is_prime
- >>> # all(is_prime_big(i) == is_prime(i) for i in range(1000)) # 3.45s
- >>> all(is_prime_big(i) == is_prime(i) for i in range(256))
- True
- """
- if n < 2:
- return False
-
- if n % 2 == 0:
- return n == 2
-
- # this means n is odd
- d = n - 1
- exp = 0
- while d % 2 == 0:
- d /= 2
- exp += 1
-
- # n - 1=d*(2**exp)
- count = 0
- while count < prec:
- a = random.randint(2, n - 1)
- b = bin_exp_mod(a, d, n)
- if b != 1:
- flag = True
- for _ in range(exp):
- if b == n - 1:
- flag = False
- break
- b = b * b
- b %= n
- if flag:
- return False
- count += 1
- return True
-
-
-if __name__ == "__main__":
- n = abs(int(input("Enter bound : ").strip()))
- print("Here's the list of primes:")
- print(", ".join(str(i) for i in range(n + 1) if is_prime_big(i)))
| {"golden_diff": "diff --git a/maths/miller_rabin.py b/maths/miller_rabin.py\ndeleted file mode 100644\n--- a/maths/miller_rabin.py\n+++ /dev/null\n@@ -1,51 +0,0 @@\n-import random\n-\n-from .binary_exp_mod import bin_exp_mod\n-\n-\n-# This is a probabilistic check to test primality, useful for big numbers!\n-# if it's a prime, it will return true\n-# if it's not a prime, the chance of it returning true is at most 1/4**prec\n-def is_prime_big(n, prec=1000):\n- \"\"\"\n- >>> from maths.prime_check import is_prime\n- >>> # all(is_prime_big(i) == is_prime(i) for i in range(1000)) # 3.45s\n- >>> all(is_prime_big(i) == is_prime(i) for i in range(256))\n- True\n- \"\"\"\n- if n < 2:\n- return False\n-\n- if n % 2 == 0:\n- return n == 2\n-\n- # this means n is odd\n- d = n - 1\n- exp = 0\n- while d % 2 == 0:\n- d /= 2\n- exp += 1\n-\n- # n - 1=d*(2**exp)\n- count = 0\n- while count < prec:\n- a = random.randint(2, n - 1)\n- b = bin_exp_mod(a, d, n)\n- if b != 1:\n- flag = True\n- for _ in range(exp):\n- if b == n - 1:\n- flag = False\n- break\n- b = b * b\n- b %= n\n- if flag:\n- return False\n- count += 1\n- return True\n-\n-\n-if __name__ == \"__main__\":\n- n = abs(int(input(\"Enter bound : \").strip()))\n- print(\"Here's the list of primes:\")\n- print(\", \".join(str(i) for i in range(n + 1) if is_prime_big(i)))\n", "issue": "Concatenate/consolidate all algorithms with different implementations\n### Feature description\n\nThere are lots of algorithms with the same concept but different implementations/methods in different files. All these should be moved into one file\n", "before_files": [{"content": "import random\n\nfrom .binary_exp_mod import bin_exp_mod\n\n\n# This is a probabilistic check to test primality, useful for big numbers!\n# if it's a prime, it will return true\n# if it's not a prime, the chance of it returning true is at most 1/4**prec\ndef is_prime_big(n, prec=1000):\n \"\"\"\n >>> from maths.prime_check import is_prime\n >>> # all(is_prime_big(i) == is_prime(i) for i in range(1000)) # 3.45s\n >>> all(is_prime_big(i) == is_prime(i) for i in range(256))\n True\n \"\"\"\n if n < 2:\n return False\n\n if n % 2 == 0:\n return n == 2\n\n # this means n is odd\n d = n - 1\n exp = 0\n while d % 2 == 0:\n d /= 2\n exp += 1\n\n # n - 1=d*(2**exp)\n count = 0\n while count < prec:\n a = random.randint(2, n - 1)\n b = bin_exp_mod(a, d, n)\n if b != 1:\n flag = True\n for _ in range(exp):\n if b == n - 1:\n flag = False\n break\n b = b * b\n b %= n\n if flag:\n return False\n count += 1\n return True\n\n\nif __name__ == \"__main__\":\n n = abs(int(input(\"Enter bound : \").strip()))\n print(\"Here's the list of primes:\")\n print(\", \".join(str(i) for i in range(n + 1) if is_prime_big(i)))\n", "path": "maths/miller_rabin.py"}], "after_files": [{"content": null, "path": "maths/miller_rabin.py"}]} | 800 | 497 |
gh_patches_debug_9138 | rasdani/github-patches | git_diff | keras-team__autokeras-277 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cannot install autokeras because of package dependency confliction
### Bug Description
following package dependency is configured at setup.py
https://github.com/jhfjhfj1/autokeras/blob/master/setup.py#L6
```
install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',
'tensorflow>=1.10.0', 'tqdm==4.25.0'],
```
When execute `pip install autokeras`, following error is appeared.
```
keras 2.2.2 has requirement keras-applications==1.0.4, but you'll have keras-applications 1.0.6 which is incompatible.
keras 2.2.2 has requirement keras-preprocessing==1.0.2, but you'll have keras-preprocessing 1.0.5 which is incompatible.
```
It is because that tensorflow==1.11.0 is installed first and
keras-applications >= 1.0.5 and keras-preprocessing > = 1.0.3 can installed with tensorflow==1.11.0.
On the other hand, keras==2.2.2's dependency versions are keras-applications==1.0.4 and keras-preprocessing==1.0.2.
tensorflow version should be defined as `tensorflow==1.10.0`at [setup.py L6](https://github.com/jhfjhfj1/autokeras/blob/master/setup.py#L6).
```
# before
install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',
'tensorflow>=1.10.0', 'tqdm==4.25.0'],
# after
install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',
'tensorflow==1.10.0', 'tqdm==4.25.0'],
```
### Reproducing Steps
Step1: curl https://gist.githubusercontent.com/chie8842/b3b9f3ea2d886bbb5aa5c903b9e42ee3/raw/e94cc375ca1265c66d4517a25a748f1e13a3de9d/Dockerfile -o Dockerfile
Step2: docker build -t autokeras -f Dockerfile .
Step3: docker run -it --rm autokeras /bin/bash
Step4: sudo pip install autokeras
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from distutils.core import setup
2
3 setup(
4 name='autokeras',
5 packages=['autokeras'], # this must be the same as the name above
6 install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',
7 'tensorflow>=1.10.0', 'tqdm==4.25.0'],
8 version='0.2.18',
9 description='AutoML for deep learning',
10 author='Haifeng Jin',
11 author_email='[email protected]',
12 url='http://autokeras.com',
13 download_url='https://github.com/jhfjhfj1/autokeras/archive/0.2.18.tar.gz',
14 keywords=['automl'], # arbitrary keywords
15 classifiers=[]
16 )
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,7 +4,7 @@
name='autokeras',
packages=['autokeras'], # this must be the same as the name above
install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',
- 'tensorflow>=1.10.0', 'tqdm==4.25.0'],
+ 'tensorflow==1.10.0', 'tqdm==4.25.0'],
version='0.2.18',
description='AutoML for deep learning',
author='Haifeng Jin',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,7 +4,7 @@\n name='autokeras',\n packages=['autokeras'], # this must be the same as the name above\n install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',\n- 'tensorflow>=1.10.0', 'tqdm==4.25.0'],\n+ 'tensorflow==1.10.0', 'tqdm==4.25.0'],\n version='0.2.18',\n description='AutoML for deep learning',\n author='Haifeng Jin',\n", "issue": "cannot install autokeras because of package dependency confliction\n### Bug Description\r\nfollowing package dependency is configured at setup.py\r\nhttps://github.com/jhfjhfj1/autokeras/blob/master/setup.py#L6\r\n\r\n```\r\ninstall_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',\r\n 'tensorflow>=1.10.0', 'tqdm==4.25.0'],\r\n```\r\n\r\nWhen execute `pip install autokeras`, following error is appeared.\r\n\r\n```\r\nkeras 2.2.2 has requirement keras-applications==1.0.4, but you'll have keras-applications 1.0.6 which is incompatible.\r\nkeras 2.2.2 has requirement keras-preprocessing==1.0.2, but you'll have keras-preprocessing 1.0.5 which is incompatible.\r\n```\r\n\r\nIt is because that tensorflow==1.11.0 is installed first and\r\nkeras-applications >= 1.0.5 and keras-preprocessing > = 1.0.3 can installed with tensorflow==1.11.0.\r\nOn the other hand, keras==2.2.2's dependency versions are keras-applications==1.0.4 and keras-preprocessing==1.0.2.\r\n\r\n tensorflow version should be defined as `tensorflow==1.10.0`at [setup.py L6](https://github.com/jhfjhfj1/autokeras/blob/master/setup.py#L6).\r\n\r\n```\r\n# before\r\ninstall_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',\r\n 'tensorflow>=1.10.0', 'tqdm==4.25.0'],\r\n\r\n# after\r\ninstall_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',\r\n 'tensorflow==1.10.0', 'tqdm==4.25.0'],\r\n```\r\n\r\n### Reproducing Steps\r\n\u00a0\r\nStep1: curl https://gist.githubusercontent.com/chie8842/b3b9f3ea2d886bbb5aa5c903b9e42ee3/raw/e94cc375ca1265c66d4517a25a748f1e13a3de9d/Dockerfile -o Dockerfile\r\nStep2: docker build -t autokeras -f Dockerfile .\r\nStep3: docker run -it --rm autokeras /bin/bash\r\nStep4: sudo pip install autokeras\n", "before_files": [{"content": "from distutils.core import setup\n\nsetup(\n name='autokeras',\n packages=['autokeras'], # this must be the same as the name above\n install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',\n 'tensorflow>=1.10.0', 'tqdm==4.25.0'],\n version='0.2.18',\n description='AutoML for deep learning',\n author='Haifeng Jin',\n author_email='[email protected]',\n url='http://autokeras.com',\n download_url='https://github.com/jhfjhfj1/autokeras/archive/0.2.18.tar.gz',\n keywords=['automl'], # arbitrary keywords\n classifiers=[]\n)\n", "path": "setup.py"}], "after_files": [{"content": "from distutils.core import setup\n\nsetup(\n name='autokeras',\n packages=['autokeras'], # this must be the same as the name above\n install_requires=['torch==0.4.1', 'torchvision==0.2.1', 'numpy>=1.14.5', 'keras==2.2.2', 'scikit-learn==0.19.1',\n 'tensorflow==1.10.0', 'tqdm==4.25.0'],\n version='0.2.18',\n description='AutoML for deep learning',\n author='Haifeng Jin',\n author_email='[email protected]',\n url='http://autokeras.com',\n download_url='https://github.com/jhfjhfj1/autokeras/archive/0.2.18.tar.gz',\n 
keywords=['automl'], # arbitrary keywords\n classifiers=[]\n)\n", "path": "setup.py"}]} | 1,144 | 186 |
gh_patches_debug_17732 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1381 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import from Goodreads doesn't work correctly
**Describe the bug**
Import from goodreads csv imports only first line of csv and stops with 'success' status. If user tries to reimport same csv again importer takes the same first imported line yet again.
Broken import examples https://bookwyrm.social/import/775 https://bookwyrm.social/import/776
**Expected behavior**
Importer correctly imports all lines of csv or returns error message to user
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/import_data.py`
Content:
```
1 """ import books from another app """
2 from io import TextIOWrapper
3
4 from django.contrib.auth.decorators import login_required
5 from django.core.exceptions import PermissionDenied
6 from django.http import HttpResponseBadRequest
7 from django.shortcuts import get_object_or_404, redirect
8 from django.template.response import TemplateResponse
9 from django.utils.decorators import method_decorator
10 from django.utils.translation import gettext_lazy as _
11 from django.views import View
12
13 from bookwyrm import forms, models
14 from bookwyrm.importers import (
15 Importer,
16 LibrarythingImporter,
17 GoodreadsImporter,
18 StorygraphImporter,
19 )
20 from bookwyrm.tasks import app
21
22 # pylint: disable= no-self-use
23 @method_decorator(login_required, name="dispatch")
24 class Import(View):
25 """import view"""
26
27 def get(self, request):
28 """load import page"""
29 return TemplateResponse(
30 request,
31 "import.html",
32 {
33 "import_form": forms.ImportForm(),
34 "jobs": models.ImportJob.objects.filter(user=request.user).order_by(
35 "-created_date"
36 ),
37 },
38 )
39
40 def post(self, request):
41 """ingest a goodreads csv"""
42 form = forms.ImportForm(request.POST, request.FILES)
43 if form.is_valid():
44 include_reviews = request.POST.get("include_reviews") == "on"
45 privacy = request.POST.get("privacy")
46 source = request.POST.get("source")
47
48 importer = None
49 if source == "LibraryThing":
50 importer = LibrarythingImporter()
51 elif source == "Storygraph":
52 importer = StorygraphImporter()
53 else:
54 # Default : GoodReads
55 importer = GoodreadsImporter()
56
57 try:
58 job = importer.create_job(
59 request.user,
60 TextIOWrapper(
61 request.FILES["csv_file"], encoding=importer.encoding
62 ),
63 include_reviews,
64 privacy,
65 )
66 except (UnicodeDecodeError, ValueError, KeyError):
67 return HttpResponseBadRequest(_("Not a valid csv file"))
68
69 importer.start_import(job)
70
71 return redirect("/import/%d" % job.id)
72 return HttpResponseBadRequest()
73
74
75 @method_decorator(login_required, name="dispatch")
76 class ImportStatus(View):
77 """status of an existing import"""
78
79 def get(self, request, job_id):
80 """status of an import job"""
81 job = get_object_or_404(models.ImportJob, id=job_id)
82 if job.user != request.user:
83 raise PermissionDenied
84
85 try:
86 task = app.AsyncResult(job.task_id)
87 # triggers attribute error if the task won't load
88 task.status # pylint: disable=pointless-statement
89 except (ValueError, AttributeError):
90 task = None
91
92 items = job.items.order_by("index").all()
93 failed_items = [i for i in items if i.fail_reason]
94 items = [i for i in items if not i.fail_reason]
95 return TemplateResponse(
96 request,
97 "import_status.html",
98 {"job": job, "items": items, "failed_items": failed_items, "task": task},
99 )
100
101 def post(self, request, job_id):
102 """retry lines from an import"""
103 job = get_object_or_404(models.ImportJob, id=job_id)
104 items = []
105 for item in request.POST.getlist("import_item"):
106 items.append(get_object_or_404(models.ImportItem, id=item))
107
108 importer = Importer()
109 job = importer.create_retry_job(
110 request.user,
111 job,
112 items,
113 )
114 importer.start_import(job)
115 return redirect("/import/%d" % job.id)
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/import_data.py b/bookwyrm/views/import_data.py
--- a/bookwyrm/views/import_data.py
+++ b/bookwyrm/views/import_data.py
@@ -28,7 +28,7 @@
"""load import page"""
return TemplateResponse(
request,
- "import.html",
+ "import/import.html",
{
"import_form": forms.ImportForm(),
"jobs": models.ImportJob.objects.filter(user=request.user).order_by(
@@ -94,7 +94,7 @@
items = [i for i in items if not i.fail_reason]
return TemplateResponse(
request,
- "import_status.html",
+ "import/import_status.html",
{"job": job, "items": items, "failed_items": failed_items, "task": task},
)
| {"golden_diff": "diff --git a/bookwyrm/views/import_data.py b/bookwyrm/views/import_data.py\n--- a/bookwyrm/views/import_data.py\n+++ b/bookwyrm/views/import_data.py\n@@ -28,7 +28,7 @@\n \"\"\"load import page\"\"\"\n return TemplateResponse(\n request,\n- \"import.html\",\n+ \"import/import.html\",\n {\n \"import_form\": forms.ImportForm(),\n \"jobs\": models.ImportJob.objects.filter(user=request.user).order_by(\n@@ -94,7 +94,7 @@\n items = [i for i in items if not i.fail_reason]\n return TemplateResponse(\n request,\n- \"import_status.html\",\n+ \"import/import_status.html\",\n {\"job\": job, \"items\": items, \"failed_items\": failed_items, \"task\": task},\n )\n", "issue": "Import from Goodreads doesn't work correctly\n**Describe the bug**\r\n\r\nImport from goodreads csv imports only first line of csv and stops with 'success' status. If user tries to reimport same csv again importer takes the same first imported line yet again. \r\n\r\nBroken import examples https://bookwyrm.social/import/775 https://bookwyrm.social/import/776\r\n\r\n**Expected behavior**\r\nImporter correctly imports all lines of csv or returns error message to user\n", "before_files": [{"content": "\"\"\" import books from another app \"\"\"\nfrom io import TextIOWrapper\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.importers import (\n Importer,\n LibrarythingImporter,\n GoodreadsImporter,\n StorygraphImporter,\n)\nfrom bookwyrm.tasks import app\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass Import(View):\n \"\"\"import view\"\"\"\n\n def get(self, request):\n \"\"\"load import page\"\"\"\n return TemplateResponse(\n request,\n \"import.html\",\n {\n \"import_form\": forms.ImportForm(),\n \"jobs\": models.ImportJob.objects.filter(user=request.user).order_by(\n \"-created_date\"\n ),\n },\n )\n\n def post(self, request):\n \"\"\"ingest a goodreads csv\"\"\"\n form = forms.ImportForm(request.POST, request.FILES)\n if form.is_valid():\n include_reviews = request.POST.get(\"include_reviews\") == \"on\"\n privacy = request.POST.get(\"privacy\")\n source = request.POST.get(\"source\")\n\n importer = None\n if source == \"LibraryThing\":\n importer = LibrarythingImporter()\n elif source == \"Storygraph\":\n importer = StorygraphImporter()\n else:\n # Default : GoodReads\n importer = GoodreadsImporter()\n\n try:\n job = importer.create_job(\n request.user,\n TextIOWrapper(\n request.FILES[\"csv_file\"], encoding=importer.encoding\n ),\n include_reviews,\n privacy,\n )\n except (UnicodeDecodeError, ValueError, KeyError):\n return HttpResponseBadRequest(_(\"Not a valid csv file\"))\n\n importer.start_import(job)\n\n return redirect(\"/import/%d\" % job.id)\n return HttpResponseBadRequest()\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass ImportStatus(View):\n \"\"\"status of an existing import\"\"\"\n\n def get(self, request, job_id):\n \"\"\"status of an import job\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n if job.user != request.user:\n raise PermissionDenied\n\n try:\n task = app.AsyncResult(job.task_id)\n # triggers 
attribute error if the task won't load\n task.status # pylint: disable=pointless-statement\n except (ValueError, AttributeError):\n task = None\n\n items = job.items.order_by(\"index\").all()\n failed_items = [i for i in items if i.fail_reason]\n items = [i for i in items if not i.fail_reason]\n return TemplateResponse(\n request,\n \"import_status.html\",\n {\"job\": job, \"items\": items, \"failed_items\": failed_items, \"task\": task},\n )\n\n def post(self, request, job_id):\n \"\"\"retry lines from an import\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n items = []\n for item in request.POST.getlist(\"import_item\"):\n items.append(get_object_or_404(models.ImportItem, id=item))\n\n importer = Importer()\n job = importer.create_retry_job(\n request.user,\n job,\n items,\n )\n importer.start_import(job)\n return redirect(\"/import/%d\" % job.id)\n", "path": "bookwyrm/views/import_data.py"}], "after_files": [{"content": "\"\"\" import books from another app \"\"\"\nfrom io import TextIOWrapper\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.importers import (\n Importer,\n LibrarythingImporter,\n GoodreadsImporter,\n StorygraphImporter,\n)\nfrom bookwyrm.tasks import app\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass Import(View):\n \"\"\"import view\"\"\"\n\n def get(self, request):\n \"\"\"load import page\"\"\"\n return TemplateResponse(\n request,\n \"import/import.html\",\n {\n \"import_form\": forms.ImportForm(),\n \"jobs\": models.ImportJob.objects.filter(user=request.user).order_by(\n \"-created_date\"\n ),\n },\n )\n\n def post(self, request):\n \"\"\"ingest a goodreads csv\"\"\"\n form = forms.ImportForm(request.POST, request.FILES)\n if form.is_valid():\n include_reviews = request.POST.get(\"include_reviews\") == \"on\"\n privacy = request.POST.get(\"privacy\")\n source = request.POST.get(\"source\")\n\n importer = None\n if source == \"LibraryThing\":\n importer = LibrarythingImporter()\n elif source == \"Storygraph\":\n importer = StorygraphImporter()\n else:\n # Default : GoodReads\n importer = GoodreadsImporter()\n\n try:\n job = importer.create_job(\n request.user,\n TextIOWrapper(\n request.FILES[\"csv_file\"], encoding=importer.encoding\n ),\n include_reviews,\n privacy,\n )\n except (UnicodeDecodeError, ValueError, KeyError):\n return HttpResponseBadRequest(_(\"Not a valid csv file\"))\n\n importer.start_import(job)\n\n return redirect(\"/import/%d\" % job.id)\n return HttpResponseBadRequest()\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass ImportStatus(View):\n \"\"\"status of an existing import\"\"\"\n\n def get(self, request, job_id):\n \"\"\"status of an import job\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n if job.user != request.user:\n raise PermissionDenied\n\n try:\n task = app.AsyncResult(job.task_id)\n # triggers attribute error if the task won't load\n task.status # pylint: disable=pointless-statement\n except (ValueError, AttributeError):\n task = None\n\n items = job.items.order_by(\"index\").all()\n failed_items = [i for i in items if 
i.fail_reason]\n items = [i for i in items if not i.fail_reason]\n return TemplateResponse(\n request,\n \"import/import_status.html\",\n {\"job\": job, \"items\": items, \"failed_items\": failed_items, \"task\": task},\n )\n\n def post(self, request, job_id):\n \"\"\"retry lines from an import\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n items = []\n for item in request.POST.getlist(\"import_item\"):\n items.append(get_object_or_404(models.ImportItem, id=item))\n\n importer = Importer()\n job = importer.create_retry_job(\n request.user,\n job,\n items,\n )\n importer.start_import(job)\n return redirect(\"/import/%d\" % job.id)\n", "path": "bookwyrm/views/import_data.py"}]} | 1,366 | 181 |
gh_patches_debug_35754 | rasdani/github-patches | git_diff | beetbox__beets-1595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plexupdate: Doesn't work with libaries not named "Music"
I've named my music libaries `Music (New)` and `Music (Untagged)`. The plex update plugin should update the `Music (New)` section, but instead of updating at least both music libaries it doesn't update anything. If I change the library name from `Music (New)` to `Music` it works like a charm. This is specified on line 33 of the beets plugin. A config option to add libraries other than `Music` would make sense imo.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/plexupdate.py`
Content:
```
1 """Updates an Plex library whenever the beets library is changed.
2
3 Plex Home users enter the Plex Token to enable updating.
4 Put something like the following in your config.yaml to configure:
5 plex:
6 host: localhost
7 port: 32400
8 token: token
9 """
10 from __future__ import (division, absolute_import, print_function,
11 unicode_literals)
12
13 import requests
14 from urlparse import urljoin
15 from urllib import urlencode
16 import xml.etree.ElementTree as ET
17 from beets import config
18 from beets.plugins import BeetsPlugin
19
20
21 def get_music_section(host, port, token):
22 """Getting the section key for the music library in Plex.
23 """
24 api_endpoint = append_token('library/sections', token)
25 url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)
26
27 # Sends request.
28 r = requests.get(url)
29
30 # Parse xml tree and extract music section key.
31 tree = ET.fromstring(r.text)
32 for child in tree.findall('Directory'):
33 if child.get('title') == 'Music':
34 return child.get('key')
35
36
37 def update_plex(host, port, token):
38 """Sends request to the Plex api to start a library refresh.
39 """
40 # Getting section key and build url.
41 section_key = get_music_section(host, port, token)
42 api_endpoint = 'library/sections/{0}/refresh'.format(section_key)
43 api_endpoint = append_token(api_endpoint, token)
44 url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)
45
46 # Sends request and returns requests object.
47 r = requests.get(url)
48 return r
49
50
51 def append_token(url, token):
52 """Appends the Plex Home token to the api call if required.
53 """
54 if token:
55 url += '?' + urlencode({'X-Plex-Token': token})
56 return url
57
58
59 class PlexUpdate(BeetsPlugin):
60 def __init__(self):
61 super(PlexUpdate, self).__init__()
62
63 # Adding defaults.
64 config['plex'].add({
65 u'host': u'localhost',
66 u'port': 32400,
67 u'token': u''})
68
69 self.register_listener('database_change', self.listen_for_db_change)
70
71 def listen_for_db_change(self, lib, model):
72 """Listens for beets db change and register the update for the end"""
73 self.register_listener('cli_exit', self.update)
74
75 def update(self, lib):
76 """When the client exists try to send refresh request to Plex server.
77 """
78 self._log.info('Updating Plex library...')
79
80 # Try to send update request.
81 try:
82 update_plex(
83 config['plex']['host'].get(),
84 config['plex']['port'].get(),
85 config['plex']['token'].get())
86 self._log.info('... started.')
87
88 except requests.exceptions.RequestException:
89 self._log.warning('Update failed.')
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/beetsplug/plexupdate.py b/beetsplug/plexupdate.py
--- a/beetsplug/plexupdate.py
+++ b/beetsplug/plexupdate.py
@@ -18,7 +18,7 @@
from beets.plugins import BeetsPlugin
-def get_music_section(host, port, token):
+def get_music_section(host, port, token, library_name):
"""Getting the section key for the music library in Plex.
"""
api_endpoint = append_token('library/sections', token)
@@ -30,15 +30,15 @@
# Parse xml tree and extract music section key.
tree = ET.fromstring(r.text)
for child in tree.findall('Directory'):
- if child.get('title') == 'Music':
+ if child.get('title') == library_name:
return child.get('key')
-def update_plex(host, port, token):
+def update_plex(host, port, token, library_name):
"""Sends request to the Plex api to start a library refresh.
"""
# Getting section key and build url.
- section_key = get_music_section(host, port, token)
+ section_key = get_music_section(host, port, token, library_name)
api_endpoint = 'library/sections/{0}/refresh'.format(section_key)
api_endpoint = append_token(api_endpoint, token)
url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)
@@ -64,7 +64,8 @@
config['plex'].add({
u'host': u'localhost',
u'port': 32400,
- u'token': u''})
+ u'token': u'',
+ u'library_name': u'Music'})
self.register_listener('database_change', self.listen_for_db_change)
@@ -82,7 +83,8 @@
update_plex(
config['plex']['host'].get(),
config['plex']['port'].get(),
- config['plex']['token'].get())
+ config['plex']['token'].get(),
+ config['plex']['library_name'].get())
self._log.info('... started.')
except requests.exceptions.RequestException:
| {"golden_diff": "diff --git a/beetsplug/plexupdate.py b/beetsplug/plexupdate.py\n--- a/beetsplug/plexupdate.py\n+++ b/beetsplug/plexupdate.py\n@@ -18,7 +18,7 @@\n from beets.plugins import BeetsPlugin\n \n \n-def get_music_section(host, port, token):\n+def get_music_section(host, port, token, library_name):\n \"\"\"Getting the section key for the music library in Plex.\n \"\"\"\n api_endpoint = append_token('library/sections', token)\n@@ -30,15 +30,15 @@\n # Parse xml tree and extract music section key.\n tree = ET.fromstring(r.text)\n for child in tree.findall('Directory'):\n- if child.get('title') == 'Music':\n+ if child.get('title') == library_name:\n return child.get('key')\n \n \n-def update_plex(host, port, token):\n+def update_plex(host, port, token, library_name):\n \"\"\"Sends request to the Plex api to start a library refresh.\n \"\"\"\n # Getting section key and build url.\n- section_key = get_music_section(host, port, token)\n+ section_key = get_music_section(host, port, token, library_name)\n api_endpoint = 'library/sections/{0}/refresh'.format(section_key)\n api_endpoint = append_token(api_endpoint, token)\n url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)\n@@ -64,7 +64,8 @@\n config['plex'].add({\n u'host': u'localhost',\n u'port': 32400,\n- u'token': u''})\n+ u'token': u'',\n+ u'library_name': u'Music'})\n \n self.register_listener('database_change', self.listen_for_db_change)\n \n@@ -82,7 +83,8 @@\n update_plex(\n config['plex']['host'].get(),\n config['plex']['port'].get(),\n- config['plex']['token'].get())\n+ config['plex']['token'].get(),\n+ config['plex']['library_name'].get())\n self._log.info('... started.')\n \n except requests.exceptions.RequestException:\n", "issue": "plexupdate: Doesn't work with libaries not named \"Music\"\nI've named my music libaries `Music (New)` and `Music (Untagged)`. The plex update plugin should update the `Music (New)` section, but instead of updating at least both music libaries it doesn't update anything. If I change the library name from `Music (New)` to `Music` it works like a charm. This is specified on line 33 of the beets plugin. 
A config option to add libraries other than `Music` would make sense imo.\n\n", "before_files": [{"content": "\"\"\"Updates an Plex library whenever the beets library is changed.\n\nPlex Home users enter the Plex Token to enable updating.\nPut something like the following in your config.yaml to configure:\n plex:\n host: localhost\n port: 32400\n token: token\n\"\"\"\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\nimport requests\nfrom urlparse import urljoin\nfrom urllib import urlencode\nimport xml.etree.ElementTree as ET\nfrom beets import config\nfrom beets.plugins import BeetsPlugin\n\n\ndef get_music_section(host, port, token):\n \"\"\"Getting the section key for the music library in Plex.\n \"\"\"\n api_endpoint = append_token('library/sections', token)\n url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)\n\n # Sends request.\n r = requests.get(url)\n\n # Parse xml tree and extract music section key.\n tree = ET.fromstring(r.text)\n for child in tree.findall('Directory'):\n if child.get('title') == 'Music':\n return child.get('key')\n\n\ndef update_plex(host, port, token):\n \"\"\"Sends request to the Plex api to start a library refresh.\n \"\"\"\n # Getting section key and build url.\n section_key = get_music_section(host, port, token)\n api_endpoint = 'library/sections/{0}/refresh'.format(section_key)\n api_endpoint = append_token(api_endpoint, token)\n url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)\n\n # Sends request and returns requests object.\n r = requests.get(url)\n return r\n\n\ndef append_token(url, token):\n \"\"\"Appends the Plex Home token to the api call if required.\n \"\"\"\n if token:\n url += '?' + urlencode({'X-Plex-Token': token})\n return url\n\n\nclass PlexUpdate(BeetsPlugin):\n def __init__(self):\n super(PlexUpdate, self).__init__()\n\n # Adding defaults.\n config['plex'].add({\n u'host': u'localhost',\n u'port': 32400,\n u'token': u''})\n\n self.register_listener('database_change', self.listen_for_db_change)\n\n def listen_for_db_change(self, lib, model):\n \"\"\"Listens for beets db change and register the update for the end\"\"\"\n self.register_listener('cli_exit', self.update)\n\n def update(self, lib):\n \"\"\"When the client exists try to send refresh request to Plex server.\n \"\"\"\n self._log.info('Updating Plex library...')\n\n # Try to send update request.\n try:\n update_plex(\n config['plex']['host'].get(),\n config['plex']['port'].get(),\n config['plex']['token'].get())\n self._log.info('... 
started.')\n\n except requests.exceptions.RequestException:\n self._log.warning('Update failed.')\n", "path": "beetsplug/plexupdate.py"}], "after_files": [{"content": "\"\"\"Updates an Plex library whenever the beets library is changed.\n\nPlex Home users enter the Plex Token to enable updating.\nPut something like the following in your config.yaml to configure:\n plex:\n host: localhost\n port: 32400\n token: token\n\"\"\"\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\nimport requests\nfrom urlparse import urljoin\nfrom urllib import urlencode\nimport xml.etree.ElementTree as ET\nfrom beets import config\nfrom beets.plugins import BeetsPlugin\n\n\ndef get_music_section(host, port, token, library_name):\n \"\"\"Getting the section key for the music library in Plex.\n \"\"\"\n api_endpoint = append_token('library/sections', token)\n url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)\n\n # Sends request.\n r = requests.get(url)\n\n # Parse xml tree and extract music section key.\n tree = ET.fromstring(r.text)\n for child in tree.findall('Directory'):\n if child.get('title') == library_name:\n return child.get('key')\n\n\ndef update_plex(host, port, token, library_name):\n \"\"\"Sends request to the Plex api to start a library refresh.\n \"\"\"\n # Getting section key and build url.\n section_key = get_music_section(host, port, token, library_name)\n api_endpoint = 'library/sections/{0}/refresh'.format(section_key)\n api_endpoint = append_token(api_endpoint, token)\n url = urljoin('http://{0}:{1}'.format(host, port), api_endpoint)\n\n # Sends request and returns requests object.\n r = requests.get(url)\n return r\n\n\ndef append_token(url, token):\n \"\"\"Appends the Plex Home token to the api call if required.\n \"\"\"\n if token:\n url += '?' + urlencode({'X-Plex-Token': token})\n return url\n\n\nclass PlexUpdate(BeetsPlugin):\n def __init__(self):\n super(PlexUpdate, self).__init__()\n\n # Adding defaults.\n config['plex'].add({\n u'host': u'localhost',\n u'port': 32400,\n u'token': u'',\n u'library_name': u'Music'})\n\n self.register_listener('database_change', self.listen_for_db_change)\n\n def listen_for_db_change(self, lib, model):\n \"\"\"Listens for beets db change and register the update for the end\"\"\"\n self.register_listener('cli_exit', self.update)\n\n def update(self, lib):\n \"\"\"When the client exists try to send refresh request to Plex server.\n \"\"\"\n self._log.info('Updating Plex library...')\n\n # Try to send update request.\n try:\n update_plex(\n config['plex']['host'].get(),\n config['plex']['port'].get(),\n config['plex']['token'].get(),\n config['plex']['library_name'].get())\n self._log.info('... started.')\n\n except requests.exceptions.RequestException:\n self._log.warning('Update failed.')\n", "path": "beetsplug/plexupdate.py"}]} | 1,187 | 486 |
gh_patches_debug_21909 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2512 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Strawberry cli commands fail with error: strawberry.exe\__main__.py not found
After upgrading strawberry to the latest version (0.154.1), I am unable to run any strawberry cli commands.
## Describe the Bug
- Upgraded strawberry from 0.152.0 to 0.154.1
```
poetry add strawberry-graphql[debug-server]@0.154.1
```
- Executed below commands:
```
strawberry server myapp.schema
strawberry export-schema myapp.schema:schema
```
- Both these commands are failing with the error below:
**FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\myyuser\\AppData\\Local\\pypoetry\\Cache\\virtualenvs\\straw-k47ybk7v-py3.10\\Scripts\\strawberry.exe\\\_\_main\_\_.py'**
## System Information
- Operating system: Windows 10
- Strawberry version (if applicable): 0.154.1
- Python: 3.10.9
## Additional Context
There is no issue with strawberry cli in version 0.152.0 which I am using currently. If we downgrade the package to this version, cli commands work just fine.
--- END ISSUE ---
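As an illustrative aside (not part of the original report), a minimal sketch of the failure mode and a defensive pattern around it: `Path.samefile()` stats both paths, so it raises `FileNotFoundError` whenever either side does not exist on disk, as with the synthetic `Scripts\strawberry.exe\__main__.py` path in the traceback above.
```python
from pathlib import Path

# Hypothetical paths for illustration; the Windows one mirrors the kind of
# non-existent __main__ path reported in the traceback.
main_file = Path(r"C:\venv\Scripts\strawberry.exe\__main__.py")
module_file = Path(__file__)

try:
    is_samefile = main_file.samefile(module_file)
except FileNotFoundError:
    # A path that does not exist cannot be "the same file"; fall back to False
    # instead of letting the comparison crash.
    is_samefile = False

print(is_samefile)  # False
```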
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/lazy_type.py`
Content:
```
1 import importlib
2 import inspect
3 import sys
4 import warnings
5 from dataclasses import dataclass
6 from pathlib import Path
7 from typing import ForwardRef, Generic, Optional, Type, TypeVar, cast
8
9 TypeName = TypeVar("TypeName")
10 Module = TypeVar("Module")
11
12
13 @dataclass(frozen=True)
14 class LazyType(Generic[TypeName, Module]):
15 type_name: str
16 module: str
17 package: Optional[str] = None
18
19 def __class_getitem__(cls, params):
20 warnings.warn(
21 (
22 "LazyType is deprecated, use "
23 "Annotated[YourType, strawberry.lazy(path)] instead"
24 ),
25 DeprecationWarning,
26 stacklevel=2,
27 )
28
29 type_name, module = params
30
31 package = None
32
33 if module.startswith("."):
34 current_frame = inspect.currentframe()
35 assert current_frame is not None
36 assert current_frame.f_back is not None
37 package = current_frame.f_back.f_globals["__package__"]
38
39 return cls(type_name, module, package)
40
41 def resolve_type(self) -> Type:
42 module = importlib.import_module(self.module, self.package)
43 main_module = sys.modules.get("__main__", None)
44 if main_module:
45 # If lazy type points to the main module, use it instead of the imported
46 # module. Otherwise duplication checks during schema-conversion might fail.
47 # Refer to: https://github.com/strawberry-graphql/strawberry/issues/2397
48 if main_module.__spec__ and main_module.__spec__.name == self.module:
49 module = main_module
50 elif hasattr(main_module, "__file__") and hasattr(module, "__file__"):
51 if (
52 main_module.__file__
53 and module.__file__
54 and Path(main_module.__file__).samefile(module.__file__)
55 ):
56 module = main_module
57 return module.__dict__[self.type_name]
58
59 # this empty call method allows LazyTypes to be used in generic types
60 # for example: List[LazyType["A", "module"]]
61
62 def __call__(self): # pragma: no cover
63 return None
64
65
66 class StrawberryLazyReference:
67 def __init__(self, module: str) -> None:
68 self.module = module
69 self.package = None
70
71 if module.startswith("."):
72 frame = inspect.stack()[2][0]
73 # TODO: raise a nice error if frame is None
74 assert frame is not None
75 self.package = cast(str, frame.f_globals["__package__"])
76
77 def resolve_forward_ref(self, forward_ref: ForwardRef) -> LazyType:
78 return LazyType(forward_ref.__forward_arg__, self.module, self.package)
79
80
81 def lazy(module_path: str) -> StrawberryLazyReference:
82 return StrawberryLazyReference(module_path)
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/lazy_type.py b/strawberry/lazy_type.py
--- a/strawberry/lazy_type.py
+++ b/strawberry/lazy_type.py
@@ -48,12 +48,16 @@
if main_module.__spec__ and main_module.__spec__.name == self.module:
module = main_module
elif hasattr(main_module, "__file__") and hasattr(module, "__file__"):
- if (
- main_module.__file__
- and module.__file__
- and Path(main_module.__file__).samefile(module.__file__)
- ):
- module = main_module
+ main_file = main_module.__file__
+ module_file = module.__file__
+ if main_file and module_file:
+ try:
+ is_samefile = Path(main_file).samefile(module_file)
+ except FileNotFoundError:
+ # Can be raised when run through the CLI as the __main__ file
+ # path contains `strawberry.exe`
+ is_samefile = False
+ module = main_module if is_samefile else module
return module.__dict__[self.type_name]
# this empty call method allows LazyTypes to be used in generic types
| {"golden_diff": "diff --git a/strawberry/lazy_type.py b/strawberry/lazy_type.py\n--- a/strawberry/lazy_type.py\n+++ b/strawberry/lazy_type.py\n@@ -48,12 +48,16 @@\n if main_module.__spec__ and main_module.__spec__.name == self.module:\n module = main_module\n elif hasattr(main_module, \"__file__\") and hasattr(module, \"__file__\"):\n- if (\n- main_module.__file__\n- and module.__file__\n- and Path(main_module.__file__).samefile(module.__file__)\n- ):\n- module = main_module\n+ main_file = main_module.__file__\n+ module_file = module.__file__\n+ if main_file and module_file:\n+ try:\n+ is_samefile = Path(main_file).samefile(module_file)\n+ except FileNotFoundError:\n+ # Can be raised when run through the CLI as the __main__ file\n+ # path contains `strawberry.exe`\n+ is_samefile = False\n+ module = main_module if is_samefile else module\n return module.__dict__[self.type_name]\n \n # this empty call method allows LazyTypes to be used in generic types\n", "issue": "Strawberry cli commands fail with error: strawberry.exe\\__main__.py not found\nAfter upgrading strawberry to latest version (0.154.1), I am unable to run any strawberry cli commands.\r\n\r\n## Describe the Bug\r\n- Upgraded strawberry from 0.152.0 to 0.154.1\r\n```\r\npoetry add strawberry-graphql[debug-server]@0.154.1\r\n```\r\n- Executed below commands:\r\n```\r\nstrawberry server myapp.schema\r\nstrawberry export-schema myapp.schema:schema\r\n```\r\n- Both these commands are failing in below error:\r\n\r\n**FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\\\Users\\\\myyuser\\\\AppData\\\\Local\\\\pypoetry\\\\Cache\\\\virtualenvs\\\\straw-k47ybk7v-py3.10\\\\Scripts\\\\strawberry.exe\\\\\\_\\_main\\_\\_.py'**\r\n\r\n## System Information\r\n\r\n - Operating system: Windows 10\r\n - Strawberry version (if applicable): 0.154.1\r\n - Python: 3.10.9\r\n\r\n## Additional Context\r\n\r\nThere is no issue with strawberry cli in version 0.152.0 which I am using currently. If we downgrade the package to this version, cli commands work just fine.\r\n\nStrawberry cli commands fail with error: strawberry.exe\\__main__.py not found\nAfter upgrading strawberry to latest version (0.154.1), I am unable to run any strawberry cli commands.\r\n\r\n## Describe the Bug\r\n- Upgraded strawberry from 0.152.0 to 0.154.1\r\n```\r\npoetry add strawberry-graphql[debug-server]@0.154.1\r\n```\r\n- Executed below commands:\r\n```\r\nstrawberry server myapp.schema\r\nstrawberry export-schema myapp.schema:schema\r\n```\r\n- Both these commands are failing in below error:\r\n\r\n**FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\\\Users\\\\myyuser\\\\AppData\\\\Local\\\\pypoetry\\\\Cache\\\\virtualenvs\\\\straw-k47ybk7v-py3.10\\\\Scripts\\\\strawberry.exe\\\\\\_\\_main\\_\\_.py'**\r\n\r\n## System Information\r\n\r\n - Operating system: Windows 10\r\n - Strawberry version (if applicable): 0.154.1\r\n - Python: 3.10.9\r\n\r\n## Additional Context\r\n\r\nThere is no issue with strawberry cli in version 0.152.0 which I am using currently. 
If we downgrade the package to this version, cli commands work just fine.\r\n\n", "before_files": [{"content": "import importlib\nimport inspect\nimport sys\nimport warnings\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import ForwardRef, Generic, Optional, Type, TypeVar, cast\n\nTypeName = TypeVar(\"TypeName\")\nModule = TypeVar(\"Module\")\n\n\n@dataclass(frozen=True)\nclass LazyType(Generic[TypeName, Module]):\n type_name: str\n module: str\n package: Optional[str] = None\n\n def __class_getitem__(cls, params):\n warnings.warn(\n (\n \"LazyType is deprecated, use \"\n \"Annotated[YourType, strawberry.lazy(path)] instead\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n\n type_name, module = params\n\n package = None\n\n if module.startswith(\".\"):\n current_frame = inspect.currentframe()\n assert current_frame is not None\n assert current_frame.f_back is not None\n package = current_frame.f_back.f_globals[\"__package__\"]\n\n return cls(type_name, module, package)\n\n def resolve_type(self) -> Type:\n module = importlib.import_module(self.module, self.package)\n main_module = sys.modules.get(\"__main__\", None)\n if main_module:\n # If lazy type points to the main module, use it instead of the imported\n # module. Otherwise duplication checks during schema-conversion might fail.\n # Refer to: https://github.com/strawberry-graphql/strawberry/issues/2397\n if main_module.__spec__ and main_module.__spec__.name == self.module:\n module = main_module\n elif hasattr(main_module, \"__file__\") and hasattr(module, \"__file__\"):\n if (\n main_module.__file__\n and module.__file__\n and Path(main_module.__file__).samefile(module.__file__)\n ):\n module = main_module\n return module.__dict__[self.type_name]\n\n # this empty call method allows LazyTypes to be used in generic types\n # for example: List[LazyType[\"A\", \"module\"]]\n\n def __call__(self): # pragma: no cover\n return None\n\n\nclass StrawberryLazyReference:\n def __init__(self, module: str) -> None:\n self.module = module\n self.package = None\n\n if module.startswith(\".\"):\n frame = inspect.stack()[2][0]\n # TODO: raise a nice error if frame is None\n assert frame is not None\n self.package = cast(str, frame.f_globals[\"__package__\"])\n\n def resolve_forward_ref(self, forward_ref: ForwardRef) -> LazyType:\n return LazyType(forward_ref.__forward_arg__, self.module, self.package)\n\n\ndef lazy(module_path: str) -> StrawberryLazyReference:\n return StrawberryLazyReference(module_path)\n", "path": "strawberry/lazy_type.py"}], "after_files": [{"content": "import importlib\nimport inspect\nimport sys\nimport warnings\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import ForwardRef, Generic, Optional, Type, TypeVar, cast\n\nTypeName = TypeVar(\"TypeName\")\nModule = TypeVar(\"Module\")\n\n\n@dataclass(frozen=True)\nclass LazyType(Generic[TypeName, Module]):\n type_name: str\n module: str\n package: Optional[str] = None\n\n def __class_getitem__(cls, params):\n warnings.warn(\n (\n \"LazyType is deprecated, use \"\n \"Annotated[YourType, strawberry.lazy(path)] instead\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n\n type_name, module = params\n\n package = None\n\n if module.startswith(\".\"):\n current_frame = inspect.currentframe()\n assert current_frame is not None\n assert current_frame.f_back is not None\n package = current_frame.f_back.f_globals[\"__package__\"]\n\n return cls(type_name, module, package)\n\n def resolve_type(self) -> Type:\n module = 
importlib.import_module(self.module, self.package)\n main_module = sys.modules.get(\"__main__\", None)\n if main_module:\n # If lazy type points to the main module, use it instead of the imported\n # module. Otherwise duplication checks during schema-conversion might fail.\n # Refer to: https://github.com/strawberry-graphql/strawberry/issues/2397\n if main_module.__spec__ and main_module.__spec__.name == self.module:\n module = main_module\n elif hasattr(main_module, \"__file__\") and hasattr(module, \"__file__\"):\n main_file = main_module.__file__\n module_file = module.__file__\n if main_file and module_file:\n try:\n is_samefile = Path(main_file).samefile(module_file)\n except FileNotFoundError:\n # Can be raised when run through the CLI as the __main__ file\n # path contains `strawberry.exe`\n is_samefile = False\n module = main_module if is_samefile else module\n return module.__dict__[self.type_name]\n\n # this empty call method allows LazyTypes to be used in generic types\n # for example: List[LazyType[\"A\", \"module\"]]\n\n def __call__(self): # pragma: no cover\n return None\n\n\nclass StrawberryLazyReference:\n def __init__(self, module: str) -> None:\n self.module = module\n self.package = None\n\n if module.startswith(\".\"):\n frame = inspect.stack()[2][0]\n # TODO: raise a nice error if frame is None\n assert frame is not None\n self.package = cast(str, frame.f_globals[\"__package__\"])\n\n def resolve_forward_ref(self, forward_ref: ForwardRef) -> LazyType:\n return LazyType(forward_ref.__forward_arg__, self.module, self.package)\n\n\ndef lazy(module_path: str) -> StrawberryLazyReference:\n return StrawberryLazyReference(module_path)\n", "path": "strawberry/lazy_type.py"}]} | 1,610 | 272 |
gh_patches_debug_9378 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New user creation results in TypeError
If one wants to create a new user via the network settings, an error will occur. The user gets created anyway, but this should be fixed quite fast.

--- END ISSUE ---
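As an illustrative aside (the report itself only shows a screenshot), a self-contained sketch of the most likely shape of this TypeError: since Django 2.0, assigning directly to a many-to-many relation is rejected and the related manager's `.set()` must be used, which is what the patch below switches to. Django's built-in auth models stand in for the project's own models here.
```python
import django
from django.conf import settings

settings.configure(
    SECRET_KEY="illustration-only",
    INSTALLED_APPS=["django.contrib.contenttypes", "django.contrib.auth"],
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": ":memory:"}},
)
django.setup()

from django.contrib.auth.models import Group, User  # noqa: E402
from django.core.management import call_command  # noqa: E402

call_command("migrate", run_syncdb=True, verbosity=0)

user = User.objects.create(username="demo")
group = Group.objects.create(name="editors")

try:
    user.groups = [group]  # Django >= 2.0 rejects direct assignment
except TypeError as exc:
    print(exc)  # e.g. "Direct assignment to the forward side of a many-to-many set is prohibited."

user.groups.set([group])  # the supported spelling, as used in the patch below
print(list(user.groups.all()))
```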
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cms/forms/users/user_profile_form.py`
Content:
```
1 """
2 Form for creating a user object
3 """
4 import logging
5
6 from django import forms
7
8 from ...models import UserProfile
9
10
11 logger = logging.getLogger(__name__)
12
13
14 class UserProfileForm(forms.ModelForm):
15
16 class Meta:
17 model = UserProfile
18 fields = [
19 'regions',
20 'organization'
21 ]
22
23 # pylint: disable=arguments-differ
24 def save(self, *args, **kwargs):
25
26 logger.info(
27 'UserProfileForm saved with args %s and kwargs %s',
28 args,
29 kwargs
30 )
31
32 # pop kwarg to make sure the super class does not get this param
33 user = kwargs.pop('user', None)
34
35 if not self.instance.id:
36 # don't commit saving of ModelForm, because required user field is still missing
37 kwargs['commit'] = False
38
39 # save ModelForm
40 user_profile = super(UserProfileForm, self).save(*args, **kwargs)
41
42 if not self.instance.id:
43 user_profile.user = user
44 user_profile.save()
45 # check if called from UserProfileForm or RegionUserProfileForm
46 if 'regions' in self.cleaned_data:
47 # regions can't be saved if commit=False on the ModelForm, so we have to save them explicitly
48 user_profile.regions = self.cleaned_data['regions']
49 user_profile.save()
50
51 return user_profile
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cms/forms/users/user_profile_form.py b/src/cms/forms/users/user_profile_form.py
--- a/src/cms/forms/users/user_profile_form.py
+++ b/src/cms/forms/users/user_profile_form.py
@@ -45,7 +45,6 @@
# check if called from UserProfileForm or RegionUserProfileForm
if 'regions' in self.cleaned_data:
# regions can't be saved if commit=False on the ModelForm, so we have to save them explicitly
- user_profile.regions = self.cleaned_data['regions']
- user_profile.save()
+ user_profile.regions.set(self.cleaned_data['regions'])
return user_profile
| {"golden_diff": "diff --git a/src/cms/forms/users/user_profile_form.py b/src/cms/forms/users/user_profile_form.py\n--- a/src/cms/forms/users/user_profile_form.py\n+++ b/src/cms/forms/users/user_profile_form.py\n@@ -45,7 +45,6 @@\n # check if called from UserProfileForm or RegionUserProfileForm\n if 'regions' in self.cleaned_data:\n # regions can't be saved if commit=False on the ModelForm, so we have to save them explicitly\n- user_profile.regions = self.cleaned_data['regions']\n- user_profile.save()\n+ user_profile.regions.set(self.cleaned_data['regions'])\n \n return user_profile\n", "issue": "New user creation results in TypeError\nIf one wants to create a new user via the network settings an error will occur. The user gets created anyway, but this should be fixed quite fast.\r\n\r\n\r\n\nNew user creation results in TypeError\nIf one wants to create a new user via the network settings an error will occur. The user gets created anyway, but this should be fixed quite fast.\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nForm for creating a user object\n\"\"\"\nimport logging\n\nfrom django import forms\n\nfrom ...models import UserProfile\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n\n class Meta:\n model = UserProfile\n fields = [\n 'regions',\n 'organization'\n ]\n\n # pylint: disable=arguments-differ\n def save(self, *args, **kwargs):\n\n logger.info(\n 'UserProfileForm saved with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n user = kwargs.pop('user', None)\n\n if not self.instance.id:\n # don't commit saving of ModelForm, because required user field is still missing\n kwargs['commit'] = False\n\n # save ModelForm\n user_profile = super(UserProfileForm, self).save(*args, **kwargs)\n\n if not self.instance.id:\n user_profile.user = user\n user_profile.save()\n # check if called from UserProfileForm or RegionUserProfileForm\n if 'regions' in self.cleaned_data:\n # regions can't be saved if commit=False on the ModelForm, so we have to save them explicitly\n user_profile.regions = self.cleaned_data['regions']\n user_profile.save()\n\n return user_profile\n", "path": "src/cms/forms/users/user_profile_form.py"}], "after_files": [{"content": "\"\"\"\nForm for creating a user object\n\"\"\"\nimport logging\n\nfrom django import forms\n\nfrom ...models import UserProfile\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n\n class Meta:\n model = UserProfile\n fields = [\n 'regions',\n 'organization'\n ]\n\n # pylint: disable=arguments-differ\n def save(self, *args, **kwargs):\n\n logger.info(\n 'UserProfileForm saved with args %s and kwargs %s',\n args,\n kwargs\n )\n\n # pop kwarg to make sure the super class does not get this param\n user = kwargs.pop('user', None)\n\n if not self.instance.id:\n # don't commit saving of ModelForm, because required user field is still missing\n kwargs['commit'] = False\n\n # save ModelForm\n user_profile = super(UserProfileForm, self).save(*args, **kwargs)\n\n if not self.instance.id:\n user_profile.user = user\n user_profile.save()\n # check if called from UserProfileForm or RegionUserProfileForm\n if 'regions' in self.cleaned_data:\n # regions can't be saved if commit=False on the ModelForm, so we have to save them explicitly\n user_profile.regions.set(self.cleaned_data['regions'])\n\n return user_profile\n", "path": "src/cms/forms/users/user_profile_form.py"}]} | 861 | 140 |
gh_patches_debug_321 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-5424 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove possibly unused constant
At first sight it looks like it isn't used anymore after https://github.com/rtfd/readthedocs.org/pull/5383
https://github.com/rtfd/readthedocs.org/blob/78c34c904b347110b2cd545b4b5a80ed526590f7/readthedocs/core/models.py#L13-L13
We should still double check and make sure tests are passing after the removal.
--- END ISSUE ---
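For the suggested double check, a small illustrative helper (not from the report; it assumes it is run from the repository root) that lists every Python file still referencing the constant before it is removed:
```python
from pathlib import Path

# Expect only readthedocs/core/models.py in the output if STANDARD_EMAIL is
# really unused elsewhere in the code base.
hits = [
    str(path)
    for path in Path("readthedocs").rglob("*.py")
    if "STANDARD_EMAIL" in path.read_text(encoding="utf-8", errors="ignore")
]
print(hits)
```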
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/core/models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Models for the core app."""
4 import logging
5
6 from annoying.fields import AutoOneToOneField
7 from django.db import models
8 from django.urls import reverse
9 from django.utils.translation import ugettext
10 from django.utils.translation import ugettext_lazy as _
11
12
13 STANDARD_EMAIL = '[email protected]'
14
15 log = logging.getLogger(__name__)
16
17
18 class UserProfile(models.Model):
19
20 """Additional information about a User."""
21
22 user = AutoOneToOneField(
23 'auth.User',
24 verbose_name=_('User'),
25 related_name='profile',
26 )
27 whitelisted = models.BooleanField(_('Whitelisted'), default=False)
28 banned = models.BooleanField(_('Banned'), default=False)
29 homepage = models.CharField(_('Homepage'), max_length=100, blank=True)
30 allow_ads = models.BooleanField(
31 _('See paid advertising'),
32 help_text=_('If unchecked, you will still see community ads.'),
33 default=True,
34 )
35
36 def __str__(self):
37 return (
38 ugettext("%(username)s's profile") %
39 {'username': self.user.username}
40 )
41
42 def get_absolute_url(self):
43 return reverse(
44 'profiles_profile_detail',
45 kwargs={'username': self.user.username},
46 )
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/core/models.py b/readthedocs/core/models.py
--- a/readthedocs/core/models.py
+++ b/readthedocs/core/models.py
@@ -10,8 +10,6 @@
from django.utils.translation import ugettext_lazy as _
-STANDARD_EMAIL = '[email protected]'
-
log = logging.getLogger(__name__)
| {"golden_diff": "diff --git a/readthedocs/core/models.py b/readthedocs/core/models.py\n--- a/readthedocs/core/models.py\n+++ b/readthedocs/core/models.py\n@@ -10,8 +10,6 @@\n from django.utils.translation import ugettext_lazy as _\n \n \n-STANDARD_EMAIL = '[email protected]'\n-\n log = logging.getLogger(__name__)\n", "issue": "Remove possibel unused constant\nAt first sight looks like isn't used anymore after https://github.com/rtfd/readthedocs.org/pull/5383\r\n\r\nhttps://github.com/rtfd/readthedocs.org/blob/78c34c904b347110b2cd545b4b5a80ed526590f7/readthedocs/core/models.py#L13-L13\r\n\r\nWe should still double check and make sure tests are passing after the removal.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Models for the core app.\"\"\"\nimport logging\n\nfrom annoying.fields import AutoOneToOneField\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext\nfrom django.utils.translation import ugettext_lazy as _\n\n\nSTANDARD_EMAIL = '[email protected]'\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfile(models.Model):\n\n \"\"\"Additional information about a User.\"\"\"\n\n user = AutoOneToOneField(\n 'auth.User',\n verbose_name=_('User'),\n related_name='profile',\n )\n whitelisted = models.BooleanField(_('Whitelisted'), default=False)\n banned = models.BooleanField(_('Banned'), default=False)\n homepage = models.CharField(_('Homepage'), max_length=100, blank=True)\n allow_ads = models.BooleanField(\n _('See paid advertising'),\n help_text=_('If unchecked, you will still see community ads.'),\n default=True,\n )\n\n def __str__(self):\n return (\n ugettext(\"%(username)s's profile\") %\n {'username': self.user.username}\n )\n\n def get_absolute_url(self):\n return reverse(\n 'profiles_profile_detail',\n kwargs={'username': self.user.username},\n )\n", "path": "readthedocs/core/models.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Models for the core app.\"\"\"\nimport logging\n\nfrom annoying.fields import AutoOneToOneField\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext\nfrom django.utils.translation import ugettext_lazy as _\n\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfile(models.Model):\n\n \"\"\"Additional information about a User.\"\"\"\n\n user = AutoOneToOneField(\n 'auth.User',\n verbose_name=_('User'),\n related_name='profile',\n )\n whitelisted = models.BooleanField(_('Whitelisted'), default=False)\n banned = models.BooleanField(_('Banned'), default=False)\n homepage = models.CharField(_('Homepage'), max_length=100, blank=True)\n allow_ads = models.BooleanField(\n _('See paid advertising'),\n help_text=_('If unchecked, you will still see community ads.'),\n default=True,\n )\n\n def __str__(self):\n return (\n ugettext(\"%(username)s's profile\") %\n {'username': self.user.username}\n )\n\n def get_absolute_url(self):\n return reverse(\n 'profiles_profile_detail',\n kwargs={'username': self.user.username},\n )\n", "path": "readthedocs/core/models.py"}]} | 723 | 79 |
gh_patches_debug_1133 | rasdani/github-patches | git_diff | joke2k__faker-512 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Using É, é (e-acute) in emails.
It looks like É, é (e-acute) symbols are not appropriate for a valid email. I used https://pypi.python.org/pypi/robotframework-faker/, which uses this library, and the following email was returned:
andré[email protected]
But email verification failed for this email.
Could you remove É, é and other such letters from valid email generation if they are present?
--- END ISSUE ---
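As an illustrative aside (not part of the original report), a minimal sketch of the transliteration idea: run the provider's `(search, replace)` pairs over a name before it becomes an email local part. The tuple here is a hypothetical extension of the one in the file below, with the accented pairs this issue asks for.
```python
replacements = (
    ("ä", "ae"), ("Ä", "Ae"),
    ("ö", "oe"), ("Ö", "Oe"),
    ("ü", "ue"), ("Ü", "Ue"),
    ("é", "e"), ("É", "E"),
    ("à", "a"), ("À", "A"),
    ("ß", "ss"),
)


def ascii_local_part(name: str) -> str:
    # Apply each (search, replace) pair so accented letters never reach the
    # generated address.
    for search, replace in replacements:
        name = name.replace(search, replace)
    return name.lower()


print(ascii_local_part("André") + "@example.com")  # andre@example.com
```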
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/internet/de_DE/__init__.py`
Content:
```
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4 from .. import Provider as InternetProvider
5
6 class Provider(InternetProvider):
7
8 free_email_domains = (
9 'aol.de', 'gmail.com', 'gmx.de', 'googlemail.com', 'hotmail.de',
10 'web.de', 'yahoo.de',
11 )
12 tlds = ('com', 'com', 'com', 'net', 'org', 'de', 'de', 'de', )
13
14 replacements = (
15 ('ä', 'ae'), ('Ä', 'Ae'),
16 ('ö', 'oe'), ('Ö', 'Oe'),
17 ('ü', 'ue'), ('Ü', 'Ue'),
18 ('ß', 'ss'),
19 )
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/internet/de_DE/__init__.py b/faker/providers/internet/de_DE/__init__.py
--- a/faker/providers/internet/de_DE/__init__.py
+++ b/faker/providers/internet/de_DE/__init__.py
@@ -15,5 +15,7 @@
('ä', 'ae'), ('Ä', 'Ae'),
('ö', 'oe'), ('Ö', 'Oe'),
('ü', 'ue'), ('Ü', 'Ue'),
+ ('é', 'e'), ('É', 'E'),
+ ('à', 'a'), ('À', 'A'),
('ß', 'ss'),
)
| {"golden_diff": "diff --git a/faker/providers/internet/de_DE/__init__.py b/faker/providers/internet/de_DE/__init__.py\n--- a/faker/providers/internet/de_DE/__init__.py\n+++ b/faker/providers/internet/de_DE/__init__.py\n@@ -15,5 +15,7 @@\n ('\u00e4', 'ae'), ('\u00c4', 'Ae'),\n ('\u00f6', 'oe'), ('\u00d6', 'Oe'),\n ('\u00fc', 'ue'), ('\u00dc', 'Ue'),\n+ ('\u00e9', 'e'), ('\u00c9', 'E'),\n+ ('\u00e0', 'a'), ('\u00c0', 'A'),\n ('\u00df', 'ss'),\n )\n", "issue": "Using \u00c9, \u00e9 (e-acute) in emails.\nIt looks that \u00c9, \u00e9 (e-acute) symbols are not appropriate for valid email. I used https://pypi.python.org/pypi/robotframework-faker/ which uses this library and the following email was returned: \r\nandr\[email protected]\r\n\r\nBut email verification was failed for this email. \r\nCould you remove \u00c9, \u00e9 and other such letters if they are present from valid email generation?\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\nfrom .. import Provider as InternetProvider\n\nclass Provider(InternetProvider):\n\n free_email_domains = (\n 'aol.de', 'gmail.com', 'gmx.de', 'googlemail.com', 'hotmail.de',\n 'web.de', 'yahoo.de',\n )\n tlds = ('com', 'com', 'com', 'net', 'org', 'de', 'de', 'de', )\n\n replacements = (\n ('\u00e4', 'ae'), ('\u00c4', 'Ae'),\n ('\u00f6', 'oe'), ('\u00d6', 'Oe'),\n ('\u00fc', 'ue'), ('\u00dc', 'Ue'),\n ('\u00df', 'ss'),\n )\n", "path": "faker/providers/internet/de_DE/__init__.py"}], "after_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\nfrom .. import Provider as InternetProvider\n\nclass Provider(InternetProvider):\n\n free_email_domains = (\n 'aol.de', 'gmail.com', 'gmx.de', 'googlemail.com', 'hotmail.de',\n 'web.de', 'yahoo.de',\n )\n tlds = ('com', 'com', 'com', 'net', 'org', 'de', 'de', 'de', )\n\n replacements = (\n ('\u00e4', 'ae'), ('\u00c4', 'Ae'),\n ('\u00f6', 'oe'), ('\u00d6', 'Oe'),\n ('\u00fc', 'ue'), ('\u00dc', 'Ue'),\n ('\u00e9', 'e'), ('\u00c9', 'E'),\n ('\u00e0', 'a'), ('\u00c0', 'A'),\n ('\u00df', 'ss'),\n )\n", "path": "faker/providers/internet/de_DE/__init__.py"}]} | 550 | 147 |
gh_patches_debug_32914 | rasdani/github-patches | git_diff | getpelican__pelican-2440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Category/Tag/Author slugs are not settable
URLWrapper objects have a setter for their 'slug' property, but all of the concrete URLWrapper subclasses override the _getter_ for 'slug', which, because of the way Python's property accessors work, makes the setter inaccessible. This breaks the 'category_meta' plugin and probably other things as well.
--- END ISSUE ---
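As an illustrative aside (not part of the original report), a minimal sketch of the Python behaviour being described: redefining only the getter in a subclass creates a brand-new property object with no setter, which shadows the base class property (and its setter) entirely.
```python
class Base:
    def __init__(self):
        self._slug = None

    @property
    def slug(self):
        return self._slug or "base-default"

    @slug.setter
    def slug(self, value):
        self._slug = value


class Child(Base):
    @property
    def slug(self):
        # Only the getter is redefined; this new property has no fset,
        # so Base's setter is no longer reachable through Child.slug.
        return self._slug or "child-default"


try:
    Child().slug = "custom"
except AttributeError as exc:
    print(exc)  # e.g. "can't set attribute" (wording varies by Python version)
```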
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pelican/urlwrappers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 import functools
5 import logging
6 import os
7
8 import six
9
10 from pelican.utils import python_2_unicode_compatible, slugify
11
12 logger = logging.getLogger(__name__)
13
14
15 @python_2_unicode_compatible
16 @functools.total_ordering
17 class URLWrapper(object):
18 def __init__(self, name, settings):
19 self.settings = settings
20 self._name = name
21 self._slug = None
22 self._slug_from_name = True
23
24 @property
25 def name(self):
26 return self._name
27
28 @name.setter
29 def name(self, name):
30 self._name = name
31 # if slug wasn't explicitly set, it needs to be regenerated from name
32 # so, changing name should reset slug for slugification
33 if self._slug_from_name:
34 self._slug = None
35
36 @property
37 def slug(self):
38 if self._slug is None:
39 self._slug = slugify(
40 self.name,
41 regex_subs=self.settings.get('SLUG_REGEX_SUBSTITUTIONS', []))
42 return self._slug
43
44 @slug.setter
45 def slug(self, slug):
46 # if slug is expliticly set, changing name won't alter slug
47 self._slug_from_name = False
48 self._slug = slug
49
50 def as_dict(self):
51 d = self.__dict__
52 d['name'] = self.name
53 d['slug'] = self.slug
54 return d
55
56 def __hash__(self):
57 return hash(self.slug)
58
59 def _normalize_key(self, key):
60 subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])
61 return six.text_type(slugify(key, regex_subs=subs))
62
63 def __eq__(self, other):
64 if isinstance(other, self.__class__):
65 return self.slug == other.slug
66 if isinstance(other, six.text_type):
67 return self.slug == self._normalize_key(other)
68 return False
69
70 def __ne__(self, other):
71 if isinstance(other, self.__class__):
72 return self.slug != other.slug
73 if isinstance(other, six.text_type):
74 return self.slug != self._normalize_key(other)
75 return True
76
77 def __lt__(self, other):
78 if isinstance(other, self.__class__):
79 return self.slug < other.slug
80 if isinstance(other, six.text_type):
81 return self.slug < self._normalize_key(other)
82 return False
83
84 def __str__(self):
85 return self.name
86
87 def __repr__(self):
88 return '<{} {}>'.format(type(self).__name__, repr(self._name))
89
90 def _from_settings(self, key, get_page_name=False):
91 """Returns URL information as defined in settings.
92
93 When get_page_name=True returns URL without anything after {slug} e.g.
94 if in settings: CATEGORY_URL="cat/{slug}.html" this returns
95 "cat/{slug}" Useful for pagination.
96
97 """
98 setting = "%s_%s" % (self.__class__.__name__.upper(), key)
99 value = self.settings[setting]
100 if not isinstance(value, six.string_types):
101 logger.warning('%s is set to %s', setting, value)
102 return value
103 else:
104 if get_page_name:
105 return os.path.splitext(value)[0].format(**self.as_dict())
106 else:
107 return value.format(**self.as_dict())
108
109 page_name = property(functools.partial(_from_settings, key='URL',
110 get_page_name=True))
111 url = property(functools.partial(_from_settings, key='URL'))
112 save_as = property(functools.partial(_from_settings, key='SAVE_AS'))
113
114
115 class Category(URLWrapper):
116 @property
117 def slug(self):
118 if self._slug is None:
119 if 'CATEGORY_REGEX_SUBSTITUTIONS' in self.settings:
120 subs = self.settings['CATEGORY_REGEX_SUBSTITUTIONS']
121 else:
122 subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])
123 self._slug = slugify(self.name, regex_subs=subs)
124 return self._slug
125
126
127 class Tag(URLWrapper):
128 def __init__(self, name, *args, **kwargs):
129 super(Tag, self).__init__(name.strip(), *args, **kwargs)
130
131 @property
132 def slug(self):
133 if self._slug is None:
134 if 'TAG_REGEX_SUBSTITUTIONS' in self.settings:
135 subs = self.settings['TAG_REGEX_SUBSTITUTIONS']
136 else:
137 subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])
138 self._slug = slugify(self.name, regex_subs=subs)
139 return self._slug
140
141
142 class Author(URLWrapper):
143 @property
144 def slug(self):
145 if self._slug is None:
146 if 'AUTHOR_REGEX_SUBSTITUTIONS' in self.settings:
147 subs = self.settings['AUTHOR_REGEX_SUBSTITUTIONS']
148 else:
149 subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])
150 self._slug = slugify(self.name, regex_subs=subs)
151 return self._slug
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pelican/urlwrappers.py b/pelican/urlwrappers.py
--- a/pelican/urlwrappers.py
+++ b/pelican/urlwrappers.py
@@ -36,9 +36,17 @@
@property
def slug(self):
if self._slug is None:
- self._slug = slugify(
- self.name,
- regex_subs=self.settings.get('SLUG_REGEX_SUBSTITUTIONS', []))
+ class_key = '{}_REGEX_SUBSTITUTIONS'.format(
+ self.__class__.__name__.upper())
+ if class_key in self.settings:
+ self._slug = slugify(
+ self.name,
+ regex_subs=self.settings[class_key])
+ else:
+ self._slug = slugify(
+ self.name,
+ regex_subs=self.settings.get(
+ 'SLUG_REGEX_SUBSTITUTIONS', []))
return self._slug
@slug.setter
@@ -113,39 +121,13 @@
class Category(URLWrapper):
- @property
- def slug(self):
- if self._slug is None:
- if 'CATEGORY_REGEX_SUBSTITUTIONS' in self.settings:
- subs = self.settings['CATEGORY_REGEX_SUBSTITUTIONS']
- else:
- subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])
- self._slug = slugify(self.name, regex_subs=subs)
- return self._slug
+ pass
class Tag(URLWrapper):
def __init__(self, name, *args, **kwargs):
super(Tag, self).__init__(name.strip(), *args, **kwargs)
- @property
- def slug(self):
- if self._slug is None:
- if 'TAG_REGEX_SUBSTITUTIONS' in self.settings:
- subs = self.settings['TAG_REGEX_SUBSTITUTIONS']
- else:
- subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])
- self._slug = slugify(self.name, regex_subs=subs)
- return self._slug
-
class Author(URLWrapper):
- @property
- def slug(self):
- if self._slug is None:
- if 'AUTHOR_REGEX_SUBSTITUTIONS' in self.settings:
- subs = self.settings['AUTHOR_REGEX_SUBSTITUTIONS']
- else:
- subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])
- self._slug = slugify(self.name, regex_subs=subs)
- return self._slug
+ pass
| {"golden_diff": "diff --git a/pelican/urlwrappers.py b/pelican/urlwrappers.py\n--- a/pelican/urlwrappers.py\n+++ b/pelican/urlwrappers.py\n@@ -36,9 +36,17 @@\n @property\n def slug(self):\n if self._slug is None:\n- self._slug = slugify(\n- self.name,\n- regex_subs=self.settings.get('SLUG_REGEX_SUBSTITUTIONS', []))\n+ class_key = '{}_REGEX_SUBSTITUTIONS'.format(\n+ self.__class__.__name__.upper())\n+ if class_key in self.settings:\n+ self._slug = slugify(\n+ self.name,\n+ regex_subs=self.settings[class_key])\n+ else:\n+ self._slug = slugify(\n+ self.name,\n+ regex_subs=self.settings.get(\n+ 'SLUG_REGEX_SUBSTITUTIONS', []))\n return self._slug\n \n @slug.setter\n@@ -113,39 +121,13 @@\n \n \n class Category(URLWrapper):\n- @property\n- def slug(self):\n- if self._slug is None:\n- if 'CATEGORY_REGEX_SUBSTITUTIONS' in self.settings:\n- subs = self.settings['CATEGORY_REGEX_SUBSTITUTIONS']\n- else:\n- subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n- self._slug = slugify(self.name, regex_subs=subs)\n- return self._slug\n+ pass\n \n \n class Tag(URLWrapper):\n def __init__(self, name, *args, **kwargs):\n super(Tag, self).__init__(name.strip(), *args, **kwargs)\n \n- @property\n- def slug(self):\n- if self._slug is None:\n- if 'TAG_REGEX_SUBSTITUTIONS' in self.settings:\n- subs = self.settings['TAG_REGEX_SUBSTITUTIONS']\n- else:\n- subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n- self._slug = slugify(self.name, regex_subs=subs)\n- return self._slug\n-\n \n class Author(URLWrapper):\n- @property\n- def slug(self):\n- if self._slug is None:\n- if 'AUTHOR_REGEX_SUBSTITUTIONS' in self.settings:\n- subs = self.settings['AUTHOR_REGEX_SUBSTITUTIONS']\n- else:\n- subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n- self._slug = slugify(self.name, regex_subs=subs)\n- return self._slug\n+ pass\n", "issue": "Category/Tag/Author slugs are not settable\nURLWrapper objects have a setter for their 'slug' property, but all of the concrete URLWrapper subclasses override the _getter_ for 'slug', which, because of the way Python's property accessors work, makes the setter inaccessible. 
This breaks the 'category_meta' plugin and probably other things as well.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nimport functools\nimport logging\nimport os\n\nimport six\n\nfrom pelican.utils import python_2_unicode_compatible, slugify\n\nlogger = logging.getLogger(__name__)\n\n\n@python_2_unicode_compatible\[email protected]_ordering\nclass URLWrapper(object):\n def __init__(self, name, settings):\n self.settings = settings\n self._name = name\n self._slug = None\n self._slug_from_name = True\n\n @property\n def name(self):\n return self._name\n\n @name.setter\n def name(self, name):\n self._name = name\n # if slug wasn't explicitly set, it needs to be regenerated from name\n # so, changing name should reset slug for slugification\n if self._slug_from_name:\n self._slug = None\n\n @property\n def slug(self):\n if self._slug is None:\n self._slug = slugify(\n self.name,\n regex_subs=self.settings.get('SLUG_REGEX_SUBSTITUTIONS', []))\n return self._slug\n\n @slug.setter\n def slug(self, slug):\n # if slug is expliticly set, changing name won't alter slug\n self._slug_from_name = False\n self._slug = slug\n\n def as_dict(self):\n d = self.__dict__\n d['name'] = self.name\n d['slug'] = self.slug\n return d\n\n def __hash__(self):\n return hash(self.slug)\n\n def _normalize_key(self, key):\n subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n return six.text_type(slugify(key, regex_subs=subs))\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self.slug == other.slug\n if isinstance(other, six.text_type):\n return self.slug == self._normalize_key(other)\n return False\n\n def __ne__(self, other):\n if isinstance(other, self.__class__):\n return self.slug != other.slug\n if isinstance(other, six.text_type):\n return self.slug != self._normalize_key(other)\n return True\n\n def __lt__(self, other):\n if isinstance(other, self.__class__):\n return self.slug < other.slug\n if isinstance(other, six.text_type):\n return self.slug < self._normalize_key(other)\n return False\n\n def __str__(self):\n return self.name\n\n def __repr__(self):\n return '<{} {}>'.format(type(self).__name__, repr(self._name))\n\n def _from_settings(self, key, get_page_name=False):\n \"\"\"Returns URL information as defined in settings.\n\n When get_page_name=True returns URL without anything after {slug} e.g.\n if in settings: CATEGORY_URL=\"cat/{slug}.html\" this returns\n \"cat/{slug}\" Useful for pagination.\n\n \"\"\"\n setting = \"%s_%s\" % (self.__class__.__name__.upper(), key)\n value = self.settings[setting]\n if not isinstance(value, six.string_types):\n logger.warning('%s is set to %s', setting, value)\n return value\n else:\n if get_page_name:\n return os.path.splitext(value)[0].format(**self.as_dict())\n else:\n return value.format(**self.as_dict())\n\n page_name = property(functools.partial(_from_settings, key='URL',\n get_page_name=True))\n url = property(functools.partial(_from_settings, key='URL'))\n save_as = property(functools.partial(_from_settings, key='SAVE_AS'))\n\n\nclass Category(URLWrapper):\n @property\n def slug(self):\n if self._slug is None:\n if 'CATEGORY_REGEX_SUBSTITUTIONS' in self.settings:\n subs = self.settings['CATEGORY_REGEX_SUBSTITUTIONS']\n else:\n subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n self._slug = slugify(self.name, regex_subs=subs)\n return self._slug\n\n\nclass Tag(URLWrapper):\n def __init__(self, name, *args, **kwargs):\n super(Tag, self).__init__(name.strip(), 
*args, **kwargs)\n\n @property\n def slug(self):\n if self._slug is None:\n if 'TAG_REGEX_SUBSTITUTIONS' in self.settings:\n subs = self.settings['TAG_REGEX_SUBSTITUTIONS']\n else:\n subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n self._slug = slugify(self.name, regex_subs=subs)\n return self._slug\n\n\nclass Author(URLWrapper):\n @property\n def slug(self):\n if self._slug is None:\n if 'AUTHOR_REGEX_SUBSTITUTIONS' in self.settings:\n subs = self.settings['AUTHOR_REGEX_SUBSTITUTIONS']\n else:\n subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n self._slug = slugify(self.name, regex_subs=subs)\n return self._slug\n", "path": "pelican/urlwrappers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nimport functools\nimport logging\nimport os\n\nimport six\n\nfrom pelican.utils import python_2_unicode_compatible, slugify\n\nlogger = logging.getLogger(__name__)\n\n\n@python_2_unicode_compatible\[email protected]_ordering\nclass URLWrapper(object):\n def __init__(self, name, settings):\n self.settings = settings\n self._name = name\n self._slug = None\n self._slug_from_name = True\n\n @property\n def name(self):\n return self._name\n\n @name.setter\n def name(self, name):\n self._name = name\n # if slug wasn't explicitly set, it needs to be regenerated from name\n # so, changing name should reset slug for slugification\n if self._slug_from_name:\n self._slug = None\n\n @property\n def slug(self):\n if self._slug is None:\n class_key = '{}_REGEX_SUBSTITUTIONS'.format(\n self.__class__.__name__.upper())\n if class_key in self.settings:\n self._slug = slugify(\n self.name,\n regex_subs=self.settings[class_key])\n else:\n self._slug = slugify(\n self.name,\n regex_subs=self.settings.get(\n 'SLUG_REGEX_SUBSTITUTIONS', []))\n return self._slug\n\n @slug.setter\n def slug(self, slug):\n # if slug is expliticly set, changing name won't alter slug\n self._slug_from_name = False\n self._slug = slug\n\n def as_dict(self):\n d = self.__dict__\n d['name'] = self.name\n d['slug'] = self.slug\n return d\n\n def __hash__(self):\n return hash(self.slug)\n\n def _normalize_key(self, key):\n subs = self.settings.get('SLUG_REGEX_SUBSTITUTIONS', [])\n return six.text_type(slugify(key, regex_subs=subs))\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self.slug == other.slug\n if isinstance(other, six.text_type):\n return self.slug == self._normalize_key(other)\n return False\n\n def __ne__(self, other):\n if isinstance(other, self.__class__):\n return self.slug != other.slug\n if isinstance(other, six.text_type):\n return self.slug != self._normalize_key(other)\n return True\n\n def __lt__(self, other):\n if isinstance(other, self.__class__):\n return self.slug < other.slug\n if isinstance(other, six.text_type):\n return self.slug < self._normalize_key(other)\n return False\n\n def __str__(self):\n return self.name\n\n def __repr__(self):\n return '<{} {}>'.format(type(self).__name__, repr(self._name))\n\n def _from_settings(self, key, get_page_name=False):\n \"\"\"Returns URL information as defined in settings.\n\n When get_page_name=True returns URL without anything after {slug} e.g.\n if in settings: CATEGORY_URL=\"cat/{slug}.html\" this returns\n \"cat/{slug}\" Useful for pagination.\n\n \"\"\"\n setting = \"%s_%s\" % (self.__class__.__name__.upper(), key)\n value = self.settings[setting]\n if not isinstance(value, six.string_types):\n logger.warning('%s is set to %s', setting, value)\n return value\n else:\n if 
get_page_name:\n return os.path.splitext(value)[0].format(**self.as_dict())\n else:\n return value.format(**self.as_dict())\n\n page_name = property(functools.partial(_from_settings, key='URL',\n get_page_name=True))\n url = property(functools.partial(_from_settings, key='URL'))\n save_as = property(functools.partial(_from_settings, key='SAVE_AS'))\n\n\nclass Category(URLWrapper):\n pass\n\n\nclass Tag(URLWrapper):\n def __init__(self, name, *args, **kwargs):\n super(Tag, self).__init__(name.strip(), *args, **kwargs)\n\n\nclass Author(URLWrapper):\n pass\n", "path": "pelican/urlwrappers.py"}]} | 1,798 | 567 |
gh_patches_debug_35965 | rasdani/github-patches | git_diff | ethereum__web3.py-914 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in websockets.py: '<=' not supported between instances of 'int' and 'NoneType'
* web3 (4.3.0)
* websockets (4.0.1)
* Python: 3.6
* OS: osx HighSierra
### What was wrong?
`web3 = Web3(Web3.WebsocketProvider("ws://10.224.12.6:8546"))`
`web3.eth.syncing //returns data`
The websocket is clearly open but when I run a filter which is supposed to have many entries, I get the following error trace:
Upon running: `data = web3.eth.getFilterLogs(new_block_filter.filter_id)`, I get:
```
~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/web3/providers/websocket.py in make_request(self, method, params)
81 WebsocketProvider._loop
82 )
---> 83 return future.result()
/anaconda3/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
/anaconda3/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/web3/providers/websocket.py in coro_make_request(self, request_data)
71 async with self.conn as conn:
72 await conn.send(request_data)
---> 73 return json.loads(await conn.recv())
74
75 def make_request(self, method, params):
~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/websockets/protocol.py in recv(self)
321 next_message.cancel()
322 if not self.legacy_recv:
--> 323 raise ConnectionClosed(self.close_code, self.close_reason)
324
325 @asyncio.coroutine
~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/websockets/exceptions.py in __init__(self, code, reason)
145 self.reason = reason
146 message = "WebSocket connection is closed: "
--> 147 if 3000 <= code < 4000:
148 explanation = "registered"
149 elif 4000 <= code < 5000:
TypeError: '<=' not supported between instances of 'int' and 'NoneType'
```
The same filter runs fine (albeit a bit slow) using `Web3.HTTPProvider()`
--- END ISSUE ---
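As an illustrative aside (not part of the original report): the patch later in this record adds a `websocket_kwargs` pass-through to `websockets.connect()`, so connection options such as the `max_size` frame limit can be tuned from user code. A hypothetical usage sketch, assuming that patched provider and a reachable node at the endpoint from the report:
```python
from web3 import Web3

# Hypothetical tuning value; any keyword accepted by websockets.connect()
# other than the reserved `uri` and `loop` can be passed through.
w3 = Web3(Web3.WebsocketProvider(
    "ws://10.224.12.6:8546",
    websocket_kwargs={"max_size": 64 * 1024 * 1024},  # allow larger responses
))

print(w3.eth.syncing)
```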
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web3/providers/websocket.py`
Content:
```
1 import asyncio
2 import json
3 import logging
4 import os
5 from threading import (
6 Thread,
7 )
8
9 import websockets
10
11 from web3.providers.base import (
12 JSONBaseProvider,
13 )
14
15
16 def _start_event_loop(loop):
17 asyncio.set_event_loop(loop)
18 loop.run_forever()
19 loop.close()
20
21
22 def _get_threaded_loop():
23 new_loop = asyncio.new_event_loop()
24 thread_loop = Thread(target=_start_event_loop, args=(new_loop,), daemon=True)
25 thread_loop.start()
26 return new_loop
27
28
29 def get_default_endpoint():
30 return os.environ.get('WEB3_WS_PROVIDER_URI', 'ws://127.0.0.1:8546')
31
32
33 class PersistentWebSocket:
34
35 def __init__(self, endpoint_uri, loop):
36 self.ws = None
37 self.endpoint_uri = endpoint_uri
38 self.loop = loop
39
40 async def __aenter__(self):
41 if self.ws is None:
42 self.ws = await websockets.connect(uri=self.endpoint_uri, loop=self.loop)
43 return self.ws
44
45 async def __aexit__(self, exc_type, exc_val, exc_tb):
46 if exc_val is not None:
47 try:
48 await self.ws.close()
49 except Exception:
50 pass
51 self.ws = None
52
53
54 class WebsocketProvider(JSONBaseProvider):
55 logger = logging.getLogger("web3.providers.WebsocketProvider")
56 _loop = None
57
58 def __init__(self, endpoint_uri=None):
59 self.endpoint_uri = endpoint_uri
60 if self.endpoint_uri is None:
61 self.endpoint_uri = get_default_endpoint()
62 if WebsocketProvider._loop is None:
63 WebsocketProvider._loop = _get_threaded_loop()
64 self.conn = PersistentWebSocket(self.endpoint_uri, WebsocketProvider._loop)
65 super().__init__()
66
67 def __str__(self):
68 return "WS connection {0}".format(self.endpoint_uri)
69
70 async def coro_make_request(self, request_data):
71 async with self.conn as conn:
72 await conn.send(request_data)
73 return json.loads(await conn.recv())
74
75 def make_request(self, method, params):
76 self.logger.debug("Making request WebSocket. URI: %s, "
77 "Method: %s", self.endpoint_uri, method)
78 request_data = self.encode_rpc_request(method, params)
79 future = asyncio.run_coroutine_threadsafe(
80 self.coro_make_request(request_data),
81 WebsocketProvider._loop
82 )
83 return future.result()
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/web3/providers/websocket.py b/web3/providers/websocket.py
--- a/web3/providers/websocket.py
+++ b/web3/providers/websocket.py
@@ -8,10 +8,15 @@
import websockets
+from web3.exceptions import (
+ ValidationError,
+)
from web3.providers.base import (
JSONBaseProvider,
)
+RESTRICTED_WEBSOCKET_KWARGS = {'uri', 'loop'}
+
def _start_event_loop(loop):
asyncio.set_event_loop(loop)
@@ -32,14 +37,17 @@
class PersistentWebSocket:
- def __init__(self, endpoint_uri, loop):
+ def __init__(self, endpoint_uri, loop, websocket_kwargs):
self.ws = None
self.endpoint_uri = endpoint_uri
self.loop = loop
+ self.websocket_kwargs = websocket_kwargs
async def __aenter__(self):
if self.ws is None:
- self.ws = await websockets.connect(uri=self.endpoint_uri, loop=self.loop)
+ self.ws = await websockets.connect(
+ uri=self.endpoint_uri, loop=self.loop, **self.websocket_kwargs
+ )
return self.ws
async def __aexit__(self, exc_type, exc_val, exc_tb):
@@ -55,13 +63,26 @@
logger = logging.getLogger("web3.providers.WebsocketProvider")
_loop = None
- def __init__(self, endpoint_uri=None):
+ def __init__(self, endpoint_uri=None, websocket_kwargs=None):
self.endpoint_uri = endpoint_uri
if self.endpoint_uri is None:
self.endpoint_uri = get_default_endpoint()
if WebsocketProvider._loop is None:
WebsocketProvider._loop = _get_threaded_loop()
- self.conn = PersistentWebSocket(self.endpoint_uri, WebsocketProvider._loop)
+ if websocket_kwargs is None:
+ websocket_kwargs = {}
+ else:
+ found_restricted_keys = set(websocket_kwargs.keys()).intersection(
+ RESTRICTED_WEBSOCKET_KWARGS
+ )
+ if found_restricted_keys:
+ raise ValidationError(
+ '{0} are not allowed in websocket_kwargs, '
+ 'found: {1}'.format(RESTRICTED_WEBSOCKET_KWARGS, found_restricted_keys)
+ )
+ self.conn = PersistentWebSocket(
+ self.endpoint_uri, WebsocketProvider._loop, websocket_kwargs
+ )
super().__init__()
def __str__(self):
| {"golden_diff": "diff --git a/web3/providers/websocket.py b/web3/providers/websocket.py\n--- a/web3/providers/websocket.py\n+++ b/web3/providers/websocket.py\n@@ -8,10 +8,15 @@\n \n import websockets\n \n+from web3.exceptions import (\n+ ValidationError,\n+)\n from web3.providers.base import (\n JSONBaseProvider,\n )\n \n+RESTRICTED_WEBSOCKET_KWARGS = {'uri', 'loop'}\n+\n \n def _start_event_loop(loop):\n asyncio.set_event_loop(loop)\n@@ -32,14 +37,17 @@\n \n class PersistentWebSocket:\n \n- def __init__(self, endpoint_uri, loop):\n+ def __init__(self, endpoint_uri, loop, websocket_kwargs):\n self.ws = None\n self.endpoint_uri = endpoint_uri\n self.loop = loop\n+ self.websocket_kwargs = websocket_kwargs\n \n async def __aenter__(self):\n if self.ws is None:\n- self.ws = await websockets.connect(uri=self.endpoint_uri, loop=self.loop)\n+ self.ws = await websockets.connect(\n+ uri=self.endpoint_uri, loop=self.loop, **self.websocket_kwargs\n+ )\n return self.ws\n \n async def __aexit__(self, exc_type, exc_val, exc_tb):\n@@ -55,13 +63,26 @@\n logger = logging.getLogger(\"web3.providers.WebsocketProvider\")\n _loop = None\n \n- def __init__(self, endpoint_uri=None):\n+ def __init__(self, endpoint_uri=None, websocket_kwargs=None):\n self.endpoint_uri = endpoint_uri\n if self.endpoint_uri is None:\n self.endpoint_uri = get_default_endpoint()\n if WebsocketProvider._loop is None:\n WebsocketProvider._loop = _get_threaded_loop()\n- self.conn = PersistentWebSocket(self.endpoint_uri, WebsocketProvider._loop)\n+ if websocket_kwargs is None:\n+ websocket_kwargs = {}\n+ else:\n+ found_restricted_keys = set(websocket_kwargs.keys()).intersection(\n+ RESTRICTED_WEBSOCKET_KWARGS\n+ )\n+ if found_restricted_keys:\n+ raise ValidationError(\n+ '{0} are not allowed in websocket_kwargs, '\n+ 'found: {1}'.format(RESTRICTED_WEBSOCKET_KWARGS, found_restricted_keys)\n+ )\n+ self.conn = PersistentWebSocket(\n+ self.endpoint_uri, WebsocketProvider._loop, websocket_kwargs\n+ )\n super().__init__()\n \n def __str__(self):\n", "issue": "Erorr in websockets.py: '<=' not supported between instances of 'int' and 'NoneType'\n* web3 (4.3.0)\r\n* websockets (4.0.1)\r\n* Python: 3.6\r\n* OS: osx HighSierra\r\n\r\n\r\n### What was wrong?\r\n\r\n`web3 = Web3(Web3.WebsocketProvider(\"ws://10.224.12.6:8546\"))`\r\n`web3.eth.syncing //returns data`\r\n\r\nThe websocket is clearly open but when I run a filter which is supposed to have many entries, I get the following error trace:\r\n\r\nUpon running: `data = web3.eth.getFilterLogs(new_block_filter.filter_id)`, I get:\r\n\r\n```\r\n~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/web3/providers/websocket.py in make_request(self, method, params)\r\n 81 WebsocketProvider._loop\r\n 82 )\r\n---> 83 return future.result()\r\n\r\n/anaconda3/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)\r\n 430 raise CancelledError()\r\n 431 elif self._state == FINISHED:\r\n--> 432 return self.__get_result()\r\n 433 else:\r\n 434 raise TimeoutError()\r\n\r\n/anaconda3/lib/python3.6/concurrent/futures/_base.py in __get_result(self)\r\n 382 def __get_result(self):\r\n 383 if self._exception:\r\n--> 384 raise self._exception\r\n 385 else:\r\n 386 return self._result\r\n\r\n~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/web3/providers/websocket.py in coro_make_request(self, request_data)\r\n 71 async with self.conn as conn:\r\n 72 await conn.send(request_data)\r\n---> 73 return json.loads(await conn.recv())\r\n 74 \r\n 75 def make_request(self, method, 
params):\r\n\r\n~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/websockets/protocol.py in recv(self)\r\n 321 next_message.cancel()\r\n 322 if not self.legacy_recv:\r\n--> 323 raise ConnectionClosed(self.close_code, self.close_reason)\r\n 324 \r\n 325 @asyncio.coroutine\r\n\r\n~/Desktop/contracts-py/contracts/lib/python3.6/site-packages/websockets/exceptions.py in __init__(self, code, reason)\r\n 145 self.reason = reason\r\n 146 message = \"WebSocket connection is closed: \"\r\n--> 147 if 3000 <= code < 4000:\r\n 148 explanation = \"registered\"\r\n 149 elif 4000 <= code < 5000:\r\n\r\nTypeError: '<=' not supported between instances of 'int' and 'NoneType'\r\n```\r\n\r\nThe same filter runs fine (albeit a bit slow) using `Web3.HTTPProvider()`\r\n\r\n\n", "before_files": [{"content": "import asyncio\nimport json\nimport logging\nimport os\nfrom threading import (\n Thread,\n)\n\nimport websockets\n\nfrom web3.providers.base import (\n JSONBaseProvider,\n)\n\n\ndef _start_event_loop(loop):\n asyncio.set_event_loop(loop)\n loop.run_forever()\n loop.close()\n\n\ndef _get_threaded_loop():\n new_loop = asyncio.new_event_loop()\n thread_loop = Thread(target=_start_event_loop, args=(new_loop,), daemon=True)\n thread_loop.start()\n return new_loop\n\n\ndef get_default_endpoint():\n return os.environ.get('WEB3_WS_PROVIDER_URI', 'ws://127.0.0.1:8546')\n\n\nclass PersistentWebSocket:\n\n def __init__(self, endpoint_uri, loop):\n self.ws = None\n self.endpoint_uri = endpoint_uri\n self.loop = loop\n\n async def __aenter__(self):\n if self.ws is None:\n self.ws = await websockets.connect(uri=self.endpoint_uri, loop=self.loop)\n return self.ws\n\n async def __aexit__(self, exc_type, exc_val, exc_tb):\n if exc_val is not None:\n try:\n await self.ws.close()\n except Exception:\n pass\n self.ws = None\n\n\nclass WebsocketProvider(JSONBaseProvider):\n logger = logging.getLogger(\"web3.providers.WebsocketProvider\")\n _loop = None\n\n def __init__(self, endpoint_uri=None):\n self.endpoint_uri = endpoint_uri\n if self.endpoint_uri is None:\n self.endpoint_uri = get_default_endpoint()\n if WebsocketProvider._loop is None:\n WebsocketProvider._loop = _get_threaded_loop()\n self.conn = PersistentWebSocket(self.endpoint_uri, WebsocketProvider._loop)\n super().__init__()\n\n def __str__(self):\n return \"WS connection {0}\".format(self.endpoint_uri)\n\n async def coro_make_request(self, request_data):\n async with self.conn as conn:\n await conn.send(request_data)\n return json.loads(await conn.recv())\n\n def make_request(self, method, params):\n self.logger.debug(\"Making request WebSocket. 
URI: %s, \"\n \"Method: %s\", self.endpoint_uri, method)\n request_data = self.encode_rpc_request(method, params)\n future = asyncio.run_coroutine_threadsafe(\n self.coro_make_request(request_data),\n WebsocketProvider._loop\n )\n return future.result()\n", "path": "web3/providers/websocket.py"}], "after_files": [{"content": "import asyncio\nimport json\nimport logging\nimport os\nfrom threading import (\n Thread,\n)\n\nimport websockets\n\nfrom web3.exceptions import (\n ValidationError,\n)\nfrom web3.providers.base import (\n JSONBaseProvider,\n)\n\nRESTRICTED_WEBSOCKET_KWARGS = {'uri', 'loop'}\n\n\ndef _start_event_loop(loop):\n asyncio.set_event_loop(loop)\n loop.run_forever()\n loop.close()\n\n\ndef _get_threaded_loop():\n new_loop = asyncio.new_event_loop()\n thread_loop = Thread(target=_start_event_loop, args=(new_loop,), daemon=True)\n thread_loop.start()\n return new_loop\n\n\ndef get_default_endpoint():\n return os.environ.get('WEB3_WS_PROVIDER_URI', 'ws://127.0.0.1:8546')\n\n\nclass PersistentWebSocket:\n\n def __init__(self, endpoint_uri, loop, websocket_kwargs):\n self.ws = None\n self.endpoint_uri = endpoint_uri\n self.loop = loop\n self.websocket_kwargs = websocket_kwargs\n\n async def __aenter__(self):\n if self.ws is None:\n self.ws = await websockets.connect(\n uri=self.endpoint_uri, loop=self.loop, **self.websocket_kwargs\n )\n return self.ws\n\n async def __aexit__(self, exc_type, exc_val, exc_tb):\n if exc_val is not None:\n try:\n await self.ws.close()\n except Exception:\n pass\n self.ws = None\n\n\nclass WebsocketProvider(JSONBaseProvider):\n logger = logging.getLogger(\"web3.providers.WebsocketProvider\")\n _loop = None\n\n def __init__(self, endpoint_uri=None, websocket_kwargs=None):\n self.endpoint_uri = endpoint_uri\n if self.endpoint_uri is None:\n self.endpoint_uri = get_default_endpoint()\n if WebsocketProvider._loop is None:\n WebsocketProvider._loop = _get_threaded_loop()\n if websocket_kwargs is None:\n websocket_kwargs = {}\n else:\n found_restricted_keys = set(websocket_kwargs.keys()).intersection(\n RESTRICTED_WEBSOCKET_KWARGS\n )\n if found_restricted_keys:\n raise ValidationError(\n '{0} are not allowed in websocket_kwargs, '\n 'found: {1}'.format(RESTRICTED_WEBSOCKET_KWARGS, found_restricted_keys)\n )\n self.conn = PersistentWebSocket(\n self.endpoint_uri, WebsocketProvider._loop, websocket_kwargs\n )\n super().__init__()\n\n def __str__(self):\n return \"WS connection {0}\".format(self.endpoint_uri)\n\n async def coro_make_request(self, request_data):\n async with self.conn as conn:\n await conn.send(request_data)\n return json.loads(await conn.recv())\n\n def make_request(self, method, params):\n self.logger.debug(\"Making request WebSocket. URI: %s, \"\n \"Method: %s\", self.endpoint_uri, method)\n request_data = self.encode_rpc_request(method, params)\n future = asyncio.run_coroutine_threadsafe(\n self.coro_make_request(request_data),\n WebsocketProvider._loop\n )\n return future.result()\n", "path": "web3/providers/websocket.py"}]} | 1,632 | 544 |
gh_patches_debug_6782 | rasdani/github-patches | git_diff | learningequality__kolibri-1761 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The mastery completion sign updates only after a page refresh and not real time.
## Summary
A learner had completed and came out of the exercise and found the green completed tick did not get updated real time, but after refreshing the page the completed tick appeared.
## System information
- Version: Kolibri 0.4.0beta10
- Operating system: Ubuntu 14.04 LTS
- Browser: Chrome
## How to reproduce
1. Attempt an exercise or master it.
2. Come out of the exercise.
3. The completed or In progress stamp is not updated real time.
## Screenshots
Learner has mastered the topic.

He exited the exercise and the completed sign on the thumbnail is not update:

But on refreshing the page the thumbnail has the completed sign.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/auth/backends.py`
Content:
```
1 """
2 Implements custom auth backends as described in the Django docs, for our custom user classes -- FacilityUser and
3 DeviceOwner. The appropriate classes should be listed in the AUTHENTICATION_BACKENDS. Note that authentication
4 backends are checked in the order they're listed.
5 """
6
7 from kolibri.auth.models import DeviceOwner, FacilityUser
8
9
10 class FacilityUserBackend(object):
11 """
12 A class that implements authentication for FacilityUsers.
13 """
14
15 def authenticate(self, username=None, password=None, facility=None):
16 """
17 Authenticates the user if the credentials correspond to a FacilityUser for the specified Facility.
18
19 :param username: a string
20 :param password: a string
21 :param facility: a Facility
22 :return: A FacilityUser instance if successful, or None if authentication failed.
23 """
24 users = FacilityUser.objects.filter(username=username)
25 if facility:
26 users = users.filter(facility=facility)
27 for user in users:
28 if user.check_password(password):
29 return user
30 # Allow login without password for learners for facilities that allow this.
31 # Must specify the facility, to prevent accidental logins
32 elif facility and user.dataset.learner_can_login_with_no_password and not user.roles.count():
33 return user
34 return None
35
36 def get_user(self, user_id):
37 """
38 Gets a user. Auth backends are required to implement this.
39
40 :param user_id: A FacilityUser pk
41 :return: A FacilityUser instance if a BaseUser with that pk is found, else None.
42 """
43 try:
44 return FacilityUser.objects.get(pk=user_id)
45 except FacilityUser.DoesNotExist:
46 return None
47
48
49 class DeviceOwnerBackend(object):
50 """
51 A class that implements authentication for DeviceOwners.
52 """
53
54 def authenticate(self, username=None, password=None, **kwargs):
55 """
56 Authenticates the user if the credentials correspond to a DeviceOwner.
57
58 :param username: a string
59 :param password: a string
60 :return: A DeviceOwner instance if successful, or None if authentication failed.
61 """
62 try:
63 user = DeviceOwner.objects.get(username=username)
64 if user.check_password(password):
65 return user
66 else:
67 return None
68 except DeviceOwner.DoesNotExist:
69 return None
70
71 def get_user(self, user_id):
72 """
73 Gets a user. Auth backends are required to implement this.
74
75 :param user_id: A BaseUser pk
76 :return: A DeviceOwner instance if a BaseUser with that pk is found, else None.
77 """
78 try:
79 return DeviceOwner.objects.get(pk=user_id)
80 except DeviceOwner.DoesNotExist:
81 return None
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/auth/backends.py b/kolibri/auth/backends.py
--- a/kolibri/auth/backends.py
+++ b/kolibri/auth/backends.py
@@ -21,7 +21,7 @@
:param facility: a Facility
:return: A FacilityUser instance if successful, or None if authentication failed.
"""
- users = FacilityUser.objects.filter(username=username)
+ users = FacilityUser.objects.filter(username__iexact=username)
if facility:
users = users.filter(facility=facility)
for user in users:
| {"golden_diff": "diff --git a/kolibri/auth/backends.py b/kolibri/auth/backends.py\n--- a/kolibri/auth/backends.py\n+++ b/kolibri/auth/backends.py\n@@ -21,7 +21,7 @@\n :param facility: a Facility\n :return: A FacilityUser instance if successful, or None if authentication failed.\n \"\"\"\n- users = FacilityUser.objects.filter(username=username)\n+ users = FacilityUser.objects.filter(username__iexact=username)\n if facility:\n users = users.filter(facility=facility)\n for user in users:\n", "issue": "The mastery completion sign updates only after a page refresh and not real time.\n## Summary\r\n\r\nA learner had completed and came out of the exercise and found the green completed tick did not get updated real time, but after refreshing the page the completed tick appeared. \r\n\r\n## System information\r\n - Version: Kolibri 0.4.0beta10\r\n - Operating system: Ubuntu 14.04 LTS\r\n - Browser: Chrome\r\n\r\n\r\n## How to reproduce\r\n1. Attempt an exercise or master it.\r\n2. Come out of the exercise.\r\n3. The completed or In progress stamp is not updated real time.\r\n\r\n## Screenshots\r\nLearner has mastered the topic.\r\n\r\n\r\nHe exited the exercise and the completed sign on the thumbnail is not update:\r\n\r\n\r\nBut on refreshing the page the thumbnail has the completed sign.\n", "before_files": [{"content": "\"\"\"\nImplements custom auth backends as described in the Django docs, for our custom user classes -- FacilityUser and\nDeviceOwner. The appropriate classes should be listed in the AUTHENTICATION_BACKENDS. Note that authentication\nbackends are checked in the order they're listed.\n\"\"\"\n\nfrom kolibri.auth.models import DeviceOwner, FacilityUser\n\n\nclass FacilityUserBackend(object):\n \"\"\"\n A class that implements authentication for FacilityUsers.\n \"\"\"\n\n def authenticate(self, username=None, password=None, facility=None):\n \"\"\"\n Authenticates the user if the credentials correspond to a FacilityUser for the specified Facility.\n\n :param username: a string\n :param password: a string\n :param facility: a Facility\n :return: A FacilityUser instance if successful, or None if authentication failed.\n \"\"\"\n users = FacilityUser.objects.filter(username=username)\n if facility:\n users = users.filter(facility=facility)\n for user in users:\n if user.check_password(password):\n return user\n # Allow login without password for learners for facilities that allow this.\n # Must specify the facility, to prevent accidental logins\n elif facility and user.dataset.learner_can_login_with_no_password and not user.roles.count():\n return user\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. 
Auth backends are required to implement this.\n\n :param user_id: A FacilityUser pk\n :return: A FacilityUser instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return FacilityUser.objects.get(pk=user_id)\n except FacilityUser.DoesNotExist:\n return None\n\n\nclass DeviceOwnerBackend(object):\n \"\"\"\n A class that implements authentication for DeviceOwners.\n \"\"\"\n\n def authenticate(self, username=None, password=None, **kwargs):\n \"\"\"\n Authenticates the user if the credentials correspond to a DeviceOwner.\n\n :param username: a string\n :param password: a string\n :return: A DeviceOwner instance if successful, or None if authentication failed.\n \"\"\"\n try:\n user = DeviceOwner.objects.get(username=username)\n if user.check_password(password):\n return user\n else:\n return None\n except DeviceOwner.DoesNotExist:\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. Auth backends are required to implement this.\n\n :param user_id: A BaseUser pk\n :return: A DeviceOwner instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return DeviceOwner.objects.get(pk=user_id)\n except DeviceOwner.DoesNotExist:\n return None\n", "path": "kolibri/auth/backends.py"}], "after_files": [{"content": "\"\"\"\nImplements custom auth backends as described in the Django docs, for our custom user classes -- FacilityUser and\nDeviceOwner. The appropriate classes should be listed in the AUTHENTICATION_BACKENDS. Note that authentication\nbackends are checked in the order they're listed.\n\"\"\"\n\nfrom kolibri.auth.models import DeviceOwner, FacilityUser\n\n\nclass FacilityUserBackend(object):\n \"\"\"\n A class that implements authentication for FacilityUsers.\n \"\"\"\n\n def authenticate(self, username=None, password=None, facility=None):\n \"\"\"\n Authenticates the user if the credentials correspond to a FacilityUser for the specified Facility.\n\n :param username: a string\n :param password: a string\n :param facility: a Facility\n :return: A FacilityUser instance if successful, or None if authentication failed.\n \"\"\"\n users = FacilityUser.objects.filter(username__iexact=username)\n if facility:\n users = users.filter(facility=facility)\n for user in users:\n if user.check_password(password):\n return user\n # Allow login without password for learners for facilities that allow this.\n # Must specify the facility, to prevent accidental logins\n elif facility and user.dataset.learner_can_login_with_no_password and not user.roles.count():\n return user\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. Auth backends are required to implement this.\n\n :param user_id: A FacilityUser pk\n :return: A FacilityUser instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return FacilityUser.objects.get(pk=user_id)\n except FacilityUser.DoesNotExist:\n return None\n\n\nclass DeviceOwnerBackend(object):\n \"\"\"\n A class that implements authentication for DeviceOwners.\n \"\"\"\n\n def authenticate(self, username=None, password=None, **kwargs):\n \"\"\"\n Authenticates the user if the credentials correspond to a DeviceOwner.\n\n :param username: a string\n :param password: a string\n :return: A DeviceOwner instance if successful, or None if authentication failed.\n \"\"\"\n try:\n user = DeviceOwner.objects.get(username=username)\n if user.check_password(password):\n return user\n else:\n return None\n except DeviceOwner.DoesNotExist:\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. 
Auth backends are required to implement this.\n\n :param user_id: A BaseUser pk\n :return: A DeviceOwner instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return DeviceOwner.objects.get(pk=user_id)\n except DeviceOwner.DoesNotExist:\n return None\n", "path": "kolibri/auth/backends.py"}]} | 1,318 | 126 |
gh_patches_debug_36757 | rasdani/github-patches | git_diff | huggingface__trl-398 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Llama Reward Model is incorrectly merged
As mentioned in #287, `merge_peft_adapter` saves the Llama RM as a `LlamaForCausalLM` see [here](https://github.com/lvwerra/trl/blob/main/examples/stack_llama/scripts/merge_peft_adapter.py#L35)
But the reward model is trained and should be a `LlamaForSequenceClassification` and running `rl_training.py` gives the obvious warnings
```
Some weights of the model checkpoint at ./llama-7b-se-rm were not used when initializing LlamaForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at /home/toolkit/huggingface/llama-7b-rm and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
We should instead check whether we are merging the rm and then save as a the correct model
Also the `score.weight` is not being loaded as mentioned in #297 , see more info below
--- update --
It seems that `merge_peft_adapter` should be using `merge_and_unload()` which correctly overrides the score. But I haven't yet managed to get good results using the adapter weights on the hub
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/stack_llama/scripts/merge_peft_adapter.py`
Content:
```
1 from dataclasses import dataclass, field
2 from typing import Optional
3
4 import peft
5 import torch
6 from peft import PeftConfig, PeftModel
7 from peft.utils import _get_submodules
8 from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, HfArgumentParser
9
10
11 DEFAULT_PAD_TOKEN = "[PAD]"
12 DEFAULT_EOS_TOKEN = "</s>"
13 DEFAULT_BOS_TOKEN = "</s>"
14 DEFAULT_UNK_TOKEN = "</s>"
15
16
17 @dataclass
18 class ScriptArguments:
19 """
20 The name of the Casual LM model we wish to fine with PPO
21 """
22
23 adapter_model_name: Optional[str] = field(default=None, metadata={"help": "the model name"})
24 base_model_name: Optional[str] = field(default=None, metadata={"help": "the model name"})
25 output_name: Optional[str] = field(default=None, metadata={"help": "the model name"})
26
27
28 parser = HfArgumentParser(ScriptArguments)
29 script_args = parser.parse_args_into_dataclasses()[0]
30 assert script_args.adapter_model_name is not None, "please provide the name of the Adapter you would like to merge"
31 assert script_args.base_model_name is not None, "please provide the name of the Base model"
32 assert script_args.base_model_name is not None, "please provide the output name of the merged model"
33
34 peft_config = PeftConfig.from_pretrained(script_args.adapter_model_name)
35 model = AutoModelForCausalLM.from_pretrained(script_args.base_model_name, return_dict=True, torch_dtype=torch.bfloat16)
36 tokenizer = AutoTokenizer.from_pretrained(script_args.base_model_name)
37 config = AutoConfig.from_pretrained(script_args.base_model_name)
38 architecture = config.architectures[0]
39 if "Llama" in architecture:
40 print("Setting EOS, BOS, and UNK tokens for LLama tokenizer")
41 tokenizer.add_special_tokens(
42 {
43 "eos_token": DEFAULT_EOS_TOKEN,
44 "bos_token": DEFAULT_BOS_TOKEN,
45 "unk_token": DEFAULT_UNK_TOKEN,
46 "pad_token": DEFAULT_PAD_TOKEN,
47 }
48 )
49
50 # Load the Lora model
51 model = PeftModel.from_pretrained(model, script_args.adapter_model_name)
52 model.eval()
53
54 key_list = [key for key, _ in model.base_model.model.named_modules() if "lora" not in key]
55 for key in key_list:
56 parent, target, target_name = _get_submodules(model.base_model.model, key)
57 if isinstance(target, peft.tuners.lora.Linear):
58 bias = target.bias is not None
59 new_module = torch.nn.Linear(target.in_features, target.out_features, bias=bias)
60 model.base_model._replace_module(parent, target_name, new_module, target)
61
62 model = model.base_model.model
63
64 model.save_pretrained(f"{script_args.output_name}")
65 tokenizer.save_pretrained(f"{script_args.output_name}")
66 model.push_to_hub(f"{script_args.output_name}", use_temp_dir=False)
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/stack_llama/scripts/merge_peft_adapter.py b/examples/stack_llama/scripts/merge_peft_adapter.py
--- a/examples/stack_llama/scripts/merge_peft_adapter.py
+++ b/examples/stack_llama/scripts/merge_peft_adapter.py
@@ -1,17 +1,9 @@
from dataclasses import dataclass, field
from typing import Optional
-import peft
import torch
from peft import PeftConfig, PeftModel
-from peft.utils import _get_submodules
-from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, HfArgumentParser
-
-
-DEFAULT_PAD_TOKEN = "[PAD]"
-DEFAULT_EOS_TOKEN = "</s>"
-DEFAULT_BOS_TOKEN = "</s>"
-DEFAULT_UNK_TOKEN = "</s>"
+from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer, HfArgumentParser
@dataclass
@@ -32,34 +24,23 @@
assert script_args.base_model_name is not None, "please provide the output name of the merged model"
peft_config = PeftConfig.from_pretrained(script_args.adapter_model_name)
-model = AutoModelForCausalLM.from_pretrained(script_args.base_model_name, return_dict=True, torch_dtype=torch.bfloat16)
-tokenizer = AutoTokenizer.from_pretrained(script_args.base_model_name)
-config = AutoConfig.from_pretrained(script_args.base_model_name)
-architecture = config.architectures[0]
-if "Llama" in architecture:
- print("Setting EOS, BOS, and UNK tokens for LLama tokenizer")
- tokenizer.add_special_tokens(
- {
- "eos_token": DEFAULT_EOS_TOKEN,
- "bos_token": DEFAULT_BOS_TOKEN,
- "unk_token": DEFAULT_UNK_TOKEN,
- "pad_token": DEFAULT_PAD_TOKEN,
- }
+if peft_config.task_type == "SEQ_CLS":
+ # peft is for reward model so load sequence classification
+ model = AutoModelForSequenceClassification.from_pretrained(
+ script_args.base_model_name, num_labels=1, torch_dtype=torch.bfloat16
+ )
+else:
+ model = AutoModelForCausalLM.from_pretrained(
+ script_args.base_model_name, return_dict=True, torch_dtype=torch.bfloat16
)
+tokenizer = AutoTokenizer.from_pretrained(script_args.base_model_name)
+
# Load the Lora model
model = PeftModel.from_pretrained(model, script_args.adapter_model_name)
model.eval()
-key_list = [key for key, _ in model.base_model.model.named_modules() if "lora" not in key]
-for key in key_list:
- parent, target, target_name = _get_submodules(model.base_model.model, key)
- if isinstance(target, peft.tuners.lora.Linear):
- bias = target.bias is not None
- new_module = torch.nn.Linear(target.in_features, target.out_features, bias=bias)
- model.base_model._replace_module(parent, target_name, new_module, target)
-
-model = model.base_model.model
+model = model.merge_and_unload()
model.save_pretrained(f"{script_args.output_name}")
tokenizer.save_pretrained(f"{script_args.output_name}")
| {"golden_diff": "diff --git a/examples/stack_llama/scripts/merge_peft_adapter.py b/examples/stack_llama/scripts/merge_peft_adapter.py\n--- a/examples/stack_llama/scripts/merge_peft_adapter.py\n+++ b/examples/stack_llama/scripts/merge_peft_adapter.py\n@@ -1,17 +1,9 @@\n from dataclasses import dataclass, field\n from typing import Optional\n \n-import peft\n import torch\n from peft import PeftConfig, PeftModel\n-from peft.utils import _get_submodules\n-from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, HfArgumentParser\n-\n-\n-DEFAULT_PAD_TOKEN = \"[PAD]\"\n-DEFAULT_EOS_TOKEN = \"</s>\"\n-DEFAULT_BOS_TOKEN = \"</s>\"\n-DEFAULT_UNK_TOKEN = \"</s>\"\n+from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer, HfArgumentParser\n \n \n @dataclass\n@@ -32,34 +24,23 @@\n assert script_args.base_model_name is not None, \"please provide the output name of the merged model\"\n \n peft_config = PeftConfig.from_pretrained(script_args.adapter_model_name)\n-model = AutoModelForCausalLM.from_pretrained(script_args.base_model_name, return_dict=True, torch_dtype=torch.bfloat16)\n-tokenizer = AutoTokenizer.from_pretrained(script_args.base_model_name)\n-config = AutoConfig.from_pretrained(script_args.base_model_name)\n-architecture = config.architectures[0]\n-if \"Llama\" in architecture:\n- print(\"Setting EOS, BOS, and UNK tokens for LLama tokenizer\")\n- tokenizer.add_special_tokens(\n- {\n- \"eos_token\": DEFAULT_EOS_TOKEN,\n- \"bos_token\": DEFAULT_BOS_TOKEN,\n- \"unk_token\": DEFAULT_UNK_TOKEN,\n- \"pad_token\": DEFAULT_PAD_TOKEN,\n- }\n+if peft_config.task_type == \"SEQ_CLS\":\n+ # peft is for reward model so load sequence classification\n+ model = AutoModelForSequenceClassification.from_pretrained(\n+ script_args.base_model_name, num_labels=1, torch_dtype=torch.bfloat16\n+ )\n+else:\n+ model = AutoModelForCausalLM.from_pretrained(\n+ script_args.base_model_name, return_dict=True, torch_dtype=torch.bfloat16\n )\n \n+tokenizer = AutoTokenizer.from_pretrained(script_args.base_model_name)\n+\n # Load the Lora model\n model = PeftModel.from_pretrained(model, script_args.adapter_model_name)\n model.eval()\n \n-key_list = [key for key, _ in model.base_model.model.named_modules() if \"lora\" not in key]\n-for key in key_list:\n- parent, target, target_name = _get_submodules(model.base_model.model, key)\n- if isinstance(target, peft.tuners.lora.Linear):\n- bias = target.bias is not None\n- new_module = torch.nn.Linear(target.in_features, target.out_features, bias=bias)\n- model.base_model._replace_module(parent, target_name, new_module, target)\n-\n-model = model.base_model.model\n+model = model.merge_and_unload()\n \n model.save_pretrained(f\"{script_args.output_name}\")\n tokenizer.save_pretrained(f\"{script_args.output_name}\")\n", "issue": "Llama Reward Model is incorrectly merged\nAs mentioned in #287, `merge_peft_adapter` saves the Llama RM as a `LlamaForCausalLM` see [here](https://github.com/lvwerra/trl/blob/main/examples/stack_llama/scripts/merge_peft_adapter.py#L35)\r\n\r\nBut the reward model is trained and should be a `LlamaForSequenceClassification` and running `rl_training.py` gives the obvious warnings\r\n```\r\nSome weights of the model checkpoint at ./llama-7b-se-rm were not used when initializing LlamaForSequenceClassification: ['lm_head.weight']\r\n- This IS expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of LlamaForSequenceClassification were not initialized from the model checkpoint at /home/toolkit/huggingface/llama-7b-rm and are newly initialized: ['score.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nWe should instead check whether we are merging the rm and then save as a the correct model \r\n\r\nAlso the `score.weight` is not being loaded as mentioned in #297 , see more info below\r\n\r\n\r\n--- update --\r\n\r\nIt seems that `merge_peft_adapter` should be using `merge_and_unload()` which correctly overrides the score. But I haven't yet managed to get good results using the adapter weights on the hub\n", "before_files": [{"content": "from dataclasses import dataclass, field\nfrom typing import Optional\n\nimport peft\nimport torch\nfrom peft import PeftConfig, PeftModel\nfrom peft.utils import _get_submodules\nfrom transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, HfArgumentParser\n\n\nDEFAULT_PAD_TOKEN = \"[PAD]\"\nDEFAULT_EOS_TOKEN = \"</s>\"\nDEFAULT_BOS_TOKEN = \"</s>\"\nDEFAULT_UNK_TOKEN = \"</s>\"\n\n\n@dataclass\nclass ScriptArguments:\n \"\"\"\n The name of the Casual LM model we wish to fine with PPO\n \"\"\"\n\n adapter_model_name: Optional[str] = field(default=None, metadata={\"help\": \"the model name\"})\n base_model_name: Optional[str] = field(default=None, metadata={\"help\": \"the model name\"})\n output_name: Optional[str] = field(default=None, metadata={\"help\": \"the model name\"})\n\n\nparser = HfArgumentParser(ScriptArguments)\nscript_args = parser.parse_args_into_dataclasses()[0]\nassert script_args.adapter_model_name is not None, \"please provide the name of the Adapter you would like to merge\"\nassert script_args.base_model_name is not None, \"please provide the name of the Base model\"\nassert script_args.base_model_name is not None, \"please provide the output name of the merged model\"\n\npeft_config = PeftConfig.from_pretrained(script_args.adapter_model_name)\nmodel = AutoModelForCausalLM.from_pretrained(script_args.base_model_name, return_dict=True, torch_dtype=torch.bfloat16)\ntokenizer = AutoTokenizer.from_pretrained(script_args.base_model_name)\nconfig = AutoConfig.from_pretrained(script_args.base_model_name)\narchitecture = config.architectures[0]\nif \"Llama\" in architecture:\n print(\"Setting EOS, BOS, and UNK tokens for LLama tokenizer\")\n tokenizer.add_special_tokens(\n {\n \"eos_token\": DEFAULT_EOS_TOKEN,\n \"bos_token\": DEFAULT_BOS_TOKEN,\n \"unk_token\": DEFAULT_UNK_TOKEN,\n \"pad_token\": DEFAULT_PAD_TOKEN,\n }\n )\n\n# Load the Lora model\nmodel = PeftModel.from_pretrained(model, script_args.adapter_model_name)\nmodel.eval()\n\nkey_list = [key for key, _ in model.base_model.model.named_modules() if \"lora\" not in key]\nfor key in key_list:\n parent, target, target_name = _get_submodules(model.base_model.model, key)\n if isinstance(target, peft.tuners.lora.Linear):\n bias = target.bias is not None\n new_module = torch.nn.Linear(target.in_features, target.out_features, bias=bias)\n model.base_model._replace_module(parent, target_name, new_module, target)\n\nmodel = 
model.base_model.model\n\nmodel.save_pretrained(f\"{script_args.output_name}\")\ntokenizer.save_pretrained(f\"{script_args.output_name}\")\nmodel.push_to_hub(f\"{script_args.output_name}\", use_temp_dir=False)\n", "path": "examples/stack_llama/scripts/merge_peft_adapter.py"}], "after_files": [{"content": "from dataclasses import dataclass, field\nfrom typing import Optional\n\nimport torch\nfrom peft import PeftConfig, PeftModel\nfrom transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer, HfArgumentParser\n\n\n@dataclass\nclass ScriptArguments:\n \"\"\"\n The name of the Casual LM model we wish to fine with PPO\n \"\"\"\n\n adapter_model_name: Optional[str] = field(default=None, metadata={\"help\": \"the model name\"})\n base_model_name: Optional[str] = field(default=None, metadata={\"help\": \"the model name\"})\n output_name: Optional[str] = field(default=None, metadata={\"help\": \"the model name\"})\n\n\nparser = HfArgumentParser(ScriptArguments)\nscript_args = parser.parse_args_into_dataclasses()[0]\nassert script_args.adapter_model_name is not None, \"please provide the name of the Adapter you would like to merge\"\nassert script_args.base_model_name is not None, \"please provide the name of the Base model\"\nassert script_args.base_model_name is not None, \"please provide the output name of the merged model\"\n\npeft_config = PeftConfig.from_pretrained(script_args.adapter_model_name)\nif peft_config.task_type == \"SEQ_CLS\":\n # peft is for reward model so load sequence classification\n model = AutoModelForSequenceClassification.from_pretrained(\n script_args.base_model_name, num_labels=1, torch_dtype=torch.bfloat16\n )\nelse:\n model = AutoModelForCausalLM.from_pretrained(\n script_args.base_model_name, return_dict=True, torch_dtype=torch.bfloat16\n )\n\ntokenizer = AutoTokenizer.from_pretrained(script_args.base_model_name)\n\n# Load the Lora model\nmodel = PeftModel.from_pretrained(model, script_args.adapter_model_name)\nmodel.eval()\n\nmodel = model.merge_and_unload()\n\nmodel.save_pretrained(f\"{script_args.output_name}\")\ntokenizer.save_pretrained(f\"{script_args.output_name}\")\nmodel.push_to_hub(f\"{script_args.output_name}\", use_temp_dir=False)\n", "path": "examples/stack_llama/scripts/merge_peft_adapter.py"}]} | 1,391 | 704 |
gh_patches_debug_21408 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1063 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
String misinterpreted as an int results in error on E2015
```
cfn-lint --version
cfn-lint 0.19.1
```
*Description of issue.*
The following template
```
Parameters:
CentralAccountId:
Default: 112233445566
MaxLength: 12
MinLength: 12
Type: String
```
result in the error:
```
E0002 Unknown exception while processing rule E2015: object of type 'int' has no len()
application-account-initial-setup.yaml:1:1
```
It is solved by putting quotes on the default value. However it is valid to not putting the quotes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/parameters/Default.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import re
18 import six
19 from cfnlint import CloudFormationLintRule
20 from cfnlint import RuleMatch
21
22
23 class Default(CloudFormationLintRule):
24 """Check if Parameters are configured correctly"""
25 id = 'E2015'
26 shortdesc = 'Default value is within parameter constraints'
27 description = 'Making sure the parameters have a default value inside AllowedValues, MinValue, MaxValue, AllowedPattern'
28 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html'
29 tags = ['parameters']
30
31 def check_allowed_pattern(self, allowed_value, allowed_pattern, path):
32 """
33 Check allowed value against allowed pattern
34 """
35 message = 'Default should be allowed by AllowedPattern'
36 try:
37 if not re.match(allowed_pattern, str(allowed_value)):
38 return([RuleMatch(path, message)])
39 except re.error as ex:
40 self.logger.debug('Regex pattern "%s" isn\'t supported by Python: %s', allowed_pattern, ex)
41
42 return []
43
44 def check_min_value(self, allowed_value, min_value, path):
45 """
46 Check allowed value against min value
47 """
48 message = 'Default should be equal to or higher than MinValue'
49
50 if isinstance(allowed_value, six.integer_types) and isinstance(min_value, six.integer_types):
51 if allowed_value < min_value:
52 return([RuleMatch(path, message)])
53
54 return []
55
56 def check_max_value(self, allowed_value, max_value, path):
57 """
58 Check allowed value against max value
59 """
60 message = 'Default should be less than or equal to MaxValue'
61
62 if isinstance(allowed_value, six.integer_types) and isinstance(max_value, six.integer_types):
63 if allowed_value > max_value:
64 return([RuleMatch(path, message)])
65
66 return []
67
68 def check_allowed_values(self, allowed_value, allowed_values, path):
69 """
70 Check allowed value against allowed values
71 """
72 message = 'Default should be a value within AllowedValues'
73
74 if allowed_value not in allowed_values:
75 return([RuleMatch(path, message)])
76
77 return []
78
79 def check_min_length(self, allowed_value, min_length, path):
80 """
81 Check allowed value against MinLength
82 """
83 message = 'Default should have a length above or equal to MinLength'
84
85 if isinstance(min_length, six.integer_types):
86 if len(allowed_value) < min_length:
87 return([RuleMatch(path, message)])
88
89 return []
90
91 def check_max_length(self, allowed_value, max_length, path):
92 """
93 Check allowed value against MaxLength
94 """
95 message = 'Default should have a length below or equal to MaxLength'
96
97 if isinstance(max_length, six.integer_types):
98 if len(allowed_value) > max_length:
99 return([RuleMatch(path, message)])
100
101 return []
102
103 def match(self, cfn):
104 """Check CloudFormation Parameters"""
105
106 matches = []
107
108 for paramname, paramvalue in cfn.get_parameters().items():
109 default_value = paramvalue.get('Default')
110 if default_value is not None:
111 path = ['Parameters', paramname, 'Default']
112 allowed_pattern = paramvalue.get('AllowedPattern')
113 if allowed_pattern:
114 matches.extend(
115 self.check_allowed_pattern(
116 default_value, allowed_pattern, path
117 )
118 )
119 min_value = paramvalue.get('MinValue')
120 if min_value:
121 matches.extend(
122 self.check_min_value(
123 default_value, min_value, path
124 )
125 )
126 max_value = paramvalue.get('MaxValue')
127 if max_value is not None:
128 matches.extend(
129 self.check_max_value(
130 default_value, max_value, path
131 )
132 )
133 allowed_values = paramvalue.get('AllowedValues')
134 if allowed_values:
135 matches.extend(
136 self.check_allowed_values(
137 default_value, allowed_values, path
138 )
139 )
140 min_length = paramvalue.get('MinLength')
141 if min_length is not None:
142 matches.extend(
143 self.check_min_length(
144 default_value, min_length, path
145 )
146 )
147 max_length = paramvalue.get('MaxLength')
148 if max_length is not None:
149 matches.extend(
150 self.check_max_length(
151 default_value, max_length, path
152 )
153 )
154
155 return matches
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/parameters/Default.py b/src/cfnlint/rules/parameters/Default.py
--- a/src/cfnlint/rules/parameters/Default.py
+++ b/src/cfnlint/rules/parameters/Default.py
@@ -82,8 +82,9 @@
"""
message = 'Default should have a length above or equal to MinLength'
+ value = allowed_value if isinstance(allowed_value, six.string_types) else str(allowed_value)
if isinstance(min_length, six.integer_types):
- if len(allowed_value) < min_length:
+ if len(value) < min_length:
return([RuleMatch(path, message)])
return []
@@ -94,8 +95,9 @@
"""
message = 'Default should have a length below or equal to MaxLength'
+ value = allowed_value if isinstance(allowed_value, six.string_types) else str(allowed_value)
if isinstance(max_length, six.integer_types):
- if len(allowed_value) > max_length:
+ if len(value) > max_length:
return([RuleMatch(path, message)])
return []
| {"golden_diff": "diff --git a/src/cfnlint/rules/parameters/Default.py b/src/cfnlint/rules/parameters/Default.py\n--- a/src/cfnlint/rules/parameters/Default.py\n+++ b/src/cfnlint/rules/parameters/Default.py\n@@ -82,8 +82,9 @@\n \"\"\"\n message = 'Default should have a length above or equal to MinLength'\n \n+ value = allowed_value if isinstance(allowed_value, six.string_types) else str(allowed_value)\n if isinstance(min_length, six.integer_types):\n- if len(allowed_value) < min_length:\n+ if len(value) < min_length:\n return([RuleMatch(path, message)])\n \n return []\n@@ -94,8 +95,9 @@\n \"\"\"\n message = 'Default should have a length below or equal to MaxLength'\n \n+ value = allowed_value if isinstance(allowed_value, six.string_types) else str(allowed_value)\n if isinstance(max_length, six.integer_types):\n- if len(allowed_value) > max_length:\n+ if len(value) > max_length:\n return([RuleMatch(path, message)])\n \n return []\n", "issue": "String misinterpreted as an int results in error on E2015\n```\r\ncfn-lint --version\r\ncfn-lint 0.19.1\r\n```\r\n\r\n*Description of issue.*\r\nThe following template\r\n```\r\nParameters:\r\n CentralAccountId:\r\n Default: 112233445566\r\n MaxLength: 12\r\n MinLength: 12\r\n Type: String\r\n```\r\nresult in the error:\r\n```\r\nE0002 Unknown exception while processing rule E2015: object of type 'int' has no len()\r\napplication-account-initial-setup.yaml:1:1\r\n```\r\n\r\nIt is solved by putting quotes on the default value. However it is valid to not putting the quotes.\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass Default(CloudFormationLintRule):\n \"\"\"Check if Parameters are configured correctly\"\"\"\n id = 'E2015'\n shortdesc = 'Default value is within parameter constraints'\n description = 'Making sure the parameters have a default value inside AllowedValues, MinValue, MaxValue, AllowedPattern'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html'\n tags = ['parameters']\n\n def check_allowed_pattern(self, allowed_value, allowed_pattern, path):\n \"\"\"\n Check allowed value against allowed pattern\n \"\"\"\n message = 'Default should be allowed by AllowedPattern'\n try:\n if not re.match(allowed_pattern, str(allowed_value)):\n return([RuleMatch(path, message)])\n except re.error as ex:\n self.logger.debug('Regex pattern \"%s\" isn\\'t supported by Python: %s', allowed_pattern, ex)\n\n return []\n\n def check_min_value(self, allowed_value, min_value, path):\n \"\"\"\n Check allowed value against min value\n \"\"\"\n message = 'Default should be equal to or higher than MinValue'\n\n if isinstance(allowed_value, six.integer_types) and isinstance(min_value, six.integer_types):\n if allowed_value < min_value:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_max_value(self, allowed_value, max_value, path):\n \"\"\"\n Check allowed value against max value\n \"\"\"\n message = 'Default should be less than or equal to MaxValue'\n\n if isinstance(allowed_value, six.integer_types) and isinstance(max_value, six.integer_types):\n if allowed_value > max_value:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_allowed_values(self, allowed_value, allowed_values, path):\n \"\"\"\n Check allowed value against allowed values\n \"\"\"\n message = 'Default should be a value within AllowedValues'\n\n if allowed_value not in allowed_values:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_min_length(self, allowed_value, min_length, path):\n \"\"\"\n Check allowed value against MinLength\n \"\"\"\n message = 'Default should have a length above or equal to MinLength'\n\n if isinstance(min_length, six.integer_types):\n if len(allowed_value) < min_length:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_max_length(self, allowed_value, max_length, path):\n \"\"\"\n Check allowed value against MaxLength\n \"\"\"\n message = 'Default should have a length below or equal to MaxLength'\n\n if isinstance(max_length, six.integer_types):\n if len(allowed_value) > max_length:\n return([RuleMatch(path, message)])\n\n return []\n\n def match(self, cfn):\n \"\"\"Check CloudFormation Parameters\"\"\"\n\n matches = []\n\n for paramname, paramvalue in cfn.get_parameters().items():\n default_value = paramvalue.get('Default')\n if default_value is not None:\n path = ['Parameters', paramname, 'Default']\n allowed_pattern = paramvalue.get('AllowedPattern')\n if allowed_pattern:\n matches.extend(\n self.check_allowed_pattern(\n default_value, allowed_pattern, path\n )\n )\n min_value = paramvalue.get('MinValue')\n if min_value:\n matches.extend(\n self.check_min_value(\n default_value, min_value, path\n )\n )\n max_value = 
paramvalue.get('MaxValue')\n if max_value is not None:\n matches.extend(\n self.check_max_value(\n default_value, max_value, path\n )\n )\n allowed_values = paramvalue.get('AllowedValues')\n if allowed_values:\n matches.extend(\n self.check_allowed_values(\n default_value, allowed_values, path\n )\n )\n min_length = paramvalue.get('MinLength')\n if min_length is not None:\n matches.extend(\n self.check_min_length(\n default_value, min_length, path\n )\n )\n max_length = paramvalue.get('MaxLength')\n if max_length is not None:\n matches.extend(\n self.check_max_length(\n default_value, max_length, path\n )\n )\n\n return matches\n", "path": "src/cfnlint/rules/parameters/Default.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass Default(CloudFormationLintRule):\n \"\"\"Check if Parameters are configured correctly\"\"\"\n id = 'E2015'\n shortdesc = 'Default value is within parameter constraints'\n description = 'Making sure the parameters have a default value inside AllowedValues, MinValue, MaxValue, AllowedPattern'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html'\n tags = ['parameters']\n\n def check_allowed_pattern(self, allowed_value, allowed_pattern, path):\n \"\"\"\n Check allowed value against allowed pattern\n \"\"\"\n message = 'Default should be allowed by AllowedPattern'\n try:\n if not re.match(allowed_pattern, str(allowed_value)):\n return([RuleMatch(path, message)])\n except re.error as ex:\n self.logger.debug('Regex pattern \"%s\" isn\\'t supported by Python: %s', allowed_pattern, ex)\n\n return []\n\n def check_min_value(self, allowed_value, min_value, path):\n \"\"\"\n Check allowed value against min value\n \"\"\"\n message = 'Default should be equal to or higher than MinValue'\n\n if isinstance(allowed_value, six.integer_types) and isinstance(min_value, six.integer_types):\n if allowed_value < min_value:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_max_value(self, allowed_value, max_value, path):\n \"\"\"\n Check allowed value against max value\n \"\"\"\n message = 'Default should be less than or equal to MaxValue'\n\n if isinstance(allowed_value, six.integer_types) and isinstance(max_value, six.integer_types):\n if allowed_value > max_value:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_allowed_values(self, allowed_value, allowed_values, path):\n \"\"\"\n Check allowed value against allowed values\n \"\"\"\n message = 
'Default should be a value within AllowedValues'\n\n if allowed_value not in allowed_values:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_min_length(self, allowed_value, min_length, path):\n \"\"\"\n Check allowed value against MinLength\n \"\"\"\n message = 'Default should have a length above or equal to MinLength'\n\n value = allowed_value if isinstance(allowed_value, six.string_types) else str(allowed_value)\n if isinstance(min_length, six.integer_types):\n if len(value) < min_length:\n return([RuleMatch(path, message)])\n\n return []\n\n def check_max_length(self, allowed_value, max_length, path):\n \"\"\"\n Check allowed value against MaxLength\n \"\"\"\n message = 'Default should have a length below or equal to MaxLength'\n\n value = allowed_value if isinstance(allowed_value, six.string_types) else str(allowed_value)\n if isinstance(max_length, six.integer_types):\n if len(value) > max_length:\n return([RuleMatch(path, message)])\n\n return []\n\n def match(self, cfn):\n \"\"\"Check CloudFormation Parameters\"\"\"\n\n matches = []\n\n for paramname, paramvalue in cfn.get_parameters().items():\n default_value = paramvalue.get('Default')\n if default_value is not None:\n path = ['Parameters', paramname, 'Default']\n allowed_pattern = paramvalue.get('AllowedPattern')\n if allowed_pattern:\n matches.extend(\n self.check_allowed_pattern(\n default_value, allowed_pattern, path\n )\n )\n min_value = paramvalue.get('MinValue')\n if min_value:\n matches.extend(\n self.check_min_value(\n default_value, min_value, path\n )\n )\n max_value = paramvalue.get('MaxValue')\n if max_value is not None:\n matches.extend(\n self.check_max_value(\n default_value, max_value, path\n )\n )\n allowed_values = paramvalue.get('AllowedValues')\n if allowed_values:\n matches.extend(\n self.check_allowed_values(\n default_value, allowed_values, path\n )\n )\n min_length = paramvalue.get('MinLength')\n if min_length is not None:\n matches.extend(\n self.check_min_length(\n default_value, min_length, path\n )\n )\n max_length = paramvalue.get('MaxLength')\n if max_length is not None:\n matches.extend(\n self.check_max_length(\n default_value, max_length, path\n )\n )\n\n return matches\n", "path": "src/cfnlint/rules/parameters/Default.py"}]} | 1,923 | 248 |
gh_patches_debug_29460 | rasdani/github-patches | git_diff | aimhubio__aim-2671 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extend `aim.ext.tensorboard_tracker.run.Run` to also allow system stats, parameters, and stdout capture.
## 🚀 Feature
Allow capturing of system parameters and terminal logs by the `aim.ext.tensorboard_tracker.run.Run`, as this is great feature shouldn't be only available to the default `Run`.
### Motivation
The new feature of allowing continuous syncing from `tensorboard` files to `aim` is really nice, but because `aim.ext.tensorboard_tracker.run.Run` inherits from `BasicRun` rather than `Run`, it misses out on the ability to log the standard out, system stats and system parameters. Since `aim.ext.tensorboard_tracker.run.Run` should be a possible replacement for `Run`, I don't see a reason why this behaviour shouldn't be allowed.
It has been highlighted in Discord by @mihran113:
> The reason behind inheriting from basic run is exactly to avoid terminal log tracking and system param tracking actually, cause we don’t want to add anything else rather than what’s tracked via tensorboard. Cause there can be times when live tracking is done from a different process, and catching that process’s terminal logs and system params won’t make any sense I guess. If you’re interested you can open a PR to address those points, cause adding the possibility to enable those won’t make any harm as well.
so I believe the *default* arguments should *not* do this extra logging, but still optionally allow this behaviour.
### Pitch
Have `aim.ext.tensorboard_tracker.run.Run` inherit from `aim.sdk.run.Run` instead of `aim.sdk.run.BasicRun`, so that it can utilise it's extra capabilities.
### Alternatives
Instead of inheritance we could change the system resource tracking be a mixin?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aim/ext/tensorboard_tracker/run.py`
Content:
```
1 from typing import Optional, Union
2
3 from aim.sdk.run import BasicRun
4 from aim.ext.tensorboard_tracker.tracker import TensorboardTracker
5
6 from typing import TYPE_CHECKING
7
8 if TYPE_CHECKING:
9 from aim.sdk.repo import Repo
10
11
12 class Run(BasicRun):
13 def __init__(self, run_hash: Optional[str] = None, *,
14 sync_tensorboard_log_dir: str,
15 repo: Optional[Union[str, 'Repo']] = None,
16 experiment: Optional[str] = None,
17 force_resume: Optional[bool] = False,
18 ):
19 super().__init__(run_hash, repo=repo, read_only=False, experiment=experiment, force_resume=force_resume)
20 self['tb_log_directory'] = sync_tensorboard_log_dir
21 self._tensorboard_tracker = TensorboardTracker(self._tracker, sync_tensorboard_log_dir)
22 self._tensorboard_tracker.start()
23 self._resources.add_extra_resource(self._tensorboard_tracker)
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aim/ext/tensorboard_tracker/run.py b/aim/ext/tensorboard_tracker/run.py
--- a/aim/ext/tensorboard_tracker/run.py
+++ b/aim/ext/tensorboard_tracker/run.py
@@ -1,6 +1,6 @@
from typing import Optional, Union
-from aim.sdk.run import BasicRun
+from aim.sdk.run import Run as SdkRun
from aim.ext.tensorboard_tracker.tracker import TensorboardTracker
from typing import TYPE_CHECKING
@@ -9,14 +9,23 @@
from aim.sdk.repo import Repo
-class Run(BasicRun):
- def __init__(self, run_hash: Optional[str] = None, *,
- sync_tensorboard_log_dir: str,
- repo: Optional[Union[str, 'Repo']] = None,
- experiment: Optional[str] = None,
- force_resume: Optional[bool] = False,
- ):
- super().__init__(run_hash, repo=repo, read_only=False, experiment=experiment, force_resume=force_resume)
+class Run(SdkRun):
+ def __init__(
+ self, run_hash: Optional[str] = None, *,
+ sync_tensorboard_log_dir: str,
+ repo: Optional[Union[str, 'Repo']] = None,
+ experiment: Optional[str] = None,
+ force_resume: Optional[bool] = False,
+ system_tracking_interval: Optional[Union[int, float]] = None,
+ log_system_params: Optional[bool] = False,
+ capture_terminal_logs: Optional[bool] = False,
+ ):
+ super().__init__(
+ run_hash, repo=repo, read_only=False, experiment=experiment, force_resume=force_resume,
+ system_tracking_interval=system_tracking_interval, log_system_params=log_system_params,
+ capture_terminal_logs=capture_terminal_logs
+ )
+
self['tb_log_directory'] = sync_tensorboard_log_dir
self._tensorboard_tracker = TensorboardTracker(self._tracker, sync_tensorboard_log_dir)
self._tensorboard_tracker.start()
| {"golden_diff": "diff --git a/aim/ext/tensorboard_tracker/run.py b/aim/ext/tensorboard_tracker/run.py\n--- a/aim/ext/tensorboard_tracker/run.py\n+++ b/aim/ext/tensorboard_tracker/run.py\n@@ -1,6 +1,6 @@\n from typing import Optional, Union\n \n-from aim.sdk.run import BasicRun\n+from aim.sdk.run import Run as SdkRun\n from aim.ext.tensorboard_tracker.tracker import TensorboardTracker\n \n from typing import TYPE_CHECKING\n@@ -9,14 +9,23 @@\n from aim.sdk.repo import Repo\n \n \n-class Run(BasicRun):\n- def __init__(self, run_hash: Optional[str] = None, *,\n- sync_tensorboard_log_dir: str,\n- repo: Optional[Union[str, 'Repo']] = None,\n- experiment: Optional[str] = None,\n- force_resume: Optional[bool] = False,\n- ):\n- super().__init__(run_hash, repo=repo, read_only=False, experiment=experiment, force_resume=force_resume)\n+class Run(SdkRun):\n+ def __init__(\n+ self, run_hash: Optional[str] = None, *,\n+ sync_tensorboard_log_dir: str,\n+ repo: Optional[Union[str, 'Repo']] = None,\n+ experiment: Optional[str] = None,\n+ force_resume: Optional[bool] = False,\n+ system_tracking_interval: Optional[Union[int, float]] = None,\n+ log_system_params: Optional[bool] = False,\n+ capture_terminal_logs: Optional[bool] = False,\n+ ):\n+ super().__init__(\n+ run_hash, repo=repo, read_only=False, experiment=experiment, force_resume=force_resume,\n+ system_tracking_interval=system_tracking_interval, log_system_params=log_system_params,\n+ capture_terminal_logs=capture_terminal_logs\n+ )\n+\n self['tb_log_directory'] = sync_tensorboard_log_dir\n self._tensorboard_tracker = TensorboardTracker(self._tracker, sync_tensorboard_log_dir)\n self._tensorboard_tracker.start()\n", "issue": "Extend `aim.ext.tensorboard_tracker.run.Run` to also allow system stats, parameters, and stdout capture.\n## \ud83d\ude80 Feature\r\n\r\nAllow capturing of system parameters and terminal logs by the `aim.ext.tensorboard_tracker.run.Run`, as this is great feature shouldn't be only available to the default `Run`.\r\n\r\n### Motivation\r\n\r\nThe new feature of allowing continuous syncing from `tensorboard` files to `aim` is really nice, but because `aim.ext.tensorboard_tracker.run.Run` inherits from `BasicRun` rather than `Run`, it misses out on the ability to log the standard out, system stats and system parameters. Since `aim.ext.tensorboard_tracker.run.Run` should be a possible replacement for `Run`, I don't see a reason why this behaviour shouldn't be allowed.\r\n\r\nIt has been highlighted in Discord by @mihran113:\r\n\r\n> The reason behind inheriting from basic run is exactly to avoid terminal log tracking and system param tracking actually, cause we don\u2019t want to add anything else rather than what\u2019s tracked via tensorboard. Cause there can be times when live tracking is done from a different process, and catching that process\u2019s terminal logs and system params won\u2019t make any sense I guess. If you\u2019re interested you can open a PR to address those points, cause adding the possibility to enable those won\u2019t make any harm as well.\r\n\r\nso I believe the *default* arguments should *not* do this extra logging, but still optionally allow this behaviour. \r\n\r\n### Pitch\r\n\r\nHave `aim.ext.tensorboard_tracker.run.Run` inherit from `aim.sdk.run.Run` instead of `aim.sdk.run.BasicRun`, so that it can utilise it's extra capabilities.\r\n\r\n### Alternatives\r\n\r\nInstead of inheritance we could change the system resource tracking be a mixin? 
\r\n\nExtend `aim.ext.tensorboard_tracker.run.Run` to also allow system stats, parameters, and stdout capture.\n## \ud83d\ude80 Feature\r\n\r\nAllow capturing of system parameters and terminal logs by the `aim.ext.tensorboard_tracker.run.Run`, as this is great feature shouldn't be only available to the default `Run`.\r\n\r\n### Motivation\r\n\r\nThe new feature of allowing continuous syncing from `tensorboard` files to `aim` is really nice, but because `aim.ext.tensorboard_tracker.run.Run` inherits from `BasicRun` rather than `Run`, it misses out on the ability to log the standard out, system stats and system parameters. Since `aim.ext.tensorboard_tracker.run.Run` should be a possible replacement for `Run`, I don't see a reason why this behaviour shouldn't be allowed.\r\n\r\nIt has been highlighted in Discord by @mihran113:\r\n\r\n> The reason behind inheriting from basic run is exactly to avoid terminal log tracking and system param tracking actually, cause we don\u2019t want to add anything else rather than what\u2019s tracked via tensorboard. Cause there can be times when live tracking is done from a different process, and catching that process\u2019s terminal logs and system params won\u2019t make any sense I guess. If you\u2019re interested you can open a PR to address those points, cause adding the possibility to enable those won\u2019t make any harm as well.\r\n\r\nso I believe the *default* arguments should *not* do this extra logging, but still optionally allow this behaviour. \r\n\r\n### Pitch\r\n\r\nHave `aim.ext.tensorboard_tracker.run.Run` inherit from `aim.sdk.run.Run` instead of `aim.sdk.run.BasicRun`, so that it can utilise it's extra capabilities.\r\n\r\n### Alternatives\r\n\r\nInstead of inheritance we could change the system resource tracking be a mixin? 
\r\n\n", "before_files": [{"content": "from typing import Optional, Union\n\nfrom aim.sdk.run import BasicRun\nfrom aim.ext.tensorboard_tracker.tracker import TensorboardTracker\n\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from aim.sdk.repo import Repo\n\n\nclass Run(BasicRun):\n def __init__(self, run_hash: Optional[str] = None, *,\n sync_tensorboard_log_dir: str,\n repo: Optional[Union[str, 'Repo']] = None,\n experiment: Optional[str] = None,\n force_resume: Optional[bool] = False,\n ):\n super().__init__(run_hash, repo=repo, read_only=False, experiment=experiment, force_resume=force_resume)\n self['tb_log_directory'] = sync_tensorboard_log_dir\n self._tensorboard_tracker = TensorboardTracker(self._tracker, sync_tensorboard_log_dir)\n self._tensorboard_tracker.start()\n self._resources.add_extra_resource(self._tensorboard_tracker)\n", "path": "aim/ext/tensorboard_tracker/run.py"}], "after_files": [{"content": "from typing import Optional, Union\n\nfrom aim.sdk.run import Run as SdkRun\nfrom aim.ext.tensorboard_tracker.tracker import TensorboardTracker\n\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from aim.sdk.repo import Repo\n\n\nclass Run(SdkRun):\n def __init__(\n self, run_hash: Optional[str] = None, *,\n sync_tensorboard_log_dir: str,\n repo: Optional[Union[str, 'Repo']] = None,\n experiment: Optional[str] = None,\n force_resume: Optional[bool] = False,\n system_tracking_interval: Optional[Union[int, float]] = None,\n log_system_params: Optional[bool] = False,\n capture_terminal_logs: Optional[bool] = False,\n ):\n super().__init__(\n run_hash, repo=repo, read_only=False, experiment=experiment, force_resume=force_resume,\n system_tracking_interval=system_tracking_interval, log_system_params=log_system_params,\n capture_terminal_logs=capture_terminal_logs\n )\n\n self['tb_log_directory'] = sync_tensorboard_log_dir\n self._tensorboard_tracker = TensorboardTracker(self._tracker, sync_tensorboard_log_dir)\n self._tensorboard_tracker.start()\n self._resources.add_extra_resource(self._tensorboard_tracker)\n", "path": "aim/ext/tensorboard_tracker/run.py"}]} | 1,232 | 448 |
gh_patches_debug_42129 | rasdani/github-patches | git_diff | conan-io__conan-center-index-1204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] cgal/all: review options applied
Coming from https://github.com/conan-io/conan-center-index/pull/965#issuecomment-590802910
It seems that the recipe might require some work regarding the options and flags.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/cgal/all/conanfile.py`
Content:
```
1 import os
2 from conans import ConanFile, CMake, tools
3
4
5 class CgalConan(ConanFile):
6 name = "cgal"
7 license = "LGPL-3.0-or-later"
8 url = "https://github.com/conan-io/conan-center-index"
9 homepage = "https://github.com/CGAL/cgal"
10 description = "C++ library that aims to provide easy access to efficient and reliable algorithms"\
11 "in computational geometry."
12 topics = ("geometry", "algorithms")
13 settings = "os", "compiler", "build_type", "arch"
14 requires = "mpir/3.0.0", "mpfr/4.0.2", "boost/1.72.0", "eigen/3.3.7"
15 generators = "cmake"
16
17 _source_subfolder = "source_subfolder"
18 _cmake = None
19
20 options = {
21 "with_cgal_core": [True, False],
22 "with_cgal_qt5": [True, False],
23 "with_cgal_imageio": [True, False]
24 }
25
26 default_options = {
27 "with_cgal_core": True,
28 "with_cgal_qt5": False,
29 "with_cgal_imageio": True
30 }
31
32 def _configure_cmake(self):
33 if not self._cmake:
34 self._cmake = CMake(self)
35 self._cmake.definitions["WITH_CGAL_Core"] = self.options.with_cgal_core
36 self._cmake.definitions["WITH_CGAL_Qt5"] = self.options.with_cgal_qt5
37 self._cmake.definitions["WITH_CGAL_ImageIO"] = self.options.with_cgal_imageio
38 self._cmake.configure(source_folder=self._source_subfolder)
39 return self._cmake
40
41 def _patch_sources(self):
42 tools.replace_in_file(
43 os.path.join(self._source_subfolder, "CMakeLists.txt"),
44 "project(CGAL CXX C)", '''project(CGAL CXX C)
45 include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
46 conan_basic_setup()''')
47
48 def source(self):
49 tools.get(**self.conan_data["sources"][self.version])
50 extracted_dir = "CGAL-{}".format(self.version)
51 os.rename(extracted_dir, self._source_subfolder)
52
53 def build(self):
54 self._patch_sources()
55 cmake = self._configure_cmake()
56 cmake.build()
57
58 def package(self):
59 self.copy("LICENSE*", dst="licenses", src=self._source_subfolder)
60 cmake = self._configure_cmake()
61 cmake.install()
62 tools.rmdir(os.path.join(self.package_folder, "share"))
63 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
64 tools.rmdir(os.path.join(self.package_folder, "bin"))
65
66 def package_info(self):
67 self.cpp_info.names["cmake_find_package"] = "CGAL"
68 self.cpp_info.names["cmake_find_package_multi"] = "CGAL"
69
70 def package_id(self):
71 self.info.header_only()
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/cgal/all/conanfile.py b/recipes/cgal/all/conanfile.py
--- a/recipes/cgal/all/conanfile.py
+++ b/recipes/cgal/all/conanfile.py
@@ -1,5 +1,6 @@
import os
from conans import ConanFile, CMake, tools
+from conans.errors import ConanInvalidConfiguration
class CgalConan(ConanFile):
@@ -13,20 +14,26 @@
settings = "os", "compiler", "build_type", "arch"
requires = "mpir/3.0.0", "mpfr/4.0.2", "boost/1.72.0", "eigen/3.3.7"
generators = "cmake"
+ exports_sources = "CMakeLists.txt"
_source_subfolder = "source_subfolder"
+ _build_subfolder = "build_subfolder"
_cmake = None
options = {
"with_cgal_core": [True, False],
"with_cgal_qt5": [True, False],
- "with_cgal_imageio": [True, False]
+ "with_cgal_imageio": [True, False],
+ "shared": [True, False],
+ "header_only": [True, False]
}
default_options = {
"with_cgal_core": True,
"with_cgal_qt5": False,
- "with_cgal_imageio": True
+ "with_cgal_imageio": True,
+ "shared": False,
+ "header_only": True
}
def _configure_cmake(self):
@@ -35,15 +42,19 @@
self._cmake.definitions["WITH_CGAL_Core"] = self.options.with_cgal_core
self._cmake.definitions["WITH_CGAL_Qt5"] = self.options.with_cgal_qt5
self._cmake.definitions["WITH_CGAL_ImageIO"] = self.options.with_cgal_imageio
- self._cmake.configure(source_folder=self._source_subfolder)
+ self._cmake.definitions["CGAL_HEADER_ONLY"] = self.options.header_only
+ self._cmake.configure(build_folder=self._build_subfolder)
return self._cmake
def _patch_sources(self):
- tools.replace_in_file(
- os.path.join(self._source_subfolder, "CMakeLists.txt"),
- "project(CGAL CXX C)", '''project(CGAL CXX C)
-include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
-conan_basic_setup()''')
+ tools.replace_in_file(os.path.join(self._source_subfolder, "CMakeLists.txt"),
+ "CMAKE_SOURCE_DIR", "CMAKE_CURRENT_SOURCE_DIR")
+
+ def configure(self):
+ if self.options.with_cgal_qt5:
+ raise ConanInvalidConfiguration("Qt Conan package is not available yet.")
+ if self.options.header_only:
+ del self.options.shared
def source(self):
tools.get(**self.conan_data["sources"][self.version])
@@ -61,11 +72,20 @@
cmake.install()
tools.rmdir(os.path.join(self.package_folder, "share"))
tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
- tools.rmdir(os.path.join(self.package_folder, "bin"))
+ if self.options.get_safe("shared"):
+ for root, _, filenames in os.walk(os.path.join(self.package_folder, "bin")):
+ for filename in filenames:
+ if not filename.endswith(".dll"):
+ os.unlink(os.path.join(root, filename))
+ else:
+ tools.rmdir(os.path.join(self.package_folder, "bin"))
def package_info(self):
+ if not self.options.header_only:
+ self.cpp_info.libs = tools.collect_libs(self)
self.cpp_info.names["cmake_find_package"] = "CGAL"
self.cpp_info.names["cmake_find_package_multi"] = "CGAL"
def package_id(self):
- self.info.header_only()
+ if self.options.header_only:
+ self.info.header_only()
| {"golden_diff": "diff --git a/recipes/cgal/all/conanfile.py b/recipes/cgal/all/conanfile.py\n--- a/recipes/cgal/all/conanfile.py\n+++ b/recipes/cgal/all/conanfile.py\n@@ -1,5 +1,6 @@\n import os\n from conans import ConanFile, CMake, tools\n+from conans.errors import ConanInvalidConfiguration\n \n \n class CgalConan(ConanFile):\n@@ -13,20 +14,26 @@\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n requires = \"mpir/3.0.0\", \"mpfr/4.0.2\", \"boost/1.72.0\", \"eigen/3.3.7\"\n generators = \"cmake\"\n+ exports_sources = \"CMakeLists.txt\"\n \n _source_subfolder = \"source_subfolder\"\n+ _build_subfolder = \"build_subfolder\"\n _cmake = None\n \n options = {\n \"with_cgal_core\": [True, False],\n \"with_cgal_qt5\": [True, False],\n- \"with_cgal_imageio\": [True, False]\n+ \"with_cgal_imageio\": [True, False],\n+ \"shared\": [True, False],\n+ \"header_only\": [True, False]\n }\n \n default_options = {\n \"with_cgal_core\": True,\n \"with_cgal_qt5\": False,\n- \"with_cgal_imageio\": True\n+ \"with_cgal_imageio\": True,\n+ \"shared\": False,\n+ \"header_only\": True\n }\n \n def _configure_cmake(self):\n@@ -35,15 +42,19 @@\n self._cmake.definitions[\"WITH_CGAL_Core\"] = self.options.with_cgal_core\n self._cmake.definitions[\"WITH_CGAL_Qt5\"] = self.options.with_cgal_qt5\n self._cmake.definitions[\"WITH_CGAL_ImageIO\"] = self.options.with_cgal_imageio\n- self._cmake.configure(source_folder=self._source_subfolder)\n+ self._cmake.definitions[\"CGAL_HEADER_ONLY\"] = self.options.header_only\n+ self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n \n def _patch_sources(self):\n- tools.replace_in_file(\n- os.path.join(self._source_subfolder, \"CMakeLists.txt\"),\n- \"project(CGAL CXX C)\", '''project(CGAL CXX C)\n-include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)\n-conan_basic_setup()''')\n+ tools.replace_in_file(os.path.join(self._source_subfolder, \"CMakeLists.txt\"),\n+ \"CMAKE_SOURCE_DIR\", \"CMAKE_CURRENT_SOURCE_DIR\")\n+\n+ def configure(self):\n+ if self.options.with_cgal_qt5:\n+ raise ConanInvalidConfiguration(\"Qt Conan package is not available yet.\")\n+ if self.options.header_only:\n+ del self.options.shared\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n@@ -61,11 +72,20 @@\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n- tools.rmdir(os.path.join(self.package_folder, \"bin\"))\n+ if self.options.get_safe(\"shared\"):\n+ for root, _, filenames in os.walk(os.path.join(self.package_folder, \"bin\")):\n+ for filename in filenames:\n+ if not filename.endswith(\".dll\"):\n+ os.unlink(os.path.join(root, filename))\n+ else:\n+ tools.rmdir(os.path.join(self.package_folder, \"bin\"))\n \n def package_info(self):\n+ if not self.options.header_only:\n+ self.cpp_info.libs = tools.collect_libs(self)\n self.cpp_info.names[\"cmake_find_package\"] = \"CGAL\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"CGAL\"\n \n def package_id(self):\n- self.info.header_only()\n+ if self.options.header_only:\n+ self.info.header_only()\n", "issue": "[package] cgal/all: review options applied\nComming from https://github.com/conan-io/conan-center-index/pull/965#issuecomment-590802910\r\n\r\nSeems that the recipe might require some work regarding the options and flags\n", "before_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\n\n\nclass CgalConan(ConanFile):\n name = \"cgal\"\n license = \"LGPL-3.0-or-later\"\n 
url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/CGAL/cgal\"\n description = \"C++ library that aims to provide easy access to efficient and reliable algorithms\"\\\n \"in computational geometry.\"\n topics = (\"geometry\", \"algorithms\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n requires = \"mpir/3.0.0\", \"mpfr/4.0.2\", \"boost/1.72.0\", \"eigen/3.3.7\"\n generators = \"cmake\"\n\n _source_subfolder = \"source_subfolder\"\n _cmake = None\n\n options = {\n \"with_cgal_core\": [True, False],\n \"with_cgal_qt5\": [True, False],\n \"with_cgal_imageio\": [True, False]\n }\n\n default_options = {\n \"with_cgal_core\": True,\n \"with_cgal_qt5\": False,\n \"with_cgal_imageio\": True\n }\n\n def _configure_cmake(self):\n if not self._cmake:\n self._cmake = CMake(self)\n self._cmake.definitions[\"WITH_CGAL_Core\"] = self.options.with_cgal_core\n self._cmake.definitions[\"WITH_CGAL_Qt5\"] = self.options.with_cgal_qt5\n self._cmake.definitions[\"WITH_CGAL_ImageIO\"] = self.options.with_cgal_imageio\n self._cmake.configure(source_folder=self._source_subfolder)\n return self._cmake\n\n def _patch_sources(self):\n tools.replace_in_file(\n os.path.join(self._source_subfolder, \"CMakeLists.txt\"),\n \"project(CGAL CXX C)\", '''project(CGAL CXX C)\ninclude(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)\nconan_basic_setup()''')\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"CGAL-{}\".format(self.version)\n os.rename(extracted_dir, self._source_subfolder)\n\n def build(self):\n self._patch_sources()\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE*\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"bin\"))\n\n def package_info(self):\n self.cpp_info.names[\"cmake_find_package\"] = \"CGAL\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"CGAL\"\n\n def package_id(self):\n self.info.header_only()\n", "path": "recipes/cgal/all/conanfile.py"}], "after_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\n\n\nclass CgalConan(ConanFile):\n name = \"cgal\"\n license = \"LGPL-3.0-or-later\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/CGAL/cgal\"\n description = \"C++ library that aims to provide easy access to efficient and reliable algorithms\"\\\n \"in computational geometry.\"\n topics = (\"geometry\", \"algorithms\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n requires = \"mpir/3.0.0\", \"mpfr/4.0.2\", \"boost/1.72.0\", \"eigen/3.3.7\"\n generators = \"cmake\"\n exports_sources = \"CMakeLists.txt\"\n\n _source_subfolder = \"source_subfolder\"\n _build_subfolder = \"build_subfolder\"\n _cmake = None\n\n options = {\n \"with_cgal_core\": [True, False],\n \"with_cgal_qt5\": [True, False],\n \"with_cgal_imageio\": [True, False],\n \"shared\": [True, False],\n \"header_only\": [True, False]\n }\n\n default_options = {\n \"with_cgal_core\": True,\n \"with_cgal_qt5\": False,\n \"with_cgal_imageio\": True,\n \"shared\": False,\n \"header_only\": True\n }\n\n def _configure_cmake(self):\n if not self._cmake:\n self._cmake = CMake(self)\n self._cmake.definitions[\"WITH_CGAL_Core\"] = 
self.options.with_cgal_core\n self._cmake.definitions[\"WITH_CGAL_Qt5\"] = self.options.with_cgal_qt5\n self._cmake.definitions[\"WITH_CGAL_ImageIO\"] = self.options.with_cgal_imageio\n self._cmake.definitions[\"CGAL_HEADER_ONLY\"] = self.options.header_only\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def _patch_sources(self):\n tools.replace_in_file(os.path.join(self._source_subfolder, \"CMakeLists.txt\"),\n \"CMAKE_SOURCE_DIR\", \"CMAKE_CURRENT_SOURCE_DIR\")\n\n def configure(self):\n if self.options.with_cgal_qt5:\n raise ConanInvalidConfiguration(\"Qt Conan package is not available yet.\")\n if self.options.header_only:\n del self.options.shared\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"CGAL-{}\".format(self.version)\n os.rename(extracted_dir, self._source_subfolder)\n\n def build(self):\n self._patch_sources()\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE*\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n if self.options.get_safe(\"shared\"):\n for root, _, filenames in os.walk(os.path.join(self.package_folder, \"bin\")):\n for filename in filenames:\n if not filename.endswith(\".dll\"):\n os.unlink(os.path.join(root, filename))\n else:\n tools.rmdir(os.path.join(self.package_folder, \"bin\"))\n\n def package_info(self):\n if not self.options.header_only:\n self.cpp_info.libs = tools.collect_libs(self)\n self.cpp_info.names[\"cmake_find_package\"] = \"CGAL\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"CGAL\"\n\n def package_id(self):\n if self.options.header_only:\n self.info.header_only()\n", "path": "recipes/cgal/all/conanfile.py"}]} | 1,133 | 922 |
gh_patches_debug_11580 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1631 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FR: key coal has negative value -9.0
```
invalid point: {'zoneKey': 'FR', 'datetime': datetime.datetime(2018, 10, 9, 11, 15, tzinfo=tzoffset(None, 7200)), 'production': {'nuclear': 41740.0, 'coal': -9.0, 'gas': 4057.0, 'oil': 188.0, 'wind': 1158.0, 'solar': 2762.0, 'biomass': 861.0, 'hydro': 3366.0}, 'storage': {'hydro': -1024.0}, 'source': 'opendata.reseaux-energies.fr', 'schemaVersion': 1}, reason:FR: key coal has negative value -9.0
```
It would probably be a good idea to set small negative values to 0.
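For illustration, a minimal sketch of such clamping as a standalone helper — the `-50` cutoff is a judgment call (large negative values should still be rejected rather than silently zeroed):

```python
def clamp_small_negative(value, threshold=-50):
    """Treat small negative readings (e.g. coal at -9.0) as zero,
    but leave large negative values untouched so they still fail validation."""
    if threshold < value < 0:
        return 0
    return value


# e.g. inside the parser loop:
# production[value] = clamp_small_negative(row[1][key])
```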
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/FR.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import arrow
4 import json
5 import logging
6 import os
7 import math
8
9 import pandas as pd
10 import requests
11 import xml.etree.ElementTree as ET
12
13 API_ENDPOINT = 'https://opendata.reseaux-energies.fr/api/records/1.0/search/'
14
15 MAP_GENERATION = {
16 'nucleaire': 'nuclear',
17 'charbon': 'coal',
18 'gaz': 'gas',
19 'fioul': 'oil',
20 'eolien': 'wind',
21 'solaire': 'solar',
22 'bioenergies': 'biomass'
23 }
24
25 MAP_HYDRO = [
26 'hydraulique_fil_eau_eclusee',
27 'hydraulique_lacs',
28 'hydraulique_step_turbinage',
29 'pompage'
30 ]
31
32 def is_not_nan_and_truthy(v):
33 if isinstance(v, float) and math.isnan(v):
34 return False
35 return bool(v)
36
37
38 def fetch_production(zone_key='FR', session=None, target_datetime=None,
39 logger=logging.getLogger(__name__)):
40 if target_datetime:
41 to = arrow.get(target_datetime, 'Europe/Paris')
42 else:
43 to = arrow.now(tz='Europe/Paris')
44
45 # setup request
46 r = session or requests.session()
47 formatted_from = to.shift(days=-1).format('YYYY-MM-DDTHH:mm')
48 formatted_to = to.format('YYYY-MM-DDTHH:mm')
49
50 params = {
51 'dataset': 'eco2mix-national-tr',
52 'q': 'date_heure >= {} AND date_heure <= {}'.format(
53 formatted_from, formatted_to),
54 'timezone': 'Europe/Paris',
55 'rows': 100
56 }
57
58 if 'RESEAUX_ENERGIES_TOKEN' not in os.environ:
59 raise Exception(
60 'No RESEAUX_ENERGIES_TOKEN found! Please add it into secrets.env!')
61 params['apikey'] = os.environ['RESEAUX_ENERGIES_TOKEN']
62
63 # make request and create dataframe with response
64 response = r.get(API_ENDPOINT, params=params)
65 data = json.loads(response.content)
66 data = [d['fields'] for d in data['records']]
67 df = pd.DataFrame(data)
68
69 # filter out desired columns and convert values to float
70 value_columns = list(MAP_GENERATION.keys()) + MAP_HYDRO
71 df = df[['date_heure'] + value_columns]
72 df[value_columns] = df[value_columns].astype(float)
73
74 datapoints = list()
75 for row in df.iterrows():
76 production = dict()
77 for key, value in MAP_GENERATION.items():
78 production[value] = row[1][key]
79
80 # Hydro is a special case!
81 production['hydro'] = row[1]['hydraulique_lacs'] + row[1]['hydraulique_fil_eau_eclusee']
82 storage = {
83 'hydro': row[1]['pompage'] * -1 + row[1]['hydraulique_step_turbinage'] * -1
84 }
85
86 # if all production values are null, ignore datapoint
87 if not any([is_not_nan_and_truthy(v)
88 for k, v in production.items()]):
89 continue
90
91 datapoints.append({
92 'zoneKey': zone_key,
93 'datetime': arrow.get(row[1]['date_heure']).datetime,
94 'production': production,
95 'storage': storage,
96 'source': 'opendata.reseaux-energies.fr'
97 })
98
99 return datapoints
100
101
102 def fetch_price(zone_key, session=None, target_datetime=None,
103 logger=logging.getLogger(__name__)):
104 if target_datetime:
105 now = arrow.get(target_datetime, tz='Europe/Paris')
106 else:
107 now = arrow.now(tz='Europe/Paris')
108
109 r = session or requests.session()
110 formatted_from = now.shift(days=-1).format('DD/MM/YYYY')
111 formatted_to = now.format('DD/MM/YYYY')
112
113 url = 'http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&da' \
114 'teDeb={}&dateFin={}&mode=NORM'.format(formatted_from, formatted_to)
115 response = r.get(url)
116 obj = ET.fromstring(response.content)
117 datas = {}
118
119 for donnesMarche in obj:
120 if donnesMarche.tag != 'donneesMarche':
121 continue
122
123 start_date = arrow.get(arrow.get(donnesMarche.attrib['date']).datetime, 'Europe/Paris')
124
125 for item in donnesMarche:
126 if item.get('granularite') != 'Global':
127 continue
128 country_c = item.get('perimetre')
129 if zone_key != country_c:
130 continue
131 value = None
132 for value in item:
133 if value.text == 'ND':
134 continue
135 period = int(value.attrib['periode'])
136 datetime = start_date.replace(hours=+period).datetime
137 if not datetime in datas:
138 datas[datetime] = {
139 'zoneKey': zone_key,
140 'currency': 'EUR',
141 'datetime': datetime,
142 'source': 'rte-france.com',
143 }
144 data = datas[datetime]
145 data['price'] = float(value.text)
146
147 return list(datas.values())
148
149
150 if __name__ == '__main__':
151 print(fetch_production())
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsers/FR.py b/parsers/FR.py
--- a/parsers/FR.py
+++ b/parsers/FR.py
@@ -75,7 +75,12 @@
for row in df.iterrows():
production = dict()
for key, value in MAP_GENERATION.items():
- production[value] = row[1][key]
+ # Set small negative values to 0
+ if row[1][key] < 0 and row[1][key] > -50:
+ logger.warning('Setting small value of %s (%s) to 0.' % (key, value))
+ production[value] = 0
+ else:
+ production[value] = row[1][key]
# Hydro is a special case!
production['hydro'] = row[1]['hydraulique_lacs'] + row[1]['hydraulique_fil_eau_eclusee']
| {"golden_diff": "diff --git a/parsers/FR.py b/parsers/FR.py\n--- a/parsers/FR.py\n+++ b/parsers/FR.py\n@@ -75,7 +75,12 @@\n for row in df.iterrows():\n production = dict()\n for key, value in MAP_GENERATION.items():\n- production[value] = row[1][key]\n+ # Set small negative values to 0\n+ if row[1][key] < 0 and row[1][key] > -50:\n+ logger.warning('Setting small value of %s (%s) to 0.' % (key, value))\n+ production[value] = 0\n+ else:\n+ production[value] = row[1][key]\n \n # Hydro is a special case!\n production['hydro'] = row[1]['hydraulique_lacs'] + row[1]['hydraulique_fil_eau_eclusee']\n", "issue": "FR: key coal has negative value -9.0\n```\r\ninvalid point: {'zoneKey': 'FR', 'datetime': datetime.datetime(2018, 10, 9, 11, 15, tzinfo=tzoffset(None, 7200)), 'production': {'nuclear': 41740.0, 'coal': -9.0, 'gas': 4057.0, 'oil': 188.0, 'wind': 1158.0, 'solar': 2762.0, 'biomass': 861.0, 'hydro': 3366.0}, 'storage': {'hydro': -1024.0}, 'source': 'opendata.reseaux-energies.fr', 'schemaVersion': 1}, reason:FR: key coal has negative value -9.0\r\n```\r\n\r\nProbably a good idea to set small negative values to 0\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport arrow\nimport json\nimport logging\nimport os\nimport math\n\nimport pandas as pd\nimport requests\nimport xml.etree.ElementTree as ET\n\nAPI_ENDPOINT = 'https://opendata.reseaux-energies.fr/api/records/1.0/search/'\n\nMAP_GENERATION = {\n 'nucleaire': 'nuclear',\n 'charbon': 'coal',\n 'gaz': 'gas',\n 'fioul': 'oil',\n 'eolien': 'wind',\n 'solaire': 'solar',\n 'bioenergies': 'biomass'\n}\n\nMAP_HYDRO = [\n 'hydraulique_fil_eau_eclusee',\n 'hydraulique_lacs',\n 'hydraulique_step_turbinage',\n 'pompage'\n]\n\ndef is_not_nan_and_truthy(v):\n if isinstance(v, float) and math.isnan(v):\n return False\n return bool(v)\n\n\ndef fetch_production(zone_key='FR', session=None, target_datetime=None,\n logger=logging.getLogger(__name__)):\n if target_datetime:\n to = arrow.get(target_datetime, 'Europe/Paris')\n else:\n to = arrow.now(tz='Europe/Paris')\n\n # setup request\n r = session or requests.session()\n formatted_from = to.shift(days=-1).format('YYYY-MM-DDTHH:mm')\n formatted_to = to.format('YYYY-MM-DDTHH:mm')\n\n params = {\n 'dataset': 'eco2mix-national-tr',\n 'q': 'date_heure >= {} AND date_heure <= {}'.format(\n formatted_from, formatted_to),\n 'timezone': 'Europe/Paris',\n 'rows': 100\n }\n\n if 'RESEAUX_ENERGIES_TOKEN' not in os.environ:\n raise Exception(\n 'No RESEAUX_ENERGIES_TOKEN found! 
Please add it into secrets.env!')\n params['apikey'] = os.environ['RESEAUX_ENERGIES_TOKEN']\n\n # make request and create dataframe with response\n response = r.get(API_ENDPOINT, params=params)\n data = json.loads(response.content)\n data = [d['fields'] for d in data['records']]\n df = pd.DataFrame(data)\n\n # filter out desired columns and convert values to float\n value_columns = list(MAP_GENERATION.keys()) + MAP_HYDRO\n df = df[['date_heure'] + value_columns]\n df[value_columns] = df[value_columns].astype(float)\n\n datapoints = list()\n for row in df.iterrows():\n production = dict()\n for key, value in MAP_GENERATION.items():\n production[value] = row[1][key]\n\n # Hydro is a special case!\n production['hydro'] = row[1]['hydraulique_lacs'] + row[1]['hydraulique_fil_eau_eclusee']\n storage = {\n 'hydro': row[1]['pompage'] * -1 + row[1]['hydraulique_step_turbinage'] * -1\n }\n\n # if all production values are null, ignore datapoint\n if not any([is_not_nan_and_truthy(v)\n for k, v in production.items()]):\n continue\n\n datapoints.append({\n 'zoneKey': zone_key,\n 'datetime': arrow.get(row[1]['date_heure']).datetime,\n 'production': production,\n 'storage': storage,\n 'source': 'opendata.reseaux-energies.fr'\n })\n\n return datapoints\n\n\ndef fetch_price(zone_key, session=None, target_datetime=None,\n logger=logging.getLogger(__name__)):\n if target_datetime:\n now = arrow.get(target_datetime, tz='Europe/Paris')\n else:\n now = arrow.now(tz='Europe/Paris')\n\n r = session or requests.session()\n formatted_from = now.shift(days=-1).format('DD/MM/YYYY')\n formatted_to = now.format('DD/MM/YYYY')\n\n url = 'http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&da' \\\n 'teDeb={}&dateFin={}&mode=NORM'.format(formatted_from, formatted_to)\n response = r.get(url)\n obj = ET.fromstring(response.content)\n datas = {}\n\n for donnesMarche in obj:\n if donnesMarche.tag != 'donneesMarche':\n continue\n\n start_date = arrow.get(arrow.get(donnesMarche.attrib['date']).datetime, 'Europe/Paris')\n\n for item in donnesMarche:\n if item.get('granularite') != 'Global':\n continue\n country_c = item.get('perimetre')\n if zone_key != country_c:\n continue\n value = None\n for value in item:\n if value.text == 'ND':\n continue\n period = int(value.attrib['periode'])\n datetime = start_date.replace(hours=+period).datetime\n if not datetime in datas:\n datas[datetime] = {\n 'zoneKey': zone_key,\n 'currency': 'EUR',\n 'datetime': datetime,\n 'source': 'rte-france.com',\n }\n data = datas[datetime]\n data['price'] = float(value.text)\n\n return list(datas.values())\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/FR.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport arrow\nimport json\nimport logging\nimport os\nimport math\n\nimport pandas as pd\nimport requests\nimport xml.etree.ElementTree as ET\n\nAPI_ENDPOINT = 'https://opendata.reseaux-energies.fr/api/records/1.0/search/'\n\nMAP_GENERATION = {\n 'nucleaire': 'nuclear',\n 'charbon': 'coal',\n 'gaz': 'gas',\n 'fioul': 'oil',\n 'eolien': 'wind',\n 'solaire': 'solar',\n 'bioenergies': 'biomass'\n}\n\nMAP_HYDRO = [\n 'hydraulique_fil_eau_eclusee',\n 'hydraulique_lacs',\n 'hydraulique_step_turbinage',\n 'pompage'\n]\n\ndef is_not_nan_and_truthy(v):\n if isinstance(v, float) and math.isnan(v):\n return False\n return bool(v)\n\n\ndef fetch_production(zone_key='FR', session=None, target_datetime=None,\n logger=logging.getLogger(__name__)):\n if target_datetime:\n to = arrow.get(target_datetime, 
'Europe/Paris')\n else:\n to = arrow.now(tz='Europe/Paris')\n\n # setup request\n r = session or requests.session()\n formatted_from = to.shift(days=-1).format('YYYY-MM-DDTHH:mm')\n formatted_to = to.format('YYYY-MM-DDTHH:mm')\n\n params = {\n 'dataset': 'eco2mix-national-tr',\n 'q': 'date_heure >= {} AND date_heure <= {}'.format(\n formatted_from, formatted_to),\n 'timezone': 'Europe/Paris',\n 'rows': 100\n }\n\n if 'RESEAUX_ENERGIES_TOKEN' not in os.environ:\n raise Exception(\n 'No RESEAUX_ENERGIES_TOKEN found! Please add it into secrets.env!')\n params['apikey'] = os.environ['RESEAUX_ENERGIES_TOKEN']\n\n # make request and create dataframe with response\n response = r.get(API_ENDPOINT, params=params)\n data = json.loads(response.content)\n data = [d['fields'] for d in data['records']]\n df = pd.DataFrame(data)\n\n # filter out desired columns and convert values to float\n value_columns = list(MAP_GENERATION.keys()) + MAP_HYDRO\n df = df[['date_heure'] + value_columns]\n df[value_columns] = df[value_columns].astype(float)\n\n datapoints = list()\n for row in df.iterrows():\n production = dict()\n for key, value in MAP_GENERATION.items():\n # Set small negative values to 0\n if row[1][key] < 0 and row[1][key] > -50:\n logger.warning('Setting small value of %s (%s) to 0.' % (key, value))\n production[value] = 0\n else:\n production[value] = row[1][key]\n\n # Hydro is a special case!\n production['hydro'] = row[1]['hydraulique_lacs'] + row[1]['hydraulique_fil_eau_eclusee']\n storage = {\n 'hydro': row[1]['pompage'] * -1 + row[1]['hydraulique_step_turbinage'] * -1\n }\n\n # if all production values are null, ignore datapoint\n if not any([is_not_nan_and_truthy(v)\n for k, v in production.items()]):\n continue\n\n datapoints.append({\n 'zoneKey': zone_key,\n 'datetime': arrow.get(row[1]['date_heure']).datetime,\n 'production': production,\n 'storage': storage,\n 'source': 'opendata.reseaux-energies.fr'\n })\n\n return datapoints\n\n\ndef fetch_price(zone_key, session=None, target_datetime=None,\n logger=logging.getLogger(__name__)):\n if target_datetime:\n now = arrow.get(target_datetime, tz='Europe/Paris')\n else:\n now = arrow.now(tz='Europe/Paris')\n\n r = session or requests.session()\n formatted_from = now.shift(days=-1).format('DD/MM/YYYY')\n formatted_to = now.format('DD/MM/YYYY')\n\n url = 'http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&da' \\\n 'teDeb={}&dateFin={}&mode=NORM'.format(formatted_from, formatted_to)\n response = r.get(url)\n obj = ET.fromstring(response.content)\n datas = {}\n\n for donnesMarche in obj:\n if donnesMarche.tag != 'donneesMarche':\n continue\n\n start_date = arrow.get(arrow.get(donnesMarche.attrib['date']).datetime, 'Europe/Paris')\n\n for item in donnesMarche:\n if item.get('granularite') != 'Global':\n continue\n country_c = item.get('perimetre')\n if zone_key != country_c:\n continue\n value = None\n for value in item:\n if value.text == 'ND':\n continue\n period = int(value.attrib['periode'])\n datetime = start_date.replace(hours=+period).datetime\n if not datetime in datas:\n datas[datetime] = {\n 'zoneKey': zone_key,\n 'currency': 'EUR',\n 'datetime': datetime,\n 'source': 'rte-france.com',\n }\n data = datas[datetime]\n data['price'] = float(value.text)\n\n return list(datas.values())\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/FR.py"}]} | 2,002 | 206 |
gh_patches_debug_4345 | rasdani/github-patches | git_diff | netbox-community__netbox-16037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to run scripts from CLI in v4.0
### Deployment Type
Self-hosted
### NetBox Version
v4.0.0
### Python Version
3.11
### Steps to Reproduce
1. Create a script
2. Run it with `python manage.py runscript 'module.ScriptName'` inside the NetBox instance
### Expected Behavior
Script should run.
### Observed Behavior
Script fails with:
> AttributeError: 'Script' object has no attribute 'full_name'
Running the same script from the GUI works fine; I have tried multiple scripts and haven't been able to run any via the CLI in v4.
It seems to be this line that fails: https://github.com/netbox-community/netbox/blob/develop/netbox/extras/management/commands/runscript.py#L104
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/management/commands/runscript.py`
Content:
```
1 import json
2 import logging
3 import sys
4 import traceback
5 import uuid
6
7 from django.contrib.auth import get_user_model
8 from django.core.management.base import BaseCommand, CommandError
9 from django.db import transaction
10
11 from core.choices import JobStatusChoices
12 from core.models import Job
13 from extras.context_managers import event_tracking
14 from extras.scripts import get_module_and_script
15 from extras.signals import clear_events
16 from utilities.exceptions import AbortTransaction
17 from utilities.request import NetBoxFakeRequest
18
19
20 class Command(BaseCommand):
21 help = "Run a script in NetBox"
22
23 def add_arguments(self, parser):
24 parser.add_argument(
25 '--loglevel',
26 help="Logging Level (default: info)",
27 dest='loglevel',
28 default='info',
29 choices=['debug', 'info', 'warning', 'error', 'critical'])
30 parser.add_argument('--commit', help="Commit this script to database", action='store_true')
31 parser.add_argument('--user', help="User script is running as")
32 parser.add_argument('--data', help="Data as a string encapsulated JSON blob")
33 parser.add_argument('script', help="Script to run")
34
35 def handle(self, *args, **options):
36
37 def _run_script():
38 """
39 Core script execution task. We capture this within a subfunction to allow for conditionally wrapping it with
40 the event_tracking context manager (which is bypassed if commit == False).
41 """
42 try:
43 try:
44 with transaction.atomic():
45 script.output = script.run(data=data, commit=commit)
46 if not commit:
47 raise AbortTransaction()
48 except AbortTransaction:
49 script.log_info("Database changes have been reverted automatically.")
50 clear_events.send(request)
51 job.data = script.get_job_data()
52 job.terminate()
53 except Exception as e:
54 stacktrace = traceback.format_exc()
55 script.log_failure(
56 f"An exception occurred: `{type(e).__name__}: {e}`\n```\n{stacktrace}\n```"
57 )
58 script.log_info("Database changes have been reverted due to error.")
59 logger.error(f"Exception raised during script execution: {e}")
60 clear_events.send(request)
61 job.data = script.get_job_data()
62 job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
63
64 # Print any test method results
65 for test_name, attrs in job.data['tests'].items():
66 self.stdout.write(
67 "\t{}: {} success, {} info, {} warning, {} failure".format(
68 test_name, attrs['success'], attrs['info'], attrs['warning'], attrs['failure']
69 )
70 )
71
72 logger.info(f"Script completed in {job.duration}")
73
74 User = get_user_model()
75
76 # Params
77 script = options['script']
78 loglevel = options['loglevel']
79 commit = options['commit']
80
81 try:
82 data = json.loads(options['data'])
83 except TypeError:
84 data = {}
85
86 module_name, script_name = script.split('.', 1)
87 module, script = get_module_and_script(module_name, script_name)
88
89 # Take user from command line if provided and exists, other
90 if options['user']:
91 try:
92 user = User.objects.get(username=options['user'])
93 except User.DoesNotExist:
94 user = User.objects.filter(is_superuser=True).order_by('pk')[0]
95 else:
96 user = User.objects.filter(is_superuser=True).order_by('pk')[0]
97
98 # Setup logging to Stdout
99 formatter = logging.Formatter(f'[%(asctime)s][%(levelname)s] - %(message)s')
100 stdouthandler = logging.StreamHandler(sys.stdout)
101 stdouthandler.setLevel(logging.DEBUG)
102 stdouthandler.setFormatter(formatter)
103
104 logger = logging.getLogger(f"netbox.scripts.{script.full_name}")
105 logger.addHandler(stdouthandler)
106
107 try:
108 logger.setLevel({
109 'critical': logging.CRITICAL,
110 'debug': logging.DEBUG,
111 'error': logging.ERROR,
112 'fatal': logging.FATAL,
113 'info': logging.INFO,
114 'warning': logging.WARNING,
115 }[loglevel])
116 except KeyError:
117 raise CommandError(f"Invalid log level: {loglevel}")
118
119 # Initialize the script form
120 script = script()
121 form = script.as_form(data, None)
122
123 # Create the job
124 job = Job.objects.create(
125 object=module,
126 name=script.class_name,
127 user=User.objects.filter(is_superuser=True).order_by('pk')[0],
128 job_id=uuid.uuid4()
129 )
130
131 request = NetBoxFakeRequest({
132 'META': {},
133 'POST': data,
134 'GET': {},
135 'FILES': {},
136 'user': user,
137 'path': '',
138 'id': job.job_id
139 })
140
141 if form.is_valid():
142 job.status = JobStatusChoices.STATUS_RUNNING
143 job.save()
144
145 logger.info(f"Running script (commit={commit})")
146 script.request = request
147
148 # Execute the script. If commit is True, wrap it with the event_tracking context manager to ensure we process
149 # change logging, webhooks, etc.
150 with event_tracking(request):
151 _run_script()
152 else:
153 logger.error('Data is not valid:')
154 for field, errors in form.errors.get_json_data().items():
155 for error in errors:
156 logger.error(f'\t{field}: {error.get("message")}')
157 job.status = JobStatusChoices.STATUS_ERRORED
158 job.save()
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/extras/management/commands/runscript.py b/netbox/extras/management/commands/runscript.py
--- a/netbox/extras/management/commands/runscript.py
+++ b/netbox/extras/management/commands/runscript.py
@@ -85,6 +85,7 @@
module_name, script_name = script.split('.', 1)
module, script = get_module_and_script(module_name, script_name)
+ script = script.python_class
# Take user from command line if provided and exists, other
if options['user']:
| {"golden_diff": "diff --git a/netbox/extras/management/commands/runscript.py b/netbox/extras/management/commands/runscript.py\n--- a/netbox/extras/management/commands/runscript.py\n+++ b/netbox/extras/management/commands/runscript.py\n@@ -85,6 +85,7 @@\n \n module_name, script_name = script.split('.', 1)\n module, script = get_module_and_script(module_name, script_name)\n+ script = script.python_class\n \n # Take user from command line if provided and exists, other\n if options['user']:\n", "issue": "Unable to run scripts from CLI in v4.0\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv4.0.0\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\n1. Create a script\r\n2. Run it with `python manage.py runscript 'module.ScriptName' inside the NetBox instance\r\n\n\n### Expected Behavior\n\nScript should run.\n\n### Observed Behavior\n\nScript fails with:\r\n> AttributeError: 'Script' object has no attribute 'full_name'\r\n\r\nRunning the same script from GUI works fine, have tried multiple scripts, and haven't been able to run any via CLI in v4. \r\n\r\nSeems to be this line that fails: https://github.com/netbox-community/netbox/blob/develop/netbox/extras/management/commands/runscript.py#L104\n", "before_files": [{"content": "import json\nimport logging\nimport sys\nimport traceback\nimport uuid\n\nfrom django.contrib.auth import get_user_model\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.db import transaction\n\nfrom core.choices import JobStatusChoices\nfrom core.models import Job\nfrom extras.context_managers import event_tracking\nfrom extras.scripts import get_module_and_script\nfrom extras.signals import clear_events\nfrom utilities.exceptions import AbortTransaction\nfrom utilities.request import NetBoxFakeRequest\n\n\nclass Command(BaseCommand):\n help = \"Run a script in NetBox\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--loglevel',\n help=\"Logging Level (default: info)\",\n dest='loglevel',\n default='info',\n choices=['debug', 'info', 'warning', 'error', 'critical'])\n parser.add_argument('--commit', help=\"Commit this script to database\", action='store_true')\n parser.add_argument('--user', help=\"User script is running as\")\n parser.add_argument('--data', help=\"Data as a string encapsulated JSON blob\")\n parser.add_argument('script', help=\"Script to run\")\n\n def handle(self, *args, **options):\n\n def _run_script():\n \"\"\"\n Core script execution task. 
We capture this within a subfunction to allow for conditionally wrapping it with\n the event_tracking context manager (which is bypassed if commit == False).\n \"\"\"\n try:\n try:\n with transaction.atomic():\n script.output = script.run(data=data, commit=commit)\n if not commit:\n raise AbortTransaction()\n except AbortTransaction:\n script.log_info(\"Database changes have been reverted automatically.\")\n clear_events.send(request)\n job.data = script.get_job_data()\n job.terminate()\n except Exception as e:\n stacktrace = traceback.format_exc()\n script.log_failure(\n f\"An exception occurred: `{type(e).__name__}: {e}`\\n```\\n{stacktrace}\\n```\"\n )\n script.log_info(\"Database changes have been reverted due to error.\")\n logger.error(f\"Exception raised during script execution: {e}\")\n clear_events.send(request)\n job.data = script.get_job_data()\n job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))\n\n # Print any test method results\n for test_name, attrs in job.data['tests'].items():\n self.stdout.write(\n \"\\t{}: {} success, {} info, {} warning, {} failure\".format(\n test_name, attrs['success'], attrs['info'], attrs['warning'], attrs['failure']\n )\n )\n\n logger.info(f\"Script completed in {job.duration}\")\n\n User = get_user_model()\n\n # Params\n script = options['script']\n loglevel = options['loglevel']\n commit = options['commit']\n\n try:\n data = json.loads(options['data'])\n except TypeError:\n data = {}\n\n module_name, script_name = script.split('.', 1)\n module, script = get_module_and_script(module_name, script_name)\n\n # Take user from command line if provided and exists, other\n if options['user']:\n try:\n user = User.objects.get(username=options['user'])\n except User.DoesNotExist:\n user = User.objects.filter(is_superuser=True).order_by('pk')[0]\n else:\n user = User.objects.filter(is_superuser=True).order_by('pk')[0]\n\n # Setup logging to Stdout\n formatter = logging.Formatter(f'[%(asctime)s][%(levelname)s] - %(message)s')\n stdouthandler = logging.StreamHandler(sys.stdout)\n stdouthandler.setLevel(logging.DEBUG)\n stdouthandler.setFormatter(formatter)\n\n logger = logging.getLogger(f\"netbox.scripts.{script.full_name}\")\n logger.addHandler(stdouthandler)\n\n try:\n logger.setLevel({\n 'critical': logging.CRITICAL,\n 'debug': logging.DEBUG,\n 'error': logging.ERROR,\n 'fatal': logging.FATAL,\n 'info': logging.INFO,\n 'warning': logging.WARNING,\n }[loglevel])\n except KeyError:\n raise CommandError(f\"Invalid log level: {loglevel}\")\n\n # Initialize the script form\n script = script()\n form = script.as_form(data, None)\n\n # Create the job\n job = Job.objects.create(\n object=module,\n name=script.class_name,\n user=User.objects.filter(is_superuser=True).order_by('pk')[0],\n job_id=uuid.uuid4()\n )\n\n request = NetBoxFakeRequest({\n 'META': {},\n 'POST': data,\n 'GET': {},\n 'FILES': {},\n 'user': user,\n 'path': '',\n 'id': job.job_id\n })\n\n if form.is_valid():\n job.status = JobStatusChoices.STATUS_RUNNING\n job.save()\n\n logger.info(f\"Running script (commit={commit})\")\n script.request = request\n\n # Execute the script. 
If commit is True, wrap it with the event_tracking context manager to ensure we process\n # change logging, webhooks, etc.\n with event_tracking(request):\n _run_script()\n else:\n logger.error('Data is not valid:')\n for field, errors in form.errors.get_json_data().items():\n for error in errors:\n logger.error(f'\\t{field}: {error.get(\"message\")}')\n job.status = JobStatusChoices.STATUS_ERRORED\n job.save()\n", "path": "netbox/extras/management/commands/runscript.py"}], "after_files": [{"content": "import json\nimport logging\nimport sys\nimport traceback\nimport uuid\n\nfrom django.contrib.auth import get_user_model\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.db import transaction\n\nfrom core.choices import JobStatusChoices\nfrom core.models import Job\nfrom extras.context_managers import event_tracking\nfrom extras.scripts import get_module_and_script\nfrom extras.signals import clear_events\nfrom utilities.exceptions import AbortTransaction\nfrom utilities.request import NetBoxFakeRequest\n\n\nclass Command(BaseCommand):\n help = \"Run a script in NetBox\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--loglevel',\n help=\"Logging Level (default: info)\",\n dest='loglevel',\n default='info',\n choices=['debug', 'info', 'warning', 'error', 'critical'])\n parser.add_argument('--commit', help=\"Commit this script to database\", action='store_true')\n parser.add_argument('--user', help=\"User script is running as\")\n parser.add_argument('--data', help=\"Data as a string encapsulated JSON blob\")\n parser.add_argument('script', help=\"Script to run\")\n\n def handle(self, *args, **options):\n\n def _run_script():\n \"\"\"\n Core script execution task. We capture this within a subfunction to allow for conditionally wrapping it with\n the event_tracking context manager (which is bypassed if commit == False).\n \"\"\"\n try:\n try:\n with transaction.atomic():\n script.output = script.run(data=data, commit=commit)\n if not commit:\n raise AbortTransaction()\n except AbortTransaction:\n script.log_info(\"Database changes have been reverted automatically.\")\n clear_events.send(request)\n job.data = script.get_job_data()\n job.terminate()\n except Exception as e:\n stacktrace = traceback.format_exc()\n script.log_failure(\n f\"An exception occurred: `{type(e).__name__}: {e}`\\n```\\n{stacktrace}\\n```\"\n )\n script.log_info(\"Database changes have been reverted due to error.\")\n logger.error(f\"Exception raised during script execution: {e}\")\n clear_events.send(request)\n job.data = script.get_job_data()\n job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))\n\n # Print any test method results\n for test_name, attrs in job.data['tests'].items():\n self.stdout.write(\n \"\\t{}: {} success, {} info, {} warning, {} failure\".format(\n test_name, attrs['success'], attrs['info'], attrs['warning'], attrs['failure']\n )\n )\n\n logger.info(f\"Script completed in {job.duration}\")\n\n User = get_user_model()\n\n # Params\n script = options['script']\n loglevel = options['loglevel']\n commit = options['commit']\n\n try:\n data = json.loads(options['data'])\n except TypeError:\n data = {}\n\n module_name, script_name = script.split('.', 1)\n module, script = get_module_and_script(module_name, script_name)\n script = script.python_class\n\n # Take user from command line if provided and exists, other\n if options['user']:\n try:\n user = User.objects.get(username=options['user'])\n except User.DoesNotExist:\n user = 
User.objects.filter(is_superuser=True).order_by('pk')[0]\n else:\n user = User.objects.filter(is_superuser=True).order_by('pk')[0]\n\n # Setup logging to Stdout\n formatter = logging.Formatter(f'[%(asctime)s][%(levelname)s] - %(message)s')\n stdouthandler = logging.StreamHandler(sys.stdout)\n stdouthandler.setLevel(logging.DEBUG)\n stdouthandler.setFormatter(formatter)\n\n logger = logging.getLogger(f\"netbox.scripts.{script.full_name}\")\n logger.addHandler(stdouthandler)\n\n try:\n logger.setLevel({\n 'critical': logging.CRITICAL,\n 'debug': logging.DEBUG,\n 'error': logging.ERROR,\n 'fatal': logging.FATAL,\n 'info': logging.INFO,\n 'warning': logging.WARNING,\n }[loglevel])\n except KeyError:\n raise CommandError(f\"Invalid log level: {loglevel}\")\n\n # Initialize the script form\n script = script()\n form = script.as_form(data, None)\n\n # Create the job\n job = Job.objects.create(\n object=module,\n name=script.class_name,\n user=User.objects.filter(is_superuser=True).order_by('pk')[0],\n job_id=uuid.uuid4()\n )\n\n request = NetBoxFakeRequest({\n 'META': {},\n 'POST': data,\n 'GET': {},\n 'FILES': {},\n 'user': user,\n 'path': '',\n 'id': job.job_id\n })\n\n if form.is_valid():\n job.status = JobStatusChoices.STATUS_RUNNING\n job.save()\n\n logger.info(f\"Running script (commit={commit})\")\n script.request = request\n\n # Execute the script. If commit is True, wrap it with the event_tracking context manager to ensure we process\n # change logging, webhooks, etc.\n with event_tracking(request):\n _run_script()\n else:\n logger.error('Data is not valid:')\n for field, errors in form.errors.get_json_data().items():\n for error in errors:\n logger.error(f'\\t{field}: {error.get(\"message\")}')\n job.status = JobStatusChoices.STATUS_ERRORED\n job.save()\n", "path": "netbox/extras/management/commands/runscript.py"}]} | 1,970 | 125 |
gh_patches_debug_63273 | rasdani/github-patches | git_diff | weecology__retriever-400 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't download and extract Gentry dataset
When trying to download the "Gentry Forest Transect Dataset", the retriever seems to download the data but gets stuck when it comes to extracting AVALANCH.xls.
Moreover, force quit seems to be the only way to close the program.
OS: OS X El Capitan Version 10.11.3 (15D21)
Machine: Macbook Pro Early 2015 13"
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/download_manager.py`
Content:
```
1 """This class manages dataset downloads concurrently and processes progress
2 output."""
3
4 import wx
5 from retriever.lib.download import DownloadThread
6
7
8 class DownloadManager:
9 def __init__(self, parent):
10 self.dialog = None
11 self.worker = None
12 self.queue = []
13 self.downloaded = set()
14 self.errors = set()
15 self.warnings = set()
16 self.Parent = parent
17 self.timer = wx.Timer(parent, -1)
18 self.timer.interval = 10
19 parent.Bind(wx.EVT_TIMER, self.update, self.timer)
20
21 def Download(self, script):
22 if not script in self.queue and not (self.worker and self.worker.script == script):
23 self.queue.append(script)
24 self.downloaded.add(script)
25 if script in self.errors:
26 self.errors.remove(script)
27 self.warnings.remove(script)
28 self.Parent.script_list.RefreshMe(None)
29 if not self.timer.IsRunning() and not self.worker and len(self.queue) < 2:
30 self.timer.Start(self.timer.interval)
31 return True
32 return False
33
34 def update(self, evt):
35 self.timer.Stop()
36 terminate = False
37 if self.worker:
38 script = self.worker.script
39 if self.worker.finished() and len(self.worker.output) == 0:
40 if hasattr(script, 'warnings') and script.warnings:
41 self.warnings.add(script)
42 self.Parent.SetStatusText('\n'.join(str(w) for w in script.warnings))
43 else:
44 self.Parent.SetStatusText("")
45 self.worker = None
46 self.Parent.script_list.RefreshMe(None)
47 self.timer.Start(self.timer.interval)
48 else:
49 self.worker.output_lock.acquire()
50 while len(self.worker.output) > 0 and not terminate:
51 if "Error:" in self.worker.output[0] and script in self.downloaded:
52 self.downloaded.remove(script)
53 self.errors.add(script)
54 if self.write(self.worker) == False:
55 terminate = True
56 self.worker.output = self.worker.output[1:]
57 #self.gauge.SetValue(100 * ((self.worker.scriptnum) /
58 # (self.worker.progress_max + 1.0)))
59 self.worker.output_lock.release()
60 if terminate:
61 self.Parent.Quit(None)
62 else:
63 self.timer.Start(self.timer.interval)
64 elif self.queue:
65 script = self.queue[0]
66 self.queue = self.queue[1:]
67 self.worker = DownloadThread(self.Parent.engine, script)
68 self.worker.parent = self
69 self.worker.start()
70 self.timer.Start(10)
71
72 def flush(self):
73 pass
74
75 def write(self, worker):
76 s = worker.output[0]
77
78 if '\b' in s:
79 s = s.replace('\b', '')
80 if not self.dialog:
81 wx.GetApp().Yield()
82 self.dialog = wx.ProgressDialog("Download Progress",
83 "Downloading datasets . . .\n"
84 + " " * len(s),
85 maximum=1000,
86 parent=None,
87 style=wx.PD_SMOOTH
88 | wx.DIALOG_NO_PARENT
89 | wx.PD_CAN_ABORT
90 | wx.PD_AUTO_HIDE
91 | wx.PD_REMAINING_TIME
92 )
93 def progress(s):
94 if ' / ' in s:
95 s = s.split(' / ')
96 total = float(s[1])
97 current = float(s[0].split(': ')[1])
98 progress = int((current / total) * 1000)
99 return (progress if progress > 1 else 1)
100 else:
101 return None
102
103 current_progress = progress(s)
104 if current_progress:
105 (keepgoing, skip) = self.dialog.Update(current_progress, s)
106 else:
107 (keepgoing, skip) = self.dialog.Pulse(s)
108
109 if not keepgoing:
110 return False
111 else:
112 if self.dialog:
113 self.dialog.Update(1000, "")
114 self.dialog.Destroy()
115 self.dialog = None
116
117 if '...' in s:
118 self.Parent.SetStatusText(s)
119 else:
120 self.Parent.script_list.SetStatus(worker.script.name, s)
121
122 wx.GetApp().Yield()
123 return True
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/download_manager.py b/app/download_manager.py
--- a/app/download_manager.py
+++ b/app/download_manager.py
@@ -102,8 +102,9 @@
current_progress = progress(s)
if current_progress:
- (keepgoing, skip) = self.dialog.Update(current_progress, s)
- else:
+ # download progress remaining-time disabled. causes bottle neck on Gentry ref: #396.
+ # (keepgoing, skip) = self.dialog.Update(current_progress, s)
+ # else:
(keepgoing, skip) = self.dialog.Pulse(s)
if not keepgoing:
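
The fix above stops calling `Update(current_progress, s)`, whose remaining-time estimation appears to be the bottleneck on the large Gentry download, and relies on `Pulse` alone. A minimal sketch of that pattern in plain wxPython is shown below; the loop, sleep interval and label text stand in for the real `DownloadThread` output and are assumptions, not code taken from the retriever.

```python
# Minimal sketch: drive a wx.ProgressDialog with Pulse() only, mirroring the
# patched behaviour (no Update(), hence no remaining-time estimation).
# Assumes wxPython is installed; the loop below is a stand-in for real work.
import time

import wx


def main():
    app = wx.App(False)
    dialog = wx.ProgressDialog(
        "Download Progress",
        "Downloading datasets . . .",
        maximum=1000,
        style=wx.PD_SMOOTH | wx.PD_CAN_ABORT | wx.PD_AUTO_HIDE,
    )
    try:
        for step in range(200):
            time.sleep(0.01)  # placeholder for download/extraction work
            # Pulse() just animates the bar and reports whether the user
            # pressed "Cancel"; it never tries to estimate time remaining.
            keepgoing, _skip = dialog.Pulse(f"Downloading: {step} / 200")
            if not keepgoing:
                break
    finally:
        dialog.Destroy()
    app.Destroy()


if __name__ == "__main__":
    main()
```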
| {"golden_diff": "diff --git a/app/download_manager.py b/app/download_manager.py\n--- a/app/download_manager.py\n+++ b/app/download_manager.py\n@@ -102,8 +102,9 @@\n \n current_progress = progress(s)\n if current_progress:\n- (keepgoing, skip) = self.dialog.Update(current_progress, s)\n- else:\n+ # download progress remaining-time disabled. causes bottle neck on Gentry ref: #396.\n+ # (keepgoing, skip) = self.dialog.Update(current_progress, s)\n+ # else:\n (keepgoing, skip) = self.dialog.Pulse(s)\n \n if not keepgoing:\n", "issue": "Can't download and extract Gentry dataset\nIf trying to download \"Gentry Forest Transect Dataset\" the retriever seems to download the data, but gets stuck when it comes in extracting AVALANCH.xls\nMoreover force quit seems the only way to close the program. \nOS: OS X El Capitan Version 10.11.3 (15D21)\nMachine: Macbook Pro Early 2015 13\"\n\n", "before_files": [{"content": "\"\"\"This class manages dataset downloads concurrently and processes progress\noutput.\"\"\"\n\nimport wx\nfrom retriever.lib.download import DownloadThread\n\n\nclass DownloadManager:\n def __init__(self, parent):\n self.dialog = None\n self.worker = None\n self.queue = []\n self.downloaded = set()\n self.errors = set()\n self.warnings = set()\n self.Parent = parent\n self.timer = wx.Timer(parent, -1)\n self.timer.interval = 10\n parent.Bind(wx.EVT_TIMER, self.update, self.timer)\n\n def Download(self, script):\n if not script in self.queue and not (self.worker and self.worker.script == script):\n self.queue.append(script)\n self.downloaded.add(script)\n if script in self.errors:\n self.errors.remove(script)\n self.warnings.remove(script)\n self.Parent.script_list.RefreshMe(None)\n if not self.timer.IsRunning() and not self.worker and len(self.queue) < 2:\n self.timer.Start(self.timer.interval)\n return True\n return False\n\n def update(self, evt):\n self.timer.Stop()\n terminate = False\n if self.worker:\n script = self.worker.script\n if self.worker.finished() and len(self.worker.output) == 0:\n if hasattr(script, 'warnings') and script.warnings:\n self.warnings.add(script)\n self.Parent.SetStatusText('\\n'.join(str(w) for w in script.warnings))\n else:\n self.Parent.SetStatusText(\"\")\n self.worker = None\n self.Parent.script_list.RefreshMe(None)\n self.timer.Start(self.timer.interval)\n else:\n self.worker.output_lock.acquire()\n while len(self.worker.output) > 0 and not terminate:\n if \"Error:\" in self.worker.output[0] and script in self.downloaded:\n self.downloaded.remove(script)\n self.errors.add(script)\n if self.write(self.worker) == False:\n terminate = True\n self.worker.output = self.worker.output[1:]\n #self.gauge.SetValue(100 * ((self.worker.scriptnum) /\n # (self.worker.progress_max + 1.0)))\n self.worker.output_lock.release()\n if terminate:\n self.Parent.Quit(None)\n else:\n self.timer.Start(self.timer.interval)\n elif self.queue:\n script = self.queue[0]\n self.queue = self.queue[1:]\n self.worker = DownloadThread(self.Parent.engine, script)\n self.worker.parent = self\n self.worker.start()\n self.timer.Start(10)\n\n def flush(self):\n pass\n\n def write(self, worker):\n s = worker.output[0]\n\n if '\\b' in s:\n s = s.replace('\\b', '')\n if not self.dialog:\n wx.GetApp().Yield()\n self.dialog = wx.ProgressDialog(\"Download Progress\",\n \"Downloading datasets . . 
.\\n\"\n + \" \" * len(s),\n maximum=1000,\n parent=None,\n style=wx.PD_SMOOTH\n | wx.DIALOG_NO_PARENT\n | wx.PD_CAN_ABORT\n | wx.PD_AUTO_HIDE\n | wx.PD_REMAINING_TIME\n )\n def progress(s):\n if ' / ' in s:\n s = s.split(' / ')\n total = float(s[1])\n current = float(s[0].split(': ')[1])\n progress = int((current / total) * 1000)\n return (progress if progress > 1 else 1)\n else:\n return None\n\n current_progress = progress(s)\n if current_progress:\n (keepgoing, skip) = self.dialog.Update(current_progress, s)\n else:\n (keepgoing, skip) = self.dialog.Pulse(s)\n\n if not keepgoing:\n return False\n else:\n if self.dialog:\n self.dialog.Update(1000, \"\")\n self.dialog.Destroy()\n self.dialog = None\n\n if '...' in s:\n self.Parent.SetStatusText(s)\n else:\n self.Parent.script_list.SetStatus(worker.script.name, s)\n\n wx.GetApp().Yield()\n return True\n", "path": "app/download_manager.py"}], "after_files": [{"content": "\"\"\"This class manages dataset downloads concurrently and processes progress\noutput.\"\"\"\n\nimport wx\nfrom retriever.lib.download import DownloadThread\n\n\nclass DownloadManager:\n def __init__(self, parent):\n self.dialog = None\n self.worker = None\n self.queue = []\n self.downloaded = set()\n self.errors = set()\n self.warnings = set()\n self.Parent = parent\n self.timer = wx.Timer(parent, -1)\n self.timer.interval = 10\n parent.Bind(wx.EVT_TIMER, self.update, self.timer)\n\n def Download(self, script):\n if not script in self.queue and not (self.worker and self.worker.script == script):\n self.queue.append(script)\n self.downloaded.add(script)\n if script in self.errors:\n self.errors.remove(script)\n self.warnings.remove(script)\n self.Parent.script_list.RefreshMe(None)\n if not self.timer.IsRunning() and not self.worker and len(self.queue) < 2:\n self.timer.Start(self.timer.interval)\n return True\n return False\n\n def update(self, evt):\n self.timer.Stop()\n terminate = False\n if self.worker:\n script = self.worker.script\n if self.worker.finished() and len(self.worker.output) == 0:\n if hasattr(script, 'warnings') and script.warnings:\n self.warnings.add(script)\n self.Parent.SetStatusText('\\n'.join(str(w) for w in script.warnings))\n else:\n self.Parent.SetStatusText(\"\")\n self.worker = None\n self.Parent.script_list.RefreshMe(None)\n self.timer.Start(self.timer.interval)\n else:\n self.worker.output_lock.acquire()\n while len(self.worker.output) > 0 and not terminate:\n if \"Error:\" in self.worker.output[0] and script in self.downloaded:\n self.downloaded.remove(script)\n self.errors.add(script)\n if self.write(self.worker) == False:\n terminate = True\n self.worker.output = self.worker.output[1:]\n #self.gauge.SetValue(100 * ((self.worker.scriptnum) /\n # (self.worker.progress_max + 1.0)))\n self.worker.output_lock.release()\n if terminate:\n self.Parent.Quit(None)\n else:\n self.timer.Start(self.timer.interval)\n elif self.queue:\n script = self.queue[0]\n self.queue = self.queue[1:]\n self.worker = DownloadThread(self.Parent.engine, script)\n self.worker.parent = self\n self.worker.start()\n self.timer.Start(10)\n\n def flush(self):\n pass\n\n def write(self, worker):\n s = worker.output[0]\n\n if '\\b' in s:\n s = s.replace('\\b', '')\n if not self.dialog:\n wx.GetApp().Yield()\n self.dialog = wx.ProgressDialog(\"Download Progress\",\n \"Downloading datasets . . 
.\\n\"\n + \" \" * len(s),\n maximum=1000,\n parent=None,\n style=wx.PD_SMOOTH\n | wx.DIALOG_NO_PARENT\n | wx.PD_CAN_ABORT\n | wx.PD_AUTO_HIDE\n | wx.PD_REMAINING_TIME\n )\n def progress(s):\n if ' / ' in s:\n s = s.split(' / ')\n total = float(s[1])\n current = float(s[0].split(': ')[1])\n progress = int((current / total) * 1000)\n return (progress if progress > 1 else 1)\n else:\n return None\n\n current_progress = progress(s)\n if current_progress:\n # download progress remaining-time disabled. causes bottle neck on Gentry ref: #396.\n # (keepgoing, skip) = self.dialog.Update(current_progress, s)\n # else:\n (keepgoing, skip) = self.dialog.Pulse(s)\n\n if not keepgoing:\n return False\n else:\n if self.dialog:\n self.dialog.Update(1000, \"\")\n self.dialog.Destroy()\n self.dialog = None\n\n if '...' in s:\n self.Parent.SetStatusText(s)\n else:\n self.Parent.script_list.SetStatus(worker.script.name, s)\n\n wx.GetApp().Yield()\n return True\n", "path": "app/download_manager.py"}]} | 1,507 | 143 |
gh_patches_debug_30074 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-266 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate schema creation
**Describe the bug**
We are currently able to create a new schema with an existing schema name, creating duplicates in our mathesar_schema table.
**Expected behavior**
* Schema names should be unique per database in the mathesar_schema table.
* If schema creation is attempted with the same name as an existing schema, a 400 should be returned with a proper error message.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/views/api.py`
Content:
```
1 import logging
2 from rest_framework import status, viewsets
3 from rest_framework.exceptions import NotFound, ValidationError
4 from rest_framework.mixins import ListModelMixin, RetrieveModelMixin, CreateModelMixin
5 from rest_framework.response import Response
6 from django.core.cache import cache
7 from django_filters import rest_framework as filters
8
9
10 from mathesar.database.utils import get_non_default_database_keys
11 from mathesar.models import Table, Schema, DataFile
12 from mathesar.pagination import DefaultLimitOffsetPagination, TableLimitOffsetPagination
13 from mathesar.serializers import TableSerializer, SchemaSerializer, RecordSerializer, DataFileSerializer
14 from mathesar.utils.schemas import create_schema_and_object, reflect_schemas_from_database
15 from mathesar.utils.tables import reflect_tables_from_schema
16 from mathesar.utils.api import create_table_from_datafile, create_datafile
17 from mathesar.filters import SchemaFilter, TableFilter
18
19 logger = logging.getLogger(__name__)
20
21 DB_REFLECTION_KEY = 'database_reflected_recently'
22 DB_REFLECTION_INTERVAL = 60 * 5 # we reflect DB changes every 5 minutes
23
24
25 def reflect_db_objects():
26 if not cache.get(DB_REFLECTION_KEY):
27 for database_key in get_non_default_database_keys():
28 reflect_schemas_from_database(database_key)
29 for schema in Schema.objects.all():
30 reflect_tables_from_schema(schema)
31 cache.set(DB_REFLECTION_KEY, True, DB_REFLECTION_INTERVAL)
32
33
34 class SchemaViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin):
35 def get_queryset(self):
36 reflect_db_objects()
37 return Schema.objects.all().order_by('-created_at')
38
39 serializer_class = SchemaSerializer
40 pagination_class = DefaultLimitOffsetPagination
41 filter_backends = (filters.DjangoFilterBackend,)
42 filterset_class = SchemaFilter
43
44 def create(self, request):
45 schema = create_schema_and_object(request.data['name'], request.data['database'])
46 serializer = SchemaSerializer(schema)
47 return Response(serializer.data, status=status.HTTP_201_CREATED)
48
49
50 class TableViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin,
51 CreateModelMixin):
52 def get_queryset(self):
53 reflect_db_objects()
54 return Table.objects.all().order_by('-created_at')
55
56 serializer_class = TableSerializer
57 pagination_class = DefaultLimitOffsetPagination
58 filter_backends = (filters.DjangoFilterBackend,)
59 filterset_class = TableFilter
60
61 def create(self, request):
62 serializer = TableSerializer(data=request.data, context={'request': request})
63 if serializer.is_valid():
64 return create_table_from_datafile(request, serializer.validated_data)
65 else:
66 raise ValidationError(serializer.errors)
67
68
69 class RecordViewSet(viewsets.ViewSet):
70 # There is no "update" method.
71 # We're not supporting PUT requests because there aren't a lot of use cases
72 # where the entire record needs to be replaced, PATCH suffices for updates.
73 queryset = Table.objects.all().order_by('-created_at')
74
75 def list(self, request, table_pk=None):
76 paginator = TableLimitOffsetPagination()
77 records = paginator.paginate_queryset(self.queryset, request, table_pk)
78 serializer = RecordSerializer(records, many=True)
79 return paginator.get_paginated_response(serializer.data)
80
81 def retrieve(self, request, pk=None, table_pk=None):
82 table = Table.objects.get(id=table_pk)
83 record = table.get_record(pk)
84 if not record:
85 raise NotFound
86 serializer = RecordSerializer(record)
87 return Response(serializer.data)
88
89 def create(self, request, table_pk=None):
90 table = Table.objects.get(id=table_pk)
91 # We only support adding a single record through the API.
92 assert isinstance((request.data), dict)
93 record = table.create_record_or_records(request.data)
94 serializer = RecordSerializer(record)
95 return Response(serializer.data, status=status.HTTP_201_CREATED)
96
97 def partial_update(self, request, pk=None, table_pk=None):
98 table = Table.objects.get(id=table_pk)
99 record = table.update_record(pk, request.data)
100 serializer = RecordSerializer(record)
101 return Response(serializer.data)
102
103 def destroy(self, request, pk=None, table_pk=None):
104 table = Table.objects.get(id=table_pk)
105 table.delete_record(pk)
106 return Response(status=status.HTTP_204_NO_CONTENT)
107
108
109 class DatabaseKeyViewSet(viewsets.ViewSet):
110 def list(self, request):
111 return Response(get_non_default_database_keys())
112
113
114 class DataFileViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin, CreateModelMixin):
115 queryset = DataFile.objects.all().order_by('-created_at')
116 serializer_class = DataFileSerializer
117 pagination_class = DefaultLimitOffsetPagination
118
119 def create(self, request):
120 serializer = DataFileSerializer(data=request.data, context={'request': request})
121 if serializer.is_valid():
122 return create_datafile(request, serializer.validated_data['file'])
123 else:
124 raise ValidationError(serializer.errors)
125
```
Path: `mathesar/utils/schemas.py`
Content:
```
1 from db.schemas import (
2 create_schema, get_schema_oid_from_name, get_mathesar_schemas_with_oids
3 )
4 from mathesar.database.base import create_mathesar_engine
5 from mathesar.models import Schema
6
7
8 def create_schema_and_object(name, database):
9 engine = create_mathesar_engine(database)
10 create_schema(name, engine)
11 schema_oid = get_schema_oid_from_name(name, engine)
12 schema = Schema.objects.create(oid=schema_oid, database=database)
13 return schema
14
15
16 def reflect_schemas_from_database(database):
17 engine = create_mathesar_engine(database)
18 db_schema_oids = {
19 schema["oid"] for schema in get_mathesar_schemas_with_oids(engine)
20 }
21 schemas = [
22 Schema.objects.get_or_create(oid=oid, database=database)
23 for oid in db_schema_oids
24 ]
25 for schema in Schema.objects.all():
26 if schema.oid not in db_schema_oids:
27 schema.delete()
28 return schemas
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/utils/schemas.py b/mathesar/utils/schemas.py
--- a/mathesar/utils/schemas.py
+++ b/mathesar/utils/schemas.py
@@ -1,5 +1,8 @@
+from rest_framework.exceptions import ValidationError
+
from db.schemas import (
- create_schema, get_schema_oid_from_name, get_mathesar_schemas_with_oids
+ create_schema, get_schema_oid_from_name, get_mathesar_schemas,
+ get_mathesar_schemas_with_oids
)
from mathesar.database.base import create_mathesar_engine
from mathesar.models import Schema
@@ -7,6 +10,11 @@
def create_schema_and_object(name, database):
engine = create_mathesar_engine(database)
+
+ all_schemas = get_mathesar_schemas(engine)
+ if name in all_schemas:
+ raise ValidationError({"name": "Schema name is not unique"})
+
create_schema(name, engine)
schema_oid = get_schema_oid_from_name(name, engine)
schema = Schema.objects.create(oid=schema_oid, database=database)
diff --git a/mathesar/views/api.py b/mathesar/views/api.py
--- a/mathesar/views/api.py
+++ b/mathesar/views/api.py
@@ -42,9 +42,13 @@
filterset_class = SchemaFilter
def create(self, request):
- schema = create_schema_and_object(request.data['name'], request.data['database'])
- serializer = SchemaSerializer(schema)
- return Response(serializer.data, status=status.HTTP_201_CREATED)
+ serializer = SchemaSerializer(data=request.data)
+ if serializer.is_valid():
+ schema = create_schema_and_object(request.data['name'], request.data['database'])
+ serializer = SchemaSerializer(schema)
+ return Response(serializer.data, status=status.HTTP_201_CREATED)
+ else:
+ raise ValidationError(serializer.errors)
class TableViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin,
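
To verify the behaviour the issue asks for, a regression test can post the same schema payload twice and expect the second request to be rejected with a 400. The sketch below uses Django REST framework's test client; the URL, payload keys and database value are assumptions based on the serializer and viewset shown above, not the project's actual routing.

```python
# Hypothetical regression test: creating a schema with an existing name on the
# same database should fail with HTTP 400 instead of creating a duplicate.
from rest_framework import status
from rest_framework.test import APITestCase


class DuplicateSchemaNameTest(APITestCase):
    def test_second_create_with_same_name_returns_400(self):
        payload = {"name": "reports", "database": "mathesar_tables"}  # assumed values
        url = "/api/v0/schemas/"  # assumed route

        first = self.client.post(url, payload)
        self.assertEqual(first.status_code, status.HTTP_201_CREATED)

        second = self.client.post(url, payload)
        self.assertEqual(second.status_code, status.HTTP_400_BAD_REQUEST)
        # The error should point at the offending field, matching the fix above.
        self.assertIn("name", second.json())
```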
| {"golden_diff": "diff --git a/mathesar/utils/schemas.py b/mathesar/utils/schemas.py\n--- a/mathesar/utils/schemas.py\n+++ b/mathesar/utils/schemas.py\n@@ -1,5 +1,8 @@\n+from rest_framework.exceptions import ValidationError\n+\n from db.schemas import (\n- create_schema, get_schema_oid_from_name, get_mathesar_schemas_with_oids\n+ create_schema, get_schema_oid_from_name, get_mathesar_schemas,\n+ get_mathesar_schemas_with_oids\n )\n from mathesar.database.base import create_mathesar_engine\n from mathesar.models import Schema\n@@ -7,6 +10,11 @@\n \n def create_schema_and_object(name, database):\n engine = create_mathesar_engine(database)\n+\n+ all_schemas = get_mathesar_schemas(engine)\n+ if name in all_schemas:\n+ raise ValidationError({\"name\": \"Schema name is not unique\"})\n+\n create_schema(name, engine)\n schema_oid = get_schema_oid_from_name(name, engine)\n schema = Schema.objects.create(oid=schema_oid, database=database)\ndiff --git a/mathesar/views/api.py b/mathesar/views/api.py\n--- a/mathesar/views/api.py\n+++ b/mathesar/views/api.py\n@@ -42,9 +42,13 @@\n filterset_class = SchemaFilter\n \n def create(self, request):\n- schema = create_schema_and_object(request.data['name'], request.data['database'])\n- serializer = SchemaSerializer(schema)\n- return Response(serializer.data, status=status.HTTP_201_CREATED)\n+ serializer = SchemaSerializer(data=request.data)\n+ if serializer.is_valid():\n+ schema = create_schema_and_object(request.data['name'], request.data['database'])\n+ serializer = SchemaSerializer(schema)\n+ return Response(serializer.data, status=status.HTTP_201_CREATED)\n+ else:\n+ raise ValidationError(serializer.errors)\n \n \n class TableViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin,\n", "issue": "Duplicate schema creation\n**Describe the bug**\r\nWe are currently able to create a new schema with an existing schema name, creating duplicates on our mathesar_schema table.\r\n\r\n**Expected behavior**\r\n* Schema name should be unique per db in mathesar_schema table.\r\n* If a new schema creation is attempted with the same name as an existing schema, a 400 should be thrown with proper error message.\n", "before_files": [{"content": "import logging\nfrom rest_framework import status, viewsets\nfrom rest_framework.exceptions import NotFound, ValidationError\nfrom rest_framework.mixins import ListModelMixin, RetrieveModelMixin, CreateModelMixin\nfrom rest_framework.response import Response\nfrom django.core.cache import cache\nfrom django_filters import rest_framework as filters\n\n\nfrom mathesar.database.utils import get_non_default_database_keys\nfrom mathesar.models import Table, Schema, DataFile\nfrom mathesar.pagination import DefaultLimitOffsetPagination, TableLimitOffsetPagination\nfrom mathesar.serializers import TableSerializer, SchemaSerializer, RecordSerializer, DataFileSerializer\nfrom mathesar.utils.schemas import create_schema_and_object, reflect_schemas_from_database\nfrom mathesar.utils.tables import reflect_tables_from_schema\nfrom mathesar.utils.api import create_table_from_datafile, create_datafile\nfrom mathesar.filters import SchemaFilter, TableFilter\n\nlogger = logging.getLogger(__name__)\n\nDB_REFLECTION_KEY = 'database_reflected_recently'\nDB_REFLECTION_INTERVAL = 60 * 5 # we reflect DB changes every 5 minutes\n\n\ndef reflect_db_objects():\n if not cache.get(DB_REFLECTION_KEY):\n for database_key in get_non_default_database_keys():\n reflect_schemas_from_database(database_key)\n for schema in Schema.objects.all():\n 
reflect_tables_from_schema(schema)\n cache.set(DB_REFLECTION_KEY, True, DB_REFLECTION_INTERVAL)\n\n\nclass SchemaViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin):\n def get_queryset(self):\n reflect_db_objects()\n return Schema.objects.all().order_by('-created_at')\n\n serializer_class = SchemaSerializer\n pagination_class = DefaultLimitOffsetPagination\n filter_backends = (filters.DjangoFilterBackend,)\n filterset_class = SchemaFilter\n\n def create(self, request):\n schema = create_schema_and_object(request.data['name'], request.data['database'])\n serializer = SchemaSerializer(schema)\n return Response(serializer.data, status=status.HTTP_201_CREATED)\n\n\nclass TableViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin,\n CreateModelMixin):\n def get_queryset(self):\n reflect_db_objects()\n return Table.objects.all().order_by('-created_at')\n\n serializer_class = TableSerializer\n pagination_class = DefaultLimitOffsetPagination\n filter_backends = (filters.DjangoFilterBackend,)\n filterset_class = TableFilter\n\n def create(self, request):\n serializer = TableSerializer(data=request.data, context={'request': request})\n if serializer.is_valid():\n return create_table_from_datafile(request, serializer.validated_data)\n else:\n raise ValidationError(serializer.errors)\n\n\nclass RecordViewSet(viewsets.ViewSet):\n # There is no \"update\" method.\n # We're not supporting PUT requests because there aren't a lot of use cases\n # where the entire record needs to be replaced, PATCH suffices for updates.\n queryset = Table.objects.all().order_by('-created_at')\n\n def list(self, request, table_pk=None):\n paginator = TableLimitOffsetPagination()\n records = paginator.paginate_queryset(self.queryset, request, table_pk)\n serializer = RecordSerializer(records, many=True)\n return paginator.get_paginated_response(serializer.data)\n\n def retrieve(self, request, pk=None, table_pk=None):\n table = Table.objects.get(id=table_pk)\n record = table.get_record(pk)\n if not record:\n raise NotFound\n serializer = RecordSerializer(record)\n return Response(serializer.data)\n\n def create(self, request, table_pk=None):\n table = Table.objects.get(id=table_pk)\n # We only support adding a single record through the API.\n assert isinstance((request.data), dict)\n record = table.create_record_or_records(request.data)\n serializer = RecordSerializer(record)\n return Response(serializer.data, status=status.HTTP_201_CREATED)\n\n def partial_update(self, request, pk=None, table_pk=None):\n table = Table.objects.get(id=table_pk)\n record = table.update_record(pk, request.data)\n serializer = RecordSerializer(record)\n return Response(serializer.data)\n\n def destroy(self, request, pk=None, table_pk=None):\n table = Table.objects.get(id=table_pk)\n table.delete_record(pk)\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass DatabaseKeyViewSet(viewsets.ViewSet):\n def list(self, request):\n return Response(get_non_default_database_keys())\n\n\nclass DataFileViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin, CreateModelMixin):\n queryset = DataFile.objects.all().order_by('-created_at')\n serializer_class = DataFileSerializer\n pagination_class = DefaultLimitOffsetPagination\n\n def create(self, request):\n serializer = DataFileSerializer(data=request.data, context={'request': request})\n if serializer.is_valid():\n return create_datafile(request, serializer.validated_data['file'])\n else:\n raise ValidationError(serializer.errors)\n", "path": 
"mathesar/views/api.py"}, {"content": "from db.schemas import (\n create_schema, get_schema_oid_from_name, get_mathesar_schemas_with_oids\n)\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.models import Schema\n\n\ndef create_schema_and_object(name, database):\n engine = create_mathesar_engine(database)\n create_schema(name, engine)\n schema_oid = get_schema_oid_from_name(name, engine)\n schema = Schema.objects.create(oid=schema_oid, database=database)\n return schema\n\n\ndef reflect_schemas_from_database(database):\n engine = create_mathesar_engine(database)\n db_schema_oids = {\n schema[\"oid\"] for schema in get_mathesar_schemas_with_oids(engine)\n }\n schemas = [\n Schema.objects.get_or_create(oid=oid, database=database)\n for oid in db_schema_oids\n ]\n for schema in Schema.objects.all():\n if schema.oid not in db_schema_oids:\n schema.delete()\n return schemas\n", "path": "mathesar/utils/schemas.py"}], "after_files": [{"content": "import logging\nfrom rest_framework import status, viewsets\nfrom rest_framework.exceptions import NotFound, ValidationError\nfrom rest_framework.mixins import ListModelMixin, RetrieveModelMixin, CreateModelMixin\nfrom rest_framework.response import Response\nfrom django.core.cache import cache\nfrom django_filters import rest_framework as filters\n\n\nfrom mathesar.database.utils import get_non_default_database_keys\nfrom mathesar.models import Table, Schema, DataFile\nfrom mathesar.pagination import DefaultLimitOffsetPagination, TableLimitOffsetPagination\nfrom mathesar.serializers import TableSerializer, SchemaSerializer, RecordSerializer, DataFileSerializer\nfrom mathesar.utils.schemas import create_schema_and_object, reflect_schemas_from_database\nfrom mathesar.utils.tables import reflect_tables_from_schema\nfrom mathesar.utils.api import create_table_from_datafile, create_datafile\nfrom mathesar.filters import SchemaFilter, TableFilter\n\nlogger = logging.getLogger(__name__)\n\nDB_REFLECTION_KEY = 'database_reflected_recently'\nDB_REFLECTION_INTERVAL = 60 * 5 # we reflect DB changes every 5 minutes\n\n\ndef reflect_db_objects():\n if not cache.get(DB_REFLECTION_KEY):\n for database_key in get_non_default_database_keys():\n reflect_schemas_from_database(database_key)\n for schema in Schema.objects.all():\n reflect_tables_from_schema(schema)\n cache.set(DB_REFLECTION_KEY, True, DB_REFLECTION_INTERVAL)\n\n\nclass SchemaViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin):\n def get_queryset(self):\n reflect_db_objects()\n return Schema.objects.all().order_by('-created_at')\n\n serializer_class = SchemaSerializer\n pagination_class = DefaultLimitOffsetPagination\n filter_backends = (filters.DjangoFilterBackend,)\n filterset_class = SchemaFilter\n\n def create(self, request):\n serializer = SchemaSerializer(data=request.data)\n if serializer.is_valid():\n schema = create_schema_and_object(request.data['name'], request.data['database'])\n serializer = SchemaSerializer(schema)\n return Response(serializer.data, status=status.HTTP_201_CREATED)\n else:\n raise ValidationError(serializer.errors)\n\n\nclass TableViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin,\n CreateModelMixin):\n def get_queryset(self):\n reflect_db_objects()\n return Table.objects.all().order_by('-created_at')\n\n serializer_class = TableSerializer\n pagination_class = DefaultLimitOffsetPagination\n filter_backends = (filters.DjangoFilterBackend,)\n filterset_class = TableFilter\n\n def create(self, request):\n serializer = 
TableSerializer(data=request.data, context={'request': request})\n if serializer.is_valid():\n return create_table_from_datafile(request, serializer.validated_data)\n else:\n raise ValidationError(serializer.errors)\n\n\nclass RecordViewSet(viewsets.ViewSet):\n # There is no \"update\" method.\n # We're not supporting PUT requests because there aren't a lot of use cases\n # where the entire record needs to be replaced, PATCH suffices for updates.\n queryset = Table.objects.all().order_by('-created_at')\n\n def list(self, request, table_pk=None):\n paginator = TableLimitOffsetPagination()\n records = paginator.paginate_queryset(self.queryset, request, table_pk)\n serializer = RecordSerializer(records, many=True)\n return paginator.get_paginated_response(serializer.data)\n\n def retrieve(self, request, pk=None, table_pk=None):\n table = Table.objects.get(id=table_pk)\n record = table.get_record(pk)\n if not record:\n raise NotFound\n serializer = RecordSerializer(record)\n return Response(serializer.data)\n\n def create(self, request, table_pk=None):\n table = Table.objects.get(id=table_pk)\n # We only support adding a single record through the API.\n assert isinstance((request.data), dict)\n record = table.create_record_or_records(request.data)\n serializer = RecordSerializer(record)\n return Response(serializer.data, status=status.HTTP_201_CREATED)\n\n def partial_update(self, request, pk=None, table_pk=None):\n table = Table.objects.get(id=table_pk)\n record = table.update_record(pk, request.data)\n serializer = RecordSerializer(record)\n return Response(serializer.data)\n\n def destroy(self, request, pk=None, table_pk=None):\n table = Table.objects.get(id=table_pk)\n table.delete_record(pk)\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass DatabaseKeyViewSet(viewsets.ViewSet):\n def list(self, request):\n return Response(get_non_default_database_keys())\n\n\nclass DataFileViewSet(viewsets.GenericViewSet, ListModelMixin, RetrieveModelMixin, CreateModelMixin):\n queryset = DataFile.objects.all().order_by('-created_at')\n serializer_class = DataFileSerializer\n pagination_class = DefaultLimitOffsetPagination\n\n def create(self, request):\n serializer = DataFileSerializer(data=request.data, context={'request': request})\n if serializer.is_valid():\n return create_datafile(request, serializer.validated_data['file'])\n else:\n raise ValidationError(serializer.errors)\n", "path": "mathesar/views/api.py"}, {"content": "from rest_framework.exceptions import ValidationError\n\nfrom db.schemas import (\n create_schema, get_schema_oid_from_name, get_mathesar_schemas,\n get_mathesar_schemas_with_oids\n)\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.models import Schema\n\n\ndef create_schema_and_object(name, database):\n engine = create_mathesar_engine(database)\n\n all_schemas = get_mathesar_schemas(engine)\n if name in all_schemas:\n raise ValidationError({\"name\": \"Schema name is not unique\"})\n\n create_schema(name, engine)\n schema_oid = get_schema_oid_from_name(name, engine)\n schema = Schema.objects.create(oid=schema_oid, database=database)\n return schema\n\n\ndef reflect_schemas_from_database(database):\n engine = create_mathesar_engine(database)\n db_schema_oids = {\n schema[\"oid\"] for schema in get_mathesar_schemas_with_oids(engine)\n }\n schemas = [\n Schema.objects.get_or_create(oid=oid, database=database)\n for oid in db_schema_oids\n ]\n for schema in Schema.objects.all():\n if schema.oid not in db_schema_oids:\n schema.delete()\n return 
schemas\n", "path": "mathesar/utils/schemas.py"}]} | 1,917 | 422 |
gh_patches_debug_137 | rasdani/github-patches | git_diff | google__flax-3089 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incompatibility with the official Flax ImageNet example with jax version >= 0.4.7
Hi,
I was testing the [official flax example](https://github.com/google/flax/tree/main/examples/imagenet/) on Colab with jax and jaxlib versions >= 0.4.7, in the Colab Pro+ environment with a V100. After installing the requirements with `pip install -r requirements.txt` and running `python main.py --workdir=./imagenet --config=configs/v100_x8.py`, the error is
```
File "/content/FlaxImageNet/main.py", line 29, in <module>
import train
File "/content/FlaxImageNet/train.py", line 30, in <module>
from flax.training import checkpoints
File "/usr/local/lib/python3.10/dist-packages/flax/training/checkpoints.py", line 34, in <module>
from jax.experimental.global_device_array import GlobalDeviceArray
ModuleNotFoundError: No module named 'jax.experimental.global_device_array'
```
According to [this StackOverflow answer](https://stackoverflow.com/questions/76191911/no-module-named-jax-experimental-global-device-array-when-running-the-official/76192120#76192120), it seems that 'jax.experimental.global_device_array' is removed.
Therefore, it would be great if one can fix the official example so that it works on newer version of jax.
Unable to import checkpoints
Provide as much information as possible. At least, this should include a description of your issue and steps to reproduce the problem. If possible also provide a summary of what steps or workarounds you have already tried.
### System information
- Flax, jax, jaxlib versions (obtained with `pip show flax jax jaxlib`): all at their latest, plus orbax
Name: flax
Version: 0.6.9
Summary: Flax: A neural network library for JAX designed for flexibility
Home-page:
Author:
Author-email: Flax team <[email protected]>
License:
Location: /home/fernanda/.local/lib/python3.8/site-packages
Requires: jax, msgpack, numpy, optax, orbax-checkpoint, PyYAML, rich, tensorstore, typing-extensions
Required-by:
---
Name: jax
Version: 0.4.8
Summary: Differentiate, compile, and transform Numpy code.
Home-page: https://github.com/google/jax
Author: JAX team
Author-email: [email protected]
License: Apache-2.0
Location: /home/fernanda/.local/lib/python3.8/site-packages
Requires: ml-dtypes, numpy, opt-einsum, scipy
Required-by: chex, diffrax, equinox, flax, optax, orbax, orbax-checkpoint, richmol
---
Name: jaxlib
Version: 0.4.7
Summary: XLA library for JAX
Home-page: https://github.com/google/jax
Author: JAX team
Author-email: [email protected]
License: Apache-2.0
Location: /home/fernanda/.local/lib/python3.8/site-packages
Requires: ml-dtypes, numpy, scipy
Required-by: chex, optax, orbax, orbax-checkpoint
---
Name: orbax
Version: 0.1.7
Summary: Orbax
Home-page:
Author:
Author-email: Orbax Authors <[email protected]>
License:
Location: /home/fernanda/.local/lib/python3.8/site-packages
Requires: absl-py, cached_property, etils, importlib_resources, jax, jaxlib, msgpack, nest_asyncio, numpy, pyyaml, tensorstore, typing_extensions
- Python version: 3.8
### Problem you have encountered:
When importing checkpoints, I get the following error:
"""
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-0eac7b685376> in <module>
11 config.update("jax_enable_x64", True)
12 from flax import serialization
---> 13 from flax.training import checkpoints
14 from jax import numpy as jnp
15 import jax
/gpfs/cfel/group/cmi/common/psi4/psi4conda/lib//python3.8/site-packages/flax/training/checkpoints.py in <module>
37 from jax import process_index
38 from jax import sharding
---> 39 from jax.experimental.global_device_array import GlobalDeviceArray
40 from jax.experimental.multihost_utils import sync_global_devices
41 import orbax.checkpoint as orbax
ModuleNotFoundError: No module named 'jax.experimental.global_device_array'
"""
I guess it is a compatibility problem between jax and flax.
### What you expected to happen:
The import should succeed as usual.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flax/version.py`
Content:
```
1 # Copyright 2023 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Current Flax version at head on Github."""
16 __version__ = "0.6.9"
17
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flax/version.py b/flax/version.py
--- a/flax/version.py
+++ b/flax/version.py
@@ -13,5 +13,5 @@
# limitations under the License.
"""Current Flax version at head on Github."""
-__version__ = "0.6.9"
+__version__ = "0.6.10"
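
The released change in this diff is only the version bump; the removal of the `GlobalDeviceArray` import from `flax/training/checkpoints.py` was presumably handled in separate commits that ship with this release. A small, self-contained check like the one below can tell a user whether their installed combination is the broken one; the version boundaries (jax >= 0.4.7, flax < 0.6.10) are inferred from the report and this diff, so treat them as assumptions rather than an official compatibility matrix.

```python
# Quick environment check for the ModuleNotFoundError described in the issue.
# Assumed boundaries: jax >= 0.4.7 dropped jax.experimental.global_device_array,
# and flax >= 0.6.10 no longer imports it. Requires the `packaging` package.
import importlib.metadata as metadata

from packaging.version import Version

jax_version = Version(metadata.version("jax"))
flax_version = Version(metadata.version("flax"))

if jax_version >= Version("0.4.7") and flax_version < Version("0.6.10"):
    print(
        f"flax {flax_version} with jax {jax_version} is the broken combination: "
        "upgrade flax to >= 0.6.10 or pin jax/jaxlib below 0.4.7."
    )
else:
    print(
        f"flax {flax_version} with jax {jax_version} should import "
        "flax.training.checkpoints without this error."
    )
```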
| {"golden_diff": "diff --git a/flax/version.py b/flax/version.py\n--- a/flax/version.py\n+++ b/flax/version.py\n@@ -13,5 +13,5 @@\n # limitations under the License.\n \n \"\"\"Current Flax version at head on Github.\"\"\"\n-__version__ = \"0.6.9\"\n+__version__ = \"0.6.10\"\n", "issue": "Imcompatibility with Flax Official ImageNet example with jax version >= 0.4.7\nHi, \r\n\r\nI was testing the [official flax example](https://github.com/google/flax/tree/main/examples/imagenet/) on Colab with jax and jaxlib version >= 0.4.7 on the colab pro+ environment with V100. After installing the requirements with `pip install -r requirements.txt` and with the following command `python main.py --workdir=./imagenet --config=configs/v100_x8.py`, the error is \r\n\r\n```\r\nFile \"/content/FlaxImageNet/main.py\", line 29, in <module>\r\nimport train\r\nFile \"/content/FlaxImageNet/train.py\", line 30, in <module>\r\nfrom flax.training import checkpoints\r\nFile \"/usr/local/lib/python3.10/dist-packages/flax/training/checkpoints.py\", line 34, \r\nin <module>\r\nfrom jax.experimental.global_device_array import GlobalDeviceArray\r\nModuleNotFoundError: No module named 'jax.experimental.global_device_array'\r\n```\r\n\r\nAccording to [this StackOverflow answer](https://stackoverflow.com/questions/76191911/no-module-named-jax-experimental-global-device-array-when-running-the-official/76192120#76192120), it seems that 'jax.experimental.global_device_array' is removed. \r\n\r\nTherefore, it would be great if one can fix the official example so that it works on newer version of jax. \nUnavailable to import checkpoints\nProvide as much information as possible. At least, this should include a description of your issue and steps to reproduce the problem. If possible also provide a summary of what steps or workarounds you have already tried.\r\n\r\n### System information\r\n- Flax, jax, jaxlib versions (obtain with `pip show flax jax jaxlib`: All to its latest, also orbitax\r\n\r\nName: flax\r\nVersion: 0.6.9\r\nSummary: Flax: A neural network library for JAX designed for flexibility\r\nHome-page: \r\nAuthor: \r\nAuthor-email: Flax team <[email protected]>\r\nLicense: \r\nLocation: /home/fernanda/.local/lib/python3.8/site-packages\r\nRequires: jax, msgpack, numpy, optax, orbax-checkpoint, PyYAML, rich, tensorstore, typing-extensions\r\nRequired-by: \r\n---\r\nName: jax\r\nVersion: 0.4.8\r\nSummary: Differentiate, compile, and transform Numpy code.\r\nHome-page: https://github.com/google/jax\r\nAuthor: JAX team\r\nAuthor-email: [email protected]\r\nLicense: Apache-2.0\r\nLocation: /home/fernanda/.local/lib/python3.8/site-packages\r\nRequires: ml-dtypes, numpy, opt-einsum, scipy\r\nRequired-by: chex, diffrax, equinox, flax, optax, orbax, orbax-checkpoint, richmol\r\n---\r\nName: jaxlib\r\nVersion: 0.4.7\r\nSummary: XLA library for JAX\r\nHome-page: https://github.com/google/jax\r\nAuthor: JAX team\r\nAuthor-email: [email protected]\r\nLicense: Apache-2.0\r\nLocation: /home/fernanda/.local/lib/python3.8/site-packages\r\nRequires: ml-dtypes, numpy, scipy\r\nRequired-by: chex, optax, orbax, orbax-checkpoint\r\n---\r\nName: orbax\r\nVersion: 0.1.7\r\nSummary: Orbax\r\nHome-page: \r\nAuthor: \r\nAuthor-email: Orbax Authors <[email protected]>\r\nLicense: \r\nLocation: /home/fernanda/.local/lib/python3.8/site-packages\r\nRequires: absl-py, cached_property, etils, importlib_resources, jax, jaxlib, msgpack, nest_asyncio, numpy, pyyaml, tensorstore, typing_extensions\r\n\r\n- Python version: 3.8\r\n\r\n\r\n### Problem you 
have encountered:\r\nWhen importing checkpoints, get the following error:\r\n \"\"\" \r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-1-0eac7b685376> in <module>\r\n 11 config.update(\"jax_enable_x64\", True)\r\n 12 from flax import serialization\r\n---> 13 from flax.training import checkpoints\r\n 14 from jax import numpy as jnp\r\n 15 import jax\r\n\r\n/gpfs/cfel/group/cmi/common/psi4/psi4conda/lib//python3.8/site-packages/flax/training/checkpoints.py in <module>\r\n 37 from jax import process_index\r\n 38 from jax import sharding\r\n---> 39 from jax.experimental.global_device_array import GlobalDeviceArray\r\n 40 from jax.experimental.multihost_utils import sync_global_devices\r\n 41 import orbax.checkpoint as orbax\r\n\r\nModuleNotFoundError: No module named 'jax.experimental.global_device_array'\r\n\r\n\"\"\"\r\n\r\nI guess it is a compatibility problem between jax and flax.\r\n\r\n### What you expected to happen:\r\n\r\nUsual importing\r\n\r\n\n", "before_files": [{"content": "# Copyright 2023 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Current Flax version at head on Github.\"\"\"\n__version__ = \"0.6.9\"\n\n", "path": "flax/version.py"}], "after_files": [{"content": "# Copyright 2023 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Current Flax version at head on Github.\"\"\"\n__version__ = \"0.6.10\"\n\n", "path": "flax/version.py"}]} | 1,558 | 82 |
gh_patches_debug_21264 | rasdani/github-patches | git_diff | inventree__InvenTree-6250 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
News Feed task doesn't work behind proxy, impacting performance
### Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
### Describe the bug*
The `update_news_feed` task attempts to fetch the RSS/Atom feed once daily. This, however, doesn't work behind a proxy server.
The result is that these tasks occupy workers all the time, and never complete.
Each worker is terminated roughly every 90 seconds due to this.
### Steps to Reproduce
1. Put the InvenTree backend on a network unable to reach `INVENTREE_NEWS_URL`
2. Trigger the task
3. Task will lead to continuous timeout termination of workers
### Expected behaviour
Task should finish with no new News entries added if URL is unreachable.
### Deployment Method
- [ ] Docker
- [X] Bare metal
### Version Information
0.12.10
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/common/tasks.py`
Content:
```
1 """Tasks (processes that get offloaded) for common app."""
2
3 import logging
4 import os
5 from datetime import datetime, timedelta
6
7 from django.conf import settings
8 from django.core.exceptions import AppRegistryNotReady
9 from django.db.utils import IntegrityError, OperationalError
10 from django.utils import timezone
11
12 import feedparser
13
14 from InvenTree.helpers_model import getModelsWithMixin
15 from InvenTree.models import InvenTreeNotesMixin
16 from InvenTree.tasks import ScheduledTask, scheduled_task
17
18 logger = logging.getLogger('inventree')
19
20
21 @scheduled_task(ScheduledTask.DAILY)
22 def delete_old_notifications():
23 """Remove old notifications from the database.
24
25 Anything older than ~3 months is removed
26 """
27 try:
28 from common.models import NotificationEntry
29 except AppRegistryNotReady: # pragma: no cover
30 logger.info(
31 "Could not perform 'delete_old_notifications' - App registry not ready"
32 )
33 return
34
35 before = timezone.now() - timedelta(days=90)
36
37 # Delete notification records before the specified date
38 NotificationEntry.objects.filter(updated__lte=before).delete()
39
40
41 @scheduled_task(ScheduledTask.DAILY)
42 def update_news_feed():
43 """Update the newsfeed."""
44 try:
45 from common.models import NewsFeedEntry
46 except AppRegistryNotReady: # pragma: no cover
47 logger.info("Could not perform 'update_news_feed' - App registry not ready")
48 return
49
50 # Fetch and parse feed
51 try:
52 d = feedparser.parse(settings.INVENTREE_NEWS_URL)
53 except Exception as entry: # pragma: no cover
54 logger.warning('update_news_feed: Error parsing the newsfeed', entry)
55 return
56
57 # Get a reference list
58 id_list = [a.feed_id for a in NewsFeedEntry.objects.all()]
59
60 # Iterate over entries
61 for entry in d.entries:
62 # Check if id already exists
63 if entry.id in id_list:
64 continue
65
66 # Create entry
67 try:
68 NewsFeedEntry.objects.create(
69 feed_id=entry.id,
70 title=entry.title,
71 link=entry.link,
72 published=entry.published,
73 author=entry.author,
74 summary=entry.summary,
75 )
76 except (IntegrityError, OperationalError):
77 # Sometimes errors-out on database start-up
78 pass
79
80 logger.info('update_news_feed: Sync done')
81
82
83 @scheduled_task(ScheduledTask.DAILY)
84 def delete_old_notes_images():
85 """Remove old notes images from the database.
86
87 Anything older than ~3 months is removed, unless it is linked to a note
88 """
89 try:
90 from common.models import NotesImage
91 except AppRegistryNotReady:
92 logger.info(
93 "Could not perform 'delete_old_notes_images' - App registry not ready"
94 )
95 return
96
97 # Remove any notes which point to non-existent image files
98 for note in NotesImage.objects.all():
99 if not os.path.exists(note.image.path):
100 logger.info('Deleting note %s - image file does not exist', note.image.path)
101 note.delete()
102
103 note_classes = getModelsWithMixin(InvenTreeNotesMixin)
104 before = datetime.now() - timedelta(days=90)
105
106 for note in NotesImage.objects.filter(date__lte=before):
107 # Find any images which are no longer referenced by a note
108
109 found = False
110
111 img = note.image.name
112
113 for model in note_classes:
114 if model.objects.filter(notes__icontains=img).exists():
115 found = True
116 break
117
118 if not found:
119 logger.info('Deleting note %s - image file not linked to a note', img)
120 note.delete()
121
122 # Finally, remove any images in the notes dir which are not linked to a note
123 notes_dir = os.path.join(settings.MEDIA_ROOT, 'notes')
124
125 try:
126 images = os.listdir(notes_dir)
127 except FileNotFoundError:
128 # Thrown if the directory does not exist
129 images = []
130
131 all_notes = NotesImage.objects.all()
132
133 for image in images:
134 found = False
135 for note in all_notes:
136 img_path = os.path.basename(note.image.path)
137 if img_path == image:
138 found = True
139 break
140
141 if not found:
142 logger.info('Deleting note %s - image file not linked to a note', image)
143 os.remove(os.path.join(notes_dir, image))
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/InvenTree/common/tasks.py b/InvenTree/common/tasks.py
--- a/InvenTree/common/tasks.py
+++ b/InvenTree/common/tasks.py
@@ -10,6 +10,7 @@
from django.utils import timezone
import feedparser
+import requests
from InvenTree.helpers_model import getModelsWithMixin
from InvenTree.models import InvenTreeNotesMixin
@@ -47,11 +48,16 @@
logger.info("Could not perform 'update_news_feed' - App registry not ready")
return
+ # News feed isn't defined, no need to continue
+ if not settings.INVENTREE_NEWS_URL or type(settings.INVENTREE_NEWS_URL) != str:
+ return
+
# Fetch and parse feed
try:
- d = feedparser.parse(settings.INVENTREE_NEWS_URL)
- except Exception as entry: # pragma: no cover
- logger.warning('update_news_feed: Error parsing the newsfeed', entry)
+ feed = requests.get(settings.INVENTREE_NEWS_URL)
+ d = feedparser.parse(feed.content)
+ except Exception: # pragma: no cover
+ logger.warning('update_news_feed: Error parsing the newsfeed')
return
# Get a reference list
| {"golden_diff": "diff --git a/InvenTree/common/tasks.py b/InvenTree/common/tasks.py\n--- a/InvenTree/common/tasks.py\n+++ b/InvenTree/common/tasks.py\n@@ -10,6 +10,7 @@\n from django.utils import timezone\n \n import feedparser\n+import requests\n \n from InvenTree.helpers_model import getModelsWithMixin\n from InvenTree.models import InvenTreeNotesMixin\n@@ -47,11 +48,16 @@\n logger.info(\"Could not perform 'update_news_feed' - App registry not ready\")\n return\n \n+ # News feed isn't defined, no need to continue\n+ if not settings.INVENTREE_NEWS_URL or type(settings.INVENTREE_NEWS_URL) != str:\n+ return\n+\n # Fetch and parse feed\n try:\n- d = feedparser.parse(settings.INVENTREE_NEWS_URL)\n- except Exception as entry: # pragma: no cover\n- logger.warning('update_news_feed: Error parsing the newsfeed', entry)\n+ feed = requests.get(settings.INVENTREE_NEWS_URL)\n+ d = feedparser.parse(feed.content)\n+ except Exception: # pragma: no cover\n+ logger.warning('update_news_feed: Error parsing the newsfeed')\n return\n \n # Get a reference list\n", "issue": "News Feed task doesn't work behind proxy, impacting performance\n### Please verify that this bug has NOT been raised before.\n\n- [X] I checked and didn't find a similar issue\n\n### Describe the bug*\n\nThe `update_news_feed` task attempts to fetch the RSS/Atom feed once daily. This, however, doesn't work behind a proxy server.\r\n\r\nThe result is that these tasks occupy workers all the time, and never complete.\r\nEach worker is terminated roughly every 90 seconds due to this.\n\n### Steps to Reproduce\n\n1. Put the InvenTree backend on a network unable to reach `INVENTREE_NEWS_URL`\r\n2. Trigger the task\r\n3. Task will lead to continuous timeout termination of workers\n\n### Expected behaviour\n\nTask should finish with no new News entries added if URL is unreachable.\n\n### Deployment Method\n\n- [ ] Docker\n- [X] Bare metal\n\n### Version Information\n\n0.12.10\n\n### Please verify if you can reproduce this bug on the demo site.\n\n- [ ] I can reproduce this bug on the demo site.\n\n### Relevant log output\n\n_No response_\n", "before_files": [{"content": "\"\"\"Tasks (processes that get offloaded) for common app.\"\"\"\n\nimport logging\nimport os\nfrom datetime import datetime, timedelta\n\nfrom django.conf import settings\nfrom django.core.exceptions import AppRegistryNotReady\nfrom django.db.utils import IntegrityError, OperationalError\nfrom django.utils import timezone\n\nimport feedparser\n\nfrom InvenTree.helpers_model import getModelsWithMixin\nfrom InvenTree.models import InvenTreeNotesMixin\nfrom InvenTree.tasks import ScheduledTask, scheduled_task\n\nlogger = logging.getLogger('inventree')\n\n\n@scheduled_task(ScheduledTask.DAILY)\ndef delete_old_notifications():\n \"\"\"Remove old notifications from the database.\n\n Anything older than ~3 months is removed\n \"\"\"\n try:\n from common.models import NotificationEntry\n except AppRegistryNotReady: # pragma: no cover\n logger.info(\n \"Could not perform 'delete_old_notifications' - App registry not ready\"\n )\n return\n\n before = timezone.now() - timedelta(days=90)\n\n # Delete notification records before the specified date\n NotificationEntry.objects.filter(updated__lte=before).delete()\n\n\n@scheduled_task(ScheduledTask.DAILY)\ndef update_news_feed():\n \"\"\"Update the newsfeed.\"\"\"\n try:\n from common.models import NewsFeedEntry\n except AppRegistryNotReady: # pragma: no cover\n logger.info(\"Could not perform 'update_news_feed' - App registry not ready\")\n 
return\n\n # Fetch and parse feed\n try:\n d = feedparser.parse(settings.INVENTREE_NEWS_URL)\n except Exception as entry: # pragma: no cover\n logger.warning('update_news_feed: Error parsing the newsfeed', entry)\n return\n\n # Get a reference list\n id_list = [a.feed_id for a in NewsFeedEntry.objects.all()]\n\n # Iterate over entries\n for entry in d.entries:\n # Check if id already exists\n if entry.id in id_list:\n continue\n\n # Create entry\n try:\n NewsFeedEntry.objects.create(\n feed_id=entry.id,\n title=entry.title,\n link=entry.link,\n published=entry.published,\n author=entry.author,\n summary=entry.summary,\n )\n except (IntegrityError, OperationalError):\n # Sometimes errors-out on database start-up\n pass\n\n logger.info('update_news_feed: Sync done')\n\n\n@scheduled_task(ScheduledTask.DAILY)\ndef delete_old_notes_images():\n \"\"\"Remove old notes images from the database.\n\n Anything older than ~3 months is removed, unless it is linked to a note\n \"\"\"\n try:\n from common.models import NotesImage\n except AppRegistryNotReady:\n logger.info(\n \"Could not perform 'delete_old_notes_images' - App registry not ready\"\n )\n return\n\n # Remove any notes which point to non-existent image files\n for note in NotesImage.objects.all():\n if not os.path.exists(note.image.path):\n logger.info('Deleting note %s - image file does not exist', note.image.path)\n note.delete()\n\n note_classes = getModelsWithMixin(InvenTreeNotesMixin)\n before = datetime.now() - timedelta(days=90)\n\n for note in NotesImage.objects.filter(date__lte=before):\n # Find any images which are no longer referenced by a note\n\n found = False\n\n img = note.image.name\n\n for model in note_classes:\n if model.objects.filter(notes__icontains=img).exists():\n found = True\n break\n\n if not found:\n logger.info('Deleting note %s - image file not linked to a note', img)\n note.delete()\n\n # Finally, remove any images in the notes dir which are not linked to a note\n notes_dir = os.path.join(settings.MEDIA_ROOT, 'notes')\n\n try:\n images = os.listdir(notes_dir)\n except FileNotFoundError:\n # Thrown if the directory does not exist\n images = []\n\n all_notes = NotesImage.objects.all()\n\n for image in images:\n found = False\n for note in all_notes:\n img_path = os.path.basename(note.image.path)\n if img_path == image:\n found = True\n break\n\n if not found:\n logger.info('Deleting note %s - image file not linked to a note', image)\n os.remove(os.path.join(notes_dir, image))\n", "path": "InvenTree/common/tasks.py"}], "after_files": [{"content": "\"\"\"Tasks (processes that get offloaded) for common app.\"\"\"\n\nimport logging\nimport os\nfrom datetime import datetime, timedelta\n\nfrom django.conf import settings\nfrom django.core.exceptions import AppRegistryNotReady\nfrom django.db.utils import IntegrityError, OperationalError\nfrom django.utils import timezone\n\nimport feedparser\nimport requests\n\nfrom InvenTree.helpers_model import getModelsWithMixin\nfrom InvenTree.models import InvenTreeNotesMixin\nfrom InvenTree.tasks import ScheduledTask, scheduled_task\n\nlogger = logging.getLogger('inventree')\n\n\n@scheduled_task(ScheduledTask.DAILY)\ndef delete_old_notifications():\n \"\"\"Remove old notifications from the database.\n\n Anything older than ~3 months is removed\n \"\"\"\n try:\n from common.models import NotificationEntry\n except AppRegistryNotReady: # pragma: no cover\n logger.info(\n \"Could not perform 'delete_old_notifications' - App registry not ready\"\n )\n return\n\n before = 
timezone.now() - timedelta(days=90)\n\n # Delete notification records before the specified date\n NotificationEntry.objects.filter(updated__lte=before).delete()\n\n\n@scheduled_task(ScheduledTask.DAILY)\ndef update_news_feed():\n \"\"\"Update the newsfeed.\"\"\"\n try:\n from common.models import NewsFeedEntry\n except AppRegistryNotReady: # pragma: no cover\n logger.info(\"Could not perform 'update_news_feed' - App registry not ready\")\n return\n\n # News feed isn't defined, no need to continue\n if not settings.INVENTREE_NEWS_URL or type(settings.INVENTREE_NEWS_URL) != str:\n return\n\n # Fetch and parse feed\n try:\n feed = requests.get(settings.INVENTREE_NEWS_URL)\n d = feedparser.parse(feed.content)\n except Exception: # pragma: no cover\n logger.warning('update_news_feed: Error parsing the newsfeed')\n return\n\n # Get a reference list\n id_list = [a.feed_id for a in NewsFeedEntry.objects.all()]\n\n # Iterate over entries\n for entry in d.entries:\n # Check if id already exists\n if entry.id in id_list:\n continue\n\n # Create entry\n try:\n NewsFeedEntry.objects.create(\n feed_id=entry.id,\n title=entry.title,\n link=entry.link,\n published=entry.published,\n author=entry.author,\n summary=entry.summary,\n )\n except (IntegrityError, OperationalError):\n # Sometimes errors-out on database start-up\n pass\n\n logger.info('update_news_feed: Sync done')\n\n\n@scheduled_task(ScheduledTask.DAILY)\ndef delete_old_notes_images():\n \"\"\"Remove old notes images from the database.\n\n Anything older than ~3 months is removed, unless it is linked to a note\n \"\"\"\n try:\n from common.models import NotesImage\n except AppRegistryNotReady:\n logger.info(\n \"Could not perform 'delete_old_notes_images' - App registry not ready\"\n )\n return\n\n # Remove any notes which point to non-existent image files\n for note in NotesImage.objects.all():\n if not os.path.exists(note.image.path):\n logger.info('Deleting note %s - image file does not exist', note.image.path)\n note.delete()\n\n note_classes = getModelsWithMixin(InvenTreeNotesMixin)\n before = datetime.now() - timedelta(days=90)\n\n for note in NotesImage.objects.filter(date__lte=before):\n # Find any images which are no longer referenced by a note\n\n found = False\n\n img = note.image.name\n\n for model in note_classes:\n if model.objects.filter(notes__icontains=img).exists():\n found = True\n break\n\n if not found:\n logger.info('Deleting note %s - image file not linked to a note', img)\n note.delete()\n\n # Finally, remove any images in the notes dir which are not linked to a note\n notes_dir = os.path.join(settings.MEDIA_ROOT, 'notes')\n\n try:\n images = os.listdir(notes_dir)\n except FileNotFoundError:\n # Thrown if the directory does not exist\n images = []\n\n all_notes = NotesImage.objects.all()\n\n for image in images:\n found = False\n for note in all_notes:\n img_path = os.path.basename(note.image.path)\n if img_path == image:\n found = True\n break\n\n if not found:\n logger.info('Deleting note %s - image file not linked to a note', image)\n os.remove(os.path.join(notes_dir, image))\n", "path": "InvenTree/common/tasks.py"}]} | 1,762 | 284 |
gh_patches_debug_55968 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2740 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check Azure Front Door WAF enabled fails even when a WAF is correctly assigned
**Describe the issue**
[`CKV_AZURE_121`](https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py) fails despite a Web Application Firewall policy being correctly applied.
WAF policies are applied by specifying a value for `web_application_firewall_policy_link_id` inside a `frontend_endpoint` block within the `azurerm_frontdoor` resource itself.
The [documentation](https://docs.bridgecrew.io/docs/ensure-that-azure-front-door-enables-waf) seems to expect that the `web_application_firewall_policy_link_id` attribute is defined in the resource block itself, rather than in a sub-block (`frontend_endpoint`).
- [`azurerm_frontdoor` resource documentation reference](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/frontdoor#web_application_firewall_policy_link_id)
**Examples**
```terraform
resource "azurerm_frontdoor" "test" {
name = "test-front-door"
resource_group_name = var.resource_group_name
enforce_backend_pools_certificate_name_check = false
tags = var.tags
frontend_endpoint {
name = "DefaultFrontend"
host_name = "test-front-door.azurefd.net"
web_application_firewall_policy_link_id = azurerm_frontdoor_firewall_policy.test.id
}
# ...
```
**Version (please complete the following information):**
- Checkov Version: 2.0.930
**Additional context**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py`
Content:
```
1 from checkov.common.models.consts import ANY_VALUE
2 from checkov.common.models.enums import CheckCategories
3 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
4
5
6 class AzureFrontDoorEnablesWAF(BaseResourceValueCheck):
7 def __init__(self):
8 name = "Ensure that Azure Front Door enables WAF"
9 id = "CKV_AZURE_121"
10 supported_resources = ['azurerm_frontdoor']
11 categories = [CheckCategories.NETWORKING]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def get_inspected_key(self):
15 return "web_application_firewall_policy_link_id"
16
17 def get_expected_value(self):
18 return ANY_VALUE
19
20
21 check = AzureFrontDoorEnablesWAF()
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py b/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py
--- a/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py
+++ b/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py
@@ -12,7 +12,7 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def get_inspected_key(self):
- return "web_application_firewall_policy_link_id"
+ return "frontend_endpoint/[0]/web_application_firewall_policy_link_id"
def get_expected_value(self):
return ANY_VALUE
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py b/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py\n--- a/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py\n+++ b/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py\n@@ -12,7 +12,7 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def get_inspected_key(self):\n- return \"web_application_firewall_policy_link_id\"\n+ return \"frontend_endpoint/[0]/web_application_firewall_policy_link_id\"\n \n def get_expected_value(self):\n return ANY_VALUE\n", "issue": "Check Azure Front Door WAF enabled fails even when a WAF is correctly assigned\n**Describe the issue**\r\n[`CKV_AZURE_121`](https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py) fails despite a Web Application Firewall policy being correctly applied. \r\n\r\nWAF policies are applied by specifying a value for `web_application_firewall_policy_link_id` inside a `frontend_endpoint` block within the `azurerm_frontdoor` resource itself.\r\n\r\nThe [documentation](https://docs.bridgecrew.io/docs/ensure-that-azure-front-door-enables-waf) seems to expect that the `web_application_firewall_policy_link_id` attribute is defined in the resource block itself, rather than in a sub-block (`frontend_endpoint`).\r\n\r\n- [`azurerm_frontdoor` resource documentation reference](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/frontdoor#web_application_firewall_policy_link_id)\r\n\r\n**Examples**\r\n```terraform\r\nresource \"azurerm_frontdoor\" \"test\" {\r\n name = \"test-front-door\"\r\n resource_group_name = var.resource_group_name\r\n enforce_backend_pools_certificate_name_check = false\r\n tags = var.tags\r\n\r\n frontend_endpoint {\r\n name = \"DefaultFrontend\"\r\n host_name = \"test-front-door.azurefd.net\"\r\n web_application_firewall_policy_link_id = azurerm_frontdoor_firewall_policy.test.id\r\n }\r\n\r\n # ... 
\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version: 2.0.930\r\n\r\n**Additional context**\r\n\n", "before_files": [{"content": "from checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AzureFrontDoorEnablesWAF(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that Azure Front Door enables WAF\"\n id = \"CKV_AZURE_121\"\n supported_resources = ['azurerm_frontdoor']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"web_application_firewall_policy_link_id\"\n\n def get_expected_value(self):\n return ANY_VALUE\n\n\ncheck = AzureFrontDoorEnablesWAF()\n", "path": "checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py"}], "after_files": [{"content": "from checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AzureFrontDoorEnablesWAF(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that Azure Front Door enables WAF\"\n id = \"CKV_AZURE_121\"\n supported_resources = ['azurerm_frontdoor']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"frontend_endpoint/[0]/web_application_firewall_policy_link_id\"\n\n def get_expected_value(self):\n return ANY_VALUE\n\n\ncheck = AzureFrontDoorEnablesWAF()\n", "path": "checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py"}]} | 833 | 168 |
gh_patches_debug_23532 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-29 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Configure flake8 & GitHub Action correctly
Our flake8 setup has a couple of issues:
- Failures on the GitHub Action don't actually block merge.
- We need to set up our style guide for flake8.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/forms/widgets.py`
Content:
```
1 from django.forms.widgets import TextInput
2
3 class DataListInput(TextInput):
4 """
5 Widget that adds a <data_list> element to the standard text input widget.
6 See TextInput for further details.
7
8 Attributes:
9 data_list: List of strings, where each string is a data_list value, or
10 a callable that returns a list of the same form
11 data_list_id: ID of the data_list, generated when render() is called.
12 Of the form [widget_id | widget_name]_data_list
13 """
14 template_name = "mathesar/widgets/data_list.html"
15
16 def __init__(self, data_list, attrs=None):
17 super().__init__(attrs=attrs)
18 self.data_list = data_list
19 self.data_list_id = "_data_list"
20
21 def get_context(self, name, value, attrs):
22 context = super().get_context(name, value, attrs)
23 if callable(self.data_list):
24 context["widget"]["data_list"] = self.data_list()
25 else:
26 context["widget"]["data_list"] = self.data_list
27 context["widget"]["data_list_id"] = self.data_list_id
28 return context
29
30 def render(self, name, value, attrs=None, renderer=None):
31 # In practice, there should always be an ID attribute, but we fallback
32 # to using widget name if ID is missing
33 if attrs and "id" in attrs:
34 self.data_list_id = attrs["id"] + "_data_list"
35 else:
36 self.data_list_id = name + "_data_list"
37 attrs = {} if attrs is None else attrs
38 attrs["list"] = self.data_list_id
39 return super().render(name, value, attrs, renderer)
40
41
```
Path: `mathesar/forms/forms.py`
Content:
```
1 from django import forms
2 from django.core.exceptions import ValidationError
3
4 from mathesar.database.schemas import get_all_schemas
5 from mathesar.forms.widgets import DataListInput
6
7 def validate_csv(value):
8 if not value.name.lower().endswith(".csv"):
9 raise ValidationError(f"{value.name} is not a CSV file")
10
11 class UploadFileForm(forms.Form):
12 collection_name = forms.CharField(min_length=1, label="Collection Name")
13
14 application_name = forms.CharField(
15 min_length=1, label="Application Name",
16 widget=DataListInput(get_all_schemas)
17 )
18
19 file = forms.FileField(validators=[validate_csv], label="CSV File")
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/forms/forms.py b/mathesar/forms/forms.py
--- a/mathesar/forms/forms.py
+++ b/mathesar/forms/forms.py
@@ -4,10 +4,12 @@
from mathesar.database.schemas import get_all_schemas
from mathesar.forms.widgets import DataListInput
+
def validate_csv(value):
if not value.name.lower().endswith(".csv"):
raise ValidationError(f"{value.name} is not a CSV file")
+
class UploadFileForm(forms.Form):
collection_name = forms.CharField(min_length=1, label="Collection Name")
diff --git a/mathesar/forms/widgets.py b/mathesar/forms/widgets.py
--- a/mathesar/forms/widgets.py
+++ b/mathesar/forms/widgets.py
@@ -1,5 +1,6 @@
from django.forms.widgets import TextInput
+
class DataListInput(TextInput):
"""
Widget that adds a <data_list> element to the standard text input widget.
@@ -37,4 +38,3 @@
attrs = {} if attrs is None else attrs
attrs["list"] = self.data_list_id
return super().render(name, value, attrs, renderer)
-
| {"golden_diff": "diff --git a/mathesar/forms/forms.py b/mathesar/forms/forms.py\n--- a/mathesar/forms/forms.py\n+++ b/mathesar/forms/forms.py\n@@ -4,10 +4,12 @@\n from mathesar.database.schemas import get_all_schemas\n from mathesar.forms.widgets import DataListInput\n \n+\n def validate_csv(value):\n if not value.name.lower().endswith(\".csv\"):\n raise ValidationError(f\"{value.name} is not a CSV file\")\n \n+\n class UploadFileForm(forms.Form):\n collection_name = forms.CharField(min_length=1, label=\"Collection Name\")\n \ndiff --git a/mathesar/forms/widgets.py b/mathesar/forms/widgets.py\n--- a/mathesar/forms/widgets.py\n+++ b/mathesar/forms/widgets.py\n@@ -1,5 +1,6 @@\n from django.forms.widgets import TextInput\n \n+\n class DataListInput(TextInput):\n \"\"\"\n Widget that adds a <data_list> element to the standard text input widget.\n@@ -37,4 +38,3 @@\n attrs = {} if attrs is None else attrs\n attrs[\"list\"] = self.data_list_id\n return super().render(name, value, attrs, renderer)\n-\n", "issue": "Configure flake8 & GitHub Action correctly\nOur flake8 setup has a couple of issues:\r\n- Failures on the GitHub Action don't actually block merge.\r\n- We need to set up our style guide for flake8.\n", "before_files": [{"content": "from django.forms.widgets import TextInput\n\nclass DataListInput(TextInput):\n \"\"\"\n Widget that adds a <data_list> element to the standard text input widget.\n See TextInput for further details.\n\n Attributes:\n data_list: List of strings, where each string is a data_list value, or\n a callable that returns a list of the same form\n data_list_id: ID of the data_list, generated when render() is called.\n Of the form [widget_id | widget_name]_data_list\n \"\"\"\n template_name = \"mathesar/widgets/data_list.html\"\n\n def __init__(self, data_list, attrs=None):\n super().__init__(attrs=attrs)\n self.data_list = data_list\n self.data_list_id = \"_data_list\"\n\n def get_context(self, name, value, attrs):\n context = super().get_context(name, value, attrs)\n if callable(self.data_list):\n context[\"widget\"][\"data_list\"] = self.data_list()\n else:\n context[\"widget\"][\"data_list\"] = self.data_list\n context[\"widget\"][\"data_list_id\"] = self.data_list_id\n return context\n\n def render(self, name, value, attrs=None, renderer=None):\n # In practice, there should always be an ID attribute, but we fallback\n # to using widget name if ID is missing\n if attrs and \"id\" in attrs:\n self.data_list_id = attrs[\"id\"] + \"_data_list\"\n else:\n self.data_list_id = name + \"_data_list\"\n attrs = {} if attrs is None else attrs\n attrs[\"list\"] = self.data_list_id\n return super().render(name, value, attrs, renderer)\n\n", "path": "mathesar/forms/widgets.py"}, {"content": "from django import forms\nfrom django.core.exceptions import ValidationError\n\nfrom mathesar.database.schemas import get_all_schemas\nfrom mathesar.forms.widgets import DataListInput\n\ndef validate_csv(value):\n if not value.name.lower().endswith(\".csv\"):\n raise ValidationError(f\"{value.name} is not a CSV file\")\n\nclass UploadFileForm(forms.Form):\n collection_name = forms.CharField(min_length=1, label=\"Collection Name\")\n\n application_name = forms.CharField(\n min_length=1, label=\"Application Name\",\n widget=DataListInput(get_all_schemas)\n )\n\n file = forms.FileField(validators=[validate_csv], label=\"CSV File\")\n", "path": "mathesar/forms/forms.py"}], "after_files": [{"content": "from django.forms.widgets import TextInput\n\n\nclass DataListInput(TextInput):\n \"\"\"\n 
Widget that adds a <data_list> element to the standard text input widget.\n See TextInput for further details.\n\n Attributes:\n data_list: List of strings, where each string is a data_list value, or\n a callable that returns a list of the same form\n data_list_id: ID of the data_list, generated when render() is called.\n Of the form [widget_id | widget_name]_data_list\n \"\"\"\n template_name = \"mathesar/widgets/data_list.html\"\n\n def __init__(self, data_list, attrs=None):\n super().__init__(attrs=attrs)\n self.data_list = data_list\n self.data_list_id = \"_data_list\"\n\n def get_context(self, name, value, attrs):\n context = super().get_context(name, value, attrs)\n if callable(self.data_list):\n context[\"widget\"][\"data_list\"] = self.data_list()\n else:\n context[\"widget\"][\"data_list\"] = self.data_list\n context[\"widget\"][\"data_list_id\"] = self.data_list_id\n return context\n\n def render(self, name, value, attrs=None, renderer=None):\n # In practice, there should always be an ID attribute, but we fallback\n # to using widget name if ID is missing\n if attrs and \"id\" in attrs:\n self.data_list_id = attrs[\"id\"] + \"_data_list\"\n else:\n self.data_list_id = name + \"_data_list\"\n attrs = {} if attrs is None else attrs\n attrs[\"list\"] = self.data_list_id\n return super().render(name, value, attrs, renderer)\n", "path": "mathesar/forms/widgets.py"}, {"content": "from django import forms\nfrom django.core.exceptions import ValidationError\n\nfrom mathesar.database.schemas import get_all_schemas\nfrom mathesar.forms.widgets import DataListInput\n\n\ndef validate_csv(value):\n if not value.name.lower().endswith(\".csv\"):\n raise ValidationError(f\"{value.name} is not a CSV file\")\n\n\nclass UploadFileForm(forms.Form):\n collection_name = forms.CharField(min_length=1, label=\"Collection Name\")\n\n application_name = forms.CharField(\n min_length=1, label=\"Application Name\",\n widget=DataListInput(get_all_schemas)\n )\n\n file = forms.FileField(validators=[validate_csv], label=\"CSV File\")\n", "path": "mathesar/forms/forms.py"}]} | 932 | 248 |
gh_patches_debug_155 | rasdani/github-patches | git_diff | hylang__hy-1369 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Official support for evaluating strings of Hy code from Python
Is it possible to embed some Hy code inside a Python file, as opposed to having the whole file be full-on Hy?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/__init__.py`
Content:
```
1 __appname__ = 'hy'
2 try:
3 from hy.version import __version__
4 except ImportError:
5 __version__ = 'unknown'
6
7
8 from hy.models import HyExpression, HyInteger, HyKeyword, HyComplex, HyString, HyBytes, HySymbol, HyFloat, HyDict, HyList, HySet, HyCons # NOQA
9
10
11 import hy.importer # NOQA
12 # we import for side-effects.
13
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hy/__init__.py b/hy/__init__.py
--- a/hy/__init__.py
+++ b/hy/__init__.py
@@ -10,3 +10,7 @@
import hy.importer # NOQA
# we import for side-effects.
+
+
+from hy.core.language import read, read_str # NOQA
+from hy.importer import hy_eval as eval # NOQA
| {"golden_diff": "diff --git a/hy/__init__.py b/hy/__init__.py\n--- a/hy/__init__.py\n+++ b/hy/__init__.py\n@@ -10,3 +10,7 @@\n \n import hy.importer # NOQA\n # we import for side-effects.\n+\n+\n+from hy.core.language import read, read_str # NOQA\n+from hy.importer import hy_eval as eval # NOQA\n", "issue": "Official support for evaluating strings of Hy code from Python\nIs it possible to embed some hy code inside a python file? As opposed to having the whole file be full on hy?\n", "before_files": [{"content": "__appname__ = 'hy'\ntry:\n from hy.version import __version__\nexcept ImportError:\n __version__ = 'unknown'\n\n\nfrom hy.models import HyExpression, HyInteger, HyKeyword, HyComplex, HyString, HyBytes, HySymbol, HyFloat, HyDict, HyList, HySet, HyCons # NOQA\n\n\nimport hy.importer # NOQA\n# we import for side-effects.\n", "path": "hy/__init__.py"}], "after_files": [{"content": "__appname__ = 'hy'\ntry:\n from hy.version import __version__\nexcept ImportError:\n __version__ = 'unknown'\n\n\nfrom hy.models import HyExpression, HyInteger, HyKeyword, HyComplex, HyString, HyBytes, HySymbol, HyFloat, HyDict, HyList, HySet, HyCons # NOQA\n\n\nimport hy.importer # NOQA\n# we import for side-effects.\n\n\nfrom hy.core.language import read, read_str # NOQA\nfrom hy.importer import hy_eval as eval # NOQA\n", "path": "hy/__init__.py"}]} | 405 | 97 |
gh_patches_debug_11274 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1021 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove Python 3.5 support
Python 3.5 hit EOL September 13, 2020. Support will be removed in our next major release.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/__init__.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30 import sys
31
32 from elasticapm.base import Client
33 from elasticapm.conf import setup_logging # noqa: F401
34 from elasticapm.instrumentation.control import instrument, uninstrument # noqa: F401
35 from elasticapm.traces import ( # noqa: F401
36 capture_span,
37 get_span_id,
38 get_trace_id,
39 get_transaction_id,
40 get_trace_parent_header,
41 label,
42 set_context,
43 set_custom_context,
44 set_transaction_name,
45 set_transaction_outcome,
46 set_transaction_result,
47 set_user_context,
48 tag,
49 )
50 from elasticapm.utils.disttracing import trace_parent_from_headers, trace_parent_from_string # noqa: F401
51
52 __all__ = ("VERSION", "Client")
53
54 try:
55 try:
56 VERSION = __import__("importlib.metadata").metadata.version("elastic-apm")
57 except ImportError:
58 VERSION = __import__("pkg_resources").get_distribution("elastic-apm").version
59 except Exception:
60 VERSION = "unknown"
61
62
63 if sys.version_info >= (3, 5):
64 from elasticapm.contrib.asyncio.traces import async_capture_span # noqa: F401
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/__init__.py b/elasticapm/__init__.py
--- a/elasticapm/__init__.py
+++ b/elasticapm/__init__.py
@@ -36,8 +36,8 @@
capture_span,
get_span_id,
get_trace_id,
- get_transaction_id,
get_trace_parent_header,
+ get_transaction_id,
label,
set_context,
set_custom_context,
@@ -60,5 +60,7 @@
VERSION = "unknown"
-if sys.version_info >= (3, 5):
- from elasticapm.contrib.asyncio.traces import async_capture_span # noqa: F401
+if sys.version_info <= (3, 5):
+ raise DeprecationWarning("The Elastic APM agent requires Python 3.6+")
+
+from elasticapm.contrib.asyncio.traces import async_capture_span # noqa: F401
| {"golden_diff": "diff --git a/elasticapm/__init__.py b/elasticapm/__init__.py\n--- a/elasticapm/__init__.py\n+++ b/elasticapm/__init__.py\n@@ -36,8 +36,8 @@\n capture_span,\n get_span_id,\n get_trace_id,\n- get_transaction_id,\n get_trace_parent_header,\n+ get_transaction_id,\n label,\n set_context,\n set_custom_context,\n@@ -60,5 +60,7 @@\n VERSION = \"unknown\"\n \n \n-if sys.version_info >= (3, 5):\n- from elasticapm.contrib.asyncio.traces import async_capture_span # noqa: F401\n+if sys.version_info <= (3, 5):\n+ raise DeprecationWarning(\"The Elastic APM agent requires Python 3.6+\")\n+\n+from elasticapm.contrib.asyncio.traces import async_capture_span # noqa: F401\n", "issue": "Remove Python 3.5 support\nPython 3.5 hit EOL September 13, 2020. Support will be removed in our next major release.\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nimport sys\n\nfrom elasticapm.base import Client\nfrom elasticapm.conf import setup_logging # noqa: F401\nfrom elasticapm.instrumentation.control import instrument, uninstrument # noqa: F401\nfrom elasticapm.traces import ( # noqa: F401\n capture_span,\n get_span_id,\n get_trace_id,\n get_transaction_id,\n get_trace_parent_header,\n label,\n set_context,\n set_custom_context,\n set_transaction_name,\n set_transaction_outcome,\n set_transaction_result,\n set_user_context,\n tag,\n)\nfrom elasticapm.utils.disttracing import trace_parent_from_headers, trace_parent_from_string # noqa: F401\n\n__all__ = (\"VERSION\", \"Client\")\n\ntry:\n try:\n VERSION = __import__(\"importlib.metadata\").metadata.version(\"elastic-apm\")\n except ImportError:\n VERSION = __import__(\"pkg_resources\").get_distribution(\"elastic-apm\").version\nexcept Exception:\n VERSION = \"unknown\"\n\n\nif sys.version_info >= (3, 5):\n from elasticapm.contrib.asyncio.traces import async_capture_span # noqa: F401\n", "path": "elasticapm/__init__.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nimport sys\n\nfrom elasticapm.base import Client\nfrom elasticapm.conf import setup_logging # noqa: F401\nfrom elasticapm.instrumentation.control import instrument, uninstrument # noqa: F401\nfrom elasticapm.traces import ( # noqa: F401\n capture_span,\n get_span_id,\n get_trace_id,\n get_trace_parent_header,\n get_transaction_id,\n label,\n set_context,\n set_custom_context,\n set_transaction_name,\n set_transaction_outcome,\n set_transaction_result,\n set_user_context,\n tag,\n)\nfrom elasticapm.utils.disttracing import trace_parent_from_headers, trace_parent_from_string # noqa: F401\n\n__all__ = (\"VERSION\", \"Client\")\n\ntry:\n try:\n VERSION = __import__(\"importlib.metadata\").metadata.version(\"elastic-apm\")\n except ImportError:\n VERSION = __import__(\"pkg_resources\").get_distribution(\"elastic-apm\").version\nexcept Exception:\n VERSION = \"unknown\"\n\n\nif sys.version_info <= (3, 5):\n raise DeprecationWarning(\"The Elastic APM agent requires Python 3.6+\")\n\nfrom elasticapm.contrib.asyncio.traces import async_capture_span # noqa: F401\n", "path": "elasticapm/__init__.py"}]} | 1,020 | 211 |
gh_patches_debug_36865 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-834 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Simplify cookiecutter.hooks.find_hooks
We should rename `cookiecutter.hooks.find_hooks` to `find_hook(hook_name)` and explicitly look for the requested hook, instead of processing all the files in the hooks directory.
See https://github.com/audreyr/cookiecutter/pull/768/files/9a94484093ca23e9d55d42a53f096f67535b0b63#r68646614
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/hooks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Functions for discovering and executing various cookiecutter hooks."""
4
5 import io
6 import logging
7 import os
8 import subprocess
9 import sys
10 import tempfile
11
12 from jinja2 import Template
13
14 from cookiecutter import utils
15 from .exceptions import FailedHookException
16
17 logger = logging.getLogger(__name__)
18
19
20 _HOOKS = [
21 'pre_gen_project',
22 'post_gen_project',
23 # TODO: other hooks should be listed here
24 ]
25 EXIT_SUCCESS = 0
26
27
28 def find_hooks():
29 """Return a dict of all hook scripts provided.
30
31 Must be called with the project template as the current working directory.
32 Dict's key will be the hook/script's name, without extension, while values
33 will be the absolute path to the script. Missing scripts will not be
34 included in the returned dict.
35 """
36 hooks_dir = 'hooks'
37 hooks = {}
38 logger.debug('hooks_dir is {}'.format(hooks_dir))
39
40 if not os.path.isdir(hooks_dir):
41 logger.debug('No hooks/ dir in template_dir')
42 return hooks
43
44 for f in os.listdir(hooks_dir):
45 filename = os.path.basename(f)
46 basename = os.path.splitext(filename)[0]
47
48 if basename in _HOOKS and not filename.endswith('~'):
49 hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))
50 return hooks
51
52
53 def run_script(script_path, cwd='.'):
54 """Execute a script from a working directory.
55
56 :param script_path: Absolute path to the script to run.
57 :param cwd: The directory to run the script from.
58 """
59 run_thru_shell = sys.platform.startswith('win')
60 if script_path.endswith('.py'):
61 script_command = [sys.executable, script_path]
62 else:
63 script_command = [script_path]
64
65 utils.make_executable(script_path)
66
67 proc = subprocess.Popen(
68 script_command,
69 shell=run_thru_shell,
70 cwd=cwd
71 )
72 exit_status = proc.wait()
73 if exit_status != EXIT_SUCCESS:
74 raise FailedHookException(
75 "Hook script failed (exit status: %d)" % exit_status)
76
77
78 def run_script_with_context(script_path, cwd, context):
79 """Execute a script after rendering it with Jinja.
80
81 :param script_path: Absolute path to the script to run.
82 :param cwd: The directory to run the script from.
83 :param context: Cookiecutter project template context.
84 """
85 _, extension = os.path.splitext(script_path)
86
87 contents = io.open(script_path, 'r', encoding='utf-8').read()
88
89 with tempfile.NamedTemporaryFile(
90 delete=False,
91 mode='wb',
92 suffix=extension
93 ) as temp:
94 output = Template(contents).render(**context)
95 temp.write(output.encode('utf-8'))
96
97 run_script(temp.name, cwd)
98
99
100 def run_hook(hook_name, project_dir, context):
101 """
102 Try to find and execute a hook from the specified project directory.
103
104 :param hook_name: The hook to execute.
105 :param project_dir: The directory to execute the script from.
106 :param context: Cookiecutter project context.
107 """
108 script = find_hooks().get(hook_name)
109 if script is None:
110 logger.debug('No hooks found')
111 return
112 logger.debug('Running hook {}'.format(hook_name))
113 run_script_with_context(script, project_dir, context)
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py
--- a/cookiecutter/hooks.py
+++ b/cookiecutter/hooks.py
@@ -16,38 +16,53 @@
logger = logging.getLogger(__name__)
-
_HOOKS = [
'pre_gen_project',
'post_gen_project',
- # TODO: other hooks should be listed here
]
EXIT_SUCCESS = 0
-def find_hooks():
+def valid_hook(hook_file, hook_name):
+ """Determine if a hook file is valid.
+
+ :param hook_file: The hook file to consider for validity
+ :param hook_name: The hook to find
+ :return: The hook file validity
+ """
+ filename = os.path.basename(hook_file)
+ basename = os.path.splitext(filename)[0]
+
+ matching_hook = basename == hook_name
+ supported_hook = basename in _HOOKS
+ backup_file = filename.endswith('~')
+
+ return matching_hook and supported_hook and not backup_file
+
+
+def find_hook(hook_name, hooks_dir='hooks'):
"""Return a dict of all hook scripts provided.
Must be called with the project template as the current working directory.
Dict's key will be the hook/script's name, without extension, while values
will be the absolute path to the script. Missing scripts will not be
included in the returned dict.
+
+ :param hook_name: The hook to find
+ :param hooks_dir: The hook directory in the template
+ :return: The absolute path to the hook script or None
"""
- hooks_dir = 'hooks'
- hooks = {}
- logger.debug('hooks_dir is {}'.format(hooks_dir))
+ logger.debug('hooks_dir is {}'.format(os.path.abspath(hooks_dir)))
if not os.path.isdir(hooks_dir):
logger.debug('No hooks/ dir in template_dir')
- return hooks
+ return None
- for f in os.listdir(hooks_dir):
- filename = os.path.basename(f)
- basename = os.path.splitext(filename)[0]
+ for hook_file in os.listdir(hooks_dir):
+ if valid_hook(hook_file, hook_name):
+ return os.path.abspath(os.path.join(hooks_dir, hook_file))
- if basename in _HOOKS and not filename.endswith('~'):
- hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))
- return hooks
+ return None
def run_script(script_path, cwd='.'):
@@ -105,7 +120,7 @@
:param project_dir: The directory to execute the script from.
:param context: Cookiecutter project context.
"""
- script = find_hooks().get(hook_name)
+ script = find_hook(hook_name)
if script is None:
logger.debug('No hooks found')
return
| {"golden_diff": "diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py\n--- a/cookiecutter/hooks.py\n+++ b/cookiecutter/hooks.py\n@@ -16,38 +16,53 @@\n \n logger = logging.getLogger(__name__)\n \n-\n _HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n- # TODO: other hooks should be listed here\n ]\n EXIT_SUCCESS = 0\n \n \n-def find_hooks():\n+def valid_hook(hook_file, hook_name):\n+ \"\"\"Determine if a hook file is valid.\n+\n+ :param hook_file: The hook file to consider for validity\n+ :param hook_name: The hook to find\n+ :return: The hook file validity\n+ \"\"\"\n+ filename = os.path.basename(hook_file)\n+ basename = os.path.splitext(filename)[0]\n+\n+ matching_hook = basename == hook_name\n+ supported_hook = basename in _HOOKS\n+ backup_file = filename.endswith('~')\n+\n+ return matching_hook and supported_hook and not backup_file\n+\n+\n+def find_hook(hook_name, hooks_dir='hooks'):\n \"\"\"Return a dict of all hook scripts provided.\n \n Must be called with the project template as the current working directory.\n Dict's key will be the hook/script's name, without extension, while values\n will be the absolute path to the script. Missing scripts will not be\n included in the returned dict.\n+\n+ :param hook_name: The hook to find\n+ :param hooks_dir: The hook directory in the template\n+ :return: The absolute path to the hook script or None\n \"\"\"\n- hooks_dir = 'hooks'\n- hooks = {}\n- logger.debug('hooks_dir is {}'.format(hooks_dir))\n+ logger.debug('hooks_dir is {}'.format(os.path.abspath(hooks_dir)))\n \n if not os.path.isdir(hooks_dir):\n logger.debug('No hooks/ dir in template_dir')\n- return hooks\n+ return None\n \n- for f in os.listdir(hooks_dir):\n- filename = os.path.basename(f)\n- basename = os.path.splitext(filename)[0]\n+ for hook_file in os.listdir(hooks_dir):\n+ if valid_hook(hook_file, hook_name):\n+ return os.path.abspath(os.path.join(hooks_dir, hook_file))\n \n- if basename in _HOOKS and not filename.endswith('~'):\n- hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n- return hooks\n+ return None\n \n \n def run_script(script_path, cwd='.'):\n@@ -105,7 +120,7 @@\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n- script = find_hooks().get(hook_name)\n+ script = find_hook(hook_name)\n if script is None:\n logger.debug('No hooks found')\n return\n", "issue": "Simplify cookiecutter.hooks.find_hooks\nWe should rename `cookiecutter.hooks.find_hooks` to `find_hook(hook_name)` and explicitly look for the requested hook, instead of processing all the files in the hooks directory.\n\nSee https://github.com/audreyr/cookiecutter/pull/768/files/9a94484093ca23e9d55d42a53f096f67535b0b63#r68646614\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Functions for discovering and executing various cookiecutter hooks.\"\"\"\n\nimport io\nimport logging\nimport os\nimport subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom cookiecutter import utils\nfrom .exceptions import FailedHookException\n\nlogger = logging.getLogger(__name__)\n\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n # TODO: other hooks should be listed here\n]\nEXIT_SUCCESS = 0\n\n\ndef find_hooks():\n \"\"\"Return a dict of all hook scripts provided.\n\n Must be called with the project template as the current working directory.\n Dict's key will be the hook/script's name, without extension, while values\n will be the absolute path to the script. 
Missing scripts will not be\n included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n hooks = {}\n logger.debug('hooks_dir is {}'.format(hooks_dir))\n\n if not os.path.isdir(hooks_dir):\n logger.debug('No hooks/ dir in template_dir')\n return hooks\n\n for f in os.listdir(hooks_dir):\n filename = os.path.basename(f)\n basename = os.path.splitext(filename)[0]\n\n if basename in _HOOKS and not filename.endswith('~'):\n hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n return hooks\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"Execute a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n \"Hook script failed (exit status: %d)\" % exit_status)\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"Execute a script after rendering it with Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hooks().get(hook_name)\n if script is None:\n logger.debug('No hooks found')\n return\n logger.debug('Running hook {}'.format(hook_name))\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Functions for discovering and executing various cookiecutter hooks.\"\"\"\n\nimport io\nimport logging\nimport os\nimport subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom cookiecutter import utils\nfrom .exceptions import FailedHookException\n\nlogger = logging.getLogger(__name__)\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n]\nEXIT_SUCCESS = 0\n\n\ndef valid_hook(hook_file, hook_name):\n \"\"\"Determine if a hook file is valid.\n\n :param hook_file: The hook file to consider for validity\n :param hook_name: The hook to find\n :return: The hook file validity\n \"\"\"\n filename = os.path.basename(hook_file)\n basename = os.path.splitext(filename)[0]\n\n matching_hook = basename == hook_name\n supported_hook = basename in _HOOKS\n backup_file = filename.endswith('~')\n\n return matching_hook and supported_hook and not backup_file\n\n\ndef find_hook(hook_name, hooks_dir='hooks'):\n \"\"\"Return a dict of all hook scripts provided.\n\n Must be called with the project template as the current working directory.\n Dict's key will be the hook/script's name, without extension, while values\n will be the 
absolute path to the script. Missing scripts will not be\n included in the returned dict.\n\n :param hook_name: The hook to find\n :param hooks_dir: The hook directory in the template\n :return: The absolute path to the hook script or None\n \"\"\"\n logger.debug('hooks_dir is {}'.format(os.path.abspath(hooks_dir)))\n\n if not os.path.isdir(hooks_dir):\n logger.debug('No hooks/ dir in template_dir')\n return None\n\n for hook_file in os.listdir(hooks_dir):\n if valid_hook(hook_file, hook_name):\n return os.path.abspath(os.path.join(hooks_dir, hook_file))\n\n return None\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"Execute a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n \"Hook script failed (exit status: %d)\" % exit_status)\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"Execute a script after rendering it with Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hook(hook_name)\n if script is None:\n logger.debug('No hooks found')\n return\n logger.debug('Running hook {}'.format(hook_name))\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}]} | 1,354 | 640 |
gh_patches_debug_26507 | rasdani/github-patches | git_diff | airctic__icevision-960 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add more logging to the pytorch lighning models.
The feature consists of two parts:
1. Add the validation loss to the progress bar by default
2. Create boolean parameter for extended progress bar logging (showing the different components of the loss)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/engines/lightning/lightning_model_adapter.py`
Content:
```
1 __all__ = ["LightningModelAdapter"]
2
3 import pytorch_lightning as pl
4 from icevision.imports import *
5 from icevision.metrics import *
6
7
8 class LightningModelAdapter(pl.LightningModule, ABC):
9 def __init__(self, metrics: List[Metric] = None):
10 super().__init__()
11 self.metrics = metrics or []
12
13 def accumulate_metrics(self, preds):
14 for metric in self.metrics:
15 metric.accumulate(preds=preds)
16
17 def finalize_metrics(self) -> None:
18 for metric in self.metrics:
19 metric_logs = metric.finalize()
20 for k, v in metric_logs.items():
21 self.log(f"{metric.name}/{k}", v)
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/icevision/engines/lightning/lightning_model_adapter.py b/icevision/engines/lightning/lightning_model_adapter.py
--- a/icevision/engines/lightning/lightning_model_adapter.py
+++ b/icevision/engines/lightning/lightning_model_adapter.py
@@ -6,9 +6,21 @@
class LightningModelAdapter(pl.LightningModule, ABC):
- def __init__(self, metrics: List[Metric] = None):
+ def __init__(
+ self,
+ metrics: List[Metric] = None,
+ metrics_keys_to_log_to_prog_bar: List[tuple] = None,
+ ):
+ """
+ To show a metric in the progressbar a list of tupels can be provided for metrics_keys_to_log_to_prog_bar, the first
+ entry has to be the name of the metric to log and the second entry the display name in the progressbar. By default the
+ mAP is logged to the progressbar.
+ """
super().__init__()
self.metrics = metrics or []
+ self.metrics_keys_to_log_to_prog_bar = metrics_keys_to_log_to_prog_bar or [
+ ("AP (IoU=0.50:0.95) area=all", "COCOMetric")
+ ]
def accumulate_metrics(self, preds):
for metric in self.metrics:
@@ -18,4 +30,9 @@
for metric in self.metrics:
metric_logs = metric.finalize()
for k, v in metric_logs.items():
- self.log(f"{metric.name}/{k}", v)
+ for entry in self.metrics_keys_to_log_to_prog_bar:
+ if entry[0] == k:
+ self.log(entry[1], v, prog_bar=True)
+ self.log(f"{metric.name}/{k}", v)
+ else:
+ self.log(f"{metric.name}/{k}", v)
| {"golden_diff": "diff --git a/icevision/engines/lightning/lightning_model_adapter.py b/icevision/engines/lightning/lightning_model_adapter.py\n--- a/icevision/engines/lightning/lightning_model_adapter.py\n+++ b/icevision/engines/lightning/lightning_model_adapter.py\n@@ -6,9 +6,21 @@\n \n \n class LightningModelAdapter(pl.LightningModule, ABC):\n- def __init__(self, metrics: List[Metric] = None):\n+ def __init__(\n+ self,\n+ metrics: List[Metric] = None,\n+ metrics_keys_to_log_to_prog_bar: List[tuple] = None,\n+ ):\n+ \"\"\"\n+ To show a metric in the progressbar a list of tupels can be provided for metrics_keys_to_log_to_prog_bar, the first\n+ entry has to be the name of the metric to log and the second entry the display name in the progressbar. By default the\n+ mAP is logged to the progressbar.\n+ \"\"\"\n super().__init__()\n self.metrics = metrics or []\n+ self.metrics_keys_to_log_to_prog_bar = metrics_keys_to_log_to_prog_bar or [\n+ (\"AP (IoU=0.50:0.95) area=all\", \"COCOMetric\")\n+ ]\n \n def accumulate_metrics(self, preds):\n for metric in self.metrics:\n@@ -18,4 +30,9 @@\n for metric in self.metrics:\n metric_logs = metric.finalize()\n for k, v in metric_logs.items():\n- self.log(f\"{metric.name}/{k}\", v)\n+ for entry in self.metrics_keys_to_log_to_prog_bar:\n+ if entry[0] == k:\n+ self.log(entry[1], v, prog_bar=True)\n+ self.log(f\"{metric.name}/{k}\", v)\n+ else:\n+ self.log(f\"{metric.name}/{k}\", v)\n", "issue": "Add more logging to the pytorch lighning models.\nThe feature consists of two parts:\r\n 1. Add the validation loss to the progress bar by default\r\n 2. Create boolean parameter for extended progress bar logging (showing the different components of the loss)\n", "before_files": [{"content": "__all__ = [\"LightningModelAdapter\"]\n\nimport pytorch_lightning as pl\nfrom icevision.imports import *\nfrom icevision.metrics import *\n\n\nclass LightningModelAdapter(pl.LightningModule, ABC):\n def __init__(self, metrics: List[Metric] = None):\n super().__init__()\n self.metrics = metrics or []\n\n def accumulate_metrics(self, preds):\n for metric in self.metrics:\n metric.accumulate(preds=preds)\n\n def finalize_metrics(self) -> None:\n for metric in self.metrics:\n metric_logs = metric.finalize()\n for k, v in metric_logs.items():\n self.log(f\"{metric.name}/{k}\", v)\n", "path": "icevision/engines/lightning/lightning_model_adapter.py"}], "after_files": [{"content": "__all__ = [\"LightningModelAdapter\"]\n\nimport pytorch_lightning as pl\nfrom icevision.imports import *\nfrom icevision.metrics import *\n\n\nclass LightningModelAdapter(pl.LightningModule, ABC):\n def __init__(\n self,\n metrics: List[Metric] = None,\n metrics_keys_to_log_to_prog_bar: List[tuple] = None,\n ):\n \"\"\"\n To show a metric in the progressbar a list of tupels can be provided for metrics_keys_to_log_to_prog_bar, the first\n entry has to be the name of the metric to log and the second entry the display name in the progressbar. 
By default the\n mAP is logged to the progressbar.\n \"\"\"\n super().__init__()\n self.metrics = metrics or []\n self.metrics_keys_to_log_to_prog_bar = metrics_keys_to_log_to_prog_bar or [\n (\"AP (IoU=0.50:0.95) area=all\", \"COCOMetric\")\n ]\n\n def accumulate_metrics(self, preds):\n for metric in self.metrics:\n metric.accumulate(preds=preds)\n\n def finalize_metrics(self) -> None:\n for metric in self.metrics:\n metric_logs = metric.finalize()\n for k, v in metric_logs.items():\n for entry in self.metrics_keys_to_log_to_prog_bar:\n if entry[0] == k:\n self.log(entry[1], v, prog_bar=True)\n self.log(f\"{metric.name}/{k}\", v)\n else:\n self.log(f\"{metric.name}/{k}\", v)\n", "path": "icevision/engines/lightning/lightning_model_adapter.py"}]} | 502 | 417 |
gh_patches_debug_13791 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-3409 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
app_key not passed to aiohttp_jinja2
When using aiohttp_admin the app_key value for the templating module differs from the default one.
This causes an error executing:
https://github.com/DataDog/dd-trace-py/blob/ec191a4a71ae71017b70d26111bba4489e617ae5/ddtrace/contrib/aiohttp/template.py#L21
As far as I understand this would solve the problem.
`env = aiohttp_jinja2.get_env(request.app, app_key=kwargs["app_key"])`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/aiohttp_jinja2/patch.py`
Content:
```
1 from ddtrace import Pin
2 from ddtrace import config
3
4 from ...ext import SpanTypes
5 from ...internal.utils import get_argument_value
6 from ..trace_utils import unwrap
7 from ..trace_utils import with_traced_module
8 from ..trace_utils import wrap
9
10
11 config._add(
12 "aiohttp_jinja2",
13 dict(),
14 )
15
16
17 @with_traced_module
18 def traced_render_template(aiohttp_jinja2, pin, func, instance, args, kwargs):
19 # original signature:
20 # render_template(template_name, request, context, *, app_key=APP_KEY, encoding='utf-8')
21 template_name = get_argument_value(args, kwargs, 0, "template_name")
22 request = get_argument_value(args, kwargs, 1, "request")
23 env = aiohttp_jinja2.get_env(request.app)
24
25 # the prefix is available only on PackageLoader
26 template_prefix = getattr(env.loader, "package_path", "")
27 template_meta = "%s/%s" % (template_prefix, template_name)
28
29 with pin.tracer.trace("aiohttp.template", span_type=SpanTypes.TEMPLATE) as span:
30 span.set_tag("aiohttp.template", template_meta)
31 return func(*args, **kwargs)
32
33
34 def _patch(aiohttp_jinja2):
35 Pin().onto(aiohttp_jinja2)
36 wrap("aiohttp_jinja2", "render_template", traced_render_template(aiohttp_jinja2))
37
38
39 def patch():
40 import aiohttp_jinja2
41
42 if getattr(aiohttp_jinja2, "_datadog_patch", False):
43 return
44
45 _patch(aiohttp_jinja2)
46
47 setattr(aiohttp_jinja2, "_datadog_patch", True)
48
49
50 def _unpatch(aiohttp_jinja2):
51 unwrap(aiohttp_jinja2, "render_template")
52
53
54 def unpatch():
55 import aiohttp_jinja2
56
57 if not getattr(aiohttp_jinja2, "_datadog_patch", False):
58 return
59
60 _unpatch(aiohttp_jinja2)
61
62 setattr(aiohttp_jinja2, "_datadog_patch", False)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/aiohttp_jinja2/patch.py b/ddtrace/contrib/aiohttp_jinja2/patch.py
--- a/ddtrace/contrib/aiohttp_jinja2/patch.py
+++ b/ddtrace/contrib/aiohttp_jinja2/patch.py
@@ -20,7 +20,10 @@
# render_template(template_name, request, context, *, app_key=APP_KEY, encoding='utf-8')
template_name = get_argument_value(args, kwargs, 0, "template_name")
request = get_argument_value(args, kwargs, 1, "request")
- env = aiohttp_jinja2.get_env(request.app)
+ get_env_kwargs = {}
+ if "app_key" in kwargs:
+ get_env_kwargs["app_key"] = kwargs["app_key"]
+ env = aiohttp_jinja2.get_env(request.app, **get_env_kwargs)
# the prefix is available only on PackageLoader
template_prefix = getattr(env.loader, "package_path", "")
| {"golden_diff": "diff --git a/ddtrace/contrib/aiohttp_jinja2/patch.py b/ddtrace/contrib/aiohttp_jinja2/patch.py\n--- a/ddtrace/contrib/aiohttp_jinja2/patch.py\n+++ b/ddtrace/contrib/aiohttp_jinja2/patch.py\n@@ -20,7 +20,10 @@\n # render_template(template_name, request, context, *, app_key=APP_KEY, encoding='utf-8')\n template_name = get_argument_value(args, kwargs, 0, \"template_name\")\n request = get_argument_value(args, kwargs, 1, \"request\")\n- env = aiohttp_jinja2.get_env(request.app)\n+ get_env_kwargs = {}\n+ if \"app_key\" in kwargs:\n+ get_env_kwargs[\"app_key\"] = kwargs[\"app_key\"]\n+ env = aiohttp_jinja2.get_env(request.app, **get_env_kwargs)\n \n # the prefix is available only on PackageLoader\n template_prefix = getattr(env.loader, \"package_path\", \"\")\n", "issue": "app_key not passed to aiohttp_jinja2 \nWhen using aiohttp_admin the app_key value for the templating module differs from the default one.\r\n\r\nThis causes an error executing:\r\nhttps://github.com/DataDog/dd-trace-py/blob/ec191a4a71ae71017b70d26111bba4489e617ae5/ddtrace/contrib/aiohttp/template.py#L21\r\n\r\nAs far as I understand this would solve the problem.\r\n`env = aiohttp_jinja2.get_env(request.app, app_key=kwargs[\"app_key\"])`\n", "before_files": [{"content": "from ddtrace import Pin\nfrom ddtrace import config\n\nfrom ...ext import SpanTypes\nfrom ...internal.utils import get_argument_value\nfrom ..trace_utils import unwrap\nfrom ..trace_utils import with_traced_module\nfrom ..trace_utils import wrap\n\n\nconfig._add(\n \"aiohttp_jinja2\",\n dict(),\n)\n\n\n@with_traced_module\ndef traced_render_template(aiohttp_jinja2, pin, func, instance, args, kwargs):\n # original signature:\n # render_template(template_name, request, context, *, app_key=APP_KEY, encoding='utf-8')\n template_name = get_argument_value(args, kwargs, 0, \"template_name\")\n request = get_argument_value(args, kwargs, 1, \"request\")\n env = aiohttp_jinja2.get_env(request.app)\n\n # the prefix is available only on PackageLoader\n template_prefix = getattr(env.loader, \"package_path\", \"\")\n template_meta = \"%s/%s\" % (template_prefix, template_name)\n\n with pin.tracer.trace(\"aiohttp.template\", span_type=SpanTypes.TEMPLATE) as span:\n span.set_tag(\"aiohttp.template\", template_meta)\n return func(*args, **kwargs)\n\n\ndef _patch(aiohttp_jinja2):\n Pin().onto(aiohttp_jinja2)\n wrap(\"aiohttp_jinja2\", \"render_template\", traced_render_template(aiohttp_jinja2))\n\n\ndef patch():\n import aiohttp_jinja2\n\n if getattr(aiohttp_jinja2, \"_datadog_patch\", False):\n return\n\n _patch(aiohttp_jinja2)\n\n setattr(aiohttp_jinja2, \"_datadog_patch\", True)\n\n\ndef _unpatch(aiohttp_jinja2):\n unwrap(aiohttp_jinja2, \"render_template\")\n\n\ndef unpatch():\n import aiohttp_jinja2\n\n if not getattr(aiohttp_jinja2, \"_datadog_patch\", False):\n return\n\n _unpatch(aiohttp_jinja2)\n\n setattr(aiohttp_jinja2, \"_datadog_patch\", False)\n", "path": "ddtrace/contrib/aiohttp_jinja2/patch.py"}], "after_files": [{"content": "from ddtrace import Pin\nfrom ddtrace import config\n\nfrom ...ext import SpanTypes\nfrom ...internal.utils import get_argument_value\nfrom ..trace_utils import unwrap\nfrom ..trace_utils import with_traced_module\nfrom ..trace_utils import wrap\n\n\nconfig._add(\n \"aiohttp_jinja2\",\n dict(),\n)\n\n\n@with_traced_module\ndef traced_render_template(aiohttp_jinja2, pin, func, instance, args, kwargs):\n # original signature:\n # render_template(template_name, request, context, *, app_key=APP_KEY, encoding='utf-8')\n 
template_name = get_argument_value(args, kwargs, 0, \"template_name\")\n request = get_argument_value(args, kwargs, 1, \"request\")\n get_env_kwargs = {}\n if \"app_key\" in kwargs:\n get_env_kwargs[\"app_key\"] = kwargs[\"app_key\"]\n env = aiohttp_jinja2.get_env(request.app, **get_env_kwargs)\n\n # the prefix is available only on PackageLoader\n template_prefix = getattr(env.loader, \"package_path\", \"\")\n template_meta = \"%s/%s\" % (template_prefix, template_name)\n\n with pin.tracer.trace(\"aiohttp.template\", span_type=SpanTypes.TEMPLATE) as span:\n span.set_tag(\"aiohttp.template\", template_meta)\n return func(*args, **kwargs)\n\n\ndef _patch(aiohttp_jinja2):\n Pin().onto(aiohttp_jinja2)\n wrap(\"aiohttp_jinja2\", \"render_template\", traced_render_template(aiohttp_jinja2))\n\n\ndef patch():\n import aiohttp_jinja2\n\n if getattr(aiohttp_jinja2, \"_datadog_patch\", False):\n return\n\n _patch(aiohttp_jinja2)\n\n setattr(aiohttp_jinja2, \"_datadog_patch\", True)\n\n\ndef _unpatch(aiohttp_jinja2):\n unwrap(aiohttp_jinja2, \"render_template\")\n\n\ndef unpatch():\n import aiohttp_jinja2\n\n if not getattr(aiohttp_jinja2, \"_datadog_patch\", False):\n return\n\n _unpatch(aiohttp_jinja2)\n\n setattr(aiohttp_jinja2, \"_datadog_patch\", False)\n", "path": "ddtrace/contrib/aiohttp_jinja2/patch.py"}]} | 985 | 225 |
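For context, a sketch of the caller-side pattern the patch above fixes: `render_template` invoked with a non-default `app_key` (as aiohttp_admin does) no longer breaks the traced wrapper, because the key is now forwarded to `get_env`. The key value, template name and context below are placeholders.

```python
# Sketch only: rendering with an explicit app_key, which the traced wrapper
# now passes through to aiohttp_jinja2.get_env(). Values are placeholders.
import aiohttp_jinja2

CUSTOM_APP_KEY = "admin_templates"  # hypothetical non-default key

async def index(request):
    return aiohttp_jinja2.render_template(
        "index.html", request, {"title": "admin"}, app_key=CUSTOM_APP_KEY
    )
```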
gh_patches_debug_4173 | rasdani/github-patches | git_diff | statsmodels__statsmodels-779 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OLS residuals returned as Pandas series when endog and exog are Pandas series
When I fit OLS model with pandas series and try to do a Durbin-Watson test, the function returns nan. In that case the RegressionResult.resid attribute is a pandas series, rather than a numpy array- converting to a numpy array explicitly, the durbin_watson function works like a charm.
My instinct is this is something that should probably be changed in OLS (to guarantee the type of resid), hence the title of the issue, but I leave that to the judgement of our fearless leaders.
``` python
import statsmodels.api as sm
import numpy as np
from pandas import DataFrame
x=np.arange(1,11)
y=[num+np.random.normal() for num in np.arange(0,5, .5)]
linmod=sm.OLS(y, x).fit()
dw=sm.stats.stattools.durbin_watson(linmod.resid)
data=DataFrame({'x':x, 'y':y}, index=x)
linmod_pandas=sm.OLS(data.y, data.x).fit()
dw_pandas=sm.stats.stattools.durbin_watson(linmod_pandas.resid)
dw_pandas1=sm.stats.stattools.durbin_watson(array(linmod_pandas.resid))
print type(linmod_pandas.resid)
print dw, dw_pandas, dw_pandas1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/stats/stattools.py`
Content:
```
1 """
2 Statistical tests to be used in conjunction with the models
3
4 Notes
5 -----
6 These functions haven't been formally tested.
7 """
8
9 from scipy import stats
10 import numpy as np
11
12
13 #TODO: these are pretty straightforward but they should be tested
14 def durbin_watson(resids):
15 """
16 Calculates the Durbin-Watson statistic
17
18 Parameters
19 -----------
20 resids : array-like
21
22 Returns
23 --------
24 Durbin Watson statistic. This is defined as
25 sum_(t=2)^(T)((e_t - e_(t-1))^(2))/sum_(t=1)^(T)e_t^(2)
26 """
27 diff_resids = np.diff(resids, 1)
28 dw = np.dot(diff_resids, diff_resids) / np.dot(resids, resids)
29 return dw
30
31 def omni_normtest(resids, axis=0):
32 """
33 Omnibus test for normality
34
35 Parameters
36 -----------
37 resid : array-like
38 axis : int, optional
39 Default is 0
40
41 Returns
42 -------
43 Chi^2 score, two-tail probability
44 """
45 #TODO: change to exception in summary branch and catch in summary()
46 #behavior changed between scipy 0.9 and 0.10
47 resids = np.asarray(resids)
48 n = resids.shape[axis]
49 if n < 8:
50 return np.nan, np.nan
51 return_shape = list(resids.shape)
52 del return_shape[axis]
53 return np.nan * np.zeros(return_shape), np.nan * np.zeros(return_shape)
54 raise ValueError(
55 "skewtest is not valid with less than 8 observations; %i samples"
56 " were given." % int(n))
57
58 return stats.normaltest(resids, axis=axis)
59
60 def jarque_bera(resids):
61 """
62 Calculate residual skewness, kurtosis, and do the JB test for normality
63
64 Parameters
65 -----------
66 resids : array-like
67
68 Returns
69 -------
70 JB, JBpv, skew, kurtosis
71
72 JB = n/6*(S^2 + (K-3)^2/4)
73
74 JBpv is the Chi^2 two-tail probability value
75
76 skew is the measure of skewness
77
78 kurtosis is the measure of kurtosis
79
80 """
81 resids = np.asarray(resids)
82 # Calculate residual skewness and kurtosis
83 skew = stats.skew(resids)
84 kurtosis = 3 + stats.kurtosis(resids)
85
86 # Calculate the Jarque-Bera test for normality
87 JB = (resids.shape[0] / 6.) * (skew**2 + (1 / 4.) * (kurtosis-3)**2)
88 JBpv = stats.chi2.sf(JB,2)
89
90 return JB, JBpv, skew, kurtosis
91
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/statsmodels/stats/stattools.py b/statsmodels/stats/stattools.py
--- a/statsmodels/stats/stattools.py
+++ b/statsmodels/stats/stattools.py
@@ -24,6 +24,7 @@
Durbin Watson statistic. This is defined as
sum_(t=2)^(T)((e_t - e_(t-1))^(2))/sum_(t=1)^(T)e_t^(2)
"""
+ resids=np.asarray(resids)
diff_resids = np.diff(resids, 1)
dw = np.dot(diff_resids, diff_resids) / np.dot(resids, resids)
return dw
| {"golden_diff": "diff --git a/statsmodels/stats/stattools.py b/statsmodels/stats/stattools.py\n--- a/statsmodels/stats/stattools.py\n+++ b/statsmodels/stats/stattools.py\n@@ -24,6 +24,7 @@\n Durbin Watson statistic. This is defined as\n sum_(t=2)^(T)((e_t - e_(t-1))^(2))/sum_(t=1)^(T)e_t^(2)\n \"\"\"\n+ resids=np.asarray(resids)\n diff_resids = np.diff(resids, 1)\n dw = np.dot(diff_resids, diff_resids) / np.dot(resids, resids)\n return dw\n", "issue": "OLS residuals returned as Pandas series when endog and exog are Pandas series\nWhen I fit OLS model with pandas series and try to do a Durbin-Watson test, the function returns nan. In that case the RegressionResult.resid attribute is a pandas series, rather than a numpy array- converting to a numpy array explicitly, the durbin_watson function works like a charm. \n\nMy instinct is this is something that should probably be changed in OLS (to guarantee the type of resid), hence the title of the issue, but I leave that to the judgement of our fearless leaders.\n\n``` python\nimport statsmodels.api as sm\nimport numpy as np\nfrom pandas import DataFrame\nx=np.arange(1,11)\ny=[num+np.random.normal() for num in np.arange(0,5, .5)]\nlinmod=sm.OLS(y, x).fit()\ndw=sm.stats.stattools.durbin_watson(linmod.resid)\ndata=DataFrame({'x':x, 'y':y}, index=x)\nlinmod_pandas=sm.OLS(data.y, data.x).fit()\ndw_pandas=sm.stats.stattools.durbin_watson(linmod_pandas.resid)\ndw_pandas1=sm.stats.stattools.durbin_watson(array(linmod_pandas.resid))\nprint type(linmod_pandas.resid)\nprint dw, dw_pandas, dw_pandas1\n```\n\n", "before_files": [{"content": "\"\"\"\nStatistical tests to be used in conjunction with the models\n\nNotes\n-----\nThese functions haven't been formally tested.\n\"\"\"\n\nfrom scipy import stats\nimport numpy as np\n\n\n#TODO: these are pretty straightforward but they should be tested\ndef durbin_watson(resids):\n \"\"\"\n Calculates the Durbin-Watson statistic\n\n Parameters\n -----------\n resids : array-like\n\n Returns\n --------\n Durbin Watson statistic. 
This is defined as\n sum_(t=2)^(T)((e_t - e_(t-1))^(2))/sum_(t=1)^(T)e_t^(2)\n \"\"\"\n diff_resids = np.diff(resids, 1)\n dw = np.dot(diff_resids, diff_resids) / np.dot(resids, resids)\n return dw\n\ndef omni_normtest(resids, axis=0):\n \"\"\"\n Omnibus test for normality\n\n Parameters\n -----------\n resid : array-like\n axis : int, optional\n Default is 0\n\n Returns\n -------\n Chi^2 score, two-tail probability\n \"\"\"\n #TODO: change to exception in summary branch and catch in summary()\n #behavior changed between scipy 0.9 and 0.10\n resids = np.asarray(resids)\n n = resids.shape[axis]\n if n < 8:\n return np.nan, np.nan\n return_shape = list(resids.shape)\n del return_shape[axis]\n return np.nan * np.zeros(return_shape), np.nan * np.zeros(return_shape)\n raise ValueError(\n \"skewtest is not valid with less than 8 observations; %i samples\"\n \" were given.\" % int(n))\n\n return stats.normaltest(resids, axis=axis)\n\ndef jarque_bera(resids):\n \"\"\"\n Calculate residual skewness, kurtosis, and do the JB test for normality\n\n Parameters\n -----------\n resids : array-like\n\n Returns\n -------\n JB, JBpv, skew, kurtosis\n\n JB = n/6*(S^2 + (K-3)^2/4)\n\n JBpv is the Chi^2 two-tail probability value\n\n skew is the measure of skewness\n\n kurtosis is the measure of kurtosis\n\n \"\"\"\n resids = np.asarray(resids)\n # Calculate residual skewness and kurtosis\n skew = stats.skew(resids)\n kurtosis = 3 + stats.kurtosis(resids)\n\n # Calculate the Jarque-Bera test for normality\n JB = (resids.shape[0] / 6.) * (skew**2 + (1 / 4.) * (kurtosis-3)**2)\n JBpv = stats.chi2.sf(JB,2)\n\n return JB, JBpv, skew, kurtosis\n\n", "path": "statsmodels/stats/stattools.py"}], "after_files": [{"content": "\"\"\"\nStatistical tests to be used in conjunction with the models\n\nNotes\n-----\nThese functions haven't been formally tested.\n\"\"\"\n\nfrom scipy import stats\nimport numpy as np\n\n\n#TODO: these are pretty straightforward but they should be tested\ndef durbin_watson(resids):\n \"\"\"\n Calculates the Durbin-Watson statistic\n\n Parameters\n -----------\n resids : array-like\n\n Returns\n --------\n Durbin Watson statistic. 
This is defined as\n sum_(t=2)^(T)((e_t - e_(t-1))^(2))/sum_(t=1)^(T)e_t^(2)\n \"\"\"\n resids=np.asarray(resids)\n diff_resids = np.diff(resids, 1)\n dw = np.dot(diff_resids, diff_resids) / np.dot(resids, resids)\n return dw\n\ndef omni_normtest(resids, axis=0):\n \"\"\"\n Omnibus test for normality\n\n Parameters\n -----------\n resid : array-like\n axis : int, optional\n Default is 0\n\n Returns\n -------\n Chi^2 score, two-tail probability\n \"\"\"\n #TODO: change to exception in summary branch and catch in summary()\n #behavior changed between scipy 0.9 and 0.10\n resids = np.asarray(resids)\n n = resids.shape[axis]\n if n < 8:\n return np.nan, np.nan\n return_shape = list(resids.shape)\n del return_shape[axis]\n return np.nan * np.zeros(return_shape), np.nan * np.zeros(return_shape)\n raise ValueError(\n \"skewtest is not valid with less than 8 observations; %i samples\"\n \" were given.\" % int(n))\n\n return stats.normaltest(resids, axis=axis)\n\ndef jarque_bera(resids):\n \"\"\"\n Calculate residual skewness, kurtosis, and do the JB test for normality\n\n Parameters\n -----------\n resids : array-like\n\n Returns\n -------\n JB, JBpv, skew, kurtosis\n\n JB = n/6*(S^2 + (K-3)^2/4)\n\n JBpv is the Chi^2 two-tail probability value\n\n skew is the measure of skewness\n\n kurtosis is the measure of kurtosis\n\n \"\"\"\n resids = np.asarray(resids)\n # Calculate residual skewness and kurtosis\n skew = stats.skew(resids)\n kurtosis = 3 + stats.kurtosis(resids)\n\n # Calculate the Jarque-Bera test for normality\n JB = (resids.shape[0] / 6.) * (skew**2 + (1 / 4.) * (kurtosis-3)**2)\n JBpv = stats.chi2.sf(JB,2)\n\n return JB, JBpv, skew, kurtosis\n\n", "path": "statsmodels/stats/stattools.py"}]} | 1,380 | 145 |
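Mirroring the reproduction script from the issue above: once `durbin_watson` coerces its input with `np.asarray`, a pandas `Series` of residuals can be passed directly. The residuals below are synthetic stand-ins for `linmod_pandas.resid`.

```python
# Sketch only: durbin_watson now accepts a pandas Series without manual
# conversion to a numpy array.
import numpy as np
from pandas import Series
from statsmodels.stats.stattools import durbin_watson

resid = Series(np.random.normal(size=10), index=np.arange(1, 11))
print(durbin_watson(resid))  # a finite statistic, not nan
```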
gh_patches_debug_15028 | rasdani/github-patches | git_diff | Pyomo__pyomo-1521 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate the pyomo install-extras subcommand
The conda pyomo.extras package supports this functionality more robustly. We should not duplicate this logic in separate places.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyomo/scripting/plugins/extras.py`
Content:
```
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
5 # Under the terms of Contract DE-NA0003525 with National Technology and
6 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
7 # rights in this software.
8 # This software is distributed under the 3-clause BSD License.
9 # ___________________________________________________________________________
10
11 import six
12 from pyomo.scripting.pyomo_parser import add_subparser, CustomHelpFormatter
13
14 def get_packages():
15 packages = [
16 'sympy',
17 'xlrd',
18 'openpyxl',
19 #('suds-jurko', 'suds'),
20 ('PyYAML', 'yaml'),
21 'pypyodbc',
22 'pymysql',
23 #'openopt',
24 #'FuncDesigner',
25 #'DerApproximator',
26 ('ipython[notebook]', 'IPython'),
27 ('pyro4', 'Pyro4'),
28 ]
29 if six.PY2:
30 packages.append(('pyro','Pyro'))
31 return packages
32
33 def install_extras(args=[], quiet=False):
34 #
35 # Verify that pip is installed
36 #
37 try:
38 import pip
39 pip_version = pip.__version__.split('.')
40 for i,s in enumerate(pip_version):
41 try:
42 pip_version[i] = int(s)
43 except:
44 pass
45 pip_version = tuple(pip_version)
46 except ImportError:
47 print("You must have 'pip' installed to run this script.")
48 raise SystemExit
49
50 cmd = ['--disable-pip-version-check', 'install','--upgrade']
51 # Disable the PIP download cache
52 if pip_version[0] >= 6:
53 cmd.append('--no-cache-dir')
54 else:
55 cmd.append('--download-cache')
56 cmd.append('')
57
58 if not quiet:
59 print(' ')
60 print('-'*60)
61 print("Installation Output Logs")
62 print(" (A summary will be printed below)")
63 print('-'*60)
64 print(' ')
65
66 results = {}
67 for package in get_packages():
68 if type(package) is tuple:
69 package, pkg_import = package
70 else:
71 pkg_import = package
72 try:
73 # Allow the user to provide extra options
74 pip.main(cmd + args + [package])
75 __import__(pkg_import)
76 results[package] = True
77 except:
78 results[package] = False
79 try:
80 pip.logger.consumers = []
81 except AttributeError:
82 # old pip versions (prior to 6.0~104^2)
83 pip.log.consumers = []
84
85 if not quiet:
86 print(' ')
87 print(' ')
88 print('-'*60)
89 print("Installation Summary")
90 print('-'*60)
91 print(' ')
92 for package, result in sorted(six.iteritems(results)):
93 if result:
94 print("YES %s" % package)
95 else:
96 print("NO %s" % package)
97
98
99 def pyomo_subcommand(options):
100 return install_extras(options.args, quiet=options.quiet)
101
102
103 _parser = add_subparser(
104 'install-extras',
105 func=pyomo_subcommand,
106 help='Install "extra" packages that Pyomo can leverage.',
107 description="""
108 This pyomo subcommand uses PIP to install optional third-party Python
109 packages that Pyomo could leverage from PyPI. The installation of some
110 packages may fail, but this subcommand ignore these failures and
111 provides a summary describing which packages were installed.
112 """,
113 epilog="""
114 Since pip options begin with a dash, the --pip-args option can only be
115 used with the equals syntax. --pip-args may appear multiple times on
116 the command line. For example:\n\n
117 pyomo install-extras --pip-args="--upgrade"
118 """,
119 formatter_class=CustomHelpFormatter,
120 )
121
122 _parser.add_argument(
123 '-q', '--quiet',
124 action='store_true',
125 dest='quiet',
126 default=False,
127 help="Suppress some terminal output",
128 )
129 _parser.add_argument(
130 "--pip-args",
131 dest="args",
132 action="append",
133 help=("Arguments that are passed to the 'pip' command when "
134 "installing packages"),
135 )
136
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyomo/scripting/plugins/extras.py b/pyomo/scripting/plugins/extras.py
--- a/pyomo/scripting/plugins/extras.py
+++ b/pyomo/scripting/plugins/extras.py
@@ -11,6 +11,8 @@
import six
from pyomo.scripting.pyomo_parser import add_subparser, CustomHelpFormatter
+from pyomo.common.deprecation import deprecated
+
def get_packages():
packages = [
'sympy',
@@ -30,6 +32,11 @@
packages.append(('pyro','Pyro'))
return packages
+@deprecated(
+ "Use of the pyomo install-extras is deprecated."
+ "The current recommended course of action is to manually install "
+ "optional dependencies as needed.",
+ version='TBD')
def install_extras(args=[], quiet=False):
#
# Verify that pip is installed
| {"golden_diff": "diff --git a/pyomo/scripting/plugins/extras.py b/pyomo/scripting/plugins/extras.py\n--- a/pyomo/scripting/plugins/extras.py\n+++ b/pyomo/scripting/plugins/extras.py\n@@ -11,6 +11,8 @@\n import six\n from pyomo.scripting.pyomo_parser import add_subparser, CustomHelpFormatter\n \n+from pyomo.common.deprecation import deprecated\n+\n def get_packages():\n packages = [\n 'sympy', \n@@ -30,6 +32,11 @@\n packages.append(('pyro','Pyro'))\n return packages\n \n+@deprecated(\n+ \"Use of the pyomo install-extras is deprecated.\"\n+ \"The current recommended course of action is to manually install \"\n+ \"optional dependencies as needed.\",\n+ version='TBD')\n def install_extras(args=[], quiet=False):\n #\n # Verify that pip is installed\n", "issue": "Deprecate the pyomo install-extras subcommand\nThe conda pyomo.extras package supports this functionality more robustly. We should not duplicate this logic in separate places.\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and \n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain \n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nimport six\nfrom pyomo.scripting.pyomo_parser import add_subparser, CustomHelpFormatter\n\ndef get_packages():\n packages = [\n 'sympy', \n 'xlrd', \n 'openpyxl', \n #('suds-jurko', 'suds'),\n ('PyYAML', 'yaml'),\n 'pypyodbc', \n 'pymysql', \n #'openopt', \n #'FuncDesigner', \n #'DerApproximator', \n ('ipython[notebook]', 'IPython'),\n ('pyro4', 'Pyro4'),\n ]\n if six.PY2:\n packages.append(('pyro','Pyro'))\n return packages\n\ndef install_extras(args=[], quiet=False):\n #\n # Verify that pip is installed\n #\n try:\n import pip\n pip_version = pip.__version__.split('.')\n for i,s in enumerate(pip_version):\n try:\n pip_version[i] = int(s)\n except:\n pass\n pip_version = tuple(pip_version)\n except ImportError:\n print(\"You must have 'pip' installed to run this script.\")\n raise SystemExit\n\n cmd = ['--disable-pip-version-check', 'install','--upgrade']\n # Disable the PIP download cache\n if pip_version[0] >= 6:\n cmd.append('--no-cache-dir')\n else:\n cmd.append('--download-cache')\n cmd.append('')\n\n if not quiet:\n print(' ')\n print('-'*60)\n print(\"Installation Output Logs\")\n print(\" (A summary will be printed below)\")\n print('-'*60)\n print(' ')\n\n results = {}\n for package in get_packages():\n if type(package) is tuple:\n package, pkg_import = package\n else:\n pkg_import = package\n try:\n # Allow the user to provide extra options\n pip.main(cmd + args + [package])\n __import__(pkg_import)\n results[package] = True\n except:\n results[package] = False\n try:\n pip.logger.consumers = []\n except AttributeError:\n # old pip versions (prior to 6.0~104^2)\n pip.log.consumers = []\n\n if not quiet:\n print(' ')\n print(' ')\n print('-'*60)\n print(\"Installation Summary\")\n print('-'*60)\n print(' ')\n for package, result in sorted(six.iteritems(results)):\n if result:\n print(\"YES %s\" % package)\n else:\n print(\"NO %s\" % package)\n\n\ndef pyomo_subcommand(options):\n return install_extras(options.args, quiet=options.quiet)\n\n\n_parser = add_subparser(\n 'install-extras',\n 
func=pyomo_subcommand,\n help='Install \"extra\" packages that Pyomo can leverage.',\n description=\"\"\"\nThis pyomo subcommand uses PIP to install optional third-party Python\npackages that Pyomo could leverage from PyPI. The installation of some\npackages may fail, but this subcommand ignore these failures and\nprovides a summary describing which packages were installed.\n\"\"\",\n epilog=\"\"\"\nSince pip options begin with a dash, the --pip-args option can only be\nused with the equals syntax. --pip-args may appear multiple times on\nthe command line. For example:\\n\\n\n pyomo install-extras --pip-args=\"--upgrade\"\n\"\"\",\n formatter_class=CustomHelpFormatter,\n)\n\n_parser.add_argument(\n '-q', '--quiet',\n action='store_true',\n dest='quiet',\n default=False,\n help=\"Suppress some terminal output\",\n)\n_parser.add_argument(\n \"--pip-args\",\n dest=\"args\",\n action=\"append\",\n help=(\"Arguments that are passed to the 'pip' command when \"\n \"installing packages\"),\n)\n\n", "path": "pyomo/scripting/plugins/extras.py"}], "after_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and \n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain \n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nimport six\nfrom pyomo.scripting.pyomo_parser import add_subparser, CustomHelpFormatter\n\nfrom pyomo.common.deprecation import deprecated\n\ndef get_packages():\n packages = [\n 'sympy', \n 'xlrd', \n 'openpyxl', \n #('suds-jurko', 'suds'),\n ('PyYAML', 'yaml'),\n 'pypyodbc', \n 'pymysql', \n #'openopt', \n #'FuncDesigner', \n #'DerApproximator', \n ('ipython[notebook]', 'IPython'),\n ('pyro4', 'Pyro4'),\n ]\n if six.PY2:\n packages.append(('pyro','Pyro'))\n return packages\n\n@deprecated(\n \"Use of the pyomo install-extras is deprecated.\"\n \"The current recommended course of action is to manually install \"\n \"optional dependencies as needed.\",\n version='TBD')\ndef install_extras(args=[], quiet=False):\n #\n # Verify that pip is installed\n #\n try:\n import pip\n pip_version = pip.__version__.split('.')\n for i,s in enumerate(pip_version):\n try:\n pip_version[i] = int(s)\n except:\n pass\n pip_version = tuple(pip_version)\n except ImportError:\n print(\"You must have 'pip' installed to run this script.\")\n raise SystemExit\n\n cmd = ['--disable-pip-version-check', 'install','--upgrade']\n # Disable the PIP download cache\n if pip_version[0] >= 6:\n cmd.append('--no-cache-dir')\n else:\n cmd.append('--download-cache')\n cmd.append('')\n\n if not quiet:\n print(' ')\n print('-'*60)\n print(\"Installation Output Logs\")\n print(\" (A summary will be printed below)\")\n print('-'*60)\n print(' ')\n\n results = {}\n for package in get_packages():\n if type(package) is tuple:\n package, pkg_import = package\n else:\n pkg_import = package\n try:\n # Allow the user to provide extra options\n pip.main(cmd + args + [package])\n __import__(pkg_import)\n results[package] = True\n except:\n results[package] = False\n try:\n pip.logger.consumers = []\n except AttributeError:\n # old pip versions (prior to 6.0~104^2)\n pip.log.consumers = []\n\n if not quiet:\n print(' ')\n print(' ')\n print('-'*60)\n 
print(\"Installation Summary\")\n print('-'*60)\n print(' ')\n for package, result in sorted(six.iteritems(results)):\n if result:\n print(\"YES %s\" % package)\n else:\n print(\"NO %s\" % package)\n\n\ndef pyomo_subcommand(options):\n return install_extras(options.args, quiet=options.quiet)\n\n\n_parser = add_subparser(\n 'install-extras',\n func=pyomo_subcommand,\n help='Install \"extra\" packages that Pyomo can leverage.',\n description=\"\"\"\nThis pyomo subcommand uses PIP to install optional third-party Python\npackages that Pyomo could leverage from PyPI. The installation of some\npackages may fail, but this subcommand ignore these failures and\nprovides a summary describing which packages were installed.\n\"\"\",\n epilog=\"\"\"\nSince pip options begin with a dash, the --pip-args option can only be\nused with the equals syntax. --pip-args may appear multiple times on\nthe command line. For example:\\n\\n\n pyomo install-extras --pip-args=\"--upgrade\"\n\"\"\",\n formatter_class=CustomHelpFormatter,\n)\n\n_parser.add_argument(\n '-q', '--quiet',\n action='store_true',\n dest='quiet',\n default=False,\n help=\"Suppress some terminal output\",\n)\n_parser.add_argument(\n \"--pip-args\",\n dest=\"args\",\n action=\"append\",\n help=(\"Arguments that are passed to the 'pip' command when \"\n \"installing packages\"),\n)\n\n", "path": "pyomo/scripting/plugins/extras.py"}]} | 1,546 | 195 |
gh_patches_debug_39322 | rasdani/github-patches | git_diff | carpentries__amy-583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add event organizer info to the API
Compute Canada would like to be able to use the API to pull all the events it is hosting and then use this information to populate website.
Might be nice to have the EventBrite IDs there too.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/serializers.py`
Content:
```
1 from rest_framework import serializers
2
3 from workshops.models import Badge, Airport, Person, Event
4
5
6 class PersonUsernameSerializer(serializers.ModelSerializer):
7 name = serializers.CharField(source='get_full_name')
8 user = serializers.CharField(source='username')
9
10 class Meta:
11 model = Person
12 fields = ('name', 'user', )
13
14
15 class ExportBadgesSerializer(serializers.ModelSerializer):
16 persons = PersonUsernameSerializer(many=True, source='person_set')
17
18 class Meta:
19 model = Badge
20 fields = ('name', 'persons')
21
22
23 class ExportInstructorLocationsSerializer(serializers.ModelSerializer):
24 name = serializers.CharField(source='fullname')
25 instructors = PersonUsernameSerializer(many=True, source='person_set')
26
27 class Meta:
28 model = Airport
29 fields = ('name', 'latitude', 'longitude', 'instructors', 'country')
30
31
32 class EventSerializer(serializers.ModelSerializer):
33 humandate = serializers.SerializerMethodField()
34 country = serializers.CharField()
35 start = serializers.DateField(format=None)
36 end = serializers.DateField(format=None)
37 url = serializers.URLField(source='website_url')
38
39 def get_humandate(self, obj):
40 """Render start and end dates as human-readable short date."""
41 return EventSerializer.human_readable_date(obj.start, obj.end)
42
43 @staticmethod
44 def human_readable_date(date1, date2):
45 """Render start and end dates as human-readable short date."""
46 if date1 and not date2:
47 return '{:%b %d, %Y}-???'.format(date1)
48 elif date2 and not date1:
49 return '???-{:%b %d, %Y}'.format(date2)
50 elif not date2 and not date1:
51 return '???-???'
52
53 if date1.year == date2.year:
54 if date1.month == date2.month:
55 return '{:%b %d}-{:%d, %Y}'.format(date1, date2)
56 else:
57 return '{:%b %d}-{:%b %d, %Y}'.format(date1, date2)
58 else:
59 return '{:%b %d, %Y}-{:%b %d, %Y}'.format(date1, date2)
60
61 class Meta:
62 model = Event
63 fields = (
64 'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',
65 'venue', 'address', 'latitude', 'longitude',
66 )
67
```
Path: `api/views.py`
Content:
```
1 from django.db.models import Q
2 from rest_framework.generics import ListAPIView
3 from rest_framework.permissions import IsAuthenticatedOrReadOnly
4 from rest_framework.response import Response
5 from rest_framework.reverse import reverse
6 from rest_framework.views import APIView
7
8 from workshops.models import Badge, Airport, Event
9
10 from .serializers import (
11 ExportBadgesSerializer,
12 ExportInstructorLocationsSerializer,
13 EventSerializer,
14 )
15
16
17 class ApiRoot(APIView):
18 def get(self, request, format=None):
19 return Response({
20 'export-badges': reverse('api:export-badges', request=request,
21 format=format),
22 'export-instructors': reverse('api:export-instructors',
23 request=request, format=format),
24 'events-published': reverse('api:events-published',
25 request=request, format=format),
26 })
27
28
29 class ExportBadgesView(ListAPIView):
30 """List all badges and people who have them."""
31 permission_classes = (IsAuthenticatedOrReadOnly, )
32 paginator = None # disable pagination
33
34 queryset = Badge.objects.prefetch_related('person_set')
35 serializer_class = ExportBadgesSerializer
36
37
38 class ExportInstructorLocationsView(ListAPIView):
39 """List all airports and instructors located near them."""
40 permission_classes = (IsAuthenticatedOrReadOnly, )
41 paginator = None # disable pagination
42
43 queryset = Airport.objects.exclude(person=None) \
44 .prefetch_related('person_set')
45 serializer_class = ExportInstructorLocationsSerializer
46
47
48 class PublishedEvents(ListAPIView):
49 # only events that have both a starting date and a URL
50 permission_classes = (IsAuthenticatedOrReadOnly, )
51 paginator = None # disable pagination
52
53 serializer_class = EventSerializer
54 queryset = Event.objects.published_events()
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api/serializers.py b/api/serializers.py
--- a/api/serializers.py
+++ b/api/serializers.py
@@ -35,6 +35,7 @@
start = serializers.DateField(format=None)
end = serializers.DateField(format=None)
url = serializers.URLField(source='website_url')
+ eventbrite_id = serializers.CharField(source='reg_key')
def get_humandate(self, obj):
"""Render start and end dates as human-readable short date."""
@@ -62,5 +63,5 @@
model = Event
fields = (
'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',
- 'venue', 'address', 'latitude', 'longitude',
+ 'venue', 'address', 'latitude', 'longitude', 'eventbrite_id',
)
diff --git a/api/views.py b/api/views.py
--- a/api/views.py
+++ b/api/views.py
@@ -1,5 +1,6 @@
from django.db.models import Q
from rest_framework.generics import ListAPIView
+from rest_framework.metadata import SimpleMetadata
from rest_framework.permissions import IsAuthenticatedOrReadOnly
from rest_framework.response import Response
from rest_framework.reverse import reverse
@@ -14,6 +15,21 @@
)
+class QueryMetadata(SimpleMetadata):
+ """Additionally include info about query parameters."""
+
+ def determine_metadata(self, request, view):
+ print('doing something')
+ data = super().determine_metadata(request, view)
+
+ try:
+ data['query_params'] = view.get_query_params_description()
+ except AttributeError:
+ pass
+
+ return data
+
+
class ApiRoot(APIView):
def get(self, request, format=None):
return Response({
@@ -46,9 +62,34 @@
class PublishedEvents(ListAPIView):
+ """List published events."""
+
# only events that have both a starting date and a URL
permission_classes = (IsAuthenticatedOrReadOnly, )
paginator = None # disable pagination
serializer_class = EventSerializer
- queryset = Event.objects.published_events()
+
+ metadata_class = QueryMetadata
+
+ def get_queryset(self):
+ """Optionally restrict the returned event set to events hosted by
+ specific host or administered by specific admin."""
+ queryset = Event.objects.published_events()
+
+ administrator = self.request.query_params.get('administrator', None)
+ if administrator is not None:
+ queryset = queryset.filter(administrator__pk=administrator)
+
+ host = self.request.query_params.get('host', None)
+ if host is not None:
+ queryset = queryset.filter(host__pk=host)
+
+ return queryset
+
+ def get_query_params_description(self):
+ return {
+ 'administrator': 'ID of the organization responsible for admin '
+ 'work on events.',
+ 'host': 'ID of the organization hosting the event.',
+ }
| {"golden_diff": "diff --git a/api/serializers.py b/api/serializers.py\n--- a/api/serializers.py\n+++ b/api/serializers.py\n@@ -35,6 +35,7 @@\n start = serializers.DateField(format=None)\n end = serializers.DateField(format=None)\n url = serializers.URLField(source='website_url')\n+ eventbrite_id = serializers.CharField(source='reg_key')\n \n def get_humandate(self, obj):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n@@ -62,5 +63,5 @@\n model = Event\n fields = (\n 'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',\n- 'venue', 'address', 'latitude', 'longitude',\n+ 'venue', 'address', 'latitude', 'longitude', 'eventbrite_id',\n )\ndiff --git a/api/views.py b/api/views.py\n--- a/api/views.py\n+++ b/api/views.py\n@@ -1,5 +1,6 @@\n from django.db.models import Q\n from rest_framework.generics import ListAPIView\n+from rest_framework.metadata import SimpleMetadata\n from rest_framework.permissions import IsAuthenticatedOrReadOnly\n from rest_framework.response import Response\n from rest_framework.reverse import reverse\n@@ -14,6 +15,21 @@\n )\n \n \n+class QueryMetadata(SimpleMetadata):\n+ \"\"\"Additionally include info about query parameters.\"\"\"\n+\n+ def determine_metadata(self, request, view):\n+ print('doing something')\n+ data = super().determine_metadata(request, view)\n+\n+ try:\n+ data['query_params'] = view.get_query_params_description()\n+ except AttributeError:\n+ pass\n+\n+ return data\n+\n+\n class ApiRoot(APIView):\n def get(self, request, format=None):\n return Response({\n@@ -46,9 +62,34 @@\n \n \n class PublishedEvents(ListAPIView):\n+ \"\"\"List published events.\"\"\"\n+\n # only events that have both a starting date and a URL\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n \n serializer_class = EventSerializer\n- queryset = Event.objects.published_events()\n+\n+ metadata_class = QueryMetadata\n+\n+ def get_queryset(self):\n+ \"\"\"Optionally restrict the returned event set to events hosted by\n+ specific host or administered by specific admin.\"\"\"\n+ queryset = Event.objects.published_events()\n+\n+ administrator = self.request.query_params.get('administrator', None)\n+ if administrator is not None:\n+ queryset = queryset.filter(administrator__pk=administrator)\n+\n+ host = self.request.query_params.get('host', None)\n+ if host is not None:\n+ queryset = queryset.filter(host__pk=host)\n+\n+ return queryset\n+\n+ def get_query_params_description(self):\n+ return {\n+ 'administrator': 'ID of the organization responsible for admin '\n+ 'work on events.',\n+ 'host': 'ID of the organization hosting the event.',\n+ }\n", "issue": "Add event organizer info to the API\nCompute Canada would like to be able to use the API to pull all the events it is hosting and then use this information to populate website.\n\nMight be nice to have the EventBrite IDs there too.\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom workshops.models import Badge, Airport, Person, Event\n\n\nclass PersonUsernameSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source='get_full_name')\n user = serializers.CharField(source='username')\n\n class Meta:\n model = Person\n fields = ('name', 'user', )\n\n\nclass ExportBadgesSerializer(serializers.ModelSerializer):\n persons = PersonUsernameSerializer(many=True, source='person_set')\n\n class Meta:\n model = Badge\n fields = ('name', 'persons')\n\n\nclass 
ExportInstructorLocationsSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source='fullname')\n instructors = PersonUsernameSerializer(many=True, source='person_set')\n\n class Meta:\n model = Airport\n fields = ('name', 'latitude', 'longitude', 'instructors', 'country')\n\n\nclass EventSerializer(serializers.ModelSerializer):\n humandate = serializers.SerializerMethodField()\n country = serializers.CharField()\n start = serializers.DateField(format=None)\n end = serializers.DateField(format=None)\n url = serializers.URLField(source='website_url')\n\n def get_humandate(self, obj):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n return EventSerializer.human_readable_date(obj.start, obj.end)\n\n @staticmethod\n def human_readable_date(date1, date2):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n if date1 and not date2:\n return '{:%b %d, %Y}-???'.format(date1)\n elif date2 and not date1:\n return '???-{:%b %d, %Y}'.format(date2)\n elif not date2 and not date1:\n return '???-???'\n\n if date1.year == date2.year:\n if date1.month == date2.month:\n return '{:%b %d}-{:%d, %Y}'.format(date1, date2)\n else:\n return '{:%b %d}-{:%b %d, %Y}'.format(date1, date2)\n else:\n return '{:%b %d, %Y}-{:%b %d, %Y}'.format(date1, date2)\n\n class Meta:\n model = Event\n fields = (\n 'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',\n 'venue', 'address', 'latitude', 'longitude',\n )\n", "path": "api/serializers.py"}, {"content": "from django.db.models import Q\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.permissions import IsAuthenticatedOrReadOnly\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\nfrom rest_framework.views import APIView\n\nfrom workshops.models import Badge, Airport, Event\n\nfrom .serializers import (\n ExportBadgesSerializer,\n ExportInstructorLocationsSerializer,\n EventSerializer,\n)\n\n\nclass ApiRoot(APIView):\n def get(self, request, format=None):\n return Response({\n 'export-badges': reverse('api:export-badges', request=request,\n format=format),\n 'export-instructors': reverse('api:export-instructors',\n request=request, format=format),\n 'events-published': reverse('api:events-published',\n request=request, format=format),\n })\n\n\nclass ExportBadgesView(ListAPIView):\n \"\"\"List all badges and people who have them.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Badge.objects.prefetch_related('person_set')\n serializer_class = ExportBadgesSerializer\n\n\nclass ExportInstructorLocationsView(ListAPIView):\n \"\"\"List all airports and instructors located near them.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Airport.objects.exclude(person=None) \\\n .prefetch_related('person_set')\n serializer_class = ExportInstructorLocationsSerializer\n\n\nclass PublishedEvents(ListAPIView):\n # only events that have both a starting date and a URL\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n serializer_class = EventSerializer\n queryset = Event.objects.published_events()\n", "path": "api/views.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom workshops.models import Badge, Airport, Person, Event\n\n\nclass PersonUsernameSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source='get_full_name')\n user = 
serializers.CharField(source='username')\n\n class Meta:\n model = Person\n fields = ('name', 'user', )\n\n\nclass ExportBadgesSerializer(serializers.ModelSerializer):\n persons = PersonUsernameSerializer(many=True, source='person_set')\n\n class Meta:\n model = Badge\n fields = ('name', 'persons')\n\n\nclass ExportInstructorLocationsSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source='fullname')\n instructors = PersonUsernameSerializer(many=True, source='person_set')\n\n class Meta:\n model = Airport\n fields = ('name', 'latitude', 'longitude', 'instructors', 'country')\n\n\nclass EventSerializer(serializers.ModelSerializer):\n humandate = serializers.SerializerMethodField()\n country = serializers.CharField()\n start = serializers.DateField(format=None)\n end = serializers.DateField(format=None)\n url = serializers.URLField(source='website_url')\n eventbrite_id = serializers.CharField(source='reg_key')\n\n def get_humandate(self, obj):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n return EventSerializer.human_readable_date(obj.start, obj.end)\n\n @staticmethod\n def human_readable_date(date1, date2):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n if date1 and not date2:\n return '{:%b %d, %Y}-???'.format(date1)\n elif date2 and not date1:\n return '???-{:%b %d, %Y}'.format(date2)\n elif not date2 and not date1:\n return '???-???'\n\n if date1.year == date2.year:\n if date1.month == date2.month:\n return '{:%b %d}-{:%d, %Y}'.format(date1, date2)\n else:\n return '{:%b %d}-{:%b %d, %Y}'.format(date1, date2)\n else:\n return '{:%b %d, %Y}-{:%b %d, %Y}'.format(date1, date2)\n\n class Meta:\n model = Event\n fields = (\n 'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',\n 'venue', 'address', 'latitude', 'longitude', 'eventbrite_id',\n )\n", "path": "api/serializers.py"}, {"content": "from django.db.models import Q\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.metadata import SimpleMetadata\nfrom rest_framework.permissions import IsAuthenticatedOrReadOnly\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\nfrom rest_framework.views import APIView\n\nfrom workshops.models import Badge, Airport, Event\n\nfrom .serializers import (\n ExportBadgesSerializer,\n ExportInstructorLocationsSerializer,\n EventSerializer,\n)\n\n\nclass QueryMetadata(SimpleMetadata):\n \"\"\"Additionally include info about query parameters.\"\"\"\n\n def determine_metadata(self, request, view):\n print('doing something')\n data = super().determine_metadata(request, view)\n\n try:\n data['query_params'] = view.get_query_params_description()\n except AttributeError:\n pass\n\n return data\n\n\nclass ApiRoot(APIView):\n def get(self, request, format=None):\n return Response({\n 'export-badges': reverse('api:export-badges', request=request,\n format=format),\n 'export-instructors': reverse('api:export-instructors',\n request=request, format=format),\n 'events-published': reverse('api:events-published',\n request=request, format=format),\n })\n\n\nclass ExportBadgesView(ListAPIView):\n \"\"\"List all badges and people who have them.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Badge.objects.prefetch_related('person_set')\n serializer_class = ExportBadgesSerializer\n\n\nclass ExportInstructorLocationsView(ListAPIView):\n \"\"\"List all airports and instructors located near them.\"\"\"\n 
permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Airport.objects.exclude(person=None) \\\n .prefetch_related('person_set')\n serializer_class = ExportInstructorLocationsSerializer\n\n\nclass PublishedEvents(ListAPIView):\n \"\"\"List published events.\"\"\"\n\n # only events that have both a starting date and a URL\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n serializer_class = EventSerializer\n\n metadata_class = QueryMetadata\n\n def get_queryset(self):\n \"\"\"Optionally restrict the returned event set to events hosted by\n specific host or administered by specific admin.\"\"\"\n queryset = Event.objects.published_events()\n\n administrator = self.request.query_params.get('administrator', None)\n if administrator is not None:\n queryset = queryset.filter(administrator__pk=administrator)\n\n host = self.request.query_params.get('host', None)\n if host is not None:\n queryset = queryset.filter(host__pk=host)\n\n return queryset\n\n def get_query_params_description(self):\n return {\n 'administrator': 'ID of the organization responsible for admin '\n 'work on events.',\n 'host': 'ID of the organization hosting the event.',\n }\n", "path": "api/views.py"}]} | 1,429 | 653 |
gh_patches_debug_7093 | rasdani/github-patches | git_diff | ckan__ckan-260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Recline does not preview datastore anymore
The new plugin does not evaluate `datastore_active`.
<!---
@huboard:{"order":247.0}
-->
Recline does not preview datastore anymore
The new plugin does not evaluate `datastore_active`.
<!---
@huboard:{"order":247.0}
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext/reclinepreview/plugin.py`
Content:
```
1 from logging import getLogger
2
3 import ckan.plugins as p
4 import ckan.plugins.toolkit as toolkit
5
6 log = getLogger(__name__)
7
8
9 class ReclinePreview(p.SingletonPlugin):
10 """This extension previews resources using recline
11
12 This extension implements two interfaces
13
14 - ``IConfigurer`` allows to modify the configuration
15 - ``IResourcePreview`` allows to add previews
16 """
17 p.implements(p.IConfigurer, inherit=True)
18 p.implements(p.IResourcePreview, inherit=True)
19
20 def update_config(self, config):
21 ''' Set up the resource library, public directory and
22 template directory for the preview
23 '''
24 toolkit.add_public_directory(config, 'theme/public')
25 toolkit.add_template_directory(config, 'theme/templates')
26 toolkit.add_resource('theme/public', 'ckanext-reclinepreview')
27
28 def can_preview(self, data_dict):
29 format_lower = data_dict['resource']['format'].lower()
30 return format_lower in ['csv', 'xls', 'tsv']
31
32 def preview_template(self, context, data_dict):
33 return 'recline.html'
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext/reclinepreview/plugin.py b/ckanext/reclinepreview/plugin.py
--- a/ckanext/reclinepreview/plugin.py
+++ b/ckanext/reclinepreview/plugin.py
@@ -26,6 +26,9 @@
toolkit.add_resource('theme/public', 'ckanext-reclinepreview')
def can_preview(self, data_dict):
+ # if the resource is in the datastore then we can preview it with recline
+ if data_dict['resource'].get('datastore_active'):
+ return True
format_lower = data_dict['resource']['format'].lower()
return format_lower in ['csv', 'xls', 'tsv']
| {"golden_diff": "diff --git a/ckanext/reclinepreview/plugin.py b/ckanext/reclinepreview/plugin.py\n--- a/ckanext/reclinepreview/plugin.py\n+++ b/ckanext/reclinepreview/plugin.py\n@@ -26,6 +26,9 @@\n toolkit.add_resource('theme/public', 'ckanext-reclinepreview')\n \n def can_preview(self, data_dict):\n+ # if the resource is in the datastore then we can preview it with recline\n+ if data_dict['resource'].get('datastore_active'):\n+ return True\n format_lower = data_dict['resource']['format'].lower()\n return format_lower in ['csv', 'xls', 'tsv']\n", "issue": "Recline does not preview datastore anymore\nThe new plugin does not evaluate `datastore_active`.\n\n<!---\n@huboard:{\"order\":247.0}\n-->\n\nRecline does not preview datastore anymore\nThe new plugin does not evaluate `datastore_active`.\n\n<!---\n@huboard:{\"order\":247.0}\n-->\n\n", "before_files": [{"content": "from logging import getLogger\n\nimport ckan.plugins as p\nimport ckan.plugins.toolkit as toolkit\n\nlog = getLogger(__name__)\n\n\nclass ReclinePreview(p.SingletonPlugin):\n \"\"\"This extension previews resources using recline\n\n This extension implements two interfaces\n\n - ``IConfigurer`` allows to modify the configuration\n - ``IResourcePreview`` allows to add previews\n \"\"\"\n p.implements(p.IConfigurer, inherit=True)\n p.implements(p.IResourcePreview, inherit=True)\n\n def update_config(self, config):\n ''' Set up the resource library, public directory and\n template directory for the preview\n '''\n toolkit.add_public_directory(config, 'theme/public')\n toolkit.add_template_directory(config, 'theme/templates')\n toolkit.add_resource('theme/public', 'ckanext-reclinepreview')\n\n def can_preview(self, data_dict):\n format_lower = data_dict['resource']['format'].lower()\n return format_lower in ['csv', 'xls', 'tsv']\n\n def preview_template(self, context, data_dict):\n return 'recline.html'\n", "path": "ckanext/reclinepreview/plugin.py"}], "after_files": [{"content": "from logging import getLogger\n\nimport ckan.plugins as p\nimport ckan.plugins.toolkit as toolkit\n\nlog = getLogger(__name__)\n\n\nclass ReclinePreview(p.SingletonPlugin):\n \"\"\"This extension previews resources using recline\n\n This extension implements two interfaces\n\n - ``IConfigurer`` allows to modify the configuration\n - ``IResourcePreview`` allows to add previews\n \"\"\"\n p.implements(p.IConfigurer, inherit=True)\n p.implements(p.IResourcePreview, inherit=True)\n\n def update_config(self, config):\n ''' Set up the resource library, public directory and\n template directory for the preview\n '''\n toolkit.add_public_directory(config, 'theme/public')\n toolkit.add_template_directory(config, 'theme/templates')\n toolkit.add_resource('theme/public', 'ckanext-reclinepreview')\n\n def can_preview(self, data_dict):\n # if the resource is in the datastore then we can preview it with recline\n if data_dict['resource'].get('datastore_active'):\n return True\n format_lower = data_dict['resource']['format'].lower()\n return format_lower in ['csv', 'xls', 'tsv']\n\n def preview_template(self, context, data_dict):\n return 'recline.html'\n", "path": "ckanext/reclinepreview/plugin.py"}]} | 624 | 152 |
gh_patches_debug_1229 | rasdani/github-patches | git_diff | streamlit__streamlit-6348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
experimental_get_query_params won't work before rerun
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
User can not get right query_params before rerun.
### Reproducible Code Example
```Python
import streamlit as st
st.experimental_set_query_params(param=3)
st.write(st.experimental_get_query_params())
```
### Steps To Reproduce
Run script, `{"param ": 3}` will not appear at first time until rerun script after querystring in browser already changed.
### Expected Behavior
Show `{"param ": 3}`
### Current Behavior
show empty dict
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.20.0
- Python version: 3.10.6
- Operating System: Linux
- Browser: Chrome
- Virtual environment: None
### Additional Information
In previous version `set_query_params` will set `ctx.query_string = parse.urlencode(query_params, doseq=True)` immediately.
But in 1.20, this line is removed while `get_query_params` still get if from `ctx.query_string` .
### Are you willing to submit a PR?
- [x] Yes, I am willing to submit a PR!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/commands/query_params.py`
Content:
```
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import urllib.parse as parse
16 from typing import Any, Dict, List
17
18 from streamlit import util
19 from streamlit.errors import StreamlitAPIException
20 from streamlit.proto.ForwardMsg_pb2 import ForwardMsg
21 from streamlit.runtime.metrics_util import gather_metrics
22 from streamlit.runtime.scriptrunner import get_script_run_ctx
23
24 EMBED_QUERY_PARAM = "embed"
25 EMBED_OPTIONS_QUERY_PARAM = "embed_options"
26 EMBED_QUERY_PARAMS_KEYS = [EMBED_QUERY_PARAM, EMBED_OPTIONS_QUERY_PARAM]
27
28
29 @gather_metrics("experimental_get_query_params")
30 def get_query_params() -> Dict[str, List[str]]:
31 """Return the query parameters that is currently showing in the browser's URL bar.
32
33 Returns
34 -------
35 dict
36 The current query parameters as a dict. "Query parameters" are the part of the URL that comes
37 after the first "?".
38
39 Example
40 -------
41 Let's say the user's web browser is at
42 `http://localhost:8501/?show_map=True&selected=asia&selected=america`.
43 Then, you can get the query parameters using the following:
44
45 >>> import streamlit as st
46 >>>
47 >>> st.experimental_get_query_params()
48 {"show_map": ["True"], "selected": ["asia", "america"]}
49
50 Note that the values in the returned dict are *always* lists. This is
51 because we internally use Python's urllib.parse.parse_qs(), which behaves
52 this way. And this behavior makes sense when you consider that every item
53 in a query string is potentially a 1-element array.
54
55 """
56 ctx = get_script_run_ctx()
57 if ctx is None:
58 return {}
59 # Return new query params dict, but without embed, embed_options query params
60 return util.exclude_key_query_params(
61 parse.parse_qs(ctx.query_string), keys_to_exclude=EMBED_QUERY_PARAMS_KEYS
62 )
63
64
65 @gather_metrics("experimental_set_query_params")
66 def set_query_params(**query_params: Any) -> None:
67 """Set the query parameters that are shown in the browser's URL bar.
68
69 .. warning::
70 Query param `embed` cannot be set using this method.
71
72 Parameters
73 ----------
74 **query_params : dict
75 The query parameters to set, as key-value pairs.
76
77 Example
78 -------
79
80 To point the user's web browser to something like
81 "http://localhost:8501/?show_map=True&selected=asia&selected=america",
82 you would do the following:
83
84 >>> import streamlit as st
85 >>>
86 >>> st.experimental_set_query_params(
87 ... show_map=True,
88 ... selected=["asia", "america"],
89 ... )
90
91 """
92 ctx = get_script_run_ctx()
93 if ctx is None:
94 return
95
96 msg = ForwardMsg()
97 msg.page_info_changed.query_string = _ensure_no_embed_params(
98 query_params, ctx.query_string
99 )
100 ctx.enqueue(msg)
101
102
103 def _ensure_no_embed_params(
104 query_params: Dict[str, List[str]], query_string: str
105 ) -> str:
106 """Ensures there are no embed params set (raises StreamlitAPIException) if there is a try,
107 also makes sure old param values in query_string are preserved. Returns query_string : str."""
108 # Get query params dict without embed, embed_options params
109 query_params_without_embed = util.exclude_key_query_params(
110 query_params, keys_to_exclude=EMBED_QUERY_PARAMS_KEYS
111 )
112 if query_params != query_params_without_embed:
113 raise StreamlitAPIException(
114 "Query param embed and embed_options (case-insensitive) cannot be set using set_query_params method."
115 )
116
117 all_current_params = parse.parse_qs(query_string)
118 current_embed_params = parse.urlencode(
119 {
120 EMBED_QUERY_PARAM: [
121 param
122 for param in util.extract_key_query_params(
123 all_current_params, param_key=EMBED_QUERY_PARAM
124 )
125 ],
126 EMBED_OPTIONS_QUERY_PARAM: [
127 param
128 for param in util.extract_key_query_params(
129 all_current_params, param_key=EMBED_OPTIONS_QUERY_PARAM
130 )
131 ],
132 },
133 doseq=True,
134 )
135 query_string = parse.urlencode(query_params, doseq=True)
136
137 if query_string:
138 separator = "&" if current_embed_params else ""
139 return separator.join([query_string, current_embed_params])
140 return current_embed_params
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/streamlit/commands/query_params.py b/lib/streamlit/commands/query_params.py
--- a/lib/streamlit/commands/query_params.py
+++ b/lib/streamlit/commands/query_params.py
@@ -97,6 +97,7 @@
msg.page_info_changed.query_string = _ensure_no_embed_params(
query_params, ctx.query_string
)
+ ctx.query_string = msg.page_info_changed.query_string
ctx.enqueue(msg)
| {"golden_diff": "diff --git a/lib/streamlit/commands/query_params.py b/lib/streamlit/commands/query_params.py\n--- a/lib/streamlit/commands/query_params.py\n+++ b/lib/streamlit/commands/query_params.py\n@@ -97,6 +97,7 @@\n msg.page_info_changed.query_string = _ensure_no_embed_params(\n query_params, ctx.query_string\n )\n+ ctx.query_string = msg.page_info_changed.query_string\n ctx.enqueue(msg)\n", "issue": " experimental_get_query_params won't work before rerun \n### Checklist\n\n- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.\n- [X] I added a very descriptive title to this issue.\n- [X] I have provided sufficient information below to help reproduce this issue.\n\n### Summary\n\nUser can not get right query_params before rerun.\n\n### Reproducible Code Example\n\n```Python\nimport streamlit as st\r\n\r\nst.experimental_set_query_params(param=3)\r\nst.write(st.experimental_get_query_params())\n```\n\n\n### Steps To Reproduce\n\nRun script, `{\"param \": 3}` will not appear at first time until rerun script after querystring in browser already changed.\n\n### Expected Behavior\n\nShow `{\"param \": 3}`\n\n### Current Behavior\n\nshow empty dict\n\n### Is this a regression?\n\n- [X] Yes, this used to work in a previous version.\n\n### Debug info\n\n- Streamlit version: 1.20.0\r\n- Python version: 3.10.6\r\n- Operating System: Linux\r\n- Browser: Chrome\r\n- Virtual environment: None\r\n\n\n### Additional Information\n\nIn previous version `set_query_params` will set `ctx.query_string = parse.urlencode(query_params, doseq=True)` immediately.\r\n\r\nBut in 1.20, this line is removed while `get_query_params` still get if from `ctx.query_string` .\n\n### Are you willing to submit a PR?\n\n- [x] Yes, I am willing to submit a PR!\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport urllib.parse as parse\nfrom typing import Any, Dict, List\n\nfrom streamlit import util\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.ForwardMsg_pb2 import ForwardMsg\nfrom streamlit.runtime.metrics_util import gather_metrics\nfrom streamlit.runtime.scriptrunner import get_script_run_ctx\n\nEMBED_QUERY_PARAM = \"embed\"\nEMBED_OPTIONS_QUERY_PARAM = \"embed_options\"\nEMBED_QUERY_PARAMS_KEYS = [EMBED_QUERY_PARAM, EMBED_OPTIONS_QUERY_PARAM]\n\n\n@gather_metrics(\"experimental_get_query_params\")\ndef get_query_params() -> Dict[str, List[str]]:\n \"\"\"Return the query parameters that is currently showing in the browser's URL bar.\n\n Returns\n -------\n dict\n The current query parameters as a dict. 
\"Query parameters\" are the part of the URL that comes\n after the first \"?\".\n\n Example\n -------\n Let's say the user's web browser is at\n `http://localhost:8501/?show_map=True&selected=asia&selected=america`.\n Then, you can get the query parameters using the following:\n\n >>> import streamlit as st\n >>>\n >>> st.experimental_get_query_params()\n {\"show_map\": [\"True\"], \"selected\": [\"asia\", \"america\"]}\n\n Note that the values in the returned dict are *always* lists. This is\n because we internally use Python's urllib.parse.parse_qs(), which behaves\n this way. And this behavior makes sense when you consider that every item\n in a query string is potentially a 1-element array.\n\n \"\"\"\n ctx = get_script_run_ctx()\n if ctx is None:\n return {}\n # Return new query params dict, but without embed, embed_options query params\n return util.exclude_key_query_params(\n parse.parse_qs(ctx.query_string), keys_to_exclude=EMBED_QUERY_PARAMS_KEYS\n )\n\n\n@gather_metrics(\"experimental_set_query_params\")\ndef set_query_params(**query_params: Any) -> None:\n \"\"\"Set the query parameters that are shown in the browser's URL bar.\n\n .. warning::\n Query param `embed` cannot be set using this method.\n\n Parameters\n ----------\n **query_params : dict\n The query parameters to set, as key-value pairs.\n\n Example\n -------\n\n To point the user's web browser to something like\n \"http://localhost:8501/?show_map=True&selected=asia&selected=america\",\n you would do the following:\n\n >>> import streamlit as st\n >>>\n >>> st.experimental_set_query_params(\n ... show_map=True,\n ... selected=[\"asia\", \"america\"],\n ... )\n\n \"\"\"\n ctx = get_script_run_ctx()\n if ctx is None:\n return\n\n msg = ForwardMsg()\n msg.page_info_changed.query_string = _ensure_no_embed_params(\n query_params, ctx.query_string\n )\n ctx.enqueue(msg)\n\n\ndef _ensure_no_embed_params(\n query_params: Dict[str, List[str]], query_string: str\n) -> str:\n \"\"\"Ensures there are no embed params set (raises StreamlitAPIException) if there is a try,\n also makes sure old param values in query_string are preserved. Returns query_string : str.\"\"\"\n # Get query params dict without embed, embed_options params\n query_params_without_embed = util.exclude_key_query_params(\n query_params, keys_to_exclude=EMBED_QUERY_PARAMS_KEYS\n )\n if query_params != query_params_without_embed:\n raise StreamlitAPIException(\n \"Query param embed and embed_options (case-insensitive) cannot be set using set_query_params method.\"\n )\n\n all_current_params = parse.parse_qs(query_string)\n current_embed_params = parse.urlencode(\n {\n EMBED_QUERY_PARAM: [\n param\n for param in util.extract_key_query_params(\n all_current_params, param_key=EMBED_QUERY_PARAM\n )\n ],\n EMBED_OPTIONS_QUERY_PARAM: [\n param\n for param in util.extract_key_query_params(\n all_current_params, param_key=EMBED_OPTIONS_QUERY_PARAM\n )\n ],\n },\n doseq=True,\n )\n query_string = parse.urlencode(query_params, doseq=True)\n\n if query_string:\n separator = \"&\" if current_embed_params else \"\"\n return separator.join([query_string, current_embed_params])\n return current_embed_params\n", "path": "lib/streamlit/commands/query_params.py"}], "after_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport urllib.parse as parse\nfrom typing import Any, Dict, List\n\nfrom streamlit import util\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.ForwardMsg_pb2 import ForwardMsg\nfrom streamlit.runtime.metrics_util import gather_metrics\nfrom streamlit.runtime.scriptrunner import get_script_run_ctx\n\nEMBED_QUERY_PARAM = \"embed\"\nEMBED_OPTIONS_QUERY_PARAM = \"embed_options\"\nEMBED_QUERY_PARAMS_KEYS = [EMBED_QUERY_PARAM, EMBED_OPTIONS_QUERY_PARAM]\n\n\n@gather_metrics(\"experimental_get_query_params\")\ndef get_query_params() -> Dict[str, List[str]]:\n \"\"\"Return the query parameters that is currently showing in the browser's URL bar.\n\n Returns\n -------\n dict\n The current query parameters as a dict. \"Query parameters\" are the part of the URL that comes\n after the first \"?\".\n\n Example\n -------\n Let's say the user's web browser is at\n `http://localhost:8501/?show_map=True&selected=asia&selected=america`.\n Then, you can get the query parameters using the following:\n\n >>> import streamlit as st\n >>>\n >>> st.experimental_get_query_params()\n {\"show_map\": [\"True\"], \"selected\": [\"asia\", \"america\"]}\n\n Note that the values in the returned dict are *always* lists. This is\n because we internally use Python's urllib.parse.parse_qs(), which behaves\n this way. And this behavior makes sense when you consider that every item\n in a query string is potentially a 1-element array.\n\n \"\"\"\n ctx = get_script_run_ctx()\n if ctx is None:\n return {}\n # Return new query params dict, but without embed, embed_options query params\n return util.exclude_key_query_params(\n parse.parse_qs(ctx.query_string), keys_to_exclude=EMBED_QUERY_PARAMS_KEYS\n )\n\n\n@gather_metrics(\"experimental_set_query_params\")\ndef set_query_params(**query_params: Any) -> None:\n \"\"\"Set the query parameters that are shown in the browser's URL bar.\n\n .. warning::\n Query param `embed` cannot be set using this method.\n\n Parameters\n ----------\n **query_params : dict\n The query parameters to set, as key-value pairs.\n\n Example\n -------\n\n To point the user's web browser to something like\n \"http://localhost:8501/?show_map=True&selected=asia&selected=america\",\n you would do the following:\n\n >>> import streamlit as st\n >>>\n >>> st.experimental_set_query_params(\n ... show_map=True,\n ... selected=[\"asia\", \"america\"],\n ... )\n\n \"\"\"\n ctx = get_script_run_ctx()\n if ctx is None:\n return\n\n msg = ForwardMsg()\n msg.page_info_changed.query_string = _ensure_no_embed_params(\n query_params, ctx.query_string\n )\n ctx.query_string = msg.page_info_changed.query_string\n ctx.enqueue(msg)\n\n\ndef _ensure_no_embed_params(\n query_params: Dict[str, List[str]], query_string: str\n) -> str:\n \"\"\"Ensures there are no embed params set (raises StreamlitAPIException) if there is a try,\n also makes sure old param values in query_string are preserved. 
Returns query_string : str.\"\"\"\n # Get query params dict without embed, embed_options params\n query_params_without_embed = util.exclude_key_query_params(\n query_params, keys_to_exclude=EMBED_QUERY_PARAMS_KEYS\n )\n if query_params != query_params_without_embed:\n raise StreamlitAPIException(\n \"Query param embed and embed_options (case-insensitive) cannot be set using set_query_params method.\"\n )\n\n all_current_params = parse.parse_qs(query_string)\n current_embed_params = parse.urlencode(\n {\n EMBED_QUERY_PARAM: [\n param\n for param in util.extract_key_query_params(\n all_current_params, param_key=EMBED_QUERY_PARAM\n )\n ],\n EMBED_OPTIONS_QUERY_PARAM: [\n param\n for param in util.extract_key_query_params(\n all_current_params, param_key=EMBED_OPTIONS_QUERY_PARAM\n )\n ],\n },\n doseq=True,\n )\n query_string = parse.urlencode(query_params, doseq=True)\n\n if query_string:\n separator = \"&\" if current_embed_params else \"\"\n return separator.join([query_string, current_embed_params])\n return current_embed_params\n", "path": "lib/streamlit/commands/query_params.py"}]} | 1,997 | 98 |
gh_patches_debug_18458 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-603 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
I miss one with C-Trace.de/WZV
Hello guys,
I just switched from ics to C-Trace.de. Since then, unfortunately, it no longer shows me all the bins. I'm missing the residual waste, everything else is displayed as usual. Can someone help me?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py`
Content:
```
1 import requests
2 from waste_collection_schedule import Collection # type: ignore[attr-defined]
3 from waste_collection_schedule.service.ICS import ICS
4
5 TITLE = "C-Trace"
6 DESCRIPTION = "Source for C-Trace.de."
7 URL = "https://c-trace.de/"
8 EXTRA_INFO = [
9 {
10 "title": "Bremener Stadreinigung",
11 "url": "https://www.die-bremer-stadtreinigung.de/",
12 },
13 {
14 "title": "AWB Landkreis Augsburg",
15 "url": "https://www.awb-landkreis-augsburg.de/",
16 },
17 {
18 "title": "WZV Kreis Segeberg",
19 "url": "https://www.wzv.de/",
20 },
21 ]
22 TEST_CASES = {
23 "Bremen": {"ort": "Bremen", "strasse": "Abbentorstraße", "hausnummer": 5},
24 "AugsburgLand": {
25 "ort": "Königsbrunn",
26 "strasse": "Marktplatz",
27 "hausnummer": 7,
28 "service": "augsburglandkreis",
29 },
30 }
31
32
33 BASE_URL = "https://web.c-trace.de"
34
35
36 class Source:
37 def __init__(self, ort, strasse, hausnummer, service=None):
38 # Compatibility handling for Bremen which was the first supported
39 # district and didn't require to set a service name.
40 if service is None:
41 if ort == "Bremen":
42 service = "bremenabfallkalender"
43 else:
44 raise Exception("service is missing")
45
46 self._service = service
47 self._ort = ort
48 self._strasse = strasse
49 self._hausnummer = hausnummer
50 self._ics = ICS(regex=r"Abfuhr: (.*)")
51
52 def fetch(self):
53 session = requests.session()
54
55 # get session url
56 r = session.get(
57 f"{BASE_URL}/{self._service}/Abfallkalender",
58 allow_redirects=False,
59 )
60 session_id = r.headers["location"].split("/")[
61 2
62 ] # session_id like "(S(r3bme50igdgsp2lstgxxhvs2))"
63
64 args = {
65 "Ort": self._ort,
66 "Gemeinde": self._ort,
67 "Strasse": self._strasse,
68 "Hausnr": self._hausnummer,
69 "Abfall": "|".join(str(i) for i in range(1, 99)), # return all waste types
70 }
71 r = session.get(
72 f"{BASE_URL}/{self._service}/{session_id}/abfallkalender/cal", params=args
73 )
74 r.raise_for_status()
75
76 # parse ics file
77 r.encoding = "utf-8"
78 dates = self._ics.convert(r.text)
79
80 entries = []
81 for d in dates:
82 entries.append(Collection(d[0], d[1]))
83 return entries
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py
@@ -27,6 +27,12 @@
"hausnummer": 7,
"service": "augsburglandkreis",
},
+ "WZV": {
+ "ort": "Bark",
+ "strasse": "Birkenweg",
+ "hausnummer": 1,
+ "service": "segebergwzv-abfallkalender",
+ },
}
@@ -66,7 +72,7 @@
"Gemeinde": self._ort,
"Strasse": self._strasse,
"Hausnr": self._hausnummer,
- "Abfall": "|".join(str(i) for i in range(1, 99)), # return all waste types
+ "Abfall": "|".join(str(i) for i in range(0, 99)), # return all waste types
}
r = session.get(
f"{BASE_URL}/{self._service}/{session_id}/abfallkalender/cal", params=args
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py\n@@ -27,6 +27,12 @@\n \"hausnummer\": 7,\n \"service\": \"augsburglandkreis\",\n },\n+ \"WZV\": {\n+ \"ort\": \"Bark\",\n+ \"strasse\": \"Birkenweg\",\n+ \"hausnummer\": 1,\n+ \"service\": \"segebergwzv-abfallkalender\",\n+ },\n }\n \n \n@@ -66,7 +72,7 @@\n \"Gemeinde\": self._ort,\n \"Strasse\": self._strasse,\n \"Hausnr\": self._hausnummer,\n- \"Abfall\": \"|\".join(str(i) for i in range(1, 99)), # return all waste types\n+ \"Abfall\": \"|\".join(str(i) for i in range(0, 99)), # return all waste types\n }\n r = session.get(\n f\"{BASE_URL}/{self._service}/{session_id}/abfallkalender/cal\", params=args\n", "issue": "I miss one with C-Trace.de/WZV\nHello guys,\r\n\r\nI just switched from ics to C-Trace.de. Since then, unfortunately, it no longer shows me all the bins. I'm missing the residual waste, everything else is displayed as usual. Can someone help me?\r\n\r\n\n", "before_files": [{"content": "import requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"C-Trace\"\nDESCRIPTION = \"Source for C-Trace.de.\"\nURL = \"https://c-trace.de/\"\nEXTRA_INFO = [\n {\n \"title\": \"Bremener Stadreinigung\",\n \"url\": \"https://www.die-bremer-stadtreinigung.de/\",\n },\n {\n \"title\": \"AWB Landkreis Augsburg\",\n \"url\": \"https://www.awb-landkreis-augsburg.de/\",\n },\n {\n \"title\": \"WZV Kreis Segeberg\",\n \"url\": \"https://www.wzv.de/\",\n },\n]\nTEST_CASES = {\n \"Bremen\": {\"ort\": \"Bremen\", \"strasse\": \"Abbentorstra\u00dfe\", \"hausnummer\": 5},\n \"AugsburgLand\": {\n \"ort\": \"K\u00f6nigsbrunn\",\n \"strasse\": \"Marktplatz\",\n \"hausnummer\": 7,\n \"service\": \"augsburglandkreis\",\n },\n}\n\n\nBASE_URL = \"https://web.c-trace.de\"\n\n\nclass Source:\n def __init__(self, ort, strasse, hausnummer, service=None):\n # Compatibility handling for Bremen which was the first supported\n # district and didn't require to set a service name.\n if service is None:\n if ort == \"Bremen\":\n service = \"bremenabfallkalender\"\n else:\n raise Exception(\"service is missing\")\n\n self._service = service\n self._ort = ort\n self._strasse = strasse\n self._hausnummer = hausnummer\n self._ics = ICS(regex=r\"Abfuhr: (.*)\")\n\n def fetch(self):\n session = requests.session()\n\n # get session url\n r = session.get(\n f\"{BASE_URL}/{self._service}/Abfallkalender\",\n allow_redirects=False,\n )\n session_id = r.headers[\"location\"].split(\"/\")[\n 2\n ] # session_id like \"(S(r3bme50igdgsp2lstgxxhvs2))\"\n\n args = {\n \"Ort\": self._ort,\n \"Gemeinde\": self._ort,\n \"Strasse\": self._strasse,\n \"Hausnr\": self._hausnummer,\n \"Abfall\": \"|\".join(str(i) for i in range(1, 99)), # return all waste types\n }\n r = session.get(\n f\"{BASE_URL}/{self._service}/{session_id}/abfallkalender/cal\", params=args\n )\n r.raise_for_status()\n\n # parse ics file\n r.encoding = \"utf-8\"\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": 
"custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py"}], "after_files": [{"content": "import requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"C-Trace\"\nDESCRIPTION = \"Source for C-Trace.de.\"\nURL = \"https://c-trace.de/\"\nEXTRA_INFO = [\n {\n \"title\": \"Bremener Stadreinigung\",\n \"url\": \"https://www.die-bremer-stadtreinigung.de/\",\n },\n {\n \"title\": \"AWB Landkreis Augsburg\",\n \"url\": \"https://www.awb-landkreis-augsburg.de/\",\n },\n {\n \"title\": \"WZV Kreis Segeberg\",\n \"url\": \"https://www.wzv.de/\",\n },\n]\nTEST_CASES = {\n \"Bremen\": {\"ort\": \"Bremen\", \"strasse\": \"Abbentorstra\u00dfe\", \"hausnummer\": 5},\n \"AugsburgLand\": {\n \"ort\": \"K\u00f6nigsbrunn\",\n \"strasse\": \"Marktplatz\",\n \"hausnummer\": 7,\n \"service\": \"augsburglandkreis\",\n },\n \"WZV\": {\n \"ort\": \"Bark\",\n \"strasse\": \"Birkenweg\",\n \"hausnummer\": 1,\n \"service\": \"segebergwzv-abfallkalender\",\n },\n}\n\n\nBASE_URL = \"https://web.c-trace.de\"\n\n\nclass Source:\n def __init__(self, ort, strasse, hausnummer, service=None):\n # Compatibility handling for Bremen which was the first supported\n # district and didn't require to set a service name.\n if service is None:\n if ort == \"Bremen\":\n service = \"bremenabfallkalender\"\n else:\n raise Exception(\"service is missing\")\n\n self._service = service\n self._ort = ort\n self._strasse = strasse\n self._hausnummer = hausnummer\n self._ics = ICS(regex=r\"Abfuhr: (.*)\")\n\n def fetch(self):\n session = requests.session()\n\n # get session url\n r = session.get(\n f\"{BASE_URL}/{self._service}/Abfallkalender\",\n allow_redirects=False,\n )\n session_id = r.headers[\"location\"].split(\"/\")[\n 2\n ] # session_id like \"(S(r3bme50igdgsp2lstgxxhvs2))\"\n\n args = {\n \"Ort\": self._ort,\n \"Gemeinde\": self._ort,\n \"Strasse\": self._strasse,\n \"Hausnr\": self._hausnummer,\n \"Abfall\": \"|\".join(str(i) for i in range(0, 99)), # return all waste types\n }\n r = session.get(\n f\"{BASE_URL}/{self._service}/{session_id}/abfallkalender/cal\", params=args\n )\n r.raise_for_status()\n\n # parse ics file\n r.encoding = \"utf-8\"\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/c_trace_de.py"}]} | 1,146 | 300 |
gh_patches_debug_24900 | rasdani/github-patches | git_diff | liberapay__liberapay.com-502 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
add support for xmpp: uri in markdown syntax
When adding and XMPP uri in the following form:
`[[email protected]](xmpp:[email protected]?join)`
the uri syntax is shown raw instead of linking to the room as expected.
add support for xmpp: uri in markdown syntax
When adding and XMPP uri in the following form:
`[[email protected]](xmpp:[email protected]?join)`
the uri syntax is shown raw instead of linking to the room as expected.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liberapay/utils/markdown.py`
Content:
```
1 from markupsafe import Markup
2 import misaka as m # http://misaka.61924.nl/
3
4 def render(markdown):
5 return Markup(m.html(
6 markdown,
7 extensions=m.EXT_AUTOLINK | m.EXT_STRIKETHROUGH | m.EXT_NO_INTRA_EMPHASIS,
8 render_flags=m.HTML_SKIP_HTML | m.HTML_TOC | m.HTML_SMARTYPANTS | m.HTML_SAFELINK
9 ))
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/liberapay/utils/markdown.py b/liberapay/utils/markdown.py
--- a/liberapay/utils/markdown.py
+++ b/liberapay/utils/markdown.py
@@ -1,9 +1,41 @@
-from markupsafe import Markup
+from __future__ import absolute_import, division, print_function, unicode_literals
+
+import re
+
+from markupsafe import Markup, escape
import misaka as m # http://misaka.61924.nl/
+
+url_re = re.compile(r'^(https?|xmpp):')
+
+
+class CustomRenderer(m.HtmlRenderer):
+
+ def image(self, link, title='', alt=''):
+ if url_re.match(link):
+ maybe_alt = Markup(' alt="%s"') % alt if alt else ''
+ maybe_title = Markup(' title="%s"') % title if title else ''
+ return Markup('<img src="%s"%s%s />') % (link, maybe_alt, maybe_title)
+ else:
+ return escape("" % (alt, link))
+
+ def link(self, content, link, title=''):
+ if url_re.match(link):
+ maybe_title = Markup(' title="%s"') % title if title else ''
+ return Markup('<a href="%s"%s>%s</a>') % (link, maybe_title, content)
+ else:
+ return escape("[%s](%s)" % (content, link))
+
+ def autolink(self, link, is_email):
+ if url_re.match(link):
+ return Markup('<a href="%s">%s</a>') % (link, link)
+ else:
+ return escape('<%s>' % link)
+
+
+renderer = CustomRenderer(flags=m.HTML_SKIP_HTML)
+md = m.Markdown(renderer, extensions=('autolink', 'strikethrough', 'no-intra-emphasis'))
+
+
def render(markdown):
- return Markup(m.html(
- markdown,
- extensions=m.EXT_AUTOLINK | m.EXT_STRIKETHROUGH | m.EXT_NO_INTRA_EMPHASIS,
- render_flags=m.HTML_SKIP_HTML | m.HTML_TOC | m.HTML_SMARTYPANTS | m.HTML_SAFELINK
- ))
+ return Markup(md(markdown))
| {"golden_diff": "diff --git a/liberapay/utils/markdown.py b/liberapay/utils/markdown.py\n--- a/liberapay/utils/markdown.py\n+++ b/liberapay/utils/markdown.py\n@@ -1,9 +1,41 @@\n-from markupsafe import Markup\n+from __future__ import absolute_import, division, print_function, unicode_literals\n+\n+import re\n+\n+from markupsafe import Markup, escape\n import misaka as m # http://misaka.61924.nl/\n \n+\n+url_re = re.compile(r'^(https?|xmpp):')\n+\n+\n+class CustomRenderer(m.HtmlRenderer):\n+\n+ def image(self, link, title='', alt=''):\n+ if url_re.match(link):\n+ maybe_alt = Markup(' alt=\"%s\"') % alt if alt else ''\n+ maybe_title = Markup(' title=\"%s\"') % title if title else ''\n+ return Markup('<img src=\"%s\"%s%s />') % (link, maybe_alt, maybe_title)\n+ else:\n+ return escape(\"\" % (alt, link))\n+\n+ def link(self, content, link, title=''):\n+ if url_re.match(link):\n+ maybe_title = Markup(' title=\"%s\"') % title if title else ''\n+ return Markup('<a href=\"%s\"%s>%s</a>') % (link, maybe_title, content)\n+ else:\n+ return escape(\"[%s](%s)\" % (content, link))\n+\n+ def autolink(self, link, is_email):\n+ if url_re.match(link):\n+ return Markup('<a href=\"%s\">%s</a>') % (link, link)\n+ else:\n+ return escape('<%s>' % link)\n+\n+\n+renderer = CustomRenderer(flags=m.HTML_SKIP_HTML)\n+md = m.Markdown(renderer, extensions=('autolink', 'strikethrough', 'no-intra-emphasis'))\n+\n+\n def render(markdown):\n- return Markup(m.html(\n- markdown,\n- extensions=m.EXT_AUTOLINK | m.EXT_STRIKETHROUGH | m.EXT_NO_INTRA_EMPHASIS,\n- render_flags=m.HTML_SKIP_HTML | m.HTML_TOC | m.HTML_SMARTYPANTS | m.HTML_SAFELINK\n- ))\n+ return Markup(md(markdown))\n", "issue": "add support for xmpp: uri in markdown syntax\nWhen adding and XMPP uri in the following form:\r\n`[[email protected]](xmpp:[email protected]?join)`\r\nthe uri syntax is shown raw instead of linking to the room as expected.\nadd support for xmpp: uri in markdown syntax\nWhen adding and XMPP uri in the following form:\r\n`[[email protected]](xmpp:[email protected]?join)`\r\nthe uri syntax is shown raw instead of linking to the room as expected.\n", "before_files": [{"content": "from markupsafe import Markup\nimport misaka as m # http://misaka.61924.nl/\n\ndef render(markdown):\n return Markup(m.html(\n markdown,\n extensions=m.EXT_AUTOLINK | m.EXT_STRIKETHROUGH | m.EXT_NO_INTRA_EMPHASIS,\n render_flags=m.HTML_SKIP_HTML | m.HTML_TOC | m.HTML_SMARTYPANTS | m.HTML_SAFELINK\n ))\n", "path": "liberapay/utils/markdown.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport re\n\nfrom markupsafe import Markup, escape\nimport misaka as m # http://misaka.61924.nl/\n\n\nurl_re = re.compile(r'^(https?|xmpp):')\n\n\nclass CustomRenderer(m.HtmlRenderer):\n\n def image(self, link, title='', alt=''):\n if url_re.match(link):\n maybe_alt = Markup(' alt=\"%s\"') % alt if alt else ''\n maybe_title = Markup(' title=\"%s\"') % title if title else ''\n return Markup('<img src=\"%s\"%s%s />') % (link, maybe_alt, maybe_title)\n else:\n return escape(\"\" % (alt, link))\n\n def link(self, content, link, title=''):\n if url_re.match(link):\n maybe_title = Markup(' title=\"%s\"') % title if title else ''\n return Markup('<a href=\"%s\"%s>%s</a>') % (link, maybe_title, content)\n else:\n return escape(\"[%s](%s)\" % (content, link))\n\n def autolink(self, link, is_email):\n if url_re.match(link):\n return Markup('<a href=\"%s\">%s</a>') % (link, link)\n else:\n return escape('<%s>' % 
link)\n\n\nrenderer = CustomRenderer(flags=m.HTML_SKIP_HTML)\nmd = m.Markdown(renderer, extensions=('autolink', 'strikethrough', 'no-intra-emphasis'))\n\n\ndef render(markdown):\n return Markup(md(markdown))\n", "path": "liberapay/utils/markdown.py"}]} | 497 | 512 |
gh_patches_debug_842 | rasdani/github-patches | git_diff | streamlit__streamlit-6377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Streamlit logger working on root
### Summary
Upon import, Streamlit adds a new **global** log handler that dumps logs in text format. Packages should not be doing that, because it might break the logging convention of the host systems.
In our case for example, we dump logs in JSON format and push it all to our logging aggregation system. Streamlit's log message break the format and so it happens that the only service we can't debug properly is Streamlit.
### Steps to reproduce
Nothing special, logging comes out of the box.
**Expected behavior:**
Streamlit should attach its handler to a specific logger namespace (e.g. `streamlit`) instead of attaching it to the root logger.
**Actual behavior:**
Streamlit attaches a stream handler to the root logger
### Is this a regression?
That is, did this use to work the way you expected in the past?
no
### Debug info
- Streamlit version: 1.1.0
- Python version: 3.8
- Using Conda? PipEnv? PyEnv? Pex?
- OS version: Any
- Browser version: Irrelevant
---
Community voting on feature requests enables the Streamlit team to understand which features are most important to our users.
**If you'd like the Streamlit team to prioritize this feature request, please use the 👍 (thumbs up emoji) reaction in response to the initial post.**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/logger.py`
Content:
```
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Logging module."""
16
17 import logging
18 import sys
19 from typing import Dict, Union
20
21 from typing_extensions import Final
22
23 DEFAULT_LOG_MESSAGE: Final = "%(asctime)s %(levelname) -7s " "%(name)s: %(message)s"
24
25 # Loggers for each name are saved here.
26 _loggers: Dict[str, logging.Logger] = {}
27
28 # The global log level is set here across all names.
29 _global_log_level = logging.INFO
30
31
32 def set_log_level(level: Union[str, int]) -> None:
33 """Set log level."""
34 logger = get_logger(__name__)
35
36 if isinstance(level, str):
37 level = level.upper()
38 if level == "CRITICAL" or level == logging.CRITICAL:
39 log_level = logging.CRITICAL
40 elif level == "ERROR" or level == logging.ERROR:
41 log_level = logging.ERROR
42 elif level == "WARNING" or level == logging.WARNING:
43 log_level = logging.WARNING
44 elif level == "INFO" or level == logging.INFO:
45 log_level = logging.INFO
46 elif level == "DEBUG" or level == logging.DEBUG:
47 log_level = logging.DEBUG
48 else:
49 msg = 'undefined log level "%s"' % level
50 logger.critical(msg)
51 sys.exit(1)
52
53 for log in _loggers.values():
54 log.setLevel(log_level)
55
56 global _global_log_level
57 _global_log_level = log_level
58
59
60 def setup_formatter(logger: logging.Logger) -> None:
61 """Set up the console formatter for a given logger."""
62 # Deregister any previous console loggers.
63 if hasattr(logger, "streamlit_console_handler"):
64 logger.removeHandler(logger.streamlit_console_handler)
65
66 logger.streamlit_console_handler = logging.StreamHandler() # type: ignore[attr-defined]
67
68 # Import here to avoid circular imports
69 from streamlit import config
70
71 if config._config_options:
72 # logger is required in ConfigOption.set_value
73 # Getting the config option before the config file has been parsed
74 # can create an infinite loop
75 message_format = config.get_option("logger.messageFormat")
76 else:
77 message_format = DEFAULT_LOG_MESSAGE
78 formatter = logging.Formatter(fmt=message_format)
79 formatter.default_msec_format = "%s.%03d"
80 logger.streamlit_console_handler.setFormatter(formatter) # type: ignore[attr-defined]
81
82 # Register the new console logger.
83 logger.addHandler(logger.streamlit_console_handler) # type: ignore[attr-defined]
84
85
86 def update_formatter() -> None:
87 for log in _loggers.values():
88 setup_formatter(log)
89
90
91 def init_tornado_logs() -> None:
92 """Set Tornado log levels.
93
94 This function does not import any Tornado code, so it's safe to call even
95 when Server is not running.
96 """
97 # http://www.tornadoweb.org/en/stable/log.html
98 for log in ("access", "application", "general"):
99 # get_logger will set the log level for the logger with the given name.
100 get_logger(f"tornado.{log}")
101
102
103 def get_logger(name: str) -> logging.Logger:
104 """Return a logger.
105
106 Parameters
107 ----------
108 name : str
109 The name of the logger to use. You should just pass in __name__.
110
111 Returns
112 -------
113 Logger
114
115 """
116 if name in _loggers.keys():
117 return _loggers[name]
118
119 if name == "root":
120 logger = logging.getLogger()
121 else:
122 logger = logging.getLogger(name)
123
124 logger.setLevel(_global_log_level)
125 logger.propagate = False
126 setup_formatter(logger)
127
128 _loggers[name] = logger
129
130 return logger
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/streamlit/logger.py b/lib/streamlit/logger.py
--- a/lib/streamlit/logger.py
+++ b/lib/streamlit/logger.py
@@ -117,7 +117,7 @@
return _loggers[name]
if name == "root":
- logger = logging.getLogger()
+ logger = logging.getLogger("streamlit")
else:
logger = logging.getLogger(name)
| {"golden_diff": "diff --git a/lib/streamlit/logger.py b/lib/streamlit/logger.py\n--- a/lib/streamlit/logger.py\n+++ b/lib/streamlit/logger.py\n@@ -117,7 +117,7 @@\n return _loggers[name]\n \n if name == \"root\":\n- logger = logging.getLogger()\n+ logger = logging.getLogger(\"streamlit\")\n else:\n logger = logging.getLogger(name)\n", "issue": "Streamlit logger working on root\n### Summary\r\n\r\nUpon import, Streamlit adds a new **global** log handler that dumps logs in text format. Packages should not be doing that, because it might break the logging convention of the host systems. \r\nIn our case for example, we dump logs in JSON format and push it all to our logging aggregation system. Streamlit's log message break the format and so it happens that the only service we can't debug properly is Streamlit.\r\n\r\n### Steps to reproduce\r\nNothing special, logging comes out of the box.\r\n\r\n**Expected behavior:**\r\nStreamlit should attach its handler to a specific logger namespace (e.g. `streamlit`) instead of attaching it to the root logger.\r\n\r\n**Actual behavior:**\r\n\r\nStreamlit attaches a stream handler to the root logger\r\n\r\n### Is this a regression?\r\n\r\nThat is, did this use to work the way you expected in the past?\r\nno\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 1.1.0\r\n- Python version: 3.8\r\n- Using Conda? PipEnv? PyEnv? Pex?\r\n- OS version: Any\r\n- Browser version: Irrelevant\r\n\r\n---\r\n\r\nCommunity voting on feature requests enables the Streamlit team to understand which features are most important to our users.\r\n\r\n**If you'd like the Streamlit team to prioritize this feature request, please use the \ud83d\udc4d (thumbs up emoji) reaction in response to the initial post.**\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Logging module.\"\"\"\n\nimport logging\nimport sys\nfrom typing import Dict, Union\n\nfrom typing_extensions import Final\n\nDEFAULT_LOG_MESSAGE: Final = \"%(asctime)s %(levelname) -7s \" \"%(name)s: %(message)s\"\n\n# Loggers for each name are saved here.\n_loggers: Dict[str, logging.Logger] = {}\n\n# The global log level is set here across all names.\n_global_log_level = logging.INFO\n\n\ndef set_log_level(level: Union[str, int]) -> None:\n \"\"\"Set log level.\"\"\"\n logger = get_logger(__name__)\n\n if isinstance(level, str):\n level = level.upper()\n if level == \"CRITICAL\" or level == logging.CRITICAL:\n log_level = logging.CRITICAL\n elif level == \"ERROR\" or level == logging.ERROR:\n log_level = logging.ERROR\n elif level == \"WARNING\" or level == logging.WARNING:\n log_level = logging.WARNING\n elif level == \"INFO\" or level == logging.INFO:\n log_level = logging.INFO\n elif level == \"DEBUG\" or level == logging.DEBUG:\n log_level = logging.DEBUG\n else:\n msg = 'undefined log level \"%s\"' % level\n logger.critical(msg)\n sys.exit(1)\n\n for log in _loggers.values():\n log.setLevel(log_level)\n\n global _global_log_level\n _global_log_level = log_level\n\n\ndef setup_formatter(logger: logging.Logger) -> None:\n \"\"\"Set up the console formatter for a given logger.\"\"\"\n # Deregister any previous console loggers.\n if hasattr(logger, \"streamlit_console_handler\"):\n logger.removeHandler(logger.streamlit_console_handler)\n\n logger.streamlit_console_handler = logging.StreamHandler() # type: ignore[attr-defined]\n\n # Import here to avoid circular imports\n from streamlit import config\n\n if config._config_options:\n # logger is required in ConfigOption.set_value\n # Getting the config option before the config file has been parsed\n # can create an infinite loop\n message_format = config.get_option(\"logger.messageFormat\")\n else:\n message_format = DEFAULT_LOG_MESSAGE\n formatter = logging.Formatter(fmt=message_format)\n formatter.default_msec_format = \"%s.%03d\"\n logger.streamlit_console_handler.setFormatter(formatter) # type: ignore[attr-defined]\n\n # Register the new console logger.\n logger.addHandler(logger.streamlit_console_handler) # type: ignore[attr-defined]\n\n\ndef update_formatter() -> None:\n for log in _loggers.values():\n setup_formatter(log)\n\n\ndef init_tornado_logs() -> None:\n \"\"\"Set Tornado log levels.\n\n This function does not import any Tornado code, so it's safe to call even\n when Server is not running.\n \"\"\"\n # http://www.tornadoweb.org/en/stable/log.html\n for log in (\"access\", \"application\", \"general\"):\n # get_logger will set the log level for the logger with the given name.\n get_logger(f\"tornado.{log}\")\n\n\ndef get_logger(name: str) -> logging.Logger:\n \"\"\"Return a logger.\n\n Parameters\n ----------\n name : str\n The name of the logger to use. 
You should just pass in __name__.\n\n Returns\n -------\n Logger\n\n \"\"\"\n if name in _loggers.keys():\n return _loggers[name]\n\n if name == \"root\":\n logger = logging.getLogger()\n else:\n logger = logging.getLogger(name)\n\n logger.setLevel(_global_log_level)\n logger.propagate = False\n setup_formatter(logger)\n\n _loggers[name] = logger\n\n return logger\n", "path": "lib/streamlit/logger.py"}], "after_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Logging module.\"\"\"\n\nimport logging\nimport sys\nfrom typing import Dict, Union\n\nfrom typing_extensions import Final\n\nDEFAULT_LOG_MESSAGE: Final = \"%(asctime)s %(levelname) -7s \" \"%(name)s: %(message)s\"\n\n# Loggers for each name are saved here.\n_loggers: Dict[str, logging.Logger] = {}\n\n# The global log level is set here across all names.\n_global_log_level = logging.INFO\n\n\ndef set_log_level(level: Union[str, int]) -> None:\n \"\"\"Set log level.\"\"\"\n logger = get_logger(__name__)\n\n if isinstance(level, str):\n level = level.upper()\n if level == \"CRITICAL\" or level == logging.CRITICAL:\n log_level = logging.CRITICAL\n elif level == \"ERROR\" or level == logging.ERROR:\n log_level = logging.ERROR\n elif level == \"WARNING\" or level == logging.WARNING:\n log_level = logging.WARNING\n elif level == \"INFO\" or level == logging.INFO:\n log_level = logging.INFO\n elif level == \"DEBUG\" or level == logging.DEBUG:\n log_level = logging.DEBUG\n else:\n msg = 'undefined log level \"%s\"' % level\n logger.critical(msg)\n sys.exit(1)\n\n for log in _loggers.values():\n log.setLevel(log_level)\n\n global _global_log_level\n _global_log_level = log_level\n\n\ndef setup_formatter(logger: logging.Logger) -> None:\n \"\"\"Set up the console formatter for a given logger.\"\"\"\n # Deregister any previous console loggers.\n if hasattr(logger, \"streamlit_console_handler\"):\n logger.removeHandler(logger.streamlit_console_handler)\n\n logger.streamlit_console_handler = logging.StreamHandler() # type: ignore[attr-defined]\n\n # Import here to avoid circular imports\n from streamlit import config\n\n if config._config_options:\n # logger is required in ConfigOption.set_value\n # Getting the config option before the config file has been parsed\n # can create an infinite loop\n message_format = config.get_option(\"logger.messageFormat\")\n else:\n message_format = DEFAULT_LOG_MESSAGE\n formatter = logging.Formatter(fmt=message_format)\n formatter.default_msec_format = \"%s.%03d\"\n logger.streamlit_console_handler.setFormatter(formatter) # type: ignore[attr-defined]\n\n # Register the new console logger.\n logger.addHandler(logger.streamlit_console_handler) # type: ignore[attr-defined]\n\n\ndef update_formatter() -> None:\n for log in _loggers.values():\n setup_formatter(log)\n\n\ndef init_tornado_logs() -> None:\n \"\"\"Set Tornado log levels.\n\n This function does not import any Tornado code, so it's safe to call even\n when Server 
is not running.\n \"\"\"\n # http://www.tornadoweb.org/en/stable/log.html\n for log in (\"access\", \"application\", \"general\"):\n # get_logger will set the log level for the logger with the given name.\n get_logger(f\"tornado.{log}\")\n\n\ndef get_logger(name: str) -> logging.Logger:\n \"\"\"Return a logger.\n\n Parameters\n ----------\n name : str\n The name of the logger to use. You should just pass in __name__.\n\n Returns\n -------\n Logger\n\n \"\"\"\n if name in _loggers.keys():\n return _loggers[name]\n\n if name == \"root\":\n logger = logging.getLogger(\"streamlit\")\n else:\n logger = logging.getLogger(name)\n\n logger.setLevel(_global_log_level)\n logger.propagate = False\n setup_formatter(logger)\n\n _loggers[name] = logger\n\n return logger\n", "path": "lib/streamlit/logger.py"}]} | 1,784 | 88 |
gh_patches_debug_24425 | rasdani/github-patches | git_diff | conda__conda-5421 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conda-env update error in 4.3.20
```
conda env update
An unexpected error has occurred.
Please consider posting the following information to the
conda GitHub issue tracker at:
https://github.com/conda/conda/issues
Current conda install:
platform : linux-64
conda version : 4.3.20
conda is private : False
conda-env version : 4.3.20
conda-build version : not installed
python version : 3.5.2.final.0
requests version : 2.14.2
root environment : /home/travis/miniconda (writable)
default environment : /home/travis/miniconda
envs directories : /home/travis/miniconda/envs
/home/travis/.conda/envs
package cache : /home/travis/miniconda/pkgs
/home/travis/.conda/pkgs
channel URLs : https://conda.anaconda.org/conda-canary/linux-64
https://conda.anaconda.org/conda-canary/noarch
https://repo.continuum.io/pkgs/free/linux-64
https://repo.continuum.io/pkgs/free/noarch
https://repo.continuum.io/pkgs/r/linux-64
https://repo.continuum.io/pkgs/r/noarch
https://repo.continuum.io/pkgs/pro/linux-64
https://repo.continuum.io/pkgs/pro/noarch
config file : /home/travis/.condarc
netrc file : None
offline mode : False
user-agent : conda/4.3.20 requests/2.14.2 CPython/3.5.2 Linux/4.4.0-51-generic debian/jessie/sid glibc/2.19
UID:GID : 1000:1000
`$ /home/travis/miniconda/bin/conda-env update`
Traceback (most recent call last):
File "/home/travis/miniconda/lib/python3.5/site-packages/conda/exceptions.py", line 632, in conda_exception_handler
return_value = func(*args, **kwargs)
File "/home/travis/miniconda/lib/python3.5/site-packages/conda_env/cli/main_update.py", line 82, in execute
if not (args.name or args.prefix):
AttributeError: 'Namespace' object has no attribute 'prefix'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_env/cli/main_update.py`
Content:
```
1 from argparse import RawDescriptionHelpFormatter
2 import os
3 import textwrap
4 import sys
5
6 from conda import config
7 from conda.cli import common
8 from conda.cli import install as cli_install
9 from conda.misc import touch_nonadmin
10 from ..installers.base import get_installer, InvalidInstaller
11 from .. import specs as install_specs
12 from .. import exceptions
13 # for conda env
14 from conda_env.cli.common import get_prefix
15 from ..exceptions import CondaEnvException
16 description = """
17 Update the current environment based on environment file
18 """
19
20 example = """
21 examples:
22 conda env update
23 conda env update -n=foo
24 conda env update -f=/path/to/environment.yml
25 conda env update --name=foo --file=environment.yml
26 conda env update vader/deathstar
27 """
28
29
30 def configure_parser(sub_parsers):
31 p = sub_parsers.add_parser(
32 'update',
33 formatter_class=RawDescriptionHelpFormatter,
34 description=description,
35 help=description,
36 epilog=example,
37 )
38 p.add_argument(
39 '-n', '--name',
40 action='store',
41 help='name of environment (in %s)' % os.pathsep.join(config.envs_dirs),
42 default=None,
43 )
44 p.add_argument(
45 '-f', '--file',
46 action='store',
47 help='environment definition (default: environment.yml)',
48 default='environment.yml',
49 )
50 p.add_argument(
51 '--prune',
52 action='store_true',
53 default=False,
54 help='remove installed packages not defined in environment.yml',
55 )
56 p.add_argument(
57 '-q', '--quiet',
58 action='store_true',
59 default=False,
60 )
61 p.add_argument(
62 'remote_definition',
63 help='remote environment definition / IPython notebook',
64 action='store',
65 default=None,
66 nargs='?'
67 )
68 common.add_parser_json(p)
69 p.set_defaults(func=execute)
70
71
72 def execute(args, parser):
73 name = args.remote_definition or args.name
74
75 try:
76 spec = install_specs.detect(name=name, filename=args.file,
77 directory=os.getcwd())
78 env = spec.environment
79 except exceptions.SpecNotFound:
80 raise
81
82 if not (args.name or args.prefix):
83 if not env.name:
84 # Note, this is a hack fofr get_prefix that assumes argparse results
85 # TODO Refactor common.get_prefix
86 name = os.environ.get('CONDA_DEFAULT_ENV', False)
87 if not name:
88 msg = "Unable to determine environment\n\n"
89 msg += textwrap.dedent("""
90 Please re-run this command with one of the following options:
91
92 * Provide an environment name via --name or -n
93 * Re-run this command inside an activated conda environment.""").lstrip()
94 # TODO Add json support
95 raise CondaEnvException(msg)
96
97 # Note: stubbing out the args object as all of the
98 # conda.cli.common code thinks that name will always
99 # be specified.
100 args.name = env.name
101
102 prefix = get_prefix(args, search=False)
103 # CAN'T Check with this function since it assumes we will create prefix.
104 # cli_install.check_prefix(prefix, json=args.json)
105
106 # TODO, add capability
107 # common.ensure_override_channels_requires_channel(args)
108 # channel_urls = args.channel or ()
109
110 for installer_type, specs in env.dependencies.items():
111 try:
112 installer = get_installer(installer_type)
113 installer.install(prefix, specs, args, env, prune=args.prune)
114 except InvalidInstaller:
115 sys.stderr.write(textwrap.dedent("""
116 Unable to install package for {0}.
117
118 Please double check and ensure you dependencies file has
119 the correct spelling. You might also try installing the
120 conda-env-{0} package to see if provides the required
121 installer.
122 """).lstrip().format(installer_type)
123 )
124 return -1
125
126 touch_nonadmin(prefix)
127 if not args.json:
128 print(cli_install.print_activate(args.name if args.name else prefix))
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_env/cli/main_update.py b/conda_env/cli/main_update.py
--- a/conda_env/cli/main_update.py
+++ b/conda_env/cli/main_update.py
@@ -1,18 +1,16 @@
from argparse import RawDescriptionHelpFormatter
import os
-import textwrap
import sys
+import textwrap
-from conda import config
-from conda.cli import common
-from conda.cli import install as cli_install
+from conda.cli import common, install as cli_install
from conda.misc import touch_nonadmin
-from ..installers.base import get_installer, InvalidInstaller
-from .. import specs as install_specs
-from .. import exceptions
# for conda env
from conda_env.cli.common import get_prefix
+from .. import exceptions, specs as install_specs
from ..exceptions import CondaEnvException
+from ..installers.base import InvalidInstaller, get_installer
+
description = """
Update the current environment based on environment file
"""
@@ -35,12 +33,7 @@
help=description,
epilog=example,
)
- p.add_argument(
- '-n', '--name',
- action='store',
- help='name of environment (in %s)' % os.pathsep.join(config.envs_dirs),
- default=None,
- )
+ common.add_parser_prefix(p)
p.add_argument(
'-f', '--file',
action='store',
| {"golden_diff": "diff --git a/conda_env/cli/main_update.py b/conda_env/cli/main_update.py\n--- a/conda_env/cli/main_update.py\n+++ b/conda_env/cli/main_update.py\n@@ -1,18 +1,16 @@\n from argparse import RawDescriptionHelpFormatter\n import os\n-import textwrap\n import sys\n+import textwrap\n \n-from conda import config\n-from conda.cli import common\n-from conda.cli import install as cli_install\n+from conda.cli import common, install as cli_install\n from conda.misc import touch_nonadmin\n-from ..installers.base import get_installer, InvalidInstaller\n-from .. import specs as install_specs\n-from .. import exceptions\n # for conda env\n from conda_env.cli.common import get_prefix\n+from .. import exceptions, specs as install_specs\n from ..exceptions import CondaEnvException\n+from ..installers.base import InvalidInstaller, get_installer\n+\n description = \"\"\"\n Update the current environment based on environment file\n \"\"\"\n@@ -35,12 +33,7 @@\n help=description,\n epilog=example,\n )\n- p.add_argument(\n- '-n', '--name',\n- action='store',\n- help='name of environment (in %s)' % os.pathsep.join(config.envs_dirs),\n- default=None,\n- )\n+ common.add_parser_prefix(p)\n p.add_argument(\n '-f', '--file',\n action='store',\n", "issue": "conda-env update error in 4.3.20\n```\r\nconda env update\r\nAn unexpected error has occurred.\r\nPlease consider posting the following information to the\r\nconda GitHub issue tracker at:\r\n https://github.com/conda/conda/issues\r\nCurrent conda install:\r\n platform : linux-64\r\n conda version : 4.3.20\r\n conda is private : False\r\n conda-env version : 4.3.20\r\n conda-build version : not installed\r\n python version : 3.5.2.final.0\r\n requests version : 2.14.2\r\n root environment : /home/travis/miniconda (writable)\r\n default environment : /home/travis/miniconda\r\n envs directories : /home/travis/miniconda/envs\r\n /home/travis/.conda/envs\r\n package cache : /home/travis/miniconda/pkgs\r\n /home/travis/.conda/pkgs\r\n channel URLs : https://conda.anaconda.org/conda-canary/linux-64\r\n https://conda.anaconda.org/conda-canary/noarch\r\n https://repo.continuum.io/pkgs/free/linux-64\r\n https://repo.continuum.io/pkgs/free/noarch\r\n https://repo.continuum.io/pkgs/r/linux-64\r\n https://repo.continuum.io/pkgs/r/noarch\r\n https://repo.continuum.io/pkgs/pro/linux-64\r\n https://repo.continuum.io/pkgs/pro/noarch\r\n config file : /home/travis/.condarc\r\n netrc file : None\r\n offline mode : False\r\n user-agent : conda/4.3.20 requests/2.14.2 CPython/3.5.2 Linux/4.4.0-51-generic debian/jessie/sid glibc/2.19 \r\n UID:GID : 1000:1000\r\n`$ /home/travis/miniconda/bin/conda-env update`\r\n Traceback (most recent call last):\r\n File \"/home/travis/miniconda/lib/python3.5/site-packages/conda/exceptions.py\", line 632, in conda_exception_handler\r\n return_value = func(*args, **kwargs)\r\n File \"/home/travis/miniconda/lib/python3.5/site-packages/conda_env/cli/main_update.py\", line 82, in execute\r\n if not (args.name or args.prefix):\r\n AttributeError: 'Namespace' object has no attribute 'prefix'\r\n```\n", "before_files": [{"content": "from argparse import RawDescriptionHelpFormatter\nimport os\nimport textwrap\nimport sys\n\nfrom conda import config\nfrom conda.cli import common\nfrom conda.cli import install as cli_install\nfrom conda.misc import touch_nonadmin\nfrom ..installers.base import get_installer, InvalidInstaller\nfrom .. import specs as install_specs\nfrom .. 
import exceptions\n# for conda env\nfrom conda_env.cli.common import get_prefix\nfrom ..exceptions import CondaEnvException\ndescription = \"\"\"\nUpdate the current environment based on environment file\n\"\"\"\n\nexample = \"\"\"\nexamples:\n conda env update\n conda env update -n=foo\n conda env update -f=/path/to/environment.yml\n conda env update --name=foo --file=environment.yml\n conda env update vader/deathstar\n\"\"\"\n\n\ndef configure_parser(sub_parsers):\n p = sub_parsers.add_parser(\n 'update',\n formatter_class=RawDescriptionHelpFormatter,\n description=description,\n help=description,\n epilog=example,\n )\n p.add_argument(\n '-n', '--name',\n action='store',\n help='name of environment (in %s)' % os.pathsep.join(config.envs_dirs),\n default=None,\n )\n p.add_argument(\n '-f', '--file',\n action='store',\n help='environment definition (default: environment.yml)',\n default='environment.yml',\n )\n p.add_argument(\n '--prune',\n action='store_true',\n default=False,\n help='remove installed packages not defined in environment.yml',\n )\n p.add_argument(\n '-q', '--quiet',\n action='store_true',\n default=False,\n )\n p.add_argument(\n 'remote_definition',\n help='remote environment definition / IPython notebook',\n action='store',\n default=None,\n nargs='?'\n )\n common.add_parser_json(p)\n p.set_defaults(func=execute)\n\n\ndef execute(args, parser):\n name = args.remote_definition or args.name\n\n try:\n spec = install_specs.detect(name=name, filename=args.file,\n directory=os.getcwd())\n env = spec.environment\n except exceptions.SpecNotFound:\n raise\n\n if not (args.name or args.prefix):\n if not env.name:\n # Note, this is a hack fofr get_prefix that assumes argparse results\n # TODO Refactor common.get_prefix\n name = os.environ.get('CONDA_DEFAULT_ENV', False)\n if not name:\n msg = \"Unable to determine environment\\n\\n\"\n msg += textwrap.dedent(\"\"\"\n Please re-run this command with one of the following options:\n\n * Provide an environment name via --name or -n\n * Re-run this command inside an activated conda environment.\"\"\").lstrip()\n # TODO Add json support\n raise CondaEnvException(msg)\n\n # Note: stubbing out the args object as all of the\n # conda.cli.common code thinks that name will always\n # be specified.\n args.name = env.name\n\n prefix = get_prefix(args, search=False)\n # CAN'T Check with this function since it assumes we will create prefix.\n # cli_install.check_prefix(prefix, json=args.json)\n\n # TODO, add capability\n # common.ensure_override_channels_requires_channel(args)\n # channel_urls = args.channel or ()\n\n for installer_type, specs in env.dependencies.items():\n try:\n installer = get_installer(installer_type)\n installer.install(prefix, specs, args, env, prune=args.prune)\n except InvalidInstaller:\n sys.stderr.write(textwrap.dedent(\"\"\"\n Unable to install package for {0}.\n\n Please double check and ensure you dependencies file has\n the correct spelling. 
You might also try installing the\n conda-env-{0} package to see if provides the required\n installer.\n \"\"\").lstrip().format(installer_type)\n )\n return -1\n\n touch_nonadmin(prefix)\n if not args.json:\n print(cli_install.print_activate(args.name if args.name else prefix))\n", "path": "conda_env/cli/main_update.py"}], "after_files": [{"content": "from argparse import RawDescriptionHelpFormatter\nimport os\nimport sys\nimport textwrap\n\nfrom conda.cli import common, install as cli_install\nfrom conda.misc import touch_nonadmin\n# for conda env\nfrom conda_env.cli.common import get_prefix\nfrom .. import exceptions, specs as install_specs\nfrom ..exceptions import CondaEnvException\nfrom ..installers.base import InvalidInstaller, get_installer\n\ndescription = \"\"\"\nUpdate the current environment based on environment file\n\"\"\"\n\nexample = \"\"\"\nexamples:\n conda env update\n conda env update -n=foo\n conda env update -f=/path/to/environment.yml\n conda env update --name=foo --file=environment.yml\n conda env update vader/deathstar\n\"\"\"\n\n\ndef configure_parser(sub_parsers):\n p = sub_parsers.add_parser(\n 'update',\n formatter_class=RawDescriptionHelpFormatter,\n description=description,\n help=description,\n epilog=example,\n )\n common.add_parser_prefix(p)\n p.add_argument(\n '-f', '--file',\n action='store',\n help='environment definition (default: environment.yml)',\n default='environment.yml',\n )\n p.add_argument(\n '--prune',\n action='store_true',\n default=False,\n help='remove installed packages not defined in environment.yml',\n )\n p.add_argument(\n '-q', '--quiet',\n action='store_true',\n default=False,\n )\n p.add_argument(\n 'remote_definition',\n help='remote environment definition / IPython notebook',\n action='store',\n default=None,\n nargs='?'\n )\n common.add_parser_json(p)\n p.set_defaults(func=execute)\n\n\ndef execute(args, parser):\n name = args.remote_definition or args.name\n\n try:\n spec = install_specs.detect(name=name, filename=args.file,\n directory=os.getcwd())\n env = spec.environment\n except exceptions.SpecNotFound:\n raise\n\n if not (args.name or args.prefix):\n if not env.name:\n # Note, this is a hack fofr get_prefix that assumes argparse results\n # TODO Refactor common.get_prefix\n name = os.environ.get('CONDA_DEFAULT_ENV', False)\n if not name:\n msg = \"Unable to determine environment\\n\\n\"\n msg += textwrap.dedent(\"\"\"\n Please re-run this command with one of the following options:\n\n * Provide an environment name via --name or -n\n * Re-run this command inside an activated conda environment.\"\"\").lstrip()\n # TODO Add json support\n raise CondaEnvException(msg)\n\n # Note: stubbing out the args object as all of the\n # conda.cli.common code thinks that name will always\n # be specified.\n args.name = env.name\n\n prefix = get_prefix(args, search=False)\n # CAN'T Check with this function since it assumes we will create prefix.\n # cli_install.check_prefix(prefix, json=args.json)\n\n # TODO, add capability\n # common.ensure_override_channels_requires_channel(args)\n # channel_urls = args.channel or ()\n\n for installer_type, specs in env.dependencies.items():\n try:\n installer = get_installer(installer_type)\n installer.install(prefix, specs, args, env, prune=args.prune)\n except InvalidInstaller:\n sys.stderr.write(textwrap.dedent(\"\"\"\n Unable to install package for {0}.\n\n Please double check and ensure you dependencies file has\n the correct spelling. 
You might also try installing the\n conda-env-{0} package to see if provides the required\n installer.\n \"\"\").lstrip().format(installer_type)\n )\n return -1\n\n touch_nonadmin(prefix)\n if not args.json:\n print(cli_install.print_activate(args.name if args.name else prefix))\n", "path": "conda_env/cli/main_update.py"}]} | 1,978 | 311 |
gh_patches_debug_4431 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-2528 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mycroft "devices" web UI doesn't show core version
Version/setup same as MycroftAI/mycroft-core#2523 2523
## Try to provide steps that we can use to replicate the Issue
Hit up https://account.mycroft.ai/devices

## Provide log files or other output to help us see the error
N/A TBD (can help investigate let me know how) per the ref'd ticket the "self support" method didn't work
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/version/__init__.py`
Content:
```
1 # Copyright 2017 Mycroft AI Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import json
16
17 from genericpath import exists, isfile
18 from os.path import join, expanduser
19
20 from mycroft.configuration import Configuration
21 from mycroft.util.log import LOG
22
23
24 # The following lines are replaced during the release process.
25 # START_VERSION_BLOCK
26 CORE_VERSION_MAJOR = 20
27 CORE_VERSION_MINOR = 2
28 CORE_VERSION_BUILD = 1
29 # END_VERSION_BLOCK
30
31 CORE_VERSION_TUPLE = (CORE_VERSION_MAJOR,
32 CORE_VERSION_MINOR,
33 CORE_VERSION_BUILD)
34 CORE_VERSION_STR = '.'.join(map(str, CORE_VERSION_TUPLE))
35
36
37 class VersionManager:
38 @staticmethod
39 def get():
40 data_dir = expanduser(Configuration.get()['data_dir'])
41 version_file = join(data_dir, 'version.json')
42 if exists(version_file) and isfile(version_file):
43 try:
44 with open(version_file) as f:
45 return json.load(f)
46 except Exception:
47 LOG.error("Failed to load version from '%s'" % version_file)
48 return {"coreVersion": None, "enclosureVersion": None}
49
50
51 def check_version(version_string):
52 """
53 Check if current version is equal or higher than the
54 version string provided to the function
55
56 Args:
57 version_string (string): version string ('Major.Minor.Build')
58 """
59 version_tuple = tuple(map(int, version_string.split('.')))
60 return CORE_VERSION_TUPLE >= version_tuple
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mycroft/version/__init__.py b/mycroft/version/__init__.py
--- a/mycroft/version/__init__.py
+++ b/mycroft/version/__init__.py
@@ -45,7 +45,7 @@
return json.load(f)
except Exception:
LOG.error("Failed to load version from '%s'" % version_file)
- return {"coreVersion": None, "enclosureVersion": None}
+ return {"coreVersion": CORE_VERSION_STR, "enclosureVersion": None}
def check_version(version_string):
| {"golden_diff": "diff --git a/mycroft/version/__init__.py b/mycroft/version/__init__.py\n--- a/mycroft/version/__init__.py\n+++ b/mycroft/version/__init__.py\n@@ -45,7 +45,7 @@\n return json.load(f)\n except Exception:\n LOG.error(\"Failed to load version from '%s'\" % version_file)\n- return {\"coreVersion\": None, \"enclosureVersion\": None}\n+ return {\"coreVersion\": CORE_VERSION_STR, \"enclosureVersion\": None}\n \n \n def check_version(version_string):\n", "issue": "mycroft \"devices\" web UI doesn't show core version\n\r\nVersion/setup same as MycroftAI/mycroft-core#2523 2523\r\n\r\n## Try to provide steps that we can use to replicate the Issue\r\n\r\nHit up https://account.mycroft.ai/devices\r\n\r\n\r\n## Provide log files or other output to help us see the error\r\n\r\nN/A TBD (can help investigate let me know how) per the ref'd ticket the \"self support\" method didn't work\n", "before_files": [{"content": "# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport json\n\nfrom genericpath import exists, isfile\nfrom os.path import join, expanduser\n\nfrom mycroft.configuration import Configuration\nfrom mycroft.util.log import LOG\n\n\n# The following lines are replaced during the release process.\n# START_VERSION_BLOCK\nCORE_VERSION_MAJOR = 20\nCORE_VERSION_MINOR = 2\nCORE_VERSION_BUILD = 1\n# END_VERSION_BLOCK\n\nCORE_VERSION_TUPLE = (CORE_VERSION_MAJOR,\n CORE_VERSION_MINOR,\n CORE_VERSION_BUILD)\nCORE_VERSION_STR = '.'.join(map(str, CORE_VERSION_TUPLE))\n\n\nclass VersionManager:\n @staticmethod\n def get():\n data_dir = expanduser(Configuration.get()['data_dir'])\n version_file = join(data_dir, 'version.json')\n if exists(version_file) and isfile(version_file):\n try:\n with open(version_file) as f:\n return json.load(f)\n except Exception:\n LOG.error(\"Failed to load version from '%s'\" % version_file)\n return {\"coreVersion\": None, \"enclosureVersion\": None}\n\n\ndef check_version(version_string):\n \"\"\"\n Check if current version is equal or higher than the\n version string provided to the function\n\n Args:\n version_string (string): version string ('Major.Minor.Build')\n \"\"\"\n version_tuple = tuple(map(int, version_string.split('.')))\n return CORE_VERSION_TUPLE >= version_tuple\n", "path": "mycroft/version/__init__.py"}], "after_files": [{"content": "# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport json\n\nfrom genericpath import exists, isfile\nfrom os.path import join, 
expanduser\n\nfrom mycroft.configuration import Configuration\nfrom mycroft.util.log import LOG\n\n\n# The following lines are replaced during the release process.\n# START_VERSION_BLOCK\nCORE_VERSION_MAJOR = 20\nCORE_VERSION_MINOR = 2\nCORE_VERSION_BUILD = 1\n# END_VERSION_BLOCK\n\nCORE_VERSION_TUPLE = (CORE_VERSION_MAJOR,\n CORE_VERSION_MINOR,\n CORE_VERSION_BUILD)\nCORE_VERSION_STR = '.'.join(map(str, CORE_VERSION_TUPLE))\n\n\nclass VersionManager:\n @staticmethod\n def get():\n data_dir = expanduser(Configuration.get()['data_dir'])\n version_file = join(data_dir, 'version.json')\n if exists(version_file) and isfile(version_file):\n try:\n with open(version_file) as f:\n return json.load(f)\n except Exception:\n LOG.error(\"Failed to load version from '%s'\" % version_file)\n return {\"coreVersion\": CORE_VERSION_STR, \"enclosureVersion\": None}\n\n\ndef check_version(version_string):\n \"\"\"\n Check if current version is equal or higher than the\n version string provided to the function\n\n Args:\n version_string (string): version string ('Major.Minor.Build')\n \"\"\"\n version_tuple = tuple(map(int, version_string.split('.')))\n return CORE_VERSION_TUPLE >= version_tuple\n", "path": "mycroft/version/__init__.py"}]} | 970 | 119 |
gh_patches_debug_12730 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-8050 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-2804] Exclude `click==8.1.4` from dependencies
## Problem
When `click==8.1.4` was released, our code quality workflows using `mypy` began failing. An issue has been created on the [click repository](https://github.com/pallets/click/issues/2558).
## Solution
The solution is to exclude, for the time being, `click==8.1.4`. Currently our click dependency is set to `click>=7.0,<9.0`, this should become on main `click>=8.1.1,<8.1.4`.
## Backports
We need to backport this fix to `1.3.latest`, `1.4.latest`, and `1.5.latest`. For the backports we should update the dependency from `click>=7.0,<9.0` to `click>=7.0,<8.1.4`. The reason for the different specification in the backports is that we already support `click 7.x` in these earlier versions. Dropping support for `click 7.x` could be problematic if people are installing dbt-core alongside other dependencies which limit click to `7.x.`, then dropping support for `click 7.x` would represent a breaking change (and we shouldn't do this in a patch version).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 8):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.8 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.6.0b8"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": ["dbt = dbt.cli.main:cli"],
47 },
48 install_requires=[
49 # ----
50 # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).
51 # Pin to the patch or minor version, and bump in each new minor version of dbt-core.
52 "agate~=1.7.0",
53 "Jinja2~=3.1.2",
54 "mashumaro[msgpack]~=3.8.1",
55 # ----
56 # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)
57 # The dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core
58 "logbook>=1.5,<1.6",
59 # ----
60 # dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility
61 # with major versions in each new minor version of dbt-core.
62 "click>=7.0,<9",
63 "networkx>=2.3,<4",
64 # ----
65 # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)
66 # and check compatibility / bump in each new minor version of dbt-core.
67 "colorama>=0.3.9,<0.5",
68 "pathspec>=0.9,<0.12",
69 "isodate>=0.6,<0.7",
70 # ----
71 # There was a pin to below 0.4.4 for a while due to a bug in Ubuntu/sqlparse 0.4.4
72 "sqlparse>=0.2.3",
73 # ----
74 # These are major-version-0 packages also maintained by dbt-labs. Accept patches.
75 "dbt-extractor~=0.4.1",
76 "hologram~=0.0.16", # includes transitive dependencies on python-dateutil and jsonschema
77 "minimal-snowplow-tracker~=0.0.2",
78 # DSI is under active development, so we're pinning to specific dev versions for now.
79 # TODO: Before RC/final release, update to use ~= pinning.
80 "dbt-semantic-interfaces==0.1.0.dev8",
81 # ----
82 # Expect compatibility with all new versions of these packages, so lower bounds only.
83 "packaging>20.9",
84 "protobuf>=4.0.0",
85 "pytz>=2015.7",
86 "pyyaml>=6.0",
87 "typing-extensions>=3.7.4",
88 # ----
89 # Match snowflake-connector-python, to ensure compatibility in dbt-snowflake
90 "cffi>=1.9,<2.0.0",
91 "idna>=2.5,<4",
92 "requests<3.0.0",
93 "urllib3~=1.0",
94 # ----
95 ],
96 zip_safe=False,
97 classifiers=[
98 "Development Status :: 5 - Production/Stable",
99 "License :: OSI Approved :: Apache Software License",
100 "Operating System :: Microsoft :: Windows",
101 "Operating System :: MacOS :: MacOS X",
102 "Operating System :: POSIX :: Linux",
103 "Programming Language :: Python :: 3.8",
104 "Programming Language :: Python :: 3.9",
105 "Programming Language :: Python :: 3.10",
106 "Programming Language :: Python :: 3.11",
107 ],
108 python_requires=">=3.8",
109 )
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -59,7 +59,8 @@
# ----
# dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility
# with major versions in each new minor version of dbt-core.
- "click>=7.0,<9",
+ # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558
+ "click>=8.1.1,<8.1.4",
"networkx>=2.3,<4",
# ----
# These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -59,7 +59,8 @@\n # ----\n # dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility\n # with major versions in each new minor version of dbt-core.\n- \"click>=7.0,<9\",\n+ # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558\n+ \"click>=8.1.1,<8.1.4\",\n \"networkx>=2.3,<4\",\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n", "issue": "[CT-2804] Exclude `click==8.1.4` from dependencies\n## Problem\r\nWhen `click==8.1.4` was released, our code quality workflows using `mypy` began failing. An issue has been created on the [click repository](https://github.com/pallets/click/issues/2558). \r\n\r\n## Solution\r\nThe solution is to exclude, for the time being, `click==8.1.4`. Currently our click dependency is set to `click>=7.0,<9.0`, this should become on main `click>=8.1.1,<8.1.4`.\r\n\r\n## Backports\r\nWe need to backport this fix to `1.3.latest`, `1.4.latest`, and `1.5.latest`. For the backports we should update the dependency from `click>=7.0,<9.0` to `click>=7.0,<8.1.4`. The reason for the different specification in the backports is that we already support `click 7.x` in these earlier versions. Dropping support for `click 7.x` could be problematic if people are installing dbt-core alongside other dependencies which limit click to `7.x.`, then dropping support for `click 7.x` would represent a breaking change (and we shouldn't do this in a patch version).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 8):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.8 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.6.0b8\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n # ----\n # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).\n # Pin to the patch or minor version, and bump in each new minor version of dbt-core.\n \"agate~=1.7.0\",\n \"Jinja2~=3.1.2\",\n \"mashumaro[msgpack]~=3.8.1\",\n # ----\n # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)\n # The 
dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core\n \"logbook>=1.5,<1.6\",\n # ----\n # dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility\n # with major versions in each new minor version of dbt-core.\n \"click>=7.0,<9\",\n \"networkx>=2.3,<4\",\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n # and check compatibility / bump in each new minor version of dbt-core.\n \"colorama>=0.3.9,<0.5\",\n \"pathspec>=0.9,<0.12\",\n \"isodate>=0.6,<0.7\",\n # ----\n # There was a pin to below 0.4.4 for a while due to a bug in Ubuntu/sqlparse 0.4.4\n \"sqlparse>=0.2.3\",\n # ----\n # These are major-version-0 packages also maintained by dbt-labs. Accept patches.\n \"dbt-extractor~=0.4.1\",\n \"hologram~=0.0.16\", # includes transitive dependencies on python-dateutil and jsonschema\n \"minimal-snowplow-tracker~=0.0.2\",\n # DSI is under active development, so we're pinning to specific dev versions for now.\n # TODO: Before RC/final release, update to use ~= pinning.\n \"dbt-semantic-interfaces==0.1.0.dev8\",\n # ----\n # Expect compatibility with all new versions of these packages, so lower bounds only.\n \"packaging>20.9\",\n \"protobuf>=4.0.0\",\n \"pytz>=2015.7\",\n \"pyyaml>=6.0\",\n \"typing-extensions>=3.7.4\",\n # ----\n # Match snowflake-connector-python, to ensure compatibility in dbt-snowflake\n \"cffi>=1.9,<2.0.0\",\n \"idna>=2.5,<4\",\n \"requests<3.0.0\",\n \"urllib3~=1.0\",\n # ----\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.8\",\n)\n", "path": "core/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 8):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.8 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.6.0b8\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n # 
----\n # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).\n # Pin to the patch or minor version, and bump in each new minor version of dbt-core.\n \"agate~=1.7.0\",\n \"Jinja2~=3.1.2\",\n \"mashumaro[msgpack]~=3.8.1\",\n # ----\n # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)\n # The dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core\n \"logbook>=1.5,<1.6\",\n # ----\n # dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility\n # with major versions in each new minor version of dbt-core.\n # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558\n \"click>=8.1.1,<8.1.4\",\n \"networkx>=2.3,<4\",\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n # and check compatibility / bump in each new minor version of dbt-core.\n \"colorama>=0.3.9,<0.5\",\n \"pathspec>=0.9,<0.12\",\n \"isodate>=0.6,<0.7\",\n # ----\n # There was a pin to below 0.4.4 for a while due to a bug in Ubuntu/sqlparse 0.4.4\n \"sqlparse>=0.2.3\",\n # ----\n # These are major-version-0 packages also maintained by dbt-labs. Accept patches.\n \"dbt-extractor~=0.4.1\",\n \"hologram~=0.0.16\", # includes transitive dependencies on python-dateutil and jsonschema\n \"minimal-snowplow-tracker~=0.0.2\",\n # DSI is under active development, so we're pinning to specific dev versions for now.\n # TODO: Before RC/final release, update to use ~= pinning.\n \"dbt-semantic-interfaces==0.1.0.dev8\",\n # ----\n # Expect compatibility with all new versions of these packages, so lower bounds only.\n \"packaging>20.9\",\n \"protobuf>=4.0.0\",\n \"pytz>=2015.7\",\n \"pyyaml>=6.0\",\n \"typing-extensions>=3.7.4\",\n # ----\n # Match snowflake-connector-python, to ensure compatibility in dbt-snowflake\n \"cffi>=1.9,<2.0.0\",\n \"idna>=2.5,<4\",\n \"requests<3.0.0\",\n \"urllib3~=1.0\",\n # ----\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.8\",\n)\n", "path": "core/setup.py"}]} | 1,888 | 172 |
gh_patches_debug_6728 | rasdani/github-patches | git_diff | ydataai__ydata-profiling-1023 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect duplicate rows count
### Current Behaviour
The duplicated rows count is different between pandas and pandas-profiling when there are nan's in columns
### Expected Behaviour
The count should be equal
### Data Description
I attach a simple example

### Code that reproduces the bug
```Python
import pandas as pd
import numpy as np
df = pd.DataFrame({"a": [np.nan, np.nan, 2], "b": [1, 1, 3]})
sum(df.duplicated())
from pandas_profiling import ProfileReport
profile = ProfileReport(df, title="Pandas Profiling Report")
```
### pandas-profiling version
3.2.0
### Dependencies
```Text
numpy==1.22.4
pandas==1.3.3
```
### OS
_No response_
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pandas_profiling/model/pandas/duplicates_pandas.py`
Content:
```
1 from typing import Any, Dict, Optional, Sequence, Tuple
2
3 import pandas as pd
4
5 from pandas_profiling.config import Settings
6 from pandas_profiling.model.duplicates import get_duplicates
7
8
9 @get_duplicates.register(Settings, pd.DataFrame, Sequence)
10 def pandas_get_duplicates(
11 config: Settings, df: pd.DataFrame, supported_columns: Sequence
12 ) -> Tuple[Dict[str, Any], Optional[pd.DataFrame]]:
13 """Obtain the most occurring duplicate rows in the DataFrame.
14
15 Args:
16 config: report Settings object
17 df: the Pandas DataFrame.
18 supported_columns: the columns to consider
19
20 Returns:
21 A subset of the DataFrame, ordered by occurrence.
22 """
23 n_head = config.duplicates.head
24
25 metrics: Dict[str, Any] = {}
26 if n_head > 0:
27 if supported_columns and len(df) > 0:
28 duplicates_key = config.duplicates.key
29 if duplicates_key in df.columns:
30 raise ValueError(
31 f"Duplicates key ({duplicates_key}) may not be part of the DataFrame. Either change the "
32 f" column name in the DataFrame or change the 'duplicates.key' parameter."
33 )
34
35 duplicated_rows = df.duplicated(subset=supported_columns, keep=False)
36 duplicated_rows = (
37 df[duplicated_rows]
38 .groupby(supported_columns)
39 .size()
40 .reset_index(name=duplicates_key)
41 )
42
43 metrics["n_duplicates"] = len(duplicated_rows[duplicates_key])
44 metrics["p_duplicates"] = metrics["n_duplicates"] / len(df)
45
46 return (
47 metrics,
48 duplicated_rows.nlargest(n_head, duplicates_key),
49 )
50 else:
51 metrics["n_duplicates"] = 0
52 metrics["p_duplicates"] = 0.0
53 return metrics, None
54 else:
55 return metrics, None
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pandas_profiling/model/pandas/duplicates_pandas.py b/src/pandas_profiling/model/pandas/duplicates_pandas.py
--- a/src/pandas_profiling/model/pandas/duplicates_pandas.py
+++ b/src/pandas_profiling/model/pandas/duplicates_pandas.py
@@ -35,7 +35,7 @@
duplicated_rows = df.duplicated(subset=supported_columns, keep=False)
duplicated_rows = (
df[duplicated_rows]
- .groupby(supported_columns)
+ .groupby(supported_columns, dropna=False)
.size()
.reset_index(name=duplicates_key)
)
| {"golden_diff": "diff --git a/src/pandas_profiling/model/pandas/duplicates_pandas.py b/src/pandas_profiling/model/pandas/duplicates_pandas.py\n--- a/src/pandas_profiling/model/pandas/duplicates_pandas.py\n+++ b/src/pandas_profiling/model/pandas/duplicates_pandas.py\n@@ -35,7 +35,7 @@\n duplicated_rows = df.duplicated(subset=supported_columns, keep=False)\n duplicated_rows = (\n df[duplicated_rows]\n- .groupby(supported_columns)\n+ .groupby(supported_columns, dropna=False)\n .size()\n .reset_index(name=duplicates_key)\n )\n", "issue": "Incorrect duplicate rows count\n### Current Behaviour\n\nThe duplicated rows count is different between pandas and pandas-profiling when there are nan's in columns\n\n### Expected Behaviour\n\nThe count should be equal\n\n### Data Description\n\nI attach a simple example\r\n\r\n\r\n\n\n### Code that reproduces the bug\n\n```Python\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndf = pd.DataFrame({\"a\": [np.nan, np.nan, 2], \"b\": [1, 1, 3]})\r\nsum(df.duplicated())\r\n\r\nfrom pandas_profiling import ProfileReport\r\n\r\nprofile = ProfileReport(df, title=\"Pandas Profiling Report\")\n```\n\n\n### pandas-profiling version\n\n3.2.0\n\n### Dependencies\n\n```Text\nnumpy==1.22.4\r\npandas==1.3.3\n```\n\n\n### OS\n\n_No response_\n\n### Checklist\n\n- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)\n- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.\n- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).\n", "before_files": [{"content": "from typing import Any, Dict, Optional, Sequence, Tuple\n\nimport pandas as pd\n\nfrom pandas_profiling.config import Settings\nfrom pandas_profiling.model.duplicates import get_duplicates\n\n\n@get_duplicates.register(Settings, pd.DataFrame, Sequence)\ndef pandas_get_duplicates(\n config: Settings, df: pd.DataFrame, supported_columns: Sequence\n) -> Tuple[Dict[str, Any], Optional[pd.DataFrame]]:\n \"\"\"Obtain the most occurring duplicate rows in the DataFrame.\n\n Args:\n config: report Settings object\n df: the Pandas DataFrame.\n supported_columns: the columns to consider\n\n Returns:\n A subset of the DataFrame, ordered by occurrence.\n \"\"\"\n n_head = config.duplicates.head\n\n metrics: Dict[str, Any] = {}\n if n_head > 0:\n if supported_columns and len(df) > 0:\n duplicates_key = config.duplicates.key\n if duplicates_key in df.columns:\n raise ValueError(\n f\"Duplicates key ({duplicates_key}) may not be part of the DataFrame. 
Either change the \"\n f\" column name in the DataFrame or change the 'duplicates.key' parameter.\"\n )\n\n duplicated_rows = df.duplicated(subset=supported_columns, keep=False)\n duplicated_rows = (\n df[duplicated_rows]\n .groupby(supported_columns)\n .size()\n .reset_index(name=duplicates_key)\n )\n\n metrics[\"n_duplicates\"] = len(duplicated_rows[duplicates_key])\n metrics[\"p_duplicates\"] = metrics[\"n_duplicates\"] / len(df)\n\n return (\n metrics,\n duplicated_rows.nlargest(n_head, duplicates_key),\n )\n else:\n metrics[\"n_duplicates\"] = 0\n metrics[\"p_duplicates\"] = 0.0\n return metrics, None\n else:\n return metrics, None\n", "path": "src/pandas_profiling/model/pandas/duplicates_pandas.py"}], "after_files": [{"content": "from typing import Any, Dict, Optional, Sequence, Tuple\n\nimport pandas as pd\n\nfrom pandas_profiling.config import Settings\nfrom pandas_profiling.model.duplicates import get_duplicates\n\n\n@get_duplicates.register(Settings, pd.DataFrame, Sequence)\ndef pandas_get_duplicates(\n config: Settings, df: pd.DataFrame, supported_columns: Sequence\n) -> Tuple[Dict[str, Any], Optional[pd.DataFrame]]:\n \"\"\"Obtain the most occurring duplicate rows in the DataFrame.\n\n Args:\n config: report Settings object\n df: the Pandas DataFrame.\n supported_columns: the columns to consider\n\n Returns:\n A subset of the DataFrame, ordered by occurrence.\n \"\"\"\n n_head = config.duplicates.head\n\n metrics: Dict[str, Any] = {}\n if n_head > 0:\n if supported_columns and len(df) > 0:\n duplicates_key = config.duplicates.key\n if duplicates_key in df.columns:\n raise ValueError(\n f\"Duplicates key ({duplicates_key}) may not be part of the DataFrame. Either change the \"\n f\" column name in the DataFrame or change the 'duplicates.key' parameter.\"\n )\n\n duplicated_rows = df.duplicated(subset=supported_columns, keep=False)\n duplicated_rows = (\n df[duplicated_rows]\n .groupby(supported_columns, dropna=False)\n .size()\n .reset_index(name=duplicates_key)\n )\n\n metrics[\"n_duplicates\"] = len(duplicated_rows[duplicates_key])\n metrics[\"p_duplicates\"] = metrics[\"n_duplicates\"] / len(df)\n\n return (\n metrics,\n duplicated_rows.nlargest(n_head, duplicates_key),\n )\n else:\n metrics[\"n_duplicates\"] = 0\n metrics[\"p_duplicates\"] = 0.0\n return metrics, None\n else:\n return metrics, None\n", "path": "src/pandas_profiling/model/pandas/duplicates_pandas.py"}]} | 1,122 | 140 |
gh_patches_debug_23376 | rasdani/github-patches | git_diff | goauthentik__authentik-8677 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow setting a custom attribute for oidc provider sub claim
**Is your feature request related to a problem? Please describe.**
I have an external auth source and I'm using authentik as an authentication hub between the source and other applications. That auth source has unique user ids that I save in authentik as a custom attribute. I would like to use it as the oidc subject.
**Describe the solution you'd like**
Add a subject mode option "Based on a user attribute" with a text field where one enter the attribute. Alternatively it could be an expression similar to property mappings.
This would be quite similar to the current "Based on the User's UPN" and it may even make sense to replace it entirely, but that would require migrating existing configurations to the new type with upn as the attribute.
**Describe alternatives you've considered**
I could set the external uid as the username in authentik as I'm not currently using the username for anything
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/providers/oauth2/views/userinfo.py`
Content:
```
1 """authentik OAuth2 OpenID Userinfo views"""
2
3 from typing import Any
4
5 from deepmerge import always_merger
6 from django.http import HttpRequest, HttpResponse
7 from django.http.response import HttpResponseBadRequest
8 from django.utils.decorators import method_decorator
9 from django.utils.translation import gettext_lazy as _
10 from django.views import View
11 from django.views.decorators.csrf import csrf_exempt
12 from structlog.stdlib import get_logger
13
14 from authentik.core.exceptions import PropertyMappingExpressionException
15 from authentik.events.models import Event, EventAction
16 from authentik.flows.challenge import PermissionDict
17 from authentik.providers.oauth2.constants import (
18 SCOPE_AUTHENTIK_API,
19 SCOPE_GITHUB_ORG_READ,
20 SCOPE_GITHUB_USER,
21 SCOPE_GITHUB_USER_EMAIL,
22 SCOPE_GITHUB_USER_READ,
23 SCOPE_OPENID,
24 )
25 from authentik.providers.oauth2.models import (
26 BaseGrantModel,
27 OAuth2Provider,
28 RefreshToken,
29 ScopeMapping,
30 )
31 from authentik.providers.oauth2.utils import TokenResponse, cors_allow, protected_resource_view
32
33 LOGGER = get_logger()
34
35
36 @method_decorator(csrf_exempt, name="dispatch")
37 @method_decorator(protected_resource_view([SCOPE_OPENID]), name="dispatch")
38 class UserInfoView(View):
39 """Create a dictionary with all the requested claims about the End-User.
40 See: http://openid.net/specs/openid-connect-core-1_0.html#UserInfoResponse"""
41
42 token: RefreshToken | None
43
44 def get_scope_descriptions(
45 self, scopes: list[str], provider: OAuth2Provider
46 ) -> list[PermissionDict]:
47 """Get a list of all Scopes's descriptions"""
48 scope_descriptions = []
49 for scope in ScopeMapping.objects.filter(scope_name__in=scopes, provider=provider).order_by(
50 "scope_name"
51 ):
52 scope_descriptions.append(PermissionDict(id=scope.scope_name, name=scope.description))
53 # GitHub Compatibility Scopes are handled differently, since they required custom paths
54 # Hence they don't exist as Scope objects
55 special_scope_map = {
56 SCOPE_GITHUB_USER: _("GitHub Compatibility: Access your User Information"),
57 SCOPE_GITHUB_USER_READ: _("GitHub Compatibility: Access your User Information"),
58 SCOPE_GITHUB_USER_EMAIL: _("GitHub Compatibility: Access you Email addresses"),
59 SCOPE_GITHUB_ORG_READ: _("GitHub Compatibility: Access your Groups"),
60 SCOPE_AUTHENTIK_API: _("authentik API Access on behalf of your user"),
61 }
62 for scope in scopes:
63 if scope in special_scope_map:
64 scope_descriptions.append(
65 PermissionDict(id=scope, name=str(special_scope_map[scope]))
66 )
67 return scope_descriptions
68
69 def get_claims(self, provider: OAuth2Provider, token: BaseGrantModel) -> dict[str, Any]:
70 """Get a dictionary of claims from scopes that the token
71 requires and are assigned to the provider."""
72
73 scopes_from_client = token.scope
74 final_claims = {}
75 for scope in ScopeMapping.objects.filter(
76 provider=provider, scope_name__in=scopes_from_client
77 ).order_by("scope_name"):
78 scope: ScopeMapping
79 value = None
80 try:
81 value = scope.evaluate(
82 user=token.user,
83 request=self.request,
84 provider=provider,
85 token=token,
86 )
87 except PropertyMappingExpressionException as exc:
88 Event.new(
89 EventAction.CONFIGURATION_ERROR,
90 message=f"Failed to evaluate property-mapping: '{scope.name}'",
91 provider=provider,
92 mapping=scope,
93 ).from_http(self.request)
94 LOGGER.warning("Failed to evaluate property mapping", exc=exc)
95 if value is None:
96 continue
97 if not isinstance(value, dict):
98 LOGGER.warning(
99 "Scope returned a non-dict value, ignoring",
100 scope=scope,
101 value=value,
102 )
103 continue
104 LOGGER.debug("updated scope", scope=scope)
105 always_merger.merge(final_claims, value)
106 return final_claims
107
108 def dispatch(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:
109 self.token = kwargs.get("token", None)
110 response = super().dispatch(request, *args, **kwargs)
111 allowed_origins = []
112 if self.token:
113 allowed_origins = self.token.provider.redirect_uris.split("\n")
114 cors_allow(self.request, response, *allowed_origins)
115 return response
116
117 def options(self, request: HttpRequest) -> HttpResponse:
118 return TokenResponse({})
119
120 def get(self, request: HttpRequest, **kwargs) -> HttpResponse:
121 """Handle GET Requests for UserInfo"""
122 if not self.token:
123 return HttpResponseBadRequest()
124 claims = self.get_claims(self.token.provider, self.token)
125 claims["sub"] = self.token.id_token.sub
126 if self.token.id_token.nonce:
127 claims["nonce"] = self.token.id_token.nonce
128 response = TokenResponse(claims)
129 return response
130
131 def post(self, request: HttpRequest, **kwargs) -> HttpResponse:
132 """POST Requests behave the same as GET Requests, so the get handler is called here"""
133 return self.get(request, **kwargs)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/providers/oauth2/views/userinfo.py b/authentik/providers/oauth2/views/userinfo.py
--- a/authentik/providers/oauth2/views/userinfo.py
+++ b/authentik/providers/oauth2/views/userinfo.py
@@ -101,8 +101,8 @@
value=value,
)
continue
- LOGGER.debug("updated scope", scope=scope)
always_merger.merge(final_claims, value)
+ LOGGER.debug("updated scope", scope=scope)
return final_claims
def dispatch(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:
@@ -121,8 +121,9 @@
"""Handle GET Requests for UserInfo"""
if not self.token:
return HttpResponseBadRequest()
- claims = self.get_claims(self.token.provider, self.token)
- claims["sub"] = self.token.id_token.sub
+ claims = {}
+ claims.setdefault("sub", self.token.id_token.sub)
+ claims.update(self.get_claims(self.token.provider, self.token))
if self.token.id_token.nonce:
claims["nonce"] = self.token.id_token.nonce
response = TokenResponse(claims)
| {"golden_diff": "diff --git a/authentik/providers/oauth2/views/userinfo.py b/authentik/providers/oauth2/views/userinfo.py\n--- a/authentik/providers/oauth2/views/userinfo.py\n+++ b/authentik/providers/oauth2/views/userinfo.py\n@@ -101,8 +101,8 @@\n value=value,\n )\n continue\n- LOGGER.debug(\"updated scope\", scope=scope)\n always_merger.merge(final_claims, value)\n+ LOGGER.debug(\"updated scope\", scope=scope)\n return final_claims\n \n def dispatch(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:\n@@ -121,8 +121,9 @@\n \"\"\"Handle GET Requests for UserInfo\"\"\"\n if not self.token:\n return HttpResponseBadRequest()\n- claims = self.get_claims(self.token.provider, self.token)\n- claims[\"sub\"] = self.token.id_token.sub\n+ claims = {}\n+ claims.setdefault(\"sub\", self.token.id_token.sub)\n+ claims.update(self.get_claims(self.token.provider, self.token))\n if self.token.id_token.nonce:\n claims[\"nonce\"] = self.token.id_token.nonce\n response = TokenResponse(claims)\n", "issue": "Allow setting a custom attribute for oidc provider sub claim\n**Is your feature request related to a problem? Please describe.**\r\nI have an external auth source and I'm using authentik as an authentication hub between the source and other applications. That auth source has unique user ids that I save in authentik as a custom attribute. I would like to use it as the oidc subject.\r\n\r\n**Describe the solution you'd like**\r\nAdd a subject mode option \"Based on a user attribute\" with a text field where one enter the attribute. Alternatively it could be an expression similar to property mappings.\r\n\r\nThis would be quite similar to the current \"Based on the User's UPN\" and it may even make sense to replace it entirely, but that would require migrating existing configurations to the new type with upn as the attribute.\r\n\r\n**Describe alternatives you've considered**\r\nI could set the external uid as the username in authentik as I'm not currently using the username for anything\n", "before_files": [{"content": "\"\"\"authentik OAuth2 OpenID Userinfo views\"\"\"\n\nfrom typing import Any\n\nfrom deepmerge import always_merger\nfrom django.http import HttpRequest, HttpResponse\nfrom django.http.response import HttpResponseBadRequest\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\nfrom django.views.decorators.csrf import csrf_exempt\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.exceptions import PropertyMappingExpressionException\nfrom authentik.events.models import Event, EventAction\nfrom authentik.flows.challenge import PermissionDict\nfrom authentik.providers.oauth2.constants import (\n SCOPE_AUTHENTIK_API,\n SCOPE_GITHUB_ORG_READ,\n SCOPE_GITHUB_USER,\n SCOPE_GITHUB_USER_EMAIL,\n SCOPE_GITHUB_USER_READ,\n SCOPE_OPENID,\n)\nfrom authentik.providers.oauth2.models import (\n BaseGrantModel,\n OAuth2Provider,\n RefreshToken,\n ScopeMapping,\n)\nfrom authentik.providers.oauth2.utils import TokenResponse, cors_allow, protected_resource_view\n\nLOGGER = get_logger()\n\n\n@method_decorator(csrf_exempt, name=\"dispatch\")\n@method_decorator(protected_resource_view([SCOPE_OPENID]), name=\"dispatch\")\nclass UserInfoView(View):\n \"\"\"Create a dictionary with all the requested claims about the End-User.\n See: http://openid.net/specs/openid-connect-core-1_0.html#UserInfoResponse\"\"\"\n\n token: RefreshToken | None\n\n def get_scope_descriptions(\n self, scopes: list[str], 
provider: OAuth2Provider\n ) -> list[PermissionDict]:\n \"\"\"Get a list of all Scopes's descriptions\"\"\"\n scope_descriptions = []\n for scope in ScopeMapping.objects.filter(scope_name__in=scopes, provider=provider).order_by(\n \"scope_name\"\n ):\n scope_descriptions.append(PermissionDict(id=scope.scope_name, name=scope.description))\n # GitHub Compatibility Scopes are handled differently, since they required custom paths\n # Hence they don't exist as Scope objects\n special_scope_map = {\n SCOPE_GITHUB_USER: _(\"GitHub Compatibility: Access your User Information\"),\n SCOPE_GITHUB_USER_READ: _(\"GitHub Compatibility: Access your User Information\"),\n SCOPE_GITHUB_USER_EMAIL: _(\"GitHub Compatibility: Access you Email addresses\"),\n SCOPE_GITHUB_ORG_READ: _(\"GitHub Compatibility: Access your Groups\"),\n SCOPE_AUTHENTIK_API: _(\"authentik API Access on behalf of your user\"),\n }\n for scope in scopes:\n if scope in special_scope_map:\n scope_descriptions.append(\n PermissionDict(id=scope, name=str(special_scope_map[scope]))\n )\n return scope_descriptions\n\n def get_claims(self, provider: OAuth2Provider, token: BaseGrantModel) -> dict[str, Any]:\n \"\"\"Get a dictionary of claims from scopes that the token\n requires and are assigned to the provider.\"\"\"\n\n scopes_from_client = token.scope\n final_claims = {}\n for scope in ScopeMapping.objects.filter(\n provider=provider, scope_name__in=scopes_from_client\n ).order_by(\"scope_name\"):\n scope: ScopeMapping\n value = None\n try:\n value = scope.evaluate(\n user=token.user,\n request=self.request,\n provider=provider,\n token=token,\n )\n except PropertyMappingExpressionException as exc:\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=f\"Failed to evaluate property-mapping: '{scope.name}'\",\n provider=provider,\n mapping=scope,\n ).from_http(self.request)\n LOGGER.warning(\"Failed to evaluate property mapping\", exc=exc)\n if value is None:\n continue\n if not isinstance(value, dict):\n LOGGER.warning(\n \"Scope returned a non-dict value, ignoring\",\n scope=scope,\n value=value,\n )\n continue\n LOGGER.debug(\"updated scope\", scope=scope)\n always_merger.merge(final_claims, value)\n return final_claims\n\n def dispatch(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:\n self.token = kwargs.get(\"token\", None)\n response = super().dispatch(request, *args, **kwargs)\n allowed_origins = []\n if self.token:\n allowed_origins = self.token.provider.redirect_uris.split(\"\\n\")\n cors_allow(self.request, response, *allowed_origins)\n return response\n\n def options(self, request: HttpRequest) -> HttpResponse:\n return TokenResponse({})\n\n def get(self, request: HttpRequest, **kwargs) -> HttpResponse:\n \"\"\"Handle GET Requests for UserInfo\"\"\"\n if not self.token:\n return HttpResponseBadRequest()\n claims = self.get_claims(self.token.provider, self.token)\n claims[\"sub\"] = self.token.id_token.sub\n if self.token.id_token.nonce:\n claims[\"nonce\"] = self.token.id_token.nonce\n response = TokenResponse(claims)\n return response\n\n def post(self, request: HttpRequest, **kwargs) -> HttpResponse:\n \"\"\"POST Requests behave the same as GET Requests, so the get handler is called here\"\"\"\n return self.get(request, **kwargs)\n", "path": "authentik/providers/oauth2/views/userinfo.py"}], "after_files": [{"content": "\"\"\"authentik OAuth2 OpenID Userinfo views\"\"\"\n\nfrom typing import Any\n\nfrom deepmerge import always_merger\nfrom django.http import HttpRequest, HttpResponse\nfrom 
django.http.response import HttpResponseBadRequest\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\nfrom django.views.decorators.csrf import csrf_exempt\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.exceptions import PropertyMappingExpressionException\nfrom authentik.events.models import Event, EventAction\nfrom authentik.flows.challenge import PermissionDict\nfrom authentik.providers.oauth2.constants import (\n SCOPE_AUTHENTIK_API,\n SCOPE_GITHUB_ORG_READ,\n SCOPE_GITHUB_USER,\n SCOPE_GITHUB_USER_EMAIL,\n SCOPE_GITHUB_USER_READ,\n SCOPE_OPENID,\n)\nfrom authentik.providers.oauth2.models import (\n BaseGrantModel,\n OAuth2Provider,\n RefreshToken,\n ScopeMapping,\n)\nfrom authentik.providers.oauth2.utils import TokenResponse, cors_allow, protected_resource_view\n\nLOGGER = get_logger()\n\n\n@method_decorator(csrf_exempt, name=\"dispatch\")\n@method_decorator(protected_resource_view([SCOPE_OPENID]), name=\"dispatch\")\nclass UserInfoView(View):\n \"\"\"Create a dictionary with all the requested claims about the End-User.\n See: http://openid.net/specs/openid-connect-core-1_0.html#UserInfoResponse\"\"\"\n\n token: RefreshToken | None\n\n def get_scope_descriptions(\n self, scopes: list[str], provider: OAuth2Provider\n ) -> list[PermissionDict]:\n \"\"\"Get a list of all Scopes's descriptions\"\"\"\n scope_descriptions = []\n for scope in ScopeMapping.objects.filter(scope_name__in=scopes, provider=provider).order_by(\n \"scope_name\"\n ):\n scope_descriptions.append(PermissionDict(id=scope.scope_name, name=scope.description))\n # GitHub Compatibility Scopes are handled differently, since they required custom paths\n # Hence they don't exist as Scope objects\n special_scope_map = {\n SCOPE_GITHUB_USER: _(\"GitHub Compatibility: Access your User Information\"),\n SCOPE_GITHUB_USER_READ: _(\"GitHub Compatibility: Access your User Information\"),\n SCOPE_GITHUB_USER_EMAIL: _(\"GitHub Compatibility: Access you Email addresses\"),\n SCOPE_GITHUB_ORG_READ: _(\"GitHub Compatibility: Access your Groups\"),\n SCOPE_AUTHENTIK_API: _(\"authentik API Access on behalf of your user\"),\n }\n for scope in scopes:\n if scope in special_scope_map:\n scope_descriptions.append(\n PermissionDict(id=scope, name=str(special_scope_map[scope]))\n )\n return scope_descriptions\n\n def get_claims(self, provider: OAuth2Provider, token: BaseGrantModel) -> dict[str, Any]:\n \"\"\"Get a dictionary of claims from scopes that the token\n requires and are assigned to the provider.\"\"\"\n\n scopes_from_client = token.scope\n final_claims = {}\n for scope in ScopeMapping.objects.filter(\n provider=provider, scope_name__in=scopes_from_client\n ).order_by(\"scope_name\"):\n scope: ScopeMapping\n value = None\n try:\n value = scope.evaluate(\n user=token.user,\n request=self.request,\n provider=provider,\n token=token,\n )\n except PropertyMappingExpressionException as exc:\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=f\"Failed to evaluate property-mapping: '{scope.name}'\",\n provider=provider,\n mapping=scope,\n ).from_http(self.request)\n LOGGER.warning(\"Failed to evaluate property mapping\", exc=exc)\n if value is None:\n continue\n if not isinstance(value, dict):\n LOGGER.warning(\n \"Scope returned a non-dict value, ignoring\",\n scope=scope,\n value=value,\n )\n continue\n always_merger.merge(final_claims, value)\n LOGGER.debug(\"updated scope\", scope=scope)\n return final_claims\n\n def 
dispatch(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:\n self.token = kwargs.get(\"token\", None)\n response = super().dispatch(request, *args, **kwargs)\n allowed_origins = []\n if self.token:\n allowed_origins = self.token.provider.redirect_uris.split(\"\\n\")\n cors_allow(self.request, response, *allowed_origins)\n return response\n\n def options(self, request: HttpRequest) -> HttpResponse:\n return TokenResponse({})\n\n def get(self, request: HttpRequest, **kwargs) -> HttpResponse:\n \"\"\"Handle GET Requests for UserInfo\"\"\"\n if not self.token:\n return HttpResponseBadRequest()\n claims = {}\n claims.setdefault(\"sub\", self.token.id_token.sub)\n claims.update(self.get_claims(self.token.provider, self.token))\n if self.token.id_token.nonce:\n claims[\"nonce\"] = self.token.id_token.nonce\n response = TokenResponse(claims)\n return response\n\n def post(self, request: HttpRequest, **kwargs) -> HttpResponse:\n \"\"\"POST Requests behave the same as GET Requests, so the get handler is called here\"\"\"\n return self.get(request, **kwargs)\n", "path": "authentik/providers/oauth2/views/userinfo.py"}]} | 1,862 | 263 |
gh_patches_debug_4588 | rasdani/github-patches | git_diff | saleor__saleor-541 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop cart partitioner from cart view
Currently the cart is partitioned every time it's displayed. We really only need to do this when creating an order/payment. We do call it every time the cart is rendered but we then merge all of the partitions back into a single list.
- [ ] identify places where cart partitioner is called
- [ ] remove the unnecessary calls from places that don't absolutely need partitioning to work (checkout)
- [ ] simplify templates so they iterate over the cart instead of walking through a list of partitions that in turn contain items
- [ ] provide a brief description of the changes for the next release changelog
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/cart/views.py`
Content:
```
1 from __future__ import unicode_literals
2 from babeldjango.templatetags.babel import currencyfmt
3
4 from django.contrib import messages
5 from django.http import JsonResponse
6 from django.shortcuts import redirect
7 from django.template.response import TemplateResponse
8 from django.utils.translation import ugettext as _
9
10 from . import Cart
11 from .forms import ReplaceCartLineForm
12 from ..cart.utils import (
13 contains_unavailable_products, remove_unavailable_products)
14
15
16 def index(request, product_id=None):
17 if product_id is not None:
18 product_id = int(product_id)
19 cart = Cart.for_session_cart(request.cart, discounts=request.discounts)
20 if contains_unavailable_products(cart):
21 msg = _('Sorry. We don\'t have that many items in stock. '
22 'Quantity was set to maximum available for now.')
23 messages.warning(request, msg)
24 remove_unavailable_products(cart)
25 for line in cart:
26 data = None
27 if line.product.pk == product_id:
28 data = request.POST
29 initial = {'quantity': line.get_quantity()}
30 form = ReplaceCartLineForm(data, cart=cart, product=line.product,
31 initial=initial)
32 line.form = form
33 if form.is_valid():
34 form.save()
35 if request.is_ajax():
36 response = {
37 'productId': line.product.pk,
38 'subtotal': currencyfmt(
39 line.get_total().gross,
40 line.get_total().currency),
41 'total': 0}
42 if cart:
43 response['total'] = currencyfmt(
44 cart.get_total().gross, cart.get_total().currency)
45 return JsonResponse(response)
46 return redirect('cart:index')
47 elif data is not None:
48 if request.is_ajax():
49 response = {'error': form.errors}
50 return JsonResponse(response, status=400)
51 cart_partitioner = cart.partition()
52 return TemplateResponse(
53 request, 'cart/index.html', {
54 'cart': cart_partitioner})
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/cart/views.py b/saleor/cart/views.py
--- a/saleor/cart/views.py
+++ b/saleor/cart/views.py
@@ -48,7 +48,6 @@
if request.is_ajax():
response = {'error': form.errors}
return JsonResponse(response, status=400)
- cart_partitioner = cart.partition()
return TemplateResponse(
request, 'cart/index.html', {
- 'cart': cart_partitioner})
+ 'cart': cart})
| {"golden_diff": "diff --git a/saleor/cart/views.py b/saleor/cart/views.py\n--- a/saleor/cart/views.py\n+++ b/saleor/cart/views.py\n@@ -48,7 +48,6 @@\n if request.is_ajax():\n response = {'error': form.errors}\n return JsonResponse(response, status=400)\n- cart_partitioner = cart.partition()\n return TemplateResponse(\n request, 'cart/index.html', {\n- 'cart': cart_partitioner})\n+ 'cart': cart})\n", "issue": "Drop cart partitioner from cart view\nCurrently the cart is partitioned every time it'd displayed. We really only need to do this when creating an order/payment. We do call it every time the cart is rendered but we then merge all of the partitions back into a single list.\n- [ ] identify places where cart partitioner is called\n- [ ] remove the unnecessary calls from places that don't absolutely need partitioning to work (checkout)\n- [ ] simplify templates so they iterate over the cart instead of walking through a list of partitions that in turn contain items\n- [ ] provide a brief description of the changes for the next release changelog\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom babeldjango.templatetags.babel import currencyfmt\n\nfrom django.contrib import messages\nfrom django.http import JsonResponse\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import ugettext as _\n\nfrom . import Cart\nfrom .forms import ReplaceCartLineForm\nfrom ..cart.utils import (\n contains_unavailable_products, remove_unavailable_products)\n\n\ndef index(request, product_id=None):\n if product_id is not None:\n product_id = int(product_id)\n cart = Cart.for_session_cart(request.cart, discounts=request.discounts)\n if contains_unavailable_products(cart):\n msg = _('Sorry. We don\\'t have that many items in stock. '\n 'Quantity was set to maximum available for now.')\n messages.warning(request, msg)\n remove_unavailable_products(cart)\n for line in cart:\n data = None\n if line.product.pk == product_id:\n data = request.POST\n initial = {'quantity': line.get_quantity()}\n form = ReplaceCartLineForm(data, cart=cart, product=line.product,\n initial=initial)\n line.form = form\n if form.is_valid():\n form.save()\n if request.is_ajax():\n response = {\n 'productId': line.product.pk,\n 'subtotal': currencyfmt(\n line.get_total().gross,\n line.get_total().currency),\n 'total': 0}\n if cart:\n response['total'] = currencyfmt(\n cart.get_total().gross, cart.get_total().currency)\n return JsonResponse(response)\n return redirect('cart:index')\n elif data is not None:\n if request.is_ajax():\n response = {'error': form.errors}\n return JsonResponse(response, status=400)\n cart_partitioner = cart.partition()\n return TemplateResponse(\n request, 'cart/index.html', {\n 'cart': cart_partitioner})\n", "path": "saleor/cart/views.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom babeldjango.templatetags.babel import currencyfmt\n\nfrom django.contrib import messages\nfrom django.http import JsonResponse\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import ugettext as _\n\nfrom . 
import Cart\nfrom .forms import ReplaceCartLineForm\nfrom ..cart.utils import (\n contains_unavailable_products, remove_unavailable_products)\n\n\ndef index(request, product_id=None):\n if product_id is not None:\n product_id = int(product_id)\n cart = Cart.for_session_cart(request.cart, discounts=request.discounts)\n if contains_unavailable_products(cart):\n msg = _('Sorry. We don\\'t have that many items in stock. '\n 'Quantity was set to maximum available for now.')\n messages.warning(request, msg)\n remove_unavailable_products(cart)\n for line in cart:\n data = None\n if line.product.pk == product_id:\n data = request.POST\n initial = {'quantity': line.get_quantity()}\n form = ReplaceCartLineForm(data, cart=cart, product=line.product,\n initial=initial)\n line.form = form\n if form.is_valid():\n form.save()\n if request.is_ajax():\n response = {\n 'productId': line.product.pk,\n 'subtotal': currencyfmt(\n line.get_total().gross,\n line.get_total().currency),\n 'total': 0}\n if cart:\n response['total'] = currencyfmt(\n cart.get_total().gross, cart.get_total().currency)\n return JsonResponse(response)\n return redirect('cart:index')\n elif data is not None:\n if request.is_ajax():\n response = {'error': form.errors}\n return JsonResponse(response, status=400)\n return TemplateResponse(\n request, 'cart/index.html', {\n 'cart': cart})\n", "path": "saleor/cart/views.py"}]} | 895 | 113 |
gh_patches_debug_10288 | rasdani/github-patches | git_diff | Zeroto521__my-data-toolkit-585 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PERF: `to_set` speeds up especial to large data
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [x] closes #542
- [x] whatsnew entry
Apply to index accessor
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dtoolkit/accessor/index/to_set.py`
Content:
```
1 import pandas as pd
2
3 from dtoolkit.accessor.register import register_index_method
4
5
6 @register_index_method
7 def to_set(index: pd.Index) -> set:
8 """
9 Return a :keyword:`set` of the values.
10
11 A sugary syntax wraps :keyword:`set`::
12
13 set(index)
14
15 Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.
16
17 Returns
18 -------
19 set
20
21 See Also
22 --------
23 pandas.Index.unique
24
25 Examples
26 --------
27 >>> import dtoolkit.accessor
28 >>> import pandas as pd
29 >>> i = pd.Index([1, 2, 2])
30 >>> i
31 Int64Index([1, 2, 2], dtype='int64')
32 >>> i.to_set()
33 {1, 2}
34 """
35
36 return set(index.unique())
37
```
Path: `dtoolkit/accessor/series/to_set.py`
Content:
```
1 import pandas as pd
2
3 from dtoolkit.accessor.register import register_series_method
4
5
6 @register_series_method
7 def to_set(s: pd.Series) -> set:
8 """
9 Return a :keyword:`set` of the values.
10
11 A sugary syntax wraps :keyword:`set`::
12
13 set(s)
14
15 Different to :meth:`~pandas.Series.unique`, it returns :class:`~numpy.ndarray`.
16
17 Returns
18 -------
19 set
20
21 See Also
22 --------
23 pandas.Series.unique
24
25 Examples
26 --------
27 >>> import dtoolkit.accessor
28 >>> import pandas as pd
29 >>> s = pd.Series([1, 2, 2])
30 >>> s
31 0 1
32 1 2
33 2 2
34 dtype: int64
35 >>> s.to_set()
36 {1, 2}
37 """
38
39 return set(s.unique())
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dtoolkit/accessor/index/to_set.py b/dtoolkit/accessor/index/to_set.py
--- a/dtoolkit/accessor/index/to_set.py
+++ b/dtoolkit/accessor/index/to_set.py
@@ -21,6 +21,7 @@
See Also
--------
pandas.Index.unique
+ dtoolkit.accessor.series.to_set
Examples
--------
diff --git a/dtoolkit/accessor/series/to_set.py b/dtoolkit/accessor/series/to_set.py
--- a/dtoolkit/accessor/series/to_set.py
+++ b/dtoolkit/accessor/series/to_set.py
@@ -21,6 +21,7 @@
See Also
--------
pandas.Series.unique
+ dtoolkit.accessor.index.to_set
Examples
--------
@@ -36,4 +37,4 @@
{1, 2}
"""
- return set(s.unique())
+ return set(s.to_list())
| {"golden_diff": "diff --git a/dtoolkit/accessor/index/to_set.py b/dtoolkit/accessor/index/to_set.py\n--- a/dtoolkit/accessor/index/to_set.py\n+++ b/dtoolkit/accessor/index/to_set.py\n@@ -21,6 +21,7 @@\n See Also\n --------\n pandas.Index.unique\n+ dtoolkit.accessor.series.to_set\n \n Examples\n --------\ndiff --git a/dtoolkit/accessor/series/to_set.py b/dtoolkit/accessor/series/to_set.py\n--- a/dtoolkit/accessor/series/to_set.py\n+++ b/dtoolkit/accessor/series/to_set.py\n@@ -21,6 +21,7 @@\n See Also\n --------\n pandas.Series.unique\n+ dtoolkit.accessor.index.to_set\n \n Examples\n --------\n@@ -36,4 +37,4 @@\n {1, 2}\n \"\"\"\n \n- return set(s.unique())\n+ return set(s.to_list())\n", "issue": "PERF: `to_set` speeds up especial to large data\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [x] closes #542\r\n- [x] whatsnew entry\r\n\r\nApply to index accessor\n", "before_files": [{"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_index_method\n\n\n@register_index_method\ndef to_set(index: pd.Index) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(index)\n\n Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Index.unique\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> i = pd.Index([1, 2, 2])\n >>> i\n Int64Index([1, 2, 2], dtype='int64')\n >>> i.to_set()\n {1, 2}\n \"\"\"\n\n return set(index.unique())\n", "path": "dtoolkit/accessor/index/to_set.py"}, {"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_series_method\n\n\n@register_series_method\ndef to_set(s: pd.Series) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(s)\n\n Different to :meth:`~pandas.Series.unique`, it returns :class:`~numpy.ndarray`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Series.unique\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([1, 2, 2])\n >>> s\n 0 1\n 1 2\n 2 2\n dtype: int64\n >>> s.to_set()\n {1, 2}\n \"\"\"\n\n return set(s.unique())\n", "path": "dtoolkit/accessor/series/to_set.py"}], "after_files": [{"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_index_method\n\n\n@register_index_method\ndef to_set(index: pd.Index) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(index)\n\n Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Index.unique\n dtoolkit.accessor.series.to_set\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> i = pd.Index([1, 2, 2])\n >>> i\n Int64Index([1, 2, 2], 
dtype='int64')\n >>> i.to_set()\n {1, 2}\n \"\"\"\n\n return set(index.unique())\n", "path": "dtoolkit/accessor/index/to_set.py"}, {"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_series_method\n\n\n@register_series_method\ndef to_set(s: pd.Series) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(s)\n\n Different to :meth:`~pandas.Series.unique`, it returns :class:`~numpy.ndarray`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Series.unique\n dtoolkit.accessor.index.to_set\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([1, 2, 2])\n >>> s\n 0 1\n 1 2\n 2 2\n dtype: int64\n >>> s.to_set()\n {1, 2}\n \"\"\"\n\n return set(s.to_list())\n", "path": "dtoolkit/accessor/series/to_set.py"}]} | 1,014 | 215 |
gh_patches_debug_9822 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4532 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Passphrase given in the command line is visible in the process list
#### Problem Description
Mitmproxy accepts cert-passphrase as one of the command-line options. If the user gives a passphrase like this while running mitmproxy, anyone having access to a command line on that server can see the passphrase by listing the running processes.
#### Steps to reproduce the behavior:
1. Create a self-signed certificate using openssl, make sure you give a passphrase for the certificate.
2. Run mitmproxy/mitmdump/mitmweb with the command line options as shown
mitmdump --certs *.mydomain.com=mycert.pem cert-passphrase abcd
3. Take a Linux terminal and issue the command ps -ef | grep mitm
4. You can see the passphrase given to mitmdump command in clear text
This is a security issue in my opinion. Some programs effectively hide such sensitive inputs that are given as command-line arguments. They do this by rewriting the command line args in an obfuscated manner and by rerunning the program by itself. In this way, the sensitive data that came along via command-line arguments will be visible for a split second, but that is still better than making them always visible as long as the program is running.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/options.py`
Content:
```
1 from typing import Optional, Sequence
2
3 from mitmproxy import optmanager
4
5 CONF_DIR = "~/.mitmproxy"
6 CONF_BASENAME = "mitmproxy"
7 LISTEN_PORT = 8080
8 CONTENT_VIEW_LINES_CUTOFF = 512
9 KEY_SIZE = 2048
10
11
12 class Options(optmanager.OptManager):
13
14 def __init__(self, **kwargs) -> None:
15 super().__init__()
16 self.add_option(
17 "server", bool, True,
18 "Start a proxy server. Enabled by default."
19 )
20 self.add_option(
21 "showhost", bool, False,
22 "Use the Host header to construct URLs for display."
23 )
24
25 # Proxy options
26 self.add_option(
27 "add_upstream_certs_to_client_chain", bool, False,
28 """
29 Add all certificates of the upstream server to the certificate chain
30 that will be served to the proxy client, as extras.
31 """
32 )
33 self.add_option(
34 "confdir", str, CONF_DIR,
35 "Location of the default mitmproxy configuration files."
36 )
37 self.add_option(
38 "certs", Sequence[str], [],
39 """
40 SSL certificates of the form "[domain=]path". The domain may include
41 a wildcard, and is equal to "*" if not specified. The file at path
42 is a certificate in PEM format. If a private key is included in the
43 PEM, it is used, else the default key in the conf dir is used. The
44 PEM file should contain the full certificate chain, with the leaf
45 certificate as the first entry.
46 """
47 )
48 self.add_option(
49 "cert_passphrase", Optional[str], None,
50 "Passphrase for decrypting the private key provided in the --cert option."
51 )
52 self.add_option(
53 "ciphers_client", Optional[str], None,
54 "Set supported ciphers for client connections using OpenSSL syntax."
55 )
56 self.add_option(
57 "ciphers_server", Optional[str], None,
58 "Set supported ciphers for server connections using OpenSSL syntax."
59 )
60 self.add_option(
61 "client_certs", Optional[str], None,
62 "Client certificate file or directory."
63 )
64 self.add_option(
65 "ignore_hosts", Sequence[str], [],
66 """
67 Ignore host and forward all traffic without processing it. In
68 transparent mode, it is recommended to use an IP address (range),
69 not the hostname. In regular mode, only SSL traffic is ignored and
70 the hostname should be used. The supplied value is interpreted as a
71 regular expression and matched on the ip or the hostname.
72 """
73 )
74 self.add_option(
75 "allow_hosts", Sequence[str], [],
76 "Opposite of --ignore-hosts."
77 )
78 self.add_option(
79 "listen_host", str, "",
80 "Address to bind proxy to."
81 )
82 self.add_option(
83 "listen_port", int, LISTEN_PORT,
84 "Proxy service port."
85 )
86 self.add_option(
87 "mode", str, "regular",
88 """
89 Mode can be "regular", "transparent", "socks5", "reverse:SPEC",
90 or "upstream:SPEC". For reverse and upstream proxy modes, SPEC
91 is host specification in the form of "http[s]://host[:port]".
92 """
93 )
94 self.add_option(
95 "upstream_cert", bool, True,
96 "Connect to upstream server to look up certificate details."
97 )
98
99 self.add_option(
100 "http2", bool, True,
101 "Enable/disable HTTP/2 support. "
102 "HTTP/2 support is enabled by default.",
103 )
104 self.add_option(
105 "websocket", bool, True,
106 "Enable/disable WebSocket support. "
107 "WebSocket support is enabled by default.",
108 )
109 self.add_option(
110 "rawtcp", bool, True,
111 "Enable/disable raw TCP connections. "
112 "TCP connections are enabled by default. "
113 )
114 self.add_option(
115 "ssl_insecure", bool, False,
116 "Do not verify upstream server SSL/TLS certificates."
117 )
118 self.add_option(
119 "ssl_verify_upstream_trusted_confdir", Optional[str], None,
120 """
121 Path to a directory of trusted CA certificates for upstream server
122 verification prepared using the c_rehash tool.
123 """
124 )
125 self.add_option(
126 "ssl_verify_upstream_trusted_ca", Optional[str], None,
127 "Path to a PEM formatted trusted CA certificate."
128 )
129 self.add_option(
130 "tcp_hosts", Sequence[str], [],
131 """
132 Generic TCP SSL proxy mode for all hosts that match the pattern.
133 Similar to --ignore-hosts, but SSL connections are intercepted.
134 The communication contents are printed to the log in verbose mode.
135 """
136 )
137 self.add_option(
138 "content_view_lines_cutoff", int, CONTENT_VIEW_LINES_CUTOFF,
139 """
140 Flow content view lines limit. Limit is enabled by default to
141 speedup flows browsing.
142 """
143 )
144 self.add_option(
145 "key_size", int, KEY_SIZE,
146 """
147 TLS key size for certificates and CA.
148 """
149 )
150
151 self.update(**kwargs)
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/options.py b/mitmproxy/options.py
--- a/mitmproxy/options.py
+++ b/mitmproxy/options.py
@@ -47,7 +47,12 @@
)
self.add_option(
"cert_passphrase", Optional[str], None,
- "Passphrase for decrypting the private key provided in the --cert option."
+ """
+ Passphrase for decrypting the private key provided in the --cert option.
+
+ Note that passing cert_passphrase on the command line makes your passphrase visible in your system's
+ process list. Specify it in config.yaml to avoid this.
+ """
)
self.add_option(
"ciphers_client", Optional[str], None,
| {"golden_diff": "diff --git a/mitmproxy/options.py b/mitmproxy/options.py\n--- a/mitmproxy/options.py\n+++ b/mitmproxy/options.py\n@@ -47,7 +47,12 @@\n )\n self.add_option(\n \"cert_passphrase\", Optional[str], None,\n- \"Passphrase for decrypting the private key provided in the --cert option.\"\n+ \"\"\"\n+ Passphrase for decrypting the private key provided in the --cert option.\n+\n+ Note that passing cert_passphrase on the command line makes your passphrase visible in your system's\n+ process list. Specify it in config.yaml to avoid this.\n+ \"\"\"\n )\n self.add_option(\n \"ciphers_client\", Optional[str], None,\n", "issue": "Passphrase given in the command line is visible in the process list \n#### Problem Description\r\nMitmproxy accepts cert-passphrase as one of the command-line options. If the user gives a passphrase like this while running the mitmproxy, anyone having access to a command line on that server can see the passphrase by listing the running processes.\r\n\r\n#### Steps to reproduce the behavior:\r\n1. Create a self-signed certificate using openssl, make sure you give a passphrase for the certificate. \r\n2. Run mitmproxy/mitmdump/mitmweb with the command line options as shown\r\nmitmdump --certs *.mydomain.com=mycert.pem cert-passphrase abcd \r\n3. Take a Linux terminal and issue the command ps -ef | grep mitm\r\n4. You can see the passphrase given to mitmdump command in clear text\r\n\r\nThis is a security issue in my opinion. Some programs effectively hide such sensitive inputs that are given as command-line arguments. They do this by rewriting the command line args in an obfuscated manner and by rerunning the program by itself. In this way, the sensitive data that came along via command-line arguments will be visible for a split second, but that is still better than making them always visible as long as the program is running.\r\n\r\n\n", "before_files": [{"content": "from typing import Optional, Sequence\n\nfrom mitmproxy import optmanager\n\nCONF_DIR = \"~/.mitmproxy\"\nCONF_BASENAME = \"mitmproxy\"\nLISTEN_PORT = 8080\nCONTENT_VIEW_LINES_CUTOFF = 512\nKEY_SIZE = 2048\n\n\nclass Options(optmanager.OptManager):\n\n def __init__(self, **kwargs) -> None:\n super().__init__()\n self.add_option(\n \"server\", bool, True,\n \"Start a proxy server. Enabled by default.\"\n )\n self.add_option(\n \"showhost\", bool, False,\n \"Use the Host header to construct URLs for display.\"\n )\n\n # Proxy options\n self.add_option(\n \"add_upstream_certs_to_client_chain\", bool, False,\n \"\"\"\n Add all certificates of the upstream server to the certificate chain\n that will be served to the proxy client, as extras.\n \"\"\"\n )\n self.add_option(\n \"confdir\", str, CONF_DIR,\n \"Location of the default mitmproxy configuration files.\"\n )\n self.add_option(\n \"certs\", Sequence[str], [],\n \"\"\"\n SSL certificates of the form \"[domain=]path\". The domain may include\n a wildcard, and is equal to \"*\" if not specified. The file at path\n is a certificate in PEM format. If a private key is included in the\n PEM, it is used, else the default key in the conf dir is used. 
The\n PEM file should contain the full certificate chain, with the leaf\n certificate as the first entry.\n \"\"\"\n )\n self.add_option(\n \"cert_passphrase\", Optional[str], None,\n \"Passphrase for decrypting the private key provided in the --cert option.\"\n )\n self.add_option(\n \"ciphers_client\", Optional[str], None,\n \"Set supported ciphers for client connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"ciphers_server\", Optional[str], None,\n \"Set supported ciphers for server connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"client_certs\", Optional[str], None,\n \"Client certificate file or directory.\"\n )\n self.add_option(\n \"ignore_hosts\", Sequence[str], [],\n \"\"\"\n Ignore host and forward all traffic without processing it. In\n transparent mode, it is recommended to use an IP address (range),\n not the hostname. In regular mode, only SSL traffic is ignored and\n the hostname should be used. The supplied value is interpreted as a\n regular expression and matched on the ip or the hostname.\n \"\"\"\n )\n self.add_option(\n \"allow_hosts\", Sequence[str], [],\n \"Opposite of --ignore-hosts.\"\n )\n self.add_option(\n \"listen_host\", str, \"\",\n \"Address to bind proxy to.\"\n )\n self.add_option(\n \"listen_port\", int, LISTEN_PORT,\n \"Proxy service port.\"\n )\n self.add_option(\n \"mode\", str, \"regular\",\n \"\"\"\n Mode can be \"regular\", \"transparent\", \"socks5\", \"reverse:SPEC\",\n or \"upstream:SPEC\". For reverse and upstream proxy modes, SPEC\n is host specification in the form of \"http[s]://host[:port]\".\n \"\"\"\n )\n self.add_option(\n \"upstream_cert\", bool, True,\n \"Connect to upstream server to look up certificate details.\"\n )\n\n self.add_option(\n \"http2\", bool, True,\n \"Enable/disable HTTP/2 support. \"\n \"HTTP/2 support is enabled by default.\",\n )\n self.add_option(\n \"websocket\", bool, True,\n \"Enable/disable WebSocket support. \"\n \"WebSocket support is enabled by default.\",\n )\n self.add_option(\n \"rawtcp\", bool, True,\n \"Enable/disable raw TCP connections. \"\n \"TCP connections are enabled by default. \"\n )\n self.add_option(\n \"ssl_insecure\", bool, False,\n \"Do not verify upstream server SSL/TLS certificates.\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_confdir\", Optional[str], None,\n \"\"\"\n Path to a directory of trusted CA certificates for upstream server\n verification prepared using the c_rehash tool.\n \"\"\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_ca\", Optional[str], None,\n \"Path to a PEM formatted trusted CA certificate.\"\n )\n self.add_option(\n \"tcp_hosts\", Sequence[str], [],\n \"\"\"\n Generic TCP SSL proxy mode for all hosts that match the pattern.\n Similar to --ignore-hosts, but SSL connections are intercepted.\n The communication contents are printed to the log in verbose mode.\n \"\"\"\n )\n self.add_option(\n \"content_view_lines_cutoff\", int, CONTENT_VIEW_LINES_CUTOFF,\n \"\"\"\n Flow content view lines limit. 
Limit is enabled by default to\n speedup flows browsing.\n \"\"\"\n )\n self.add_option(\n \"key_size\", int, KEY_SIZE,\n \"\"\"\n TLS key size for certificates and CA.\n \"\"\"\n )\n\n self.update(**kwargs)\n", "path": "mitmproxy/options.py"}], "after_files": [{"content": "from typing import Optional, Sequence\n\nfrom mitmproxy import optmanager\n\nCONF_DIR = \"~/.mitmproxy\"\nCONF_BASENAME = \"mitmproxy\"\nLISTEN_PORT = 8080\nCONTENT_VIEW_LINES_CUTOFF = 512\nKEY_SIZE = 2048\n\n\nclass Options(optmanager.OptManager):\n\n def __init__(self, **kwargs) -> None:\n super().__init__()\n self.add_option(\n \"server\", bool, True,\n \"Start a proxy server. Enabled by default.\"\n )\n self.add_option(\n \"showhost\", bool, False,\n \"Use the Host header to construct URLs for display.\"\n )\n\n # Proxy options\n self.add_option(\n \"add_upstream_certs_to_client_chain\", bool, False,\n \"\"\"\n Add all certificates of the upstream server to the certificate chain\n that will be served to the proxy client, as extras.\n \"\"\"\n )\n self.add_option(\n \"confdir\", str, CONF_DIR,\n \"Location of the default mitmproxy configuration files.\"\n )\n self.add_option(\n \"certs\", Sequence[str], [],\n \"\"\"\n SSL certificates of the form \"[domain=]path\". The domain may include\n a wildcard, and is equal to \"*\" if not specified. The file at path\n is a certificate in PEM format. If a private key is included in the\n PEM, it is used, else the default key in the conf dir is used. The\n PEM file should contain the full certificate chain, with the leaf\n certificate as the first entry.\n \"\"\"\n )\n self.add_option(\n \"cert_passphrase\", Optional[str], None,\n \"\"\"\n Passphrase for decrypting the private key provided in the --cert option.\n\n Note that passing cert_passphrase on the command line makes your passphrase visible in your system's\n process list. Specify it in config.yaml to avoid this.\n \"\"\"\n )\n self.add_option(\n \"ciphers_client\", Optional[str], None,\n \"Set supported ciphers for client connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"ciphers_server\", Optional[str], None,\n \"Set supported ciphers for server connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"client_certs\", Optional[str], None,\n \"Client certificate file or directory.\"\n )\n self.add_option(\n \"ignore_hosts\", Sequence[str], [],\n \"\"\"\n Ignore host and forward all traffic without processing it. In\n transparent mode, it is recommended to use an IP address (range),\n not the hostname. In regular mode, only SSL traffic is ignored and\n the hostname should be used. The supplied value is interpreted as a\n regular expression and matched on the ip or the hostname.\n \"\"\"\n )\n self.add_option(\n \"allow_hosts\", Sequence[str], [],\n \"Opposite of --ignore-hosts.\"\n )\n self.add_option(\n \"listen_host\", str, \"\",\n \"Address to bind proxy to.\"\n )\n self.add_option(\n \"listen_port\", int, LISTEN_PORT,\n \"Proxy service port.\"\n )\n self.add_option(\n \"mode\", str, \"regular\",\n \"\"\"\n Mode can be \"regular\", \"transparent\", \"socks5\", \"reverse:SPEC\",\n or \"upstream:SPEC\". For reverse and upstream proxy modes, SPEC\n is host specification in the form of \"http[s]://host[:port]\".\n \"\"\"\n )\n self.add_option(\n \"upstream_cert\", bool, True,\n \"Connect to upstream server to look up certificate details.\"\n )\n\n self.add_option(\n \"http2\", bool, True,\n \"Enable/disable HTTP/2 support. 
\"\n \"HTTP/2 support is enabled by default.\",\n )\n self.add_option(\n \"websocket\", bool, True,\n \"Enable/disable WebSocket support. \"\n \"WebSocket support is enabled by default.\",\n )\n self.add_option(\n \"rawtcp\", bool, True,\n \"Enable/disable raw TCP connections. \"\n \"TCP connections are enabled by default. \"\n )\n self.add_option(\n \"ssl_insecure\", bool, False,\n \"Do not verify upstream server SSL/TLS certificates.\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_confdir\", Optional[str], None,\n \"\"\"\n Path to a directory of trusted CA certificates for upstream server\n verification prepared using the c_rehash tool.\n \"\"\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_ca\", Optional[str], None,\n \"Path to a PEM formatted trusted CA certificate.\"\n )\n self.add_option(\n \"tcp_hosts\", Sequence[str], [],\n \"\"\"\n Generic TCP SSL proxy mode for all hosts that match the pattern.\n Similar to --ignore-hosts, but SSL connections are intercepted.\n The communication contents are printed to the log in verbose mode.\n \"\"\"\n )\n self.add_option(\n \"content_view_lines_cutoff\", int, CONTENT_VIEW_LINES_CUTOFF,\n \"\"\"\n Flow content view lines limit. Limit is enabled by default to\n speedup flows browsing.\n \"\"\"\n )\n self.add_option(\n \"key_size\", int, KEY_SIZE,\n \"\"\"\n TLS key size for certificates and CA.\n \"\"\"\n )\n\n self.update(**kwargs)\n", "path": "mitmproxy/options.py"}]} | 1,968 | 158 |
gh_patches_debug_33829 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-1430 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken link in Python37DeprecationWarning deprecation message
```
warnings.warn(message, Python37DeprecationWarning)
E google.auth.Python37DeprecationWarning: After January 1, 2024, new releases of this library will drop support for Python 3.7. More details about Python 3.7 support can be found at https://cloud.google.com/python/docs/python37-sunset/
```
The link https://cloud.google.com/python/docs/python37-sunset/ results in 404. We should remove it from the deprecation message.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/oauth2/__init__.py`
Content:
```
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Google OAuth 2.0 Library for Python."""
16
17 import sys
18 import warnings
19
20
21 class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER
22 """
23 Deprecation warning raised when Python 3.7 runtime is detected.
24 Python 3.7 support will be dropped after January 1, 2024. See
25 https://cloud.google.com/python/docs/python37-sunset/ for more information.
26 """
27
28 pass
29
30
31 # Checks if the current runtime is Python 3.7.
32 if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER
33 message = (
34 "After January 1, 2024, new releases of this library will drop support "
35 "for Python 3.7. More details about Python 3.7 support "
36 "can be found at https://cloud.google.com/python/docs/python37-sunset/"
37 )
38 warnings.warn(message, Python37DeprecationWarning)
39
```
Path: `google/auth/__init__.py`
Content:
```
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Google Auth Library for Python."""
16
17 import logging
18 import sys
19 import warnings
20
21 from google.auth import version as google_auth_version
22 from google.auth._default import (
23 default,
24 load_credentials_from_dict,
25 load_credentials_from_file,
26 )
27
28
29 __version__ = google_auth_version.__version__
30
31
32 __all__ = ["default", "load_credentials_from_file", "load_credentials_from_dict"]
33
34
35 class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER
36 """
37 Deprecation warning raised when Python 3.7 runtime is detected.
38 Python 3.7 support will be dropped after January 1, 2024. See
39 https://cloud.google.com/python/docs/python37-sunset/ for more information.
40 """
41
42 pass
43
44
45 # Checks if the current runtime is Python 3.7.
46 if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER
47 message = (
48 "After January 1, 2024, new releases of this library will drop support "
49 "for Python 3.7. More details about Python 3.7 support "
50 "can be found at https://cloud.google.com/python/docs/python37-sunset/"
51 )
52 warnings.warn(message, Python37DeprecationWarning)
53
54 # Set default logging handler to avoid "No handler found" warnings.
55 logging.getLogger(__name__).addHandler(logging.NullHandler())
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/google/auth/__init__.py b/google/auth/__init__.py
--- a/google/auth/__init__.py
+++ b/google/auth/__init__.py
@@ -35,8 +35,7 @@
class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER
"""
Deprecation warning raised when Python 3.7 runtime is detected.
- Python 3.7 support will be dropped after January 1, 2024. See
- https://cloud.google.com/python/docs/python37-sunset/ for more information.
+ Python 3.7 support will be dropped after January 1, 2024.
"""
pass
@@ -46,8 +45,7 @@
if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER
message = (
"After January 1, 2024, new releases of this library will drop support "
- "for Python 3.7. More details about Python 3.7 support "
- "can be found at https://cloud.google.com/python/docs/python37-sunset/"
+ "for Python 3.7."
)
warnings.warn(message, Python37DeprecationWarning)
diff --git a/google/oauth2/__init__.py b/google/oauth2/__init__.py
--- a/google/oauth2/__init__.py
+++ b/google/oauth2/__init__.py
@@ -21,8 +21,7 @@
class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER
"""
Deprecation warning raised when Python 3.7 runtime is detected.
- Python 3.7 support will be dropped after January 1, 2024. See
- https://cloud.google.com/python/docs/python37-sunset/ for more information.
+ Python 3.7 support will be dropped after January 1, 2024.
"""
pass
@@ -32,7 +31,6 @@
if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER
message = (
"After January 1, 2024, new releases of this library will drop support "
- "for Python 3.7. More details about Python 3.7 support "
- "can be found at https://cloud.google.com/python/docs/python37-sunset/"
+ "for Python 3.7."
)
warnings.warn(message, Python37DeprecationWarning)
| {"golden_diff": "diff --git a/google/auth/__init__.py b/google/auth/__init__.py\n--- a/google/auth/__init__.py\n+++ b/google/auth/__init__.py\n@@ -35,8 +35,7 @@\n class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n- Python 3.7 support will be dropped after January 1, 2024. See\n- https://cloud.google.com/python/docs/python37-sunset/ for more information.\n+ Python 3.7 support will be dropped after January 1, 2024.\n \"\"\"\n \n pass\n@@ -46,8 +45,7 @@\n if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n- \"for Python 3.7. More details about Python 3.7 support \"\n- \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n+ \"for Python 3.7.\"\n )\n warnings.warn(message, Python37DeprecationWarning)\n \ndiff --git a/google/oauth2/__init__.py b/google/oauth2/__init__.py\n--- a/google/oauth2/__init__.py\n+++ b/google/oauth2/__init__.py\n@@ -21,8 +21,7 @@\n class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n- Python 3.7 support will be dropped after January 1, 2024. See\n- https://cloud.google.com/python/docs/python37-sunset/ for more information.\n+ Python 3.7 support will be dropped after January 1, 2024.\n \"\"\"\n \n pass\n@@ -32,7 +31,6 @@\n if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n- \"for Python 3.7. More details about Python 3.7 support \"\n- \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n+ \"for Python 3.7.\"\n )\n warnings.warn(message, Python37DeprecationWarning)\n", "issue": "Broken link in Python37DeprecationWarning deprecation message\n```\r\n warnings.warn(message, Python37DeprecationWarning)\r\nE google.auth.Python37DeprecationWarning: After January 1, 2024, new releases of this library will drop support for Python 3.7. More details about Python 3.7 support can be found at https://cloud.google.com/python/docs/python37-sunset/\r\n```\r\nThe link https://cloud.google.com/python/docs/python37-sunset/ results in 404. We should remove it from the deprecation message.\n", "before_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google OAuth 2.0 Library for Python.\"\"\"\n\nimport sys\nimport warnings\n\n\nclass Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n Python 3.7 support will be dropped after January 1, 2024. 
See\n https://cloud.google.com/python/docs/python37-sunset/ for more information.\n \"\"\"\n\n pass\n\n\n# Checks if the current runtime is Python 3.7.\nif sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n \"for Python 3.7. More details about Python 3.7 support \"\n \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n )\n warnings.warn(message, Python37DeprecationWarning)\n", "path": "google/oauth2/__init__.py"}, {"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google Auth Library for Python.\"\"\"\n\nimport logging\nimport sys\nimport warnings\n\nfrom google.auth import version as google_auth_version\nfrom google.auth._default import (\n default,\n load_credentials_from_dict,\n load_credentials_from_file,\n)\n\n\n__version__ = google_auth_version.__version__\n\n\n__all__ = [\"default\", \"load_credentials_from_file\", \"load_credentials_from_dict\"]\n\n\nclass Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n Python 3.7 support will be dropped after January 1, 2024. See\n https://cloud.google.com/python/docs/python37-sunset/ for more information.\n \"\"\"\n\n pass\n\n\n# Checks if the current runtime is Python 3.7.\nif sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n \"for Python 3.7. 
More details about Python 3.7 support \"\n \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n )\n warnings.warn(message, Python37DeprecationWarning)\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "google/auth/__init__.py"}], "after_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google OAuth 2.0 Library for Python.\"\"\"\n\nimport sys\nimport warnings\n\n\nclass Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n Python 3.7 support will be dropped after January 1, 2024.\n \"\"\"\n\n pass\n\n\n# Checks if the current runtime is Python 3.7.\nif sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n \"for Python 3.7.\"\n )\n warnings.warn(message, Python37DeprecationWarning)\n", "path": "google/oauth2/__init__.py"}, {"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google Auth Library for Python.\"\"\"\n\nimport logging\nimport sys\nimport warnings\n\nfrom google.auth import version as google_auth_version\nfrom google.auth._default import (\n default,\n load_credentials_from_dict,\n load_credentials_from_file,\n)\n\n\n__version__ = google_auth_version.__version__\n\n\n__all__ = [\"default\", \"load_credentials_from_file\", \"load_credentials_from_dict\"]\n\n\nclass Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n Python 3.7 support will be dropped after January 1, 2024.\n \"\"\"\n\n pass\n\n\n# Checks if the current runtime is Python 3.7.\nif sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n \"for Python 3.7.\"\n )\n warnings.warn(message, Python37DeprecationWarning)\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "google/auth/__init__.py"}]} | 1,382 | 577 |
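A side note on the deprecation gate patched above: because the warning fires at import time, a consumer who wants to silence it has to install a filter before the import happens. The snippet below is only a sketch of that standard-library pattern; the class name comes from the files shown above, everything else is plain `warnings` usage.

```python
import warnings

# The notice is emitted when google.auth is imported on a 3.7 interpreter,
# so the filter has to be in place before the import runs.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    import google.auth  # Python37DeprecationWarning subclasses DeprecationWarning
```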
gh_patches_debug_14755 | rasdani/github-patches | git_diff | ansible__ansible-41206 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws_s3 is automatically decrypting ansible-vault encrypted files before put
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and devel branch are affected too.
Always add information AFTER of these html comments. -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
aws_s3
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
2.5.1
```
##### SUMMARY
- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3.
- aws_s3 in 2.4 didn't decrypt the src: parameter.
- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.
- The aws_s3 module doesn't accept the decrypt: argument.
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: upload vault to s3
aws_s3:
bucket: "the bucket"
object: "file.txt"
src: "file.txt"
mode: put
```
1. The file.txt is encrypted with ansible-vault.
2. The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.
##### EXPECTED RESULTS
Don't autodecrypt the src: argument or be able to specify decrypt: no.
##### ACTUAL RESULTS
The src: argument to aws_s3 is automagically decrypted without documentation or a way to disable the feature like other modules (e.g. copy).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/plugins/action/aws_s3.py`
Content:
```
1 # (c) 2012, Michael DeHaan <[email protected]>
2 # (c) 2018, Will Thames <[email protected]>
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18 from __future__ import (absolute_import, division, print_function)
19 __metaclass__ = type
20
21 import os
22
23 from ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound
24 from ansible.module_utils._text import to_text
25 from ansible.plugins.action import ActionBase
26
27
28 class ActionModule(ActionBase):
29
30 TRANSFERS_FILES = True
31
32 def run(self, tmp=None, task_vars=None):
33 ''' handler for aws_s3 operations '''
34 if task_vars is None:
35 task_vars = dict()
36
37 result = super(ActionModule, self).run(tmp, task_vars)
38 del tmp # tmp no longer has any effect
39
40 source = self._task.args.get('src', None)
41
42 try:
43 new_module_args = self._task.args.copy()
44 if source:
45 source = os.path.expanduser(source)
46
47 # For backward compatibility check if the file exists on the remote; it should take precedence
48 if not self._remote_file_exists(source):
49 try:
50 source = self._loader.get_real_file(self._find_needle('files', source))
51 new_module_args['src'] = source
52 except AnsibleFileNotFound as e:
53 # module handles error message for nonexistent files
54 new_module_args['src'] = source
55 except AnsibleError as e:
56 raise AnsibleActionFail(to_text(e))
57
58 # execute the aws_s3 module now, with the updated args
59 result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))
60 except AnsibleAction as e:
61 result.update(e.result)
62 return result
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/plugins/action/aws_s3.py b/lib/ansible/plugins/action/aws_s3.py
--- a/lib/ansible/plugins/action/aws_s3.py
+++ b/lib/ansible/plugins/action/aws_s3.py
@@ -47,7 +47,7 @@
# For backward compatibility check if the file exists on the remote; it should take precedence
if not self._remote_file_exists(source):
try:
- source = self._loader.get_real_file(self._find_needle('files', source))
+ source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)
new_module_args['src'] = source
except AnsibleFileNotFound as e:
# module handles error message for nonexistent files
| {"golden_diff": "diff --git a/lib/ansible/plugins/action/aws_s3.py b/lib/ansible/plugins/action/aws_s3.py\n--- a/lib/ansible/plugins/action/aws_s3.py\n+++ b/lib/ansible/plugins/action/aws_s3.py\n@@ -47,7 +47,7 @@\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n- source = self._loader.get_real_file(self._find_needle('files', source))\n+ source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n", "issue": "aws_s3 is automaticly decrypting ansible-vault encrypted files before put\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and devel branch are affected too.\r\nAlways add information AFTER of these html comments. -->\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\naws_s3\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\n2.5.1\r\n```\r\n\r\n##### SUMMARY\r\n- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3. \r\n- aws_s3 in 2.4 didn't decrypt the src: parameter.\r\n- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.\r\n- The aws_s3 module doesn't accept the decrypt: argument.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: upload vault to s3\r\n aws_s3:\r\n bucket: \"the bucket\"\r\n object: \"file.txt\"\r\n src: \"file.txt\"\r\n mode: put\r\n```\r\n1. The file.txt is encrypted with ansible-vault. \r\n2. The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.\r\n\r\n##### EXPECTED RESULTS\r\nDon't autodecrypt the src: argument or be able to specify decrypt: no.\r\n\r\n##### ACTUAL RESULTS\r\nThe src: argument to aws_s3 is automagicly decrypted without documentation or a way to disable the feature like other modules (ex. copy).\r\n\n", "before_files": [{"content": "# (c) 2012, Michael DeHaan <[email protected]>\n# (c) 2018, Will Thames <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\n\nfrom ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound\nfrom ansible.module_utils._text import to_text\nfrom ansible.plugins.action import ActionBase\n\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = True\n\n def run(self, tmp=None, task_vars=None):\n ''' handler for aws_s3 operations '''\n if task_vars is None:\n task_vars = dict()\n\n result = super(ActionModule, self).run(tmp, task_vars)\n del tmp # tmp no longer has any effect\n\n source = self._task.args.get('src', None)\n\n try:\n new_module_args = self._task.args.copy()\n if source:\n source = os.path.expanduser(source)\n\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n source = self._loader.get_real_file(self._find_needle('files', source))\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n new_module_args['src'] = source\n except AnsibleError as e:\n raise AnsibleActionFail(to_text(e))\n\n # execute the aws_s3 module now, with the updated args\n result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))\n except AnsibleAction as e:\n result.update(e.result)\n return result\n", "path": "lib/ansible/plugins/action/aws_s3.py"}], "after_files": [{"content": "# (c) 2012, Michael DeHaan <[email protected]>\n# (c) 2018, Will Thames <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\n\nfrom ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound\nfrom ansible.module_utils._text import to_text\nfrom ansible.plugins.action import ActionBase\n\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = True\n\n def run(self, tmp=None, task_vars=None):\n ''' handler for aws_s3 operations '''\n if task_vars is None:\n task_vars = dict()\n\n result = super(ActionModule, self).run(tmp, task_vars)\n del tmp # tmp no longer has any effect\n\n source = self._task.args.get('src', None)\n\n try:\n new_module_args = self._task.args.copy()\n if source:\n source = os.path.expanduser(source)\n\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n new_module_args['src'] = source\n except AnsibleError as e:\n raise AnsibleActionFail(to_text(e))\n\n # execute the aws_s3 module now, with the updated args\n result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))\n except AnsibleAction as e:\n result.update(e.result)\n return result\n", "path": "lib/ansible/plugins/action/aws_s3.py"}]} | 1,290 | 164 |
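For context on the one-keyword change above: `DataLoader.get_real_file` decrypts vault files by default, which is exactly the behaviour the issue complains about, and `decrypt=False` (taken verbatim from the diff) skips that step. A hedged sketch of the distinction follows; the helper name and annotations are illustrative, not part of Ansible.

```python
from ansible.parsing.dataloader import DataLoader

def resolve_upload_src(loader: DataLoader, source: str) -> str:
    """Return the path aws_s3 should read, leaving any vault ciphertext intact."""
    # With the default decrypt=True the loader hands back a decrypted copy of a
    # vault-encrypted file; decrypt=False keeps the encrypted bytes, so that is
    # what gets uploaded to S3 after the patch.
    return loader.get_real_file(source, decrypt=False)
```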
gh_patches_debug_22681 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-1188 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Source/Target length distinction
## Preprocess parameters
Removed parameter `-seq_length`
New parameters `-src_seq_length` and `-tgt_seq_length`
---
## Training speed token/s
In both LUA/PyTorch OpenNMT, the training process prints a speed, in token/sec, but:
* LUA OpenNMT is printing source token/sec
* PyOpenNMT is printing target token/sec
This can lead to important differences, especially when src/tgt sequence lengths are different (e.g. summarization), and can therefore lead to false conclusions about performance.
See also: [pytorch/examples#75](https://github.com/pytorch/examples/issues/75)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onmt/inputters/dataset_base.py`
Content:
```
1 # coding: utf-8
2
3 from itertools import chain
4 from collections import Counter
5 import codecs
6
7 import torch
8 from torchtext.data import Example, Dataset
9 from torchtext.vocab import Vocab
10
11
12 class DatasetBase(Dataset):
13 """
14 A dataset is an object that accepts sequences of raw data (sentence pairs
15 in the case of machine translation) and fields which describe how this
16 raw data should be processed to produce tensors. When a dataset is
17 instantiated, it applies the fields' preprocessing pipeline (but not
18 the bit that numericalizes it or turns it into batch tensors) to the raw
19 data, producing a list of torchtext.data.Example objects. torchtext's
20 iterators then know how to use these examples to make batches.
21
22 Datasets in OpenNMT take three positional arguments:
23
24 `fields`: a dict with the structure returned by inputters.get_fields().
25 keys match the keys of items yielded by the src_examples_iter or
26 tgt_examples_iter, while values are lists of (name, Field) pairs.
27 An attribute with this name will be created for each Example object,
28 and its value will be the result of applying the Field to the data
29 that matches the key. The advantage of having sequences of fields
30 for each piece of raw input is that it allows for the dataset to store
31 multiple `views` of each input, which allows for easy implementation
32 of token-level features, mixed word- and character-level models, and
33 so on.
34 `src_examples_iter`: a sequence of dicts. Each dict's keys should be a
35 subset of the keys in `fields`.
36 `tgt_examples_iter`: like `src_examples_iter`, but may be None (this is
37 the case at translation time if no target is specified).
38
39 `filter_pred` if specified, a function that accepts Example objects and
40 returns a boolean value indicating whether to include that example
41 in the dataset.
42
43 The resulting dataset will have three attributes (todo: also src_vocabs):
44
45 `examples`: a list of `torchtext.data.Example` objects with attributes as
46 described above.
47 `fields`: a dictionary whose keys are strings with the same names as the
48 attributes of the elements of `examples` and whose values are
49 the corresponding `torchtext.data.Field` objects. NOTE: this is not
50 the same structure as in the fields argument passed to the constructor.
51 """
52
53 def __getstate__(self):
54 return self.__dict__
55
56 def __setstate__(self, _d):
57 self.__dict__.update(_d)
58
59 def __reduce_ex__(self, proto):
60 # This is a hack. Something is broken with torch pickle.
61 return super(DatasetBase, self).__reduce_ex__()
62
63 def __init__(self, fields, src_examples_iter, tgt_examples_iter,
64 filter_pred=None):
65
66 dynamic_dict = 'src_map' in fields and 'alignment' in fields
67
68 if tgt_examples_iter is not None:
69 examples_iter = (self._join_dicts(src, tgt) for src, tgt in
70 zip(src_examples_iter, tgt_examples_iter))
71 else:
72 examples_iter = src_examples_iter
73
74 # self.src_vocabs is used in collapse_copy_scores and Translator.py
75 self.src_vocabs = []
76 examples = []
77 for ex_dict in examples_iter:
78 if dynamic_dict:
79 src_field = fields['src'][0][1]
80 tgt_field = fields['tgt'][0][1]
81 src_vocab, ex_dict = self._dynamic_dict(
82 ex_dict, src_field, tgt_field)
83 self.src_vocabs.append(src_vocab)
84 ex_fields = {k: v for k, v in fields.items() if k in ex_dict}
85 ex = Example.fromdict(ex_dict, ex_fields)
86 examples.append(ex)
87
88 # the dataset's self.fields should have the same attributes as examples
89 fields = dict(chain.from_iterable(ex_fields.values()))
90
91 super(DatasetBase, self).__init__(examples, fields, filter_pred)
92
93 def save(self, path, remove_fields=True):
94 if remove_fields:
95 self.fields = []
96 torch.save(self, path)
97
98 def _join_dicts(self, *args):
99 """
100 Args:
101 dictionaries with disjoint keys.
102
103 Returns:
104 a single dictionary that has the union of these keys.
105 """
106 return dict(chain(*[d.items() for d in args]))
107
108 def _dynamic_dict(self, example, src_field, tgt_field):
109 src = src_field.tokenize(example["src"])
110 # make a small vocab containing just the tokens in the source sequence
111 unk = src_field.unk_token
112 pad = src_field.pad_token
113 src_vocab = Vocab(Counter(src), specials=[unk, pad])
114 # Map source tokens to indices in the dynamic dict.
115 src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
116 example["src_map"] = src_map
117
118 if "tgt" in example:
119 tgt = tgt_field.tokenize(example["tgt"])
120 mask = torch.LongTensor(
121 [0] + [src_vocab.stoi[w] for w in tgt] + [0])
122 example["alignment"] = mask
123 return src_vocab, example
124
125 @property
126 def can_copy(self):
127 return False
128
129 @classmethod
130 def _read_file(cls, path):
131 with codecs.open(path, "r", "utf-8") as f:
132 for line in f:
133 yield line
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/onmt/inputters/dataset_base.py b/onmt/inputters/dataset_base.py
--- a/onmt/inputters/dataset_base.py
+++ b/onmt/inputters/dataset_base.py
@@ -50,16 +50,6 @@
the same structure as in the fields argument passed to the constructor.
"""
- def __getstate__(self):
- return self.__dict__
-
- def __setstate__(self, _d):
- self.__dict__.update(_d)
-
- def __reduce_ex__(self, proto):
- # This is a hack. Something is broken with torch pickle.
- return super(DatasetBase, self).__reduce_ex__()
-
def __init__(self, fields, src_examples_iter, tgt_examples_iter,
filter_pred=None):
@@ -90,6 +80,15 @@
super(DatasetBase, self).__init__(examples, fields, filter_pred)
+ def __getattr__(self, attr):
+ # avoid infinite recursion when fields isn't defined
+ if 'fields' not in vars(self):
+ raise AttributeError
+ if attr in self.fields:
+ return (getattr(x, attr) for x in self.examples)
+ else:
+ raise AttributeError
+
def save(self, path, remove_fields=True):
if remove_fields:
self.fields = []
| {"golden_diff": "diff --git a/onmt/inputters/dataset_base.py b/onmt/inputters/dataset_base.py\n--- a/onmt/inputters/dataset_base.py\n+++ b/onmt/inputters/dataset_base.py\n@@ -50,16 +50,6 @@\n the same structure as in the fields argument passed to the constructor.\n \"\"\"\n \n- def __getstate__(self):\n- return self.__dict__\n-\n- def __setstate__(self, _d):\n- self.__dict__.update(_d)\n-\n- def __reduce_ex__(self, proto):\n- # This is a hack. Something is broken with torch pickle.\n- return super(DatasetBase, self).__reduce_ex__()\n-\n def __init__(self, fields, src_examples_iter, tgt_examples_iter,\n filter_pred=None):\n \n@@ -90,6 +80,15 @@\n \n super(DatasetBase, self).__init__(examples, fields, filter_pred)\n \n+ def __getattr__(self, attr):\n+ # avoid infinite recursion when fields isn't defined\n+ if 'fields' not in vars(self):\n+ raise AttributeError\n+ if attr in self.fields:\n+ return (getattr(x, attr) for x in self.examples)\n+ else:\n+ raise AttributeError\n+\n def save(self, path, remove_fields=True):\n if remove_fields:\n self.fields = []\n", "issue": "Source/Target length distinction\n## Preprocess parameters\r\nRemoved parameter `-seq_length`\r\nNew parameters `-src_seq_length` and `-tgt_seq_length`\r\n\r\n---\r\n\r\n## Training speed token/s\r\nIn both LUA/PyTorch OpenNMT, the training process prints a speed, in token/sec, but:\r\n\r\n* LUA OpenNMT is printing source token/sec\r\n* PyOpenNMT is printing target token/sec\r\n\r\nThis can lead to important differences, especially when src/tgt sequence length are different (e.g. summarization), and therefore lead to false conclusion about performances.\r\n\r\nSee also: [pytoch/example/issue#75](https://github.com/pytorch/examples/issues/75)\n", "before_files": [{"content": "# coding: utf-8\n\nfrom itertools import chain\nfrom collections import Counter\nimport codecs\n\nimport torch\nfrom torchtext.data import Example, Dataset\nfrom torchtext.vocab import Vocab\n\n\nclass DatasetBase(Dataset):\n \"\"\"\n A dataset is an object that accepts sequences of raw data (sentence pairs\n in the case of machine translation) and fields which describe how this\n raw data should be processed to produce tensors. When a dataset is\n instantiated, it applies the fields' preprocessing pipeline (but not\n the bit that numericalizes it or turns it into batch tensors) to the raw\n data, producing a list of torchtext.data.Example objects. torchtext's\n iterators then know how to use these examples to make batches.\n\n Datasets in OpenNMT take three positional arguments:\n\n `fields`: a dict with the structure returned by inputters.get_fields().\n keys match the keys of items yielded by the src_examples_iter or\n tgt_examples_iter, while values are lists of (name, Field) pairs.\n An attribute with this name will be created for each Example object,\n and its value will be the result of applying the Field to the data\n that matches the key. The advantage of having sequences of fields\n for each piece of raw input is that it allows for the dataset to store\n multiple `views` of each input, which allows for easy implementation\n of token-level features, mixed word- and character-level models, and\n so on.\n `src_examples_iter`: a sequence of dicts. 
Each dict's keys should be a\n subset of the keys in `fields`.\n `tgt_examples_iter`: like `src_examples_iter`, but may be None (this is\n the case at translation time if no target is specified).\n\n `filter_pred` if specified, a function that accepts Example objects and\n returns a boolean value indicating whether to include that example\n in the dataset.\n\n The resulting dataset will have three attributes (todo: also src_vocabs):\n\n `examples`: a list of `torchtext.data.Example` objects with attributes as\n described above.\n `fields`: a dictionary whose keys are strings with the same names as the\n attributes of the elements of `examples` and whose values are\n the corresponding `torchtext.data.Field` objects. NOTE: this is not\n the same structure as in the fields argument passed to the constructor.\n \"\"\"\n\n def __getstate__(self):\n return self.__dict__\n\n def __setstate__(self, _d):\n self.__dict__.update(_d)\n\n def __reduce_ex__(self, proto):\n # This is a hack. Something is broken with torch pickle.\n return super(DatasetBase, self).__reduce_ex__()\n\n def __init__(self, fields, src_examples_iter, tgt_examples_iter,\n filter_pred=None):\n\n dynamic_dict = 'src_map' in fields and 'alignment' in fields\n\n if tgt_examples_iter is not None:\n examples_iter = (self._join_dicts(src, tgt) for src, tgt in\n zip(src_examples_iter, tgt_examples_iter))\n else:\n examples_iter = src_examples_iter\n\n # self.src_vocabs is used in collapse_copy_scores and Translator.py\n self.src_vocabs = []\n examples = []\n for ex_dict in examples_iter:\n if dynamic_dict:\n src_field = fields['src'][0][1]\n tgt_field = fields['tgt'][0][1]\n src_vocab, ex_dict = self._dynamic_dict(\n ex_dict, src_field, tgt_field)\n self.src_vocabs.append(src_vocab)\n ex_fields = {k: v for k, v in fields.items() if k in ex_dict}\n ex = Example.fromdict(ex_dict, ex_fields)\n examples.append(ex)\n\n # the dataset's self.fields should have the same attributes as examples\n fields = dict(chain.from_iterable(ex_fields.values()))\n\n super(DatasetBase, self).__init__(examples, fields, filter_pred)\n\n def save(self, path, remove_fields=True):\n if remove_fields:\n self.fields = []\n torch.save(self, path)\n\n def _join_dicts(self, *args):\n \"\"\"\n Args:\n dictionaries with disjoint keys.\n\n Returns:\n a single dictionary that has the union of these keys.\n \"\"\"\n return dict(chain(*[d.items() for d in args]))\n\n def _dynamic_dict(self, example, src_field, tgt_field):\n src = src_field.tokenize(example[\"src\"])\n # make a small vocab containing just the tokens in the source sequence\n unk = src_field.unk_token\n pad = src_field.pad_token\n src_vocab = Vocab(Counter(src), specials=[unk, pad])\n # Map source tokens to indices in the dynamic dict.\n src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])\n example[\"src_map\"] = src_map\n\n if \"tgt\" in example:\n tgt = tgt_field.tokenize(example[\"tgt\"])\n mask = torch.LongTensor(\n [0] + [src_vocab.stoi[w] for w in tgt] + [0])\n example[\"alignment\"] = mask\n return src_vocab, example\n\n @property\n def can_copy(self):\n return False\n\n @classmethod\n def _read_file(cls, path):\n with codecs.open(path, \"r\", \"utf-8\") as f:\n for line in f:\n yield line\n", "path": "onmt/inputters/dataset_base.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom itertools import chain\nfrom collections import Counter\nimport codecs\n\nimport torch\nfrom torchtext.data import Example, Dataset\nfrom torchtext.vocab import Vocab\n\n\nclass DatasetBase(Dataset):\n 
\"\"\"\n A dataset is an object that accepts sequences of raw data (sentence pairs\n in the case of machine translation) and fields which describe how this\n raw data should be processed to produce tensors. When a dataset is\n instantiated, it applies the fields' preprocessing pipeline (but not\n the bit that numericalizes it or turns it into batch tensors) to the raw\n data, producing a list of torchtext.data.Example objects. torchtext's\n iterators then know how to use these examples to make batches.\n\n Datasets in OpenNMT take three positional arguments:\n\n `fields`: a dict with the structure returned by inputters.get_fields().\n keys match the keys of items yielded by the src_examples_iter or\n tgt_examples_iter, while values are lists of (name, Field) pairs.\n An attribute with this name will be created for each Example object,\n and its value will be the result of applying the Field to the data\n that matches the key. The advantage of having sequences of fields\n for each piece of raw input is that it allows for the dataset to store\n multiple `views` of each input, which allows for easy implementation\n of token-level features, mixed word- and character-level models, and\n so on.\n `src_examples_iter`: a sequence of dicts. Each dict's keys should be a\n subset of the keys in `fields`.\n `tgt_examples_iter`: like `src_examples_iter`, but may be None (this is\n the case at translation time if no target is specified).\n\n `filter_pred` if specified, a function that accepts Example objects and\n returns a boolean value indicating whether to include that example\n in the dataset.\n\n The resulting dataset will have three attributes (todo: also src_vocabs):\n\n `examples`: a list of `torchtext.data.Example` objects with attributes as\n described above.\n `fields`: a dictionary whose keys are strings with the same names as the\n attributes of the elements of `examples` and whose values are\n the corresponding `torchtext.data.Field` objects. 
NOTE: this is not\n the same structure as in the fields argument passed to the constructor.\n \"\"\"\n\n def __init__(self, fields, src_examples_iter, tgt_examples_iter,\n filter_pred=None):\n\n dynamic_dict = 'src_map' in fields and 'alignment' in fields\n\n if tgt_examples_iter is not None:\n examples_iter = (self._join_dicts(src, tgt) for src, tgt in\n zip(src_examples_iter, tgt_examples_iter))\n else:\n examples_iter = src_examples_iter\n\n # self.src_vocabs is used in collapse_copy_scores and Translator.py\n self.src_vocabs = []\n examples = []\n for ex_dict in examples_iter:\n if dynamic_dict:\n src_field = fields['src'][0][1]\n tgt_field = fields['tgt'][0][1]\n src_vocab, ex_dict = self._dynamic_dict(\n ex_dict, src_field, tgt_field)\n self.src_vocabs.append(src_vocab)\n ex_fields = {k: v for k, v in fields.items() if k in ex_dict}\n ex = Example.fromdict(ex_dict, ex_fields)\n examples.append(ex)\n\n # the dataset's self.fields should have the same attributes as examples\n fields = dict(chain.from_iterable(ex_fields.values()))\n\n super(DatasetBase, self).__init__(examples, fields, filter_pred)\n\n def __getattr__(self, attr):\n # avoid infinite recursion when fields isn't defined\n if 'fields' not in vars(self):\n raise AttributeError\n if attr in self.fields:\n return (getattr(x, attr) for x in self.examples)\n else:\n raise AttributeError\n\n def save(self, path, remove_fields=True):\n if remove_fields:\n self.fields = []\n torch.save(self, path)\n\n def _join_dicts(self, *args):\n \"\"\"\n Args:\n dictionaries with disjoint keys.\n\n Returns:\n a single dictionary that has the union of these keys.\n \"\"\"\n return dict(chain(*[d.items() for d in args]))\n\n def _dynamic_dict(self, example, src_field, tgt_field):\n src = src_field.tokenize(example[\"src\"])\n # make a small vocab containing just the tokens in the source sequence\n unk = src_field.unk_token\n pad = src_field.pad_token\n src_vocab = Vocab(Counter(src), specials=[unk, pad])\n # Map source tokens to indices in the dynamic dict.\n src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])\n example[\"src_map\"] = src_map\n\n if \"tgt\" in example:\n tgt = tgt_field.tokenize(example[\"tgt\"])\n mask = torch.LongTensor(\n [0] + [src_vocab.stoi[w] for w in tgt] + [0])\n example[\"alignment\"] = mask\n return src_vocab, example\n\n @property\n def can_copy(self):\n return False\n\n @classmethod\n def _read_file(cls, path):\n with codecs.open(path, \"r\", \"utf-8\") as f:\n for line in f:\n yield line\n", "path": "onmt/inputters/dataset_base.py"}]} | 1,882 | 301 |
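The `__getattr__` added in the diff above turns field-named attributes on the dataset into generators over its examples (e.g. `dataset.src`). The toy class below mirrors that logic in isolation so the access pattern can be seen without constructing real fields or examples; all names here are illustrative.

```python
class AttrForwarding:
    """Stripped-down stand-in for the DatasetBase.__getattr__ added in the patch."""

    def __init__(self, fields, examples):
        self.fields = fields
        self.examples = examples

    def __getattr__(self, attr):
        # Guard against recursion when 'fields' is not set yet (e.g. during unpickling).
        if 'fields' not in vars(self):
            raise AttributeError
        if attr in self.fields:
            return (getattr(x, attr) for x in self.examples)
        raise AttributeError


class Example:
    def __init__(self, src):
        self.src = src


dataset = AttrForwarding(fields={'src': None},
                         examples=[Example(['a', 'b']), Example(['c'])])
print(list(dataset.src))  # [['a', 'b'], ['c']]
```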
gh_patches_debug_21995 | rasdani/github-patches | git_diff | openai__gym-1661 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove FireReset wrapper for atari environments
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gym/wrappers/atari_preprocessing.py`
Content:
```
1 import numpy as np
2
3 import gym
4 from gym.spaces import Box
5 from gym.wrappers import TimeLimit
6
7
8 class AtariPreprocessing(gym.Wrapper):
9 r"""Atari 2600 preprocessings.
10
11 This class follows the guidelines in
12 Machado et al. (2018), "Revisiting the Arcade Learning Environment:
13 Evaluation Protocols and Open Problems for General Agents".
14
15 Specifically:
16
17 * NoopReset: obtain initial state by taking random number of no-ops on reset.
18 * FireReset: take action on reset for environments that are fixed until firing.
19 * Frame skipping: 4 by default
20 * Max-pooling: most recent two observations
21 * Termination signal when a life is lost: turned off by default. Not recommended by Machado et al. (2018).
22 * Resize to a square image: 84x84 by default
23 * Grayscale observation: optional
24 * Scale observation: optional
25
26 Args:
27 env (Env): environment
28 noop_max (int): max number of no-ops
29 frame_skip (int): the frequency at which the agent experiences the game.
30 screen_size (int): resize Atari frame
31 terminal_on_life_loss (bool): if True, then step() returns done=True whenever a
32 life is lost.
33 grayscale_obs (bool): if True, then gray scale observation is returned, otherwise, RGB observation
34 is returned.
35 scale_obs (bool): if True, then observation normalized in range [0,1] is returned. It also limits memory
36 optimization benefits of FrameStack Wrapper.
37 """
38
39 def __init__(self, env, noop_max=30, frame_skip=4, screen_size=84, terminal_on_life_loss=False, grayscale_obs=True,
40 scale_obs=False):
41 super().__init__(env)
42 assert frame_skip > 0
43 assert screen_size > 0
44
45 self.noop_max = noop_max
46 assert env.unwrapped.get_action_meanings()[0] == 'NOOP'
47
48 self.frame_skip = frame_skip
49 self.screen_size = screen_size
50 self.terminal_on_life_loss = terminal_on_life_loss
51 self.grayscale_obs = grayscale_obs
52 self.scale_obs = scale_obs
53
54 # buffer of most recent two observations for max pooling
55 if grayscale_obs:
56 self.obs_buffer = [np.empty(env.observation_space.shape[:2], dtype=np.uint8),
57 np.empty(env.observation_space.shape[:2], dtype=np.uint8)]
58 else:
59 self.obs_buffer = [np.empty(env.observation_space.shape, dtype=np.uint8),
60 np.empty(env.observation_space.shape, dtype=np.uint8)]
61
62 self.ale = env.unwrapped.ale
63 self.lives = 0
64 self.game_over = False
65
66 _low, _high, _obs_dtype = (0, 255, np.uint8) if not scale_obs else (0, 1, np.float32)
67 if grayscale_obs:
68 self.observation_space = Box(low=_low, high=_high, shape=(screen_size, screen_size), dtype=_obs_dtype)
69 else:
70 self.observation_space = Box(low=_low, high=_high, shape=(screen_size, screen_size, 3), dtype=_obs_dtype)
71
72 def step(self, action):
73 R = 0.0
74
75 for t in range(self.frame_skip):
76 _, reward, done, info = self.env.step(action)
77 R += reward
78 self.game_over = done
79
80 if self.terminal_on_life_loss:
81 new_lives = self.ale.lives()
82 done = done or new_lives < self.lives
83 self.lives = new_lives
84
85 if done:
86 break
87 if t == self.frame_skip - 2:
88 if self.grayscale_obs:
89 self.ale.getScreenGrayscale(self.obs_buffer[0])
90 else:
91 self.ale.getScreenRGB2(self.obs_buffer[0])
92 elif t == self.frame_skip - 1:
93 if self.grayscale_obs:
94 self.ale.getScreenGrayscale(self.obs_buffer[1])
95 else:
96 self.ale.getScreenRGB2(self.obs_buffer[1])
97 return self._get_obs(), R, done, info
98
99 def reset(self, **kwargs):
100 # NoopReset
101 self.env.reset(**kwargs)
102 noops = self.env.unwrapped.np_random.randint(1, self.noop_max + 1) if self.noop_max > 0 else 0
103 for _ in range(noops):
104 _, _, done, _ = self.env.step(0)
105 if done:
106 self.env.reset(**kwargs)
107
108 # FireReset
109 action_meanings = self.env.unwrapped.get_action_meanings()
110 if action_meanings[1] == 'FIRE' and len(action_meanings) >= 3:
111 self.env.step(1)
112 self.env.step(2)
113
114 self.lives = self.ale.lives()
115 if self.grayscale_obs:
116 self.ale.getScreenGrayscale(self.obs_buffer[0])
117 else:
118 self.ale.getScreenRGB2(self.obs_buffer[0])
119 self.obs_buffer[1].fill(0)
120 return self._get_obs()
121
122 def _get_obs(self):
123 import cv2
124 if self.frame_skip > 1: # more efficient in-place pooling
125 np.maximum(self.obs_buffer[0], self.obs_buffer[1], out=self.obs_buffer[0])
126 obs = cv2.resize(self.obs_buffer[0], (self.screen_size, self.screen_size), interpolation=cv2.INTER_AREA)
127
128 if self.scale_obs:
129 obs = np.asarray(obs, dtype=np.float32) / 255.0
130 else:
131 obs = np.asarray(obs, dtype=np.uint8)
132 return obs
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gym/wrappers/atari_preprocessing.py b/gym/wrappers/atari_preprocessing.py
--- a/gym/wrappers/atari_preprocessing.py
+++ b/gym/wrappers/atari_preprocessing.py
@@ -15,7 +15,6 @@
Specifically:
* NoopReset: obtain initial state by taking random number of no-ops on reset.
- * FireReset: take action on reset for environments that are fixed until firing.
* Frame skipping: 4 by default
* Max-pooling: most recent two observations
* Termination signal when a life is lost: turned off by default. Not recommended by Machado et al. (2018).
@@ -105,12 +104,6 @@
if done:
self.env.reset(**kwargs)
- # FireReset
- action_meanings = self.env.unwrapped.get_action_meanings()
- if action_meanings[1] == 'FIRE' and len(action_meanings) >= 3:
- self.env.step(1)
- self.env.step(2)
-
self.lives = self.ale.lives()
if self.grayscale_obs:
self.ale.getScreenGrayscale(self.obs_buffer[0])
| {"golden_diff": "diff --git a/gym/wrappers/atari_preprocessing.py b/gym/wrappers/atari_preprocessing.py\n--- a/gym/wrappers/atari_preprocessing.py\n+++ b/gym/wrappers/atari_preprocessing.py\n@@ -15,7 +15,6 @@\n Specifically:\n \n * NoopReset: obtain initial state by taking random number of no-ops on reset. \n- * FireReset: take action on reset for environments that are fixed until firing. \n * Frame skipping: 4 by default\n * Max-pooling: most recent two observations\n * Termination signal when a life is lost: turned off by default. Not recommended by Machado et al. (2018).\n@@ -105,12 +104,6 @@\n if done:\n self.env.reset(**kwargs)\n \n- # FireReset\n- action_meanings = self.env.unwrapped.get_action_meanings()\n- if action_meanings[1] == 'FIRE' and len(action_meanings) >= 3:\n- self.env.step(1)\n- self.env.step(2)\n-\n self.lives = self.ale.lives()\n if self.grayscale_obs:\n self.ale.getScreenGrayscale(self.obs_buffer[0])\n", "issue": "Remove FireReset wrapper for atari environments\n\n", "before_files": [{"content": "import numpy as np\n\nimport gym\nfrom gym.spaces import Box\nfrom gym.wrappers import TimeLimit\n\n\nclass AtariPreprocessing(gym.Wrapper):\n r\"\"\"Atari 2600 preprocessings. \n\n This class follows the guidelines in \n Machado et al. (2018), \"Revisiting the Arcade Learning Environment: \n Evaluation Protocols and Open Problems for General Agents\".\n\n Specifically:\n\n * NoopReset: obtain initial state by taking random number of no-ops on reset. \n * FireReset: take action on reset for environments that are fixed until firing. \n * Frame skipping: 4 by default\n * Max-pooling: most recent two observations\n * Termination signal when a life is lost: turned off by default. Not recommended by Machado et al. (2018).\n * Resize to a square image: 84x84 by default\n * Grayscale observation: optional\n * Scale observation: optional\n\n Args:\n env (Env): environment\n noop_max (int): max number of no-ops\n frame_skip (int): the frequency at which the agent experiences the game. \n screen_size (int): resize Atari frame\n terminal_on_life_loss (bool): if True, then step() returns done=True whenever a\n life is lost. \n grayscale_obs (bool): if True, then gray scale observation is returned, otherwise, RGB observation\n is returned.\n scale_obs (bool): if True, then observation normalized in range [0,1] is returned. 
It also limits memory\n optimization benefits of FrameStack Wrapper.\n \"\"\"\n\n def __init__(self, env, noop_max=30, frame_skip=4, screen_size=84, terminal_on_life_loss=False, grayscale_obs=True,\n scale_obs=False):\n super().__init__(env)\n assert frame_skip > 0\n assert screen_size > 0\n\n self.noop_max = noop_max\n assert env.unwrapped.get_action_meanings()[0] == 'NOOP'\n\n self.frame_skip = frame_skip\n self.screen_size = screen_size\n self.terminal_on_life_loss = terminal_on_life_loss\n self.grayscale_obs = grayscale_obs\n self.scale_obs = scale_obs\n\n # buffer of most recent two observations for max pooling\n if grayscale_obs:\n self.obs_buffer = [np.empty(env.observation_space.shape[:2], dtype=np.uint8),\n np.empty(env.observation_space.shape[:2], dtype=np.uint8)]\n else:\n self.obs_buffer = [np.empty(env.observation_space.shape, dtype=np.uint8),\n np.empty(env.observation_space.shape, dtype=np.uint8)]\n\n self.ale = env.unwrapped.ale\n self.lives = 0\n self.game_over = False\n\n _low, _high, _obs_dtype = (0, 255, np.uint8) if not scale_obs else (0, 1, np.float32)\n if grayscale_obs:\n self.observation_space = Box(low=_low, high=_high, shape=(screen_size, screen_size), dtype=_obs_dtype)\n else:\n self.observation_space = Box(low=_low, high=_high, shape=(screen_size, screen_size, 3), dtype=_obs_dtype)\n\n def step(self, action):\n R = 0.0\n\n for t in range(self.frame_skip):\n _, reward, done, info = self.env.step(action)\n R += reward\n self.game_over = done\n\n if self.terminal_on_life_loss:\n new_lives = self.ale.lives()\n done = done or new_lives < self.lives\n self.lives = new_lives\n\n if done:\n break\n if t == self.frame_skip - 2:\n if self.grayscale_obs:\n self.ale.getScreenGrayscale(self.obs_buffer[0])\n else:\n self.ale.getScreenRGB2(self.obs_buffer[0])\n elif t == self.frame_skip - 1:\n if self.grayscale_obs:\n self.ale.getScreenGrayscale(self.obs_buffer[1])\n else:\n self.ale.getScreenRGB2(self.obs_buffer[1])\n return self._get_obs(), R, done, info\n\n def reset(self, **kwargs):\n # NoopReset\n self.env.reset(**kwargs)\n noops = self.env.unwrapped.np_random.randint(1, self.noop_max + 1) if self.noop_max > 0 else 0\n for _ in range(noops):\n _, _, done, _ = self.env.step(0)\n if done:\n self.env.reset(**kwargs)\n\n # FireReset\n action_meanings = self.env.unwrapped.get_action_meanings()\n if action_meanings[1] == 'FIRE' and len(action_meanings) >= 3:\n self.env.step(1)\n self.env.step(2)\n\n self.lives = self.ale.lives()\n if self.grayscale_obs:\n self.ale.getScreenGrayscale(self.obs_buffer[0])\n else:\n self.ale.getScreenRGB2(self.obs_buffer[0])\n self.obs_buffer[1].fill(0)\n return self._get_obs()\n\n def _get_obs(self):\n import cv2\n if self.frame_skip > 1: # more efficient in-place pooling\n np.maximum(self.obs_buffer[0], self.obs_buffer[1], out=self.obs_buffer[0])\n obs = cv2.resize(self.obs_buffer[0], (self.screen_size, self.screen_size), interpolation=cv2.INTER_AREA)\n\n if self.scale_obs:\n obs = np.asarray(obs, dtype=np.float32) / 255.0\n else:\n obs = np.asarray(obs, dtype=np.uint8)\n return obs\n", "path": "gym/wrappers/atari_preprocessing.py"}], "after_files": [{"content": "import numpy as np\n\nimport gym\nfrom gym.spaces import Box\nfrom gym.wrappers import TimeLimit\n\n\nclass AtariPreprocessing(gym.Wrapper):\n r\"\"\"Atari 2600 preprocessings. \n\n This class follows the guidelines in \n Machado et al. 
(2018), \"Revisiting the Arcade Learning Environment: \n Evaluation Protocols and Open Problems for General Agents\".\n\n Specifically:\n\n * NoopReset: obtain initial state by taking random number of no-ops on reset. \n * Frame skipping: 4 by default\n * Max-pooling: most recent two observations\n * Termination signal when a life is lost: turned off by default. Not recommended by Machado et al. (2018).\n * Resize to a square image: 84x84 by default\n * Grayscale observation: optional\n * Scale observation: optional\n\n Args:\n env (Env): environment\n noop_max (int): max number of no-ops\n frame_skip (int): the frequency at which the agent experiences the game. \n screen_size (int): resize Atari frame\n terminal_on_life_loss (bool): if True, then step() returns done=True whenever a\n life is lost. \n grayscale_obs (bool): if True, then gray scale observation is returned, otherwise, RGB observation\n is returned.\n scale_obs (bool): if True, then observation normalized in range [0,1] is returned. It also limits memory\n optimization benefits of FrameStack Wrapper.\n \"\"\"\n\n def __init__(self, env, noop_max=30, frame_skip=4, screen_size=84, terminal_on_life_loss=False, grayscale_obs=True,\n scale_obs=False):\n super().__init__(env)\n assert frame_skip > 0\n assert screen_size > 0\n\n self.noop_max = noop_max\n assert env.unwrapped.get_action_meanings()[0] == 'NOOP'\n\n self.frame_skip = frame_skip\n self.screen_size = screen_size\n self.terminal_on_life_loss = terminal_on_life_loss\n self.grayscale_obs = grayscale_obs\n self.scale_obs = scale_obs\n\n # buffer of most recent two observations for max pooling\n if grayscale_obs:\n self.obs_buffer = [np.empty(env.observation_space.shape[:2], dtype=np.uint8),\n np.empty(env.observation_space.shape[:2], dtype=np.uint8)]\n else:\n self.obs_buffer = [np.empty(env.observation_space.shape, dtype=np.uint8),\n np.empty(env.observation_space.shape, dtype=np.uint8)]\n\n self.ale = env.unwrapped.ale\n self.lives = 0\n self.game_over = False\n\n _low, _high, _obs_dtype = (0, 255, np.uint8) if not scale_obs else (0, 1, np.float32)\n if grayscale_obs:\n self.observation_space = Box(low=_low, high=_high, shape=(screen_size, screen_size), dtype=_obs_dtype)\n else:\n self.observation_space = Box(low=_low, high=_high, shape=(screen_size, screen_size, 3), dtype=_obs_dtype)\n\n def step(self, action):\n R = 0.0\n\n for t in range(self.frame_skip):\n _, reward, done, info = self.env.step(action)\n R += reward\n self.game_over = done\n\n if self.terminal_on_life_loss:\n new_lives = self.ale.lives()\n done = done or new_lives < self.lives\n self.lives = new_lives\n\n if done:\n break\n if t == self.frame_skip - 2:\n if self.grayscale_obs:\n self.ale.getScreenGrayscale(self.obs_buffer[0])\n else:\n self.ale.getScreenRGB2(self.obs_buffer[0])\n elif t == self.frame_skip - 1:\n if self.grayscale_obs:\n self.ale.getScreenGrayscale(self.obs_buffer[1])\n else:\n self.ale.getScreenRGB2(self.obs_buffer[1])\n return self._get_obs(), R, done, info\n\n def reset(self, **kwargs):\n # NoopReset\n self.env.reset(**kwargs)\n noops = self.env.unwrapped.np_random.randint(1, self.noop_max + 1) if self.noop_max > 0 else 0\n for _ in range(noops):\n _, _, done, _ = self.env.step(0)\n if done:\n self.env.reset(**kwargs)\n\n self.lives = self.ale.lives()\n if self.grayscale_obs:\n self.ale.getScreenGrayscale(self.obs_buffer[0])\n else:\n self.ale.getScreenRGB2(self.obs_buffer[0])\n self.obs_buffer[1].fill(0)\n return self._get_obs()\n\n def _get_obs(self):\n import cv2\n if 
self.frame_skip > 1: # more efficient in-place pooling\n np.maximum(self.obs_buffer[0], self.obs_buffer[1], out=self.obs_buffer[0])\n obs = cv2.resize(self.obs_buffer[0], (self.screen_size, self.screen_size), interpolation=cv2.INTER_AREA)\n\n if self.scale_obs:\n obs = np.asarray(obs, dtype=np.float32) / 255.0\n else:\n obs = np.asarray(obs, dtype=np.uint8)\n return obs\n", "path": "gym/wrappers/atari_preprocessing.py"}]} | 1,839 | 281 |
gh_patches_debug_5738 | rasdani/github-patches | git_diff | quantumlib__Cirq-1673 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Two circuit diagram tests that rest in `contrib` are failing on Windows
See: https://travis-ci.com/quantumlib/Cirq/jobs/202641395
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/contrib/paulistring/convert_to_pauli_string_phasors.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional, cast, TYPE_CHECKING
16
17 import numpy as np
18
19 from cirq import ops, optimizers, protocols, linalg
20 from cirq.circuits.circuit import Circuit
21 from cirq.circuits.optimization_pass import (
22 PointOptimizationSummary,
23 PointOptimizer,
24 )
25
26 if TYPE_CHECKING:
27 # pylint: disable=unused-import
28 from typing import List
29
30
31 class ConvertToPauliStringPhasors(PointOptimizer):
32 """Attempts to convert single-qubit gates into single-qubit
33 PauliStringPhasor operations.
34
35 Checks if the operation has a known unitary effect. If so, and the gate is a
36 1-qubit gate, then decomposes it into x, y, or z rotations and creates a
37 PauliStringPhasor for each.
38 """
39
40 def __init__(self,
41 ignore_failures: bool = False,
42 keep_clifford: bool = False,
43 atol: float = 0) -> None:
44 """
45 Args:
46 ignore_failures: If set, gates that fail to convert are forwarded
47 unchanged. If not set, conversion failures raise a TypeError.
48 keep_clifford: If set, single qubit rotations in the Clifford group
49 are converted to SingleQubitCliffordGates.
50 atol: Maximum absolute error tolerance. The optimization is
51 permitted to round angles with a threshold determined by this
52 tolerance.
53 """
54 super().__init__()
55 self.ignore_failures = ignore_failures
56 self.keep_clifford = keep_clifford
57 self.atol = atol
58
59 def _matrix_to_pauli_string_phasors(self,
60 mat: np.ndarray,
61 qubit: ops.Qid) -> ops.OP_TREE:
62 rotations = optimizers.single_qubit_matrix_to_pauli_rotations(
63 mat, self.atol)
64 out_ops = [] # type: List[ops.Operation]
65 for pauli, half_turns in rotations:
66 if (self.keep_clifford
67 and linalg.all_near_zero_mod(half_turns, 0.5)):
68 cliff_gate = ops.SingleQubitCliffordGate.from_quarter_turns(
69 pauli, round(half_turns * 2))
70 if out_ops and not isinstance(out_ops[-1],
71 ops.PauliStringPhasor):
72 op = cast(ops.GateOperation, out_ops[-1])
73 gate = cast(ops.SingleQubitCliffordGate, op.gate)
74 out_ops[-1] = gate.merged_with(cliff_gate)(qubit)
75 else:
76 out_ops.append(
77 cliff_gate(qubit))
78 else:
79 pauli_string = ops.PauliString.from_single(qubit, pauli)
80 out_ops.append(
81 ops.PauliStringPhasor(pauli_string,
82 exponent_neg=round(half_turns, 10)))
83 return out_ops
84
85 def _convert_one(self, op: ops.Operation) -> ops.OP_TREE:
86 # Don't change if it's already a ops.PauliStringPhasor
87 if isinstance(op, ops.PauliStringPhasor):
88 return op
89
90 if (self.keep_clifford
91 and isinstance(op, ops.GateOperation)
92 and isinstance(op.gate, ops.SingleQubitCliffordGate)):
93 return op
94
95 # Single qubit gate with known matrix?
96 if len(op.qubits) == 1:
97 mat = protocols.unitary(op, None)
98 if mat is not None:
99 return self._matrix_to_pauli_string_phasors(mat, op.qubits[0])
100
101 # Just let it be?
102 if self.ignore_failures:
103 return op
104
105 raise TypeError("Don't know how to work with {!r}. "
106 "It isn't a 1-qubit operation with a known unitary "
107 "effect.".format(op))
108
109 def convert(self, op: ops.Operation) -> ops.OP_TREE:
110 converted = self._convert_one(op)
111 if converted is op:
112 return converted
113 return [self.convert(cast(ops.Operation, e))
114 for e in ops.flatten_op_tree(converted)]
115
116 def optimization_at(self, circuit: Circuit, index: int, op: ops.Operation
117 ) -> Optional[PointOptimizationSummary]:
118 converted = self.convert(op)
119 if converted is op:
120 return None
121
122 return PointOptimizationSummary(
123 clear_span=1,
124 new_operations=converted,
125 clear_qubits=op.qubits)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py b/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py
--- a/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py
+++ b/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py
@@ -40,7 +40,7 @@
def __init__(self,
ignore_failures: bool = False,
keep_clifford: bool = False,
- atol: float = 0) -> None:
+ atol: float = 1e-14) -> None:
"""
Args:
ignore_failures: If set, gates that fail to convert are forwarded
| {"golden_diff": "diff --git a/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py b/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py\n--- a/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py\n+++ b/cirq/contrib/paulistring/convert_to_pauli_string_phasors.py\n@@ -40,7 +40,7 @@\n def __init__(self,\n ignore_failures: bool = False,\n keep_clifford: bool = False,\n- atol: float = 0) -> None:\n+ atol: float = 1e-14) -> None:\n \"\"\"\n Args:\n ignore_failures: If set, gates that fail to convert are forwarded\n", "issue": "Two circuit diagram tests that rest in `contrib` are failing on Windows\nSee: https://travis-ci.com/quantumlib/Cirq/jobs/202641395\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, cast, TYPE_CHECKING\n\nimport numpy as np\n\nfrom cirq import ops, optimizers, protocols, linalg\nfrom cirq.circuits.circuit import Circuit\nfrom cirq.circuits.optimization_pass import (\n PointOptimizationSummary,\n PointOptimizer,\n)\n\nif TYPE_CHECKING:\n # pylint: disable=unused-import\n from typing import List\n\n\nclass ConvertToPauliStringPhasors(PointOptimizer):\n \"\"\"Attempts to convert single-qubit gates into single-qubit\n PauliStringPhasor operations.\n\n Checks if the operation has a known unitary effect. If so, and the gate is a\n 1-qubit gate, then decomposes it into x, y, or z rotations and creates a\n PauliStringPhasor for each.\n \"\"\"\n\n def __init__(self,\n ignore_failures: bool = False,\n keep_clifford: bool = False,\n atol: float = 0) -> None:\n \"\"\"\n Args:\n ignore_failures: If set, gates that fail to convert are forwarded\n unchanged. If not set, conversion failures raise a TypeError.\n keep_clifford: If set, single qubit rotations in the Clifford group\n are converted to SingleQubitCliffordGates.\n atol: Maximum absolute error tolerance. 
The optimization is\n permitted to round angles with a threshold determined by this\n tolerance.\n \"\"\"\n super().__init__()\n self.ignore_failures = ignore_failures\n self.keep_clifford = keep_clifford\n self.atol = atol\n\n def _matrix_to_pauli_string_phasors(self,\n mat: np.ndarray,\n qubit: ops.Qid) -> ops.OP_TREE:\n rotations = optimizers.single_qubit_matrix_to_pauli_rotations(\n mat, self.atol)\n out_ops = [] # type: List[ops.Operation]\n for pauli, half_turns in rotations:\n if (self.keep_clifford\n and linalg.all_near_zero_mod(half_turns, 0.5)):\n cliff_gate = ops.SingleQubitCliffordGate.from_quarter_turns(\n pauli, round(half_turns * 2))\n if out_ops and not isinstance(out_ops[-1],\n ops.PauliStringPhasor):\n op = cast(ops.GateOperation, out_ops[-1])\n gate = cast(ops.SingleQubitCliffordGate, op.gate)\n out_ops[-1] = gate.merged_with(cliff_gate)(qubit)\n else:\n out_ops.append(\n cliff_gate(qubit))\n else:\n pauli_string = ops.PauliString.from_single(qubit, pauli)\n out_ops.append(\n ops.PauliStringPhasor(pauli_string,\n exponent_neg=round(half_turns, 10)))\n return out_ops\n\n def _convert_one(self, op: ops.Operation) -> ops.OP_TREE:\n # Don't change if it's already a ops.PauliStringPhasor\n if isinstance(op, ops.PauliStringPhasor):\n return op\n\n if (self.keep_clifford\n and isinstance(op, ops.GateOperation)\n and isinstance(op.gate, ops.SingleQubitCliffordGate)):\n return op\n\n # Single qubit gate with known matrix?\n if len(op.qubits) == 1:\n mat = protocols.unitary(op, None)\n if mat is not None:\n return self._matrix_to_pauli_string_phasors(mat, op.qubits[0])\n\n # Just let it be?\n if self.ignore_failures:\n return op\n\n raise TypeError(\"Don't know how to work with {!r}. \"\n \"It isn't a 1-qubit operation with a known unitary \"\n \"effect.\".format(op))\n\n def convert(self, op: ops.Operation) -> ops.OP_TREE:\n converted = self._convert_one(op)\n if converted is op:\n return converted\n return [self.convert(cast(ops.Operation, e))\n for e in ops.flatten_op_tree(converted)]\n\n def optimization_at(self, circuit: Circuit, index: int, op: ops.Operation\n ) -> Optional[PointOptimizationSummary]:\n converted = self.convert(op)\n if converted is op:\n return None\n\n return PointOptimizationSummary(\n clear_span=1,\n new_operations=converted,\n clear_qubits=op.qubits)\n", "path": "cirq/contrib/paulistring/convert_to_pauli_string_phasors.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, cast, TYPE_CHECKING\n\nimport numpy as np\n\nfrom cirq import ops, optimizers, protocols, linalg\nfrom cirq.circuits.circuit import Circuit\nfrom cirq.circuits.optimization_pass import (\n PointOptimizationSummary,\n PointOptimizer,\n)\n\nif TYPE_CHECKING:\n # pylint: disable=unused-import\n from typing import List\n\n\nclass ConvertToPauliStringPhasors(PointOptimizer):\n \"\"\"Attempts to convert single-qubit gates into single-qubit\n PauliStringPhasor operations.\n\n Checks if the 
operation has a known unitary effect. If so, and the gate is a\n 1-qubit gate, then decomposes it into x, y, or z rotations and creates a\n PauliStringPhasor for each.\n \"\"\"\n\n def __init__(self,\n ignore_failures: bool = False,\n keep_clifford: bool = False,\n atol: float = 1e-14) -> None:\n \"\"\"\n Args:\n ignore_failures: If set, gates that fail to convert are forwarded\n unchanged. If not set, conversion failures raise a TypeError.\n keep_clifford: If set, single qubit rotations in the Clifford group\n are converted to SingleQubitCliffordGates.\n atol: Maximum absolute error tolerance. The optimization is\n permitted to round angles with a threshold determined by this\n tolerance.\n \"\"\"\n super().__init__()\n self.ignore_failures = ignore_failures\n self.keep_clifford = keep_clifford\n self.atol = atol\n\n def _matrix_to_pauli_string_phasors(self,\n mat: np.ndarray,\n qubit: ops.Qid) -> ops.OP_TREE:\n rotations = optimizers.single_qubit_matrix_to_pauli_rotations(\n mat, self.atol)\n out_ops = [] # type: List[ops.Operation]\n for pauli, half_turns in rotations:\n if (self.keep_clifford\n and linalg.all_near_zero_mod(half_turns, 0.5)):\n cliff_gate = ops.SingleQubitCliffordGate.from_quarter_turns(\n pauli, round(half_turns * 2))\n if out_ops and not isinstance(out_ops[-1],\n ops.PauliStringPhasor):\n op = cast(ops.GateOperation, out_ops[-1])\n gate = cast(ops.SingleQubitCliffordGate, op.gate)\n out_ops[-1] = gate.merged_with(cliff_gate)(qubit)\n else:\n out_ops.append(\n cliff_gate(qubit))\n else:\n pauli_string = ops.PauliString.from_single(qubit, pauli)\n out_ops.append(\n ops.PauliStringPhasor(pauli_string,\n exponent_neg=round(half_turns, 10)))\n return out_ops\n\n def _convert_one(self, op: ops.Operation) -> ops.OP_TREE:\n # Don't change if it's already a ops.PauliStringPhasor\n if isinstance(op, ops.PauliStringPhasor):\n return op\n\n if (self.keep_clifford\n and isinstance(op, ops.GateOperation)\n and isinstance(op.gate, ops.SingleQubitCliffordGate)):\n return op\n\n # Single qubit gate with known matrix?\n if len(op.qubits) == 1:\n mat = protocols.unitary(op, None)\n if mat is not None:\n return self._matrix_to_pauli_string_phasors(mat, op.qubits[0])\n\n # Just let it be?\n if self.ignore_failures:\n return op\n\n raise TypeError(\"Don't know how to work with {!r}. \"\n \"It isn't a 1-qubit operation with a known unitary \"\n \"effect.\".format(op))\n\n def convert(self, op: ops.Operation) -> ops.OP_TREE:\n converted = self._convert_one(op)\n if converted is op:\n return converted\n return [self.convert(cast(ops.Operation, e))\n for e in ops.flatten_op_tree(converted)]\n\n def optimization_at(self, circuit: Circuit, index: int, op: ops.Operation\n ) -> Optional[PointOptimizationSummary]:\n converted = self.convert(op)\n if converted is op:\n return None\n\n return PointOptimizationSummary(\n clear_span=1,\n new_operations=converted,\n clear_qubits=op.qubits)\n", "path": "cirq/contrib/paulistring/convert_to_pauli_string_phasors.py"}]} | 1,698 | 171 |
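Note on the patch above: the only change is the default `atol`, from `0` (exact match) to `1e-14`. The underlying pattern is that angles recovered from a unitary are compared against the Clifford grid, and exact float equality is brittle across platforms, which is why the string-based diagram tests diverged on Windows. A minimal standalone sketch of that pattern, using plain Python floats rather than Cirq's decomposition routines (the values are illustrative, not taken from Cirq):

```python
# Exact equality on derived floats is platform- and rounding-dependent;
# a small absolute tolerance absorbs the noise.
half_turns = 0.1 + 0.2                   # nominally 0.3
print(half_turns == 0.3)                 # False: classic double-precision drift
print(abs(half_turns - 0.3) <= 1e-14)    # True: the kind of check the new atol default enables
```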
gh_patches_debug_28145 | rasdani/github-patches | git_diff | dynamiqs__dynamiqs-216 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Propagator solvers are cached on slighlty changing `delta_t`
Both the `sesolve` and `mesolve` propagator solvers are cached on the time step `delta_t` to take, which should be constant for linearly spaced `t_save`. Thus, the propagator should be computed only once. However, due to numerical imprecisions, the `delta_t` changes slightly even when `t_save` is linearly spaced, resulting in frequent recomputations of the same quantity.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dynamiqs/solvers/propagator.py`
Content:
```
1 from abc import abstractmethod
2
3 from torch import Tensor
4
5 from .solver import AutogradSolver
6 from .utils.td_tensor import ConstantTDTensor
7 from .utils.utils import tqdm
8
9
10 class Propagator(AutogradSolver):
11 def __init__(self, *args, **kwargs):
12 super().__init__(*args, **kwargs)
13
14 # check that Hamiltonian is time-independent
15 if not isinstance(self.H, ConstantTDTensor):
16 raise TypeError(
17 'Solver `Propagator` requires a time-independent Hamiltonian.'
18 )
19 self.H = self.H(0.0)
20
21 def run_autograd(self):
22 y, t1 = self.y0, 0.0
23 for t2 in tqdm(self.t_stop.cpu().numpy(), disable=not self.options.verbose):
24 y = self.forward(t1, t2 - t1, y)
25 self.save(y)
26 t1 = t2
27
28 @abstractmethod
29 def forward(self, t: float, delta_t: float, y: Tensor):
30 pass
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dynamiqs/solvers/propagator.py b/dynamiqs/solvers/propagator.py
--- a/dynamiqs/solvers/propagator.py
+++ b/dynamiqs/solvers/propagator.py
@@ -1,5 +1,8 @@
+from __future__ import annotations
+
from abc import abstractmethod
+import numpy as np
from torch import Tensor
from .solver import AutogradSolver
@@ -7,6 +10,19 @@
from .utils.utils import tqdm
+def round_truncate(x: np.float32 | np.float64) -> np.float32 | np.float64:
+ # round a strictly positive-valued float to remove numerical errors, and enable
+ # comparing floats for equality
+
+ # The mantissa of a float32 is stored using 23 bits. The following code rounds and
+ # truncates the float value to the 18 most significant bits of its mantissa. This
+ # removes any numerical error that may have accumulated in the 5 least significant
+ # bits of the mantissa.
+ leading = abs(int(np.log2(x)))
+ keep = leading + 18
+ return (x * 2**keep).round() / 2**keep
+
+
class Propagator(AutogradSolver):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@@ -21,7 +37,10 @@
def run_autograd(self):
y, t1 = self.y0, 0.0
for t2 in tqdm(self.t_stop.cpu().numpy(), disable=not self.options.verbose):
- y = self.forward(t1, t2 - t1, y)
+ if t2 != 0.0:
+ # round time difference to avoid numerical errors when comparing floats
+ delta_t = round_truncate(t2 - t1)
+ y = self.forward(t1, delta_t, y)
self.save(y)
t1 = t2
| {"golden_diff": "diff --git a/dynamiqs/solvers/propagator.py b/dynamiqs/solvers/propagator.py\n--- a/dynamiqs/solvers/propagator.py\n+++ b/dynamiqs/solvers/propagator.py\n@@ -1,5 +1,8 @@\n+from __future__ import annotations\n+\n from abc import abstractmethod\n \n+import numpy as np\n from torch import Tensor\n \n from .solver import AutogradSolver\n@@ -7,6 +10,19 @@\n from .utils.utils import tqdm\n \n \n+def round_truncate(x: np.float32 | np.float64) -> np.float32 | np.float64:\n+ # round a strictly positive-valued float to remove numerical errors, and enable\n+ # comparing floats for equality\n+\n+ # The mantissa of a float32 is stored using 23 bits. The following code rounds and\n+ # truncates the float value to the 18 most significant bits of its mantissa. This\n+ # removes any numerical error that may have accumulated in the 5 least significant\n+ # bits of the mantissa.\n+ leading = abs(int(np.log2(x)))\n+ keep = leading + 18\n+ return (x * 2**keep).round() / 2**keep\n+\n+\n class Propagator(AutogradSolver):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n@@ -21,7 +37,10 @@\n def run_autograd(self):\n y, t1 = self.y0, 0.0\n for t2 in tqdm(self.t_stop.cpu().numpy(), disable=not self.options.verbose):\n- y = self.forward(t1, t2 - t1, y)\n+ if t2 != 0.0:\n+ # round time difference to avoid numerical errors when comparing floats\n+ delta_t = round_truncate(t2 - t1)\n+ y = self.forward(t1, delta_t, y)\n self.save(y)\n t1 = t2\n", "issue": "Propagator solvers are cached on slighlty changing `delta_t`\nBoth the `sesolve` and `mesolve` propagator solvers are cached on the time step `delta_t` to take, which should be constant for linearly spaced `t_save`. Thus, the propagator should be computed only once. However, due to numerical imprecisions, the `delta_t` changes slightly even when `t_save` is linearly spaced, resulting in frequent recomputations of the same quantity.\n", "before_files": [{"content": "from abc import abstractmethod\n\nfrom torch import Tensor\n\nfrom .solver import AutogradSolver\nfrom .utils.td_tensor import ConstantTDTensor\nfrom .utils.utils import tqdm\n\n\nclass Propagator(AutogradSolver):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # check that Hamiltonian is time-independent\n if not isinstance(self.H, ConstantTDTensor):\n raise TypeError(\n 'Solver `Propagator` requires a time-independent Hamiltonian.'\n )\n self.H = self.H(0.0)\n\n def run_autograd(self):\n y, t1 = self.y0, 0.0\n for t2 in tqdm(self.t_stop.cpu().numpy(), disable=not self.options.verbose):\n y = self.forward(t1, t2 - t1, y)\n self.save(y)\n t1 = t2\n\n @abstractmethod\n def forward(self, t: float, delta_t: float, y: Tensor):\n pass\n", "path": "dynamiqs/solvers/propagator.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom abc import abstractmethod\n\nimport numpy as np\nfrom torch import Tensor\n\nfrom .solver import AutogradSolver\nfrom .utils.td_tensor import ConstantTDTensor\nfrom .utils.utils import tqdm\n\n\ndef round_truncate(x: np.float32 | np.float64) -> np.float32 | np.float64:\n # round a strictly positive-valued float to remove numerical errors, and enable\n # comparing floats for equality\n\n # The mantissa of a float32 is stored using 23 bits. The following code rounds and\n # truncates the float value to the 18 most significant bits of its mantissa. 
This\n # removes any numerical error that may have accumulated in the 5 least significant\n # bits of the mantissa.\n leading = abs(int(np.log2(x)))\n keep = leading + 18\n return (x * 2**keep).round() / 2**keep\n\n\nclass Propagator(AutogradSolver):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # check that Hamiltonian is time-independent\n if not isinstance(self.H, ConstantTDTensor):\n raise TypeError(\n 'Solver `Propagator` requires a time-independent Hamiltonian.'\n )\n self.H = self.H(0.0)\n\n def run_autograd(self):\n y, t1 = self.y0, 0.0\n for t2 in tqdm(self.t_stop.cpu().numpy(), disable=not self.options.verbose):\n if t2 != 0.0:\n # round time difference to avoid numerical errors when comparing floats\n delta_t = round_truncate(t2 - t1)\n y = self.forward(t1, delta_t, y)\n self.save(y)\n t1 = t2\n\n @abstractmethod\n def forward(self, t: float, delta_t: float, y: Tensor):\n pass\n", "path": "dynamiqs/solvers/propagator.py"}]} | 649 | 461 |
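For context on the issue above: consecutive differences of a linearly spaced grid are generally not bit-identical, so a cache keyed on `delta_t` misses on almost every step. A short sketch that reuses `round_truncate` from the patch; the grid below is illustrative, and the exact step values may vary slightly between NumPy versions, but the principle is the same:

```python
import numpy as np

t_save = np.linspace(0.0, 1.0, 11)           # a "uniform" grid
deltas = np.diff(t_save)
print(len(set(deltas.tolist())))             # often > 1: the steps differ in their last bits

def round_truncate(x):
    # keep only the 18 most significant mantissa bits, as in the patch
    leading = abs(int(np.log2(x)))
    keep = leading + 18
    return (x * 2**keep).round() / 2**keep

print(len({round_truncate(d) for d in deltas}))   # 1 for this grid: every step maps to one cache key
```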
gh_patches_debug_10 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-770 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remove text from home page
Please remove this text from homepage 'This is an early version of the HDX Repository. Initially, you will be able to find global datasets relevant to humanitarian work as well as local datasets from our three pilot locations - Colombia, Kenya and Yemen. You can also create an account and add your own data to the repository to share privately or publicly. Please have a look around and send us your feedback!' this will be covered in the about page. Not sure if yumi will want to adjusts the centering of the remaining HDX and tagline but we can ask her
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version='v0.2.6'
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version='v0.2.6'
\ No newline at end of file
+hdx_version='v0.3.0'
\ No newline at end of file
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version='v0.2.6'\n\\ No newline at end of file\n+hdx_version='v0.3.0'\n\\ No newline at end of file\n", "issue": "remove text from home page \nPlease remove this text from homepage 'This is an early version of the HDX Repository. Initially, you will be able to find global datasets relevant to humanitarian work as well as local datasets from our three pilot locations - Colombia, Kenya and Yemen. You can also create an account and add your own data to the repository to share privately or publicly. Please have a look around and send us your feedback!' this will be covered in the about page. Not sure if yumi will want to adjusts the centering of the remaining HDX and tagline but we can ask her\n\n", "before_files": [{"content": "hdx_version='v0.2.6'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version='v0.3.0'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 401 | 120 |
gh_patches_debug_3770 | rasdani/github-patches | git_diff | joke2k__faker-1046 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fake ISBN10 causes "Registrant/Publication not found"
In rare cases the `fake.isbn10` method throws an exception with the following message: `Exception: Registrant/Publication not found in registrant rule list. `
A full exception message:
```
/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:70: in isbn10
ean, group, registrant, publication = self._body()
/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:41: in _body
registrant, publication = self._registrant_publication(reg_pub, rules)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
reg_pub = '64799998'
rules = [RegistrantRule(min='0000000', max='1999999', registrant_length=2), RegistrantRule(min='2000000', max='2279999', regis...'6480000', max='6489999', registrant_length=7), RegistrantRule(min='6490000', max='6999999', registrant_length=3), ...]
@staticmethod
def _registrant_publication(reg_pub, rules):
""" Separate the registration from the publication in a given
string.
:param reg_pub: A string of digits representing a registration
and publication.
:param rules: A list of RegistrantRules which designate where
to separate the values in the string.
:returns: A (registrant, publication) tuple of strings.
"""
for rule in rules:
if rule.min <= reg_pub <= rule.max:
reg_len = rule.registrant_length
break
else:
> raise Exception('Registrant/Publication not found in registrant '
'rule list.')
E Exception: Registrant/Publication not found in registrant rule list.
/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:59: Exception
```
### Steps to reproduce
Call `faker.providers.isbn.Provider._registrant_publication` with any of the following values for the `reg_pub` param: `64799998`, `39999999`. These values are valid randomly generated strings from [L34](https://github.com/joke2k/faker/blob/master/faker/providers/isbn/__init__.py#L37).
Code:
```python
from faker.providers.isbn import Provider
from faker.providers.isbn.rules import RULES
# Fails; throws an exception
Provider._registrant_publication('64799998', RULES['978']['0'])
Provider._registrant_publication('39999999', RULES['978']['1'])
# Works; but may be invalid
Provider._registrant_publication('64799998', RULES['978']['1'])
Provider._registrant_publication('39999999', RULES['978']['0'])
```
### Expected behavior
The `faker.providers.isbn.Provider._body` should generate valid `reg_pub` values.
### Actual behavior
It generates values for `reg_pub` that are not accepted by the rules defined in `faker.providers.isbn.rules`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/isbn/__init__.py`
Content:
```
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4 from .. import BaseProvider
5 from .isbn import ISBN, ISBN10, ISBN13
6 from .rules import RULES
7
8
9 class Provider(BaseProvider):
10 """ Generates fake ISBNs. ISBN rules vary across languages/regions
11 so this class makes no attempt at replicating all of the rules. It
12 only replicates the 978 EAN prefix for the English registration
13 groups, meaning the first 4 digits of the ISBN-13 will either be
14 978-0 or 978-1. Since we are only replicating 978 prefixes, every
15 ISBN-13 will have a direct mapping to an ISBN-10.
16
17 See https://www.isbn-international.org/content/what-isbn for the
18 format of ISBNs.
19 See https://www.isbn-international.org/range_file_generation for the
20 list of rules pertaining to each prefix/registration group.
21 """
22
23 def _body(self):
24 """ Generate the information required to create an ISBN-10 or
25 ISBN-13.
26 """
27 ean = self.random_element(RULES.keys())
28 reg_group = self.random_element(RULES[ean].keys())
29
30 # Given the chosen ean/group, decide how long the
31 # registrant/publication string may be.
32 # We must allocate for the calculated check digit, so
33 # subtract 1
34 reg_pub_len = ISBN.MAX_LENGTH - len(ean) - len(reg_group) - 1
35
36 # Generate a registrant/publication combination
37 reg_pub = self.numerify('#' * reg_pub_len)
38
39 # Use rules to separate the registrant from the publication
40 rules = RULES[ean][reg_group]
41 registrant, publication = self._registrant_publication(reg_pub, rules)
42 return [ean, reg_group, registrant, publication]
43
44 @staticmethod
45 def _registrant_publication(reg_pub, rules):
46 """ Separate the registration from the publication in a given
47 string.
48 :param reg_pub: A string of digits representing a registration
49 and publication.
50 :param rules: A list of RegistrantRules which designate where
51 to separate the values in the string.
52 :returns: A (registrant, publication) tuple of strings.
53 """
54 for rule in rules:
55 if rule.min <= reg_pub <= rule.max:
56 reg_len = rule.registrant_length
57 break
58 else:
59 raise Exception('Registrant/Publication not found in registrant '
60 'rule list.')
61 registrant, publication = reg_pub[:reg_len], reg_pub[reg_len:]
62 return registrant, publication
63
64 def isbn13(self, separator='-'):
65 ean, group, registrant, publication = self._body()
66 isbn = ISBN13(ean, group, registrant, publication)
67 return isbn.format(separator)
68
69 def isbn10(self, separator='-'):
70 ean, group, registrant, publication = self._body()
71 isbn = ISBN10(ean, group, registrant, publication)
72 return isbn.format(separator)
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/isbn/__init__.py b/faker/providers/isbn/__init__.py
--- a/faker/providers/isbn/__init__.py
+++ b/faker/providers/isbn/__init__.py
@@ -52,7 +52,7 @@
:returns: A (registrant, publication) tuple of strings.
"""
for rule in rules:
- if rule.min <= reg_pub <= rule.max:
+ if rule.min <= reg_pub[:-1] <= rule.max:
reg_len = rule.registrant_length
break
else:
| {"golden_diff": "diff --git a/faker/providers/isbn/__init__.py b/faker/providers/isbn/__init__.py\n--- a/faker/providers/isbn/__init__.py\n+++ b/faker/providers/isbn/__init__.py\n@@ -52,7 +52,7 @@\n :returns: A (registrant, publication) tuple of strings.\n \"\"\"\n for rule in rules:\n- if rule.min <= reg_pub <= rule.max:\n+ if rule.min <= reg_pub[:-1] <= rule.max:\n reg_len = rule.registrant_length\n break\n else:\n", "issue": "Fake ISBN10 causes \"Registrant/Publication not found\"\nIn rare cases the `fake.isbn10` method throws an exception with the following message: `Exception: Registrant/Publication not found in registrant rule list. `\r\n\r\nA full exception message:\r\n```\r\n/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:70: in isbn10\r\n ean, group, registrant, publication = self._body()\r\n/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:41: in _body\r\n registrant, publication = self._registrant_publication(reg_pub, rules)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nreg_pub = '64799998'\r\nrules = [RegistrantRule(min='0000000', max='1999999', registrant_length=2), RegistrantRule(min='2000000', max='2279999', regis...'6480000', max='6489999', registrant_length=7), RegistrantRule(min='6490000', max='6999999', registrant_length=3), ...]\r\n\r\n @staticmethod\r\n def _registrant_publication(reg_pub, rules):\r\n \"\"\" Separate the registration from the publication in a given\r\n string.\r\n :param reg_pub: A string of digits representing a registration\r\n and publication.\r\n :param rules: A list of RegistrantRules which designate where\r\n to separate the values in the string.\r\n :returns: A (registrant, publication) tuple of strings.\r\n \"\"\"\r\n for rule in rules:\r\n if rule.min <= reg_pub <= rule.max:\r\n reg_len = rule.registrant_length\r\n break\r\n else:\r\n> raise Exception('Registrant/Publication not found in registrant '\r\n 'rule list.')\r\nE Exception: Registrant/Publication not found in registrant rule list.\r\n\r\n/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:59: Exception\r\n```\r\n### Steps to reproduce\r\n\r\nCall `faker.providers.isbn.Provider._registrant_publication` with any of the following values for the `reg_pub` param: `64799998`, `39999999`. These values are valid randomly generated strings from [L34](https://github.com/joke2k/faker/blob/master/faker/providers/isbn/__init__.py#L37).\r\n\r\nCode:\r\n```python\r\nfrom faker.providers.isbn import Provider\r\nfrom faker.providers.isbn.rules import RULES\r\n\r\n# Fails; throws an exception\r\nProvider._registrant_publication('64799998', RULES['978']['0'])\r\nProvider._registrant_publication('39999999', RULES['978']['1'])\r\n\r\n# Works; but may be invalid\r\nProvider._registrant_publication('64799998', RULES['978']['1'])\r\nProvider._registrant_publication('39999999', RULES['978']['0'])\r\n```\r\n\r\n### Expected behavior\r\n\r\nThe `faker.providers.isbn.Provider._body` should generate valid `reg_pub` values.\r\n\r\n### Actual behavior\r\n\r\nIt generates values for `reg_pub` that are not accepted by the rules defined in `faker.providers.isbn.rules`.\r\n\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\nfrom .. import BaseProvider\nfrom .isbn import ISBN, ISBN10, ISBN13\nfrom .rules import RULES\n\n\nclass Provider(BaseProvider):\n \"\"\" Generates fake ISBNs. 
ISBN rules vary across languages/regions\n so this class makes no attempt at replicating all of the rules. It\n only replicates the 978 EAN prefix for the English registration\n groups, meaning the first 4 digits of the ISBN-13 will either be\n 978-0 or 978-1. Since we are only replicating 978 prefixes, every\n ISBN-13 will have a direct mapping to an ISBN-10.\n\n See https://www.isbn-international.org/content/what-isbn for the\n format of ISBNs.\n See https://www.isbn-international.org/range_file_generation for the\n list of rules pertaining to each prefix/registration group.\n \"\"\"\n\n def _body(self):\n \"\"\" Generate the information required to create an ISBN-10 or\n ISBN-13.\n \"\"\"\n ean = self.random_element(RULES.keys())\n reg_group = self.random_element(RULES[ean].keys())\n\n # Given the chosen ean/group, decide how long the\n # registrant/publication string may be.\n # We must allocate for the calculated check digit, so\n # subtract 1\n reg_pub_len = ISBN.MAX_LENGTH - len(ean) - len(reg_group) - 1\n\n # Generate a registrant/publication combination\n reg_pub = self.numerify('#' * reg_pub_len)\n\n # Use rules to separate the registrant from the publication\n rules = RULES[ean][reg_group]\n registrant, publication = self._registrant_publication(reg_pub, rules)\n return [ean, reg_group, registrant, publication]\n\n @staticmethod\n def _registrant_publication(reg_pub, rules):\n \"\"\" Separate the registration from the publication in a given\n string.\n :param reg_pub: A string of digits representing a registration\n and publication.\n :param rules: A list of RegistrantRules which designate where\n to separate the values in the string.\n :returns: A (registrant, publication) tuple of strings.\n \"\"\"\n for rule in rules:\n if rule.min <= reg_pub <= rule.max:\n reg_len = rule.registrant_length\n break\n else:\n raise Exception('Registrant/Publication not found in registrant '\n 'rule list.')\n registrant, publication = reg_pub[:reg_len], reg_pub[reg_len:]\n return registrant, publication\n\n def isbn13(self, separator='-'):\n ean, group, registrant, publication = self._body()\n isbn = ISBN13(ean, group, registrant, publication)\n return isbn.format(separator)\n\n def isbn10(self, separator='-'):\n ean, group, registrant, publication = self._body()\n isbn = ISBN10(ean, group, registrant, publication)\n return isbn.format(separator)\n", "path": "faker/providers/isbn/__init__.py"}], "after_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\nfrom .. import BaseProvider\nfrom .isbn import ISBN, ISBN10, ISBN13\nfrom .rules import RULES\n\n\nclass Provider(BaseProvider):\n \"\"\" Generates fake ISBNs. ISBN rules vary across languages/regions\n so this class makes no attempt at replicating all of the rules. It\n only replicates the 978 EAN prefix for the English registration\n groups, meaning the first 4 digits of the ISBN-13 will either be\n 978-0 or 978-1. 
Since we are only replicating 978 prefixes, every\n ISBN-13 will have a direct mapping to an ISBN-10.\n\n See https://www.isbn-international.org/content/what-isbn for the\n format of ISBNs.\n See https://www.isbn-international.org/range_file_generation for the\n list of rules pertaining to each prefix/registration group.\n \"\"\"\n\n def _body(self):\n \"\"\" Generate the information required to create an ISBN-10 or\n ISBN-13.\n \"\"\"\n ean = self.random_element(RULES.keys())\n reg_group = self.random_element(RULES[ean].keys())\n\n # Given the chosen ean/group, decide how long the\n # registrant/publication string may be.\n # We must allocate for the calculated check digit, so\n # subtract 1\n reg_pub_len = ISBN.MAX_LENGTH - len(ean) - len(reg_group) - 1\n\n # Generate a registrant/publication combination\n reg_pub = self.numerify('#' * reg_pub_len)\n\n # Use rules to separate the registrant from the publication\n rules = RULES[ean][reg_group]\n registrant, publication = self._registrant_publication(reg_pub, rules)\n return [ean, reg_group, registrant, publication]\n\n @staticmethod\n def _registrant_publication(reg_pub, rules):\n \"\"\" Separate the registration from the publication in a given\n string.\n :param reg_pub: A string of digits representing a registration\n and publication.\n :param rules: A list of RegistrantRules which designate where\n to separate the values in the string.\n :returns: A (registrant, publication) tuple of strings.\n \"\"\"\n for rule in rules:\n if rule.min <= reg_pub[:-1] <= rule.max:\n reg_len = rule.registrant_length\n break\n else:\n raise Exception('Registrant/Publication not found in registrant '\n 'rule list.')\n registrant, publication = reg_pub[:reg_len], reg_pub[reg_len:]\n return registrant, publication\n\n def isbn13(self, separator='-'):\n ean, group, registrant, publication = self._body()\n isbn = ISBN13(ean, group, registrant, publication)\n return isbn.format(separator)\n\n def isbn10(self, separator='-'):\n ean, group, registrant, publication = self._body()\n isbn = ISBN10(ean, group, registrant, publication)\n return isbn.format(separator)\n", "path": "faker/providers/isbn/__init__.py"}]} | 1,870 | 126 |
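The one-line fix above works because the rule bounds are 7-digit strings while the generated `reg_pub` is 8 digits, so the lexicographic range checks can miss values that sit right at a rule boundary. A small sketch with the failing value from the issue; only the `6480000`-`6489999` rule is visible in the traceback, so the lower rule's bounds here are an assumption for illustration:

```python
reg_pub = '64799998'                 # 8 digits, taken from the issue

lower = ('6400000', '6479999')       # assumed neighbouring rule (7-digit bounds)
upper = ('6480000', '6489999')       # rule shown in the traceback

print(lower[0] <= reg_pub <= lower[1])   # False: '64799998' > '6479999' (same prefix, but longer)
print(upper[0] <= reg_pub <= upper[1])   # False: '647...' sorts before '648...'

# Dropping the final digit restores a 7-character vs 7-character comparison:
print(lower[0] <= reg_pub[:-1] <= lower[1])   # True: '6479999' falls inside the lower rule
```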
gh_patches_debug_6354 | rasdani/github-patches | git_diff | iterative__dvc-2627 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue with dvc push to AWS s3 remote
**Please provide information about your setup**
DVC version(i.e. `dvc --version`), Platform and method of installation (pip, homebrew, pkg Mac, exe (Windows), DEB(Linux), RPM(Linux))
DVC: 0.62.1
Mac: Mojave 10.13
Install with pip
issue with `dvc push`

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/progress.py`
Content:
```
1 """Manages progress bars for dvc repo."""
2 from __future__ import print_function
3 import logging
4 from tqdm import tqdm
5 from concurrent.futures import ThreadPoolExecutor
6 from funcy import merge
7
8 logger = logging.getLogger(__name__)
9
10
11 class TqdmThreadPoolExecutor(ThreadPoolExecutor):
12 """
13 Ensure worker progressbars are cleared away properly.
14 """
15
16 def __enter__(self):
17 """
18 Creates a blank initial dummy progress bar if needed so that workers
19 are forced to create "nested" bars.
20 """
21 blank_bar = Tqdm(bar_format="Multi-Threaded:", leave=False)
22 if blank_bar.pos > 0:
23 # already nested - don't need a placeholder bar
24 blank_bar.close()
25 self.bar = blank_bar
26 return super(TqdmThreadPoolExecutor, self).__enter__()
27
28 def __exit__(self, *a, **k):
29 super(TqdmThreadPoolExecutor, self).__exit__(*a, **k)
30 self.bar.close()
31
32
33 class Tqdm(tqdm):
34 """
35 maximum-compatibility tqdm-based progressbars
36 """
37
38 BAR_FMT_DEFAULT = (
39 "{percentage:3.0f}%|{bar:10}|"
40 "{desc:{ncols_desc}.{ncols_desc}}{n}/{total}"
41 " [{elapsed}<{remaining}, {rate_fmt:>11}{postfix}]"
42 )
43 BAR_FMT_NOTOTAL = (
44 "{desc:{ncols_desc}.{ncols_desc}}{n}"
45 " [{elapsed}<??:??, {rate_fmt:>11}{postfix}]"
46 )
47
48 def __init__(
49 self,
50 iterable=None,
51 disable=None,
52 level=logging.ERROR,
53 desc=None,
54 leave=False,
55 bar_format=None,
56 bytes=False, # pylint: disable=W0622
57 **kwargs
58 ):
59 """
60 bytes : shortcut for
61 `unit='B', unit_scale=True, unit_divisor=1024, miniters=1`
62 desc : persists after `close()`
63 level : effective logging level for determining `disable`;
64 used only if `disable` is unspecified
65 kwargs : anything accepted by `tqdm.tqdm()`
66 """
67 kwargs = kwargs.copy()
68 kwargs.setdefault("unit_scale", True)
69 if bytes:
70 bytes_defaults = dict(
71 unit="B", unit_scale=True, unit_divisor=1024, miniters=1
72 )
73 kwargs = merge(bytes_defaults, kwargs)
74 self.desc_persist = desc
75 if disable is None:
76 disable = logger.getEffectiveLevel() > level
77 super(Tqdm, self).__init__(
78 iterable=iterable,
79 disable=disable,
80 leave=leave,
81 desc=desc,
82 bar_format="!",
83 **kwargs
84 )
85 if bar_format is None:
86 if self.__len__():
87 self.bar_format = self.BAR_FMT_DEFAULT
88 else:
89 self.bar_format = self.BAR_FMT_NOTOTAL
90 else:
91 self.bar_format = bar_format
92 self.refresh()
93
94 def update_desc(self, desc, n=1):
95 """
96 Calls `set_description_str(desc)` and `update(n)`
97 """
98 self.set_description_str(desc, refresh=False)
99 self.update(n)
100
101 def update_to(self, current, total=None):
102 if total:
103 self.total = total # pylint: disable=W0613,W0201
104 self.update(current - self.n)
105
106 def close(self):
107 if self.desc_persist is not None:
108 self.set_description_str(self.desc_persist, refresh=False)
109 super(Tqdm, self).close()
110
111 @property
112 def format_dict(self):
113 """inject `ncols_desc` to fill the display width (`ncols`)"""
114 d = super(Tqdm, self).format_dict
115 ncols = d["ncols"] or 80
116 ncols_desc = ncols - len(self.format_meter(ncols_desc=1, **d)) + 1
117 d["ncols_desc"] = max(ncols_desc, 0)
118 return d
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/progress.py b/dvc/progress.py
--- a/dvc/progress.py
+++ b/dvc/progress.py
@@ -114,5 +114,11 @@
d = super(Tqdm, self).format_dict
ncols = d["ncols"] or 80
ncols_desc = ncols - len(self.format_meter(ncols_desc=1, **d)) + 1
- d["ncols_desc"] = max(ncols_desc, 0)
+ ncols_desc = max(ncols_desc, 0)
+ if ncols_desc:
+ d["ncols_desc"] = ncols_desc
+ else:
+ # work-around for zero-width desc
+ d["ncols_desc"] = 1
+ d["desc"] = 0
return d
| {"golden_diff": "diff --git a/dvc/progress.py b/dvc/progress.py\n--- a/dvc/progress.py\n+++ b/dvc/progress.py\n@@ -114,5 +114,11 @@\n d = super(Tqdm, self).format_dict\n ncols = d[\"ncols\"] or 80\n ncols_desc = ncols - len(self.format_meter(ncols_desc=1, **d)) + 1\n- d[\"ncols_desc\"] = max(ncols_desc, 0)\n+ ncols_desc = max(ncols_desc, 0)\n+ if ncols_desc:\n+ d[\"ncols_desc\"] = ncols_desc\n+ else:\n+ # work-around for zero-width desc\n+ d[\"ncols_desc\"] = 1\n+ d[\"desc\"] = 0\n return d\n", "issue": "Issue with dvc push to AWS s3 remote\n**Please provide information about your setup**\r\nDVC version(i.e. `dvc --version`), Platform and method of installation (pip, homebrew, pkg Mac, exe (Windows), DEB(Linux), RPM(Linux))\r\n\r\nDVC: 0.62.1\r\nMac: Mojave 10.13\r\nInstall with pip\r\n\r\nissue with `dvc push`\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Manages progress bars for dvc repo.\"\"\"\nfrom __future__ import print_function\nimport logging\nfrom tqdm import tqdm\nfrom concurrent.futures import ThreadPoolExecutor\nfrom funcy import merge\n\nlogger = logging.getLogger(__name__)\n\n\nclass TqdmThreadPoolExecutor(ThreadPoolExecutor):\n \"\"\"\n Ensure worker progressbars are cleared away properly.\n \"\"\"\n\n def __enter__(self):\n \"\"\"\n Creates a blank initial dummy progress bar if needed so that workers\n are forced to create \"nested\" bars.\n \"\"\"\n blank_bar = Tqdm(bar_format=\"Multi-Threaded:\", leave=False)\n if blank_bar.pos > 0:\n # already nested - don't need a placeholder bar\n blank_bar.close()\n self.bar = blank_bar\n return super(TqdmThreadPoolExecutor, self).__enter__()\n\n def __exit__(self, *a, **k):\n super(TqdmThreadPoolExecutor, self).__exit__(*a, **k)\n self.bar.close()\n\n\nclass Tqdm(tqdm):\n \"\"\"\n maximum-compatibility tqdm-based progressbars\n \"\"\"\n\n BAR_FMT_DEFAULT = (\n \"{percentage:3.0f}%|{bar:10}|\"\n \"{desc:{ncols_desc}.{ncols_desc}}{n}/{total}\"\n \" [{elapsed}<{remaining}, {rate_fmt:>11}{postfix}]\"\n )\n BAR_FMT_NOTOTAL = (\n \"{desc:{ncols_desc}.{ncols_desc}}{n}\"\n \" [{elapsed}<??:??, {rate_fmt:>11}{postfix}]\"\n )\n\n def __init__(\n self,\n iterable=None,\n disable=None,\n level=logging.ERROR,\n desc=None,\n leave=False,\n bar_format=None,\n bytes=False, # pylint: disable=W0622\n **kwargs\n ):\n \"\"\"\n bytes : shortcut for\n `unit='B', unit_scale=True, unit_divisor=1024, miniters=1`\n desc : persists after `close()`\n level : effective logging level for determining `disable`;\n used only if `disable` is unspecified\n kwargs : anything accepted by `tqdm.tqdm()`\n \"\"\"\n kwargs = kwargs.copy()\n kwargs.setdefault(\"unit_scale\", True)\n if bytes:\n bytes_defaults = dict(\n unit=\"B\", unit_scale=True, unit_divisor=1024, miniters=1\n )\n kwargs = merge(bytes_defaults, kwargs)\n self.desc_persist = desc\n if disable is None:\n disable = logger.getEffectiveLevel() > level\n super(Tqdm, self).__init__(\n iterable=iterable,\n disable=disable,\n leave=leave,\n desc=desc,\n bar_format=\"!\",\n **kwargs\n )\n if bar_format is None:\n if self.__len__():\n self.bar_format = self.BAR_FMT_DEFAULT\n else:\n self.bar_format = self.BAR_FMT_NOTOTAL\n else:\n self.bar_format = bar_format\n self.refresh()\n\n def update_desc(self, desc, n=1):\n \"\"\"\n Calls `set_description_str(desc)` and `update(n)`\n \"\"\"\n self.set_description_str(desc, refresh=False)\n self.update(n)\n\n def update_to(self, current, total=None):\n if total:\n self.total = total # pylint: disable=W0613,W0201\n self.update(current - self.n)\n\n def 
close(self):\n if self.desc_persist is not None:\n self.set_description_str(self.desc_persist, refresh=False)\n super(Tqdm, self).close()\n\n @property\n def format_dict(self):\n \"\"\"inject `ncols_desc` to fill the display width (`ncols`)\"\"\"\n d = super(Tqdm, self).format_dict\n ncols = d[\"ncols\"] or 80\n ncols_desc = ncols - len(self.format_meter(ncols_desc=1, **d)) + 1\n d[\"ncols_desc\"] = max(ncols_desc, 0)\n return d\n", "path": "dvc/progress.py"}], "after_files": [{"content": "\"\"\"Manages progress bars for dvc repo.\"\"\"\nfrom __future__ import print_function\nimport logging\nfrom tqdm import tqdm\nfrom concurrent.futures import ThreadPoolExecutor\nfrom funcy import merge\n\nlogger = logging.getLogger(__name__)\n\n\nclass TqdmThreadPoolExecutor(ThreadPoolExecutor):\n \"\"\"\n Ensure worker progressbars are cleared away properly.\n \"\"\"\n\n def __enter__(self):\n \"\"\"\n Creates a blank initial dummy progress bar if needed so that workers\n are forced to create \"nested\" bars.\n \"\"\"\n blank_bar = Tqdm(bar_format=\"Multi-Threaded:\", leave=False)\n if blank_bar.pos > 0:\n # already nested - don't need a placeholder bar\n blank_bar.close()\n self.bar = blank_bar\n return super(TqdmThreadPoolExecutor, self).__enter__()\n\n def __exit__(self, *a, **k):\n super(TqdmThreadPoolExecutor, self).__exit__(*a, **k)\n self.bar.close()\n\n\nclass Tqdm(tqdm):\n \"\"\"\n maximum-compatibility tqdm-based progressbars\n \"\"\"\n\n BAR_FMT_DEFAULT = (\n \"{percentage:3.0f}%|{bar:10}|\"\n \"{desc:{ncols_desc}.{ncols_desc}}{n}/{total}\"\n \" [{elapsed}<{remaining}, {rate_fmt:>11}{postfix}]\"\n )\n BAR_FMT_NOTOTAL = (\n \"{desc:{ncols_desc}.{ncols_desc}}{n}\"\n \" [{elapsed}<??:??, {rate_fmt:>11}{postfix}]\"\n )\n\n def __init__(\n self,\n iterable=None,\n disable=None,\n level=logging.ERROR,\n desc=None,\n leave=False,\n bar_format=None,\n bytes=False, # pylint: disable=W0622\n **kwargs\n ):\n \"\"\"\n bytes : shortcut for\n `unit='B', unit_scale=True, unit_divisor=1024, miniters=1`\n desc : persists after `close()`\n level : effective logging level for determining `disable`;\n used only if `disable` is unspecified\n kwargs : anything accepted by `tqdm.tqdm()`\n \"\"\"\n kwargs = kwargs.copy()\n kwargs.setdefault(\"unit_scale\", True)\n if bytes:\n bytes_defaults = dict(\n unit=\"B\", unit_scale=True, unit_divisor=1024, miniters=1\n )\n kwargs = merge(bytes_defaults, kwargs)\n self.desc_persist = desc\n if disable is None:\n disable = logger.getEffectiveLevel() > level\n super(Tqdm, self).__init__(\n iterable=iterable,\n disable=disable,\n leave=leave,\n desc=desc,\n bar_format=\"!\",\n **kwargs\n )\n if bar_format is None:\n if self.__len__():\n self.bar_format = self.BAR_FMT_DEFAULT\n else:\n self.bar_format = self.BAR_FMT_NOTOTAL\n else:\n self.bar_format = bar_format\n self.refresh()\n\n def update_desc(self, desc, n=1):\n \"\"\"\n Calls `set_description_str(desc)` and `update(n)`\n \"\"\"\n self.set_description_str(desc, refresh=False)\n self.update(n)\n\n def update_to(self, current, total=None):\n if total:\n self.total = total # pylint: disable=W0613,W0201\n self.update(current - self.n)\n\n def close(self):\n if self.desc_persist is not None:\n self.set_description_str(self.desc_persist, refresh=False)\n super(Tqdm, self).close()\n\n @property\n def format_dict(self):\n \"\"\"inject `ncols_desc` to fill the display width (`ncols`)\"\"\"\n d = super(Tqdm, self).format_dict\n ncols = d[\"ncols\"] or 80\n ncols_desc = ncols - len(self.format_meter(ncols_desc=1, **d)) + 1\n 
ncols_desc = max(ncols_desc, 0)\n if ncols_desc:\n d[\"ncols_desc\"] = ncols_desc\n else:\n # work-around for zero-width desc\n d[\"ncols_desc\"] = 1\n d[\"desc\"] = 0\n return d\n", "path": "dvc/progress.py"}]} | 1,553 | 184 |
gh_patches_debug_10256 | rasdani/github-patches | git_diff | PaddlePaddle__Paddle2ONNX-12 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix travis-ci problems
Travis-ci always failed
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `variables.py`
Content:
```
1 # Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from onnx import helper, onnx_pb2, TensorProto
16 import paddle.fluid.core as core
17
18
19 def paddle_variable_to_onnx_tensor(paddle_var_name, block):
20 # TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.
21 paddle_var = block.var(paddle_var_name)
22 return helper.make_tensor_value_info(
23 paddle_var_name, PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],
24 paddle_var.shape)
25
26
27 PADDLE_TO_ONNX_DTYPE = {
28 core.VarDesc.VarType.FP32: onnx_pb2.TensorProto.FLOAT,
29 core.VarDesc.VarType.FP64: onnx_pb2.TensorProto.FLOAT16,
30 # '': onnx_pb2.TensorProto.DOUBLE,
31 core.VarDesc.VarType.INT32: onnx_pb2.TensorProto.INT32,
32 core.VarDesc.VarType.INT16: onnx_pb2.TensorProto.INT16,
33 # '': onnx_pb2.TensorProto.INT8,
34 # '': onnx_pb2.TensorProto.UINT8,
35 core.VarDesc.VarType.INT16: onnx_pb2.TensorProto.UINT16,
36 core.VarDesc.VarType.INT64: onnx_pb2.TensorProto.INT64,
37 # '': onnx_pb2.TensorProto.STRING,
38 # '': onnx_pb2.TensorProto.COMPLEX64,
39 # '': onnx_pb2.TensorProto.COMPLEX128,
40 core.VarDesc.VarType.BOOL: onnx_pb2.TensorProto.BOOL
41 }
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/variables.py b/variables.py
--- a/variables.py
+++ b/variables.py
@@ -19,9 +19,9 @@
def paddle_variable_to_onnx_tensor(paddle_var_name, block):
# TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.
paddle_var = block.var(paddle_var_name)
- return helper.make_tensor_value_info(
- paddle_var_name, PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],
- paddle_var.shape)
+ return helper.make_tensor_value_info(paddle_var_name,
+ PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],
+ paddle_var.shape)
PADDLE_TO_ONNX_DTYPE = {
| {"golden_diff": "diff --git a/variables.py b/variables.py\n--- a/variables.py\n+++ b/variables.py\n@@ -19,9 +19,9 @@\n def paddle_variable_to_onnx_tensor(paddle_var_name, block):\n # TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.\n paddle_var = block.var(paddle_var_name)\n- return helper.make_tensor_value_info(\n- paddle_var_name, PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],\n- paddle_var.shape)\n+ return helper.make_tensor_value_info(paddle_var_name,\n+ PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],\n+ paddle_var.shape)\n \n \n PADDLE_TO_ONNX_DTYPE = {\n", "issue": "Fix travis-ci problems\nTravis-ci always failed\n", "before_files": [{"content": "# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom onnx import helper, onnx_pb2, TensorProto\nimport paddle.fluid.core as core\n\n\ndef paddle_variable_to_onnx_tensor(paddle_var_name, block):\n # TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.\n paddle_var = block.var(paddle_var_name)\n return helper.make_tensor_value_info(\n paddle_var_name, PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],\n paddle_var.shape)\n\n\nPADDLE_TO_ONNX_DTYPE = {\n core.VarDesc.VarType.FP32: onnx_pb2.TensorProto.FLOAT,\n core.VarDesc.VarType.FP64: onnx_pb2.TensorProto.FLOAT16,\n # '': onnx_pb2.TensorProto.DOUBLE,\n core.VarDesc.VarType.INT32: onnx_pb2.TensorProto.INT32,\n core.VarDesc.VarType.INT16: onnx_pb2.TensorProto.INT16,\n # '': onnx_pb2.TensorProto.INT8,\n # '': onnx_pb2.TensorProto.UINT8,\n core.VarDesc.VarType.INT16: onnx_pb2.TensorProto.UINT16,\n core.VarDesc.VarType.INT64: onnx_pb2.TensorProto.INT64,\n # '': onnx_pb2.TensorProto.STRING,\n # '': onnx_pb2.TensorProto.COMPLEX64,\n # '': onnx_pb2.TensorProto.COMPLEX128,\n core.VarDesc.VarType.BOOL: onnx_pb2.TensorProto.BOOL\n}\n", "path": "variables.py"}], "after_files": [{"content": "# Copyright (c) 2018 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom onnx import helper, onnx_pb2, TensorProto\nimport paddle.fluid.core as core\n\n\ndef paddle_variable_to_onnx_tensor(paddle_var_name, block):\n # TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.\n paddle_var = block.var(paddle_var_name)\n return helper.make_tensor_value_info(paddle_var_name,\n PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],\n paddle_var.shape)\n\n\nPADDLE_TO_ONNX_DTYPE = {\n core.VarDesc.VarType.FP32: onnx_pb2.TensorProto.FLOAT,\n core.VarDesc.VarType.FP64: onnx_pb2.TensorProto.FLOAT16,\n # '': onnx_pb2.TensorProto.DOUBLE,\n core.VarDesc.VarType.INT32: onnx_pb2.TensorProto.INT32,\n core.VarDesc.VarType.INT16: onnx_pb2.TensorProto.INT16,\n # '': onnx_pb2.TensorProto.INT8,\n # '': onnx_pb2.TensorProto.UINT8,\n core.VarDesc.VarType.INT16: onnx_pb2.TensorProto.UINT16,\n core.VarDesc.VarType.INT64: onnx_pb2.TensorProto.INT64,\n # '': onnx_pb2.TensorProto.STRING,\n # '': onnx_pb2.TensorProto.COMPLEX64,\n # '': onnx_pb2.TensorProto.COMPLEX128,\n core.VarDesc.VarType.BOOL: onnx_pb2.TensorProto.BOOL\n}\n", "path": "variables.py"}]} | 822 | 163 |
gh_patches_debug_10950 | rasdani/github-patches | git_diff | chainer__chainer-2329 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove chainer.functions.caffe.CaffeFunction
This is left for backward compatibility.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/caffe/__init__.py`
Content:
```
1 from chainer.links.caffe import caffe_function
2
3
4 # for backward compatibility
5 CaffeFunction = caffe_function.CaffeFunction
6
```
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from setuptools import setup
4
5
6 setup_requires = []
7 install_requires = [
8 'filelock',
9 'nose',
10 'numpy>=1.9.0',
11 'protobuf',
12 'six>=1.9.0',
13 ]
14
15 setup(
16 name='chainer',
17 version='2.0.0a1',
18 description='A flexible framework of neural networks',
19 author='Seiya Tokui',
20 author_email='[email protected]',
21 url='http://chainer.org/',
22 license='MIT License',
23 packages=['chainer',
24 'chainer.dataset',
25 'chainer.datasets',
26 'chainer.functions',
27 'chainer.functions.activation',
28 'chainer.functions.array',
29 'chainer.functions.caffe',
30 'chainer.functions.connection',
31 'chainer.functions.evaluation',
32 'chainer.functions.loss',
33 'chainer.functions.math',
34 'chainer.functions.noise',
35 'chainer.functions.normalization',
36 'chainer.functions.pooling',
37 'chainer.functions.theano',
38 'chainer.functions.util',
39 'chainer.function_hooks',
40 'chainer.iterators',
41 'chainer.initializers',
42 'chainer.links',
43 'chainer.links.activation',
44 'chainer.links.caffe',
45 'chainer.links.caffe.protobuf2',
46 'chainer.links.caffe.protobuf3',
47 'chainer.links.connection',
48 'chainer.links.loss',
49 'chainer.links.model',
50 'chainer.links.model.vision',
51 'chainer.links.normalization',
52 'chainer.links.theano',
53 'chainer.optimizers',
54 'chainer.serializers',
55 'chainer.testing',
56 'chainer.training',
57 'chainer.training.extensions',
58 'chainer.training.triggers',
59 'chainer.utils'],
60 zip_safe=False,
61 setup_requires=setup_requires,
62 install_requires=install_requires,
63 tests_require=['mock',
64 'nose'],
65 )
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/functions/caffe/__init__.py b/chainer/functions/caffe/__init__.py
deleted file mode 100644
--- a/chainer/functions/caffe/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from chainer.links.caffe import caffe_function
-
-
-# for backward compatibility
-CaffeFunction = caffe_function.CaffeFunction
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,6 @@
'chainer.functions',
'chainer.functions.activation',
'chainer.functions.array',
- 'chainer.functions.caffe',
'chainer.functions.connection',
'chainer.functions.evaluation',
'chainer.functions.loss',
| {"golden_diff": "diff --git a/chainer/functions/caffe/__init__.py b/chainer/functions/caffe/__init__.py\ndeleted file mode 100644\n--- a/chainer/functions/caffe/__init__.py\n+++ /dev/null\n@@ -1,5 +0,0 @@\n-from chainer.links.caffe import caffe_function\n-\n-\n-# for backward compatibility\n-CaffeFunction = caffe_function.CaffeFunction\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,6 @@\n 'chainer.functions',\n 'chainer.functions.activation',\n 'chainer.functions.array',\n- 'chainer.functions.caffe',\n 'chainer.functions.connection',\n 'chainer.functions.evaluation',\n 'chainer.functions.loss',\n", "issue": "Remove chainer.functions.caffe.CaffeFunction\nThis is left for backward compatibility.\n", "before_files": [{"content": "from chainer.links.caffe import caffe_function\n\n\n# for backward compatibility\nCaffeFunction = caffe_function.CaffeFunction\n", "path": "chainer/functions/caffe/__init__.py"}, {"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\n\nsetup_requires = []\ninstall_requires = [\n 'filelock',\n 'nose',\n 'numpy>=1.9.0',\n 'protobuf',\n 'six>=1.9.0',\n]\n\nsetup(\n name='chainer',\n version='2.0.0a1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='http://chainer.org/',\n license='MIT License',\n packages=['chainer',\n 'chainer.dataset',\n 'chainer.datasets',\n 'chainer.functions',\n 'chainer.functions.activation',\n 'chainer.functions.array',\n 'chainer.functions.caffe',\n 'chainer.functions.connection',\n 'chainer.functions.evaluation',\n 'chainer.functions.loss',\n 'chainer.functions.math',\n 'chainer.functions.noise',\n 'chainer.functions.normalization',\n 'chainer.functions.pooling',\n 'chainer.functions.theano',\n 'chainer.functions.util',\n 'chainer.function_hooks',\n 'chainer.iterators',\n 'chainer.initializers',\n 'chainer.links',\n 'chainer.links.activation',\n 'chainer.links.caffe',\n 'chainer.links.caffe.protobuf2',\n 'chainer.links.caffe.protobuf3',\n 'chainer.links.connection',\n 'chainer.links.loss',\n 'chainer.links.model',\n 'chainer.links.model.vision',\n 'chainer.links.normalization',\n 'chainer.links.theano',\n 'chainer.optimizers',\n 'chainer.serializers',\n 'chainer.testing',\n 'chainer.training',\n 'chainer.training.extensions',\n 'chainer.training.triggers',\n 'chainer.utils'],\n zip_safe=False,\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=['mock',\n 'nose'],\n)\n", "path": "setup.py"}], "after_files": [{"content": null, "path": "chainer/functions/caffe/__init__.py"}, {"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\n\nsetup_requires = []\ninstall_requires = [\n 'filelock',\n 'nose',\n 'numpy>=1.9.0',\n 'protobuf',\n 'six>=1.9.0',\n]\n\nsetup(\n name='chainer',\n version='2.0.0a1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='http://chainer.org/',\n license='MIT License',\n packages=['chainer',\n 'chainer.dataset',\n 'chainer.datasets',\n 'chainer.functions',\n 'chainer.functions.activation',\n 'chainer.functions.array',\n 'chainer.functions.connection',\n 'chainer.functions.evaluation',\n 'chainer.functions.loss',\n 'chainer.functions.math',\n 'chainer.functions.noise',\n 'chainer.functions.normalization',\n 'chainer.functions.pooling',\n 'chainer.functions.theano',\n 'chainer.functions.util',\n 'chainer.function_hooks',\n 'chainer.iterators',\n 'chainer.initializers',\n 
'chainer.links',\n 'chainer.links.activation',\n 'chainer.links.caffe',\n 'chainer.links.caffe.protobuf2',\n 'chainer.links.caffe.protobuf3',\n 'chainer.links.connection',\n 'chainer.links.loss',\n 'chainer.links.model',\n 'chainer.links.model.vision',\n 'chainer.links.normalization',\n 'chainer.links.theano',\n 'chainer.optimizers',\n 'chainer.serializers',\n 'chainer.testing',\n 'chainer.training',\n 'chainer.training.extensions',\n 'chainer.training.triggers',\n 'chainer.utils'],\n zip_safe=False,\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=['mock',\n 'nose'],\n)\n", "path": "setup.py"}]} | 870 | 173 |
gh_patches_debug_11020 | rasdani/github-patches | git_diff | goauthentik__authentik-6809 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ldap_sync_single ignores "ldap.task_timeout_hours" settings
**Describe the bug**
The "ldap_sync_single" task is ignoring the "ldap.task_timeout_hours" setting as set with the `AUTHENTIK_LDAP__TASK_TIMEOUT_HOURS` environment variable.
**To Reproduce**
Steps to reproduce the behavior:
1. configure AUTHENTIK_LDAP__TASK_TIMEOUT_HOURS to be too short to synchronize a target ldap source
2. configure an LDAP source
3. on the LDAP source details page, click on "Run sync now"
4. wait 10 minutes
**Expected behavior**
The task is given the specified amount of time and not cancelled after 10 minutes.
**Screenshots**

**Logs**
Output of docker-compose logs or kubectl logs respectively
**Version and Deployment (please complete the following information):**
- authentik version: [e.g. [2023.8.2](https://goauthentik.io/docs/releases/2023.8)]
- Deployment: docker-compose
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/sources/ldap/tasks.py`
Content:
```
1 """LDAP Sync tasks"""
2 from uuid import uuid4
3
4 from celery import chain, group
5 from django.core.cache import cache
6 from ldap3.core.exceptions import LDAPException
7 from structlog.stdlib import get_logger
8
9 from authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus
10 from authentik.lib.config import CONFIG
11 from authentik.lib.utils.errors import exception_to_string
12 from authentik.lib.utils.reflection import class_to_path, path_to_class
13 from authentik.root.celery import CELERY_APP
14 from authentik.sources.ldap.models import LDAPSource
15 from authentik.sources.ldap.sync.base import BaseLDAPSynchronizer
16 from authentik.sources.ldap.sync.groups import GroupLDAPSynchronizer
17 from authentik.sources.ldap.sync.membership import MembershipLDAPSynchronizer
18 from authentik.sources.ldap.sync.users import UserLDAPSynchronizer
19
20 LOGGER = get_logger()
21 SYNC_CLASSES = [
22 UserLDAPSynchronizer,
23 GroupLDAPSynchronizer,
24 MembershipLDAPSynchronizer,
25 ]
26 CACHE_KEY_PREFIX = "goauthentik.io/sources/ldap/page/"
27
28
29 @CELERY_APP.task()
30 def ldap_sync_all():
31 """Sync all sources"""
32 for source in LDAPSource.objects.filter(enabled=True):
33 ldap_sync_single(source.pk)
34
35
36 @CELERY_APP.task()
37 def ldap_sync_single(source_pk: str):
38 """Sync a single source"""
39 source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()
40 if not source:
41 return
42 task = chain(
43 # User and group sync can happen at once, they have no dependencies on each other
44 group(
45 ldap_sync_paginator(source, UserLDAPSynchronizer)
46 + ldap_sync_paginator(source, GroupLDAPSynchronizer),
47 ),
48 # Membership sync needs to run afterwards
49 group(
50 ldap_sync_paginator(source, MembershipLDAPSynchronizer),
51 ),
52 )
53 task()
54
55
56 def ldap_sync_paginator(source: LDAPSource, sync: type[BaseLDAPSynchronizer]) -> list:
57 """Return a list of task signatures with LDAP pagination data"""
58 sync_inst: BaseLDAPSynchronizer = sync(source)
59 signatures = []
60 for page in sync_inst.get_objects():
61 page_cache_key = CACHE_KEY_PREFIX + str(uuid4())
62 cache.set(page_cache_key, page, 60 * 60 * CONFIG.get_int("ldap.task_timeout_hours"))
63 page_sync = ldap_sync.si(source.pk, class_to_path(sync), page_cache_key)
64 signatures.append(page_sync)
65 return signatures
66
67
68 @CELERY_APP.task(
69 bind=True,
70 base=MonitoredTask,
71 soft_time_limit=60 * 60 * CONFIG.get_int("ldap.task_timeout_hours"),
72 task_time_limit=60 * 60 * CONFIG.get_int("ldap.task_timeout_hours"),
73 )
74 def ldap_sync(self: MonitoredTask, source_pk: str, sync_class: str, page_cache_key: str):
75 """Synchronization of an LDAP Source"""
76 self.result_timeout_hours = CONFIG.get_int("ldap.task_timeout_hours")
77 source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()
78 if not source:
79 # Because the source couldn't be found, we don't have a UID
80 # to set the state with
81 return
82 sync: type[BaseLDAPSynchronizer] = path_to_class(sync_class)
83 uid = page_cache_key.replace(CACHE_KEY_PREFIX, "")
84 self.set_uid(f"{source.slug}:{sync.name()}:{uid}")
85 try:
86 sync_inst: BaseLDAPSynchronizer = sync(source)
87 page = cache.get(page_cache_key)
88 if not page:
89 error_message = (
90 f"Could not find page in cache: {page_cache_key}. "
91 + "Try increasing ldap.task_timeout_hours"
92 )
93 LOGGER.warning(error_message)
94 self.set_status(TaskResult(TaskResultStatus.ERROR, [error_message]))
95 return
96 cache.touch(page_cache_key)
97 count = sync_inst.sync(page)
98 messages = sync_inst.messages
99 messages.append(f"Synced {count} objects.")
100 self.set_status(
101 TaskResult(
102 TaskResultStatus.SUCCESSFUL,
103 messages,
104 )
105 )
106 cache.delete(page_cache_key)
107 except LDAPException as exc:
108 # No explicit event is created here as .set_status with an error will do that
109 LOGGER.warning(exception_to_string(exc))
110 self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/sources/ldap/tasks.py b/authentik/sources/ldap/tasks.py
--- a/authentik/sources/ldap/tasks.py
+++ b/authentik/sources/ldap/tasks.py
@@ -33,7 +33,13 @@
ldap_sync_single(source.pk)
-@CELERY_APP.task()
+@CELERY_APP.task(
+ # We take the configured hours timeout time by 2.5 as we run user and
+ # group in parallel and then membership, so 2x is to cover the serial tasks,
+ # and 0.5x on top of that to give some more leeway
+ soft_time_limit=(60 * 60 * CONFIG.get_int("ldap.task_timeout_hours")) * 2.5,
+ task_time_limit=(60 * 60 * CONFIG.get_int("ldap.task_timeout_hours")) * 2.5,
+)
def ldap_sync_single(source_pk: str):
"""Sync a single source"""
source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()
| {"golden_diff": "diff --git a/authentik/sources/ldap/tasks.py b/authentik/sources/ldap/tasks.py\n--- a/authentik/sources/ldap/tasks.py\n+++ b/authentik/sources/ldap/tasks.py\n@@ -33,7 +33,13 @@\n ldap_sync_single(source.pk)\n \n \n-@CELERY_APP.task()\n+@CELERY_APP.task(\n+ # We take the configured hours timeout time by 2.5 as we run user and\n+ # group in parallel and then membership, so 2x is to cover the serial tasks,\n+ # and 0.5x on top of that to give some more leeway\n+ soft_time_limit=(60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\")) * 2.5,\n+ task_time_limit=(60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\")) * 2.5,\n+)\n def ldap_sync_single(source_pk: str):\n \"\"\"Sync a single source\"\"\"\n source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()\n", "issue": "ldap_sync_single ignores \"ldap.task_timeout_hours\" settings\n**Describe the bug**\r\nThe \"ldap_sync_single\" task is ignoring the \"ldap.task_timeout_hours\" setting as set with the `AUTHENTIK_LDAP__TASK_TIMEOUT_HOURS` environment variable.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. configure AUTHENTIK_LDAP__TASK_TIMEOUT_HOURS to be too short to synchronize a target ldap source\r\n2. configure an LDAP source\r\n3. on the LDAP source details page, click on \"Run sync now\"\r\n4. wait 10 minutes\r\n\r\n**Expected behavior**\r\nThe task is given the specified amount of time and not cancelled after 10 minutes.\r\n\r\n**Screenshots**\r\n\r\n\r\n**Logs**\r\nOutput of docker-compose logs or kubectl logs respectively\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: [e.g. [2023.8.2](https://goauthentik.io/docs/releases/2023.8)]\r\n- Deployment: docker-compose\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"LDAP Sync tasks\"\"\"\nfrom uuid import uuid4\n\nfrom celery import chain, group\nfrom django.core.cache import cache\nfrom ldap3.core.exceptions import LDAPException\nfrom structlog.stdlib import get_logger\n\nfrom authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus\nfrom authentik.lib.config import CONFIG\nfrom authentik.lib.utils.errors import exception_to_string\nfrom authentik.lib.utils.reflection import class_to_path, path_to_class\nfrom authentik.root.celery import CELERY_APP\nfrom authentik.sources.ldap.models import LDAPSource\nfrom authentik.sources.ldap.sync.base import BaseLDAPSynchronizer\nfrom authentik.sources.ldap.sync.groups import GroupLDAPSynchronizer\nfrom authentik.sources.ldap.sync.membership import MembershipLDAPSynchronizer\nfrom authentik.sources.ldap.sync.users import UserLDAPSynchronizer\n\nLOGGER = get_logger()\nSYNC_CLASSES = [\n UserLDAPSynchronizer,\n GroupLDAPSynchronizer,\n MembershipLDAPSynchronizer,\n]\nCACHE_KEY_PREFIX = \"goauthentik.io/sources/ldap/page/\"\n\n\n@CELERY_APP.task()\ndef ldap_sync_all():\n \"\"\"Sync all sources\"\"\"\n for source in LDAPSource.objects.filter(enabled=True):\n ldap_sync_single(source.pk)\n\n\n@CELERY_APP.task()\ndef ldap_sync_single(source_pk: str):\n \"\"\"Sync a single source\"\"\"\n source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()\n if not source:\n return\n task = chain(\n # User and group sync can happen at once, they have no dependencies on each other\n group(\n ldap_sync_paginator(source, UserLDAPSynchronizer)\n + ldap_sync_paginator(source, GroupLDAPSynchronizer),\n ),\n # Membership sync needs to run afterwards\n group(\n ldap_sync_paginator(source, MembershipLDAPSynchronizer),\n ),\n 
)\n task()\n\n\ndef ldap_sync_paginator(source: LDAPSource, sync: type[BaseLDAPSynchronizer]) -> list:\n \"\"\"Return a list of task signatures with LDAP pagination data\"\"\"\n sync_inst: BaseLDAPSynchronizer = sync(source)\n signatures = []\n for page in sync_inst.get_objects():\n page_cache_key = CACHE_KEY_PREFIX + str(uuid4())\n cache.set(page_cache_key, page, 60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\"))\n page_sync = ldap_sync.si(source.pk, class_to_path(sync), page_cache_key)\n signatures.append(page_sync)\n return signatures\n\n\n@CELERY_APP.task(\n bind=True,\n base=MonitoredTask,\n soft_time_limit=60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\"),\n task_time_limit=60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\"),\n)\ndef ldap_sync(self: MonitoredTask, source_pk: str, sync_class: str, page_cache_key: str):\n \"\"\"Synchronization of an LDAP Source\"\"\"\n self.result_timeout_hours = CONFIG.get_int(\"ldap.task_timeout_hours\")\n source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()\n if not source:\n # Because the source couldn't be found, we don't have a UID\n # to set the state with\n return\n sync: type[BaseLDAPSynchronizer] = path_to_class(sync_class)\n uid = page_cache_key.replace(CACHE_KEY_PREFIX, \"\")\n self.set_uid(f\"{source.slug}:{sync.name()}:{uid}\")\n try:\n sync_inst: BaseLDAPSynchronizer = sync(source)\n page = cache.get(page_cache_key)\n if not page:\n error_message = (\n f\"Could not find page in cache: {page_cache_key}. \"\n + \"Try increasing ldap.task_timeout_hours\"\n )\n LOGGER.warning(error_message)\n self.set_status(TaskResult(TaskResultStatus.ERROR, [error_message]))\n return\n cache.touch(page_cache_key)\n count = sync_inst.sync(page)\n messages = sync_inst.messages\n messages.append(f\"Synced {count} objects.\")\n self.set_status(\n TaskResult(\n TaskResultStatus.SUCCESSFUL,\n messages,\n )\n )\n cache.delete(page_cache_key)\n except LDAPException as exc:\n # No explicit event is created here as .set_status with an error will do that\n LOGGER.warning(exception_to_string(exc))\n self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))\n", "path": "authentik/sources/ldap/tasks.py"}], "after_files": [{"content": "\"\"\"LDAP Sync tasks\"\"\"\nfrom uuid import uuid4\n\nfrom celery import chain, group\nfrom django.core.cache import cache\nfrom ldap3.core.exceptions import LDAPException\nfrom structlog.stdlib import get_logger\n\nfrom authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus\nfrom authentik.lib.config import CONFIG\nfrom authentik.lib.utils.errors import exception_to_string\nfrom authentik.lib.utils.reflection import class_to_path, path_to_class\nfrom authentik.root.celery import CELERY_APP\nfrom authentik.sources.ldap.models import LDAPSource\nfrom authentik.sources.ldap.sync.base import BaseLDAPSynchronizer\nfrom authentik.sources.ldap.sync.groups import GroupLDAPSynchronizer\nfrom authentik.sources.ldap.sync.membership import MembershipLDAPSynchronizer\nfrom authentik.sources.ldap.sync.users import UserLDAPSynchronizer\n\nLOGGER = get_logger()\nSYNC_CLASSES = [\n UserLDAPSynchronizer,\n GroupLDAPSynchronizer,\n MembershipLDAPSynchronizer,\n]\nCACHE_KEY_PREFIX = \"goauthentik.io/sources/ldap/page/\"\n\n\n@CELERY_APP.task()\ndef ldap_sync_all():\n \"\"\"Sync all sources\"\"\"\n for source in LDAPSource.objects.filter(enabled=True):\n ldap_sync_single(source.pk)\n\n\n@CELERY_APP.task(\n # We take the configured hours timeout time by 2.5 as we run user and\n # group in 
parallel and then membership, so 2x is to cover the serial tasks,\n # and 0.5x on top of that to give some more leeway\n soft_time_limit=(60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\")) * 2.5,\n task_time_limit=(60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\")) * 2.5,\n)\ndef ldap_sync_single(source_pk: str):\n \"\"\"Sync a single source\"\"\"\n source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()\n if not source:\n return\n task = chain(\n # User and group sync can happen at once, they have no dependencies on each other\n group(\n ldap_sync_paginator(source, UserLDAPSynchronizer)\n + ldap_sync_paginator(source, GroupLDAPSynchronizer),\n ),\n # Membership sync needs to run afterwards\n group(\n ldap_sync_paginator(source, MembershipLDAPSynchronizer),\n ),\n )\n task()\n\n\ndef ldap_sync_paginator(source: LDAPSource, sync: type[BaseLDAPSynchronizer]) -> list:\n \"\"\"Return a list of task signatures with LDAP pagination data\"\"\"\n sync_inst: BaseLDAPSynchronizer = sync(source)\n signatures = []\n for page in sync_inst.get_objects():\n page_cache_key = CACHE_KEY_PREFIX + str(uuid4())\n cache.set(page_cache_key, page, 60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\"))\n page_sync = ldap_sync.si(source.pk, class_to_path(sync), page_cache_key)\n signatures.append(page_sync)\n return signatures\n\n\n@CELERY_APP.task(\n bind=True,\n base=MonitoredTask,\n soft_time_limit=60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\"),\n task_time_limit=60 * 60 * CONFIG.get_int(\"ldap.task_timeout_hours\"),\n)\ndef ldap_sync(self: MonitoredTask, source_pk: str, sync_class: str, page_cache_key: str):\n \"\"\"Synchronization of an LDAP Source\"\"\"\n self.result_timeout_hours = CONFIG.get_int(\"ldap.task_timeout_hours\")\n source: LDAPSource = LDAPSource.objects.filter(pk=source_pk).first()\n if not source:\n # Because the source couldn't be found, we don't have a UID\n # to set the state with\n return\n sync: type[BaseLDAPSynchronizer] = path_to_class(sync_class)\n uid = page_cache_key.replace(CACHE_KEY_PREFIX, \"\")\n self.set_uid(f\"{source.slug}:{sync.name()}:{uid}\")\n try:\n sync_inst: BaseLDAPSynchronizer = sync(source)\n page = cache.get(page_cache_key)\n if not page:\n error_message = (\n f\"Could not find page in cache: {page_cache_key}. \"\n + \"Try increasing ldap.task_timeout_hours\"\n )\n LOGGER.warning(error_message)\n self.set_status(TaskResult(TaskResultStatus.ERROR, [error_message]))\n return\n cache.touch(page_cache_key)\n count = sync_inst.sync(page)\n messages = sync_inst.messages\n messages.append(f\"Synced {count} objects.\")\n self.set_status(\n TaskResult(\n TaskResultStatus.SUCCESSFUL,\n messages,\n )\n )\n cache.delete(page_cache_key)\n except LDAPException as exc:\n # No explicit event is created here as .set_status with an error will do that\n LOGGER.warning(exception_to_string(exc))\n self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))\n", "path": "authentik/sources/ldap/tasks.py"}]} | 1,717 | 232 |
gh_patches_debug_6460 | rasdani/github-patches | git_diff | open-mmlab__mmpose-293 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pylint: W0223
```bash
mmpose/models/detectors/bottom_up.py:19:0: W0223: Method 'simple_test' is abstract in class 'BasePose' but is not overridden (abstract-method)
mmpose/models/detectors/top_down.py:18:0: W0223: Method 'simple_test' is abstract in class 'BasePose' but is not overridden (abstract-method)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmpose/models/detectors/base.py`
Content:
```
1 from abc import ABCMeta, abstractmethod
2 from collections import OrderedDict
3
4 import torch
5 import torch.distributed as dist
6 import torch.nn as nn
7
8
9 class BasePose(nn.Module):
10 """Base class for pose detectors.
11
12 All recognizers should subclass it.
13 All subclass should overwrite:
14 Methods:`forward_train`, supporting to forward when training.
15 Methods:`forward_test`, supporting to forward when testing.
16
17 Args:
18 backbone (dict): Backbone modules to extract feature.
19 head (dict): Head modules to give output.
20 train_cfg (dict): Config for training. Default: None.
21 test_cfg (dict): Config for testing. Default: None.
22 """
23
24 __metaclass__ = ABCMeta
25
26 @abstractmethod
27 def forward_train(self, img, img_metas, **kwargs):
28 """Defines the computation performed at training."""
29
30 @abstractmethod
31 def forward_test(self, img, img_metas, **kwargs):
32 """Defines the computation performed at testing."""
33
34 @abstractmethod
35 def simple_test(self, img, img_metas, **kwargs):
36 """Simple test function."""
37
38 @abstractmethod
39 def forward(self, img, img_metas, return_loss=True, **kwargs):
40 """Forward function."""
41
42 @staticmethod
43 def _parse_losses(losses):
44 """Parse the raw outputs (losses) of the network.
45
46 Args:
47 losses (dict): Raw output of the network, which usually contain
48 losses and other necessary information.
49
50 Returns:
51 tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor
52 which may be a weighted sum of all losses, log_vars contains
53 all the variables to be sent to the logger.
54 """
55 log_vars = OrderedDict()
56 for loss_name, loss_value in losses.items():
57 if isinstance(loss_value, torch.Tensor):
58 log_vars[loss_name] = loss_value.mean()
59 elif isinstance(loss_value, float):
60 log_vars[loss_name] = loss_value
61 elif isinstance(loss_value, list):
62 log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value)
63 else:
64 raise TypeError(
65 f'{loss_name} is not a tensor or list of tensors or float')
66
67 loss = sum(_value for _key, _value in log_vars.items()
68 if 'loss' in _key)
69
70 log_vars['loss'] = loss
71 for loss_name, loss_value in log_vars.items():
72 # reduce loss when distributed training
73 if not isinstance(loss_value, float):
74 if dist.is_available() and dist.is_initialized():
75 loss_value = loss_value.data.clone()
76 dist.all_reduce(loss_value.div_(dist.get_world_size()))
77 log_vars[loss_name] = loss_value.item()
78 else:
79 log_vars[loss_name] = loss_value
80
81 return loss, log_vars
82
83 def train_step(self, data_batch, optimizer, **kwargs):
84 """The iteration step during training.
85
86 This method defines an iteration step during training, except for the
87 back propagation and optimizer updating, which are done in an optimizer
88 hook. Note that in some complicated cases or models, the whole process
89 including back propagation and optimizer updating is also defined in
90 this method, such as GAN.
91
92 Args:
93 data_batch (dict): The output of dataloader.
94 optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of
95 runner is passed to ``train_step()``. This argument is unused
96 and reserved.
97
98 Returns:
99 dict: It should contain at least 3 keys: ``loss``, ``log_vars``,
100 ``num_samples``.
101 ``loss`` is a tensor for back propagation, which can be a
102 weighted sum of multiple losses.
103 ``log_vars`` contains all the variables to be sent to the
104 logger.
105 ``num_samples`` indicates the batch size (when the model is
106 DDP, it means the batch size on each GPU), which is used for
107 averaging the logs.
108 """
109 losses = self.forward(**data_batch)
110
111 loss, log_vars = self._parse_losses(losses)
112
113 outputs = dict(
114 loss=loss,
115 log_vars=log_vars,
116 num_samples=len(next(iter(data_batch.values()))))
117
118 return outputs
119
120 def val_step(self, data_batch, optimizer, **kwargs):
121 """The iteration step during validation.
122
123 This method shares the same signature as :func:`train_step`, but used
124 during val epochs. Note that the evaluation after training epochs is
125 not implemented with this method, but an evaluation hook.
126 """
127 results = self.forward(return_loss=False, **data_batch)
128
129 outputs = dict(results=results)
130
131 return outputs
132
133 @abstractmethod
134 def show_result(self, **kwargs):
135 """Visualize the results."""
136 raise NotImplementedError
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmpose/models/detectors/base.py b/mmpose/models/detectors/base.py
--- a/mmpose/models/detectors/base.py
+++ b/mmpose/models/detectors/base.py
@@ -31,10 +31,6 @@
def forward_test(self, img, img_metas, **kwargs):
"""Defines the computation performed at testing."""
- @abstractmethod
- def simple_test(self, img, img_metas, **kwargs):
- """Simple test function."""
-
@abstractmethod
def forward(self, img, img_metas, return_loss=True, **kwargs):
"""Forward function."""
| {"golden_diff": "diff --git a/mmpose/models/detectors/base.py b/mmpose/models/detectors/base.py\n--- a/mmpose/models/detectors/base.py\n+++ b/mmpose/models/detectors/base.py\n@@ -31,10 +31,6 @@\n def forward_test(self, img, img_metas, **kwargs):\n \"\"\"Defines the computation performed at testing.\"\"\"\n \n- @abstractmethod\n- def simple_test(self, img, img_metas, **kwargs):\n- \"\"\"Simple test function.\"\"\"\n-\n @abstractmethod\n def forward(self, img, img_metas, return_loss=True, **kwargs):\n \"\"\"Forward function.\"\"\"\n", "issue": "Pylint: W0223\n```bash\r\nmmpose/models/detectors/bottom_up.py:19:0: W0223: Method 'simple_test' is abstract in class 'BasePose' but is not overridden (abstract-method)\r\nmmpose/models/detectors/top_down.py:18:0: W0223: Method 'simple_test' is abstract in class 'BasePose' but is not overridden (abstract-method)\r\n```\n", "before_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom collections import OrderedDict\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\n\n\nclass BasePose(nn.Module):\n \"\"\"Base class for pose detectors.\n\n All recognizers should subclass it.\n All subclass should overwrite:\n Methods:`forward_train`, supporting to forward when training.\n Methods:`forward_test`, supporting to forward when testing.\n\n Args:\n backbone (dict): Backbone modules to extract feature.\n head (dict): Head modules to give output.\n train_cfg (dict): Config for training. Default: None.\n test_cfg (dict): Config for testing. Default: None.\n \"\"\"\n\n __metaclass__ = ABCMeta\n\n @abstractmethod\n def forward_train(self, img, img_metas, **kwargs):\n \"\"\"Defines the computation performed at training.\"\"\"\n\n @abstractmethod\n def forward_test(self, img, img_metas, **kwargs):\n \"\"\"Defines the computation performed at testing.\"\"\"\n\n @abstractmethod\n def simple_test(self, img, img_metas, **kwargs):\n \"\"\"Simple test function.\"\"\"\n\n @abstractmethod\n def forward(self, img, img_metas, return_loss=True, **kwargs):\n \"\"\"Forward function.\"\"\"\n\n @staticmethod\n def _parse_losses(losses):\n \"\"\"Parse the raw outputs (losses) of the network.\n\n Args:\n losses (dict): Raw output of the network, which usually contain\n losses and other necessary information.\n\n Returns:\n tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor\n which may be a weighted sum of all losses, log_vars contains\n all the variables to be sent to the logger.\n \"\"\"\n log_vars = OrderedDict()\n for loss_name, loss_value in losses.items():\n if isinstance(loss_value, torch.Tensor):\n log_vars[loss_name] = loss_value.mean()\n elif isinstance(loss_value, float):\n log_vars[loss_name] = loss_value\n elif isinstance(loss_value, list):\n log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value)\n else:\n raise TypeError(\n f'{loss_name} is not a tensor or list of tensors or float')\n\n loss = sum(_value for _key, _value in log_vars.items()\n if 'loss' in _key)\n\n log_vars['loss'] = loss\n for loss_name, loss_value in log_vars.items():\n # reduce loss when distributed training\n if not isinstance(loss_value, float):\n if dist.is_available() and dist.is_initialized():\n loss_value = loss_value.data.clone()\n dist.all_reduce(loss_value.div_(dist.get_world_size()))\n log_vars[loss_name] = loss_value.item()\n else:\n log_vars[loss_name] = loss_value\n\n return loss, log_vars\n\n def train_step(self, data_batch, optimizer, **kwargs):\n \"\"\"The iteration step during training.\n\n This method defines an iteration 
step during training, except for the\n back propagation and optimizer updating, which are done in an optimizer\n hook. Note that in some complicated cases or models, the whole process\n including back propagation and optimizer updating is also defined in\n this method, such as GAN.\n\n Args:\n data_batch (dict): The output of dataloader.\n optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of\n runner is passed to ``train_step()``. This argument is unused\n and reserved.\n\n Returns:\n dict: It should contain at least 3 keys: ``loss``, ``log_vars``,\n ``num_samples``.\n ``loss`` is a tensor for back propagation, which can be a\n weighted sum of multiple losses.\n ``log_vars`` contains all the variables to be sent to the\n logger.\n ``num_samples`` indicates the batch size (when the model is\n DDP, it means the batch size on each GPU), which is used for\n averaging the logs.\n \"\"\"\n losses = self.forward(**data_batch)\n\n loss, log_vars = self._parse_losses(losses)\n\n outputs = dict(\n loss=loss,\n log_vars=log_vars,\n num_samples=len(next(iter(data_batch.values()))))\n\n return outputs\n\n def val_step(self, data_batch, optimizer, **kwargs):\n \"\"\"The iteration step during validation.\n\n This method shares the same signature as :func:`train_step`, but used\n during val epochs. Note that the evaluation after training epochs is\n not implemented with this method, but an evaluation hook.\n \"\"\"\n results = self.forward(return_loss=False, **data_batch)\n\n outputs = dict(results=results)\n\n return outputs\n\n @abstractmethod\n def show_result(self, **kwargs):\n \"\"\"Visualize the results.\"\"\"\n raise NotImplementedError\n", "path": "mmpose/models/detectors/base.py"}], "after_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom collections import OrderedDict\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\n\n\nclass BasePose(nn.Module):\n \"\"\"Base class for pose detectors.\n\n All recognizers should subclass it.\n All subclass should overwrite:\n Methods:`forward_train`, supporting to forward when training.\n Methods:`forward_test`, supporting to forward when testing.\n\n Args:\n backbone (dict): Backbone modules to extract feature.\n head (dict): Head modules to give output.\n train_cfg (dict): Config for training. Default: None.\n test_cfg (dict): Config for testing. 
Default: None.\n \"\"\"\n\n __metaclass__ = ABCMeta\n\n @abstractmethod\n def forward_train(self, img, img_metas, **kwargs):\n \"\"\"Defines the computation performed at training.\"\"\"\n\n @abstractmethod\n def forward_test(self, img, img_metas, **kwargs):\n \"\"\"Defines the computation performed at testing.\"\"\"\n\n @abstractmethod\n def forward(self, img, img_metas, return_loss=True, **kwargs):\n \"\"\"Forward function.\"\"\"\n\n @staticmethod\n def _parse_losses(losses):\n \"\"\"Parse the raw outputs (losses) of the network.\n\n Args:\n losses (dict): Raw output of the network, which usually contain\n losses and other necessary information.\n\n Returns:\n tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor\n which may be a weighted sum of all losses, log_vars contains\n all the variables to be sent to the logger.\n \"\"\"\n log_vars = OrderedDict()\n for loss_name, loss_value in losses.items():\n if isinstance(loss_value, torch.Tensor):\n log_vars[loss_name] = loss_value.mean()\n elif isinstance(loss_value, float):\n log_vars[loss_name] = loss_value\n elif isinstance(loss_value, list):\n log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value)\n else:\n raise TypeError(\n f'{loss_name} is not a tensor or list of tensors or float')\n\n loss = sum(_value for _key, _value in log_vars.items()\n if 'loss' in _key)\n\n log_vars['loss'] = loss\n for loss_name, loss_value in log_vars.items():\n # reduce loss when distributed training\n if not isinstance(loss_value, float):\n if dist.is_available() and dist.is_initialized():\n loss_value = loss_value.data.clone()\n dist.all_reduce(loss_value.div_(dist.get_world_size()))\n log_vars[loss_name] = loss_value.item()\n else:\n log_vars[loss_name] = loss_value\n\n return loss, log_vars\n\n def train_step(self, data_batch, optimizer, **kwargs):\n \"\"\"The iteration step during training.\n\n This method defines an iteration step during training, except for the\n back propagation and optimizer updating, which are done in an optimizer\n hook. Note that in some complicated cases or models, the whole process\n including back propagation and optimizer updating is also defined in\n this method, such as GAN.\n\n Args:\n data_batch (dict): The output of dataloader.\n optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of\n runner is passed to ``train_step()``. This argument is unused\n and reserved.\n\n Returns:\n dict: It should contain at least 3 keys: ``loss``, ``log_vars``,\n ``num_samples``.\n ``loss`` is a tensor for back propagation, which can be a\n weighted sum of multiple losses.\n ``log_vars`` contains all the variables to be sent to the\n logger.\n ``num_samples`` indicates the batch size (when the model is\n DDP, it means the batch size on each GPU), which is used for\n averaging the logs.\n \"\"\"\n losses = self.forward(**data_batch)\n\n loss, log_vars = self._parse_losses(losses)\n\n outputs = dict(\n loss=loss,\n log_vars=log_vars,\n num_samples=len(next(iter(data_batch.values()))))\n\n return outputs\n\n def val_step(self, data_batch, optimizer, **kwargs):\n \"\"\"The iteration step during validation.\n\n This method shares the same signature as :func:`train_step`, but used\n during val epochs. 
Note that the evaluation after training epochs is\n not implemented with this method, but an evaluation hook.\n \"\"\"\n results = self.forward(return_loss=False, **data_batch)\n\n outputs = dict(results=results)\n\n return outputs\n\n @abstractmethod\n def show_result(self, **kwargs):\n \"\"\"Visualize the results.\"\"\"\n raise NotImplementedError\n", "path": "mmpose/models/detectors/base.py"}]} | 1,710 | 142 |
gh_patches_debug_1841 | rasdani/github-patches | git_diff | kivy__python-for-android-1351 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python2 Build fails with make: *** [Makefile:426: sharedmods] Error 139
# Python version: 3.6
# OS: Arch Linux
# python-for-android version: 0.6.0
The command I use to build is:
`
p4a apk --private ~/Projects/Python/Mobile_Apps/BeerApp/ --package=org.drink.recommendations --name "Drink Recommendations" --version 0.2 --bootstrap=sdl2 --requirements=python2,kivy --ndk_version r9c
`
The error is:
`
make: *** [Makefile:426: sharedmods] Error 139
`
The build logs are in the following file.
[p4a_errors.txt](https://github.com/kivy/python-for-android/files/2091833/p4a_errors.txt)
Initially I thought that this was a buildozer issue, as I attempted it that way first. So, I opened an issue on their GitHub page and multiple users pointed out that they too were experiencing this issue. I've tried with both python3 and python2; the outcome is the same. There is absolutely no unicode in any of my source files, and I've also attempted the build with pygame instead of sdl2 (for python 2). There are also multiple similar SO threads open about this. 
Does anyone have any suggestions or ideas as to why this is happening and how to go about fixing it?
It's also worth noting that if I use the Kivy buildozer VM, I can use buildozer to carry out a successful build, just not on any other machine using either buildozer or p4a with the same source and build commands.
The buildozer issue is here: https://github.com/kivy/buildozer/issues/673
The output from the dump file is:
`
Reading symbols from /home/suroh/.local/share/python-for-android/build/other_builds/hostpython2/desktop/hostpython2/python...done.
[New LWP 28854]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Core was generated by ./python -E ./setup.py -q build.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x000055731803eb2a in PyInstance_NewRaw (klass=klass@entry=0x7f7cbf1d1c18, dict=0x557319325210, dict@entry=0x0) at Objects/classobject.c:534
534 inst->in_dict = dict;
File "/home/suroh/.local/share/python-for-android/build/other_builds/hostpython2/desktop/hostpython2/python-gdb.py", line 55
Py_TPFLAGS_HEAPTYPE = (1L << 9)
^
SyntaxError: invalid syntax
`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/hostpython2/__init__.py`
Content:
```
1
2 from pythonforandroid.toolchain import Recipe, shprint, current_directory, info, warning
3 from os.path import join, exists
4 import os
5 import sh
6
7
8 class Hostpython2Recipe(Recipe):
9 version = '2.7.2'
10 url = 'https://python.org/ftp/python/{version}/Python-{version}.tar.bz2'
11 name = 'hostpython2'
12
13 conflicts = ['hostpython3']
14
15 def get_build_container_dir(self, arch=None):
16 choices = self.check_recipe_choices()
17 dir_name = '-'.join([self.name] + choices)
18 return join(self.ctx.build_dir, 'other_builds', dir_name, 'desktop')
19
20 def get_build_dir(self, arch=None):
21 return join(self.get_build_container_dir(), self.name)
22
23 def prebuild_arch(self, arch):
24 # Override hostpython Setup?
25 shprint(sh.cp, join(self.get_recipe_dir(), 'Setup'),
26 join(self.get_build_dir(), 'Modules', 'Setup'))
27
28 def build_arch(self, arch):
29 with current_directory(self.get_build_dir()):
30
31 if exists('hostpython'):
32 info('hostpython already exists, skipping build')
33 self.ctx.hostpython = join(self.get_build_dir(),
34 'hostpython')
35 self.ctx.hostpgen = join(self.get_build_dir(),
36 'hostpgen')
37 return
38
39 if 'LIBS' in os.environ:
40 os.environ.pop('LIBS')
41 configure = sh.Command('./configure')
42
43 shprint(configure)
44 shprint(sh.make, '-j5')
45
46 shprint(sh.mv, join('Parser', 'pgen'), 'hostpgen')
47
48 if exists('python.exe'):
49 shprint(sh.mv, 'python.exe', 'hostpython')
50 elif exists('python'):
51 shprint(sh.mv, 'python', 'hostpython')
52 else:
53 warning('Unable to find the python executable after '
54 'hostpython build! Exiting.')
55 exit(1)
56
57 self.ctx.hostpython = join(self.get_build_dir(), 'hostpython')
58 self.ctx.hostpgen = join(self.get_build_dir(), 'hostpgen')
59
60
61 recipe = Hostpython2Recipe()
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/hostpython2/__init__.py b/pythonforandroid/recipes/hostpython2/__init__.py
--- a/pythonforandroid/recipes/hostpython2/__init__.py
+++ b/pythonforandroid/recipes/hostpython2/__init__.py
@@ -10,6 +10,7 @@
version = '2.7.2'
url = 'https://python.org/ftp/python/{version}/Python-{version}.tar.bz2'
name = 'hostpython2'
+ patches = ['fix-segfault-pygchead.patch']
conflicts = ['hostpython3']
| {"golden_diff": "diff --git a/pythonforandroid/recipes/hostpython2/__init__.py b/pythonforandroid/recipes/hostpython2/__init__.py\n--- a/pythonforandroid/recipes/hostpython2/__init__.py\n+++ b/pythonforandroid/recipes/hostpython2/__init__.py\n@@ -10,6 +10,7 @@\n version = '2.7.2'\n url = 'https://python.org/ftp/python/{version}/Python-{version}.tar.bz2'\n name = 'hostpython2'\n+ patches = ['fix-segfault-pygchead.patch']\n \n conflicts = ['hostpython3']\n", "issue": "Python2 Build fails with make: *** [Makefile:426: sharedmods] Error 139\n# Python version: 3.6\r\n# OS: Arch Linux \r\n# python-for-android version: 0.6.0 \r\n\r\nThe command I use to build is:\r\n\r\n` \r\n p4a apk --private ~/Projects/Python/Mobile_Apps/BeerApp/ --package=org.drink.recommendations --name \"Drink Recommendations\" --version 0.2 --bootstrap=sdl2 --requirements=python2,kivy --ndk_version r9c\r\n`\r\n\r\nThe error is:\r\n\r\n`\r\n make: *** [Makefile:426: sharedmods] Error 139\r\n`\r\n\r\nThe build logs are in the following file.\r\n[p4a_errors.txt](https://github.com/kivy/python-for-android/files/2091833/p4a_errors.txt)\r\n\r\nInitally I thought that this was a buildozer issue, as I attempted it that way first. So, I opened an issue on their github page and multiple users pointed out that they too were experiencing this issue. I've tried with both python3 and python2, the out come is the same. There is absolutely no unicode in any of my source files, I've also attempted the build with pygame instead of sdl2 (for python 2). There are also multiple simillar SO threads open about this. \r\n\r\nDoes anyone have any sugesstions or ideas as to why this is happening and how to go about fixing it?\r\n\r\nIt's also worth noting that if I use the kivy buildozer vm, I can use buildozer to carry out a successful build. 
Just not on any other machine using either buildozer or p4a, using the same source and build commands.\r\n\r\nThe buildozer issue is here: https://github.com/kivy/buildozer/issues/673\r\n\r\nThe output from the dump file is:\r\n\r\n`\r\n Reading symbols from /home/suroh/.local/share/python-for-android/build/other_builds/hostpython2/desktop/hostpython2/python...done.\r\n [New LWP 28854]\r\n [Thread debugging using libthread_db enabled]\r\n Using host libthread_db library \"/usr/lib/libthread_db.so.1\".\r\n Core was generated by ./python -E ./setup.py -q build.\r\n Program terminated with signal SIGSEGV, Segmentation fault.\r\n #0 0x000055731803eb2a in PyInstance_NewRaw (klass=klass@entry=0x7f7cbf1d1c18, dict=0x557319325210, dict@entry=0x0) at Objects/classobject.c:534\r\n 534 inst->in_dict = dict;\r\n File \"/home/suroh/.local/share/python-for-android/build/other_builds/hostpython2/desktop/hostpython2/python-gdb.py\", line 55\r\n Py_TPFLAGS_HEAPTYPE = (1L << 9)\r\n ^\r\n SyntaxError: invalid syntax\r\n`\n", "before_files": [{"content": "\nfrom pythonforandroid.toolchain import Recipe, shprint, current_directory, info, warning\nfrom os.path import join, exists\nimport os\nimport sh\n\n\nclass Hostpython2Recipe(Recipe):\n version = '2.7.2'\n url = 'https://python.org/ftp/python/{version}/Python-{version}.tar.bz2'\n name = 'hostpython2'\n\n conflicts = ['hostpython3']\n\n def get_build_container_dir(self, arch=None):\n choices = self.check_recipe_choices()\n dir_name = '-'.join([self.name] + choices)\n return join(self.ctx.build_dir, 'other_builds', dir_name, 'desktop')\n\n def get_build_dir(self, arch=None):\n return join(self.get_build_container_dir(), self.name)\n\n def prebuild_arch(self, arch):\n # Override hostpython Setup?\n shprint(sh.cp, join(self.get_recipe_dir(), 'Setup'),\n join(self.get_build_dir(), 'Modules', 'Setup'))\n\n def build_arch(self, arch):\n with current_directory(self.get_build_dir()):\n\n if exists('hostpython'):\n info('hostpython already exists, skipping build')\n self.ctx.hostpython = join(self.get_build_dir(),\n 'hostpython')\n self.ctx.hostpgen = join(self.get_build_dir(),\n 'hostpgen')\n return\n\n if 'LIBS' in os.environ:\n os.environ.pop('LIBS')\n configure = sh.Command('./configure')\n\n shprint(configure)\n shprint(sh.make, '-j5')\n\n shprint(sh.mv, join('Parser', 'pgen'), 'hostpgen')\n\n if exists('python.exe'):\n shprint(sh.mv, 'python.exe', 'hostpython')\n elif exists('python'):\n shprint(sh.mv, 'python', 'hostpython')\n else:\n warning('Unable to find the python executable after '\n 'hostpython build! 
Exiting.')\n exit(1)\n\n self.ctx.hostpython = join(self.get_build_dir(), 'hostpython')\n self.ctx.hostpgen = join(self.get_build_dir(), 'hostpgen')\n\n\nrecipe = Hostpython2Recipe()\n", "path": "pythonforandroid/recipes/hostpython2/__init__.py"}], "after_files": [{"content": "\nfrom pythonforandroid.toolchain import Recipe, shprint, current_directory, info, warning\nfrom os.path import join, exists\nfrom os import chdir\nimport os\nimport sh\n\n\nclass Hostpython2Recipe(Recipe):\n version = '2.7.2'\n url = 'https://python.org/ftp/python/{version}/Python-{version}.tar.bz2'\n name = 'hostpython2'\n patches = ['fix-segfault-pygchead.patch']\n\n conflicts = ['hostpython3']\n\n def get_build_container_dir(self, arch=None):\n choices = self.check_recipe_choices()\n dir_name = '-'.join([self.name] + choices)\n return join(self.ctx.build_dir, 'other_builds', dir_name, 'desktop')\n\n def get_build_dir(self, arch=None):\n return join(self.get_build_container_dir(), self.name)\n\n def prebuild_arch(self, arch):\n # Override hostpython Setup?\n shprint(sh.cp, join(self.get_recipe_dir(), 'Setup'),\n join(self.get_build_dir(), 'Modules', 'Setup'))\n\n def build_arch(self, arch):\n with current_directory(self.get_build_dir()):\n\n if exists('hostpython'):\n info('hostpython already exists, skipping build')\n self.ctx.hostpython = join(self.get_build_dir(),\n 'hostpython')\n self.ctx.hostpgen = join(self.get_build_dir(),\n 'hostpgen')\n return\n \n if 'LIBS' in os.environ:\n os.environ.pop('LIBS')\n configure = sh.Command('./configure')\n\n shprint(configure)\n shprint(sh.make, '-j5')\n\n shprint(sh.mv, join('Parser', 'pgen'), 'hostpgen')\n\n if exists('python.exe'):\n shprint(sh.mv, 'python.exe', 'hostpython')\n elif exists('python'):\n shprint(sh.mv, 'python', 'hostpython')\n else:\n warning('Unable to find the python executable after '\n 'hostpython build! Exiting.')\n exit(1)\n\n self.ctx.hostpython = join(self.get_build_dir(), 'hostpython')\n self.ctx.hostpgen = join(self.get_build_dir(), 'hostpgen')\n\n\nrecipe = Hostpython2Recipe()\n", "path": "pythonforandroid/recipes/hostpython2/__init__.py"}]} | 1,516 | 136 |
gh_patches_debug_19534 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support xls and xlsx
## Problem
Mathesar does not support excel files (xls or xlsx). Please see this file:
https://github.com/centerofci/mathesar/blob/0d99ee984206a99c6743a319504a1d86621d71d5/mathesar/imports/base.py#L13
## Proposed solution
Mathesar should support both xls and xlsx files. This should be simple to do with the xlrd (for xls) and openpyxl (for xlsx) libraries and the implementation would be similar to csv.
## Additional context
It is important to keep in mind that non-technical users can't really use csv but are comfortable with xls and xlsx. Implementing this feature would make mathesar much more friendly for these users.
I see that there's an issue about xlsx files: #2742; however, it seems to be closed? If you want, and nobody else is working on it, I can try providing a PR implementing the xls and xlsx features.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/imports/excel.py`
Content:
```
1 import pandas
2
3 from db.tables.operations.alter import update_pk_sequence_to_latest
4 from mathesar.database.base import create_mathesar_engine
5 from db.records.operations.insert import insert_records_from_excel
6 from db.tables.operations.create import create_string_column_table
7 from db.tables.operations.drop import drop_table
8 from mathesar.imports.utils import get_alternate_column_names, process_column_names
9 from psycopg2.errors import IntegrityError, DataError
10
11 from mathesar.state import reset_reflection
12
13
14 def insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe):
15 table = create_string_column_table(
16 name=name,
17 schema_oid=schema.oid,
18 column_names=column_names,
19 engine=engine,
20 comment=comment,
21 )
22
23 insert_records_from_excel(
24 table,
25 engine,
26 dataframe,
27 )
28 return table
29
30
31 def create_db_table_from_excel_data_file(data_file, name, schema, comment=None):
32 db_name = schema.database.name
33 engine = create_mathesar_engine(db_name)
34 dataframe = pandas.read_excel(data_file.file.path)
35 column_names = process_column_names(dataframe.columns)
36 try:
37 table = insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe)
38 update_pk_sequence_to_latest(engine, table)
39 except (IntegrityError, DataError):
40 drop_table(name=name, schema=schema.name, engine=engine)
41 column_names_alt = get_alternate_column_names(column_names)
42 table = insert_records_from_dataframe(name, schema, column_names_alt, engine, comment, dataframe)
43
44 reset_reflection(db_name=db_name)
45 return table
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/imports/excel.py b/mathesar/imports/excel.py
--- a/mathesar/imports/excel.py
+++ b/mathesar/imports/excel.py
@@ -28,10 +28,28 @@
return table
+def remove_empty_rows_and_columns_from_dataframe(df):
+ if df.iloc[0].isna().any():
+
+ # drop rows with all NaN values
+ df.dropna(how='all', inplace=True)
+
+ # drop columns with all NaN values
+ df.dropna(axis=1, how='all', inplace=True)
+
+ if all(df.columns.str.startswith('Unnamed')):
+ df.columns = df.iloc[0]
+ df = df[1:]
+
+ return df
+
+
def create_db_table_from_excel_data_file(data_file, name, schema, comment=None):
db_name = schema.database.name
engine = create_mathesar_engine(db_name)
- dataframe = pandas.read_excel(data_file.file.path)
+ dataframe = remove_empty_rows_and_columns_from_dataframe(
+ pandas.read_excel(data_file.file.path)
+ )
column_names = process_column_names(dataframe.columns)
try:
table = insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe)
| {"golden_diff": "diff --git a/mathesar/imports/excel.py b/mathesar/imports/excel.py\n--- a/mathesar/imports/excel.py\n+++ b/mathesar/imports/excel.py\n@@ -28,10 +28,28 @@\n return table\n \n \n+def remove_empty_rows_and_columns_from_dataframe(df):\n+ if df.iloc[0].isna().any():\n+\n+ # drop rows with all NaN values\n+ df.dropna(how='all', inplace=True)\n+\n+ # drop columns with all NaN values\n+ df.dropna(axis=1, how='all', inplace=True)\n+\n+ if all(df.columns.str.startswith('Unnamed')):\n+ df.columns = df.iloc[0]\n+ df = df[1:]\n+\n+ return df\n+\n+\n def create_db_table_from_excel_data_file(data_file, name, schema, comment=None):\n db_name = schema.database.name\n engine = create_mathesar_engine(db_name)\n- dataframe = pandas.read_excel(data_file.file.path)\n+ dataframe = remove_empty_rows_and_columns_from_dataframe(\n+ pandas.read_excel(data_file.file.path)\n+ )\n column_names = process_column_names(dataframe.columns)\n try:\n table = insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe)\n", "issue": "Support xls and xlsx\n## Problem\r\nMathesar does not support excel files (xls or xlsx). Please see this file: \r\n\r\nhttps://github.com/centerofci/mathesar/blob/0d99ee984206a99c6743a319504a1d86621d71d5/mathesar/imports/base.py#L13\r\n\r\n## Proposed solution\r\nMathesar should support both xls and xlsx files. This should be simple to do with the xlrd (for xls) and openpyxl (for xlsx) libraries and the implementation would be similar to csv.\r\n\r\n## Additional context\r\nIt is important to keep in mind that non-technical users can't really use csv but are comfortable with xls and xlsx. Implementing this feature would make mathesar much more friendly for these users.\r\n\r\nI see that there's an issue about xlsx files: #2742 however it seems to be closed ? 
If you want and nobody else is working on that I can try providing a PR implementing the xls and xlsx features.\r\n\n", "before_files": [{"content": "import pandas\n\nfrom db.tables.operations.alter import update_pk_sequence_to_latest\nfrom mathesar.database.base import create_mathesar_engine\nfrom db.records.operations.insert import insert_records_from_excel\nfrom db.tables.operations.create import create_string_column_table\nfrom db.tables.operations.drop import drop_table\nfrom mathesar.imports.utils import get_alternate_column_names, process_column_names\nfrom psycopg2.errors import IntegrityError, DataError\n\nfrom mathesar.state import reset_reflection\n\n\ndef insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe):\n table = create_string_column_table(\n name=name,\n schema_oid=schema.oid,\n column_names=column_names,\n engine=engine,\n comment=comment,\n )\n\n insert_records_from_excel(\n table,\n engine,\n dataframe,\n )\n return table\n\n\ndef create_db_table_from_excel_data_file(data_file, name, schema, comment=None):\n db_name = schema.database.name\n engine = create_mathesar_engine(db_name)\n dataframe = pandas.read_excel(data_file.file.path)\n column_names = process_column_names(dataframe.columns)\n try:\n table = insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe)\n update_pk_sequence_to_latest(engine, table)\n except (IntegrityError, DataError):\n drop_table(name=name, schema=schema.name, engine=engine)\n column_names_alt = get_alternate_column_names(column_names)\n table = insert_records_from_dataframe(name, schema, column_names_alt, engine, comment, dataframe)\n\n reset_reflection(db_name=db_name)\n return table\n", "path": "mathesar/imports/excel.py"}], "after_files": [{"content": "import pandas\n\nfrom db.tables.operations.alter import update_pk_sequence_to_latest\nfrom mathesar.database.base import create_mathesar_engine\nfrom db.records.operations.insert import insert_records_from_excel\nfrom db.tables.operations.create import create_string_column_table\nfrom db.tables.operations.drop import drop_table\nfrom mathesar.imports.utils import get_alternate_column_names, process_column_names\nfrom psycopg2.errors import IntegrityError, DataError\n\nfrom mathesar.state import reset_reflection\n\n\ndef insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe):\n table = create_string_column_table(\n name=name,\n schema_oid=schema.oid,\n column_names=column_names,\n engine=engine,\n comment=comment,\n )\n\n insert_records_from_excel(\n table,\n engine,\n dataframe,\n )\n return table\n\n\ndef remove_empty_rows_and_columns_from_dataframe(df):\n if df.iloc[0].isna().any():\n\n # drop rows with all NaN values\n df.dropna(how='all', inplace=True)\n\n # drop columns with all NaN values\n df.dropna(axis=1, how='all', inplace=True)\n\n if all(df.columns.str.startswith('Unnamed')):\n df.columns = df.iloc[0]\n df = df[1:]\n\n return df\n\n\ndef create_db_table_from_excel_data_file(data_file, name, schema, comment=None):\n db_name = schema.database.name\n engine = create_mathesar_engine(db_name)\n dataframe = remove_empty_rows_and_columns_from_dataframe(\n pandas.read_excel(data_file.file.path)\n )\n column_names = process_column_names(dataframe.columns)\n try:\n table = insert_records_from_dataframe(name, schema, column_names, engine, comment, dataframe)\n update_pk_sequence_to_latest(engine, table)\n except (IntegrityError, DataError):\n drop_table(name=name, schema=schema.name, 
engine=engine)\n column_names_alt = get_alternate_column_names(column_names)\n table = insert_records_from_dataframe(name, schema, column_names_alt, engine, comment, dataframe)\n\n reset_reflection(db_name=db_name)\n return table\n", "path": "mathesar/imports/excel.py"}]} | 921 | 279 |
gh_patches_debug_8999 | rasdani/github-patches | git_diff | elastic__ecs-1112 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support the `value` parameter for `constant_keyword` fields
The [`constant_keyword` data type](https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html#constant-keyword-field-type) accepts the `value` parameter. On a `constant_keyword` field if the `value` parameter is specified, that value is used for all documents in the index. Otherwise, it is set based on the first document that gets indexed.
If a user wishes to use the ECS tooling to manage their ES index templates, they may also wish to control the value of `value` via their custom field definitions.
Example definition:
```yaml
- name: acme
title: acme
group: 2
short: Fields describing acme-related needs.
description: >
Acme-related needs
fields:
- name: stream
description: stream
level: extended
type: constant_keyword
value: widgets
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/generators/es_template.py`
Content:
```
1 import json
2 import sys
3 import copy
4
5 from os.path import join
6 from generators import ecs_helpers
7
8
9 def generate(ecs_flat, ecs_version, out_dir, template_settings_file, mapping_settings_file):
10 field_mappings = {}
11 for flat_name in sorted(ecs_flat):
12 field = ecs_flat[flat_name]
13 nestings = flat_name.split('.')
14 dict_add_nested(field_mappings, nestings, entry_for(field))
15
16 if mapping_settings_file:
17 with open(mapping_settings_file) as f:
18 mappings_section = json.load(f)
19 else:
20 mappings_section = default_mapping_settings(ecs_version)
21
22 mappings_section['properties'] = field_mappings
23
24 generate_template_version(6, mappings_section, out_dir, template_settings_file)
25 generate_template_version(7, mappings_section, out_dir, template_settings_file)
26
27 # Field mappings
28
29
30 def dict_add_nested(dct, nestings, value):
31 current_nesting = nestings[0]
32 rest_nestings = nestings[1:]
33 if len(rest_nestings) > 0:
34 dct.setdefault(current_nesting, {})
35 dct[current_nesting].setdefault('properties', {})
36
37 dict_add_nested(
38 dct[current_nesting]['properties'],
39 rest_nestings,
40 value)
41
42 else:
43 if current_nesting in dct and 'type' in value and 'object' == value['type']:
44 return
45 dct[current_nesting] = value
46
47
48 def entry_for(field):
49 field_entry = {'type': field['type']}
50 try:
51 if field['type'] == 'object' or field['type'] == 'nested':
52 if 'enabled' in field and not field['enabled']:
53 ecs_helpers.dict_copy_existing_keys(field, field_entry, ['enabled'])
54 # the index field is only valid for field types that are not object and nested
55 elif 'index' in field and not field['index']:
56 ecs_helpers.dict_copy_existing_keys(field, field_entry, ['index', 'doc_values'])
57
58 if field['type'] == 'keyword':
59 ecs_helpers.dict_copy_existing_keys(field, field_entry, ['ignore_above'])
60 elif field['type'] == 'text':
61 ecs_helpers.dict_copy_existing_keys(field, field_entry, ['norms'])
62 elif field['type'] == 'alias':
63 ecs_helpers.dict_copy_existing_keys(field, field_entry, ['path'])
64 elif field['type'] == 'scaled_float':
65 ecs_helpers.dict_copy_existing_keys(field, field_entry, ['scaling_factor'])
66
67 if 'multi_fields' in field:
68 field_entry['fields'] = {}
69 for mf in field['multi_fields']:
70 mf_type = mf['type']
71 mf_entry = {'type': mf_type}
72 if mf_type == 'keyword':
73 ecs_helpers.dict_copy_existing_keys(mf, mf_entry, ['normalizer', 'ignore_above'])
74 elif mf_type == 'text':
75 ecs_helpers.dict_copy_existing_keys(mf, mf_entry, ['norms'])
76 field_entry['fields'][mf['name']] = mf_entry
77
78 except KeyError as ex:
79 print("Exception {} occurred for field {}".format(ex, field))
80 raise ex
81 return field_entry
82
83 # Generated files
84
85
86 def generate_template_version(elasticsearch_version, mappings_section, out_dir, template_settings_file):
87 ecs_helpers.make_dirs(join(out_dir, 'elasticsearch', str(elasticsearch_version)))
88 if template_settings_file:
89 with open(template_settings_file) as f:
90 template = json.load(f)
91 else:
92 template = default_template_settings()
93 if elasticsearch_version == 6:
94 template['mappings'] = {'_doc': mappings_section}
95 else:
96 template['mappings'] = mappings_section
97
98 filename = join(out_dir, "elasticsearch/{}/template.json".format(elasticsearch_version))
99 save_json(filename, template)
100
101
102 def save_json(file, data):
103 open_mode = "wb"
104 if sys.version_info >= (3, 0):
105 open_mode = "w"
106 with open(file, open_mode) as jsonfile:
107 jsonfile.write(json.dumps(data, indent=2, sort_keys=True))
108
109
110 def default_template_settings():
111 return {
112 "index_patterns": ["try-ecs-*"],
113 "order": 1,
114 "settings": {
115 "index": {
116 "mapping": {
117 "total_fields": {
118 "limit": 10000
119 }
120 },
121 "refresh_interval": "5s"
122 }
123 },
124 "mappings": {}
125 }
126
127
128 def default_mapping_settings(ecs_version):
129 return {
130 "_meta": {"version": ecs_version},
131 "date_detection": False,
132 "dynamic_templates": [
133 {
134 "strings_as_keyword": {
135 "mapping": {
136 "ignore_above": 1024,
137 "type": "keyword"
138 },
139 "match_mapping_type": "string"
140 }
141 }
142 ],
143 "properties": {}
144 }
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/generators/es_template.py b/scripts/generators/es_template.py
--- a/scripts/generators/es_template.py
+++ b/scripts/generators/es_template.py
@@ -57,6 +57,8 @@
if field['type'] == 'keyword':
ecs_helpers.dict_copy_existing_keys(field, field_entry, ['ignore_above'])
+ elif field['type'] == 'constant_keyword':
+ ecs_helpers.dict_copy_existing_keys(field, field_entry, ['value'])
elif field['type'] == 'text':
ecs_helpers.dict_copy_existing_keys(field, field_entry, ['norms'])
elif field['type'] == 'alias':
| {"golden_diff": "diff --git a/scripts/generators/es_template.py b/scripts/generators/es_template.py\n--- a/scripts/generators/es_template.py\n+++ b/scripts/generators/es_template.py\n@@ -57,6 +57,8 @@\n \n if field['type'] == 'keyword':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['ignore_above'])\n+ elif field['type'] == 'constant_keyword':\n+ ecs_helpers.dict_copy_existing_keys(field, field_entry, ['value'])\n elif field['type'] == 'text':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['norms'])\n elif field['type'] == 'alias':\n", "issue": "Support the `value` parameter for `constant_keyword` fields\nThe [`constant_keyword` data type](https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html#constant-keyword-field-type) accepts the `value` parameter. On a `constant_keyword` field if the `value` parameter is specified, that value is used for all documents in the index. Otherwise, it is set based on the first document that gets indexed.\r\n\r\nIf a user wishes to use the ECS tooling to manage their ES index templates, they may also wish to control the value of `value` via their custom field definitions.\r\n\r\nExample definition:\r\n\r\n```yaml\r\n - name: acme\r\n title: acme\r\n group: 2\r\n short: Fields describing acme-related needs.\r\n description: >\r\n Acme-related needs\r\n fields:\r\n - name: stream\r\n description: stream\r\n level: extended\r\n type: constant_keyword\r\n value: widgets\r\n```\n", "before_files": [{"content": "import json\nimport sys\nimport copy\n\nfrom os.path import join\nfrom generators import ecs_helpers\n\n\ndef generate(ecs_flat, ecs_version, out_dir, template_settings_file, mapping_settings_file):\n field_mappings = {}\n for flat_name in sorted(ecs_flat):\n field = ecs_flat[flat_name]\n nestings = flat_name.split('.')\n dict_add_nested(field_mappings, nestings, entry_for(field))\n\n if mapping_settings_file:\n with open(mapping_settings_file) as f:\n mappings_section = json.load(f)\n else:\n mappings_section = default_mapping_settings(ecs_version)\n\n mappings_section['properties'] = field_mappings\n\n generate_template_version(6, mappings_section, out_dir, template_settings_file)\n generate_template_version(7, mappings_section, out_dir, template_settings_file)\n\n# Field mappings\n\n\ndef dict_add_nested(dct, nestings, value):\n current_nesting = nestings[0]\n rest_nestings = nestings[1:]\n if len(rest_nestings) > 0:\n dct.setdefault(current_nesting, {})\n dct[current_nesting].setdefault('properties', {})\n\n dict_add_nested(\n dct[current_nesting]['properties'],\n rest_nestings,\n value)\n\n else:\n if current_nesting in dct and 'type' in value and 'object' == value['type']:\n return\n dct[current_nesting] = value\n\n\ndef entry_for(field):\n field_entry = {'type': field['type']}\n try:\n if field['type'] == 'object' or field['type'] == 'nested':\n if 'enabled' in field and not field['enabled']:\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['enabled'])\n # the index field is only valid for field types that are not object and nested\n elif 'index' in field and not field['index']:\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['index', 'doc_values'])\n\n if field['type'] == 'keyword':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['ignore_above'])\n elif field['type'] == 'text':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['norms'])\n elif field['type'] == 'alias':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['path'])\n elif field['type'] == 
'scaled_float':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['scaling_factor'])\n\n if 'multi_fields' in field:\n field_entry['fields'] = {}\n for mf in field['multi_fields']:\n mf_type = mf['type']\n mf_entry = {'type': mf_type}\n if mf_type == 'keyword':\n ecs_helpers.dict_copy_existing_keys(mf, mf_entry, ['normalizer', 'ignore_above'])\n elif mf_type == 'text':\n ecs_helpers.dict_copy_existing_keys(mf, mf_entry, ['norms'])\n field_entry['fields'][mf['name']] = mf_entry\n\n except KeyError as ex:\n print(\"Exception {} occurred for field {}\".format(ex, field))\n raise ex\n return field_entry\n\n# Generated files\n\n\ndef generate_template_version(elasticsearch_version, mappings_section, out_dir, template_settings_file):\n ecs_helpers.make_dirs(join(out_dir, 'elasticsearch', str(elasticsearch_version)))\n if template_settings_file:\n with open(template_settings_file) as f:\n template = json.load(f)\n else:\n template = default_template_settings()\n if elasticsearch_version == 6:\n template['mappings'] = {'_doc': mappings_section}\n else:\n template['mappings'] = mappings_section\n\n filename = join(out_dir, \"elasticsearch/{}/template.json\".format(elasticsearch_version))\n save_json(filename, template)\n\n\ndef save_json(file, data):\n open_mode = \"wb\"\n if sys.version_info >= (3, 0):\n open_mode = \"w\"\n with open(file, open_mode) as jsonfile:\n jsonfile.write(json.dumps(data, indent=2, sort_keys=True))\n\n\ndef default_template_settings():\n return {\n \"index_patterns\": [\"try-ecs-*\"],\n \"order\": 1,\n \"settings\": {\n \"index\": {\n \"mapping\": {\n \"total_fields\": {\n \"limit\": 10000\n }\n },\n \"refresh_interval\": \"5s\"\n }\n },\n \"mappings\": {}\n }\n\n\ndef default_mapping_settings(ecs_version):\n return {\n \"_meta\": {\"version\": ecs_version},\n \"date_detection\": False,\n \"dynamic_templates\": [\n {\n \"strings_as_keyword\": {\n \"mapping\": {\n \"ignore_above\": 1024,\n \"type\": \"keyword\"\n },\n \"match_mapping_type\": \"string\"\n }\n }\n ],\n \"properties\": {}\n }\n", "path": "scripts/generators/es_template.py"}], "after_files": [{"content": "import json\nimport sys\nimport copy\n\nfrom os.path import join\nfrom generators import ecs_helpers\n\n\ndef generate(ecs_flat, ecs_version, out_dir, template_settings_file, mapping_settings_file):\n field_mappings = {}\n for flat_name in sorted(ecs_flat):\n field = ecs_flat[flat_name]\n nestings = flat_name.split('.')\n dict_add_nested(field_mappings, nestings, entry_for(field))\n\n if mapping_settings_file:\n with open(mapping_settings_file) as f:\n mappings_section = json.load(f)\n else:\n mappings_section = default_mapping_settings(ecs_version)\n\n mappings_section['properties'] = field_mappings\n\n generate_template_version(6, mappings_section, out_dir, template_settings_file)\n generate_template_version(7, mappings_section, out_dir, template_settings_file)\n\n# Field mappings\n\n\ndef dict_add_nested(dct, nestings, value):\n current_nesting = nestings[0]\n rest_nestings = nestings[1:]\n if len(rest_nestings) > 0:\n dct.setdefault(current_nesting, {})\n dct[current_nesting].setdefault('properties', {})\n\n dict_add_nested(\n dct[current_nesting]['properties'],\n rest_nestings,\n value)\n\n else:\n if current_nesting in dct and 'type' in value and 'object' == value['type']:\n return\n dct[current_nesting] = value\n\n\ndef entry_for(field):\n field_entry = {'type': field['type']}\n try:\n if field['type'] == 'object' or field['type'] == 'nested':\n if 'enabled' in field and not field['enabled']:\n 
ecs_helpers.dict_copy_existing_keys(field, field_entry, ['enabled'])\n # the index field is only valid for field types that are not object and nested\n elif 'index' in field and not field['index']:\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['index', 'doc_values'])\n\n if field['type'] == 'keyword':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['ignore_above'])\n elif field['type'] == 'constant_keyword':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['value'])\n elif field['type'] == 'text':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['norms'])\n elif field['type'] == 'alias':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['path'])\n elif field['type'] == 'scaled_float':\n ecs_helpers.dict_copy_existing_keys(field, field_entry, ['scaling_factor'])\n\n if 'multi_fields' in field:\n field_entry['fields'] = {}\n for mf in field['multi_fields']:\n mf_type = mf['type']\n mf_entry = {'type': mf_type}\n if mf_type == 'keyword':\n ecs_helpers.dict_copy_existing_keys(mf, mf_entry, ['normalizer', 'ignore_above'])\n elif mf_type == 'text':\n ecs_helpers.dict_copy_existing_keys(mf, mf_entry, ['norms'])\n field_entry['fields'][mf['name']] = mf_entry\n\n except KeyError as ex:\n print(\"Exception {} occurred for field {}\".format(ex, field))\n raise ex\n return field_entry\n\n# Generated files\n\n\ndef generate_template_version(elasticsearch_version, mappings_section, out_dir, template_settings_file):\n ecs_helpers.make_dirs(join(out_dir, 'elasticsearch', str(elasticsearch_version)))\n if template_settings_file:\n with open(template_settings_file) as f:\n template = json.load(f)\n else:\n template = default_template_settings()\n if elasticsearch_version == 6:\n template['mappings'] = {'_doc': mappings_section}\n else:\n template['mappings'] = mappings_section\n\n filename = join(out_dir, \"elasticsearch/{}/template.json\".format(elasticsearch_version))\n save_json(filename, template)\n\n\ndef save_json(file, data):\n open_mode = \"wb\"\n if sys.version_info >= (3, 0):\n open_mode = \"w\"\n with open(file, open_mode) as jsonfile:\n jsonfile.write(json.dumps(data, indent=2, sort_keys=True))\n\n\ndef default_template_settings():\n return {\n \"index_patterns\": [\"try-ecs-*\"],\n \"order\": 1,\n \"settings\": {\n \"index\": {\n \"mapping\": {\n \"total_fields\": {\n \"limit\": 10000\n }\n },\n \"refresh_interval\": \"5s\"\n }\n },\n \"mappings\": {}\n }\n\n\ndef default_mapping_settings(ecs_version):\n return {\n \"_meta\": {\"version\": ecs_version},\n \"date_detection\": False,\n \"dynamic_templates\": [\n {\n \"strings_as_keyword\": {\n \"mapping\": {\n \"ignore_above\": 1024,\n \"type\": \"keyword\"\n },\n \"match_mapping_type\": \"string\"\n }\n }\n ],\n \"properties\": {}\n }\n", "path": "scripts/generators/es_template.py"}]} | 1,845 | 143 |
gh_patches_debug_57187 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-3325 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[beta][v16][rc1] Extra content formats (pdf, epub, etc.) are not generated on the second validation
Version 16 RC1.
Test scenario:
- I publish a tutorial taken from the validation queue (I used the one on programming basics)
- The tutorial is reserved, then published.
- I edit the tutorial's subtitle and request validation again (2 minutes after the first publication)
- I reserve it and publish the tutorial once more without ticking the major-update box, i.e. as a minor version
- The tutorial is published this time, but after 5 minutes there is still no sign of a pdf, epub, etc.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/tutorialv2/management/commands/publication_watchdog.py`
Content:
```
1 # coding: utf-8
2 from os.path import dirname, join
3 import os
4 import time
5
6 import shutil
7 from django.core.management import BaseCommand
8 from pathtools.path import listdir
9 from watchdog.observers import Observer
10 from watchdog.events import FileCreatedEvent, FileSystemEventHandler, LoggingEventHandler
11 from zds import settings
12 from zds.tutorialv2.publication_utils import generate_exernal_content
13 from codecs import open
14
15
16 class TutorialIsPublished(FileSystemEventHandler):
17 prepare_callbacks = [] # because we can imagine we will create far more than test directory existence
18 finish_callbacks = [] # because we can imagine we will send a PM on success or failure one day
19
20 @staticmethod
21 def __create_dir(extra_contents_path):
22 if not os.path.exists(extra_contents_path):
23 os.makedirs(extra_contents_path)
24
25 @staticmethod
26 def __cleanup_build_and_watchdog(extra_contents_path, watchdog_file_path):
27 for listed in listdir(extra_contents_path, recursive=False):
28 try:
29 shutil.copy(join(extra_contents_path, listed), extra_contents_path.replace("__building", ""))
30 except Exception:
31 pass
32 shutil.rmtree(extra_contents_path)
33 os.remove()
34
35 def __init__(self):
36 self.prepare_callbacks = [TutorialIsPublished.__create_dir]
37 self.finish_callbacks = [TutorialIsPublished.__cleanup_build_and_watchdog]
38
39 def on_created(self, event):
40 super(TutorialIsPublished, self).on_created(event)
41 pandoc_debug_str = ""
42
43 if settings.PANDOC_LOG_STATE:
44 pandoc_debug_str = " 2>&1 | tee -a " + settings.PANDOC_LOG
45 if isinstance(event, FileCreatedEvent):
46 with open(event.src_path, encoding="utf-8") as f:
47 infos = f.read().strip().split(";")
48 md_file_path = infos[1]
49 base_name = infos[0]
50 extra_contents_path = dirname(md_file_path)
51 self.prepare_generation(extra_contents_path)
52 try:
53 generate_exernal_content(base_name, extra_contents_path, md_file_path,
54 pandoc_debug_str, overload_settings=True)
55 finally:
56 self.finish_generation(extra_contents_path, event.src_path)
57
58 def prepare_generation(self, extra_contents_path):
59
60 for callback in self.prepare_callbacks:
61 callback(extra_contents_path)
62
63 def finish_generation(self, extra_contents_path, watchdog_file_path):
64 for callback in self.finish_callbacks:
65 callback(extra_contents_path, watchdog_file_path)
66
67
68 class Command(BaseCommand):
69 help = 'Launch a watchdog that generate all exported formats (epub, pdf...) files without blocking request handling'
70
71 def handle(self, *args, **options):
72 path = settings.ZDS_APP['content']['extra_content_watchdog_dir']
73 event_handler = TutorialIsPublished()
74 observer = Observer()
75 observer.schedule(event_handler, path, recursive=True)
76 observer.schedule(LoggingEventHandler(), path)
77 observer.start()
78 try:
79 while True:
80 time.sleep(1)
81 except KeyboardInterrupt:
82 observer.stop()
83 observer.join()
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zds/tutorialv2/management/commands/publication_watchdog.py b/zds/tutorialv2/management/commands/publication_watchdog.py
--- a/zds/tutorialv2/management/commands/publication_watchdog.py
+++ b/zds/tutorialv2/management/commands/publication_watchdog.py
@@ -30,7 +30,7 @@
except Exception:
pass
shutil.rmtree(extra_contents_path)
- os.remove()
+ os.remove(watchdog_file_path)
def __init__(self):
self.prepare_callbacks = [TutorialIsPublished.__create_dir]
| {"golden_diff": "diff --git a/zds/tutorialv2/management/commands/publication_watchdog.py b/zds/tutorialv2/management/commands/publication_watchdog.py\n--- a/zds/tutorialv2/management/commands/publication_watchdog.py\n+++ b/zds/tutorialv2/management/commands/publication_watchdog.py\n@@ -30,7 +30,7 @@\n except Exception:\n pass\n shutil.rmtree(extra_contents_path)\n- os.remove()\n+ os.remove(watchdog_file_path)\n \n def __init__(self):\n self.prepare_callbacks = [TutorialIsPublished.__create_dir]\n", "issue": "[beta][v16][rc1] Les contenus extra (pdf, epub, etc.) ne sont pas g\u00e9n\u00e9r\u00e9s lors de la 2nde validation\nVersion 16 RC1.\n\nSc\u00e9nario de test : \n- Je publie un tutoriel pris en zone de validation (J'ai pris celui sur les bases de la prog)\n- Le tutoriel est r\u00e9serv\u00e9, publi\u00e9.\n- Je modifie le sous-titre du tutoriel et redemande sa validation (2 min apr\u00e8s la premi\u00e8re publication)\n- Je le r\u00e9server puis publie une fois de plus le tutoriel sans cocher la case maj majeur, donc en version mineure\n- Le tutoriel est publi\u00e9 cette fois, mais apr\u00e8s 5 min, toujours pas de signe d'un pdf ni epub, etc.\n\n", "before_files": [{"content": "# coding: utf-8\nfrom os.path import dirname, join\nimport os\nimport time\n\nimport shutil\nfrom django.core.management import BaseCommand\nfrom pathtools.path import listdir\nfrom watchdog.observers import Observer\nfrom watchdog.events import FileCreatedEvent, FileSystemEventHandler, LoggingEventHandler\nfrom zds import settings\nfrom zds.tutorialv2.publication_utils import generate_exernal_content\nfrom codecs import open\n\n\nclass TutorialIsPublished(FileSystemEventHandler):\n prepare_callbacks = [] # because we can imagine we will create far more than test directory existence\n finish_callbacks = [] # because we can imagine we will send a PM on success or failure one day\n\n @staticmethod\n def __create_dir(extra_contents_path):\n if not os.path.exists(extra_contents_path):\n os.makedirs(extra_contents_path)\n\n @staticmethod\n def __cleanup_build_and_watchdog(extra_contents_path, watchdog_file_path):\n for listed in listdir(extra_contents_path, recursive=False):\n try:\n shutil.copy(join(extra_contents_path, listed), extra_contents_path.replace(\"__building\", \"\"))\n except Exception:\n pass\n shutil.rmtree(extra_contents_path)\n os.remove()\n\n def __init__(self):\n self.prepare_callbacks = [TutorialIsPublished.__create_dir]\n self.finish_callbacks = [TutorialIsPublished.__cleanup_build_and_watchdog]\n\n def on_created(self, event):\n super(TutorialIsPublished, self).on_created(event)\n pandoc_debug_str = \"\"\n\n if settings.PANDOC_LOG_STATE:\n pandoc_debug_str = \" 2>&1 | tee -a \" + settings.PANDOC_LOG\n if isinstance(event, FileCreatedEvent):\n with open(event.src_path, encoding=\"utf-8\") as f:\n infos = f.read().strip().split(\";\")\n md_file_path = infos[1]\n base_name = infos[0]\n extra_contents_path = dirname(md_file_path)\n self.prepare_generation(extra_contents_path)\n try:\n generate_exernal_content(base_name, extra_contents_path, md_file_path,\n pandoc_debug_str, overload_settings=True)\n finally:\n self.finish_generation(extra_contents_path, event.src_path)\n\n def prepare_generation(self, extra_contents_path):\n\n for callback in self.prepare_callbacks:\n callback(extra_contents_path)\n\n def finish_generation(self, extra_contents_path, watchdog_file_path):\n for callback in self.finish_callbacks:\n callback(extra_contents_path, watchdog_file_path)\n\n\nclass Command(BaseCommand):\n 
help = 'Launch a watchdog that generate all exported formats (epub, pdf...) files without blocking request handling'\n\n def handle(self, *args, **options):\n path = settings.ZDS_APP['content']['extra_content_watchdog_dir']\n event_handler = TutorialIsPublished()\n observer = Observer()\n observer.schedule(event_handler, path, recursive=True)\n observer.schedule(LoggingEventHandler(), path)\n observer.start()\n try:\n while True:\n time.sleep(1)\n except KeyboardInterrupt:\n observer.stop()\n observer.join()\n", "path": "zds/tutorialv2/management/commands/publication_watchdog.py"}], "after_files": [{"content": "# coding: utf-8\nfrom os.path import dirname, join\nimport os\nimport time\n\nimport shutil\nfrom django.core.management import BaseCommand\nfrom pathtools.path import listdir\nfrom watchdog.observers import Observer\nfrom watchdog.events import FileCreatedEvent, FileSystemEventHandler, LoggingEventHandler\nfrom zds import settings\nfrom zds.tutorialv2.publication_utils import generate_exernal_content\nfrom codecs import open\n\n\nclass TutorialIsPublished(FileSystemEventHandler):\n prepare_callbacks = [] # because we can imagine we will create far more than test directory existence\n finish_callbacks = [] # because we can imagine we will send a PM on success or failure one day\n\n @staticmethod\n def __create_dir(extra_contents_path):\n if not os.path.exists(extra_contents_path):\n os.makedirs(extra_contents_path)\n\n @staticmethod\n def __cleanup_build_and_watchdog(extra_contents_path, watchdog_file_path):\n for listed in listdir(extra_contents_path, recursive=False):\n try:\n shutil.copy(join(extra_contents_path, listed), extra_contents_path.replace(\"__building\", \"\"))\n except Exception:\n pass\n shutil.rmtree(extra_contents_path)\n os.remove(watchdog_file_path)\n\n def __init__(self):\n self.prepare_callbacks = [TutorialIsPublished.__create_dir]\n self.finish_callbacks = [TutorialIsPublished.__cleanup_build_and_watchdog]\n\n def on_created(self, event):\n super(TutorialIsPublished, self).on_created(event)\n pandoc_debug_str = \"\"\n\n if settings.PANDOC_LOG_STATE:\n pandoc_debug_str = \" 2>&1 | tee -a \" + settings.PANDOC_LOG\n if isinstance(event, FileCreatedEvent):\n with open(event.src_path, encoding=\"utf-8\") as f:\n infos = f.read().strip().split(\";\")\n md_file_path = infos[1]\n base_name = infos[0]\n extra_contents_path = dirname(md_file_path)\n self.prepare_generation(extra_contents_path)\n try:\n generate_exernal_content(base_name, extra_contents_path, md_file_path,\n pandoc_debug_str, overload_settings=True)\n finally:\n self.finish_generation(extra_contents_path, event.src_path)\n\n def prepare_generation(self, extra_contents_path):\n\n for callback in self.prepare_callbacks:\n callback(extra_contents_path)\n\n def finish_generation(self, extra_contents_path, watchdog_file_path):\n for callback in self.finish_callbacks:\n callback(extra_contents_path, watchdog_file_path)\n\n\nclass Command(BaseCommand):\n help = 'Launch a watchdog that generate all exported formats (epub, pdf...) 
files without blocking request handling'\n\n def handle(self, *args, **options):\n path = settings.ZDS_APP['content']['extra_content_watchdog_dir']\n event_handler = TutorialIsPublished()\n observer = Observer()\n observer.schedule(event_handler, path, recursive=True)\n observer.schedule(LoggingEventHandler(), path)\n observer.start()\n try:\n while True:\n time.sleep(1)\n except KeyboardInterrupt:\n observer.stop()\n observer.join()\n", "path": "zds/tutorialv2/management/commands/publication_watchdog.py"}]} | 1,236 | 130 |
gh_patches_debug_18278 | rasdani/github-patches | git_diff | streamlink__streamlink-1731 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Vaughnlive RTMP port changed from 1935 to 2935
Very brief bug, very simple fix.
rtmp_server_map sends all requests to 192.240.105.171:1935, which doesn't work (no data is returned from the stream).
Changing all rtmp_server_map entries to 192.240.105.171:2935 works for me.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/vaughnlive.py`
Content:
```
1 import itertools
2 import logging
3 import random
4 import re
5 import ssl
6
7 import websocket
8
9 from streamlink.plugin import Plugin
10 from streamlink.plugin.api import useragents
11 from streamlink.stream import RTMPStream
12
13 _url_re = re.compile(r"""
14 http(s)?://(\w+\.)?
15 (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv
16 (/embed/video)?
17 /(?P<channel>[^/&?]+)
18 """, re.VERBOSE)
19
20
21 class VLWebSocket(websocket.WebSocket):
22 def __init__(self, **_):
23 self.session = _.pop("session")
24 self.logger = logging.getLogger("streamlink.plugins.vaughnlive.websocket")
25 sslopt = _.pop("sslopt", {})
26 sslopt["cert_reqs"] = ssl.CERT_NONE
27 super(VLWebSocket, self).__init__(sslopt=sslopt, **_)
28
29 def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):
30 self.logger.debug("Sending message: {0}", payload)
31 return super(VLWebSocket, self).send(payload + "\n\x00", opcode)
32
33 def recv(self):
34 d = super(VLWebSocket, self).recv().replace("\n", "").replace("\x00", "")
35 return d.split(" ", 1)
36
37
38 class VaughnLive(Plugin):
39 servers = ["wss://sapi-ws-{0}x{1:02}.vaughnlive.tv".format(x, y) for x, y in itertools.product(range(1, 3),
40 range(1, 6))]
41 origin = "https://vaughnlive.tv"
42 rtmp_server_map = {
43 "594140c69edad": "192.240.105.171:1935",
44 "585c4cab1bef1": "192.240.105.171:1935",
45 "5940d648b3929": "192.240.105.171:1935",
46 "5941854b39bc4": "192.240.105.171:1935"
47 }
48 name_remap = {"#vl": "live", "#btv": "btv", "#pt": "pt", "#igb": "instagib", "#vtv": "vtv"}
49 domain_map = {"vaughnlive": "#vl", "breakers": "#btv", "instagib": "#igb", "vapers": "#vtv", "pearltime": "#pt"}
50
51 @classmethod
52 def can_handle_url(cls, url):
53 return _url_re.match(url)
54
55 def api_url(self):
56 return random.choice(self.servers)
57
58 def parse_ack(self, action, message):
59 if action.endswith("3"):
60 channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(";")
61 is_live = is_live == "1"
62 viewers = int(viewers)
63 self.logger.debug("Viewers: {0}, isLive={1}", viewers, is_live)
64 domain, channel = channel.split("-", 1)
65 return is_live, server, domain, channel, token, ingest
66 else:
67 self.logger.error("Unhandled action format: {0}", action)
68
69 def _get_info(self, stream_name):
70 server = self.api_url()
71 self.logger.debug("Connecting to API: {0}", server)
72 ws = websocket.create_connection(server,
73 header=["User-Agent: {0}".format(useragents.CHROME)],
74 origin=self.origin,
75 class_=VLWebSocket,
76 session=self.session)
77 ws.send("MVN LOAD3 {0}".format(stream_name))
78 action, message = ws.recv()
79 return self.parse_ack(action, message)
80
81 def _get_rtmp_streams(self, server, domain, channel, token):
82 rtmp_server = self.rtmp_server_map.get(server, server)
83
84 url = "rtmp://{0}/live?{1}".format(rtmp_server, token)
85
86 yield "live", RTMPStream(self.session, params={
87 "rtmp": url,
88 "pageUrl": self.url,
89 "playpath": "{0}_{1}".format(self.name_remap.get(domain, "live"), channel),
90 "live": True
91 })
92
93 def _get_streams(self):
94 m = _url_re.match(self.url)
95 if m:
96 stream_name = "{0}-{1}".format(self.domain_map[(m.group("domain").lower())],
97 m.group("channel"))
98
99 is_live, server, domain, channel, token, ingest = self._get_info(stream_name)
100
101 if not is_live:
102 self.logger.info("Stream is currently off air")
103 else:
104 self.logger.info("Stream powered by VaughnSoft - remember to support them.")
105 for s in self._get_rtmp_streams(server, domain, channel, token):
106 yield s
107
108
109 __plugin__ = VaughnLive
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/vaughnlive.py b/src/streamlink/plugins/vaughnlive.py
--- a/src/streamlink/plugins/vaughnlive.py
+++ b/src/streamlink/plugins/vaughnlive.py
@@ -40,10 +40,10 @@
range(1, 6))]
origin = "https://vaughnlive.tv"
rtmp_server_map = {
- "594140c69edad": "192.240.105.171:1935",
- "585c4cab1bef1": "192.240.105.171:1935",
- "5940d648b3929": "192.240.105.171:1935",
- "5941854b39bc4": "192.240.105.171:1935"
+ "594140c69edad": "192.240.105.171:2935",
+ "585c4cab1bef1": "192.240.105.171:2935",
+ "5940d648b3929": "192.240.105.171:2935",
+ "5941854b39bc4": "192.240.105.171:2935"
}
name_remap = {"#vl": "live", "#btv": "btv", "#pt": "pt", "#igb": "instagib", "#vtv": "vtv"}
domain_map = {"vaughnlive": "#vl", "breakers": "#btv", "instagib": "#igb", "vapers": "#vtv", "pearltime": "#pt"}
| {"golden_diff": "diff --git a/src/streamlink/plugins/vaughnlive.py b/src/streamlink/plugins/vaughnlive.py\n--- a/src/streamlink/plugins/vaughnlive.py\n+++ b/src/streamlink/plugins/vaughnlive.py\n@@ -40,10 +40,10 @@\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n- \"594140c69edad\": \"192.240.105.171:1935\",\n- \"585c4cab1bef1\": \"192.240.105.171:1935\",\n- \"5940d648b3929\": \"192.240.105.171:1935\",\n- \"5941854b39bc4\": \"192.240.105.171:1935\"\n+ \"594140c69edad\": \"192.240.105.171:2935\",\n+ \"585c4cab1bef1\": \"192.240.105.171:2935\",\n+ \"5940d648b3929\": \"192.240.105.171:2935\",\n+ \"5941854b39bc4\": \"192.240.105.171:2935\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": \"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", \"pearltime\": \"#pt\"}\n", "issue": "Vaughnlive RTMP port changed from 1935 to 2935\nVery brief bug, very simple fix.\r\n\r\nrtmp_server_map for all requests uses 192.240.105.171:1935 and doesn't work. (No data returned from stream)\r\nrtmp_server_map change all requests to 192.240.105.171:2935 works for me.\r\n\n", "before_files": [{"content": "import itertools\nimport logging\nimport random\nimport re\nimport ssl\n\nimport websocket\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents\nfrom streamlink.stream import RTMPStream\n\n_url_re = re.compile(r\"\"\"\n http(s)?://(\\w+\\.)?\n (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv\n (/embed/video)?\n /(?P<channel>[^/&?]+)\n\"\"\", re.VERBOSE)\n\n\nclass VLWebSocket(websocket.WebSocket):\n def __init__(self, **_):\n self.session = _.pop(\"session\")\n self.logger = logging.getLogger(\"streamlink.plugins.vaughnlive.websocket\")\n sslopt = _.pop(\"sslopt\", {})\n sslopt[\"cert_reqs\"] = ssl.CERT_NONE\n super(VLWebSocket, self).__init__(sslopt=sslopt, **_)\n\n def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):\n self.logger.debug(\"Sending message: {0}\", payload)\n return super(VLWebSocket, self).send(payload + \"\\n\\x00\", opcode)\n\n def recv(self):\n d = super(VLWebSocket, self).recv().replace(\"\\n\", \"\").replace(\"\\x00\", \"\")\n return d.split(\" \", 1)\n\n\nclass VaughnLive(Plugin):\n servers = [\"wss://sapi-ws-{0}x{1:02}.vaughnlive.tv\".format(x, y) for x, y in itertools.product(range(1, 3),\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n \"594140c69edad\": \"192.240.105.171:1935\",\n \"585c4cab1bef1\": \"192.240.105.171:1935\",\n \"5940d648b3929\": \"192.240.105.171:1935\",\n \"5941854b39bc4\": \"192.240.105.171:1935\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": \"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", \"pearltime\": \"#pt\"}\n\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def api_url(self):\n return random.choice(self.servers)\n\n def parse_ack(self, action, message):\n if action.endswith(\"3\"):\n channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(\";\")\n is_live = is_live == \"1\"\n viewers = int(viewers)\n self.logger.debug(\"Viewers: {0}, isLive={1}\", viewers, is_live)\n domain, channel = channel.split(\"-\", 1)\n return is_live, server, domain, channel, token, ingest\n else:\n self.logger.error(\"Unhandled action format: {0}\", action)\n\n 
def _get_info(self, stream_name):\n server = self.api_url()\n self.logger.debug(\"Connecting to API: {0}\", server)\n ws = websocket.create_connection(server,\n header=[\"User-Agent: {0}\".format(useragents.CHROME)],\n origin=self.origin,\n class_=VLWebSocket,\n session=self.session)\n ws.send(\"MVN LOAD3 {0}\".format(stream_name))\n action, message = ws.recv()\n return self.parse_ack(action, message)\n\n def _get_rtmp_streams(self, server, domain, channel, token):\n rtmp_server = self.rtmp_server_map.get(server, server)\n\n url = \"rtmp://{0}/live?{1}\".format(rtmp_server, token)\n\n yield \"live\", RTMPStream(self.session, params={\n \"rtmp\": url,\n \"pageUrl\": self.url,\n \"playpath\": \"{0}_{1}\".format(self.name_remap.get(domain, \"live\"), channel),\n \"live\": True\n })\n\n def _get_streams(self):\n m = _url_re.match(self.url)\n if m:\n stream_name = \"{0}-{1}\".format(self.domain_map[(m.group(\"domain\").lower())],\n m.group(\"channel\"))\n\n is_live, server, domain, channel, token, ingest = self._get_info(stream_name)\n\n if not is_live:\n self.logger.info(\"Stream is currently off air\")\n else:\n self.logger.info(\"Stream powered by VaughnSoft - remember to support them.\")\n for s in self._get_rtmp_streams(server, domain, channel, token):\n yield s\n\n\n__plugin__ = VaughnLive\n", "path": "src/streamlink/plugins/vaughnlive.py"}], "after_files": [{"content": "import itertools\nimport logging\nimport random\nimport re\nimport ssl\n\nimport websocket\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents\nfrom streamlink.stream import RTMPStream\n\n_url_re = re.compile(r\"\"\"\n http(s)?://(\\w+\\.)?\n (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv\n (/embed/video)?\n /(?P<channel>[^/&?]+)\n\"\"\", re.VERBOSE)\n\n\nclass VLWebSocket(websocket.WebSocket):\n def __init__(self, **_):\n self.session = _.pop(\"session\")\n self.logger = logging.getLogger(\"streamlink.plugins.vaughnlive.websocket\")\n sslopt = _.pop(\"sslopt\", {})\n sslopt[\"cert_reqs\"] = ssl.CERT_NONE\n super(VLWebSocket, self).__init__(sslopt=sslopt, **_)\n\n def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):\n self.logger.debug(\"Sending message: {0}\", payload)\n return super(VLWebSocket, self).send(payload + \"\\n\\x00\", opcode)\n\n def recv(self):\n d = super(VLWebSocket, self).recv().replace(\"\\n\", \"\").replace(\"\\x00\", \"\")\n return d.split(\" \", 1)\n\n\nclass VaughnLive(Plugin):\n servers = [\"wss://sapi-ws-{0}x{1:02}.vaughnlive.tv\".format(x, y) for x, y in itertools.product(range(1, 3),\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n \"594140c69edad\": \"192.240.105.171:2935\",\n \"585c4cab1bef1\": \"192.240.105.171:2935\",\n \"5940d648b3929\": \"192.240.105.171:2935\",\n \"5941854b39bc4\": \"192.240.105.171:2935\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": \"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", \"pearltime\": \"#pt\"}\n\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def api_url(self):\n return random.choice(self.servers)\n\n def parse_ack(self, action, message):\n if action.endswith(\"3\"):\n channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(\";\")\n is_live = is_live == \"1\"\n viewers = int(viewers)\n self.logger.debug(\"Viewers: {0}, isLive={1}\", viewers, is_live)\n domain, channel = 
channel.split(\"-\", 1)\n return is_live, server, domain, channel, token, ingest\n else:\n self.logger.error(\"Unhandled action format: {0}\", action)\n\n def _get_info(self, stream_name):\n server = self.api_url()\n self.logger.debug(\"Connecting to API: {0}\", server)\n ws = websocket.create_connection(server,\n header=[\"User-Agent: {0}\".format(useragents.CHROME)],\n origin=self.origin,\n class_=VLWebSocket,\n session=self.session)\n ws.send(\"MVN LOAD3 {0}\".format(stream_name))\n action, message = ws.recv()\n return self.parse_ack(action, message)\n\n def _get_rtmp_streams(self, server, domain, channel, token):\n rtmp_server = self.rtmp_server_map.get(server, server)\n\n url = \"rtmp://{0}/live?{1}\".format(rtmp_server, token)\n\n yield \"live\", RTMPStream(self.session, params={\n \"rtmp\": url,\n \"pageUrl\": self.url,\n \"playpath\": \"{0}_{1}\".format(self.name_remap.get(domain, \"live\"), channel),\n \"live\": True\n })\n\n def _get_streams(self):\n m = _url_re.match(self.url)\n if m:\n stream_name = \"{0}-{1}\".format(self.domain_map[(m.group(\"domain\").lower())],\n m.group(\"channel\"))\n\n is_live, server, domain, channel, token, ingest = self._get_info(stream_name)\n\n if not is_live:\n self.logger.info(\"Stream is currently off air\")\n else:\n self.logger.info(\"Stream powered by VaughnSoft - remember to support them.\")\n for s in self._get_rtmp_streams(server, domain, channel, token):\n yield s\n\n\n__plugin__ = VaughnLive\n", "path": "src/streamlink/plugins/vaughnlive.py"}]} | 1,728 | 481 |
gh_patches_debug_12764 | rasdani/github-patches | git_diff | googleapis__python-bigquery-768 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expand range for 'google-{api-core,cloud-core,resumable-media}' to allow 2.x versions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 "grpcio >= 1.38.1, < 2.0dev", # https://github.com/googleapis/python-bigquery/issues/695
33 "google-api-core[grpc] >= 1.29.0, < 2.0.0dev",
34 "proto-plus >= 1.10.0",
35 "google-cloud-core >= 1.4.1, < 2.0dev",
36 "google-resumable-media >= 0.6.0, < 2.0dev",
37 "packaging >= 14.3",
38 "protobuf >= 3.12.0",
39 "requests >= 2.18.0, < 3.0.0dev",
40 ]
41 extras = {
42 "bqstorage": [
43 "google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev",
44 # Due to an issue in pip's dependency resolver, the `grpc` extra is not
45 # installed, even though `google-cloud-bigquery-storage` specifies it
46 # as `google-api-core[grpc]`. We thus need to explicitly specify it here.
47 # See: https://github.com/googleapis/python-bigquery/issues/83 The
48 # grpc.Channel.close() method isn't added until 1.32.0.
49 # https://github.com/grpc/grpc/pull/15254
50 "grpcio >= 1.38.1, < 2.0dev",
51 "pyarrow >= 1.0.0, < 5.0dev",
52 ],
53 "pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 5.0dev"],
54 "bignumeric_type": ["pyarrow >= 3.0.0, < 5.0dev"],
55 "tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
56 "opentelemetry": [
57 "opentelemetry-api >= 0.11b0",
58 "opentelemetry-sdk >= 0.11b0",
59 "opentelemetry-instrumentation >= 0.11b0",
60 ],
61 }
62
63 all_extras = []
64
65 for extra in extras:
66 # Exclude this extra from all to avoid overly strict dependencies on core
67 # libraries such as pyarrow.
68 # https://github.com/googleapis/python-bigquery/issues/563
69 if extra in {"bignumeric_type"}:
70 continue
71 all_extras.extend(extras[extra])
72
73 extras["all"] = all_extras
74
75 # Setup boilerplate below this line.
76
77 package_root = os.path.abspath(os.path.dirname(__file__))
78
79 readme_filename = os.path.join(package_root, "README.rst")
80 with io.open(readme_filename, encoding="utf-8") as readme_file:
81 readme = readme_file.read()
82
83 version = {}
84 with open(os.path.join(package_root, "google/cloud/bigquery/version.py")) as fp:
85 exec(fp.read(), version)
86 version = version["__version__"]
87
88 # Only include packages under the 'google' namespace. Do not include tests,
89 # benchmarks, etc.
90 packages = [
91 package
92 for package in setuptools.PEP420PackageFinder.find()
93 if package.startswith("google")
94 ]
95
96 # Determine which namespaces are needed.
97 namespaces = ["google"]
98 if "google.cloud" in packages:
99 namespaces.append("google.cloud")
100
101
102 setuptools.setup(
103 name=name,
104 version=version,
105 description=description,
106 long_description=readme,
107 author="Google LLC",
108 author_email="[email protected]",
109 license="Apache 2.0",
110 url="https://github.com/googleapis/python-bigquery",
111 classifiers=[
112 release_status,
113 "Intended Audience :: Developers",
114 "License :: OSI Approved :: Apache Software License",
115 "Programming Language :: Python",
116 "Programming Language :: Python :: 3",
117 "Programming Language :: Python :: 3.6",
118 "Programming Language :: Python :: 3.7",
119 "Programming Language :: Python :: 3.8",
120 "Programming Language :: Python :: 3.9",
121 "Operating System :: OS Independent",
122 "Topic :: Internet",
123 ],
124 platforms="Posix; MacOS X; Windows",
125 packages=packages,
126 namespace_packages=namespaces,
127 install_requires=dependencies,
128 extras_require=extras,
129 python_requires=">=3.6, <3.10",
130 include_package_data=True,
131 zip_safe=False,
132 )
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,10 +30,10 @@
release_status = "Development Status :: 5 - Production/Stable"
dependencies = [
"grpcio >= 1.38.1, < 2.0dev", # https://github.com/googleapis/python-bigquery/issues/695
- "google-api-core[grpc] >= 1.29.0, < 2.0.0dev",
+ "google-api-core[grpc] >= 1.29.0, < 3.0.0dev",
"proto-plus >= 1.10.0",
- "google-cloud-core >= 1.4.1, < 2.0dev",
- "google-resumable-media >= 0.6.0, < 2.0dev",
+ "google-cloud-core >= 1.4.1, < 3.0dev",
+ "google-resumable-media >= 0.6.0, < 3.0dev",
"packaging >= 14.3",
"protobuf >= 3.12.0",
"requests >= 2.18.0, < 3.0.0dev",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,10 +30,10 @@\n release_status = \"Development Status :: 5 - Production/Stable\"\n dependencies = [\n \"grpcio >= 1.38.1, < 2.0dev\", # https://github.com/googleapis/python-bigquery/issues/695\n- \"google-api-core[grpc] >= 1.29.0, < 2.0.0dev\",\n+ \"google-api-core[grpc] >= 1.29.0, < 3.0.0dev\",\n \"proto-plus >= 1.10.0\",\n- \"google-cloud-core >= 1.4.1, < 2.0dev\",\n- \"google-resumable-media >= 0.6.0, < 2.0dev\",\n+ \"google-cloud-core >= 1.4.1, < 3.0dev\",\n+ \"google-resumable-media >= 0.6.0, < 3.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n", "issue": "Expand range for 'google-{api-core,cloud-core,resumable-media}' to allow 2.x versions\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"grpcio >= 1.38.1, < 2.0dev\", # https://github.com/googleapis/python-bigquery/issues/695\n \"google-api-core[grpc] >= 1.29.0, < 2.0.0dev\",\n \"proto-plus >= 1.10.0\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n \"google-resumable-media >= 0.6.0, < 2.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. 
We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.38.1, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 5.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 5.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 5.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 0.11b0\",\n \"opentelemetry-sdk >= 0.11b0\",\n \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development 
Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"grpcio >= 1.38.1, < 2.0dev\", # https://github.com/googleapis/python-bigquery/issues/695\n \"google-api-core[grpc] >= 1.29.0, < 3.0.0dev\",\n \"proto-plus >= 1.10.0\",\n \"google-cloud-core >= 1.4.1, < 3.0dev\",\n \"google-resumable-media >= 0.6.0, < 3.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.38.1, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 5.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 5.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 5.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 0.11b0\",\n \"opentelemetry-sdk >= 0.11b0\",\n \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 1,778 | 287 |
gh_patches_debug_33175 | rasdani/github-patches | git_diff | getsentry__sentry-python-418 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tracing Popen monkey patch breaks environment
In `0.10.0` the new tracing functionality wipes the environment for subprocesses.
As calls to `_get_argument` modify input (surely an anti-pattern) the subprocess call becomes `POpen(..., env={sentry trace values only})` which wipes the environment from the parent process.
https://github.com/getsentry/sentry-python/blob/ca0ba7f6c417d9ce7ee157149ddddce5add893a9/sentry_sdk/integrations/stdlib.py#L143
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/stdlib.py`
Content:
```
1 import os
2 import subprocess
3 import sys
4 import platform
5
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.integrations import Integration
8 from sentry_sdk.scope import add_global_event_processor
9 from sentry_sdk.tracing import EnvironHeaders, record_http_request
10
11 try:
12 from httplib import HTTPConnection # type: ignore
13 except ImportError:
14 from http.client import HTTPConnection
15
16 _RUNTIME_CONTEXT = {
17 "name": platform.python_implementation(),
18 "version": "%s.%s.%s" % (sys.version_info[:3]),
19 "build": sys.version,
20 }
21
22
23 class StdlibIntegration(Integration):
24 identifier = "stdlib"
25
26 @staticmethod
27 def setup_once():
28 # type: () -> None
29 _install_httplib()
30 _install_subprocess()
31
32 @add_global_event_processor
33 def add_python_runtime_context(event, hint):
34 if Hub.current.get_integration(StdlibIntegration) is not None:
35 contexts = event.setdefault("contexts", {})
36 if isinstance(contexts, dict) and "runtime" not in contexts:
37 contexts["runtime"] = _RUNTIME_CONTEXT
38
39 return event
40
41
42 def _install_httplib():
43 # type: () -> None
44 real_putrequest = HTTPConnection.putrequest
45 real_getresponse = HTTPConnection.getresponse
46
47 def putrequest(self, method, url, *args, **kwargs):
48 hub = Hub.current
49 if hub.get_integration(StdlibIntegration) is None:
50 return real_putrequest(self, method, url, *args, **kwargs)
51
52 host = self.host
53 port = self.port
54 default_port = self.default_port
55
56 real_url = url
57 if not real_url.startswith(("http://", "https://")):
58 real_url = "%s://%s%s%s" % (
59 default_port == 443 and "https" or "http",
60 host,
61 port != default_port and ":%s" % port or "",
62 url,
63 )
64
65 recorder = record_http_request(hub, real_url, method)
66 data_dict = recorder.__enter__()
67
68 try:
69 rv = real_putrequest(self, method, url, *args, **kwargs)
70
71 for key, value in hub.iter_trace_propagation_headers():
72 self.putheader(key, value)
73 except Exception:
74 recorder.__exit__(*sys.exc_info())
75 raise
76
77 self._sentrysdk_recorder = recorder
78 self._sentrysdk_data_dict = data_dict
79
80 return rv
81
82 def getresponse(self, *args, **kwargs):
83 recorder = getattr(self, "_sentrysdk_recorder", None)
84
85 if recorder is None:
86 return real_getresponse(self, *args, **kwargs)
87
88 data_dict = getattr(self, "_sentrysdk_data_dict", None)
89
90 try:
91 rv = real_getresponse(self, *args, **kwargs)
92
93 if data_dict is not None:
94 data_dict["httplib_response"] = rv
95 data_dict["status_code"] = rv.status
96 data_dict["reason"] = rv.reason
97 except TypeError:
98 # python-requests provokes a typeerror to discover py3 vs py2 differences
99 #
100 # > TypeError("getresponse() got an unexpected keyword argument 'buffering'")
101 raise
102 except Exception:
103 recorder.__exit__(*sys.exc_info())
104 raise
105 else:
106 recorder.__exit__(None, None, None)
107
108 return rv
109
110 HTTPConnection.putrequest = putrequest
111 HTTPConnection.getresponse = getresponse
112
113
114 def _get_argument(args, kwargs, name, position, setdefault=None):
115 if name in kwargs:
116 rv = kwargs[name]
117 if rv is None and setdefault is not None:
118 rv = kwargs[name] = setdefault
119 elif position < len(args):
120 rv = args[position]
121 if rv is None and setdefault is not None:
122 rv = args[position] = setdefault
123 else:
124 rv = kwargs[name] = setdefault
125
126 return rv
127
128
129 def _install_subprocess():
130 old_popen_init = subprocess.Popen.__init__
131
132 def sentry_patched_popen_init(self, *a, **kw):
133 hub = Hub.current
134 if hub.get_integration(StdlibIntegration) is None:
135 return old_popen_init(self, *a, **kw)
136
137 # do not setdefault! args is required by Popen, doing setdefault would
138 # make invalid calls valid
139 args = _get_argument(a, kw, "args", 0) or []
140 cwd = _get_argument(a, kw, "cwd", 10)
141
142 for k, v in hub.iter_trace_propagation_headers():
143 env = _get_argument(a, kw, "env", 11, {})
144 env["SUBPROCESS_" + k.upper().replace("-", "_")] = v
145
146 with hub.span(op="subprocess", description=" ".join(map(str, args))) as span:
147 span.set_tag("subprocess.cwd", cwd)
148
149 return old_popen_init(self, *a, **kw)
150
151 subprocess.Popen.__init__ = sentry_patched_popen_init # type: ignore
152
153
154 def get_subprocess_traceparent_headers():
155 return EnvironHeaders(os.environ, prefix="SUBPROCESS_")
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/stdlib.py b/sentry_sdk/integrations/stdlib.py
--- a/sentry_sdk/integrations/stdlib.py
+++ b/sentry_sdk/integrations/stdlib.py
@@ -111,17 +111,28 @@
HTTPConnection.getresponse = getresponse
-def _get_argument(args, kwargs, name, position, setdefault=None):
+def _init_argument(args, kwargs, name, position, setdefault_callback=None):
+ """
+ given (*args, **kwargs) of a function call, retrieve (and optionally set a
+ default for) an argument by either name or position.
+
+ This is useful for wrapping functions with complex type signatures and
+ extracting a few arguments without needing to redefine that function's
+ entire type signature.
+ """
+
if name in kwargs:
rv = kwargs[name]
- if rv is None and setdefault is not None:
- rv = kwargs[name] = setdefault
+ if rv is None and setdefault_callback is not None:
+ rv = kwargs[name] = setdefault_callback()
elif position < len(args):
rv = args[position]
- if rv is None and setdefault is not None:
- rv = args[position] = setdefault
+ if rv is None and setdefault_callback is not None:
+ rv = args[position] = setdefault_callback()
else:
- rv = kwargs[name] = setdefault
+ rv = setdefault_callback and setdefault_callback()
+ if rv is not None:
+ kwargs[name] = rv
return rv
@@ -136,11 +147,14 @@
# do not setdefault! args is required by Popen, doing setdefault would
# make invalid calls valid
- args = _get_argument(a, kw, "args", 0) or []
- cwd = _get_argument(a, kw, "cwd", 10)
+ args = _init_argument(a, kw, "args", 0) or []
+ cwd = _init_argument(a, kw, "cwd", 10)
+
+ env = None
for k, v in hub.iter_trace_propagation_headers():
- env = _get_argument(a, kw, "env", 11, {})
+ if env is None:
+ env = _init_argument(a, kw, "env", 11, lambda: dict(os.environ))
env["SUBPROCESS_" + k.upper().replace("-", "_")] = v
with hub.span(op="subprocess", description=" ".join(map(str, args))) as span:
| {"golden_diff": "diff --git a/sentry_sdk/integrations/stdlib.py b/sentry_sdk/integrations/stdlib.py\n--- a/sentry_sdk/integrations/stdlib.py\n+++ b/sentry_sdk/integrations/stdlib.py\n@@ -111,17 +111,28 @@\n HTTPConnection.getresponse = getresponse\n \n \n-def _get_argument(args, kwargs, name, position, setdefault=None):\n+def _init_argument(args, kwargs, name, position, setdefault_callback=None):\n+ \"\"\"\n+ given (*args, **kwargs) of a function call, retrieve (and optionally set a\n+ default for) an argument by either name or position.\n+\n+ This is useful for wrapping functions with complex type signatures and\n+ extracting a few arguments without needing to redefine that function's\n+ entire type signature.\n+ \"\"\"\n+\n if name in kwargs:\n rv = kwargs[name]\n- if rv is None and setdefault is not None:\n- rv = kwargs[name] = setdefault\n+ if rv is None and setdefault_callback is not None:\n+ rv = kwargs[name] = setdefault_callback()\n elif position < len(args):\n rv = args[position]\n- if rv is None and setdefault is not None:\n- rv = args[position] = setdefault\n+ if rv is None and setdefault_callback is not None:\n+ rv = args[position] = setdefault_callback()\n else:\n- rv = kwargs[name] = setdefault\n+ rv = setdefault_callback and setdefault_callback()\n+ if rv is not None:\n+ kwargs[name] = rv\n \n return rv\n \n@@ -136,11 +147,14 @@\n \n # do not setdefault! args is required by Popen, doing setdefault would\n # make invalid calls valid\n- args = _get_argument(a, kw, \"args\", 0) or []\n- cwd = _get_argument(a, kw, \"cwd\", 10)\n+ args = _init_argument(a, kw, \"args\", 0) or []\n+ cwd = _init_argument(a, kw, \"cwd\", 10)\n+\n+ env = None\n \n for k, v in hub.iter_trace_propagation_headers():\n- env = _get_argument(a, kw, \"env\", 11, {})\n+ if env is None:\n+ env = _init_argument(a, kw, \"env\", 11, lambda: dict(os.environ))\n env[\"SUBPROCESS_\" + k.upper().replace(\"-\", \"_\")] = v\n \n with hub.span(op=\"subprocess\", description=\" \".join(map(str, args))) as span:\n", "issue": "Tracing Popen monkey patch breaks environment\nIn `0.10.0` the new tracing functionality wipes the environment for subprocesses.\r\n\r\nAs calls to `_get_argument` modify input (surely an anti-pattern) the subprocess call becomes `POpen(..., env={sentry trace values only})` which wipes the environment from the parent process.\r\n\r\nhttps://github.com/getsentry/sentry-python/blob/ca0ba7f6c417d9ce7ee157149ddddce5add893a9/sentry_sdk/integrations/stdlib.py#L143\r\n\r\n\n", "before_files": [{"content": "import os\nimport subprocess\nimport sys\nimport platform\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.tracing import EnvironHeaders, record_http_request\n\ntry:\n from httplib import HTTPConnection # type: ignore\nexcept ImportError:\n from http.client import HTTPConnection\n\n_RUNTIME_CONTEXT = {\n \"name\": platform.python_implementation(),\n \"version\": \"%s.%s.%s\" % (sys.version_info[:3]),\n \"build\": sys.version,\n}\n\n\nclass StdlibIntegration(Integration):\n identifier = \"stdlib\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n _install_httplib()\n _install_subprocess()\n\n @add_global_event_processor\n def add_python_runtime_context(event, hint):\n if Hub.current.get_integration(StdlibIntegration) is not None:\n contexts = event.setdefault(\"contexts\", {})\n if isinstance(contexts, dict) and \"runtime\" not in contexts:\n contexts[\"runtime\"] = _RUNTIME_CONTEXT\n\n 
return event\n\n\ndef _install_httplib():\n # type: () -> None\n real_putrequest = HTTPConnection.putrequest\n real_getresponse = HTTPConnection.getresponse\n\n def putrequest(self, method, url, *args, **kwargs):\n hub = Hub.current\n if hub.get_integration(StdlibIntegration) is None:\n return real_putrequest(self, method, url, *args, **kwargs)\n\n host = self.host\n port = self.port\n default_port = self.default_port\n\n real_url = url\n if not real_url.startswith((\"http://\", \"https://\")):\n real_url = \"%s://%s%s%s\" % (\n default_port == 443 and \"https\" or \"http\",\n host,\n port != default_port and \":%s\" % port or \"\",\n url,\n )\n\n recorder = record_http_request(hub, real_url, method)\n data_dict = recorder.__enter__()\n\n try:\n rv = real_putrequest(self, method, url, *args, **kwargs)\n\n for key, value in hub.iter_trace_propagation_headers():\n self.putheader(key, value)\n except Exception:\n recorder.__exit__(*sys.exc_info())\n raise\n\n self._sentrysdk_recorder = recorder\n self._sentrysdk_data_dict = data_dict\n\n return rv\n\n def getresponse(self, *args, **kwargs):\n recorder = getattr(self, \"_sentrysdk_recorder\", None)\n\n if recorder is None:\n return real_getresponse(self, *args, **kwargs)\n\n data_dict = getattr(self, \"_sentrysdk_data_dict\", None)\n\n try:\n rv = real_getresponse(self, *args, **kwargs)\n\n if data_dict is not None:\n data_dict[\"httplib_response\"] = rv\n data_dict[\"status_code\"] = rv.status\n data_dict[\"reason\"] = rv.reason\n except TypeError:\n # python-requests provokes a typeerror to discover py3 vs py2 differences\n #\n # > TypeError(\"getresponse() got an unexpected keyword argument 'buffering'\")\n raise\n except Exception:\n recorder.__exit__(*sys.exc_info())\n raise\n else:\n recorder.__exit__(None, None, None)\n\n return rv\n\n HTTPConnection.putrequest = putrequest\n HTTPConnection.getresponse = getresponse\n\n\ndef _get_argument(args, kwargs, name, position, setdefault=None):\n if name in kwargs:\n rv = kwargs[name]\n if rv is None and setdefault is not None:\n rv = kwargs[name] = setdefault\n elif position < len(args):\n rv = args[position]\n if rv is None and setdefault is not None:\n rv = args[position] = setdefault\n else:\n rv = kwargs[name] = setdefault\n\n return rv\n\n\ndef _install_subprocess():\n old_popen_init = subprocess.Popen.__init__\n\n def sentry_patched_popen_init(self, *a, **kw):\n hub = Hub.current\n if hub.get_integration(StdlibIntegration) is None:\n return old_popen_init(self, *a, **kw)\n\n # do not setdefault! 
args is required by Popen, doing setdefault would\n # make invalid calls valid\n args = _get_argument(a, kw, \"args\", 0) or []\n cwd = _get_argument(a, kw, \"cwd\", 10)\n\n for k, v in hub.iter_trace_propagation_headers():\n env = _get_argument(a, kw, \"env\", 11, {})\n env[\"SUBPROCESS_\" + k.upper().replace(\"-\", \"_\")] = v\n\n with hub.span(op=\"subprocess\", description=\" \".join(map(str, args))) as span:\n span.set_tag(\"subprocess.cwd\", cwd)\n\n return old_popen_init(self, *a, **kw)\n\n subprocess.Popen.__init__ = sentry_patched_popen_init # type: ignore\n\n\ndef get_subprocess_traceparent_headers():\n return EnvironHeaders(os.environ, prefix=\"SUBPROCESS_\")\n", "path": "sentry_sdk/integrations/stdlib.py"}], "after_files": [{"content": "import os\nimport subprocess\nimport sys\nimport platform\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.tracing import EnvironHeaders, record_http_request\n\ntry:\n from httplib import HTTPConnection # type: ignore\nexcept ImportError:\n from http.client import HTTPConnection\n\n_RUNTIME_CONTEXT = {\n \"name\": platform.python_implementation(),\n \"version\": \"%s.%s.%s\" % (sys.version_info[:3]),\n \"build\": sys.version,\n}\n\n\nclass StdlibIntegration(Integration):\n identifier = \"stdlib\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n _install_httplib()\n _install_subprocess()\n\n @add_global_event_processor\n def add_python_runtime_context(event, hint):\n if Hub.current.get_integration(StdlibIntegration) is not None:\n contexts = event.setdefault(\"contexts\", {})\n if isinstance(contexts, dict) and \"runtime\" not in contexts:\n contexts[\"runtime\"] = _RUNTIME_CONTEXT\n\n return event\n\n\ndef _install_httplib():\n # type: () -> None\n real_putrequest = HTTPConnection.putrequest\n real_getresponse = HTTPConnection.getresponse\n\n def putrequest(self, method, url, *args, **kwargs):\n hub = Hub.current\n if hub.get_integration(StdlibIntegration) is None:\n return real_putrequest(self, method, url, *args, **kwargs)\n\n host = self.host\n port = self.port\n default_port = self.default_port\n\n real_url = url\n if not real_url.startswith((\"http://\", \"https://\")):\n real_url = \"%s://%s%s%s\" % (\n default_port == 443 and \"https\" or \"http\",\n host,\n port != default_port and \":%s\" % port or \"\",\n url,\n )\n\n recorder = record_http_request(hub, real_url, method)\n data_dict = recorder.__enter__()\n\n try:\n rv = real_putrequest(self, method, url, *args, **kwargs)\n\n for key, value in hub.iter_trace_propagation_headers():\n self.putheader(key, value)\n except Exception:\n recorder.__exit__(*sys.exc_info())\n raise\n\n self._sentrysdk_recorder = recorder\n self._sentrysdk_data_dict = data_dict\n\n return rv\n\n def getresponse(self, *args, **kwargs):\n recorder = getattr(self, \"_sentrysdk_recorder\", None)\n\n if recorder is None:\n return real_getresponse(self, *args, **kwargs)\n\n data_dict = getattr(self, \"_sentrysdk_data_dict\", None)\n\n try:\n rv = real_getresponse(self, *args, **kwargs)\n\n if data_dict is not None:\n data_dict[\"httplib_response\"] = rv\n data_dict[\"status_code\"] = rv.status\n data_dict[\"reason\"] = rv.reason\n except TypeError:\n # python-requests provokes a typeerror to discover py3 vs py2 differences\n #\n # > TypeError(\"getresponse() got an unexpected keyword argument 'buffering'\")\n raise\n except Exception:\n recorder.__exit__(*sys.exc_info())\n raise\n else:\n 
recorder.__exit__(None, None, None)\n\n return rv\n\n HTTPConnection.putrequest = putrequest\n HTTPConnection.getresponse = getresponse\n\n\ndef _init_argument(args, kwargs, name, position, setdefault_callback=None):\n \"\"\"\n given (*args, **kwargs) of a function call, retrieve (and optionally set a\n default for) an argument by either name or position.\n\n This is useful for wrapping functions with complex type signatures and\n extracting a few arguments without needing to redefine that function's\n entire type signature.\n \"\"\"\n\n if name in kwargs:\n rv = kwargs[name]\n if rv is None and setdefault_callback is not None:\n rv = kwargs[name] = setdefault_callback()\n elif position < len(args):\n rv = args[position]\n if rv is None and setdefault_callback is not None:\n rv = args[position] = setdefault_callback()\n else:\n rv = setdefault_callback and setdefault_callback()\n if rv is not None:\n kwargs[name] = rv\n\n return rv\n\n\ndef _install_subprocess():\n old_popen_init = subprocess.Popen.__init__\n\n def sentry_patched_popen_init(self, *a, **kw):\n hub = Hub.current\n if hub.get_integration(StdlibIntegration) is None:\n return old_popen_init(self, *a, **kw)\n\n # do not setdefault! args is required by Popen, doing setdefault would\n # make invalid calls valid\n args = _init_argument(a, kw, \"args\", 0) or []\n cwd = _init_argument(a, kw, \"cwd\", 10)\n\n env = None\n\n for k, v in hub.iter_trace_propagation_headers():\n if env is None:\n env = _init_argument(a, kw, \"env\", 11, lambda: dict(os.environ))\n env[\"SUBPROCESS_\" + k.upper().replace(\"-\", \"_\")] = v\n\n with hub.span(op=\"subprocess\", description=\" \".join(map(str, args))) as span:\n span.set_tag(\"subprocess.cwd\", cwd)\n\n return old_popen_init(self, *a, **kw)\n\n subprocess.Popen.__init__ = sentry_patched_popen_init # type: ignore\n\n\ndef get_subprocess_traceparent_headers():\n return EnvironHeaders(os.environ, prefix=\"SUBPROCESS_\")\n", "path": "sentry_sdk/integrations/stdlib.py"}]} | 1,897 | 580 |
gh_patches_debug_33942 | rasdani/github-patches | git_diff | TheAlgorithms__Python-10633 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve our test coverage
### Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.
### How to find low-coverage files
Go to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:
```
---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------------------------------
quantum/q_fourier_transform.py 30 30 0% 14-93
scripts/validate_solutions.py 54 54 0% 2-94
strings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129
...
```
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.
Some files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.
_**When you open your PR, put "Contributes to #9943" in the PR description.**_ Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.
### How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:
```py
def add(a: int, b: int) -> int:
"""
Adds two non-negative numbers.
>>> add(1, 1)
2
>>> add(2, 5)
7
>>> add(1, 0)
1
>>> add(-1, -1)
Traceback (most recent last):
...
ValueError: Numbers must be non-negative
"""
```
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.
_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backtracking/all_combinations.py`
Content:
```
1 """
2 In this problem, we want to determine all possible combinations of k
3 numbers out of 1 ... n. We use backtracking to solve this problem.
4 Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!)))
5 """
6 from __future__ import annotations
7
8
9 def generate_all_combinations(n: int, k: int) -> list[list[int]]:
10 """
11 >>> generate_all_combinations(n=4, k=2)
12 [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
13 """
14
15 result: list[list[int]] = []
16 create_all_state(1, n, k, [], result)
17 return result
18
19
20 def create_all_state(
21 increment: int,
22 total_number: int,
23 level: int,
24 current_list: list[int],
25 total_list: list[list[int]],
26 ) -> None:
27 if level == 0:
28 total_list.append(current_list[:])
29 return
30
31 for i in range(increment, total_number - level + 2):
32 current_list.append(i)
33 create_all_state(i + 1, total_number, level - 1, current_list, total_list)
34 current_list.pop()
35
36
37 def print_all_state(total_list: list[list[int]]) -> None:
38 for i in total_list:
39 print(*i)
40
41
42 if __name__ == "__main__":
43 n = 4
44 k = 2
45 total_list = generate_all_combinations(n, k)
46 print_all_state(total_list)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backtracking/all_combinations.py b/backtracking/all_combinations.py
--- a/backtracking/all_combinations.py
+++ b/backtracking/all_combinations.py
@@ -1,15 +1,40 @@
"""
In this problem, we want to determine all possible combinations of k
numbers out of 1 ... n. We use backtracking to solve this problem.
- Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!)))
+
+ Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!))),
"""
from __future__ import annotations
+from itertools import combinations
+
+
+def combination_lists(n: int, k: int) -> list[list[int]]:
+ """
+ >>> combination_lists(n=4, k=2)
+ [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
+ """
+ return [list(x) for x in combinations(range(1, n + 1), k)]
+
def generate_all_combinations(n: int, k: int) -> list[list[int]]:
"""
>>> generate_all_combinations(n=4, k=2)
[[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
+ >>> generate_all_combinations(n=0, k=0)
+ [[]]
+ >>> generate_all_combinations(n=10, k=-1)
+ Traceback (most recent call last):
+ ...
+ RecursionError: maximum recursion depth exceeded
+ >>> generate_all_combinations(n=-1, k=10)
+ []
+ >>> generate_all_combinations(n=5, k=4)
+ [[1, 2, 3, 4], [1, 2, 3, 5], [1, 2, 4, 5], [1, 3, 4, 5], [2, 3, 4, 5]]
+ >>> from itertools import combinations
+ >>> all(generate_all_combinations(n, k) == combination_lists(n, k)
+ ... for n in range(1, 6) for k in range(1, 6))
+ True
"""
result: list[list[int]] = []
@@ -34,13 +59,17 @@
current_list.pop()
-def print_all_state(total_list: list[list[int]]) -> None:
- for i in total_list:
- print(*i)
+if __name__ == "__main__":
+ from doctest import testmod
+ testmod()
+ print(generate_all_combinations(n=4, k=2))
+ tests = ((n, k) for n in range(1, 5) for k in range(1, 5))
+ for n, k in tests:
+ print(n, k, generate_all_combinations(n, k) == combination_lists(n, k))
-if __name__ == "__main__":
- n = 4
- k = 2
- total_list = generate_all_combinations(n, k)
- print_all_state(total_list)
+ print("Benchmark:")
+ from timeit import timeit
+
+ for func in ("combination_lists", "generate_all_combinations"):
+ print(f"{func:>25}(): {timeit(f'{func}(n=4, k = 2)', globals=globals())}")
| {"golden_diff": "diff --git a/backtracking/all_combinations.py b/backtracking/all_combinations.py\n--- a/backtracking/all_combinations.py\n+++ b/backtracking/all_combinations.py\n@@ -1,15 +1,40 @@\n \"\"\"\n In this problem, we want to determine all possible combinations of k\n numbers out of 1 ... n. We use backtracking to solve this problem.\n- Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!)))\n+\n+ Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!))),\n \"\"\"\n from __future__ import annotations\n \n+from itertools import combinations\n+\n+\n+def combination_lists(n: int, k: int) -> list[list[int]]:\n+ \"\"\"\n+ >>> combination_lists(n=4, k=2)\n+ [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]\n+ \"\"\"\n+ return [list(x) for x in combinations(range(1, n + 1), k)]\n+\n \n def generate_all_combinations(n: int, k: int) -> list[list[int]]:\n \"\"\"\n >>> generate_all_combinations(n=4, k=2)\n [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]\n+ >>> generate_all_combinations(n=0, k=0)\n+ [[]]\n+ >>> generate_all_combinations(n=10, k=-1)\n+ Traceback (most recent call last):\n+ ...\n+ RecursionError: maximum recursion depth exceeded\n+ >>> generate_all_combinations(n=-1, k=10)\n+ []\n+ >>> generate_all_combinations(n=5, k=4)\n+ [[1, 2, 3, 4], [1, 2, 3, 5], [1, 2, 4, 5], [1, 3, 4, 5], [2, 3, 4, 5]]\n+ >>> from itertools import combinations\n+ >>> all(generate_all_combinations(n, k) == combination_lists(n, k)\n+ ... for n in range(1, 6) for k in range(1, 6))\n+ True\n \"\"\"\n \n result: list[list[int]] = []\n@@ -34,13 +59,17 @@\n current_list.pop()\n \n \n-def print_all_state(total_list: list[list[int]]) -> None:\n- for i in total_list:\n- print(*i)\n+if __name__ == \"__main__\":\n+ from doctest import testmod\n \n+ testmod()\n+ print(generate_all_combinations(n=4, k=2))\n+ tests = ((n, k) for n in range(1, 5) for k in range(1, 5))\n+ for n, k in tests:\n+ print(n, k, generate_all_combinations(n, k) == combination_lists(n, k))\n \n-if __name__ == \"__main__\":\n- n = 4\n- k = 2\n- total_list = generate_all_combinations(n, k)\n- print_all_state(total_list)\n+ print(\"Benchmark:\")\n+ from timeit import timeit\n+\n+ for func in (\"combination_lists\", \"generate_all_combinations\"):\n+ print(f\"{func:>25}(): {timeit(f'{func}(n=4, k = 2)', globals=globals())}\")\n", "issue": "Improve our test coverage\n### Feature description\r\n\r\nMany of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.\r\n\r\n### How to find low-coverage files\r\n\r\nGo to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under \"Run Tests\" and scroll down until you find the section on code coverage:\r\n```\r\n---------- coverage: platform linux, python 3.12.0-final-0 -----------\r\nName Stmts Miss Cover Missing\r\n-----------------------------------------------------------------------------------------------------------\r\nquantum/q_fourier_transform.py 30 30 0% 14-93\r\nscripts/validate_solutions.py 54 54 0% 2-94\r\nstrings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129\r\n...\r\n```\r\nThe \"Cover\" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. 
Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.\r\n\r\nSome files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.\r\n\r\n_**When you open your PR, put \"Contributes to #9943\" in the PR description.**_ Do not use the word \"fixes\", \"resolves\", or \"closes\". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.\r\n\r\n### How to add doctests\r\n\r\nA doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:\r\n```py\r\ndef add(a: int, b: int) -> int:\r\n \"\"\"\r\n Adds two non-negative numbers.\r\n >>> add(1, 1)\r\n 2\r\n >>> add(2, 5)\r\n 7\r\n >>> add(1, 0)\r\n 1\r\n >>> add(-1, -1)\r\n Traceback (most recent last):\r\n ...\r\n ValueError: Numbers must be non-negative\r\n \"\"\"\r\n```\r\nFor every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).\r\n\r\nDo not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.\r\n\r\n_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_\n", "before_files": [{"content": "\"\"\"\n In this problem, we want to determine all possible combinations of k\n numbers out of 1 ... n. We use backtracking to solve this problem.\n Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! 
* (n - k)!)))\n\"\"\"\nfrom __future__ import annotations\n\n\ndef generate_all_combinations(n: int, k: int) -> list[list[int]]:\n \"\"\"\n >>> generate_all_combinations(n=4, k=2)\n [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]\n \"\"\"\n\n result: list[list[int]] = []\n create_all_state(1, n, k, [], result)\n return result\n\n\ndef create_all_state(\n increment: int,\n total_number: int,\n level: int,\n current_list: list[int],\n total_list: list[list[int]],\n) -> None:\n if level == 0:\n total_list.append(current_list[:])\n return\n\n for i in range(increment, total_number - level + 2):\n current_list.append(i)\n create_all_state(i + 1, total_number, level - 1, current_list, total_list)\n current_list.pop()\n\n\ndef print_all_state(total_list: list[list[int]]) -> None:\n for i in total_list:\n print(*i)\n\n\nif __name__ == \"__main__\":\n n = 4\n k = 2\n total_list = generate_all_combinations(n, k)\n print_all_state(total_list)\n", "path": "backtracking/all_combinations.py"}], "after_files": [{"content": "\"\"\"\n In this problem, we want to determine all possible combinations of k\n numbers out of 1 ... n. We use backtracking to solve this problem.\n\n Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!))),\n\"\"\"\nfrom __future__ import annotations\n\nfrom itertools import combinations\n\n\ndef combination_lists(n: int, k: int) -> list[list[int]]:\n \"\"\"\n >>> combination_lists(n=4, k=2)\n [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]\n \"\"\"\n return [list(x) for x in combinations(range(1, n + 1), k)]\n\n\ndef generate_all_combinations(n: int, k: int) -> list[list[int]]:\n \"\"\"\n >>> generate_all_combinations(n=4, k=2)\n [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]\n >>> generate_all_combinations(n=0, k=0)\n [[]]\n >>> generate_all_combinations(n=10, k=-1)\n Traceback (most recent call last):\n ...\n RecursionError: maximum recursion depth exceeded\n >>> generate_all_combinations(n=-1, k=10)\n []\n >>> generate_all_combinations(n=5, k=4)\n [[1, 2, 3, 4], [1, 2, 3, 5], [1, 2, 4, 5], [1, 3, 4, 5], [2, 3, 4, 5]]\n >>> from itertools import combinations\n >>> all(generate_all_combinations(n, k) == combination_lists(n, k)\n ... for n in range(1, 6) for k in range(1, 6))\n True\n \"\"\"\n\n result: list[list[int]] = []\n create_all_state(1, n, k, [], result)\n return result\n\n\ndef create_all_state(\n increment: int,\n total_number: int,\n level: int,\n current_list: list[int],\n total_list: list[list[int]],\n) -> None:\n if level == 0:\n total_list.append(current_list[:])\n return\n\n for i in range(increment, total_number - level + 2):\n current_list.append(i)\n create_all_state(i + 1, total_number, level - 1, current_list, total_list)\n current_list.pop()\n\n\nif __name__ == \"__main__\":\n from doctest import testmod\n\n testmod()\n print(generate_all_combinations(n=4, k=2))\n tests = ((n, k) for n in range(1, 5) for k in range(1, 5))\n for n, k in tests:\n print(n, k, generate_all_combinations(n, k) == combination_lists(n, k))\n\n print(\"Benchmark:\")\n from timeit import timeit\n\n for func in (\"combination_lists\", \"generate_all_combinations\"):\n print(f\"{func:>25}(): {timeit(f'{func}(n=4, k = 2)', globals=globals())}\")\n", "path": "backtracking/all_combinations.py"}]} | 1,542 | 809 |
gh_patches_debug_3134 | rasdani/github-patches | git_diff | DataDog__dd-agent-1776 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Couchbase service check do not send tags on OK status
Missing a `tags=service_check_tags` on line:
https://github.com/DataDog/dd-agent/blob/master/checks.d/couchbase.py#L104
Pretty small fix.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checks.d/couchbase.py`
Content:
```
1 # stdlib
2 import re
3
4 # 3rd party
5 import requests
6
7 # project
8 from checks import AgentCheck
9 from util import headers
10
11 # Constants
12 COUCHBASE_STATS_PATH = '/pools/default'
13 DEFAULT_TIMEOUT = 10
14
15
16 class Couchbase(AgentCheck):
17 """Extracts stats from Couchbase via its REST API
18 http://docs.couchbase.com/couchbase-manual-2.0/#using-the-rest-api
19 """
20 SERVICE_CHECK_NAME = 'couchbase.can_connect'
21
22 def _create_metrics(self, data, tags=None):
23 storage_totals = data['stats']['storageTotals']
24 for key, storage_type in storage_totals.items():
25 for metric_name, val in storage_type.items():
26 if val is not None:
27 metric_name = '.'.join(['couchbase', key, self.camel_case_to_joined_lower(metric_name)])
28 self.gauge(metric_name, val, tags=tags)
29
30 for bucket_name, bucket_stats in data['buckets'].items():
31 for metric_name, val in bucket_stats.items():
32 if val is not None:
33 metric_name = '.'.join(['couchbase', 'by_bucket', self.camel_case_to_joined_lower(metric_name)])
34 metric_tags = list(tags)
35 metric_tags.append('bucket:%s' % bucket_name)
36 self.gauge(metric_name, val[0], tags=metric_tags, device_name=bucket_name)
37
38 for node_name, node_stats in data['nodes'].items():
39 for metric_name, val in node_stats['interestingStats'].items():
40 if val is not None:
41 metric_name = '.'.join(['couchbase', 'by_node', self.camel_case_to_joined_lower(metric_name)])
42 metric_tags = list(tags)
43 metric_tags.append('node:%s' % node_name)
44 self.gauge(metric_name, val, tags=metric_tags, device_name=node_name)
45
46
47 def _get_stats(self, url, instance):
48 """ Hit a given URL and return the parsed json. """
49 self.log.debug('Fetching Couchbase stats at url: %s' % url)
50
51 timeout = float(instance.get('timeout', DEFAULT_TIMEOUT))
52
53 auth = None
54 if 'user' in instance and 'password' in instance:
55 auth = (instance['user'], instance['password'])
56
57 r = requests.get(url, auth=auth, headers=headers(self.agentConfig),
58 timeout=timeout)
59 r.raise_for_status()
60 return r.json()
61
62 def check(self, instance):
63 server = instance.get('server', None)
64 if server is None:
65 raise Exception("The server must be specified")
66 tags = instance.get('tags', [])
67 # Clean up tags in case there was a None entry in the instance
68 # e.g. if the yaml contains tags: but no actual tags
69 if tags is None:
70 tags = []
71 else:
72 tags = list(set(tags))
73 tags.append('instance:%s' % server)
74 data = self.get_data(server, instance)
75 self._create_metrics(data, tags=list(set(tags)))
76
77 def get_data(self, server, instance):
78 # The dictionary to be returned.
79 couchbase = {
80 'stats': None,
81 'buckets': {},
82 'nodes': {}
83 }
84
85 # build couchbase stats entry point
86 url = '%s%s' % (server, COUCHBASE_STATS_PATH)
87
88 # Fetch initial stats and capture a service check based on response.
89 service_check_tags = ['instance:%s' % server]
90 try:
91 overall_stats = self._get_stats(url, instance)
92 # No overall stats? bail out now
93 if overall_stats is None:
94 raise Exception("No data returned from couchbase endpoint: %s" % url)
95 except requests.exceptions.HTTPError as e:
96 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,
97 tags=service_check_tags, message=str(e.message))
98 raise
99 except Exception as e:
100 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,
101 tags=service_check_tags, message=str(e))
102 raise
103 else:
104 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK)
105
106 couchbase['stats'] = overall_stats
107
108 nodes = overall_stats['nodes']
109
110 # Next, get all the nodes
111 if nodes is not None:
112 for node in nodes:
113 couchbase['nodes'][node['hostname']] = node
114
115 # Next, get all buckets .
116 endpoint = overall_stats['buckets']['uri']
117
118 url = '%s%s' % (server, endpoint)
119 buckets = self._get_stats(url, instance)
120
121 if buckets is not None:
122 for bucket in buckets:
123 bucket_name = bucket['name']
124
125 # Fetch URI for the stats bucket
126 endpoint = bucket['stats']['uri']
127 url = '%s%s' % (server, endpoint)
128
129 try:
130 bucket_stats = self._get_stats(url, instance)
131 except requests.exceptions.HTTPError:
132 url_backup = '%s/pools/nodes/buckets/%s/stats' % (server, bucket_name)
133 bucket_stats = self._get_stats(url_backup, instance)
134
135 bucket_samples = bucket_stats['op']['samples']
136 if bucket_samples is not None:
137 couchbase['buckets'][bucket['name']] = bucket_samples
138
139 return couchbase
140
141 # Takes a camelCased variable and returns a joined_lower equivalent.
142 # Returns input if non-camelCase variable is detected.
143 def camel_case_to_joined_lower(self, variable):
144 # replace non-word with _
145 converted_variable = re.sub('\W+', '_', variable)
146
147 # insert _ in front of capital letters and lowercase the string
148 converted_variable = re.sub('([A-Z])', '_\g<1>', converted_variable).lower()
149
150 # remove duplicate _
151 converted_variable = re.sub('_+', '_', converted_variable)
152
153 # handle special case of starting/ending underscores
154 converted_variable = re.sub('^_|_$', '', converted_variable)
155
156 return converted_variable
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checks.d/couchbase.py b/checks.d/couchbase.py
--- a/checks.d/couchbase.py
+++ b/checks.d/couchbase.py
@@ -101,7 +101,8 @@
tags=service_check_tags, message=str(e))
raise
else:
- self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK)
+ self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,
+ tags=service_check_tags)
couchbase['stats'] = overall_stats
| {"golden_diff": "diff --git a/checks.d/couchbase.py b/checks.d/couchbase.py\n--- a/checks.d/couchbase.py\n+++ b/checks.d/couchbase.py\n@@ -101,7 +101,8 @@\n tags=service_check_tags, message=str(e))\n raise\n else:\n- self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK)\n+ self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,\n+ tags=service_check_tags)\n \n couchbase['stats'] = overall_stats\n", "issue": "Couchbase service check do not send tags on OK status\nMissing a `tags=service_check_tags` on line:\nhttps://github.com/DataDog/dd-agent/blob/master/checks.d/couchbase.py#L104\n\nPretty small fix.\n\n", "before_files": [{"content": "# stdlib\nimport re\n\n# 3rd party\nimport requests\n\n# project\nfrom checks import AgentCheck\nfrom util import headers\n\n# Constants\nCOUCHBASE_STATS_PATH = '/pools/default'\nDEFAULT_TIMEOUT = 10\n\n\nclass Couchbase(AgentCheck):\n \"\"\"Extracts stats from Couchbase via its REST API\n http://docs.couchbase.com/couchbase-manual-2.0/#using-the-rest-api\n \"\"\"\n SERVICE_CHECK_NAME = 'couchbase.can_connect'\n\n def _create_metrics(self, data, tags=None):\n storage_totals = data['stats']['storageTotals']\n for key, storage_type in storage_totals.items():\n for metric_name, val in storage_type.items():\n if val is not None:\n metric_name = '.'.join(['couchbase', key, self.camel_case_to_joined_lower(metric_name)])\n self.gauge(metric_name, val, tags=tags)\n\n for bucket_name, bucket_stats in data['buckets'].items():\n for metric_name, val in bucket_stats.items():\n if val is not None:\n metric_name = '.'.join(['couchbase', 'by_bucket', self.camel_case_to_joined_lower(metric_name)])\n metric_tags = list(tags)\n metric_tags.append('bucket:%s' % bucket_name)\n self.gauge(metric_name, val[0], tags=metric_tags, device_name=bucket_name)\n\n for node_name, node_stats in data['nodes'].items():\n for metric_name, val in node_stats['interestingStats'].items():\n if val is not None:\n metric_name = '.'.join(['couchbase', 'by_node', self.camel_case_to_joined_lower(metric_name)])\n metric_tags = list(tags)\n metric_tags.append('node:%s' % node_name)\n self.gauge(metric_name, val, tags=metric_tags, device_name=node_name)\n\n\n def _get_stats(self, url, instance):\n \"\"\" Hit a given URL and return the parsed json. \"\"\"\n self.log.debug('Fetching Couchbase stats at url: %s' % url)\n\n timeout = float(instance.get('timeout', DEFAULT_TIMEOUT))\n\n auth = None\n if 'user' in instance and 'password' in instance:\n auth = (instance['user'], instance['password'])\n\n r = requests.get(url, auth=auth, headers=headers(self.agentConfig),\n timeout=timeout)\n r.raise_for_status()\n return r.json()\n\n def check(self, instance):\n server = instance.get('server', None)\n if server is None:\n raise Exception(\"The server must be specified\")\n tags = instance.get('tags', [])\n # Clean up tags in case there was a None entry in the instance\n # e.g. 
if the yaml contains tags: but no actual tags\n if tags is None:\n tags = []\n else:\n tags = list(set(tags))\n tags.append('instance:%s' % server)\n data = self.get_data(server, instance)\n self._create_metrics(data, tags=list(set(tags)))\n\n def get_data(self, server, instance):\n # The dictionary to be returned.\n couchbase = {\n 'stats': None,\n 'buckets': {},\n 'nodes': {}\n }\n\n # build couchbase stats entry point\n url = '%s%s' % (server, COUCHBASE_STATS_PATH)\n\n # Fetch initial stats and capture a service check based on response.\n service_check_tags = ['instance:%s' % server]\n try:\n overall_stats = self._get_stats(url, instance)\n # No overall stats? bail out now\n if overall_stats is None:\n raise Exception(\"No data returned from couchbase endpoint: %s\" % url)\n except requests.exceptions.HTTPError as e:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n tags=service_check_tags, message=str(e.message))\n raise\n except Exception as e:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n tags=service_check_tags, message=str(e))\n raise\n else:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK)\n\n couchbase['stats'] = overall_stats\n\n nodes = overall_stats['nodes']\n\n # Next, get all the nodes\n if nodes is not None:\n for node in nodes:\n couchbase['nodes'][node['hostname']] = node\n\n # Next, get all buckets .\n endpoint = overall_stats['buckets']['uri']\n\n url = '%s%s' % (server, endpoint)\n buckets = self._get_stats(url, instance)\n\n if buckets is not None:\n for bucket in buckets:\n bucket_name = bucket['name']\n\n # Fetch URI for the stats bucket\n endpoint = bucket['stats']['uri']\n url = '%s%s' % (server, endpoint)\n\n try:\n bucket_stats = self._get_stats(url, instance)\n except requests.exceptions.HTTPError:\n url_backup = '%s/pools/nodes/buckets/%s/stats' % (server, bucket_name)\n bucket_stats = self._get_stats(url_backup, instance)\n\n bucket_samples = bucket_stats['op']['samples']\n if bucket_samples is not None:\n couchbase['buckets'][bucket['name']] = bucket_samples\n\n return couchbase\n\n # Takes a camelCased variable and returns a joined_lower equivalent.\n # Returns input if non-camelCase variable is detected.\n def camel_case_to_joined_lower(self, variable):\n # replace non-word with _\n converted_variable = re.sub('\\W+', '_', variable)\n\n # insert _ in front of capital letters and lowercase the string\n converted_variable = re.sub('([A-Z])', '_\\g<1>', converted_variable).lower()\n\n # remove duplicate _\n converted_variable = re.sub('_+', '_', converted_variable)\n\n # handle special case of starting/ending underscores\n converted_variable = re.sub('^_|_$', '', converted_variable)\n\n return converted_variable\n", "path": "checks.d/couchbase.py"}], "after_files": [{"content": "# stdlib\nimport re\n\n# 3rd party\nimport requests\n\n# project\nfrom checks import AgentCheck\nfrom util import headers\n\n# Constants\nCOUCHBASE_STATS_PATH = '/pools/default'\nDEFAULT_TIMEOUT = 10\n\n\nclass Couchbase(AgentCheck):\n \"\"\"Extracts stats from Couchbase via its REST API\n http://docs.couchbase.com/couchbase-manual-2.0/#using-the-rest-api\n \"\"\"\n SERVICE_CHECK_NAME = 'couchbase.can_connect'\n\n def _create_metrics(self, data, tags=None):\n storage_totals = data['stats']['storageTotals']\n for key, storage_type in storage_totals.items():\n for metric_name, val in storage_type.items():\n if val is not None:\n metric_name = '.'.join(['couchbase', key, self.camel_case_to_joined_lower(metric_name)])\n 
self.gauge(metric_name, val, tags=tags)\n\n for bucket_name, bucket_stats in data['buckets'].items():\n for metric_name, val in bucket_stats.items():\n if val is not None:\n metric_name = '.'.join(['couchbase', 'by_bucket', self.camel_case_to_joined_lower(metric_name)])\n metric_tags = list(tags)\n metric_tags.append('bucket:%s' % bucket_name)\n self.gauge(metric_name, val[0], tags=metric_tags, device_name=bucket_name)\n\n for node_name, node_stats in data['nodes'].items():\n for metric_name, val in node_stats['interestingStats'].items():\n if val is not None:\n metric_name = '.'.join(['couchbase', 'by_node', self.camel_case_to_joined_lower(metric_name)])\n metric_tags = list(tags)\n metric_tags.append('node:%s' % node_name)\n self.gauge(metric_name, val, tags=metric_tags, device_name=node_name)\n\n\n def _get_stats(self, url, instance):\n \"\"\" Hit a given URL and return the parsed json. \"\"\"\n self.log.debug('Fetching Couchbase stats at url: %s' % url)\n\n timeout = float(instance.get('timeout', DEFAULT_TIMEOUT))\n\n auth = None\n if 'user' in instance and 'password' in instance:\n auth = (instance['user'], instance['password'])\n\n r = requests.get(url, auth=auth, headers=headers(self.agentConfig),\n timeout=timeout)\n r.raise_for_status()\n return r.json()\n\n def check(self, instance):\n server = instance.get('server', None)\n if server is None:\n raise Exception(\"The server must be specified\")\n tags = instance.get('tags', [])\n # Clean up tags in case there was a None entry in the instance\n # e.g. if the yaml contains tags: but no actual tags\n if tags is None:\n tags = []\n else:\n tags = list(set(tags))\n tags.append('instance:%s' % server)\n data = self.get_data(server, instance)\n self._create_metrics(data, tags=list(set(tags)))\n\n def get_data(self, server, instance):\n # The dictionary to be returned.\n couchbase = {\n 'stats': None,\n 'buckets': {},\n 'nodes': {}\n }\n\n # build couchbase stats entry point\n url = '%s%s' % (server, COUCHBASE_STATS_PATH)\n\n # Fetch initial stats and capture a service check based on response.\n service_check_tags = ['instance:%s' % server]\n try:\n overall_stats = self._get_stats(url, instance)\n # No overall stats? 
bail out now\n if overall_stats is None:\n raise Exception(\"No data returned from couchbase endpoint: %s\" % url)\n except requests.exceptions.HTTPError as e:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n tags=service_check_tags, message=str(e.message))\n raise\n except Exception as e:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n tags=service_check_tags, message=str(e))\n raise\n else:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,\n tags=service_check_tags)\n\n couchbase['stats'] = overall_stats\n\n nodes = overall_stats['nodes']\n\n # Next, get all the nodes\n if nodes is not None:\n for node in nodes:\n couchbase['nodes'][node['hostname']] = node\n\n # Next, get all buckets .\n endpoint = overall_stats['buckets']['uri']\n\n url = '%s%s' % (server, endpoint)\n buckets = self._get_stats(url, instance)\n\n if buckets is not None:\n for bucket in buckets:\n bucket_name = bucket['name']\n\n # Fetch URI for the stats bucket\n endpoint = bucket['stats']['uri']\n url = '%s%s' % (server, endpoint)\n\n try:\n bucket_stats = self._get_stats(url, instance)\n except requests.exceptions.HTTPError:\n url_backup = '%s/pools/nodes/buckets/%s/stats' % (server, bucket_name)\n bucket_stats = self._get_stats(url_backup, instance)\n\n bucket_samples = bucket_stats['op']['samples']\n if bucket_samples is not None:\n couchbase['buckets'][bucket['name']] = bucket_samples\n\n return couchbase\n\n # Takes a camelCased variable and returns a joined_lower equivalent.\n # Returns input if non-camelCase variable is detected.\n def camel_case_to_joined_lower(self, variable):\n # replace non-word with _\n converted_variable = re.sub('\\W+', '_', variable)\n\n # insert _ in front of capital letters and lowercase the string\n converted_variable = re.sub('([A-Z])', '_\\g<1>', converted_variable).lower()\n\n # remove duplicate _\n converted_variable = re.sub('_+', '_', converted_variable)\n\n # handle special case of starting/ending underscores\n converted_variable = re.sub('^_|_$', '', converted_variable)\n\n return converted_variable\n", "path": "checks.d/couchbase.py"}]} | 1,972 | 120 |
gh_patches_debug_12088 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-639 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Explicitly set encoding for reading history file.
Fixes build in C locale. Otherwise I see:
Traceback (most recent call last):
File "setup.py", line 24, in <module>
history = history_file.read().replace('.. :changelog:', '')
File "/usr/pkg/lib/python3.5/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 6348: ordinal not in range(128)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4 import sys
5
6 from setuptools import setup
7
8 version = "1.3.0"
9
10 if sys.argv[-1] == 'publish':
11 os.system('python setup.py sdist upload')
12 os.system('python setup.py bdist_wheel upload')
13 sys.exit()
14
15 if sys.argv[-1] == 'tag':
16 os.system("git tag -a %s -m 'version %s'" % (version, version))
17 os.system("git push --tags")
18 sys.exit()
19
20 with open('README.rst') as readme_file:
21 readme = readme_file.read()
22
23 with open('HISTORY.rst') as history_file:
24 history = history_file.read().replace('.. :changelog:', '')
25
26 requirements = [
27 'future>=0.15.2',
28 'binaryornot>=0.2.0',
29 'jinja2>=2.7',
30 'click>=5.0',
31 'whichcraft>=0.1.1',
32 'poyo>=0.1.0'
33 ]
34
35 long_description = readme + '\n\n' + history
36
37 if sys.argv[-1] == 'readme':
38 print(long_description)
39 sys.exit()
40
41
42 setup(
43 name='cookiecutter',
44 version=version,
45 description=('A command-line utility that creates projects from project '
46 'templates, e.g. creating a Python package project from a '
47 'Python package project template.'),
48 long_description=long_description,
49 author='Audrey Roy',
50 author_email='[email protected]',
51 url='https://github.com/audreyr/cookiecutter',
52 packages=[
53 'cookiecutter',
54 ],
55 package_dir={'cookiecutter': 'cookiecutter'},
56 entry_points={
57 'console_scripts': [
58 'cookiecutter = cookiecutter.cli:main',
59 ]
60 },
61 include_package_data=True,
62 install_requires=requirements,
63 license='BSD',
64 zip_safe=False,
65 classifiers=[
66 'Development Status :: 5 - Production/Stable',
67 'Environment :: Console',
68 'Intended Audience :: Developers',
69 'Natural Language :: English',
70 'License :: OSI Approved :: BSD License',
71 'Programming Language :: Python',
72 'Programming Language :: Python :: 2',
73 'Programming Language :: Python :: 2.7',
74 'Programming Language :: Python :: 3',
75 'Programming Language :: Python :: 3.3',
76 'Programming Language :: Python :: 3.4',
77 'Programming Language :: Python :: 3.5',
78 'Programming Language :: Python :: Implementation :: CPython',
79 'Programming Language :: Python :: Implementation :: PyPy',
80 'Topic :: Software Development',
81 ],
82 keywords=(
83 'cookiecutter, Python, projects, project templates, Jinja2, '
84 'skeleton, scaffolding, project directory, setup.py, package, '
85 'packaging'
86 ),
87 )
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,6 +1,7 @@
#!/usr/bin/env python
import os
+import io
import sys
from setuptools import setup
@@ -17,10 +18,10 @@
os.system("git push --tags")
sys.exit()
-with open('README.rst') as readme_file:
+with io.open('README.rst', 'r', encoding='utf-8') as readme_file:
readme = readme_file.read()
-with open('HISTORY.rst') as history_file:
+with io.open('HISTORY.rst', 'r', encoding='utf-8') as history_file:
history = history_file.read().replace('.. :changelog:', '')
requirements = [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,6 +1,7 @@\n #!/usr/bin/env python\n \n import os\n+import io\n import sys\n \n from setuptools import setup\n@@ -17,10 +18,10 @@\n os.system(\"git push --tags\")\n sys.exit()\n \n-with open('README.rst') as readme_file:\n+with io.open('README.rst', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n \n-with open('HISTORY.rst') as history_file:\n+with io.open('HISTORY.rst', 'r', encoding='utf-8') as history_file:\n history = history_file.read().replace('.. :changelog:', '')\n \n requirements = [\n", "issue": "Explicitly set encoding for reading history file.\nFixes build in C locale. Otherwise I see:\n\nTraceback (most recent call last):\n File \"setup.py\", line 24, in <module>\n history = history_file.read().replace('.. :changelog:', '')\n File \"/usr/pkg/lib/python3.5/encodings/ascii.py\", line 26, in decode\n return codecs.ascii_decode(input, self.errors)[0]\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 6348: ordinal not in range(128)\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.3.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith open('README.rst') as readme_file:\n readme = readme_file.read()\n\nwith open('HISTORY.rst') as history_file:\n history = history_file.read().replace('.. :changelog:', '')\n\nrequirements = [\n 'future>=0.15.2',\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=5.0',\n 'whichcraft>=0.1.1',\n 'poyo>=0.1.0'\n]\n\nlong_description = readme + '\\n\\n' + history\n\nif sys.argv[-1] == 'readme':\n print(long_description)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=long_description,\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/audreyr/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.cli:main',\n ]\n },\n include_package_data=True,\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development',\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.3.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.rst', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nwith io.open('HISTORY.rst', 'r', encoding='utf-8') as history_file:\n history = history_file.read().replace('.. :changelog:', '')\n\nrequirements = [\n 'future>=0.15.2',\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=5.0',\n 'whichcraft>=0.1.1',\n 'poyo>=0.1.0'\n]\n\nlong_description = readme + '\\n\\n' + history\n\nif sys.argv[-1] == 'readme':\n print(long_description)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=long_description,\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/audreyr/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.cli:main',\n ]\n },\n include_package_data=True,\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development',\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}]} | 1,179 | 175 |
gh_patches_debug_8011 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-2259 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No more pagination in the members list
Offending URL: http://beta.zestedesavoir.com/membres/
Only 100 registered members are shown, even though there are more.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/utils/paginator.py`
Content:
```
1 # coding: utf-8
2
3 from django.views.generic import ListView
4 from django.views.generic.list import MultipleObjectMixin
5
6 from zds.settings import ZDS_APP
7
8
9 class ZdSPagingListView(ListView):
10 def get_context_data(self, **kwargs):
11 """
12 Get the context for this view. This method is surcharged to modify the paginator
13 and information given at the template.
14 """
15 queryset = kwargs.pop('object_list', self.object_list)
16 page_size = self.get_paginate_by(queryset)
17 context_object_name = self.get_context_object_name(queryset)
18 paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)
19 if page_size:
20 paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)
21 context = {
22 'paginator': paginator,
23 'page_obj': page,
24 'is_paginated': is_paginated,
25 'object_list': queryset,
26 'pages': paginator_range(page.number, paginator.num_pages),
27 }
28 else:
29 context = {
30 'paginator': None,
31 'page_obj': None,
32 'is_paginated': False,
33 'object_list': queryset,
34 'pages': [],
35 }
36 if context_object_name is not None:
37 context[context_object_name] = queryset
38 context.update(kwargs)
39 return super(MultipleObjectMixin, self).get_context_data(**context)
40
41
42 def paginator_range(current, stop, start=1):
43 assert (current <= stop)
44
45 # Basic case when no folding
46 if stop - start <= ZDS_APP['paginator']['folding_limit']:
47 return range(start, stop + 1)
48
49 # Complex case when folding
50 lst = []
51 for page_number in range(start, stop + 1):
52 # Bounds
53 if page_number == start or page_number == stop:
54 lst.append(page_number)
55 if page_number == start and current - start > 2:
56 lst.append(None)
57 # Neighbors
58 elif abs(page_number - current) == 1:
59 lst.append(page_number)
60 if page_number - current > 0 and stop - page_number > 2:
61 lst.append(None)
62 # Current
63 elif page_number == current:
64 lst.append(page_number)
65 # Put some
66 elif page_number == stop - 1 and current == stop - 3:
67 lst.append(page_number)
68 # And ignore all other numbers
69
70 return lst
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zds/utils/paginator.py b/zds/utils/paginator.py
--- a/zds/utils/paginator.py
+++ b/zds/utils/paginator.py
@@ -17,7 +17,6 @@
context_object_name = self.get_context_object_name(queryset)
paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)
if page_size:
- paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)
context = {
'paginator': paginator,
'page_obj': page,
| {"golden_diff": "diff --git a/zds/utils/paginator.py b/zds/utils/paginator.py\n--- a/zds/utils/paginator.py\n+++ b/zds/utils/paginator.py\n@@ -17,7 +17,6 @@\n context_object_name = self.get_context_object_name(queryset)\n paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)\n if page_size:\n- paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)\n context = {\n 'paginator': paginator,\n 'page_obj': page,\n", "issue": "Plus de pagination dans la liste des membres\nUrl incrimin\u00e9e : http://beta.zestedesavoir.com/membres/\n\nOn ne voit que 100 membres inscrit, alors qu'il y'en a plus.\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom django.views.generic import ListView\nfrom django.views.generic.list import MultipleObjectMixin\n\nfrom zds.settings import ZDS_APP\n\n\nclass ZdSPagingListView(ListView):\n def get_context_data(self, **kwargs):\n \"\"\"\n Get the context for this view. This method is surcharged to modify the paginator\n and information given at the template.\n \"\"\"\n queryset = kwargs.pop('object_list', self.object_list)\n page_size = self.get_paginate_by(queryset)\n context_object_name = self.get_context_object_name(queryset)\n paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)\n if page_size:\n paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)\n context = {\n 'paginator': paginator,\n 'page_obj': page,\n 'is_paginated': is_paginated,\n 'object_list': queryset,\n 'pages': paginator_range(page.number, paginator.num_pages),\n }\n else:\n context = {\n 'paginator': None,\n 'page_obj': None,\n 'is_paginated': False,\n 'object_list': queryset,\n 'pages': [],\n }\n if context_object_name is not None:\n context[context_object_name] = queryset\n context.update(kwargs)\n return super(MultipleObjectMixin, self).get_context_data(**context)\n\n\ndef paginator_range(current, stop, start=1):\n assert (current <= stop)\n\n # Basic case when no folding\n if stop - start <= ZDS_APP['paginator']['folding_limit']:\n return range(start, stop + 1)\n\n # Complex case when folding\n lst = []\n for page_number in range(start, stop + 1):\n # Bounds\n if page_number == start or page_number == stop:\n lst.append(page_number)\n if page_number == start and current - start > 2:\n lst.append(None)\n # Neighbors\n elif abs(page_number - current) == 1:\n lst.append(page_number)\n if page_number - current > 0 and stop - page_number > 2:\n lst.append(None)\n # Current\n elif page_number == current:\n lst.append(page_number)\n # Put some\n elif page_number == stop - 1 and current == stop - 3:\n lst.append(page_number)\n # And ignore all other numbers\n\n return lst\n", "path": "zds/utils/paginator.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom django.views.generic import ListView\nfrom django.views.generic.list import MultipleObjectMixin\n\nfrom zds.settings import ZDS_APP\n\n\nclass ZdSPagingListView(ListView):\n def get_context_data(self, **kwargs):\n \"\"\"\n Get the context for this view. 
This method is surcharged to modify the paginator\n and information given at the template.\n \"\"\"\n queryset = kwargs.pop('object_list', self.object_list)\n page_size = self.get_paginate_by(queryset)\n context_object_name = self.get_context_object_name(queryset)\n paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)\n if page_size:\n context = {\n 'paginator': paginator,\n 'page_obj': page,\n 'is_paginated': is_paginated,\n 'object_list': queryset,\n 'pages': paginator_range(page.number, paginator.num_pages),\n }\n else:\n context = {\n 'paginator': None,\n 'page_obj': None,\n 'is_paginated': False,\n 'object_list': queryset,\n 'pages': [],\n }\n if context_object_name is not None:\n context[context_object_name] = queryset\n context.update(kwargs)\n return super(MultipleObjectMixin, self).get_context_data(**context)\n\n\ndef paginator_range(current, stop, start=1):\n assert (current <= stop)\n\n # Basic case when no folding\n if stop - start <= ZDS_APP['paginator']['folding_limit']:\n return range(start, stop + 1)\n\n # Complex case when folding\n lst = []\n for page_number in range(start, stop + 1):\n # Bounds\n if page_number == start or page_number == stop:\n lst.append(page_number)\n if page_number == start and current - start > 2:\n lst.append(None)\n # Neighbors\n elif abs(page_number - current) == 1:\n lst.append(page_number)\n if page_number - current > 0 and stop - page_number > 2:\n lst.append(None)\n # Current\n elif page_number == current:\n lst.append(page_number)\n # Put some\n elif page_number == stop - 1 and current == stop - 3:\n lst.append(page_number)\n # And ignore all other numbers\n\n return lst\n", "path": "zds/utils/paginator.py"}]} | 970 | 126 |
gh_patches_debug_18462 | rasdani/github-patches | git_diff | aio-libs__aiohttp-5118 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
option to disable automatic client response body decompression
enhancement for https://github.com/aio-libs/aiohttp/issues/1992
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/resolver.py`
Content:
```
1 import socket
2 from typing import Any, Dict, List
3
4 from .abc import AbstractResolver
5 from .helpers import get_running_loop
6
7 __all__ = ("ThreadedResolver", "AsyncResolver", "DefaultResolver")
8
9 try:
10 import aiodns
11
12 # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
13 except ImportError: # pragma: no cover
14 aiodns = None
15
16 aiodns_default = False
17
18
19 class ThreadedResolver(AbstractResolver):
20 """Use Executor for synchronous getaddrinfo() calls, which defaults to
21 concurrent.futures.ThreadPoolExecutor.
22 """
23
24 def __init__(self) -> None:
25 self._loop = get_running_loop()
26
27 async def resolve(
28 self, host: str, port: int = 0, family: int = socket.AF_INET
29 ) -> List[Dict[str, Any]]:
30 infos = await self._loop.getaddrinfo(
31 host, port, type=socket.SOCK_STREAM, family=family
32 )
33
34 hosts = []
35 for family, _, proto, _, address in infos:
36 if family == socket.AF_INET6 and address[3]: # type: ignore
37 # This is essential for link-local IPv6 addresses.
38 # LL IPv6 is a VERY rare case. Strictly speaking, we should use
39 # getnameinfo() unconditionally, but performance makes sense.
40 host, _port = socket.getnameinfo(
41 address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
42 )
43 port = int(_port)
44 else:
45 host, port = address[:2]
46 hosts.append(
47 {
48 "hostname": host,
49 "host": host,
50 "port": port,
51 "family": family,
52 "proto": proto,
53 "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
54 }
55 )
56
57 return hosts
58
59 async def close(self) -> None:
60 pass
61
62
63 class AsyncResolver(AbstractResolver):
64 """Use the `aiodns` package to make asynchronous DNS lookups"""
65
66 def __init__(self, *args: Any, **kwargs: Any) -> None:
67 if aiodns is None:
68 raise RuntimeError("Resolver requires aiodns library")
69
70 self._loop = get_running_loop()
71 self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)
72
73 async def resolve(
74 self, host: str, port: int = 0, family: int = socket.AF_INET
75 ) -> List[Dict[str, Any]]:
76 try:
77 resp = await self._resolver.gethostbyname(host, family)
78 except aiodns.error.DNSError as exc:
79 msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
80 raise OSError(msg) from exc
81 hosts = []
82 for address in resp.addresses:
83 hosts.append(
84 {
85 "hostname": host,
86 "host": address,
87 "port": port,
88 "family": family,
89 "proto": 0,
90 "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
91 }
92 )
93
94 if not hosts:
95 raise OSError("DNS lookup failed")
96
97 return hosts
98
99 async def close(self) -> None:
100 return self._resolver.cancel()
101
102
103 DefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py
--- a/aiohttp/resolver.py
+++ b/aiohttp/resolver.py
@@ -25,10 +25,10 @@
self._loop = get_running_loop()
async def resolve(
- self, host: str, port: int = 0, family: int = socket.AF_INET
+ self, hostname: str, port: int = 0, family: int = socket.AF_INET
) -> List[Dict[str, Any]]:
infos = await self._loop.getaddrinfo(
- host, port, type=socket.SOCK_STREAM, family=family
+ hostname, port, type=socket.SOCK_STREAM, family=family
)
hosts = []
@@ -45,7 +45,7 @@
host, port = address[:2]
hosts.append(
{
- "hostname": host,
+ "hostname": hostname,
"host": host,
"port": port,
"family": family,
| {"golden_diff": "diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py\n--- a/aiohttp/resolver.py\n+++ b/aiohttp/resolver.py\n@@ -25,10 +25,10 @@\n self._loop = get_running_loop()\n \n async def resolve(\n- self, host: str, port: int = 0, family: int = socket.AF_INET\n+ self, hostname: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n- host, port, type=socket.SOCK_STREAM, family=family\n+ hostname, port, type=socket.SOCK_STREAM, family=family\n )\n \n hosts = []\n@@ -45,7 +45,7 @@\n host, port = address[:2]\n hosts.append(\n {\n- \"hostname\": host,\n+ \"hostname\": hostname,\n \"host\": host,\n \"port\": port,\n \"family\": family,\n", "issue": "option to disable automatic client response body decompression\nenhancement for https://github.com/aio-libs/aiohttp/issues/1992\n", "before_files": [{"content": "import socket\nfrom typing import Any, Dict, List\n\nfrom .abc import AbstractResolver\nfrom .helpers import get_running_loop\n\n__all__ = (\"ThreadedResolver\", \"AsyncResolver\", \"DefaultResolver\")\n\ntry:\n import aiodns\n\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self) -> None:\n self._loop = get_running_loop()\n\n async def resolve(\n self, host: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n host, port, type=socket.SOCK_STREAM, family=family\n )\n\n hosts = []\n for family, _, proto, _, address in infos:\n if family == socket.AF_INET6 and address[3]: # type: ignore\n # This is essential for link-local IPv6 addresses.\n # LL IPv6 is a VERY rare case. 
Strictly speaking, we should use\n # getnameinfo() unconditionally, but performance makes sense.\n host, _port = socket.getnameinfo(\n address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV\n )\n port = int(_port)\n else:\n host, port = address[:2]\n hosts.append(\n {\n \"hostname\": host,\n \"host\": host,\n \"port\": port,\n \"family\": family,\n \"proto\": proto,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n return hosts\n\n async def close(self) -> None:\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = get_running_loop()\n self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)\n\n async def resolve(\n self, host: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n try:\n resp = await self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {\n \"hostname\": host,\n \"host\": address,\n \"port\": port,\n \"family\": family,\n \"proto\": 0,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n async def close(self) -> None:\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}], "after_files": [{"content": "import socket\nfrom typing import Any, Dict, List\n\nfrom .abc import AbstractResolver\nfrom .helpers import get_running_loop\n\n__all__ = (\"ThreadedResolver\", \"AsyncResolver\", \"DefaultResolver\")\n\ntry:\n import aiodns\n\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self) -> None:\n self._loop = get_running_loop()\n\n async def resolve(\n self, hostname: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n hostname, port, type=socket.SOCK_STREAM, family=family\n )\n\n hosts = []\n for family, _, proto, _, address in infos:\n if family == socket.AF_INET6 and address[3]: # type: ignore\n # This is essential for link-local IPv6 addresses.\n # LL IPv6 is a VERY rare case. 
Strictly speaking, we should use\n # getnameinfo() unconditionally, but performance makes sense.\n host, _port = socket.getnameinfo(\n address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV\n )\n port = int(_port)\n else:\n host, port = address[:2]\n hosts.append(\n {\n \"hostname\": hostname,\n \"host\": host,\n \"port\": port,\n \"family\": family,\n \"proto\": proto,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n return hosts\n\n async def close(self) -> None:\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = get_running_loop()\n self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)\n\n async def resolve(\n self, host: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n try:\n resp = await self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {\n \"hostname\": host,\n \"host\": address,\n \"port\": port,\n \"family\": family,\n \"proto\": 0,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n async def close(self) -> None:\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}]} | 1,236 | 230 |
gh_patches_debug_44753 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-888 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] ShowTypingMiddleware middleware in a python bot not functioning
## Version
`botbuilder-core 4.7.1`
`botbuilder-schema 4.7.1`
## Describe the bug
``` python
#app.py
ADAPTER = BotFrameworkAdapter(SETTINGS)
# show typing indicator on long activities
ADAPTER.use(ShowTypingMiddleware(delay=0.5, period=2.0))
```
``` python
#bot.py
...
async def on_message_activity(self, turn_context: TurnContext):
if turn_context.activity.text == "middleware":
await asyncio.sleep(10) # mock getting some data
await turn_context.send_activity("done")
...
```
## Expected behavior
I expect that calling the middleware
- shows a TI for activities taking longer than .5 seconds
- repeats sending a TI to the client every 2 seconds 
## Actual results:
- TI is sent one time only
 - no repeat TIs are sent
- a runtime warning is shown:
```
c:\develop\x\pybot1\.venv\lib\site-packages\botbuilder\core\show_typing_middleware.py:79:
RuntimeWarning: coroutine 'ShowTypingMiddleware.on_turn.<locals>.start_interval' was never awaited
start_interval(context, period, period)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
In the emulator log it is clear that only one TI indicator is sent, and no repeats are to be seen
```
[16:55:12]<- messageYou said 'middleware'
[16:55:12]POST200conversations.:conversationId.activities.:activityId
[16:55:12]POST201directline.conversations.:conversationId.activities
[16:55:43]-> messagemiddleware
[16:55:44]<- typing
[16:55:44]POST200conversations.:conversationId.activities.:activityId
[16:55:54]<- messagedone
[16:55:54]POST200conversations.:conversationId.activities.:activityId
[16:55:54]POST201directline.conversations.:conversationId.activities
```
## Additional context
also see Question on [SO](https://stackoverflow.com/posts/60467080/edit)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import time
5 from functools import wraps
6 from typing import Awaitable, Callable
7
8 from botbuilder.schema import Activity, ActivityTypes
9
10 from .middleware_set import Middleware
11 from .turn_context import TurnContext
12
13
14 def delay(span=0.0):
15 def wrap(func):
16 @wraps(func)
17 async def delayed():
18 time.sleep(span)
19 await func()
20
21 return delayed
22
23 return wrap
24
25
26 class Timer:
27 clear_timer = False
28
29 async def set_timeout(self, func, time):
30 is_invocation_cancelled = False
31
32 @delay(time)
33 async def some_fn(): # pylint: disable=function-redefined
34 if not self.clear_timer:
35 await func()
36
37 await some_fn()
38 return is_invocation_cancelled
39
40 def set_clear_timer(self):
41 self.clear_timer = True
42
43
44 class ShowTypingMiddleware(Middleware):
45 def __init__(self, delay: float = 0.5, period: float = 2.0):
46 if delay < 0:
47 raise ValueError("Delay must be greater than or equal to zero")
48
49 if period <= 0:
50 raise ValueError("Repeat period must be greater than zero")
51
52 self._delay = delay
53 self._period = period
54
55 async def on_turn(
56 self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]
57 ):
58 finished = False
59 timer = Timer()
60
61 async def start_interval(context: TurnContext, delay: int, period: int):
62 async def aux():
63 if not finished:
64 typing_activity = Activity(
65 type=ActivityTypes.typing,
66 relates_to=context.activity.relates_to,
67 )
68
69 conversation_reference = TurnContext.get_conversation_reference(
70 context.activity
71 )
72
73 typing_activity = TurnContext.apply_conversation_reference(
74 typing_activity, conversation_reference
75 )
76
77 await context.adapter.send_activities(context, [typing_activity])
78
79 start_interval(context, period, period)
80
81 await timer.set_timeout(aux, delay)
82
83 def stop_interval():
84 nonlocal finished
85 finished = True
86 timer.set_clear_timer()
87
88 if context.activity.type == ActivityTypes.message:
89 finished = False
90 await start_interval(context, self._delay, self._period)
91
92 result = await logic()
93 stop_interval()
94
95 return result
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py
--- a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py
+++ b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py
@@ -1,8 +1,6 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
-
-import time
-from functools import wraps
+import asyncio
from typing import Awaitable, Callable
from botbuilder.schema import Activity, ActivityTypes
@@ -11,38 +9,38 @@
from .turn_context import TurnContext
-def delay(span=0.0):
- def wrap(func):
- @wraps(func)
- async def delayed():
- time.sleep(span)
- await func()
-
- return delayed
-
- return wrap
-
-
class Timer:
clear_timer = False
- async def set_timeout(self, func, time):
- is_invocation_cancelled = False
-
- @delay(time)
+ def set_timeout(self, func, span):
async def some_fn(): # pylint: disable=function-redefined
+ await asyncio.sleep(span)
if not self.clear_timer:
await func()
- await some_fn()
- return is_invocation_cancelled
+ asyncio.ensure_future(some_fn())
def set_clear_timer(self):
self.clear_timer = True
class ShowTypingMiddleware(Middleware):
+ """
+ When added, this middleware will send typing activities back to the user when a Message activity
+ is received to let them know that the bot has received the message and is working on the response.
+ You can specify a delay before the first typing activity is sent and then a frequency, which
+ determines how often another typing activity is sent. Typing activities will continue to be sent
+ until your bot sends another message back to the user.
+ """
+
def __init__(self, delay: float = 0.5, period: float = 2.0):
+ """
+ Initializes the middleware.
+
+ :param delay: Delay in seconds for the first typing indicator to be sent.
+ :param period: Delay in seconds for subsequent typing indicators.
+ """
+
if delay < 0:
raise ValueError("Delay must be greater than or equal to zero")
@@ -55,41 +53,43 @@
async def on_turn(
self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]
):
- finished = False
timer = Timer()
- async def start_interval(context: TurnContext, delay: int, period: int):
+ def start_interval(context: TurnContext, delay, period):
async def aux():
- if not finished:
- typing_activity = Activity(
- type=ActivityTypes.typing,
- relates_to=context.activity.relates_to,
- )
+ typing_activity = Activity(
+ type=ActivityTypes.typing, relates_to=context.activity.relates_to,
+ )
- conversation_reference = TurnContext.get_conversation_reference(
- context.activity
- )
+ conversation_reference = TurnContext.get_conversation_reference(
+ context.activity
+ )
- typing_activity = TurnContext.apply_conversation_reference(
- typing_activity, conversation_reference
- )
+ typing_activity = TurnContext.apply_conversation_reference(
+ typing_activity, conversation_reference
+ )
- await context.adapter.send_activities(context, [typing_activity])
+ asyncio.ensure_future(
+ context.adapter.send_activities(context, [typing_activity])
+ )
- start_interval(context, period, period)
+ # restart the timer, with the 'period' value for the delay
+ timer.set_timeout(aux, period)
- await timer.set_timeout(aux, delay)
+ # first time through we use the 'delay' value for the timer.
+ timer.set_timeout(aux, delay)
def stop_interval():
- nonlocal finished
- finished = True
timer.set_clear_timer()
+ # if it's a message, start sending typing activities until the
+ # bot logic is done.
if context.activity.type == ActivityTypes.message:
- finished = False
- await start_interval(context, self._delay, self._period)
+ start_interval(context, self._delay, self._period)
+ # call the bot logic
result = await logic()
+
stop_interval()
return result
| {"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py\n--- a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py\n+++ b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py\n@@ -1,8 +1,6 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\r\n # Licensed under the MIT License.\r\n-\r\n-import time\r\n-from functools import wraps\r\n+import asyncio\r\n from typing import Awaitable, Callable\r\n \r\n from botbuilder.schema import Activity, ActivityTypes\r\n@@ -11,38 +9,38 @@\n from .turn_context import TurnContext\r\n \r\n \r\n-def delay(span=0.0):\r\n- def wrap(func):\r\n- @wraps(func)\r\n- async def delayed():\r\n- time.sleep(span)\r\n- await func()\r\n-\r\n- return delayed\r\n-\r\n- return wrap\r\n-\r\n-\r\n class Timer:\r\n clear_timer = False\r\n \r\n- async def set_timeout(self, func, time):\r\n- is_invocation_cancelled = False\r\n-\r\n- @delay(time)\r\n+ def set_timeout(self, func, span):\r\n async def some_fn(): # pylint: disable=function-redefined\r\n+ await asyncio.sleep(span)\r\n if not self.clear_timer:\r\n await func()\r\n \r\n- await some_fn()\r\n- return is_invocation_cancelled\r\n+ asyncio.ensure_future(some_fn())\r\n \r\n def set_clear_timer(self):\r\n self.clear_timer = True\r\n \r\n \r\n class ShowTypingMiddleware(Middleware):\r\n+ \"\"\"\r\n+ When added, this middleware will send typing activities back to the user when a Message activity\r\n+ is received to let them know that the bot has received the message and is working on the response.\r\n+ You can specify a delay before the first typing activity is sent and then a frequency, which\r\n+ determines how often another typing activity is sent. 
Typing activities will continue to be sent\r\n+ until your bot sends another message back to the user.\r\n+ \"\"\"\r\n+\r\n def __init__(self, delay: float = 0.5, period: float = 2.0):\r\n+ \"\"\"\r\n+ Initializes the middleware.\r\n+\r\n+ :param delay: Delay in seconds for the first typing indicator to be sent.\r\n+ :param period: Delay in seconds for subsequent typing indicators.\r\n+ \"\"\"\r\n+\r\n if delay < 0:\r\n raise ValueError(\"Delay must be greater than or equal to zero\")\r\n \r\n@@ -55,41 +53,43 @@\n async def on_turn(\r\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\r\n ):\r\n- finished = False\r\n timer = Timer()\r\n \r\n- async def start_interval(context: TurnContext, delay: int, period: int):\r\n+ def start_interval(context: TurnContext, delay, period):\r\n async def aux():\r\n- if not finished:\r\n- typing_activity = Activity(\r\n- type=ActivityTypes.typing,\r\n- relates_to=context.activity.relates_to,\r\n- )\r\n+ typing_activity = Activity(\r\n+ type=ActivityTypes.typing, relates_to=context.activity.relates_to,\r\n+ )\r\n \r\n- conversation_reference = TurnContext.get_conversation_reference(\r\n- context.activity\r\n- )\r\n+ conversation_reference = TurnContext.get_conversation_reference(\r\n+ context.activity\r\n+ )\r\n \r\n- typing_activity = TurnContext.apply_conversation_reference(\r\n- typing_activity, conversation_reference\r\n- )\r\n+ typing_activity = TurnContext.apply_conversation_reference(\r\n+ typing_activity, conversation_reference\r\n+ )\r\n \r\n- await context.adapter.send_activities(context, [typing_activity])\r\n+ asyncio.ensure_future(\r\n+ context.adapter.send_activities(context, [typing_activity])\r\n+ )\r\n \r\n- start_interval(context, period, period)\r\n+ # restart the timer, with the 'period' value for the delay\r\n+ timer.set_timeout(aux, period)\r\n \r\n- await timer.set_timeout(aux, delay)\r\n+ # first time through we use the 'delay' value for the timer.\r\n+ timer.set_timeout(aux, delay)\r\n \r\n def stop_interval():\r\n- nonlocal finished\r\n- finished = True\r\n timer.set_clear_timer()\r\n \r\n+ # if it's a message, start sending typing activities until the\r\n+ # bot logic is done.\r\n if context.activity.type == ActivityTypes.message:\r\n- finished = False\r\n- await start_interval(context, self._delay, self._period)\r\n+ start_interval(context, self._delay, self._period)\r\n \r\n+ # call the bot logic\r\n result = await logic()\r\n+\r\n stop_interval()\r\n \r\n return result\n", "issue": "[bug] ShowTypingMiddleware middleware in a python bot not functioning\n## Version\r\n`botbuilder-core 4.7.1` \r\n`botbuilder-schema 4.7.1`\r\n\r\n## Describe the bug\r\n\r\n\r\n``` python\r\n#app.py \r\nADAPTER = BotFrameworkAdapter(SETTINGS)\r\n# show typing indicator on long activities\r\nADAPTER.use(ShowTypingMiddleware(delay=0.5, period=2.0))\r\n```\r\n\r\n``` python\r\n#bot.py \r\n...\r\n\r\n async def on_message_activity(self, turn_context: TurnContext):\r\n if turn_context.activity.text == \"middleware\":\r\n await asyncio.sleep(10) # mock getting some data \r\n await turn_context.send_activity(\"done\")\r\n\r\n...\r\n```\r\n\r\n## Expected behavior\r\n\r\nI expect that calling the middleware \r\n- shows a TI for activities taking longer than .5 seconds \r\n- repeat sending a TI to the client every 2 seconds \r\n\r\n## Actual results : \r\n\r\n - TI is sent one time only\r\n - no repeat TI are sent \r\n - a runtime warning is shown:\r\n```\r\n 
c:\\develop\\x\\pybot1\\.venv\\lib\\site-packages\\botbuilder\\core\\show_typing_middleware.py:79: \r\nRuntimeWarning: coroutine 'ShowTypingMiddleware.on_turn.<locals>.start_interval' was never awaited\r\n start_interval(context, period, period)\r\nRuntimeWarning: Enable tracemalloc to get the object allocation traceback\r\n```\r\n\r\nIn the emulator log it is clear that only one TI indicator is sent , and no repeats are to be seen\r\n```\r\n[16:55:12]<- messageYou said 'middleware'\r\n[16:55:12]POST200conversations.:conversationId.activities.:activityId\r\n[16:55:12]POST201directline.conversations.:conversationId.activities\r\n[16:55:43]-> messagemiddleware\r\n[16:55:44]<- typing\r\n[16:55:44]POST200conversations.:conversationId.activities.:activityId\r\n[16:55:54]<- messagedone\r\n[16:55:54]POST200conversations.:conversationId.activities.:activityId\r\n[16:55:54]POST201directline.conversations.:conversationId.activities\r\n```\r\n\r\n## Additional context\r\nalso see Question on [SO](https://stackoverflow.com/posts/60467080/edit)\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nimport time\r\nfrom functools import wraps\r\nfrom typing import Awaitable, Callable\r\n\r\nfrom botbuilder.schema import Activity, ActivityTypes\r\n\r\nfrom .middleware_set import Middleware\r\nfrom .turn_context import TurnContext\r\n\r\n\r\ndef delay(span=0.0):\r\n def wrap(func):\r\n @wraps(func)\r\n async def delayed():\r\n time.sleep(span)\r\n await func()\r\n\r\n return delayed\r\n\r\n return wrap\r\n\r\n\r\nclass Timer:\r\n clear_timer = False\r\n\r\n async def set_timeout(self, func, time):\r\n is_invocation_cancelled = False\r\n\r\n @delay(time)\r\n async def some_fn(): # pylint: disable=function-redefined\r\n if not self.clear_timer:\r\n await func()\r\n\r\n await some_fn()\r\n return is_invocation_cancelled\r\n\r\n def set_clear_timer(self):\r\n self.clear_timer = True\r\n\r\n\r\nclass ShowTypingMiddleware(Middleware):\r\n def __init__(self, delay: float = 0.5, period: float = 2.0):\r\n if delay < 0:\r\n raise ValueError(\"Delay must be greater than or equal to zero\")\r\n\r\n if period <= 0:\r\n raise ValueError(\"Repeat period must be greater than zero\")\r\n\r\n self._delay = delay\r\n self._period = period\r\n\r\n async def on_turn(\r\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\r\n ):\r\n finished = False\r\n timer = Timer()\r\n\r\n async def start_interval(context: TurnContext, delay: int, period: int):\r\n async def aux():\r\n if not finished:\r\n typing_activity = Activity(\r\n type=ActivityTypes.typing,\r\n relates_to=context.activity.relates_to,\r\n )\r\n\r\n conversation_reference = TurnContext.get_conversation_reference(\r\n context.activity\r\n )\r\n\r\n typing_activity = TurnContext.apply_conversation_reference(\r\n typing_activity, conversation_reference\r\n )\r\n\r\n await context.adapter.send_activities(context, [typing_activity])\r\n\r\n start_interval(context, period, period)\r\n\r\n await timer.set_timeout(aux, delay)\r\n\r\n def stop_interval():\r\n nonlocal finished\r\n finished = True\r\n timer.set_clear_timer()\r\n\r\n if context.activity.type == ActivityTypes.message:\r\n finished = False\r\n await start_interval(context, self._delay, self._period)\r\n\r\n result = await logic()\r\n stop_interval()\r\n\r\n return result\r\n", "path": "libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py"}], "after_files": [{"content": "# Copyright (c) Microsoft 
Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\nimport asyncio\r\nfrom typing import Awaitable, Callable\r\n\r\nfrom botbuilder.schema import Activity, ActivityTypes\r\n\r\nfrom .middleware_set import Middleware\r\nfrom .turn_context import TurnContext\r\n\r\n\r\nclass Timer:\r\n clear_timer = False\r\n\r\n def set_timeout(self, func, span):\r\n async def some_fn(): # pylint: disable=function-redefined\r\n await asyncio.sleep(span)\r\n if not self.clear_timer:\r\n await func()\r\n\r\n asyncio.ensure_future(some_fn())\r\n\r\n def set_clear_timer(self):\r\n self.clear_timer = True\r\n\r\n\r\nclass ShowTypingMiddleware(Middleware):\r\n \"\"\"\r\n When added, this middleware will send typing activities back to the user when a Message activity\r\n is received to let them know that the bot has received the message and is working on the response.\r\n You can specify a delay before the first typing activity is sent and then a frequency, which\r\n determines how often another typing activity is sent. Typing activities will continue to be sent\r\n until your bot sends another message back to the user.\r\n \"\"\"\r\n\r\n def __init__(self, delay: float = 0.5, period: float = 2.0):\r\n \"\"\"\r\n Initializes the middleware.\r\n\r\n :param delay: Delay in seconds for the first typing indicator to be sent.\r\n :param period: Delay in seconds for subsequent typing indicators.\r\n \"\"\"\r\n\r\n if delay < 0:\r\n raise ValueError(\"Delay must be greater than or equal to zero\")\r\n\r\n if period <= 0:\r\n raise ValueError(\"Repeat period must be greater than zero\")\r\n\r\n self._delay = delay\r\n self._period = period\r\n\r\n async def on_turn(\r\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\r\n ):\r\n timer = Timer()\r\n\r\n def start_interval(context: TurnContext, delay, period):\r\n async def aux():\r\n typing_activity = Activity(\r\n type=ActivityTypes.typing, relates_to=context.activity.relates_to,\r\n )\r\n\r\n conversation_reference = TurnContext.get_conversation_reference(\r\n context.activity\r\n )\r\n\r\n typing_activity = TurnContext.apply_conversation_reference(\r\n typing_activity, conversation_reference\r\n )\r\n\r\n asyncio.ensure_future(\r\n context.adapter.send_activities(context, [typing_activity])\r\n )\r\n\r\n # restart the timer, with the 'period' value for the delay\r\n timer.set_timeout(aux, period)\r\n\r\n # first time through we use the 'delay' value for the timer.\r\n timer.set_timeout(aux, delay)\r\n\r\n def stop_interval():\r\n timer.set_clear_timer()\r\n\r\n # if it's a message, start sending typing activities until the\r\n # bot logic is done.\r\n if context.activity.type == ActivityTypes.message:\r\n start_interval(context, self._delay, self._period)\r\n\r\n # call the bot logic\r\n result = await logic()\r\n\r\n stop_interval()\r\n\r\n return result\r\n", "path": "libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py"}]} | 1,508 | 1,012 |
gh_patches_debug_10155 | rasdani/github-patches | git_diff | Mailu__Mailu-1885 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Postfix no longer starts correctly in kubernetes
## Environment & Versions
### Environment
- [ ] docker-compose
- [X] kubernetes
- [ ] docker swarm
### Versions
1.8
## Description
After installing mailu 1.8 via helm on Kubernetes, the `mailu-postfix` container runs but never reaches a healthy state and smtp functionality is impaired.
After digging into it I believe the container is failing to become healthy because of the following change to the postfix container's startup script (https://github.com/Mailu/Mailu/commit/1d65529c94f54de3cb49ed9584ed95f7860c26fa) and a known issue with the musl resolver that the alpine base image uses (https://github.com/kubernetes/kubernetes/issues/64924).
Resolving the mailu installation hostname never succeeds because of the aforementioned bug, and `socrates.system.resolve_hostname` simply retries until the pod's failure threshold is exceeded and the pod is restarted.
There's a couple different ways I believe this could be resolved:
1. Pass a FQDN to `system.resolve_hostname()`, which avoids the resolver bug with search lists, i.e. `domain.com.` with a trailing dot.
2. Update the deployment manifest in the mailu helm chart to use `dnsConfig.options` on the pod spec to set a more agreeable `ndots` value for `/etc/resolv.conf`
3. Use a different base image for mailu containers that is not affected by this issue.
I would be happy to investigate further and file a PR with the appropriate changes based on feedback. Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/postfix/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import glob
5 import shutil
6 import multiprocessing
7 import logging as log
8 import sys
9
10 from podop import run_server
11 from socrate import system, conf
12
13 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
14
15 def start_podop():
16 os.setuid(100)
17 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/postfix/"
18 # TODO: Remove verbosity setting from Podop?
19 run_server(0, "postfix", "/tmp/podop.socket", [
20 ("transport", "url", url + "transport/§"),
21 ("alias", "url", url + "alias/§"),
22 ("domain", "url", url + "domain/§"),
23 ("mailbox", "url", url + "mailbox/§"),
24 ("recipientmap", "url", url + "recipient/map/§"),
25 ("sendermap", "url", url + "sender/map/§"),
26 ("senderaccess", "url", url + "sender/access/§"),
27 ("senderlogin", "url", url + "sender/login/§")
28 ])
29
30 def is_valid_postconf_line(line):
31 return not line.startswith("#") \
32 and not line == ''
33
34 # Actual startup script
35 os.environ["FRONT_ADDRESS"] = system.get_host_address_from_environment("FRONT", "front")
36 os.environ["ADMIN_ADDRESS"] = system.get_host_address_from_environment("ADMIN", "admin")
37 os.environ["ANTISPAM_MILTER_ADDRESS"] = system.get_host_address_from_environment("ANTISPAM_MILTER", "antispam:11332")
38 os.environ["LMTP_ADDRESS"] = system.get_host_address_from_environment("LMTP", "imap:2525")
39 os.environ["OUTCLEAN"] = os.environ["HOSTNAMES"].split(",")[0]
40 try:
41 os.environ["OUTCLEAN_ADDRESS"] = system.resolve_hostname(os.environ["OUTCLEAN"])
42 except:
43 os.environ["OUTCLEAN_ADDRESS"] = "10.10.10.10"
44
45 for postfix_file in glob.glob("/conf/*.cf"):
46 conf.jinja(postfix_file, os.environ, os.path.join("/etc/postfix", os.path.basename(postfix_file)))
47
48 if os.path.exists("/overrides/postfix.cf"):
49 for line in open("/overrides/postfix.cf").read().strip().split("\n"):
50 if is_valid_postconf_line(line):
51 os.system('postconf -e "{}"'.format(line))
52
53 if os.path.exists("/overrides/postfix.master"):
54 for line in open("/overrides/postfix.master").read().strip().split("\n"):
55 if is_valid_postconf_line(line):
56 os.system('postconf -Me "{}"'.format(line))
57
58 for map_file in glob.glob("/overrides/*.map"):
59 destination = os.path.join("/etc/postfix", os.path.basename(map_file))
60 shutil.copyfile(map_file, destination)
61 os.system("postmap {}".format(destination))
62 os.remove(destination)
63
64 if "RELAYUSER" in os.environ:
65 path = "/etc/postfix/sasl_passwd"
66 conf.jinja("/conf/sasl_passwd", os.environ, path)
67 os.system("postmap {}".format(path))
68
69 # Run Podop and Postfix
70 multiprocessing.Process(target=start_podop).start()
71 os.system("/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing")
72 # Before starting postfix, we need to check permissions on /queue
73 # in the event that postfix,postdrop id have changed
74 os.system("postfix set-permissions")
75 os.system("postfix start-fg")
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/postfix/start.py b/core/postfix/start.py
--- a/core/postfix/start.py
+++ b/core/postfix/start.py
@@ -38,7 +38,11 @@
os.environ["LMTP_ADDRESS"] = system.get_host_address_from_environment("LMTP", "imap:2525")
os.environ["OUTCLEAN"] = os.environ["HOSTNAMES"].split(",")[0]
try:
- os.environ["OUTCLEAN_ADDRESS"] = system.resolve_hostname(os.environ["OUTCLEAN"])
+ _to_lookup = os.environ["OUTCLEAN"]
+ # Ensure we lookup a FQDN: @see #1884
+ if not _to_lookup.endswith('.'):
+ _to_lookup += '.'
+ os.environ["OUTCLEAN_ADDRESS"] = system.resolve_hostname(_to_lookup)
except:
os.environ["OUTCLEAN_ADDRESS"] = "10.10.10.10"
| {"golden_diff": "diff --git a/core/postfix/start.py b/core/postfix/start.py\n--- a/core/postfix/start.py\n+++ b/core/postfix/start.py\n@@ -38,7 +38,11 @@\n os.environ[\"LMTP_ADDRESS\"] = system.get_host_address_from_environment(\"LMTP\", \"imap:2525\")\n os.environ[\"OUTCLEAN\"] = os.environ[\"HOSTNAMES\"].split(\",\")[0]\n try:\n- os.environ[\"OUTCLEAN_ADDRESS\"] = system.resolve_hostname(os.environ[\"OUTCLEAN\"])\n+ _to_lookup = os.environ[\"OUTCLEAN\"]\n+ # Ensure we lookup a FQDN: @see #1884\n+ if not _to_lookup.endswith('.'):\n+ _to_lookup += '.'\n+ os.environ[\"OUTCLEAN_ADDRESS\"] = system.resolve_hostname(_to_lookup)\n except:\n os.environ[\"OUTCLEAN_ADDRESS\"] = \"10.10.10.10\"\n", "issue": "Postfix no longer starts correctly in kubernetes\n## Environment & Versions\r\n### Environment\r\n - [ ] docker-compose\r\n - [X] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n1.8\r\n\r\n## Description\r\nAfter installing mailu 1.8 via helm on Kubernetes, the `mailu-postfix` container runs but never reaches a healthy state and smtp functionality is impaired.\r\n\r\nAfter digging into it I believe the container is failing to become healthy because the following change to the postfix container's startup script (https://github.com/Mailu/Mailu/commit/1d65529c94f54de3cb49ed9584ed95f7860c26fa) and a known issue with the musl resolver the alpine base image uses (https://github.com/kubernetes/kubernetes/issues/64924).\r\n\r\nResolving the mailu installation hostname never succeeds because of the aforementioned bug, and `socrates.system.resolve_hostname` simply retries until the pod's failure threshold is exceeded and is restarted.\r\n\r\nThere's a couple different ways I believe this could be resolved:\r\n\r\n1. Pass a FQDN to `system.resolve_hostname()`, which avoids the resolver bug with search lists, i.e. `domain.com.` with a trailing dot.\r\n\r\n2. Update the deployment manifest in the mailu helm chart to use `dnsConfig.options` on the pod spec to set a more agreeable `ndots` value for `/etc/resolv.conf`\r\n\r\n3. Use a different base image for mailu containers that is not affected by this issue.\r\n\r\nI would be happy to investigate further and file a PR with the appropriate changes based on feedback. 
Thanks!\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(100)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n\t\t(\"transport\", \"url\", url + \"transport/\u00a7\"),\n\t\t(\"alias\", \"url\", url + \"alias/\u00a7\"),\n\t\t(\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"recipientmap\", \"url\", url + \"recipient/map/\u00a7\"),\n (\"sendermap\", \"url\", url + \"sender/map/\u00a7\"),\n (\"senderaccess\", \"url\", url + \"sender/access/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\")\n ])\n\ndef is_valid_postconf_line(line):\n return not line.startswith(\"#\") \\\n and not line == ''\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.get_host_address_from_environment(\"FRONT\", \"front\")\nos.environ[\"ADMIN_ADDRESS\"] = system.get_host_address_from_environment(\"ADMIN\", \"admin\")\nos.environ[\"ANTISPAM_MILTER_ADDRESS\"] = system.get_host_address_from_environment(\"ANTISPAM_MILTER\", \"antispam:11332\")\nos.environ[\"LMTP_ADDRESS\"] = system.get_host_address_from_environment(\"LMTP\", \"imap:2525\")\nos.environ[\"OUTCLEAN\"] = os.environ[\"HOSTNAMES\"].split(\",\")[0]\ntry:\n os.environ[\"OUTCLEAN_ADDRESS\"] = system.resolve_hostname(os.environ[\"OUTCLEAN\"])\nexcept:\n os.environ[\"OUTCLEAN_ADDRESS\"] = \"10.10.10.10\"\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n conf.jinja(postfix_file, os.environ, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n conf.jinja(\"/conf/sasl_passwd\", os.environ, path)\n os.system(\"postmap {}\".format(path))\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n# Before starting postfix, we need to check permissions on /queue\n# in the event that postfix,postdrop id have changed\nos.system(\"postfix set-permissions\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(100)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + 
\"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n\t\t(\"transport\", \"url\", url + \"transport/\u00a7\"),\n\t\t(\"alias\", \"url\", url + \"alias/\u00a7\"),\n\t\t(\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"recipientmap\", \"url\", url + \"recipient/map/\u00a7\"),\n (\"sendermap\", \"url\", url + \"sender/map/\u00a7\"),\n (\"senderaccess\", \"url\", url + \"sender/access/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\")\n ])\n\ndef is_valid_postconf_line(line):\n return not line.startswith(\"#\") \\\n and not line == ''\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.get_host_address_from_environment(\"FRONT\", \"front\")\nos.environ[\"ADMIN_ADDRESS\"] = system.get_host_address_from_environment(\"ADMIN\", \"admin\")\nos.environ[\"ANTISPAM_MILTER_ADDRESS\"] = system.get_host_address_from_environment(\"ANTISPAM_MILTER\", \"antispam:11332\")\nos.environ[\"LMTP_ADDRESS\"] = system.get_host_address_from_environment(\"LMTP\", \"imap:2525\")\nos.environ[\"OUTCLEAN\"] = os.environ[\"HOSTNAMES\"].split(\",\")[0]\ntry:\n _to_lookup = os.environ[\"OUTCLEAN\"]\n # Ensure we lookup a FQDN: @see #1884\n if not _to_lookup.endswith('.'):\n _to_lookup += '.'\n os.environ[\"OUTCLEAN_ADDRESS\"] = system.resolve_hostname(_to_lookup)\nexcept:\n os.environ[\"OUTCLEAN_ADDRESS\"] = \"10.10.10.10\"\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n conf.jinja(postfix_file, os.environ, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n conf.jinja(\"/conf/sasl_passwd\", os.environ, path)\n os.system(\"postmap {}\".format(path))\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n# Before starting postfix, we need to check permissions on /queue\n# in the event that postfix,postdrop id have changed\nos.system(\"postfix set-permissions\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}]} | 1,531 | 207 |
gh_patches_debug_8254 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2448 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add review count to book search listing to concentrate reviews
Often when I'm searching for a book to e.g. mark as having started reading, or to figure out if any other wyrms have reviewed it, I'll have more than one search result.

_two results for wayfarer no 2_
I typically don't really care so much about reviewing a given edition (I read a lot of non-scholarly ebooks). So instead of finding a particular edition, I want to find the one that has been reviewed by people I follow & whose judgement I trust. Similarly, I'd want to contribute _my_ review to that growing pile of context around a given book.
To aid this, I suggest adding some light information markers to the search results. # of reviews would be one concrete suggestion; another would be to display which ones people I'm following have reviewed. Basically use whatever makes sense from a fast query perspective imo :)
Thanks again for bookwyrm! It's a delightful space and I've found _so_ many books over the soon-to-be 2 years since I joined!! u rok
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/templatetags/book_display_tags.py`
Content:
```
1 """ template filters """
2 from django import template
3
4
5 register = template.Library()
6
7
8 @register.filter(name="book_description")
9 def get_book_description(book):
10 """use the work's text if the book doesn't have it"""
11 if book.description:
12 return book.description
13 if book.parent_work:
14 # this shoud always be true
15 return book.parent_work.description
16 return None
17
18
19 @register.simple_tag(takes_context=False)
20 def get_book_file_links(book):
21 """links for a book"""
22 return book.file_links.filter(domain__status="approved")
23
24
25 @register.filter(name="author_edition")
26 def get_author_edition(book, author):
27 """default edition for a book on the author page"""
28 return book.author_edition(author)
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/templatetags/book_display_tags.py b/bookwyrm/templatetags/book_display_tags.py
--- a/bookwyrm/templatetags/book_display_tags.py
+++ b/bookwyrm/templatetags/book_display_tags.py
@@ -1,10 +1,17 @@
""" template filters """
from django import template
+from bookwyrm import models
register = template.Library()
[email protected](name="review_count")
+def get_review_count(book):
+ """how many reviews?"""
+ return models.Review.objects.filter(deleted=False, book=book).count()
+
+
@register.filter(name="book_description")
def get_book_description(book):
"""use the work's text if the book doesn't have it"""
| {"golden_diff": "diff --git a/bookwyrm/templatetags/book_display_tags.py b/bookwyrm/templatetags/book_display_tags.py\n--- a/bookwyrm/templatetags/book_display_tags.py\n+++ b/bookwyrm/templatetags/book_display_tags.py\n@@ -1,10 +1,17 @@\n \"\"\" template filters \"\"\"\n from django import template\n+from bookwyrm import models\n \n \n register = template.Library()\n \n \[email protected](name=\"review_count\")\n+def get_review_count(book):\n+ \"\"\"how many reviews?\"\"\"\n+ return models.Review.objects.filter(deleted=False, book=book).count()\n+\n+\n @register.filter(name=\"book_description\")\n def get_book_description(book):\n \"\"\"use the work's text if the book doesn't have it\"\"\"\n", "issue": "Add review count to book search listing to concentrate reviews\nOften when I'm searching for a book to e.g. mark as having started reading, or to figure out if any other wyrms have reviewed it, I'll have more than one search result. \r\n\r\n\r\n_two results for wayfarer no 2_\r\n\r\nI typically don't really care so much about reviewing a given edition (I read a lot of non-scholarly ebooks). So instead of finding a particular edition, I want to find the one that has been reviewed by people I follow & whose judgement I trust. Similarly, I'd want to contribute _my_ review to that growing pile of context around a given book.\r\n\r\nTo aid this, I suggest adding some light information markers to the search results. # of reviews would be one concrete suggestions, another would be to display which ones people I'm following have reviewed. Basically use whatever makes sense from a fast query perspective imo :)\r\n\r\nThanks again for bookwyrm! It's a delightful space and I've found _so_ many books over the soon-to-be 2 years since I joined!! u rok\n", "before_files": [{"content": "\"\"\" template filters \"\"\"\nfrom django import template\n\n\nregister = template.Library()\n\n\[email protected](name=\"book_description\")\ndef get_book_description(book):\n \"\"\"use the work's text if the book doesn't have it\"\"\"\n if book.description:\n return book.description\n if book.parent_work:\n # this shoud always be true\n return book.parent_work.description\n return None\n\n\[email protected]_tag(takes_context=False)\ndef get_book_file_links(book):\n \"\"\"links for a book\"\"\"\n return book.file_links.filter(domain__status=\"approved\")\n\n\[email protected](name=\"author_edition\")\ndef get_author_edition(book, author):\n \"\"\"default edition for a book on the author page\"\"\"\n return book.author_edition(author)\n", "path": "bookwyrm/templatetags/book_display_tags.py"}], "after_files": [{"content": "\"\"\" template filters \"\"\"\nfrom django import template\nfrom bookwyrm import models\n\n\nregister = template.Library()\n\n\[email protected](name=\"review_count\")\ndef get_review_count(book):\n \"\"\"how many reviews?\"\"\"\n return models.Review.objects.filter(deleted=False, book=book).count()\n\n\[email protected](name=\"book_description\")\ndef get_book_description(book):\n \"\"\"use the work's text if the book doesn't have it\"\"\"\n if book.description:\n return book.description\n if book.parent_work:\n # this shoud always be true\n return book.parent_work.description\n return None\n\n\[email protected]_tag(takes_context=False)\ndef get_book_file_links(book):\n \"\"\"links for a book\"\"\"\n return book.file_links.filter(domain__status=\"approved\")\n\n\[email protected](name=\"author_edition\")\ndef get_author_edition(book, author):\n \"\"\"default edition for a book on the author 
page\"\"\"\n return book.author_edition(author)\n", "path": "bookwyrm/templatetags/book_display_tags.py"}]} | 766 | 169 |
gh_patches_debug_11492 | rasdani/github-patches | git_diff | cobbler__cobbler-3607 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Backport] SafeConfigParser removal
### Original feature issue
Issue #3551 PR #3552
### Target release
- [x] release33
- [ ] release32
- [ ] release30
### Reason
This is needed for Fedora
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/modules/authorization/configfile.py`
Content:
```
1 """
2 Authorization module that allow users listed in
3 /etc/cobbler/users.conf to be permitted to access resources.
4 For instance, when using authz_ldap, you want to use authn_configfile,
5 not authz_allowall, which will most likely NOT do what you want.
6 """
7 # SPDX-License-Identifier: GPL-2.0-or-later
8 # SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
9 # SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
10
11
12 from configparser import SafeConfigParser
13
14 import os
15 from typing import Dict
16
17 CONFIG_FILE = '/etc/cobbler/users.conf'
18
19
20 def register() -> str:
21 """
22 The mandatory Cobbler module registration hook.
23
24 :return: Always "authz".
25 """
26 return "authz"
27
28
29 def __parse_config() -> Dict[str, dict]:
30 """
31 Parse the the users.conf file.
32
33 :return: The data of the config file.
34 """
35 if not os.path.exists(CONFIG_FILE):
36 return {}
37 config = SafeConfigParser()
38 config.read(CONFIG_FILE)
39 alldata = {}
40 groups = config.sections()
41 for g in groups:
42 alldata[str(g)] = {}
43 opts = config.options(g)
44 for o in opts:
45 alldata[g][o] = 1
46 return alldata
47
48
49 def authorize(api_handle, user: str, resource: str, arg1=None, arg2=None) -> int:
50 """
51 Validate a user against a resource. All users in the file are permitted by this module.
52
53 :param api_handle: This parameter is not used currently.
54 :param user: The user to authorize.
55 :param resource: This parameter is not used currently.
56 :param arg1: This parameter is not used currently.
57 :param arg2: This parameter is not used currently.
58 :return: "0" if no authorized, "1" if authorized.
59 """
60 # FIXME: this must be modified to use the new ACL engine
61
62 data = __parse_config()
63 for g in data:
64 if user.lower() in data[g]:
65 return 1
66 return 0
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cobbler/modules/authorization/configfile.py b/cobbler/modules/authorization/configfile.py
--- a/cobbler/modules/authorization/configfile.py
+++ b/cobbler/modules/authorization/configfile.py
@@ -9,7 +9,7 @@
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
-from configparser import SafeConfigParser
+from configparser import ConfigParser
import os
from typing import Dict
@@ -34,7 +34,7 @@
"""
if not os.path.exists(CONFIG_FILE):
return {}
- config = SafeConfigParser()
+ config = ConfigParser()
config.read(CONFIG_FILE)
alldata = {}
groups = config.sections()
| {"golden_diff": "diff --git a/cobbler/modules/authorization/configfile.py b/cobbler/modules/authorization/configfile.py\n--- a/cobbler/modules/authorization/configfile.py\n+++ b/cobbler/modules/authorization/configfile.py\n@@ -9,7 +9,7 @@\n # SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>\n \n \n-from configparser import SafeConfigParser\n+from configparser import ConfigParser\n \n import os\n from typing import Dict\n@@ -34,7 +34,7 @@\n \"\"\"\n if not os.path.exists(CONFIG_FILE):\n return {}\n- config = SafeConfigParser()\n+ config = ConfigParser()\n config.read(CONFIG_FILE)\n alldata = {}\n groups = config.sections()\n", "issue": "[Backport] SafeConfigParser removal\n### Original feature issue\r\n\r\nIssue #3551 PR #3552 \r\n\r\n### Target release\r\n\r\n- [x] release33\r\n- [ ] release32\r\n- [ ] release30\r\n\r\n### Reason\r\n\r\nThis is needed for Fedora\n", "before_files": [{"content": "\"\"\"\nAuthorization module that allow users listed in\n/etc/cobbler/users.conf to be permitted to access resources.\nFor instance, when using authz_ldap, you want to use authn_configfile,\nnot authz_allowall, which will most likely NOT do what you want.\n\"\"\"\n# SPDX-License-Identifier: GPL-2.0-or-later\n# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others\n# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>\n\n\nfrom configparser import SafeConfigParser\n\nimport os\nfrom typing import Dict\n\nCONFIG_FILE = '/etc/cobbler/users.conf'\n\n\ndef register() -> str:\n \"\"\"\n The mandatory Cobbler module registration hook.\n\n :return: Always \"authz\".\n \"\"\"\n return \"authz\"\n\n\ndef __parse_config() -> Dict[str, dict]:\n \"\"\"\n Parse the the users.conf file.\n\n :return: The data of the config file.\n \"\"\"\n if not os.path.exists(CONFIG_FILE):\n return {}\n config = SafeConfigParser()\n config.read(CONFIG_FILE)\n alldata = {}\n groups = config.sections()\n for g in groups:\n alldata[str(g)] = {}\n opts = config.options(g)\n for o in opts:\n alldata[g][o] = 1\n return alldata\n\n\ndef authorize(api_handle, user: str, resource: str, arg1=None, arg2=None) -> int:\n \"\"\"\n Validate a user against a resource. 
All users in the file are permitted by this module.\n\n :param api_handle: This parameter is not used currently.\n :param user: The user to authorize.\n :param resource: This parameter is not used currently.\n :param arg1: This parameter is not used currently.\n :param arg2: This parameter is not used currently.\n :return: \"0\" if no authorized, \"1\" if authorized.\n \"\"\"\n # FIXME: this must be modified to use the new ACL engine\n\n data = __parse_config()\n for g in data:\n if user.lower() in data[g]:\n return 1\n return 0\n", "path": "cobbler/modules/authorization/configfile.py"}], "after_files": [{"content": "\"\"\"\nAuthorization module that allow users listed in\n/etc/cobbler/users.conf to be permitted to access resources.\nFor instance, when using authz_ldap, you want to use authn_configfile,\nnot authz_allowall, which will most likely NOT do what you want.\n\"\"\"\n# SPDX-License-Identifier: GPL-2.0-or-later\n# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others\n# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>\n\n\nfrom configparser import ConfigParser\n\nimport os\nfrom typing import Dict\n\nCONFIG_FILE = '/etc/cobbler/users.conf'\n\n\ndef register() -> str:\n \"\"\"\n The mandatory Cobbler module registration hook.\n\n :return: Always \"authz\".\n \"\"\"\n return \"authz\"\n\n\ndef __parse_config() -> Dict[str, dict]:\n \"\"\"\n Parse the the users.conf file.\n\n :return: The data of the config file.\n \"\"\"\n if not os.path.exists(CONFIG_FILE):\n return {}\n config = ConfigParser()\n config.read(CONFIG_FILE)\n alldata = {}\n groups = config.sections()\n for g in groups:\n alldata[str(g)] = {}\n opts = config.options(g)\n for o in opts:\n alldata[g][o] = 1\n return alldata\n\n\ndef authorize(api_handle, user: str, resource: str, arg1=None, arg2=None) -> int:\n \"\"\"\n Validate a user against a resource. All users in the file are permitted by this module.\n\n :param api_handle: This parameter is not used currently.\n :param user: The user to authorize.\n :param resource: This parameter is not used currently.\n :param arg1: This parameter is not used currently.\n :param arg2: This parameter is not used currently.\n :return: \"0\" if no authorized, \"1\" if authorized.\n \"\"\"\n # FIXME: this must be modified to use the new ACL engine\n\n data = __parse_config()\n for g in data:\n if user.lower() in data[g]:\n return 1\n return 0\n", "path": "cobbler/modules/authorization/configfile.py"}]} | 928 | 159 |
gh_patches_debug_2487 | rasdani/github-patches | git_diff | bokeh__bokeh-10308 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bokehjs' version has duplicated dev suffix
```sh
$ jq '.version' bokehjs/package.json
"2.2.0dev4-dev.4"
```
Should be `2.2.0-dev.4` instead.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `release/config.py`
Content:
```
1 # -----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 # -----------------------------------------------------------------------------
7 """
8
9 """
10
11 # Standard library imports
12 import re
13 from typing import Dict, Optional, Tuple
14
15 # Bokeh imports
16 from .enums import VersionType
17 from .logger import LOG, Scrubber
18
19 __all__ = ("Config",)
20
21 # This excludes "local" build versions, e.g. 0.12.4+19.gf85560a
22 ANY_VERSION = re.compile(r"^((\d+)\.(\d+)\.(\d+))((dev|rc)(\d+))?$")
23
24 FULL_VERSION = re.compile(r"^(\d+\.\d+\.\d+)$")
25
26
27 class Config(object):
28 def __init__(self, version: str) -> None:
29 m = ANY_VERSION.match(version)
30 if not m:
31 raise ValueError(f"Invalid version for Bokeh build/release {version!r}")
32 groups = m.groups()
33
34 self.version: str = version
35
36 self.base_version: str = groups[0]
37 self.base_version_tuple: Tuple[str, ...] = tuple(groups[1:4])
38 self.ext: Optional[str] = groups[4]
39 self.ext_type: str = groups[5]
40 self.ext_number: str = groups[6]
41
42 self._secrets: Dict[str, str] = {}
43
44 def add_secret(self, name: str, secret: str) -> None:
45 """
46
47 """
48 if name in self._secrets:
49 raise RuntimeError()
50 LOG.add_scrubber(Scrubber(secret, name=name))
51 self._secrets[name] = secret
52
53 @property
54 def secrets(self) -> Dict[str, str]:
55 return self._secrets
56
57 @property
58 def prerelease(self) -> bool:
59 return self.version_type != VersionType.FULL
60
61 @property
62 def version_type(self) -> VersionType:
63 if "rc" in self.version:
64 return VersionType.RC
65 elif "dev" in self.version:
66 return VersionType.DEV
67 else:
68 return VersionType.FULL
69
70 @property
71 def js_version(self) -> str:
72 if self.ext is None:
73 return self.version
74 return f"{self.version}-{self.ext_type}.{self.ext_number}"
75
76 @property
77 def release_level(self) -> str:
78 major, minor = self.base_version_tuple[:2]
79 return f"{major}.{minor}"
80
81 @property
82 def staging_branch(self) -> str:
83 return f"staging-{self.version}"
84
85 @property
86 def base_branch(self) -> str:
87 return f"branch-{self.release_level}"
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/release/config.py b/release/config.py
--- a/release/config.py
+++ b/release/config.py
@@ -71,7 +71,7 @@
def js_version(self) -> str:
if self.ext is None:
return self.version
- return f"{self.version}-{self.ext_type}.{self.ext_number}"
+ return f"{self.base_version}-{self.ext_type}.{self.ext_number}"
@property
def release_level(self) -> str:
| {"golden_diff": "diff --git a/release/config.py b/release/config.py\n--- a/release/config.py\n+++ b/release/config.py\n@@ -71,7 +71,7 @@\n def js_version(self) -> str:\n if self.ext is None:\n return self.version\n- return f\"{self.version}-{self.ext_type}.{self.ext_number}\"\n+ return f\"{self.base_version}-{self.ext_type}.{self.ext_number}\"\n \n @property\n def release_level(self) -> str:\n", "issue": "bokehjs' version has duplicated dev suffix\n```sh\r\n$ jq '.version' bokehjs/package.json\r\n\"2.2.0dev4-dev.4\"\r\n```\r\nShould be `2.2.0-dev.4` instead.\n", "before_files": [{"content": "# -----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n# -----------------------------------------------------------------------------\n\"\"\"\n\n\"\"\"\n\n# Standard library imports\nimport re\nfrom typing import Dict, Optional, Tuple\n\n# Bokeh imports\nfrom .enums import VersionType\nfrom .logger import LOG, Scrubber\n\n__all__ = (\"Config\",)\n\n# This excludes \"local\" build versions, e.g. 0.12.4+19.gf85560a\nANY_VERSION = re.compile(r\"^((\\d+)\\.(\\d+)\\.(\\d+))((dev|rc)(\\d+))?$\")\n\nFULL_VERSION = re.compile(r\"^(\\d+\\.\\d+\\.\\d+)$\")\n\n\nclass Config(object):\n def __init__(self, version: str) -> None:\n m = ANY_VERSION.match(version)\n if not m:\n raise ValueError(f\"Invalid version for Bokeh build/release {version!r}\")\n groups = m.groups()\n\n self.version: str = version\n\n self.base_version: str = groups[0]\n self.base_version_tuple: Tuple[str, ...] = tuple(groups[1:4])\n self.ext: Optional[str] = groups[4]\n self.ext_type: str = groups[5]\n self.ext_number: str = groups[6]\n\n self._secrets: Dict[str, str] = {}\n\n def add_secret(self, name: str, secret: str) -> None:\n \"\"\"\n\n \"\"\"\n if name in self._secrets:\n raise RuntimeError()\n LOG.add_scrubber(Scrubber(secret, name=name))\n self._secrets[name] = secret\n\n @property\n def secrets(self) -> Dict[str, str]:\n return self._secrets\n\n @property\n def prerelease(self) -> bool:\n return self.version_type != VersionType.FULL\n\n @property\n def version_type(self) -> VersionType:\n if \"rc\" in self.version:\n return VersionType.RC\n elif \"dev\" in self.version:\n return VersionType.DEV\n else:\n return VersionType.FULL\n\n @property\n def js_version(self) -> str:\n if self.ext is None:\n return self.version\n return f\"{self.version}-{self.ext_type}.{self.ext_number}\"\n\n @property\n def release_level(self) -> str:\n major, minor = self.base_version_tuple[:2]\n return f\"{major}.{minor}\"\n\n @property\n def staging_branch(self) -> str:\n return f\"staging-{self.version}\"\n\n @property\n def base_branch(self) -> str:\n return f\"branch-{self.release_level}\"\n", "path": "release/config.py"}], "after_files": [{"content": "# -----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n# -----------------------------------------------------------------------------\n\"\"\"\n\n\"\"\"\n\n# Standard library imports\nimport re\nfrom typing import Dict, Optional, Tuple\n\n# Bokeh imports\nfrom .enums import VersionType\nfrom .logger import LOG, Scrubber\n\n__all__ = (\"Config\",)\n\n# This excludes \"local\" build versions, e.g. 
0.12.4+19.gf85560a\nANY_VERSION = re.compile(r\"^((\\d+)\\.(\\d+)\\.(\\d+))((dev|rc)(\\d+))?$\")\n\nFULL_VERSION = re.compile(r\"^(\\d+\\.\\d+\\.\\d+)$\")\n\n\nclass Config(object):\n def __init__(self, version: str) -> None:\n m = ANY_VERSION.match(version)\n if not m:\n raise ValueError(f\"Invalid version for Bokeh build/release {version!r}\")\n groups = m.groups()\n\n self.version: str = version\n\n self.base_version: str = groups[0]\n self.base_version_tuple: Tuple[str, ...] = tuple(groups[1:4])\n self.ext: Optional[str] = groups[4]\n self.ext_type: str = groups[5]\n self.ext_number: str = groups[6]\n\n self._secrets: Dict[str, str] = {}\n\n def add_secret(self, name: str, secret: str) -> None:\n \"\"\"\n\n \"\"\"\n if name in self._secrets:\n raise RuntimeError()\n LOG.add_scrubber(Scrubber(secret, name=name))\n self._secrets[name] = secret\n\n @property\n def secrets(self) -> Dict[str, str]:\n return self._secrets\n\n @property\n def prerelease(self) -> bool:\n return self.version_type != VersionType.FULL\n\n @property\n def version_type(self) -> VersionType:\n if \"rc\" in self.version:\n return VersionType.RC\n elif \"dev\" in self.version:\n return VersionType.DEV\n else:\n return VersionType.FULL\n\n @property\n def js_version(self) -> str:\n if self.ext is None:\n return self.version\n return f\"{self.base_version}-{self.ext_type}.{self.ext_number}\"\n\n @property\n def release_level(self) -> str:\n major, minor = self.base_version_tuple[:2]\n return f\"{major}.{minor}\"\n\n @property\n def staging_branch(self) -> str:\n return f\"staging-{self.version}\"\n\n @property\n def base_branch(self) -> str:\n return f\"branch-{self.release_level}\"\n", "path": "release/config.py"}]} | 1,094 | 104 |
gh_patches_debug_2660 | rasdani/github-patches | git_diff | techmatters__terraso-backend-81 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add photo field to the User model
## Description
The user profile photo might be automatically fetched from the third-party account system (Google or Apple), or it can also be uploaded by the user. Since the file itself might be stored on an external storage service, this field will be used to store the location of the file.
In this issue, it's important to consider the flow front-end → back-end for photo upload.
## Suggested subtasks
- [ ] Design the overall flow to upload photo considering front-end → back-end flow
- [ ] Add the new field on the model with proper support for the external storage service (upload) and update DB migrations
- [ ] Implement upload feature to update photo
- [ ] Add support to present the proper photo URL from external services
- [ ] Add the new photo field on user API
This issue depends on:
- #21
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `terraso_backend/apps/graphql/schema/users.py`
Content:
```
1 import graphene
2 from graphene import relay
3 from graphene_django import DjangoObjectType
4
5 from apps.core.models import User
6
7 from .commons import BaseDeleteMutation
8
9
10 class UserNode(DjangoObjectType):
11 id = graphene.ID(source="pk", required=True)
12
13 class Meta:
14 model = User
15 filter_fields = {
16 "email": ["exact", "icontains"],
17 "first_name": ["icontains"],
18 "last_name": ["icontains"],
19 }
20 fields = ("email", "first_name", "last_name", "memberships")
21 interfaces = (relay.Node,)
22
23
24 class UserAddMutation(relay.ClientIDMutation):
25 user = graphene.Field(UserNode)
26
27 class Input:
28 first_name = graphene.String()
29 last_name = graphene.String()
30 email = graphene.String(required=True)
31 password = graphene.String(required=True)
32
33 @classmethod
34 def mutate_and_get_payload(cls, root, info, **kwargs):
35 user = User.objects.create_user(
36 kwargs.pop("email"), password=kwargs.pop("password"), **kwargs
37 )
38
39 return cls(user=user)
40
41
42 class UserUpdateMutation(relay.ClientIDMutation):
43 user = graphene.Field(UserNode)
44
45 model_class = User
46
47 class Input:
48 id = graphene.ID(required=True)
49 first_name = graphene.String()
50 last_name = graphene.String()
51 email = graphene.String()
52 password = graphene.String()
53
54 @classmethod
55 def mutate_and_get_payload(cls, root, info, **kwargs):
56 _id = kwargs.pop("id")
57
58 user = User.objects.get(pk=_id)
59 new_password = kwargs.pop("password", None)
60
61 if new_password:
62 user.set_password(new_password)
63
64 for attr, value in kwargs.items():
65 setattr(user, attr, value)
66
67 user.save()
68
69 return cls(user=user)
70
71
72 class UserDeleteMutation(BaseDeleteMutation):
73 user = graphene.Field(UserNode)
74 model_class = User
75
76 class Input:
77 id = graphene.ID()
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/terraso_backend/apps/graphql/schema/users.py b/terraso_backend/apps/graphql/schema/users.py
--- a/terraso_backend/apps/graphql/schema/users.py
+++ b/terraso_backend/apps/graphql/schema/users.py
@@ -17,7 +17,7 @@
"first_name": ["icontains"],
"last_name": ["icontains"],
}
- fields = ("email", "first_name", "last_name", "memberships")
+ fields = ("email", "first_name", "last_name", "profile_image", "memberships")
interfaces = (relay.Node,)
| {"golden_diff": "diff --git a/terraso_backend/apps/graphql/schema/users.py b/terraso_backend/apps/graphql/schema/users.py\n--- a/terraso_backend/apps/graphql/schema/users.py\n+++ b/terraso_backend/apps/graphql/schema/users.py\n@@ -17,7 +17,7 @@\n \"first_name\": [\"icontains\"],\n \"last_name\": [\"icontains\"],\n }\n- fields = (\"email\", \"first_name\", \"last_name\", \"memberships\")\n+ fields = (\"email\", \"first_name\", \"last_name\", \"profile_image\", \"memberships\")\n interfaces = (relay.Node,)\n", "issue": "Add photo field to the User model\n## Description\r\nThe user profile photo might be automatically fetched from the third-party account system (Google or Apple), or it can also be uploaded from by the user. Since the file itself might be stored on an external storage service, this field will be used to store the location of the file.\r\n\r\nIn this issue, it's important to consider the flow front-end \u2192 back-end for photo upload.\r\n\r\n## Suggested subtasks\r\n- [ ] Design the overall flow to upload photo considering front-end \u2192 back-end flow\r\n- [ ] Add the new field on model with proper support to the external storage service (upload) and update DB migrations\r\n- [ ] Implement upload feature to update photo\r\n- [ ] Add support to present the proper photo URL from external services\r\n- [ ] Add the new photo field on user API\r\n\r\nThis issue depends on:\r\n- #21 \n", "before_files": [{"content": "import graphene\nfrom graphene import relay\nfrom graphene_django import DjangoObjectType\n\nfrom apps.core.models import User\n\nfrom .commons import BaseDeleteMutation\n\n\nclass UserNode(DjangoObjectType):\n id = graphene.ID(source=\"pk\", required=True)\n\n class Meta:\n model = User\n filter_fields = {\n \"email\": [\"exact\", \"icontains\"],\n \"first_name\": [\"icontains\"],\n \"last_name\": [\"icontains\"],\n }\n fields = (\"email\", \"first_name\", \"last_name\", \"memberships\")\n interfaces = (relay.Node,)\n\n\nclass UserAddMutation(relay.ClientIDMutation):\n user = graphene.Field(UserNode)\n\n class Input:\n first_name = graphene.String()\n last_name = graphene.String()\n email = graphene.String(required=True)\n password = graphene.String(required=True)\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **kwargs):\n user = User.objects.create_user(\n kwargs.pop(\"email\"), password=kwargs.pop(\"password\"), **kwargs\n )\n\n return cls(user=user)\n\n\nclass UserUpdateMutation(relay.ClientIDMutation):\n user = graphene.Field(UserNode)\n\n model_class = User\n\n class Input:\n id = graphene.ID(required=True)\n first_name = graphene.String()\n last_name = graphene.String()\n email = graphene.String()\n password = graphene.String()\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **kwargs):\n _id = kwargs.pop(\"id\")\n\n user = User.objects.get(pk=_id)\n new_password = kwargs.pop(\"password\", None)\n\n if new_password:\n user.set_password(new_password)\n\n for attr, value in kwargs.items():\n setattr(user, attr, value)\n\n user.save()\n\n return cls(user=user)\n\n\nclass UserDeleteMutation(BaseDeleteMutation):\n user = graphene.Field(UserNode)\n model_class = User\n\n class Input:\n id = graphene.ID()\n", "path": "terraso_backend/apps/graphql/schema/users.py"}], "after_files": [{"content": "import graphene\nfrom graphene import relay\nfrom graphene_django import DjangoObjectType\n\nfrom apps.core.models import User\n\nfrom .commons import BaseDeleteMutation\n\n\nclass UserNode(DjangoObjectType):\n id = graphene.ID(source=\"pk\", 
required=True)\n\n class Meta:\n model = User\n filter_fields = {\n \"email\": [\"exact\", \"icontains\"],\n \"first_name\": [\"icontains\"],\n \"last_name\": [\"icontains\"],\n }\n fields = (\"email\", \"first_name\", \"last_name\", \"profile_image\", \"memberships\")\n interfaces = (relay.Node,)\n\n\nclass UserAddMutation(relay.ClientIDMutation):\n user = graphene.Field(UserNode)\n\n class Input:\n first_name = graphene.String()\n last_name = graphene.String()\n email = graphene.String(required=True)\n password = graphene.String(required=True)\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **kwargs):\n user = User.objects.create_user(\n kwargs.pop(\"email\"), password=kwargs.pop(\"password\"), **kwargs\n )\n\n return cls(user=user)\n\n\nclass UserUpdateMutation(relay.ClientIDMutation):\n user = graphene.Field(UserNode)\n\n model_class = User\n\n class Input:\n id = graphene.ID(required=True)\n first_name = graphene.String()\n last_name = graphene.String()\n email = graphene.String()\n password = graphene.String()\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **kwargs):\n _id = kwargs.pop(\"id\")\n\n user = User.objects.get(pk=_id)\n new_password = kwargs.pop(\"password\", None)\n\n if new_password:\n user.set_password(new_password)\n\n for attr, value in kwargs.items():\n setattr(user, attr, value)\n\n user.save()\n\n return cls(user=user)\n\n\nclass UserDeleteMutation(BaseDeleteMutation):\n user = graphene.Field(UserNode)\n model_class = User\n\n class Input:\n id = graphene.ID()\n", "path": "terraso_backend/apps/graphql/schema/users.py"}]} | 1,018 | 131 |
gh_patches_debug_12406 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-57 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeEncodeError when the prompt string contains non-ASCII characters.
The prompt call fails if the template settings contain non-ASCII characters.
cookiecutter.json example:
```
{
"full_name": "Jindřich Smitka",
...
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/prompt.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.prompt
6 ---------------------
7
8 Functions for prompting the user for project info.
9 """
10
11 import sys
12
13 PY3 = sys.version > '3'
14 if PY3:
15 iteritems = lambda d: iter(d.items())
16 else:
17 input = raw_input
18 iteritems = lambda d: d.iteritems()
19
20 def prompt_for_config(context):
21 """
22 Prompts the user to enter new config, using context as a source for the
23 field names and sample values.
24 """
25 cookiecutter_dict = {}
26
27 for key, val in iteritems(context['cookiecutter']):
28 prompt = "{0} (default is \"{1}\")? ".format(key, val)
29 new_val = input(prompt)
30 new_val = new_val.strip()
31
32 if new_val == '':
33 new_val = val
34
35 if PY3:
36 cookiecutter_dict[key] = new_val
37 else:
38 cookiecutter_dict[key] = new_val.decode('utf-8')
39 return cookiecutter_dict
40
41
42 def query_yes_no(question, default="yes"):
43 """
44 Ask a yes/no question via `raw_input()` and return their answer.
45
46 :param question: A string that is presented to the user.
47 :param default: The presumed answer if the user just hits <Enter>.
48 It must be "yes" (the default), "no" or None (meaning
49 an answer is required of the user).
50
51 The "answer" return value is one of "yes" or "no".
52
53 Adapted from
54 http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input
55 http://code.activestate.com/recipes/577058/
56
57 """
58 valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False}
59 if default is None:
60 prompt = " [y/n] "
61 elif default == "yes":
62 prompt = " [Y/n] "
63 elif default == "no":
64 prompt = " [y/N] "
65 else:
66 raise ValueError("invalid default answer: '%s'" % default)
67
68 while True:
69 sys.stdout.write(question + prompt)
70 choice = input().lower()
71
72 if default is not None and choice == '':
73 return valid[default]
74 elif choice in valid:
75 return valid[choice]
76 else:
77 sys.stdout.write("Please respond with 'yes' or 'no' "
78 "(or 'y' or 'n').\n")
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -23,15 +23,15 @@
field names and sample values.
"""
cookiecutter_dict = {}
-
+
for key, val in iteritems(context['cookiecutter']):
- prompt = "{0} (default is \"{1}\")? ".format(key, val)
- new_val = input(prompt)
+ prompt = u"{0} (default is \"{1}\")? ".format(key, val)
+ new_val = input(prompt.encode('utf-8'))
new_val = new_val.strip()
if new_val == '':
new_val = val
-
+
if PY3:
cookiecutter_dict[key] = new_val
else:
| {"golden_diff": "diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -23,15 +23,15 @@\n field names and sample values.\n \"\"\"\n cookiecutter_dict = {}\n- \n+\n for key, val in iteritems(context['cookiecutter']):\n- prompt = \"{0} (default is \\\"{1}\\\")? \".format(key, val)\n- new_val = input(prompt)\n+ prompt = u\"{0} (default is \\\"{1}\\\")? \".format(key, val)\n+ new_val = input(prompt.encode('utf-8'))\n new_val = new_val.strip()\n \n if new_val == '':\n new_val = val\n- \n+\n if PY3:\n cookiecutter_dict[key] = new_val\n else:\n", "issue": "UnicodeEncodeError when the prompt string contains non ascii characters.\nThe call prompt fails, if the template settings contains non-ASCII characters.\n\ncookiecutter.json example:\n\n```\n{\n \"full_name\": \"Jind\u0159ich Smitka\",\n ...\n}\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nimport sys\n\nPY3 = sys.version > '3'\nif PY3:\n iteritems = lambda d: iter(d.items())\nelse:\n input = raw_input\n iteritems = lambda d: d.iteritems()\n\ndef prompt_for_config(context):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n \"\"\"\n cookiecutter_dict = {}\n \n for key, val in iteritems(context['cookiecutter']):\n prompt = \"{0} (default is \\\"{1}\\\")? \".format(key, val)\n new_val = input(prompt)\n new_val = new_val.strip()\n\n if new_val == '':\n new_val = val\n \n if PY3:\n cookiecutter_dict[key] = new_val\n else:\n cookiecutter_dict[key] = new_val.decode('utf-8')\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default=\"yes\"):\n \"\"\"\n Ask a yes/no question via `raw_input()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {\"yes\": True, \"y\": True, \"ye\": True, \"no\": False, \"n\": False}\n if default is None:\n prompt = \" [y/n] \"\n elif default == \"yes\":\n prompt = \" [Y/n] \"\n elif default == \"no\":\n prompt = \" [y/N] \"\n else:\n raise ValueError(\"invalid default answer: '%s'\" % default)\n\n while True:\n sys.stdout.write(question + prompt)\n choice = input().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write(\"Please respond with 'yes' or 'no' \"\n \"(or 'y' or 'n').\\n\")\n", "path": "cookiecutter/prompt.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nimport sys\n\nPY3 = sys.version > '3'\nif PY3:\n iteritems = lambda d: iter(d.items())\nelse:\n input = raw_input\n iteritems = lambda d: d.iteritems()\n\ndef prompt_for_config(context):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n \"\"\"\n cookiecutter_dict = {}\n\n for key, val in iteritems(context['cookiecutter']):\n prompt 
= u\"{0} (default is \\\"{1}\\\")? \".format(key, val)\n new_val = input(prompt.encode('utf-8'))\n new_val = new_val.strip()\n\n if new_val == '':\n new_val = val\n\n if PY3:\n cookiecutter_dict[key] = new_val\n else:\n cookiecutter_dict[key] = new_val.decode('utf-8')\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default=\"yes\"):\n \"\"\"\n Ask a yes/no question via `raw_input()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {\"yes\": True, \"y\": True, \"ye\": True, \"no\": False, \"n\": False}\n if default is None:\n prompt = \" [y/n] \"\n elif default == \"yes\":\n prompt = \" [Y/n] \"\n elif default == \"no\":\n prompt = \" [y/N] \"\n else:\n raise ValueError(\"invalid default answer: '%s'\" % default)\n\n while True:\n sys.stdout.write(question + prompt)\n choice = input().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write(\"Please respond with 'yes' or 'no' \"\n \"(or 'y' or 'n').\\n\")\n", "path": "cookiecutter/prompt.py"}]} | 1,023 | 194 |
gh_patches_debug_25360 | rasdani/github-patches | git_diff | ansible__ansible-lint-436 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
false positive on apt_key with data field
# Issue Type
- Bug report
# Ansible and Ansible Lint details
```
$ ansible --version
ansible 2.7.4
config file = /home/lae/.ansible.cfg
configured module search path = ['/home/lae/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.1 (default, Oct 22 2018, 10:41:28) [GCC 8.2.1 20180831]
$ ansible-lint --version
ansible-lint 4.0.0a1
```
- ansible installation method: OS
- ansible-lint installation method: pip
# Desired Behaviour
Rule 405 is meant for remote connections but using `apt_key` with the `data` field doesn't require network connectivity (not sure if there's an appropriate network lookup, but if so, that would be an exception).
# Actual Behaviour
```yaml
- name: Trust Proxmox' packaging key
apt_key:
data: "{{ lookup('file', pve_release_key) }}"
id: "{{ pve_release_key_id }}"
state: present
```
The above results in the following.
```
[405] Remote package tasks should have a retry
/home/lae/src/ansible-role-proxmox/tasks/main.yml:47
Task/Handler: Trust Proxmox' packaging key
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansiblelint/rules/PackageHasRetryRule.py`
Content:
```
1 # Copyright (c) 2016, Will Thames and contributors
2 # Copyright (c) 2018, Ansible Project
3
4 from ansiblelint import AnsibleLintRule
5
6
7 class PackageHasRetryRule(AnsibleLintRule):
8 id = '405'
9 shortdesc = 'Remote package tasks should have a retry'
10 description = (
11 'Package operations are unreliable as they require '
12 'network communication and the availability of remote '
13 'servers. To mitigate the potential problems, retries '
14 'should be used via '
15 '``register: my_result`` and ``until: my_result is succeeded``'
16 )
17 severity = 'LOW'
18 tags = ['module', 'reliability']
19 version_added = 'v4.0.0'
20
21 # module list generated with:
22 # find lib/ansible/modules/packaging/ -type f -printf '%f\n' \
23 # | sort | awk -F '/' \
24 # '/__|dpkg|_repo|_facts|_sub|_chan/{next} {split($NF, words, ".");
25 # print "\""words[1]"\","}'
26 _package_modules = [
27 "apk",
28 "apt_key",
29 "apt",
30 "apt_rpm",
31 "bower",
32 "bundler",
33 "composer",
34 "cpanm",
35 "dnf",
36 "easy_install",
37 "flatpak",
38 "flatpak_remote",
39 "gem",
40 "homebrew_cask",
41 "homebrew",
42 "homebrew_tap",
43 "layman",
44 "macports",
45 "maven_artifact",
46 "npm",
47 "openbsd_pkg",
48 "opkg",
49 "package",
50 "pacman",
51 "pear",
52 "pip",
53 "pkg5_publisher",
54 "pkg5",
55 "pkgin",
56 "pkgng",
57 "pkgutil",
58 "portage",
59 "portinstall",
60 "rhn_register",
61 "rpm_key",
62 "slackpkg",
63 "snap",
64 "sorcery",
65 "svr4pkg",
66 "swdepot",
67 "swupd",
68 "urpmi",
69 "xbps",
70 "yarn",
71 "yum",
72 "zypper",
73 ]
74
75 _module_ignore_states = [
76 "absent",
77 ]
78
79 _package_name_keys = [
80 "name",
81 "package",
82 "pkg",
83 "deb",
84 ]
85
86 # attempt to find package name
87 def get_package_name(self, action):
88 for key in self._package_name_keys:
89 found_package_name = action.get(key)
90 if found_package_name:
91 break
92 return found_package_name
93
94 def matchtask(self, file, task):
95 module = task["action"]["__ansible_module__"]
96
97 if module not in self._package_modules:
98 return False
99
100 is_task_retryable = 'until' in task
101 if is_task_retryable:
102 return False
103
104 is_state_whitelisted = task['action'].get('state') in self._module_ignore_states
105 if is_state_whitelisted:
106 return False
107
108 found_package_name = self.get_package_name(task['action'])
109 if not found_package_name:
110 return True
111
112 is_package_file = '.' in found_package_name
113 is_package_html = '://' in found_package_name
114 is_local_package_file = is_package_file and not is_package_html
115 if is_local_package_file:
116 return False
117
118 return True
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansiblelint/rules/PackageHasRetryRule.py b/lib/ansiblelint/rules/PackageHasRetryRule.py
--- a/lib/ansiblelint/rules/PackageHasRetryRule.py
+++ b/lib/ansiblelint/rules/PackageHasRetryRule.py
@@ -76,19 +76,24 @@
"absent",
]
+ _module_ignore_parameters = [
+ "data",
+ ]
+
_package_name_keys = [
"name",
"package",
"pkg",
"deb",
+ "key",
]
- # attempt to find package name
def get_package_name(self, action):
+ """Attempt to find package name."""
for key in self._package_name_keys:
found_package_name = action.get(key)
if found_package_name:
- break
+ return found_package_name
return found_package_name
def matchtask(self, file, task):
@@ -105,6 +110,12 @@
if is_state_whitelisted:
return False
+ has_whitelisted_parameter = (
+ set(self._module_ignore_parameters).intersection(set(task['action']))
+ )
+ if has_whitelisted_parameter:
+ return False
+
found_package_name = self.get_package_name(task['action'])
if not found_package_name:
return True
| {"golden_diff": "diff --git a/lib/ansiblelint/rules/PackageHasRetryRule.py b/lib/ansiblelint/rules/PackageHasRetryRule.py\n--- a/lib/ansiblelint/rules/PackageHasRetryRule.py\n+++ b/lib/ansiblelint/rules/PackageHasRetryRule.py\n@@ -76,19 +76,24 @@\n \"absent\",\n ]\n \n+ _module_ignore_parameters = [\n+ \"data\",\n+ ]\n+\n _package_name_keys = [\n \"name\",\n \"package\",\n \"pkg\",\n \"deb\",\n+ \"key\",\n ]\n \n- # attempt to find package name\n def get_package_name(self, action):\n+ \"\"\"Attempt to find package name.\"\"\"\n for key in self._package_name_keys:\n found_package_name = action.get(key)\n if found_package_name:\n- break\n+ return found_package_name\n return found_package_name\n \n def matchtask(self, file, task):\n@@ -105,6 +110,12 @@\n if is_state_whitelisted:\n return False\n \n+ has_whitelisted_parameter = (\n+ set(self._module_ignore_parameters).intersection(set(task['action']))\n+ )\n+ if has_whitelisted_parameter:\n+ return False\n+\n found_package_name = self.get_package_name(task['action'])\n if not found_package_name:\n return True\n", "issue": "false positive on apt_key with data field\n# Issue Type\r\n- Bug report\r\n\r\n# Ansible and Ansible Lint details\r\n\r\n```\r\n$ ansible --version\r\nansible 2.7.4\r\n config file = /home/lae/.ansible.cfg\r\n configured module search path = ['/home/lae/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.7/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.7.1 (default, Oct 22 2018, 10:41:28) [GCC 8.2.1 20180831]\r\n$ ansible-lint --version\r\nansible-lint 4.0.0a1\r\n```\r\n\r\n- ansible installation method: OS\r\n- ansible-lint installation method: pip\r\n\r\n# Desired Behaviour\r\n\r\nRule 405 is meant for remote connections but using `apt_key` with the `data` field doesn't require network connectivity (not sure if there's an appropriate network lookup, but if so, that would be an exception).\r\n\r\n# Actual Behaviour\r\n\r\n```yaml\r\n- name: Trust Proxmox' packaging key\r\n apt_key:\r\n data: \"{{ lookup('file', pve_release_key) }}\"\r\n id: \"{{ pve_release_key_id }}\"\r\n state: present\r\n```\r\n\r\nThe above results in the following.\r\n\r\n```\r\n[405] Remote package tasks should have a retry\r\n/home/lae/src/ansible-role-proxmox/tasks/main.yml:47\r\nTask/Handler: Trust Proxmox' packaging key\r\n```\n", "before_files": [{"content": "# Copyright (c) 2016, Will Thames and contributors\n# Copyright (c) 2018, Ansible Project\n\nfrom ansiblelint import AnsibleLintRule\n\n\nclass PackageHasRetryRule(AnsibleLintRule):\n id = '405'\n shortdesc = 'Remote package tasks should have a retry'\n description = (\n 'Package operations are unreliable as they require '\n 'network communication and the availability of remote '\n 'servers. 
To mitigate the potential problems, retries '\n 'should be used via '\n '``register: my_result`` and ``until: my_result is succeeded``'\n )\n severity = 'LOW'\n tags = ['module', 'reliability']\n version_added = 'v4.0.0'\n\n # module list generated with:\n # find lib/ansible/modules/packaging/ -type f -printf '%f\\n' \\\n # | sort | awk -F '/' \\\n # '/__|dpkg|_repo|_facts|_sub|_chan/{next} {split($NF, words, \".\");\n # print \"\\\"\"words[1]\"\\\",\"}'\n _package_modules = [\n \"apk\",\n \"apt_key\",\n \"apt\",\n \"apt_rpm\",\n \"bower\",\n \"bundler\",\n \"composer\",\n \"cpanm\",\n \"dnf\",\n \"easy_install\",\n \"flatpak\",\n \"flatpak_remote\",\n \"gem\",\n \"homebrew_cask\",\n \"homebrew\",\n \"homebrew_tap\",\n \"layman\",\n \"macports\",\n \"maven_artifact\",\n \"npm\",\n \"openbsd_pkg\",\n \"opkg\",\n \"package\",\n \"pacman\",\n \"pear\",\n \"pip\",\n \"pkg5_publisher\",\n \"pkg5\",\n \"pkgin\",\n \"pkgng\",\n \"pkgutil\",\n \"portage\",\n \"portinstall\",\n \"rhn_register\",\n \"rpm_key\",\n \"slackpkg\",\n \"snap\",\n \"sorcery\",\n \"svr4pkg\",\n \"swdepot\",\n \"swupd\",\n \"urpmi\",\n \"xbps\",\n \"yarn\",\n \"yum\",\n \"zypper\",\n ]\n\n _module_ignore_states = [\n \"absent\",\n ]\n\n _package_name_keys = [\n \"name\",\n \"package\",\n \"pkg\",\n \"deb\",\n ]\n\n # attempt to find package name\n def get_package_name(self, action):\n for key in self._package_name_keys:\n found_package_name = action.get(key)\n if found_package_name:\n break\n return found_package_name\n\n def matchtask(self, file, task):\n module = task[\"action\"][\"__ansible_module__\"]\n\n if module not in self._package_modules:\n return False\n\n is_task_retryable = 'until' in task\n if is_task_retryable:\n return False\n\n is_state_whitelisted = task['action'].get('state') in self._module_ignore_states\n if is_state_whitelisted:\n return False\n\n found_package_name = self.get_package_name(task['action'])\n if not found_package_name:\n return True\n\n is_package_file = '.' in found_package_name\n is_package_html = '://' in found_package_name\n is_local_package_file = is_package_file and not is_package_html\n if is_local_package_file:\n return False\n\n return True\n", "path": "lib/ansiblelint/rules/PackageHasRetryRule.py"}], "after_files": [{"content": "# Copyright (c) 2016, Will Thames and contributors\n# Copyright (c) 2018, Ansible Project\n\nfrom ansiblelint import AnsibleLintRule\n\n\nclass PackageHasRetryRule(AnsibleLintRule):\n id = '405'\n shortdesc = 'Remote package tasks should have a retry'\n description = (\n 'Package operations are unreliable as they require '\n 'network communication and the availability of remote '\n 'servers. 
To mitigate the potential problems, retries '\n 'should be used via '\n '``register: my_result`` and ``until: my_result is succeeded``'\n )\n severity = 'LOW'\n tags = ['module', 'reliability']\n version_added = 'v4.0.0'\n\n # module list generated with:\n # find lib/ansible/modules/packaging/ -type f -printf '%f\\n' \\\n # | sort | awk -F '/' \\\n # '/__|dpkg|_repo|_facts|_sub|_chan/{next} {split($NF, words, \".\");\n # print \"\\\"\"words[1]\"\\\",\"}'\n _package_modules = [\n \"apk\",\n \"apt_key\",\n \"apt\",\n \"apt_rpm\",\n \"bower\",\n \"bundler\",\n \"composer\",\n \"cpanm\",\n \"dnf\",\n \"easy_install\",\n \"flatpak\",\n \"flatpak_remote\",\n \"gem\",\n \"homebrew_cask\",\n \"homebrew\",\n \"homebrew_tap\",\n \"layman\",\n \"macports\",\n \"maven_artifact\",\n \"npm\",\n \"openbsd_pkg\",\n \"opkg\",\n \"package\",\n \"pacman\",\n \"pear\",\n \"pip\",\n \"pkg5_publisher\",\n \"pkg5\",\n \"pkgin\",\n \"pkgng\",\n \"pkgutil\",\n \"portage\",\n \"portinstall\",\n \"rhn_register\",\n \"rpm_key\",\n \"slackpkg\",\n \"snap\",\n \"sorcery\",\n \"svr4pkg\",\n \"swdepot\",\n \"swupd\",\n \"urpmi\",\n \"xbps\",\n \"yarn\",\n \"yum\",\n \"zypper\",\n ]\n\n _module_ignore_states = [\n \"absent\",\n ]\n\n _module_ignore_parameters = [\n \"data\",\n ]\n\n _package_name_keys = [\n \"name\",\n \"package\",\n \"pkg\",\n \"deb\",\n \"key\",\n ]\n\n def get_package_name(self, action):\n \"\"\"Attempt to find package name.\"\"\"\n for key in self._package_name_keys:\n found_package_name = action.get(key)\n if found_package_name:\n return found_package_name\n return found_package_name\n\n def matchtask(self, file, task):\n module = task[\"action\"][\"__ansible_module__\"]\n\n if module not in self._package_modules:\n return False\n\n is_task_retryable = 'until' in task\n if is_task_retryable:\n return False\n\n is_state_whitelisted = task['action'].get('state') in self._module_ignore_states\n if is_state_whitelisted:\n return False\n\n has_whitelisted_parameter = (\n set(self._module_ignore_parameters).intersection(set(task['action']))\n )\n if has_whitelisted_parameter:\n return False\n\n found_package_name = self.get_package_name(task['action'])\n if not found_package_name:\n return True\n\n is_package_file = '.' in found_package_name\n is_package_html = '://' in found_package_name\n is_local_package_file = is_package_file and not is_package_html\n if is_local_package_file:\n return False\n\n return True\n", "path": "lib/ansiblelint/rules/PackageHasRetryRule.py"}]} | 1,635 | 298 |
gh_patches_debug_15277 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3316 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider verizon is broken
During the global build at 2021-07-14-14-42-22, spider **verizon** failed with **4611 features** and **1645 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-07-14-14-42-22/logs/verizon.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-07-14-14-42-22/output/verizon.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-07-14-14-42-22/output/verizon.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/verizon.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 import re
5
6 from locations.items import GeojsonPointItem
7 from locations.hours import OpeningHours
8
9
10 class VerizonSpider(scrapy.Spider):
11 name = "verizon"
12 item_attributes = { 'brand': "Verizon" }
13 allowed_domains = ["www.verizonwireless.com"]
14 start_urls = (
15 'https://www.verizonwireless.com/sitemap_storelocator.xml',
16 )
17 custom_settings = {
18 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
19 }
20
21 def parse_hours(self, store_hours):
22 opening_hours = OpeningHours()
23 for store_day in store_hours['dayOfWeek']:
24 if store_day.lower() == 'closed':
25 continue
26 else:
27 day, open_close = store_day.split('-')
28 day = day.strip()[:2]
29 open_time = ' '.join(open_close.strip().split(' ', 2)[0:2])
30 if open_time.split(' ')[0].lower() == 'closed':
31 continue
32 elif open_time.split(' ')[0].lower() == 'null':
33 continue
34 else:
35 if open_close.strip().count(' ') == 1:
36 open_time, close_time = open_time.split(' ')
37 opening_hours.add_range(day=day,
38 open_time=open_time,
39 close_time=close_time,
40 time_format='%I:%M%p'
41 )
42 elif open_close.strip().count(' ') == 2:
43 open_time = open_close.strip().split(' ')[0]
44 close_time = ''.join(open_close.strip().split(' ')[1:3])
45 opening_hours.add_range(day=day,
46 open_time=open_time,
47 close_time=close_time,
48 time_format='%I:%M%p'
49 )
50 else:
51 close_time = open_close.strip().split(' ', 2)[2]
52 opening_hours.add_range(day=day,
53 open_time=open_time,
54 close_time=close_time,
55 time_format='%I:%M %p'
56 )
57
58 return opening_hours.as_opening_hours()
59
60 def parse(self, response):
61 response.selector.remove_namespaces()
62 urls = response.xpath('//url/loc/text()').extract()
63
64 for url in urls:
65 yield scrapy.Request(url, callback=self.parse_store)
66
67 def parse_store(self, response):
68 script = response.xpath('//script[contains(text(), "storeJSON")]/text()').extract_first()
69 store_data = json.loads(re.search(r'var storeJSON = (.*);', script).group(1))
70
71 properties = {
72 'name': store_data["storeName"],
73 'ref': store_data["storeNumber"],
74 'addr_full': store_data["address"]["streetAddress"],
75 'city': store_data["address"]["addressLocality"],
76 'state': store_data["address"]["addressRegion"],
77 'postcode': store_data["address"]["postalCode"],
78 'country': store_data["address"]["addressCountry"],
79 'phone': store_data.get("telephone"),
80 'website': store_data.get("url") or response.url,
81 'lat': store_data["geo"].get("latitude"),
82 'lon': store_data["geo"].get("longitude"),
83 'extras': {
84 'business_name': store_data.get('posStoreDetail').get('businessName'),
85 'retail_id': store_data.get('retailId'),
86 'store_type': store_data.get('posStoreDetail').get('storeType'),
87 'store_type_note': store_data.get('typeOfStore')
88 }
89 }
90
91 hours = self.parse_hours(store_data.get("openingHoursSpecification"))
92 if hours:
93 properties["opening_hours"] = hours
94
95 yield GeojsonPointItem(**properties)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/verizon.py b/locations/spiders/verizon.py
--- a/locations/spiders/verizon.py
+++ b/locations/spiders/verizon.py
@@ -62,10 +62,15 @@
urls = response.xpath('//url/loc/text()').extract()
for url in urls:
- yield scrapy.Request(url, callback=self.parse_store)
+ if url.split('/')[-2].split('-')[-1].isdigit():
+ # Store pages have a number at the end of their URL
+ yield scrapy.Request(url, callback=self.parse_store)
def parse_store(self, response):
script = response.xpath('//script[contains(text(), "storeJSON")]/text()').extract_first()
+ if not script:
+ return
+
store_data = json.loads(re.search(r'var storeJSON = (.*);', script).group(1))
properties = {
| {"golden_diff": "diff --git a/locations/spiders/verizon.py b/locations/spiders/verizon.py\n--- a/locations/spiders/verizon.py\n+++ b/locations/spiders/verizon.py\n@@ -62,10 +62,15 @@\n urls = response.xpath('//url/loc/text()').extract()\n \n for url in urls:\n- yield scrapy.Request(url, callback=self.parse_store)\n+ if url.split('/')[-2].split('-')[-1].isdigit():\n+ # Store pages have a number at the end of their URL\n+ yield scrapy.Request(url, callback=self.parse_store)\n \n def parse_store(self, response):\n script = response.xpath('//script[contains(text(), \"storeJSON\")]/text()').extract_first()\n+ if not script:\n+ return\n+\n store_data = json.loads(re.search(r'var storeJSON = (.*);', script).group(1))\n \n properties = {\n", "issue": "Spider verizon is broken\nDuring the global build at 2021-07-14-14-42-22, spider **verizon** failed with **4611 features** and **1645 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-07-14-14-42-22/logs/verizon.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-07-14-14-42-22/output/verizon.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-07-14-14-42-22/output/verizon.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass VerizonSpider(scrapy.Spider):\n name = \"verizon\"\n item_attributes = { 'brand': \"Verizon\" }\n allowed_domains = [\"www.verizonwireless.com\"]\n start_urls = (\n 'https://www.verizonwireless.com/sitemap_storelocator.xml',\n )\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n for store_day in store_hours['dayOfWeek']:\n if store_day.lower() == 'closed':\n continue\n else:\n day, open_close = store_day.split('-')\n day = day.strip()[:2]\n open_time = ' '.join(open_close.strip().split(' ', 2)[0:2])\n if open_time.split(' ')[0].lower() == 'closed':\n continue\n elif open_time.split(' ')[0].lower() == 'null':\n continue\n else:\n if open_close.strip().count(' ') == 1:\n open_time, close_time = open_time.split(' ')\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%I:%M%p'\n )\n elif open_close.strip().count(' ') == 2:\n open_time = open_close.strip().split(' ')[0]\n close_time = ''.join(open_close.strip().split(' ')[1:3])\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%I:%M%p'\n )\n else:\n close_time = open_close.strip().split(' ', 2)[2]\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%I:%M %p'\n )\n\n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n response.selector.remove_namespaces()\n urls = response.xpath('//url/loc/text()').extract()\n\n for url in urls:\n yield scrapy.Request(url, callback=self.parse_store)\n\n def parse_store(self, response):\n script = response.xpath('//script[contains(text(), \"storeJSON\")]/text()').extract_first()\n store_data = json.loads(re.search(r'var storeJSON = (.*);', script).group(1))\n\n properties = {\n 'name': store_data[\"storeName\"],\n 'ref': store_data[\"storeNumber\"],\n 'addr_full': store_data[\"address\"][\"streetAddress\"],\n 'city': store_data[\"address\"][\"addressLocality\"],\n 'state': 
store_data[\"address\"][\"addressRegion\"],\n 'postcode': store_data[\"address\"][\"postalCode\"],\n 'country': store_data[\"address\"][\"addressCountry\"],\n 'phone': store_data.get(\"telephone\"),\n 'website': store_data.get(\"url\") or response.url,\n 'lat': store_data[\"geo\"].get(\"latitude\"),\n 'lon': store_data[\"geo\"].get(\"longitude\"),\n 'extras': {\n 'business_name': store_data.get('posStoreDetail').get('businessName'),\n 'retail_id': store_data.get('retailId'),\n 'store_type': store_data.get('posStoreDetail').get('storeType'),\n 'store_type_note': store_data.get('typeOfStore')\n }\n }\n\n hours = self.parse_hours(store_data.get(\"openingHoursSpecification\"))\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/verizon.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass VerizonSpider(scrapy.Spider):\n name = \"verizon\"\n item_attributes = { 'brand': \"Verizon\" }\n allowed_domains = [\"www.verizonwireless.com\"]\n start_urls = (\n 'https://www.verizonwireless.com/sitemap_storelocator.xml',\n )\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n for store_day in store_hours['dayOfWeek']:\n if store_day.lower() == 'closed':\n continue\n else:\n day, open_close = store_day.split('-')\n day = day.strip()[:2]\n open_time = ' '.join(open_close.strip().split(' ', 2)[0:2])\n if open_time.split(' ')[0].lower() == 'closed':\n continue\n elif open_time.split(' ')[0].lower() == 'null':\n continue\n else:\n if open_close.strip().count(' ') == 1:\n open_time, close_time = open_time.split(' ')\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%I:%M%p'\n )\n elif open_close.strip().count(' ') == 2:\n open_time = open_close.strip().split(' ')[0]\n close_time = ''.join(open_close.strip().split(' ')[1:3])\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%I:%M%p'\n )\n else:\n close_time = open_close.strip().split(' ', 2)[2]\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%I:%M %p'\n )\n\n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n response.selector.remove_namespaces()\n urls = response.xpath('//url/loc/text()').extract()\n\n for url in urls:\n if url.split('/')[-2].split('-')[-1].isdigit():\n # Store pages have a number at the end of their URL\n yield scrapy.Request(url, callback=self.parse_store)\n\n def parse_store(self, response):\n script = response.xpath('//script[contains(text(), \"storeJSON\")]/text()').extract_first()\n if not script:\n return\n\n store_data = json.loads(re.search(r'var storeJSON = (.*);', script).group(1))\n\n properties = {\n 'name': store_data[\"storeName\"],\n 'ref': store_data[\"storeNumber\"],\n 'addr_full': store_data[\"address\"][\"streetAddress\"],\n 'city': store_data[\"address\"][\"addressLocality\"],\n 'state': store_data[\"address\"][\"addressRegion\"],\n 'postcode': store_data[\"address\"][\"postalCode\"],\n 'country': store_data[\"address\"][\"addressCountry\"],\n 'phone': store_data.get(\"telephone\"),\n 'website': store_data.get(\"url\") or response.url,\n 'lat': 
store_data[\"geo\"].get(\"latitude\"),\n 'lon': store_data[\"geo\"].get(\"longitude\"),\n 'extras': {\n 'business_name': store_data.get('posStoreDetail').get('businessName'),\n 'retail_id': store_data.get('retailId'),\n 'store_type': store_data.get('posStoreDetail').get('storeType'),\n 'store_type_note': store_data.get('typeOfStore')\n }\n }\n\n hours = self.parse_hours(store_data.get(\"openingHoursSpecification\"))\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/verizon.py"}]} | 1,470 | 204 |
gh_patches_debug_4124 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-541 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Archive version 0.1.1 on Zenodo and get a DOI
There is a partnership between [Zenodo](https://zenodo.org/) and GitHub that allows Zenodo to archive releases and [make code citable](https://guides.github.com/activities/citable-code/). Zenodo can then mint a digital object identifier (DOI) that would make that version of PlasmaPy citable. We can also get a persistent DOI that would always refer to the most recent version. We should make archiving our release on Zenodo part of our regular release process. [SunPy](https://doi.org/10.5281/zenodo.591887) has done this already.
- [x] Link the PlasmaPy organization/repository to Zenodo
- [ ] Archive version 0.1 on Zenodo and get a persistent DOI and a release DOI
- [ ] Put a badge on our main README.md for our DOI
- [ ] Put the persistent DOI in our docs and on our main website, along with instructions on how to cite the code
- [ ] Document instructions on how to put a release on Zenodo in our release guide.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/__init__.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2
3 # Packages may add whatever they like to this file, but
4 # should keep this content at the top.
5 # ----------------------------------------------------------------------------
6 from ._base_init import *
7 # ----------------------------------------------------------------------------
8
9 # Enforce Python version check during package import.
10 # This is the same check as the one at the top of setup.py
11 import sys
12
13 __name__ = "plasmapy"
14
15 __doc__ = ("A community-developed and community-driven open source "
16 "core Python package for plasma physics.")
17
18
19 class UnsupportedPythonError(Exception):
20 pass
21
22
23 if sys.version_info < tuple((int(val) for val in "3.6".split('.'))):
24 raise UnsupportedPythonError("plasmapy does not support Python < {}".format(3.6))
25
26 if not _ASTROPY_SETUP_:
27 # For egg_info test builds to pass, put package imports here.
28 from . import atomic
29 from . import classes
30 from . import constants
31 from . import diagnostics
32 from . import mathematics
33 from . import physics
34 from . import utils
35
36 def online_help(query):
37 """
38 Search the online PlasmaPy documentation for the given query from plasmapy.org
39 Opens the results in the default web browser.
40 Requires an active Internet connection.
41 Redirects to Astropy.units in case of query 'unit' or 'units'
42
43 Parameters
44 ----------
45 query : str
46 The search query.
47 """
48 from urllib.parse import urlencode
49 import webbrowser
50
51 url = ('http://docs.plasmapy.org/en/stable/search.html?'
52 '{0}&check_keywords=yes&area=default').format(urlencode({'q': query}))
53
54 if(query.lower() in ('unit', 'units')):
55 url = 'http://docs.astropy.org/en/stable/units/'
56
57 webbrowser.open(url)
58
59 __citation__ = """@misc{plasmapy_community_2018_1238132,
60 author = {PlasmaPy Community and
61 Murphy, Nicholas A. and
62 Leonard, Andrew J. and
63 Sta\'nczak, Dominik and
64 Kozlowski, Pawel M. and
65 Langendorf, Samuel J. and
66 Haggerty, Colby C. and
67 Beckers, Jasper P. and
68 Mumford, Stuart J. and
69 Parashar, Tulasi N. and
70 Huang, Yi-Min},
71 title = {{PlasmaPy: an open source community-developed
72 Python package for plasma physics}},
73 month = apr,
74 year = 2018,
75 doi = {10.5281/zenodo.1238132},
76 url = {https://doi.org/10.5281/zenodo.1238132}
77 }"""
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plasmapy/__init__.py b/plasmapy/__init__.py
--- a/plasmapy/__init__.py
+++ b/plasmapy/__init__.py
@@ -57,7 +57,7 @@
webbrowser.open(url)
__citation__ = """@misc{plasmapy_community_2018_1238132,
- author = {PlasmaPy Community and
+ author = {{PlasmaPy Community} and
Murphy, Nicholas A. and
Leonard, Andrew J. and
Sta\'nczak, Dominik and
| {"golden_diff": "diff --git a/plasmapy/__init__.py b/plasmapy/__init__.py\n--- a/plasmapy/__init__.py\n+++ b/plasmapy/__init__.py\n@@ -57,7 +57,7 @@\n webbrowser.open(url)\n \n __citation__ = \"\"\"@misc{plasmapy_community_2018_1238132,\n- author = {PlasmaPy Community and\n+ author = {{PlasmaPy Community} and\n Murphy, Nicholas A. and\n Leonard, Andrew J. and\n Sta\\'nczak, Dominik and\n", "issue": "Archive version 0.1.1 on Zenodo and get a DOI\nThere is a partnership between [Zenodo](https://zenodo.org/) and GitHub that allows Zenodo to archive releases and [make code citable](https://guides.github.com/activities/citable-code/). Zenodo can then mint a digital object identifier (DOI) that would make that version of PlasmaPy citable. We can also get a persistent doi that would alway refers to the most recent version. We should make archiving our release on Zenodo part of our regular release process. [SunPy](https://doi.org/10.5281/zenodo.591887) has done this already. \r\n\r\n- [x] Link the PlasmaPy organization/repository to Zenodo\r\n- [ ] Archive version 0.1 on Zenodo and get a persistent DOI and a release DOI\r\n- [ ] Put a badge on our main README.md for our DOI\r\n- [ ] Put the persistent DOI in our docs and on our main website, along with instructions on how to cite the code\r\n- [ ] Document instructions on how to put a release on Zenodo in our release guide.\r\n\r\n\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\n# Packages may add whatever they like to this file, but\n# should keep this content at the top.\n# ----------------------------------------------------------------------------\nfrom ._base_init import *\n# ----------------------------------------------------------------------------\n\n# Enforce Python version check during package import.\n# This is the same check as the one at the top of setup.py\nimport sys\n\n__name__ = \"plasmapy\"\n\n__doc__ = (\"A community-developed and community-driven open source \"\n \"core Python package for plasma physics.\")\n\n\nclass UnsupportedPythonError(Exception):\n pass\n\n\nif sys.version_info < tuple((int(val) for val in \"3.6\".split('.'))):\n raise UnsupportedPythonError(\"plasmapy does not support Python < {}\".format(3.6))\n\nif not _ASTROPY_SETUP_:\n # For egg_info test builds to pass, put package imports here.\n from . import atomic\n from . import classes\n from . import constants\n from . import diagnostics\n from . import mathematics\n from . import physics\n from . import utils\n\ndef online_help(query):\n \"\"\"\n Search the online PlasmaPy documentation for the given query from plasmapy.org\n Opens the results in the default web browser.\n Requires an active Internet connection.\n Redirects to Astropy.units in case of query 'unit' or 'units'\n\n Parameters\n ----------\n query : str\n The search query.\n \"\"\"\n from urllib.parse import urlencode\n import webbrowser\n\n url = ('http://docs.plasmapy.org/en/stable/search.html?'\n '{0}&check_keywords=yes&area=default').format(urlencode({'q': query}))\n\n if(query.lower() in ('unit', 'units')):\n url = 'http://docs.astropy.org/en/stable/units/'\n\n webbrowser.open(url)\n\n__citation__ = \"\"\"@misc{plasmapy_community_2018_1238132,\n author = {PlasmaPy Community and\n Murphy, Nicholas A. and\n Leonard, Andrew J. and\n Sta\\'nczak, Dominik and\n Kozlowski, Pawel M. and\n Langendorf, Samuel J. and\n Haggerty, Colby C. and\n Beckers, Jasper P. and\n Mumford, Stuart J. and\n Parashar, Tulasi N. 
and\n Huang, Yi-Min},\n title = {{PlasmaPy: an open source community-developed \n Python package for plasma physics}},\n month = apr,\n year = 2018,\n doi = {10.5281/zenodo.1238132},\n url = {https://doi.org/10.5281/zenodo.1238132}\n}\"\"\"\n", "path": "plasmapy/__init__.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\n# Packages may add whatever they like to this file, but\n# should keep this content at the top.\n# ----------------------------------------------------------------------------\nfrom ._base_init import *\n# ----------------------------------------------------------------------------\n\n# Enforce Python version check during package import.\n# This is the same check as the one at the top of setup.py\nimport sys\n\n__name__ = \"plasmapy\"\n\n__doc__ = (\"A community-developed and community-driven open source \"\n \"core Python package for plasma physics.\")\n\n\nclass UnsupportedPythonError(Exception):\n pass\n\n\nif sys.version_info < tuple((int(val) for val in \"3.6\".split('.'))):\n raise UnsupportedPythonError(\"plasmapy does not support Python < {}\".format(3.6))\n\nif not _ASTROPY_SETUP_:\n # For egg_info test builds to pass, put package imports here.\n from . import atomic\n from . import classes\n from . import constants\n from . import diagnostics\n from . import mathematics\n from . import physics\n from . import utils\n\ndef online_help(query):\n \"\"\"\n Search the online PlasmaPy documentation for the given query from plasmapy.org\n Opens the results in the default web browser.\n Requires an active Internet connection.\n Redirects to Astropy.units in case of query 'unit' or 'units'\n\n Parameters\n ----------\n query : str\n The search query.\n \"\"\"\n from urllib.parse import urlencode\n import webbrowser\n\n url = ('http://docs.plasmapy.org/en/stable/search.html?'\n '{0}&check_keywords=yes&area=default').format(urlencode({'q': query}))\n\n if(query.lower() in ('unit', 'units')):\n url = 'http://docs.astropy.org/en/stable/units/'\n\n webbrowser.open(url)\n\n__citation__ = \"\"\"@misc{plasmapy_community_2018_1238132,\n author = {{PlasmaPy Community} and\n Murphy, Nicholas A. and\n Leonard, Andrew J. and\n Sta\\'nczak, Dominik and\n Kozlowski, Pawel M. and\n Langendorf, Samuel J. and\n Haggerty, Colby C. and\n Beckers, Jasper P. and\n Mumford, Stuart J. and\n Parashar, Tulasi N. and\n Huang, Yi-Min},\n title = {{PlasmaPy: an open source community-developed \n Python package for plasma physics}},\n month = apr,\n year = 2018,\n doi = {10.5281/zenodo.1238132},\n url = {https://doi.org/10.5281/zenodo.1238132}\n}\"\"\"\n", "path": "plasmapy/__init__.py"}]} | 1,296 | 142 |
gh_patches_debug_12051 | rasdani/github-patches | git_diff | learningequality__kolibri-7269 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update profile modal displays to super admin account created with setup wizard
### Observed behavior
_(Wouldn't even know this might be an issue if I hadn't been reviewing super admin Gherkins all past week :blush:)_
According to the [Gherkin scenario](https://github.com/learningequality/kolibri/blob/release-v0.13.x/integration_testing/features/learner/learner-profile-update-notification.feature#L22), this should not appear:

### Expected behavior
No profile update modal for the super admin.
### User-facing consequences
Annoyed super admin.
### Errors and logs
…
### Steps to reproduce
1. Install Kolibri
2. Go through the setup wizard
3. Go to Learn
### Context
* Kolibri version: 0.14.0b6, DEB installer
* Operating system: Ubuntu 16.04
* Browser: both Firefox and Chrome
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/core/device/utils.py`
Content:
```
1 """
2 Do all imports of the device settings model inside the function scope here,
3 so as to allow these functions to be easily imported without worrying about
4 circular imports.
5 """
6 from django.db.utils import OperationalError
7 from django.db.utils import ProgrammingError
8
9 LANDING_PAGE_SIGN_IN = "sign-in"
10 LANDING_PAGE_LEARN = "learn"
11
12 APP_KEY_COOKIE_NAME = "app_key_cookie"
13
14
15 class DeviceNotProvisioned(Exception):
16 pass
17
18
19 no_default_value = object()
20
21
22 def get_device_setting(setting, default=no_default_value):
23 from .models import DeviceSettings
24
25 try:
26 device_settings = DeviceSettings.objects.get()
27 if device_settings is None:
28 raise DeviceSettings.DoesNotExist
29 return getattr(device_settings, setting)
30 except (DeviceSettings.DoesNotExist, OperationalError, ProgrammingError):
31 if default is not no_default_value:
32 return default
33 raise DeviceNotProvisioned
34
35
36 def device_provisioned():
37 return get_device_setting("is_provisioned", False)
38
39
40 def is_landing_page(landing_page):
41 return get_device_setting("landing_page", LANDING_PAGE_SIGN_IN) == landing_page
42
43
44 def allow_guest_access():
45 if get_device_setting("allow_guest_access", False):
46 return True
47
48 return is_landing_page(LANDING_PAGE_LEARN)
49
50
51 def allow_learner_unassigned_resource_access():
52 if get_device_setting("allow_learner_unassigned_resource_access", True):
53 return True
54
55 return is_landing_page(LANDING_PAGE_LEARN)
56
57
58 def allow_peer_unlisted_channel_import():
59 return get_device_setting("allow_peer_unlisted_channel_import", False)
60
61
62 def allow_other_browsers_to_connect():
63 return get_device_setting("allow_other_browsers_to_connect", True)
64
65
66 def set_device_settings(**kwargs):
67 from .models import DeviceSettings
68
69 try:
70 device_settings = DeviceSettings.objects.get()
71 for key, value in kwargs.items():
72 setattr(device_settings, key, value)
73 device_settings.save()
74 except DeviceSettings.DoesNotExist:
75 raise DeviceNotProvisioned
76
77
78 def create_superuser(user_data, facility):
79 from .models import DevicePermissions
80 from kolibri.core.auth.models import FacilityUser
81 from django.core.exceptions import ValidationError
82
83 username = user_data.get("username")
84 password = user_data.get("password")
85 full_name = user_data.get("full_name")
86
87 # Code copied from FacilityUserModelManager (create_superuser method doesn't work)
88 if FacilityUser.objects.filter(
89 username__iexact=username, facility=facility
90 ).exists():
91 raise ValidationError("An account with that username already exists")
92
93 superuser = FacilityUser.objects.create(
94 full_name=full_name or username,
95 username=username,
96 password=password,
97 facility=facility,
98 )
99
100 superuser.full_clean()
101 superuser.set_password(password)
102 superuser.save()
103
104 # make the user a facility admin
105 facility.add_admin(superuser)
106
107 # make the user into a superuser on this device
108 DevicePermissions.objects.create(
109 user=superuser, is_superuser=True, can_manage_content=True
110 )
111 return superuser
112
113
114 def provision_device(device_name=None, **kwargs):
115 from .models import DeviceSettings
116
117 device_settings, _ = DeviceSettings.objects.get_or_create(defaults=kwargs)
118 if device_name is not None:
119 device_settings.name = device_name
120 device_settings.is_provisioned = True
121 device_settings.save()
122
123
124 def valid_app_key(app_key):
125 from .models import DeviceAppKey
126
127 return app_key == DeviceAppKey.get_app_key()
128
129
130 def valid_app_key_on_request(request):
131 return APP_KEY_COOKIE_NAME in request.COOKIES and valid_app_key(
132 request.COOKIES.get(APP_KEY_COOKIE_NAME)
133 )
134
135
136 def set_app_key_on_response(response):
137 from .models import DeviceAppKey
138
139 response.set_cookie(APP_KEY_COOKIE_NAME, DeviceAppKey.get_app_key())
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/core/device/utils.py b/kolibri/core/device/utils.py
--- a/kolibri/core/device/utils.py
+++ b/kolibri/core/device/utils.py
@@ -90,11 +90,15 @@
).exists():
raise ValidationError("An account with that username already exists")
+ # gender and birth_year are set to "DEFERRED", since superusers do not
+ # need to provide this and are not nudged to update profile on Learn page
superuser = FacilityUser.objects.create(
full_name=full_name or username,
username=username,
password=password,
facility=facility,
+ gender="DEFERRED",
+ birth_year="DEFERRED",
)
superuser.full_clean()
| {"golden_diff": "diff --git a/kolibri/core/device/utils.py b/kolibri/core/device/utils.py\n--- a/kolibri/core/device/utils.py\n+++ b/kolibri/core/device/utils.py\n@@ -90,11 +90,15 @@\n ).exists():\n raise ValidationError(\"An account with that username already exists\")\n \n+ # gender and birth_year are set to \"DEFERRED\", since superusers do not\n+ # need to provide this and are not nudged to update profile on Learn page\n superuser = FacilityUser.objects.create(\n full_name=full_name or username,\n username=username,\n password=password,\n facility=facility,\n+ gender=\"DEFERRED\",\n+ birth_year=\"DEFERRED\",\n )\n \n superuser.full_clean()\n", "issue": "Update profile modal displays to super admin account created with setup wizard \n### Observed behavior\r\n_(Wouldn't even know this might be an issue if I hadn't been reviewing super admin Gherkins all past week :blush:)_\r\n\r\nAccording to the [Gherkin scenario](https://github.com/learningequality/kolibri/blob/release-v0.13.x/integration_testing/features/learner/learner-profile-update-notification.feature#L22), this should not appear:\r\n\r\n\r\n\r\n### Expected behavior\r\nNo profile update modal for the super admin.\r\n\r\n### User-facing consequences\r\nAnnoyed super admin.\r\n\r\n### Errors and logs\r\n\u2026\r\n\r\n### Steps to reproduce\r\n1. Install Kolibri\r\n2. Go through the setup wizard\r\n3. Go to Learn \r\n\r\n### Context\r\n * Kolibri version: 0.14.0b6, DEB installer\r\n * Operating system: Ubuntu 16.04\r\n * Browser: both Firefox and Chrome\r\n\n", "before_files": [{"content": "\"\"\"\nDo all imports of the device settings model inside the function scope here,\nso as to allow these functions to be easily imported without worrying about\ncircular imports.\n\"\"\"\nfrom django.db.utils import OperationalError\nfrom django.db.utils import ProgrammingError\n\nLANDING_PAGE_SIGN_IN = \"sign-in\"\nLANDING_PAGE_LEARN = \"learn\"\n\nAPP_KEY_COOKIE_NAME = \"app_key_cookie\"\n\n\nclass DeviceNotProvisioned(Exception):\n pass\n\n\nno_default_value = object()\n\n\ndef get_device_setting(setting, default=no_default_value):\n from .models import DeviceSettings\n\n try:\n device_settings = DeviceSettings.objects.get()\n if device_settings is None:\n raise DeviceSettings.DoesNotExist\n return getattr(device_settings, setting)\n except (DeviceSettings.DoesNotExist, OperationalError, ProgrammingError):\n if default is not no_default_value:\n return default\n raise DeviceNotProvisioned\n\n\ndef device_provisioned():\n return get_device_setting(\"is_provisioned\", False)\n\n\ndef is_landing_page(landing_page):\n return get_device_setting(\"landing_page\", LANDING_PAGE_SIGN_IN) == landing_page\n\n\ndef allow_guest_access():\n if get_device_setting(\"allow_guest_access\", False):\n return True\n\n return is_landing_page(LANDING_PAGE_LEARN)\n\n\ndef allow_learner_unassigned_resource_access():\n if get_device_setting(\"allow_learner_unassigned_resource_access\", True):\n return True\n\n return is_landing_page(LANDING_PAGE_LEARN)\n\n\ndef allow_peer_unlisted_channel_import():\n return get_device_setting(\"allow_peer_unlisted_channel_import\", False)\n\n\ndef allow_other_browsers_to_connect():\n return get_device_setting(\"allow_other_browsers_to_connect\", True)\n\n\ndef set_device_settings(**kwargs):\n from .models import DeviceSettings\n\n try:\n device_settings = DeviceSettings.objects.get()\n for key, value in kwargs.items():\n setattr(device_settings, key, value)\n device_settings.save()\n except DeviceSettings.DoesNotExist:\n raise 
DeviceNotProvisioned\n\n\ndef create_superuser(user_data, facility):\n from .models import DevicePermissions\n from kolibri.core.auth.models import FacilityUser\n from django.core.exceptions import ValidationError\n\n username = user_data.get(\"username\")\n password = user_data.get(\"password\")\n full_name = user_data.get(\"full_name\")\n\n # Code copied from FacilityUserModelManager (create_superuser method doesn't work)\n if FacilityUser.objects.filter(\n username__iexact=username, facility=facility\n ).exists():\n raise ValidationError(\"An account with that username already exists\")\n\n superuser = FacilityUser.objects.create(\n full_name=full_name or username,\n username=username,\n password=password,\n facility=facility,\n )\n\n superuser.full_clean()\n superuser.set_password(password)\n superuser.save()\n\n # make the user a facility admin\n facility.add_admin(superuser)\n\n # make the user into a superuser on this device\n DevicePermissions.objects.create(\n user=superuser, is_superuser=True, can_manage_content=True\n )\n return superuser\n\n\ndef provision_device(device_name=None, **kwargs):\n from .models import DeviceSettings\n\n device_settings, _ = DeviceSettings.objects.get_or_create(defaults=kwargs)\n if device_name is not None:\n device_settings.name = device_name\n device_settings.is_provisioned = True\n device_settings.save()\n\n\ndef valid_app_key(app_key):\n from .models import DeviceAppKey\n\n return app_key == DeviceAppKey.get_app_key()\n\n\ndef valid_app_key_on_request(request):\n return APP_KEY_COOKIE_NAME in request.COOKIES and valid_app_key(\n request.COOKIES.get(APP_KEY_COOKIE_NAME)\n )\n\n\ndef set_app_key_on_response(response):\n from .models import DeviceAppKey\n\n response.set_cookie(APP_KEY_COOKIE_NAME, DeviceAppKey.get_app_key())\n", "path": "kolibri/core/device/utils.py"}], "after_files": [{"content": "\"\"\"\nDo all imports of the device settings model inside the function scope here,\nso as to allow these functions to be easily imported without worrying about\ncircular imports.\n\"\"\"\nfrom django.db.utils import OperationalError\nfrom django.db.utils import ProgrammingError\n\nLANDING_PAGE_SIGN_IN = \"sign-in\"\nLANDING_PAGE_LEARN = \"learn\"\n\nAPP_KEY_COOKIE_NAME = \"app_key_cookie\"\n\n\nclass DeviceNotProvisioned(Exception):\n pass\n\n\nno_default_value = object()\n\n\ndef get_device_setting(setting, default=no_default_value):\n from .models import DeviceSettings\n\n try:\n device_settings = DeviceSettings.objects.get()\n if device_settings is None:\n raise DeviceSettings.DoesNotExist\n return getattr(device_settings, setting)\n except (DeviceSettings.DoesNotExist, OperationalError, ProgrammingError):\n if default is not no_default_value:\n return default\n raise DeviceNotProvisioned\n\n\ndef device_provisioned():\n return get_device_setting(\"is_provisioned\", False)\n\n\ndef is_landing_page(landing_page):\n return get_device_setting(\"landing_page\", LANDING_PAGE_SIGN_IN) == landing_page\n\n\ndef allow_guest_access():\n if get_device_setting(\"allow_guest_access\", False):\n return True\n\n return is_landing_page(LANDING_PAGE_LEARN)\n\n\ndef allow_learner_unassigned_resource_access():\n if get_device_setting(\"allow_learner_unassigned_resource_access\", True):\n return True\n\n return is_landing_page(LANDING_PAGE_LEARN)\n\n\ndef allow_peer_unlisted_channel_import():\n return get_device_setting(\"allow_peer_unlisted_channel_import\", False)\n\n\ndef allow_other_browsers_to_connect():\n return 
get_device_setting(\"allow_other_browsers_to_connect\", True)\n\n\ndef set_device_settings(**kwargs):\n from .models import DeviceSettings\n\n try:\n device_settings = DeviceSettings.objects.get()\n for key, value in kwargs.items():\n setattr(device_settings, key, value)\n device_settings.save()\n except DeviceSettings.DoesNotExist:\n raise DeviceNotProvisioned\n\n\ndef create_superuser(user_data, facility):\n from .models import DevicePermissions\n from kolibri.core.auth.models import FacilityUser\n from django.core.exceptions import ValidationError\n\n username = user_data.get(\"username\")\n password = user_data.get(\"password\")\n full_name = user_data.get(\"full_name\")\n\n # Code copied from FacilityUserModelManager (create_superuser method doesn't work)\n if FacilityUser.objects.filter(\n username__iexact=username, facility=facility\n ).exists():\n raise ValidationError(\"An account with that username already exists\")\n\n # gender and birth_year are set to \"DEFERRED\", since superusers do not\n # need to provide this and are not nudged to update profile on Learn page\n superuser = FacilityUser.objects.create(\n full_name=full_name or username,\n username=username,\n password=password,\n facility=facility,\n gender=\"DEFERRED\",\n birth_year=\"DEFERRED\",\n )\n\n superuser.full_clean()\n superuser.set_password(password)\n superuser.save()\n\n # make the user a facility admin\n facility.add_admin(superuser)\n\n # make the user into a superuser on this device\n DevicePermissions.objects.create(\n user=superuser, is_superuser=True, can_manage_content=True\n )\n return superuser\n\n\ndef provision_device(device_name=None, **kwargs):\n from .models import DeviceSettings\n\n device_settings, _ = DeviceSettings.objects.get_or_create(defaults=kwargs)\n if device_name is not None:\n device_settings.name = device_name\n device_settings.is_provisioned = True\n device_settings.save()\n\n\ndef valid_app_key(app_key):\n from .models import DeviceAppKey\n\n return app_key == DeviceAppKey.get_app_key()\n\n\ndef valid_app_key_on_request(request):\n return APP_KEY_COOKIE_NAME in request.COOKIES and valid_app_key(\n request.COOKIES.get(APP_KEY_COOKIE_NAME)\n )\n\n\ndef set_app_key_on_response(response):\n from .models import DeviceAppKey\n\n response.set_cookie(APP_KEY_COOKIE_NAME, DeviceAppKey.get_app_key())\n", "path": "kolibri/core/device/utils.py"}]} | 1,697 | 169 |
gh_patches_debug_16126 | rasdani/github-patches | git_diff | pytorch__TensorRT-2091 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix import error for Legacy TorchScript CI on `TorchTensorRTModule`
- Older versions of TorchScript do not have the `torch._dynamo` import capability
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py`
Content:
```
1 import copy
2 import sys
3 from contextlib import contextmanager
4 from typing import Any, Callable, Dict, Generator, List, Optional, Set, Tuple, Union
5
6 import torch
7
8 if not torch.__version__.startswith("1"):
9 import torch._dynamo as torchdynamo
10
11 from torch.fx.passes.infra.pass_base import PassResult
12 from torch_tensorrt.fx.utils import req_torch_version
13 from torch_tensorrt.fx.passes.lower_basic_pass_aten import (
14 compose_bmm,
15 compose_chunk,
16 compose_getitem_slice,
17 remove_ops,
18 replace_aten_op_with_indices,
19 replace_aten_reshape_alias_with_replace,
20 replace_builtin_ops,
21 replace_inplace_ops,
22 replace_native_layernorm_with_layernorm,
23 replace_transpose_mm_op_with_linear,
24 run_const_fold,
25 )
26 from typing_extensions import TypeAlias
27
28 Value: TypeAlias = Union[
29 Tuple["Value", ...],
30 List["Value"],
31 Dict[str, "Value"],
32 ]
33
34
35 class DynamoConfig:
36 """
37 Manage Exir-specific configurations of Dynamo.
38 """
39
40 def __init__(
41 self,
42 capture_scalar_outputs: bool = True,
43 guard_nn_modules: bool = True,
44 dynamic_shapes: bool = True,
45 specialize_int: bool = True,
46 verbose: bool = True,
47 ) -> None:
48
49 self.capture_scalar_outputs = capture_scalar_outputs
50 self.guard_nn_modules = guard_nn_modules
51 self.dynamic_shapes = dynamic_shapes
52 self.specialize_int = specialize_int
53 self.verbose = verbose
54
55 def activate(self) -> None:
56 torchdynamo.config.capture_scalar_outputs = self.capture_scalar_outputs
57 torchdynamo.config.guard_nn_modules = self.guard_nn_modules
58 torchdynamo.config.dynamic_shapes = self.dynamic_shapes
59 torchdynamo.config.specialize_int = self.specialize_int
60 torchdynamo.config.verbose = self.verbose
61
62 def deactivate(self) -> None:
63 torchdynamo.config.capture_scalar_outputs = True
64 torchdynamo.config.guard_nn_modules = True
65 torchdynamo.config.dynamic_shapes = True
66 torchdynamo.config.specialize_int = True
67 torchdynamo.config.verbose = True
68
69
70 @contextmanager
71 def using_config(config: DynamoConfig) -> Generator[DynamoConfig, None, None]:
72 config.activate()
73 try:
74 yield config
75 finally:
76 config.deactivate()
77
78
79 @contextmanager
80 def setting_python_recursive_limit(limit: int = 10000) -> Generator[None, None, None]:
81 """
82 Temporarily increase the python interpreter stack recursion limit.
83 This is mostly used for pickling large scale modules.
84 """
85 default = sys.getrecursionlimit()
86 if limit > default:
87 sys.setrecursionlimit(limit)
88 try:
89 yield
90 finally:
91 sys.setrecursionlimit(default)
92
93
94 @req_torch_version("2.dev")
95 def dynamo_trace(
96 f: Callable[..., Value],
97 # pyre-ignore
98 args: Tuple[Any, ...],
99 aten_graph: bool,
100 tracing_mode: str = "real",
101 dynamo_config: Optional[DynamoConfig] = None,
102 ) -> Tuple[torch.fx.GraphModule, Set]:
103 """
104 TODO: Once we fully migrate to torchdynamo frontend, we will remove
105 this config option alltogether. For now, it helps with quick
106 experiments with playing around with TorchDynamo
107 """
108 if dynamo_config is None:
109 dynamo_config = DynamoConfig()
110 with using_config(dynamo_config), setting_python_recursive_limit(2000):
111 torchdynamo.reset()
112 try:
113 return torchdynamo.export(
114 f,
115 *copy.deepcopy(args),
116 aten_graph=aten_graph,
117 tracing_mode=tracing_mode,
118 )
119 except torchdynamo.exc.Unsupported as exc:
120 raise RuntimeError(
121 "The user code is using a feature we don't support. "
122 "Please try torchdynamo.explain() to get possible the reasons",
123 ) from exc
124 except Exception as exc:
125 raise RuntimeError(
126 "torchdynamo internal error occured. Please see above stacktrace"
127 ) from exc
128
129
130 @req_torch_version("2.dev")
131 def trace(f, args, *rest):
132 graph_module, guards = dynamo_trace(f, args, True, "symbolic")
133 return graph_module, guards
134
135
136 @req_torch_version("2.dev")
137 def opt_trace(f, args, *rest):
138 """
139 Optimized trace with necessary passes which re-compose some ops or replace some ops
140 These passes should be general and functional purpose
141 """
142 passes_list = [
143 compose_bmm,
144 compose_chunk,
145 compose_getitem_slice,
146 replace_aten_reshape_alias_with_replace,
147 replace_aten_op_with_indices,
148 replace_transpose_mm_op_with_linear, # after compose_bmm
149 replace_native_layernorm_with_layernorm,
150 remove_ops,
151 replace_builtin_ops, # after replace_native_layernorm_with_layernorm
152 replace_inplace_ops, # remove it once functionalization is enabled
153 ]
154
155 fx_module, _ = trace(f, args)
156 print(fx_module.graph)
157 for passes in passes_list:
158 pr: PassResult = passes(fx_module)
159 fx_module = pr.graph_module
160
161 fx_module(*args)
162
163 fx_module = run_const_fold(fx_module)
164 print(fx_module.graph)
165 return fx_module
166
```
Path: `py/torch_tensorrt/dynamo/__init__.py`
Content:
```
1 from torch_tensorrt.dynamo import fx_ts_compat
2 from .backend import compile
3
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py/torch_tensorrt/dynamo/__init__.py b/py/torch_tensorrt/dynamo/__init__.py
--- a/py/torch_tensorrt/dynamo/__init__.py
+++ b/py/torch_tensorrt/dynamo/__init__.py
@@ -1,2 +1,6 @@
-from torch_tensorrt.dynamo import fx_ts_compat
-from .backend import compile
+import torch
+from packaging import version
+
+if version.parse(torch.__version__) >= version.parse("2.1.dev"):
+ from torch_tensorrt.dynamo import fx_ts_compat
+ from .backend import compile
diff --git a/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py b/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py
--- a/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py
+++ b/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py
@@ -2,10 +2,11 @@
import sys
from contextlib import contextmanager
from typing import Any, Callable, Dict, Generator, List, Optional, Set, Tuple, Union
+from packaging import version
import torch
-if not torch.__version__.startswith("1"):
+if version.parse(torch.__version__) >= version.parse("2.dev"):
import torch._dynamo as torchdynamo
from torch.fx.passes.infra.pass_base import PassResult
| {"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/__init__.py b/py/torch_tensorrt/dynamo/__init__.py\n--- a/py/torch_tensorrt/dynamo/__init__.py\n+++ b/py/torch_tensorrt/dynamo/__init__.py\n@@ -1,2 +1,6 @@\n-from torch_tensorrt.dynamo import fx_ts_compat\n-from .backend import compile\n+import torch\n+from packaging import version\n+\n+if version.parse(torch.__version__) >= version.parse(\"2.1.dev\"):\n+ from torch_tensorrt.dynamo import fx_ts_compat\n+ from .backend import compile\ndiff --git a/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py b/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py\n--- a/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py\n+++ b/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py\n@@ -2,10 +2,11 @@\n import sys\n from contextlib import contextmanager\n from typing import Any, Callable, Dict, Generator, List, Optional, Set, Tuple, Union\n+from packaging import version\n \n import torch\n \n-if not torch.__version__.startswith(\"1\"):\n+if version.parse(torch.__version__) >= version.parse(\"2.dev\"):\n import torch._dynamo as torchdynamo\n \n from torch.fx.passes.infra.pass_base import PassResult\n", "issue": "Fix import error for Legacy TorchScript CI on `TorchTensorRTModule`\n- Older versions of TorchScript do not have the `torch._dynamo` import capability\n", "before_files": [{"content": "import copy\nimport sys\nfrom contextlib import contextmanager\nfrom typing import Any, Callable, Dict, Generator, List, Optional, Set, Tuple, Union\n\nimport torch\n\nif not torch.__version__.startswith(\"1\"):\n import torch._dynamo as torchdynamo\n\nfrom torch.fx.passes.infra.pass_base import PassResult\nfrom torch_tensorrt.fx.utils import req_torch_version\nfrom torch_tensorrt.fx.passes.lower_basic_pass_aten import (\n compose_bmm,\n compose_chunk,\n compose_getitem_slice,\n remove_ops,\n replace_aten_op_with_indices,\n replace_aten_reshape_alias_with_replace,\n replace_builtin_ops,\n replace_inplace_ops,\n replace_native_layernorm_with_layernorm,\n replace_transpose_mm_op_with_linear,\n run_const_fold,\n)\nfrom typing_extensions import TypeAlias\n\nValue: TypeAlias = Union[\n Tuple[\"Value\", ...],\n List[\"Value\"],\n Dict[str, \"Value\"],\n]\n\n\nclass DynamoConfig:\n \"\"\"\n Manage Exir-specific configurations of Dynamo.\n \"\"\"\n\n def __init__(\n self,\n capture_scalar_outputs: bool = True,\n guard_nn_modules: bool = True,\n dynamic_shapes: bool = True,\n specialize_int: bool = True,\n verbose: bool = True,\n ) -> None:\n\n self.capture_scalar_outputs = capture_scalar_outputs\n self.guard_nn_modules = guard_nn_modules\n self.dynamic_shapes = dynamic_shapes\n self.specialize_int = specialize_int\n self.verbose = verbose\n\n def activate(self) -> None:\n torchdynamo.config.capture_scalar_outputs = self.capture_scalar_outputs\n torchdynamo.config.guard_nn_modules = self.guard_nn_modules\n torchdynamo.config.dynamic_shapes = self.dynamic_shapes\n torchdynamo.config.specialize_int = self.specialize_int\n torchdynamo.config.verbose = self.verbose\n\n def deactivate(self) -> None:\n torchdynamo.config.capture_scalar_outputs = True\n torchdynamo.config.guard_nn_modules = True\n torchdynamo.config.dynamic_shapes = True\n torchdynamo.config.specialize_int = True\n torchdynamo.config.verbose = True\n\n\n@contextmanager\ndef using_config(config: DynamoConfig) -> Generator[DynamoConfig, None, None]:\n config.activate()\n try:\n yield config\n finally:\n config.deactivate()\n\n\n@contextmanager\ndef setting_python_recursive_limit(limit: 
int = 10000) -> Generator[None, None, None]:\n \"\"\"\n Temporarily increase the python interpreter stack recursion limit.\n This is mostly used for pickling large scale modules.\n \"\"\"\n default = sys.getrecursionlimit()\n if limit > default:\n sys.setrecursionlimit(limit)\n try:\n yield\n finally:\n sys.setrecursionlimit(default)\n\n\n@req_torch_version(\"2.dev\")\ndef dynamo_trace(\n f: Callable[..., Value],\n # pyre-ignore\n args: Tuple[Any, ...],\n aten_graph: bool,\n tracing_mode: str = \"real\",\n dynamo_config: Optional[DynamoConfig] = None,\n) -> Tuple[torch.fx.GraphModule, Set]:\n \"\"\"\n TODO: Once we fully migrate to torchdynamo frontend, we will remove\n this config option alltogether. For now, it helps with quick\n experiments with playing around with TorchDynamo\n \"\"\"\n if dynamo_config is None:\n dynamo_config = DynamoConfig()\n with using_config(dynamo_config), setting_python_recursive_limit(2000):\n torchdynamo.reset()\n try:\n return torchdynamo.export(\n f,\n *copy.deepcopy(args),\n aten_graph=aten_graph,\n tracing_mode=tracing_mode,\n )\n except torchdynamo.exc.Unsupported as exc:\n raise RuntimeError(\n \"The user code is using a feature we don't support. \"\n \"Please try torchdynamo.explain() to get possible the reasons\",\n ) from exc\n except Exception as exc:\n raise RuntimeError(\n \"torchdynamo internal error occured. Please see above stacktrace\"\n ) from exc\n\n\n@req_torch_version(\"2.dev\")\ndef trace(f, args, *rest):\n graph_module, guards = dynamo_trace(f, args, True, \"symbolic\")\n return graph_module, guards\n\n\n@req_torch_version(\"2.dev\")\ndef opt_trace(f, args, *rest):\n \"\"\"\n Optimized trace with necessary passes which re-compose some ops or replace some ops\n These passes should be general and functional purpose\n \"\"\"\n passes_list = [\n compose_bmm,\n compose_chunk,\n compose_getitem_slice,\n replace_aten_reshape_alias_with_replace,\n replace_aten_op_with_indices,\n replace_transpose_mm_op_with_linear, # after compose_bmm\n replace_native_layernorm_with_layernorm,\n remove_ops,\n replace_builtin_ops, # after replace_native_layernorm_with_layernorm\n replace_inplace_ops, # remove it once functionalization is enabled\n ]\n\n fx_module, _ = trace(f, args)\n print(fx_module.graph)\n for passes in passes_list:\n pr: PassResult = passes(fx_module)\n fx_module = pr.graph_module\n\n fx_module(*args)\n\n fx_module = run_const_fold(fx_module)\n print(fx_module.graph)\n return fx_module\n", "path": "py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py"}, {"content": "from torch_tensorrt.dynamo import fx_ts_compat\nfrom .backend import compile\n", "path": "py/torch_tensorrt/dynamo/__init__.py"}], "after_files": [{"content": "import copy\nimport sys\nfrom contextlib import contextmanager\nfrom typing import Any, Callable, Dict, Generator, List, Optional, Set, Tuple, Union\nfrom packaging import version\n\nimport torch\n\nif version.parse(torch.__version__) >= version.parse(\"2.dev\"):\n import torch._dynamo as torchdynamo\n\nfrom torch.fx.passes.infra.pass_base import PassResult\nfrom torch_tensorrt.fx.utils import req_torch_version\nfrom torch_tensorrt.fx.passes.lower_basic_pass_aten import (\n compose_bmm,\n compose_chunk,\n compose_getitem_slice,\n remove_ops,\n replace_aten_op_with_indices,\n replace_aten_reshape_alias_with_replace,\n replace_builtin_ops,\n replace_inplace_ops,\n replace_native_layernorm_with_layernorm,\n replace_transpose_mm_op_with_linear,\n run_const_fold,\n)\nfrom typing_extensions import TypeAlias\n\nValue: 
TypeAlias = Union[\n Tuple[\"Value\", ...],\n List[\"Value\"],\n Dict[str, \"Value\"],\n]\n\n\nclass DynamoConfig:\n \"\"\"\n Manage Exir-specific configurations of Dynamo.\n \"\"\"\n\n def __init__(\n self,\n capture_scalar_outputs: bool = True,\n guard_nn_modules: bool = True,\n dynamic_shapes: bool = True,\n specialize_int: bool = True,\n verbose: bool = True,\n ) -> None:\n\n self.capture_scalar_outputs = capture_scalar_outputs\n self.guard_nn_modules = guard_nn_modules\n self.dynamic_shapes = dynamic_shapes\n self.specialize_int = specialize_int\n self.verbose = verbose\n\n def activate(self) -> None:\n torchdynamo.config.capture_scalar_outputs = self.capture_scalar_outputs\n torchdynamo.config.guard_nn_modules = self.guard_nn_modules\n torchdynamo.config.dynamic_shapes = self.dynamic_shapes\n torchdynamo.config.specialize_int = self.specialize_int\n torchdynamo.config.verbose = self.verbose\n\n def deactivate(self) -> None:\n torchdynamo.config.capture_scalar_outputs = True\n torchdynamo.config.guard_nn_modules = True\n torchdynamo.config.dynamic_shapes = True\n torchdynamo.config.specialize_int = True\n torchdynamo.config.verbose = True\n\n\n@contextmanager\ndef using_config(config: DynamoConfig) -> Generator[DynamoConfig, None, None]:\n config.activate()\n try:\n yield config\n finally:\n config.deactivate()\n\n\n@contextmanager\ndef setting_python_recursive_limit(limit: int = 10000) -> Generator[None, None, None]:\n \"\"\"\n Temporarily increase the python interpreter stack recursion limit.\n This is mostly used for pickling large scale modules.\n \"\"\"\n default = sys.getrecursionlimit()\n if limit > default:\n sys.setrecursionlimit(limit)\n try:\n yield\n finally:\n sys.setrecursionlimit(default)\n\n\n@req_torch_version(\"2.dev\")\ndef dynamo_trace(\n f: Callable[..., Value],\n # pyre-ignore\n args: Tuple[Any, ...],\n aten_graph: bool,\n tracing_mode: str = \"real\",\n dynamo_config: Optional[DynamoConfig] = None,\n) -> Tuple[torch.fx.GraphModule, Set]:\n \"\"\"\n TODO: Once we fully migrate to torchdynamo frontend, we will remove\n this config option alltogether. For now, it helps with quick\n experiments with playing around with TorchDynamo\n \"\"\"\n if dynamo_config is None:\n dynamo_config = DynamoConfig()\n with using_config(dynamo_config), setting_python_recursive_limit(2000):\n torchdynamo.reset()\n try:\n return torchdynamo.export(\n f,\n *copy.deepcopy(args),\n aten_graph=aten_graph,\n tracing_mode=tracing_mode,\n )\n except torchdynamo.exc.Unsupported as exc:\n raise RuntimeError(\n \"The user code is using a feature we don't support. \"\n \"Please try torchdynamo.explain() to get possible the reasons\",\n ) from exc\n except Exception as exc:\n raise RuntimeError(\n \"torchdynamo internal error occured. 
Please see above stacktrace\"\n ) from exc\n\n\n@req_torch_version(\"2.dev\")\ndef trace(f, args, *rest):\n graph_module, guards = dynamo_trace(f, args, True, \"symbolic\")\n return graph_module, guards\n\n\n@req_torch_version(\"2.dev\")\ndef opt_trace(f, args, *rest):\n \"\"\"\n Optimized trace with necessary passes which re-compose some ops or replace some ops\n These passes should be general and functional purpose\n \"\"\"\n passes_list = [\n compose_bmm,\n compose_chunk,\n compose_getitem_slice,\n replace_aten_reshape_alias_with_replace,\n replace_aten_op_with_indices,\n replace_transpose_mm_op_with_linear, # after compose_bmm\n replace_native_layernorm_with_layernorm,\n remove_ops,\n replace_builtin_ops, # after replace_native_layernorm_with_layernorm\n replace_inplace_ops, # remove it once functionalization is enabled\n ]\n\n fx_module, _ = trace(f, args)\n print(fx_module.graph)\n for passes in passes_list:\n pr: PassResult = passes(fx_module)\n fx_module = pr.graph_module\n\n fx_module(*args)\n\n fx_module = run_const_fold(fx_module)\n print(fx_module.graph)\n return fx_module\n", "path": "py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py"}, {"content": "import torch\nfrom packaging import version\n\nif version.parse(torch.__version__) >= version.parse(\"2.1.dev\"):\n from torch_tensorrt.dynamo import fx_ts_compat\n from .backend import compile\n", "path": "py/torch_tensorrt/dynamo/__init__.py"}]} | 1,908 | 317 |
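The accepted patch above gates the optional `torch._dynamo` import on the installed torch version. A minimal standalone sketch of that pattern, assuming only that `torch` and `packaging` are importable; the `torchdynamo = None` fallback is an illustrative choice, not part of the patch:

```python
# Minimal sketch of a version-gated import using packaging.version.
from packaging import version

import torch

# torch._dynamo only ships with torch >= 2.x, so older TorchScript CI jobs skip it.
if version.parse(torch.__version__) >= version.parse("2.dev"):
    import torch._dynamo as torchdynamo
else:
    # Illustrative fallback: callers check for None before using dynamo features.
    torchdynamo = None
```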
gh_patches_debug_4875 | rasdani/github-patches | git_diff | google__flax-2171 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New Sphinx Theme
The idea is to get a new and shiny theme that makes Flax's RTD page stand out a little more. 
I've gathered a couple of options:
### [JAX's Theme](https://jax.readthedocs.io/en/latest/)

### [Pydata Sphinx Theme](https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/index.html)

### [Furo](https://pradyunsg.me/furo/quickstart/)

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Configuration file for the Sphinx documentation builder."""
16
17
18 # This file only contains a selection of the most common options. For a full
19 # list see the documentation:
20 # https://www.sphinx-doc.org/en/master/usage/configuration.html
21
22 # -- Path setup --------------------------------------------------------------
23
24 # If extensions (or modules to document with autodoc) are in another directory,
25 # add these directories to sys.path here. If the directory is relative to the
26 # documentation root, use os.path.abspath to make it absolute, like shown here.
27 #
28 # import os
29 # import sys
30 # sys.path.insert(0, os.path.abspath('.'))
31
32 import os
33 import sys
34 sys.path.insert(0, os.path.abspath('..'))
35 # Include local extension.
36 sys.path.append(os.path.abspath('./_ext'))
37
38 # -- Project information -----------------------------------------------------
39
40 project = 'Flax'
41 copyright = '2020, The Flax authors' # pylint: disable=redefined-builtin
42 author = 'The Flax authors'
43
44
45 # -- General configuration ---------------------------------------------------
46
47 # Add any Sphinx extension module names here, as strings. They can be
48 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
49 # ones.
50 extensions = [
51 'sphinx.ext.autodoc',
52 'sphinx.ext.autosummary',
53 'sphinx.ext.autosectionlabel',
54 'sphinx.ext.doctest',
55 'sphinx.ext.intersphinx',
56 'sphinx.ext.mathjax',
57 'sphinx.ext.napoleon',
58 'sphinx.ext.viewcode',
59 'nbsphinx',
60 'recommonmark',
61 'codediff',
62 'sphinx_markdown_tables'
63 ]
64
65 # Add any paths that contain templates here, relative to this directory.
66 templates_path = ['_templates']
67
68 # List of patterns, relative to source directory, that match files and
69 # directories to ignore when looking for source files.
70 # This pattern also affects html_static_path and html_extra_path.
71 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
72
73 source_suffix = ['.rst', '.md']
74
75 autosummary_generate = True
76
77 master_doc = 'index'
78
79 autodoc_typehints = 'description'
80
81
82 # -- Options for HTML output -------------------------------------------------
83
84 # The theme to use for HTML and HTML Help pages. See the documentation for
85 # a list of builtin themes.
86 #
87 html_theme = 'sphinx_rtd_theme'
88 html_style = 'css/flax_theme.css'
89
90 # The name of an image file (relative to this directory) to place at the top
91 # of the sidebar.
92 html_logo = './flax.png'
93
94 # Add any paths that contain custom static files (such as style sheets) here,
95 # relative to this directory. They are copied after the builtin static files,
96 # so a file named "default.css" will overwrite the builtin "default.css".
97 html_static_path = ['_static']
98
99 nbsphinx_codecell_lexer = 'ipython3'
100
101 nbsphinx_prolog = r"""
102 {% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}
103
104 .. only:: html
105
106 .. role:: raw-html(raw)
107 :format: html
108
109 .. nbinfo::
110
111 :raw-html:`<a href="https://colab.research.google.com/github/google/flax/blob/main/{{ docname }}"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" style="vertical-align:text-bottom"></a>`
112 :raw-html:`<a href="https://github.com/google/flax/blob/main/{{ docname }}"><img alt="Open On GitHub" src="https://img.shields.io/badge/Open-on%20GitHub-blue?logo=GitHub" style="vertical-align:text-bottom"></a>`
113
114
115 """
116
117 # -- Extension configuration -------------------------------------------------
118
119 # Tell sphinx-autodoc-typehints to generate stub parameter annotations including
120 # types, even if the parameters aren't explicitly documented.
121 always_document_param_types = True
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -84,8 +84,8 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = 'sphinx_rtd_theme'
-html_style = 'css/flax_theme.css'
+html_theme = 'sphinx_book_theme'
+# html_style = 'css/flax_theme.css'
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -84,8 +84,8 @@\n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n #\n-html_theme = 'sphinx_rtd_theme'\n-html_style = 'css/flax_theme.css'\n+html_theme = 'sphinx_book_theme'\n+# html_style = 'css/flax_theme.css'\n \n # The name of an image file (relative to this directory) to place at the top\n # of the sidebar.\n", "issue": "New Sphinx Theme\nThe idea is to get a new and shiny theme that makes Flax's RTD page standout a little more. \r\n\r\nI've gathered a couple of options:\r\n\r\n### [JAX's Theme](https://jax.readthedocs.io/en/latest/)\r\n\r\n### [Pydata Sphinx Theme](https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/index.html)\r\n\r\n### [Furo](https://pradyunsg.me/furo/quickstart/)\r\n\r\n\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\n\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n# Include local extension.\nsys.path.append(os.path.abspath('./_ext'))\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Flax'\ncopyright = '2020, The Flax authors' # pylint: disable=redefined-builtin\nauthor = 'The Flax authors'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.autosectionlabel',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode',\n 'nbsphinx',\n 'recommonmark',\n 'codediff',\n 'sphinx_markdown_tables'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\nsource_suffix = ['.rst', '.md']\n\nautosummary_generate = True\n\nmaster_doc = 'index'\n\nautodoc_typehints = 'description'\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_style = 'css/flax_theme.css'\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = './flax.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nnbsphinx_codecell_lexer = 'ipython3'\n\nnbsphinx_prolog = r\"\"\"\n{% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}\n\n.. only:: html\n\n .. role:: raw-html(raw)\n :format: html\n\n .. nbinfo::\n\n :raw-html:`<a href=\"https://colab.research.google.com/github/google/flax/blob/main/{{ docname }}\"><img alt=\"Open In Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" style=\"vertical-align:text-bottom\"></a>`\n :raw-html:`<a href=\"https://github.com/google/flax/blob/main/{{ docname }}\"><img alt=\"Open On GitHub\" src=\"https://img.shields.io/badge/Open-on%20GitHub-blue?logo=GitHub\" style=\"vertical-align:text-bottom\"></a>`\n\n\n\"\"\"\n\n# -- Extension configuration -------------------------------------------------\n\n# Tell sphinx-autodoc-typehints to generate stub parameter annotations including\n# types, even if the parameters aren't explicitly documented.\nalways_document_param_types = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\n\n# This file only contains a selection of the most common options. 
For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n# Include local extension.\nsys.path.append(os.path.abspath('./_ext'))\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Flax'\ncopyright = '2020, The Flax authors' # pylint: disable=redefined-builtin\nauthor = 'The Flax authors'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.autosectionlabel',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode',\n 'nbsphinx',\n 'recommonmark',\n 'codediff',\n 'sphinx_markdown_tables'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\nsource_suffix = ['.rst', '.md']\n\nautosummary_generate = True\n\nmaster_doc = 'index'\n\nautodoc_typehints = 'description'\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_book_theme'\n# html_style = 'css/flax_theme.css'\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = './flax.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nnbsphinx_codecell_lexer = 'ipython3'\n\nnbsphinx_prolog = r\"\"\"\n{% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}\n\n.. only:: html\n\n .. role:: raw-html(raw)\n :format: html\n\n .. nbinfo::\n\n :raw-html:`<a href=\"https://colab.research.google.com/github/google/flax/blob/main/{{ docname }}\"><img alt=\"Open In Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" style=\"vertical-align:text-bottom\"></a>`\n :raw-html:`<a href=\"https://github.com/google/flax/blob/main/{{ docname }}\"><img alt=\"Open On GitHub\" src=\"https://img.shields.io/badge/Open-on%20GitHub-blue?logo=GitHub\" style=\"vertical-align:text-bottom\"></a>`\n\n\n\"\"\"\n\n# -- Extension configuration -------------------------------------------------\n\n# Tell sphinx-autodoc-typehints to generate stub parameter annotations including\n# types, even if the parameters aren't explicitly documented.\nalways_document_param_types = True\n", "path": "docs/conf.py"}]} | 1,847 | 128 |
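The fix above is essentially a one-line theme swap in `docs/conf.py`. A hedged sketch of the relevant fragment, assuming the `sphinx-book-theme` package is installed and listed in the docs requirements (which this row does not show); the `html_theme_options` keys are common for that theme but depend on its version:

```python
# Sketch of the conf.py fragment after the theme swap.
html_theme = "sphinx_book_theme"

# The old project stylesheet targeted sphinx_rtd_theme, so it is disabled rather than ported.
# html_style = "css/flax_theme.css"

# Optional, assumed theme options; exact keys vary by sphinx-book-theme version.
html_theme_options = {
    "repository_url": "https://github.com/google/flax",
    "use_repository_button": True,
}
```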
gh_patches_debug_5861 | rasdani/github-patches | git_diff | google__turbinia-929 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add --storage_file parameter to log2timeline
Add the `--storage_file` parameter, as the new plaso version will not work without it anymore.
https://github.com/google/turbinia/blob/23a97d9d826cbcc51e6b5dfd50d85251506bf242/turbinia/workers/plaso.py#L121
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/workers/plaso.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import logging
21
22 from turbinia import config
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.evidence import PlasoFile
25 from turbinia.workers import TurbiniaTask
26 from turbinia.lib import file_helpers
27
28
29 class PlasoTask(TurbiniaTask):
30 """Task to run Plaso (log2timeline)."""
31
32 # Plaso requires the Disk to be attached, but doesn't require it be mounted.
33 REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]
34
35 TASK_CONFIG = {
36 # 'none' as indicated in the options for status_view within
37 # the Plaso documentation
38 'status_view': 'none',
39 'hashers': 'all',
40 'partitions': 'all',
41 'vss_stores': 'none',
42 'artifact_filters': None,
43 'file_filter': None,
44 'yara_rules': None
45 }
46
47 def build_plaso_command(self, base_command, conf):
48 """Builds a typical plaso command, contains logic specific to log2timeline.
49
50 Args:
51 base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)
52 conf (dict): Dynamic config containing the parameters for the command.
53
54 Returns:
55 String for valid Log2timeline command.
56 """
57 self.result.log(
58 'Generating Plaso command line from arguments: {0!s}'.format(conf),
59 level=logging.DEBUG)
60 cmd = [base_command]
61 for k, v in conf.items():
62 cli_args = [
63 'status_view', 'hashers', 'partitions', 'vss_stores',
64 'artifact_filters', 'file_filter', 'yara_rules'
65 ]
66 if (k not in cli_args or not v):
67 continue
68 prepend = '-'
69 if len(k) > 1:
70 prepend = '--'
71 if k == 'file_filter':
72 file_path = file_helpers.write_list_to_temp_file(
73 v, preferred_dir=self.tmp_dir)
74 cmd.extend(['-f', file_path])
75 elif k == 'yara_rules':
76 file_path = file_helpers.write_str_to_temp_file(
77 v, preferred_dir=self.tmp_dir)
78 cmd.extend(['--yara_rules', file_path])
79 elif isinstance(v, list):
80 cmd.extend([prepend + k, ','.join(v)])
81 elif isinstance(v, bool):
82 cmd.append(prepend + k)
83 elif isinstance(v, str):
84 cmd.extend([prepend + k, v])
85 return cmd
86
87 def run(self, evidence, result):
88 """Task that process data with Plaso.
89
90 Args:
91 evidence (Evidence object): The evidence we will process.
92 result (TurbiniaTaskResult): The object to place task results into.
93
94 Returns:
95 TurbiniaTaskResult object.
96 """
97
98 config.LoadConfig()
99
100 # Write plaso file into tmp_dir because sqlite has issues with some shared
101 # filesystems (e.g NFS).
102 plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))
103 plaso_evidence = PlasoFile(source_path=plaso_file)
104 plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))
105
106 cmd = self.build_plaso_command('log2timeline.py', self.task_config)
107
108 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
109 cmd.append('-d')
110
111 if evidence.credentials:
112 for credential_type, credential_data in evidence.credentials:
113 cmd.extend([
114 '--credential', '{0:s}:{1:s}'.format(
115 credential_type, credential_data)
116 ])
117
118 cmd.extend(['--temporary_directory', self.tmp_dir])
119 cmd.extend(['--logfile', plaso_log])
120 cmd.extend(['--unattended'])
121 cmd.extend([plaso_file, evidence.local_path])
122
123 result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
124 self.execute(
125 cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],
126 close=True)
127
128 return result
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py
--- a/turbinia/workers/plaso.py
+++ b/turbinia/workers/plaso.py
@@ -118,7 +118,8 @@
cmd.extend(['--temporary_directory', self.tmp_dir])
cmd.extend(['--logfile', plaso_log])
cmd.extend(['--unattended'])
- cmd.extend([plaso_file, evidence.local_path])
+ cmd.extend(['--storage_file', plaso_file])
+ cmd.extend([evidence.local_path])
result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
self.execute(
| {"golden_diff": "diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py\n--- a/turbinia/workers/plaso.py\n+++ b/turbinia/workers/plaso.py\n@@ -118,7 +118,8 @@\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n- cmd.extend([plaso_file, evidence.local_path])\n+ cmd.extend(['--storage_file', plaso_file])\n+ cmd.extend([evidence.local_path])\n \n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n", "issue": "Add --storage_file parameter to log2timeline\nAdd `--storage_file` parameter as new plaso version will not work without this anymore.\r\n\r\nhttps://github.com/google/turbinia/blob/23a97d9d826cbcc51e6b5dfd50d85251506bf242/turbinia/workers/plaso.py#L121\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'partitions': 'all',\n 'vss_stores': 'none',\n 'artifact_filters': None,\n 'file_filter': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. 
log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'partitions', 'vss_stores',\n 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend([plaso_file, evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]\n\n 
TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'partitions': 'all',\n 'vss_stores': 'none',\n 'artifact_filters': None,\n 'file_filter': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'partitions', 'vss_stores',\n 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend(['--storage_file', plaso_file])\n cmd.extend([evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}]} | 1,676 | 153 |
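The change above replaces the positional plaso output path with an explicit `--storage_file` flag. A self-contained sketch of the corrected command assembly, using placeholder paths rather than values from a real Turbinia run:

```python
# Sketch: building the log2timeline command tail with --storage_file (placeholder paths).
plaso_file = "/tmp/task-id.plaso"
plaso_log = "/output/task-id.log"
local_path = "/evidence/disk.raw"

cmd = ["log2timeline.py", "--status_view", "none"]
cmd.extend(["--temporary_directory", "/tmp"])
cmd.extend(["--logfile", plaso_log])
cmd.extend(["--unattended"])
# Newer plaso releases expect the storage file via --storage_file, not positionally.
cmd.extend(["--storage_file", plaso_file])
cmd.extend([local_path])

print(" ".join(cmd))
```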
gh_patches_debug_22297 | rasdani/github-patches | git_diff | canonical__snapcraft-4427 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`snapcraft remote-build --launchpad-timeout` does not work
### Bug Description
The argument `--launchpad-timeout` for remote-build stopped being accepted when snapcraft 7 was released.
Scope of work:
1. Add `--launchpad-timeout` as an argparse argument in `snapcraft/commands/remote.py`
2. Test that it gets passed to the new and fallback remote builders.
### To Reproduce
`snapcraft remote-build --launchpad-timeout <seconds>`
### Environment
n/a
### snapcraft.yaml
```shell
n/a
```
### Relevant log output
```shell
Usage: snapcraft [options] command [args]...
Try 'snapcraft remote-build -h' for help.
Error: unrecognized arguments: --launchpad-timeout 3600
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `snapcraft/commands/remote.py`
Content:
```
1 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
2 #
3 # Copyright 2022 Canonical Ltd.
4 #
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License version 3 as
7 # published by the Free Software Foundation.
8 #
9 # This program is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with this program. If not, see <http://www.gnu.org/licenses/>.
16
17 """Snapcraft remote build command."""
18
19 import argparse
20 import os
21 import textwrap
22
23 from craft_cli import BaseCommand, emit
24 from craft_cli.helptexts import HIDDEN
25 from overrides import overrides
26
27 from snapcraft.legacy_cli import run_legacy
28 from snapcraft.parts.lifecycle import get_snap_project, process_yaml
29 from snapcraft.utils import confirm_with_user
30 from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError
31
32 _CONFIRMATION_PROMPT = (
33 "All data sent to remote builders will be publicly available. "
34 "Are you sure you want to continue?"
35 )
36
37
38 class RemoteBuildCommand(BaseCommand):
39 """Command passthrough for the remote-build command."""
40
41 name = "remote-build"
42 help_msg = "Dispatch a snap for remote build"
43 overview = textwrap.dedent(
44 """
45 Command remote-build sends the current project to be built
46 remotely. After the build is complete, packages for each
47 architecture are retrieved and will be available in the
48 local filesystem.
49
50 If not specified in the snapcraft.yaml file, the list of
51 architectures to build can be set using the --build-on option.
52 If both are specified, an error will occur.
53
54 Interrupted remote builds can be resumed using the --recover
55 option, followed by the build number informed when the remote
56 build was originally dispatched. The current state of the
57 remote build for each architecture can be checked using the
58 --status option."""
59 )
60
61 @overrides
62 def fill_parser(self, parser: argparse.ArgumentParser) -> None:
63 parser.add_argument(
64 "--recover", action="store_true", help="recover an interrupted build"
65 )
66 parser.add_argument(
67 "--status", action="store_true", help="display remote build status"
68 )
69 parser_target = parser.add_mutually_exclusive_group()
70 parser_target.add_argument(
71 "--build-on",
72 metavar="arch",
73 nargs="+",
74 help=HIDDEN,
75 )
76 parser_target.add_argument(
77 "--build-for",
78 metavar="arch",
79 nargs="+",
80 help="architecture to build for",
81 )
82 parser.add_argument(
83 "--build-id", metavar="build-id", help="specific build id to retrieve"
84 )
85 parser.add_argument(
86 "--launchpad-accept-public-upload",
87 action="store_true",
88 help="acknowledge that uploaded code will be publicly available.",
89 )
90
91 @overrides
92 def run(self, parsed_args):
93 if os.getenv("SUDO_USER") and os.geteuid() == 0:
94 emit.message(
95 "Running with 'sudo' may cause permission errors and is discouraged."
96 )
97
98 emit.message(
99 "snapcraft remote-build is experimental and is subject to change - use with caution."
100 )
101
102 if parsed_args.build_on:
103 emit.message("Use --build-for instead of --build-on")
104 parsed_args.build_for = parsed_args.build_on
105
106 if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(
107 _CONFIRMATION_PROMPT
108 ):
109 raise AcceptPublicUploadError()
110
111 snap_project = get_snap_project()
112 # TODO proper core22 support would mean we need to load the project
113 # yaml_data = process_yaml(snap_project.project_file)
114 # for now, only log explicitly that we are falling back to legacy to
115 # remote build for core22
116 process_yaml(snap_project.project_file)
117
118 emit.debug(
119 "core22 not yet supported in new code base: re-executing into legacy for remote-build"
120 )
121 run_legacy()
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py
--- a/snapcraft/commands/remote.py
+++ b/snapcraft/commands/remote.py
@@ -55,7 +55,13 @@
option, followed by the build number informed when the remote
build was originally dispatched. The current state of the
remote build for each architecture can be checked using the
- --status option."""
+ --status option.
+
+ To set a timeout on the remote-build command, use the option
+ ``--launchpad-timeout=<seconds>``. The timeout is local, so the build on
+ launchpad will continue even if the local instance of snapcraft is
+ interrupted or times out.
+ """
)
@overrides
@@ -87,6 +93,13 @@
action="store_true",
help="acknowledge that uploaded code will be publicly available.",
)
+ parser.add_argument(
+ "--launchpad-timeout",
+ type=int,
+ default=0,
+ metavar="<seconds>",
+ help="Time in seconds to wait for launchpad to build.",
+ )
@overrides
def run(self, parsed_args):
| {"golden_diff": "diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py\n--- a/snapcraft/commands/remote.py\n+++ b/snapcraft/commands/remote.py\n@@ -55,7 +55,13 @@\n option, followed by the build number informed when the remote\n build was originally dispatched. The current state of the\n remote build for each architecture can be checked using the\n- --status option.\"\"\"\n+ --status option.\n+\n+ To set a timeout on the remote-build command, use the option\n+ ``--launchpad-timeout=<seconds>``. The timeout is local, so the build on\n+ launchpad will continue even if the local instance of snapcraft is\n+ interrupted or times out.\n+ \"\"\"\n )\n \n @overrides\n@@ -87,6 +93,13 @@\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n+ parser.add_argument(\n+ \"--launchpad-timeout\",\n+ type=int,\n+ default=0,\n+ metavar=\"<seconds>\",\n+ help=\"Time in seconds to wait for launchpad to build.\",\n+ )\n \n @overrides\n def run(self, parsed_args):\n", "issue": "`snapcraft remote-build --launchpad-timeout` does not work\n### Bug Description\n\nThe argument `--launchpad-timeout` for remote-build stopped being accepted when snapcraft 7 was release.\r\n\r\nScope of work:\r\n1. Add `--launchpad-timeout` as an argparse argument in `snapcraft/commands/remote.py`\r\n2. Test that it gets passed to the new and fallback remote builders.\n\n### To Reproduce\n\n`snapcraft remote-build --launchpad-timeout <seconds>`\n\n### Environment\n\nn/a\n\n### snapcraft.yaml\n\n```shell\nn/a\n```\n\n\n### Relevant log output\n\n```shell\nUsage: snapcraft [options] command [args]...\r\nTry 'snapcraft remote-build -h' for help.\r\n\r\nError: unrecognized arguments: --launchpad-timeout 3600\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright 2022 Canonical Ltd.\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Snapcraft remote build command.\"\"\"\n\nimport argparse\nimport os\nimport textwrap\n\nfrom craft_cli import BaseCommand, emit\nfrom craft_cli.helptexts import HIDDEN\nfrom overrides import overrides\n\nfrom snapcraft.legacy_cli import run_legacy\nfrom snapcraft.parts.lifecycle import get_snap_project, process_yaml\nfrom snapcraft.utils import confirm_with_user\nfrom snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n\n_CONFIRMATION_PROMPT = (\n \"All data sent to remote builders will be publicly available. \"\n \"Are you sure you want to continue?\"\n)\n\n\nclass RemoteBuildCommand(BaseCommand):\n \"\"\"Command passthrough for the remote-build command.\"\"\"\n\n name = \"remote-build\"\n help_msg = \"Dispatch a snap for remote build\"\n overview = textwrap.dedent(\n \"\"\"\n Command remote-build sends the current project to be built\n remotely. 
After the build is complete, packages for each\n architecture are retrieved and will be available in the\n local filesystem.\n\n If not specified in the snapcraft.yaml file, the list of\n architectures to build can be set using the --build-on option.\n If both are specified, an error will occur.\n\n Interrupted remote builds can be resumed using the --recover\n option, followed by the build number informed when the remote\n build was originally dispatched. The current state of the\n remote build for each architecture can be checked using the\n --status option.\"\"\"\n )\n\n @overrides\n def fill_parser(self, parser: argparse.ArgumentParser) -> None:\n parser.add_argument(\n \"--recover\", action=\"store_true\", help=\"recover an interrupted build\"\n )\n parser.add_argument(\n \"--status\", action=\"store_true\", help=\"display remote build status\"\n )\n parser_target = parser.add_mutually_exclusive_group()\n parser_target.add_argument(\n \"--build-on\",\n metavar=\"arch\",\n nargs=\"+\",\n help=HIDDEN,\n )\n parser_target.add_argument(\n \"--build-for\",\n metavar=\"arch\",\n nargs=\"+\",\n help=\"architecture to build for\",\n )\n parser.add_argument(\n \"--build-id\", metavar=\"build-id\", help=\"specific build id to retrieve\"\n )\n parser.add_argument(\n \"--launchpad-accept-public-upload\",\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n\n @overrides\n def run(self, parsed_args):\n if os.getenv(\"SUDO_USER\") and os.geteuid() == 0:\n emit.message(\n \"Running with 'sudo' may cause permission errors and is discouraged.\"\n )\n\n emit.message(\n \"snapcraft remote-build is experimental and is subject to change - use with caution.\"\n )\n\n if parsed_args.build_on:\n emit.message(\"Use --build-for instead of --build-on\")\n parsed_args.build_for = parsed_args.build_on\n\n if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(\n _CONFIRMATION_PROMPT\n ):\n raise AcceptPublicUploadError()\n\n snap_project = get_snap_project()\n # TODO proper core22 support would mean we need to load the project\n # yaml_data = process_yaml(snap_project.project_file)\n # for now, only log explicitly that we are falling back to legacy to\n # remote build for core22\n process_yaml(snap_project.project_file)\n\n emit.debug(\n \"core22 not yet supported in new code base: re-executing into legacy for remote-build\"\n )\n run_legacy()\n", "path": "snapcraft/commands/remote.py"}], "after_files": [{"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright 2022 Canonical Ltd.\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Snapcraft remote build command.\"\"\"\n\nimport argparse\nimport os\nimport textwrap\n\nfrom craft_cli import BaseCommand, emit\nfrom craft_cli.helptexts import HIDDEN\nfrom overrides import overrides\n\nfrom snapcraft.legacy_cli import run_legacy\nfrom snapcraft.parts.lifecycle import get_snap_project, process_yaml\nfrom snapcraft.utils import confirm_with_user\nfrom snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n\n_CONFIRMATION_PROMPT = (\n \"All data sent to remote builders will be publicly available. \"\n \"Are you sure you want to continue?\"\n)\n\n\nclass RemoteBuildCommand(BaseCommand):\n \"\"\"Command passthrough for the remote-build command.\"\"\"\n\n name = \"remote-build\"\n help_msg = \"Dispatch a snap for remote build\"\n overview = textwrap.dedent(\n \"\"\"\n Command remote-build sends the current project to be built\n remotely. After the build is complete, packages for each\n architecture are retrieved and will be available in the\n local filesystem.\n\n If not specified in the snapcraft.yaml file, the list of\n architectures to build can be set using the --build-on option.\n If both are specified, an error will occur.\n\n Interrupted remote builds can be resumed using the --recover\n option, followed by the build number informed when the remote\n build was originally dispatched. The current state of the\n remote build for each architecture can be checked using the\n --status option.\n\n To set a timeout on the remote-build command, use the option\n ``--launchpad-timeout=<seconds>``. The timeout is local, so the build on\n launchpad will continue even if the local instance of snapcraft is\n interrupted or times out.\n \"\"\"\n )\n\n @overrides\n def fill_parser(self, parser: argparse.ArgumentParser) -> None:\n parser.add_argument(\n \"--recover\", action=\"store_true\", help=\"recover an interrupted build\"\n )\n parser.add_argument(\n \"--status\", action=\"store_true\", help=\"display remote build status\"\n )\n parser_target = parser.add_mutually_exclusive_group()\n parser_target.add_argument(\n \"--build-on\",\n metavar=\"arch\",\n nargs=\"+\",\n help=HIDDEN,\n )\n parser_target.add_argument(\n \"--build-for\",\n metavar=\"arch\",\n nargs=\"+\",\n help=\"architecture to build for\",\n )\n parser.add_argument(\n \"--build-id\", metavar=\"build-id\", help=\"specific build id to retrieve\"\n )\n parser.add_argument(\n \"--launchpad-accept-public-upload\",\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n parser.add_argument(\n \"--launchpad-timeout\",\n type=int,\n default=0,\n metavar=\"<seconds>\",\n help=\"Time in seconds to wait for launchpad to build.\",\n )\n\n @overrides\n def run(self, parsed_args):\n if os.getenv(\"SUDO_USER\") and os.geteuid() == 0:\n emit.message(\n \"Running with 'sudo' may cause permission errors and is discouraged.\"\n )\n\n emit.message(\n \"snapcraft remote-build is experimental and is subject to change - use with caution.\"\n )\n\n if parsed_args.build_on:\n emit.message(\"Use --build-for instead of --build-on\")\n parsed_args.build_for = parsed_args.build_on\n\n if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(\n _CONFIRMATION_PROMPT\n ):\n raise AcceptPublicUploadError()\n\n snap_project = get_snap_project()\n # TODO proper core22 support would mean we need to load the project\n # yaml_data = process_yaml(snap_project.project_file)\n # for now, only log explicitly that we are 
falling back to legacy to\n # remote build for core22\n process_yaml(snap_project.project_file)\n\n emit.debug(\n \"core22 not yet supported in new code base: re-executing into legacy for remote-build\"\n )\n run_legacy()\n", "path": "snapcraft/commands/remote.py"}]} | 1,616 | 275 |
gh_patches_debug_10009 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5886 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AZURE_234 condition is incorrect
https://github.com/bridgecrewio/checkov/blob/dc6a7cd84c5e006c289f2710b960b7be96a29fae/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py#L20C110-L20C118
The condition used in this check is being triggered for all `azurerm_security_center_subscription_pricing` resources with **any** `resource_type`. For example,
```
resource "azurerm_security_center_subscription_pricing" "mdc_srvrs" {
tier = "Standard"
resource_type = "VirtualMachines"
subplan = "P2"
```
Would raise the `CKV_AZURE_234` finding. For any other `resource_type` we get a failure.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.common.models.enums import CheckCategories, CheckResult
6 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
7
8
9 class AzureDefenderDisabledForResManager(BaseResourceCheck):
10 def __init__(self) -> None:
11 name = "Ensure that Azure Defender for cloud is set to On for Resource Manager"
12 id = "CKV_AZURE_234"
13 supported_resources = ("azurerm_security_center_subscription_pricing",)
14 categories = (CheckCategories.GENERAL_SECURITY,)
15 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
16
17 def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
18 return (
19 CheckResult.PASSED
20 if conf.get("resource_type", [""])[0].lower() == "arm" and conf.get("tier", [""])[0].lower() == "standard"
21 else CheckResult.FAILED
22 )
23
24 def get_evaluated_keys(self) -> list[str]:
25 return ["resource_type", "tier"]
26
27
28 check = AzureDefenderDisabledForResManager()
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py
--- a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py
+++ b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py
@@ -16,9 +16,9 @@
def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
return (
- CheckResult.PASSED
- if conf.get("resource_type", [""])[0].lower() == "arm" and conf.get("tier", [""])[0].lower() == "standard"
- else CheckResult.FAILED
+ CheckResult.FAILED
+ if conf.get("resource_type", [""])[0].lower() == "arm" and conf.get("tier", [""])[0].lower() != "standard"
+ else CheckResult.PASSED
)
def get_evaluated_keys(self) -> list[str]:
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py\n--- a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py\n+++ b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py\n@@ -16,9 +16,9 @@\n \n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n return (\n- CheckResult.PASSED\n- if conf.get(\"resource_type\", [\"\"])[0].lower() == \"arm\" and conf.get(\"tier\", [\"\"])[0].lower() == \"standard\"\n- else CheckResult.FAILED\n+ CheckResult.FAILED\n+ if conf.get(\"resource_type\", [\"\"])[0].lower() == \"arm\" and conf.get(\"tier\", [\"\"])[0].lower() != \"standard\"\n+ else CheckResult.PASSED\n )\n \n def get_evaluated_keys(self) -> list[str]:\n", "issue": "CKV_AZURE_234 condition is incorrect\nhttps://github.com/bridgecrewio/checkov/blob/dc6a7cd84c5e006c289f2710b960b7be96a29fae/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py#L20C110-L20C118\r\n\r\nThe condition used in this check is being triggered for all `azurerm_security_center_subscription_pricing` resources with **any** `resource_type`. For example, \r\n\r\n```\r\nresource \"azurerm_security_center_subscription_pricing\" \"mdc_srvrs\" {\r\n tier = \"Standard\"\r\n resource_type = \"VirtualMachines\"\r\n subplan = \"P2\"\r\n```\r\n\r\nWould raise the `CKV_AZURE_234` finding. For any other `resource_type` we get a failure.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass AzureDefenderDisabledForResManager(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure that Azure Defender for cloud is set to On for Resource Manager\"\n id = \"CKV_AZURE_234\"\n supported_resources = (\"azurerm_security_center_subscription_pricing\",)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n return (\n CheckResult.PASSED\n if conf.get(\"resource_type\", [\"\"])[0].lower() == \"arm\" and conf.get(\"tier\", [\"\"])[0].lower() == \"standard\"\n else CheckResult.FAILED\n )\n\n def get_evaluated_keys(self) -> list[str]:\n return [\"resource_type\", \"tier\"]\n\n\ncheck = AzureDefenderDisabledForResManager()\n", "path": "checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass AzureDefenderDisabledForResManager(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure that Azure Defender for cloud is set to On for Resource Manager\"\n id = \"CKV_AZURE_234\"\n supported_resources = (\"azurerm_security_center_subscription_pricing\",)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n return (\n CheckResult.FAILED\n if conf.get(\"resource_type\", [\"\"])[0].lower() 
== \"arm\" and conf.get(\"tier\", [\"\"])[0].lower() != \"standard\"\n else CheckResult.PASSED\n )\n\n def get_evaluated_keys(self) -> list[str]:\n return [\"resource_type\", \"tier\"]\n\n\ncheck = AzureDefenderDisabledForResManager()\n", "path": "checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py"}]} | 780 | 242 |
gh_patches_debug_4237 | rasdani/github-patches | git_diff | great-expectations__great_expectations-2600 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG with dependency: SQLAlchemy 1.4.0 for user installation
**Describe the bug**
SQLAlchemy `>= 1.4.0` breaks the imports in
https://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/great_expectations/dataset/sqlalchemy_dataset.py#L42
This isn't showing up in the tests since the tests [pin the version](https://github.com/great-expectations/great_expectations/pull/2547) from above whereas the user installation doesn't
https://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/setup.py#L25
The stack trace is from trying to scaffold a new suite is:
```
...
File ".../great_expectations/data_context/data_context.py", line 1421, in get_batch
return self._get_batch_v2(
File ".../great_expectations/data_context/data_context.py", line 1147, in _get_batch_v2
return validator.get_dataset()
File ".../great_expectations/validator/validator.py", line 1431, in get_dataset
return self.expectation_engine(
File ".../great_expectations/dataset/sqlalchemy_dataset.py", line 508, in __init__
self._table = sa.Table(table_name, sa.MetaData(), schema=schema)
AttributeError: 'NoneType' object has no attribute 'Table'
```
As a work around until the code is upgraded to work with `1.4.0` the version installed should be `<1.4.0`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages, setup
2
3 import versioneer
4
5 # Parse requirements.txt
6 with open("requirements.txt") as f:
7 required = f.read().splitlines()
8
9 # try:
10 # import pypandoc
11 # long_description = pypandoc.convert_file('README.md', 'rst')
12 # except (IOError, ImportError):
13 long_description = "Always know what to expect from your data. (See https://github.com/great-expectations/great_expectations for full description)."
14
15 config = {
16 "description": "Always know what to expect from your data.",
17 "author": "The Great Expectations Team",
18 "url": "https://github.com/great-expectations/great_expectations",
19 "author_email": "[email protected]",
20 "version": versioneer.get_version(),
21 "cmdclass": versioneer.get_cmdclass(),
22 "install_requires": required,
23 "extras_require": {
24 "spark": ["pyspark>=2.3.2"],
25 "sqlalchemy": ["sqlalchemy>=1.2"],
26 "airflow": ["apache-airflow[s3]>=1.9.0", "boto3>=1.7.3"],
27 "gcp": [
28 "google-cloud>=0.34.0",
29 "google-cloud-storage>=1.28.0",
30 "google-cloud-secret-manager>=1.0.0",
31 "pybigquery==0.4.15",
32 ],
33 "redshift": ["psycopg2>=2.8"],
34 "s3": ["boto3>=1.14"],
35 "aws_secrets": ["boto3>=1.8.7"],
36 "azure_secrets": ["azure-identity>=1.0.0", "azure-keyvault-secrets>=4.0.0"],
37 "snowflake": ["snowflake-sqlalchemy>=1.2"],
38 },
39 "packages": find_packages(exclude=["contrib*", "docs*", "tests*", "examples*"]),
40 "entry_points": {
41 "console_scripts": ["great_expectations=great_expectations.cli:main"]
42 },
43 "name": "great_expectations",
44 "long_description": long_description,
45 "license": "Apache-2.0",
46 "keywords": "data science testing pipeline data quality dataquality validation datavalidation",
47 "include_package_data": True,
48 "classifiers": [
49 "Development Status :: 4 - Beta",
50 "Intended Audience :: Developers",
51 "Intended Audience :: Science/Research",
52 "Intended Audience :: Other Audience",
53 "Topic :: Scientific/Engineering",
54 "Topic :: Software Development",
55 "Topic :: Software Development :: Testing",
56 "License :: OSI Approved :: Apache Software License",
57 "Programming Language :: Python :: 3",
58 "Programming Language :: Python :: 3.6",
59 "Programming Language :: Python :: 3.7",
60 "Programming Language :: Python :: 3.8",
61 ],
62 }
63
64 setup(**config)
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,7 +22,7 @@
"install_requires": required,
"extras_require": {
"spark": ["pyspark>=2.3.2"],
- "sqlalchemy": ["sqlalchemy>=1.2"],
+ "sqlalchemy": ["sqlalchemy>=1.2,<1.4.0"],
"airflow": ["apache-airflow[s3]>=1.9.0", "boto3>=1.7.3"],
"gcp": [
"google-cloud>=0.34.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,7 +22,7 @@\n \"install_requires\": required,\n \"extras_require\": {\n \"spark\": [\"pyspark>=2.3.2\"],\n- \"sqlalchemy\": [\"sqlalchemy>=1.2\"],\n+ \"sqlalchemy\": [\"sqlalchemy>=1.2,<1.4.0\"],\n \"airflow\": [\"apache-airflow[s3]>=1.9.0\", \"boto3>=1.7.3\"],\n \"gcp\": [\n \"google-cloud>=0.34.0\",\n", "issue": "BUG with dependency: SQLAlchemy 1.4.0 for user installation\n**Describe the bug**\r\n\r\nSQLAlchemy `>= 1.4.0` breaks the imports in \r\nhttps://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/great_expectations/dataset/sqlalchemy_dataset.py#L42\r\n\r\nThis isn't showing up in the tests since the tests [pin the version](https://github.com/great-expectations/great_expectations/pull/2547) from above whereas the user installation doesn't\r\nhttps://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/setup.py#L25\r\n\r\nThe stack trace is from trying to scaffold a new suite is:\r\n```\r\n...\r\n File \".../great_expectations/data_context/data_context.py\", line 1421, in get_batch\r\n return self._get_batch_v2(\r\n File \".../great_expectations/data_context/data_context.py\", line 1147, in _get_batch_v2\r\n return validator.get_dataset()\r\n File \".../great_expectations/validator/validator.py\", line 1431, in get_dataset\r\n return self.expectation_engine(\r\n File \".../great_expectations/dataset/sqlalchemy_dataset.py\", line 508, in __init__\r\n self._table = sa.Table(table_name, sa.MetaData(), schema=schema)\r\nAttributeError: 'NoneType' object has no attribute 'Table'\r\n```\r\n\r\nAs a work around until the code is upgraded to work with `1.4.0` the version installed should be `<1.4.0`.\n", "before_files": [{"content": "from setuptools import find_packages, setup\n\nimport versioneer\n\n# Parse requirements.txt\nwith open(\"requirements.txt\") as f:\n required = f.read().splitlines()\n\n# try:\n# import pypandoc\n# long_description = pypandoc.convert_file('README.md', 'rst')\n# except (IOError, ImportError):\nlong_description = \"Always know what to expect from your data. 
(See https://github.com/great-expectations/great_expectations for full description).\"\n\nconfig = {\n \"description\": \"Always know what to expect from your data.\",\n \"author\": \"The Great Expectations Team\",\n \"url\": \"https://github.com/great-expectations/great_expectations\",\n \"author_email\": \"[email protected]\",\n \"version\": versioneer.get_version(),\n \"cmdclass\": versioneer.get_cmdclass(),\n \"install_requires\": required,\n \"extras_require\": {\n \"spark\": [\"pyspark>=2.3.2\"],\n \"sqlalchemy\": [\"sqlalchemy>=1.2\"],\n \"airflow\": [\"apache-airflow[s3]>=1.9.0\", \"boto3>=1.7.3\"],\n \"gcp\": [\n \"google-cloud>=0.34.0\",\n \"google-cloud-storage>=1.28.0\",\n \"google-cloud-secret-manager>=1.0.0\",\n \"pybigquery==0.4.15\",\n ],\n \"redshift\": [\"psycopg2>=2.8\"],\n \"s3\": [\"boto3>=1.14\"],\n \"aws_secrets\": [\"boto3>=1.8.7\"],\n \"azure_secrets\": [\"azure-identity>=1.0.0\", \"azure-keyvault-secrets>=4.0.0\"],\n \"snowflake\": [\"snowflake-sqlalchemy>=1.2\"],\n },\n \"packages\": find_packages(exclude=[\"contrib*\", \"docs*\", \"tests*\", \"examples*\"]),\n \"entry_points\": {\n \"console_scripts\": [\"great_expectations=great_expectations.cli:main\"]\n },\n \"name\": \"great_expectations\",\n \"long_description\": long_description,\n \"license\": \"Apache-2.0\",\n \"keywords\": \"data science testing pipeline data quality dataquality validation datavalidation\",\n \"include_package_data\": True,\n \"classifiers\": [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Other Audience\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Testing\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n}\n\nsetup(**config)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import find_packages, setup\n\nimport versioneer\n\n# Parse requirements.txt\nwith open(\"requirements.txt\") as f:\n required = f.read().splitlines()\n\n# try:\n# import pypandoc\n# long_description = pypandoc.convert_file('README.md', 'rst')\n# except (IOError, ImportError):\nlong_description = \"Always know what to expect from your data. 
(See https://github.com/great-expectations/great_expectations for full description).\"\n\nconfig = {\n \"description\": \"Always know what to expect from your data.\",\n \"author\": \"The Great Expectations Team\",\n \"url\": \"https://github.com/great-expectations/great_expectations\",\n \"author_email\": \"[email protected]\",\n \"version\": versioneer.get_version(),\n \"cmdclass\": versioneer.get_cmdclass(),\n \"install_requires\": required,\n \"extras_require\": {\n \"spark\": [\"pyspark>=2.3.2\"],\n \"sqlalchemy\": [\"sqlalchemy>=1.2,<1.4.0\"],\n \"airflow\": [\"apache-airflow[s3]>=1.9.0\", \"boto3>=1.7.3\"],\n \"gcp\": [\n \"google-cloud>=0.34.0\",\n \"google-cloud-storage>=1.28.0\",\n \"google-cloud-secret-manager>=1.0.0\",\n \"pybigquery==0.4.15\",\n ],\n \"redshift\": [\"psycopg2>=2.8\"],\n \"s3\": [\"boto3>=1.14\"],\n \"aws_secrets\": [\"boto3>=1.8.7\"],\n \"azure_secrets\": [\"azure-identity>=1.0.0\", \"azure-keyvault-secrets>=4.0.0\"],\n \"snowflake\": [\"snowflake-sqlalchemy>=1.2\"],\n },\n \"packages\": find_packages(exclude=[\"contrib*\", \"docs*\", \"tests*\", \"examples*\"]),\n \"entry_points\": {\n \"console_scripts\": [\"great_expectations=great_expectations.cli:main\"]\n },\n \"name\": \"great_expectations\",\n \"long_description\": long_description,\n \"license\": \"Apache-2.0\",\n \"keywords\": \"data science testing pipeline data quality dataquality validation datavalidation\",\n \"include_package_data\": True,\n \"classifiers\": [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Other Audience\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Testing\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n}\n\nsetup(**config)\n", "path": "setup.py"}]} | 1,437 | 141 |
gh_patches_debug_26787 | rasdani/github-patches | git_diff | conan-io__conan-center-index-7037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[request] rapidcheck/20210702
### Package Details
* Package Name/Version: **rapidcheck/20210702**
The above-mentioned version solves issue #5205 and is not yet available as a recipe. Please add this version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/rapidcheck/all/conanfile.py`
Content:
```
1 from conans import CMake, ConanFile, tools
2 from conans.errors import ConanInvalidConfiguration
3 import os
4 import textwrap
5
6 required_conan_version = ">=1.33.0"
7
8
9 class RapidcheckConan(ConanFile):
10 name = "rapidcheck"
11 description = "QuickCheck clone for C++ with the goal of being simple to use with as little boilerplate as possible"
12 url = "https://github.com/conan-io/conan-center-index"
13 homepage = "https://github.com/emil-e/rapidcheck"
14 license = "BSD-2-Clause"
15 topics = "quickcheck", "testing", "property-testing"
16 exports_sources = "CMakeLists.txt"
17 generators = "cmake"
18 settings = "os", "arch", "compiler", "build_type"
19 options = {
20 "shared": [True, False],
21 "fPIC": [True, False],
22 "enable_rtti": [True, False],
23 }
24 default_options = {
25 "shared": False,
26 "fPIC": True,
27 "enable_rtti": True,
28 }
29
30 _cmake = None
31
32 @property
33 def _source_subfolder(self):
34 return "source_subfolder"
35
36 @property
37 def _build_subfolder(self):
38 return "build_subfolder"
39
40 def config_options(self):
41 if self.settings.os == "Windows":
42 del self.options.fPIC
43
44 def configure(self):
45 if self.options.shared:
46 del self.options.fPIC
47
48 def validate(self):
49 if self.settings.compiler.get_safe("cppstd"):
50 tools.check_min_cppstd(self, 11)
51 if self.settings.compiler == "Visual Studio" and self.options.shared:
52 raise ConanInvalidConfiguration("shared is not supported using Visual Studio")
53
54 def source(self):
55 tools.get(**self.conan_data["sources"][self.version],
56 destination=self._source_subfolder, strip_root=True)
57
58 def _configure_cmake(self):
59 if self._cmake:
60 return self._cmake
61 self._cmake = CMake(self)
62 self._cmake.definitions["RC_ENABLE_RTTI"] = self.options.enable_rtti
63 self._cmake.definitions["RC_ENABLE_TESTS"] = False
64 self._cmake.definitions["RC_ENABLE_EXAMPLES"] = False
65 self._cmake.configure(build_folder=self._build_subfolder)
66 return self._cmake
67
68 def build(self):
69 cmake = self._configure_cmake()
70 cmake.build()
71
72 def package(self):
73 self.copy(pattern="LICENSE*", src=self._source_subfolder, dst="licenses")
74 cmake = self._configure_cmake()
75 cmake.install()
76 tools.rmdir(os.path.join(self.package_folder, "share"))
77 self._create_cmake_module_alias_targets(
78 os.path.join(self.package_folder, self._module_file_rel_path),
79 {"rapidcheck": "rapidcheck::rapidcheck"}
80 )
81
82 @staticmethod
83 def _create_cmake_module_alias_targets(module_file, targets):
84 content = ""
85 for alias, aliased in targets.items():
86 content += textwrap.dedent("""\
87 if(TARGET {aliased} AND NOT TARGET {alias})
88 add_library({alias} INTERFACE IMPORTED)
89 set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})
90 endif()
91 """.format(alias=alias, aliased=aliased))
92 tools.save(module_file, content)
93
94 @property
95 def _module_subfolder(self):
96 return os.path.join("lib", "cmake")
97
98 @property
99 def _module_file_rel_path(self):
100 return os.path.join(self._module_subfolder,
101 "conan-official-{}-targets.cmake".format(self.name))
102
103 def package_info(self):
104 self.cpp_info.names["cmake_find_package"] = "rapidcheck"
105 self.cpp_info.names["cmake_find_package_multi"] = "rapidcheck"
106 self.cpp_info.builddirs.append(self._module_subfolder)
107 self.cpp_info.build_modules["cmake_find_package"] = [self._module_file_rel_path]
108 self.cpp_info.build_modules["cmake_find_package_multi"] = [self._module_file_rel_path]
109 self.cpp_info.libs = ["rapidcheck"]
110 if tools.Version(self.version) < "20201218":
111 if self.options.enable_rtti:
112 self.cpp_info.defines.append("RC_USE_RTTI")
113 else:
114 if not self.options.enable_rtti:
115 self.cpp_info.defines.append("RC_DONT_USE_RTTI")
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/rapidcheck/all/conanfile.py b/recipes/rapidcheck/all/conanfile.py
--- a/recipes/rapidcheck/all/conanfile.py
+++ b/recipes/rapidcheck/all/conanfile.py
@@ -51,6 +51,9 @@
if self.settings.compiler == "Visual Studio" and self.options.shared:
raise ConanInvalidConfiguration("shared is not supported using Visual Studio")
+ if 'cci' not in self.version:
+ self.output.warn("This version has been deprecated in favor of '{}/cci.{}'".format(self.name, self.version))
+
def source(self):
tools.get(**self.conan_data["sources"][self.version],
destination=self._source_subfolder, strip_root=True)
@@ -107,7 +110,11 @@
self.cpp_info.build_modules["cmake_find_package"] = [self._module_file_rel_path]
self.cpp_info.build_modules["cmake_find_package_multi"] = [self._module_file_rel_path]
self.cpp_info.libs = ["rapidcheck"]
- if tools.Version(self.version) < "20201218":
+ # Remove after 9473 is merged.
+ version = self.version
+ if version.startswith("cci."):
+ version = version[4:]
+ if version < "20201218":
if self.options.enable_rtti:
self.cpp_info.defines.append("RC_USE_RTTI")
else:
| {"golden_diff": "diff --git a/recipes/rapidcheck/all/conanfile.py b/recipes/rapidcheck/all/conanfile.py\n--- a/recipes/rapidcheck/all/conanfile.py\n+++ b/recipes/rapidcheck/all/conanfile.py\n@@ -51,6 +51,9 @@\n if self.settings.compiler == \"Visual Studio\" and self.options.shared:\n raise ConanInvalidConfiguration(\"shared is not supported using Visual Studio\")\n \n+ if 'cci' not in self.version:\n+ self.output.warn(\"This version has been deprecated in favor of '{}/cci.{}'\".format(self.name, self.version))\n+\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n@@ -107,7 +110,11 @@\n self.cpp_info.build_modules[\"cmake_find_package\"] = [self._module_file_rel_path]\n self.cpp_info.build_modules[\"cmake_find_package_multi\"] = [self._module_file_rel_path]\n self.cpp_info.libs = [\"rapidcheck\"]\n- if tools.Version(self.version) < \"20201218\":\n+ # Remove after 9473 is merged.\n+ version = self.version\n+ if version.startswith(\"cci.\"):\n+ version = version[4:]\n+ if version < \"20201218\":\n if self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_USE_RTTI\")\n else:\n", "issue": "[request] rapidcheck/20210702\n### Package Details\r\n * Package Name/Version: **rapidcheck/20210702**\r\n\r\nThe above-mentioned version solves issue #5205 and is not yet available as a recipe. Please add this version.\r\n\n", "before_files": [{"content": "from conans import CMake, ConanFile, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport textwrap\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass RapidcheckConan(ConanFile):\n name = \"rapidcheck\"\n description = \"QuickCheck clone for C++ with the goal of being simple to use with as little boilerplate as possible\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/emil-e/rapidcheck\"\n license = \"BSD-2-Clause\"\n topics = \"quickcheck\", \"testing\", \"property-testing\"\n exports_sources = \"CMakeLists.txt\"\n generators = \"cmake\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"enable_rtti\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"enable_rtti\": True,\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n tools.check_min_cppstd(self, 11)\n if self.settings.compiler == \"Visual Studio\" and self.options.shared:\n raise ConanInvalidConfiguration(\"shared is not supported using Visual Studio\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"RC_ENABLE_RTTI\"] = self.options.enable_rtti\n self._cmake.definitions[\"RC_ENABLE_TESTS\"] = False\n self._cmake.definitions[\"RC_ENABLE_EXAMPLES\"] = False\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE*\", 
src=self._source_subfolder, dst=\"licenses\")\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n self._create_cmake_module_alias_targets(\n os.path.join(self.package_folder, self._module_file_rel_path),\n {\"rapidcheck\": \"rapidcheck::rapidcheck\"}\n )\n\n @staticmethod\n def _create_cmake_module_alias_targets(module_file, targets):\n content = \"\"\n for alias, aliased in targets.items():\n content += textwrap.dedent(\"\"\"\\\n if(TARGET {aliased} AND NOT TARGET {alias})\n add_library({alias} INTERFACE IMPORTED)\n set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})\n endif()\n \"\"\".format(alias=alias, aliased=aliased))\n tools.save(module_file, content)\n\n @property\n def _module_subfolder(self):\n return os.path.join(\"lib\", \"cmake\")\n\n @property\n def _module_file_rel_path(self):\n return os.path.join(self._module_subfolder,\n \"conan-official-{}-targets.cmake\".format(self.name))\n\n def package_info(self):\n self.cpp_info.names[\"cmake_find_package\"] = \"rapidcheck\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"rapidcheck\"\n self.cpp_info.builddirs.append(self._module_subfolder)\n self.cpp_info.build_modules[\"cmake_find_package\"] = [self._module_file_rel_path]\n self.cpp_info.build_modules[\"cmake_find_package_multi\"] = [self._module_file_rel_path]\n self.cpp_info.libs = [\"rapidcheck\"]\n if tools.Version(self.version) < \"20201218\":\n if self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_USE_RTTI\")\n else:\n if not self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_DONT_USE_RTTI\")\n", "path": "recipes/rapidcheck/all/conanfile.py"}], "after_files": [{"content": "from conans import CMake, ConanFile, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport textwrap\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass RapidcheckConan(ConanFile):\n name = \"rapidcheck\"\n description = \"QuickCheck clone for C++ with the goal of being simple to use with as little boilerplate as possible\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/emil-e/rapidcheck\"\n license = \"BSD-2-Clause\"\n topics = \"quickcheck\", \"testing\", \"property-testing\"\n exports_sources = \"CMakeLists.txt\"\n generators = \"cmake\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"enable_rtti\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"enable_rtti\": True,\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n tools.check_min_cppstd(self, 11)\n if self.settings.compiler == \"Visual Studio\" and self.options.shared:\n raise ConanInvalidConfiguration(\"shared is not supported using Visual Studio\")\n\n if 'cci' not in self.version:\n self.output.warn(\"This version has been deprecated in favor of '{}/cci.{}'\".format(self.name, self.version))\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return 
self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"RC_ENABLE_RTTI\"] = self.options.enable_rtti\n self._cmake.definitions[\"RC_ENABLE_TESTS\"] = False\n self._cmake.definitions[\"RC_ENABLE_EXAMPLES\"] = False\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE*\", src=self._source_subfolder, dst=\"licenses\")\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n self._create_cmake_module_alias_targets(\n os.path.join(self.package_folder, self._module_file_rel_path),\n {\"rapidcheck\": \"rapidcheck::rapidcheck\"}\n )\n\n @staticmethod\n def _create_cmake_module_alias_targets(module_file, targets):\n content = \"\"\n for alias, aliased in targets.items():\n content += textwrap.dedent(\"\"\"\\\n if(TARGET {aliased} AND NOT TARGET {alias})\n add_library({alias} INTERFACE IMPORTED)\n set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})\n endif()\n \"\"\".format(alias=alias, aliased=aliased))\n tools.save(module_file, content)\n\n @property\n def _module_subfolder(self):\n return os.path.join(\"lib\", \"cmake\")\n\n @property\n def _module_file_rel_path(self):\n return os.path.join(self._module_subfolder,\n \"conan-official-{}-targets.cmake\".format(self.name))\n\n def package_info(self):\n self.cpp_info.names[\"cmake_find_package\"] = \"rapidcheck\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"rapidcheck\"\n self.cpp_info.builddirs.append(self._module_subfolder)\n self.cpp_info.build_modules[\"cmake_find_package\"] = [self._module_file_rel_path]\n self.cpp_info.build_modules[\"cmake_find_package_multi\"] = [self._module_file_rel_path]\n self.cpp_info.libs = [\"rapidcheck\"]\n # Remove after 9473 is merged.\n version = self.version\n if version.startswith(\"cci.\"):\n version = version[4:]\n if version < \"20201218\":\n if self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_USE_RTTI\")\n else:\n if not self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_DONT_USE_RTTI\")\n", "path": "recipes/rapidcheck/all/conanfile.py"}]} | 1,558 | 329 |
gh_patches_debug_4594 | rasdani/github-patches | git_diff | google__turbinia-1054 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FileArtifactExtraction Tasks failing
From a recent run:
```
* FileArtifactExtractionTask: image_export.py failed for artifact TomcatFiles - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact SshdConfigFile - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact RedisConfigFile - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact JupyterConfigFile - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact ApacheAccessLogs - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact NginxAccessLogs - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact GKEDockerContainerLogs - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact LinuxScheduleFiles - local_path not provided.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/workers/artifact.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20
21 from turbinia import config
22 from turbinia.evidence import ExportedFileArtifact
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.workers import TurbiniaTask
25
26
27 class FileArtifactExtractionTask(TurbiniaTask):
28 """Task to run image_export (log2timeline)."""
29
30 REQUIRED_STATES = [state.ATTACHED]
31
32 def __init__(self, artifact_name='FileArtifact'):
33 super(FileArtifactExtractionTask, self).__init__()
34 self.artifact_name = artifact_name
35
36 def run(self, evidence, result):
37 """Extracts artifacts using Plaso image_export.py.
38
39 Args:
40 evidence (Evidence object): The evidence we will process.
41 result (TurbiniaTaskResult): The object to place task results into.
42
43 Returns:
44 TurbiniaTaskResult object.
45 """
46 config.LoadConfig()
47
48 export_directory = os.path.join(self.output_dir, 'export')
49 image_export_log = os.path.join(
50 self.output_dir, '{0:s}.log'.format(self.id))
51
52 cmd = [
53 'sudo',
54 'image_export.py',
55 '--no-hashes',
56 '--logfile',
57 image_export_log,
58 '-w',
59 export_directory,
60 '--partitions',
61 'all',
62 '--volumes',
63 'all',
64 '--unattended',
65 '--artifact_filters',
66 self.artifact_name,
67 ]
68 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
69 cmd.append('-d')
70
71 if evidence.credentials:
72 for credential_type, credential_data in evidence.credentials:
73 cmd.extend([
74 '--credential', '{0:s}:{1:s}'.format(
75 credential_type, credential_data)
76 ])
77
78 # Path to the source image/directory.
79 cmd.append(evidence.local_path)
80 if not evidence.local_path:
81 result.log('Tried to run image_export without local_path')
82 result.close(
83 self, False,
84 'image_export.py failed for artifact {0:s} - local_path not provided.'
85 .format(self.artifact_name))
86 return result
87
88 result.log('Running image_export as [{0:s}]'.format(' '.join(cmd)))
89
90 ret, _ = self.execute(cmd, result, log_files=[image_export_log])
91 if ret:
92 result.close(
93 self, False, 'image_export.py failed for artifact {0:s}.'.format(
94 self.artifact_name))
95 return result
96
97 for dirpath, _, filenames in os.walk(export_directory):
98 for filename in filenames:
99 exported_artifact = ExportedFileArtifact(
100 artifact_name=self.artifact_name, source_path=os.path.join(
101 dirpath, filename))
102 result.log('Adding artifact {0:s}'.format(filename))
103 result.add_evidence(exported_artifact, evidence.config)
104
105 result.close(
106 self, True, 'Extracted {0:d} new {1:s} artifacts'.format(
107 len(result.evidence), self.artifact_name))
108
109 return result
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/workers/artifact.py b/turbinia/workers/artifact.py
--- a/turbinia/workers/artifact.py
+++ b/turbinia/workers/artifact.py
@@ -27,7 +27,7 @@
class FileArtifactExtractionTask(TurbiniaTask):
"""Task to run image_export (log2timeline)."""
- REQUIRED_STATES = [state.ATTACHED]
+ REQUIRED_STATES = [state.ATTACHED, state.CONTAINER_MOUNTED]
def __init__(self, artifact_name='FileArtifact'):
super(FileArtifactExtractionTask, self).__init__()
| {"golden_diff": "diff --git a/turbinia/workers/artifact.py b/turbinia/workers/artifact.py\n--- a/turbinia/workers/artifact.py\n+++ b/turbinia/workers/artifact.py\n@@ -27,7 +27,7 @@\n class FileArtifactExtractionTask(TurbiniaTask):\n \"\"\"Task to run image_export (log2timeline).\"\"\"\n \n- REQUIRED_STATES = [state.ATTACHED]\n+ REQUIRED_STATES = [state.ATTACHED, state.CONTAINER_MOUNTED]\n \n def __init__(self, artifact_name='FileArtifact'):\n super(FileArtifactExtractionTask, self).__init__()\n", "issue": "FileArtifactExtraction Tasks failing\nFrom a recent run:\r\n```\r\n* FileArtifactExtractionTask: image_export.py failed for artifact TomcatFiles - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact SshdConfigFile - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact RedisConfigFile - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact JupyterConfigFile - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact ApacheAccessLogs - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact NginxAccessLogs - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact GKEDockerContainerLogs - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact LinuxScheduleFiles - local_path not provided.\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\n\nfrom turbinia import config\nfrom turbinia.evidence import ExportedFileArtifact\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.workers import TurbiniaTask\n\n\nclass FileArtifactExtractionTask(TurbiniaTask):\n \"\"\"Task to run image_export (log2timeline).\"\"\"\n\n REQUIRED_STATES = [state.ATTACHED]\n\n def __init__(self, artifact_name='FileArtifact'):\n super(FileArtifactExtractionTask, self).__init__()\n self.artifact_name = artifact_name\n\n def run(self, evidence, result):\n \"\"\"Extracts artifacts using Plaso image_export.py.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n config.LoadConfig()\n\n export_directory = os.path.join(self.output_dir, 'export')\n image_export_log = os.path.join(\n self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = [\n 'sudo',\n 'image_export.py',\n '--no-hashes',\n '--logfile',\n image_export_log,\n '-w',\n export_directory,\n '--partitions',\n 'all',\n '--volumes',\n 'all',\n '--unattended',\n '--artifact_filters',\n self.artifact_name,\n ]\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in 
evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n # Path to the source image/directory.\n cmd.append(evidence.local_path)\n if not evidence.local_path:\n result.log('Tried to run image_export without local_path')\n result.close(\n self, False,\n 'image_export.py failed for artifact {0:s} - local_path not provided.'\n .format(self.artifact_name))\n return result\n\n result.log('Running image_export as [{0:s}]'.format(' '.join(cmd)))\n\n ret, _ = self.execute(cmd, result, log_files=[image_export_log])\n if ret:\n result.close(\n self, False, 'image_export.py failed for artifact {0:s}.'.format(\n self.artifact_name))\n return result\n\n for dirpath, _, filenames in os.walk(export_directory):\n for filename in filenames:\n exported_artifact = ExportedFileArtifact(\n artifact_name=self.artifact_name, source_path=os.path.join(\n dirpath, filename))\n result.log('Adding artifact {0:s}'.format(filename))\n result.add_evidence(exported_artifact, evidence.config)\n\n result.close(\n self, True, 'Extracted {0:d} new {1:s} artifacts'.format(\n len(result.evidence), self.artifact_name))\n\n return result\n", "path": "turbinia/workers/artifact.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\n\nfrom turbinia import config\nfrom turbinia.evidence import ExportedFileArtifact\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.workers import TurbiniaTask\n\n\nclass FileArtifactExtractionTask(TurbiniaTask):\n \"\"\"Task to run image_export (log2timeline).\"\"\"\n\n REQUIRED_STATES = [state.ATTACHED, state.CONTAINER_MOUNTED]\n\n def __init__(self, artifact_name='FileArtifact'):\n super(FileArtifactExtractionTask, self).__init__()\n self.artifact_name = artifact_name\n\n def run(self, evidence, result):\n \"\"\"Extracts artifacts using Plaso image_export.py.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n config.LoadConfig()\n\n export_directory = os.path.join(self.output_dir, 'export')\n image_export_log = os.path.join(\n self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = [\n 'sudo',\n 'image_export.py',\n '--no-hashes',\n '--logfile',\n image_export_log,\n '-w',\n export_directory,\n '--partitions',\n 'all',\n '--volumes',\n 'all',\n '--unattended',\n '--artifact_filters',\n self.artifact_name,\n ]\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n # Path to the source image/directory.\n cmd.append(evidence.local_path)\n if not evidence.local_path:\n result.log('Tried to 
run image_export without local_path')\n result.close(\n self, False,\n 'image_export.py failed for artifact {0:s} - local_path not provided.'\n .format(self.artifact_name))\n return result\n\n result.log('Running image_export as [{0:s}]'.format(' '.join(cmd)))\n\n ret, _ = self.execute(cmd, result, log_files=[image_export_log])\n if ret:\n result.close(\n self, False, 'image_export.py failed for artifact {0:s}.'.format(\n self.artifact_name))\n return result\n\n for dirpath, _, filenames in os.walk(export_directory):\n for filename in filenames:\n exported_artifact = ExportedFileArtifact(\n artifact_name=self.artifact_name, source_path=os.path.join(\n dirpath, filename))\n result.log('Adding artifact {0:s}'.format(filename))\n result.add_evidence(exported_artifact, evidence.config)\n\n result.close(\n self, True, 'Extracted {0:d} new {1:s} artifacts'.format(\n len(result.evidence), self.artifact_name))\n\n return result\n", "path": "turbinia/workers/artifact.py"}]} | 1,485 | 140 |
gh_patches_debug_35897 | rasdani/github-patches | git_diff | meltano__meltano-8215 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
feature: Support authenticating to Azure state backend without a connection string
### Feature scope
State backend
### Description
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python?tabs=managed-identity%2Croles-azure-portal%2Csign-in-azure-cli#sign-in-and-connect-your-app-code-to-azure-using-defaultazurecredential
See request in
* Slack: https://meltano.slack.com/archives/CMN8HELB0/p1692386533863739?thread_ts=1692366917.319929&cid=CMN8HELB0
* Linen: https://www.linen.dev/s/meltano/t/15516560/hey-i-m-a-bit-confused-how-to-set-the-state-id-when-using-me#d65c3bd0-9e62-48dd-a616-90d4d7246d33
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/core/state_store/azure.py`
Content:
```
1 """StateStoreManager for Azure Blob storage backend."""
2 from __future__ import annotations
3
4 from collections.abc import Iterator
5 from contextlib import contextmanager
6 from functools import cached_property
7
8 from meltano.core.error import MeltanoError
9 from meltano.core.state_store.filesystem import (
10 CloudStateStoreManager,
11 )
12
13 AZURE_INSTALLED = True
14
15 try:
16 from azure.storage.blob import BlobServiceClient
17 except ImportError:
18 AZURE_INSTALLED = False
19
20
21 class MissingAzureError(Exception):
22 """Raised when azure is required but no installed."""
23
24 def __init__(self):
25 """Initialize a MissingAzureError."""
26 super().__init__(
27 "azure required but not installed. Install meltano[azure] to use Azure Blob Storage as a state backend.", # noqa: E501
28 )
29
30
31 @contextmanager
32 def requires_azure():
33 """Raise MissingAzureError if azure is required but missing in context.
34
35 Raises:
36 MissingAzureError: if azure is not installed.
37
38 Yields:
39 None
40 """
41 if not AZURE_INSTALLED:
42 raise MissingAzureError
43 yield
44
45
46 class AZStorageStateStoreManager(CloudStateStoreManager):
47 """State backend for Azure Blob Storage."""
48
49 label: str = "Azure Blob Storage"
50
51 def __init__(
52 self,
53 connection_string: str | None = None,
54 prefix: str | None = None,
55 **kwargs,
56 ):
57 """Initialize the BaseFilesystemStateStoreManager.
58
59 Args:
60 connection_string: connection string to use in authenticating to Azure
61 prefix: the prefix to store state at
62 kwargs: additional keyword args to pass to parent
63
64 Raises:
65 MeltanoError: If container name is not included in the URI.
66 """
67 super().__init__(**kwargs)
68 self.connection_string = connection_string
69
70 if not self.parsed.hostname:
71 raise MeltanoError(
72 f"Azure state backend URI must include a container name: {self.uri}",
73 "Verify state backend URI. Must be in the form of azure://<container>/<prefix>", # noqa: E501
74 )
75
76 self.container_name = self.parsed.hostname
77 self.prefix = prefix or self.parsed.path
78
79 @staticmethod
80 def is_file_not_found_error(err: Exception) -> bool:
81 """Check if err is equivalent to file not being found.
82
83 Args:
84 err: the err to check
85
86 Returns:
87 True if error represents file not being found, else False
88 """
89 from azure.core.exceptions import ResourceNotFoundError
90
91 return (
92 isinstance(err, ResourceNotFoundError)
93 and "ErrorCode:BlobNotFound" in err.args[0]
94 )
95
96 @cached_property
97 def client(self) -> BlobServiceClient:
98 """Get an authenticated azure.storage.blob.BlobServiceClient.
99
100 Returns:
101 An authenticated azure.storage.blob.BlobServiceClient
102
103 Raises:
104 MeltanoError: If connection string is not provided.
105 """
106 with requires_azure():
107 if self.connection_string:
108 return BlobServiceClient.from_connection_string(self.connection_string)
109
110 raise MeltanoError(
111 "Azure state backend requires a connection string",
112 "Read https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string for more information.", # noqa: E501
113 )
114
115 def delete(self, file_path: str):
116 """Delete the file/blob at the given path.
117
118 Args:
119 file_path: the path to delete.
120
121 Raises:
122 Exception: if error not indicating file is not found is thrown
123 """
124 blob_client = self.client.get_blob_client(
125 container=self.container_name,
126 blob=file_path,
127 )
128 try:
129 blob_client.delete_blob()
130 except Exception as e:
131 if not self.is_file_not_found_error(e):
132 raise e
133
134 def list_all_files(self) -> Iterator[str]:
135 """List all files in the backend.
136
137 Yields:
138 The next file in the backend.
139 """
140 container_client = self.client.get_container_client(self.container_name)
141 for blob in container_client.list_blobs( # noqa: WPS526
142 name_starts_with=self.prefix.lstrip("/"),
143 ):
144 yield blob.name
145
146 def copy_file(self, src: str, dst: str) -> None:
147 """Copy a file from one location to another.
148
149 Args:
150 src: the source path
151 dst: the destination path
152 """
153 container_client = self.client.get_container_client(self.container_name)
154 src_blob_client = container_client.get_blob_client(src)
155 dst_blob_client = container_client.get_blob_client(dst)
156 dst_blob_client.start_copy_from_url(src_blob_client.url)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/meltano/core/state_store/azure.py b/src/meltano/core/state_store/azure.py
--- a/src/meltano/core/state_store/azure.py
+++ b/src/meltano/core/state_store/azure.py
@@ -52,6 +52,7 @@
self,
connection_string: str | None = None,
prefix: str | None = None,
+ storage_account_url: str | None = None,
**kwargs,
):
"""Initialize the BaseFilesystemStateStoreManager.
@@ -59,6 +60,7 @@
Args:
connection_string: connection string to use in authenticating to Azure
prefix: the prefix to store state at
+ storage_account_url: url of the azure stroga account
kwargs: additional keyword args to pass to parent
Raises:
@@ -66,6 +68,7 @@
"""
super().__init__(**kwargs)
self.connection_string = connection_string
+ self.storage_account_url = storage_account_url
if not self.parsed.hostname:
raise MeltanoError(
@@ -104,11 +107,21 @@
MeltanoError: If connection string is not provided.
"""
with requires_azure():
+ if self.storage_account_url:
+ from azure.identity import DefaultAzureCredential
+
+ default_credential = DefaultAzureCredential()
+ return BlobServiceClient(
+ self.storage_account_url,
+ credential=default_credential,
+ )
+
if self.connection_string:
return BlobServiceClient.from_connection_string(self.connection_string)
raise MeltanoError(
- "Azure state backend requires a connection string",
+ "Azure state backend requires a connection string "
+ "or an account URL to use host credentials",
"Read https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string for more information.", # noqa: E501
)
| {"golden_diff": "diff --git a/src/meltano/core/state_store/azure.py b/src/meltano/core/state_store/azure.py\n--- a/src/meltano/core/state_store/azure.py\n+++ b/src/meltano/core/state_store/azure.py\n@@ -52,6 +52,7 @@\n self,\n connection_string: str | None = None,\n prefix: str | None = None,\n+ storage_account_url: str | None = None,\n **kwargs,\n ):\n \"\"\"Initialize the BaseFilesystemStateStoreManager.\n@@ -59,6 +60,7 @@\n Args:\n connection_string: connection string to use in authenticating to Azure\n prefix: the prefix to store state at\n+ storage_account_url: url of the azure stroga account\n kwargs: additional keyword args to pass to parent\n \n Raises:\n@@ -66,6 +68,7 @@\n \"\"\"\n super().__init__(**kwargs)\n self.connection_string = connection_string\n+ self.storage_account_url = storage_account_url\n \n if not self.parsed.hostname:\n raise MeltanoError(\n@@ -104,11 +107,21 @@\n MeltanoError: If connection string is not provided.\n \"\"\"\n with requires_azure():\n+ if self.storage_account_url:\n+ from azure.identity import DefaultAzureCredential\n+\n+ default_credential = DefaultAzureCredential()\n+ return BlobServiceClient(\n+ self.storage_account_url,\n+ credential=default_credential,\n+ )\n+\n if self.connection_string:\n return BlobServiceClient.from_connection_string(self.connection_string)\n \n raise MeltanoError(\n- \"Azure state backend requires a connection string\",\n+ \"Azure state backend requires a connection string \"\n+ \"or an account URL to use host credentials\",\n \"Read https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string for more information.\", # noqa: E501\n )\n", "issue": "feature: Support authenticating to Azure state backend without a connection string\n### Feature scope\r\n\r\nState backend\r\n\r\n### Description\r\n\r\nhttps://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python?tabs=managed-identity%2Croles-azure-portal%2Csign-in-azure-cli#sign-in-and-connect-your-app-code-to-azure-using-defaultazurecredential\r\n\r\nSee request in\r\n\r\n* Slack: https://meltano.slack.com/archives/CMN8HELB0/p1692386533863739?thread_ts=1692366917.319929&cid=CMN8HELB0\r\n* Linen: https://www.linen.dev/s/meltano/t/15516560/hey-i-m-a-bit-confused-how-to-set-the-state-id-when-using-me#d65c3bd0-9e62-48dd-a616-90d4d7246d33\n", "before_files": [{"content": "\"\"\"StateStoreManager for Azure Blob storage backend.\"\"\"\nfrom __future__ import annotations\n\nfrom collections.abc import Iterator\nfrom contextlib import contextmanager\nfrom functools import cached_property\n\nfrom meltano.core.error import MeltanoError\nfrom meltano.core.state_store.filesystem import (\n CloudStateStoreManager,\n)\n\nAZURE_INSTALLED = True\n\ntry:\n from azure.storage.blob import BlobServiceClient\nexcept ImportError:\n AZURE_INSTALLED = False\n\n\nclass MissingAzureError(Exception):\n \"\"\"Raised when azure is required but no installed.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize a MissingAzureError.\"\"\"\n super().__init__(\n \"azure required but not installed. 
Install meltano[azure] to use Azure Blob Storage as a state backend.\", # noqa: E501\n )\n\n\n@contextmanager\ndef requires_azure():\n \"\"\"Raise MissingAzureError if azure is required but missing in context.\n\n Raises:\n MissingAzureError: if azure is not installed.\n\n Yields:\n None\n \"\"\"\n if not AZURE_INSTALLED:\n raise MissingAzureError\n yield\n\n\nclass AZStorageStateStoreManager(CloudStateStoreManager):\n \"\"\"State backend for Azure Blob Storage.\"\"\"\n\n label: str = \"Azure Blob Storage\"\n\n def __init__(\n self,\n connection_string: str | None = None,\n prefix: str | None = None,\n **kwargs,\n ):\n \"\"\"Initialize the BaseFilesystemStateStoreManager.\n\n Args:\n connection_string: connection string to use in authenticating to Azure\n prefix: the prefix to store state at\n kwargs: additional keyword args to pass to parent\n\n Raises:\n MeltanoError: If container name is not included in the URI.\n \"\"\"\n super().__init__(**kwargs)\n self.connection_string = connection_string\n\n if not self.parsed.hostname:\n raise MeltanoError(\n f\"Azure state backend URI must include a container name: {self.uri}\",\n \"Verify state backend URI. Must be in the form of azure://<container>/<prefix>\", # noqa: E501\n )\n\n self.container_name = self.parsed.hostname\n self.prefix = prefix or self.parsed.path\n\n @staticmethod\n def is_file_not_found_error(err: Exception) -> bool:\n \"\"\"Check if err is equivalent to file not being found.\n\n Args:\n err: the err to check\n\n Returns:\n True if error represents file not being found, else False\n \"\"\"\n from azure.core.exceptions import ResourceNotFoundError\n\n return (\n isinstance(err, ResourceNotFoundError)\n and \"ErrorCode:BlobNotFound\" in err.args[0]\n )\n\n @cached_property\n def client(self) -> BlobServiceClient:\n \"\"\"Get an authenticated azure.storage.blob.BlobServiceClient.\n\n Returns:\n An authenticated azure.storage.blob.BlobServiceClient\n\n Raises:\n MeltanoError: If connection string is not provided.\n \"\"\"\n with requires_azure():\n if self.connection_string:\n return BlobServiceClient.from_connection_string(self.connection_string)\n\n raise MeltanoError(\n \"Azure state backend requires a connection string\",\n \"Read https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string for more information.\", # noqa: E501\n )\n\n def delete(self, file_path: str):\n \"\"\"Delete the file/blob at the given path.\n\n Args:\n file_path: the path to delete.\n\n Raises:\n Exception: if error not indicating file is not found is thrown\n \"\"\"\n blob_client = self.client.get_blob_client(\n container=self.container_name,\n blob=file_path,\n )\n try:\n blob_client.delete_blob()\n except Exception as e:\n if not self.is_file_not_found_error(e):\n raise e\n\n def list_all_files(self) -> Iterator[str]:\n \"\"\"List all files in the backend.\n\n Yields:\n The next file in the backend.\n \"\"\"\n container_client = self.client.get_container_client(self.container_name)\n for blob in container_client.list_blobs( # noqa: WPS526\n name_starts_with=self.prefix.lstrip(\"/\"),\n ):\n yield blob.name\n\n def copy_file(self, src: str, dst: str) -> None:\n \"\"\"Copy a file from one location to another.\n\n Args:\n src: the source path\n dst: the destination path\n \"\"\"\n container_client = self.client.get_container_client(self.container_name)\n src_blob_client = container_client.get_blob_client(src)\n dst_blob_client = container_client.get_blob_client(dst)\n 
dst_blob_client.start_copy_from_url(src_blob_client.url)\n", "path": "src/meltano/core/state_store/azure.py"}], "after_files": [{"content": "\"\"\"StateStoreManager for Azure Blob storage backend.\"\"\"\nfrom __future__ import annotations\n\nfrom collections.abc import Iterator\nfrom contextlib import contextmanager\nfrom functools import cached_property\n\nfrom meltano.core.error import MeltanoError\nfrom meltano.core.state_store.filesystem import (\n CloudStateStoreManager,\n)\n\nAZURE_INSTALLED = True\n\ntry:\n from azure.storage.blob import BlobServiceClient\nexcept ImportError:\n AZURE_INSTALLED = False\n\n\nclass MissingAzureError(Exception):\n \"\"\"Raised when azure is required but no installed.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize a MissingAzureError.\"\"\"\n super().__init__(\n \"azure required but not installed. Install meltano[azure] to use Azure Blob Storage as a state backend.\", # noqa: E501\n )\n\n\n@contextmanager\ndef requires_azure():\n \"\"\"Raise MissingAzureError if azure is required but missing in context.\n\n Raises:\n MissingAzureError: if azure is not installed.\n\n Yields:\n None\n \"\"\"\n if not AZURE_INSTALLED:\n raise MissingAzureError\n yield\n\n\nclass AZStorageStateStoreManager(CloudStateStoreManager):\n \"\"\"State backend for Azure Blob Storage.\"\"\"\n\n label: str = \"Azure Blob Storage\"\n\n def __init__(\n self,\n connection_string: str | None = None,\n prefix: str | None = None,\n storage_account_url: str | None = None,\n **kwargs,\n ):\n \"\"\"Initialize the BaseFilesystemStateStoreManager.\n\n Args:\n connection_string: connection string to use in authenticating to Azure\n prefix: the prefix to store state at\n storage_account_url: url of the azure stroga account\n kwargs: additional keyword args to pass to parent\n\n Raises:\n MeltanoError: If container name is not included in the URI.\n \"\"\"\n super().__init__(**kwargs)\n self.connection_string = connection_string\n self.storage_account_url = storage_account_url\n\n if not self.parsed.hostname:\n raise MeltanoError(\n f\"Azure state backend URI must include a container name: {self.uri}\",\n \"Verify state backend URI. 
Must be in the form of azure://<container>/<prefix>\", # noqa: E501\n )\n\n self.container_name = self.parsed.hostname\n self.prefix = prefix or self.parsed.path\n\n @staticmethod\n def is_file_not_found_error(err: Exception) -> bool:\n \"\"\"Check if err is equivalent to file not being found.\n\n Args:\n err: the err to check\n\n Returns:\n True if error represents file not being found, else False\n \"\"\"\n from azure.core.exceptions import ResourceNotFoundError\n\n return (\n isinstance(err, ResourceNotFoundError)\n and \"ErrorCode:BlobNotFound\" in err.args[0]\n )\n\n @cached_property\n def client(self) -> BlobServiceClient:\n \"\"\"Get an authenticated azure.storage.blob.BlobServiceClient.\n\n Returns:\n An authenticated azure.storage.blob.BlobServiceClient\n\n Raises:\n MeltanoError: If connection string is not provided.\n \"\"\"\n with requires_azure():\n if self.storage_account_url:\n from azure.identity import DefaultAzureCredential\n\n default_credential = DefaultAzureCredential()\n return BlobServiceClient(\n self.storage_account_url,\n credential=default_credential,\n )\n\n if self.connection_string:\n return BlobServiceClient.from_connection_string(self.connection_string)\n\n raise MeltanoError(\n \"Azure state backend requires a connection string \"\n \"or an account URL to use host credentials\",\n \"Read https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string for more information.\", # noqa: E501\n )\n\n def delete(self, file_path: str):\n \"\"\"Delete the file/blob at the given path.\n\n Args:\n file_path: the path to delete.\n\n Raises:\n Exception: if error not indicating file is not found is thrown\n \"\"\"\n blob_client = self.client.get_blob_client(\n container=self.container_name,\n blob=file_path,\n )\n try:\n blob_client.delete_blob()\n except Exception as e:\n if not self.is_file_not_found_error(e):\n raise e\n\n def list_all_files(self) -> Iterator[str]:\n \"\"\"List all files in the backend.\n\n Yields:\n The next file in the backend.\n \"\"\"\n container_client = self.client.get_container_client(self.container_name)\n for blob in container_client.list_blobs( # noqa: WPS526\n name_starts_with=self.prefix.lstrip(\"/\"),\n ):\n yield blob.name\n\n def copy_file(self, src: str, dst: str) -> None:\n \"\"\"Copy a file from one location to another.\n\n Args:\n src: the source path\n dst: the destination path\n \"\"\"\n container_client = self.client.get_container_client(self.container_name)\n src_blob_client = container_client.get_blob_client(src)\n dst_blob_client = container_client.get_blob_client(dst)\n dst_blob_client.start_copy_from_url(src_blob_client.url)\n", "path": "src/meltano/core/state_store/azure.py"}]} | 1,873 | 424 |
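For context on the Meltano record above: the accepted patch lets the Azure state backend authenticate with host credentials when a `storage_account_url` is configured, falling back to the connection string otherwise. A minimal sketch of that selection logic follows; the helper name and error message are illustrative and not part of Meltano's codebase, and it assumes `azure-storage-blob` and `azure-identity` are installed.

```python
# Sketch of the credential-selection pattern introduced by the patch above.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient


def build_blob_service_client(storage_account_url=None, connection_string=None):
    if storage_account_url:
        # Ambient host credentials: managed identity, az CLI login, env vars, etc.
        return BlobServiceClient(storage_account_url, credential=DefaultAzureCredential())
    if connection_string:
        return BlobServiceClient.from_connection_string(connection_string)
    raise ValueError("Provide a storage account URL or a connection string.")
```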
gh_patches_debug_34580 | rasdani/github-patches | git_diff | DDMAL__CantusDB-118 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Source create page layout could be improved
At `source-create/`, the form is not taking up all the horizontal space. This should be an easy fix by changing the column width in Bootstrap.
It would be better if we keep the same layout as the old Cantus. For example, the first three fields should be on the same row. This will make the form look more compact.
Plus, the width of the fields should be adjusted according to the expected content. For example, the `summary` field should be wider than "date".
The look is not the most important thing, but in this case, a little bit more polishing can go a long way.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/models/source.py`
Content:
```
1 from django.db import models
2 from main_app.models import BaseModel
3 from django.contrib.auth import get_user_model
4
5
6 class Source(BaseModel):
7 cursus_choices = [("Monastic", "Monastic"), ("Secular", "Secular")]
8 source_status_choices = [
9 (
10 "Editing process (not all the fields have been proofread)",
11 "Editing process (not all the fields have been proofread)",
12 ),
13 ("Published / Complete", "Published / Complete"),
14 ("Published / Proofread pending", "Published / Proofread pending"),
15 ("Unpublished / Editing process", "Unpublished / Editing process"),
16 ("Unpublished / Indexing process", "Unpublished / Indexing process"),
17 ("Unpublished / Proofread pending", "Unpublished / Proofread pending"),
18 ("Unpublished / Proofreading process", "Unpublished / Proofreading process"),
19 ]
20
21 # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list
22 public = models.BooleanField(blank=True, null=True)
23 # sources with visible=False can be accessed by typing in the url, but do not appear in source list
24 visible = models.BooleanField(blank=True, null=True)
25 title = models.CharField(
26 max_length=255,
27 help_text="Full Manuscript Identification (City, Archive, Shelf-mark)",
28 )
29 # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark
30 # it is a human-readable ID for a source
31 siglum = models.CharField(max_length=63, null=True, blank=True)
32 # the RISM siglum uniquely identifies a library or holding institution
33 rism_siglum = models.ForeignKey(
34 "RismSiglum", on_delete=models.PROTECT, null=True, blank=True,
35 )
36 provenance = models.ForeignKey(
37 "Provenance",
38 on_delete=models.PROTECT,
39 help_text="If the origin is unknown, select a location where the source was "
40 "used later in its lifetime and provide details in the "
41 '"Provenance notes" field.',
42 null=True,
43 blank=True,
44 )
45 provenance_notes = models.TextField(
46 blank=True,
47 null=True,
48 help_text="More exact indication of the provenance (if necessary)",
49 )
50 full_source = models.BooleanField(blank=True, null=True)
51 date = models.CharField(
52 blank=True,
53 null=True,
54 max_length=63,
55 help_text='Date of the manuscript (e.g. "1200s", "1300-1350", etc.)',
56 )
57 century = models.ManyToManyField("Century", related_name="sources")
58 notation = models.ManyToManyField("Notation", related_name="sources")
59 cursus = models.CharField(
60 blank=True, null=True, choices=cursus_choices, max_length=63
61 )
62 # TODO: Fill this field up with JSON info when I have access to the Users
63 current_editors = models.ManyToManyField(get_user_model(), related_name="sources_edited")
64 inventoried_by = models.ManyToManyField(
65 "Indexer", related_name="sources_inventoried"
66 )
67 full_text_entered_by = models.ManyToManyField(
68 "Indexer", related_name="entered_full_text_for_sources"
69 )
70 melodies_entered_by = models.ManyToManyField(
71 "Indexer", related_name="entered_melody_for_sources"
72 )
73 proofreaders = models.ManyToManyField("Indexer", related_name="proofread_sources")
74 other_editors = models.ManyToManyField("Indexer", related_name="edited_sources")
75 segment = models.ForeignKey(
76 "Segment", on_delete=models.PROTECT, blank=True, null=True
77 )
78 source_status = models.CharField(blank=True, null=True, max_length=255)
79 complete_inventory = models.BooleanField(blank=True, null=True)
80 summary = models.TextField(blank=True, null=True)
81 liturgical_occasions = models.TextField(blank=True, null=True)
82 description = models.TextField(blank=True, null=True)
83 selected_bibliography = models.TextField(blank=True, null=True)
84 image_link = models.URLField(blank=True, null=True)
85 indexing_notes = models.TextField(blank=True, null=True)
86 indexing_date = models.TextField(blank=True, null=True)
87 json_info = models.JSONField(blank=True, null=True)
88 fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)
89 dact_id = models.CharField(max_length=15, blank=True, null=True)
90
91 def number_of_chants(self) -> int:
92 """Returns the number of Chants and Sequences in this Source."""
93 return self.chant_set.count() + self.sequence_set.count()
94
95 def number_of_melodies(self) -> int:
96 """Returns the number of Chants in this Source that have melodies."""
97 return self.chant_set.filter(volpiano__isnull=False).count()
98
99 def __str__(self):
100 return self.title
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py
--- a/django/cantusdb_project/main_app/models/source.py
+++ b/django/cantusdb_project/main_app/models/source.py
@@ -28,7 +28,12 @@
)
# the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark
# it is a human-readable ID for a source
- siglum = models.CharField(max_length=63, null=True, blank=True)
+ siglum = models.CharField(
+ max_length=63,
+ null=True,
+ blank=True,
+ help_text="RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).",
+ )
# the RISM siglum uniquely identifies a library or holding institution
rism_siglum = models.ForeignKey(
"RismSiglum", on_delete=models.PROTECT, null=True, blank=True,
@@ -61,6 +66,9 @@
)
# TODO: Fill this field up with JSON info when I have access to the Users
current_editors = models.ManyToManyField(get_user_model(), related_name="sources_edited")
+ # created_by = models.ForeignKey(
+ # get_user_model(), related_name="sources_created", on_delete=models.PROTECT, blank=True, null=True
+ # )
inventoried_by = models.ManyToManyField(
"Indexer", related_name="sources_inventoried"
)
@@ -81,7 +89,11 @@
liturgical_occasions = models.TextField(blank=True, null=True)
description = models.TextField(blank=True, null=True)
selected_bibliography = models.TextField(blank=True, null=True)
- image_link = models.URLField(blank=True, null=True)
+ image_link = models.URLField(
+ blank=True,
+ null=True,
+ help_text='HTTP link to the image gallery of the source.',
+ )
indexing_notes = models.TextField(blank=True, null=True)
indexing_date = models.TextField(blank=True, null=True)
json_info = models.JSONField(blank=True, null=True)
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py\n--- a/django/cantusdb_project/main_app/models/source.py\n+++ b/django/cantusdb_project/main_app/models/source.py\n@@ -28,7 +28,12 @@\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n- siglum = models.CharField(max_length=63, null=True, blank=True)\n+ siglum = models.CharField(\n+ max_length=63, \n+ null=True, \n+ blank=True,\n+ help_text=\"RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).\",\n+ )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n@@ -61,6 +66,9 @@\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_edited\")\n+ # created_by = models.ForeignKey(\n+ # get_user_model(), related_name=\"sources_created\", on_delete=models.PROTECT, blank=True, null=True\n+ # )\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n@@ -81,7 +89,11 @@\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n- image_link = models.URLField(blank=True, null=True)\n+ image_link = models.URLField(\n+ blank=True, \n+ null=True,\n+ help_text='HTTP link to the image gallery of the source.',\n+ )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n", "issue": "Source create page layout could be improved\nAt `source-create/`, the form is not taking up all the horizontal space. This should be an easy fix by changing the column width in Bootstrap. \r\nIt would be better if we keep the same layout as the old Cantus. For example, the first three fields should be on the same row. This will make the form look more compact. \r\nPlus, the width of the fields should be adjusted according to the expected content. For example, the `summary` field should be wider than \"date\". \r\nThe look is not the most important thing, but in this case, a little bit more polishing can go a long way. 
\n", "before_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n ]\n\n # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list\n public = models.BooleanField(blank=True, null=True)\n # sources with visible=False can be accessed by typing in the url, but do not appear in source list\n visible = models.BooleanField(blank=True, null=True)\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(max_length=63, null=True, blank=True)\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. 
\"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\")\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\")\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_edited\")\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n full_text_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_full_text_for_sources\"\n )\n melodies_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_melody_for_sources\"\n )\n proofreaders = models.ManyToManyField(\"Indexer\", related_name=\"proofread_sources\")\n other_editors = models.ManyToManyField(\"Indexer\", related_name=\"edited_sources\")\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(blank=True, null=True)\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n def number_of_chants(self) -> int:\n \"\"\"Returns the number of Chants and Sequences in this Source.\"\"\"\n return self.chant_set.count() + self.sequence_set.count()\n\n def number_of_melodies(self) -> int:\n \"\"\"Returns the number of Chants in this Source that have melodies.\"\"\"\n return self.chant_set.filter(volpiano__isnull=False).count()\n\n def __str__(self):\n return self.title\n ", "path": "django/cantusdb_project/main_app/models/source.py"}], "after_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n ]\n\n # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list\n public = models.BooleanField(blank=True, null=True)\n # sources with visible=False can be accessed by typing in the url, but do not appear in source list\n visible = models.BooleanField(blank=True, null=True)\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, 
Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(\n max_length=63, \n null=True, \n blank=True,\n help_text=\"RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).\",\n )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. \"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\")\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\")\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_edited\")\n # created_by = models.ForeignKey(\n # get_user_model(), related_name=\"sources_created\", on_delete=models.PROTECT, blank=True, null=True\n # )\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n full_text_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_full_text_for_sources\"\n )\n melodies_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_melody_for_sources\"\n )\n proofreaders = models.ManyToManyField(\"Indexer\", related_name=\"proofread_sources\")\n other_editors = models.ManyToManyField(\"Indexer\", related_name=\"edited_sources\")\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(\n blank=True, \n null=True,\n help_text='HTTP link to the image gallery of the source.',\n )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n def number_of_chants(self) -> int:\n \"\"\"Returns the number of Chants and Sequences in this Source.\"\"\"\n return self.chant_set.count() + self.sequence_set.count()\n\n def number_of_melodies(self) -> int:\n \"\"\"Returns the number of Chants in this Source that have melodies.\"\"\"\n return self.chant_set.filter(volpiano__isnull=False).count()\n\n def __str__(self):\n return self.title\n ", "path": 
"django/cantusdb_project/main_app/models/source.py"}]} | 1,656 | 496 |
gh_patches_debug_30399 | rasdani/github-patches | git_diff | aws-powertools__powertools-lambda-python-504 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug: aws scalar type for AWSDateTime should include milliseconds
**What were you trying to accomplish?**
## Expected Behavior
AWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.
## Current Behavior
AWSDateTime is not including the millisecond part `sss`.
## Possible Solution
Generate timestamps to include the milliseconds
## Steps to Reproduce (for bugs)
```python3
> print(aws_datetime())
2021-07-02T17:09:47Z
````
## Environment
* **Powertools version used**: 1.17.0
* **Packaging format (Layers, PyPi)**: PyPi
* **AWS Lambda function runtime:** Python 3.8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py`
Content:
```
1 import datetime
2 import time
3 import uuid
4
5
6 def _formatted_time(now: datetime.date, fmt: str, timezone_offset: int) -> str:
7 """String formatted time with optional timezone offset
8
9 Parameters
10 ----------
11 now : datetime.date
12 Current datetime with zero timezone offset
13 fmt : str
14 Data format before adding timezone offset
15 timezone_offset : int
16 Timezone offset in hours, defaults to 0
17 Returns
18 -------
19 str
20 Returns string formatted time with optional timezone offset
21 """
22 if timezone_offset == 0:
23 return now.strftime(fmt + "Z")
24
25 now = now + datetime.timedelta(hours=timezone_offset)
26 fmt += "+" if timezone_offset > 0 else "-"
27 fmt += str(abs(timezone_offset)).zfill(2)
28 fmt += ":00:00"
29
30 return now.strftime(fmt)
31
32
33 def make_id() -> str:
34 """ID - A unique identifier for an object. This scalar is serialized like a String but isn't meant to be
35 human-readable."""
36 return str(uuid.uuid4())
37
38
39 def aws_date(timezone_offset: int = 0) -> str:
40 """AWSDate - An extended ISO 8601 date string in the format YYYY-MM-DD.
41
42 Parameters
43 ----------
44 timezone_offset : int
45 Timezone offset, defaults to 0
46
47 Returns
48 -------
49 str
50 Returns current time as AWSDate scalar string with optional timezone offset
51 """
52 return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%d", timezone_offset)
53
54
55 def aws_time(timezone_offset: int = 0) -> str:
56 """AWSTime - An extended ISO 8601 time string in the format hh:mm:ss.sss.
57
58 Parameters
59 ----------
60 timezone_offset : int
61 Timezone offset, defaults to 0
62
63 Returns
64 -------
65 str
66 Returns current time as AWSTime scalar string with optional timezone offset
67 """
68 return _formatted_time(datetime.datetime.utcnow(), "%H:%M:%S", timezone_offset)
69
70
71 def aws_datetime(timezone_offset: int = 0) -> str:
72 """AWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.
73
74 Parameters
75 ----------
76 timezone_offset : int
77 Timezone offset, defaults to 0
78
79 Returns
80 -------
81 str
82 Returns current time as AWSDateTime scalar string with optional timezone offset
83 """
84 return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%dT%H:%M:%S", timezone_offset)
85
86
87 def aws_timestamp() -> int:
88 """AWSTimestamp - An integer value representing the number of seconds before or after 1970-01-01-T00:00Z."""
89 return int(time.time())
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py
--- a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py
+++ b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py
@@ -19,15 +19,21 @@
str
Returns string formatted time with optional timezone offset
"""
- if timezone_offset == 0:
- return now.strftime(fmt + "Z")
+ if timezone_offset != 0:
+ now = now + datetime.timedelta(hours=timezone_offset)
+
+ datetime_str = now.strftime(fmt)
+ if fmt.endswith(".%f"):
+ datetime_str = datetime_str[:-3]
- now = now + datetime.timedelta(hours=timezone_offset)
- fmt += "+" if timezone_offset > 0 else "-"
- fmt += str(abs(timezone_offset)).zfill(2)
- fmt += ":00:00"
+ if timezone_offset == 0:
+ postfix = "Z"
+ else:
+ postfix = "+" if timezone_offset > 0 else "-"
+ postfix += str(abs(timezone_offset)).zfill(2)
+ postfix += ":00:00"
- return now.strftime(fmt)
+ return datetime_str + postfix
def make_id() -> str:
@@ -65,7 +71,7 @@
str
Returns current time as AWSTime scalar string with optional timezone offset
"""
- return _formatted_time(datetime.datetime.utcnow(), "%H:%M:%S", timezone_offset)
+ return _formatted_time(datetime.datetime.utcnow(), "%H:%M:%S.%f", timezone_offset)
def aws_datetime(timezone_offset: int = 0) -> str:
@@ -81,7 +87,7 @@
str
Returns current time as AWSDateTime scalar string with optional timezone offset
"""
- return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%dT%H:%M:%S", timezone_offset)
+ return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%dT%H:%M:%S.%f", timezone_offset)
def aws_timestamp() -> int:
| {"golden_diff": "diff --git a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py\n--- a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py\n+++ b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py\n@@ -19,15 +19,21 @@\n str\n Returns string formatted time with optional timezone offset\n \"\"\"\n- if timezone_offset == 0:\n- return now.strftime(fmt + \"Z\")\n+ if timezone_offset != 0:\n+ now = now + datetime.timedelta(hours=timezone_offset)\n+\n+ datetime_str = now.strftime(fmt)\n+ if fmt.endswith(\".%f\"):\n+ datetime_str = datetime_str[:-3]\n \n- now = now + datetime.timedelta(hours=timezone_offset)\n- fmt += \"+\" if timezone_offset > 0 else \"-\"\n- fmt += str(abs(timezone_offset)).zfill(2)\n- fmt += \":00:00\"\n+ if timezone_offset == 0:\n+ postfix = \"Z\"\n+ else:\n+ postfix = \"+\" if timezone_offset > 0 else \"-\"\n+ postfix += str(abs(timezone_offset)).zfill(2)\n+ postfix += \":00:00\"\n \n- return now.strftime(fmt)\n+ return datetime_str + postfix\n \n \n def make_id() -> str:\n@@ -65,7 +71,7 @@\n str\n Returns current time as AWSTime scalar string with optional timezone offset\n \"\"\"\n- return _formatted_time(datetime.datetime.utcnow(), \"%H:%M:%S\", timezone_offset)\n+ return _formatted_time(datetime.datetime.utcnow(), \"%H:%M:%S.%f\", timezone_offset)\n \n \n def aws_datetime(timezone_offset: int = 0) -> str:\n@@ -81,7 +87,7 @@\n str\n Returns current time as AWSDateTime scalar string with optional timezone offset\n \"\"\"\n- return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%dT%H:%M:%S\", timezone_offset)\n+ return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%dT%H:%M:%S.%f\", timezone_offset)\n \n \n def aws_timestamp() -> int:\n", "issue": "bug: aws scalar type for AWSDateTime should include milliseconds\n**What were you trying to accomplish?**\r\n\r\n## Expected Behavior\r\n\r\nAWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.\r\n\r\n## Current Behavior\r\n\r\nAWSDateTime is not including the millisecond part `sss`.\r\n\r\n## Possible Solution\r\n\r\nGenerate timestamps to include the milliseconds\r\n\r\n## Steps to Reproduce (for bugs)\r\n\r\n```python3\r\n> print(aws_datetime())\r\n2021-07-02T17:09:47Z\r\n````\r\n\r\n## Environment\r\n\r\n* **Powertools version used**: 1.17.0\r\n* **Packaging format (Layers, PyPi)**: PyPi\r\n* **AWS Lambda function runtime:** Python 3.8\r\n\n", "before_files": [{"content": "import datetime\nimport time\nimport uuid\n\n\ndef _formatted_time(now: datetime.date, fmt: str, timezone_offset: int) -> str:\n \"\"\"String formatted time with optional timezone offset\n\n Parameters\n ----------\n now : datetime.date\n Current datetime with zero timezone offset\n fmt : str\n Data format before adding timezone offset\n timezone_offset : int\n Timezone offset in hours, defaults to 0\n Returns\n -------\n str\n Returns string formatted time with optional timezone offset\n \"\"\"\n if timezone_offset == 0:\n return now.strftime(fmt + \"Z\")\n\n now = now + datetime.timedelta(hours=timezone_offset)\n fmt += \"+\" if timezone_offset > 0 else \"-\"\n fmt += str(abs(timezone_offset)).zfill(2)\n fmt += \":00:00\"\n\n return now.strftime(fmt)\n\n\ndef make_id() -> str:\n \"\"\"ID - A unique identifier for an object. 
This scalar is serialized like a String but isn't meant to be\n human-readable.\"\"\"\n return str(uuid.uuid4())\n\n\ndef aws_date(timezone_offset: int = 0) -> str:\n \"\"\"AWSDate - An extended ISO 8601 date string in the format YYYY-MM-DD.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSDate scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%d\", timezone_offset)\n\n\ndef aws_time(timezone_offset: int = 0) -> str:\n \"\"\"AWSTime - An extended ISO 8601 time string in the format hh:mm:ss.sss.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSTime scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%H:%M:%S\", timezone_offset)\n\n\ndef aws_datetime(timezone_offset: int = 0) -> str:\n \"\"\"AWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSDateTime scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%dT%H:%M:%S\", timezone_offset)\n\n\ndef aws_timestamp() -> int:\n \"\"\"AWSTimestamp - An integer value representing the number of seconds before or after 1970-01-01-T00:00Z.\"\"\"\n return int(time.time())\n", "path": "aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py"}], "after_files": [{"content": "import datetime\nimport time\nimport uuid\n\n\ndef _formatted_time(now: datetime.date, fmt: str, timezone_offset: int) -> str:\n \"\"\"String formatted time with optional timezone offset\n\n Parameters\n ----------\n now : datetime.date\n Current datetime with zero timezone offset\n fmt : str\n Data format before adding timezone offset\n timezone_offset : int\n Timezone offset in hours, defaults to 0\n Returns\n -------\n str\n Returns string formatted time with optional timezone offset\n \"\"\"\n if timezone_offset != 0:\n now = now + datetime.timedelta(hours=timezone_offset)\n\n datetime_str = now.strftime(fmt)\n if fmt.endswith(\".%f\"):\n datetime_str = datetime_str[:-3]\n\n if timezone_offset == 0:\n postfix = \"Z\"\n else:\n postfix = \"+\" if timezone_offset > 0 else \"-\"\n postfix += str(abs(timezone_offset)).zfill(2)\n postfix += \":00:00\"\n\n return datetime_str + postfix\n\n\ndef make_id() -> str:\n \"\"\"ID - A unique identifier for an object. 
This scalar is serialized like a String but isn't meant to be\n human-readable.\"\"\"\n return str(uuid.uuid4())\n\n\ndef aws_date(timezone_offset: int = 0) -> str:\n \"\"\"AWSDate - An extended ISO 8601 date string in the format YYYY-MM-DD.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSDate scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%d\", timezone_offset)\n\n\ndef aws_time(timezone_offset: int = 0) -> str:\n \"\"\"AWSTime - An extended ISO 8601 time string in the format hh:mm:ss.sss.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSTime scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%H:%M:%S.%f\", timezone_offset)\n\n\ndef aws_datetime(timezone_offset: int = 0) -> str:\n \"\"\"AWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSDateTime scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%dT%H:%M:%S.%f\", timezone_offset)\n\n\ndef aws_timestamp() -> int:\n \"\"\"AWSTimestamp - An integer value representing the number of seconds before or after 1970-01-01-T00:00Z.\"\"\"\n return int(time.time())\n", "path": "aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py"}]} | 1,228 | 488 |
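For context on the Powertools record above: the fix switches the format strings to `%f` (microseconds) and trims the last three digits so AWSTime and AWSDateTime carry millisecond precision. A standalone check of that trick, using a hypothetical fixed timestamp purely for illustration:

```python
# Illustrates the %f-then-truncate approach used in the patch above.
import datetime

now = datetime.datetime(2021, 7, 2, 17, 9, 47, 123456)  # hypothetical instant
stamp = now.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"  # %f gives 6 digits; keep 3
assert stamp == "2021-07-02T17:09:47.123Z"
```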
gh_patches_debug_32440 | rasdani/github-patches | git_diff | OpenMined__PySyft-3591 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Calling syft.grid.register() twice should throw an informative error
**Is your feature request related to a problem? Please describe.**
If you call syft.grid.register() twice in the same python runtime it should raise an error describing that you can't do this - that they shoudl restart the python runtime and try again.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `syft/grid/__init__.py`
Content:
```
1 from .network import Network
2 import sys
3 import uuid
4
5 DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
6
7
8 def register(**kwargs):
9 """ Add this process as a new peer registering it in the grid network.
10
11 Returns:
12 peer: Peer Network instance.
13 """
14 try:
15 if not kwargs:
16 args = {"max_size": None, "timeout": 444, "url": DEFAULT_NETWORK_URL}
17 else:
18 args = kwargs
19
20 peer_id = str(uuid.uuid4())
21 sys.stdout.write(
22 "Connecting to OpenGrid (" + "\033[94m" + args["url"] + "\033[0m" + ") ... "
23 )
24
25 peer = Network(peer_id, **args)
26
27 sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
28 sys.stdout.write("Peer ID: " + peer_id + "\n")
29
30 sys.stdout.write(
31 "\033[93m" + "DISCLAIMER" + "\033[0m"
32 ":"
33 + "\033[1m"
34 + " OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\n"
35 + "\033[0m"
36 )
37
38 sys.stdout.write("Where to get help: \n")
39 sys.stdout.write(
40 " - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\n"
41 )
42 sys.stdout.write(
43 " - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\n"
44 )
45 sys.stdout.write(
46 " - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
47 )
48 peer.start()
49 return peer
50 except Exception as e:
51 sys.stdout.write("\033[91m" + "FAIL" + "\033[0m" + "\n")
52 sys.stdout.write("You were not able to register your node.\n")
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py
--- a/syft/grid/__init__.py
+++ b/syft/grid/__init__.py
@@ -4,13 +4,25 @@
DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
+_registered_peer = None
+
def register(**kwargs):
""" Add this process as a new peer registering it in the grid network.
-
+
Returns:
peer: Peer Network instance.
"""
+ global _registered_peer
+
+ if isinstance(_registered_peer, Network):
+ sys.stdout.write(
+ "\033[93m" + "WARNING" + "\033[0m"
+ ":" + f" You are already a registered peer!\n{_registered_peer}\n"
+ )
+
+ return _registered_peer
+
try:
if not kwargs:
args = {"max_size": None, "timeout": 444, "url": DEFAULT_NETWORK_URL}
@@ -22,7 +34,7 @@
"Connecting to OpenGrid (" + "\033[94m" + args["url"] + "\033[0m" + ") ... "
)
- peer = Network(peer_id, **args)
+ _registered_peer = Network(peer_id, **args)
sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
sys.stdout.write("Peer ID: " + peer_id + "\n")
@@ -45,8 +57,11 @@
sys.stdout.write(
" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
)
- peer.start()
- return peer
+
+ _registered_peer.start()
+
+ return _registered_peer
+
except Exception as e:
sys.stdout.write("\033[91m" + "FAIL" + "\033[0m" + "\n")
sys.stdout.write("You were not able to register your node.\n")
| {"golden_diff": "diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py\n--- a/syft/grid/__init__.py\n+++ b/syft/grid/__init__.py\n@@ -4,13 +4,25 @@\n \n DEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n \n+_registered_peer = None\n+\n \n def register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n- \n+\n Returns:\n peer: Peer Network instance.\n \"\"\"\n+ global _registered_peer\n+\n+ if isinstance(_registered_peer, Network):\n+ sys.stdout.write(\n+ \"\\033[93m\" + \"WARNING\" + \"\\033[0m\"\n+ \":\" + f\" You are already a registered peer!\\n{_registered_peer}\\n\"\n+ )\n+\n+ return _registered_peer\n+\n try:\n if not kwargs:\n args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n@@ -22,7 +34,7 @@\n \"Connecting to OpenGrid (\" + \"\\033[94m\" + args[\"url\"] + \"\\033[0m\" + \") ... \"\n )\n \n- peer = Network(peer_id, **args)\n+ _registered_peer = Network(peer_id, **args)\n \n sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n@@ -45,8 +57,11 @@\n sys.stdout.write(\n \" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n )\n- peer.start()\n- return peer\n+\n+ _registered_peer.start()\n+\n+ return _registered_peer\n+\n except Exception as e:\n sys.stdout.write(\"\\033[91m\" + \"FAIL\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"You were not able to register your node.\\n\")\n", "issue": "Calling syft.grid.register() twice should throw an informative error\n**Is your feature request related to a problem? Please describe.**\r\nIf you call syft.grid.register() twice in the same python runtime it should raise an error describing that you can't do this - that they shoudl restart the python runtime and try again.\n", "before_files": [{"content": "from .network import Network\nimport sys\nimport uuid\n\nDEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n\n\ndef register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n \n Returns:\n peer: Peer Network instance.\n \"\"\"\n try:\n if not kwargs:\n args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n else:\n args = kwargs\n\n peer_id = str(uuid.uuid4())\n sys.stdout.write(\n \"Connecting to OpenGrid (\" + \"\\033[94m\" + args[\"url\"] + \"\\033[0m\" + \") ... \"\n )\n\n peer = Network(peer_id, **args)\n\n sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n\n sys.stdout.write(\n \"\\033[93m\" + \"DISCLAIMER\" + \"\\033[0m\"\n \":\"\n + \"\\033[1m\"\n + \" OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\\n\"\n + \"\\033[0m\"\n )\n\n sys.stdout.write(\"Where to get help: \\n\")\n sys.stdout.write(\n \" - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\\n\"\n )\n sys.stdout.write(\n \" - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\\n\"\n )\n sys.stdout.write(\n \" - Want to join in our development team? 
Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n )\n peer.start()\n return peer\n except Exception as e:\n sys.stdout.write(\"\\033[91m\" + \"FAIL\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"You were not able to register your node.\\n\")\n", "path": "syft/grid/__init__.py"}], "after_files": [{"content": "from .network import Network\nimport sys\nimport uuid\n\nDEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n\n_registered_peer = None\n\n\ndef register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n\n Returns:\n peer: Peer Network instance.\n \"\"\"\n global _registered_peer\n\n if isinstance(_registered_peer, Network):\n sys.stdout.write(\n \"\\033[93m\" + \"WARNING\" + \"\\033[0m\"\n \":\" + f\" You are already a registered peer!\\n{_registered_peer}\\n\"\n )\n\n return _registered_peer\n\n try:\n if not kwargs:\n args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n else:\n args = kwargs\n\n peer_id = str(uuid.uuid4())\n sys.stdout.write(\n \"Connecting to OpenGrid (\" + \"\\033[94m\" + args[\"url\"] + \"\\033[0m\" + \") ... \"\n )\n\n _registered_peer = Network(peer_id, **args)\n\n sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n\n sys.stdout.write(\n \"\\033[93m\" + \"DISCLAIMER\" + \"\\033[0m\"\n \":\"\n + \"\\033[1m\"\n + \" OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\\n\"\n + \"\\033[0m\"\n )\n\n sys.stdout.write(\"Where to get help: \\n\")\n sys.stdout.write(\n \" - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\\n\"\n )\n sys.stdout.write(\n \" - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\\n\"\n )\n sys.stdout.write(\n \" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n )\n\n _registered_peer.start()\n\n return _registered_peer\n\n except Exception as e:\n sys.stdout.write(\"\\033[91m\" + \"FAIL\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"You were not able to register your node.\\n\")\n", "path": "syft/grid/__init__.py"}]} | 929 | 498 |
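For context on the PySyft record above: rather than raising, the accepted patch guards `register()` with a module-level singleton, warns on a second call, and returns the existing peer. A generic sketch of that guard pattern; the names below are placeholders rather than PySyft's actual API:

```python
# Generic "register once" guard mirroring the pattern in the patch above.
import sys

_registered_peer = None  # module-level singleton


def register(**kwargs):
    global _registered_peer
    if _registered_peer is not None:
        sys.stdout.write("WARNING: already registered; restart the runtime to register again.\n")
        return _registered_peer
    _registered_peer = object()  # stand-in for the real peer/network object
    return _registered_peer
```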
gh_patches_debug_48578 | rasdani/github-patches | git_diff | openai__gym-2683 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
py.typed not bundled in release
The latest pypi package [gym==0.23.0](https://pypi.org/project/gym/0.23.0/) does not include `py.typed`, resulting in failed `mypy` checks.
Reproduce by `pip install gym` and noting the missing file or downloading the zip from pypi (zip on GH contains the file).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os.path
2 import sys
3 import itertools
4
5 from setuptools import find_packages, setup
6
7 # Don't import gym module here, since deps may not be installed
8 sys.path.insert(0, os.path.join(os.path.dirname(__file__), "gym"))
9 from version import VERSION
10
11 # Environment-specific dependencies.
12 extras = {
13 "atari": ["ale-py~=0.7.4"],
14 "accept-rom-license": ["autorom[accept-rom-license]~=0.4.2"],
15 "box2d": ["box2d-py==2.3.5", "pygame==2.1.0"],
16 "classic_control": ["pygame==2.1.0"],
17 "mujoco": ["mujoco_py>=1.50, <2.0"],
18 "toy_text": ["pygame==2.1.0", "scipy>=1.4.1"],
19 "other": ["lz4>=3.1.0", "opencv-python>=3.0"],
20 }
21
22 # Meta dependency groups.
23 nomujoco_blacklist = set(["mujoco", "accept-rom-license", "atari"])
24 nomujoco_groups = set(extras.keys()) - nomujoco_blacklist
25
26 extras["nomujoco"] = list(
27 itertools.chain.from_iterable(map(lambda group: extras[group], nomujoco_groups))
28 )
29
30
31 all_blacklist = set(["accept-rom-license"])
32 all_groups = set(extras.keys()) - all_blacklist
33
34 extras["all"] = list(
35 itertools.chain.from_iterable(map(lambda group: extras[group], all_groups))
36 )
37
38 setup(
39 name="gym",
40 version=VERSION,
41 description="Gym: A universal API for reinforcement learning environments",
42 url="https://www.gymlibrary.ml/",
43 author="Gym Community",
44 author_email="[email protected]",
45 license="MIT",
46 packages=[package for package in find_packages() if package.startswith("gym")],
47 zip_safe=False,
48 install_requires=[
49 "numpy>=1.18.0",
50 "cloudpickle>=1.2.0",
51 "importlib_metadata>=4.10.0; python_version < '3.10'",
52 "gym_notices>=0.0.4",
53 ],
54 extras_require=extras,
55 package_data={
56 "gym": [
57 "envs/mujoco/assets/*.xml",
58 "envs/classic_control/assets/*.png",
59 "envs/toy_text/font/*.ttf",
60 "envs/toy_text/img/*.png",
61 ]
62 },
63 tests_require=["pytest", "mock"],
64 python_requires=">=3.7",
65 classifiers=[
66 "Programming Language :: Python :: 3",
67 "Programming Language :: Python :: 3.7",
68 "Programming Language :: Python :: 3.8",
69 "Programming Language :: Python :: 3.9",
70 "Programming Language :: Python :: 3.10",
71 ],
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -58,6 +58,7 @@
"envs/classic_control/assets/*.png",
"envs/toy_text/font/*.ttf",
"envs/toy_text/img/*.png",
+ "py.typed",
]
},
tests_require=["pytest", "mock"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -58,6 +58,7 @@\n \"envs/classic_control/assets/*.png\",\n \"envs/toy_text/font/*.ttf\",\n \"envs/toy_text/img/*.png\",\n+ \"py.typed\",\n ]\n },\n tests_require=[\"pytest\", \"mock\"],\n", "issue": "py.typed not bundled in release\nThe latest pypi package [gym==0.23.0](https://pypi.org/project/gym/0.23.0/) does not include `py.typed`, resulting in failed `mypy` checks.\r\n\r\nReproduce by `pip install gym` and noting the missing file or downloading the zip from pypi (zip on GH contains the file).\n", "before_files": [{"content": "import os.path\nimport sys\nimport itertools\n\nfrom setuptools import find_packages, setup\n\n# Don't import gym module here, since deps may not be installed\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), \"gym\"))\nfrom version import VERSION\n\n# Environment-specific dependencies.\nextras = {\n \"atari\": [\"ale-py~=0.7.4\"],\n \"accept-rom-license\": [\"autorom[accept-rom-license]~=0.4.2\"],\n \"box2d\": [\"box2d-py==2.3.5\", \"pygame==2.1.0\"],\n \"classic_control\": [\"pygame==2.1.0\"],\n \"mujoco\": [\"mujoco_py>=1.50, <2.0\"],\n \"toy_text\": [\"pygame==2.1.0\", \"scipy>=1.4.1\"],\n \"other\": [\"lz4>=3.1.0\", \"opencv-python>=3.0\"],\n}\n\n# Meta dependency groups.\nnomujoco_blacklist = set([\"mujoco\", \"accept-rom-license\", \"atari\"])\nnomujoco_groups = set(extras.keys()) - nomujoco_blacklist\n\nextras[\"nomujoco\"] = list(\n itertools.chain.from_iterable(map(lambda group: extras[group], nomujoco_groups))\n)\n\n\nall_blacklist = set([\"accept-rom-license\"])\nall_groups = set(extras.keys()) - all_blacklist\n\nextras[\"all\"] = list(\n itertools.chain.from_iterable(map(lambda group: extras[group], all_groups))\n)\n\nsetup(\n name=\"gym\",\n version=VERSION,\n description=\"Gym: A universal API for reinforcement learning environments\",\n url=\"https://www.gymlibrary.ml/\",\n author=\"Gym Community\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n packages=[package for package in find_packages() if package.startswith(\"gym\")],\n zip_safe=False,\n install_requires=[\n \"numpy>=1.18.0\",\n \"cloudpickle>=1.2.0\",\n \"importlib_metadata>=4.10.0; python_version < '3.10'\",\n \"gym_notices>=0.0.4\",\n ],\n extras_require=extras,\n package_data={\n \"gym\": [\n \"envs/mujoco/assets/*.xml\",\n \"envs/classic_control/assets/*.png\",\n \"envs/toy_text/font/*.ttf\",\n \"envs/toy_text/img/*.png\",\n ]\n },\n tests_require=[\"pytest\", \"mock\"],\n python_requires=\">=3.7\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os.path\nimport sys\nimport itertools\n\nfrom setuptools import find_packages, setup\n\n# Don't import gym module here, since deps may not be installed\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), \"gym\"))\nfrom version import VERSION\n\n# Environment-specific dependencies.\nextras = {\n \"atari\": [\"ale-py~=0.7.4\"],\n \"accept-rom-license\": [\"autorom[accept-rom-license]~=0.4.2\"],\n \"box2d\": [\"box2d-py==2.3.5\", \"pygame==2.1.0\"],\n \"classic_control\": [\"pygame==2.1.0\"],\n \"mujoco\": [\"mujoco_py>=1.50, <2.0\"],\n \"toy_text\": [\"pygame==2.1.0\", \"scipy>=1.4.1\"],\n \"other\": [\"lz4>=3.1.0\", \"opencv-python>=3.0\"],\n}\n\n# Meta dependency 
groups.\nnomujoco_blacklist = set([\"mujoco\", \"accept-rom-license\", \"atari\"])\nnomujoco_groups = set(extras.keys()) - nomujoco_blacklist\n\nextras[\"nomujoco\"] = list(\n itertools.chain.from_iterable(map(lambda group: extras[group], nomujoco_groups))\n)\n\n\nall_blacklist = set([\"accept-rom-license\"])\nall_groups = set(extras.keys()) - all_blacklist\n\nextras[\"all\"] = list(\n itertools.chain.from_iterable(map(lambda group: extras[group], all_groups))\n)\n\nsetup(\n name=\"gym\",\n version=VERSION,\n description=\"Gym: A universal API for reinforcement learning environments\",\n url=\"https://www.gymlibrary.ml/\",\n author=\"Gym Community\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n packages=[package for package in find_packages() if package.startswith(\"gym\")],\n zip_safe=False,\n install_requires=[\n \"numpy>=1.18.0\",\n \"cloudpickle>=1.2.0\",\n \"importlib_metadata>=4.10.0; python_version < '3.10'\",\n \"gym_notices>=0.0.4\",\n ],\n extras_require=extras,\n package_data={\n \"gym\": [\n \"envs/mujoco/assets/*.xml\",\n \"envs/classic_control/assets/*.png\",\n \"envs/toy_text/font/*.ttf\",\n \"envs/toy_text/img/*.png\",\n \"py.typed\",\n ]\n },\n tests_require=[\"pytest\", \"mock\"],\n python_requires=\">=3.7\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n)\n", "path": "setup.py"}]} | 1,118 | 86 |
gh_patches_debug_24910 | rasdani/github-patches | git_diff | pypi__warehouse-3457 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Trending Projects are not updated
I think in the past month that I've been looking at pypi.org (and even before that), the "Trending Projects" haven't changed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/packaging/tasks.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from warehouse import tasks
14 from warehouse.cache.origin import IOriginCache
15 from warehouse.packaging.models import Project
16
17
18 @tasks.task(ignore_result=True, acks_late=True)
19 def compute_trending(request):
20 bq = request.find_service(name="gcloud.bigquery")
21 query = bq.run_sync_query(
22 """ SELECT project,
23 IF(
24 STDDEV(downloads) > 0,
25 (todays_downloads - AVG(downloads))/STDDEV(downloads),
26 NULL
27 ) as zscore
28 FROM (
29 SELECT project,
30 date,
31 downloads,
32 FIRST_VALUE(downloads) OVER (
33 PARTITION BY project
34 ORDER BY DATE DESC
35 ROWS BETWEEN UNBOUNDED PRECEDING
36 AND UNBOUNDED FOLLOWING
37 ) as todays_downloads
38 FROM (
39 SELECT file.project as project,
40 DATE(timestamp) AS date,
41 COUNT(*) as downloads
42 FROM `{table}`
43 WHERE _TABLE_SUFFIX BETWEEN
44 FORMAT_DATE(
45 "%Y%m%d",
46 DATE_ADD(CURRENT_DATE(), INTERVAL -31 day))
47 AND
48 FORMAT_DATE(
49 "%Y%m%d",
50 DATE_ADD(CURRENT_DATE(), INTERVAL -1 day))
51 GROUP BY file.project, date
52 )
53 )
54 GROUP BY project, todays_downloads
55 HAVING SUM(downloads) >= 5000
56 ORDER BY zscore DESC
57 """.format(table=request.registry.settings["warehouse.trending_table"])
58 )
59 query.use_legacy_sql = False
60 query.run()
61
62 zscores = {}
63 page_token = None
64 while True:
65 rows, total_rows, page_token = query.fetch_data(
66 max_results=1000,
67 page_token=page_token,
68 )
69
70 zscores.update(dict(rows))
71
72 if not page_token:
73 break
74
75 # We're going to "reset" all of our zscores to a steady state where they
76 # are all equal to ``None``. The next query will then set any that have a
77 # value back to the expected value.
78 (request.db.query(Project)
79 .filter(Project.zscore != None) # noqa
80 .update({Project.zscore: None}))
81
82 # We need to convert the normalized name that we get out of BigQuery and
83 # turn it into the primary key of the Project object and construct a list
84 # of primary key: new zscore, including a default of None if the item isn't
85 # in the result set.
86 query = request.db.query(Project.name, Project.normalized_name).all()
87 to_update = [
88 {"name": name, "zscore": zscores[normalized_name]}
89 for name, normalized_name in query
90 if normalized_name in zscores
91 ]
92
93 # Reflect out updated ZScores into the database.
94 request.db.bulk_update_mappings(Project, to_update)
95
96 # Trigger a purge of the trending surrogate key.
97 try:
98 cacher = request.find_service(IOriginCache)
99 except ValueError:
100 pass
101 else:
102 cacher.purge(["trending"])
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/packaging/tasks.py b/warehouse/packaging/tasks.py
--- a/warehouse/packaging/tasks.py
+++ b/warehouse/packaging/tasks.py
@@ -18,7 +18,7 @@
@tasks.task(ignore_result=True, acks_late=True)
def compute_trending(request):
bq = request.find_service(name="gcloud.bigquery")
- query = bq.run_sync_query(
+ query = bq.query(
""" SELECT project,
IF(
STDDEV(downloads) > 0,
@@ -56,21 +56,11 @@
ORDER BY zscore DESC
""".format(table=request.registry.settings["warehouse.trending_table"])
)
- query.use_legacy_sql = False
- query.run()
zscores = {}
- page_token = None
- while True:
- rows, total_rows, page_token = query.fetch_data(
- max_results=1000,
- page_token=page_token,
- )
-
- zscores.update(dict(rows))
-
- if not page_token:
- break
+ for row in query.result():
+ row = dict(row)
+ zscores[row["project"]] = row["zscore"]
# We're going to "reset" all of our zscores to a steady state where they
# are all equal to ``None``. The next query will then set any that have a
| {"golden_diff": "diff --git a/warehouse/packaging/tasks.py b/warehouse/packaging/tasks.py\n--- a/warehouse/packaging/tasks.py\n+++ b/warehouse/packaging/tasks.py\n@@ -18,7 +18,7 @@\n @tasks.task(ignore_result=True, acks_late=True)\n def compute_trending(request):\n bq = request.find_service(name=\"gcloud.bigquery\")\n- query = bq.run_sync_query(\n+ query = bq.query(\n \"\"\" SELECT project,\n IF(\n STDDEV(downloads) > 0,\n@@ -56,21 +56,11 @@\n ORDER BY zscore DESC\n \"\"\".format(table=request.registry.settings[\"warehouse.trending_table\"])\n )\n- query.use_legacy_sql = False\n- query.run()\n \n zscores = {}\n- page_token = None\n- while True:\n- rows, total_rows, page_token = query.fetch_data(\n- max_results=1000,\n- page_token=page_token,\n- )\n-\n- zscores.update(dict(rows))\n-\n- if not page_token:\n- break\n+ for row in query.result():\n+ row = dict(row)\n+ zscores[row[\"project\"]] = row[\"zscore\"]\n \n # We're going to \"reset\" all of our zscores to a steady state where they\n # are all equal to ``None``. The next query will then set any that have a\n", "issue": "Trending Projects are not updated\nI think in the past month that I've been looking at pypi.org (and even before that), the \"Trending Projects\" haven't changed.\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom warehouse import tasks\nfrom warehouse.cache.origin import IOriginCache\nfrom warehouse.packaging.models import Project\n\n\[email protected](ignore_result=True, acks_late=True)\ndef compute_trending(request):\n bq = request.find_service(name=\"gcloud.bigquery\")\n query = bq.run_sync_query(\n \"\"\" SELECT project,\n IF(\n STDDEV(downloads) > 0,\n (todays_downloads - AVG(downloads))/STDDEV(downloads),\n NULL\n ) as zscore\n FROM (\n SELECT project,\n date,\n downloads,\n FIRST_VALUE(downloads) OVER (\n PARTITION BY project\n ORDER BY DATE DESC\n ROWS BETWEEN UNBOUNDED PRECEDING\n AND UNBOUNDED FOLLOWING\n ) as todays_downloads\n FROM (\n SELECT file.project as project,\n DATE(timestamp) AS date,\n COUNT(*) as downloads\n FROM `{table}`\n WHERE _TABLE_SUFFIX BETWEEN\n FORMAT_DATE(\n \"%Y%m%d\",\n DATE_ADD(CURRENT_DATE(), INTERVAL -31 day))\n AND\n FORMAT_DATE(\n \"%Y%m%d\",\n DATE_ADD(CURRENT_DATE(), INTERVAL -1 day))\n GROUP BY file.project, date\n )\n )\n GROUP BY project, todays_downloads\n HAVING SUM(downloads) >= 5000\n ORDER BY zscore DESC\n \"\"\".format(table=request.registry.settings[\"warehouse.trending_table\"])\n )\n query.use_legacy_sql = False\n query.run()\n\n zscores = {}\n page_token = None\n while True:\n rows, total_rows, page_token = query.fetch_data(\n max_results=1000,\n page_token=page_token,\n )\n\n zscores.update(dict(rows))\n\n if not page_token:\n break\n\n # We're going to \"reset\" all of our zscores to a steady state where they\n # are all equal to ``None``. 
The next query will then set any that have a\n # value back to the expected value.\n (request.db.query(Project)\n .filter(Project.zscore != None) # noqa\n .update({Project.zscore: None}))\n\n # We need to convert the normalized name that we get out of BigQuery and\n # turn it into the primary key of the Project object and construct a list\n # of primary key: new zscore, including a default of None if the item isn't\n # in the result set.\n query = request.db.query(Project.name, Project.normalized_name).all()\n to_update = [\n {\"name\": name, \"zscore\": zscores[normalized_name]}\n for name, normalized_name in query\n if normalized_name in zscores\n ]\n\n # Reflect out updated ZScores into the database.\n request.db.bulk_update_mappings(Project, to_update)\n\n # Trigger a purge of the trending surrogate key.\n try:\n cacher = request.find_service(IOriginCache)\n except ValueError:\n pass\n else:\n cacher.purge([\"trending\"])\n", "path": "warehouse/packaging/tasks.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom warehouse import tasks\nfrom warehouse.cache.origin import IOriginCache\nfrom warehouse.packaging.models import Project\n\n\[email protected](ignore_result=True, acks_late=True)\ndef compute_trending(request):\n bq = request.find_service(name=\"gcloud.bigquery\")\n query = bq.query(\n \"\"\" SELECT project,\n IF(\n STDDEV(downloads) > 0,\n (todays_downloads - AVG(downloads))/STDDEV(downloads),\n NULL\n ) as zscore\n FROM (\n SELECT project,\n date,\n downloads,\n FIRST_VALUE(downloads) OVER (\n PARTITION BY project\n ORDER BY DATE DESC\n ROWS BETWEEN UNBOUNDED PRECEDING\n AND UNBOUNDED FOLLOWING\n ) as todays_downloads\n FROM (\n SELECT file.project as project,\n DATE(timestamp) AS date,\n COUNT(*) as downloads\n FROM `{table}`\n WHERE _TABLE_SUFFIX BETWEEN\n FORMAT_DATE(\n \"%Y%m%d\",\n DATE_ADD(CURRENT_DATE(), INTERVAL -31 day))\n AND\n FORMAT_DATE(\n \"%Y%m%d\",\n DATE_ADD(CURRENT_DATE(), INTERVAL -1 day))\n GROUP BY file.project, date\n )\n )\n GROUP BY project, todays_downloads\n HAVING SUM(downloads) >= 5000\n ORDER BY zscore DESC\n \"\"\".format(table=request.registry.settings[\"warehouse.trending_table\"])\n )\n\n zscores = {}\n for row in query.result():\n row = dict(row)\n zscores[row[\"project\"]] = row[\"zscore\"]\n\n # We're going to \"reset\" all of our zscores to a steady state where they\n # are all equal to ``None``. 
The next query will then set any that have a\n # value back to the expected value.\n (request.db.query(Project)\n .filter(Project.zscore != None) # noqa\n .update({Project.zscore: None}))\n\n # We need to convert the normalized name that we get out of BigQuery and\n # turn it into the primary key of the Project object and construct a list\n # of primary key: new zscore, including a default of None if the item isn't\n # in the result set.\n query = request.db.query(Project.name, Project.normalized_name).all()\n to_update = [\n {\"name\": name, \"zscore\": zscores[normalized_name]}\n for name, normalized_name in query\n if normalized_name in zscores\n ]\n\n # Reflect out updated ZScores into the database.\n request.db.bulk_update_mappings(Project, to_update)\n\n # Trigger a purge of the trending surrogate key.\n try:\n cacher = request.find_service(IOriginCache)\n except ValueError:\n pass\n else:\n cacher.purge([\"trending\"])\n", "path": "warehouse/packaging/tasks.py"}]} | 1,277 | 314 |
gh_patches_debug_25848 | rasdani/github-patches | git_diff | pypa__cibuildwheel-76 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows wheels are not built for python 3.7
On Linux wheels are correctly built for all supported python versions. Windows however stops at python 3.6.
Looking at windows.py there seem to be no references to python 3.7
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cibuildwheel/windows.py`
Content:
```
1 from __future__ import print_function
2 import os, tempfile, subprocess, sys, shutil
3 try:
4 from urllib2 import urlopen
5 except ImportError:
6 from urllib.request import urlopen
7 from collections import namedtuple
8 from glob import glob
9
10 from .util import prepare_command, get_build_verbosity_extra_flags, Unbuffered
11
12
13 def build(project_dir, package_name, output_dir, test_command, test_requires, before_build, build_verbosity, skip, environment):
14 # Python under AppVeyor/Windows seems to be buffering by default, giving problems interleaving subprocess call output with unflushed calls to 'print'
15 sys.stdout.flush()
16 sys.stdout = Unbuffered(sys.stdout)
17
18 # run_with_env is a cmd file that sets the right environment variables to
19 run_with_env = os.path.join(tempfile.gettempdir(), 'appveyor_run_with_env.cmd')
20 if not os.path.exists(run_with_env):
21 with open(run_with_env, 'wb') as f:
22 request = urlopen('https://github.com/ogrisel/python-appveyor-demo/raw/09a1c8672e5015a74d8f69d07add6ee803c176ec/appveyor/run_with_env.cmd')
23 f.write(request.read())
24
25 def shell(args, env=None, cwd=None):
26 # print the command executing for the logs
27 print('+ ' + ' '.join(args))
28 args = ['cmd', '/E:ON', '/V:ON', '/C', run_with_env] + args
29 return subprocess.check_call(' '.join(args), env=env, cwd=cwd)
30
31 PythonConfiguration = namedtuple('PythonConfiguration', ['version', 'arch', 'identifier', 'path'])
32 python_configurations = [
33 PythonConfiguration(version='2.7.x', arch="32", identifier='cp27-win32', path='C:\Python27'),
34 PythonConfiguration(version='2.7.x', arch="64", identifier='cp27-win_amd64', path='C:\Python27-x64'),
35 PythonConfiguration(version='3.3.x', arch="32", identifier='cp33-win32', path='C:\Python33'),
36 PythonConfiguration(version='3.3.x', arch="64", identifier='cp33-win_amd64', path='C:\Python33-x64'),
37 PythonConfiguration(version='3.4.x', arch="32", identifier='cp34-win32', path='C:\Python34'),
38 PythonConfiguration(version='3.4.x', arch="64", identifier='cp34-win_amd64', path='C:\Python34-x64'),
39 PythonConfiguration(version='3.5.x', arch="32", identifier='cp35-win32', path='C:\Python35'),
40 PythonConfiguration(version='3.5.x', arch="64", identifier='cp35-win_amd64', path='C:\Python35-x64'),
41 PythonConfiguration(version='3.6.x', arch="32", identifier='cp36-win32', path='C:\Python36'),
42 PythonConfiguration(version='3.6.x', arch="64", identifier='cp36-win_amd64', path='C:\Python36-x64'),
43 ]
44
45 abs_project_dir = os.path.abspath(project_dir)
46 temp_dir = tempfile.mkdtemp(prefix='cibuildwheel')
47 built_wheel_dir = os.path.join(temp_dir, 'built_wheel')
48
49 for config in python_configurations:
50 if skip(config.identifier):
51 print('cibuildwheel: Skipping build %s' % config.identifier, file=sys.stderr)
52 continue
53
54 # setup dirs
55 if os.path.exists(built_wheel_dir):
56 shutil.rmtree(built_wheel_dir)
57 os.makedirs(built_wheel_dir)
58
59 env = os.environ.copy()
60 # set up environment variables for run_with_env
61 env['PYTHON_VERSION'] = config.version
62 env['PYTHON_ARCH'] = config.arch
63 env['PATH'] = os.pathsep.join([
64 config.path,
65 os.path.join(config.path, 'Scripts'),
66 env['PATH']
67 ])
68 env = environment.as_dictionary(prev_environment=env)
69
70 # for the logs - check we're running the right version of python
71 shell(['python', '--version'], env=env)
72 shell(['python', '-c', '"import struct; print(struct.calcsize(\'P\') * 8)\"'], env=env)
73
74 # prepare the Python environment
75 shell(['python', '-m', 'pip', 'install', '--upgrade', 'pip'],
76 env=env)
77 shell(['pip', 'install', '--upgrade', 'setuptools'], env=env)
78 shell(['pip', 'install', 'wheel'], env=env)
79
80 # run the before_build command
81 if before_build:
82 before_build_prepared = prepare_command(before_build, project=abs_project_dir)
83 shell([before_build_prepared], env=env)
84
85 # build the wheel
86 shell(['pip', 'wheel', abs_project_dir, '-w', built_wheel_dir, '--no-deps'] + get_build_verbosity_extra_flags(build_verbosity), env=env)
87 built_wheel = glob(built_wheel_dir+'/*.whl')[0]
88
89 # install the wheel
90 shell(['pip', 'install', built_wheel], env=env)
91
92 # test the wheel
93 if test_requires:
94 shell(['pip', 'install'] + test_requires, env=env)
95 if test_command:
96 # run the tests from c:\, with an absolute path in the command
97 # (this ensures that Python runs the tests against the installed wheel
98 # and not the repo code)
99 test_command_prepared = prepare_command(test_command, project=abs_project_dir)
100 shell([test_command_prepared], cwd='c:\\', env=env)
101
102 # we're all done here; move it to output
103 shutil.move(built_wheel, output_dir)
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cibuildwheel/windows.py b/cibuildwheel/windows.py
--- a/cibuildwheel/windows.py
+++ b/cibuildwheel/windows.py
@@ -40,6 +40,8 @@
PythonConfiguration(version='3.5.x', arch="64", identifier='cp35-win_amd64', path='C:\Python35-x64'),
PythonConfiguration(version='3.6.x', arch="32", identifier='cp36-win32', path='C:\Python36'),
PythonConfiguration(version='3.6.x', arch="64", identifier='cp36-win_amd64', path='C:\Python36-x64'),
+ PythonConfiguration(version='3.7.x', arch="32", identifier='cp37-win32', path='C:\Python37'),
+ PythonConfiguration(version='3.7.x', arch="64", identifier='cp37-win_amd64', path='C:\Python37-x64'),
]
abs_project_dir = os.path.abspath(project_dir)
@@ -50,6 +52,10 @@
if skip(config.identifier):
print('cibuildwheel: Skipping build %s' % config.identifier, file=sys.stderr)
continue
+
+ # check python & pip exist for this configuration
+ assert os.path.exists(os.path.join(config.path, 'python.exe'))
+ assert os.path.exists(os.path.join(config.path, 'Scripts', 'pip.exe'))
# setup dirs
if os.path.exists(built_wheel_dir):
| {"golden_diff": "diff --git a/cibuildwheel/windows.py b/cibuildwheel/windows.py\n--- a/cibuildwheel/windows.py\n+++ b/cibuildwheel/windows.py\n@@ -40,6 +40,8 @@\n PythonConfiguration(version='3.5.x', arch=\"64\", identifier='cp35-win_amd64', path='C:\\Python35-x64'),\n PythonConfiguration(version='3.6.x', arch=\"32\", identifier='cp36-win32', path='C:\\Python36'),\n PythonConfiguration(version='3.6.x', arch=\"64\", identifier='cp36-win_amd64', path='C:\\Python36-x64'),\n+ PythonConfiguration(version='3.7.x', arch=\"32\", identifier='cp37-win32', path='C:\\Python37'),\n+ PythonConfiguration(version='3.7.x', arch=\"64\", identifier='cp37-win_amd64', path='C:\\Python37-x64'),\n ]\n \n abs_project_dir = os.path.abspath(project_dir)\n@@ -50,6 +52,10 @@\n if skip(config.identifier):\n print('cibuildwheel: Skipping build %s' % config.identifier, file=sys.stderr)\n continue\n+ \n+ # check python & pip exist for this configuration\n+ assert os.path.exists(os.path.join(config.path, 'python.exe'))\n+ assert os.path.exists(os.path.join(config.path, 'Scripts', 'pip.exe'))\n \n # setup dirs\n if os.path.exists(built_wheel_dir):\n", "issue": "Windows wheels are not built for python 3.7\nOn Linux wheels are correctly built for all supported python versions. Windows however stops at python 3.6.\r\n\r\nLooking at windows.py there seem to be no references to python 3.7\n", "before_files": [{"content": "from __future__ import print_function\nimport os, tempfile, subprocess, sys, shutil\ntry:\n from urllib2 import urlopen\nexcept ImportError:\n from urllib.request import urlopen\nfrom collections import namedtuple\nfrom glob import glob\n\nfrom .util import prepare_command, get_build_verbosity_extra_flags, Unbuffered\n\n\ndef build(project_dir, package_name, output_dir, test_command, test_requires, before_build, build_verbosity, skip, environment):\n # Python under AppVeyor/Windows seems to be buffering by default, giving problems interleaving subprocess call output with unflushed calls to 'print'\n sys.stdout.flush()\n sys.stdout = Unbuffered(sys.stdout)\n\n # run_with_env is a cmd file that sets the right environment variables to\n run_with_env = os.path.join(tempfile.gettempdir(), 'appveyor_run_with_env.cmd')\n if not os.path.exists(run_with_env):\n with open(run_with_env, 'wb') as f:\n request = urlopen('https://github.com/ogrisel/python-appveyor-demo/raw/09a1c8672e5015a74d8f69d07add6ee803c176ec/appveyor/run_with_env.cmd')\n f.write(request.read())\n\n def shell(args, env=None, cwd=None):\n # print the command executing for the logs\n print('+ ' + ' '.join(args))\n args = ['cmd', '/E:ON', '/V:ON', '/C', run_with_env] + args\n return subprocess.check_call(' '.join(args), env=env, cwd=cwd)\n\n PythonConfiguration = namedtuple('PythonConfiguration', ['version', 'arch', 'identifier', 'path'])\n python_configurations = [\n PythonConfiguration(version='2.7.x', arch=\"32\", identifier='cp27-win32', path='C:\\Python27'),\n PythonConfiguration(version='2.7.x', arch=\"64\", identifier='cp27-win_amd64', path='C:\\Python27-x64'),\n PythonConfiguration(version='3.3.x', arch=\"32\", identifier='cp33-win32', path='C:\\Python33'),\n PythonConfiguration(version='3.3.x', arch=\"64\", identifier='cp33-win_amd64', path='C:\\Python33-x64'),\n PythonConfiguration(version='3.4.x', arch=\"32\", identifier='cp34-win32', path='C:\\Python34'),\n PythonConfiguration(version='3.4.x', arch=\"64\", identifier='cp34-win_amd64', path='C:\\Python34-x64'),\n PythonConfiguration(version='3.5.x', arch=\"32\", identifier='cp35-win32', 
path='C:\\Python35'),\n PythonConfiguration(version='3.5.x', arch=\"64\", identifier='cp35-win_amd64', path='C:\\Python35-x64'),\n PythonConfiguration(version='3.6.x', arch=\"32\", identifier='cp36-win32', path='C:\\Python36'),\n PythonConfiguration(version='3.6.x', arch=\"64\", identifier='cp36-win_amd64', path='C:\\Python36-x64'),\n ]\n\n abs_project_dir = os.path.abspath(project_dir)\n temp_dir = tempfile.mkdtemp(prefix='cibuildwheel')\n built_wheel_dir = os.path.join(temp_dir, 'built_wheel')\n\n for config in python_configurations:\n if skip(config.identifier):\n print('cibuildwheel: Skipping build %s' % config.identifier, file=sys.stderr)\n continue\n\n # setup dirs\n if os.path.exists(built_wheel_dir):\n shutil.rmtree(built_wheel_dir)\n os.makedirs(built_wheel_dir)\n\n env = os.environ.copy()\n # set up environment variables for run_with_env\n env['PYTHON_VERSION'] = config.version\n env['PYTHON_ARCH'] = config.arch\n env['PATH'] = os.pathsep.join([\n config.path,\n os.path.join(config.path, 'Scripts'),\n env['PATH']\n ])\n env = environment.as_dictionary(prev_environment=env)\n\n # for the logs - check we're running the right version of python\n shell(['python', '--version'], env=env)\n shell(['python', '-c', '\"import struct; print(struct.calcsize(\\'P\\') * 8)\\\"'], env=env)\n\n # prepare the Python environment\n shell(['python', '-m', 'pip', 'install', '--upgrade', 'pip'],\n env=env)\n shell(['pip', 'install', '--upgrade', 'setuptools'], env=env)\n shell(['pip', 'install', 'wheel'], env=env)\n\n # run the before_build command\n if before_build:\n before_build_prepared = prepare_command(before_build, project=abs_project_dir)\n shell([before_build_prepared], env=env)\n\n # build the wheel\n shell(['pip', 'wheel', abs_project_dir, '-w', built_wheel_dir, '--no-deps'] + get_build_verbosity_extra_flags(build_verbosity), env=env)\n built_wheel = glob(built_wheel_dir+'/*.whl')[0]\n\n # install the wheel\n shell(['pip', 'install', built_wheel], env=env)\n\n # test the wheel\n if test_requires:\n shell(['pip', 'install'] + test_requires, env=env)\n if test_command:\n # run the tests from c:\\, with an absolute path in the command\n # (this ensures that Python runs the tests against the installed wheel\n # and not the repo code)\n test_command_prepared = prepare_command(test_command, project=abs_project_dir)\n shell([test_command_prepared], cwd='c:\\\\', env=env)\n\n # we're all done here; move it to output\n shutil.move(built_wheel, output_dir)\n", "path": "cibuildwheel/windows.py"}], "after_files": [{"content": "from __future__ import print_function\nimport os, tempfile, subprocess, sys, shutil\ntry:\n from urllib2 import urlopen\nexcept ImportError:\n from urllib.request import urlopen\nfrom collections import namedtuple\nfrom glob import glob\n\nfrom .util import prepare_command, get_build_verbosity_extra_flags, Unbuffered\n\n\ndef build(project_dir, package_name, output_dir, test_command, test_requires, before_build, build_verbosity, skip, environment):\n # Python under AppVeyor/Windows seems to be buffering by default, giving problems interleaving subprocess call output with unflushed calls to 'print'\n sys.stdout.flush()\n sys.stdout = Unbuffered(sys.stdout)\n\n # run_with_env is a cmd file that sets the right environment variables to\n run_with_env = os.path.join(tempfile.gettempdir(), 'appveyor_run_with_env.cmd')\n if not os.path.exists(run_with_env):\n with open(run_with_env, 'wb') as f:\n request = 
urlopen('https://github.com/ogrisel/python-appveyor-demo/raw/09a1c8672e5015a74d8f69d07add6ee803c176ec/appveyor/run_with_env.cmd')\n f.write(request.read())\n\n def shell(args, env=None, cwd=None):\n # print the command executing for the logs\n print('+ ' + ' '.join(args))\n args = ['cmd', '/E:ON', '/V:ON', '/C', run_with_env] + args\n return subprocess.check_call(' '.join(args), env=env, cwd=cwd)\n\n PythonConfiguration = namedtuple('PythonConfiguration', ['version', 'arch', 'identifier', 'path'])\n python_configurations = [\n PythonConfiguration(version='2.7.x', arch=\"32\", identifier='cp27-win32', path='C:\\Python27'),\n PythonConfiguration(version='2.7.x', arch=\"64\", identifier='cp27-win_amd64', path='C:\\Python27-x64'),\n PythonConfiguration(version='3.3.x', arch=\"32\", identifier='cp33-win32', path='C:\\Python33'),\n PythonConfiguration(version='3.3.x', arch=\"64\", identifier='cp33-win_amd64', path='C:\\Python33-x64'),\n PythonConfiguration(version='3.4.x', arch=\"32\", identifier='cp34-win32', path='C:\\Python34'),\n PythonConfiguration(version='3.4.x', arch=\"64\", identifier='cp34-win_amd64', path='C:\\Python34-x64'),\n PythonConfiguration(version='3.5.x', arch=\"32\", identifier='cp35-win32', path='C:\\Python35'),\n PythonConfiguration(version='3.5.x', arch=\"64\", identifier='cp35-win_amd64', path='C:\\Python35-x64'),\n PythonConfiguration(version='3.6.x', arch=\"32\", identifier='cp36-win32', path='C:\\Python36'),\n PythonConfiguration(version='3.6.x', arch=\"64\", identifier='cp36-win_amd64', path='C:\\Python36-x64'),\n PythonConfiguration(version='3.7.x', arch=\"32\", identifier='cp37-win32', path='C:\\Python37'),\n PythonConfiguration(version='3.7.x', arch=\"64\", identifier='cp37-win_amd64', path='C:\\Python37-x64'),\n ]\n\n abs_project_dir = os.path.abspath(project_dir)\n temp_dir = tempfile.mkdtemp(prefix='cibuildwheel')\n built_wheel_dir = os.path.join(temp_dir, 'built_wheel')\n\n for config in python_configurations:\n if skip(config.identifier):\n print('cibuildwheel: Skipping build %s' % config.identifier, file=sys.stderr)\n continue\n \n # check python & pip exist for this configuration\n assert os.path.exists(os.path.join(config.path, 'python.exe'))\n assert os.path.exists(os.path.join(config.path, 'Scripts', 'pip.exe'))\n\n # setup dirs\n if os.path.exists(built_wheel_dir):\n shutil.rmtree(built_wheel_dir)\n os.makedirs(built_wheel_dir)\n\n env = os.environ.copy()\n # set up environment variables for run_with_env\n env['PYTHON_VERSION'] = config.version\n env['PYTHON_ARCH'] = config.arch\n env['PATH'] = os.pathsep.join([\n config.path,\n os.path.join(config.path, 'Scripts'),\n env['PATH']\n ])\n env = environment.as_dictionary(prev_environment=env)\n\n # for the logs - check we're running the right version of python\n shell(['python', '--version'], env=env)\n shell(['python', '-c', '\"import struct; print(struct.calcsize(\\'P\\') * 8)\\\"'], env=env)\n\n # prepare the Python environment\n shell(['python', '-m', 'pip', 'install', '--upgrade', 'pip'],\n env=env)\n shell(['pip', 'install', '--upgrade', 'setuptools'], env=env)\n shell(['pip', 'install', 'wheel'], env=env)\n\n # run the before_build command\n if before_build:\n before_build_prepared = prepare_command(before_build, project=abs_project_dir)\n shell([before_build_prepared], env=env)\n\n # build the wheel\n shell(['pip', 'wheel', abs_project_dir, '-w', built_wheel_dir, '--no-deps'] + get_build_verbosity_extra_flags(build_verbosity), env=env)\n built_wheel = glob(built_wheel_dir+'/*.whl')[0]\n\n # install 
the wheel\n shell(['pip', 'install', built_wheel], env=env)\n\n # test the wheel\n if test_requires:\n shell(['pip', 'install'] + test_requires, env=env)\n if test_command:\n # run the tests from c:\\, with an absolute path in the command\n # (this ensures that Python runs the tests against the installed wheel\n # and not the repo code)\n test_command_prepared = prepare_command(test_command, project=abs_project_dir)\n shell([test_command_prepared], cwd='c:\\\\', env=env)\n\n # we're all done here; move it to output\n shutil.move(built_wheel, output_dir)\n", "path": "cibuildwheel/windows.py"}]} | 1,824 | 347 |
gh_patches_debug_5434 | rasdani/github-patches | git_diff | secdev__scapy-4403 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scapy overrides platform
Scapy exports the platform name, and could override the platform module. This is likely the issue: https://github.com/secdev/scapy/blob/b0506a1e22321eba41d5c21d26bba418de04bc8f/scapy/consts.py#L10
Here are the examples:
```shell
python issue.py
<class 'str'>
<class 'module'>
```
```python
import platform
from scapy.all import *
print(type(platform))
import platform
print(type(platform))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/consts.py`
Content:
```
1 # SPDX-License-Identifier: GPL-2.0-only
2 # This file is part of Scapy
3 # See https://scapy.net/ for more information
4 # Copyright (C) Philippe Biondi <[email protected]>
5
6 """
7 This file contains constants
8 """
9
10 from sys import byteorder, platform, maxsize
11 import platform as platform_lib
12
13 LINUX = platform.startswith("linux")
14 OPENBSD = platform.startswith("openbsd")
15 FREEBSD = "freebsd" in platform
16 NETBSD = platform.startswith("netbsd")
17 DARWIN = platform.startswith("darwin")
18 SOLARIS = platform.startswith("sunos")
19 WINDOWS = platform.startswith("win32")
20 WINDOWS_XP = platform_lib.release() == "XP"
21 BSD = DARWIN or FREEBSD or OPENBSD or NETBSD
22 # See https://docs.python.org/3/library/platform.html#cross-platform
23 IS_64BITS = maxsize > 2**32
24 BIG_ENDIAN = byteorder == 'big'
25 # LOOPBACK_NAME moved to conf.loopback_name
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scapy/consts.py b/scapy/consts.py
--- a/scapy/consts.py
+++ b/scapy/consts.py
@@ -10,6 +10,20 @@
from sys import byteorder, platform, maxsize
import platform as platform_lib
+__all__ = [
+ "LINUX",
+ "OPENBSD",
+ "FREEBSD",
+ "NETBSD",
+ "DARWIN",
+ "SOLARIS",
+ "WINDOWS",
+ "WINDOWS_XP",
+ "BSD",
+ "IS_64BITS",
+ "BIG_ENDIAN",
+]
+
LINUX = platform.startswith("linux")
OPENBSD = platform.startswith("openbsd")
FREEBSD = "freebsd" in platform
| {"golden_diff": "diff --git a/scapy/consts.py b/scapy/consts.py\n--- a/scapy/consts.py\n+++ b/scapy/consts.py\n@@ -10,6 +10,20 @@\n from sys import byteorder, platform, maxsize\n import platform as platform_lib\n \n+__all__ = [\n+ \"LINUX\",\n+ \"OPENBSD\",\n+ \"FREEBSD\",\n+ \"NETBSD\",\n+ \"DARWIN\",\n+ \"SOLARIS\",\n+ \"WINDOWS\",\n+ \"WINDOWS_XP\",\n+ \"BSD\",\n+ \"IS_64BITS\",\n+ \"BIG_ENDIAN\",\n+]\n+\n LINUX = platform.startswith(\"linux\")\n OPENBSD = platform.startswith(\"openbsd\")\n FREEBSD = \"freebsd\" in platform\n", "issue": "Scapy overides platform\nScapy exports the platform name, and could override the platform module. This is likely the issue: https://github.com/secdev/scapy/blob/b0506a1e22321eba41d5c21d26bba418de04bc8f/scapy/consts.py#L10\r\n\r\nHere are the example:\r\n\r\n```shell\r\npython issue.py \r\n<class 'str'>\r\n<class 'module'>\r\n```\r\n\r\n```python\r\nimport platform\r\nfrom scapy.all import *\r\nprint(type(platform))\r\n\r\nimport platform\r\nprint(type(platform))\r\n```\n", "before_files": [{"content": "# SPDX-License-Identifier: GPL-2.0-only\n# This file is part of Scapy\n# See https://scapy.net/ for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n\n\"\"\"\nThis file contains constants\n\"\"\"\n\nfrom sys import byteorder, platform, maxsize\nimport platform as platform_lib\n\nLINUX = platform.startswith(\"linux\")\nOPENBSD = platform.startswith(\"openbsd\")\nFREEBSD = \"freebsd\" in platform\nNETBSD = platform.startswith(\"netbsd\")\nDARWIN = platform.startswith(\"darwin\")\nSOLARIS = platform.startswith(\"sunos\")\nWINDOWS = platform.startswith(\"win32\")\nWINDOWS_XP = platform_lib.release() == \"XP\"\nBSD = DARWIN or FREEBSD or OPENBSD or NETBSD\n# See https://docs.python.org/3/library/platform.html#cross-platform\nIS_64BITS = maxsize > 2**32\nBIG_ENDIAN = byteorder == 'big'\n# LOOPBACK_NAME moved to conf.loopback_name\n", "path": "scapy/consts.py"}], "after_files": [{"content": "# SPDX-License-Identifier: GPL-2.0-only\n# This file is part of Scapy\n# See https://scapy.net/ for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n\n\"\"\"\nThis file contains constants\n\"\"\"\n\nfrom sys import byteorder, platform, maxsize\nimport platform as platform_lib\n\n__all__ = [\n \"LINUX\",\n \"OPENBSD\",\n \"FREEBSD\",\n \"NETBSD\",\n \"DARWIN\",\n \"SOLARIS\",\n \"WINDOWS\",\n \"WINDOWS_XP\",\n \"BSD\",\n \"IS_64BITS\",\n \"BIG_ENDIAN\",\n]\n\nLINUX = platform.startswith(\"linux\")\nOPENBSD = platform.startswith(\"openbsd\")\nFREEBSD = \"freebsd\" in platform\nNETBSD = platform.startswith(\"netbsd\")\nDARWIN = platform.startswith(\"darwin\")\nSOLARIS = platform.startswith(\"sunos\")\nWINDOWS = platform.startswith(\"win32\")\nWINDOWS_XP = platform_lib.release() == \"XP\"\nBSD = DARWIN or FREEBSD or OPENBSD or NETBSD\n# See https://docs.python.org/3/library/platform.html#cross-platform\nIS_64BITS = maxsize > 2**32\nBIG_ENDIAN = byteorder == 'big'\n# LOOPBACK_NAME moved to conf.loopback_name\n", "path": "scapy/consts.py"}]} | 644 | 168 |
gh_patches_debug_22411 | rasdani/github-patches | git_diff | wagtail__wagtail-730 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow use of MPO formatted JPEG images
Just tried loading some JPEG images into a website and was given an error "Not a valid JPEG image please use blah blah".
The images were from my Nikon D3300 which seems to create JPEG files in MPO format. This format is supported by Pillow but Wagtail is blocking them from being uploaded. I disabled the format validation and everything seemed to work fine.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailimages/fields.py`
Content:
```
1 import os
2
3 from PIL import Image
4
5 from django.forms.fields import ImageField
6 from django.core.exceptions import ValidationError
7 from django.utils.translation import ugettext_lazy as _
8 from django.template.defaultfilters import filesizeformat
9 from django.conf import settings
10
11
12 ALLOWED_EXTENSIONS = ['gif', 'jpg', 'jpeg', 'png']
13 SUPPORTED_FORMATS_TEXT = _("GIF, JPEG, PNG")
14
15 INVALID_IMAGE_ERROR = _(
16 "Not a supported image format. Supported formats: %s."
17 ) % SUPPORTED_FORMATS_TEXT
18
19 INVALID_IMAGE_KNOWN_FORMAT_ERROR = _(
20 "Not a valid %s image."
21 )
22
23 MAX_UPLOAD_SIZE = getattr(settings, 'WAGTAILIMAGES_MAX_UPLOAD_SIZE', 10 * 1024 * 1024)
24
25 if MAX_UPLOAD_SIZE is not None:
26 MAX_UPLOAD_SIZE_TEXT = filesizeformat(MAX_UPLOAD_SIZE)
27
28 FILE_TOO_LARGE_ERROR = _(
29 "This file is too big. Maximum filesize %s."
30 ) % (MAX_UPLOAD_SIZE_TEXT, )
31
32 FILE_TOO_LARGE_KNOWN_SIZE_ERROR = _(
33 "This file is too big (%%s). Maximum filesize %s."
34 ) % (MAX_UPLOAD_SIZE_TEXT, )
35
36 IMAGE_FIELD_HELP_TEXT = _(
37 "Supported formats: %s. Maximum filesize: %s."
38 ) % (SUPPORTED_FORMATS_TEXT, MAX_UPLOAD_SIZE_TEXT, )
39 else:
40 MAX_UPLOAD_SIZE_TEXT = ""
41 FILE_TOO_LARGE_ERROR = ""
42 FILE_TOO_LARGE_KNOWN_SIZE_ERROR = ""
43
44 IMAGE_FIELD_HELP_TEXT = _(
45 "Supported formats: %s."
46 ) % (SUPPORTED_FORMATS_TEXT, )
47
48
49 class WagtailImageField(ImageField):
50 default_error_messages = {
51 'invalid_image': INVALID_IMAGE_ERROR,
52 'invalid_image_known_format': INVALID_IMAGE_KNOWN_FORMAT_ERROR,
53 'file_too_large': FILE_TOO_LARGE_KNOWN_SIZE_ERROR,
54 }
55
56 def __init__(self, *args, **kwargs):
57 super(WagtailImageField, self).__init__(*args, **kwargs)
58
59 self.help_text = IMAGE_FIELD_HELP_TEXT
60
61 def check_image_file_format(self, f):
62 # Check file extension
63 extension = os.path.splitext(f.name)[1].lower()[1:]
64
65 if extension not in ALLOWED_EXTENSIONS:
66 raise ValidationError(self.error_messages['invalid_image'], code='invalid_image')
67
68 if hasattr(f, 'image'):
69 # Django 1.8 annotates the file object with the PIL image
70 image = f.image
71 elif not f.closed:
72 # Open image file
73 file_position = f.tell()
74 f.seek(0)
75
76 try:
77 image = Image.open(f)
78 except IOError:
79 # Uploaded file is not even an image file (or corrupted)
80 raise ValidationError(self.error_messages['invalid_image_known_format'],
81 code='invalid_image_known_format')
82
83 f.seek(file_position)
84 else:
85 # Couldn't get the PIL image, skip checking the internal file format
86 return
87
88 image_format = extension
89 if extension == 'jpg':
90 image_format = 'jpeg'
91
92 # Check that the internal format matches the extension
93 # It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out
94 if image.format.upper() != image_format.upper():
95 raise ValidationError(self.error_messages['invalid_image_known_format'] % (
96 image_format.upper()
97 ), code='invalid_image_known_format')
98
99 def check_image_file_size(self, f):
100 # Upload size checking can be disabled by setting max upload size to None
101 if MAX_UPLOAD_SIZE is None:
102 return
103
104 # Check the filesize
105 if f.size > MAX_UPLOAD_SIZE:
106 raise ValidationError(self.error_messages['file_too_large'] % (
107 filesizeformat(f.size),
108 ), code='file_too_large')
109
110 def to_python(self, data):
111 f = super(WagtailImageField, self).to_python(data)
112
113 if f is not None:
114 self.check_image_file_size(f)
115 self.check_image_file_format(f)
116
117 return f
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailimages/fields.py b/wagtail/wagtailimages/fields.py
--- a/wagtail/wagtailimages/fields.py
+++ b/wagtail/wagtailimages/fields.py
@@ -85,15 +85,19 @@
# Couldn't get the PIL image, skip checking the internal file format
return
- image_format = extension
- if extension == 'jpg':
- image_format = 'jpeg'
+ image_format = extension.upper()
+ if image_format == 'JPG':
+ image_format = 'JPEG'
+
+ internal_image_format = image.format.upper()
+ if internal_image_format == 'MPO':
+ internal_image_format = 'JPEG'
# Check that the internal format matches the extension
# It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out
- if image.format.upper() != image_format.upper():
+ if internal_image_format != image_format:
raise ValidationError(self.error_messages['invalid_image_known_format'] % (
- image_format.upper()
+ image_format,
), code='invalid_image_known_format')
def check_image_file_size(self, f):
| {"golden_diff": "diff --git a/wagtail/wagtailimages/fields.py b/wagtail/wagtailimages/fields.py\n--- a/wagtail/wagtailimages/fields.py\n+++ b/wagtail/wagtailimages/fields.py\n@@ -85,15 +85,19 @@\n # Couldn't get the PIL image, skip checking the internal file format\n return\n \n- image_format = extension\n- if extension == 'jpg':\n- image_format = 'jpeg'\n+ image_format = extension.upper()\n+ if image_format == 'JPG':\n+ image_format = 'JPEG'\n+\n+ internal_image_format = image.format.upper()\n+ if internal_image_format == 'MPO':\n+ internal_image_format = 'JPEG'\n \n # Check that the internal format matches the extension\n # It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out\n- if image.format.upper() != image_format.upper():\n+ if internal_image_format != image_format:\n raise ValidationError(self.error_messages['invalid_image_known_format'] % (\n- image_format.upper()\n+ image_format,\n ), code='invalid_image_known_format')\n \n def check_image_file_size(self, f):\n", "issue": "Allow use of MPO formatted JPEG images\nJust tried loading some JPEG images into a website and was given an error \"Not a valid JPEG image please use blah blah\".\n\nThe images were from my Nikon D3300 which seems to create JPEG files in MPO format. This format is supported by Pillow but Wagtail is blocking them from being uploaded. I disabled the format validation and everything seemed to work fine.\n\n", "before_files": [{"content": "import os\n\nfrom PIL import Image\n\nfrom django.forms.fields import ImageField\nfrom django.core.exceptions import ValidationError\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.template.defaultfilters import filesizeformat\nfrom django.conf import settings\n\n\nALLOWED_EXTENSIONS = ['gif', 'jpg', 'jpeg', 'png']\nSUPPORTED_FORMATS_TEXT = _(\"GIF, JPEG, PNG\")\n\nINVALID_IMAGE_ERROR = _(\n \"Not a supported image format. Supported formats: %s.\"\n) % SUPPORTED_FORMATS_TEXT\n\nINVALID_IMAGE_KNOWN_FORMAT_ERROR = _(\n \"Not a valid %s image.\"\n)\n\nMAX_UPLOAD_SIZE = getattr(settings, 'WAGTAILIMAGES_MAX_UPLOAD_SIZE', 10 * 1024 * 1024)\n\nif MAX_UPLOAD_SIZE is not None:\n MAX_UPLOAD_SIZE_TEXT = filesizeformat(MAX_UPLOAD_SIZE)\n\n FILE_TOO_LARGE_ERROR = _(\n \"This file is too big. Maximum filesize %s.\"\n ) % (MAX_UPLOAD_SIZE_TEXT, )\n\n FILE_TOO_LARGE_KNOWN_SIZE_ERROR = _(\n \"This file is too big (%%s). Maximum filesize %s.\"\n ) % (MAX_UPLOAD_SIZE_TEXT, )\n\n IMAGE_FIELD_HELP_TEXT = _(\n \"Supported formats: %s. 
Maximum filesize: %s.\"\n ) % (SUPPORTED_FORMATS_TEXT, MAX_UPLOAD_SIZE_TEXT, )\nelse:\n MAX_UPLOAD_SIZE_TEXT = \"\"\n FILE_TOO_LARGE_ERROR = \"\"\n FILE_TOO_LARGE_KNOWN_SIZE_ERROR = \"\"\n\n IMAGE_FIELD_HELP_TEXT = _(\n \"Supported formats: %s.\"\n ) % (SUPPORTED_FORMATS_TEXT, )\n\n\nclass WagtailImageField(ImageField):\n default_error_messages = {\n 'invalid_image': INVALID_IMAGE_ERROR,\n 'invalid_image_known_format': INVALID_IMAGE_KNOWN_FORMAT_ERROR,\n 'file_too_large': FILE_TOO_LARGE_KNOWN_SIZE_ERROR,\n }\n\n def __init__(self, *args, **kwargs):\n super(WagtailImageField, self).__init__(*args, **kwargs)\n\n self.help_text = IMAGE_FIELD_HELP_TEXT\n\n def check_image_file_format(self, f):\n # Check file extension\n extension = os.path.splitext(f.name)[1].lower()[1:]\n\n if extension not in ALLOWED_EXTENSIONS:\n raise ValidationError(self.error_messages['invalid_image'], code='invalid_image')\n\n if hasattr(f, 'image'):\n # Django 1.8 annotates the file object with the PIL image\n image = f.image\n elif not f.closed:\n # Open image file\n file_position = f.tell()\n f.seek(0)\n\n try:\n image = Image.open(f)\n except IOError:\n # Uploaded file is not even an image file (or corrupted)\n raise ValidationError(self.error_messages['invalid_image_known_format'],\n code='invalid_image_known_format')\n\n f.seek(file_position)\n else:\n # Couldn't get the PIL image, skip checking the internal file format\n return\n\n image_format = extension\n if extension == 'jpg':\n image_format = 'jpeg'\n\n # Check that the internal format matches the extension\n # It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out\n if image.format.upper() != image_format.upper():\n raise ValidationError(self.error_messages['invalid_image_known_format'] % (\n image_format.upper()\n ), code='invalid_image_known_format')\n\n def check_image_file_size(self, f):\n # Upload size checking can be disabled by setting max upload size to None\n if MAX_UPLOAD_SIZE is None:\n return\n\n # Check the filesize\n if f.size > MAX_UPLOAD_SIZE:\n raise ValidationError(self.error_messages['file_too_large'] % (\n filesizeformat(f.size),\n ), code='file_too_large')\n\n def to_python(self, data):\n f = super(WagtailImageField, self).to_python(data)\n\n if f is not None:\n self.check_image_file_size(f)\n self.check_image_file_format(f)\n\n return f\n", "path": "wagtail/wagtailimages/fields.py"}], "after_files": [{"content": "import os\n\nfrom PIL import Image\n\nfrom django.forms.fields import ImageField\nfrom django.core.exceptions import ValidationError\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.template.defaultfilters import filesizeformat\nfrom django.conf import settings\n\n\nALLOWED_EXTENSIONS = ['gif', 'jpg', 'jpeg', 'png']\nSUPPORTED_FORMATS_TEXT = _(\"GIF, JPEG, PNG\")\n\nINVALID_IMAGE_ERROR = _(\n \"Not a supported image format. Supported formats: %s.\"\n) % SUPPORTED_FORMATS_TEXT\n\nINVALID_IMAGE_KNOWN_FORMAT_ERROR = _(\n \"Not a valid %s image.\"\n)\n\nMAX_UPLOAD_SIZE = getattr(settings, 'WAGTAILIMAGES_MAX_UPLOAD_SIZE', 10 * 1024 * 1024)\n\nif MAX_UPLOAD_SIZE is not None:\n MAX_UPLOAD_SIZE_TEXT = filesizeformat(MAX_UPLOAD_SIZE)\n\n FILE_TOO_LARGE_ERROR = _(\n \"This file is too big. Maximum filesize %s.\"\n ) % (MAX_UPLOAD_SIZE_TEXT, )\n\n FILE_TOO_LARGE_KNOWN_SIZE_ERROR = _(\n \"This file is too big (%%s). Maximum filesize %s.\"\n ) % (MAX_UPLOAD_SIZE_TEXT, )\n\n IMAGE_FIELD_HELP_TEXT = _(\n \"Supported formats: %s. 
Maximum filesize: %s.\"\n ) % (SUPPORTED_FORMATS_TEXT, MAX_UPLOAD_SIZE_TEXT, )\nelse:\n MAX_UPLOAD_SIZE_TEXT = \"\"\n FILE_TOO_LARGE_ERROR = \"\"\n FILE_TOO_LARGE_KNOWN_SIZE_ERROR = \"\"\n\n IMAGE_FIELD_HELP_TEXT = _(\n \"Supported formats: %s.\"\n ) % (SUPPORTED_FORMATS_TEXT, )\n\n\nclass WagtailImageField(ImageField):\n default_error_messages = {\n 'invalid_image': INVALID_IMAGE_ERROR,\n 'invalid_image_known_format': INVALID_IMAGE_KNOWN_FORMAT_ERROR,\n 'file_too_large': FILE_TOO_LARGE_KNOWN_SIZE_ERROR,\n }\n\n def __init__(self, *args, **kwargs):\n super(WagtailImageField, self).__init__(*args, **kwargs)\n\n self.help_text = IMAGE_FIELD_HELP_TEXT\n\n def check_image_file_format(self, f):\n # Check file extension\n extension = os.path.splitext(f.name)[1].lower()[1:]\n\n if extension not in ALLOWED_EXTENSIONS:\n raise ValidationError(self.error_messages['invalid_image'], code='invalid_image')\n\n if hasattr(f, 'image'):\n # Django 1.8 annotates the file object with the PIL image\n image = f.image\n elif not f.closed:\n # Open image file\n file_position = f.tell()\n f.seek(0)\n\n try:\n image = Image.open(f)\n except IOError:\n # Uploaded file is not even an image file (or corrupted)\n raise ValidationError(self.error_messages['invalid_image_known_format'],\n code='invalid_image_known_format')\n\n f.seek(file_position)\n else:\n # Couldn't get the PIL image, skip checking the internal file format\n return\n\n image_format = extension.upper()\n if image_format == 'JPG':\n image_format = 'JPEG'\n\n internal_image_format = image.format.upper()\n if internal_image_format == 'MPO':\n internal_image_format = 'JPEG'\n\n # Check that the internal format matches the extension\n # It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out\n if internal_image_format != image_format:\n raise ValidationError(self.error_messages['invalid_image_known_format'] % (\n image_format,\n ), code='invalid_image_known_format')\n\n def check_image_file_size(self, f):\n # Upload size checking can be disabled by setting max upload size to None\n if MAX_UPLOAD_SIZE is None:\n return\n\n # Check the filesize\n if f.size > MAX_UPLOAD_SIZE:\n raise ValidationError(self.error_messages['file_too_large'] % (\n filesizeformat(f.size),\n ), code='file_too_large')\n\n def to_python(self, data):\n f = super(WagtailImageField, self).to_python(data)\n\n if f is not None:\n self.check_image_file_size(f)\n self.check_image_file_format(f)\n\n return f\n", "path": "wagtail/wagtailimages/fields.py"}]} | 1,474 | 271 |
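As an illustrative aside to the Wagtail image-validation row that closes above (it is not part of the dataset row itself), the check its golden diff converges on can be sketched in isolation. The standalone `check_image_format` helper below is an assumption made for readability; the patched field performs an equivalent comparison inside `check_image_file_format`:

```python
from PIL import Image

ALLOWED_EXTENSIONS = {"gif", "jpg", "jpeg", "png"}


def check_image_format(path):
    """Reject files whose internal format does not match their extension."""
    extension = path.rsplit(".", 1)[-1].lower()
    if extension not in ALLOWED_EXTENSIONS:
        raise ValueError("Not a supported image format.")

    # Normalise the extension the same way the patch does: jpg -> JPEG.
    expected_format = "JPEG" if extension == "jpg" else extension.upper()

    with Image.open(path) as image:
        internal_format = image.format.upper()

    # Multi-picture JPEGs (e.g. from some Nikon cameras) report "MPO";
    # the patch treats them as ordinary JPEGs for validation purposes.
    if internal_format == "MPO":
        internal_format = "JPEG"

    if internal_format != expected_format:
        raise ValueError("Not a valid %s image." % expected_format)
```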
gh_patches_debug_14539 | rasdani/github-patches | git_diff | cloudtools__troposphere-1589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for DataLocationResource & TableWithColumnsResource in AWS::LakeFormation::Permissions (2020, Jan 16 update)
waiting for the doc to be updated
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/lakeformation.py`
Content:
```
1 # Copyright (c) 2012-2019, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5 #
6 # *** Do not modify - this file is autogenerated ***
7 # Resource specification version: 5.3.0
8
9
10 from . import AWSObject
11 from . import AWSProperty
12
13
14 class Admins(AWSProperty):
15 props = {
16 }
17
18
19 class DataLakeSettings(AWSObject):
20 resource_type = "AWS::LakeFormation::DataLakeSettings"
21
22 props = {
23 'Admins': (Admins, False),
24 }
25
26
27 class DataLakePrincipal(AWSProperty):
28 props = {
29 'DataLakePrincipalIdentifier': (basestring, False),
30 }
31
32
33 class DatabaseResource(AWSProperty):
34 props = {
35 'Name': (basestring, False),
36 }
37
38
39 class TableResource(AWSProperty):
40 props = {
41 'DatabaseName': (basestring, False),
42 'Name': (basestring, False),
43 }
44
45
46 class Resource(AWSProperty):
47 props = {
48 'DatabaseResource': (DatabaseResource, False),
49 'TableResource': (TableResource, False),
50 }
51
52
53 class Permissions(AWSObject):
54 resource_type = "AWS::LakeFormation::Permissions"
55
56 props = {
57 'DataLakePrincipal': (DataLakePrincipal, True),
58 'Permissions': ([basestring], False),
59 'PermissionsWithGrantOption': ([basestring], False),
60 'Resource': (Resource, True),
61 }
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/lakeformation.py b/troposphere/lakeformation.py
--- a/troposphere/lakeformation.py
+++ b/troposphere/lakeformation.py
@@ -43,10 +43,33 @@
}
+class DataLocationResource(AWSProperty):
+ props = {
+ 'S3Resource': (basestring, False),
+ }
+
+
+class ColumnWildcard(AWSProperty):
+ props = {
+ 'ExcludedColumnNames': ([basestring], False),
+ }
+
+
+class TableWithColumnsResource(AWSProperty):
+ props = {
+ 'ColumnNames': ([basestring], False),
+ 'ColumnWildcard': (ColumnWildcard, False),
+ 'DatabaseName': (basestring, False),
+ 'Name': (basestring, False),
+ }
+
+
class Resource(AWSProperty):
props = {
'DatabaseResource': (DatabaseResource, False),
+ 'DataLocationResource': (DataLocationResource, False),
'TableResource': (TableResource, False),
+ 'TableWithColumnsResource': (TableWithColumnsResource, False),
}
| {"golden_diff": "diff --git a/troposphere/lakeformation.py b/troposphere/lakeformation.py\n--- a/troposphere/lakeformation.py\n+++ b/troposphere/lakeformation.py\n@@ -43,10 +43,33 @@\n }\n \n \n+class DataLocationResource(AWSProperty):\n+ props = {\n+ 'S3Resource': (basestring, False),\n+ }\n+\n+\n+class ColumnWildcard(AWSProperty):\n+ props = {\n+ 'ExcludedColumnNames': ([basestring], False),\n+ }\n+\n+\n+class TableWithColumnsResource(AWSProperty):\n+ props = {\n+ 'ColumnNames': ([basestring], False),\n+ 'ColumnWildcard': (ColumnWildcard, False),\n+ 'DatabaseName': (basestring, False),\n+ 'Name': (basestring, False),\n+ }\n+\n+\n class Resource(AWSProperty):\n props = {\n 'DatabaseResource': (DatabaseResource, False),\n+ 'DataLocationResource': (DataLocationResource, False),\n 'TableResource': (TableResource, False),\n+ 'TableWithColumnsResource': (TableWithColumnsResource, False),\n }\n", "issue": "Add support for DataLocationResource & TableWithColumnsResource in AWS::LakeFormation::Permissions (2020, Jan 16 update)\nwaiting for the doc to be updated\n", "before_files": [{"content": "# Copyright (c) 2012-2019, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n#\n# *** Do not modify - this file is autogenerated ***\n# Resource specification version: 5.3.0\n\n\nfrom . import AWSObject\nfrom . import AWSProperty\n\n\nclass Admins(AWSProperty):\n props = {\n }\n\n\nclass DataLakeSettings(AWSObject):\n resource_type = \"AWS::LakeFormation::DataLakeSettings\"\n\n props = {\n 'Admins': (Admins, False),\n }\n\n\nclass DataLakePrincipal(AWSProperty):\n props = {\n 'DataLakePrincipalIdentifier': (basestring, False),\n }\n\n\nclass DatabaseResource(AWSProperty):\n props = {\n 'Name': (basestring, False),\n }\n\n\nclass TableResource(AWSProperty):\n props = {\n 'DatabaseName': (basestring, False),\n 'Name': (basestring, False),\n }\n\n\nclass Resource(AWSProperty):\n props = {\n 'DatabaseResource': (DatabaseResource, False),\n 'TableResource': (TableResource, False),\n }\n\n\nclass Permissions(AWSObject):\n resource_type = \"AWS::LakeFormation::Permissions\"\n\n props = {\n 'DataLakePrincipal': (DataLakePrincipal, True),\n 'Permissions': ([basestring], False),\n 'PermissionsWithGrantOption': ([basestring], False),\n 'Resource': (Resource, True),\n }\n", "path": "troposphere/lakeformation.py"}], "after_files": [{"content": "# Copyright (c) 2012-2019, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n#\n# *** Do not modify - this file is autogenerated ***\n# Resource specification version: 5.3.0\n\n\nfrom . import AWSObject\nfrom . 
import AWSProperty\n\n\nclass Admins(AWSProperty):\n props = {\n }\n\n\nclass DataLakeSettings(AWSObject):\n resource_type = \"AWS::LakeFormation::DataLakeSettings\"\n\n props = {\n 'Admins': (Admins, False),\n }\n\n\nclass DataLakePrincipal(AWSProperty):\n props = {\n 'DataLakePrincipalIdentifier': (basestring, False),\n }\n\n\nclass DatabaseResource(AWSProperty):\n props = {\n 'Name': (basestring, False),\n }\n\n\nclass TableResource(AWSProperty):\n props = {\n 'DatabaseName': (basestring, False),\n 'Name': (basestring, False),\n }\n\n\nclass DataLocationResource(AWSProperty):\n props = {\n 'S3Resource': (basestring, False),\n }\n\n\nclass ColumnWildcard(AWSProperty):\n props = {\n 'ExcludedColumnNames': ([basestring], False),\n }\n\n\nclass TableWithColumnsResource(AWSProperty):\n props = {\n 'ColumnNames': ([basestring], False),\n 'ColumnWildcard': (ColumnWildcard, False),\n 'DatabaseName': (basestring, False),\n 'Name': (basestring, False),\n }\n\n\nclass Resource(AWSProperty):\n props = {\n 'DatabaseResource': (DatabaseResource, False),\n 'DataLocationResource': (DataLocationResource, False),\n 'TableResource': (TableResource, False),\n 'TableWithColumnsResource': (TableWithColumnsResource, False),\n }\n\n\nclass Permissions(AWSObject):\n resource_type = \"AWS::LakeFormation::Permissions\"\n\n props = {\n 'DataLakePrincipal': (DataLakePrincipal, True),\n 'Permissions': ([basestring], False),\n 'PermissionsWithGrantOption': ([basestring], False),\n 'Resource': (Resource, True),\n }\n", "path": "troposphere/lakeformation.py"}]} | 745 | 252 |
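For orientation, here is a minimal, hypothetical usage sketch of the patched `troposphere.lakeformation` module from the row above. It assumes the golden diff has been applied; the role ARN, database, table, and column names are illustrative placeholders rather than values taken from the dataset:

```python
from troposphere import Template
from troposphere.lakeformation import (
    ColumnWildcard,
    DataLakePrincipal,
    Permissions,
    Resource,
    TableWithColumnsResource,
)

template = Template()

# Grant SELECT on every column of sales.orders except "ssn".
template.add_resource(Permissions(
    "AnalystColumnGrant",
    DataLakePrincipal=DataLakePrincipal(
        DataLakePrincipalIdentifier="arn:aws:iam::123456789012:role/Analyst",
    ),
    Resource=Resource(
        TableWithColumnsResource=TableWithColumnsResource(
            DatabaseName="sales",
            Name="orders",
            ColumnWildcard=ColumnWildcard(ExcludedColumnNames=["ssn"]),
        ),
    ),
    Permissions=["SELECT"],
))

print(template.to_json())
```

Rendering the template with `to_json()` should emit an `AWS::LakeFormation::Permissions` resource whose `Resource` block carries the new `TableWithColumnsResource` shape introduced by the patch.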