problem_id stringlengths 18-22 | source stringclasses 1 value | task_type stringclasses 1 value | in_source_id stringlengths 13-58 | prompt stringlengths 1.71k-9.01k | golden_diff stringlengths 151-4.94k | verification_info stringlengths 465-11.3k | num_tokens_prompt int64 557-2.05k | num_tokens_diff int64 48-1.02k |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_57017 | rasdani/github-patches | git_diff | fidals__shopelectro-995 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Resolve stuck tests
CI fails because of stuck tests. They are working at the local and relevant code looks like they should pass
https://ci.fidals.com/fidals/shopelectro/1727/9
</issue>
<code>
[start of shopelectro/settings/drone.py]
1 """Settings especially for drone CI."""
2
3 from .base import *
4
5
6 DEBUG = True
7
8 # http://bit.ly/sorl-thumbnail-docs
9 THUMBNAIL_DEBUG = True
10
11 SITE_DOMAIN_NAME = 'stage.shopelectro.ru'
12
13 YANDEX_KASSA_LINK = 'https://demomoney.yandex.ru/eshop.xml'
14
15 SELENIUM_URL = os.environ.get('SELENIUM_URL', 'http://selenium:4444/wd/hub')
16 SELENIUM_WAIT_SECONDS = int(os.environ['SELENIUM_WAIT_SECONDS'])
17 SELENIUM_TIMEOUT_SECONDS = int(os.environ['SELENIUM_TIMEOUT_SECONDS'])
18 SELENIUM_IMPLICIT_WAIT = int(os.environ['SELENIUM_IMPLICIT_WAIT'])
19
[end of shopelectro/settings/drone.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shopelectro/settings/drone.py b/shopelectro/settings/drone.py
--- a/shopelectro/settings/drone.py
+++ b/shopelectro/settings/drone.py
@@ -5,6 +5,15 @@
DEBUG = True
+# Header categories menu uses cache in templates.
+# Disable cache to avoid stale menu testing.
+# See #991 for details.
+CACHES = {
+ 'default': {
+ 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
+ }
+}
+
# http://bit.ly/sorl-thumbnail-docs
THUMBNAIL_DEBUG = True
| {"golden_diff": "diff --git a/shopelectro/settings/drone.py b/shopelectro/settings/drone.py\n--- a/shopelectro/settings/drone.py\n+++ b/shopelectro/settings/drone.py\n@@ -5,6 +5,15 @@\n \n DEBUG = True\n \n+# Header categories menu uses cache in templates.\n+# Disable cache to avoid stale menu testing.\n+# See #991 for details.\n+CACHES = {\n+ 'default': {\n+ 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',\n+ }\n+}\n+\n # http://bit.ly/sorl-thumbnail-docs\n THUMBNAIL_DEBUG = True\n", "issue": "Resolve stuck tests\nCI fails because of stuck tests. They are working at the local and relevant code looks like they should pass\r\nhttps://ci.fidals.com/fidals/shopelectro/1727/9\n", "before_files": [{"content": "\"\"\"Settings especially for drone CI.\"\"\"\n\nfrom .base import *\n\n\nDEBUG = True\n\n# http://bit.ly/sorl-thumbnail-docs\nTHUMBNAIL_DEBUG = True\n\nSITE_DOMAIN_NAME = 'stage.shopelectro.ru'\n\nYANDEX_KASSA_LINK = 'https://demomoney.yandex.ru/eshop.xml'\n\nSELENIUM_URL = os.environ.get('SELENIUM_URL', 'http://selenium:4444/wd/hub')\nSELENIUM_WAIT_SECONDS = int(os.environ['SELENIUM_WAIT_SECONDS'])\nSELENIUM_TIMEOUT_SECONDS = int(os.environ['SELENIUM_TIMEOUT_SECONDS'])\nSELENIUM_IMPLICIT_WAIT = int(os.environ['SELENIUM_IMPLICIT_WAIT'])\n", "path": "shopelectro/settings/drone.py"}]} | 783 | 142 |
gh_patches_debug_18393 | rasdani/github-patches | git_diff | tensorflow__addons-834 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add nightly tests for windows/macos
Currently we only test our nightlies on linux:
https://github.com/tensorflow/addons/blob/master/.travis.yml#L17
It should be relatively simple to enable tests for macos/windows, with the one caveat that `tf-nightly` is not published for windows.
</issue>
<code>
[start of tensorflow_addons/losses/__init__.py]
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Additional losses that conform to Keras API."""
16
17 from __future__ import absolute_import
18 from __future__ import division
19 from __future__ import print_function
20
21 from tensorflow_addons.losses.contrastive import contrastive_loss, ContrastiveLoss
22 from tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy
23 from tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss
24 from tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss
25 from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss
26 from tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss
27 from tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss
28
[end of tensorflow_addons/losses/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensorflow_addons/losses/__init__.py b/tensorflow_addons/losses/__init__.py
--- a/tensorflow_addons/losses/__init__.py
+++ b/tensorflow_addons/losses/__init__.py
@@ -22,6 +22,11 @@
from tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy
from tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss
from tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss
-from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss
from tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss
from tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss
+
+# Temporarily disable for windows
+# Remove after: https://github.com/tensorflow/addons/issues/838
+import os
+if os.name != 'nt':
+ from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss
| {"golden_diff": "diff --git a/tensorflow_addons/losses/__init__.py b/tensorflow_addons/losses/__init__.py\n--- a/tensorflow_addons/losses/__init__.py\n+++ b/tensorflow_addons/losses/__init__.py\n@@ -22,6 +22,11 @@\n from tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy\n from tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss\n from tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss\n-from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss\n from tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss\n from tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss\n+\n+# Temporarily disable for windows\n+# Remove after: https://github.com/tensorflow/addons/issues/838\n+import os\n+if os.name != 'nt':\n+ from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss\n", "issue": "Add nightly tests for windows/macos\nCurrently we only test our nightlies on linux:\r\nhttps://github.com/tensorflow/addons/blob/master/.travis.yml#L17\r\n\r\nIt should be relatively simple to enable tests for macos/windows, with the one caveat that `tf-nightly` is not published for windows. \n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Additional losses that conform to Keras API.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom tensorflow_addons.losses.contrastive import contrastive_loss, ContrastiveLoss\nfrom tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy\nfrom tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss\nfrom tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss\nfrom tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss\nfrom tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss\nfrom tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss\n", "path": "tensorflow_addons/losses/__init__.py"}]} | 983 | 276 |
gh_patches_debug_2355 | rasdani/github-patches | git_diff | pytorch__text-248 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
A batch object created by fromvars does not have "fields" attribute
When making a batch object, the value of the `fields` attribute is set in its `__init__` method.
However, when created with `fromvars` class method, `fields` attribute is not set since the method first creates an empty object and then add information.
It should be modified to be analogous with the one created by `__init__` method.
It can be simply done by adding the following after https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L36:
```
batch.fields = dataset.fields.keys()
```
This kind of object creation is found when using BPTT iterator. Without `fields` attribute, printing a batch object is not possible due to https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L49.
</issue>
<code>
[start of torchtext/data/batch.py]
1 from torch import typename
2 from torch.tensor import _TensorBase
3
4
5 class Batch(object):
6 """Defines a batch of examples along with its Fields.
7
8 Attributes:
9 batch_size: Number of examples in the batch.
10 dataset: A reference to the dataset object the examples come from
11 (which itself contains the dataset's Field objects).
12 train: Whether the batch is from a training set.
13
14 Also stores the Variable for each column in the batch as an attribute.
15 """
16
17 def __init__(self, data=None, dataset=None, device=None, train=True):
18 """Create a Batch from a list of examples."""
19 if data is not None:
20 self.batch_size = len(data)
21 self.dataset = dataset
22 self.train = train
23 self.fields = dataset.fields.keys() # copy field names
24
25 for (name, field) in dataset.fields.items():
26 if field is not None:
27 batch = [x.__dict__[name] for x in data]
28 setattr(self, name, field.process(batch, device=device, train=train))
29
30 @classmethod
31 def fromvars(cls, dataset, batch_size, train=True, **kwargs):
32 """Create a Batch directly from a number of Variables."""
33 batch = cls()
34 batch.batch_size = batch_size
35 batch.dataset = dataset
36 batch.train = train
37 for k, v in kwargs.items():
38 setattr(batch, k, v)
39 return batch
40
41 def __repr__(self):
42 return str(self)
43
44 def __str__(self):
45 if not self.__dict__:
46 return 'Empty {} instance'.format(typename(self))
47
48 var_strs = '\n'.join(['\t[.' + name + ']' + ":" + _short_str(getattr(self, name))
49 for name in self.fields if hasattr(self, name)])
50
51 data_str = (' from {}'.format(self.dataset.name.upper())
52 if hasattr(self.dataset, 'name') and
53 isinstance(self.dataset.name, str) else '')
54
55 strt = '[{} of size {}{}]\n{}'.format(typename(self),
56 self.batch_size, data_str, var_strs)
57 return '\n' + strt
58
59 def __len__(self):
60 return self.batch_size
61
62
63 def _short_str(tensor):
64 # unwrap variable to tensor
65 if hasattr(tensor, 'data'):
66 tensor = tensor.data
67
68 # fallback in case of wrong argument type
69 if issubclass(type(tensor), _TensorBase) is False:
70 return str(tensor)
71
72 # copied from torch _tensor_str
73 size_str = 'x'.join(str(size) for size in tensor.size())
74 device_str = '' if not tensor.is_cuda else \
75 ' (GPU {})'.format(tensor.get_device())
76 strt = '[{} of size {}{}]'.format(typename(tensor),
77 size_str, device_str)
78 return strt
79
[end of torchtext/data/batch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py
--- a/torchtext/data/batch.py
+++ b/torchtext/data/batch.py
@@ -34,6 +34,7 @@
batch.batch_size = batch_size
batch.dataset = dataset
batch.train = train
+ batch.fields = dataset.fields.keys()
for k, v in kwargs.items():
setattr(batch, k, v)
return batch
| {"golden_diff": "diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py\n--- a/torchtext/data/batch.py\n+++ b/torchtext/data/batch.py\n@@ -34,6 +34,7 @@\n batch.batch_size = batch_size\n batch.dataset = dataset\n batch.train = train\n+ batch.fields = dataset.fields.keys()\n for k, v in kwargs.items():\n setattr(batch, k, v)\n return batch\n", "issue": "A batch object created by fromvars does not have \"fields\" attribute\nWhen making a batch object, the value of the `fields` attribute is set in its `__init__` method.\r\nHowever, when created with `fromvars` class method, `fields` attribute is not set since the method first creates an empty object and then add information.\r\nIt should be modified to be analogous with the one created by `__init__` method.\r\nIt can be simply done by adding the following after https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L36:\r\n```\r\nbatch.fields = dataset.fields.keys()\r\n```\r\n\r\nThis kind of object creation is found when using BPTT iterator. Without `fields` attribute, printing a batch object is not possible due to https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L49.\n", "before_files": [{"content": "from torch import typename\nfrom torch.tensor import _TensorBase\n\n\nclass Batch(object):\n \"\"\"Defines a batch of examples along with its Fields.\n\n Attributes:\n batch_size: Number of examples in the batch.\n dataset: A reference to the dataset object the examples come from\n (which itself contains the dataset's Field objects).\n train: Whether the batch is from a training set.\n\n Also stores the Variable for each column in the batch as an attribute.\n \"\"\"\n\n def __init__(self, data=None, dataset=None, device=None, train=True):\n \"\"\"Create a Batch from a list of examples.\"\"\"\n if data is not None:\n self.batch_size = len(data)\n self.dataset = dataset\n self.train = train\n self.fields = dataset.fields.keys() # copy field names\n\n for (name, field) in dataset.fields.items():\n if field is not None:\n batch = [x.__dict__[name] for x in data]\n setattr(self, name, field.process(batch, device=device, train=train))\n\n @classmethod\n def fromvars(cls, dataset, batch_size, train=True, **kwargs):\n \"\"\"Create a Batch directly from a number of Variables.\"\"\"\n batch = cls()\n batch.batch_size = batch_size\n batch.dataset = dataset\n batch.train = train\n for k, v in kwargs.items():\n setattr(batch, k, v)\n return batch\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n if not self.__dict__:\n return 'Empty {} instance'.format(typename(self))\n\n var_strs = '\\n'.join(['\\t[.' 
+ name + ']' + \":\" + _short_str(getattr(self, name))\n for name in self.fields if hasattr(self, name)])\n\n data_str = (' from {}'.format(self.dataset.name.upper())\n if hasattr(self.dataset, 'name') and\n isinstance(self.dataset.name, str) else '')\n\n strt = '[{} of size {}{}]\\n{}'.format(typename(self),\n self.batch_size, data_str, var_strs)\n return '\\n' + strt\n\n def __len__(self):\n return self.batch_size\n\n\ndef _short_str(tensor):\n # unwrap variable to tensor\n if hasattr(tensor, 'data'):\n tensor = tensor.data\n\n # fallback in case of wrong argument type\n if issubclass(type(tensor), _TensorBase) is False:\n return str(tensor)\n\n # copied from torch _tensor_str\n size_str = 'x'.join(str(size) for size in tensor.size())\n device_str = '' if not tensor.is_cuda else \\\n ' (GPU {})'.format(tensor.get_device())\n strt = '[{} of size {}{}]'.format(typename(tensor),\n size_str, device_str)\n return strt\n", "path": "torchtext/data/batch.py"}]} | 1,481 | 103 |
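
A side note on the torchtext row above: the one-line fix matters because `Batch.__str__` iterates over `self.fields`, so a batch built with `fromvars` (which is what the BPTT iterator uses) could not even be printed before the patch. Below is a minimal, hypothetical sketch of the behaviour after the fix; the dataset and tensor names are invented for illustration.

```python
from torchtext.data.batch import Batch

# `dataset` is assumed to be any torchtext Dataset whose `fields` dict contains a "text" field.
batch = Batch.fromvars(dataset, batch_size=32, train=True, text=text_tensor)

# Before the patch this raised AttributeError ('Batch' object has no attribute 'fields'),
# because __str__ loops over self.fields. With fields copied from the dataset, printing
# now produces a summary along the lines of "[torchtext.data.batch.Batch of size 32]".
print(batch)
```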
gh_patches_debug_36636 | rasdani/github-patches | git_diff | falconry__falcon-541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Has compile_uri_template been removed?
I can't see it in the code any more.
</issue>
<code>
[start of falcon/routing/util.py]
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from falcon import HTTP_METHODS, responders
16 from falcon.hooks import _wrap_with_hooks
17
18
19 def create_http_method_map(resource, before, after):
20 """Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.
21
22 Args:
23 resource: An object with *responder* methods, following the naming
24 convention *on_\**, that correspond to each method the resource
25 supports. For example, if a resource supports GET and POST, it
26 should define ``on_get(self, req, resp)`` and
27 ``on_post(self, req, resp)``.
28 before: An action hook or ``list`` of hooks to be called before each
29 *on_\** responder defined by the resource.
30 after: An action hook or ``list`` of hooks to be called after each
31 *on_\** responder defined by the resource.
32
33 Returns:
34 dict: A mapping of HTTP methods to responders.
35
36 """
37
38 method_map = {}
39
40 for method in HTTP_METHODS:
41 try:
42 responder = getattr(resource, 'on_' + method.lower())
43 except AttributeError:
44 # resource does not implement this method
45 pass
46 else:
47 # Usually expect a method, but any callable will do
48 if callable(responder):
49 responder = _wrap_with_hooks(
50 before, after, responder, resource)
51 method_map[method] = responder
52
53 # Attach a resource for unsupported HTTP methods
54 allowed_methods = sorted(list(method_map.keys()))
55
56 # NOTE(sebasmagri): We want the OPTIONS and 405 (Not Allowed) methods
57 # responders to be wrapped on global hooks
58 if 'OPTIONS' not in method_map:
59 # OPTIONS itself is intentionally excluded from the Allow header
60 responder = responders.create_default_options(
61 allowed_methods)
62 method_map['OPTIONS'] = _wrap_with_hooks(
63 before, after, responder, resource)
64 allowed_methods.append('OPTIONS')
65
66 na_responder = responders.create_method_not_allowed(allowed_methods)
67
68 for method in HTTP_METHODS:
69 if method not in allowed_methods:
70 method_map[method] = _wrap_with_hooks(
71 before, after, na_responder, resource)
72
73 return method_map
74
[end of falcon/routing/util.py]
[start of falcon/routing/__init__.py]
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from falcon.routing.compiled import CompiledRouter
16 from falcon.routing.util import create_http_method_map # NOQA
17
18
19 DefaultRouter = CompiledRouter
20
[end of falcon/routing/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/falcon/routing/__init__.py b/falcon/routing/__init__.py
--- a/falcon/routing/__init__.py
+++ b/falcon/routing/__init__.py
@@ -14,6 +14,7 @@
from falcon.routing.compiled import CompiledRouter
from falcon.routing.util import create_http_method_map # NOQA
+from falcon.routing.util import compile_uri_template # NOQA
DefaultRouter = CompiledRouter
diff --git a/falcon/routing/util.py b/falcon/routing/util.py
--- a/falcon/routing/util.py
+++ b/falcon/routing/util.py
@@ -12,10 +12,72 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import re
+
+import six
+
from falcon import HTTP_METHODS, responders
from falcon.hooks import _wrap_with_hooks
+# NOTE(kgriffs): Published method; take care to avoid breaking changes.
+def compile_uri_template(template):
+ """Compile the given URI template string into a pattern matcher.
+
+ This function can be used to construct custom routing engines that
+ iterate through a list of possible routes, attempting to match
+ an incoming request against each route's compiled regular expression.
+
+ Each field is converted to a named group, so that when a match
+ is found, the fields can be easily extracted using
+ :py:meth:`re.MatchObject.groupdict`.
+
+ This function does not support the more flexible templating
+ syntax used in the default router. Only simple paths with bracketed
+ field expressions are recognized. For example::
+
+ /
+ /books
+ /books/{isbn}
+ /books/{isbn}/characters
+ /books/{isbn}/characters/{name}
+
+ Also, note that if the template contains a trailing slash character,
+ it will be stripped in order to normalize the routing logic.
+
+ Args:
+ template(str): The template to compile. Note that field names are
+ restricted to ASCII a-z, A-Z, and the underscore character.
+
+ Returns:
+ tuple: (template_field_names, template_regex)
+ """
+
+ if not isinstance(template, six.string_types):
+ raise TypeError('uri_template is not a string')
+
+ if not template.startswith('/'):
+ raise ValueError("uri_template must start with '/'")
+
+ if '//' in template:
+ raise ValueError("uri_template may not contain '//'")
+
+ if template != '/' and template.endswith('/'):
+ template = template[:-1]
+
+ expression_pattern = r'{([a-zA-Z][a-zA-Z_]*)}'
+
+ # Get a list of field names
+ fields = set(re.findall(expression_pattern, template))
+
+ # Convert Level 1 var patterns to equivalent named regex groups
+ escaped = re.sub(r'[\.\(\)\[\]\?\*\+\^\|]', r'\\\g<0>', template)
+ pattern = re.sub(expression_pattern, r'(?P<\1>[^/]+)', escaped)
+ pattern = r'\A' + pattern + r'\Z'
+
+ return fields, re.compile(pattern, re.IGNORECASE)
+
+
def create_http_method_map(resource, before, after):
"""Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.
| {"golden_diff": "diff --git a/falcon/routing/__init__.py b/falcon/routing/__init__.py\n--- a/falcon/routing/__init__.py\n+++ b/falcon/routing/__init__.py\n@@ -14,6 +14,7 @@\n \n from falcon.routing.compiled import CompiledRouter\n from falcon.routing.util import create_http_method_map # NOQA\n+from falcon.routing.util import compile_uri_template # NOQA\n \n \n DefaultRouter = CompiledRouter\ndiff --git a/falcon/routing/util.py b/falcon/routing/util.py\n--- a/falcon/routing/util.py\n+++ b/falcon/routing/util.py\n@@ -12,10 +12,72 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import re\n+\n+import six\n+\n from falcon import HTTP_METHODS, responders\n from falcon.hooks import _wrap_with_hooks\n \n \n+# NOTE(kgriffs): Published method; take care to avoid breaking changes.\n+def compile_uri_template(template):\n+ \"\"\"Compile the given URI template string into a pattern matcher.\n+\n+ This function can be used to construct custom routing engines that\n+ iterate through a list of possible routes, attempting to match\n+ an incoming request against each route's compiled regular expression.\n+\n+ Each field is converted to a named group, so that when a match\n+ is found, the fields can be easily extracted using\n+ :py:meth:`re.MatchObject.groupdict`.\n+\n+ This function does not support the more flexible templating\n+ syntax used in the default router. Only simple paths with bracketed\n+ field expressions are recognized. For example::\n+\n+ /\n+ /books\n+ /books/{isbn}\n+ /books/{isbn}/characters\n+ /books/{isbn}/characters/{name}\n+\n+ Also, note that if the template contains a trailing slash character,\n+ it will be stripped in order to normalize the routing logic.\n+\n+ Args:\n+ template(str): The template to compile. 
Note that field names are\n+ restricted to ASCII a-z, A-Z, and the underscore character.\n+\n+ Returns:\n+ tuple: (template_field_names, template_regex)\n+ \"\"\"\n+\n+ if not isinstance(template, six.string_types):\n+ raise TypeError('uri_template is not a string')\n+\n+ if not template.startswith('/'):\n+ raise ValueError(\"uri_template must start with '/'\")\n+\n+ if '//' in template:\n+ raise ValueError(\"uri_template may not contain '//'\")\n+\n+ if template != '/' and template.endswith('/'):\n+ template = template[:-1]\n+\n+ expression_pattern = r'{([a-zA-Z][a-zA-Z_]*)}'\n+\n+ # Get a list of field names\n+ fields = set(re.findall(expression_pattern, template))\n+\n+ # Convert Level 1 var patterns to equivalent named regex groups\n+ escaped = re.sub(r'[\\.\\(\\)\\[\\]\\?\\*\\+\\^\\|]', r'\\\\\\g<0>', template)\n+ pattern = re.sub(expression_pattern, r'(?P<\\1>[^/]+)', escaped)\n+ pattern = r'\\A' + pattern + r'\\Z'\n+\n+ return fields, re.compile(pattern, re.IGNORECASE)\n+\n+\n def create_http_method_map(resource, before, after):\n \"\"\"Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.\n", "issue": "Has compile_uri_template been removed?\nI can't see it in the code any more.\n\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon import HTTP_METHODS, responders\nfrom falcon.hooks import _wrap_with_hooks\n\n\ndef create_http_method_map(resource, before, after):\n \"\"\"Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.\n\n Args:\n resource: An object with *responder* methods, following the naming\n convention *on_\\**, that correspond to each method the resource\n supports. 
For example, if a resource supports GET and POST, it\n should define ``on_get(self, req, resp)`` and\n ``on_post(self, req, resp)``.\n before: An action hook or ``list`` of hooks to be called before each\n *on_\\** responder defined by the resource.\n after: An action hook or ``list`` of hooks to be called after each\n *on_\\** responder defined by the resource.\n\n Returns:\n dict: A mapping of HTTP methods to responders.\n\n \"\"\"\n\n method_map = {}\n\n for method in HTTP_METHODS:\n try:\n responder = getattr(resource, 'on_' + method.lower())\n except AttributeError:\n # resource does not implement this method\n pass\n else:\n # Usually expect a method, but any callable will do\n if callable(responder):\n responder = _wrap_with_hooks(\n before, after, responder, resource)\n method_map[method] = responder\n\n # Attach a resource for unsupported HTTP methods\n allowed_methods = sorted(list(method_map.keys()))\n\n # NOTE(sebasmagri): We want the OPTIONS and 405 (Not Allowed) methods\n # responders to be wrapped on global hooks\n if 'OPTIONS' not in method_map:\n # OPTIONS itself is intentionally excluded from the Allow header\n responder = responders.create_default_options(\n allowed_methods)\n method_map['OPTIONS'] = _wrap_with_hooks(\n before, after, responder, resource)\n allowed_methods.append('OPTIONS')\n\n na_responder = responders.create_method_not_allowed(allowed_methods)\n\n for method in HTTP_METHODS:\n if method not in allowed_methods:\n method_map[method] = _wrap_with_hooks(\n before, after, na_responder, resource)\n\n return method_map\n", "path": "falcon/routing/util.py"}, {"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon.routing.compiled import CompiledRouter\nfrom falcon.routing.util import create_http_method_map # NOQA\n\n\nDefaultRouter = CompiledRouter\n", "path": "falcon/routing/__init__.py"}]} | 1,535 | 753 |
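
A quick usage note on the falcon row above, following the docstring of the restored helper (a sketch, not taken from falcon's own tests):

```python
from falcon.routing import compile_uri_template

# Returns the set of field names plus a compiled regex with one named group per field.
fields, pattern = compile_uri_template('/books/{isbn}/characters/{name}')
assert fields == {'isbn', 'name'}

match = pattern.match('/books/0765350386/characters/Paul')
if match:
    params = match.groupdict()  # {'isbn': '0765350386', 'name': 'Paul'}
```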
gh_patches_debug_9545 | rasdani/github-patches | git_diff | fossasia__open-event-server-4310 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add email to valid types in custom-form
**Current**
Currently we are not able to set an email type to the custom-form which leads to `Error: 422`.
**Expected**
email should be a valid type for the custom-form
</issue>
<code>
[start of app/api/custom_forms.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from marshmallow_jsonapi import fields
4 import marshmallow.validate as validate
5 from app.api.helpers.permissions import jwt_required
6 from flask_rest_jsonapi.exceptions import ObjectNotFound
7
8 from app.api.bootstrap import api
9 from app.api.helpers.utilities import dasherize
10 from app.models import db
11 from app.models.custom_form import CustomForms
12 from app.models.event import Event
13 from app.api.helpers.db import safe_query
14 from app.api.helpers.utilities import require_relationship
15 from app.api.helpers.permission_manager import has_access
16 from app.api.helpers.query import event_query
17
18
19 class CustomFormSchema(Schema):
20 """
21 API Schema for Custom Forms database model
22 """
23 class Meta:
24 """
25 Meta class for CustomForm Schema
26 """
27 type_ = 'custom-form'
28 self_view = 'v1.custom_form_detail'
29 self_view_kwargs = {'id': '<id>'}
30 inflect = dasherize
31
32 id = fields.Integer(dump_only=True)
33 field_identifier = fields.Str(required=True)
34 form = fields.Str(required=True)
35 type = fields.Str(default="text", validate=validate.OneOf(
36 choices=["text", "checkbox", "select", "file", "image"]))
37 is_required = fields.Boolean(default=False)
38 is_included = fields.Boolean(default=False)
39 is_fixed = fields.Boolean(default=False)
40 event = Relationship(attribute='event',
41 self_view='v1.custom_form_event',
42 self_view_kwargs={'id': '<id>'},
43 related_view='v1.event_detail',
44 related_view_kwargs={'custom_form_id': '<id>'},
45 schema='EventSchema',
46 type_='event')
47
48
49 class CustomFormListPost(ResourceList):
50 """
51 Create and List Custom Forms
52 """
53
54 def before_post(self, args, kwargs, data):
55 """
56 method to check for required relationship with event
57 :param args:
58 :param kwargs:
59 :param data:
60 :return:
61 """
62 require_relationship(['event'], data)
63 if not has_access('is_coorganizer', event_id=data['event']):
64 raise ObjectNotFound({'parameter': 'event_id'},
65 "Event: {} not found".format(data['event_id']))
66
67 schema = CustomFormSchema
68 methods = ['POST', ]
69 data_layer = {'session': db.session,
70 'model': CustomForms
71 }
72
73
74 class CustomFormList(ResourceList):
75 """
76 Create and List Custom Forms
77 """
78 def query(self, view_kwargs):
79 """
80 query method for different view_kwargs
81 :param view_kwargs:
82 :return:
83 """
84 query_ = self.session.query(CustomForms)
85 query_ = event_query(self, query_, view_kwargs)
86 return query_
87
88 view_kwargs = True
89 decorators = (jwt_required, )
90 methods = ['GET', ]
91 schema = CustomFormSchema
92 data_layer = {'session': db.session,
93 'model': CustomForms,
94 'methods': {
95 'query': query
96 }}
97
98
99 class CustomFormDetail(ResourceDetail):
100 """
101 CustomForm Resource
102 """
103
104 def before_get_object(self, view_kwargs):
105 """
106 before get method
107 :param view_kwargs:
108 :return:
109 """
110 event = None
111 if view_kwargs.get('event_id'):
112 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
113 elif view_kwargs.get('event_identifier'):
114 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
115
116 if event:
117 custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')
118 view_kwargs['id'] = custom_form.id
119
120 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
121 fetch_as="event_id", model=CustomForms, methods="PATCH,DELETE"), )
122 schema = CustomFormSchema
123 data_layer = {'session': db.session,
124 'model': CustomForms}
125
126
127 class CustomFormRelationshipRequired(ResourceRelationship):
128 """
129 CustomForm Relationship (Required)
130 """
131 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
132 fetch_as="event_id", model=CustomForms, methods="PATCH"),)
133 methods = ['GET', 'PATCH']
134 schema = CustomFormSchema
135 data_layer = {'session': db.session,
136 'model': CustomForms}
137
[end of app/api/custom_forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/custom_forms.py b/app/api/custom_forms.py
--- a/app/api/custom_forms.py
+++ b/app/api/custom_forms.py
@@ -33,7 +33,7 @@
field_identifier = fields.Str(required=True)
form = fields.Str(required=True)
type = fields.Str(default="text", validate=validate.OneOf(
- choices=["text", "checkbox", "select", "file", "image"]))
+ choices=["text", "checkbox", "select", "file", "image", "email"]))
is_required = fields.Boolean(default=False)
is_included = fields.Boolean(default=False)
is_fixed = fields.Boolean(default=False)
| {"golden_diff": "diff --git a/app/api/custom_forms.py b/app/api/custom_forms.py\n--- a/app/api/custom_forms.py\n+++ b/app/api/custom_forms.py\n@@ -33,7 +33,7 @@\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n- choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\"]))\n+ choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\", \"email\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n", "issue": "Add email to valid types in custom-form\n**Current**\r\nCurrently we are not able to set an email type to the custom-form which leads to `Error: 422`.\r\n\r\n**Expected**\r\nemail should be a valid type for the custom-form\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nimport marshmallow.validate as validate\nfrom app.api.helpers.permissions import jwt_required\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import dasherize\nfrom app.models import db\nfrom app.models.custom_form import CustomForms\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\n\n\nclass CustomFormSchema(Schema):\n \"\"\"\n API Schema for Custom Forms database model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for CustomForm Schema\n \"\"\"\n type_ = 'custom-form'\n self_view = 'v1.custom_form_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Integer(dump_only=True)\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n event = Relationship(attribute='event',\n self_view='v1.custom_form_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'custom_form_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass CustomFormListPost(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to check for required relationship with event\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n schema = CustomFormSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': CustomForms\n }\n\n\nclass CustomFormList(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(CustomForms)\n query_ = event_query(self, query_, view_kwargs)\n return query_\n\n view_kwargs = True\n decorators = (jwt_required, )\n methods = ['GET', ]\n schema = CustomFormSchema\n 
data_layer = {'session': db.session,\n 'model': CustomForms,\n 'methods': {\n 'query': query\n }}\n\n\nclass CustomFormDetail(ResourceDetail):\n \"\"\"\n CustomForm Resource\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method\n :param view_kwargs:\n :return:\n \"\"\"\n event = None\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n\n if event:\n custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')\n view_kwargs['id'] = custom_form.id\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH,DELETE\"), )\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n\n\nclass CustomFormRelationshipRequired(ResourceRelationship):\n \"\"\"\n CustomForm Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n", "path": "app/api/custom_forms.py"}]} | 1,848 | 144 |
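
To make the open-event row above concrete: after the one-word schema change, a JSON:API request that declares an email field should pass validation instead of returning 422. A hypothetical request payload, written as a Python dict, with the endpoint and all values invented for illustration:

```python
# Hypothetical payload for a POST to the custom-forms endpoint
# (attribute names follow the dasherized schema shown above).
payload = {
    "data": {
        "type": "custom-form",
        "attributes": {
            "field-identifier": "email",
            "form": "attendee",        # invented form name
            "type": "email",           # newly allowed by the patch
            "is-required": True,
        },
        "relationships": {
            "event": {"data": {"id": "1", "type": "event"}}  # event relationship is required
        },
    }
}
```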
gh_patches_debug_32728 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-2596 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Easier customization of loadpaths
I have a case where I would like to import a large Shape file, but I would like to filter on certain attributes of each feature. To avoid having to rewrite my own complete `loadpaths` command, it would be convenient to move the object filtering into a method of the command. The proposed patch follows...
</issue>
<code>
[start of geotrek/core/management/commands/loadpaths.py]
1 from django.contrib.gis.gdal import DataSource, GDALException
2 from geotrek.core.models import Path
3 from geotrek.authent.models import Structure
4 from django.contrib.gis.geos.collections import Polygon, LineString
5 from django.core.management.base import BaseCommand, CommandError
6 from django.conf import settings
7 from django.db.utils import IntegrityError, InternalError
8 from django.db import transaction
9
10
11 class Command(BaseCommand):
12 help = 'Load Paths from a file within the spatial extent\n'
13
14 def add_arguments(self, parser):
15 parser.add_argument('file_path', help="File's path of the paths")
16 parser.add_argument('--structure', action='store', dest='structure', help="Define the structure")
17 parser.add_argument('--name-attribute', '-n', action='store', dest='name', default='nom',
18 help="Name of the name's attribute inside the file")
19 parser.add_argument('--comments-attribute', '-c', nargs='*', action='store', dest='comment',
20 help="")
21 parser.add_argument('--encoding', '-e', action='store', dest='encoding', default='utf-8',
22 help='File encoding, default utf-8')
23 parser.add_argument('--srid', '-s', action='store', dest='srid', default=4326, type=int,
24 help="File's SRID")
25 parser.add_argument('--intersect', '-i', action='store_true', dest='intersect', default=False,
26 help="Check paths intersect spatial extent and not only within")
27 parser.add_argument('--fail', '-f', action='store_true', dest='fail', default=False,
28 help="Allows to grant fails")
29 parser.add_argument('--dry', '-d', action='store_true', dest='dry', default=False,
30 help="Do not change the database, dry run. Show the number of fail"
31 " and objects potentially created")
32
33 def handle(self, *args, **options):
34 verbosity = options.get('verbosity')
35 encoding = options.get('encoding')
36 file_path = options.get('file_path')
37 structure = options.get('structure')
38 name_column = options.get('name')
39 srid = options.get('srid')
40 do_intersect = options.get('intersect')
41 comments_columns = options.get('comment')
42 fail = options.get('fail')
43 dry = options.get('dry')
44
45 if dry:
46 fail = True
47
48 counter = 0
49 counter_fail = 0
50
51 if structure:
52 try:
53 structure = Structure.objects.get(name=structure)
54 except Structure.DoesNotExist:
55 raise CommandError("Structure does not match with instance's structures\n"
56 "Change your option --structure")
57 elif Structure.objects.count() == 1:
58 structure = Structure.objects.first()
59 else:
60 raise CommandError("There are more than 1 structure and you didn't define the option structure\n"
61 "Use --structure to define it")
62 if verbosity > 0:
63 self.stdout.write("All paths in DataSource will be linked to the structure : %s" % structure)
64
65 ds = DataSource(file_path, encoding=encoding)
66
67 bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)
68 bbox.srid = settings.SRID
69
70 sid = transaction.savepoint()
71
72 for layer in ds:
73 for feat in layer:
74 name = feat.get(name_column) if name_column in layer.fields else ''
75 comment_final_tab = []
76 if comments_columns:
77 for comment_column in comments_columns:
78 if comment_column in layer.fields:
79 comment_final_tab.append(feat.get(comment_column))
80 geom = feat.geom.geos
81 if not isinstance(geom, LineString):
82 if verbosity > 0:
83 self.stdout.write("%s's geometry is not a Linestring" % feat)
84 break
85 self.check_srid(srid, geom)
86 geom.dim = 2
87 if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):
88 try:
89 with transaction.atomic():
90 comment_final = '</br>'.join(comment_final_tab)
91 path = Path.objects.create(name=name,
92 structure=structure,
93 geom=geom,
94 comments=comment_final)
95 counter += 1
96 if verbosity > 0:
97 self.stdout.write('Create path with pk : {}'.format(path.pk))
98 if verbosity > 1:
99 self.stdout.write("The comment %s was added on %s" % (comment_final, name))
100 except (IntegrityError, InternalError):
101 if fail:
102 counter_fail += 1
103 self.stdout.write('Integrity Error on path : {}, {}'.format(name, geom))
104 else:
105 raise
106 if not dry:
107 transaction.savepoint_commit(sid)
108 if verbosity >= 2:
109 self.stdout.write(self.style.NOTICE(
110 "{0} objects created, {1} objects failed".format(counter, counter_fail)))
111 else:
112 transaction.savepoint_rollback(sid)
113 self.stdout.write(self.style.NOTICE(
114 "{0} objects will be create, {1} objects failed;".format(counter, counter_fail)))
115
116 def check_srid(self, srid, geom):
117 if not geom.srid:
118 geom.srid = srid
119 if geom.srid != settings.SRID:
120 try:
121 geom.transform(settings.SRID)
122 except GDALException:
123 raise CommandError("SRID is not well configurate, change/add option srid")
124
[end of geotrek/core/management/commands/loadpaths.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/geotrek/core/management/commands/loadpaths.py b/geotrek/core/management/commands/loadpaths.py
--- a/geotrek/core/management/commands/loadpaths.py
+++ b/geotrek/core/management/commands/loadpaths.py
@@ -37,7 +37,7 @@
structure = options.get('structure')
name_column = options.get('name')
srid = options.get('srid')
- do_intersect = options.get('intersect')
+ self.do_intersect = options.get('intersect')
comments_columns = options.get('comment')
fail = options.get('fail')
dry = options.get('dry')
@@ -64,8 +64,8 @@
ds = DataSource(file_path, encoding=encoding)
- bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)
- bbox.srid = settings.SRID
+ self.bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)
+ self.bbox.srid = settings.SRID
sid = transaction.savepoint()
@@ -84,7 +84,7 @@
break
self.check_srid(srid, geom)
geom.dim = 2
- if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):
+ if self.should_import(feat, geom):
try:
with transaction.atomic():
comment_final = '</br>'.join(comment_final_tab)
@@ -121,3 +121,9 @@
geom.transform(settings.SRID)
except GDALException:
raise CommandError("SRID is not well configurate, change/add option srid")
+
+ def should_import(self, feature, geom):
+ return (
+ self.do_intersect and self.bbox.intersects(geom)
+ or not self.do_intersect and geom.within(self.bbox)
+ )
| {"golden_diff": "diff --git a/geotrek/core/management/commands/loadpaths.py b/geotrek/core/management/commands/loadpaths.py\n--- a/geotrek/core/management/commands/loadpaths.py\n+++ b/geotrek/core/management/commands/loadpaths.py\n@@ -37,7 +37,7 @@\n structure = options.get('structure')\n name_column = options.get('name')\n srid = options.get('srid')\n- do_intersect = options.get('intersect')\n+ self.do_intersect = options.get('intersect')\n comments_columns = options.get('comment')\n fail = options.get('fail')\n dry = options.get('dry')\n@@ -64,8 +64,8 @@\n \n ds = DataSource(file_path, encoding=encoding)\n \n- bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)\n- bbox.srid = settings.SRID\n+ self.bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)\n+ self.bbox.srid = settings.SRID\n \n sid = transaction.savepoint()\n \n@@ -84,7 +84,7 @@\n break\n self.check_srid(srid, geom)\n geom.dim = 2\n- if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):\n+ if self.should_import(feat, geom):\n try:\n with transaction.atomic():\n comment_final = '</br>'.join(comment_final_tab)\n@@ -121,3 +121,9 @@\n geom.transform(settings.SRID)\n except GDALException:\n raise CommandError(\"SRID is not well configurate, change/add option srid\")\n+\n+ def should_import(self, feature, geom):\n+ return (\n+ self.do_intersect and self.bbox.intersects(geom)\n+ or not self.do_intersect and geom.within(self.bbox)\n+ )\n", "issue": "Personnalisation plus facile de loadpaths\nJ'ai un cas o\u00f9 j'aimerais importer un gros fichier Shape, mais o\u00f9 j'aimerais filtrer selon certains attributs de chaque \u00e9l\u00e9ment. Pour \u00e9viter de devoir r\u00e9\u00e9crire ma propre command `loadpaths` compl\u00e8te, il serait pratique de d\u00e9porter le filtrage des objets dans une m\u00e9thode de la commande. Le patch propos\u00e9 arrive...\n", "before_files": [{"content": "from django.contrib.gis.gdal import DataSource, GDALException\nfrom geotrek.core.models import Path\nfrom geotrek.authent.models import Structure\nfrom django.contrib.gis.geos.collections import Polygon, LineString\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.conf import settings\nfrom django.db.utils import IntegrityError, InternalError\nfrom django.db import transaction\n\n\nclass Command(BaseCommand):\n help = 'Load Paths from a file within the spatial extent\\n'\n\n def add_arguments(self, parser):\n parser.add_argument('file_path', help=\"File's path of the paths\")\n parser.add_argument('--structure', action='store', dest='structure', help=\"Define the structure\")\n parser.add_argument('--name-attribute', '-n', action='store', dest='name', default='nom',\n help=\"Name of the name's attribute inside the file\")\n parser.add_argument('--comments-attribute', '-c', nargs='*', action='store', dest='comment',\n help=\"\")\n parser.add_argument('--encoding', '-e', action='store', dest='encoding', default='utf-8',\n help='File encoding, default utf-8')\n parser.add_argument('--srid', '-s', action='store', dest='srid', default=4326, type=int,\n help=\"File's SRID\")\n parser.add_argument('--intersect', '-i', action='store_true', dest='intersect', default=False,\n help=\"Check paths intersect spatial extent and not only within\")\n parser.add_argument('--fail', '-f', action='store_true', dest='fail', default=False,\n help=\"Allows to grant fails\")\n parser.add_argument('--dry', '-d', action='store_true', dest='dry', default=False,\n help=\"Do not change the database, dry run. 
Show the number of fail\"\n \" and objects potentially created\")\n\n def handle(self, *args, **options):\n verbosity = options.get('verbosity')\n encoding = options.get('encoding')\n file_path = options.get('file_path')\n structure = options.get('structure')\n name_column = options.get('name')\n srid = options.get('srid')\n do_intersect = options.get('intersect')\n comments_columns = options.get('comment')\n fail = options.get('fail')\n dry = options.get('dry')\n\n if dry:\n fail = True\n\n counter = 0\n counter_fail = 0\n\n if structure:\n try:\n structure = Structure.objects.get(name=structure)\n except Structure.DoesNotExist:\n raise CommandError(\"Structure does not match with instance's structures\\n\"\n \"Change your option --structure\")\n elif Structure.objects.count() == 1:\n structure = Structure.objects.first()\n else:\n raise CommandError(\"There are more than 1 structure and you didn't define the option structure\\n\"\n \"Use --structure to define it\")\n if verbosity > 0:\n self.stdout.write(\"All paths in DataSource will be linked to the structure : %s\" % structure)\n\n ds = DataSource(file_path, encoding=encoding)\n\n bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)\n bbox.srid = settings.SRID\n\n sid = transaction.savepoint()\n\n for layer in ds:\n for feat in layer:\n name = feat.get(name_column) if name_column in layer.fields else ''\n comment_final_tab = []\n if comments_columns:\n for comment_column in comments_columns:\n if comment_column in layer.fields:\n comment_final_tab.append(feat.get(comment_column))\n geom = feat.geom.geos\n if not isinstance(geom, LineString):\n if verbosity > 0:\n self.stdout.write(\"%s's geometry is not a Linestring\" % feat)\n break\n self.check_srid(srid, geom)\n geom.dim = 2\n if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):\n try:\n with transaction.atomic():\n comment_final = '</br>'.join(comment_final_tab)\n path = Path.objects.create(name=name,\n structure=structure,\n geom=geom,\n comments=comment_final)\n counter += 1\n if verbosity > 0:\n self.stdout.write('Create path with pk : {}'.format(path.pk))\n if verbosity > 1:\n self.stdout.write(\"The comment %s was added on %s\" % (comment_final, name))\n except (IntegrityError, InternalError):\n if fail:\n counter_fail += 1\n self.stdout.write('Integrity Error on path : {}, {}'.format(name, geom))\n else:\n raise\n if not dry:\n transaction.savepoint_commit(sid)\n if verbosity >= 2:\n self.stdout.write(self.style.NOTICE(\n \"{0} objects created, {1} objects failed\".format(counter, counter_fail)))\n else:\n transaction.savepoint_rollback(sid)\n self.stdout.write(self.style.NOTICE(\n \"{0} objects will be create, {1} objects failed;\".format(counter, counter_fail)))\n\n def check_srid(self, srid, geom):\n if not geom.srid:\n geom.srid = srid\n if geom.srid != settings.SRID:\n try:\n geom.transform(settings.SRID)\n except GDALException:\n raise CommandError(\"SRID is not well configurate, change/add option srid\")\n", "path": "geotrek/core/management/commands/loadpaths.py"}]} | 2,045 | 406 |
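
The Geotrek row above is the clearest case where the point of the diff is visible from the caller's side: `should_import()` becomes an overridable hook. A hypothetical subclass (module path and attribute filter are invented) that the issue author could write instead of copying the whole command:

```python
# e.g. myapp/management/commands/load_filtered_paths.py (hypothetical)
from geotrek.core.management.commands.loadpaths import Command as LoadPathsCommand


class Command(LoadPathsCommand):
    def should_import(self, feature, geom):
        # Keep the stock spatial-extent check from the parent command...
        if not super().should_import(feature, geom):
            return False
        # ...and additionally keep only features whose (invented) "CLASSEMENT"
        # attribute matches the value we care about.
        return feature.get('CLASSEMENT') == 'GR'
```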
gh_patches_debug_33518 | rasdani/github-patches | git_diff | qtile__qtile-1696 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python compatibility issue with the timezone parameter of widget.Clock
# Issue description
The following widget configuration doesn't work for python 3.8.2:
```
widget.Clock( format="%H:%M:%S", timezone="Asia/Taipei")
```
I made a workaround for this:
```
from dateutil.tz import *
widget.Clock( format="%H:%M:%S", timezone=gettz("Asia/Taipei"))
```
This error is related to the code snippets in `libqtile/widget/clock.py`:
```
def poll(self):
if self.timezone:
now = datetime.now(timezone.utc).astimezone(self.timezone)
else:
now = datetime.now(timezone.utc).astimezone()
return (now + self.DELTA).strftime(self.format)
```
It seems Python 3.6+ has a compatibility issue with the timezone parameter, and native Python doesn't support timezone locales like "Asia/Tokyo" or "Europe/Warsaw". Currently I include `dateutil` to bypass the error.
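For reference, this is roughly the kind of fallback I have in mind. It is purely a sketch: the helper name `resolve_timezone` is made up and this is not qtile's actual code.
```
# Sketch only: resolve a timezone string with pytz when available,
# otherwise fall back to dateutil (mirrors the widget's sys.modules checks).
import sys

try:
    import pytz
except ImportError:
    pass

try:
    import dateutil.tz
except ImportError:
    pass


def resolve_timezone(name):
    """Return a tzinfo for ``name``, or None if neither helper is installed."""
    if "pytz" in sys.modules:
        return pytz.timezone(name)
    if "dateutil" in sys.modules:
        return dateutil.tz.gettz(name)
    return None
```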
# Qtile version
qtile 0.15.1-1 (ArchLinux)
</issue>
<code>
[start of libqtile/widget/clock.py]
1 # Copyright (c) 2010 Aldo Cortesi
2 # Copyright (c) 2012 Andrew Grigorev
3 # Copyright (c) 2014 Sean Vig
4 # Copyright (c) 2014 Tycho Andersen
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
22 # SOFTWARE.
23
24 import sys
25 import time
26 from datetime import datetime, timedelta, timezone
27
28 from libqtile.log_utils import logger
29 from libqtile.widget import base
30
31 try:
32 import pytz
33 except ImportError:
34 pass
35
36
37 class Clock(base.InLoopPollText):
38 """A simple but flexible text-based clock"""
39 orientations = base.ORIENTATION_HORIZONTAL
40 defaults = [
41 ('format', '%H:%M', 'A Python datetime format string'),
42 ('update_interval', 1., 'Update interval for the clock'),
43 ('timezone', None, 'The timezone to use for this clock, either as'
44 ' string if pytz is installed (e.g. "US/Central" or anything in'
45 ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'
46 ' None means the system local timezone and is the default.')
47 ]
48 DELTA = timedelta(seconds=0.5)
49
50 def __init__(self, **config):
51 base.InLoopPollText.__init__(self, **config)
52 self.add_defaults(Clock.defaults)
53 if isinstance(self.timezone, str):
54 if "pytz" in sys.modules:
55 self.timezone = pytz.timezone(self.timezone)
56 else:
57 logger.warning('Clock widget can not infer its timezone from a'
58 ' string without the pytz library. Install pytz'
59 ' or give it a datetime.tzinfo instance.')
60 if self.timezone is None:
61 logger.info('Defaulting to the system local timezone.')
62
63 def tick(self):
64 self.update(self.poll())
65 return self.update_interval - time.time() % self.update_interval
66
67 # adding .5 to get a proper seconds value because glib could
68 # theoreticaly call our method too early and we could get something
69 # like (x-1).999 instead of x.000
70 def poll(self):
71 if self.timezone:
72 now = datetime.now(timezone.utc).astimezone(self.timezone)
73 else:
74 now = datetime.now(timezone.utc).astimezone()
75 return (now + self.DELTA).strftime(self.format)
76
[end of libqtile/widget/clock.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libqtile/widget/clock.py b/libqtile/widget/clock.py
--- a/libqtile/widget/clock.py
+++ b/libqtile/widget/clock.py
@@ -33,6 +33,11 @@
except ImportError:
pass
+try:
+ import dateutil.tz
+except ImportError:
+ pass
+
class Clock(base.InLoopPollText):
"""A simple but flexible text-based clock"""
@@ -41,9 +46,10 @@
('format', '%H:%M', 'A Python datetime format string'),
('update_interval', 1., 'Update interval for the clock'),
('timezone', None, 'The timezone to use for this clock, either as'
- ' string if pytz is installed (e.g. "US/Central" or anything in'
- ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'
- ' None means the system local timezone and is the default.')
+ ' string if pytz or dateutil is installed (e.g. "US/Central" or'
+ ' anything in /usr/share/zoneinfo), or as tzinfo (e.g.'
+ ' datetime.timezone.utc). None means the system local timezone and is'
+ ' the default.')
]
DELTA = timedelta(seconds=0.5)
@@ -53,10 +59,13 @@
if isinstance(self.timezone, str):
if "pytz" in sys.modules:
self.timezone = pytz.timezone(self.timezone)
+ elif "dateutil" in sys.modules:
+ self.timezone = dateutil.tz.gettz(self.timezone)
else:
logger.warning('Clock widget can not infer its timezone from a'
- ' string without the pytz library. Install pytz'
- ' or give it a datetime.tzinfo instance.')
+ ' string without pytz or dateutil. Install one'
+ ' of these libraries, or give it a'
+ ' datetime.tzinfo instance.')
if self.timezone is None:
logger.info('Defaulting to the system local timezone.')
| {"golden_diff": "diff --git a/libqtile/widget/clock.py b/libqtile/widget/clock.py\n--- a/libqtile/widget/clock.py\n+++ b/libqtile/widget/clock.py\n@@ -33,6 +33,11 @@\n except ImportError:\n pass\n \n+try:\n+ import dateutil.tz\n+except ImportError:\n+ pass\n+\n \n class Clock(base.InLoopPollText):\n \"\"\"A simple but flexible text-based clock\"\"\"\n@@ -41,9 +46,10 @@\n ('format', '%H:%M', 'A Python datetime format string'),\n ('update_interval', 1., 'Update interval for the clock'),\n ('timezone', None, 'The timezone to use for this clock, either as'\n- ' string if pytz is installed (e.g. \"US/Central\" or anything in'\n- ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'\n- ' None means the system local timezone and is the default.')\n+ ' string if pytz or dateutil is installed (e.g. \"US/Central\" or'\n+ ' anything in /usr/share/zoneinfo), or as tzinfo (e.g.'\n+ ' datetime.timezone.utc). None means the system local timezone and is'\n+ ' the default.')\n ]\n DELTA = timedelta(seconds=0.5)\n \n@@ -53,10 +59,13 @@\n if isinstance(self.timezone, str):\n if \"pytz\" in sys.modules:\n self.timezone = pytz.timezone(self.timezone)\n+ elif \"dateutil\" in sys.modules:\n+ self.timezone = dateutil.tz.gettz(self.timezone)\n else:\n logger.warning('Clock widget can not infer its timezone from a'\n- ' string without the pytz library. Install pytz'\n- ' or give it a datetime.tzinfo instance.')\n+ ' string without pytz or dateutil. Install one'\n+ ' of these libraries, or give it a'\n+ ' datetime.tzinfo instance.')\n if self.timezone is None:\n logger.info('Defaulting to the system local timezone.')\n", "issue": "python compatibility about timezone parameter for widget.Clock\n# Issue description\r\n\r\nThe following widget configuration doesn't work for python 3.8.2:\r\n```\r\nwidget.Clock( format=\"%H:%M:%S\", timezone=\"Asia/Taipei\")\r\n```\r\n\r\nI made a workaround for this:\r\n```\r\nfrom dateutil.tz import *\r\nwidget.Clock( format=\"%H:%M:%S\", timezone=gettz(\"Asia/Taipei\"))\r\n```\r\n\r\nThis error is related to the code snippets in `libqtile/widget/clock.py`:\r\n```\r\n def poll(self):\r\n if self.timezone:\r\n now = datetime.now(timezone.utc).astimezone(self.timezone)\r\n else:\r\n now = datetime.now(timezone.utc).astimezone()\r\n return (now + self.DELTA).strftime(self.format)\r\n```\r\n\r\nIt seems python 3.6+ has compatibility issue of timezone parameters, and native python doesn't support timezone locale like \"Asia/Tokyo\",\"Europe/Warsaw\", ... or so. 
Currently I include `dateutil` to bypass the syntax error\r\n\r\n\r\n# Qtile version\r\nqtile 0.15.1-1 (ArchLinux)\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2010 Aldo Cortesi\n# Copyright (c) 2012 Andrew Grigorev\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport sys\nimport time\nfrom datetime import datetime, timedelta, timezone\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\ntry:\n import pytz\nexcept ImportError:\n pass\n\n\nclass Clock(base.InLoopPollText):\n \"\"\"A simple but flexible text-based clock\"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n ('format', '%H:%M', 'A Python datetime format string'),\n ('update_interval', 1., 'Update interval for the clock'),\n ('timezone', None, 'The timezone to use for this clock, either as'\n ' string if pytz is installed (e.g. \"US/Central\" or anything in'\n ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'\n ' None means the system local timezone and is the default.')\n ]\n DELTA = timedelta(seconds=0.5)\n\n def __init__(self, **config):\n base.InLoopPollText.__init__(self, **config)\n self.add_defaults(Clock.defaults)\n if isinstance(self.timezone, str):\n if \"pytz\" in sys.modules:\n self.timezone = pytz.timezone(self.timezone)\n else:\n logger.warning('Clock widget can not infer its timezone from a'\n ' string without the pytz library. Install pytz'\n ' or give it a datetime.tzinfo instance.')\n if self.timezone is None:\n logger.info('Defaulting to the system local timezone.')\n\n def tick(self):\n self.update(self.poll())\n return self.update_interval - time.time() % self.update_interval\n\n # adding .5 to get a proper seconds value because glib could\n # theoreticaly call our method too early and we could get something\n # like (x-1).999 instead of x.000\n def poll(self):\n if self.timezone:\n now = datetime.now(timezone.utc).astimezone(self.timezone)\n else:\n now = datetime.now(timezone.utc).astimezone()\n return (now + self.DELTA).strftime(self.format)\n", "path": "libqtile/widget/clock.py"}]} | 1,655 | 464 |
gh_patches_debug_401 | rasdani/github-patches | git_diff | getmoto__moto-698 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to create a key with a trailing slash using OrdinaryCallingFormat
When using OrdinaryCallingFormat, it's not possible to create a key ending with a slash (e.g. when mimicking directory creation), since this is stripped off when parsing the key name. I can't comment on S3, but this is at least different behaviour from Ceph.
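The stripping itself is easy to reproduce in isolation. Here is a quick sketch against the `parse_key_name` logic shown further down, reusing the names from the test below:
```
# Sketch: why the trailing slash disappears for path-style (Ordinary) requests.
path = "/testbucket/key_ending_with_slash/"

# current parse_key_name behaviour: rstrip drops the trailing slash
print("/".join(path.rstrip("/").split("/")[2:]))  # key_ending_with_slash

# without the rstrip the trailing slash survives
print("/".join(path.split("/")[2:]))              # key_ending_with_slash/
```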
For example, the below fails as is, but works if the connection uses SubdomainCallingFormat instead.
```
import boto
import moto
import unittest
class TestCreatingKeyEndingWithSlash(unittest.TestCase):
@moto.mock_s3
def test_ordinary_calling_format(self):
bucket_name = 'testbucket'
key_name = 'key_ending_with_slash/'
conn = boto.connect_s3('access_key', 'secret_key',
calling_format=boto.s3.connection.OrdinaryCallingFormat())
bucket = conn.create_bucket(bucket_name)
key = boto.s3.key.Key(bucket)
key.key = key_name
key.set_contents_from_string('')
self.assertIn(key_name, [k.name for k in bucket.get_all_keys()])
```
</issue>
<code>
[start of moto/s3bucket_path/utils.py]
1 from __future__ import unicode_literals
2 from six.moves.urllib.parse import urlparse
3
4
5 def bucket_name_from_url(url):
6 pth = urlparse(url).path.lstrip("/")
7
8 l = pth.lstrip("/").split("/")
9 if len(l) == 0 or l[0] == "":
10 return None
11 return l[0]
12
13
14 def parse_key_name(path):
15 return "/".join(path.rstrip("/").split("/")[2:])
16
17
18 def is_delete_keys(request, path, bucket_name):
19 return (
20 path == u'/' + bucket_name + u'/?delete' or
21 path == u'/' + bucket_name + u'?delete' or
22 (path == u'/' + bucket_name and
23 getattr(request, "query_string", "") == "delete")
24 )
25
[end of moto/s3bucket_path/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/moto/s3bucket_path/utils.py b/moto/s3bucket_path/utils.py
--- a/moto/s3bucket_path/utils.py
+++ b/moto/s3bucket_path/utils.py
@@ -12,7 +12,7 @@
def parse_key_name(path):
- return "/".join(path.rstrip("/").split("/")[2:])
+ return "/".join(path.split("/")[2:])
def is_delete_keys(request, path, bucket_name):
| {"golden_diff": "diff --git a/moto/s3bucket_path/utils.py b/moto/s3bucket_path/utils.py\n--- a/moto/s3bucket_path/utils.py\n+++ b/moto/s3bucket_path/utils.py\n@@ -12,7 +12,7 @@\n \n \n def parse_key_name(path):\n- return \"/\".join(path.rstrip(\"/\").split(\"/\")[2:])\n+ return \"/\".join(path.split(\"/\")[2:])\n \n \n def is_delete_keys(request, path, bucket_name):\n", "issue": "Unable to create a key with a trailing slash using OrdinaryCallingFormat\nWhen using OrdinaryCallingFormat, it's not possible to create a key ending with a slash (e.g. when mimicking directory creation), since this is stripped off when parsing the key name. I can't comment on S3, but this is at least different behaviour from Ceph.\n\nFor example, the below fails as is, but works if the connection uses SubdomainCallingFormat instead.\n\n```\nimport boto\nimport moto\nimport unittest\n\n\nclass TestCreatingKeyEndingWithSlash(unittest.TestCase):\n\n @moto.mock_s3\n def test_ordinary_calling_format(self):\n bucket_name = 'testbucket'\n key_name = 'key_ending_with_slash/'\n\n conn = boto.connect_s3('access_key', 'secret_key',\n calling_format=boto.s3.connection.OrdinaryCallingFormat())\n bucket = conn.create_bucket(bucket_name)\n\n key = boto.s3.key.Key(bucket)\n key.key = key_name\n key.set_contents_from_string('')\n\n self.assertIn(key_name, [k.name for k in bucket.get_all_keys()])\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom six.moves.urllib.parse import urlparse\n\n\ndef bucket_name_from_url(url):\n pth = urlparse(url).path.lstrip(\"/\")\n\n l = pth.lstrip(\"/\").split(\"/\")\n if len(l) == 0 or l[0] == \"\":\n return None\n return l[0]\n\n\ndef parse_key_name(path):\n return \"/\".join(path.rstrip(\"/\").split(\"/\")[2:])\n\n\ndef is_delete_keys(request, path, bucket_name):\n return (\n path == u'/' + bucket_name + u'/?delete' or\n path == u'/' + bucket_name + u'?delete' or\n (path == u'/' + bucket_name and\n getattr(request, \"query_string\", \"\") == \"delete\")\n )\n", "path": "moto/s3bucket_path/utils.py"}]} | 987 | 103 |
gh_patches_debug_14293 | rasdani/github-patches | git_diff | psychopy__psychopy-569 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feature request: develop a pylint rule-set
pylint is a code analysis tool, and will err on the side of being super duper ultra nitpicky (which is great). You then just turn some things off to see the signal in the noise. For example, I've been bitten by mutable default arguments to a method / function, and it will catch this. It flags bare excepts -- lots of useful stuff.
If anyone has experience with pylint, it would be great to have advice on what works well, and what is likely to work well for PsychoPy given its history and current conventions. If it's counterproductive to start using pylint with a codebase this large, that would be helpful to know.
I'm thinking that even if it's never run as part of the build process, it might be nice to have a project-wide pylintrc file that makes explicit what style conventions are expected (long lines ok, variable name conventions, etc). This seems like a powerful way to communicate the conventions.
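As a sketch of what I mean, intentional deviations could then be silenced per module instead of hidden globally; the message code below is just an example:
```
# Sketch: a re-export module that pylint would otherwise flag.
# W0611 = Unused import %s
# pylint: disable=W0611

from numpy import radians, degrees  # kept only for backwards compatibility
```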
PsychoPy currently has lots of bare excepts, bad indentations, unused variables, redefined builtins, unused imports, and so on -- seemingly all good targets for clean-up work.
</issue>
<code>
[start of psychopy/misc.py]
1 #!/usr/bin/env python2
2
3 # Part of the PsychoPy library
4 # Copyright (C) 2014 Jonathan Peirce
5 # Distributed under the terms of the GNU General Public License (GPL).
6
7 '''Wrapper for all miscellaneous functions and classes from psychopy.tools'''
8
9 from psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,
10 ratioRange, shuffleArray, val2array)
11
12 from psychopy.tools.attributetools import attributeSetter, setWithOperation
13
14 from psychopy.tools.colorspacetools import (dkl2rgb, dklCart2rgb,
15 hsv2rgb, lms2rgb,
16 rgb2dklCart, rgb2lms)
17
18 from psychopy.tools.coordinatetools import (cart2pol, pol2cart,
19 cart2sph, sph2cart)
20
21 from psychopy.tools.fileerrortools import handleFileCollision
22
23 from psychopy.tools.filetools import toFile, fromFile, mergeFolder
24
25 from psychopy.tools.imagetools import array2image, image2array, makeImageAuto
26
27 from psychopy.tools.monitorunittools import (cm2deg, deg2cm, cm2pix, pix2cm,
28 deg2pix, pix2deg, convertToPix)
29
30 from psychopy.tools.plottools import plotFrameIntervals
31
32 from psychopy.tools.typetools import float_uint8, float_uint16, uint8_float
33
34 from numpy import radians, degrees
35
[end of psychopy/misc.py]
[start of psychopy/tools/unittools.py]
1 #!/usr/bin/env python2
2
3 # Part of the PsychoPy library
4 # Copyright (C) 2014 Jonathan Peirce
5 # Distributed under the terms of the GNU General Public License (GPL).
6
7 '''Functions and classes related to unit conversion'''
8
9 # This module is not used by psychopy; retained for backwards compatibility
10 # for user-scripts.
11
12 from numpy import radians, degrees
13
[end of psychopy/tools/unittools.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/psychopy/misc.py b/psychopy/misc.py
--- a/psychopy/misc.py
+++ b/psychopy/misc.py
@@ -6,6 +6,9 @@
'''Wrapper for all miscellaneous functions and classes from psychopy.tools'''
+# pylint: disable=W0611
+# W0611 = Unused import %s
+
from psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,
ratioRange, shuffleArray, val2array)
diff --git a/psychopy/tools/unittools.py b/psychopy/tools/unittools.py
--- a/psychopy/tools/unittools.py
+++ b/psychopy/tools/unittools.py
@@ -9,4 +9,7 @@
# This module is not used by psychopy; retained for backwards compatibility
# for user-scripts.
+# pylint: disable=W0611
+# W0611 = Unused import %s
+
from numpy import radians, degrees
| {"golden_diff": "diff --git a/psychopy/misc.py b/psychopy/misc.py\n--- a/psychopy/misc.py\n+++ b/psychopy/misc.py\n@@ -6,6 +6,9 @@\n \n '''Wrapper for all miscellaneous functions and classes from psychopy.tools'''\n \n+# pylint: disable=W0611\n+# W0611 = Unused import %s\n+\n from psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,\n ratioRange, shuffleArray, val2array)\n \ndiff --git a/psychopy/tools/unittools.py b/psychopy/tools/unittools.py\n--- a/psychopy/tools/unittools.py\n+++ b/psychopy/tools/unittools.py\n@@ -9,4 +9,7 @@\n # This module is not used by psychopy; retained for backwards compatibility\n # for user-scripts.\n \n+# pylint: disable=W0611\n+# W0611 = Unused import %s\n+\n from numpy import radians, degrees\n", "issue": "feature request: develop a pylint rule-set\npylint is a code analysis tool, and will err of the side of being super duper ultra nitpicky (which is great). You just then turn some things off to see the signal in the noise. For example, I've been bitten by mutable default values to a method / function, and it will catch this. It flags bare excepts -- lots of useful stuff.\n\nIf anyone has experience with pylint, it would be great to have advice on what works well, and what is likely to work well for PsychoPy given its history and current conventions. If its counterproductive to start using pylint with a codebase this large, that would be helpful to know.\n\nI'm thinking that even if its never run as part of the build process, it might be nice to have a project-wide pylintrc file that makes explicit what style conventions are expected (long lines ok, variable name conventions, etc). This seems like a powerful way to communicate the conventions. \n\nPsychoPy currently has lots of bare excepts, bad indentations, unused variables, redefined builtins, unused imports, and so on -- seemingly all good targets for clean-up work.\n\n", "before_files": [{"content": "#!/usr/bin/env python2\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n'''Wrapper for all miscellaneous functions and classes from psychopy.tools'''\n\nfrom psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,\n ratioRange, shuffleArray, val2array)\n\nfrom psychopy.tools.attributetools import attributeSetter, setWithOperation\n\nfrom psychopy.tools.colorspacetools import (dkl2rgb, dklCart2rgb,\n hsv2rgb, lms2rgb,\n rgb2dklCart, rgb2lms)\n\nfrom psychopy.tools.coordinatetools import (cart2pol, pol2cart,\n cart2sph, sph2cart)\n\nfrom psychopy.tools.fileerrortools import handleFileCollision\n\nfrom psychopy.tools.filetools import toFile, fromFile, mergeFolder\n\nfrom psychopy.tools.imagetools import array2image, image2array, makeImageAuto\n\nfrom psychopy.tools.monitorunittools import (cm2deg, deg2cm, cm2pix, pix2cm,\n deg2pix, pix2deg, convertToPix)\n\nfrom psychopy.tools.plottools import plotFrameIntervals\n\nfrom psychopy.tools.typetools import float_uint8, float_uint16, uint8_float\n\nfrom numpy import radians, degrees\n", "path": "psychopy/misc.py"}, {"content": "#!/usr/bin/env python2\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n'''Functions and classes related to unit conversion'''\n\n# This module is not used by psychopy; retained for backwards compatibility\n# for user-scripts.\n\nfrom numpy import radians, degrees\n", "path": "psychopy/tools/unittools.py"}]} | 
1,277 | 214 |
gh_patches_debug_24058 | rasdani/github-patches | git_diff | avocado-framework__avocado-4726 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Avocado crashed unexpectedly with the SIGINT
When SIGINT is sent to avocado in the early stages of a run, avocado crashes.
This happens with both the legacy runner and the nrunner.
```
avocado run /bin/true
JOB ID : ee66540de61211c164d9d9cb5b0e9aaf65dca8a2
JOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T16.36-ee66540/job.log
^CAvocado crashed unexpectedly:
You can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_16:36:38-_m3ikjhl.log
```
```
avocado run --test-runner=nrunner /bin/true
JOB ID : da09a60ab32ff647c79d919781f82db3543e107f
JOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T15.09-da09a60/job.log
^CAvocado crashed unexpectedly:
You can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_15:09:37-my68_dsy.log
```
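One possible direction, sketched below, would be to special-case `KeyboardInterrupt` inside the excepthook instead of treating it as a crash. The exit code here is a placeholder rather than avocado's real constant:
```
import sys


def handle_exception(*exc_info):
    # Sketch: report Ctrl+C as an interruption instead of writing a crash file.
    if exc_info[0] is KeyboardInterrupt:
        sys.stderr.write("Interrupted by user\n")
        sys.exit(1)  # placeholder, not avocado's real "job interrupted" code
    sys.stderr.write("Avocado crashed unexpectedly: %s\n" % (exc_info[1],))
    sys.exit(-1)


sys.excepthook = handle_exception
```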
</issue>
<code>
[start of avocado/core/main.py]
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; specifically version 2 of the License.
4 #
5 # This program is distributed in the hope that it will be useful,
6 # but WITHOUT ANY WARRANTY; without even the implied warranty of
7 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
8 #
9 # See LICENSE for more details.
10 #
11 # Copyright: RedHat 2013-2014
12 # Author: Lucas Meneghel Rodrigues <[email protected]>
13
14
15 import os
16 import sys
17 import tempfile
18 import time
19 import traceback
20
21 try:
22 from avocado.core.settings import settings
23 except ImportError:
24 sys.stderr.write("Unable to import Avocado libraries, please verify "
25 "your installation, and if necessary reinstall it.\n")
26 # This exit code is replicated from avocado/core/exit_codes.py and not
27 # imported because we are dealing with import failures
28 sys.exit(-1)
29
30
31 def get_crash_dir():
32 config = settings.as_dict()
33 crash_dir_path = os.path.join(config.get('datadir.paths.data_dir'),
34 "crashes")
35 try:
36 os.makedirs(crash_dir_path)
37 except OSError:
38 pass
39 return crash_dir_path
40
41
42 def handle_exception(*exc_info):
43 # Print traceback if AVOCADO_LOG_DEBUG environment variable is set
44 msg = "Avocado crashed:\n" + "".join(traceback.format_exception(*exc_info))
45 msg += "\n"
46 if os.environ.get("AVOCADO_LOG_DEBUG"):
47 os.write(2, msg.encode('utf-8'))
48 # Store traceback in data_dir or TMPDIR
49 prefix = "avocado-traceback-"
50 prefix += time.strftime("%F_%T") + "-"
51 tmp, name = tempfile.mkstemp(".log", prefix, get_crash_dir())
52 os.write(tmp, msg.encode('utf-8'))
53 os.close(tmp)
54 # Print friendly message in console-like output
55 msg = ("Avocado crashed unexpectedly: %s\nYou can find details in %s\n"
56 % (exc_info[1], name))
57 os.write(2, msg.encode('utf-8'))
58 # This exit code is replicated from avocado/core/exit_codes.py and not
59 # imported because we are dealing with import failures
60 sys.exit(-1)
61
62
63 def main():
64 sys.excepthook = handle_exception
65 from avocado.core.app import AvocadoApp # pylint: disable=E0611
66
67 # Override tmp in case it's not set in env
68 for attr in ("TMP", "TEMP", "TMPDIR"):
69 if attr in os.environ:
70 break
71 else: # TMP not set by user, use /var/tmp if exists
72 # TMP not set by user in environment. Try to use /var/tmp to avoid
73 # possible problems with "/tmp" being mounted as TMPFS without the
74 # support for O_DIRECT
75 if os.path.exists("/var/tmp"):
76 os.environ["TMP"] = "/var/tmp"
77 app = AvocadoApp()
78 return app.run()
79
80
81 if __name__ == '__main__':
82 sys.exit(main())
83
[end of avocado/core/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/avocado/core/main.py b/avocado/core/main.py
--- a/avocado/core/main.py
+++ b/avocado/core/main.py
@@ -19,6 +19,7 @@
import traceback
try:
+ from avocado.core import exit_codes
from avocado.core.settings import settings
except ImportError:
sys.stderr.write("Unable to import Avocado libraries, please verify "
@@ -51,13 +52,16 @@
tmp, name = tempfile.mkstemp(".log", prefix, get_crash_dir())
os.write(tmp, msg.encode('utf-8'))
os.close(tmp)
- # Print friendly message in console-like output
- msg = ("Avocado crashed unexpectedly: %s\nYou can find details in %s\n"
- % (exc_info[1], name))
+ if exc_info[0] is KeyboardInterrupt:
+ msg = "%s\nYou can find details in %s\n" % (exc_info[0].__doc__, name)
+ exit_code = exit_codes.AVOCADO_JOB_INTERRUPTED
+ else:
+ # Print friendly message in console-like output
+ msg = ("Avocado crashed unexpectedly: %s\nYou can find details in %s\n"
+ % (exc_info[1], name))
+ exit_code = exit_codes.AVOCADO_GENERIC_CRASH
os.write(2, msg.encode('utf-8'))
- # This exit code is replicated from avocado/core/exit_codes.py and not
- # imported because we are dealing with import failures
- sys.exit(-1)
+ sys.exit(exit_code)
def main():
| {"golden_diff": "diff --git a/avocado/core/main.py b/avocado/core/main.py\n--- a/avocado/core/main.py\n+++ b/avocado/core/main.py\n@@ -19,6 +19,7 @@\n import traceback\n \n try:\n+ from avocado.core import exit_codes\n from avocado.core.settings import settings\n except ImportError:\n sys.stderr.write(\"Unable to import Avocado libraries, please verify \"\n@@ -51,13 +52,16 @@\n tmp, name = tempfile.mkstemp(\".log\", prefix, get_crash_dir())\n os.write(tmp, msg.encode('utf-8'))\n os.close(tmp)\n- # Print friendly message in console-like output\n- msg = (\"Avocado crashed unexpectedly: %s\\nYou can find details in %s\\n\"\n- % (exc_info[1], name))\n+ if exc_info[0] is KeyboardInterrupt:\n+ msg = \"%s\\nYou can find details in %s\\n\" % (exc_info[0].__doc__, name)\n+ exit_code = exit_codes.AVOCADO_JOB_INTERRUPTED\n+ else:\n+ # Print friendly message in console-like output\n+ msg = (\"Avocado crashed unexpectedly: %s\\nYou can find details in %s\\n\"\n+ % (exc_info[1], name))\n+ exit_code = exit_codes.AVOCADO_GENERIC_CRASH\n os.write(2, msg.encode('utf-8'))\n- # This exit code is replicated from avocado/core/exit_codes.py and not\n- # imported because we are dealing with import failures\n- sys.exit(-1)\n+ sys.exit(exit_code)\n \n \n def main():\n", "issue": "Avocado crashed unexpectedly with the SIGINT\nWhen the SIGINT is sent to the avocado in the early stages the avocado will crash.\r\nThis is happening on both runner legacy and nrunner. \r\n\r\n```\r\navocado run /bin/true\r\nJOB ID : ee66540de61211c164d9d9cb5b0e9aaf65dca8a2\r\nJOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T16.36-ee66540/job.log\r\n^CAvocado crashed unexpectedly:\r\nYou can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_16:36:38-_m3ikjhl.log\r\n```\r\n\r\n```\r\navocado run --test-runner=nrunner /bin/true\r\nJOB ID : da09a60ab32ff647c79d919781f82db3543e107f\r\nJOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T15.09-da09a60/job.log\r\n^CAvocado crashed unexpectedly:\r\nYou can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_15:09:37-my68_dsy.log\r\n```\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; specifically version 2 of the License.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: RedHat 2013-2014\n# Author: Lucas Meneghel Rodrigues <[email protected]>\n\n\nimport os\nimport sys\nimport tempfile\nimport time\nimport traceback\n\ntry:\n from avocado.core.settings import settings\nexcept ImportError:\n sys.stderr.write(\"Unable to import Avocado libraries, please verify \"\n \"your installation, and if necessary reinstall it.\\n\")\n # This exit code is replicated from avocado/core/exit_codes.py and not\n # imported because we are dealing with import failures\n sys.exit(-1)\n\n\ndef get_crash_dir():\n config = settings.as_dict()\n crash_dir_path = os.path.join(config.get('datadir.paths.data_dir'),\n \"crashes\")\n try:\n os.makedirs(crash_dir_path)\n except OSError:\n pass\n return crash_dir_path\n\n\ndef handle_exception(*exc_info):\n # Print traceback if AVOCADO_LOG_DEBUG environment variable is set\n msg = \"Avocado crashed:\\n\" + \"\".join(traceback.format_exception(*exc_info))\n msg += 
\"\\n\"\n if os.environ.get(\"AVOCADO_LOG_DEBUG\"):\n os.write(2, msg.encode('utf-8'))\n # Store traceback in data_dir or TMPDIR\n prefix = \"avocado-traceback-\"\n prefix += time.strftime(\"%F_%T\") + \"-\"\n tmp, name = tempfile.mkstemp(\".log\", prefix, get_crash_dir())\n os.write(tmp, msg.encode('utf-8'))\n os.close(tmp)\n # Print friendly message in console-like output\n msg = (\"Avocado crashed unexpectedly: %s\\nYou can find details in %s\\n\"\n % (exc_info[1], name))\n os.write(2, msg.encode('utf-8'))\n # This exit code is replicated from avocado/core/exit_codes.py and not\n # imported because we are dealing with import failures\n sys.exit(-1)\n\n\ndef main():\n sys.excepthook = handle_exception\n from avocado.core.app import AvocadoApp # pylint: disable=E0611\n\n # Override tmp in case it's not set in env\n for attr in (\"TMP\", \"TEMP\", \"TMPDIR\"):\n if attr in os.environ:\n break\n else: # TMP not set by user, use /var/tmp if exists\n # TMP not set by user in environment. Try to use /var/tmp to avoid\n # possible problems with \"/tmp\" being mounted as TMPFS without the\n # support for O_DIRECT\n if os.path.exists(\"/var/tmp\"):\n os.environ[\"TMP\"] = \"/var/tmp\"\n app = AvocadoApp()\n return app.run()\n\n\nif __name__ == '__main__':\n sys.exit(main())\n", "path": "avocado/core/main.py"}]} | 1,725 | 361 |
gh_patches_debug_48680 | rasdani/github-patches | git_diff | ethereum__web3.py-670 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider adding Chain Id to library
* Version: 4.0.0-b
* Python: 3.6.3
* OS: linux
### What was wrong?
No clear way to access known chain ids.
### How can it be fixed?
Proposed syntax
```
>>> from web3 import Chains
>>> Chains.Ropsten.id
3
```
I ran into issues here: https://web3py.readthedocs.io/en/latest/web3.eth.account.html#sign-a-contract-transaction as `buildTransaction()` requires a `chainId`. I didn't even know the chains had ids.
```
>>> unicorn_txn = unicorns.functions.transfer(
... '0xfB6916095ca1df60bB79Ce92cE3Ea74c37c5d359',
... 1,
... ).buildTransaction({
... 'chainId': 1,
... 'gas': 70000,
... 'gasPrice': w3.toWei('1', 'gwei'),
... 'nonce': nonce,
... })
```
### Maybe this will help others
According to this answer https://ethereum.stackexchange.com/a/17101/7187 the chain ids are as follows:
- 0: Olympic, Ethereum public pre-release testnet
- 1: Frontier, Homestead, Metropolis, the Ethereum public main network
- 1: Classic, the (un)forked public Ethereum Classic main network, chain ID 61
- 1: Expanse, an alternative Ethereum implementation, chain ID 2
- 2: Morden, the public Ethereum testnet, now Ethereum Classic testnet
- 3: Ropsten, the public cross-client Ethereum testnet
- 4: Rinkeby, the public Geth PoA testnet
- 42: Kovan, the public Parity PoA testnet
- 77: Sokol, the public POA Network testnet
- 99: Core, the public POA Network main network
- 7762959: Musicoin, the music blockchain
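A minimal sketch of what such a helper could look like, using only the well-known ids from the list above (the class and member names are just a proposal, and the syntax differs slightly from the `Chains.Ropsten.id` form suggested earlier):
```
# Sketch of the proposed Chains helper; values taken from the list above.
from enum import IntEnum


class Chains(IntEnum):
    MAINNET = 1
    MORDEN = 2
    ROPSTEN = 3
    RINKEBY = 4
    KOVAN = 42


print(int(Chains.ROPSTEN))  # 3, usable directly as the chainId value
```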
</issue>
<code>
[start of web3/net.py]
1 from web3.module import (
2 Module,
3 )
4
5
6 class Net(Module):
7 @property
8 def listening(self):
9 return self.web3.manager.request_blocking("net_listening", [])
10
11 @property
12 def peerCount(self):
13 return self.web3.manager.request_blocking("net_peerCount", [])
14
15 @property
16 def version(self):
17 return self.web3.manager.request_blocking("net_version", [])
18
[end of web3/net.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/web3/net.py b/web3/net.py
--- a/web3/net.py
+++ b/web3/net.py
@@ -12,6 +12,10 @@
def peerCount(self):
return self.web3.manager.request_blocking("net_peerCount", [])
+ @property
+ def chainId(self):
+ return self.version
+
@property
def version(self):
return self.web3.manager.request_blocking("net_version", [])
| {"golden_diff": "diff --git a/web3/net.py b/web3/net.py\n--- a/web3/net.py\n+++ b/web3/net.py\n@@ -12,6 +12,10 @@\n def peerCount(self):\n return self.web3.manager.request_blocking(\"net_peerCount\", [])\n \n+ @property\n+ def chainId(self):\n+ return self.version\n+\n @property\n def version(self):\n return self.web3.manager.request_blocking(\"net_version\", [])\n", "issue": "Consider adding Chain Id to library\n* Version: 4.0.0-b\r\n* Python: 3.6.3\r\n* OS: linux\r\n\r\n### What was wrong?\r\n\r\nNo clear way to access known chain ids.\r\n\r\n### How can it be fixed?\r\n\r\nProposed syntax\r\n\r\n```\r\n>>> from web3 import Chains\r\n>>> Chains.Ropsten.id\r\n3\r\n```\r\n\r\nI ran into issues here: https://web3py.readthedocs.io/en/latest/web3.eth.account.html#sign-a-contract-transaction as `buildTransaction()` requires a `chainId`. I didn't even know the chains had ids.\r\n\r\n```\r\n>>> unicorn_txn = unicorns.functions.transfer(\r\n... '0xfB6916095ca1df60bB79Ce92cE3Ea74c37c5d359',\r\n... 1,\r\n... ).buildTransaction({\r\n... 'chainId': 1,\r\n... 'gas': 70000,\r\n... 'gasPrice': w3.toWei('1', 'gwei'),\r\n... 'nonce': nonce,\r\n... })\r\n```\r\n\r\n### Maybe this will help others\r\n\r\nAccording to this answer https://ethereum.stackexchange.com/a/17101/7187 the chain ids are as follows:\r\n\r\n0: Olympic, Ethereum public pre-release testnet\r\n1: Frontier, Homestead, Metropolis, the Ethereum public main network\r\n1: Classic, the (un)forked public Ethereum Classic main network, chain ID 61\r\n1: Expanse, an alternative Ethereum implementation, chain ID 2\r\n2: Morden, the public Ethereum testnet, now Ethereum Classic testnet\r\n3: Ropsten, the public cross-client Ethereum testnet\r\n4: Rinkeby, the public Geth PoA testnet\r\n42: Kovan, the public Parity PoA testnet\r\n77: Sokol, the public POA Network testnet\r\n99: Core, the public POA Network main network\r\n7762959: Musicoin, the music blockchain\r\n\nConsider adding Chain Id to library\n* Version: 4.0.0-b\r\n* Python: 3.6.3\r\n* OS: linux\r\n\r\n### What was wrong?\r\n\r\nNo clear way to access known chain ids.\r\n\r\n### How can it be fixed?\r\n\r\nProposed syntax\r\n\r\n```\r\n>>> from web3 import Chains\r\n>>> Chains.Ropsten.id\r\n3\r\n```\r\n\r\nI ran into issues here: https://web3py.readthedocs.io/en/latest/web3.eth.account.html#sign-a-contract-transaction as `buildTransaction()` requires a `chainId`. I didn't even know the chains had ids.\r\n\r\n```\r\n>>> unicorn_txn = unicorns.functions.transfer(\r\n... '0xfB6916095ca1df60bB79Ce92cE3Ea74c37c5d359',\r\n... 1,\r\n... ).buildTransaction({\r\n... 'chainId': 1,\r\n... 'gas': 70000,\r\n... 'gasPrice': w3.toWei('1', 'gwei'),\r\n... 'nonce': nonce,\r\n... 
})\r\n```\r\n\r\n### Maybe this will help others\r\n\r\nAccording to this answer https://ethereum.stackexchange.com/a/17101/7187 the chain ids are as follows:\r\n\r\n0: Olympic, Ethereum public pre-release testnet\r\n1: Frontier, Homestead, Metropolis, the Ethereum public main network\r\n1: Classic, the (un)forked public Ethereum Classic main network, chain ID 61\r\n1: Expanse, an alternative Ethereum implementation, chain ID 2\r\n2: Morden, the public Ethereum testnet, now Ethereum Classic testnet\r\n3: Ropsten, the public cross-client Ethereum testnet\r\n4: Rinkeby, the public Geth PoA testnet\r\n42: Kovan, the public Parity PoA testnet\r\n77: Sokol, the public POA Network testnet\r\n99: Core, the public POA Network main network\r\n7762959: Musicoin, the music blockchain\r\n\n", "before_files": [{"content": "from web3.module import (\n Module,\n)\n\n\nclass Net(Module):\n @property\n def listening(self):\n return self.web3.manager.request_blocking(\"net_listening\", [])\n\n @property\n def peerCount(self):\n return self.web3.manager.request_blocking(\"net_peerCount\", [])\n\n @property\n def version(self):\n return self.web3.manager.request_blocking(\"net_version\", [])\n", "path": "web3/net.py"}]} | 1,541 | 104 |
gh_patches_debug_36617 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-406 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use numpy.load and save instead of pickle
As @StanczakDominik put it:
> By the way, apparently pickle is unsafe due to allowing arbitrary code execution, and we're now including those in Langmuir samples. @jasperbeckers do you think we could transition to numpy.save and numpy.load .npz files? We're just storing two arrays in each of those anyway, right?
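If that is right, the one-off conversion and the loading side could look roughly like this (sketch only, file names reused from the existing samples, and assuming each file really holds just the two arrays):
```
# Sketch: convert an existing pickle sample to .npy and load it back.
import pickle

import numpy as np

with open("Beckers2017.p", "rb") as f:
    bias, current = pickle.load(f)

np.save("Beckers2017.npy", np.array([bias, current]))

# later, loading involves no arbitrary code execution
bias, current = np.load("Beckers2017.npy")
```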
</issue>
<code>
[start of plasmapy/examples/plot_langmuir_analysis.py]
1 # coding: utf-8
2 """
3 Langmuir probe data analysis
4 ============================
5
6 Let's analyze a few Langmuir probe characteristics using the
7 `diagnostics.langmuir` subpackage. First we need to import the module and some
8 basics.
9 """
10
11 from plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis
12 import astropy.units as u
13 import numpy as np
14 import pickle
15 import os
16
17 ######################################################
18 # The first characteristic we analyze is a simple single-probe measurement in
19 # a low (ion) temperature, low density plasma with a cylindrical probe. This
20 # allows us to utilize OML theory implemented in `swept_probe_analysis`.
21 # The data has been preprocessed with some smoothing, which allows us to obtain
22 # a Electron Energy Distribution Function (EEDF) as well.
23
24 # Load the bias and current values stored in the .p pickle file.
25 path = os.path.join("langmuir_samples", "Beckers2017.p")
26 bias, current = pickle.load(open(path, 'rb'))
27
28 # Create the Characteristic object, taking into account the correct units
29 characteristic = Characteristic(np.array(bias) * u.V,
30 np.array(current)*1e3 * u.mA)
31
32 # Calculate the cylindrical probe surface area
33 probe_length = 1.145 * u.mm
34 probe_diameter = 1.57 * u.mm
35 probe_area = (probe_length * np.pi * probe_diameter +
36 np.pi * 0.25 * probe_diameter**2)
37
38 ######################################################
39 # Now we can actually perform the analysis. Since the plasma is in Helium an
40 # ion mass number of 4 is entered. The results are visualized and the obtained
41 # EEDF is also shown.
42 print(swept_probe_analysis(characteristic,
43 probe_area, 4 * u.u,
44 visualize=True,
45 plot_EEDF=True))
46
47 ######################################################
48 # The cyan and yellow lines indicate the fitted electron and ion currents,
49 # respectively. The green line is the sum of these and agrees nicely with the
50 # data. This indicates a succesfull analysis.
51
52 ######################################################
53 # The next sample probe data is provided by David Pace. is also obtained from a low relatively ion
54 # temperature and density plasma, in Argon.
55
56 # Load the data from a file and create the Characteristic object
57 path = os.path.join("langmuir_samples", "Pace2015.p")
58 bias, current = pickle.load(open(path, 'rb'))
59 characteristic = Characteristic(np.array(bias) * u.V,
60 np.array(current) * 1e3 * u.mA)
61
62 ######################################################
63 # Initially the electrons are assumed to be Maxwellian. To check this the fit
64 # of the electron growth region will be plotted.
65 swept_probe_analysis(characteristic,
66 0.738 * u.cm**2,
67 40 * u.u,
68 bimaxwellian=False,
69 plot_electron_fit=True)
70
71 ######################################################
72 # It can be seen that this plasma is slightly bi-Maxwellian, as there are two
73 # distinct slopes in the exponential section. The analysis is now performed
74 # with bimaxwellian set to True, which yields improved results.
75 print(swept_probe_analysis(characteristic,
76 0.738 * u.cm**2,
77 40 * u.u,
78 bimaxwellian=True,
79 visualize=True,
80 plot_electron_fit=True))
81
82 ######################################################
83 # The probe current resolution of the raw data is relatively poor, but the
84 # analysis still performs well in the ion current region. The bi-Maxwellian
85 # properties are not significant but do make a difference. Check this analysis
86 # without setting `bimaxwellian` to True!
87 # This is reflected in the results, which indicate that the temperatures of
88 # the cold and hot electron population are indeed different, but relatively
89 # close.
90
91 ######################################################
92 # This Helium plasma is fully bi-Maxwellian.
93
94 # Import probe data and calculate probe surface area.
95 path = os.path.join("langmuir_samples", "Beckers2017b.p")
96 bias, current = pickle.load(open(path, 'rb'))
97 characteristic = Characteristic(np.array(bias) * u.V,
98 np.array(current) * 1e3 * u.mA)
99 probe_length = 1.145 * u.mm
100 probe_diameter = 1.57 * u.mm
101 probe_area = (probe_length * np.pi * probe_diameter +
102 np.pi * 0.25 * probe_diameter**2)
103
104 ######################################################
105 # `plot_electron_fit` is set to True to check the bi-Maxwellian properties.
106 # The fit converges nicely to the two slopes of the electron growth region.
107 print(swept_probe_analysis(characteristic,
108 probe_area,
109 4 * u.u,
110 bimaxwellian=True,
111 plot_electron_fit=True,
112 visualize=True))
113
[end of plasmapy/examples/plot_langmuir_analysis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plasmapy/examples/plot_langmuir_analysis.py b/plasmapy/examples/plot_langmuir_analysis.py
--- a/plasmapy/examples/plot_langmuir_analysis.py
+++ b/plasmapy/examples/plot_langmuir_analysis.py
@@ -11,7 +11,6 @@
from plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis
import astropy.units as u
import numpy as np
-import pickle
import os
######################################################
@@ -22,8 +21,8 @@
# a Electron Energy Distribution Function (EEDF) as well.
# Load the bias and current values stored in the .p pickle file.
-path = os.path.join("langmuir_samples", "Beckers2017.p")
-bias, current = pickle.load(open(path, 'rb'))
+path = os.path.join("langmuir_samples", "Beckers2017.npy")
+bias, current = np.load(path)
# Create the Characteristic object, taking into account the correct units
characteristic = Characteristic(np.array(bias) * u.V,
@@ -50,12 +49,12 @@
# data. This indicates a succesfull analysis.
######################################################
-# The next sample probe data is provided by David Pace. is also obtained from a low relatively ion
-# temperature and density plasma, in Argon.
+# The next sample probe data is provided by David Pace. It is also obtained
+# from a low relatively ion temperature and density plasma, in Argon.
# Load the data from a file and create the Characteristic object
-path = os.path.join("langmuir_samples", "Pace2015.p")
-bias, current = pickle.load(open(path, 'rb'))
+path = os.path.join("langmuir_samples", "Pace2015.npy")
+bias, current = np.load(path)
characteristic = Characteristic(np.array(bias) * u.V,
np.array(current) * 1e3 * u.mA)
@@ -92,8 +91,8 @@
# This Helium plasma is fully bi-Maxwellian.
# Import probe data and calculate probe surface area.
-path = os.path.join("langmuir_samples", "Beckers2017b.p")
-bias, current = pickle.load(open(path, 'rb'))
+path = os.path.join("langmuir_samples", "Beckers2017b.npy")
+bias, current = np.load(path)
characteristic = Characteristic(np.array(bias) * u.V,
np.array(current) * 1e3 * u.mA)
probe_length = 1.145 * u.mm
| {"golden_diff": "diff --git a/plasmapy/examples/plot_langmuir_analysis.py b/plasmapy/examples/plot_langmuir_analysis.py\n--- a/plasmapy/examples/plot_langmuir_analysis.py\n+++ b/plasmapy/examples/plot_langmuir_analysis.py\n@@ -11,7 +11,6 @@\n from plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis\n import astropy.units as u\n import numpy as np\n-import pickle\n import os\n \n ######################################################\n@@ -22,8 +21,8 @@\n # a Electron Energy Distribution Function (EEDF) as well.\n \n # Load the bias and current values stored in the .p pickle file.\n-path = os.path.join(\"langmuir_samples\", \"Beckers2017.p\")\n-bias, current = pickle.load(open(path, 'rb'))\n+path = os.path.join(\"langmuir_samples\", \"Beckers2017.npy\")\n+bias, current = np.load(path)\n \n # Create the Characteristic object, taking into account the correct units\n characteristic = Characteristic(np.array(bias) * u.V,\n@@ -50,12 +49,12 @@\n # data. This indicates a succesfull analysis.\n \n ######################################################\n-# The next sample probe data is provided by David Pace. is also obtained from a low relatively ion\n-# temperature and density plasma, in Argon.\n+# The next sample probe data is provided by David Pace. It is also obtained\n+# from a low relatively ion temperature and density plasma, in Argon.\n \n # Load the data from a file and create the Characteristic object\n-path = os.path.join(\"langmuir_samples\", \"Pace2015.p\")\n-bias, current = pickle.load(open(path, 'rb'))\n+path = os.path.join(\"langmuir_samples\", \"Pace2015.npy\")\n+bias, current = np.load(path)\n characteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\n \n@@ -92,8 +91,8 @@\n # This Helium plasma is fully bi-Maxwellian.\n \n # Import probe data and calculate probe surface area.\n-path = os.path.join(\"langmuir_samples\", \"Beckers2017b.p\")\n-bias, current = pickle.load(open(path, 'rb'))\n+path = os.path.join(\"langmuir_samples\", \"Beckers2017b.npy\")\n+bias, current = np.load(path)\n characteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\n probe_length = 1.145 * u.mm\n", "issue": "Use numpy.load and save instead of pickle\nAs @StanczakDominik put it:\r\n\r\n> By the way, apparently pickle is unsafe due to allowing arbitrary code execution, and we're now including those in Langmuir samples. @jasperbeckers do you think we could transition to numpy.save and numpy.load .npz files? We're just storing two arrays in each of those anyway, right?\n", "before_files": [{"content": "# coding: utf-8\n\"\"\"\nLangmuir probe data analysis\n============================\n\nLet's analyze a few Langmuir probe characteristics using the\n`diagnostics.langmuir` subpackage. First we need to import the module and some\nbasics.\n\"\"\"\n\nfrom plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis\nimport astropy.units as u\nimport numpy as np\nimport pickle\nimport os\n\n######################################################\n# The first characteristic we analyze is a simple single-probe measurement in\n# a low (ion) temperature, low density plasma with a cylindrical probe. 
This\n# allows us to utilize OML theory implemented in `swept_probe_analysis`.\n# The data has been preprocessed with some smoothing, which allows us to obtain\n# a Electron Energy Distribution Function (EEDF) as well.\n\n# Load the bias and current values stored in the .p pickle file.\npath = os.path.join(\"langmuir_samples\", \"Beckers2017.p\")\nbias, current = pickle.load(open(path, 'rb'))\n\n# Create the Characteristic object, taking into account the correct units\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current)*1e3 * u.mA)\n\n# Calculate the cylindrical probe surface area\nprobe_length = 1.145 * u.mm\nprobe_diameter = 1.57 * u.mm\nprobe_area = (probe_length * np.pi * probe_diameter +\n np.pi * 0.25 * probe_diameter**2)\n\n######################################################\n# Now we can actually perform the analysis. Since the plasma is in Helium an\n# ion mass number of 4 is entered. The results are visualized and the obtained\n# EEDF is also shown.\nprint(swept_probe_analysis(characteristic,\n probe_area, 4 * u.u,\n visualize=True,\n plot_EEDF=True))\n\n######################################################\n# The cyan and yellow lines indicate the fitted electron and ion currents,\n# respectively. The green line is the sum of these and agrees nicely with the\n# data. This indicates a succesfull analysis.\n\n######################################################\n# The next sample probe data is provided by David Pace. is also obtained from a low relatively ion\n# temperature and density plasma, in Argon.\n\n# Load the data from a file and create the Characteristic object\npath = os.path.join(\"langmuir_samples\", \"Pace2015.p\")\nbias, current = pickle.load(open(path, 'rb'))\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\n\n######################################################\n# Initially the electrons are assumed to be Maxwellian. To check this the fit\n# of the electron growth region will be plotted.\nswept_probe_analysis(characteristic,\n 0.738 * u.cm**2,\n 40 * u.u,\n bimaxwellian=False,\n plot_electron_fit=True)\n\n######################################################\n# It can be seen that this plasma is slightly bi-Maxwellian, as there are two\n# distinct slopes in the exponential section. The analysis is now performed\n# with bimaxwellian set to True, which yields improved results.\nprint(swept_probe_analysis(characteristic,\n 0.738 * u.cm**2,\n 40 * u.u,\n bimaxwellian=True,\n visualize=True,\n plot_electron_fit=True))\n\n######################################################\n# The probe current resolution of the raw data is relatively poor, but the\n# analysis still performs well in the ion current region. The bi-Maxwellian\n# properties are not significant but do make a difference. 
Check this analysis\n# without setting `bimaxwellian` to True!\n# This is reflected in the results, which indicate that the temperatures of\n# the cold and hot electron population are indeed different, but relatively\n# close.\n\n######################################################\n# This Helium plasma is fully bi-Maxwellian.\n\n# Import probe data and calculate probe surface area.\npath = os.path.join(\"langmuir_samples\", \"Beckers2017b.p\")\nbias, current = pickle.load(open(path, 'rb'))\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\nprobe_length = 1.145 * u.mm\nprobe_diameter = 1.57 * u.mm\nprobe_area = (probe_length * np.pi * probe_diameter +\n np.pi * 0.25 * probe_diameter**2)\n\n######################################################\n# `plot_electron_fit` is set to True to check the bi-Maxwellian properties.\n# The fit converges nicely to the two slopes of the electron growth region.\nprint(swept_probe_analysis(characteristic,\n probe_area,\n 4 * u.u,\n bimaxwellian=True,\n plot_electron_fit=True,\n visualize=True))\n", "path": "plasmapy/examples/plot_langmuir_analysis.py"}]} | 1,917 | 589 |
gh_patches_debug_20904 | rasdani/github-patches | git_diff | cupy__cupy-1911 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[WIP] Fix assigning from complex to float (only test)
When a user assigns a complex value to a float array, it causes an error.
</issue>
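Editor's note (not part of the original report): a minimal reproduction of the described failure might look like the sketch below; the shapes, dtypes and the exact error raised are assumptions.

```python
import cupy as cp

dst = cp.zeros(3, dtype=cp.float32)           # float destination array
src = cp.array([1 + 2j, 2 + 4j, 3 + 6j])      # complex source array
dst[...] = src  # fails before the fix: the copy ufunc has no complex -> float path
```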
<code>
[start of cupy/core/_ufuncs.py]
1 from cupy.core._kernel import create_ufunc
2
3 elementwise_copy = create_ufunc(
4 'cupy_copy',
5 ('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',
6 'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),
7 'out0 = out0_type(in0)', default_casting='unsafe')
8 # complex numbers requires out0 = complex<T>(in0)
9
[end of cupy/core/_ufuncs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/core/_ufuncs.py b/cupy/core/_ufuncs.py
--- a/cupy/core/_ufuncs.py
+++ b/cupy/core/_ufuncs.py
@@ -1,8 +1,30 @@
from cupy.core._kernel import create_ufunc
+
+_complex_cast_copy = '''
+template<typename T, typename U>
+__device__ void cast_copy(const U& x, T& y) {y = T(x);}
+template<typename T, typename U>
+__device__ void cast_copy(const complex<U>& x, complex<T>& y) {
+ y = complex<T>(x);
+}
+template<typename T, typename U>
+__device__ void cast_copy(const complex<U>& x, T& y) {y = T(x.real());}
+'''
+
+
elementwise_copy = create_ufunc(
'cupy_copy',
('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',
'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),
- 'out0 = out0_type(in0)', default_casting='unsafe')
+ 'cast_copy(in0, out0)',
+ preamble=_complex_cast_copy, default_casting='unsafe')
+
+
+elementwise_copy_where = create_ufunc(
+ 'cupy_copy_where',
+ ('??->?', 'b?->b', 'B?->B', 'h?->h', 'H?->H', 'i?->i', 'I?->I', 'l?->l',
+ 'L?->L', 'q?->q', 'Q?->Q', 'e?->e', 'f?->f', 'd?->d', 'F?->F', 'D?->D'),
+ 'if (in1) cast_copy(in0, out0)',
+ preamble=_complex_cast_copy, default_casting='unsafe')
# complex numbers requires out0 = complex<T>(in0)
| {"golden_diff": "diff --git a/cupy/core/_ufuncs.py b/cupy/core/_ufuncs.py\n--- a/cupy/core/_ufuncs.py\n+++ b/cupy/core/_ufuncs.py\n@@ -1,8 +1,30 @@\n from cupy.core._kernel import create_ufunc\n \n+\n+_complex_cast_copy = '''\n+template<typename T, typename U>\n+__device__ void cast_copy(const U& x, T& y) {y = T(x);}\n+template<typename T, typename U>\n+__device__ void cast_copy(const complex<U>& x, complex<T>& y) {\n+ y = complex<T>(x);\n+}\n+template<typename T, typename U>\n+__device__ void cast_copy(const complex<U>& x, T& y) {y = T(x.real());}\n+'''\n+\n+\n elementwise_copy = create_ufunc(\n 'cupy_copy',\n ('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',\n 'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),\n- 'out0 = out0_type(in0)', default_casting='unsafe')\n+ 'cast_copy(in0, out0)',\n+ preamble=_complex_cast_copy, default_casting='unsafe')\n+\n+\n+elementwise_copy_where = create_ufunc(\n+ 'cupy_copy_where',\n+ ('??->?', 'b?->b', 'B?->B', 'h?->h', 'H?->H', 'i?->i', 'I?->I', 'l?->l',\n+ 'L?->L', 'q?->q', 'Q?->Q', 'e?->e', 'f?->f', 'd?->d', 'F?->F', 'D?->D'),\n+ 'if (in1) cast_copy(in0, out0)',\n+ preamble=_complex_cast_copy, default_casting='unsafe')\n # complex numbers requires out0 = complex<T>(in0)\n", "issue": "[WIP] Fix assigning from complex to float (only test)\nWhen a user assign complex value to float array, it causes an error.\n", "before_files": [{"content": "from cupy.core._kernel import create_ufunc\n\nelementwise_copy = create_ufunc(\n 'cupy_copy',\n ('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',\n 'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),\n 'out0 = out0_type(in0)', default_casting='unsafe')\n# complex numbers requires out0 = complex<T>(in0)\n", "path": "cupy/core/_ufuncs.py"}]} | 718 | 481 |
gh_patches_debug_36592 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1399 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable input normalization in SemanticSegmentationData module
## 🚀 Feature
Add the possibility to normalize Input images in SemanticSegmentationData module
### Motivation
Enable effortless normalization, as already implemented by ImageClassificationData: optionally configurable by doing:
```python
dm = SemanticSegmentationData.from_folders(
# ...
args_transforms=dict(mean=mean,std=std)
)
```
### Pitch
Change [/flash/image/segmentation/input_transform.py:43](https://github.com/Lightning-AI/lightning-flash/blob/master/flash/image/segmentation/input_transform.py#L43)
```python
@dataclass
class SemanticSegmentationInputTransform(InputTransform):
image_size: Tuple[int, int] = (128, 128)
def train_per_sample_transform(self) -> Callable:
return ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(
K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
),
)
def per_sample_transform(self) -> Callable:
return ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
)
def predict_per_sample_transform(self) -> Callable:
return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation="nearest"))
```
into this
```python
@dataclass
class SemanticSegmentationInputTransform(InputTransform):
image_size: Tuple[int, int] = (128, 128)
mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)
std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)
def train_per_sample_transform(self) -> Callable:
return T.Compose(
[
ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(
K.geometry.Resize(self.image_size, interpolation="nearest"),
)
),
ApplyToKeys(
[DataKeys.INPUT],
K.augmentation.Normalize(mean=mean, std=std)
),
]
)
def per_sample_transform(self) -> Callable:
return T.Compose(
[
ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(
K.geometry.Resize(self.image_size, interpolation="nearest"),
)
),
ApplyToKeys(
[DataKeys.INPUT],
K.augmentation.Normalize(mean=mean, std=std)
),
]
)
def predict_per_sample_transform(self) -> Callable:
return ApplyToKeys(
DataKeys.INPUT,
K.geometry.Resize(self.image_size, interpolation="nearest"),
K.augmentation.Normalize(mean=mean, std=std)
)
```
### Alternatives
The alternative is to write a custom InputTransform object every time.
</issue>
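Editor's note (not part of the original issue): if the proposal above lands, end-user code would presumably pass the statistics through the data module roughly as sketched below; the keyword name `transform_kwargs`, the folder layout and the default statistics are assumptions, not verified against the Flash API.

```python
from flash.image import SemanticSegmentationData

# hypothetical call once SemanticSegmentationInputTransform exposes mean/std
dm = SemanticSegmentationData.from_folders(
    train_folder="data/imgs",
    train_target_folder="data/masks",
    transform_kwargs=dict(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
)
```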
<code>
[start of flash/image/segmentation/input_transform.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from dataclasses import dataclass
15 from typing import Any, Callable, Dict, Tuple
16
17 from flash.core.data.io.input import DataKeys
18 from flash.core.data.io.input_transform import InputTransform
19 from flash.core.data.transforms import ApplyToKeys, kornia_collate, KorniaParallelTransforms
20 from flash.core.utilities.imports import _KORNIA_AVAILABLE, _TORCHVISION_AVAILABLE
21
22 if _KORNIA_AVAILABLE:
23 import kornia as K
24
25 if _TORCHVISION_AVAILABLE:
26 from torchvision import transforms as T
27
28
29 def prepare_target(batch: Dict[str, Any]) -> Dict[str, Any]:
30 """Convert the target mask to long and remove the channel dimension."""
31 if DataKeys.TARGET in batch:
32 batch[DataKeys.TARGET] = batch[DataKeys.TARGET].long().squeeze(1)
33 return batch
34
35
36 def remove_extra_dimensions(batch: Dict[str, Any]):
37 if isinstance(batch[DataKeys.INPUT], list):
38 assert len(batch[DataKeys.INPUT]) == 1
39 batch[DataKeys.INPUT] = batch[DataKeys.INPUT][0]
40 return batch
41
42
43 @dataclass
44 class SemanticSegmentationInputTransform(InputTransform):
45
46 image_size: Tuple[int, int] = (128, 128)
47
48 def train_per_sample_transform(self) -> Callable:
49 return ApplyToKeys(
50 [DataKeys.INPUT, DataKeys.TARGET],
51 KorniaParallelTransforms(
52 K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
53 ),
54 )
55
56 def per_sample_transform(self) -> Callable:
57 return ApplyToKeys(
58 [DataKeys.INPUT, DataKeys.TARGET],
59 KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
60 )
61
62 def predict_per_sample_transform(self) -> Callable:
63 return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation="nearest"))
64
65 def collate(self) -> Callable:
66 return kornia_collate
67
68 def per_batch_transform(self) -> Callable:
69 return T.Compose([prepare_target, remove_extra_dimensions])
70
[end of flash/image/segmentation/input_transform.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flash/image/segmentation/input_transform.py b/flash/image/segmentation/input_transform.py
--- a/flash/image/segmentation/input_transform.py
+++ b/flash/image/segmentation/input_transform.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
-from typing import Any, Callable, Dict, Tuple
+from typing import Any, Callable, Dict, Tuple, Union
from flash.core.data.io.input import DataKeys
from flash.core.data.io.input_transform import InputTransform
@@ -44,23 +44,43 @@
class SemanticSegmentationInputTransform(InputTransform):
image_size: Tuple[int, int] = (128, 128)
+ mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)
+ std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)
def train_per_sample_transform(self) -> Callable:
- return ApplyToKeys(
- [DataKeys.INPUT, DataKeys.TARGET],
- KorniaParallelTransforms(
- K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
- ),
+ return T.Compose(
+ [
+ ApplyToKeys(
+ [DataKeys.INPUT, DataKeys.TARGET],
+ KorniaParallelTransforms(
+ K.geometry.Resize(self.image_size, interpolation="nearest"),
+ K.augmentation.RandomHorizontalFlip(p=0.5),
+ ),
+ ),
+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),
+ ]
)
def per_sample_transform(self) -> Callable:
- return ApplyToKeys(
- [DataKeys.INPUT, DataKeys.TARGET],
- KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
+ return T.Compose(
+ [
+ ApplyToKeys(
+ [DataKeys.INPUT, DataKeys.TARGET],
+ KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
+ ),
+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),
+ ]
)
def predict_per_sample_transform(self) -> Callable:
- return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation="nearest"))
+ return ApplyToKeys(
+ DataKeys.INPUT,
+ K.geometry.Resize(
+ self.image_size,
+ interpolation="nearest",
+ ),
+ K.augmentation.Normalize(mean=self.mean, std=self.std),
+ )
def collate(self) -> Callable:
return kornia_collate
| {"golden_diff": "diff --git a/flash/image/segmentation/input_transform.py b/flash/image/segmentation/input_transform.py\n--- a/flash/image/segmentation/input_transform.py\n+++ b/flash/image/segmentation/input_transform.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n from dataclasses import dataclass\n-from typing import Any, Callable, Dict, Tuple\n+from typing import Any, Callable, Dict, Tuple, Union\n \n from flash.core.data.io.input import DataKeys\n from flash.core.data.io.input_transform import InputTransform\n@@ -44,23 +44,43 @@\n class SemanticSegmentationInputTransform(InputTransform):\n \n image_size: Tuple[int, int] = (128, 128)\n+ mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)\n+ std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)\n \n def train_per_sample_transform(self) -> Callable:\n- return ApplyToKeys(\n- [DataKeys.INPUT, DataKeys.TARGET],\n- KorniaParallelTransforms(\n- K.geometry.Resize(self.image_size, interpolation=\"nearest\"), K.augmentation.RandomHorizontalFlip(p=0.5)\n- ),\n+ return T.Compose(\n+ [\n+ ApplyToKeys(\n+ [DataKeys.INPUT, DataKeys.TARGET],\n+ KorniaParallelTransforms(\n+ K.geometry.Resize(self.image_size, interpolation=\"nearest\"),\n+ K.augmentation.RandomHorizontalFlip(p=0.5),\n+ ),\n+ ),\n+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),\n+ ]\n )\n \n def per_sample_transform(self) -> Callable:\n- return ApplyToKeys(\n- [DataKeys.INPUT, DataKeys.TARGET],\n- KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\n+ return T.Compose(\n+ [\n+ ApplyToKeys(\n+ [DataKeys.INPUT, DataKeys.TARGET],\n+ KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\n+ ),\n+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),\n+ ]\n )\n \n def predict_per_sample_transform(self) -> Callable:\n- return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation=\"nearest\"))\n+ return ApplyToKeys(\n+ DataKeys.INPUT,\n+ K.geometry.Resize(\n+ self.image_size,\n+ interpolation=\"nearest\",\n+ ),\n+ K.augmentation.Normalize(mean=self.mean, std=self.std),\n+ )\n \n def collate(self) -> Callable:\n return kornia_collate\n", "issue": "Enable input normalization in SemanticSegmentationData module\n## \ud83d\ude80 Feature\r\nAdd the possibility to normalize Input images in SemanticSegmentationData module\r\n\r\n### Motivation\r\nEnable effortless normalization, as already implemented by ImageClassificationData: optionally configurable by doing: \r\n```python\r\ndm = SemanticSegmentationData.from_folders(\r\n # ...\r\n args_transforms=dict(mean=mean,std=std)\r\n)\r\n```\r\n\r\n### Pitch\r\nChange [/flash/image/segmentation/input_transform.py:43](https://github.com/Lightning-AI/lightning-flash/blob/master/flash/image/segmentation/input_transform.py#L43)\r\n\r\n```python\r\n\r\n@dataclass\r\nclass SemanticSegmentationInputTransform(InputTransform):\r\n\r\n image_size: Tuple[int, int] = (128, 128)\r\n\r\n def train_per_sample_transform(self) -> Callable:\r\n return ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n KorniaParallelTransforms(\r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"), K.augmentation.RandomHorizontalFlip(p=0.5)\r\n ),\r\n )\r\n\r\n def per_sample_transform(self) -> Callable:\r\n return ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n 
KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\r\n )\r\n\r\n def predict_per_sample_transform(self) -> Callable:\r\n return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation=\"nearest\"))\r\n```\r\n\r\ninto this\r\n\r\n```python\r\n@dataclass\r\nclass SemanticSegmentationInputTransform(InputTransform):\r\n\r\n image_size: Tuple[int, int] = (128, 128)\r\n mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)\r\n std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)\r\n\r\n\r\n def train_per_sample_transform(self) -> Callable:\r\n return T.Compose(\r\n [\r\n ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n KorniaParallelTransforms(\r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"),\r\n )\r\n ),\r\n ApplyToKeys(\r\n [DataKeys.INPUT],\r\n K.augmentation.Normalize(mean=mean, std=std)\r\n \r\n ),\r\n ]\r\n )\r\n\r\n def per_sample_transform(self) -> Callable:\r\n return T.Compose(\r\n [\r\n ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n KorniaParallelTransforms(\r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"),\r\n )\r\n ),\r\n ApplyToKeys(\r\n [DataKeys.INPUT],\r\n K.augmentation.Normalize(mean=mean, std=std)\r\n \r\n ),\r\n ]\r\n )\r\n\r\n def predict_per_sample_transform(self) -> Callable: \r\n return ApplyToKeys(\r\n DataKeys.INPUT, \r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"), \r\n K.augmentation.Normalize(mean=mean, std=std)\r\n )\r\n\r\n```\r\n\r\n### Alternatives\r\nThe alternative is to write a custom InputTransform object every time.\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Dict, Tuple\n\nfrom flash.core.data.io.input import DataKeys\nfrom flash.core.data.io.input_transform import InputTransform\nfrom flash.core.data.transforms import ApplyToKeys, kornia_collate, KorniaParallelTransforms\nfrom flash.core.utilities.imports import _KORNIA_AVAILABLE, _TORCHVISION_AVAILABLE\n\nif _KORNIA_AVAILABLE:\n import kornia as K\n\nif _TORCHVISION_AVAILABLE:\n from torchvision import transforms as T\n\n\ndef prepare_target(batch: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Convert the target mask to long and remove the channel dimension.\"\"\"\n if DataKeys.TARGET in batch:\n batch[DataKeys.TARGET] = batch[DataKeys.TARGET].long().squeeze(1)\n return batch\n\n\ndef remove_extra_dimensions(batch: Dict[str, Any]):\n if isinstance(batch[DataKeys.INPUT], list):\n assert len(batch[DataKeys.INPUT]) == 1\n batch[DataKeys.INPUT] = batch[DataKeys.INPUT][0]\n return batch\n\n\n@dataclass\nclass SemanticSegmentationInputTransform(InputTransform):\n\n image_size: Tuple[int, int] = (128, 128)\n\n def train_per_sample_transform(self) -> Callable:\n return ApplyToKeys(\n [DataKeys.INPUT, DataKeys.TARGET],\n KorniaParallelTransforms(\n K.geometry.Resize(self.image_size, 
interpolation=\"nearest\"), K.augmentation.RandomHorizontalFlip(p=0.5)\n ),\n )\n\n def per_sample_transform(self) -> Callable:\n return ApplyToKeys(\n [DataKeys.INPUT, DataKeys.TARGET],\n KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\n )\n\n def predict_per_sample_transform(self) -> Callable:\n return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation=\"nearest\"))\n\n def collate(self) -> Callable:\n return kornia_collate\n\n def per_batch_transform(self) -> Callable:\n return T.Compose([prepare_target, remove_extra_dimensions])\n", "path": "flash/image/segmentation/input_transform.py"}]} | 1,934 | 644 |
gh_patches_debug_8703 | rasdani/github-patches | git_diff | svthalia__concrexit-1836 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DeviceListView permission not checked
### Describe the bug
The `DeviceListView` of `api/v2` has a `IsAuthenticatedOwnerOrReadOnly` permission which is never checked as `get_object` is not used in the view.
### How to reproduce
Steps to reproduce the behaviour:
1. Set a breakpoint in the `IsAuthenticatedOwnerOrReadOnly` class
2. Enable the debugger
3. See that the `has_object_permission` method is not called on a request to the corresponding endpoint
</issue>
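Editor's note for background: DRF only evaluates object-level permissions when a view calls `check_object_permissions()`, which the generic views do inside `get_object()`; list/create views never reach that call. An illustrative permission class (not the project's actual code):

```python
from rest_framework import permissions


class IsAuthenticatedOwnerOrReadOnly(permissions.BasePermission):
    def has_permission(self, request, view):
        # evaluated for every request, including list and create views
        return True

    def has_object_permission(self, request, view, obj):
        # only evaluated when a view calls check_object_permissions(),
        # which generics do inside get_object(); ListAPIView/CreateAPIView never do
        return request.user == getattr(obj, "user", None)
```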
<code>
[start of website/pushnotifications/api/v2/views.py]
1 from django.utils.translation import get_language_from_request
2 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
3 from rest_framework.filters import OrderingFilter
4 from rest_framework.generics import (
5 ListAPIView,
6 RetrieveAPIView,
7 CreateAPIView,
8 UpdateAPIView,
9 )
10
11 from pushnotifications.api.v2.filters import CategoryFilter
12 from pushnotifications.api.v2.permissions import IsAuthenticatedOwnerOrReadOnly
13 from pushnotifications.api.v2.serializers import (
14 DeviceSerializer,
15 MessageSerializer,
16 CategorySerializer,
17 )
18 from pushnotifications.models import Device, Category, Message
19 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
20
21
22 class DeviceListView(ListAPIView, CreateAPIView):
23 """Returns an overview of all devices that are owner by the user."""
24
25 permission_classes = [
26 IsAuthenticatedOrTokenHasScopeForMethod,
27 IsAuthenticatedOwnerOrReadOnly,
28 ]
29 serializer_class = DeviceSerializer
30 queryset = Device.objects.all()
31 required_scopes_per_method = {
32 "GET": ["pushnotifications:read"],
33 "POST": ["pushnotifications:write"],
34 }
35
36 def get_queryset(self):
37 if self.request.user:
38 return Device.objects.filter(user=self.request.user)
39 return super().get_queryset()
40
41 def perform_create(self, serializer):
42 language = get_language_from_request(self.request)
43
44 try:
45 serializer.instance = Device.objects.get(
46 user=self.request.user,
47 registration_id=serializer.validated_data["registration_id"],
48 )
49 except Device.DoesNotExist:
50 pass
51
52 data = serializer.validated_data
53 categories = [c.pk for c in Category.objects.all()]
54 if "receive_category" in data and len(data["receive_category"]) > 0:
55 categories = data["receive_category"] + ["general"]
56
57 serializer.save(
58 user=self.request.user, language=language, receive_category=categories
59 )
60
61
62 class DeviceDetailView(RetrieveAPIView, UpdateAPIView):
63 """Returns details of a device."""
64
65 permission_classes = [
66 IsAuthenticatedOrTokenHasScope,
67 IsAuthenticatedOwnerOrReadOnly,
68 ]
69 serializer_class = DeviceSerializer
70 required_scopes = ["pushnotifications:read", "pushnotifications:write"]
71 queryset = Device.objects.all()
72
73 def perform_update(self, serializer):
74 serializer.save(user=self.request.user)
75
76
77 class CategoryListView(ListAPIView):
78 """Returns an overview of all available categories for push notifications."""
79
80 serializer_class = CategorySerializer
81 queryset = Category.objects.all()
82 required_scopes = ["pushnotifications:read"]
83
84
85 class MessageListView(ListAPIView):
86 """Returns a list of message sent to the user."""
87
88 serializer_class = MessageSerializer
89 required_scopes = ["pushnotifications:read"]
90 permission_classes = [
91 IsAuthenticatedOrTokenHasScope,
92 ]
93 filter_backends = (OrderingFilter, CategoryFilter)
94 ordering_fields = ("sent",)
95
96 def get_queryset(self):
97 if self.request.user:
98 return Message.all_objects.filter(users=self.request.user)
99 return Message.all_objects.all()
100
101
102 class MessageDetailView(RetrieveAPIView):
103 """Returns a message."""
104
105 serializer_class = MessageSerializer
106 required_scopes = ["pushnotifications:read"]
107 permission_classes = [
108 IsAuthenticatedOrTokenHasScope,
109 ]
110
111 def get_queryset(self):
112 if self.request.user:
113 return Message.all_objects.filter(users=self.request.user)
114 return Message.all_objects.all()
115
[end of website/pushnotifications/api/v2/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/pushnotifications/api/v2/views.py b/website/pushnotifications/api/v2/views.py
--- a/website/pushnotifications/api/v2/views.py
+++ b/website/pushnotifications/api/v2/views.py
@@ -22,10 +22,7 @@
class DeviceListView(ListAPIView, CreateAPIView):
"""Returns an overview of all devices that are owner by the user."""
- permission_classes = [
- IsAuthenticatedOrTokenHasScopeForMethod,
- IsAuthenticatedOwnerOrReadOnly,
- ]
+ permission_classes = [IsAuthenticatedOrTokenHasScopeForMethod]
serializer_class = DeviceSerializer
queryset = Device.objects.all()
required_scopes_per_method = {
| {"golden_diff": "diff --git a/website/pushnotifications/api/v2/views.py b/website/pushnotifications/api/v2/views.py\n--- a/website/pushnotifications/api/v2/views.py\n+++ b/website/pushnotifications/api/v2/views.py\n@@ -22,10 +22,7 @@\n class DeviceListView(ListAPIView, CreateAPIView):\n \"\"\"Returns an overview of all devices that are owner by the user.\"\"\"\n \n- permission_classes = [\n- IsAuthenticatedOrTokenHasScopeForMethod,\n- IsAuthenticatedOwnerOrReadOnly,\n- ]\n+ permission_classes = [IsAuthenticatedOrTokenHasScopeForMethod]\n serializer_class = DeviceSerializer\n queryset = Device.objects.all()\n required_scopes_per_method = {\n", "issue": "DeviceListView permission not checked\n### Describe the bug\r\nThe `DeviceListView` of `api/v2` has a `IsAuthenticatedOwnerOrReadOnly` permission which is never checked as `get_object` is not used in the view.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Set a breakpoint in the `IsAuthenticatedOwnerOrReadOnly` class\r\n2. Enable the debugger\r\n3. See that the `has_object_permission` method is not called on a request to the corresponding endpoint\r\n\n", "before_files": [{"content": "from django.utils.translation import get_language_from_request\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.filters import OrderingFilter\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n CreateAPIView,\n UpdateAPIView,\n)\n\nfrom pushnotifications.api.v2.filters import CategoryFilter\nfrom pushnotifications.api.v2.permissions import IsAuthenticatedOwnerOrReadOnly\nfrom pushnotifications.api.v2.serializers import (\n DeviceSerializer,\n MessageSerializer,\n CategorySerializer,\n)\nfrom pushnotifications.models import Device, Category, Message\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass DeviceListView(ListAPIView, CreateAPIView):\n \"\"\"Returns an overview of all devices that are owner by the user.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n IsAuthenticatedOwnerOrReadOnly,\n ]\n serializer_class = DeviceSerializer\n queryset = Device.objects.all()\n required_scopes_per_method = {\n \"GET\": [\"pushnotifications:read\"],\n \"POST\": [\"pushnotifications:write\"],\n }\n\n def get_queryset(self):\n if self.request.user:\n return Device.objects.filter(user=self.request.user)\n return super().get_queryset()\n\n def perform_create(self, serializer):\n language = get_language_from_request(self.request)\n\n try:\n serializer.instance = Device.objects.get(\n user=self.request.user,\n registration_id=serializer.validated_data[\"registration_id\"],\n )\n except Device.DoesNotExist:\n pass\n\n data = serializer.validated_data\n categories = [c.pk for c in Category.objects.all()]\n if \"receive_category\" in data and len(data[\"receive_category\"]) > 0:\n categories = data[\"receive_category\"] + [\"general\"]\n\n serializer.save(\n user=self.request.user, language=language, receive_category=categories\n )\n\n\nclass DeviceDetailView(RetrieveAPIView, UpdateAPIView):\n \"\"\"Returns details of a device.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n IsAuthenticatedOwnerOrReadOnly,\n ]\n serializer_class = DeviceSerializer\n required_scopes = [\"pushnotifications:read\", \"pushnotifications:write\"]\n queryset = Device.objects.all()\n\n def perform_update(self, serializer):\n serializer.save(user=self.request.user)\n\n\nclass CategoryListView(ListAPIView):\n 
\"\"\"Returns an overview of all available categories for push notifications.\"\"\"\n\n serializer_class = CategorySerializer\n queryset = Category.objects.all()\n required_scopes = [\"pushnotifications:read\"]\n\n\nclass MessageListView(ListAPIView):\n \"\"\"Returns a list of message sent to the user.\"\"\"\n\n serializer_class = MessageSerializer\n required_scopes = [\"pushnotifications:read\"]\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n filter_backends = (OrderingFilter, CategoryFilter)\n ordering_fields = (\"sent\",)\n\n def get_queryset(self):\n if self.request.user:\n return Message.all_objects.filter(users=self.request.user)\n return Message.all_objects.all()\n\n\nclass MessageDetailView(RetrieveAPIView):\n \"\"\"Returns a message.\"\"\"\n\n serializer_class = MessageSerializer\n required_scopes = [\"pushnotifications:read\"]\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n\n def get_queryset(self):\n if self.request.user:\n return Message.all_objects.filter(users=self.request.user)\n return Message.all_objects.all()\n", "path": "website/pushnotifications/api/v2/views.py"}]} | 1,606 | 156 |
gh_patches_debug_16052 | rasdani/github-patches | git_diff | google__flax-985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Port ensembling HOWTO from old diff-based system
And instead, use a standalone doc with tests like in #771
Here is the old (pre-Linen) HOWTO diff, for reference:
https://github.com/google/flax/blob/master/howtos/diffs/ensembling.diff
</issue>
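Editor's note (not taken from the linked diff): the ensembling pattern that HOWTO covers is essentially vmapping `init`/`apply` over a batch of RNG keys; a rough Linen-era sketch, with `model`, `rng` and the batches assumed to exist:

```python
import jax


def init_ensemble(rng, num_models, model, dummy_batch):
    keys = jax.random.split(rng, num_models)
    # one parameter set per ensemble member, same module definition
    return jax.vmap(lambda k: model.init(k, dummy_batch))(keys)


def ensemble_apply(params, model, batch):
    # run every member on the same batch and average the predictions
    logits = jax.vmap(lambda p: model.apply(p, batch))(params)
    return logits.mean(axis=0)
```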
<code>
[start of docs/_ext/codediff.py]
1 # Copyright 2020 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import dataclasses
15 from typing import Optional, Sequence
16 import itertools
17
18 from docutils import nodes
19 from docutils.parsers.rst import directives
20 from docutils.statemachine import ViewList
21
22 import sphinx
23 from sphinx.util.docutils import SphinxDirective
24 """Sphinx directive for creating code diff tables.
25
26 Use directive as follows:
27
28 .. codediff::
29 :title-left: <LEFT_CODE_BLOCK_TITLE>
30 :title-right: <RIGHT_CODE_BLOCK_TITLE>
31 :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>
32 :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>
33
34 <CODE_BLOCK_LEFT>
35 ---
36 <CODE_BLOCK_RIGHT>
37 """
38
39 class CodeDiffParser:
40 def parse(self, lines, title_left='Base', title_right='Diff', code_sep='---'):
41 if code_sep not in lines:
42 raise ValueError('Code separator not found! Code snippets should be '
43 f'separated by {code_sep}.')
44 idx = lines.index(code_sep)
45 code_left = self._code_block(lines[0: idx])
46 code_right = self._code_block(lines[idx+1:])
47
48 self.max_left = max(len(x) for x in code_left + [title_left])
49 self.max_right = max(len(x) for x in code_right + [title_right])
50
51 output = [
52 self._hline(),
53 self._table_row(title_left, title_right),
54 self._hline(),
55 ]
56
57 for l, r in itertools.zip_longest(code_left, code_right, fillvalue=''):
58 output += [self._table_row(l, r)]
59
60 return output + [self._hline()]
61
62 def _code_block(self, lines):
63 # Remove right trailing whitespace so we can detect the comments.
64 lines = [x.rstrip() for x in lines]
65 highlight = lambda x : x.endswith('#!')
66 code = map(lambda x : x[:-2].rstrip() if highlight(x) else x, lines)
67 highlights = [i+1 for i in range(len(lines)) if highlight(lines[i])]
68 highlights = ','.join(str(i) for i in highlights)
69
70 directive = ['.. code-block:: python']
71 if highlights:
72 directive += [f' :emphasize-lines: {highlights}']
73
74 # Indent code and add empty line so the code is picked up by the directive.
75 return directive + [''] + list(map(lambda x: ' ' + x, code))
76
77 def _hline(self):
78 return '+' + '-'*(self.max_left+2) + '+' + '-'*(self.max_right+2) + '+'
79
80 def _rfill(self, text, max_len):
81 return text + ' ' * (max_len-len(text))
82
83 def _table_row(self, left, right):
84 text_left = self._rfill(left, self.max_left)
85 text_right = self._rfill(right, self.max_right)
86 return '| ' + text_left + ' | ' + text_right + ' |'
87
88
89 class CodeDiffDirective(SphinxDirective):
90 has_content = True
91 option_spec = {
92 'title_left': directives.unchanged,
93 'title_right': directives.unchanged,
94 'code_sep': directives.unchanged,
95 }
96
97 def run(self):
98 new_content = CodeDiffParser().parse(list(self.content), **self.options)
99
100 node = nodes.paragraph()
101 self.content = ViewList(new_content, self.content.parent)
102 self.state.nested_parse(self.content, self.content_offset, node)
103 return [node]
104
105 def setup(app):
106 app.add_directive('codediff', CodeDiffDirective)
107
108 return {
109 'version': sphinx.__display_version__,
110 'parallel_read_safe': True,
111 'parallel_write_safe': True,
112 }
113
[end of docs/_ext/codediff.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/_ext/codediff.py b/docs/_ext/codediff.py
--- a/docs/_ext/codediff.py
+++ b/docs/_ext/codediff.py
@@ -26,14 +26,14 @@
Use directive as follows:
.. codediff::
- :title-left: <LEFT_CODE_BLOCK_TITLE>
- :title-right: <RIGHT_CODE_BLOCK_TITLE>
- :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>
- :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>
+ :title_left: <LEFT_CODE_BLOCK_TITLE>
+ :title_right: <RIGHT_CODE_BLOCK_TITLE>
<CODE_BLOCK_LEFT>
---
<CODE_BLOCK_RIGHT>
+
+In order to highlight a line of code, prepend it with "#!".
"""
class CodeDiffParser:
@@ -94,7 +94,7 @@
'code_sep': directives.unchanged,
}
- def run(self):
+ def run(self):
new_content = CodeDiffParser().parse(list(self.content), **self.options)
node = nodes.paragraph()
| {"golden_diff": "diff --git a/docs/_ext/codediff.py b/docs/_ext/codediff.py\n--- a/docs/_ext/codediff.py\n+++ b/docs/_ext/codediff.py\n@@ -26,14 +26,14 @@\n Use directive as follows:\n \n .. codediff::\n- :title-left: <LEFT_CODE_BLOCK_TITLE>\n- :title-right: <RIGHT_CODE_BLOCK_TITLE>\n- :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>\n- :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>\n+ :title_left: <LEFT_CODE_BLOCK_TITLE>\n+ :title_right: <RIGHT_CODE_BLOCK_TITLE>\n \n <CODE_BLOCK_LEFT>\n ---\n <CODE_BLOCK_RIGHT>\n+\n+In order to highlight a line of code, prepend it with \"#!\".\n \"\"\"\n \n class CodeDiffParser:\n@@ -94,7 +94,7 @@\n 'code_sep': directives.unchanged,\n }\n \n- def run(self): \n+ def run(self):\n new_content = CodeDiffParser().parse(list(self.content), **self.options)\n \n node = nodes.paragraph()\n", "issue": "Port ensembling HOWTO from old diff based system\nAnd instead, use a standalone doc with tests like in #771\r\n\r\nHere is the old (pre-Linen) HOWTO diff, for reference:\r\nhttps://github.com/google/flax/blob/master/howtos/diffs/ensembling.diff\n", "before_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport dataclasses\nfrom typing import Optional, Sequence\nimport itertools\n\nfrom docutils import nodes\nfrom docutils.parsers.rst import directives\nfrom docutils.statemachine import ViewList\n\nimport sphinx\nfrom sphinx.util.docutils import SphinxDirective\n\"\"\"Sphinx directive for creating code diff tables.\n\nUse directive as follows:\n\n.. codediff::\n :title-left: <LEFT_CODE_BLOCK_TITLE>\n :title-right: <RIGHT_CODE_BLOCK_TITLE>\n :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>\n :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>\n \n <CODE_BLOCK_LEFT>\n ---\n <CODE_BLOCK_RIGHT>\n\"\"\"\n\nclass CodeDiffParser:\n def parse(self, lines, title_left='Base', title_right='Diff', code_sep='---'):\n if code_sep not in lines:\n raise ValueError('Code separator not found! Code snippets should be '\n f'separated by {code_sep}.')\n idx = lines.index(code_sep)\n code_left = self._code_block(lines[0: idx])\n code_right = self._code_block(lines[idx+1:])\n \n self.max_left = max(len(x) for x in code_left + [title_left])\n self.max_right = max(len(x) for x in code_right + [title_right])\n\n output = [\n self._hline(),\n self._table_row(title_left, title_right),\n self._hline(),\n ]\n\n for l, r in itertools.zip_longest(code_left, code_right, fillvalue=''):\n output += [self._table_row(l, r)]\n\n return output + [self._hline()]\n\n def _code_block(self, lines):\n # Remove right trailing whitespace so we can detect the comments.\n lines = [x.rstrip() for x in lines]\n highlight = lambda x : x.endswith('#!')\n code = map(lambda x : x[:-2].rstrip() if highlight(x) else x, lines)\n highlights = [i+1 for i in range(len(lines)) if highlight(lines[i])]\n highlights = ','.join(str(i) for i in highlights)\n\n directive = ['.. 
code-block:: python']\n if highlights:\n directive += [f' :emphasize-lines: {highlights}']\n\n # Indent code and add empty line so the code is picked up by the directive.\n return directive + [''] + list(map(lambda x: ' ' + x, code))\n\n def _hline(self):\n return '+' + '-'*(self.max_left+2) + '+' + '-'*(self.max_right+2) + '+'\n\n def _rfill(self, text, max_len):\n return text + ' ' * (max_len-len(text))\n\n def _table_row(self, left, right):\n text_left = self._rfill(left, self.max_left)\n text_right = self._rfill(right, self.max_right)\n return '| ' + text_left + ' | ' + text_right + ' |'\n\n\nclass CodeDiffDirective(SphinxDirective):\n has_content = True\n option_spec = {\n 'title_left': directives.unchanged,\n 'title_right': directives.unchanged,\n 'code_sep': directives.unchanged,\n }\n\n def run(self): \n new_content = CodeDiffParser().parse(list(self.content), **self.options)\n\n node = nodes.paragraph()\n self.content = ViewList(new_content, self.content.parent)\n self.state.nested_parse(self.content, self.content_offset, node)\n return [node]\n\ndef setup(app):\n app.add_directive('codediff', CodeDiffDirective)\n\n return {\n 'version': sphinx.__display_version__,\n 'parallel_read_safe': True,\n 'parallel_write_safe': True,\n }\n", "path": "docs/_ext/codediff.py"}]} | 1,774 | 243 |
gh_patches_debug_3737 | rasdani/github-patches | git_diff | intel__dffml-529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs: Enable hiding of Python prompts
This will be very helpful for copy-pasting examples.
References:
- https://github.com/readthedocs/sphinx_rtd_theme/issues/167
</issue>
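Editor's note: the usual way to make `>>>` prompts hideable in Sphinx docs is a small JavaScript helper registered from `conf.py`; the sketch below shows that common pattern (and matches the change that appears in the diff later in this entry). The static file is assumed to be served from `html_static_path`.

```python
# docs/conf.py
def setup(app):
    # copybutton.js adds a small ">>>" toggle so prompts and output can be
    # hidden before copying; the .js file itself must live under _static/
    app.add_javascript("copybutton.js")
```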
<code>
[start of docs/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # http://www.sphinx-doc.org/en/master/config
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import os
14 import sys
15 import pathlib
16
17 sys.path.insert(0, os.path.abspath("."))
18 from dffml.version import VERSION
19
20 # -- Project information -----------------------------------------------------
21
22 project = "DFFML"
23 copyright = "2019, Intel"
24 author = "John Andersen"
25
26 # The short X.Y version
27 version = VERSION
28
29 # The full version, including alpha/beta/rc tags
30 release = version
31
32
33 # -- General configuration ---------------------------------------------------
34
35 # Add any Sphinx extension module names here, as strings. They can be
36 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
37 # ones.
38 extensions = [
39 "sphinx.ext.intersphinx",
40 "sphinx.ext.autodoc",
41 "sphinx.ext.viewcode",
42 "sphinx.ext.napoleon",
43 "sphinx.ext.doctest",
44 "recommonmark",
45 ]
46
47 intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ["_templates"]
51
52 # List of patterns, relative to source directory, that match files and
53 # directories to ignore when looking for source files.
54 # This pattern also affects html_static_path and html_extra_path.
55 exclude_patterns = []
56
57 # Enable markdown
58 source_suffix = {
59 ".rst": "restructuredtext",
60 ".txt": "markdown",
61 ".md": "markdown",
62 }
63
64
65 # -- Options for HTML output -------------------------------------------------
66
67 # The theme to use for HTML and HTML Help pages. See the documentation for
68 # a list of builtin themes.
69 #
70 html_theme = "sphinx_rtd_theme"
71
72 html_context = {
73 "github_user": "intel",
74 "github_repo": "dffml",
75 "github_version": "master",
76 "conf_py_path": "/docs/",
77 "display_github": True,
78 }
79
80 html_theme_options = {
81 "description": "The fastest path to machine learning integration",
82 "github_url": "https://github.com/intel/dffml/",
83 }
84
85 # Add any paths that contain custom static files (such as style sheets) here,
86 # relative to this directory. They are copied after the builtin static files,
87 # so a file named "default.css" will overwrite the builtin "default.css".
88 html_static_path = ["_static"]
89
90 # -- Extension configuration -------------------------------------------------
91
92 napoleon_numpy_docstring = True
93
94 doctest_global_setup = (
95 pathlib.Path(__file__).parent / "doctest_header.py"
96 ).read_text()
97
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -87,6 +87,11 @@
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
+
+def setup(app):
+ app.add_javascript("copybutton.js")
+
+
# -- Extension configuration -------------------------------------------------
napoleon_numpy_docstring = True
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -87,6 +87,11 @@\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n html_static_path = [\"_static\"]\n \n+\n+def setup(app):\n+ app.add_javascript(\"copybutton.js\")\n+\n+\n # -- Extension configuration -------------------------------------------------\n \n napoleon_numpy_docstring = True\n", "issue": "docs: Enable hiding of Python prompts\nThis will be very helpful for copy pasting examples.\r\n\r\nReferences:\r\n- https://github.com/readthedocs/sphinx_rtd_theme/issues/167\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nimport pathlib\n\nsys.path.insert(0, os.path.abspath(\".\"))\nfrom dffml.version import VERSION\n\n# -- Project information -----------------------------------------------------\n\nproject = \"DFFML\"\ncopyright = \"2019, Intel\"\nauthor = \"John Andersen\"\n\n# The short X.Y version\nversion = VERSION\n\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.doctest\",\n \"recommonmark\",\n]\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n# Enable markdown\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"markdown\",\n \".md\": \"markdown\",\n}\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\nhtml_context = {\n \"github_user\": \"intel\",\n \"github_repo\": \"dffml\",\n \"github_version\": \"master\",\n \"conf_py_path\": \"/docs/\",\n \"display_github\": True,\n}\n\nhtml_theme_options = {\n \"description\": \"The fastest path to machine learning integration\",\n \"github_url\": \"https://github.com/intel/dffml/\",\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# -- Extension configuration -------------------------------------------------\n\nnapoleon_numpy_docstring = True\n\ndoctest_global_setup = (\n pathlib.Path(__file__).parent / \"doctest_header.py\"\n).read_text()\n", "path": "docs/conf.py"}]} | 1,397 | 98 |
gh_patches_debug_28893 | rasdani/github-patches | git_diff | mirumee__ariadne-387 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Argument 'code' has invalid value "ABC"
I think there is a bug when using both literal and variable values with a custom scalar.
```python
from ariadne import ScalarType
testscalar = ScalarType('TestScalar')
@testscalar.serializer
def serializer(value):
return value.upper()
@testscalar.value_parser
def value_parser(value):
if value:
return serializer(value)
@testscalar.literal_parser
def literal_parser(ast):
value = str(ast.value)
return value_parser(value)
```
If you then make the following query:
```graphql
query($code: TestScalar) {
test1: testType(code: $code) {
id
}
test2: testType(code: "ABC") {
id
}
}
```
This error is returned: Argument 'code' has invalid value "ABC"
If you don't pass variables and only use "literal" values, it works. The same goes if you only pass variables: it works fine.
If you don't set up a resolver for "testType" then no error is returned.
Not sure what is happening but I think this is a bug. If not, does anyone know why this is happening?
</issue>
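Editor's note (illustration only, not a confirmed workaround): given that `set_value_parser` in the listing below installs a default literal parser automatically, a binding that defines only the value parser would exercise the same code path for both literals and variables:

```python
from ariadne import ScalarType

testscalar = ScalarType("TestScalar")


@testscalar.serializer
def serialize_test(value):
    return value.upper()


@testscalar.value_parser
def parse_test_value(value):
    # set_value_parser also installs a default literal parser (built on
    # value_from_ast_untyped), so literals and variables share this code path
    return serialize_test(value) if value else None
```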
<code>
[start of ariadne/scalars.py]
1 from typing import Optional, cast
2
3 from graphql.language.ast import (
4 BooleanValueNode,
5 FloatValueNode,
6 IntValueNode,
7 StringValueNode,
8 )
9 from graphql.type import (
10 GraphQLNamedType,
11 GraphQLScalarLiteralParser,
12 GraphQLScalarSerializer,
13 GraphQLScalarType,
14 GraphQLScalarValueParser,
15 GraphQLSchema,
16 )
17 from graphql.utilities import value_from_ast_untyped
18
19 from .types import SchemaBindable
20
21
22 class ScalarType(SchemaBindable):
23 _serialize: Optional[GraphQLScalarSerializer]
24 _parse_value: Optional[GraphQLScalarValueParser]
25 _parse_literal: Optional[GraphQLScalarLiteralParser]
26
27 def __init__(
28 self,
29 name: str,
30 *,
31 serializer: GraphQLScalarSerializer = None,
32 value_parser: GraphQLScalarValueParser = None,
33 literal_parser: GraphQLScalarLiteralParser = None,
34 ) -> None:
35 self.name = name
36 self._serialize = serializer
37 self._parse_value = value_parser
38 self._parse_literal = literal_parser
39
40 def set_serializer(self, f: GraphQLScalarSerializer) -> GraphQLScalarSerializer:
41 self._serialize = f
42 return f
43
44 def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:
45 self._parse_value = f
46 if not self._parse_literal:
47 self._parse_literal = create_default_literal_parser(f)
48 return f
49
50 def set_literal_parser(
51 self, f: GraphQLScalarLiteralParser
52 ) -> GraphQLScalarLiteralParser:
53 self._parse_literal = f
54 return f
55
56 # Alias above setters for consistent decorator API
57 serializer = set_serializer
58 value_parser = set_value_parser
59 literal_parser = set_literal_parser
60
61 def bind_to_schema(self, schema: GraphQLSchema) -> None:
62 graphql_type = schema.type_map.get(self.name)
63 self.validate_graphql_type(graphql_type)
64 graphql_type = cast(GraphQLScalarType, graphql_type)
65
66 if self._serialize:
67 # See mypy bug https://github.com/python/mypy/issues/2427
68 graphql_type.serialize = self._serialize # type: ignore
69 if self._parse_value:
70 graphql_type.parse_value = self._parse_value # type: ignore
71 if self._parse_literal:
72 graphql_type.parse_literal = self._parse_literal # type: ignore
73
74 def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:
75 if not graphql_type:
76 raise ValueError("Scalar %s is not defined in the schema" % self.name)
77 if not isinstance(graphql_type, GraphQLScalarType):
78 raise ValueError(
79 "%s is defined in the schema, but it is instance of %s (expected %s)"
80 % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)
81 )
82
83
84 SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)
85
86
87 def create_default_literal_parser(
88 value_parser: GraphQLScalarValueParser,
89 ) -> GraphQLScalarLiteralParser:
90 def default_literal_parser(ast):
91 return value_parser(value_from_ast_untyped(ast))
92
93 return default_literal_parser
94
[end of ariadne/scalars.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/scalars.py b/ariadne/scalars.py
--- a/ariadne/scalars.py
+++ b/ariadne/scalars.py
@@ -1,11 +1,5 @@
from typing import Optional, cast
-from graphql.language.ast import (
- BooleanValueNode,
- FloatValueNode,
- IntValueNode,
- StringValueNode,
-)
from graphql.type import (
GraphQLNamedType,
GraphQLScalarLiteralParser,
@@ -14,7 +8,6 @@
GraphQLScalarValueParser,
GraphQLSchema,
)
-from graphql.utilities import value_from_ast_untyped
from .types import SchemaBindable
@@ -43,8 +36,6 @@
def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:
self._parse_value = f
- if not self._parse_literal:
- self._parse_literal = create_default_literal_parser(f)
return f
def set_literal_parser(
@@ -79,15 +70,3 @@
"%s is defined in the schema, but it is instance of %s (expected %s)"
% (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)
)
-
-
-SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)
-
-
-def create_default_literal_parser(
- value_parser: GraphQLScalarValueParser,
-) -> GraphQLScalarLiteralParser:
- def default_literal_parser(ast):
- return value_parser(value_from_ast_untyped(ast))
-
- return default_literal_parser
| {"golden_diff": "diff --git a/ariadne/scalars.py b/ariadne/scalars.py\n--- a/ariadne/scalars.py\n+++ b/ariadne/scalars.py\n@@ -1,11 +1,5 @@\n from typing import Optional, cast\n \n-from graphql.language.ast import (\n- BooleanValueNode,\n- FloatValueNode,\n- IntValueNode,\n- StringValueNode,\n-)\n from graphql.type import (\n GraphQLNamedType,\n GraphQLScalarLiteralParser,\n@@ -14,7 +8,6 @@\n GraphQLScalarValueParser,\n GraphQLSchema,\n )\n-from graphql.utilities import value_from_ast_untyped\n \n from .types import SchemaBindable\n \n@@ -43,8 +36,6 @@\n \n def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:\n self._parse_value = f\n- if not self._parse_literal:\n- self._parse_literal = create_default_literal_parser(f)\n return f\n \n def set_literal_parser(\n@@ -79,15 +70,3 @@\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)\n )\n-\n-\n-SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)\n-\n-\n-def create_default_literal_parser(\n- value_parser: GraphQLScalarValueParser,\n-) -> GraphQLScalarLiteralParser:\n- def default_literal_parser(ast):\n- return value_parser(value_from_ast_untyped(ast))\n-\n- return default_literal_parser\n", "issue": "Argument 'code' has invalid value \"ABC\"\nI think there is a bug when using both literal and variable values with a custom scalar.\r\n\r\n```python\r\nfrom ariadne import ScalarType\r\n\r\ntestscalar = ScalarType('TestScalar')\r\n\r\[email protected]\r\ndef serializer(value):\r\n return value.upper()\r\n\r\n\r\[email protected]_parser\r\ndef value_parser(value):\r\n if value:\r\n return serializer(value)\r\n\r\n\r\[email protected]_parser\r\ndef literal_parser(ast):\r\n value = str(ast.value)\r\n return value_parser(value)\r\n```\r\n\r\nIf you then make the following query:\r\n```graphql\r\nquery($code: TestScalar) {\r\n test1: testType(code: $code) {\r\n id\r\n }\r\n test2: testType(code: \"ABC\") {\r\n id\r\n }\r\n}\r\n```\r\n This error is returned: Argument 'code' has invalid value \"ABC\"\r\n\r\nIf you don't pass variables and only use \"literal\" values it works. Same for if you only pass variables it works fine.\r\n\r\nIf you don't set up a resolver for \"testType\" then no error is returned.\r\n\r\nNot sure what is happening but I think this is a bug. 
If not, does anyone know why this is happening?\n", "before_files": [{"content": "from typing import Optional, cast\n\nfrom graphql.language.ast import (\n BooleanValueNode,\n FloatValueNode,\n IntValueNode,\n StringValueNode,\n)\nfrom graphql.type import (\n GraphQLNamedType,\n GraphQLScalarLiteralParser,\n GraphQLScalarSerializer,\n GraphQLScalarType,\n GraphQLScalarValueParser,\n GraphQLSchema,\n)\nfrom graphql.utilities import value_from_ast_untyped\n\nfrom .types import SchemaBindable\n\n\nclass ScalarType(SchemaBindable):\n _serialize: Optional[GraphQLScalarSerializer]\n _parse_value: Optional[GraphQLScalarValueParser]\n _parse_literal: Optional[GraphQLScalarLiteralParser]\n\n def __init__(\n self,\n name: str,\n *,\n serializer: GraphQLScalarSerializer = None,\n value_parser: GraphQLScalarValueParser = None,\n literal_parser: GraphQLScalarLiteralParser = None,\n ) -> None:\n self.name = name\n self._serialize = serializer\n self._parse_value = value_parser\n self._parse_literal = literal_parser\n\n def set_serializer(self, f: GraphQLScalarSerializer) -> GraphQLScalarSerializer:\n self._serialize = f\n return f\n\n def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:\n self._parse_value = f\n if not self._parse_literal:\n self._parse_literal = create_default_literal_parser(f)\n return f\n\n def set_literal_parser(\n self, f: GraphQLScalarLiteralParser\n ) -> GraphQLScalarLiteralParser:\n self._parse_literal = f\n return f\n\n # Alias above setters for consistent decorator API\n serializer = set_serializer\n value_parser = set_value_parser\n literal_parser = set_literal_parser\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLScalarType, graphql_type)\n\n if self._serialize:\n # See mypy bug https://github.com/python/mypy/issues/2427\n graphql_type.serialize = self._serialize # type: ignore\n if self._parse_value:\n graphql_type.parse_value = self._parse_value # type: ignore\n if self._parse_literal:\n graphql_type.parse_literal = self._parse_literal # type: ignore\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n if not graphql_type:\n raise ValueError(\"Scalar %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLScalarType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)\n )\n\n\nSCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)\n\n\ndef create_default_literal_parser(\n value_parser: GraphQLScalarValueParser,\n) -> GraphQLScalarLiteralParser:\n def default_literal_parser(ast):\n return value_parser(value_from_ast_untyped(ast))\n\n return default_literal_parser\n", "path": "ariadne/scalars.py"}]} | 1,644 | 352 |
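The golden diff above drops ariadne's home-grown default literal parser, so a scalar that registers only a serializer and a value parser falls back to graphql-core's own literal handling, which resolves variables before calling the value parser. Below is a minimal sketch of that usage pattern; the scalar name and the normalization applied are illustrative assumptions, not taken from a real schema.

```python
from ariadne import ScalarType

testscalar = ScalarType("TestScalar")


@testscalar.serializer
def serialize_test_scalar(value):
    # Normalize outgoing values before they are written to the response.
    return str(value).upper()


@testscalar.value_parser
def parse_test_scalar_value(value):
    # With no literal parser registered, graphql-core converts inline literals
    # (e.g. code: "ABC") to plain Python values and routes them here as well,
    # so literal and variable arguments take the same path.
    return str(value).upper() if value is not None else None
```

With this shape, a query that mixes an inline literal and a `$code` variable should no longer trip the "invalid value" error described in the issue.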
gh_patches_debug_66140 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-1452 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Run Flake8 lint on RHEL6
Currently, flake8 is run only on RHEL7 and 8 and not on RHEL6. According to [the documentation](http://flake8.pycqa.org/en/latest/#installation) it is necessary to run flake8 with the exact Python version that is used. Thus, to be sure that the syntax is OK even for the older Python version, we have to run it on RHEL6 too.
Tackled in #1251.
</issue>
<code>
[start of setup.py]
1 import os
2 from setuptools import setup, find_packages
3
4 __here__ = os.path.dirname(os.path.abspath(__file__))
5
6 package_info = dict.fromkeys(["RELEASE", "COMMIT", "VERSION", "NAME"])
7
8 for name in package_info:
9 with open(os.path.join(__here__, "insights", name)) as f:
10 package_info[name] = f.read().strip()
11
12 entry_points = {
13 'console_scripts': [
14 'insights-run = insights:main',
15 'insights-info = insights.tools.query:main',
16 'gen_api = insights.tools.generate_api_config:main',
17 'insights-perf = insights.tools.perf:main',
18 'client = insights.client:run',
19 'mangle = insights.util.mangle:main'
20 ]
21 }
22
23 runtime = set([
24 'pyyaml>=3.10,<=3.13',
25 'six',
26 ])
27
28
29 def maybe_require(pkg):
30 try:
31 __import__(pkg)
32 except ImportError:
33 runtime.add(pkg)
34
35
36 maybe_require("importlib")
37 maybe_require("argparse")
38
39
40 client = set([
41 'requests',
42 'pyOpenSSL',
43 ])
44
45 develop = set([
46 'futures==3.0.5',
47 'requests==2.13.0',
48 'wheel',
49 ])
50
51 docs = set([
52 'Sphinx==1.7.9',
53 'nbsphinx==0.3.1',
54 'sphinx_rtd_theme',
55 'ipython<6',
56 'colorama',
57 ])
58
59 testing = set([
60 'coverage==4.3.4',
61 'pytest==3.0.6',
62 'pytest-cov==2.4.0',
63 'mock==2.0.0',
64 ])
65
66 linting = set([
67 'flake8==3.3.0',
68 ])
69
70 optional = set([
71 'jinja2',
72 'python-cjson',
73 'python-logstash',
74 'python-statsd',
75 'watchdog',
76 ])
77
78 if __name__ == "__main__":
79 # allows for runtime modification of rpm name
80 name = os.environ.get("INSIGHTS_CORE_NAME", package_info["NAME"])
81
82 setup(
83 name=name,
84 version=package_info["VERSION"],
85 description="Insights Core is a data collection and analysis framework",
86 long_description=open("README.rst").read(),
87 url="https://github.com/redhatinsights/insights-core",
88 author="Red Hat, Inc.",
89 author_email="[email protected]",
90 packages=find_packages(),
91 install_requires=list(runtime),
92 package_data={'': ['LICENSE']},
93 license='Apache 2.0',
94 extras_require={
95 'develop': list(runtime | develop | client | docs | linting | testing),
96 'client': list(runtime | client),
97 'optional': list(optional),
98 'docs': list(docs),
99 'linting': list(linting | client),
100 'testing': list(testing | client)
101 },
102 classifiers=[
103 'Development Status :: 5 - Production/Stable',
104 'Intended Audience :: Developers',
105 'Natural Language :: English',
106 'License :: OSI Approved :: Apache Software License',
107 'Programming Language :: Python',
108 'Programming Language :: Python :: 2.6',
109 'Programming Language :: Python :: 2.7',
110 'Programming Language :: Python :: 3.3',
111 'Programming Language :: Python :: 3.4',
112 'Programming Language :: Python :: 3.5',
113 'Programming Language :: Python :: 3.6'
114 ],
115 entry_points=entry_points,
116 include_package_data=True
117 )
118
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,7 +64,7 @@
])
linting = set([
- 'flake8==3.3.0',
+ 'flake8==2.6.2',
])
optional = set([
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,7 +64,7 @@\n ])\n \n linting = set([\n- 'flake8==3.3.0',\n+ 'flake8==2.6.2',\n ])\n \n optional = set([\n", "issue": "Run Flake8 lint on RHEL6\nCurrently, flake8 is run only on RHEL7 and 8 and not on RHEL6. According to [the documentation](http://flake8.pycqa.org/en/latest/#installation) it is necessary to run flake8 with the exact Python version that is used. Thus to be sure that the syntax is ok even for the older Python version, we have to run in to RHEL6 too.\r\n\r\nTackled in #1251.\n", "before_files": [{"content": "import os\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-run = insights:main',\n 'insights-info = insights.tools.query:main',\n 'gen_api = insights.tools.generate_api_config:main',\n 'insights-perf = insights.tools.perf:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'pyyaml>=3.10,<=3.13',\n 'six',\n])\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests',\n 'pyOpenSSL',\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'requests==2.13.0',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx==1.7.9',\n 'nbsphinx==0.3.1',\n 'sphinx_rtd_theme',\n 'ipython<6',\n 'colorama',\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\nlinting = set([\n 'flake8==3.3.0',\n])\n\noptional = set([\n 'jinja2',\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': list(runtime | develop | client | docs | linting | testing),\n 'client': list(runtime | client),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}]} | 1,656 | 70 |
gh_patches_debug_21365 | rasdani/github-patches | git_diff | GoogleCloudPlatform__PerfKitBenchmarker-586 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Help doesn't render with FlagValuesProxy.
Example:
```
[:~/git/PerfKitBenchmarker] [perfkit] release-0.23.0+* 1 ± python pkb.py --benchmarks redis_ycsb --machine_type n1-standard-4 --json_output redis_ycsb.json
ERROR:root:Unknown command line flag 'json_output'
Usage: pkb.py ARGS
<perfkitbenchmarker.context.FlagValuesProxy object at 0x7f51910bc050>
```
@ehankland - do you have a minute to look at this? If not assign back to me.
</issue>
<code>
[start of perfkitbenchmarker/context.py]
1 # Copyright 2015 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Module for working with the current thread context."""
16
17 import threading
18
19 import gflags as flags
20
21
22 class FlagsModuleProxy(object):
23 """Class which acts as a proxy for the flags module.
24
25 When the FLAGS attribute is accessed, BENCHMARK_FLAGS will be returned
26 rather than the global FlagValues object. BENCHMARK_FLAGS is an instance
27 of FlagValuesProxy, which enables benchmarks to run with different and
28 even conflicting flags. Accessing the GLOBAL_FLAGS attribute will return
29 the global FlagValues object. Otherwise, this will behave just like the
30 flags module.
31 """
32
33 def __getattr__(self, name):
34 if name == 'FLAGS':
35 return BENCHMARK_FLAGS
36 elif name == 'GLOBAL_FLAGS':
37 return flags.FLAGS
38 return flags.__dict__[name]
39
40
41 class FlagValuesProxy(object):
42 """Class which provides the same interface as FlagValues.
43
44 By acting as a proxy for the FlagValues object (i.e. flags.FLAGS),
45 this enables benchmark specific flags. This proxy attempts to
46 use the current thread's BenchmarkSpec's FlagValues object, but
47 falls back to using flags.FLAGS if the thread has no BenchmarkSpec
48 object.
49 """
50
51 @property
52 def _thread_flag_values(self):
53 """Returns the correct FlagValues object for the current thread.
54
55 This first tries to get the BenchmarkSpec object corresponding to the
56 current thread. If there is one, it returns that spec's FlagValues
57 object. If there isn't one, it will return the global FlagValues
58 object.
59 """
60 benchmark_spec = GetThreadBenchmarkSpec()
61 if benchmark_spec:
62 return benchmark_spec.FLAGS
63 else:
64 return flags.FLAGS
65
66 def __setattr__(self, name, value):
67 self._thread_flag_values.__setattr__(name, value)
68
69 def __getattr__(self, name):
70 return self._thread_flag_values.__getattr__(name)
71
72 def __setitem__(self, key, value):
73 self._thread_flag_values.__setitem__(key, value)
74
75 def __getitem__(self, key):
76 return self._thread_flag_values.__getitem__(key)
77
78 def __call__(self, argv):
79 return self._thread_flag_values.__call__(argv)
80
81 def FlagDict(self):
82 return self._thread_flag_values.FlagDict()
83
84
85 BENCHMARK_FLAGS = FlagValuesProxy()
86
87
88 class _ThreadData(threading.local):
89 def __init__(self):
90 self.benchmark_spec = None
91
92
93 _thread_local = _ThreadData()
94
95
96 def SetThreadBenchmarkSpec(benchmark_spec):
97 """Sets the current thread's BenchmarkSpec object."""
98 _thread_local.benchmark_spec = benchmark_spec
99
100
101 def GetThreadBenchmarkSpec():
102 """Gets the current thread's BenchmarkSpec object.
103
104 If SetThreadBenchmarkSpec() has not been called in either the current thread
105 or in an ancestor, then this method will return None by default.
106 """
107 return _thread_local.benchmark_spec
108
[end of perfkitbenchmarker/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/perfkitbenchmarker/context.py b/perfkitbenchmarker/context.py
--- a/perfkitbenchmarker/context.py
+++ b/perfkitbenchmarker/context.py
@@ -63,23 +63,24 @@
else:
return flags.FLAGS
- def __setattr__(self, name, value):
- self._thread_flag_values.__setattr__(name, value)
- def __getattr__(self, name):
- return self._thread_flag_values.__getattr__(name)
-
- def __setitem__(self, key, value):
- self._thread_flag_values.__setitem__(key, value)
-
- def __getitem__(self, key):
- return self._thread_flag_values.__getitem__(key)
-
- def __call__(self, argv):
- return self._thread_flag_values.__call__(argv)
-
- def FlagDict(self):
- return self._thread_flag_values.FlagDict()
+def _AddProxyMethod(f_name):
+ """Adds a method to FlagValuesProxy that forwards to _thread_flag_values."""
+ def f(self, *args, **kwargs):
+ return getattr(self._thread_flag_values, f_name)(*args, **kwargs)
+ f.__name__ = f_name
+ f.__doc__ = 'Proxied ' + f_name
+ setattr(FlagValuesProxy, f_name, f)
+
+
+# TODO: introduce a more generic proxy.
+for _f_name in ['FlagDict', 'Reset', 'SetDefault', 'RegisteredFlags',
+ 'FlagValuesDict', '__contains__', '__iter__', '__call__',
+ '__setattr__', '__getattr__', '__setitem__', '__getitem__',
+ '__str__']:
+ _AddProxyMethod(_f_name)
+del _f_name
+del _AddProxyMethod
BENCHMARK_FLAGS = FlagValuesProxy()
| {"golden_diff": "diff --git a/perfkitbenchmarker/context.py b/perfkitbenchmarker/context.py\n--- a/perfkitbenchmarker/context.py\n+++ b/perfkitbenchmarker/context.py\n@@ -63,23 +63,24 @@\n else:\n return flags.FLAGS\n \n- def __setattr__(self, name, value):\n- self._thread_flag_values.__setattr__(name, value)\n \n- def __getattr__(self, name):\n- return self._thread_flag_values.__getattr__(name)\n-\n- def __setitem__(self, key, value):\n- self._thread_flag_values.__setitem__(key, value)\n-\n- def __getitem__(self, key):\n- return self._thread_flag_values.__getitem__(key)\n-\n- def __call__(self, argv):\n- return self._thread_flag_values.__call__(argv)\n-\n- def FlagDict(self):\n- return self._thread_flag_values.FlagDict()\n+def _AddProxyMethod(f_name):\n+ \"\"\"Adds a method to FlagValuesProxy that forwards to _thread_flag_values.\"\"\"\n+ def f(self, *args, **kwargs):\n+ return getattr(self._thread_flag_values, f_name)(*args, **kwargs)\n+ f.__name__ = f_name\n+ f.__doc__ = 'Proxied ' + f_name\n+ setattr(FlagValuesProxy, f_name, f)\n+\n+\n+# TODO: introduce a more generic proxy.\n+for _f_name in ['FlagDict', 'Reset', 'SetDefault', 'RegisteredFlags',\n+ 'FlagValuesDict', '__contains__', '__iter__', '__call__',\n+ '__setattr__', '__getattr__', '__setitem__', '__getitem__',\n+ '__str__']:\n+ _AddProxyMethod(_f_name)\n+del _f_name\n+del _AddProxyMethod\n \n \n BENCHMARK_FLAGS = FlagValuesProxy()\n", "issue": "Help doesn't render with FlagValuesProxy.\nExample:\n\n```\n[:~/git/PerfKitBenchmarker] [perfkit] release-0.23.0+* 1 \u00b1 python pkb.py --benchmarks redis_ycsb --machine_type n1-standard-4 --json_output redis_ycsb.json\nERROR:root:Unknown command line flag 'json_output'\nUsage: pkb.py ARGS\n<perfkitbenchmarker.context.FlagValuesProxy object at 0x7f51910bc050>\n```\n\n@ehankland - do you have a minute to look at this? If not assign back to me.\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Module for working with the current thread context.\"\"\"\n\nimport threading\n\nimport gflags as flags\n\n\nclass FlagsModuleProxy(object):\n \"\"\"Class which acts as a proxy for the flags module.\n\n When the FLAGS attribute is accessed, BENCHMARK_FLAGS will be returned\n rather than the global FlagValues object. BENCHMARK_FLAGS is an instance\n of FlagValuesProxy, which enables benchmarks to run with different and\n even conflicting flags. Accessing the GLOBAL_FLAGS attribute will return\n the global FlagValues object. Otherwise, this will behave just like the\n flags module.\n \"\"\"\n\n def __getattr__(self, name):\n if name == 'FLAGS':\n return BENCHMARK_FLAGS\n elif name == 'GLOBAL_FLAGS':\n return flags.FLAGS\n return flags.__dict__[name]\n\n\nclass FlagValuesProxy(object):\n \"\"\"Class which provides the same interface as FlagValues.\n\n By acting as a proxy for the FlagValues object (i.e. flags.FLAGS),\n this enables benchmark specific flags. 
This proxy attempts to\n use the current thread's BenchmarkSpec's FlagValues object, but\n falls back to using flags.FLAGS if the thread has no BenchmarkSpec\n object.\n \"\"\"\n\n @property\n def _thread_flag_values(self):\n \"\"\"Returns the correct FlagValues object for the current thread.\n\n This first tries to get the BenchmarkSpec object corresponding to the\n current thread. If there is one, it returns that spec's FlagValues\n object. If there isn't one, it will return the global FlagValues\n object.\n \"\"\"\n benchmark_spec = GetThreadBenchmarkSpec()\n if benchmark_spec:\n return benchmark_spec.FLAGS\n else:\n return flags.FLAGS\n\n def __setattr__(self, name, value):\n self._thread_flag_values.__setattr__(name, value)\n\n def __getattr__(self, name):\n return self._thread_flag_values.__getattr__(name)\n\n def __setitem__(self, key, value):\n self._thread_flag_values.__setitem__(key, value)\n\n def __getitem__(self, key):\n return self._thread_flag_values.__getitem__(key)\n\n def __call__(self, argv):\n return self._thread_flag_values.__call__(argv)\n\n def FlagDict(self):\n return self._thread_flag_values.FlagDict()\n\n\nBENCHMARK_FLAGS = FlagValuesProxy()\n\n\nclass _ThreadData(threading.local):\n def __init__(self):\n self.benchmark_spec = None\n\n\n_thread_local = _ThreadData()\n\n\ndef SetThreadBenchmarkSpec(benchmark_spec):\n \"\"\"Sets the current thread's BenchmarkSpec object.\"\"\"\n _thread_local.benchmark_spec = benchmark_spec\n\n\ndef GetThreadBenchmarkSpec():\n \"\"\"Gets the current thread's BenchmarkSpec object.\n\n If SetThreadBenchmarkSpec() has not been called in either the current thread\n or in an ancestor, then this method will return None by default.\n \"\"\"\n return _thread_local.benchmark_spec\n", "path": "perfkitbenchmarker/context.py"}]} | 1,672 | 409 |
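The usage text in the issue prints the proxy's default repr because `str(FLAGS)` resolves `__str__` on the class, and the original proxy only forwarded a hand-picked set of methods that did not include it. The generated-forwarder pattern from the golden diff, reduced here to a standalone toy; every name below is invented for illustration.

```python
class Target(object):
    def flag_dict(self):
        return {"json_output": None}

    def __str__(self):
        return "usage: pkb.py ARGS\n  --json_output: path for JSON results"


class Proxy(object):
    def __init__(self, target):
        self._target = target


def _add_proxy_method(name):
    def method(self, *args, **kwargs):
        return getattr(self._target, name)(*args, **kwargs)
    method.__name__ = name
    setattr(Proxy, name, method)


# Dunder lookups such as str(obj) go through the type, so __str__ has to be
# installed on the class for the forwarded help text to render.
for _name in ("flag_dict", "__str__"):
    _add_proxy_method(_name)

proxy = Proxy(Target())
print(str(proxy))         # forwards to Target.__str__
print(proxy.flag_dict())  # forwards to Target.flag_dict
```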
gh_patches_debug_5764 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1353 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix pytest warning, recheck dependency versions.
* Cookiecutter version:master
```
py27 run-test: commands[1] | /home/insspb/git/cookiecutter/.tox/py27/bin/python /snap/pycharm-professional/196/plugins/python/helpers/pycharm/_jb_pytest_runner.py --offset 10001 -- --cov=cookiecutter tests
/home/insspb/git/cookiecutter/.tox/py27/lib/python2.7/site-packages/_pytest/config/__init__.py:316: PytestConfigWarning: pytest-catchlog plugin has been merged into the core, please remove it from your requirements.
name.replace("_", "-")
```
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """cookiecutter distutils configuration."""
5
6 import os
7 import io
8 import sys
9
10 from setuptools import setup
11
12 version = "1.7.0"
13
14 if sys.argv[-1] == 'publish':
15 os.system('python setup.py sdist upload')
16 os.system('python setup.py bdist_wheel upload')
17 sys.exit()
18
19 if sys.argv[-1] == 'tag':
20 os.system("git tag -a %s -m 'version %s'" % (version, version))
21 os.system("git push --tags")
22 sys.exit()
23
24 with io.open('README.md', 'r', encoding='utf-8') as readme_file:
25 readme = readme_file.read()
26
27 requirements = [
28 'binaryornot>=0.2.0',
29 'jinja2>=2.7',
30 'click>=7.0',
31 'poyo>=0.1.0',
32 'jinja2-time>=0.1.0',
33 'python-slugify>=4.0.0',
34 'requests>=2.18.0',
35 'six>=1.10',
36 ]
37
38 if sys.argv[-1] == 'readme':
39 print(readme)
40 sys.exit()
41
42
43 setup(
44 name='cookiecutter',
45 version=version,
46 description=('A command-line utility that creates projects from project '
47 'templates, e.g. creating a Python package project from a '
48 'Python package project template.'),
49 long_description=readme,
50 long_description_content_type='text/markdown',
51 author='Audrey Roy',
52 author_email='[email protected]',
53 url='https://github.com/cookiecutter/cookiecutter',
54 packages=[
55 'cookiecutter',
56 ],
57 package_dir={'cookiecutter': 'cookiecutter'},
58 entry_points={
59 'console_scripts': [
60 'cookiecutter = cookiecutter.__main__:main',
61 ]
62 },
63 include_package_data=True,
64 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
65 install_requires=requirements,
66 extras_require={
67 ':python_version<"3.3"': ['whichcraft>=0.4.0'],
68 },
69 license='BSD',
70 zip_safe=False,
71 classifiers=[
72 "Development Status :: 5 - Production/Stable",
73 "Environment :: Console",
74 "Intended Audience :: Developers",
75 "Natural Language :: English",
76 "License :: OSI Approved :: BSD License",
77 "Programming Language :: Python",
78 "Programming Language :: Python :: 2",
79 "Programming Language :: Python :: 2.7",
80 "Programming Language :: Python :: 3",
81 "Programming Language :: Python :: 3.5",
82 "Programming Language :: Python :: 3.6",
83 "Programming Language :: Python :: 3.7",
84 "Programming Language :: Python :: 3.8",
85 "Programming Language :: Python :: Implementation :: CPython",
86 "Programming Language :: Python :: Implementation :: PyPy",
87 "Topic :: Software Development",
88 ],
89 keywords=(
90 'cookiecutter, Python, projects, project templates, Jinja2, '
91 'skeleton, scaffolding, project directory, setup.py, package, '
92 'packaging'
93 ),
94 )
95
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,14 +25,15 @@
readme = readme_file.read()
requirements = [
- 'binaryornot>=0.2.0',
- 'jinja2>=2.7',
- 'click>=7.0',
- 'poyo>=0.1.0',
- 'jinja2-time>=0.1.0',
+ 'binaryornot>=0.4.4',
+ 'Jinja2<=2.11.0',
+ 'click>=7.1.1',
+ 'poyo>=0.5.0',
+ 'jinja2-time>=0.2.0',
'python-slugify>=4.0.0',
- 'requests>=2.18.0',
- 'six>=1.10',
+ 'requests>=2.23.0',
+ 'six>=1.14',
+ 'MarkupSafe<2.0.0'
]
if sys.argv[-1] == 'readme':
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,14 +25,15 @@\n readme = readme_file.read()\n \n requirements = [\n- 'binaryornot>=0.2.0',\n- 'jinja2>=2.7',\n- 'click>=7.0',\n- 'poyo>=0.1.0',\n- 'jinja2-time>=0.1.0',\n+ 'binaryornot>=0.4.4',\n+ 'Jinja2<=2.11.0',\n+ 'click>=7.1.1',\n+ 'poyo>=0.5.0',\n+ 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n- 'requests>=2.18.0',\n- 'six>=1.10',\n+ 'requests>=2.23.0',\n+ 'six>=1.14',\n+ 'MarkupSafe<2.0.0'\n ]\n \n if sys.argv[-1] == 'readme':\n", "issue": "Fix pitest warning, recheck dependencies versions.\n* Cookiecutter version:master\r\n\r\n```\r\npy27 run-test: commands[1] | /home/insspb/git/cookiecutter/.tox/py27/bin/python /snap/pycharm-professional/196/plugins/python/helpers/pycharm/_jb_pytest_runner.py --offset 10001 -- --cov=cookiecutter tests\r\n/home/insspb/git/cookiecutter/.tox/py27/lib/python2.7/site-packages/_pytest/config/__init__.py:316: PytestConfigWarning: pytest-catchlog plugin has been merged into the core, please remove it from your requirements.\r\n name.replace(\"_\", \"-\")\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"cookiecutter distutils configuration.\"\"\"\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.7.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.md', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=7.0',\n 'poyo>=0.1.0',\n 'jinja2-time>=0.1.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.18.0',\n 'six>=1.10',\n]\n\nif sys.argv[-1] == 'readme':\n print(readme)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.__main__:main',\n ]\n },\n include_package_data=True,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=requirements,\n extras_require={\n ':python_version<\"3.3\"': ['whichcraft>=0.4.0'],\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}]} | 1,588 | 251 |
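The quoted warning means the catchlog functionality now ships inside pytest itself, so dropping pytest-catchlog from the test requirements is enough to silence it; the built-in `caplog` fixture covers the same ground. A minimal sketch of a test relying only on core pytest (3.3 or newer); the logger name and message are invented.

```python
import logging


def test_warning_is_captured(caplog):
    with caplog.at_level(logging.WARNING):
        logging.getLogger("cookiecutter.example").warning("template rendered")
    # caplog is provided by pytest core, no extra plugin required.
    assert "template rendered" in caplog.text
```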
gh_patches_debug_25991 | rasdani/github-patches | git_diff | nvaccess__nvda-11605 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Chrome: "list" is reported on every line of a list in rich text editors
### Steps to reproduce:
1. Open this URL in Chrome:
`data:text/html,<div contentEditable="true" role="textbox" aria-multiline="true">Before<ul><li>a</li><li>b</li></ul>After</div>`
2. Focus the text box and ensure you are in focus mode.
3. Press control+home.
4. Read through the content line by line using the down arrow key.
### Expected behavior:
```
Before
list bullet a
bullet b
out of list After
```
### Actual behavior:
```
Before
list bullet a
list bullet b
After
```
Note: Whether you hear "bullet" depends on your symbol level; I have mine set to "all".
### System configuration:
NVDA version: next-14373,6bbe5915
NVDA Installed or portable: installed
Windows version: Windows 10 Version 1703 (OS Build 16251.0)
Name and version of other software in use when reproducing the issue: Chrome Version 62.0.3201.2 (Official Build) canary (64-bit)
### Technical info:
This happens because a contentEditable list (the `ul` tag) does not get the read-only state. Lists and list boxes both get the same role (list), but they're normally differentiated by the read-only state; a `<ul>` has read-only, whereas a `<select size="2">` doesn't. However, in this case, I can kinda understand why Chrome doesn't set read-only; after all, it does have the editable state.
I think we should probably just tweak `TextInfo.getPresentationCategory` to treat editable lists as being containers; i.e. allow for the editable state as well as the read-only state in the rule for `PRESCAT_CONTAINER`. Alternatively, we could file a bug against Chrome requesting this get fixed on their side.
P2 because this is quite annoying when dealing with rich text editors in Chrome, including the Gmail composer.
</issue>
<code>
[start of source/NVDAObjects/IAccessible/chromium.py]
1 #NVDAObjects/IAccessible/chromium.py
2 #A part of NonVisual Desktop Access (NVDA)
3 #This file is covered by the GNU General Public License.
4 #See the file COPYING for more details.
5 # Copyright (C) 2010-2013 NV Access Limited
6
7 """NVDAObjects for the Chromium browser project
8 """
9
10 from comtypes import COMError
11 import oleacc
12 import controlTypes
13 import IAccessibleHandler
14 from NVDAObjects.IAccessible import IAccessible
15 from virtualBuffers.gecko_ia2 import Gecko_ia2 as GeckoVBuf, Gecko_ia2_TextInfo as GeckoVBufTextInfo
16 from . import ia2Web
17
18
19 class ChromeVBufTextInfo(GeckoVBufTextInfo):
20
21 def _normalizeControlField(self, attrs):
22 attrs = super()._normalizeControlField(attrs)
23 if attrs['role'] == controlTypes.ROLE_TOGGLEBUTTON and controlTypes.STATE_CHECKABLE in attrs['states']:
24 # In Chromium, the checkable state is exposed erroneously on toggle buttons.
25 attrs['states'].discard(controlTypes.STATE_CHECKABLE)
26 return attrs
27
28
29 class ChromeVBuf(GeckoVBuf):
30 TextInfo = ChromeVBufTextInfo
31
32 def __contains__(self, obj):
33 if obj.windowHandle != self.rootNVDAObject.windowHandle:
34 return False
35 if not isinstance(obj,ia2Web.Ia2Web):
36 # #4080: Input composition NVDAObjects are the same window but not IAccessible2!
37 return False
38 accId = obj.IA2UniqueID
39 if accId == self.rootID:
40 return True
41 try:
42 self.rootNVDAObject.IAccessibleObject.accChild(accId)
43 except COMError:
44 return False
45 return not self._isNVDAObjectInApplication(obj)
46
47
48 class Document(ia2Web.Document):
49
50 def _get_treeInterceptorClass(self):
51 states = self.states
52 if controlTypes.STATE_EDITABLE not in states and controlTypes.STATE_BUSY not in states:
53 return ChromeVBuf
54 return super(Document, self).treeInterceptorClass
55
56 class ComboboxListItem(IAccessible):
57 """
58 Represents a list item inside a combo box.
59 """
60
61 def _get_focusRedirect(self):
62 # Chrome 68 and below fires focus on the active list item of combo boxes even when the combo box is collapsed.
63 # We get around this by redirecting focus back up to the combo box itself if the list inside is invisible (I.e. the combo box is collapsed).
64 if self.parent and controlTypes.STATE_INVISIBLE in self.parent.states:
65 return self.parent.parent
66
67
68 class ToggleButton(ia2Web.Ia2Web):
69
70 def _get_states(self):
71 # In Chromium, the checkable state is exposed erroneously on toggle buttons.
72 states = super().states
73 states.discard(controlTypes.STATE_CHECKABLE)
74 return states
75
76
77 def findExtraOverlayClasses(obj, clsList):
78 """Determine the most appropriate class(es) for Chromium objects.
79 This works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.
80 """
81 if obj.role==controlTypes.ROLE_LISTITEM and obj.parent and obj.parent.parent and obj.parent.parent.role==controlTypes.ROLE_COMBOBOX:
82 clsList.append(ComboboxListItem)
83 elif obj.role == controlTypes.ROLE_TOGGLEBUTTON:
84 clsList.append(ToggleButton)
85 ia2Web.findExtraOverlayClasses(obj, clsList,
86 documentClass=Document)
87
[end of source/NVDAObjects/IAccessible/chromium.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/source/NVDAObjects/IAccessible/chromium.py b/source/NVDAObjects/IAccessible/chromium.py
--- a/source/NVDAObjects/IAccessible/chromium.py
+++ b/source/NVDAObjects/IAccessible/chromium.py
@@ -74,6 +74,22 @@
return states
+class PresentationalList(ia2Web.Ia2Web):
+ """
+ Ensures that lists like UL, DL and OL always have the readonly state.
+ A work-around for issue #7562
+ allowing us to differentiate presentational lists from interactive lists
+ (such as of size greater 1 and ARIA list boxes).
+ In firefox, this is possible by the presence of a read-only state,
+ even in a content editable.
+ """
+
+ def _get_states(self):
+ states = super().states
+ states.add(controlTypes.STATE_READONLY)
+ return states
+
+
def findExtraOverlayClasses(obj, clsList):
"""Determine the most appropriate class(es) for Chromium objects.
This works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.
@@ -82,5 +98,7 @@
clsList.append(ComboboxListItem)
elif obj.role == controlTypes.ROLE_TOGGLEBUTTON:
clsList.append(ToggleButton)
+ elif obj.role == controlTypes.ROLE_LIST and obj.IA2Attributes.get('tag') in ('ul', 'dl', 'ol'):
+ clsList.append(PresentationalList)
ia2Web.findExtraOverlayClasses(obj, clsList,
documentClass=Document)
| {"golden_diff": "diff --git a/source/NVDAObjects/IAccessible/chromium.py b/source/NVDAObjects/IAccessible/chromium.py\n--- a/source/NVDAObjects/IAccessible/chromium.py\n+++ b/source/NVDAObjects/IAccessible/chromium.py\n@@ -74,6 +74,22 @@\n \t\treturn states\r\n \r\n \r\n+class PresentationalList(ia2Web.Ia2Web):\r\n+\t\"\"\"\r\n+\tEnsures that lists like UL, DL and OL always have the readonly state.\r\n+\tA work-around for issue #7562\r\n+\tallowing us to differentiate presentational lists from interactive lists\r\n+\t(such as of size greater 1 and ARIA list boxes).\r\n+\tIn firefox, this is possible by the presence of a read-only state,\r\n+\teven in a content editable.\r\n+\t\"\"\"\r\n+\r\n+\tdef _get_states(self):\r\n+\t\tstates = super().states\r\n+\t\tstates.add(controlTypes.STATE_READONLY)\r\n+\t\treturn states\r\n+\r\n+\r\n def findExtraOverlayClasses(obj, clsList):\r\n \t\"\"\"Determine the most appropriate class(es) for Chromium objects.\r\n \tThis works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.\r\n@@ -82,5 +98,7 @@\n \t\tclsList.append(ComboboxListItem)\r\n \telif obj.role == controlTypes.ROLE_TOGGLEBUTTON:\r\n \t\tclsList.append(ToggleButton)\r\n+\telif obj.role == controlTypes.ROLE_LIST and obj.IA2Attributes.get('tag') in ('ul', 'dl', 'ol'):\r\n+\t\tclsList.append(PresentationalList)\r\n \tia2Web.findExtraOverlayClasses(obj, clsList,\r\n \t\tdocumentClass=Document)\n", "issue": "Chrome: \"list\" is reported on every line of a list in rich text editors\n### Steps to reproduce:\r\n1. Open this URL in Chrome:\r\n `data:text/html,<div contentEditable=\"true\" role=\"textbox\" aria-multiline=\"true\">Before<ul><li>a</li><li>b</li></ul>After</div>`\r\n2. Focus the text box and ensure you are in focus mode.\r\n3. Press control+home.\r\n4. Read through the content line by line using the down arrow key.\r\n\r\n### Expected behavior:\r\n```\r\nBefore\r\nlist bullet a\r\nbullet b\r\nout of list After\r\n```\r\n\r\n### Actual behavior:\r\n```\r\nBefore\r\nlist bullet a\r\nlist bullet b\r\nAfter\r\n```\r\n\r\nNote: Whether you hear \"bullet\" depends on your symbol level; I have mine set to \"all\".\r\n\r\n### System configuration:\r\nNVDA version: next-14373,6bbe5915\r\nNVDA Installed or portable: installed\r\nWindows version: Windows 10 Version 1703 (OS Build 16251.0)\r\nName and version of other software in use when reproducing the issue: Chrome Version 62.0.3201.2 (Official Build) canary (64-bit)\r\n\r\n### Technical info:\r\nThis happens because a contentEditable list (the `ul` tag) does not get the read-only state. Lists and list boxes both get the same role (list), but they're normally differentiated by the read-only state; a `<ul>` has read-only, whereas a `<select size=\"2\">` doesn't. However, in this case, I can kinda understand why Chrome doesn't set read-only; after all, it does have the editable state.\r\n\r\nI think we should probably just tweak `TextInfo.getPresentationCategory` to treat editable liss as being containers; i.e. allow for the editable state as well as the read-only state in the rule for `PRESCAT_CONTAINER`. 
Alternatively, we could file a bug against Chrome requesting this get fixed on their side.\r\n\r\nP2 because this is quite annoying when dealing with rich text editors in Chrome, including the Gmail composer.\n", "before_files": [{"content": "#NVDAObjects/IAccessible/chromium.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n# Copyright (C) 2010-2013 NV Access Limited\r\n\r\n\"\"\"NVDAObjects for the Chromium browser project\r\n\"\"\"\r\n\r\nfrom comtypes import COMError\r\nimport oleacc\r\nimport controlTypes\r\nimport IAccessibleHandler\r\nfrom NVDAObjects.IAccessible import IAccessible\r\nfrom virtualBuffers.gecko_ia2 import Gecko_ia2 as GeckoVBuf, Gecko_ia2_TextInfo as GeckoVBufTextInfo\r\nfrom . import ia2Web\r\n\r\n\r\nclass ChromeVBufTextInfo(GeckoVBufTextInfo):\r\n\r\n\tdef _normalizeControlField(self, attrs):\r\n\t\tattrs = super()._normalizeControlField(attrs)\r\n\t\tif attrs['role'] == controlTypes.ROLE_TOGGLEBUTTON and controlTypes.STATE_CHECKABLE in attrs['states']:\r\n\t\t\t# In Chromium, the checkable state is exposed erroneously on toggle buttons.\r\n\t\t\tattrs['states'].discard(controlTypes.STATE_CHECKABLE)\r\n\t\treturn attrs\r\n\r\n\r\nclass ChromeVBuf(GeckoVBuf):\r\n\tTextInfo = ChromeVBufTextInfo\r\n\r\n\tdef __contains__(self, obj):\r\n\t\tif obj.windowHandle != self.rootNVDAObject.windowHandle:\r\n\t\t\treturn False\r\n\t\tif not isinstance(obj,ia2Web.Ia2Web):\r\n\t\t\t# #4080: Input composition NVDAObjects are the same window but not IAccessible2!\r\n\t\t\treturn False\r\n\t\taccId = obj.IA2UniqueID\r\n\t\tif accId == self.rootID:\r\n\t\t\treturn True\r\n\t\ttry:\r\n\t\t\tself.rootNVDAObject.IAccessibleObject.accChild(accId)\r\n\t\texcept COMError:\r\n\t\t\treturn False\r\n\t\treturn not self._isNVDAObjectInApplication(obj)\r\n\r\n\r\nclass Document(ia2Web.Document):\r\n\r\n\tdef _get_treeInterceptorClass(self):\r\n\t\tstates = self.states\r\n\t\tif controlTypes.STATE_EDITABLE not in states and controlTypes.STATE_BUSY not in states:\r\n\t\t\treturn ChromeVBuf\r\n\t\treturn super(Document, self).treeInterceptorClass\r\n\r\nclass ComboboxListItem(IAccessible):\r\n\t\"\"\"\r\n\tRepresents a list item inside a combo box.\r\n\t\"\"\"\r\n\r\n\tdef _get_focusRedirect(self):\r\n\t\t# Chrome 68 and below fires focus on the active list item of combo boxes even when the combo box is collapsed.\r\n\t\t# We get around this by redirecting focus back up to the combo box itself if the list inside is invisible (I.e. 
the combo box is collapsed).\r\n\t\tif self.parent and controlTypes.STATE_INVISIBLE in self.parent.states:\r\n\t\t\treturn self.parent.parent\r\n\r\n\r\nclass ToggleButton(ia2Web.Ia2Web):\r\n\r\n\tdef _get_states(self):\r\n\t\t# In Chromium, the checkable state is exposed erroneously on toggle buttons.\r\n\t\tstates = super().states\r\n\t\tstates.discard(controlTypes.STATE_CHECKABLE)\r\n\t\treturn states\r\n\r\n\r\ndef findExtraOverlayClasses(obj, clsList):\r\n\t\"\"\"Determine the most appropriate class(es) for Chromium objects.\r\n\tThis works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.\r\n\t\"\"\"\r\n\tif obj.role==controlTypes.ROLE_LISTITEM and obj.parent and obj.parent.parent and obj.parent.parent.role==controlTypes.ROLE_COMBOBOX:\r\n\t\tclsList.append(ComboboxListItem)\r\n\telif obj.role == controlTypes.ROLE_TOGGLEBUTTON:\r\n\t\tclsList.append(ToggleButton)\r\n\tia2Web.findExtraOverlayClasses(obj, clsList,\r\n\t\tdocumentClass=Document)\r\n", "path": "source/NVDAObjects/IAccessible/chromium.py"}]} | 1,961 | 379 |
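The fix adds an overlay class whose `states` property injects the read-only state on top of whatever the browser reports, which restores the read-only-vs-editable distinction the presentation rules rely on. The same layering pattern is shown below reduced to plain Python; the state names are stand-ins, since `controlTypes` and `ia2Web` only exist inside a running NVDA.

```python
class BaseListLike(object):
    @property
    def states(self):
        # What Chrome reports for a contentEditable <ul>: editable, but not
        # read-only, so it looks like an interactive list box.
        return {"editable"}


class PresentationalListLike(BaseListLike):
    @property
    def states(self):
        combined = set(super(PresentationalListLike, self).states)
        combined.add("readonly")  # force the state the container rule checks
        return combined


print(sorted(PresentationalListLike().states))  # ['editable', 'readonly']
```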
gh_patches_debug_54061 | rasdani/github-patches | git_diff | docker__docker-py-2793 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Couldn't create secret object
I couldn't create a secret object; the problem seemed to boil down to the way that a secret was being created from the Docker daemon response.
https://github.com/docker/docker-py/blob/467cacb00d8dce68aa8ff2bdacc85acecd2d1207/docker/models/secrets.py#L31-L33
Docker version 18.03.1-ce and python version 3.5 had the following error:
````
File "docker/models/secrets.py", line 10 in __repr__
return "<%s: %s'>" % (self.__class__.__name__, self.name)
File "docker/models/secrets.py", line 14 in name
return self.attrs['Spec']['Name']
KeyError: 'Spec'
````
When calling:
````
import docker
client = docker.from_env()
mySecret = client.secrets.create(name='randomName', data='platform_node_requirements.md')
````
Changing the code to the following seemed to fix it.
````
obj = self.client.api.create_secret(**kwargs)
secret = self.client.secrets.get(obj.get('ID'))
return self.prepare_model(secret)
````
</issue>
<code>
[start of docker/models/secrets.py]
1 from ..api import APIClient
2 from .resource import Model, Collection
3
4
5 class Secret(Model):
6 """A secret."""
7 id_attribute = 'ID'
8
9 def __repr__(self):
10 return "<%s: '%s'>" % (self.__class__.__name__, self.name)
11
12 @property
13 def name(self):
14 return self.attrs['Spec']['Name']
15
16 def remove(self):
17 """
18 Remove this secret.
19
20 Raises:
21 :py:class:`docker.errors.APIError`
22 If secret failed to remove.
23 """
24 return self.client.api.remove_secret(self.id)
25
26
27 class SecretCollection(Collection):
28 """Secrets on the Docker server."""
29 model = Secret
30
31 def create(self, **kwargs):
32 obj = self.client.api.create_secret(**kwargs)
33 return self.prepare_model(obj)
34 create.__doc__ = APIClient.create_secret.__doc__
35
36 def get(self, secret_id):
37 """
38 Get a secret.
39
40 Args:
41 secret_id (str): Secret ID.
42
43 Returns:
44 (:py:class:`Secret`): The secret.
45
46 Raises:
47 :py:class:`docker.errors.NotFound`
48 If the secret does not exist.
49 :py:class:`docker.errors.APIError`
50 If the server returns an error.
51 """
52 return self.prepare_model(self.client.api.inspect_secret(secret_id))
53
54 def list(self, **kwargs):
55 """
56 List secrets. Similar to the ``docker secret ls`` command.
57
58 Args:
59 filters (dict): Server-side list filtering options.
60
61 Returns:
62 (list of :py:class:`Secret`): The secrets.
63
64 Raises:
65 :py:class:`docker.errors.APIError`
66 If the server returns an error.
67 """
68 resp = self.client.api.secrets(**kwargs)
69 return [self.prepare_model(obj) for obj in resp]
70
[end of docker/models/secrets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/models/secrets.py b/docker/models/secrets.py
--- a/docker/models/secrets.py
+++ b/docker/models/secrets.py
@@ -30,6 +30,7 @@
def create(self, **kwargs):
obj = self.client.api.create_secret(**kwargs)
+ obj.setdefault("Spec", {})["Name"] = kwargs.get("name")
return self.prepare_model(obj)
create.__doc__ = APIClient.create_secret.__doc__
| {"golden_diff": "diff --git a/docker/models/secrets.py b/docker/models/secrets.py\n--- a/docker/models/secrets.py\n+++ b/docker/models/secrets.py\n@@ -30,6 +30,7 @@\n \n def create(self, **kwargs):\n obj = self.client.api.create_secret(**kwargs)\n+ obj.setdefault(\"Spec\", {})[\"Name\"] = kwargs.get(\"name\")\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_secret.__doc__\n", "issue": "Couldn't create secret object\nI couldn't create secret object, the problem seemed to boil down to the way that a secret was being created from the docker daemon response. \r\n\r\nhttps://github.com/docker/docker-py/blob/467cacb00d8dce68aa8ff2bdacc85acecd2d1207/docker/models/secrets.py#L31-L33\r\n\r\nDocker version 18.03.1-ce and python version 3.5 had the following error:\r\n\r\n````\r\nFile \"docker/models/secrets.py\", line 10 in __repr__\r\nreturn \"<%s: %s'>\" % (self.__class__.__name__, self.name)\r\nFile \"docker/models/secrets.py\", line 14 in name\r\nreturn self.attrs['Spec']['Name']\r\nKeyError: 'Spec'\r\n\r\n````\r\n\r\nWhen calling: \r\n\r\n````\r\nimport docker\r\n\r\nclient -docker.from_env()\r\nmySecret = client.secrets.create(name='randomName', data='platform_node_requirements.md')\r\n\r\n````\r\n\r\nChanging the code to the following seemed to fix it. \r\n````\r\nobj = self.client.api.create_secret(**kwargs)\r\nsecret = self.client.secrets.get(obj.get('ID'))\r\nreturn self.prepare_model(secret)\r\n````\r\n\r\n\r\n\n", "before_files": [{"content": "from ..api import APIClient\nfrom .resource import Model, Collection\n\n\nclass Secret(Model):\n \"\"\"A secret.\"\"\"\n id_attribute = 'ID'\n\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, self.name)\n\n @property\n def name(self):\n return self.attrs['Spec']['Name']\n\n def remove(self):\n \"\"\"\n Remove this secret.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If secret failed to remove.\n \"\"\"\n return self.client.api.remove_secret(self.id)\n\n\nclass SecretCollection(Collection):\n \"\"\"Secrets on the Docker server.\"\"\"\n model = Secret\n\n def create(self, **kwargs):\n obj = self.client.api.create_secret(**kwargs)\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_secret.__doc__\n\n def get(self, secret_id):\n \"\"\"\n Get a secret.\n\n Args:\n secret_id (str): Secret ID.\n\n Returns:\n (:py:class:`Secret`): The secret.\n\n Raises:\n :py:class:`docker.errors.NotFound`\n If the secret does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_secret(secret_id))\n\n def list(self, **kwargs):\n \"\"\"\n List secrets. Similar to the ``docker secret ls`` command.\n\n Args:\n filters (dict): Server-side list filtering options.\n\n Returns:\n (list of :py:class:`Secret`): The secrets.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.secrets(**kwargs)\n return [self.prepare_model(obj) for obj in resp]\n", "path": "docker/models/secrets.py"}]} | 1,325 | 102 |
gh_patches_debug_25148 | rasdani/github-patches | git_diff | GPflow__GPflow-1536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check deps on CI
`pip install gpflow` currently installs dependencies (setuptools, scipy) with versions that are incompatible with the tensorflow version installed.
This ticket isn't to fix the dependencies, per se, but suggests adding a `pip check -vvv` stage to CI, so that such problems are caught at PR stage.
</issue>
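
A minimal sketch of such a gate, written as a portable Python step rather than any particular CI vendor's config; the exact wiring below is an assumption, not part of the ticket:

```python
import subprocess
import sys

# Fail the build if pip's resolver finds conflicting installed versions.
result = subprocess.run([sys.executable, "-m", "pip", "check", "-vvv"])
if result.returncode != 0:
    sys.exit("Dependency conflict detected - see 'pip check' output above.")
```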
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # pylint: skip-file
5
6 import os
7 import sys
8
9 from setuptools import find_packages, setup
10
11
12 # Dependencies of GPflow
13 requirements = [
14 "numpy>=1.10.0",
15 "scipy>=0.18.0",
16 "multipledispatch>=0.6",
17 "tabulate",
18 "typing_extensions",
19 "cloudpickle==1.3.0", # temporary workaround for tensorflow/probability#991
20 ]
21
22 if sys.version_info < (3, 7):
23 # became part of stdlib in python 3.7
24 requirements.append("dataclasses")
25
26 # We do not want to install tensorflow in the readthedocs environment, where we
27 # use autodoc_mock_imports instead. Hence we use this flag to decide whether or
28 # not to append tensorflow and tensorflow_probability to the requirements:
29 if os.environ.get("READTHEDOCS") != "True":
30 requirements.extend(["tensorflow>=2.1.0,<2.3", "tensorflow-probability>=0.9,<0.11"])
31
32
33 def read_file(filename):
34 with open(filename, encoding="utf-8") as f:
35 return f.read().strip()
36
37
38 version = read_file("VERSION")
39 readme_text = read_file("README.md")
40
41 packages = find_packages(".", exclude=["tests"])
42
43 setup(
44 name="gpflow",
45 version=version,
46 author="James Hensman, Alex Matthews",
47 author_email="[email protected]",
48 description="Gaussian process methods in TensorFlow",
49 long_description=readme_text,
50 long_description_content_type="text/markdown",
51 license="Apache License 2.0",
52 keywords="machine-learning gaussian-processes kernels tensorflow",
53 url="https://www.gpflow.org",
54 project_urls={
55 "Source on GitHub": "https://github.com/GPflow/GPflow",
56 "Documentation": "https://gpflow.readthedocs.io",
57 },
58 packages=packages,
59 include_package_data=True,
60 install_requires=requirements,
61 extras_require={"ImageToTensorBoard": ["matplotlib"]},
62 python_requires=">=3.6",
63 classifiers=[
64 "License :: OSI Approved :: Apache Software License",
65 "Natural Language :: English",
66 "Operating System :: MacOS :: MacOS X",
67 "Operating System :: Microsoft :: Windows",
68 "Operating System :: POSIX :: Linux",
69 "Programming Language :: Python :: 3.6",
70 "Topic :: Scientific/Engineering :: Artificial Intelligence",
71 ],
72 )
73
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,11 +12,10 @@
# Dependencies of GPflow
requirements = [
"numpy>=1.10.0",
- "scipy>=0.18.0",
+ "scipy>=0.18.0,==1.4.1", # pinned to ==1.4.1 to satisfy tensorflow requirements
"multipledispatch>=0.6",
"tabulate",
"typing_extensions",
- "cloudpickle==1.3.0", # temporary workaround for tensorflow/probability#991
]
if sys.version_info < (3, 7):
@@ -27,7 +26,18 @@
# use autodoc_mock_imports instead. Hence we use this flag to decide whether or
# not to append tensorflow and tensorflow_probability to the requirements:
if os.environ.get("READTHEDOCS") != "True":
- requirements.extend(["tensorflow>=2.1.0,<2.3", "tensorflow-probability>=0.9,<0.11"])
+ requirements.extend(
+ [
+ # tensorflow>=2.3 not compatible with tensorflow-probability<0.11
+ "tensorflow>=2.1.0,<2.3",
+ # tensorflow-probability==0.10.0 doesn't install correctly
+ # https://github.com/tensorflow/probability/issues/991
+ #
+ # gpflow uses private functionality not present in tensorflow-probability~=0.11
+ "tensorflow-probability>=0.9,<0.11,!=0.10.0",
+ "setuptools>=41.0.0", # to satisfy dependency constraints
+ ]
+ )
def read_file(filename):
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,11 +12,10 @@\n # Dependencies of GPflow\n requirements = [\n \"numpy>=1.10.0\",\n- \"scipy>=0.18.0\",\n+ \"scipy>=0.18.0,==1.4.1\", # pinned to ==1.4.1 to satisfy tensorflow requirements\n \"multipledispatch>=0.6\",\n \"tabulate\",\n \"typing_extensions\",\n- \"cloudpickle==1.3.0\", # temporary workaround for tensorflow/probability#991\n ]\n \n if sys.version_info < (3, 7):\n@@ -27,7 +26,18 @@\n # use autodoc_mock_imports instead. Hence we use this flag to decide whether or\n # not to append tensorflow and tensorflow_probability to the requirements:\n if os.environ.get(\"READTHEDOCS\") != \"True\":\n- requirements.extend([\"tensorflow>=2.1.0,<2.3\", \"tensorflow-probability>=0.9,<0.11\"])\n+ requirements.extend(\n+ [\n+ # tensorflow>=2.3 not compatible with tensorflow-probability<0.11\n+ \"tensorflow>=2.1.0,<2.3\",\n+ # tensorflow-probability==0.10.0 doesn't install correctly\n+ # https://github.com/tensorflow/probability/issues/991\n+ #\n+ # gpflow uses private functionality not present in tensorflow-probability~=0.11\n+ \"tensorflow-probability>=0.9,<0.11,!=0.10.0\",\n+ \"setuptools>=41.0.0\", # to satisfy dependency constraints\n+ ]\n+ )\n \n \n def read_file(filename):\n", "issue": "Check deps on CI\n`pip install gpflow` currently installs dependencies (setuptools, scipy) with versions that are incompatible with the tensorflow version installed.\r\n\r\nThis ticket isn't to fix the dependencies, per se, but suggests adding a `pip check -vvv` stage to CI, so that such problems are caught at PR stage.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\n\n# Dependencies of GPflow\nrequirements = [\n \"numpy>=1.10.0\",\n \"scipy>=0.18.0\",\n \"multipledispatch>=0.6\",\n \"tabulate\",\n \"typing_extensions\",\n \"cloudpickle==1.3.0\", # temporary workaround for tensorflow/probability#991\n]\n\nif sys.version_info < (3, 7):\n # became part of stdlib in python 3.7\n requirements.append(\"dataclasses\")\n\n# We do not want to install tensorflow in the readthedocs environment, where we\n# use autodoc_mock_imports instead. 
Hence we use this flag to decide whether or\n# not to append tensorflow and tensorflow_probability to the requirements:\nif os.environ.get(\"READTHEDOCS\") != \"True\":\n requirements.extend([\"tensorflow>=2.1.0,<2.3\", \"tensorflow-probability>=0.9,<0.11\"])\n\n\ndef read_file(filename):\n with open(filename, encoding=\"utf-8\") as f:\n return f.read().strip()\n\n\nversion = read_file(\"VERSION\")\nreadme_text = read_file(\"README.md\")\n\npackages = find_packages(\".\", exclude=[\"tests\"])\n\nsetup(\n name=\"gpflow\",\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n long_description=readme_text,\n long_description_content_type=\"text/markdown\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"https://www.gpflow.org\",\n project_urls={\n \"Source on GitHub\": \"https://github.com/GPflow/GPflow\",\n \"Documentation\": \"https://gpflow.readthedocs.io\",\n },\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={\"ImageToTensorBoard\": [\"matplotlib\"]},\n python_requires=\">=3.6\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "setup.py"}]} | 1,288 | 418 |
gh_patches_debug_30178 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-965 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ku-ring-gai Council doesn't work if there is a house number 1A
works - 1A Latona Street PYMBLE 2073
doesn't work - 1 Latona Street PYMBLE 2073
Both exist
</issue>
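
A rough sketch of the lookup pitfall being described, using made-up autocomplete results rather than the council's real API payload:

```python
# Made-up autocomplete payload for the query "1 Latona Street, PYMBLE NSW 2073".
# If the service formats its address line slightly differently, the substring
# test never fires and locationId stays at its 0 sentinel.
data = [{"AddressSingleLine": "1 Latona Street PYMBLE NSW 2073", "Id": "22222"}]

address = "1 Latona Street, PYMBLE NSW 2073"
locationId = 0
for item in data:
    if address in item["AddressSingleLine"]:
        locationId = item["Id"]
        break

# Pre-fix, locationId == 0 meant "return []", i.e. a silently empty schedule
# for an address that does exist.
print(locationId)  # 0
```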
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py]
1 import datetime
2 import json
3 import requests
4
5 from bs4 import BeautifulSoup
6 from requests.utils import requote_uri
7 from waste_collection_schedule import Collection
8
9 TITLE = "Ku-ring-gai Council"
10 DESCRIPTION = "Source for Ku-ring-gai Council waste collection."
11 URL = "https://www.krg.nsw.gov.au"
12 TEST_CASES = {
13 "randomHouse": {
14 "post_code": "2070",
15 "suburb": "LINDFIELD",
16 "street_name": "Wolseley Road",
17 "street_number": "42",
18 },
19 "randomAppartment": {
20 "post_code": "2074",
21 "suburb": "WARRAWEE",
22 "street_name": "Cherry Street",
23 "street_number": "4/9",
24 },
25 "randomMultiunit": {
26 "post_code": "2075",
27 "suburb": "ST IVES",
28 "street_name": "Kitchener Street",
29 "street_number": "99/2-8",
30 },
31 }
32
33 API_URLS = {
34 "session":"https://www.krg.nsw.gov.au" ,
35 "search": "https://www.krg.nsw.gov.au/api/v1/myarea/search?keywords={}",
36 "schedule": "https://www.krg.nsw.gov.au/ocapi/Public/myarea/wasteservices?geolocationid={}&ocsvclang=en-AU",
37 }
38
39 HEADERS = {
40 "user-agent": "Mozilla/5.0",
41 }
42
43 ICON_MAP = {
44 "GeneralWaste": "mdi:trash-can",
45 "Recycling": "mdi:recycle",
46 "GreenWaste": "mdi:leaf",
47 }
48
49 ROUNDS = {
50 "GeneralWaste": "General Waste",
51 "Recycling": "Recycling",
52 "GreenWaste": "Green Waste",
53 }
54
55 # _LOGGER = logging.getLogger(__name__)
56
57
58 class Source:
59 def __init__(
60 self, post_code: str, suburb: str, street_name: str, street_number: str
61 ):
62 self.post_code = post_code
63 self.suburb = suburb.upper()
64 self.street_name = street_name
65 self.street_number = street_number
66
67 def fetch(self):
68
69 locationId = 0
70
71 # 'collection' api call seems to require an ASP.Net_sessionID, so obtain the relevant cookie
72 s = requests.Session()
73 q = requote_uri(str(API_URLS["session"]))
74 r0 = s.get(q, headers = HEADERS)
75
76 # Do initial address search
77 address = "{} {}, {} NSW {}".format(self.street_number, self.street_name, self.suburb, self.post_code)
78 q = requote_uri(str(API_URLS["search"]).format(address))
79 r1 = s.get(q, headers = HEADERS)
80 data = json.loads(r1.text)
81
82 # Find the geolocation for the address
83 for item in data["Items"]:
84 if address in item['AddressSingleLine']:
85 locationId = item["Id"]
86 break
87
88 if locationId == 0:
89 return []
90
91 # Retrieve the upcoming collections for location
92 q = requote_uri(str(API_URLS["schedule"]).format(locationId))
93 r2 = s.get(q, headers = HEADERS)
94 data = json.loads(r2.text)
95 responseContent = data["responseContent"]
96
97 soup = BeautifulSoup(responseContent, "html.parser")
98 services = soup.find_all("article")
99
100 entries = []
101
102 for item in services:
103 waste_type = item.find('h3').text
104 date = datetime.datetime.strptime(item.find('div', {'class': 'next-service'}).text.strip(), "%a %d/%m/%Y").date()
105 entries.append(
106 Collection(
107 date = date,
108 # t=waste_type, # api returns GeneralWaste, Recycling, GreenWaste
109 t = ROUNDS.get(waste_type), # returns user-friendly General Waste, Recycling, Green Waste
110 icon=ICON_MAP.get(waste_type),
111 )
112 )
113
114 return entries
115
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py
@@ -28,8 +28,21 @@
"street_name": "Kitchener Street",
"street_number": "99/2-8",
},
+ "1 Latona St": {
+ "post_code": "2073",
+ "suburb": "PYMBLE",
+ "street_name": "Latona Street",
+ "street_number": "1",
+ },
+ "1A Latona St": {
+ "post_code": "2073",
+ "suburb": "PYMBLE",
+ "street_name": "Latona Street",
+ "street_number": "1A",
+ },
}
+
API_URLS = {
"session":"https://www.krg.nsw.gov.au" ,
"search": "https://www.krg.nsw.gov.au/api/v1/myarea/search?keywords={}",
@@ -77,16 +90,12 @@
address = "{} {}, {} NSW {}".format(self.street_number, self.street_name, self.suburb, self.post_code)
q = requote_uri(str(API_URLS["search"]).format(address))
r1 = s.get(q, headers = HEADERS)
- data = json.loads(r1.text)
+ data = json.loads(r1.text)["Items"]
# Find the geolocation for the address
- for item in data["Items"]:
+ for item in data:
if address in item['AddressSingleLine']:
locationId = item["Id"]
- break
-
- if locationId == 0:
- return []
# Retrieve the upcoming collections for location
q = requote_uri(str(API_URLS["schedule"]).format(locationId))
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py\n@@ -28,8 +28,21 @@\n \"street_name\": \"Kitchener Street\",\n \"street_number\": \"99/2-8\",\n },\n+ \"1 Latona St\": {\n+ \"post_code\": \"2073\",\n+ \"suburb\": \"PYMBLE\",\n+ \"street_name\": \"Latona Street\",\n+ \"street_number\": \"1\",\n+ },\n+ \"1A Latona St\": {\n+ \"post_code\": \"2073\",\n+ \"suburb\": \"PYMBLE\",\n+ \"street_name\": \"Latona Street\",\n+ \"street_number\": \"1A\",\n+ },\n }\n \n+\n API_URLS = {\n \"session\":\"https://www.krg.nsw.gov.au\" ,\n \"search\": \"https://www.krg.nsw.gov.au/api/v1/myarea/search?keywords={}\",\n@@ -77,16 +90,12 @@\n address = \"{} {}, {} NSW {}\".format(self.street_number, self.street_name, self.suburb, self.post_code)\n q = requote_uri(str(API_URLS[\"search\"]).format(address))\n r1 = s.get(q, headers = HEADERS)\n- data = json.loads(r1.text)\n+ data = json.loads(r1.text)[\"Items\"]\n \n # Find the geolocation for the address\n- for item in data[\"Items\"]:\n+ for item in data:\n if address in item['AddressSingleLine']:\n locationId = item[\"Id\"]\n- break\n-\n- if locationId == 0:\n- return []\n \n # Retrieve the upcoming collections for location\n q = requote_uri(str(API_URLS[\"schedule\"]).format(locationId))\n", "issue": "Ku-ring-gai Council doesn't work if there is a house number 1A\nworks - 1A Latona Street PYMBLE 2073\r\ndoesn't work - 1 Latona Street PYMBLE 2073\r\n\r\nBoth exist\n", "before_files": [{"content": "import datetime\nimport json\nimport requests\n\nfrom bs4 import BeautifulSoup\nfrom requests.utils import requote_uri\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Ku-ring-gai Council\"\nDESCRIPTION = \"Source for Ku-ring-gai Council waste collection.\"\nURL = \"https://www.krg.nsw.gov.au\"\nTEST_CASES = {\n \"randomHouse\": {\n \"post_code\": \"2070\",\n \"suburb\": \"LINDFIELD\",\n \"street_name\": \"Wolseley Road\",\n \"street_number\": \"42\",\n },\n \"randomAppartment\": {\n \"post_code\": \"2074\",\n \"suburb\": \"WARRAWEE\",\n \"street_name\": \"Cherry Street\",\n \"street_number\": \"4/9\",\n },\n \"randomMultiunit\": {\n \"post_code\": \"2075\",\n \"suburb\": \"ST IVES\",\n \"street_name\": \"Kitchener Street\",\n \"street_number\": \"99/2-8\",\n },\n}\n\nAPI_URLS = {\n \"session\":\"https://www.krg.nsw.gov.au\" ,\n \"search\": \"https://www.krg.nsw.gov.au/api/v1/myarea/search?keywords={}\",\n \"schedule\": \"https://www.krg.nsw.gov.au/ocapi/Public/myarea/wasteservices?geolocationid={}&ocsvclang=en-AU\",\n}\n\nHEADERS = {\n \"user-agent\": \"Mozilla/5.0\",\n}\n\nICON_MAP = {\n \"GeneralWaste\": \"mdi:trash-can\",\n \"Recycling\": \"mdi:recycle\",\n \"GreenWaste\": \"mdi:leaf\",\n}\n\nROUNDS = {\n \"GeneralWaste\": \"General Waste\",\n \"Recycling\": \"Recycling\",\n \"GreenWaste\": \"Green Waste\",\n}\n\n# _LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(\n self, post_code: str, suburb: str, street_name: str, street_number: str\n ):\n self.post_code = post_code\n self.suburb = suburb.upper()\n self.street_name = street_name\n self.street_number = street_number\n\n def fetch(self):\n\n locationId = 0\n\n # 'collection' api call 
seems to require an ASP.Net_sessionID, so obtain the relevant cookie\n s = requests.Session()\n q = requote_uri(str(API_URLS[\"session\"]))\n r0 = s.get(q, headers = HEADERS)\n\n # Do initial address search\n address = \"{} {}, {} NSW {}\".format(self.street_number, self.street_name, self.suburb, self.post_code)\n q = requote_uri(str(API_URLS[\"search\"]).format(address))\n r1 = s.get(q, headers = HEADERS)\n data = json.loads(r1.text)\n\n # Find the geolocation for the address\n for item in data[\"Items\"]:\n if address in item['AddressSingleLine']:\n locationId = item[\"Id\"]\n break\n\n if locationId == 0:\n return []\n\n # Retrieve the upcoming collections for location\n q = requote_uri(str(API_URLS[\"schedule\"]).format(locationId))\n r2 = s.get(q, headers = HEADERS)\n data = json.loads(r2.text)\n responseContent = data[\"responseContent\"]\n\n soup = BeautifulSoup(responseContent, \"html.parser\")\n services = soup.find_all(\"article\")\n \n entries = []\n\n for item in services:\n waste_type = item.find('h3').text\n date = datetime.datetime.strptime(item.find('div', {'class': 'next-service'}).text.strip(), \"%a %d/%m/%Y\").date()\n entries.append(\n Collection(\n date = date,\n # t=waste_type, # api returns GeneralWaste, Recycling, GreenWaste \n t = ROUNDS.get(waste_type), # returns user-friendly General Waste, Recycling, Green Waste\n icon=ICON_MAP.get(waste_type),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/kuringgai_nsw_gov_au.py"}]} | 1,746 | 479 |
gh_patches_debug_8227 | rasdani/github-patches | git_diff | spacetelescope__jwql-84 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make dev conda environment more general
We should make our `dev` `conda` environment more generalized so that it can be used on the new test server.
</issue>
<code>
[start of setup.py]
1 import numpy as np
2 from setuptools import setup
3 from setuptools import find_packages
4
5 VERSION = '0.4.0'
6
7 AUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek'
8 AUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin'
9
10 REQUIRES = ['astropy', 'astroquery', 'bokeh', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']
11
12 setup(
13 name = 'jwql',
14 version = VERSION,
15 description = 'The JWST Quicklook Project',
16 url = 'https://github.com/spacetelescope/jwql.git',
17 author = AUTHORS,
18 author_email='[email protected]',
19 license='BSD',
20 keywords = ['astronomy', 'python'],
21 classifiers = ['Programming Language :: Python'],
22 packages = find_packages(),
23 install_requires = REQUIRES,
24 include_package_data=True,
25 include_dirs = [np.get_include()],
26 )
27
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,7 +7,7 @@
AUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek'
AUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin'
-REQUIRES = ['astropy', 'astroquery', 'bokeh', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']
+REQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']
setup(
name = 'jwql',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,7 +7,7 @@\n AUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek'\n AUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin'\n \n-REQUIRES = ['astropy', 'astroquery', 'bokeh', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']\n+REQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']\n \n setup(\n name = 'jwql',\n", "issue": "Make dev conda environment more general\nWe should make our `dev` `conda` environment more generalized so that it can be used on the new test server. \n", "before_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.4.0'\n\nAUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek'\nAUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin'\n\nREQUIRES = ['astropy', 'astroquery', 'bokeh', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']\n\nsetup(\n name = 'jwql',\n version = VERSION,\n description = 'The JWST Quicklook Project',\n url = 'https://github.com/spacetelescope/jwql.git',\n author = AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords = ['astronomy', 'python'],\n classifiers = ['Programming Language :: Python'],\n packages = find_packages(),\n install_requires = REQUIRES,\n include_package_data=True,\n include_dirs = [np.get_include()],\n )\n", "path": "setup.py"}]} | 845 | 190 |
gh_patches_debug_18727 | rasdani/github-patches | git_diff | scrapy__scrapy-2847 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redirect 308 missing
I did a check on the RedirectMiddleware and noticed that code 308 is missing. Is there a reason for that?
Some websites don't update their sitemaps and end up with a long list of 308 redirects from http to https.
(side note: is there a way to add an "s", i.e. rewrite http to https, before a link is scraped?)
</issue>
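
On the side note, one common approach is to rewrite the scheme before the request is scheduled; the helper below is a generic sketch, not an official Scrapy recipe:

```python
def to_https(url: str) -> str:
    # Upgrade plain-HTTP links before they are scheduled, so the crawl
    # never has to follow the 301/308 hop at all.
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url

print(to_https("http://example.com/sitemap.xml"))  # https://example.com/sitemap.xml
```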
<code>
[start of scrapy/downloadermiddlewares/redirect.py]
1 import logging
2 from six.moves.urllib.parse import urljoin
3
4 from w3lib.url import safe_url_string
5
6 from scrapy.http import HtmlResponse
7 from scrapy.utils.response import get_meta_refresh
8 from scrapy.exceptions import IgnoreRequest, NotConfigured
9
10 logger = logging.getLogger(__name__)
11
12
13 class BaseRedirectMiddleware(object):
14
15 enabled_setting = 'REDIRECT_ENABLED'
16
17 def __init__(self, settings):
18 if not settings.getbool(self.enabled_setting):
19 raise NotConfigured
20
21 self.max_redirect_times = settings.getint('REDIRECT_MAX_TIMES')
22 self.priority_adjust = settings.getint('REDIRECT_PRIORITY_ADJUST')
23
24 @classmethod
25 def from_crawler(cls, crawler):
26 return cls(crawler.settings)
27
28 def _redirect(self, redirected, request, spider, reason):
29 ttl = request.meta.setdefault('redirect_ttl', self.max_redirect_times)
30 redirects = request.meta.get('redirect_times', 0) + 1
31
32 if ttl and redirects <= self.max_redirect_times:
33 redirected.meta['redirect_times'] = redirects
34 redirected.meta['redirect_ttl'] = ttl - 1
35 redirected.meta['redirect_urls'] = request.meta.get('redirect_urls', []) + \
36 [request.url]
37 redirected.dont_filter = request.dont_filter
38 redirected.priority = request.priority + self.priority_adjust
39 logger.debug("Redirecting (%(reason)s) to %(redirected)s from %(request)s",
40 {'reason': reason, 'redirected': redirected, 'request': request},
41 extra={'spider': spider})
42 return redirected
43 else:
44 logger.debug("Discarding %(request)s: max redirections reached",
45 {'request': request}, extra={'spider': spider})
46 raise IgnoreRequest("max redirections reached")
47
48 def _redirect_request_using_get(self, request, redirect_url):
49 redirected = request.replace(url=redirect_url, method='GET', body='')
50 redirected.headers.pop('Content-Type', None)
51 redirected.headers.pop('Content-Length', None)
52 return redirected
53
54
55 class RedirectMiddleware(BaseRedirectMiddleware):
56 """
57 Handle redirection of requests based on response status
58 and meta-refresh html tag.
59 """
60 def process_response(self, request, response, spider):
61 if (request.meta.get('dont_redirect', False) or
62 response.status in getattr(spider, 'handle_httpstatus_list', []) or
63 response.status in request.meta.get('handle_httpstatus_list', []) or
64 request.meta.get('handle_httpstatus_all', False)):
65 return response
66
67 allowed_status = (301, 302, 303, 307)
68 if 'Location' not in response.headers or response.status not in allowed_status:
69 return response
70
71 location = safe_url_string(response.headers['location'])
72
73 redirected_url = urljoin(request.url, location)
74
75 if response.status in (301, 307) or request.method == 'HEAD':
76 redirected = request.replace(url=redirected_url)
77 return self._redirect(redirected, request, spider, response.status)
78
79 redirected = self._redirect_request_using_get(request, redirected_url)
80 return self._redirect(redirected, request, spider, response.status)
81
82
83 class MetaRefreshMiddleware(BaseRedirectMiddleware):
84
85 enabled_setting = 'METAREFRESH_ENABLED'
86
87 def __init__(self, settings):
88 super(MetaRefreshMiddleware, self).__init__(settings)
89 self._maxdelay = settings.getint('REDIRECT_MAX_METAREFRESH_DELAY',
90 settings.getint('METAREFRESH_MAXDELAY'))
91
92 def process_response(self, request, response, spider):
93 if request.meta.get('dont_redirect', False) or request.method == 'HEAD' or \
94 not isinstance(response, HtmlResponse):
95 return response
96
97 interval, url = get_meta_refresh(response)
98 if url and interval < self._maxdelay:
99 redirected = self._redirect_request_using_get(request, url)
100 return self._redirect(redirected, request, spider, 'meta refresh')
101
102 return response
103
[end of scrapy/downloadermiddlewares/redirect.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/downloadermiddlewares/redirect.py b/scrapy/downloadermiddlewares/redirect.py
--- a/scrapy/downloadermiddlewares/redirect.py
+++ b/scrapy/downloadermiddlewares/redirect.py
@@ -64,7 +64,7 @@
request.meta.get('handle_httpstatus_all', False)):
return response
- allowed_status = (301, 302, 303, 307)
+ allowed_status = (301, 302, 303, 307, 308)
if 'Location' not in response.headers or response.status not in allowed_status:
return response
@@ -72,7 +72,7 @@
redirected_url = urljoin(request.url, location)
- if response.status in (301, 307) or request.method == 'HEAD':
+ if response.status in (301, 307, 308) or request.method == 'HEAD':
redirected = request.replace(url=redirected_url)
return self._redirect(redirected, request, spider, response.status)
| {"golden_diff": "diff --git a/scrapy/downloadermiddlewares/redirect.py b/scrapy/downloadermiddlewares/redirect.py\n--- a/scrapy/downloadermiddlewares/redirect.py\n+++ b/scrapy/downloadermiddlewares/redirect.py\n@@ -64,7 +64,7 @@\n request.meta.get('handle_httpstatus_all', False)):\n return response\n \n- allowed_status = (301, 302, 303, 307)\n+ allowed_status = (301, 302, 303, 307, 308)\n if 'Location' not in response.headers or response.status not in allowed_status:\n return response\n \n@@ -72,7 +72,7 @@\n \n redirected_url = urljoin(request.url, location)\n \n- if response.status in (301, 307) or request.method == 'HEAD':\n+ if response.status in (301, 307, 308) or request.method == 'HEAD':\n redirected = request.replace(url=redirected_url)\n return self._redirect(redirected, request, spider, response.status)\n", "issue": "Redirect 308 missing\nI did a check on the RedirectMiddleware and noticed that code 308 is missing. Is there a reason for that?\r\nSome websites don't update their sitemap and have a long list of 308 from http to https.\r\n\r\n(side note: is there a way to add \"s\" before a link is scraped?)\n", "before_files": [{"content": "import logging\nfrom six.moves.urllib.parse import urljoin\n\nfrom w3lib.url import safe_url_string\n\nfrom scrapy.http import HtmlResponse\nfrom scrapy.utils.response import get_meta_refresh\nfrom scrapy.exceptions import IgnoreRequest, NotConfigured\n\nlogger = logging.getLogger(__name__)\n\n\nclass BaseRedirectMiddleware(object):\n\n enabled_setting = 'REDIRECT_ENABLED'\n\n def __init__(self, settings):\n if not settings.getbool(self.enabled_setting):\n raise NotConfigured\n\n self.max_redirect_times = settings.getint('REDIRECT_MAX_TIMES')\n self.priority_adjust = settings.getint('REDIRECT_PRIORITY_ADJUST')\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler.settings)\n\n def _redirect(self, redirected, request, spider, reason):\n ttl = request.meta.setdefault('redirect_ttl', self.max_redirect_times)\n redirects = request.meta.get('redirect_times', 0) + 1\n\n if ttl and redirects <= self.max_redirect_times:\n redirected.meta['redirect_times'] = redirects\n redirected.meta['redirect_ttl'] = ttl - 1\n redirected.meta['redirect_urls'] = request.meta.get('redirect_urls', []) + \\\n [request.url]\n redirected.dont_filter = request.dont_filter\n redirected.priority = request.priority + self.priority_adjust\n logger.debug(\"Redirecting (%(reason)s) to %(redirected)s from %(request)s\",\n {'reason': reason, 'redirected': redirected, 'request': request},\n extra={'spider': spider})\n return redirected\n else:\n logger.debug(\"Discarding %(request)s: max redirections reached\",\n {'request': request}, extra={'spider': spider})\n raise IgnoreRequest(\"max redirections reached\")\n\n def _redirect_request_using_get(self, request, redirect_url):\n redirected = request.replace(url=redirect_url, method='GET', body='')\n redirected.headers.pop('Content-Type', None)\n redirected.headers.pop('Content-Length', None)\n return redirected\n\n\nclass RedirectMiddleware(BaseRedirectMiddleware):\n \"\"\"\n Handle redirection of requests based on response status\n and meta-refresh html tag.\n \"\"\"\n def process_response(self, request, response, spider):\n if (request.meta.get('dont_redirect', False) or\n response.status in getattr(spider, 'handle_httpstatus_list', []) or\n response.status in request.meta.get('handle_httpstatus_list', []) or\n request.meta.get('handle_httpstatus_all', False)):\n return response\n\n allowed_status = (301, 302, 
303, 307)\n if 'Location' not in response.headers or response.status not in allowed_status:\n return response\n\n location = safe_url_string(response.headers['location'])\n\n redirected_url = urljoin(request.url, location)\n\n if response.status in (301, 307) or request.method == 'HEAD':\n redirected = request.replace(url=redirected_url)\n return self._redirect(redirected, request, spider, response.status)\n\n redirected = self._redirect_request_using_get(request, redirected_url)\n return self._redirect(redirected, request, spider, response.status)\n\n\nclass MetaRefreshMiddleware(BaseRedirectMiddleware):\n\n enabled_setting = 'METAREFRESH_ENABLED'\n\n def __init__(self, settings):\n super(MetaRefreshMiddleware, self).__init__(settings)\n self._maxdelay = settings.getint('REDIRECT_MAX_METAREFRESH_DELAY',\n settings.getint('METAREFRESH_MAXDELAY'))\n\n def process_response(self, request, response, spider):\n if request.meta.get('dont_redirect', False) or request.method == 'HEAD' or \\\n not isinstance(response, HtmlResponse):\n return response\n\n interval, url = get_meta_refresh(response)\n if url and interval < self._maxdelay:\n redirected = self._redirect_request_using_get(request, url)\n return self._redirect(redirected, request, spider, 'meta refresh')\n\n return response\n", "path": "scrapy/downloadermiddlewares/redirect.py"}]} | 1,678 | 252 |
gh_patches_debug_11217 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1879 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: WasteNet Southland not working after 1.46.0
### I Have A Problem With:
A specific source
### What's Your Problem
The WasteNet Southland website and URL changed about a month ago. The issue caused by this change was supposed to be fixed in 1.46.0, but unfortunately it is still not working.

Tested with my address and even with the example data; all sensors come back as unknown.
### Source (if relevant)
wastenet_org_nz
### Logs
```Shell
no relevant logs
```
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
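
A small worked example of the day-first date strings involved (the sample value is hypothetical), showing how the two `strptime` patterns disagree:

```python
from datetime import datetime

raw = "21/03/24"  # hypothetical "Next Service Date:" text, day-first (21 March 2024)

print(datetime.strptime(raw, "%d/%m/%y").date())  # 2024-03-21 - what the site means
print(datetime.strptime(raw, "%y/%m/%d").date())  # 2021-03-24 - what the old pattern produced
```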
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py]
1 import re
2 from datetime import datetime
3 from html.parser import HTMLParser
4
5 import requests
6 from waste_collection_schedule import Collection # type: ignore[attr-defined]
7
8 TITLE = "Gore, Invercargill & Southland"
9 DESCRIPTION = "Source for Wastenet.org.nz."
10 URL = "http://www.wastenet.org.nz"
11 TEST_CASES = {
12 "166 Lewis Street": {"address": "166 Lewis Street"},
13 "Old Format: 199 Crawford Street": {"address": "199 Crawford Street INVERCARGILL"},
14 "Old Format: 156 Tay Street": {"address": "156 Tay Street INVERCARGILL"},
15 "entry_id glass only": {"entry_id": "23571"},
16 # "31 Conyers Street": {"address": "31 Conyers Street INVERCARGILL"}, # Thursday
17 # "67 Chesney Street": {"address": "67 Chesney Street INVERCARGILL"}, # Friday
18 }
19
20 ICON_MAP = {
21 "Glass": "mdi:glass-mug-variant",
22 "Rubbish": "mdi:delete-empty",
23 "Recycle": "mdi:recycle",
24 }
25
26
27 class WasteSearchResultsParser(HTMLParser):
28 def __init__(self):
29 super().__init__()
30 self._entries = []
31 self._wasteType = None
32 self._withinCollectionDay = False
33 self._withinType = False
34
35 @property
36 def entries(self):
37 return self._entries
38
39 def handle_starttag(self, tag, attrs):
40 if tag == "span":
41 d = dict(attrs)
42 if d.get("class", "").startswith("badge"):
43 self._withinType = True
44
45 def handle_data(self, data):
46 if self._withinType:
47 self._withinType = False
48 self._wasteType = data
49 elif data.startswith("Next Service Date:"):
50 self._withinCollectionDay = True
51 elif self._withinCollectionDay:
52 date = datetime.strptime(data, "%y/%m/%d").date()
53 if self._wasteType is not None:
54 self._entries.append(Collection(date, self._wasteType))
55 self._withinCollectionDay = False
56
57
58 HEADER = {"User-Agent": "Mozilla/5.0"}
59
60 SITE_URL = "https://www.wastenet.org.nz/bin-day/"
61 ADDRESS_URL = "https://www.wastenet.org.nz/wp-admin/admin-ajax.php"
62
63
64 class Source:
65 def __init__(self, address: str | None = None, entry_id=None):
66 if not address and not entry_id:
67 raise ValueError("Address or entry_id must be provided")
68
69 self._address = address.replace(" INVERCARGILL", "") if address else None
70 self._entry_id = entry_id
71
72 def get_entry_id(self, s):
73 r = s.get(SITE_URL)
74 r.raise_for_status()
75 # regex find security: 'KEY'
76 match = re.search(r"security: '(\w+)'", r.text)
77 if not match:
78 raise ValueError("Security key not found")
79 security_key = match.group(1)
80
81 # get token
82 params = {
83 "action": "we_data_autocomplete",
84 "term": self._address,
85 "security": security_key,
86 }
87
88 r = s.get(
89 ADDRESS_URL,
90 params=params,
91 )
92 r.raise_for_status()
93
94 return r.json()["data"][0]["url"].split("=")[1]
95
96 def fetch(self):
97 s = requests.Session()
98 s.headers.update(HEADER)
99
100 if self._entry_id is None:
101 self._entry_id = self.get_entry_id(s)
102
103 r = s.get(SITE_URL, params={"entry_id": self._entry_id})
104 r.raise_for_status()
105 p = WasteSearchResultsParser()
106 p.feed(r.text)
107 return p.entries
108
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py
@@ -49,7 +49,7 @@
elif data.startswith("Next Service Date:"):
self._withinCollectionDay = True
elif self._withinCollectionDay:
- date = datetime.strptime(data, "%y/%m/%d").date()
+ date = datetime.strptime(data, "%d/%m/%y").date()
if self._wasteType is not None:
self._entries.append(Collection(date, self._wasteType))
self._withinCollectionDay = False
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py\n@@ -49,7 +49,7 @@\n elif data.startswith(\"Next Service Date:\"):\n self._withinCollectionDay = True\n elif self._withinCollectionDay:\n- date = datetime.strptime(data, \"%y/%m/%d\").date()\n+ date = datetime.strptime(data, \"%d/%m/%y\").date()\n if self._wasteType is not None:\n self._entries.append(Collection(date, self._wasteType))\n self._withinCollectionDay = False\n", "issue": "[Bug]: WasteNet Southland not working after 1.46.0\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nThe WasteNet Southland website and url has changed about a month ago. The issue created by this change was supposed to be fixed in 1.46.0, but unfortunately it is still not working.\r\nTested with my address and even with the example data, returning all sensors as unknown.\n\n### Source (if relevant)\n\nwastenet_org_nz\n\n### Logs\n\n```Shell\nno relevant logs\n```\n\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import re\nfrom datetime import datetime\nfrom html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Gore, Invercargill & Southland\"\nDESCRIPTION = \"Source for Wastenet.org.nz.\"\nURL = \"http://www.wastenet.org.nz\"\nTEST_CASES = {\n \"166 Lewis Street\": {\"address\": \"166 Lewis Street\"},\n \"Old Format: 199 Crawford Street\": {\"address\": \"199 Crawford Street INVERCARGILL\"},\n \"Old Format: 156 Tay Street\": {\"address\": \"156 Tay Street INVERCARGILL\"},\n \"entry_id glass only\": {\"entry_id\": \"23571\"},\n # \"31 Conyers Street\": {\"address\": \"31 Conyers Street INVERCARGILL\"}, # Thursday\n # \"67 Chesney Street\": {\"address\": \"67 Chesney Street INVERCARGILL\"}, # Friday\n}\n\nICON_MAP = {\n \"Glass\": \"mdi:glass-mug-variant\",\n \"Rubbish\": \"mdi:delete-empty\",\n \"Recycle\": \"mdi:recycle\",\n}\n\n\nclass WasteSearchResultsParser(HTMLParser):\n def __init__(self):\n super().__init__()\n self._entries = []\n self._wasteType = None\n self._withinCollectionDay = False\n self._withinType = False\n\n @property\n def entries(self):\n return self._entries\n\n def 
handle_starttag(self, tag, attrs):\n if tag == \"span\":\n d = dict(attrs)\n if d.get(\"class\", \"\").startswith(\"badge\"):\n self._withinType = True\n\n def handle_data(self, data):\n if self._withinType:\n self._withinType = False\n self._wasteType = data\n elif data.startswith(\"Next Service Date:\"):\n self._withinCollectionDay = True\n elif self._withinCollectionDay:\n date = datetime.strptime(data, \"%y/%m/%d\").date()\n if self._wasteType is not None:\n self._entries.append(Collection(date, self._wasteType))\n self._withinCollectionDay = False\n\n\nHEADER = {\"User-Agent\": \"Mozilla/5.0\"}\n\nSITE_URL = \"https://www.wastenet.org.nz/bin-day/\"\nADDRESS_URL = \"https://www.wastenet.org.nz/wp-admin/admin-ajax.php\"\n\n\nclass Source:\n def __init__(self, address: str | None = None, entry_id=None):\n if not address and not entry_id:\n raise ValueError(\"Address or entry_id must be provided\")\n\n self._address = address.replace(\" INVERCARGILL\", \"\") if address else None\n self._entry_id = entry_id\n\n def get_entry_id(self, s):\n r = s.get(SITE_URL)\n r.raise_for_status()\n # regex find security: 'KEY'\n match = re.search(r\"security: '(\\w+)'\", r.text)\n if not match:\n raise ValueError(\"Security key not found\")\n security_key = match.group(1)\n\n # get token\n params = {\n \"action\": \"we_data_autocomplete\",\n \"term\": self._address,\n \"security\": security_key,\n }\n\n r = s.get(\n ADDRESS_URL,\n params=params,\n )\n r.raise_for_status()\n\n return r.json()[\"data\"][0][\"url\"].split(\"=\")[1]\n\n def fetch(self):\n s = requests.Session()\n s.headers.update(HEADER)\n\n if self._entry_id is None:\n self._entry_id = self.get_entry_id(s)\n\n r = s.get(SITE_URL, params={\"entry_id\": self._entry_id})\n r.raise_for_status()\n p = WasteSearchResultsParser()\n p.feed(r.text)\n return p.entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/wastenet_org_nz.py"}]} | 1,968 | 192 |
gh_patches_debug_44041 | rasdani/github-patches | git_diff | pypi__warehouse-11122 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add caveats to macaroons for expiration (time) and version
**What's the problem this feature will solve?**
This will allow further attenuating the permissions granted by an API key
**Describe the solution you'd like**
Addition of two additional types of caveat: project version (for uploads) and time (expiry).
</issue>
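
A rough sketch of what a time-based caveat could look like as a macaroon predicate; the field names and window length are assumptions for illustration only:

```python
import json
import time

now = int(time.time())
# Hypothetical first-party caveat limiting the token to a 15-minute window.
expiry_predicate = json.dumps({"nbf": now, "exp": now + 15 * 60})

def is_within_window(predicate: str) -> bool:
    data = json.loads(predicate)
    current = int(time.time())
    return data["nbf"] <= current < data["exp"]

print(is_within_window(expiry_predicate))  # True until the window lapses
```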
<code>
[start of warehouse/macaroons/caveats.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import json
14
15 import pymacaroons
16
17 from warehouse.packaging.models import Project
18
19
20 class InvalidMacaroonError(Exception):
21 ...
22
23
24 class Caveat:
25 def __init__(self, verifier):
26 self.verifier = verifier
27
28 def verify(self, predicate):
29 raise InvalidMacaroonError
30
31 def __call__(self, predicate):
32 return self.verify(predicate)
33
34
35 class V1Caveat(Caveat):
36 def verify_projects(self, projects):
37 # First, ensure that we're actually operating in
38 # the context of a package.
39 if not isinstance(self.verifier.context, Project):
40 raise InvalidMacaroonError(
41 "project-scoped token used outside of a project context"
42 )
43
44 project = self.verifier.context
45 if project.normalized_name in projects:
46 return True
47
48 raise InvalidMacaroonError(
49 f"project-scoped token is not valid for project '{project.name}'"
50 )
51
52 def verify(self, predicate):
53 try:
54 data = json.loads(predicate)
55 except ValueError:
56 raise InvalidMacaroonError("malformatted predicate")
57
58 if data.get("version") != 1:
59 raise InvalidMacaroonError("invalidate version in predicate")
60
61 permissions = data.get("permissions")
62 if permissions is None:
63 raise InvalidMacaroonError("invalid permissions in predicate")
64
65 if permissions == "user":
66 # User-scoped tokens behave exactly like a user's normal credentials.
67 return True
68
69 projects = permissions.get("projects")
70 if projects is None:
71 raise InvalidMacaroonError("invalid projects in predicate")
72
73 return self.verify_projects(projects)
74
75
76 class Verifier:
77 def __init__(self, macaroon, context, principals, permission):
78 self.macaroon = macaroon
79 self.context = context
80 self.principals = principals
81 self.permission = permission
82 self.verifier = pymacaroons.Verifier()
83
84 def verify(self, key):
85 self.verifier.satisfy_general(V1Caveat(self))
86
87 try:
88 return self.verifier.verify(self.macaroon, key)
89 except (
90 pymacaroons.exceptions.MacaroonInvalidSignatureException,
91 Exception, # https://github.com/ecordell/pymacaroons/issues/50
92 ):
93 raise InvalidMacaroonError("invalid macaroon signature")
94
[end of warehouse/macaroons/caveats.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/macaroons/caveats.py b/warehouse/macaroons/caveats.py
--- a/warehouse/macaroons/caveats.py
+++ b/warehouse/macaroons/caveats.py
@@ -11,6 +11,7 @@
# limitations under the License.
import json
+import time
import pymacaroons
@@ -24,43 +25,51 @@
class Caveat:
def __init__(self, verifier):
self.verifier = verifier
+ # TODO: Surface this failure reason to the user.
+ # See: https://github.com/pypa/warehouse/issues/9018
+ self.failure_reason = None
- def verify(self, predicate):
- raise InvalidMacaroonError
+ def verify(self, predicate) -> bool:
+ return False
def __call__(self, predicate):
return self.verify(predicate)
class V1Caveat(Caveat):
- def verify_projects(self, projects):
+ def verify_projects(self, projects) -> bool:
# First, ensure that we're actually operating in
# the context of a package.
if not isinstance(self.verifier.context, Project):
- raise InvalidMacaroonError(
+ self.failure_reason = (
"project-scoped token used outside of a project context"
)
+ return False
project = self.verifier.context
if project.normalized_name in projects:
return True
- raise InvalidMacaroonError(
+ self.failure_reason = (
f"project-scoped token is not valid for project '{project.name}'"
)
+ return False
- def verify(self, predicate):
+ def verify(self, predicate) -> bool:
try:
data = json.loads(predicate)
except ValueError:
- raise InvalidMacaroonError("malformatted predicate")
+ self.failure_reason = "malformatted predicate"
+ return False
if data.get("version") != 1:
- raise InvalidMacaroonError("invalidate version in predicate")
+ self.failure_reason = "invalid version in predicate"
+ return False
permissions = data.get("permissions")
if permissions is None:
- raise InvalidMacaroonError("invalid permissions in predicate")
+ self.failure_reason = "invalid permissions in predicate"
+ return False
if permissions == "user":
# User-scoped tokens behave exactly like a user's normal credentials.
@@ -68,11 +77,34 @@
projects = permissions.get("projects")
if projects is None:
- raise InvalidMacaroonError("invalid projects in predicate")
+ self.failure_reason = "invalid projects in predicate"
+ return False
return self.verify_projects(projects)
+class ExpiryCaveat(Caveat):
+ def verify(self, predicate):
+ try:
+ data = json.loads(predicate)
+ expiry = data["exp"]
+ not_before = data["nbf"]
+ except (KeyError, ValueError, TypeError):
+ self.failure_reason = "malformatted predicate"
+ return False
+
+ if not expiry or not not_before:
+ self.failure_reason = "missing fields"
+ return False
+
+ now = int(time.time())
+ if now < not_before or now >= expiry:
+ self.failure_reason = "token is expired"
+ return False
+
+ return True
+
+
class Verifier:
def __init__(self, macaroon, context, principals, permission):
self.macaroon = macaroon
@@ -83,6 +115,7 @@
def verify(self, key):
self.verifier.satisfy_general(V1Caveat(self))
+ self.verifier.satisfy_general(ExpiryCaveat(self))
try:
return self.verifier.verify(self.macaroon, key)
@@ -90,4 +123,4 @@
pymacaroons.exceptions.MacaroonInvalidSignatureException,
Exception, # https://github.com/ecordell/pymacaroons/issues/50
):
- raise InvalidMacaroonError("invalid macaroon signature")
+ return False
| {"golden_diff": "diff --git a/warehouse/macaroons/caveats.py b/warehouse/macaroons/caveats.py\n--- a/warehouse/macaroons/caveats.py\n+++ b/warehouse/macaroons/caveats.py\n@@ -11,6 +11,7 @@\n # limitations under the License.\n \n import json\n+import time\n \n import pymacaroons\n \n@@ -24,43 +25,51 @@\n class Caveat:\n def __init__(self, verifier):\n self.verifier = verifier\n+ # TODO: Surface this failure reason to the user.\n+ # See: https://github.com/pypa/warehouse/issues/9018\n+ self.failure_reason = None\n \n- def verify(self, predicate):\n- raise InvalidMacaroonError\n+ def verify(self, predicate) -> bool:\n+ return False\n \n def __call__(self, predicate):\n return self.verify(predicate)\n \n \n class V1Caveat(Caveat):\n- def verify_projects(self, projects):\n+ def verify_projects(self, projects) -> bool:\n # First, ensure that we're actually operating in\n # the context of a package.\n if not isinstance(self.verifier.context, Project):\n- raise InvalidMacaroonError(\n+ self.failure_reason = (\n \"project-scoped token used outside of a project context\"\n )\n+ return False\n \n project = self.verifier.context\n if project.normalized_name in projects:\n return True\n \n- raise InvalidMacaroonError(\n+ self.failure_reason = (\n f\"project-scoped token is not valid for project '{project.name}'\"\n )\n+ return False\n \n- def verify(self, predicate):\n+ def verify(self, predicate) -> bool:\n try:\n data = json.loads(predicate)\n except ValueError:\n- raise InvalidMacaroonError(\"malformatted predicate\")\n+ self.failure_reason = \"malformatted predicate\"\n+ return False\n \n if data.get(\"version\") != 1:\n- raise InvalidMacaroonError(\"invalidate version in predicate\")\n+ self.failure_reason = \"invalid version in predicate\"\n+ return False\n \n permissions = data.get(\"permissions\")\n if permissions is None:\n- raise InvalidMacaroonError(\"invalid permissions in predicate\")\n+ self.failure_reason = \"invalid permissions in predicate\"\n+ return False\n \n if permissions == \"user\":\n # User-scoped tokens behave exactly like a user's normal credentials.\n@@ -68,11 +77,34 @@\n \n projects = permissions.get(\"projects\")\n if projects is None:\n- raise InvalidMacaroonError(\"invalid projects in predicate\")\n+ self.failure_reason = \"invalid projects in predicate\"\n+ return False\n \n return self.verify_projects(projects)\n \n \n+class ExpiryCaveat(Caveat):\n+ def verify(self, predicate):\n+ try:\n+ data = json.loads(predicate)\n+ expiry = data[\"exp\"]\n+ not_before = data[\"nbf\"]\n+ except (KeyError, ValueError, TypeError):\n+ self.failure_reason = \"malformatted predicate\"\n+ return False\n+\n+ if not expiry or not not_before:\n+ self.failure_reason = \"missing fields\"\n+ return False\n+\n+ now = int(time.time())\n+ if now < not_before or now >= expiry:\n+ self.failure_reason = \"token is expired\"\n+ return False\n+\n+ return True\n+\n+\n class Verifier:\n def __init__(self, macaroon, context, principals, permission):\n self.macaroon = macaroon\n@@ -83,6 +115,7 @@\n \n def verify(self, key):\n self.verifier.satisfy_general(V1Caveat(self))\n+ self.verifier.satisfy_general(ExpiryCaveat(self))\n \n try:\n return self.verifier.verify(self.macaroon, key)\n@@ -90,4 +123,4 @@\n pymacaroons.exceptions.MacaroonInvalidSignatureException,\n Exception, # https://github.com/ecordell/pymacaroons/issues/50\n ):\n- raise InvalidMacaroonError(\"invalid macaroon signature\")\n+ return False\n", "issue": "Add caveats to macaroons for expiration (time) and version\n**What's the problem this 
feature will solve?**\r\n\r\nThis will allow further attenuating the permissions granted by an API key\r\n\r\n**Describe the solution you'd like**\r\n\r\nAddition of two addition types of caveat, project version (for uploads) and time (expiry).\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\n\nimport pymacaroons\n\nfrom warehouse.packaging.models import Project\n\n\nclass InvalidMacaroonError(Exception):\n ...\n\n\nclass Caveat:\n def __init__(self, verifier):\n self.verifier = verifier\n\n def verify(self, predicate):\n raise InvalidMacaroonError\n\n def __call__(self, predicate):\n return self.verify(predicate)\n\n\nclass V1Caveat(Caveat):\n def verify_projects(self, projects):\n # First, ensure that we're actually operating in\n # the context of a package.\n if not isinstance(self.verifier.context, Project):\n raise InvalidMacaroonError(\n \"project-scoped token used outside of a project context\"\n )\n\n project = self.verifier.context\n if project.normalized_name in projects:\n return True\n\n raise InvalidMacaroonError(\n f\"project-scoped token is not valid for project '{project.name}'\"\n )\n\n def verify(self, predicate):\n try:\n data = json.loads(predicate)\n except ValueError:\n raise InvalidMacaroonError(\"malformatted predicate\")\n\n if data.get(\"version\") != 1:\n raise InvalidMacaroonError(\"invalidate version in predicate\")\n\n permissions = data.get(\"permissions\")\n if permissions is None:\n raise InvalidMacaroonError(\"invalid permissions in predicate\")\n\n if permissions == \"user\":\n # User-scoped tokens behave exactly like a user's normal credentials.\n return True\n\n projects = permissions.get(\"projects\")\n if projects is None:\n raise InvalidMacaroonError(\"invalid projects in predicate\")\n\n return self.verify_projects(projects)\n\n\nclass Verifier:\n def __init__(self, macaroon, context, principals, permission):\n self.macaroon = macaroon\n self.context = context\n self.principals = principals\n self.permission = permission\n self.verifier = pymacaroons.Verifier()\n\n def verify(self, key):\n self.verifier.satisfy_general(V1Caveat(self))\n\n try:\n return self.verifier.verify(self.macaroon, key)\n except (\n pymacaroons.exceptions.MacaroonInvalidSignatureException,\n Exception, # https://github.com/ecordell/pymacaroons/issues/50\n ):\n raise InvalidMacaroonError(\"invalid macaroon signature\")\n", "path": "warehouse/macaroons/caveats.py"}]} | 1,431 | 923 |
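For illustration, the expiry caveat added in the diff above can be exercised on its own. The `exp`/`nbf` field names and the validity check mirror the `ExpiryCaveat` in that diff; the standalone helper and the concrete timestamps below are illustrative assumptions, not part of the Warehouse codebase.

```python
import json
import time


def within_validity_window(predicate: str) -> bool:
    """Sketch of the ExpiryCaveat check from the diff above, as a free function."""
    try:
        data = json.loads(predicate)
        expiry = data["exp"]
        not_before = data["nbf"]
    except (KeyError, ValueError, TypeError):
        return False  # malformatted predicate
    if not expiry or not not_before:
        return False  # missing fields
    now = int(time.time())
    # Valid only when nbf <= now < exp, the inverse of `now < not_before or now >= expiry`.
    return not_before <= now < expiry


# Hypothetical one-hour token window starting now (values are made up).
predicate = json.dumps({"nbf": int(time.time()), "exp": int(time.time()) + 3600})
print(within_validity_window(predicate))  # True while the window is open
```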
gh_patches_debug_37260 | rasdani/github-patches | git_diff | kubeflow__pipelines-4363 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow output artifact store configuration (vs hard coded)
it seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
it would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.
I suggest making it configurable; I can open such a PR if we agree it's needed.
flexible pipeline service (host) path in client SDK
when creating an SDK `Client()` the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); this seems like a potential bug.

If it's acceptable, I can submit a PR for the line change above.
</issue>
<code>
[start of sdk/python/kfp/aws.py]
1 # Copyright 2019 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 def use_aws_secret(secret_name='aws-secret', aws_access_key_id_name='AWS_ACCESS_KEY_ID', aws_secret_access_key_name='AWS_SECRET_ACCESS_KEY'):
16 """An operator that configures the container to use AWS credentials.
17
18 AWS doesn't create secret along with kubeflow deployment and it requires users
19 to manually create credential secret with proper permissions.
20
21 ::
22
23 apiVersion: v1
24 kind: Secret
25 metadata:
26 name: aws-secret
27 type: Opaque
28 data:
29 AWS_ACCESS_KEY_ID: BASE64_YOUR_AWS_ACCESS_KEY_ID
30 AWS_SECRET_ACCESS_KEY: BASE64_YOUR_AWS_SECRET_ACCESS_KEY
31 """
32
33 def _use_aws_secret(task):
34 from kubernetes import client as k8s_client
35 (
36 task.container
37 .add_env_variable(
38 k8s_client.V1EnvVar(
39 name='AWS_ACCESS_KEY_ID',
40 value_from=k8s_client.V1EnvVarSource(
41 secret_key_ref=k8s_client.V1SecretKeySelector(
42 name=secret_name,
43 key=aws_access_key_id_name
44 )
45 )
46 )
47 )
48 .add_env_variable(
49 k8s_client.V1EnvVar(
50 name='AWS_SECRET_ACCESS_KEY',
51 value_from=k8s_client.V1EnvVarSource(
52 secret_key_ref=k8s_client.V1SecretKeySelector(
53 name=secret_name,
54 key=aws_secret_access_key_name
55 )
56 )
57 )
58 )
59 )
60 return task
61
62 return _use_aws_secret
63
[end of sdk/python/kfp/aws.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/kfp/aws.py b/sdk/python/kfp/aws.py
--- a/sdk/python/kfp/aws.py
+++ b/sdk/python/kfp/aws.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-def use_aws_secret(secret_name='aws-secret', aws_access_key_id_name='AWS_ACCESS_KEY_ID', aws_secret_access_key_name='AWS_SECRET_ACCESS_KEY'):
+def use_aws_secret(secret_name='aws-secret', aws_access_key_id_name='AWS_ACCESS_KEY_ID', aws_secret_access_key_name='AWS_SECRET_ACCESS_KEY', aws_region=None):
"""An operator that configures the container to use AWS credentials.
AWS doesn't create secret along with kubeflow deployment and it requires users
@@ -32,31 +32,38 @@
def _use_aws_secret(task):
from kubernetes import client as k8s_client
- (
- task.container
- .add_env_variable(
- k8s_client.V1EnvVar(
- name='AWS_ACCESS_KEY_ID',
- value_from=k8s_client.V1EnvVarSource(
- secret_key_ref=k8s_client.V1SecretKeySelector(
- name=secret_name,
- key=aws_access_key_id_name
- )
+ task.container \
+ .add_env_variable(
+ k8s_client.V1EnvVar(
+ name='AWS_ACCESS_KEY_ID',
+ value_from=k8s_client.V1EnvVarSource(
+ secret_key_ref=k8s_client.V1SecretKeySelector(
+ name=secret_name,
+ key=aws_access_key_id_name
)
)
)
+ ) \
+ .add_env_variable(
+ k8s_client.V1EnvVar(
+ name='AWS_SECRET_ACCESS_KEY',
+ value_from=k8s_client.V1EnvVarSource(
+ secret_key_ref=k8s_client.V1SecretKeySelector(
+ name=secret_name,
+ key=aws_secret_access_key_name
+ )
+ )
+ )
+ )
+
+ if aws_region:
+ task.container \
.add_env_variable(
k8s_client.V1EnvVar(
- name='AWS_SECRET_ACCESS_KEY',
- value_from=k8s_client.V1EnvVarSource(
- secret_key_ref=k8s_client.V1SecretKeySelector(
- name=secret_name,
- key=aws_secret_access_key_name
- )
- )
+ name='AWS_REGION',
+ value=aws_region
)
)
- )
return task
return _use_aws_secret
| {"golden_diff": "diff --git a/sdk/python/kfp/aws.py b/sdk/python/kfp/aws.py\n--- a/sdk/python/kfp/aws.py\n+++ b/sdk/python/kfp/aws.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-def use_aws_secret(secret_name='aws-secret', aws_access_key_id_name='AWS_ACCESS_KEY_ID', aws_secret_access_key_name='AWS_SECRET_ACCESS_KEY'):\n+def use_aws_secret(secret_name='aws-secret', aws_access_key_id_name='AWS_ACCESS_KEY_ID', aws_secret_access_key_name='AWS_SECRET_ACCESS_KEY', aws_region=None):\n \"\"\"An operator that configures the container to use AWS credentials.\n \n AWS doesn't create secret along with kubeflow deployment and it requires users\n@@ -32,31 +32,38 @@\n \n def _use_aws_secret(task):\n from kubernetes import client as k8s_client\n- (\n- task.container\n- .add_env_variable(\n- k8s_client.V1EnvVar(\n- name='AWS_ACCESS_KEY_ID',\n- value_from=k8s_client.V1EnvVarSource(\n- secret_key_ref=k8s_client.V1SecretKeySelector(\n- name=secret_name,\n- key=aws_access_key_id_name\n- )\n+ task.container \\\n+ .add_env_variable(\n+ k8s_client.V1EnvVar(\n+ name='AWS_ACCESS_KEY_ID',\n+ value_from=k8s_client.V1EnvVarSource(\n+ secret_key_ref=k8s_client.V1SecretKeySelector(\n+ name=secret_name,\n+ key=aws_access_key_id_name\n )\n )\n )\n+ ) \\\n+ .add_env_variable(\n+ k8s_client.V1EnvVar(\n+ name='AWS_SECRET_ACCESS_KEY',\n+ value_from=k8s_client.V1EnvVarSource(\n+ secret_key_ref=k8s_client.V1SecretKeySelector(\n+ name=secret_name,\n+ key=aws_secret_access_key_name\n+ )\n+ )\n+ )\n+ )\n+\n+ if aws_region:\n+ task.container \\\n .add_env_variable(\n k8s_client.V1EnvVar(\n- name='AWS_SECRET_ACCESS_KEY',\n- value_from=k8s_client.V1EnvVarSource(\n- secret_key_ref=k8s_client.V1SecretKeySelector(\n- name=secret_name,\n- key=aws_secret_access_key_name\n- )\n- )\n+ name='AWS_REGION',\n+ value=aws_region\n )\n )\n- )\n return task\n \n return _use_aws_secret\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. 
changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2019 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\ndef use_aws_secret(secret_name='aws-secret', aws_access_key_id_name='AWS_ACCESS_KEY_ID', aws_secret_access_key_name='AWS_SECRET_ACCESS_KEY'):\n \"\"\"An operator that configures the container to use AWS credentials.\n\n AWS doesn't create secret along with kubeflow deployment and it requires users\n to manually create credential secret with proper permissions.\n\n ::\n\n apiVersion: v1\n kind: Secret\n metadata:\n name: aws-secret\n type: Opaque\n data:\n AWS_ACCESS_KEY_ID: BASE64_YOUR_AWS_ACCESS_KEY_ID\n AWS_SECRET_ACCESS_KEY: BASE64_YOUR_AWS_SECRET_ACCESS_KEY\n \"\"\"\n\n def _use_aws_secret(task):\n from kubernetes import client as k8s_client\n (\n task.container\n .add_env_variable(\n k8s_client.V1EnvVar(\n name='AWS_ACCESS_KEY_ID',\n value_from=k8s_client.V1EnvVarSource(\n secret_key_ref=k8s_client.V1SecretKeySelector(\n name=secret_name,\n key=aws_access_key_id_name\n )\n )\n )\n )\n .add_env_variable(\n k8s_client.V1EnvVar(\n name='AWS_SECRET_ACCESS_KEY',\n value_from=k8s_client.V1EnvVarSource(\n secret_key_ref=k8s_client.V1SecretKeySelector(\n name=secret_name,\n key=aws_secret_access_key_name\n )\n )\n )\n )\n )\n return task\n\n return _use_aws_secret\n", "path": "sdk/python/kfp/aws.py"}]} | 1,479 | 587 |
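The second issue quoted in this row proposes falling back to an environment variable for the SDK host. A minimal, self-contained sketch of that pattern follows; `ML_PIPELINE_DNS_NAME` and the in-cluster default come from the issue text, while the `Configuration` stand-in class is an assumption made only for this example.

```python
import os

IN_CLUSTER_DNS_NAME = "ml-pipeline.kubeflow.svc.cluster.local:8888"  # default named in the issue


class Configuration:
    """Stand-in for the SDK config object; only the host attribute matters here."""

    def __init__(self):
        self.host = ""


def make_config(host=None):
    config = Configuration()
    # Explicit argument first, then the env var, then the hard-coded in-cluster default,
    # mirroring the one-line change proposed in the issue.
    config.host = host or os.environ.get("ML_PIPELINE_DNS_NAME", IN_CLUSTER_DNS_NAME)
    return config


print(make_config().host)                  # in-cluster default, or the env override if set
print(make_config("localhost:8888").host)  # an explicit host always wins
```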
gh_patches_debug_4153 | rasdani/github-patches | git_diff | svthalia__concrexit-2510 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Thumbnailing of transparent images seems to break
### Describe the bug
<img width="1119" alt="image" src="https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png">
### How to reproduce
<img width="1119" alt="image" src="https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png">
### Expected behaviour
Not <img width="1119" alt="image" src="https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png">
### Screenshots
<img width="1119" alt="image" src="https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png">
### Additional context
</issue>
<code>
[start of website/utils/media/services.py]
1 import io
2 import os
3
4 from django.conf import settings
5 from django.core import signing
6 from django.core.files.base import ContentFile
7 from django.core.files.storage import get_storage_class, DefaultStorage
8 from django.core.files.uploadedfile import InMemoryUploadedFile
9 from django.db.models.fields.files import FieldFile, ImageFieldFile
10 from django.urls import reverse
11
12
13 def save_image(storage, image, path, format):
14 buffer = io.BytesIO()
15 image.convert("RGB").save(fp=buffer, format=format)
16 buff_val = buffer.getvalue()
17 content = ContentFile(buff_val)
18 file = InMemoryUploadedFile(
19 content,
20 None,
21 f"foo.{format.lower()}",
22 f"image/{format.lower()}",
23 content.tell,
24 None,
25 )
26 return storage.save(path, file)
27
28
29 def get_media_url(file, attachment=False):
30 """Get the url of the provided media file to serve in a browser.
31
32 If the file is private a signature will be added.
33 Do NOT use this with user input
34 :param file: the file field
35 :param attachment: True if the file is a forced download
36 :return: the url of the media
37 """
38 storage = DefaultStorage()
39 file_name = file
40 if isinstance(file, ImageFieldFile) or isinstance(file, FieldFile):
41 storage = file.storage
42 file_name = file.name
43
44 return f"{storage.url(file_name, attachment)}"
45
46
47 def get_thumbnail_url(file, size, fit=True):
48 """Get the thumbnail url of a media file, NEVER use this with user input.
49
50 If the thumbnail exists this function will return the url of the
51 media file, with signature if necessary. Does it not yet exist a route
52 that executes the :func:`utils.media.views.generate_thumbnail`
53 will be the output.
54 :param file: the file field
55 :param size: size of the image
56 :param fit: False to keep the aspect ratio, True to crop
57 :return: get-thumbnail path
58 """
59 storage = DefaultStorage()
60 name = file
61
62 if isinstance(file, ImageFieldFile) or isinstance(file, FieldFile):
63 storage = file.storage
64 name = file.name
65
66 is_public = isinstance(storage, get_storage_class(settings.PUBLIC_FILE_STORAGE))
67 size_fit = "{}_{}".format(size, int(fit))
68
69 if name.endswith(".svg") and is_public:
70 return storage.url(name)
71
72 sig_info = {
73 "size": size,
74 "fit": int(fit),
75 "name": name,
76 "thumb_path": f"thumbnails/{size_fit}/{name}",
77 "serve_path": f"thumbnails/{size_fit}/{name}",
78 "storage": f"{storage.__class__.__module__}.{storage.__class__.__name__}",
79 }
80
81 # We provide a URL instead of calling it as a function, so that using
82 # it means kicking off a new GET request. If we would need to check all files for the
83 # thumbnails inline, loading an album overview would have high latency.
84 return (
85 reverse("get-thumbnail", args=[os.path.join(size_fit, sig_info["name"])])
86 + f"?sig={signing.dumps(sig_info)}"
87 )
88
[end of website/utils/media/services.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/utils/media/services.py b/website/utils/media/services.py
--- a/website/utils/media/services.py
+++ b/website/utils/media/services.py
@@ -12,7 +12,7 @@
def save_image(storage, image, path, format):
buffer = io.BytesIO()
- image.convert("RGB").save(fp=buffer, format=format)
+ image.convert("RGB" if format == "JPEG" else "RGBA").save(fp=buffer, format=format)
buff_val = buffer.getvalue()
content = ContentFile(buff_val)
file = InMemoryUploadedFile(
| {"golden_diff": "diff --git a/website/utils/media/services.py b/website/utils/media/services.py\n--- a/website/utils/media/services.py\n+++ b/website/utils/media/services.py\n@@ -12,7 +12,7 @@\n \n def save_image(storage, image, path, format):\n buffer = io.BytesIO()\n- image.convert(\"RGB\").save(fp=buffer, format=format)\n+ image.convert(\"RGB\" if format == \"JPEG\" else \"RGBA\").save(fp=buffer, format=format)\n buff_val = buffer.getvalue()\n content = ContentFile(buff_val)\n file = InMemoryUploadedFile(\n", "issue": "Thumbnailing of transparent images seems to break\n### Describe the bug\r\n<img width=\"1119\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png\">\r\n\r\n### How to reproduce\r\n<img width=\"1119\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png\">\r\n\r\n### Expected behaviour\r\nNot <img width=\"1119\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png\">\r\n\r\n### Screenshots\r\n<img width=\"1119\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7915741/191974542-041bdb37-f2e0-4181-9267-9a24d5df66b3.png\">\r\n\r\n### Additional context\r\n\n", "before_files": [{"content": "import io\nimport os\n\nfrom django.conf import settings\nfrom django.core import signing\nfrom django.core.files.base import ContentFile\nfrom django.core.files.storage import get_storage_class, DefaultStorage\nfrom django.core.files.uploadedfile import InMemoryUploadedFile\nfrom django.db.models.fields.files import FieldFile, ImageFieldFile\nfrom django.urls import reverse\n\n\ndef save_image(storage, image, path, format):\n buffer = io.BytesIO()\n image.convert(\"RGB\").save(fp=buffer, format=format)\n buff_val = buffer.getvalue()\n content = ContentFile(buff_val)\n file = InMemoryUploadedFile(\n content,\n None,\n f\"foo.{format.lower()}\",\n f\"image/{format.lower()}\",\n content.tell,\n None,\n )\n return storage.save(path, file)\n\n\ndef get_media_url(file, attachment=False):\n \"\"\"Get the url of the provided media file to serve in a browser.\n\n If the file is private a signature will be added.\n Do NOT use this with user input\n :param file: the file field\n :param attachment: True if the file is a forced download\n :return: the url of the media\n \"\"\"\n storage = DefaultStorage()\n file_name = file\n if isinstance(file, ImageFieldFile) or isinstance(file, FieldFile):\n storage = file.storage\n file_name = file.name\n\n return f\"{storage.url(file_name, attachment)}\"\n\n\ndef get_thumbnail_url(file, size, fit=True):\n \"\"\"Get the thumbnail url of a media file, NEVER use this with user input.\n\n If the thumbnail exists this function will return the url of the\n media file, with signature if necessary. 
Does it not yet exist a route\n that executes the :func:`utils.media.views.generate_thumbnail`\n will be the output.\n :param file: the file field\n :param size: size of the image\n :param fit: False to keep the aspect ratio, True to crop\n :return: get-thumbnail path\n \"\"\"\n storage = DefaultStorage()\n name = file\n\n if isinstance(file, ImageFieldFile) or isinstance(file, FieldFile):\n storage = file.storage\n name = file.name\n\n is_public = isinstance(storage, get_storage_class(settings.PUBLIC_FILE_STORAGE))\n size_fit = \"{}_{}\".format(size, int(fit))\n\n if name.endswith(\".svg\") and is_public:\n return storage.url(name)\n\n sig_info = {\n \"size\": size,\n \"fit\": int(fit),\n \"name\": name,\n \"thumb_path\": f\"thumbnails/{size_fit}/{name}\",\n \"serve_path\": f\"thumbnails/{size_fit}/{name}\",\n \"storage\": f\"{storage.__class__.__module__}.{storage.__class__.__name__}\",\n }\n\n # We provide a URL instead of calling it as a function, so that using\n # it means kicking off a new GET request. If we would need to check all files for the\n # thumbnails inline, loading an album overview would have high latency.\n return (\n reverse(\"get-thumbnail\", args=[os.path.join(size_fit, sig_info[\"name\"])])\n + f\"?sig={signing.dumps(sig_info)}\"\n )\n", "path": "website/utils/media/services.py"}]} | 1,722 | 131 |
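The one-line change in the diff above keeps the alpha channel for non-JPEG thumbnails. A standalone Pillow sketch of the difference, assuming Pillow is installed; the generated image is a placeholder rather than project data.

```python
from PIL import Image

# Placeholder source image with transparency (not a real upload).
src = Image.new("RGBA", (64, 64), (255, 0, 0, 0))  # fully transparent red square

target_format = "PNG"

# Previous behaviour: always flatten to RGB, discarding the alpha channel.
flattened = src.convert("RGB")

# Behaviour after the fix: keep RGBA unless the target format is JPEG.
converted = src.convert("RGB" if target_format == "JPEG" else "RGBA")

print(flattened.mode)  # "RGB"  -> alpha channel dropped
print(converted.mode)  # "RGBA" -> transparency preserved for PNG output
```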
gh_patches_debug_26436 | rasdani/github-patches | git_diff | Textualize__textual-1066 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Header's title text isn't centered properly
> Please give a brief but clear explanation of what the issue is. Let us know what the behaviour you expect is, and what is actually happening. Let us know what operating system you are running on, and what terminal you are using.
`Header`'s title text isn't centered, `show_clock=True` exacerbates the issue. My expectation is that the title is centered within the visible space between the icon/clock (if shown), and between the icon/right edge if not.
> Feel free to add screenshots and/or videos. These can be very helpful!



> If you can, include a complete working example that demonstrates the bug. Please check it can run without modifications.
```python
from textual.app import App, ComposeResult
from textual.widgets import Header, Static
class Demo(App):
TITLE = "Demonstration"
CSS = """
Screen {
layout: grid;
grid-size: 2;
}
.box {
height: 100%;
border: white;
}
"""
def compose(self) -> ComposeResult:
yield Header(show_clock=True)
yield Static(classes="box")
yield Static(classes="box")
```
</issue>
<code>
[start of src/textual/widgets/_header.py]
1 from __future__ import annotations
2
3 from datetime import datetime
4
5 from rich.text import Text
6
7 from ..widget import Widget
8 from ..reactive import Reactive, watch
9
10
11 class HeaderIcon(Widget):
12 """Display an 'icon' on the left of the header."""
13
14 DEFAULT_CSS = """
15 HeaderIcon {
16 dock: left;
17 padding: 0 1;
18 width: 8;
19 content-align: left middle;
20 }
21 """
22 icon = Reactive("⭘")
23
24 def render(self):
25 return self.icon
26
27
28 class HeaderClock(Widget):
29 """Display a clock on the right of the header."""
30
31 DEFAULT_CSS = """
32 HeaderClock {
33 dock: right;
34 width: 10;
35 padding: 0 1;
36 background: $secondary-background-lighten-1;
37 color: $text;
38 text-opacity: 85%;
39 content-align: center middle;
40 }
41 """
42
43 def on_mount(self) -> None:
44 self.set_interval(1, callback=self.refresh, name=f"update header clock")
45
46 def render(self):
47 return Text(datetime.now().time().strftime("%X"))
48
49
50 class HeaderTitle(Widget):
51 """Display the title / subtitle in the header."""
52
53 DEFAULT_CSS = """
54 HeaderTitle {
55 content-align: center middle;
56 width: 100%;
57 margin-right: 10;
58 }
59 """
60
61 text: Reactive[str] = Reactive("")
62 sub_text = Reactive("")
63
64 def render(self) -> Text:
65 text = Text(self.text, no_wrap=True, overflow="ellipsis")
66 if self.sub_text:
67 text.append(" — ")
68 text.append(self.sub_text, "dim")
69 return text
70
71
72 class Header(Widget):
73 """A header widget with icon and clock.
74
75 Args:
76 show_clock (bool, optional): True if the clock should be shown on the right of the header.
77 """
78
79 DEFAULT_CSS = """
80 Header {
81 dock: top;
82 width: 100%;
83 background: $secondary-background;
84 color: $text;
85 height: 1;
86 }
87 Header.-tall {
88 height: 3;
89 }
90 """
91
92 tall = Reactive(False)
93
94 DEFAULT_CLASSES = ""
95
96 def __init__(
97 self,
98 show_clock: bool = False,
99 *,
100 name: str | None = None,
101 id: str | None = None,
102 classes: str | None = None,
103 ):
104 super().__init__(name=name, id=id, classes=classes)
105 self.show_clock = show_clock
106
107 def compose(self):
108 yield HeaderIcon()
109 yield HeaderTitle()
110 if self.show_clock:
111 yield HeaderClock()
112
113 def watch_tall(self, tall: bool) -> None:
114 self.set_class(tall, "-tall")
115
116 def on_click(self):
117 self.toggle_class("-tall")
118
119 def on_mount(self) -> None:
120 def set_title(title: str) -> None:
121 self.query_one(HeaderTitle).text = title
122
123 def set_sub_title(sub_title: str) -> None:
124 self.query_one(HeaderTitle).sub_text = sub_title
125
126 watch(self.app, "title", set_title)
127 watch(self.app, "sub_title", set_sub_title)
128
[end of src/textual/widgets/_header.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/textual/widgets/_header.py b/src/textual/widgets/_header.py
--- a/src/textual/widgets/_header.py
+++ b/src/textual/widgets/_header.py
@@ -25,14 +25,26 @@
return self.icon
-class HeaderClock(Widget):
- """Display a clock on the right of the header."""
+class HeaderClockSpace(Widget):
+ """The space taken up by the clock on the right of the header."""
DEFAULT_CSS = """
- HeaderClock {
+ HeaderClockSpace {
dock: right;
width: 10;
padding: 0 1;
+ }
+ """
+
+ def render(self) -> str:
+ return ""
+
+
+class HeaderClock(HeaderClockSpace):
+ """Display a clock on the right of the header."""
+
+ DEFAULT_CSS = """
+ HeaderClock {
background: $secondary-background-lighten-1;
color: $text;
text-opacity: 85%;
@@ -54,7 +66,6 @@
HeaderTitle {
content-align: center middle;
width: 100%;
- margin-right: 10;
}
"""
@@ -107,8 +118,7 @@
def compose(self):
yield HeaderIcon()
yield HeaderTitle()
- if self.show_clock:
- yield HeaderClock()
+ yield HeaderClock() if self.show_clock else HeaderClockSpace()
def watch_tall(self, tall: bool) -> None:
self.set_class(tall, "-tall")
| {"golden_diff": "diff --git a/src/textual/widgets/_header.py b/src/textual/widgets/_header.py\n--- a/src/textual/widgets/_header.py\n+++ b/src/textual/widgets/_header.py\n@@ -25,14 +25,26 @@\n return self.icon\n \n \n-class HeaderClock(Widget):\n- \"\"\"Display a clock on the right of the header.\"\"\"\n+class HeaderClockSpace(Widget):\n+ \"\"\"The space taken up by the clock on the right of the header.\"\"\"\n \n DEFAULT_CSS = \"\"\"\n- HeaderClock {\n+ HeaderClockSpace {\n dock: right;\n width: 10;\n padding: 0 1;\n+ }\n+ \"\"\"\n+\n+ def render(self) -> str:\n+ return \"\"\n+\n+\n+class HeaderClock(HeaderClockSpace):\n+ \"\"\"Display a clock on the right of the header.\"\"\"\n+\n+ DEFAULT_CSS = \"\"\"\n+ HeaderClock {\n background: $secondary-background-lighten-1;\n color: $text;\n text-opacity: 85%;\n@@ -54,7 +66,6 @@\n HeaderTitle {\n content-align: center middle;\n width: 100%;\n- margin-right: 10;\n }\n \"\"\"\n \n@@ -107,8 +118,7 @@\n def compose(self):\n yield HeaderIcon()\n yield HeaderTitle()\n- if self.show_clock:\n- yield HeaderClock()\n+ yield HeaderClock() if self.show_clock else HeaderClockSpace()\n \n def watch_tall(self, tall: bool) -> None:\n self.set_class(tall, \"-tall\")\n", "issue": "[BUG] Header's title text isn't centered properly \n> Please give a brief but clear explanation of what the issue is. Let us know what the behaviour you expect is, and what is actually happening. Let us know what operating system you are running on, and what terminal you are using.\r\n\r\n`Header`'s title text isn't centered, `show_clock=True` exacerbates the issue. My expectation is that the title is centered within the visible space between the icon/clock (if shown), and between the icon/right edge if not.\r\n\r\n> Feel free to add screenshots and/or videos. These can be very helpful!\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n> If you can, include a complete working example that demonstrates the bug. 
Please check it can run without modifications.\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.widgets import Header, Static\r\n\r\nclass Demo(App):\r\n TITLE = \"Demonstration\"\r\n CSS = \"\"\"\r\n Screen {\r\n layout: grid;\r\n grid-size: 2;\r\n }\r\n .box {\r\n height: 100%;\r\n border: white;\r\n }\r\n \"\"\"\r\n\r\n def compose(self) -> ComposeResult:\r\n yield Header(show_clock=True)\r\n yield Static(classes=\"box\")\r\n yield Static(classes=\"box\")\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom datetime import datetime\n\nfrom rich.text import Text\n\nfrom ..widget import Widget\nfrom ..reactive import Reactive, watch\n\n\nclass HeaderIcon(Widget):\n \"\"\"Display an 'icon' on the left of the header.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n HeaderIcon {\n dock: left;\n padding: 0 1;\n width: 8;\n content-align: left middle;\n }\n \"\"\"\n icon = Reactive(\"\u2b58\")\n\n def render(self):\n return self.icon\n\n\nclass HeaderClock(Widget):\n \"\"\"Display a clock on the right of the header.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n HeaderClock {\n dock: right;\n width: 10;\n padding: 0 1;\n background: $secondary-background-lighten-1;\n color: $text;\n text-opacity: 85%;\n content-align: center middle;\n }\n \"\"\"\n\n def on_mount(self) -> None:\n self.set_interval(1, callback=self.refresh, name=f\"update header clock\")\n\n def render(self):\n return Text(datetime.now().time().strftime(\"%X\"))\n\n\nclass HeaderTitle(Widget):\n \"\"\"Display the title / subtitle in the header.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n HeaderTitle {\n content-align: center middle;\n width: 100%;\n margin-right: 10;\n }\n \"\"\"\n\n text: Reactive[str] = Reactive(\"\")\n sub_text = Reactive(\"\")\n\n def render(self) -> Text:\n text = Text(self.text, no_wrap=True, overflow=\"ellipsis\")\n if self.sub_text:\n text.append(\" \u2014 \")\n text.append(self.sub_text, \"dim\")\n return text\n\n\nclass Header(Widget):\n \"\"\"A header widget with icon and clock.\n\n Args:\n show_clock (bool, optional): True if the clock should be shown on the right of the header.\n \"\"\"\n\n DEFAULT_CSS = \"\"\"\n Header {\n dock: top;\n width: 100%;\n background: $secondary-background;\n color: $text;\n height: 1;\n }\n Header.-tall {\n height: 3;\n }\n \"\"\"\n\n tall = Reactive(False)\n\n DEFAULT_CLASSES = \"\"\n\n def __init__(\n self,\n show_clock: bool = False,\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ):\n super().__init__(name=name, id=id, classes=classes)\n self.show_clock = show_clock\n\n def compose(self):\n yield HeaderIcon()\n yield HeaderTitle()\n if self.show_clock:\n yield HeaderClock()\n\n def watch_tall(self, tall: bool) -> None:\n self.set_class(tall, \"-tall\")\n\n def on_click(self):\n self.toggle_class(\"-tall\")\n\n def on_mount(self) -> None:\n def set_title(title: str) -> None:\n self.query_one(HeaderTitle).text = title\n\n def set_sub_title(sub_title: str) -> None:\n self.query_one(HeaderTitle).sub_text = sub_title\n\n watch(self.app, \"title\", set_title)\n watch(self.app, \"sub_title\", set_sub_title)\n", "path": "src/textual/widgets/_header.py"}]} | 1,972 | 351 |
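The fix in the diff above reserves the clock's dock width even when the clock is hidden, instead of pushing the title with a right margin. A toolkit-independent sketch of why reserving space on both sides keeps centred text stable; the widths used here are arbitrary assumptions, not Textual's actual layout values.

```python
def header_line(title: str, total_width: int, left_reserved: int, right_reserved: int) -> str:
    """Centre a title in whatever space is left after both reserved regions (sketch)."""
    inner = total_width - left_reserved - right_reserved
    return " " * left_reserved + title.center(inner) + " " * right_reserved


# Reserving space only for the left icon shifts the visual centre to the right.
print(repr(header_line("Demonstration", 40, 8, 0)))
# Reserving the clock's width on the right as well keeps the title in the same place
# whether or not the clock is actually rendered.
print(repr(header_line("Demonstration", 40, 8, 10)))
```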
gh_patches_debug_38859 | rasdani/github-patches | git_diff | ESMCI__cime-1857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SystemTestsCompareTwo multisubmit tries to do too much in phase 1
In comparing #1830 with what made it to master, I noticed that the indentation of this block is wrong:
```python
# Compare results
# Case1 is the "main" case, and we need to do the comparisons from there
self._activate_case1()
self._link_to_case2_output()
self._component_compare_test(self._run_one_suffix, self._run_two_suffix, success_change=success_change)
```
-- this should be indented under the "Second run" conditional.
The current indentation leads the ERR test (and any other multi-submit test) to try to do component_compare_test after the first phase, leading to a FAIL result. This doesn't cause a test failure, because the FAIL is later overwritten with a PASS, but it is still incorrect.
I have a fix for this in an incoming PR.
</issue>
<code>
[start of scripts/lib/CIME/SystemTests/erp.py]
1 """
2 CIME ERP test. This class inherits from SystemTestsCompareTwo
3
4 This is a pes counts hybrid (open-MP/MPI) restart bfb test from
5 startup. This is just like an ERS test but the pe-counts/threading
6 count are modified on retart.
7 (1) Do an initial run with pes set up out of the box (suffix base)
8 (2) Do a restart test with half the number of tasks and threads (suffix rest)
9 """
10
11 from CIME.XML.standard_module_setup import *
12 from CIME.case_setup import case_setup
13 from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
14 from CIME.check_lockedfiles import *
15
16 logger = logging.getLogger(__name__)
17
18 class ERP(SystemTestsCompareTwo):
19
20 def __init__(self, case):
21 """
22 initialize a test object
23 """
24 SystemTestsCompareTwo.__init__(self, case,
25 separate_builds = True,
26 run_two_suffix = 'rest',
27 run_one_description = 'initial',
28 run_two_description = 'restart')
29
30 def _common_setup(self):
31 self._case.set_value("BUILD_THREADED",True)
32
33 def _case_one_setup(self):
34 stop_n = self._case.get_value("STOP_N")
35
36 expect(stop_n > 2, "ERROR: stop_n value {:d} too short".format(stop_n))
37
38 def _case_two_setup(self):
39 # halve the number of tasks and threads
40 for comp in self._case.get_values("COMP_CLASSES"):
41 ntasks = self._case1.get_value("NTASKS_{}".format(comp))
42 nthreads = self._case1.get_value("NTHRDS_{}".format(comp))
43 rootpe = self._case1.get_value("ROOTPE_{}".format(comp))
44 if ( nthreads > 1 ):
45 self._case.set_value("NTHRDS_{}".format(comp), nthreads/2)
46 if ( ntasks > 1 ):
47 self._case.set_value("NTASKS_{}".format(comp), ntasks/2)
48 self._case.set_value("ROOTPE_{}".format(comp), rootpe/2)
49
50 stop_n = self._case1.get_value("STOP_N")
51 rest_n = self._case1.get_value("REST_N")
52 stop_new = stop_n - rest_n
53 expect(stop_new > 0, "ERROR: stop_n value {:d} too short {:d} {:d}".format(stop_new,stop_n,rest_n))
54 self._case.set_value("STOP_N", stop_new)
55 self._case.set_value("HIST_N", stop_n)
56 self._case.set_value("CONTINUE_RUN", True)
57 self._case.set_value("REST_OPTION","never")
58
59 # Note, some components, like CESM-CICE, have
60 # decomposition information in env_build.xml that
61 # needs to be regenerated for the above new tasks and thread counts
62 case_setup(self._case, test_mode=True, reset=True)
63
64 def _case_one_custom_postrun_action(self):
65 self.copy_case1_restarts_to_case2()
66
[end of scripts/lib/CIME/SystemTests/erp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/lib/CIME/SystemTests/erp.py b/scripts/lib/CIME/SystemTests/erp.py
--- a/scripts/lib/CIME/SystemTests/erp.py
+++ b/scripts/lib/CIME/SystemTests/erp.py
@@ -1,5 +1,5 @@
"""
-CIME ERP test. This class inherits from SystemTestsCompareTwo
+CIME ERP test. This class inherits from RestartTest
This is a pes counts hybrid (open-MP/MPI) restart bfb test from
startup. This is just like an ERS test but the pe-counts/threading
@@ -10,31 +10,26 @@
from CIME.XML.standard_module_setup import *
from CIME.case_setup import case_setup
-from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.SystemTests.restart_tests import RestartTest
from CIME.check_lockedfiles import *
logger = logging.getLogger(__name__)
-class ERP(SystemTestsCompareTwo):
+class ERP(RestartTest):
def __init__(self, case):
"""
initialize a test object
"""
- SystemTestsCompareTwo.__init__(self, case,
- separate_builds = True,
- run_two_suffix = 'rest',
- run_one_description = 'initial',
- run_two_description = 'restart')
+ RestartTest.__init__(self, case,
+ separate_builds = True,
+ run_two_suffix = 'rest',
+ run_one_description = 'initial',
+ run_two_description = 'restart')
def _common_setup(self):
self._case.set_value("BUILD_THREADED",True)
- def _case_one_setup(self):
- stop_n = self._case.get_value("STOP_N")
-
- expect(stop_n > 2, "ERROR: stop_n value {:d} too short".format(stop_n))
-
def _case_two_setup(self):
# halve the number of tasks and threads
for comp in self._case.get_values("COMP_CLASSES"):
@@ -47,15 +42,7 @@
self._case.set_value("NTASKS_{}".format(comp), ntasks/2)
self._case.set_value("ROOTPE_{}".format(comp), rootpe/2)
- stop_n = self._case1.get_value("STOP_N")
- rest_n = self._case1.get_value("REST_N")
- stop_new = stop_n - rest_n
- expect(stop_new > 0, "ERROR: stop_n value {:d} too short {:d} {:d}".format(stop_new,stop_n,rest_n))
- self._case.set_value("STOP_N", stop_new)
- self._case.set_value("HIST_N", stop_n)
- self._case.set_value("CONTINUE_RUN", True)
- self._case.set_value("REST_OPTION","never")
-
+ RestartTest._case_two_setup(self)
# Note, some components, like CESM-CICE, have
# decomposition information in env_build.xml that
# needs to be regenerated for the above new tasks and thread counts
| {"golden_diff": "diff --git a/scripts/lib/CIME/SystemTests/erp.py b/scripts/lib/CIME/SystemTests/erp.py\n--- a/scripts/lib/CIME/SystemTests/erp.py\n+++ b/scripts/lib/CIME/SystemTests/erp.py\n@@ -1,5 +1,5 @@\n \"\"\"\n-CIME ERP test. This class inherits from SystemTestsCompareTwo\n+CIME ERP test. This class inherits from RestartTest\n \n This is a pes counts hybrid (open-MP/MPI) restart bfb test from\n startup. This is just like an ERS test but the pe-counts/threading\n@@ -10,31 +10,26 @@\n \n from CIME.XML.standard_module_setup import *\n from CIME.case_setup import case_setup\n-from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo\n+from CIME.SystemTests.restart_tests import RestartTest\n from CIME.check_lockedfiles import *\n \n logger = logging.getLogger(__name__)\n \n-class ERP(SystemTestsCompareTwo):\n+class ERP(RestartTest):\n \n def __init__(self, case):\n \"\"\"\n initialize a test object\n \"\"\"\n- SystemTestsCompareTwo.__init__(self, case,\n- separate_builds = True,\n- run_two_suffix = 'rest',\n- run_one_description = 'initial',\n- run_two_description = 'restart')\n+ RestartTest.__init__(self, case,\n+ separate_builds = True,\n+ run_two_suffix = 'rest',\n+ run_one_description = 'initial',\n+ run_two_description = 'restart')\n \n def _common_setup(self):\n self._case.set_value(\"BUILD_THREADED\",True)\n \n- def _case_one_setup(self):\n- stop_n = self._case.get_value(\"STOP_N\")\n-\n- expect(stop_n > 2, \"ERROR: stop_n value {:d} too short\".format(stop_n))\n-\n def _case_two_setup(self):\n # halve the number of tasks and threads\n for comp in self._case.get_values(\"COMP_CLASSES\"):\n@@ -47,15 +42,7 @@\n self._case.set_value(\"NTASKS_{}\".format(comp), ntasks/2)\n self._case.set_value(\"ROOTPE_{}\".format(comp), rootpe/2)\n \n- stop_n = self._case1.get_value(\"STOP_N\")\n- rest_n = self._case1.get_value(\"REST_N\")\n- stop_new = stop_n - rest_n\n- expect(stop_new > 0, \"ERROR: stop_n value {:d} too short {:d} {:d}\".format(stop_new,stop_n,rest_n))\n- self._case.set_value(\"STOP_N\", stop_new)\n- self._case.set_value(\"HIST_N\", stop_n)\n- self._case.set_value(\"CONTINUE_RUN\", True)\n- self._case.set_value(\"REST_OPTION\",\"never\")\n-\n+ RestartTest._case_two_setup(self)\n # Note, some components, like CESM-CICE, have\n # decomposition information in env_build.xml that\n # needs to be regenerated for the above new tasks and thread counts\n", "issue": "SystemTestsCompareTwo multisubmit tries to do too much in phase 1\nIn comparing #1830 with what made it to master, I noticed that the indentation of this block is wrong:\r\n\r\n```python\r\n # Compare results\r\n # Case1 is the \"main\" case, and we need to do the comparisons from there\r\n self._activate_case1()\r\n self._link_to_case2_output()\r\n\r\n self._component_compare_test(self._run_one_suffix, self._run_two_suffix, success_change=success_change)\r\n```\r\n\r\n-- this should be indented under the \"Second run\" conditional.\r\n\r\nThe current indentation leads the ERR test (and any other multi-submit test) to try to do component_compare_test after the first phase, leading to a FAIL result. This doesn't cause a test failure, because the FAIL is later overwritten with a PASS, but it is still incorrect.\r\n\r\nI have a fix for this in an incoming PR.\n", "before_files": [{"content": "\"\"\"\nCIME ERP test. This class inherits from SystemTestsCompareTwo\n\nThis is a pes counts hybrid (open-MP/MPI) restart bfb test from\nstartup. 
This is just like an ERS test but the pe-counts/threading\ncount are modified on retart.\n(1) Do an initial run with pes set up out of the box (suffix base)\n(2) Do a restart test with half the number of tasks and threads (suffix rest)\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.case_setup import case_setup\nfrom CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo\nfrom CIME.check_lockedfiles import *\n\nlogger = logging.getLogger(__name__)\n\nclass ERP(SystemTestsCompareTwo):\n\n def __init__(self, case):\n \"\"\"\n initialize a test object\n \"\"\"\n SystemTestsCompareTwo.__init__(self, case,\n separate_builds = True,\n run_two_suffix = 'rest',\n run_one_description = 'initial',\n run_two_description = 'restart')\n\n def _common_setup(self):\n self._case.set_value(\"BUILD_THREADED\",True)\n\n def _case_one_setup(self):\n stop_n = self._case.get_value(\"STOP_N\")\n\n expect(stop_n > 2, \"ERROR: stop_n value {:d} too short\".format(stop_n))\n\n def _case_two_setup(self):\n # halve the number of tasks and threads\n for comp in self._case.get_values(\"COMP_CLASSES\"):\n ntasks = self._case1.get_value(\"NTASKS_{}\".format(comp))\n nthreads = self._case1.get_value(\"NTHRDS_{}\".format(comp))\n rootpe = self._case1.get_value(\"ROOTPE_{}\".format(comp))\n if ( nthreads > 1 ):\n self._case.set_value(\"NTHRDS_{}\".format(comp), nthreads/2)\n if ( ntasks > 1 ):\n self._case.set_value(\"NTASKS_{}\".format(comp), ntasks/2)\n self._case.set_value(\"ROOTPE_{}\".format(comp), rootpe/2)\n\n stop_n = self._case1.get_value(\"STOP_N\")\n rest_n = self._case1.get_value(\"REST_N\")\n stop_new = stop_n - rest_n\n expect(stop_new > 0, \"ERROR: stop_n value {:d} too short {:d} {:d}\".format(stop_new,stop_n,rest_n))\n self._case.set_value(\"STOP_N\", stop_new)\n self._case.set_value(\"HIST_N\", stop_n)\n self._case.set_value(\"CONTINUE_RUN\", True)\n self._case.set_value(\"REST_OPTION\",\"never\")\n\n # Note, some components, like CESM-CICE, have\n # decomposition information in env_build.xml that\n # needs to be regenerated for the above new tasks and thread counts\n case_setup(self._case, test_mode=True, reset=True)\n\n def _case_one_custom_postrun_action(self):\n self.copy_case1_restarts_to_case2()\n", "path": "scripts/lib/CIME/SystemTests/erp.py"}]} | 1,528 | 675 |
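The indentation problem described in this row's issue, reduced to a runnable skeleton: the comparison block has to live inside the second-run branch so a multi-submit test does not try to compare output after phase 1. The method names come from the issue text; the surrounding class and control flow are a simplified assumption, not the real CIME implementation.

```python
class CompareTwoSkeleton:
    """Much-simplified stand-in for SystemTestsCompareTwo; only the control flow matters."""

    def _activate_case1(self):
        print("activate case1")

    def _link_to_case2_output(self):
        print("link case2 output into case1")

    def _component_compare_test(self, suffix_one, suffix_two, success_change=False):
        print(f"compare '{suffix_one}' vs '{suffix_two}' (success_change={success_change})")

    def run_phase(self, first_phase: bool) -> None:
        if first_phase:
            print("first run: produce output only, nothing to compare yet")
            return
        print("second run: produce output, then compare")
        # The comparison happens only here, i.e. indented under the second-run branch.
        self._activate_case1()
        self._link_to_case2_output()
        self._component_compare_test("base", "rest", success_change=False)


CompareTwoSkeleton().run_phase(first_phase=True)
CompareTwoSkeleton().run_phase(first_phase=False)
```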
gh_patches_debug_29755 | rasdani/github-patches | git_diff | joke2k__faker-1036 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Edit on Github" link broken in ReadTheDocs
http://fake-factory.readthedocs.org/en/latest/locales.html
Clicking "Edit on Github" results in a 404 error.
EDIT:
http://fake-factory.readthedocs.org/en/latest/ has a github link to `https://github.com/joke2k/faker/blob/docs/docs/index.rst` when the correct link is
`https://github.com/joke2k/faker/blob/master/docs/index.rst`
(Note the doubled up `docs/docs` instead of `master/docs`)
</issue>
<code>
[start of faker/build_docs.py]
1 # coding=utf-8
2
3 from __future__ import print_function, unicode_literals
4
5 import os
6 import pprint
7 import sys
8
9 import six
10
11 DOCS_ROOT = os.path.abspath(os.path.join('..', 'docs'))
12
13
14 def write(fh, s):
15 return fh.write(s.encode('utf-8'))
16
17
18 def write_provider(fh, doc, provider, formatters, excludes=None):
19
20 if excludes is None:
21 excludes = []
22
23 write(fh, '\n')
24 title = "``{0}``".format(doc.get_provider_name(provider))
25 write(fh, '%s\n' % title)
26 write(fh, "-" * len(title))
27 write(fh, '\n\n::\n')
28
29 for signature, example in formatters.items():
30 if signature in excludes:
31 continue
32 try:
33 # `pprint` can't format sets of heterogenous types.
34 if not isinstance(example, set):
35 example = pprint.pformat(example, indent=4)
36 lines = six.text_type(example).expandtabs().splitlines()
37 except UnicodeEncodeError:
38 msg = 'error on "{0}" with value "{1}"'.format(signature, example)
39 raise Exception(msg)
40 write(fh, '\n')
41 write(fh, "\t{fake}\n{example}\n".format(
42 fake=signature,
43 example='\n'.join(['\t# ' + line for line in lines]),
44 ))
45
46
47 def write_docs(*args, **kwargs):
48 from faker import Faker, documentor
49 from faker.config import DEFAULT_LOCALE, AVAILABLE_LOCALES
50
51 fake = Faker(locale=DEFAULT_LOCALE)
52
53 from faker.providers import BaseProvider
54 base_provider_formatters = [f for f in dir(BaseProvider)]
55
56 doc = documentor.Documentor(fake)
57
58 formatters = doc.get_formatters(with_args=True, with_defaults=True)
59
60 for provider, fakers in formatters:
61 provider_name = doc.get_provider_name(provider)
62 fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)
63 with open(fname, 'wb') as fh:
64 write_provider(fh, doc, provider, fakers)
65
66 with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:
67 write(fh, 'Providers\n')
68 write(fh, '=========\n')
69 write(fh, '.. toctree::\n')
70 write(fh, ' :maxdepth: 2\n\n')
71 [write(fh, ' providers/%s\n' % doc.get_provider_name(provider))
72 for provider, fakers in formatters]
73
74 AVAILABLE_LOCALES = sorted(AVAILABLE_LOCALES)
75 for lang in AVAILABLE_LOCALES:
76 fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)
77 with open(fname, 'wb') as fh:
78 write(fh, '\n')
79 title = 'Language {0}\n'.format(lang)
80 write(fh, title)
81 write(fh, '=' * len(title))
82 write(fh, '\n')
83 fake = Faker(locale=lang)
84 d = documentor.Documentor(fake)
85
86 for p, fs in d.get_formatters(with_args=True, with_defaults=True,
87 locale=lang,
88 excludes=base_provider_formatters):
89 write_provider(fh, d, p, fs)
90
91 with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:
92 write(fh, 'Locales\n')
93 write(fh, '=======\n')
94 write(fh, '.. toctree::\n')
95 write(fh, ' :maxdepth: 2\n\n')
96 [write(fh, ' locales/%s\n' % lang) for lang in AVAILABLE_LOCALES]
97
98
99 # wrappers for sphinx
100 def _main(app, *args, **kwargs):
101 return write_docs(*args, **kwargs)
102
103
104 def setup(app):
105 app.connect(str('builder-inited'), _main)
106
107
108 if __name__ == "__main__":
109 write_docs(*sys.argv[1:])
110
[end of faker/build_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/build_docs.py b/faker/build_docs.py
--- a/faker/build_docs.py
+++ b/faker/build_docs.py
@@ -61,9 +61,11 @@
provider_name = doc.get_provider_name(provider)
fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)
with open(fname, 'wb') as fh:
+ write(fh, ':github_url: hide\n\n')
write_provider(fh, doc, provider, fakers)
with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:
+ write(fh, ':github_url: hide\n\n')
write(fh, 'Providers\n')
write(fh, '=========\n')
write(fh, '.. toctree::\n')
@@ -75,7 +77,7 @@
for lang in AVAILABLE_LOCALES:
fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)
with open(fname, 'wb') as fh:
- write(fh, '\n')
+ write(fh, ':github_url: hide\n\n')
title = 'Language {0}\n'.format(lang)
write(fh, title)
write(fh, '=' * len(title))
@@ -89,6 +91,7 @@
write_provider(fh, d, p, fs)
with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:
+ write(fh, ':github_url: hide\n\n')
write(fh, 'Locales\n')
write(fh, '=======\n')
write(fh, '.. toctree::\n')
| {"golden_diff": "diff --git a/faker/build_docs.py b/faker/build_docs.py\n--- a/faker/build_docs.py\n+++ b/faker/build_docs.py\n@@ -61,9 +61,11 @@\n provider_name = doc.get_provider_name(provider)\n fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)\n with open(fname, 'wb') as fh:\n+ write(fh, ':github_url: hide\\n\\n')\n write_provider(fh, doc, provider, fakers)\n \n with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:\n+ write(fh, ':github_url: hide\\n\\n')\n write(fh, 'Providers\\n')\n write(fh, '=========\\n')\n write(fh, '.. toctree::\\n')\n@@ -75,7 +77,7 @@\n for lang in AVAILABLE_LOCALES:\n fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)\n with open(fname, 'wb') as fh:\n- write(fh, '\\n')\n+ write(fh, ':github_url: hide\\n\\n')\n title = 'Language {0}\\n'.format(lang)\n write(fh, title)\n write(fh, '=' * len(title))\n@@ -89,6 +91,7 @@\n write_provider(fh, d, p, fs)\n \n with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:\n+ write(fh, ':github_url: hide\\n\\n')\n write(fh, 'Locales\\n')\n write(fh, '=======\\n')\n write(fh, '.. toctree::\\n')\n", "issue": "\"Edit on Github\" link broken in ReadTheDocs\nhttp://fake-factory.readthedocs.org/en/latest/locales.html\n\nClicking \"Edit on Github\" results in a 404 error.\n\nEDIT: \nhttp://fake-factory.readthedocs.org/en/latest/ has a github link to `https://github.com/joke2k/faker/blob/docs/docs/index.rst` when the correct link is\n`https://github.com/joke2k/faker/blob/master/docs/index.rst`\n(Note the doubled up `docs/docs` instead of `master/docs`)\n\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import print_function, unicode_literals\n\nimport os\nimport pprint\nimport sys\n\nimport six\n\nDOCS_ROOT = os.path.abspath(os.path.join('..', 'docs'))\n\n\ndef write(fh, s):\n return fh.write(s.encode('utf-8'))\n\n\ndef write_provider(fh, doc, provider, formatters, excludes=None):\n\n if excludes is None:\n excludes = []\n\n write(fh, '\\n')\n title = \"``{0}``\".format(doc.get_provider_name(provider))\n write(fh, '%s\\n' % title)\n write(fh, \"-\" * len(title))\n write(fh, '\\n\\n::\\n')\n\n for signature, example in formatters.items():\n if signature in excludes:\n continue\n try:\n # `pprint` can't format sets of heterogenous types.\n if not isinstance(example, set):\n example = pprint.pformat(example, indent=4)\n lines = six.text_type(example).expandtabs().splitlines()\n except UnicodeEncodeError:\n msg = 'error on \"{0}\" with value \"{1}\"'.format(signature, example)\n raise Exception(msg)\n write(fh, '\\n')\n write(fh, \"\\t{fake}\\n{example}\\n\".format(\n fake=signature,\n example='\\n'.join(['\\t# ' + line for line in lines]),\n ))\n\n\ndef write_docs(*args, **kwargs):\n from faker import Faker, documentor\n from faker.config import DEFAULT_LOCALE, AVAILABLE_LOCALES\n\n fake = Faker(locale=DEFAULT_LOCALE)\n\n from faker.providers import BaseProvider\n base_provider_formatters = [f for f in dir(BaseProvider)]\n\n doc = documentor.Documentor(fake)\n\n formatters = doc.get_formatters(with_args=True, with_defaults=True)\n\n for provider, fakers in formatters:\n provider_name = doc.get_provider_name(provider)\n fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)\n with open(fname, 'wb') as fh:\n write_provider(fh, doc, provider, fakers)\n\n with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:\n write(fh, 'Providers\\n')\n write(fh, '=========\\n')\n write(fh, '.. 
toctree::\\n')\n write(fh, ' :maxdepth: 2\\n\\n')\n [write(fh, ' providers/%s\\n' % doc.get_provider_name(provider))\n for provider, fakers in formatters]\n\n AVAILABLE_LOCALES = sorted(AVAILABLE_LOCALES)\n for lang in AVAILABLE_LOCALES:\n fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)\n with open(fname, 'wb') as fh:\n write(fh, '\\n')\n title = 'Language {0}\\n'.format(lang)\n write(fh, title)\n write(fh, '=' * len(title))\n write(fh, '\\n')\n fake = Faker(locale=lang)\n d = documentor.Documentor(fake)\n\n for p, fs in d.get_formatters(with_args=True, with_defaults=True,\n locale=lang,\n excludes=base_provider_formatters):\n write_provider(fh, d, p, fs)\n\n with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:\n write(fh, 'Locales\\n')\n write(fh, '=======\\n')\n write(fh, '.. toctree::\\n')\n write(fh, ' :maxdepth: 2\\n\\n')\n [write(fh, ' locales/%s\\n' % lang) for lang in AVAILABLE_LOCALES]\n\n\n# wrappers for sphinx\ndef _main(app, *args, **kwargs):\n return write_docs(*args, **kwargs)\n\n\ndef setup(app):\n app.connect(str('builder-inited'), _main)\n\n\nif __name__ == \"__main__\":\n write_docs(*sys.argv[1:])\n", "path": "faker/build_docs.py"}]} | 1,755 | 372 |
gh_patches_debug_8945 | rasdani/github-patches | git_diff | open-mmlab__mmaction2-624 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ValueError: FastRCNN is not registered in LOCALIZERS, RECOGNIZERS or DETECTORS
Hello,
when I train on the AVA dataset I get this error:
ValueError: FastRCNN is not registered in LOCALIZERS, RECOGNIZERS or DETECTORS
How can I solve it?
Thank you very much.
</issue>
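To make the failure path concrete before the listing: the error comes from a builder that only consults registries populated at import time, so when the optional `mmdet` dependency is absent the `FastRCNN` detector is never registered. The toy snippet below is a self-contained illustration of that lookup pattern and of one way to surface a clearer message; the registry contents and wording are assumptions for the example, not mmaction2 code.

```python
# Toy illustration (not mmaction2 code) of registry lookup with an optional
# dependency: DETECTORS stays empty when mmdet is missing, so FastRCNN falls
# through unless the builder checks for it explicitly.
RECOGNIZERS = {"Recognizer2D", "Recognizer3D"}
LOCALIZERS = {"BMN"}
DETECTORS = set()  # would be filled by mmdet's registry when it is installed
MODELS_REQUIRING_MMDET = {"FastRCNN"}  # assumed list for this example


def build_model(cfg):
    obj_type = cfg["type"]
    if obj_type in LOCALIZERS or obj_type in RECOGNIZERS or obj_type in DETECTORS:
        return f"built {obj_type}"
    if obj_type in MODELS_REQUIRING_MMDET:
        raise ImportError(f"{obj_type} requires mmdet; please install it first.")
    raise ValueError(f"{obj_type} is not registered")


if __name__ == "__main__":
    try:
        build_model({"type": "FastRCNN"})
    except ImportError as exc:
        print(exc)
```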
<code>
[start of mmaction/models/builder.py]
1 import torch.nn as nn
2 from mmcv.utils import Registry, build_from_cfg
3
4 from mmaction.utils import import_module_error_func
5 from .registry import BACKBONES, HEADS, LOCALIZERS, LOSSES, NECKS, RECOGNIZERS
6
7 try:
8 from mmdet.models.builder import DETECTORS, build_detector
9 except (ImportError, ModuleNotFoundError):
10 # Define an empty registry and building func, so that can import
11 DETECTORS = Registry('detector')
12
13 @import_module_error_func('mmdet')
14 def build_detector(cfg, train_cfg, test_cfg):
15 pass
16
17
18 def build(cfg, registry, default_args=None):
19 """Build a module.
20
21 Args:
22 cfg (dict, list[dict]): The config of modules, it is either a dict
23 or a list of configs.
24 registry (:obj:`Registry`): A registry the module belongs to.
25 default_args (dict, optional): Default arguments to build the module.
26 Defaults to None.
27
28 Returns:
29 nn.Module: A built nn module.
30 """
31
32 if isinstance(cfg, list):
33 modules = [
34 build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg
35 ]
36 return nn.Sequential(*modules)
37
38 return build_from_cfg(cfg, registry, default_args)
39
40
41 def build_backbone(cfg):
42 """Build backbone."""
43 return build(cfg, BACKBONES)
44
45
46 def build_head(cfg):
47 """Build head."""
48 return build(cfg, HEADS)
49
50
51 def build_recognizer(cfg, train_cfg=None, test_cfg=None):
52 """Build recognizer."""
53 return build(cfg, RECOGNIZERS,
54 dict(train_cfg=train_cfg, test_cfg=test_cfg))
55
56
57 def build_loss(cfg):
58 """Build loss."""
59 return build(cfg, LOSSES)
60
61
62 def build_localizer(cfg):
63 """Build localizer."""
64 return build(cfg, LOCALIZERS)
65
66
67 def build_model(cfg, train_cfg=None, test_cfg=None):
68 """Build model."""
69 args = cfg.copy()
70 obj_type = args.pop('type')
71 if obj_type in LOCALIZERS:
72 return build_localizer(cfg)
73 if obj_type in RECOGNIZERS:
74 return build_recognizer(cfg, train_cfg, test_cfg)
75 if obj_type in DETECTORS:
76 return build_detector(cfg, train_cfg, test_cfg)
77 raise ValueError(f'{obj_type} is not registered in '
78 'LOCALIZERS, RECOGNIZERS or DETECTORS')
79
80
81 def build_neck(cfg):
82 """Build neck."""
83 return build(cfg, NECKS)
84
[end of mmaction/models/builder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmaction/models/builder.py b/mmaction/models/builder.py
--- a/mmaction/models/builder.py
+++ b/mmaction/models/builder.py
@@ -74,6 +74,10 @@
return build_recognizer(cfg, train_cfg, test_cfg)
if obj_type in DETECTORS:
return build_detector(cfg, train_cfg, test_cfg)
+ model_in_mmdet = ['FastRCNN']
+ if obj_type in model_in_mmdet:
+ raise ImportError(
+ 'Please install mmdet for spatial temporal detection tasks.')
raise ValueError(f'{obj_type} is not registered in '
'LOCALIZERS, RECOGNIZERS or DETECTORS')
| {"golden_diff": "diff --git a/mmaction/models/builder.py b/mmaction/models/builder.py\n--- a/mmaction/models/builder.py\n+++ b/mmaction/models/builder.py\n@@ -74,6 +74,10 @@\n return build_recognizer(cfg, train_cfg, test_cfg)\n if obj_type in DETECTORS:\n return build_detector(cfg, train_cfg, test_cfg)\n+ model_in_mmdet = ['FastRCNN']\n+ if obj_type in model_in_mmdet:\n+ raise ImportError(\n+ 'Please install mmdet for spatial temporal detection tasks.')\n raise ValueError(f'{obj_type} is not registered in '\n 'LOCALIZERS, RECOGNIZERS or DETECTORS')\n", "issue": "ValueError: FastRCNN is not registered in LOCALIZERS, RECOGNIZERS or DETECTORS\nHello\r\nwhen I train AVA dataset find a error\r\nValueError: FastRCNN is not registered in LOCALIZERS, RECOGNIZERS or DETECTORS\r\nhow to solve it \r\nthink you very much \n", "before_files": [{"content": "import torch.nn as nn\nfrom mmcv.utils import Registry, build_from_cfg\n\nfrom mmaction.utils import import_module_error_func\nfrom .registry import BACKBONES, HEADS, LOCALIZERS, LOSSES, NECKS, RECOGNIZERS\n\ntry:\n from mmdet.models.builder import DETECTORS, build_detector\nexcept (ImportError, ModuleNotFoundError):\n # Define an empty registry and building func, so that can import\n DETECTORS = Registry('detector')\n\n @import_module_error_func('mmdet')\n def build_detector(cfg, train_cfg, test_cfg):\n pass\n\n\ndef build(cfg, registry, default_args=None):\n \"\"\"Build a module.\n\n Args:\n cfg (dict, list[dict]): The config of modules, it is either a dict\n or a list of configs.\n registry (:obj:`Registry`): A registry the module belongs to.\n default_args (dict, optional): Default arguments to build the module.\n Defaults to None.\n\n Returns:\n nn.Module: A built nn module.\n \"\"\"\n\n if isinstance(cfg, list):\n modules = [\n build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg\n ]\n return nn.Sequential(*modules)\n\n return build_from_cfg(cfg, registry, default_args)\n\n\ndef build_backbone(cfg):\n \"\"\"Build backbone.\"\"\"\n return build(cfg, BACKBONES)\n\n\ndef build_head(cfg):\n \"\"\"Build head.\"\"\"\n return build(cfg, HEADS)\n\n\ndef build_recognizer(cfg, train_cfg=None, test_cfg=None):\n \"\"\"Build recognizer.\"\"\"\n return build(cfg, RECOGNIZERS,\n dict(train_cfg=train_cfg, test_cfg=test_cfg))\n\n\ndef build_loss(cfg):\n \"\"\"Build loss.\"\"\"\n return build(cfg, LOSSES)\n\n\ndef build_localizer(cfg):\n \"\"\"Build localizer.\"\"\"\n return build(cfg, LOCALIZERS)\n\n\ndef build_model(cfg, train_cfg=None, test_cfg=None):\n \"\"\"Build model.\"\"\"\n args = cfg.copy()\n obj_type = args.pop('type')\n if obj_type in LOCALIZERS:\n return build_localizer(cfg)\n if obj_type in RECOGNIZERS:\n return build_recognizer(cfg, train_cfg, test_cfg)\n if obj_type in DETECTORS:\n return build_detector(cfg, train_cfg, test_cfg)\n raise ValueError(f'{obj_type} is not registered in '\n 'LOCALIZERS, RECOGNIZERS or DETECTORS')\n\n\ndef build_neck(cfg):\n \"\"\"Build neck.\"\"\"\n return build(cfg, NECKS)\n", "path": "mmaction/models/builder.py"}]} | 1,328 | 158 |
gh_patches_debug_415 | rasdani/github-patches | git_diff | freedomofpress__securedrop-6492 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Initial messages containing non-ascii characters fail if codename filtering is enabled.
## Description
Codename filtering was introduced in 2.3.0, allowing admins to block initial submissions containing only the user's codename, as codenames should not be shared with journalists. The filter uses the `compare_digest()` function to ensure constant-time comparison, but that function raises a `TypeError` if either string being compared contains non-ASCII characters.
## Steps to Reproduce
- start up `make dev` on 2.4.0
- visit the JI and enable codename filtering under Admin > Instance Config
- visit the SI, create a new source, and submit an initial message containing non-ASCII characters, e.g. `Hallo! ö, ü, ä, or ß`
## Expected Behavior
- Message is submitted
## Actual Behavior
- 500 error, and (in dev) stack trace due to TypeError
## Comments
Suggestions to fix, any other relevant information.
</issue>
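A standalone reproduction of the failure and the usual remedy — comparing UTF-8 encoded bytes rather than `str` — is sketched below. The codename value is made up and the snippet is not SecureDrop code; it only demonstrates the `compare_digest()` behaviour described above.

```python
# Minimal reproduction of the TypeError plus the byte-wise comparison fix.
from hmac import compare_digest

message = "Hallo! ö, ü, ä, or ß"
codename = "hazy idealist cornbread"  # made-up codename for the example

try:
    compare_digest(message, codename)  # str arguments must be ASCII-only
except TypeError as exc:
    print("str comparison failed:", exc)

# Encoding both sides keeps the constant-time property and accepts any Unicode.
print(compare_digest(message.encode("utf-8"), codename.encode("utf-8")))
```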
<code>
[start of securedrop/source_app/utils.py]
1 import json
2 import re
3 import subprocess
4 import typing
5 from hmac import compare_digest
6
7 import werkzeug
8 from flask import current_app, flash, redirect, render_template, url_for
9 from flask.sessions import SessionMixin
10 from flask_babel import gettext
11 from markupsafe import Markup, escape
12 from source_user import SourceUser
13 from store import Storage
14
15 if typing.TYPE_CHECKING:
16 from typing import Optional
17
18
19 def codename_detected(message: str, codename: str) -> bool:
20 """
21 Check for codenames in incoming messages. including case where user copy/pasted
22 from /generate or the codename widget on the same page
23 """
24 message = message.strip()
25
26 return compare_digest(message.strip(), codename)
27
28
29 def flash_msg(
30 category: str,
31 declarative: "Optional[str]",
32 *msg_contents: "str",
33 ) -> None:
34 """
35 Render flash message with a (currently) optional declarative heading.
36 """
37 contents = Markup("<br>".join([escape(part) for part in msg_contents]))
38
39 msg = render_template(
40 "flash_message.html",
41 declarative=declarative,
42 msg_contents=contents,
43 )
44 flash(Markup(msg), category)
45
46
47 def clear_session_and_redirect_to_logged_out_page(flask_session: SessionMixin) -> werkzeug.Response:
48 msg = render_template(
49 "flash_message.html",
50 declarative=gettext("Important"),
51 msg_contents=Markup(
52 gettext(
53 'You were logged out due to inactivity. Click the <img src={icon} alt="" '
54 'width="16" height="16"> <b>New Identity</b> button in your Tor Browser\'s '
55 "toolbar before moving on. This will clear your Tor Browser activity data on "
56 "this device."
57 ).format(icon=url_for("static", filename="i/torbroom.png"))
58 ),
59 )
60
61 # Clear the session after we render the message so it's localized
62 flask_session.clear()
63
64 flash(Markup(msg), "error")
65 return redirect(url_for("main.index"))
66
67
68 def normalize_timestamps(logged_in_source: SourceUser) -> None:
69 """
70 Update the timestamps on all of the source's submissions. This
71 minimizes metadata that could be useful to investigators. See
72 #301.
73 """
74 source_in_db = logged_in_source.get_db_record()
75 sub_paths = [
76 Storage.get_default().path(logged_in_source.filesystem_id, submission.filename)
77 for submission in source_in_db.submissions
78 ]
79 if len(sub_paths) > 1:
80 args = ["touch", "--no-create"]
81 args.extend(sub_paths)
82 rc = subprocess.call(args)
83 if rc != 0:
84 current_app.logger.warning(
85 "Couldn't normalize submission " "timestamps (touch exited with %d)" % rc
86 )
87
88
89 def check_url_file(path: str, regexp: str) -> "Optional[str]":
90 """
91 Check that a file exists at the path given and contains a single line
92 matching the regexp. Used for checking the source interface address
93 files in /var/lib/securedrop (as the Apache user can't read Tor config)
94 """
95 try:
96 f = open(path, "r")
97 contents = f.readline().strip()
98 f.close()
99 if re.match(regexp, contents):
100 return contents
101 else:
102 return None
103 except IOError:
104 return None
105
106
107 def get_sourcev3_url() -> "Optional[str]":
108 return check_url_file("/var/lib/securedrop/source_v3_url", r"^[a-z0-9]{56}\.onion$")
109
110
111 def fit_codenames_into_cookie(codenames: dict) -> dict:
112 """
113 If `codenames` will approach `werkzeug.Response.max_cookie_size` once
114 serialized, incrementally pop off the oldest codename until the remaining
115 (newer) ones will fit.
116 """
117
118 serialized = json.dumps(codenames).encode()
119 if len(codenames) > 1 and len(serialized) > 4000: # werkzeug.Response.max_cookie_size = 4093
120 if current_app:
121 current_app.logger.warn(
122 f"Popping oldest of {len(codenames)} "
123 f"codenames ({len(serialized)} bytes) to "
124 f"fit within maximum cookie size"
125 )
126 del codenames[list(codenames)[0]] # FIFO
127
128 return fit_codenames_into_cookie(codenames)
129
130 return codenames
131
[end of securedrop/source_app/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/source_app/utils.py b/securedrop/source_app/utils.py
--- a/securedrop/source_app/utils.py
+++ b/securedrop/source_app/utils.py
@@ -23,7 +23,7 @@
"""
message = message.strip()
- return compare_digest(message.strip(), codename)
+ return compare_digest(message.strip().encode("utf-8"), codename.encode("utf-8"))
def flash_msg(
| {"golden_diff": "diff --git a/securedrop/source_app/utils.py b/securedrop/source_app/utils.py\n--- a/securedrop/source_app/utils.py\n+++ b/securedrop/source_app/utils.py\n@@ -23,7 +23,7 @@\n \"\"\"\n message = message.strip()\n \n- return compare_digest(message.strip(), codename)\n+ return compare_digest(message.strip().encode(\"utf-8\"), codename.encode(\"utf-8\"))\n \n \n def flash_msg(\n", "issue": "Initial messages containing non-ascii characters fail if codename filtering is enabled.\n## Description\r\n\r\nCodename filtering was introduced in 2.3.0, allowing admins to block initial submissions containing only the user's codename, as they should not be shared with journalists. The filter uses the `compare_digest()` function to ensure constant-time comparison, but this fn will throw a `TypeError` if any of the strings being compared contain Unicode.\r\n\r\n## Steps to Reproduce\r\n\r\n- start up `make dev` on 2.4.0\r\n- visit the JI and enable codename filtering under Admin > Instance Config\r\n- visit the SI, create a new source, and submit an initial message containing unicode, ie `Hallo! \u00f6, \u00fc, \u00e4, or \u00df`\r\n\r\n## Expected Behavior\r\n- Message is submitted\r\n\r\n## Actual Behavior\r\n- 500 error, and (in dev) stack trace due to TypeError\r\n\r\n## Comments\r\n\r\nSuggestions to fix, any other relevant information.\r\n\n", "before_files": [{"content": "import json\nimport re\nimport subprocess\nimport typing\nfrom hmac import compare_digest\n\nimport werkzeug\nfrom flask import current_app, flash, redirect, render_template, url_for\nfrom flask.sessions import SessionMixin\nfrom flask_babel import gettext\nfrom markupsafe import Markup, escape\nfrom source_user import SourceUser\nfrom store import Storage\n\nif typing.TYPE_CHECKING:\n from typing import Optional\n\n\ndef codename_detected(message: str, codename: str) -> bool:\n \"\"\"\n Check for codenames in incoming messages. including case where user copy/pasted\n from /generate or the codename widget on the same page\n \"\"\"\n message = message.strip()\n\n return compare_digest(message.strip(), codename)\n\n\ndef flash_msg(\n category: str,\n declarative: \"Optional[str]\",\n *msg_contents: \"str\",\n) -> None:\n \"\"\"\n Render flash message with a (currently) optional declarative heading.\n \"\"\"\n contents = Markup(\"<br>\".join([escape(part) for part in msg_contents]))\n\n msg = render_template(\n \"flash_message.html\",\n declarative=declarative,\n msg_contents=contents,\n )\n flash(Markup(msg), category)\n\n\ndef clear_session_and_redirect_to_logged_out_page(flask_session: SessionMixin) -> werkzeug.Response:\n msg = render_template(\n \"flash_message.html\",\n declarative=gettext(\"Important\"),\n msg_contents=Markup(\n gettext(\n 'You were logged out due to inactivity. Click the <img src={icon} alt=\"\" '\n 'width=\"16\" height=\"16\"> <b>New Identity</b> button in your Tor Browser\\'s '\n \"toolbar before moving on. This will clear your Tor Browser activity data on \"\n \"this device.\"\n ).format(icon=url_for(\"static\", filename=\"i/torbroom.png\"))\n ),\n )\n\n # Clear the session after we render the message so it's localized\n flask_session.clear()\n\n flash(Markup(msg), \"error\")\n return redirect(url_for(\"main.index\"))\n\n\ndef normalize_timestamps(logged_in_source: SourceUser) -> None:\n \"\"\"\n Update the timestamps on all of the source's submissions. This\n minimizes metadata that could be useful to investigators. 
See\n #301.\n \"\"\"\n source_in_db = logged_in_source.get_db_record()\n sub_paths = [\n Storage.get_default().path(logged_in_source.filesystem_id, submission.filename)\n for submission in source_in_db.submissions\n ]\n if len(sub_paths) > 1:\n args = [\"touch\", \"--no-create\"]\n args.extend(sub_paths)\n rc = subprocess.call(args)\n if rc != 0:\n current_app.logger.warning(\n \"Couldn't normalize submission \" \"timestamps (touch exited with %d)\" % rc\n )\n\n\ndef check_url_file(path: str, regexp: str) -> \"Optional[str]\":\n \"\"\"\n Check that a file exists at the path given and contains a single line\n matching the regexp. Used for checking the source interface address\n files in /var/lib/securedrop (as the Apache user can't read Tor config)\n \"\"\"\n try:\n f = open(path, \"r\")\n contents = f.readline().strip()\n f.close()\n if re.match(regexp, contents):\n return contents\n else:\n return None\n except IOError:\n return None\n\n\ndef get_sourcev3_url() -> \"Optional[str]\":\n return check_url_file(\"/var/lib/securedrop/source_v3_url\", r\"^[a-z0-9]{56}\\.onion$\")\n\n\ndef fit_codenames_into_cookie(codenames: dict) -> dict:\n \"\"\"\n If `codenames` will approach `werkzeug.Response.max_cookie_size` once\n serialized, incrementally pop off the oldest codename until the remaining\n (newer) ones will fit.\n \"\"\"\n\n serialized = json.dumps(codenames).encode()\n if len(codenames) > 1 and len(serialized) > 4000: # werkzeug.Response.max_cookie_size = 4093\n if current_app:\n current_app.logger.warn(\n f\"Popping oldest of {len(codenames)} \"\n f\"codenames ({len(serialized)} bytes) to \"\n f\"fit within maximum cookie size\"\n )\n del codenames[list(codenames)[0]] # FIFO\n\n return fit_codenames_into_cookie(codenames)\n\n return codenames\n", "path": "securedrop/source_app/utils.py"}]} | 2,007 | 100 |
gh_patches_debug_31532 | rasdani/github-patches | git_diff | pyg-team__pytorch_geometric-3889 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improving documentation for Set2Set layer
### 📚 Describe the documentation issue
I am new to `pytorch_geometric` ecosystem and I was exploring it. At the first glance to the `Set2Set` layer in the [docs](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.glob.Set2Set), it is not clear what the inputs `x` and `batch` are to the forward pass.
If I am not wrong, `x` represents the node features of the graph and `batch` is a mapping from each node to its graph identifier.
### Suggest a potential alternative/fix
I was wondering whether it will be good to include it to the docs or maybe also add typing.
Potential fix in `nn.glob.set2set.py`:
```
def forward(self, x: torch.Tensor, batch: torch.Tensor):
r"""
Args:
x: The input node features.
batch: A one dimension tensor representing a mapping between nodes and their graphs
"""
```
</issue>
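For readers unfamiliar with the convention, a small usage example of the two inputs follows: `x` holds per-node features and `batch` assigns each node to a graph in the mini-batch. It assumes `torch` and `torch_geometric` are installed and is only illustrative; it is not part of the proposed documentation fix.

```python
# Illustrative usage only: pool two tiny graphs with Set2Set.
import torch
from torch_geometric.nn import Set2Set

x = torch.randn(5, 16)                 # 5 nodes in total, 16 features each
batch = torch.tensor([0, 0, 0, 1, 1])  # nodes 0-2 belong to graph 0, nodes 3-4 to graph 1

pool = Set2Set(in_channels=16, processing_steps=3)
out = pool(x, batch)
print(out.shape)  # (2 graphs, 2 * 16 features)
```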
<code>
[start of torch_geometric/nn/glob/set2set.py]
1 import torch
2 from torch_scatter import scatter_add
3 from torch_geometric.utils import softmax
4
5
6 class Set2Set(torch.nn.Module):
7 r"""The global pooling operator based on iterative content-based attention
8 from the `"Order Matters: Sequence to sequence for sets"
9 <https://arxiv.org/abs/1511.06391>`_ paper
10
11 .. math::
12 \mathbf{q}_t &= \mathrm{LSTM}(\mathbf{q}^{*}_{t-1})
13
14 \alpha_{i,t} &= \mathrm{softmax}(\mathbf{x}_i \cdot \mathbf{q}_t)
15
16 \mathbf{r}_t &= \sum_{i=1}^N \alpha_{i,t} \mathbf{x}_i
17
18 \mathbf{q}^{*}_t &= \mathbf{q}_t \, \Vert \, \mathbf{r}_t,
19
20 where :math:`\mathbf{q}^{*}_T` defines the output of the layer with twice
21 the dimensionality as the input.
22
23 Args:
24 in_channels (int): Size of each input sample.
25 processing_steps (int): Number of iterations :math:`T`.
26 num_layers (int, optional): Number of recurrent layers, *.e.g*, setting
27 :obj:`num_layers=2` would mean stacking two LSTMs together to form
28 a stacked LSTM, with the second LSTM taking in outputs of the first
29 LSTM and computing the final results. (default: :obj:`1`)
30 """
31 def __init__(self, in_channels, processing_steps, num_layers=1):
32 super().__init__()
33
34 self.in_channels = in_channels
35 self.out_channels = 2 * in_channels
36 self.processing_steps = processing_steps
37 self.num_layers = num_layers
38
39 self.lstm = torch.nn.LSTM(self.out_channels, self.in_channels,
40 num_layers)
41
42 self.reset_parameters()
43
44 def reset_parameters(self):
45 self.lstm.reset_parameters()
46
47 def forward(self, x, batch):
48 """"""
49 batch_size = batch.max().item() + 1
50
51 h = (x.new_zeros((self.num_layers, batch_size, self.in_channels)),
52 x.new_zeros((self.num_layers, batch_size, self.in_channels)))
53 q_star = x.new_zeros(batch_size, self.out_channels)
54
55 for _ in range(self.processing_steps):
56 q, h = self.lstm(q_star.unsqueeze(0), h)
57 q = q.view(batch_size, self.in_channels)
58 e = (x * q.index_select(0, batch)).sum(dim=-1, keepdim=True)
59 a = softmax(e, batch, num_nodes=batch_size)
60 r = scatter_add(a * x, batch, dim=0, dim_size=batch_size)
61 q_star = torch.cat([q, r], dim=-1)
62
63 return q_star
64
65 def __repr__(self) -> str:
66 return (f'{self.__class__.__name__}({self.in_channels}, '
67 f'{self.out_channels})')
68
[end of torch_geometric/nn/glob/set2set.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torch_geometric/nn/glob/set2set.py b/torch_geometric/nn/glob/set2set.py
--- a/torch_geometric/nn/glob/set2set.py
+++ b/torch_geometric/nn/glob/set2set.py
@@ -1,5 +1,9 @@
+from typing import Optional
+
import torch
+from torch import Tensor
from torch_scatter import scatter_add
+
from torch_geometric.utils import softmax
@@ -27,8 +31,17 @@
:obj:`num_layers=2` would mean stacking two LSTMs together to form
a stacked LSTM, with the second LSTM taking in outputs of the first
LSTM and computing the final results. (default: :obj:`1`)
+
+ Shapes:
+ - **input:**
+ node features :math:`(|\mathcal{V}|, F)`,
+ batch vector :math:`(|\mathcal{V}|)` *(optional)*
+ - **output:**
+ set features :math:`(|\mathcal{G}|, 2 * F)` where
+ :math:`|\mathcal{G}|` denotes the number of graphs in the batch
"""
- def __init__(self, in_channels, processing_steps, num_layers=1):
+ def __init__(self, in_channels: int, processing_steps: int,
+ num_layers: int = 1):
super().__init__()
self.in_channels = in_channels
@@ -44,8 +57,16 @@
def reset_parameters(self):
self.lstm.reset_parameters()
- def forward(self, x, batch):
- """"""
+ def forward(self, x: Tensor, batch: Optional[Tensor] = None) -> Tensor:
+ r"""
+ Args:
+ x (Tensor): The input node features.
+ batch (LongTensor, optional): A vector that maps each node to its
+ respective graph identifier. (default: :obj:`None`)
+ """
+ if batch is None:
+ batch = x.new_zeros(x.size(0), dtype=torch.int64)
+
batch_size = batch.max().item() + 1
h = (x.new_zeros((self.num_layers, batch_size, self.in_channels)),
| {"golden_diff": "diff --git a/torch_geometric/nn/glob/set2set.py b/torch_geometric/nn/glob/set2set.py\n--- a/torch_geometric/nn/glob/set2set.py\n+++ b/torch_geometric/nn/glob/set2set.py\n@@ -1,5 +1,9 @@\n+from typing import Optional\n+\n import torch\n+from torch import Tensor\n from torch_scatter import scatter_add\n+\n from torch_geometric.utils import softmax\n \n \n@@ -27,8 +31,17 @@\n :obj:`num_layers=2` would mean stacking two LSTMs together to form\n a stacked LSTM, with the second LSTM taking in outputs of the first\n LSTM and computing the final results. (default: :obj:`1`)\n+\n+ Shapes:\n+ - **input:**\n+ node features :math:`(|\\mathcal{V}|, F)`,\n+ batch vector :math:`(|\\mathcal{V}|)` *(optional)*\n+ - **output:**\n+ set features :math:`(|\\mathcal{G}|, 2 * F)` where\n+ :math:`|\\mathcal{G}|` denotes the number of graphs in the batch\n \"\"\"\n- def __init__(self, in_channels, processing_steps, num_layers=1):\n+ def __init__(self, in_channels: int, processing_steps: int,\n+ num_layers: int = 1):\n super().__init__()\n \n self.in_channels = in_channels\n@@ -44,8 +57,16 @@\n def reset_parameters(self):\n self.lstm.reset_parameters()\n \n- def forward(self, x, batch):\n- \"\"\"\"\"\"\n+ def forward(self, x: Tensor, batch: Optional[Tensor] = None) -> Tensor:\n+ r\"\"\"\n+ Args:\n+ x (Tensor): The input node features.\n+ batch (LongTensor, optional): A vector that maps each node to its\n+ respective graph identifier. (default: :obj:`None`)\n+ \"\"\"\n+ if batch is None:\n+ batch = x.new_zeros(x.size(0), dtype=torch.int64)\n+\n batch_size = batch.max().item() + 1\n \n h = (x.new_zeros((self.num_layers, batch_size, self.in_channels)),\n", "issue": "Improving documentation for Set2Set layer\n### \ud83d\udcda Describe the documentation issue\n\nI am new to `pytorch_geometric` ecosystem and I was exploring it. At the first glance to the `Set2Set` layer in the [docs](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.glob.Set2Set), it is not clear what the inputs `x` and `batch` are to the forward pass.\r\n\r\nIf I am not wrong, `x` represents the node features of the graph and `batch` represents a mapping between the node features to their graph identifiers.\r\n\n\n### Suggest a potential alternative/fix\n\nI was wondering whether it will be good to include it to the docs or maybe also add typing.\r\nPotential fix in `nn.glob.set2set.py`:\r\n```\r\ndef forward(self, x: torch.Tensor, batch: torch.Tensor):\r\n r\"\"\"\r\n Args:\r\n x: The input node features.\r\n batch: A one dimension tensor representing a mapping between nodes and their graphs\r\n \"\"\"\r\n```\n", "before_files": [{"content": "import torch\nfrom torch_scatter import scatter_add\nfrom torch_geometric.utils import softmax\n\n\nclass Set2Set(torch.nn.Module):\n r\"\"\"The global pooling operator based on iterative content-based attention\n from the `\"Order Matters: Sequence to sequence for sets\"\n <https://arxiv.org/abs/1511.06391>`_ paper\n\n .. 
math::\n \\mathbf{q}_t &= \\mathrm{LSTM}(\\mathbf{q}^{*}_{t-1})\n\n \\alpha_{i,t} &= \\mathrm{softmax}(\\mathbf{x}_i \\cdot \\mathbf{q}_t)\n\n \\mathbf{r}_t &= \\sum_{i=1}^N \\alpha_{i,t} \\mathbf{x}_i\n\n \\mathbf{q}^{*}_t &= \\mathbf{q}_t \\, \\Vert \\, \\mathbf{r}_t,\n\n where :math:`\\mathbf{q}^{*}_T` defines the output of the layer with twice\n the dimensionality as the input.\n\n Args:\n in_channels (int): Size of each input sample.\n processing_steps (int): Number of iterations :math:`T`.\n num_layers (int, optional): Number of recurrent layers, *.e.g*, setting\n :obj:`num_layers=2` would mean stacking two LSTMs together to form\n a stacked LSTM, with the second LSTM taking in outputs of the first\n LSTM and computing the final results. (default: :obj:`1`)\n \"\"\"\n def __init__(self, in_channels, processing_steps, num_layers=1):\n super().__init__()\n\n self.in_channels = in_channels\n self.out_channels = 2 * in_channels\n self.processing_steps = processing_steps\n self.num_layers = num_layers\n\n self.lstm = torch.nn.LSTM(self.out_channels, self.in_channels,\n num_layers)\n\n self.reset_parameters()\n\n def reset_parameters(self):\n self.lstm.reset_parameters()\n\n def forward(self, x, batch):\n \"\"\"\"\"\"\n batch_size = batch.max().item() + 1\n\n h = (x.new_zeros((self.num_layers, batch_size, self.in_channels)),\n x.new_zeros((self.num_layers, batch_size, self.in_channels)))\n q_star = x.new_zeros(batch_size, self.out_channels)\n\n for _ in range(self.processing_steps):\n q, h = self.lstm(q_star.unsqueeze(0), h)\n q = q.view(batch_size, self.in_channels)\n e = (x * q.index_select(0, batch)).sum(dim=-1, keepdim=True)\n a = softmax(e, batch, num_nodes=batch_size)\n r = scatter_add(a * x, batch, dim=0, dim_size=batch_size)\n q_star = torch.cat([q, r], dim=-1)\n\n return q_star\n\n def __repr__(self) -> str:\n return (f'{self.__class__.__name__}({self.in_channels}, '\n f'{self.out_channels})')\n", "path": "torch_geometric/nn/glob/set2set.py"}]} | 1,577 | 508 |
gh_patches_debug_18939 | rasdani/github-patches | git_diff | TileDB-Inc__TileDB-Py-1639 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nightly Azure Wheel Fail on Fri, February 3rd 2023
See run for more details:
https://dev.azure.com/TileDB-Inc/CI/_build/results?buildId=$&view=results
</issue>
<code>
[start of examples/config.py]
1 # config.py
2 #
3 # LICENSE
4 #
5 # The MIT License
6 #
7 # Copyright (c) 2020 TileDB, Inc.
8 #
9 # Permission is hereby granted, free of charge, to any person obtaining a copy
10 # of this software and associated documentation files (the "Software"), to deal
11 # in the Software without restriction, including without limitation the rights
12 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
13 # copies of the Software, and to permit persons to whom the Software is
14 # furnished to do so, subject to the following conditions:
15 #
16 # The above copyright notice and this permission notice shall be included in
17 # all copies or substantial portions of the Software.
18 #
19 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
20 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
21 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
22 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
23 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
24 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
25 # THE SOFTWARE.
26 #
27 # DESCRIPTION
28 #
29 # Please see the TileDB documentation for more information:
30 # https://docs.tiledb.com/main/how-to/configuration
31 #
32 # This program shows how to set/get the TileDB configuration parameters.
33 #
34
35 import tiledb
36
37
38 def set_get_config_ctx_vfs():
39 # Create config object
40 config = tiledb.Config()
41
42 # Set/get config to/from ctx
43 ctx = tiledb.Ctx(config)
44 print(ctx.config())
45
46 # Set/get config to/from VFS
47 vfs = tiledb.VFS(config)
48 print(vfs.config())
49
50
51 def set_get_config():
52 config = tiledb.Config()
53
54 # Set value
55 config["vfs.s3.connect_timeout_ms"] = 5000
56
57 # Get value
58 tile_cache_size = config["sm.tile_cache_size"]
59 print("Tile cache size: %s" % str(tile_cache_size))
60
61
62 def print_default():
63 config = tiledb.Config()
64 print("\nDefault settings:")
65 for p in config.items():
66 print('"%s" : "%s"' % (p[0], p[1]))
67
68
69 def iter_config_with_prefix():
70 config = tiledb.Config()
71 # Print only the S3 settings.
72 print("\nVFS S3 settings:")
73 for p in config.items("vfs.s3."):
74 print('"%s" : "%s"' % (p[0], p[1]))
75
76
77 def save_load_config():
78 # Save to file
79 config = tiledb.Config()
80 config["sm.tile_cache_size"] = 0
81 config.save("tiledb_config.txt")
82
83 # Load from file
84 config_load = tiledb.Config.load("tiledb_config.txt")
85 print(
86 "\nTile cache size after loading from file: %s"
87 % str(config_load["sm.tile_cache_size"])
88 )
89
90
91 set_get_config_ctx_vfs()
92 set_get_config()
93 print_default()
94 iter_config_with_prefix()
95 save_load_config()
96
[end of examples/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/config.py b/examples/config.py
--- a/examples/config.py
+++ b/examples/config.py
@@ -55,8 +55,8 @@
config["vfs.s3.connect_timeout_ms"] = 5000
# Get value
- tile_cache_size = config["sm.tile_cache_size"]
- print("Tile cache size: %s" % str(tile_cache_size))
+ tile_cache_size = config["sm.memory_budget"]
+ print("Memory budget: %s" % str(tile_cache_size))
def print_default():
@@ -77,14 +77,14 @@
def save_load_config():
# Save to file
config = tiledb.Config()
- config["sm.tile_cache_size"] = 0
+ config["sm.memory_budget"] = 1234
config.save("tiledb_config.txt")
# Load from file
config_load = tiledb.Config.load("tiledb_config.txt")
print(
"\nTile cache size after loading from file: %s"
- % str(config_load["sm.tile_cache_size"])
+ % str(config_load["sm.memory_budget"])
)
| {"golden_diff": "diff --git a/examples/config.py b/examples/config.py\n--- a/examples/config.py\n+++ b/examples/config.py\n@@ -55,8 +55,8 @@\n config[\"vfs.s3.connect_timeout_ms\"] = 5000\n \n # Get value\n- tile_cache_size = config[\"sm.tile_cache_size\"]\n- print(\"Tile cache size: %s\" % str(tile_cache_size))\n+ tile_cache_size = config[\"sm.memory_budget\"]\n+ print(\"Memory budget: %s\" % str(tile_cache_size))\n \n \n def print_default():\n@@ -77,14 +77,14 @@\n def save_load_config():\n # Save to file\n config = tiledb.Config()\n- config[\"sm.tile_cache_size\"] = 0\n+ config[\"sm.memory_budget\"] = 1234\n config.save(\"tiledb_config.txt\")\n \n # Load from file\n config_load = tiledb.Config.load(\"tiledb_config.txt\")\n print(\n \"\\nTile cache size after loading from file: %s\"\n- % str(config_load[\"sm.tile_cache_size\"])\n+ % str(config_load[\"sm.memory_budget\"])\n )\n", "issue": "Nightly Azure Wheel Fail on Fri, February 3rd 2023\nSee run for more details:\nhttps://dev.azure.com/TileDB-Inc/CI/_build/results?buildId=$&view=results\n", "before_files": [{"content": "# config.py\n#\n# LICENSE\n#\n# The MIT License\n#\n# Copyright (c) 2020 TileDB, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n# DESCRIPTION\n#\n# Please see the TileDB documentation for more information:\n# https://docs.tiledb.com/main/how-to/configuration\n#\n# This program shows how to set/get the TileDB configuration parameters.\n#\n\nimport tiledb\n\n\ndef set_get_config_ctx_vfs():\n # Create config object\n config = tiledb.Config()\n\n # Set/get config to/from ctx\n ctx = tiledb.Ctx(config)\n print(ctx.config())\n\n # Set/get config to/from VFS\n vfs = tiledb.VFS(config)\n print(vfs.config())\n\n\ndef set_get_config():\n config = tiledb.Config()\n\n # Set value\n config[\"vfs.s3.connect_timeout_ms\"] = 5000\n\n # Get value\n tile_cache_size = config[\"sm.tile_cache_size\"]\n print(\"Tile cache size: %s\" % str(tile_cache_size))\n\n\ndef print_default():\n config = tiledb.Config()\n print(\"\\nDefault settings:\")\n for p in config.items():\n print('\"%s\" : \"%s\"' % (p[0], p[1]))\n\n\ndef iter_config_with_prefix():\n config = tiledb.Config()\n # Print only the S3 settings.\n print(\"\\nVFS S3 settings:\")\n for p in config.items(\"vfs.s3.\"):\n print('\"%s\" : \"%s\"' % (p[0], p[1]))\n\n\ndef save_load_config():\n # Save to file\n config = tiledb.Config()\n config[\"sm.tile_cache_size\"] = 0\n config.save(\"tiledb_config.txt\")\n\n # Load from file\n config_load = tiledb.Config.load(\"tiledb_config.txt\")\n print(\n \"\\nTile cache size after loading from file: %s\"\n % str(config_load[\"sm.tile_cache_size\"])\n )\n\n\nset_get_config_ctx_vfs()\nset_get_config()\nprint_default()\niter_config_with_prefix()\nsave_load_config()\n", "path": "examples/config.py"}]} | 1,437 | 254 |
gh_patches_debug_53387 | rasdani/github-patches | git_diff | chainer__chainer-781 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support numpy 1.10
numpy 1.10.0 is released on 2015/10/07
https://pypi.python.org/pypi/numpy/1.10.0
</issue>
<code>
[start of cupy/creation/ranges.py]
1 import numpy
2
3 import cupy
4 from cupy import core
5
6
7 def arange(start, stop=None, step=1, dtype=None):
8 """Rerurns an array with evenly spaced values within a given interval.
9
10 Values are generated within the half-open interval [start, stop). The first
11 three arguments are mapped like the ``range`` built-in function, i.e. start
12 and step are optional.
13
14 Args:
15 start: Start of the interval.
16 stop: End of the interval.
17 step: Step width between each pair of consecutive values.
18 dtype: Data type specifier. It is inferred from other arguments by
19 default.
20
21 Returns:
22 cupy.ndarray: The 1-D array of range values.
23
24 .. seealso:: :func:`numpy.arange`
25
26 """
27 if dtype is None:
28 if any(numpy.dtype(type(val)).kind == 'f'
29 for val in (start, stop, step)):
30 dtype = float
31 else:
32 dtype = int
33
34 if stop is None:
35 stop = start
36 start = 0
37 size = int(numpy.ceil((stop - start) / step))
38 if size <= 0:
39 return cupy.empty((0,), dtype=dtype)
40
41 ret = cupy.empty((size,), dtype=dtype)
42 typ = numpy.dtype(dtype).type
43 _arange_ufunc(typ(start), typ(step), ret, dtype=dtype)
44 return ret
45
46
47 def linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None):
48 """Returns an array with evenly-spaced values within a given interval.
49
50 Instead of specifying the step width like :func:`cupy.arange`, this
51 function requires the total number of elements specified.
52
53 Args:
54 start: Start of the interval.
55 stop: End of the interval.
56 num: Number of elements.
57 endpoint (bool): If True, the stop value is included as the last
58 element. Otherwise, the stop value is omitted.
59 retstep (bool): If True, this function returns (array, step).
60 Otherwise, it returns only the array.
61 dtype: Data type specifier. It is inferred from the start and stop
62 arguments by default.
63
64 Returns:
65 cupy.ndarray: The 1-D array of ranged values.
66
67 """
68 if num < 0:
69 raise ValueError('linspace with num<0 is not supported')
70
71 if dtype is None:
72 # In actual implementation, only float is used
73 dtype = float
74
75 ret = cupy.empty((num,), dtype=dtype)
76 if num == 0:
77 step = float('nan')
78 elif num == 1:
79 ret.fill(start)
80 step = float('nan')
81 else:
82 div = (num - 1) if endpoint else num
83 step = float(stop - start) / div
84 stop = float(stop)
85
86 if step == 0.0:
87 # for underflow
88 _linspace_ufunc_underflow(start, stop - start, div, ret)
89 else:
90 _linspace_ufunc(start, step, ret)
91
92 if endpoint:
93 ret[-1] = stop
94
95 if retstep:
96 return ret, step
97 else:
98 return ret
99
100
101 # TODO(okuta): Implement logspace
102
103
104 # TODO(okuta): Implement meshgrid
105
106
107 # mgrid
108 # ogrid
109
110
111 _arange_ufunc = core.create_ufunc(
112 'cupy_arange',
113 ('bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L',
114 'qq->q', 'QQ->Q', 'ee->e', 'ff->f', 'dd->d'),
115 'out0 = in0 + i * in1')
116
117
118 _linspace_ufunc = core.create_ufunc(
119 'cupy_linspace',
120 ('dd->d',),
121 'out0 = in0 + i * in1')
122
123 _linspace_ufunc_underflow = core.create_ufunc(
124 'cupy_linspace',
125 ('ddd->d',),
126 'out0 = in0 + i * in1 / in2')
127
[end of cupy/creation/ranges.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/creation/ranges.py b/cupy/creation/ranges.py
--- a/cupy/creation/ranges.py
+++ b/cupy/creation/ranges.py
@@ -85,9 +85,10 @@
if step == 0.0:
# for underflow
- _linspace_ufunc_underflow(start, stop - start, div, ret)
+ _linspace_ufunc_underflow(start, stop - start, div, ret,
+ casting='unsafe')
else:
- _linspace_ufunc(start, step, ret)
+ _linspace_ufunc(start, step, ret, casting='unsafe')
if endpoint:
ret[-1] = stop
| {"golden_diff": "diff --git a/cupy/creation/ranges.py b/cupy/creation/ranges.py\n--- a/cupy/creation/ranges.py\n+++ b/cupy/creation/ranges.py\n@@ -85,9 +85,10 @@\n \n if step == 0.0:\n # for underflow\n- _linspace_ufunc_underflow(start, stop - start, div, ret)\n+ _linspace_ufunc_underflow(start, stop - start, div, ret,\n+ casting='unsafe')\n else:\n- _linspace_ufunc(start, step, ret)\n+ _linspace_ufunc(start, step, ret, casting='unsafe')\n \n if endpoint:\n ret[-1] = stop\n", "issue": "Support numpy 1.10\nnumpy 1.10.0 is released on 2015/10/07\n\nhttps://pypi.python.org/pypi/numpy/1.10.0\n\n", "before_files": [{"content": "import numpy\n\nimport cupy\nfrom cupy import core\n\n\ndef arange(start, stop=None, step=1, dtype=None):\n \"\"\"Rerurns an array with evenly spaced values within a given interval.\n\n Values are generated within the half-open interval [start, stop). The first\n three arguments are mapped like the ``range`` built-in function, i.e. start\n and step are optional.\n\n Args:\n start: Start of the interval.\n stop: End of the interval.\n step: Step width between each pair of consecutive values.\n dtype: Data type specifier. It is inferred from other arguments by\n default.\n\n Returns:\n cupy.ndarray: The 1-D array of range values.\n\n .. seealso:: :func:`numpy.arange`\n\n \"\"\"\n if dtype is None:\n if any(numpy.dtype(type(val)).kind == 'f'\n for val in (start, stop, step)):\n dtype = float\n else:\n dtype = int\n\n if stop is None:\n stop = start\n start = 0\n size = int(numpy.ceil((stop - start) / step))\n if size <= 0:\n return cupy.empty((0,), dtype=dtype)\n\n ret = cupy.empty((size,), dtype=dtype)\n typ = numpy.dtype(dtype).type\n _arange_ufunc(typ(start), typ(step), ret, dtype=dtype)\n return ret\n\n\ndef linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None):\n \"\"\"Returns an array with evenly-spaced values within a given interval.\n\n Instead of specifying the step width like :func:`cupy.arange`, this\n function requires the total number of elements specified.\n\n Args:\n start: Start of the interval.\n stop: End of the interval.\n num: Number of elements.\n endpoint (bool): If True, the stop value is included as the last\n element. Otherwise, the stop value is omitted.\n retstep (bool): If True, this function returns (array, step).\n Otherwise, it returns only the array.\n dtype: Data type specifier. 
It is inferred from the start and stop\n arguments by default.\n\n Returns:\n cupy.ndarray: The 1-D array of ranged values.\n\n \"\"\"\n if num < 0:\n raise ValueError('linspace with num<0 is not supported')\n\n if dtype is None:\n # In actual implementation, only float is used\n dtype = float\n\n ret = cupy.empty((num,), dtype=dtype)\n if num == 0:\n step = float('nan')\n elif num == 1:\n ret.fill(start)\n step = float('nan')\n else:\n div = (num - 1) if endpoint else num\n step = float(stop - start) / div\n stop = float(stop)\n\n if step == 0.0:\n # for underflow\n _linspace_ufunc_underflow(start, stop - start, div, ret)\n else:\n _linspace_ufunc(start, step, ret)\n\n if endpoint:\n ret[-1] = stop\n\n if retstep:\n return ret, step\n else:\n return ret\n\n\n# TODO(okuta): Implement logspace\n\n\n# TODO(okuta): Implement meshgrid\n\n\n# mgrid\n# ogrid\n\n\n_arange_ufunc = core.create_ufunc(\n 'cupy_arange',\n ('bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L',\n 'qq->q', 'QQ->Q', 'ee->e', 'ff->f', 'dd->d'),\n 'out0 = in0 + i * in1')\n\n\n_linspace_ufunc = core.create_ufunc(\n 'cupy_linspace',\n ('dd->d',),\n 'out0 = in0 + i * in1')\n\n_linspace_ufunc_underflow = core.create_ufunc(\n 'cupy_linspace',\n ('ddd->d',),\n 'out0 = in0 + i * in1 / in2')\n", "path": "cupy/creation/ranges.py"}]} | 1,795 | 164 |
gh_patches_debug_1201 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-588 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No way to define options that have no defaults
Currently if you set a value in `cookiecutter.json` to `null` it becomes `None` and is then turned into the _string_ `'None'`.
</issue>
<code>
[start of cookiecutter/prompt.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.prompt
6 ---------------------
7
8 Functions for prompting the user for project info.
9 """
10
11 from collections import OrderedDict
12
13 import click
14 from past.builtins import basestring
15
16 from future.utils import iteritems
17 from jinja2.environment import Environment
18
19
20 def read_user_variable(var_name, default_value):
21 """Prompt the user for the given variable and return the entered value
22 or the given default.
23
24 :param str var_name: Variable of the context to query the user
25 :param default_value: Value that will be returned if no input happens
26 """
27 # Please see http://click.pocoo.org/4/api/#click.prompt
28 return click.prompt(var_name, default=default_value)
29
30
31 def read_user_yes_no(question, default_value):
32 """Prompt the user to reply with 'yes' or 'no' (or equivalent values).
33
34 Note:
35 Possible choices are 'true', '1', 'yes', 'y' or 'false', '0', 'no', 'n'
36
37 :param str question: Question to the user
38 :param default_value: Value that will be returned if no input happens
39 """
40 # Please see http://click.pocoo.org/4/api/#click.prompt
41 return click.prompt(
42 question,
43 default=default_value,
44 type=click.BOOL
45 )
46
47
48 def read_user_choice(var_name, options):
49 """Prompt the user to choose from several options for the given variable.
50
51 The first item will be returned if no input happens.
52
53 :param str var_name: Variable as specified in the context
54 :param list options: Sequence of options that are available to select from
55 :return: Exactly one item of ``options`` that has been chosen by the user
56 """
57 # Please see http://click.pocoo.org/4/api/#click.prompt
58 if not isinstance(options, list):
59 raise TypeError
60
61 if not options:
62 raise ValueError
63
64 choice_map = OrderedDict(
65 (u'{}'.format(i), value) for i, value in enumerate(options, 1)
66 )
67 choices = choice_map.keys()
68 default = u'1'
69
70 choice_lines = [u'{} - {}'.format(*c) for c in choice_map.items()]
71 prompt = u'\n'.join((
72 u'Select {}:'.format(var_name),
73 u'\n'.join(choice_lines),
74 u'Choose from {}'.format(u', '.join(choices))
75 ))
76
77 user_choice = click.prompt(
78 prompt, type=click.Choice(choices), default=default
79 )
80 return choice_map[user_choice]
81
82
83 def render_variable(env, raw, cookiecutter_dict):
84 if not isinstance(raw, basestring):
85 raw = str(raw)
86 template = env.from_string(raw)
87 rendered_template = template.render(cookiecutter=cookiecutter_dict)
88 return rendered_template
89
90
91 def prompt_choice_for_config(cookiecutter_dict, env, key, options, no_input):
92 """Prompt the user which option to choose from the given. Each of the
93 possible choices is rendered beforehand.
94 """
95 rendered_options = [
96 render_variable(env, raw, cookiecutter_dict) for raw in options
97 ]
98
99 if no_input:
100 return rendered_options[0]
101 return read_user_choice(key, rendered_options)
102
103
104 def prompt_for_config(context, no_input=False):
105 """
106 Prompts the user to enter new config, using context as a source for the
107 field names and sample values.
108
109 :param no_input: Prompt the user at command line for manual configuration?
110 """
111 cookiecutter_dict = {}
112 env = Environment()
113
114 for key, raw in iteritems(context[u'cookiecutter']):
115 if key.startswith(u'_'):
116 cookiecutter_dict[key] = raw
117 continue
118
119 if isinstance(raw, list):
120 # We are dealing with a choice variable
121 val = prompt_choice_for_config(
122 cookiecutter_dict, env, key, raw, no_input
123 )
124 else:
125 # We are dealing with a regular variable
126 val = render_variable(env, raw, cookiecutter_dict)
127
128 if not no_input:
129 val = read_user_variable(key, val)
130
131 cookiecutter_dict[key] = val
132 return cookiecutter_dict
133
[end of cookiecutter/prompt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -81,6 +81,8 @@
def render_variable(env, raw, cookiecutter_dict):
+ if raw is None:
+ return None
if not isinstance(raw, basestring):
raw = str(raw)
template = env.from_string(raw)
| {"golden_diff": "diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -81,6 +81,8 @@\n \n \n def render_variable(env, raw, cookiecutter_dict):\n+ if raw is None:\n+ return None\n if not isinstance(raw, basestring):\n raw = str(raw)\n template = env.from_string(raw)\n", "issue": "No way to define options that have no defaults\nCurrently if you set a value in `cookiecutter.json` to `null` it becomes `None` and is then turned into the _string_ `'None'`.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom collections import OrderedDict\n\nimport click\nfrom past.builtins import basestring\n\nfrom future.utils import iteritems\nfrom jinja2.environment import Environment\n\n\ndef read_user_variable(var_name, default_value):\n \"\"\"Prompt the user for the given variable and return the entered value\n or the given default.\n\n :param str var_name: Variable of the context to query the user\n :param default_value: Value that will be returned if no input happens\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n return click.prompt(var_name, default=default_value)\n\n\ndef read_user_yes_no(question, default_value):\n \"\"\"Prompt the user to reply with 'yes' or 'no' (or equivalent values).\n\n Note:\n Possible choices are 'true', '1', 'yes', 'y' or 'false', '0', 'no', 'n'\n\n :param str question: Question to the user\n :param default_value: Value that will be returned if no input happens\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n return click.prompt(\n question,\n default=default_value,\n type=click.BOOL\n )\n\n\ndef read_user_choice(var_name, options):\n \"\"\"Prompt the user to choose from several options for the given variable.\n\n The first item will be returned if no input happens.\n\n :param str var_name: Variable as specified in the context\n :param list options: Sequence of options that are available to select from\n :return: Exactly one item of ``options`` that has been chosen by the user\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n if not isinstance(options, list):\n raise TypeError\n\n if not options:\n raise ValueError\n\n choice_map = OrderedDict(\n (u'{}'.format(i), value) for i, value in enumerate(options, 1)\n )\n choices = choice_map.keys()\n default = u'1'\n\n choice_lines = [u'{} - {}'.format(*c) for c in choice_map.items()]\n prompt = u'\\n'.join((\n u'Select {}:'.format(var_name),\n u'\\n'.join(choice_lines),\n u'Choose from {}'.format(u', '.join(choices))\n ))\n\n user_choice = click.prompt(\n prompt, type=click.Choice(choices), default=default\n )\n return choice_map[user_choice]\n\n\ndef render_variable(env, raw, cookiecutter_dict):\n if not isinstance(raw, basestring):\n raw = str(raw)\n template = env.from_string(raw)\n rendered_template = template.render(cookiecutter=cookiecutter_dict)\n return rendered_template\n\n\ndef prompt_choice_for_config(cookiecutter_dict, env, key, options, no_input):\n \"\"\"Prompt the user which option to choose from the given. 
Each of the\n possible choices is rendered beforehand.\n \"\"\"\n rendered_options = [\n render_variable(env, raw, cookiecutter_dict) for raw in options\n ]\n\n if no_input:\n return rendered_options[0]\n return read_user_choice(key, rendered_options)\n\n\ndef prompt_for_config(context, no_input=False):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n\n :param no_input: Prompt the user at command line for manual configuration?\n \"\"\"\n cookiecutter_dict = {}\n env = Environment()\n\n for key, raw in iteritems(context[u'cookiecutter']):\n if key.startswith(u'_'):\n cookiecutter_dict[key] = raw\n continue\n\n if isinstance(raw, list):\n # We are dealing with a choice variable\n val = prompt_choice_for_config(\n cookiecutter_dict, env, key, raw, no_input\n )\n else:\n # We are dealing with a regular variable\n val = render_variable(env, raw, cookiecutter_dict)\n\n if not no_input:\n val = read_user_variable(key, val)\n\n cookiecutter_dict[key] = val\n return cookiecutter_dict\n", "path": "cookiecutter/prompt.py"}]} | 1,809 | 101 |
gh_patches_debug_30077 | rasdani/github-patches | git_diff | chainer__chainer-1158 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mix CPU mode and GPU mode in one network
I want to use CPU mode for some functions and GPU mode for others in one network.
When I use a large number of vocabulary in EmbedID about >=1,000,000 words, it consumes large size of memory in GPU. In this situation, I need to use EmbedID in CPU, and to use other functions like LSTMs in GPU.
</issue>
<code>
[start of chainer/functions/array/copy.py]
1 from chainer import cuda
2 from chainer import function
3 from chainer.utils import type_check
4
5
6 class Copy(function.Function):
7
8 """Copy an input :class:`cupy.ndarray` onto another device."""
9
10 def __init__(self, out_device):
11 self.out_device = out_device
12
13 def check_type_forward(self, in_types):
14 type_check.expect(
15 in_types.size() == 1
16 )
17
18 def forward_cpu(self, x):
19 return x[0].copy(),
20
21 def forward_gpu(self, x):
22 return cuda.copy(x[0], out_device=self.out_device),
23
24 def backward_cpu(self, x, gy):
25 return gy[0].copy(),
26
27 def backward_gpu(self, x, gy):
28 return cuda.copy(gy[0], out_device=cuda.get_device(x[0])),
29
30
31 def copy(x, dst):
32 """Copies the input variable onto the specified device.
33
34 This function copies the array of input variable onto the device specified
35 by ``dst`` if the original array is on GPU, and otherwise just copies the
36 array within host memory.
37
38 Args:
39 x (~chainer.Variable): Variable to be copied.
40 dst: Target device specifier.
41
42 Returns:
43 ~chainer.Variable: Output variable.
44
45 """
46 return Copy(dst)(x)
47
[end of chainer/functions/array/copy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/array/copy.py b/chainer/functions/array/copy.py
--- a/chainer/functions/array/copy.py
+++ b/chainer/functions/array/copy.py
@@ -16,24 +16,45 @@
)
def forward_cpu(self, x):
- return x[0].copy(),
+ if self.out_device == -1:
+ return x[0].copy(),
+ else:
+ return cuda.to_gpu(x[0], device=self.out_device),
def forward_gpu(self, x):
- return cuda.copy(x[0], out_device=self.out_device),
+ if self.out_device == -1:
+ return cuda.to_cpu(x[0]),
+ else:
+ return cuda.copy(x[0], out_device=self.out_device),
+
+ def backward(self, inputs, grad_outputs):
+ # In this function, `grad_outputs` contains cuda arrays even when
+ # `inputs` only contains numpy arrays.
+ if isinstance(inputs[0], cuda.ndarray):
+ return self.backward_gpu(inputs, grad_outputs)
+ else:
+ return self.backward_cpu(inputs, grad_outputs)
def backward_cpu(self, x, gy):
- return gy[0].copy(),
+ if self.out_device == -1:
+ return gy[0].copy(),
+ else:
+ return cuda.to_cpu(gy[0]),
def backward_gpu(self, x, gy):
- return cuda.copy(gy[0], out_device=cuda.get_device(x[0])),
+ if self.out_device == -1:
+ return cuda.to_gpu(gy[0], device=cuda.get_device(x[0])),
+ else:
+ return cuda.copy(gy[0], out_device=cuda.get_device(x[0])),
def copy(x, dst):
"""Copies the input variable onto the specified device.
This function copies the array of input variable onto the device specified
- by ``dst`` if the original array is on GPU, and otherwise just copies the
- array within host memory.
+ by ``dst``. When ``dst == -1``, it copies the array onto the host memory.
+ This function supports copies from host to device, from device to device
+ and from device to host.
Args:
x (~chainer.Variable): Variable to be copied.
| {"golden_diff": "diff --git a/chainer/functions/array/copy.py b/chainer/functions/array/copy.py\n--- a/chainer/functions/array/copy.py\n+++ b/chainer/functions/array/copy.py\n@@ -16,24 +16,45 @@\n )\n \n def forward_cpu(self, x):\n- return x[0].copy(),\n+ if self.out_device == -1:\n+ return x[0].copy(),\n+ else:\n+ return cuda.to_gpu(x[0], device=self.out_device),\n \n def forward_gpu(self, x):\n- return cuda.copy(x[0], out_device=self.out_device),\n+ if self.out_device == -1:\n+ return cuda.to_cpu(x[0]),\n+ else:\n+ return cuda.copy(x[0], out_device=self.out_device),\n+\n+ def backward(self, inputs, grad_outputs):\n+ # In this function, `grad_outputs` contains cuda arrays even when\n+ # `inputs` only contains numpy arrays.\n+ if isinstance(inputs[0], cuda.ndarray):\n+ return self.backward_gpu(inputs, grad_outputs)\n+ else:\n+ return self.backward_cpu(inputs, grad_outputs)\n \n def backward_cpu(self, x, gy):\n- return gy[0].copy(),\n+ if self.out_device == -1:\n+ return gy[0].copy(),\n+ else:\n+ return cuda.to_cpu(gy[0]),\n \n def backward_gpu(self, x, gy):\n- return cuda.copy(gy[0], out_device=cuda.get_device(x[0])),\n+ if self.out_device == -1:\n+ return cuda.to_gpu(gy[0], device=cuda.get_device(x[0])),\n+ else:\n+ return cuda.copy(gy[0], out_device=cuda.get_device(x[0])),\n \n \n def copy(x, dst):\n \"\"\"Copies the input variable onto the specified device.\n \n This function copies the array of input variable onto the device specified\n- by ``dst`` if the original array is on GPU, and otherwise just copies the\n- array within host memory.\n+ by ``dst``. When ``dst == -1``, it copies the array onto the host memory.\n+ This function supports copies from host to device, from device to device\n+ and from device to host.\n \n Args:\n x (~chainer.Variable): Variable to be copied.\n", "issue": "Mix CPU mode and GPU mode in one network\nI want to use CPU mode for some functions and GPU mode for others in one network.\nWhen I use a large number of vocabulary in EmbedID about >=1,000,000 words, it consumes large size of memory in GPU. In this situation, I need to use EmbedID in CPU, and to use other functions like LSTMs in GPU.\n\n", "before_files": [{"content": "from chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\nclass Copy(function.Function):\n\n \"\"\"Copy an input :class:`cupy.ndarray` onto another device.\"\"\"\n\n def __init__(self, out_device):\n self.out_device = out_device\n\n def check_type_forward(self, in_types):\n type_check.expect(\n in_types.size() == 1\n )\n\n def forward_cpu(self, x):\n return x[0].copy(),\n\n def forward_gpu(self, x):\n return cuda.copy(x[0], out_device=self.out_device),\n\n def backward_cpu(self, x, gy):\n return gy[0].copy(),\n\n def backward_gpu(self, x, gy):\n return cuda.copy(gy[0], out_device=cuda.get_device(x[0])),\n\n\ndef copy(x, dst):\n \"\"\"Copies the input variable onto the specified device.\n\n This function copies the array of input variable onto the device specified\n by ``dst`` if the original array is on GPU, and otherwise just copies the\n array within host memory.\n\n Args:\n x (~chainer.Variable): Variable to be copied.\n dst: Target device specifier.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n \"\"\"\n return Copy(dst)(x)\n", "path": "chainer/functions/array/copy.py"}]} | 996 | 516 |
gh_patches_debug_24801 | rasdani/github-patches | git_diff | mirumee__ariadne-158 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
If parse_literal is not present try fallbacking to `parse_value(ast.value)`
Following idea was brought up in discussion for #24:
> Maybe we could default to calling parse_value with ast.value when only one function is provided?
This requires further study. `IntValue`, `StringValue` and friends are obvious to deal with, but but complex types like `ListValue` may require some extra unpacking magic.
Still, if it is possible to pull off, it could be an excellent convenience for developers creating custom scalars, saving the need for potentially maintaining two very simiiar implementations, one doing `isinstance(value, basestr)` and other `isinstance(value, StringValue)`.
</issue>
<code>
[start of ariadne/scalars.py]
1 from typing import Optional, cast
2
3 from graphql.type import (
4 GraphQLNamedType,
5 GraphQLScalarLiteralParser,
6 GraphQLScalarSerializer,
7 GraphQLScalarType,
8 GraphQLScalarValueParser,
9 GraphQLSchema,
10 )
11
12 from .types import SchemaBindable
13
14
15 class ScalarType(SchemaBindable):
16 _serialize: Optional[GraphQLScalarSerializer]
17 _parse_value: Optional[GraphQLScalarValueParser]
18 _parse_literal: Optional[GraphQLScalarLiteralParser]
19
20 def __init__(
21 self,
22 name: str,
23 *,
24 serializer: GraphQLScalarSerializer = None,
25 value_parser: GraphQLScalarValueParser = None,
26 literal_parser: GraphQLScalarLiteralParser = None,
27 ) -> None:
28 self.name = name
29 self._serialize = serializer
30 self._parse_value = value_parser
31 self._parse_literal = literal_parser
32
33 def set_serializer(self, f: GraphQLScalarSerializer) -> GraphQLScalarSerializer:
34 self._serialize = f
35 return f
36
37 def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:
38 self._parse_value = f
39 return f
40
41 def set_literal_parser(
42 self, f: GraphQLScalarLiteralParser
43 ) -> GraphQLScalarLiteralParser:
44 self._parse_literal = f
45 return f
46
47 # Alias above setters for consistent decorator API
48 serializer = set_serializer
49 value_parser = set_value_parser
50 literal_parser = set_literal_parser
51
52 def bind_to_schema(self, schema: GraphQLSchema) -> None:
53 graphql_type = schema.type_map.get(self.name)
54 self.validate_graphql_type(graphql_type)
55 graphql_type = cast(GraphQLScalarType, graphql_type)
56
57 if self._serialize:
58 # See mypy bug https://github.com/python/mypy/issues/2427
59 graphql_type.serialize = self._serialize # type: ignore
60 if self._parse_value:
61 graphql_type.parse_value = self._parse_value
62 if self._parse_literal:
63 graphql_type.parse_literal = self._parse_literal
64
65 def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:
66 if not graphql_type:
67 raise ValueError("Scalar %s is not defined in the schema" % self.name)
68 if not isinstance(graphql_type, GraphQLScalarType):
69 raise ValueError(
70 "%s is defined in the schema, but it is instance of %s (expected %s)"
71 % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)
72 )
73
[end of ariadne/scalars.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/scalars.py b/ariadne/scalars.py
--- a/ariadne/scalars.py
+++ b/ariadne/scalars.py
@@ -1,5 +1,11 @@
from typing import Optional, cast
+from graphql.language.ast import (
+ BooleanValueNode,
+ FloatValueNode,
+ IntValueNode,
+ StringValueNode,
+)
from graphql.type import (
GraphQLNamedType,
GraphQLScalarLiteralParser,
@@ -36,6 +42,8 @@
def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:
self._parse_value = f
+ if not self._parse_literal:
+ self._parse_literal = create_default_literal_parser(f)
return f
def set_literal_parser(
@@ -70,3 +78,15 @@
"%s is defined in the schema, but it is instance of %s (expected %s)"
% (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)
)
+
+
+SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)
+
+
+def create_default_literal_parser(
+ value_parser: GraphQLScalarValueParser
+) -> GraphQLScalarLiteralParser:
+ def default_literal_parser(ast):
+ return value_parser(ast.value)
+
+ return default_literal_parser
| {"golden_diff": "diff --git a/ariadne/scalars.py b/ariadne/scalars.py\n--- a/ariadne/scalars.py\n+++ b/ariadne/scalars.py\n@@ -1,5 +1,11 @@\n from typing import Optional, cast\n \n+from graphql.language.ast import (\n+ BooleanValueNode,\n+ FloatValueNode,\n+ IntValueNode,\n+ StringValueNode,\n+)\n from graphql.type import (\n GraphQLNamedType,\n GraphQLScalarLiteralParser,\n@@ -36,6 +42,8 @@\n \n def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:\n self._parse_value = f\n+ if not self._parse_literal:\n+ self._parse_literal = create_default_literal_parser(f)\n return f\n \n def set_literal_parser(\n@@ -70,3 +78,15 @@\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)\n )\n+\n+\n+SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)\n+\n+\n+def create_default_literal_parser(\n+ value_parser: GraphQLScalarValueParser\n+) -> GraphQLScalarLiteralParser:\n+ def default_literal_parser(ast):\n+ return value_parser(ast.value)\n+\n+ return default_literal_parser\n", "issue": "If parse_literal is not present try fallbacking to `parse_value(ast.value)`\nFollowing idea was brought up in discussion for #24:\r\n\r\n> Maybe we could default to calling parse_value with ast.value when only one function is provided?\r\n\r\nThis requires further study. `IntValue`, `StringValue` and friends are obvious to deal with, but but complex types like `ListValue` may require some extra unpacking magic.\r\n\r\nStill, if it is possible to pull off, it could be an excellent convenience for developers creating custom scalars, saving the need for potentially maintaining two very simiiar implementations, one doing `isinstance(value, basestr)` and other `isinstance(value, StringValue)`.\n", "before_files": [{"content": "from typing import Optional, cast\n\nfrom graphql.type import (\n GraphQLNamedType,\n GraphQLScalarLiteralParser,\n GraphQLScalarSerializer,\n GraphQLScalarType,\n GraphQLScalarValueParser,\n GraphQLSchema,\n)\n\nfrom .types import SchemaBindable\n\n\nclass ScalarType(SchemaBindable):\n _serialize: Optional[GraphQLScalarSerializer]\n _parse_value: Optional[GraphQLScalarValueParser]\n _parse_literal: Optional[GraphQLScalarLiteralParser]\n\n def __init__(\n self,\n name: str,\n *,\n serializer: GraphQLScalarSerializer = None,\n value_parser: GraphQLScalarValueParser = None,\n literal_parser: GraphQLScalarLiteralParser = None,\n ) -> None:\n self.name = name\n self._serialize = serializer\n self._parse_value = value_parser\n self._parse_literal = literal_parser\n\n def set_serializer(self, f: GraphQLScalarSerializer) -> GraphQLScalarSerializer:\n self._serialize = f\n return f\n\n def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:\n self._parse_value = f\n return f\n\n def set_literal_parser(\n self, f: GraphQLScalarLiteralParser\n ) -> GraphQLScalarLiteralParser:\n self._parse_literal = f\n return f\n\n # Alias above setters for consistent decorator API\n serializer = set_serializer\n value_parser = set_value_parser\n literal_parser = set_literal_parser\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLScalarType, graphql_type)\n\n if self._serialize:\n # See mypy bug https://github.com/python/mypy/issues/2427\n graphql_type.serialize = self._serialize # type: ignore\n if 
self._parse_value:\n graphql_type.parse_value = self._parse_value\n if self._parse_literal:\n graphql_type.parse_literal = self._parse_literal\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n if not graphql_type:\n raise ValueError(\"Scalar %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLScalarType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)\n )\n", "path": "ariadne/scalars.py"}]} | 1,358 | 311 |
gh_patches_debug_20645 | rasdani/github-patches | git_diff | Flexget__Flexget-1101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem with sabnzbd after upgrade to version 2.0.5
Hi,
Last night I upgraded to version 2.0.5 from 1.2.521. I haven't made any config changes. Everything seems to work except adding downloads to sabnzbd. Reverting back to version 1.2.521 made everything work again.
```
2016-04-27 07:30 CRITICAL sabnzbd usenet Failed to use sabnzbd. Requested http://sabnzbd:8080/sabnzbd/api?nzbname=REL_NAME&apikey=11111&mode=addurl&name=URL_THAT_WORKS
2016-04-27 07:30 CRITICAL sabnzbd usenet Result was: 'Task' object has no attribute 'get'
2016-04-27 07:30 ERROR entry usenet Failed REL_NAME (sabnzbd unreachable)
```
Manually clicking the url does add the nzb to sabznbd.
This runs in a FreeBSD 10.3 jail using Python 2.7.11 installed and upgraded using pip.
</issue>
<code>
[start of flexget/plugins/output/sabnzbd.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import *
3 from future.moves.urllib.parse import urlencode
4
5 import logging
6
7 from flexget import plugin
8 from flexget.event import event
9
10 log = logging.getLogger('sabnzbd')
11
12
13 class OutputSabnzbd(object):
14 """
15 Example::
16
17 sabnzbd:
18 apikey: 123456
19 url: http://localhost/sabnzbd/api?
20 category: movies
21
22 All parameters::
23
24 sabnzbd:
25 apikey: ...
26 url: ...
27 category: ...
28 script: ...
29 pp: ...
30 priority: ...
31 """
32 schema = {
33 'type': 'object',
34 'properties': {
35 'key': {'type': 'string'},
36 'url': {'type': 'string', 'format': 'url'},
37 'category': {'type': 'string'},
38 'script': {'type': 'string'},
39 'pp': {'type': 'string'},
40 'priority': {'type': 'integer'},
41 'password': {'type': 'string'},
42 'username': {'type': 'string'},
43 },
44 'required': ['key', 'url'],
45 'additionalProperties': False,
46 }
47
48 def get_params(self, config):
49 params = {}
50 if 'key' in config:
51 params['apikey'] = config['key']
52 if 'category' in config:
53 params['cat'] = '%s' % config['category']
54 if 'script' in config:
55 params['script'] = config['script']
56 if 'pp' in config:
57 params['pp'] = config['pp']
58 if 'priority' in config:
59 params['priority'] = config['priority']
60 if 'username' in config:
61 params['ma_username'] = config['username']
62 if 'password' in config:
63 params['ma_password'] = config['password']
64 params['mode'] = 'addurl'
65 return params
66
67 def on_task_output(self, task, config):
68 for entry in task.accepted:
69 if task.options.test:
70 log.info('Would add into sabnzbd: %s' % entry['title'])
71 continue
72
73 params = self.get_params(config)
74 # allow overriding the category
75 if 'category' in entry:
76 # Dirty hack over the next few lines to strip out non-ascii
77 # chars. We're going to urlencode this, which causes
78 # serious issues in python2.x if it's not ascii input.
79 params['cat'] = ''.join([x for x in entry['category'] if ord(x) < 128])
80 params['name'] = ''.join([x for x in entry['url'] if ord(x) < 128])
81 # add cleaner nzb name (undocumented api feature)
82 params['nzbname'] = ''.join([x for x in entry['title'] if ord(x) < 128])
83
84 request_url = config['url'] + urlencode(params)
85 log.debug('request_url: %s' % request_url)
86 try:
87 response = task.get(request_url)
88 except Exception as e:
89 log.critical('Failed to use sabnzbd. Requested %s' % request_url)
90 log.critical('Result was: %s' % e)
91 entry.fail('sabnzbd unreachable')
92 if task.options.debug:
93 log.exception(e)
94 continue
95
96 if 'error' in response.text.lower():
97 entry.fail(response.text.replace('\n', ''))
98 else:
99 log.info('Added `%s` to SABnzbd' % (entry['title']))
100
101
102 @event('plugin.register')
103 def register_plugin():
104 plugin.register(OutputSabnzbd, 'sabnzbd', api_ver=2)
105
[end of flexget/plugins/output/sabnzbd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/output/sabnzbd.py b/flexget/plugins/output/sabnzbd.py
--- a/flexget/plugins/output/sabnzbd.py
+++ b/flexget/plugins/output/sabnzbd.py
@@ -6,6 +6,7 @@
from flexget import plugin
from flexget.event import event
+from requests import RequestException
log = logging.getLogger('sabnzbd')
@@ -84,10 +85,10 @@
request_url = config['url'] + urlencode(params)
log.debug('request_url: %s' % request_url)
try:
- response = task.get(request_url)
- except Exception as e:
+ response = task.requests.get(request_url)
+ except RequestException as e:
log.critical('Failed to use sabnzbd. Requested %s' % request_url)
- log.critical('Result was: %s' % e)
+ log.critical('Result was: %s' % e.args[0])
entry.fail('sabnzbd unreachable')
if task.options.debug:
log.exception(e)
| {"golden_diff": "diff --git a/flexget/plugins/output/sabnzbd.py b/flexget/plugins/output/sabnzbd.py\n--- a/flexget/plugins/output/sabnzbd.py\n+++ b/flexget/plugins/output/sabnzbd.py\n@@ -6,6 +6,7 @@\n \n from flexget import plugin\n from flexget.event import event\n+from requests import RequestException\n \n log = logging.getLogger('sabnzbd')\n \n@@ -84,10 +85,10 @@\n request_url = config['url'] + urlencode(params)\n log.debug('request_url: %s' % request_url)\n try:\n- response = task.get(request_url)\n- except Exception as e:\n+ response = task.requests.get(request_url)\n+ except RequestException as e:\n log.critical('Failed to use sabnzbd. Requested %s' % request_url)\n- log.critical('Result was: %s' % e)\n+ log.critical('Result was: %s' % e.args[0])\n entry.fail('sabnzbd unreachable')\n if task.options.debug:\n log.exception(e)\n", "issue": "Problem with sabnzbd after upgrade to version 2.0.5\nHi,\n\nLast night I upgraded to version 2.0.5 from 1.2.521. I haven't made any config changes. Everything seems to work except adding downloads to sabnzbd. Reverting back to version 1.2.521 made everything work again.\n\n```\n2016-04-27 07:30 CRITICAL sabnzbd usenet Failed to use sabnzbd. Requested http://sabnzbd:8080/sabnzbd/api?nzbname=REL_NAME&apikey=11111&mode=addurl&name=URL_THAT_WORKS\n2016-04-27 07:30 CRITICAL sabnzbd usenet Result was: 'Task' object has no attribute 'get'\n2016-04-27 07:30 ERROR entry usenet Failed REL_NAME (sabnzbd unreachable) \n```\n\nManually clicking the url does add the nzb to sabznbd. \n\nThis runs in a FreeBSD 10.3 jail using Python 2.7.11 installed and upgraded using pip.\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import *\nfrom future.moves.urllib.parse import urlencode\n\nimport logging\n\nfrom flexget import plugin\nfrom flexget.event import event\n\nlog = logging.getLogger('sabnzbd')\n\n\nclass OutputSabnzbd(object):\n \"\"\"\n Example::\n\n sabnzbd:\n apikey: 123456\n url: http://localhost/sabnzbd/api?\n category: movies\n\n All parameters::\n\n sabnzbd:\n apikey: ...\n url: ...\n category: ...\n script: ...\n pp: ...\n priority: ...\n \"\"\"\n schema = {\n 'type': 'object',\n 'properties': {\n 'key': {'type': 'string'},\n 'url': {'type': 'string', 'format': 'url'},\n 'category': {'type': 'string'},\n 'script': {'type': 'string'},\n 'pp': {'type': 'string'},\n 'priority': {'type': 'integer'},\n 'password': {'type': 'string'},\n 'username': {'type': 'string'},\n },\n 'required': ['key', 'url'],\n 'additionalProperties': False,\n }\n\n def get_params(self, config):\n params = {}\n if 'key' in config:\n params['apikey'] = config['key']\n if 'category' in config:\n params['cat'] = '%s' % config['category']\n if 'script' in config:\n params['script'] = config['script']\n if 'pp' in config:\n params['pp'] = config['pp']\n if 'priority' in config:\n params['priority'] = config['priority']\n if 'username' in config:\n params['ma_username'] = config['username']\n if 'password' in config:\n params['ma_password'] = config['password']\n params['mode'] = 'addurl'\n return params\n\n def on_task_output(self, task, config):\n for entry in task.accepted:\n if task.options.test:\n log.info('Would add into sabnzbd: %s' % entry['title'])\n continue\n\n params = self.get_params(config)\n # allow overriding the category\n if 'category' in entry:\n # Dirty hack over the next few lines to strip out non-ascii\n # chars. 
We're going to urlencode this, which causes\n # serious issues in python2.x if it's not ascii input.\n params['cat'] = ''.join([x for x in entry['category'] if ord(x) < 128])\n params['name'] = ''.join([x for x in entry['url'] if ord(x) < 128])\n # add cleaner nzb name (undocumented api feature)\n params['nzbname'] = ''.join([x for x in entry['title'] if ord(x) < 128])\n\n request_url = config['url'] + urlencode(params)\n log.debug('request_url: %s' % request_url)\n try:\n response = task.get(request_url)\n except Exception as e:\n log.critical('Failed to use sabnzbd. Requested %s' % request_url)\n log.critical('Result was: %s' % e)\n entry.fail('sabnzbd unreachable')\n if task.options.debug:\n log.exception(e)\n continue\n\n if 'error' in response.text.lower():\n entry.fail(response.text.replace('\\n', ''))\n else:\n log.info('Added `%s` to SABnzbd' % (entry['title']))\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(OutputSabnzbd, 'sabnzbd', api_ver=2)\n", "path": "flexget/plugins/output/sabnzbd.py"}]} | 1,841 | 247 |
gh_patches_debug_5867 | rasdani/github-patches | git_diff | napari__napari-3424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`normalize_dtype` excludes big endian types
## 🐛 Bug
```py
In [457]: from napari.utils._dtype import get_dtype_limits
In [458]: get_dtype_limits(np.dtype('<u2'))
Out[458]: (0, 65535)
In [459]: get_dtype_limits(np.dtype('>u2'))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-459-d109d903c3cf> in <module>
----> 1 get_dtype_limits(np.dtype('>u2'))
~/Dropbox (HMS)/Python/forks/napari/napari/utils/_dtype.py in get_dtype_limits(dtype_spec)
103 info = np.finfo(dtype)
104 else:
--> 105 raise TypeError(f'Unrecognized or non-numeric dtype: {dtype_spec}')
106 return info.min, info.max
TypeError: Unrecognized or non-numeric dtype: >u2
In [460]: np.iinfo('>u2')
Out[460]: iinfo(min=0, max=65535, dtype=>u2)
```
</issue>
<code>
[start of napari/utils/_dtype.py]
1 from typing import Tuple, Union
2
3 import numpy as np
4
5 _np_uints = {
6 8: np.uint8,
7 16: np.uint16,
8 32: np.uint32,
9 64: np.uint64,
10 }
11
12 _np_ints = {
13 8: np.int8,
14 16: np.int16,
15 32: np.int32,
16 64: np.int64,
17 }
18
19 _np_floats = {
20 32: np.float32,
21 64: np.float64,
22 }
23
24 _np_complex = {
25 64: np.complex64,
26 128: np.complex128,
27 }
28
29 _np_kinds = {
30 'uint': _np_uints,
31 'int': _np_ints,
32 'float': _np_floats,
33 'complex': _np_complex,
34 }
35
36
37 def _normalize_str_by_bit_depth(dtype_str, kind):
38 if not any(str.isdigit(c) for c in dtype_str): # Python 'int' or 'float'
39 return np.dtype(kind).type
40 bit_dict = _np_kinds[kind]
41 if '128' in dtype_str:
42 return bit_dict[128]
43 if '8' in dtype_str:
44 return bit_dict[8]
45 if '16' in dtype_str:
46 return bit_dict[16]
47 if '32' in dtype_str:
48 return bit_dict[32]
49 if '64' in dtype_str:
50 return bit_dict[64]
51
52
53 def normalize_dtype(dtype_spec):
54 """Return a proper NumPy type given ~any duck array dtype.
55
56 Parameters
57 ----------
58 dtype_spec : numpy dtype, numpy type, torch dtype, tensorstore dtype, etc
59 A type that can be interpreted as a NumPy numeric data type, e.g.
60 'uint32', np.uint8, torch.float32, etc.
61
62 Returns
63 -------
64 dtype : numpy.dtype
65 The corresponding dtype.
66
67 Notes
68 -----
69 half-precision floats are not supported.
70 """
71 dtype_str = str(dtype_spec)
72 if 'uint' in dtype_str:
73 return _normalize_str_by_bit_depth(dtype_str, 'uint')
74 if 'int' in dtype_str:
75 return _normalize_str_by_bit_depth(dtype_str, 'int')
76 if 'float' in dtype_str:
77 return _normalize_str_by_bit_depth(dtype_str, 'float')
78 if 'complex' in dtype_str:
79 return _normalize_str_by_bit_depth(dtype_str, 'complex')
80 if 'bool' in dtype_str:
81 return np.bool_
82
83
84 def get_dtype_limits(dtype_spec) -> Tuple[float, float]:
85 """Return machine limits for numeric types.
86
87 Parameters
88 ----------
89 dtype_spec : numpy dtype, numpy type, torch dtype, tensorstore dtype, etc
90 A type that can be interpreted as a NumPy numeric data type, e.g.
91 'uint32', np.uint8, torch.float32, etc.
92
93 Returns
94 -------
95 limits : tuple
96 The smallest/largest numbers expressible by the type.
97 """
98 dtype = normalize_dtype(dtype_spec)
99 info: Union[np.iinfo, np.finfo]
100 if np.issubdtype(dtype, np.integer):
101 info = np.iinfo(dtype)
102 elif dtype and np.issubdtype(dtype, np.floating):
103 info = np.finfo(dtype)
104 else:
105 raise TypeError(f'Unrecognized or non-numeric dtype: {dtype_spec}')
106 return info.min, info.max
107
[end of napari/utils/_dtype.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/napari/utils/_dtype.py b/napari/utils/_dtype.py
--- a/napari/utils/_dtype.py
+++ b/napari/utils/_dtype.py
@@ -79,6 +79,11 @@
return _normalize_str_by_bit_depth(dtype_str, 'complex')
if 'bool' in dtype_str:
return np.bool_
+ # If we don't find one of the named dtypes, return the dtype_spec
+ # unchanged. This allows NumPy big endian types to work. See
+ # https://github.com/napari/napari/issues/3421
+ else:
+ return dtype_spec
def get_dtype_limits(dtype_spec) -> Tuple[float, float]:
| {"golden_diff": "diff --git a/napari/utils/_dtype.py b/napari/utils/_dtype.py\n--- a/napari/utils/_dtype.py\n+++ b/napari/utils/_dtype.py\n@@ -79,6 +79,11 @@\n return _normalize_str_by_bit_depth(dtype_str, 'complex')\n if 'bool' in dtype_str:\n return np.bool_\n+ # If we don't find one of the named dtypes, return the dtype_spec\n+ # unchanged. This allows NumPy big endian types to work. See\n+ # https://github.com/napari/napari/issues/3421\n+ else:\n+ return dtype_spec\n \n \n def get_dtype_limits(dtype_spec) -> Tuple[float, float]:\n", "issue": "`normalize_dtype` excludes big endian types\n## \ud83d\udc1b Bug\r\n```py\r\nIn [457]: from napari.utils._dtype import get_dtype_limits\r\n\r\nIn [458]: get_dtype_limits(np.dtype('<u2'))\r\nOut[458]: (0, 65535)\r\n\r\nIn [459]: get_dtype_limits(np.dtype('>u2'))\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-459-d109d903c3cf> in <module>\r\n----> 1 get_dtype_limits(np.dtype('>u2'))\r\n\r\n~/Dropbox (HMS)/Python/forks/napari/napari/utils/_dtype.py in get_dtype_limits(dtype_spec)\r\n 103 info = np.finfo(dtype)\r\n 104 else:\r\n--> 105 raise TypeError(f'Unrecognized or non-numeric dtype: {dtype_spec}')\r\n 106 return info.min, info.max\r\n\r\nTypeError: Unrecognized or non-numeric dtype: >u2\r\n\r\nIn [460]: np.iinfo('>u2')\r\nOut[460]: iinfo(min=0, max=65535, dtype=>u2)\r\n```\n", "before_files": [{"content": "from typing import Tuple, Union\n\nimport numpy as np\n\n_np_uints = {\n 8: np.uint8,\n 16: np.uint16,\n 32: np.uint32,\n 64: np.uint64,\n}\n\n_np_ints = {\n 8: np.int8,\n 16: np.int16,\n 32: np.int32,\n 64: np.int64,\n}\n\n_np_floats = {\n 32: np.float32,\n 64: np.float64,\n}\n\n_np_complex = {\n 64: np.complex64,\n 128: np.complex128,\n}\n\n_np_kinds = {\n 'uint': _np_uints,\n 'int': _np_ints,\n 'float': _np_floats,\n 'complex': _np_complex,\n}\n\n\ndef _normalize_str_by_bit_depth(dtype_str, kind):\n if not any(str.isdigit(c) for c in dtype_str): # Python 'int' or 'float'\n return np.dtype(kind).type\n bit_dict = _np_kinds[kind]\n if '128' in dtype_str:\n return bit_dict[128]\n if '8' in dtype_str:\n return bit_dict[8]\n if '16' in dtype_str:\n return bit_dict[16]\n if '32' in dtype_str:\n return bit_dict[32]\n if '64' in dtype_str:\n return bit_dict[64]\n\n\ndef normalize_dtype(dtype_spec):\n \"\"\"Return a proper NumPy type given ~any duck array dtype.\n\n Parameters\n ----------\n dtype_spec : numpy dtype, numpy type, torch dtype, tensorstore dtype, etc\n A type that can be interpreted as a NumPy numeric data type, e.g.\n 'uint32', np.uint8, torch.float32, etc.\n\n Returns\n -------\n dtype : numpy.dtype\n The corresponding dtype.\n\n Notes\n -----\n half-precision floats are not supported.\n \"\"\"\n dtype_str = str(dtype_spec)\n if 'uint' in dtype_str:\n return _normalize_str_by_bit_depth(dtype_str, 'uint')\n if 'int' in dtype_str:\n return _normalize_str_by_bit_depth(dtype_str, 'int')\n if 'float' in dtype_str:\n return _normalize_str_by_bit_depth(dtype_str, 'float')\n if 'complex' in dtype_str:\n return _normalize_str_by_bit_depth(dtype_str, 'complex')\n if 'bool' in dtype_str:\n return np.bool_\n\n\ndef get_dtype_limits(dtype_spec) -> Tuple[float, float]:\n \"\"\"Return machine limits for numeric types.\n\n Parameters\n ----------\n dtype_spec : numpy dtype, numpy type, torch dtype, tensorstore dtype, etc\n A type that can be interpreted as a NumPy numeric data type, e.g.\n 'uint32', np.uint8, torch.float32, etc.\n\n Returns\n -------\n 
limits : tuple\n The smallest/largest numbers expressible by the type.\n \"\"\"\n dtype = normalize_dtype(dtype_spec)\n info: Union[np.iinfo, np.finfo]\n if np.issubdtype(dtype, np.integer):\n info = np.iinfo(dtype)\n elif dtype and np.issubdtype(dtype, np.floating):\n info = np.finfo(dtype)\n else:\n raise TypeError(f'Unrecognized or non-numeric dtype: {dtype_spec}')\n return info.min, info.max\n", "path": "napari/utils/_dtype.py"}]} | 1,805 | 163 |
gh_patches_debug_40002 | rasdani/github-patches | git_diff | carpentries__amy-2211 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Instructor Selection: Additional filter/sort options for Admin view
On the Instructor Selection [admin view page](https://test-amy.carpentries.org/recruitment/processes/), the admin user can filter by assigned to and by status (open/closed).
We would like to see the following additional options:
* Filter by Online/in-person
* Sort by Priority ascending and descending
* Sort by date ascending and descending
* Filter by curriculum
* Filter by country
</issue>
<code>
[start of amy/recruitment/filters.py]
1 import django_filters
2
3 from workshops.fields import ModelSelect2Widget
4 from workshops.filters import AMYFilterSet
5 from workshops.forms import SELECT2_SIDEBAR
6 from workshops.models import Person
7
8 from .models import InstructorRecruitment
9
10
11 class InstructorRecruitmentFilter(AMYFilterSet):
12 assigned_to = django_filters.ModelChoiceFilter(
13 queryset=Person.objects.all(),
14 widget=ModelSelect2Widget(data_view="admin-lookup", attrs=SELECT2_SIDEBAR),
15 )
16
17 class Meta:
18 model = InstructorRecruitment
19 fields = [
20 "assigned_to",
21 "status",
22 ]
23
[end of amy/recruitment/filters.py]
[start of amy/dashboard/filters.py]
1 from django.db.models import F, QuerySet
2 from django.forms import widgets
3 import django_filters as filters
4
5 from recruitment.models import InstructorRecruitment
6 from workshops.filters import AMYFilterSet
7
8
9 class UpcomingTeachingOpportunitiesFilter(AMYFilterSet):
10 status = filters.ChoiceFilter(
11 choices=(
12 ("online", "Online only"),
13 ("inperson", "Inperson only"),
14 ),
15 empty_label="Any",
16 label="Online/inperson",
17 method="filter_status",
18 )
19
20 only_applied_to = filters.BooleanFilter(
21 label="Show only workshops I have applied to",
22 method="filter_application_only",
23 widget=widgets.CheckboxInput,
24 )
25
26 order_by = filters.OrderingFilter(
27 fields=("event__start",),
28 choices=(
29 ("-calculated_priority", "Priority"),
30 ("event__start", "Event start"),
31 ("-event__start", "Event start (descending)"),
32 ("proximity", "Closer to my airport"),
33 ("-proximity", "Further away from my airport"),
34 ),
35 method="filter_order_by",
36 )
37
38 class Meta:
39 model = InstructorRecruitment
40 fields = [
41 "status",
42 ]
43
44 def filter_status(self, queryset: QuerySet, name: str, value: str) -> QuerySet:
45 """Filter recruitments based on the event (online/inperson) status."""
46 if value == "online":
47 return queryset.filter(event__tags__name="online")
48 elif value == "inperson":
49 return queryset.exclude(event__tags__name="online")
50 else:
51 return queryset
52
53 def filter_order_by(self, queryset: QuerySet, name: str, values: list) -> QuerySet:
54 """Order entries by proximity to user's airport."""
55 try:
56 latitude: float = self.request.user.airport.latitude
57 except AttributeError:
58 latitude = 0.0
59
60 try:
61 longitude: float = self.request.user.airport.longitude
62 except AttributeError:
63 longitude = 0.0
64
65 # `0.0` is neutral element for this equation, so even if user doesn't have the
66 # airport specified, the sorting should still work
67 distance = (F("event__latitude") - latitude) ** 2.0 + (
68 F("event__longitude") - longitude
69 ) ** 2.0
70
71 if values == ["proximity"]:
72 return queryset.annotate(distance=distance).order_by("distance")
73 elif values == ["-proximity"]:
74 return queryset.annotate(distance=distance).order_by("-distance")
75 else:
76 return queryset.order_by(*values)
77
78 def filter_application_only(
79 self, queryset: QuerySet, name: str, value: bool
80 ) -> QuerySet:
81 if value:
82 return queryset.filter(signups__person=self.request.user)
83
84 return queryset
85
[end of amy/dashboard/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/amy/dashboard/filters.py b/amy/dashboard/filters.py
--- a/amy/dashboard/filters.py
+++ b/amy/dashboard/filters.py
@@ -3,7 +3,9 @@
import django_filters as filters
from recruitment.models import InstructorRecruitment
-from workshops.filters import AMYFilterSet
+from workshops.fields import Select2MultipleWidget
+from workshops.filters import AllCountriesMultipleFilter, AMYFilterSet
+from workshops.models import Curriculum
class UpcomingTeachingOpportunitiesFilter(AMYFilterSet):
@@ -23,6 +25,17 @@
widget=widgets.CheckboxInput,
)
+ country = AllCountriesMultipleFilter(
+ field_name="event__country", widget=Select2MultipleWidget
+ )
+
+ curricula = filters.ModelMultipleChoiceFilter(
+ field_name="event__curricula",
+ queryset=Curriculum.objects.all(),
+ label="Curriculum",
+ widget=Select2MultipleWidget(),
+ )
+
order_by = filters.OrderingFilter(
fields=("event__start",),
choices=(
diff --git a/amy/recruitment/filters.py b/amy/recruitment/filters.py
--- a/amy/recruitment/filters.py
+++ b/amy/recruitment/filters.py
@@ -1,22 +1,68 @@
-import django_filters
+from django.db.models import QuerySet
+import django_filters as filters
-from workshops.fields import ModelSelect2Widget
-from workshops.filters import AMYFilterSet
+from workshops.fields import ModelSelect2Widget, Select2MultipleWidget
+from workshops.filters import AllCountriesMultipleFilter, AMYFilterSet
from workshops.forms import SELECT2_SIDEBAR
-from workshops.models import Person
+from workshops.models import Curriculum, Person
from .models import InstructorRecruitment
class InstructorRecruitmentFilter(AMYFilterSet):
- assigned_to = django_filters.ModelChoiceFilter(
+ assigned_to = filters.ModelChoiceFilter(
queryset=Person.objects.all(),
widget=ModelSelect2Widget(data_view="admin-lookup", attrs=SELECT2_SIDEBAR),
)
+ online_inperson = filters.ChoiceFilter(
+ choices=(
+ ("online", "Online only"),
+ ("inperson", "Inperson only"),
+ ),
+ empty_label="Any",
+ label="Online/inperson",
+ method="filter_online_inperson",
+ )
+
+ country = AllCountriesMultipleFilter(
+ field_name="event__country", widget=Select2MultipleWidget
+ )
+
+ curricula = filters.ModelMultipleChoiceFilter(
+ field_name="event__curricula",
+ queryset=Curriculum.objects.all(),
+ label="Curriculum",
+ widget=Select2MultipleWidget(),
+ )
+
+ order_by = filters.OrderingFilter(
+ fields=("event__start",),
+ choices=(
+ ("-calculated_priority", "Priority"),
+ ("event__start", "Event start"),
+ ("-event__start", "Event start (descending)"),
+ ),
+ method="filter_order_by",
+ )
+
class Meta:
model = InstructorRecruitment
fields = [
"assigned_to",
"status",
]
+
+ def filter_online_inperson(
+ self, queryset: QuerySet, name: str, value: str
+ ) -> QuerySet:
+ """Filter recruitments based on the event (online/inperson) status."""
+ if value == "online":
+ return queryset.filter(event__tags__name="online")
+ elif value == "inperson":
+ return queryset.exclude(event__tags__name="online")
+ else:
+ return queryset
+
+ def filter_order_by(self, queryset: QuerySet, name: str, values: list) -> QuerySet:
+ return queryset.order_by(*values)
| {"golden_diff": "diff --git a/amy/dashboard/filters.py b/amy/dashboard/filters.py\n--- a/amy/dashboard/filters.py\n+++ b/amy/dashboard/filters.py\n@@ -3,7 +3,9 @@\n import django_filters as filters\n \n from recruitment.models import InstructorRecruitment\n-from workshops.filters import AMYFilterSet\n+from workshops.fields import Select2MultipleWidget\n+from workshops.filters import AllCountriesMultipleFilter, AMYFilterSet\n+from workshops.models import Curriculum\n \n \n class UpcomingTeachingOpportunitiesFilter(AMYFilterSet):\n@@ -23,6 +25,17 @@\n widget=widgets.CheckboxInput,\n )\n \n+ country = AllCountriesMultipleFilter(\n+ field_name=\"event__country\", widget=Select2MultipleWidget\n+ )\n+\n+ curricula = filters.ModelMultipleChoiceFilter(\n+ field_name=\"event__curricula\",\n+ queryset=Curriculum.objects.all(),\n+ label=\"Curriculum\",\n+ widget=Select2MultipleWidget(),\n+ )\n+\n order_by = filters.OrderingFilter(\n fields=(\"event__start\",),\n choices=(\ndiff --git a/amy/recruitment/filters.py b/amy/recruitment/filters.py\n--- a/amy/recruitment/filters.py\n+++ b/amy/recruitment/filters.py\n@@ -1,22 +1,68 @@\n-import django_filters\n+from django.db.models import QuerySet\n+import django_filters as filters\n \n-from workshops.fields import ModelSelect2Widget\n-from workshops.filters import AMYFilterSet\n+from workshops.fields import ModelSelect2Widget, Select2MultipleWidget\n+from workshops.filters import AllCountriesMultipleFilter, AMYFilterSet\n from workshops.forms import SELECT2_SIDEBAR\n-from workshops.models import Person\n+from workshops.models import Curriculum, Person\n \n from .models import InstructorRecruitment\n \n \n class InstructorRecruitmentFilter(AMYFilterSet):\n- assigned_to = django_filters.ModelChoiceFilter(\n+ assigned_to = filters.ModelChoiceFilter(\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view=\"admin-lookup\", attrs=SELECT2_SIDEBAR),\n )\n \n+ online_inperson = filters.ChoiceFilter(\n+ choices=(\n+ (\"online\", \"Online only\"),\n+ (\"inperson\", \"Inperson only\"),\n+ ),\n+ empty_label=\"Any\",\n+ label=\"Online/inperson\",\n+ method=\"filter_online_inperson\",\n+ )\n+\n+ country = AllCountriesMultipleFilter(\n+ field_name=\"event__country\", widget=Select2MultipleWidget\n+ )\n+\n+ curricula = filters.ModelMultipleChoiceFilter(\n+ field_name=\"event__curricula\",\n+ queryset=Curriculum.objects.all(),\n+ label=\"Curriculum\",\n+ widget=Select2MultipleWidget(),\n+ )\n+\n+ order_by = filters.OrderingFilter(\n+ fields=(\"event__start\",),\n+ choices=(\n+ (\"-calculated_priority\", \"Priority\"),\n+ (\"event__start\", \"Event start\"),\n+ (\"-event__start\", \"Event start (descending)\"),\n+ ),\n+ method=\"filter_order_by\",\n+ )\n+\n class Meta:\n model = InstructorRecruitment\n fields = [\n \"assigned_to\",\n \"status\",\n ]\n+\n+ def filter_online_inperson(\n+ self, queryset: QuerySet, name: str, value: str\n+ ) -> QuerySet:\n+ \"\"\"Filter recruitments based on the event (online/inperson) status.\"\"\"\n+ if value == \"online\":\n+ return queryset.filter(event__tags__name=\"online\")\n+ elif value == \"inperson\":\n+ return queryset.exclude(event__tags__name=\"online\")\n+ else:\n+ return queryset\n+\n+ def filter_order_by(self, queryset: QuerySet, name: str, values: list) -> QuerySet:\n+ return queryset.order_by(*values)\n", "issue": "Instructor Selection: Additional filter/sort options for Admin view \nOn the Instructor Selection [admin view page](https://test-amy.carpentries.org/recruitment/processes/), the admin user can filter 
by assigned to and by status (open/closed).\r\n\r\nWe would like to see the following additional options:\r\n\r\n* Filter by Online/in-person\r\n* Sort by Priority ascending and descending\r\n* Sort by date ascending and descending \r\n* Filter by curriculum\r\n* Filter by country \r\n\n", "before_files": [{"content": "import django_filters\n\nfrom workshops.fields import ModelSelect2Widget\nfrom workshops.filters import AMYFilterSet\nfrom workshops.forms import SELECT2_SIDEBAR\nfrom workshops.models import Person\n\nfrom .models import InstructorRecruitment\n\n\nclass InstructorRecruitmentFilter(AMYFilterSet):\n assigned_to = django_filters.ModelChoiceFilter(\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view=\"admin-lookup\", attrs=SELECT2_SIDEBAR),\n )\n\n class Meta:\n model = InstructorRecruitment\n fields = [\n \"assigned_to\",\n \"status\",\n ]\n", "path": "amy/recruitment/filters.py"}, {"content": "from django.db.models import F, QuerySet\nfrom django.forms import widgets\nimport django_filters as filters\n\nfrom recruitment.models import InstructorRecruitment\nfrom workshops.filters import AMYFilterSet\n\n\nclass UpcomingTeachingOpportunitiesFilter(AMYFilterSet):\n status = filters.ChoiceFilter(\n choices=(\n (\"online\", \"Online only\"),\n (\"inperson\", \"Inperson only\"),\n ),\n empty_label=\"Any\",\n label=\"Online/inperson\",\n method=\"filter_status\",\n )\n\n only_applied_to = filters.BooleanFilter(\n label=\"Show only workshops I have applied to\",\n method=\"filter_application_only\",\n widget=widgets.CheckboxInput,\n )\n\n order_by = filters.OrderingFilter(\n fields=(\"event__start\",),\n choices=(\n (\"-calculated_priority\", \"Priority\"),\n (\"event__start\", \"Event start\"),\n (\"-event__start\", \"Event start (descending)\"),\n (\"proximity\", \"Closer to my airport\"),\n (\"-proximity\", \"Further away from my airport\"),\n ),\n method=\"filter_order_by\",\n )\n\n class Meta:\n model = InstructorRecruitment\n fields = [\n \"status\",\n ]\n\n def filter_status(self, queryset: QuerySet, name: str, value: str) -> QuerySet:\n \"\"\"Filter recruitments based on the event (online/inperson) status.\"\"\"\n if value == \"online\":\n return queryset.filter(event__tags__name=\"online\")\n elif value == \"inperson\":\n return queryset.exclude(event__tags__name=\"online\")\n else:\n return queryset\n\n def filter_order_by(self, queryset: QuerySet, name: str, values: list) -> QuerySet:\n \"\"\"Order entries by proximity to user's airport.\"\"\"\n try:\n latitude: float = self.request.user.airport.latitude\n except AttributeError:\n latitude = 0.0\n\n try:\n longitude: float = self.request.user.airport.longitude\n except AttributeError:\n longitude = 0.0\n\n # `0.0` is neutral element for this equation, so even if user doesn't have the\n # airport specified, the sorting should still work\n distance = (F(\"event__latitude\") - latitude) ** 2.0 + (\n F(\"event__longitude\") - longitude\n ) ** 2.0\n\n if values == [\"proximity\"]:\n return queryset.annotate(distance=distance).order_by(\"distance\")\n elif values == [\"-proximity\"]:\n return queryset.annotate(distance=distance).order_by(\"-distance\")\n else:\n return queryset.order_by(*values)\n\n def filter_application_only(\n self, queryset: QuerySet, name: str, value: bool\n ) -> QuerySet:\n if value:\n return queryset.filter(signups__person=self.request.user)\n\n return queryset\n", "path": "amy/dashboard/filters.py"}]} | 1,592 | 838 |
gh_patches_debug_16815 | rasdani/github-patches | git_diff | pypa__cibuildwheel-701 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Changing the default branch to `main`
This is just a heads up: I'm planning to change the default branch on this repo to `main` this week, let's say Wednesday 26th. GitHub have a tool to change it over and update PRs to target the new branch, but you might have to update it on local checkouts and forks. It shouldn't be a big issue though; this is what [GitHub say](https://github.com/github/renaming#renaming-existing-branches) about it:
> Renaming a branch will:
>
> - Re-target any open pull requests
> - Update any draft releases based on the branch
> - Move any branch protection rules that explicitly reference the old name
> - Update the branch used to build GitHub Pages, if applicable
> - Show a notice to repository contributors, maintainers, and admins on the repository homepage with instructions to update local copies of the repository
> - Show a notice to contributors who git push to the old branch
> - Redirect web requests for the old branch name to the new branch name
> - Return a "Moved Permanently" response in API requests for the old branch name
---
Checklist for the switch:
- [x] Use the Github tool to change it over
- [x] Find/replace `master` to `main` in CI configs, docs, scripts, example code, etc
- [x] Change default branch on Readthedocs
</issue>
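For scripts like the one below that hard-code `origin/master`, one option (a sketch only, not necessarily what the maintainers did) is to resolve the remote's default branch at run time. `default_branch` is a hypothetical helper and assumes the `origin` remote's HEAD reference is set:

```
import subprocess

def default_branch(remote: str = "origin") -> str:
    # 'git symbolic-ref refs/remotes/origin/HEAD' prints e.g. 'refs/remotes/origin/main'.
    ref = subprocess.run(
        ["git", "symbolic-ref", f"refs/remotes/{remote}/HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return ref.rsplit("/", 1)[-1]
```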
<code>
[start of bin/make_dependency_update_pr.py]
1 #!/usr/bin/env python3
2
3 from __future__ import annotations
4
5 import os
6 import sys
7 import textwrap
8 import time
9 from pathlib import Path
10 from subprocess import run
11
12 import click
13
14
15 def shell(cmd, **kwargs):
16 return run([cmd], shell=True, **kwargs)
17
18
19 def git_repo_has_changes():
20 unstaged_changes = shell("git diff-index --quiet HEAD --").returncode != 0
21 staged_changes = shell("git diff-index --quiet --cached HEAD --").returncode != 0
22 return unstaged_changes or staged_changes
23
24
25 @click.command()
26 def main():
27 project_root = Path(__file__).parent / ".."
28 os.chdir(project_root)
29
30 if git_repo_has_changes():
31 print("Your git repo has uncommitted changes. Commit or stash before continuing.")
32 sys.exit(1)
33
34 previous_branch = shell(
35 "git rev-parse --abbrev-ref HEAD", check=True, capture_output=True, encoding="utf8"
36 ).stdout.strip()
37
38 shell("git fetch origin", check=True)
39
40 timestamp = time.strftime("%Y-%m-%dT%H-%M-%S", time.gmtime())
41 branch_name = f"update-constraints-{timestamp}"
42
43 shell(f"git checkout -b {branch_name} origin/master", check=True)
44
45 try:
46 shell("bin/update_dependencies.py", check=True)
47
48 if not git_repo_has_changes():
49 print("Done: no constraint updates required.")
50 return
51
52 shell('git commit -a -m "Update dependencies"', check=True)
53 body = textwrap.dedent(
54 f"""
55 Update the versions of our dependencies.
56
57 PR generated by `{os.path.basename(__file__)}`.
58 """
59 )
60 run(
61 [
62 "gh",
63 "pr",
64 "create",
65 "--repo=pypa/cibuildwheel",
66 "--base=master",
67 "--title=Update dependencies",
68 f"--body='{body}'",
69 ],
70 check=True,
71 )
72
73 print("Done.")
74 finally:
75 # remove any local changes
76 shell("git checkout -- .")
77 shell(f"git checkout {previous_branch}", check=True)
78 shell(f"git branch -D --force {branch_name}", check=True)
79
80
81 if __name__ == "__main__":
82 main.main(standalone_mode=True)
83
[end of bin/make_dependency_update_pr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bin/make_dependency_update_pr.py b/bin/make_dependency_update_pr.py
--- a/bin/make_dependency_update_pr.py
+++ b/bin/make_dependency_update_pr.py
@@ -40,7 +40,7 @@
timestamp = time.strftime("%Y-%m-%dT%H-%M-%S", time.gmtime())
branch_name = f"update-constraints-{timestamp}"
- shell(f"git checkout -b {branch_name} origin/master", check=True)
+ shell(f"git checkout -b {branch_name} origin/main", check=True)
try:
shell("bin/update_dependencies.py", check=True)
@@ -63,7 +63,7 @@
"pr",
"create",
"--repo=pypa/cibuildwheel",
- "--base=master",
+ "--base=main",
"--title=Update dependencies",
f"--body='{body}'",
],
| {"golden_diff": "diff --git a/bin/make_dependency_update_pr.py b/bin/make_dependency_update_pr.py\n--- a/bin/make_dependency_update_pr.py\n+++ b/bin/make_dependency_update_pr.py\n@@ -40,7 +40,7 @@\n timestamp = time.strftime(\"%Y-%m-%dT%H-%M-%S\", time.gmtime())\n branch_name = f\"update-constraints-{timestamp}\"\n \n- shell(f\"git checkout -b {branch_name} origin/master\", check=True)\n+ shell(f\"git checkout -b {branch_name} origin/main\", check=True)\n \n try:\n shell(\"bin/update_dependencies.py\", check=True)\n@@ -63,7 +63,7 @@\n \"pr\",\n \"create\",\n \"--repo=pypa/cibuildwheel\",\n- \"--base=master\",\n+ \"--base=main\",\n \"--title=Update dependencies\",\n f\"--body='{body}'\",\n ],\n", "issue": "Changing the default branch to `main`\nThis is just a heads up, I'm planning to change the default branch on this repo to `main` this week, let's say Wednesday 26th. Github have a tool to change it over, and update PRs to target the new branch, but you might have to update it on local checkouts and forks. Shouldn't be a big issue though, this is what [Github say](https://github.com/github/renaming#renaming-existing-branches) about it:\r\n\r\n> Renaming a branch will:\r\n> \r\n> - Re-target any open pull requests\r\n> - Update any draft releases based on the branch\r\n> - Move any branch protection rules that explicitly reference the old name\r\n> - Update the branch used to build GitHub Pages, if applicable\r\n> - Show a notice to repository contributors, maintainers, and admins on the repository homepage with instructions to update local copies of the repository\r\n> - Show a notice to contributors who git push to the old branch\r\n> - Redirect web requests for the old branch name to the new branch name\r\n> - Return a \"Moved Permanently\" response in API requests for the old branch name\r\n\r\n---\r\n\r\nChecklist for the switch:\r\n\r\n- [x] Use the Github tool to change it over\r\n- [x] Find/replace `master` to `main` in CI configs, docs, scripts, example code, etc\r\n- [x] Change default branch on Readthedocs\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom __future__ import annotations\n\nimport os\nimport sys\nimport textwrap\nimport time\nfrom pathlib import Path\nfrom subprocess import run\n\nimport click\n\n\ndef shell(cmd, **kwargs):\n return run([cmd], shell=True, **kwargs)\n\n\ndef git_repo_has_changes():\n unstaged_changes = shell(\"git diff-index --quiet HEAD --\").returncode != 0\n staged_changes = shell(\"git diff-index --quiet --cached HEAD --\").returncode != 0\n return unstaged_changes or staged_changes\n\n\[email protected]()\ndef main():\n project_root = Path(__file__).parent / \"..\"\n os.chdir(project_root)\n\n if git_repo_has_changes():\n print(\"Your git repo has uncommitted changes. 
Commit or stash before continuing.\")\n sys.exit(1)\n\n previous_branch = shell(\n \"git rev-parse --abbrev-ref HEAD\", check=True, capture_output=True, encoding=\"utf8\"\n ).stdout.strip()\n\n shell(\"git fetch origin\", check=True)\n\n timestamp = time.strftime(\"%Y-%m-%dT%H-%M-%S\", time.gmtime())\n branch_name = f\"update-constraints-{timestamp}\"\n\n shell(f\"git checkout -b {branch_name} origin/master\", check=True)\n\n try:\n shell(\"bin/update_dependencies.py\", check=True)\n\n if not git_repo_has_changes():\n print(\"Done: no constraint updates required.\")\n return\n\n shell('git commit -a -m \"Update dependencies\"', check=True)\n body = textwrap.dedent(\n f\"\"\"\n Update the versions of our dependencies.\n\n PR generated by `{os.path.basename(__file__)}`.\n \"\"\"\n )\n run(\n [\n \"gh\",\n \"pr\",\n \"create\",\n \"--repo=pypa/cibuildwheel\",\n \"--base=master\",\n \"--title=Update dependencies\",\n f\"--body='{body}'\",\n ],\n check=True,\n )\n\n print(\"Done.\")\n finally:\n # remove any local changes\n shell(\"git checkout -- .\")\n shell(f\"git checkout {previous_branch}\", check=True)\n shell(f\"git branch -D --force {branch_name}\", check=True)\n\n\nif __name__ == \"__main__\":\n main.main(standalone_mode=True)\n", "path": "bin/make_dependency_update_pr.py"}]} | 1,506 | 202 |
gh_patches_debug_14764 | rasdani/github-patches | git_diff | kserve__kserve-882 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing requirements.txt in the PyPI source code
**What steps did you take and what happened:**
The requirements.txt file is missing in the source code on PyPI, so setuptools will not work.
```
Executing setuptoolsBuildPhase
Traceback (most recent call last):
File "nix_run_setup", line 8, in <module>
exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))
File "setup.py", line 23, in <module>
with open('requirements.txt') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
builder for '/nix/store/z8sh0v4cji9aq9v02865273xvmhcwzgh-python3.8-kfserving-0.3.0.1.drv' failed with exit code 1
cannot build derivation '/nix/store/75ihn4avq52qdpavs0s8c1y0nj0wjfdx-python3-3.8.2-env.drv': 1 dependencies couldn't be built
```
**What did you expect to happen:**
requirements.txt in the tar.gz archive
**Environment:**
- Istio Version:
- Knative Version:
- KFServing Version: 0.3.0.1
- Kubeflow version:
- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]
- Minikube version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`): NixOS 20.03 (Markhor) x86_64
</issue>
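Independent of the fix the maintainers chose, one way to avoid this class of failure is to ship `requirements.txt` in the sdist (for example via `MANIFEST.in` or `package_data`) or to read it defensively in `setup.py`. A minimal defensive sketch, where the empty-list fallback is an assumption rather than kfserving's actual behaviour:

```
import pathlib

# Read requirements.txt if it was packaged; fall back to an empty list otherwise.
req_file = pathlib.Path(__file__).parent / "requirements.txt"
REQUIRES = req_file.read_text().splitlines() if req_file.exists() else []
```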
<code>
[start of python/kfserving/setup.py]
1 # Copyright 2020 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import setuptools
16
17 TESTS_REQUIRES = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22
23 with open('requirements.txt') as f:
24 REQUIRES = f.readlines()
25
26 setuptools.setup(
27 name='kfserving',
28 version='0.3.0.1',
29 author="Kubeflow Authors",
30 author_email='[email protected], [email protected]',
31 license="Apache License Version 2.0",
32 url="https://github.com/kubeflow/kfserving/python/kfserving",
33 description="KFServing Python SDK",
34 long_description="Python SDK for KFServing Server and Client.",
35 python_requires='>=3.6',
36 packages=[
37 'kfserving',
38 'kfserving.api',
39 'kfserving.constants',
40 'kfserving.models',
41 'kfserving.handlers',
42 'kfserving.utils',
43 ],
44 package_data={},
45 include_package_data=False,
46 zip_safe=False,
47 classifiers=[
48 'Intended Audience :: Developers',
49 'Intended Audience :: Education',
50 'Intended Audience :: Science/Research',
51 'Programming Language :: Python :: 3',
52 'Programming Language :: Python :: 3.6',
53 'Programming Language :: Python :: 3.7',
54 "License :: OSI Approved :: Apache Software License",
55 "Operating System :: OS Independent",
56 'Topic :: Scientific/Engineering',
57 'Topic :: Scientific/Engineering :: Artificial Intelligence',
58 'Topic :: Software Development',
59 'Topic :: Software Development :: Libraries',
60 'Topic :: Software Development :: Libraries :: Python Modules',
61 ],
62 install_requires=REQUIRES,
63 tests_require=TESTS_REQUIRES,
64 extras_require={'test': TESTS_REQUIRES}
65 )
66
[end of python/kfserving/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kfserving/setup.py b/python/kfserving/setup.py
--- a/python/kfserving/setup.py
+++ b/python/kfserving/setup.py
@@ -25,7 +25,7 @@
setuptools.setup(
name='kfserving',
- version='0.3.0.1',
+ version='0.3.0.2',
author="Kubeflow Authors",
author_email='[email protected], [email protected]',
license="Apache License Version 2.0",
@@ -41,8 +41,8 @@
'kfserving.handlers',
'kfserving.utils',
],
- package_data={},
- include_package_data=False,
+ package_data={'': ['requirements.txt']},
+ include_package_data=True,
zip_safe=False,
classifiers=[
'Intended Audience :: Developers',
| {"golden_diff": "diff --git a/python/kfserving/setup.py b/python/kfserving/setup.py\n--- a/python/kfserving/setup.py\n+++ b/python/kfserving/setup.py\n@@ -25,7 +25,7 @@\n \n setuptools.setup(\n name='kfserving',\n- version='0.3.0.1',\n+ version='0.3.0.2',\n author=\"Kubeflow Authors\",\n author_email='[email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n@@ -41,8 +41,8 @@\n 'kfserving.handlers',\n 'kfserving.utils',\n ],\n- package_data={},\n- include_package_data=False,\n+ package_data={'': ['requirements.txt']},\n+ include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n", "issue": "Missing requirements.txt in the Pypi source code\n**What steps did you take and what happened:**\r\nThe requirements.txt file is missing in the source code on Pypi so setuptools will not work.\r\n\r\n```\r\nExecuting setuptoolsBuildPhase\r\nTraceback (most recent call last):\r\n File \"nix_run_setup\", line 8, in <module>\r\n exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\\\\r\\\\n', '\\\\n'), __file__, 'exec'))\r\n File \"setup.py\", line 23, in <module>\r\n with open('requirements.txt') as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'\r\nbuilder for '/nix/store/z8sh0v4cji9aq9v02865273xvmhcwzgh-python3.8-kfserving-0.3.0.1.drv' failed with exit code 1\r\ncannot build derivation '/nix/store/75ihn4avq52qdpavs0s8c1y0nj0wjfdx-python3-3.8.2-env.drv': 1 dependencies couldn't be built\r\n```\r\n\r\n**What did you expect to happen:**\r\nrequirements.txt in the tar.gz archive\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version: 0.3.0.1\r\n- Kubeflow version:\r\n- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. 
from `/etc/os-release`): NixOS 20.03 (Markhor) x86_64\r\n\n", "before_files": [{"content": "# Copyright 2020 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nTESTS_REQUIRES = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open('requirements.txt') as f:\n REQUIRES = f.readlines()\n\nsetuptools.setup(\n name='kfserving',\n version='0.3.0.1',\n author=\"Kubeflow Authors\",\n author_email='[email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kubeflow/kfserving/python/kfserving\",\n description=\"KFServing Python SDK\",\n long_description=\"Python SDK for KFServing Server and Client.\",\n python_requires='>=3.6',\n packages=[\n 'kfserving',\n 'kfserving.api',\n 'kfserving.constants',\n 'kfserving.models',\n 'kfserving.handlers',\n 'kfserving.utils',\n ],\n package_data={},\n include_package_data=False,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRES,\n extras_require={'test': TESTS_REQUIRES}\n)\n", "path": "python/kfserving/setup.py"}]} | 1,566 | 198 |
gh_patches_debug_22288 | rasdani/github-patches | git_diff | dask__distributed-8381 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dashboards fail with 500 status code when using `bokeh<3.3.0`
When using the latest `main` with `bokeh<3.3.0`, the dashboards fail with a 500 status code.
Scheduler traceback:
```
2023-11-30 18:00:07,300 - tornado.application - ERROR - Uncaught exception GET /status (192.168.178.45)
HTTPServerRequest(protocol='http', host='192.168.178.45:8787', method='GET', uri='/status', version='HTTP/1.1', remote_ip='192.168.178.45')
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/tornado/web.py", line 1786, in _execute
result = await result
^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/bokeh/server/views/doc_handler.py", line 57, in get
resources=self.application.resources(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hendrikmakait/projects/dask/distributed/distributed/dashboard/core.py", line 37, in resources
return super().resources(absolute_url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/bokeh/server/tornado.py", line 621, in resources
return Resources(mode="server", root_url=root_url, path_versioner=StaticHandler.append_version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/bokeh/resources.py", line 377, in __init__
if root_url and not root_url.endswith("/"):
^^^^^^^^^^^^^^^^^
AttributeError: 'bool' object has no attribute 'endswith'
```
git bisect blames #8347
</issue>
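The traceback indicates that on pre-3.3 Bokeh the overridden `resources()` ends up feeding `absolute_url` into a parameter the older release treats as `root_url`. One way to express the compatibility split is an explicit version gate; a minimal sketch, where the 3.3.0 cutoff is inferred from this report rather than from Bokeh's changelog:

```
from packaging.version import parse as parse_version
import bokeh

BOKEH_VERSION = parse_version(bokeh.__version__)
# Only forward absolute_url where the BokehTornado.resources() signature accepts it.
RESOURCES_TAKES_ABSOLUTE_URL = BOKEH_VERSION >= parse_version("3.3.0")
```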
<code>
[start of distributed/dashboard/core.py]
1 from __future__ import annotations
2
3 import functools
4 import warnings
5
6 from bokeh.application import Application
7 from bokeh.application.handlers.function import FunctionHandler
8 from bokeh.resources import Resources
9 from bokeh.server.server import BokehTornado
10 from bokeh.server.util import create_hosts_allowlist
11
12 import dask
13
14 from distributed.dashboard.utils import BOKEH_VERSION
15 from distributed.versions import BOKEH_REQUIREMENT
16
17 # Set `prereleases=True` to allow for use with dev versions of `bokeh`
18 if not BOKEH_REQUIREMENT.specifier.contains(BOKEH_VERSION, prereleases=True):
19 warnings.warn(
20 f"\nDask needs {BOKEH_REQUIREMENT} for the dashboard."
21 f"\nYou have bokeh={BOKEH_VERSION}."
22 "\nContinuing without the dashboard."
23 )
24 raise ImportError(
25 f"Dask needs {BOKEH_REQUIREMENT} for the dashboard, not bokeh={BOKEH_VERSION}"
26 )
27
28
29 if BOKEH_VERSION.major < 3:
30 from bokeh.models import Panel as TabPanel # noqa: F401
31 else:
32 from bokeh.models import TabPanel # noqa: F401
33
34
35 class DaskBokehTornado(BokehTornado):
36 def resources(self, absolute_url: str | bool | None = True) -> Resources:
37 return super().resources(absolute_url)
38
39
40 def BokehApplication(applications, server, prefix="/", template_variables=None):
41 template_variables = template_variables or {}
42 prefix = "/" + prefix.strip("/") + "/" if prefix else "/"
43
44 extra = {"prefix": prefix, **template_variables}
45
46 funcs = {k: functools.partial(v, server, extra) for k, v in applications.items()}
47 apps = {k: Application(FunctionHandler(v)) for k, v in funcs.items()}
48
49 kwargs = dask.config.get("distributed.scheduler.dashboard.bokeh-application").copy()
50 extra_websocket_origins = create_hosts_allowlist(
51 kwargs.pop("allow_websocket_origin"), server.http_server.port
52 )
53
54 return DaskBokehTornado(
55 apps,
56 prefix=prefix,
57 use_index=False,
58 extra_websocket_origins=extra_websocket_origins,
59 absolute_url="",
60 **kwargs,
61 )
62
[end of distributed/dashboard/core.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/distributed/dashboard/core.py b/distributed/dashboard/core.py
--- a/distributed/dashboard/core.py
+++ b/distributed/dashboard/core.py
@@ -6,8 +6,8 @@
from bokeh.application import Application
from bokeh.application.handlers.function import FunctionHandler
from bokeh.resources import Resources
-from bokeh.server.server import BokehTornado
from bokeh.server.util import create_hosts_allowlist
+from packaging.version import parse as parse_version
import dask
@@ -32,9 +32,14 @@
from bokeh.models import TabPanel # noqa: F401
-class DaskBokehTornado(BokehTornado):
- def resources(self, absolute_url: str | bool | None = True) -> Resources:
- return super().resources(absolute_url)
+if BOKEH_VERSION < parse_version("3.3.0"):
+ from bokeh.server.server import BokehTornado as DaskBokehTornado
+else:
+ from bokeh.server.server import BokehTornado
+
+ class DaskBokehTornado(BokehTornado): # type: ignore[no-redef]
+ def resources(self, absolute_url: str | bool | None = True) -> Resources:
+ return super().resources(absolute_url)
def BokehApplication(applications, server, prefix="/", template_variables=None):
| {"golden_diff": "diff --git a/distributed/dashboard/core.py b/distributed/dashboard/core.py\n--- a/distributed/dashboard/core.py\n+++ b/distributed/dashboard/core.py\n@@ -6,8 +6,8 @@\n from bokeh.application import Application\n from bokeh.application.handlers.function import FunctionHandler\n from bokeh.resources import Resources\n-from bokeh.server.server import BokehTornado\n from bokeh.server.util import create_hosts_allowlist\n+from packaging.version import parse as parse_version\n \n import dask\n \n@@ -32,9 +32,14 @@\n from bokeh.models import TabPanel # noqa: F401\n \n \n-class DaskBokehTornado(BokehTornado):\n- def resources(self, absolute_url: str | bool | None = True) -> Resources:\n- return super().resources(absolute_url)\n+if BOKEH_VERSION < parse_version(\"3.3.0\"):\n+ from bokeh.server.server import BokehTornado as DaskBokehTornado\n+else:\n+ from bokeh.server.server import BokehTornado\n+\n+ class DaskBokehTornado(BokehTornado): # type: ignore[no-redef]\n+ def resources(self, absolute_url: str | bool | None = True) -> Resources:\n+ return super().resources(absolute_url)\n \n \n def BokehApplication(applications, server, prefix=\"/\", template_variables=None):\n", "issue": "Dashboards fail with 500 status code when using `bokeh<3.3.0`\nWhen using the latest `main` with `bokeh<3.3.0`, the dashboards fail with a 500 status code.\r\n\r\nScheduler traceback:\r\n```\r\n2023-11-30 18:00:07,300 - tornado.application - ERROR - Uncaught exception GET /status (192.168.178.45)\r\nHTTPServerRequest(protocol='http', host='192.168.178.45:8787', method='GET', uri='/status', version='HTTP/1.1', remote_ip='192.168.178.45')\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/tornado/web.py\", line 1786, in _execute\r\n result = await result\r\n ^^^^^^^^^^^^\r\n File \"/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/bokeh/server/views/doc_handler.py\", line 57, in get\r\n resources=self.application.resources(),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/hendrikmakait/projects/dask/distributed/distributed/dashboard/core.py\", line 37, in resources\r\n return super().resources(absolute_url)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/bokeh/server/tornado.py\", line 621, in resources\r\n return Resources(mode=\"server\", root_url=root_url, path_versioner=StaticHandler.append_version)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/Caskroom/mambaforge/base/envs/dask-distributed/lib/python3.11/site-packages/bokeh/resources.py\", line 377, in __init__\r\n if root_url and not root_url.endswith(\"/\"):\r\n ^^^^^^^^^^^^^^^^^\r\nAttributeError: 'bool' object has no attribute 'endswith'\r\n```\r\n\r\ngit bisect blames #8347\n", "before_files": [{"content": "from __future__ import annotations\n\nimport functools\nimport warnings\n\nfrom bokeh.application import Application\nfrom bokeh.application.handlers.function import FunctionHandler\nfrom bokeh.resources import Resources\nfrom bokeh.server.server import BokehTornado\nfrom bokeh.server.util import create_hosts_allowlist\n\nimport dask\n\nfrom distributed.dashboard.utils import BOKEH_VERSION\nfrom distributed.versions import BOKEH_REQUIREMENT\n\n# Set `prereleases=True` to allow for use with dev versions of `bokeh`\nif not 
BOKEH_REQUIREMENT.specifier.contains(BOKEH_VERSION, prereleases=True):\n warnings.warn(\n f\"\\nDask needs {BOKEH_REQUIREMENT} for the dashboard.\"\n f\"\\nYou have bokeh={BOKEH_VERSION}.\"\n \"\\nContinuing without the dashboard.\"\n )\n raise ImportError(\n f\"Dask needs {BOKEH_REQUIREMENT} for the dashboard, not bokeh={BOKEH_VERSION}\"\n )\n\n\nif BOKEH_VERSION.major < 3:\n from bokeh.models import Panel as TabPanel # noqa: F401\nelse:\n from bokeh.models import TabPanel # noqa: F401\n\n\nclass DaskBokehTornado(BokehTornado):\n def resources(self, absolute_url: str | bool | None = True) -> Resources:\n return super().resources(absolute_url)\n\n\ndef BokehApplication(applications, server, prefix=\"/\", template_variables=None):\n template_variables = template_variables or {}\n prefix = \"/\" + prefix.strip(\"/\") + \"/\" if prefix else \"/\"\n\n extra = {\"prefix\": prefix, **template_variables}\n\n funcs = {k: functools.partial(v, server, extra) for k, v in applications.items()}\n apps = {k: Application(FunctionHandler(v)) for k, v in funcs.items()}\n\n kwargs = dask.config.get(\"distributed.scheduler.dashboard.bokeh-application\").copy()\n extra_websocket_origins = create_hosts_allowlist(\n kwargs.pop(\"allow_websocket_origin\"), server.http_server.port\n )\n\n return DaskBokehTornado(\n apps,\n prefix=prefix,\n use_index=False,\n extra_websocket_origins=extra_websocket_origins,\n absolute_url=\"\",\n **kwargs,\n )\n", "path": "distributed/dashboard/core.py"}]} | 1,662 | 302 |
gh_patches_debug_23710 | rasdani/github-patches | git_diff | mindsdb__mindsdb-2704 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Endpoint to return handler's icons
At the moment we return icons for handlers via the general `GET /handlers` route. Icons are returned in SVG or base64, which is not efficient. We need a new endpoint to return a handler's icon:
`GET /handlers/{name}/icon/{icon_file_name}`
</issue>
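A rough shape for such an endpoint, reusing the `ns_conf` flask-restx namespace imported in the module below; the icon path is illustrative, since the real location would come from the handler's import metadata:

```
from pathlib import Path
from flask import send_file, abort
from flask_restx import Resource

@ns_conf.route('/<handler_name>/icon/<icon_file_name>')
class HandlerIcon(Resource):
    def get(self, handler_name, icon_file_name):
        # Illustrative path; a real implementation should resolve it from the
        # integration controller and validate icon_file_name against traversal.
        icon_path = Path('mindsdb/integrations/handlers') / handler_name / icon_file_name
        if not icon_path.is_file():
            return abort(404)
        return send_file(icon_path)
```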
<code>
[start of mindsdb/api/http/namespaces/handlers.py]
1 from flask import request
2 from flask_restx import Resource
3
4 from mindsdb.api.http.utils import http_error
5 from mindsdb.api.http.namespaces.configs.handlers import ns_conf
6 from mindsdb.integrations.utilities.install import install_dependencies
7
8
9 @ns_conf.route('/')
10 class HandlersList(Resource):
11 @ns_conf.doc('handlers_list')
12 def get(self):
13 '''List all db handlers'''
14 handlers = request.integration_controller.get_handlers_import_status()
15 result = []
16 for handler_type, handler_meta in handlers.items():
17 row = {'name': handler_type}
18 row.update(handler_meta)
19 result.append(row)
20 return result
21
22
23 @ns_conf.route('/<handler_name>/install')
24 class InstallDependencies(Resource):
25 @ns_conf.param('handler_name', 'Handler name')
26 def post(self, handler_name):
27 handler_import_status = request.integration_controller.get_handlers_import_status()
28 if handler_name not in handler_import_status:
29 return f'Unkown handler: {handler_name}', 400
30
31 if handler_import_status[handler_name].get('import', {}).get('success', False) is True:
32 return 'Installed', 200
33
34 handler_meta = handler_import_status[handler_name]
35
36 dependencies = handler_meta['import']['dependencies']
37 if len(dependencies) == 0:
38 return 'Installed', 200
39
40 result = install_dependencies(dependencies)
41
42 # reload it if any result, so we can get new error message
43 request.integration_controller.reload_handler_module(handler_name)
44 if result.get('success') is True:
45 return '', 200
46 return http_error(
47 500,
48 'Failed to install dependency',
49 result.get('error_message', 'unknown error')
50 )
51
[end of mindsdb/api/http/namespaces/handlers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mindsdb/api/http/namespaces/handlers.py b/mindsdb/api/http/namespaces/handlers.py
--- a/mindsdb/api/http/namespaces/handlers.py
+++ b/mindsdb/api/http/namespaces/handlers.py
@@ -1,4 +1,8 @@
-from flask import request
+import os
+import importlib
+from pathlib import Path
+
+from flask import request, send_file, abort
from flask_restx import Resource
from mindsdb.api.http.utils import http_error
@@ -20,6 +24,24 @@
return result
+@ns_conf.route('/<handler_name>/icon')
+class HandlerIcon(Resource):
+ @ns_conf.param('handler_name', 'Handler name')
+ def get(self, handler_name):
+ try:
+ handlers_import_status = request.integration_controller.get_handlers_import_status()
+ icon_name = handlers_import_status[handler_name]['icon']['name']
+ handler_folder = handlers_import_status[handler_name]['import']['folder']
+ mindsdb_path = Path(importlib.util.find_spec('mindsdb').origin).parent
+ icon_path = mindsdb_path.joinpath('integrations/handlers').joinpath(handler_folder).joinpath(icon_name)
+ if icon_path.is_absolute() is False:
+ icon_path = Path(os.getcwd()).joinpath(icon_path)
+ except Exception:
+ return abort(404)
+ else:
+ return send_file(icon_path)
+
+
@ns_conf.route('/<handler_name>/install')
class InstallDependencies(Resource):
@ns_conf.param('handler_name', 'Handler name')
| {"golden_diff": "diff --git a/mindsdb/api/http/namespaces/handlers.py b/mindsdb/api/http/namespaces/handlers.py\n--- a/mindsdb/api/http/namespaces/handlers.py\n+++ b/mindsdb/api/http/namespaces/handlers.py\n@@ -1,4 +1,8 @@\n-from flask import request\n+import os\n+import importlib\n+from pathlib import Path\n+\n+from flask import request, send_file, abort\n from flask_restx import Resource\n \n from mindsdb.api.http.utils import http_error\n@@ -20,6 +24,24 @@\n return result\n \n \n+@ns_conf.route('/<handler_name>/icon')\n+class HandlerIcon(Resource):\n+ @ns_conf.param('handler_name', 'Handler name')\n+ def get(self, handler_name):\n+ try:\n+ handlers_import_status = request.integration_controller.get_handlers_import_status()\n+ icon_name = handlers_import_status[handler_name]['icon']['name']\n+ handler_folder = handlers_import_status[handler_name]['import']['folder']\n+ mindsdb_path = Path(importlib.util.find_spec('mindsdb').origin).parent\n+ icon_path = mindsdb_path.joinpath('integrations/handlers').joinpath(handler_folder).joinpath(icon_name)\n+ if icon_path.is_absolute() is False:\n+ icon_path = Path(os.getcwd()).joinpath(icon_path)\n+ except Exception:\n+ return abort(404)\n+ else:\n+ return send_file(icon_path)\n+\n+\n @ns_conf.route('/<handler_name>/install')\n class InstallDependencies(Resource):\n @ns_conf.param('handler_name', 'Handler name')\n", "issue": "Endpoint to return handler's icons\nAt the moment we return icons for handlers by general `GET /handlers` route. Icons are return in svg or base64, which is not effective. We need new endpoint to return handler icon:\r\n`GET /handlers/{name}/icon/{icon_file_name}`\r\n\n", "before_files": [{"content": "from flask import request\nfrom flask_restx import Resource\n\nfrom mindsdb.api.http.utils import http_error\nfrom mindsdb.api.http.namespaces.configs.handlers import ns_conf\nfrom mindsdb.integrations.utilities.install import install_dependencies\n\n\n@ns_conf.route('/')\nclass HandlersList(Resource):\n @ns_conf.doc('handlers_list')\n def get(self):\n '''List all db handlers'''\n handlers = request.integration_controller.get_handlers_import_status()\n result = []\n for handler_type, handler_meta in handlers.items():\n row = {'name': handler_type}\n row.update(handler_meta)\n result.append(row)\n return result\n\n\n@ns_conf.route('/<handler_name>/install')\nclass InstallDependencies(Resource):\n @ns_conf.param('handler_name', 'Handler name')\n def post(self, handler_name):\n handler_import_status = request.integration_controller.get_handlers_import_status()\n if handler_name not in handler_import_status:\n return f'Unkown handler: {handler_name}', 400\n\n if handler_import_status[handler_name].get('import', {}).get('success', False) is True:\n return 'Installed', 200\n\n handler_meta = handler_import_status[handler_name]\n\n dependencies = handler_meta['import']['dependencies']\n if len(dependencies) == 0:\n return 'Installed', 200\n\n result = install_dependencies(dependencies)\n\n # reload it if any result, so we can get new error message\n request.integration_controller.reload_handler_module(handler_name)\n if result.get('success') is True:\n return '', 200\n return http_error(\n 500,\n 'Failed to install dependency',\n result.get('error_message', 'unknown error')\n )\n", "path": "mindsdb/api/http/namespaces/handlers.py"}]} | 1,078 | 359 |
gh_patches_debug_15237 | rasdani/github-patches | git_diff | rlworkgroup__garage-691 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sim_policy not working
Hi,
I just found that sim_policy.py does not work.
The data read from "params.pkl" does not include the key "policy".
</issue>
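If the snapshot layout simply moved the policy onto the stored algorithm object, which is what the missing "policy" key suggests, a defensive lookup in the script below would cover both layouts. A sketch of the relevant fragment, where `data['algo'].policy` is an assumption about the snapshot contents:

```
data = joblib.load(args.file)
# Older snapshots stored the policy at the top level; newer ones appear
# to keep it on the saved algorithm object (assumption based on this report).
policy = data['policy'] if 'policy' in data else data['algo'].policy
env = data['env']
```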
<code>
[start of examples/sim_policy.py]
1 #!/usr/bin/env python3
2
3 import argparse
4
5 import joblib
6 import tensorflow as tf
7
8 from garage.misc.console import query_yes_no
9 from garage.sampler.utils import rollout
10
11 if __name__ == "__main__":
12
13 parser = argparse.ArgumentParser()
14 parser.add_argument('file', type=str, help='path to the snapshot file')
15 parser.add_argument(
16 '--max_path_length',
17 type=int,
18 default=1000,
19 help='Max length of rollout')
20 parser.add_argument('--speedup', type=float, default=1, help='Speedup')
21 args = parser.parse_args()
22
23 # If the snapshot file use tensorflow, do:
24 # import tensorflow as tf
25 # with tf.Session():
26 # [rest of the code]
27 with tf.Session() as sess:
28 data = joblib.load(args.file)
29 policy = data['policy']
30 env = data['env']
31 while True:
32 path = rollout(
33 env,
34 policy,
35 max_path_length=args.max_path_length,
36 animated=True,
37 speedup=args.speedup)
38 if not query_yes_no('Continue simulation?'):
39 break
40
[end of examples/sim_policy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/sim_policy.py b/examples/sim_policy.py
--- a/examples/sim_policy.py
+++ b/examples/sim_policy.py
@@ -8,7 +8,7 @@
from garage.misc.console import query_yes_no
from garage.sampler.utils import rollout
-if __name__ == "__main__":
+if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('file', type=str, help='path to the snapshot file')
@@ -26,7 +26,7 @@
# [rest of the code]
with tf.Session() as sess:
data = joblib.load(args.file)
- policy = data['policy']
+ policy = data['algo'].policy
env = data['env']
while True:
path = rollout(
| {"golden_diff": "diff --git a/examples/sim_policy.py b/examples/sim_policy.py\n--- a/examples/sim_policy.py\n+++ b/examples/sim_policy.py\n@@ -8,7 +8,7 @@\n from garage.misc.console import query_yes_no\n from garage.sampler.utils import rollout\n \n-if __name__ == \"__main__\":\n+if __name__ == '__main__':\n \n parser = argparse.ArgumentParser()\n parser.add_argument('file', type=str, help='path to the snapshot file')\n@@ -26,7 +26,7 @@\n # [rest of the code]\n with tf.Session() as sess:\n data = joblib.load(args.file)\n- policy = data['policy']\n+ policy = data['algo'].policy\n env = data['env']\n while True:\n path = rollout(\n", "issue": "sim_policy not working\nHi, \r\nI just found that sim_policy.py cannot work. \r\ndata that read from \"params.pkl\" does not include the key of \"policy\"\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport argparse\n\nimport joblib\nimport tensorflow as tf\n\nfrom garage.misc.console import query_yes_no\nfrom garage.sampler.utils import rollout\n\nif __name__ == \"__main__\":\n\n parser = argparse.ArgumentParser()\n parser.add_argument('file', type=str, help='path to the snapshot file')\n parser.add_argument(\n '--max_path_length',\n type=int,\n default=1000,\n help='Max length of rollout')\n parser.add_argument('--speedup', type=float, default=1, help='Speedup')\n args = parser.parse_args()\n\n # If the snapshot file use tensorflow, do:\n # import tensorflow as tf\n # with tf.Session():\n # [rest of the code]\n with tf.Session() as sess:\n data = joblib.load(args.file)\n policy = data['policy']\n env = data['env']\n while True:\n path = rollout(\n env,\n policy,\n max_path_length=args.max_path_length,\n animated=True,\n speedup=args.speedup)\n if not query_yes_no('Continue simulation?'):\n break\n", "path": "examples/sim_policy.py"}]} | 889 | 174 |
gh_patches_debug_9203 | rasdani/github-patches | git_diff | Qiskit__qiskit-4081 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve an error message in qiskit.converters.circuit_to_gate()
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected enhancement?
Let's assume we have a `QuantumCircuit` object called `qc`, and one tries to convert it into a `Gate` object using `qiskit.converters.circuit_to_gate()`. If `qc` contains some instructions which cannot be converted into a `Gate`, the following exception is raised:
```
QiskitError: 'One or more instructions in this instruction cannot be converted to a gate'
```
My suggestion is to improve this error message and add some info about the particular instruction preventing the conversion from happening. I believe something like the instruction name in the error message would be more helpful than the current general statement.
Below is a code snippet (for a `qc` containing a measurement operation) which can be used to reproduce the error mentioned above:
```
from qiskit import QuantumCircuit
from qiskit.converters import circuit_to_gate
qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)
gate = circuit_to_gate(qc)
```
</issue>
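For illustration, the check inside `circuit_to_gate()` (shown in the listing below) could name the offending instruction; the exact wording here is illustrative only, not a proposed final message:

```
for inst, _, _ in circuit.data:
    if not isinstance(inst, Gate):
        raise QiskitError(
            'Instruction "{}" cannot be converted to a gate; remove it '
            '(e.g. measurements or resets) before calling '
            'circuit_to_gate().'.format(inst.name)
        )
```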
<code>
[start of qiskit/converters/circuit_to_gate.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Helper function for converting a circuit to a gate"""
16
17 from qiskit.circuit.gate import Gate
18 from qiskit.circuit.quantumregister import QuantumRegister, Qubit
19 from qiskit.exceptions import QiskitError
20
21
22 def circuit_to_gate(circuit, parameter_map=None):
23 """Build a ``Gate`` object from a ``QuantumCircuit``.
24
25 The gate is anonymous (not tied to a named quantum register),
26 and so can be inserted into another circuit. The gate will
27 have the same string name as the circuit.
28
29 Args:
30 circuit (QuantumCircuit): the input circuit.
31 parameter_map (dict): For parameterized circuits, a mapping from
32 parameters in the circuit to parameters to be used in the gate.
33 If None, existing circuit parameters will also parameterize the
34 Gate.
35
36 Raises:
37 QiskitError: if circuit is non-unitary or if
38 parameter_map is not compatible with circuit
39
40 Return:
41 Gate: a Gate equivalent to the action of the
42 input circuit. Upon decomposition, this gate will
43 yield the components comprising the original circuit.
44 """
45 if circuit.clbits:
46 raise QiskitError('Circuit with classical bits cannot be converted '
47 'to gate.')
48
49 for inst, _, _ in circuit.data:
50 if not isinstance(inst, Gate):
51 raise QiskitError('One or more instructions in this instruction '
52 'cannot be converted to a gate')
53
54 if parameter_map is None:
55 parameter_dict = {p: p for p in circuit.parameters}
56 else:
57 parameter_dict = circuit._unroll_param_dict(parameter_map)
58
59 if parameter_dict.keys() != circuit.parameters:
60 raise QiskitError(('parameter_map should map all circuit parameters. '
61 'Circuit parameters: {}, parameter_map: {}').format(
62 circuit.parameters, parameter_dict))
63
64 gate = Gate(name=circuit.name,
65 num_qubits=sum([qreg.size for qreg in circuit.qregs]),
66 params=sorted(parameter_dict.values(), key=lambda p: p.name))
67 gate.condition = None
68
69 def find_bit_position(bit):
70 """find the index of a given bit (Register, int) within
71 a flat ordered list of bits of the circuit
72 """
73 if isinstance(bit, Qubit):
74 ordered_regs = circuit.qregs
75 else:
76 ordered_regs = circuit.cregs
77 reg_index = ordered_regs.index(bit.register)
78 return sum([reg.size for reg in ordered_regs[:reg_index]]) + bit.index
79
80 target = circuit.copy()
81 target._substitute_parameters(parameter_dict)
82
83 # pylint: disable=cyclic-import
84 from qiskit.circuit.equivalence_library import SessionEquivalenceLibrary as sel
85 # pylint: enable=cyclic-import
86 sel.add_equivalence(gate, target)
87
88 definition = target.data
89
90 if gate.num_qubits > 0:
91 q = QuantumRegister(gate.num_qubits, 'q')
92
93 # The 3rd parameter in the output tuple) is hard coded to [] because
94 # Gate objects do not have cregs set and we've verified that all
95 # instructions are gates
96 definition = list(map(
97 lambda x: (x[0],
98 list(map(lambda y: q[find_bit_position(y)], x[1])),
99 []),
100 definition))
101 gate.definition = definition
102
103 return gate
104
[end of qiskit/converters/circuit_to_gate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/converters/circuit_to_gate.py b/qiskit/converters/circuit_to_gate.py
--- a/qiskit/converters/circuit_to_gate.py
+++ b/qiskit/converters/circuit_to_gate.py
@@ -48,8 +48,9 @@
for inst, _, _ in circuit.data:
if not isinstance(inst, Gate):
- raise QiskitError('One or more instructions in this instruction '
- 'cannot be converted to a gate')
+ raise QiskitError(('One or more instructions cannot be converted to'
+ ' a gate. "{}" is not a gate instruction').format(
+ inst.name))
if parameter_map is None:
parameter_dict = {p: p for p in circuit.parameters}
| {"golden_diff": "diff --git a/qiskit/converters/circuit_to_gate.py b/qiskit/converters/circuit_to_gate.py\n--- a/qiskit/converters/circuit_to_gate.py\n+++ b/qiskit/converters/circuit_to_gate.py\n@@ -48,8 +48,9 @@\n \n for inst, _, _ in circuit.data:\n if not isinstance(inst, Gate):\n- raise QiskitError('One or more instructions in this instruction '\n- 'cannot be converted to a gate')\n+ raise QiskitError(('One or more instructions cannot be converted to'\n+ ' a gate. \"{}\" is not a gate instruction').format(\n+ inst.name))\n \n if parameter_map is None:\n parameter_dict = {p: p for p in circuit.parameters}\n", "issue": "Improve an error message in qiskit.converters.circuit_to_gate()\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues to confirm this idea does not exist. -->\r\n\r\n### What is the expected enhancement?\r\n\r\nLet's assume we have `QuantumCircuit` object called `qc`, and one tries to convert it into a `Gate` object using `qiskit.converters.circuit_to_gate()`. If `qc` contains some instructions which cannot be converted into `Gate`, the following exception is raised\r\n```\r\nQiskitError: 'One or more instructions in this instruction cannot be converted to a gate'\r\n```\r\nMy suggestion is to improve this error message and add some info about the particular instruction preventing the convertion from happening. I believe, something like the instruction name in the error message should be more helpfull, than the current general statement.\r\n\r\nBelow is a code snippet (for a `qc` containing a measurement operation) which can be used to achieve the error mentioned above\r\n```\r\nfrom qiskit import QuantumCircuit\r\nfrom qiskit.converters import circuit_to_gate\r\n\r\nqc = QuantumCircuit(1, 1)\r\nqc.h(0)\r\nqc.measure(0, 0)\r\n\r\ngate = circuit_to_gate(qc)\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Helper function for converting a circuit to a gate\"\"\"\n\nfrom qiskit.circuit.gate import Gate\nfrom qiskit.circuit.quantumregister import QuantumRegister, Qubit\nfrom qiskit.exceptions import QiskitError\n\n\ndef circuit_to_gate(circuit, parameter_map=None):\n \"\"\"Build a ``Gate`` object from a ``QuantumCircuit``.\n\n The gate is anonymous (not tied to a named quantum register),\n and so can be inserted into another circuit. The gate will\n have the same string name as the circuit.\n\n Args:\n circuit (QuantumCircuit): the input circuit.\n parameter_map (dict): For parameterized circuits, a mapping from\n parameters in the circuit to parameters to be used in the gate.\n If None, existing circuit parameters will also parameterize the\n Gate.\n\n Raises:\n QiskitError: if circuit is non-unitary or if\n parameter_map is not compatible with circuit\n\n Return:\n Gate: a Gate equivalent to the action of the\n input circuit. 
Upon decomposition, this gate will\n yield the components comprising the original circuit.\n \"\"\"\n if circuit.clbits:\n raise QiskitError('Circuit with classical bits cannot be converted '\n 'to gate.')\n\n for inst, _, _ in circuit.data:\n if not isinstance(inst, Gate):\n raise QiskitError('One or more instructions in this instruction '\n 'cannot be converted to a gate')\n\n if parameter_map is None:\n parameter_dict = {p: p for p in circuit.parameters}\n else:\n parameter_dict = circuit._unroll_param_dict(parameter_map)\n\n if parameter_dict.keys() != circuit.parameters:\n raise QiskitError(('parameter_map should map all circuit parameters. '\n 'Circuit parameters: {}, parameter_map: {}').format(\n circuit.parameters, parameter_dict))\n\n gate = Gate(name=circuit.name,\n num_qubits=sum([qreg.size for qreg in circuit.qregs]),\n params=sorted(parameter_dict.values(), key=lambda p: p.name))\n gate.condition = None\n\n def find_bit_position(bit):\n \"\"\"find the index of a given bit (Register, int) within\n a flat ordered list of bits of the circuit\n \"\"\"\n if isinstance(bit, Qubit):\n ordered_regs = circuit.qregs\n else:\n ordered_regs = circuit.cregs\n reg_index = ordered_regs.index(bit.register)\n return sum([reg.size for reg in ordered_regs[:reg_index]]) + bit.index\n\n target = circuit.copy()\n target._substitute_parameters(parameter_dict)\n\n # pylint: disable=cyclic-import\n from qiskit.circuit.equivalence_library import SessionEquivalenceLibrary as sel\n # pylint: enable=cyclic-import\n sel.add_equivalence(gate, target)\n\n definition = target.data\n\n if gate.num_qubits > 0:\n q = QuantumRegister(gate.num_qubits, 'q')\n\n # The 3rd parameter in the output tuple) is hard coded to [] because\n # Gate objects do not have cregs set and we've verified that all\n # instructions are gates\n definition = list(map(\n lambda x: (x[0],\n list(map(lambda y: q[find_bit_position(y)], x[1])),\n []),\n definition))\n gate.definition = definition\n\n return gate\n", "path": "qiskit/converters/circuit_to_gate.py"}]} | 1,896 | 174 |
gh_patches_debug_984 | rasdani/github-patches | git_diff | Mailu__Mailu-2157 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Admin User Quota sorting is off
Thank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.
For **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).
To be able to help you best, we need some more information.
## Before you open your issue
- [ x] Check if no issue or pull-request for this already exists.
- [ x] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- [ x] You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- [ x] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
## Environment & Versions
### Environment
- [ x] docker-compose
- [ ] kubernetes
- [ ] docker swarm
### Versions
1.9
## Description
When sorting by quota in the Admin interface the numbers are sorted like text instead of by number and bytes.
## Expected behaviour
kB is smaller than MB is smaller than GB

</issue>
<code>
[start of core/admin/mailu/__init__.py]
1 """ Mailu admin app
2 """
3
4 import flask
5 import flask_bootstrap
6
7 from mailu import utils, debug, models, manage, configuration
8
9 import hmac
10
11 def create_app_from_config(config):
12 """ Create a new application based on the given configuration
13 """
14 app = flask.Flask(__name__, static_folder='static', static_url_path='/static')
15 app.cli.add_command(manage.mailu)
16
17 # Bootstrap is used for error display and flash messages
18 app.bootstrap = flask_bootstrap.Bootstrap(app)
19
20 # Initialize application extensions
21 config.init_app(app)
22 models.db.init_app(app)
23 utils.session.init_app(app)
24 utils.limiter.init_app(app)
25 utils.babel.init_app(app)
26 utils.login.init_app(app)
27 utils.login.user_loader(models.User.get)
28 utils.proxy.init_app(app)
29 utils.migrate.init_app(app, models.db)
30
31 app.device_cookie_key = hmac.new(bytearray(app.secret_key, 'utf-8'), bytearray('DEVICE_COOKIE_KEY', 'utf-8'), 'sha256').digest()
32 app.temp_token_key = hmac.new(bytearray(app.secret_key, 'utf-8'), bytearray('WEBMAIL_TEMP_TOKEN_KEY', 'utf-8'), 'sha256').digest()
33 app.srs_key = hmac.new(bytearray(app.secret_key, 'utf-8'), bytearray('SRS_KEY', 'utf-8'), 'sha256').digest()
34
35 # Initialize list of translations
36 app.config.translations = {
37 str(locale): locale
38 for locale in sorted(
39 utils.babel.list_translations(),
40 key=lambda l: l.get_language_name().title()
41 )
42 }
43
44 # Initialize debugging tools
45 if app.config.get("DEBUG"):
46 debug.toolbar.init_app(app)
47 if app.config.get("DEBUG_PROFILER"):
48 debug.profiler.init_app(app)
49 if assets := app.config.get('DEBUG_ASSETS'):
50 app.static_folder = assets
51
52 # Inject the default variables in the Jinja parser
53 # TODO: move this to blueprints when needed
54 @app.context_processor
55 def inject_defaults():
56 signup_domains = models.Domain.query.filter_by(signup_enabled=True).all()
57 return dict(
58 signup_domains= signup_domains,
59 config = app.config,
60 )
61
62 # Jinja filters
63 @app.template_filter()
64 def format_date(value):
65 return utils.flask_babel.format_date(value) if value else ''
66
67 @app.template_filter()
68 def format_datetime(value):
69 return utils.flask_babel.format_datetime(value) if value else ''
70
71 # Import views
72 from mailu import ui, internal, sso
73 app.register_blueprint(ui.ui, url_prefix=app.config['WEB_ADMIN'])
74 app.register_blueprint(internal.internal, url_prefix='/internal')
75 app.register_blueprint(sso.sso, url_prefix='/sso')
76 return app
77
78
79 def create_app():
80 """ Create a new application based on the config module
81 """
82 config = configuration.ConfigManager()
83 return create_app_from_config(config)
84
85
[end of core/admin/mailu/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/__init__.py b/core/admin/mailu/__init__.py
--- a/core/admin/mailu/__init__.py
+++ b/core/admin/mailu/__init__.py
@@ -57,6 +57,7 @@
return dict(
signup_domains= signup_domains,
config = app.config,
+ get_locale = utils.get_locale,
)
# Jinja filters
| {"golden_diff": "diff --git a/core/admin/mailu/__init__.py b/core/admin/mailu/__init__.py\n--- a/core/admin/mailu/__init__.py\n+++ b/core/admin/mailu/__init__.py\n@@ -57,6 +57,7 @@\n return dict(\n signup_domains= signup_domains,\n config = app.config,\n+ get_locale = utils.get_locale,\n )\n \n # Jinja filters\n", "issue": "Admin User Quota sorting is off\nThank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.\r\nFor **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).\r\n\r\nTo be able to help you best, we need some more information.\r\n\r\n## Before you open your issue\r\n- [ x] Check if no issue or pull-request for this already exists.\r\n- [ x] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- [ x] You understand `Mailu` is made by volunteers in their **free time** \u2014 be conscise, civil and accept that delays can occur.\r\n- [ x] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\n## Environment & Versions\r\n### Environment\r\n - [ x] docker-compose\r\n - [ ] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n1.9\r\n\r\n## Description\r\nWhen sorting by quota in the Admin interface the numbers are sorted like text instead of by number and bytes.\r\n\r\n\r\n## Expected behaviour\r\nkB is smaller than MB is smaller than GB\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\" Mailu admin app\n\"\"\"\n\nimport flask\nimport flask_bootstrap\n\nfrom mailu import utils, debug, models, manage, configuration\n\nimport hmac\n\ndef create_app_from_config(config):\n \"\"\" Create a new application based on the given configuration\n \"\"\"\n app = flask.Flask(__name__, static_folder='static', static_url_path='/static')\n app.cli.add_command(manage.mailu)\n\n # Bootstrap is used for error display and flash messages\n app.bootstrap = flask_bootstrap.Bootstrap(app)\n\n # Initialize application extensions\n config.init_app(app)\n models.db.init_app(app)\n utils.session.init_app(app)\n utils.limiter.init_app(app)\n utils.babel.init_app(app)\n utils.login.init_app(app)\n utils.login.user_loader(models.User.get)\n utils.proxy.init_app(app)\n utils.migrate.init_app(app, models.db)\n\n app.device_cookie_key = hmac.new(bytearray(app.secret_key, 'utf-8'), bytearray('DEVICE_COOKIE_KEY', 'utf-8'), 'sha256').digest()\n app.temp_token_key = hmac.new(bytearray(app.secret_key, 'utf-8'), bytearray('WEBMAIL_TEMP_TOKEN_KEY', 'utf-8'), 'sha256').digest()\n app.srs_key = hmac.new(bytearray(app.secret_key, 'utf-8'), bytearray('SRS_KEY', 'utf-8'), 'sha256').digest()\n\n # Initialize list of translations\n app.config.translations = {\n str(locale): locale\n for locale in sorted(\n utils.babel.list_translations(),\n key=lambda l: l.get_language_name().title()\n )\n }\n\n # Initialize debugging tools\n if app.config.get(\"DEBUG\"):\n debug.toolbar.init_app(app)\n if app.config.get(\"DEBUG_PROFILER\"):\n debug.profiler.init_app(app)\n if assets := app.config.get('DEBUG_ASSETS'):\n app.static_folder = assets\n\n # Inject the default variables in the Jinja parser\n # TODO: move this to blueprints when needed\n @app.context_processor\n def inject_defaults():\n signup_domains = models.Domain.query.filter_by(signup_enabled=True).all()\n return dict(\n signup_domains= signup_domains,\n config = 
app.config,\n )\n\n # Jinja filters\n @app.template_filter()\n def format_date(value):\n return utils.flask_babel.format_date(value) if value else ''\n\n @app.template_filter()\n def format_datetime(value):\n return utils.flask_babel.format_datetime(value) if value else ''\n\n # Import views\n from mailu import ui, internal, sso\n app.register_blueprint(ui.ui, url_prefix=app.config['WEB_ADMIN'])\n app.register_blueprint(internal.internal, url_prefix='/internal')\n app.register_blueprint(sso.sso, url_prefix='/sso')\n return app\n\n\ndef create_app():\n \"\"\" Create a new application based on the config module\n \"\"\"\n config = configuration.ConfigManager()\n return create_app_from_config(config)\n\n", "path": "core/admin/mailu/__init__.py"}]} | 1,692 | 94 |
gh_patches_debug_764 | rasdani/github-patches | git_diff | rasterio__rasterio-1692 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
more explicit NotImplementedError messages in read mode?
In wanting to set a GeoTIFF's CRS, I encountered [this](https://github.com/mapbox/rasterio/blob/master/rasterio/_base.pyx#L516) NotImplementedError when trying to run the following code:
```
with rasterio.open(filepath) as src:
src.crs = "EPSG:3857"
```
Though in retrospect it is obvious the above will fail without explicitly specifying the proper mode, i.e. `'r+'` in this case, I was momentarily thrown off by the error and assumed something was wrong with my approach. Would a more explicit error message be useful here?
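For comparison, a minimal sketch of the variant that does work (assuming the file on disk is writable): opening the dataset in `'r+'` update mode makes the `crs` attribute assignable, here using an explicit CRS object:
```python
import rasterio
from rasterio.crs import CRS

with rasterio.open(filepath, "r+") as src:  # update mode instead of the default read-only "r"
    src.crs = CRS.from_epsg(3857)
```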
</issue>
<code>
[start of rasterio/errors.py]
1 """Errors and Warnings."""
2
3 from click import FileError
4
5
6 class RasterioError(Exception):
7 """Root exception class"""
8
9
10 class WindowError(RasterioError):
11 """Raised when errors occur during window operations"""
12
13
14 class CRSError(ValueError):
15 """Raised when a CRS string or mapping is invalid or cannot serve
16 to define a coordinate transformation."""
17
18
19 class EnvError(RasterioError):
20 """Raised when the state of GDAL/AWS environment cannot be created
21 or modified."""
22
23
24 class DriverRegistrationError(ValueError):
25 """Raised when a format driver is requested but is not registered."""
26
27
28 class FileOverwriteError(FileError):
29 """Raised when Rasterio's CLI refuses to clobber output files."""
30
31 def __init__(self, message):
32 """Raise FileOverwriteError with message as hint."""
33 super(FileOverwriteError, self).__init__('', hint=message)
34
35
36 class RasterioIOError(IOError):
37 """Raised when a dataset cannot be opened using one of the
38 registered format drivers."""
39
40
41 class NodataShadowWarning(UserWarning):
42 """Warn that a dataset's nodata attribute is shadowing its alpha band."""
43
44 def __str__(self):
45 return ("The dataset's nodata attribute is shadowing "
46 "the alpha band. All masks will be determined "
47 "by the nodata attribute")
48
49
50 class NotGeoreferencedWarning(UserWarning):
51 """Warn that a dataset isn't georeferenced."""
52
53
54 class GDALBehaviorChangeException(RuntimeError):
55 """Raised when GDAL's behavior differs from the given arguments. For
56 example, antimeridian cutting is always on as of GDAL 2.2.0. Users
57 expecting it to be off will be presented with a MultiPolygon when the
58 rest of their code expects a Polygon.
59
60 # Raises an exception on GDAL >= 2.2.0
61 rasterio.warp.transform_geometry(
62 src_crs, dst_crs, antimeridian_cutting=False)
63 """
64
65
66 class GDALOptionNotImplementedError(RasterioError):
67 """A dataset opening or dataset creation option can't be supported
68
69 This will be raised from Rasterio's shim modules. For example, when
70 a user passes arguments to open_dataset() that can't be evaluated
71 by GDAL 1.x.
72 """
73
74 class GDALVersionError(RasterioError):
75 """Raised if the runtime version of GDAL does not meet the required
76 version of GDAL."""
77
78
79 class WindowEvaluationError(ValueError):
80 """Raised when window evaluation fails"""
81
82
83 class RasterioDeprecationWarning(UserWarning):
84 """Rasterio module deprecations"""
85
86
87 class RasterBlockError(RasterioError):
88 """Raised when raster block access fails"""
89
90
91 class BandOverviewError(UserWarning):
92 """Raised when a band overview access fails."""
93
94
95 class WarpOptionsError(RasterioError):
96 """Raised when options for a warp operation are invalid"""
97
98
99 class UnsupportedOperation(RasterioError):
100 """Raised when reading from a file opened in 'w' mode"""
101
102
103 class OverviewCreationError(RasterioError):
104 """Raised when creation of an overview fails"""
105
[end of rasterio/errors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/errors.py b/rasterio/errors.py
--- a/rasterio/errors.py
+++ b/rasterio/errors.py
@@ -102,3 +102,7 @@
class OverviewCreationError(RasterioError):
"""Raised when creation of an overview fails"""
+
+
+class DatasetAttributeError(RasterioError, NotImplementedError):
+ """Raised when dataset attributes are misused"""
| {"golden_diff": "diff --git a/rasterio/errors.py b/rasterio/errors.py\n--- a/rasterio/errors.py\n+++ b/rasterio/errors.py\n@@ -102,3 +102,7 @@\n \n class OverviewCreationError(RasterioError):\n \"\"\"Raised when creation of an overview fails\"\"\"\n+\n+\n+class DatasetAttributeError(RasterioError, NotImplementedError):\n+ \"\"\"Raised when dataset attributes are misused\"\"\"\n", "issue": "more explicit NotImplementedError messages in read mode ?\nIn wanting to set a GeoTIFF's CRS, I encountered [this](https://github.com/mapbox/rasterio/blob/master/rasterio/_base.pyx#L516) NotImplementedError when trying to run the following code:\r\n```\r\nwith rasterio.open(filepath) as src:\r\n src.crs = \"EPSG:3857\"\r\n```\r\nThough in retrospect it is obvious the above will fail without explicitly specifying the proper mode , i.e. `'r+'` in this case, I was momentarily thrown off by the error and assumed something was wrong with my approach. Would a more explicit error message be useful here?\r\n\n", "before_files": [{"content": "\"\"\"Errors and Warnings.\"\"\"\n\nfrom click import FileError\n\n\nclass RasterioError(Exception):\n \"\"\"Root exception class\"\"\"\n\n\nclass WindowError(RasterioError):\n \"\"\"Raised when errors occur during window operations\"\"\"\n\n\nclass CRSError(ValueError):\n \"\"\"Raised when a CRS string or mapping is invalid or cannot serve\n to define a coordinate transformation.\"\"\"\n\n\nclass EnvError(RasterioError):\n \"\"\"Raised when the state of GDAL/AWS environment cannot be created\n or modified.\"\"\"\n\n\nclass DriverRegistrationError(ValueError):\n \"\"\"Raised when a format driver is requested but is not registered.\"\"\"\n\n\nclass FileOverwriteError(FileError):\n \"\"\"Raised when Rasterio's CLI refuses to clobber output files.\"\"\"\n\n def __init__(self, message):\n \"\"\"Raise FileOverwriteError with message as hint.\"\"\"\n super(FileOverwriteError, self).__init__('', hint=message)\n\n\nclass RasterioIOError(IOError):\n \"\"\"Raised when a dataset cannot be opened using one of the\n registered format drivers.\"\"\"\n\n\nclass NodataShadowWarning(UserWarning):\n \"\"\"Warn that a dataset's nodata attribute is shadowing its alpha band.\"\"\"\n\n def __str__(self):\n return (\"The dataset's nodata attribute is shadowing \"\n \"the alpha band. All masks will be determined \"\n \"by the nodata attribute\")\n\n\nclass NotGeoreferencedWarning(UserWarning):\n \"\"\"Warn that a dataset isn't georeferenced.\"\"\"\n\n\nclass GDALBehaviorChangeException(RuntimeError):\n \"\"\"Raised when GDAL's behavior differs from the given arguments. For\n example, antimeridian cutting is always on as of GDAL 2.2.0. Users\n expecting it to be off will be presented with a MultiPolygon when the\n rest of their code expects a Polygon.\n\n # Raises an exception on GDAL >= 2.2.0\n rasterio.warp.transform_geometry(\n src_crs, dst_crs, antimeridian_cutting=False)\n \"\"\"\n\n\nclass GDALOptionNotImplementedError(RasterioError):\n \"\"\"A dataset opening or dataset creation option can't be supported\n\n This will be raised from Rasterio's shim modules. 
For example, when\n a user passes arguments to open_dataset() that can't be evaluated\n by GDAL 1.x.\n \"\"\"\n\nclass GDALVersionError(RasterioError):\n \"\"\"Raised if the runtime version of GDAL does not meet the required\n version of GDAL.\"\"\"\n\n\nclass WindowEvaluationError(ValueError):\n \"\"\"Raised when window evaluation fails\"\"\"\n\n\nclass RasterioDeprecationWarning(UserWarning):\n \"\"\"Rasterio module deprecations\"\"\"\n\n\nclass RasterBlockError(RasterioError):\n \"\"\"Raised when raster block access fails\"\"\"\n\n\nclass BandOverviewError(UserWarning):\n \"\"\"Raised when a band overview access fails.\"\"\"\n\n\nclass WarpOptionsError(RasterioError):\n \"\"\"Raised when options for a warp operation are invalid\"\"\"\n\n\nclass UnsupportedOperation(RasterioError):\n \"\"\"Raised when reading from a file opened in 'w' mode\"\"\"\n\n\nclass OverviewCreationError(RasterioError):\n \"\"\"Raised when creation of an overview fails\"\"\"\n", "path": "rasterio/errors.py"}]} | 1,571 | 92 |
gh_patches_debug_30871 | rasdani/github-patches | git_diff | sublimelsp__LSP-1488 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LspTextCommand should honor both session_name and capability if defined
If `capability` in [LspTextCommand](https://github.com/sublimelsp/LSP/blob/81a6e6aeb2c3a6aebad59fbd6eb0361301243bd1/plugin/core/registry.py#L52-L70) is defined, `session_name` is ignored. You might say that LSP-* plugins exactly know the capabilities of their server and thus never need to specify `capability` in a derived class, but in particular it's impossible for plugins to derive from LspExecuteCommand (which is derived from LspTextCommand), because that class already comes with a predefined `capability`. It can be convenient for a plugin to declare a derived class from LspExecuteCommand, so that their commands are only shown/enabled for corresponding filetypes:
```python
class FooExecuteCommand(LspExecuteCommand):
session_name = "foo"
```
**Describe the solution you'd like**
```python
def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:
if self.capability:
# At least one active session with the given capability must exist.
if not bool(self.best_session(self.capability, get_position(self.view, event, point))):
return False
if self.session_name:
# There must exist an active session with the given (config) name.
if not bool(self.session_by_name(self.session_name)):
return False
if not self.capability and not self.session_name:
# Any session will do.
return any(self.sessions())
return True
```
**Describe alternatives you've considered**
Make `session_name` win against `capability`
**Additional context**
Notice that the implementation suggested above doesn't guarantee that the sessions with the specified name and capability are the same (in case of multiple attached sessions for a view).
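As a concrete (hypothetical) example, honoring both attributes would let a plugin restrict a command to its own session *and* a specific capability at the same time, whereas today the `session_name` line is ignored as soon as `capability` is set:
```python
class FooHoverCommand(LspTextCommand):
    session_name = "foo"          # only enabled while the "foo" session is attached
    capability = "hoverProvider"  # ...and only if that view has a session providing hover
```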
</issue>
<code>
[start of plugin/core/registry.py]
1 from .configurations import ConfigManager
2 from .sessions import Session
3 from .settings import client_configs
4 from .typing import Optional, Any, Generator, Iterable
5 from .windows import WindowRegistry
6 import sublime
7 import sublime_plugin
8
9
10 def sessions_for_view(view: sublime.View, capability: Optional[str] = None) -> Generator[Session, None, None]:
11 """
12 Returns all sessions for this view, optionally matching the capability path.
13 """
14 window = view.window()
15 if window:
16 manager = windows.lookup(window)
17 yield from manager.sessions(view, capability)
18
19
20 def best_session(view: sublime.View, sessions: Iterable[Session], point: Optional[int] = None) -> Optional[Session]:
21 if point is None:
22 try:
23 point = view.sel()[0].b
24 except IndexError:
25 return None
26 try:
27 return max(sessions, key=lambda s: view.score_selector(point, s.config.priority_selector)) # type: ignore
28 except ValueError:
29 return None
30
31
32 configs = ConfigManager(client_configs.all)
33 client_configs.set_listener(configs.update)
34 windows = WindowRegistry(configs)
35
36
37 def get_position(view: sublime.View, event: Optional[dict] = None, point: Optional[int] = None) -> int:
38 if isinstance(point, int):
39 return point
40 elif event:
41 return view.window_to_text((event["x"], event["y"]))
42 else:
43 return view.sel()[0].begin()
44
45
46 class LspTextCommand(sublime_plugin.TextCommand):
47 """
48 Inherit from this class to define your requests that should be triggered via the command palette and/or a
49 keybinding.
50 """
51
52 # When this is defined in a derived class, the command is enabled if and only if there exists a session attached
53 # to the view that has the given capability. When both `capability` and `session_name` are defined, `capability`
54 # wins.
55 capability = ''
56
57 # When this is defined in a derived class, the command is enabled if and only if there exists a session attached
58 # to the view that has the given name. When both `capability` and `session_name` are defined, `capability` wins.
59 session_name = ''
60
61 def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:
62 if self.capability:
63 # At least one active session with the given capability must exist.
64 return bool(self.best_session(self.capability, get_position(self.view, event, point)))
65 elif self.session_name:
66 # There must exist an active session with the given (config) name.
67 return bool(self.session_by_name(self.session_name))
68 else:
69 # Any session will do.
70 return any(self.sessions())
71
72 def want_event(self) -> bool:
73 return True
74
75 def best_session(self, capability: str, point: Optional[int] = None) -> Optional[Session]:
76 listener = windows.listener_for_view(self.view)
77 return listener.session(capability, point) if listener else None
78
79 def session_by_name(self, name: Optional[str] = None) -> Optional[Session]:
80 target = name if name else self.session_name
81 for session in self.sessions():
82 if session.config.name == target:
83 return session
84 return None
85
86 def sessions(self, capability: Optional[str] = None) -> Generator[Session, None, None]:
87 yield from sessions_for_view(self.view, capability)
88
89
90 class LspRestartClientCommand(sublime_plugin.TextCommand):
91 def run(self, edit: Any) -> None:
92 window = self.view.window()
93 if window:
94 windows.lookup(window).restart_sessions_async()
95
96
97 class LspRecheckSessionsCommand(sublime_plugin.WindowCommand):
98 def run(self) -> None:
99 sublime.set_timeout_async(lambda: windows.lookup(self.window).restart_sessions_async())
100
[end of plugin/core/registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/core/registry.py b/plugin/core/registry.py
--- a/plugin/core/registry.py
+++ b/plugin/core/registry.py
@@ -49,25 +49,27 @@
keybinding.
"""
- # When this is defined in a derived class, the command is enabled if and only if there exists a session attached
- # to the view that has the given capability. When both `capability` and `session_name` are defined, `capability`
- # wins.
+ # When this is defined in a derived class, the command is enabled only if there exists a session attached to the
+ # view that has the given capability.
capability = ''
- # When this is defined in a derived class, the command is enabled if and only if there exists a session attached
- # to the view that has the given name. When both `capability` and `session_name` are defined, `capability` wins.
+ # When this is defined in a derived class, the command is enabled only if there exists a session attached to the
+ # view that has the given name.
session_name = ''
def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:
if self.capability:
# At least one active session with the given capability must exist.
- return bool(self.best_session(self.capability, get_position(self.view, event, point)))
- elif self.session_name:
+ if not self.best_session(self.capability, get_position(self.view, event, point)):
+ return False
+ if self.session_name:
# There must exist an active session with the given (config) name.
- return bool(self.session_by_name(self.session_name))
- else:
+ if not self.session_by_name(self.session_name):
+ return False
+ if not self.capability and not self.session_name:
# Any session will do.
return any(self.sessions())
+ return True
def want_event(self) -> bool:
return True
| {"golden_diff": "diff --git a/plugin/core/registry.py b/plugin/core/registry.py\n--- a/plugin/core/registry.py\n+++ b/plugin/core/registry.py\n@@ -49,25 +49,27 @@\n keybinding.\n \"\"\"\n \n- # When this is defined in a derived class, the command is enabled if and only if there exists a session attached\n- # to the view that has the given capability. When both `capability` and `session_name` are defined, `capability`\n- # wins.\n+ # When this is defined in a derived class, the command is enabled only if there exists a session attached to the\n+ # view that has the given capability.\n capability = ''\n \n- # When this is defined in a derived class, the command is enabled if and only if there exists a session attached\n- # to the view that has the given name. When both `capability` and `session_name` are defined, `capability` wins.\n+ # When this is defined in a derived class, the command is enabled only if there exists a session attached to the\n+ # view that has the given name.\n session_name = ''\n \n def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:\n if self.capability:\n # At least one active session with the given capability must exist.\n- return bool(self.best_session(self.capability, get_position(self.view, event, point)))\n- elif self.session_name:\n+ if not self.best_session(self.capability, get_position(self.view, event, point)):\n+ return False\n+ if self.session_name:\n # There must exist an active session with the given (config) name.\n- return bool(self.session_by_name(self.session_name))\n- else:\n+ if not self.session_by_name(self.session_name):\n+ return False\n+ if not self.capability and not self.session_name:\n # Any session will do.\n return any(self.sessions())\n+ return True\n \n def want_event(self) -> bool:\n return True\n", "issue": "LspTextCommand should honor both session_name and capability if defined\nIf `capability` in [LspTextCommand](https://github.com/sublimelsp/LSP/blob/81a6e6aeb2c3a6aebad59fbd6eb0361301243bd1/plugin/core/registry.py#L52-L70) is defined, `session_name` is ignored. You might say that LSP-* plugins exactly know the capabilities of their server and thus never need to specify `capability` in a derived class, but in particular it's impossible for plugins to derive from LspExecuteCommand (which is derived from LspTextCommand), because that class already comes with a predefined `capability`. 
It can be convenient for a plugin to declare a derived class from LspExecuteCommand, so that their commands are only shown/enabled for corresponding filetypes:\r\n```python\r\nclass FooExecuteCommand(LspExecuteCommand):\r\n session_name = \"foo\"\r\n```\r\n\r\n**Describe the solution you'd like**\r\n```python\r\n def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:\r\n if self.capability:\r\n # At least one active session with the given capability must exist.\r\n if not bool(self.best_session(self.capability, get_position(self.view, event, point))):\r\n return False\r\n if self.session_name:\r\n # There must exist an active session with the given (config) name.\r\n if not bool(self.session_by_name(self.session_name)):\r\n return False\r\n if not self.capability and not self.session_name:\r\n # Any session will do.\r\n return any(self.sessions())\r\n return True\r\n```\r\n\r\n**Describe alternatives you've considered**\r\nMake `session_name` win against `capability`\r\n\r\n**Additional context**\r\nNotice that the implementation suggested above doesn't guarantee that the sessions with the specified name and capability are the same (in case of multiple attached sessions for a view).\n", "before_files": [{"content": "from .configurations import ConfigManager\nfrom .sessions import Session\nfrom .settings import client_configs\nfrom .typing import Optional, Any, Generator, Iterable\nfrom .windows import WindowRegistry\nimport sublime\nimport sublime_plugin\n\n\ndef sessions_for_view(view: sublime.View, capability: Optional[str] = None) -> Generator[Session, None, None]:\n \"\"\"\n Returns all sessions for this view, optionally matching the capability path.\n \"\"\"\n window = view.window()\n if window:\n manager = windows.lookup(window)\n yield from manager.sessions(view, capability)\n\n\ndef best_session(view: sublime.View, sessions: Iterable[Session], point: Optional[int] = None) -> Optional[Session]:\n if point is None:\n try:\n point = view.sel()[0].b\n except IndexError:\n return None\n try:\n return max(sessions, key=lambda s: view.score_selector(point, s.config.priority_selector)) # type: ignore\n except ValueError:\n return None\n\n\nconfigs = ConfigManager(client_configs.all)\nclient_configs.set_listener(configs.update)\nwindows = WindowRegistry(configs)\n\n\ndef get_position(view: sublime.View, event: Optional[dict] = None, point: Optional[int] = None) -> int:\n if isinstance(point, int):\n return point\n elif event:\n return view.window_to_text((event[\"x\"], event[\"y\"]))\n else:\n return view.sel()[0].begin()\n\n\nclass LspTextCommand(sublime_plugin.TextCommand):\n \"\"\"\n Inherit from this class to define your requests that should be triggered via the command palette and/or a\n keybinding.\n \"\"\"\n\n # When this is defined in a derived class, the command is enabled if and only if there exists a session attached\n # to the view that has the given capability. When both `capability` and `session_name` are defined, `capability`\n # wins.\n capability = ''\n\n # When this is defined in a derived class, the command is enabled if and only if there exists a session attached\n # to the view that has the given name. 
When both `capability` and `session_name` are defined, `capability` wins.\n session_name = ''\n\n def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:\n if self.capability:\n # At least one active session with the given capability must exist.\n return bool(self.best_session(self.capability, get_position(self.view, event, point)))\n elif self.session_name:\n # There must exist an active session with the given (config) name.\n return bool(self.session_by_name(self.session_name))\n else:\n # Any session will do.\n return any(self.sessions())\n\n def want_event(self) -> bool:\n return True\n\n def best_session(self, capability: str, point: Optional[int] = None) -> Optional[Session]:\n listener = windows.listener_for_view(self.view)\n return listener.session(capability, point) if listener else None\n\n def session_by_name(self, name: Optional[str] = None) -> Optional[Session]:\n target = name if name else self.session_name\n for session in self.sessions():\n if session.config.name == target:\n return session\n return None\n\n def sessions(self, capability: Optional[str] = None) -> Generator[Session, None, None]:\n yield from sessions_for_view(self.view, capability)\n\n\nclass LspRestartClientCommand(sublime_plugin.TextCommand):\n def run(self, edit: Any) -> None:\n window = self.view.window()\n if window:\n windows.lookup(window).restart_sessions_async()\n\n\nclass LspRecheckSessionsCommand(sublime_plugin.WindowCommand):\n def run(self) -> None:\n sublime.set_timeout_async(lambda: windows.lookup(self.window).restart_sessions_async())\n", "path": "plugin/core/registry.py"}]} | 1,969 | 450 |
gh_patches_debug_25594 | rasdani/github-patches | git_diff | chainer__chainer-4769 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
backward of F.normalize is not stable
`NormalizeL2.backward` computes 0/0 if the input contains a zero vector. PR #4190, which I wrote, caused this. Sorry.
To begin with, x/(||x|| + eps) is C^1 but not C^2 (at x=0). The correct backward might not be a good choice.
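A minimal sketch of the failure with the code shown below: the forward pass stays finite thanks to `eps`, but the gradient of a zero row becomes NaN because `x_gy_reduced /= norm_noeps` divides 0 by 0.
```python
import numpy as np
import chainer.functions as F
from chainer import Variable

x = Variable(np.zeros((1, 3), dtype=np.float32))  # a single zero vector
y = F.normalize(x)                                # finite output thanks to eps
y.grad = np.ones((1, 3), dtype=np.float32)
y.backward()
print(x.grad)  # [[nan nan nan]] instead of a finite gradient
```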
</issue>
<code>
[start of chainer/functions/normalization/l2_normalization.py]
1 import numpy
2
3 from chainer.backends import cuda
4 from chainer import function_node
5 import chainer.functions
6 from chainer import utils
7 from chainer.utils import type_check
8
9
10 class NormalizeL2(function_node.FunctionNode):
11
12 """L2 normalization"""
13
14 def __init__(self, eps=1e-5, axis=1):
15 self.eps = eps
16 if isinstance(axis, int):
17 axis = axis,
18 self.axis = axis
19
20 def check_type_forward(self, in_types):
21 type_check.expect(in_types.size() == 1)
22 x_type, = in_types
23
24 type_check.expect(
25 x_type.dtype == numpy.float32,
26 )
27
28 def forward(self, inputs):
29 self.retain_inputs((0,))
30 x, = inputs
31 xp = cuda.get_array_module(x)
32 norm = (xp.sqrt(xp.sum(xp.square(x), axis=self.axis, keepdims=True))
33 + x.dtype.type(self.eps))
34 return utils.force_array(x / norm),
35
36 def backward(self, indexes, grad_outputs):
37 x, = self.get_retained_inputs()
38 gy, = grad_outputs
39 F = chainer.functions
40
41 norm_noeps = F.sqrt(F.sum(F.square(x), axis=self.axis, keepdims=True))
42 norm = norm_noeps + self.eps
43 norm = F.broadcast_to(norm, gy.shape)
44
45 x_gy_reduced = F.sum((x * gy), axis=self.axis, keepdims=True)
46 x_gy_reduced /= norm_noeps
47 x_gy_reduced = F.broadcast_to(x_gy_reduced, gy.shape)
48 gx = gy * norm - x_gy_reduced * x
49 gx = gx / norm ** 2
50
51 return gx,
52
53
54 def normalize(x, eps=1e-5, axis=1):
55 """L2 norm squared (a.k.a.\\ Euclidean norm).
56
57 This function implements L2 normalization on a vector along the given axis.
58 No reduction is done along the normalization axis.
59
60 In the case when :obj:`axis=1` and :math:`\\mathbf{x}` is a matrix of
61 dimension :math:`(N, K)`, where :math:`N` and :math:`K` denote mini-batch
62 size and the dimension of the input vectors, this function computes an
63 output matrix :math:`\\mathbf{y}` of dimension :math:`(N, K)` by the
64 following equation:
65
66 .. math::
67 \\mathbf{y}_i =
68 {\\mathbf{x}_i \\over \\| \\mathbf{x}_i \\|_2 + \\epsilon}
69
70 :obj:`eps` is used to avoid division by zero when norm of
71 :math:`\\mathbf{x}` along the given axis is zero.
72
73 The default value of :obj:`axis` is determined for backward compatibility.
74
75 Args:
76 x (~chainer.Variable): Two dimensional output variable. The first
77 dimension is assumed to be the mini-batch dimension.
78 eps (float): Epsilon value for numerical stability.
79 axis (int or tuple of ints): Axis along which to normalize.
80
81 Returns:
82 ~chainer.Variable: The output variable which has the same shape
83 as :math:`x`.
84
85 """
86 return NormalizeL2(eps, axis).apply((x,))[0]
87
[end of chainer/functions/normalization/l2_normalization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/normalization/l2_normalization.py b/chainer/functions/normalization/l2_normalization.py
--- a/chainer/functions/normalization/l2_normalization.py
+++ b/chainer/functions/normalization/l2_normalization.py
@@ -7,6 +7,25 @@
from chainer.utils import type_check
+class _SetItemZero(function_node.FunctionNode):
+
+ """Write values to mask of zero-initialized array"""
+
+ def __init__(self, mask):
+ self.mask = mask
+
+ def forward(self, inputs):
+ x, = inputs
+ xp = cuda.get_array_module(x)
+ y = xp.zeros(self.mask.shape, x.dtype)
+ y[self.mask] = x
+ return y,
+
+ def backward(self, indices, grad_outputs):
+ g, = grad_outputs
+ return g[self.mask],
+
+
class NormalizeL2(function_node.FunctionNode):
"""L2 normalization"""
@@ -43,7 +62,14 @@
norm = F.broadcast_to(norm, gy.shape)
x_gy_reduced = F.sum((x * gy), axis=self.axis, keepdims=True)
- x_gy_reduced /= norm_noeps
+
+ # L2 normalize with eps has continuous backward. However,
+ # the backward is not differentiable for the indices of zero vectors.
+ # To avoid nan in double backward, do not compute outside of mask.
+ mask = norm_noeps.array != 0
+ x_gy_reduced, = _SetItemZero(mask).apply((
+ x_gy_reduced[mask] / norm_noeps[mask],))
+
x_gy_reduced = F.broadcast_to(x_gy_reduced, gy.shape)
gx = gy * norm - x_gy_reduced * x
gx = gx / norm ** 2
| {"golden_diff": "diff --git a/chainer/functions/normalization/l2_normalization.py b/chainer/functions/normalization/l2_normalization.py\n--- a/chainer/functions/normalization/l2_normalization.py\n+++ b/chainer/functions/normalization/l2_normalization.py\n@@ -7,6 +7,25 @@\n from chainer.utils import type_check\n \n \n+class _SetItemZero(function_node.FunctionNode):\n+\n+ \"\"\"Write values to mask of zero-initialized array\"\"\"\n+\n+ def __init__(self, mask):\n+ self.mask = mask\n+\n+ def forward(self, inputs):\n+ x, = inputs\n+ xp = cuda.get_array_module(x)\n+ y = xp.zeros(self.mask.shape, x.dtype)\n+ y[self.mask] = x\n+ return y,\n+\n+ def backward(self, indices, grad_outputs):\n+ g, = grad_outputs\n+ return g[self.mask],\n+\n+\n class NormalizeL2(function_node.FunctionNode):\n \n \"\"\"L2 normalization\"\"\"\n@@ -43,7 +62,14 @@\n norm = F.broadcast_to(norm, gy.shape)\n \n x_gy_reduced = F.sum((x * gy), axis=self.axis, keepdims=True)\n- x_gy_reduced /= norm_noeps\n+\n+ # L2 normalize with eps has continuous backward. However,\n+ # the backward is not differentiable for the indices of zero vectors.\n+ # To avoid nan in double backward, do not compute outside of mask.\n+ mask = norm_noeps.array != 0\n+ x_gy_reduced, = _SetItemZero(mask).apply((\n+ x_gy_reduced[mask] / norm_noeps[mask],))\n+\n x_gy_reduced = F.broadcast_to(x_gy_reduced, gy.shape)\n gx = gy * norm - x_gy_reduced * x\n gx = gx / norm ** 2\n", "issue": "backward of F.normalize is not stable\n`NormalizeL2.backward` computes 0/0 if the input contains a zero vector. PR #4190, I wrote, caused this. Sorry.\r\n\r\nTo begin with, x/(||x|| + eps) is C^1 but not C^2 (at x=0). The correct backward might not be a good choice.\n", "before_files": [{"content": "import numpy\n\nfrom chainer.backends import cuda\nfrom chainer import function_node\nimport chainer.functions\nfrom chainer import utils\nfrom chainer.utils import type_check\n\n\nclass NormalizeL2(function_node.FunctionNode):\n\n \"\"\"L2 normalization\"\"\"\n\n def __init__(self, eps=1e-5, axis=1):\n self.eps = eps\n if isinstance(axis, int):\n axis = axis,\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n x_type, = in_types\n\n type_check.expect(\n x_type.dtype == numpy.float32,\n )\n\n def forward(self, inputs):\n self.retain_inputs((0,))\n x, = inputs\n xp = cuda.get_array_module(x)\n norm = (xp.sqrt(xp.sum(xp.square(x), axis=self.axis, keepdims=True))\n + x.dtype.type(self.eps))\n return utils.force_array(x / norm),\n\n def backward(self, indexes, grad_outputs):\n x, = self.get_retained_inputs()\n gy, = grad_outputs\n F = chainer.functions\n\n norm_noeps = F.sqrt(F.sum(F.square(x), axis=self.axis, keepdims=True))\n norm = norm_noeps + self.eps\n norm = F.broadcast_to(norm, gy.shape)\n\n x_gy_reduced = F.sum((x * gy), axis=self.axis, keepdims=True)\n x_gy_reduced /= norm_noeps\n x_gy_reduced = F.broadcast_to(x_gy_reduced, gy.shape)\n gx = gy * norm - x_gy_reduced * x\n gx = gx / norm ** 2\n\n return gx,\n\n\ndef normalize(x, eps=1e-5, axis=1):\n \"\"\"L2 norm squared (a.k.a.\\\\ Euclidean norm).\n\n This function implements L2 normalization on a vector along the given axis.\n No reduction is done along the normalization axis.\n\n In the case when :obj:`axis=1` and :math:`\\\\mathbf{x}` is a matrix of\n dimension :math:`(N, K)`, where :math:`N` and :math:`K` denote mini-batch\n size and the dimension of the input vectors, this function computes an\n output matrix :math:`\\\\mathbf{y}` of dimension :math:`(N, K)` by 
the\n following equation:\n\n .. math::\n \\\\mathbf{y}_i =\n {\\\\mathbf{x}_i \\\\over \\\\| \\\\mathbf{x}_i \\\\|_2 + \\\\epsilon}\n\n :obj:`eps` is used to avoid division by zero when norm of\n :math:`\\\\mathbf{x}` along the given axis is zero.\n\n The default value of :obj:`axis` is determined for backward compatibility.\n\n Args:\n x (~chainer.Variable): Two dimensional output variable. The first\n dimension is assumed to be the mini-batch dimension.\n eps (float): Epsilon value for numerical stability.\n axis (int or tuple of ints): Axis along which to normalize.\n\n Returns:\n ~chainer.Variable: The output variable which has the same shape\n as :math:`x`.\n\n \"\"\"\n return NormalizeL2(eps, axis).apply((x,))[0]\n", "path": "chainer/functions/normalization/l2_normalization.py"}]} | 1,527 | 414 |
gh_patches_debug_16537 | rasdani/github-patches | git_diff | svthalia__concrexit-1662 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix adding avatar through api v2
### Describe the bug
In api v1 the avatar can be set through `api/v1/members/me` with a multipart patch request with a file labelled `photo`. Api v2 should also allow this, but instead returns 500.
### How to reproduce
Send a request to patch the photo to api v1 and see that it works.
Send the same request to api v2 and see the 500 response.
Note that I have not tried editing anything else through api v2 yet, so it might be that some other fields also don't work.
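For reference, the kind of request involved (a sketch using `requests`; the host, token and exact v2 path are placeholders mirroring the v1 route):
```python
import requests

headers = {"Authorization": "Bearer <token>"}  # placeholder credentials
files = {"photo": ("avatar.jpg", open("avatar.jpg", "rb"), "image/jpeg")}

# Works: the avatar is updated.
requests.patch("https://<host>/api/v1/members/me", headers=headers, files=files)
# Fails with HTTP 500 instead of updating the avatar.
requests.patch("https://<host>/api/v2/members/me", headers=headers, files=files)
```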
</issue>
<code>
[start of website/members/api/v2/serializers/member.py]
1 from rest_framework import serializers
2
3 from members.api.v2.serializers.profile import ProfileSerializer
4 from members.models import Member
5 from members.services import member_achievements, member_societies
6
7
8 class MemberSerializer(serializers.ModelSerializer):
9 def __init__(self, *args, **kwargs):
10 # Don't pass the 'fields' arg up to the superclass
11 detailed = kwargs.pop("detailed", True)
12
13 # Instantiate the superclass normally
14 super().__init__(*args, **kwargs)
15
16 if not detailed:
17 hidden_fields = {"achievements", "societies"}
18 existing = set(self.fields.keys())
19 for field_name in existing & hidden_fields:
20 self.fields.pop(field_name)
21
22 class Meta:
23 model = Member
24 fields = ("pk", "membership_type", "profile", "achievements", "societies")
25
26 membership_type = serializers.SerializerMethodField("_membership_type")
27 profile = ProfileSerializer(
28 fields=(
29 "photo",
30 "display_name",
31 "short_display_name",
32 "programme",
33 "starting_year",
34 "birthday",
35 "website",
36 "profile_description",
37 )
38 )
39 achievements = serializers.SerializerMethodField("_achievements")
40 societies = serializers.SerializerMethodField("_societies")
41
42 def _achievements(self, instance):
43 return member_achievements(instance)
44
45 def _societies(self, instance):
46 return member_societies(instance)
47
48 def _membership_type(self, instance):
49 membership = instance.current_membership
50 if membership:
51 return membership.type
52 return None
53
54 def update(self, instance, validated_data):
55 profile_data = validated_data.pop("profile")
56 instance.profile = self.fields["profile"].update(
57 instance=instance.profile, validated_data=profile_data
58 )
59 return instance
60
61
62 class MemberListSerializer(MemberSerializer):
63 class Meta:
64 model = Member
65 fields = (
66 "pk",
67 "membership_type",
68 "profile",
69 )
70
71
72 class MemberCurrentSerializer(MemberSerializer):
73 class Meta:
74 model = Member
75 fields = ("pk", "membership_type", "profile", "achievements", "societies")
76
77 profile = ProfileSerializer(
78 fields=(
79 "photo",
80 "display_name",
81 "short_display_name",
82 "programme",
83 "starting_year",
84 "birthday",
85 "show_birthday",
86 "website",
87 "profile_description",
88 "address_street",
89 "address_street2",
90 "address_postal_code",
91 "address_city",
92 "address_country",
93 "phone_number",
94 "website",
95 "emergency_contact",
96 "emergency_contact_phone_number",
97 "profile_description",
98 "nickname",
99 "initials",
100 "display_name_preference",
101 "receive_optin",
102 "receive_newsletter",
103 "receive_magazine",
104 "email_gsuite_only",
105 ),
106 force_show_birthday=True,
107 )
108
[end of website/members/api/v2/serializers/member.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/members/api/v2/serializers/member.py b/website/members/api/v2/serializers/member.py
--- a/website/members/api/v2/serializers/member.py
+++ b/website/members/api/v2/serializers/member.py
@@ -1,4 +1,5 @@
from rest_framework import serializers
+from rest_framework.exceptions import ValidationError
from members.api.v2.serializers.profile import ProfileSerializer
from members.models import Member
@@ -52,6 +53,9 @@
return None
def update(self, instance, validated_data):
+ if "profile" not in validated_data:
+ raise ValidationError("profile field is missing")
+
profile_data = validated_data.pop("profile")
instance.profile = self.fields["profile"].update(
instance=instance.profile, validated_data=profile_data
| {"golden_diff": "diff --git a/website/members/api/v2/serializers/member.py b/website/members/api/v2/serializers/member.py\n--- a/website/members/api/v2/serializers/member.py\n+++ b/website/members/api/v2/serializers/member.py\n@@ -1,4 +1,5 @@\n from rest_framework import serializers\n+from rest_framework.exceptions import ValidationError\n \n from members.api.v2.serializers.profile import ProfileSerializer\n from members.models import Member\n@@ -52,6 +53,9 @@\n return None\n \n def update(self, instance, validated_data):\n+ if \"profile\" not in validated_data:\n+ raise ValidationError(\"profile field is missing\")\n+\n profile_data = validated_data.pop(\"profile\")\n instance.profile = self.fields[\"profile\"].update(\n instance=instance.profile, validated_data=profile_data\n", "issue": "Fix adding avatar through api v2\n### Describe the bug\r\nIn api v1 the avatar can be set through `api/v1/members/me` with a multipart patch request with a file labelled `photo`. Api v2 should also allow this, but instead return 500.\r\n\r\n### How to reproduce\r\nSend a request to patch the photo to api v1 and see that it works.\r\nSend the same request to api v2 and see the 500 response.\r\n\r\nNote that I have not tried editing anything else through api v2 yet, so it might be that some other fields also don't work.\r\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom members.api.v2.serializers.profile import ProfileSerializer\nfrom members.models import Member\nfrom members.services import member_achievements, member_societies\n\n\nclass MemberSerializer(serializers.ModelSerializer):\n def __init__(self, *args, **kwargs):\n # Don't pass the 'fields' arg up to the superclass\n detailed = kwargs.pop(\"detailed\", True)\n\n # Instantiate the superclass normally\n super().__init__(*args, **kwargs)\n\n if not detailed:\n hidden_fields = {\"achievements\", \"societies\"}\n existing = set(self.fields.keys())\n for field_name in existing & hidden_fields:\n self.fields.pop(field_name)\n\n class Meta:\n model = Member\n fields = (\"pk\", \"membership_type\", \"profile\", \"achievements\", \"societies\")\n\n membership_type = serializers.SerializerMethodField(\"_membership_type\")\n profile = ProfileSerializer(\n fields=(\n \"photo\",\n \"display_name\",\n \"short_display_name\",\n \"programme\",\n \"starting_year\",\n \"birthday\",\n \"website\",\n \"profile_description\",\n )\n )\n achievements = serializers.SerializerMethodField(\"_achievements\")\n societies = serializers.SerializerMethodField(\"_societies\")\n\n def _achievements(self, instance):\n return member_achievements(instance)\n\n def _societies(self, instance):\n return member_societies(instance)\n\n def _membership_type(self, instance):\n membership = instance.current_membership\n if membership:\n return membership.type\n return None\n\n def update(self, instance, validated_data):\n profile_data = validated_data.pop(\"profile\")\n instance.profile = self.fields[\"profile\"].update(\n instance=instance.profile, validated_data=profile_data\n )\n return instance\n\n\nclass MemberListSerializer(MemberSerializer):\n class Meta:\n model = Member\n fields = (\n \"pk\",\n \"membership_type\",\n \"profile\",\n )\n\n\nclass MemberCurrentSerializer(MemberSerializer):\n class Meta:\n model = Member\n fields = (\"pk\", \"membership_type\", \"profile\", \"achievements\", \"societies\")\n\n profile = ProfileSerializer(\n fields=(\n \"photo\",\n \"display_name\",\n \"short_display_name\",\n \"programme\",\n \"starting_year\",\n 
\"birthday\",\n \"show_birthday\",\n \"website\",\n \"profile_description\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"phone_number\",\n \"website\",\n \"emergency_contact\",\n \"emergency_contact_phone_number\",\n \"profile_description\",\n \"nickname\",\n \"initials\",\n \"display_name_preference\",\n \"receive_optin\",\n \"receive_newsletter\",\n \"receive_magazine\",\n \"email_gsuite_only\",\n ),\n force_show_birthday=True,\n )\n", "path": "website/members/api/v2/serializers/member.py"}]} | 1,512 | 184 |
gh_patches_debug_14366 | rasdani/github-patches | git_diff | conan-io__conan-13211 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] Conan build command does not support conanfile.txt as described
### Description
The documentation about [build](https://docs.conan.io/2/reference/commands/build.html) command says:
```
usage: conan build [-h] [-v [V]] [--logger] [--name NAME] [--version VERSION] [--user USER] [--channel CHANNEL] [-of OUTPUT_FOLDER] [-b BUILD] [-r REMOTE | -nr] [-u] [-o OPTIONS_HOST] [-o:b OPTIONS_BUILD] [-o:h OPTIONS_HOST] [-pr PROFILE_HOST] [-pr:b PROFILE_BUILD]
[-pr:h PROFILE_HOST] [-s SETTINGS_HOST] [-s:b SETTINGS_BUILD] [-s:h SETTINGS_HOST] [-c CONF_HOST] [-c:b CONF_BUILD] [-c:h CONF_HOST] [-l LOCKFILE] [--lockfile-partial] [--lockfile-out LOCKFILE_OUT] [--lockfile-packages] [--lockfile-clean]
[path]
Install dependencies and call the build() method.
positional arguments:
  path                  Path to a folder containing a recipe (conanfile.py or conanfile.txt) or to a recipe file. e.g., ./my_project/conanfile.txt.
```
However, `conanfile.txt` is not accepted by the build command.
As the documentation is extracted from the command output, it should be fixed on Conan client first.
### Environment details
* Operating System+version: OSX 13
* Compiler+version: Apple-Clang 14
* Conan version: 2.0.0
* Python version: 3.10
### Steps to reproduce
1. mkdir /tmp/foo && cd /tmp/foo
2. echo "[requires]\nzlib/1.2.13" > conanfile.txt
3. conan build .
4. Or, conan build ./conanfile.txt
### Logs
```
% conan build .
ERROR: Conanfile not found at /private/tmp/foo/conanfile.py
% conan build ./conanfile.txt
ERROR: A conanfile.py is needed, /private/tmp/conantxt/conanfile.txt is not acceptable
```
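The second error above already points at the likely cause: in the command implementation quoted below, the help text advertises `conanfile.txt`, while the path lookup asks for a Python recipe only (excerpt, not a fix):
```python
parser.add_argument("path", nargs="?",
                    help="Path to a folder containing a recipe (conanfile.py "
                         "or conanfile.txt) or to a recipe file. e.g., "
                         "./my_project/conanfile.txt.")
...
path = conan_api.local.get_conanfile_path(args.path, cwd, py=True)  # py=True only accepts conanfile.py
```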
</issue>
<code>
[start of conan/cli/commands/build.py]
1 import os
2
3 from conan.api.output import ConanOutput
4 from conan.cli.command import conan_command
5 from conan.cli.commands import make_abs_path
6 from conan.cli.args import add_lockfile_args, add_common_install_arguments, add_reference_args
7 from conan.internal.conan_app import ConanApp
8 from conan.cli.printers.graph import print_graph_packages, print_graph_basic
9 from conans.client.conanfile.build import run_build_method
10
11
12 @conan_command(group='Creator')
13 def build(conan_api, parser, *args):
14 """
15 Install dependencies and call the build() method.
16 """
17 parser.add_argument("path", nargs="?",
18 help="Path to a folder containing a recipe (conanfile.py "
19 "or conanfile.txt) or to a recipe file. e.g., "
20 "./my_project/conanfile.txt.")
21 add_reference_args(parser)
22 # TODO: Missing --build-require argument and management
23 parser.add_argument("-of", "--output-folder",
24 help='The root output folder for generated and build files')
25 add_common_install_arguments(parser)
26 add_lockfile_args(parser)
27 args = parser.parse_args(*args)
28
29 cwd = os.getcwd()
30 path = conan_api.local.get_conanfile_path(args.path, cwd, py=True)
31 folder = os.path.dirname(path)
32 remotes = conan_api.remotes.list(args.remote) if not args.no_remote else []
33
34 lockfile = conan_api.lockfile.get_lockfile(lockfile=args.lockfile,
35 conanfile_path=path,
36 cwd=cwd,
37 partial=args.lockfile_partial)
38 profile_host, profile_build = conan_api.profiles.get_profiles_from_args(args)
39
40 deps_graph = conan_api.graph.load_graph_consumer(path, args.name, args.version,
41 args.user, args.channel,
42 profile_host, profile_build, lockfile, remotes,
43 args.update)
44 print_graph_basic(deps_graph)
45 deps_graph.report_graph_error()
46 conan_api.graph.analyze_binaries(deps_graph, args.build, remotes=remotes, update=args.update,
47 lockfile=lockfile)
48 print_graph_packages(deps_graph)
49
50 out = ConanOutput()
51 out.title("Installing packages")
52 conan_api.install.install_binaries(deps_graph=deps_graph, remotes=remotes)
53
54 source_folder = folder
55 output_folder = make_abs_path(args.output_folder, cwd) if args.output_folder else None
56 out.title("Finalizing install (deploy, generators)")
57 conan_api.install.install_consumer(deps_graph=deps_graph, source_folder=source_folder,
58 output_folder=output_folder)
59
60 # TODO: Decide API to put this
61 app = ConanApp(conan_api.cache_folder)
62 conanfile = deps_graph.root.conanfile
63 conanfile.folders.set_base_package(conanfile.folders.base_build)
64 run_build_method(conanfile, app.hook_manager)
65
66 lockfile = conan_api.lockfile.update_lockfile(lockfile, deps_graph, args.lockfile_packages,
67 clean=args.lockfile_clean)
68 conan_api.lockfile.save_lockfile(lockfile, args.lockfile_out, source_folder)
69 return deps_graph
70
[end of conan/cli/commands/build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conan/cli/commands/build.py b/conan/cli/commands/build.py
--- a/conan/cli/commands/build.py
+++ b/conan/cli/commands/build.py
@@ -15,9 +15,9 @@
Install dependencies and call the build() method.
"""
parser.add_argument("path", nargs="?",
- help="Path to a folder containing a recipe (conanfile.py "
- "or conanfile.txt) or to a recipe file. e.g., "
- "./my_project/conanfile.txt.")
+ help='Path to a python-based recipe file or a folder '
+ 'containing a conanfile.py recipe. conanfile.txt '
+ 'cannot be used with conan build.')
add_reference_args(parser)
# TODO: Missing --build-require argument and management
parser.add_argument("-of", "--output-folder",
| {"golden_diff": "diff --git a/conan/cli/commands/build.py b/conan/cli/commands/build.py\n--- a/conan/cli/commands/build.py\n+++ b/conan/cli/commands/build.py\n@@ -15,9 +15,9 @@\n Install dependencies and call the build() method.\n \"\"\"\n parser.add_argument(\"path\", nargs=\"?\",\n- help=\"Path to a folder containing a recipe (conanfile.py \"\n- \"or conanfile.txt) or to a recipe file. e.g., \"\n- \"./my_project/conanfile.txt.\")\n+ help='Path to a python-based recipe file or a folder '\n+ 'containing a conanfile.py recipe. conanfile.txt '\n+ 'cannot be used with conan build.')\n add_reference_args(parser)\n # TODO: Missing --build-require argument and management\n parser.add_argument(\"-of\", \"--output-folder\",\n", "issue": "[bug] Conan build command does not support conanfile.txt as described\n### Description\r\n\r\nThe documentation about [build](https://docs.conan.io/2/reference/commands/build.html) command says:\r\n\r\n```\r\nusage: conan build [-h] [-v [V]] [--logger] [--name NAME] [--version VERSION] [--user USER] [--channel CHANNEL] [-of OUTPUT_FOLDER] [-b BUILD] [-r REMOTE | -nr] [-u] [-o OPTIONS_HOST] [-o:b OPTIONS_BUILD] [-o:h OPTIONS_HOST] [-pr PROFILE_HOST] [-pr:b PROFILE_BUILD]\r\n [-pr:h PROFILE_HOST] [-s SETTINGS_HOST] [-s:b SETTINGS_BUILD] [-s:h SETTINGS_HOST] [-c CONF_HOST] [-c:b CONF_BUILD] [-c:h CONF_HOST] [-l LOCKFILE] [--lockfile-partial] [--lockfile-out LOCKFILE_OUT] [--lockfile-packages] [--lockfile-clean]\r\n [path]\r\n\r\nInstall dependencies and call the build() method.\r\n\r\npositional arguments:\r\n path Path to a folder containing a recipe (conanfile.py or conanfile.txt) or to a recipe file. e.g., ./my_project/conanfile.txt.\r\n```\r\n\r\nHowever, `conanfile.txt` is not acceptable by build command.\r\n\r\nAs the documentation is extracted from the command output, it should be fixed on Conan client first.\r\n\r\n\r\n### Environment details\r\n\r\n* Operating System+version: OSX 13\r\n* Compiler+version: Apple-Clang 14\r\n* Conan version: 2.0.0\r\n* Python version: 3.10\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. mkdir /tmp/foo && cd /tmp/foo\r\n2. echo \"[requires]\\nzlib/1.2.13\" > conanfile.txt\r\n3. conan build .\r\n4. Or, conan build ./conanfile.txt\r\n\r\n### Logs\r\n\r\n```\r\n% conan build .\r\nERROR: Conanfile not found at /private/tmp/foo/conanfile.py\r\n\r\n% conan build ./conanfile.txt \r\nERROR: A conanfile.py is needed, /private/tmp/conantxt/conanfile.txt is not acceptable\r\n```\n", "before_files": [{"content": "import os\n\nfrom conan.api.output import ConanOutput\nfrom conan.cli.command import conan_command\nfrom conan.cli.commands import make_abs_path\nfrom conan.cli.args import add_lockfile_args, add_common_install_arguments, add_reference_args\nfrom conan.internal.conan_app import ConanApp\nfrom conan.cli.printers.graph import print_graph_packages, print_graph_basic\nfrom conans.client.conanfile.build import run_build_method\n\n\n@conan_command(group='Creator')\ndef build(conan_api, parser, *args):\n \"\"\"\n Install dependencies and call the build() method.\n \"\"\"\n parser.add_argument(\"path\", nargs=\"?\",\n help=\"Path to a folder containing a recipe (conanfile.py \"\n \"or conanfile.txt) or to a recipe file. 
e.g., \"\n \"./my_project/conanfile.txt.\")\n add_reference_args(parser)\n # TODO: Missing --build-require argument and management\n parser.add_argument(\"-of\", \"--output-folder\",\n help='The root output folder for generated and build files')\n add_common_install_arguments(parser)\n add_lockfile_args(parser)\n args = parser.parse_args(*args)\n\n cwd = os.getcwd()\n path = conan_api.local.get_conanfile_path(args.path, cwd, py=True)\n folder = os.path.dirname(path)\n remotes = conan_api.remotes.list(args.remote) if not args.no_remote else []\n\n lockfile = conan_api.lockfile.get_lockfile(lockfile=args.lockfile,\n conanfile_path=path,\n cwd=cwd,\n partial=args.lockfile_partial)\n profile_host, profile_build = conan_api.profiles.get_profiles_from_args(args)\n\n deps_graph = conan_api.graph.load_graph_consumer(path, args.name, args.version,\n args.user, args.channel,\n profile_host, profile_build, lockfile, remotes,\n args.update)\n print_graph_basic(deps_graph)\n deps_graph.report_graph_error()\n conan_api.graph.analyze_binaries(deps_graph, args.build, remotes=remotes, update=args.update,\n lockfile=lockfile)\n print_graph_packages(deps_graph)\n\n out = ConanOutput()\n out.title(\"Installing packages\")\n conan_api.install.install_binaries(deps_graph=deps_graph, remotes=remotes)\n\n source_folder = folder\n output_folder = make_abs_path(args.output_folder, cwd) if args.output_folder else None\n out.title(\"Finalizing install (deploy, generators)\")\n conan_api.install.install_consumer(deps_graph=deps_graph, source_folder=source_folder,\n output_folder=output_folder)\n\n # TODO: Decide API to put this\n app = ConanApp(conan_api.cache_folder)\n conanfile = deps_graph.root.conanfile\n conanfile.folders.set_base_package(conanfile.folders.base_build)\n run_build_method(conanfile, app.hook_manager)\n\n lockfile = conan_api.lockfile.update_lockfile(lockfile, deps_graph, args.lockfile_packages,\n clean=args.lockfile_clean)\n conan_api.lockfile.save_lockfile(lockfile, args.lockfile_out, source_folder)\n return deps_graph\n", "path": "conan/cli/commands/build.py"}]} | 1,804 | 194 |
gh_patches_debug_2269 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2501 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AASIN and isfdb not editable
Somehow during the merge some code must have been lost, because...
<img width="640" alt="Bildschirmfoto 2022-12-11 um 21 29 47" src="https://user-images.githubusercontent.com/2017105/206927195-f9b27bcc-2f3a-46eb-ab1d-84340e5fa061.png">
</issue>
<code>
[start of bookwyrm/forms/books.py]
1 """ using django model forms """
2 from django import forms
3
4 from bookwyrm import models
5 from bookwyrm.models.fields import ClearableFileInputWithWarning
6 from .custom_form import CustomForm
7 from .widgets import ArrayWidget, SelectDateWidget, Select
8
9
10 # pylint: disable=missing-class-docstring
11 class CoverForm(CustomForm):
12 class Meta:
13 model = models.Book
14 fields = ["cover"]
15 help_texts = {f: None for f in fields}
16
17
18 class EditionForm(CustomForm):
19 class Meta:
20 model = models.Edition
21 fields = [
22 "title",
23 "subtitle",
24 "description",
25 "series",
26 "series_number",
27 "languages",
28 "subjects",
29 "publishers",
30 "first_published_date",
31 "published_date",
32 "cover",
33 "physical_format",
34 "physical_format_detail",
35 "pages",
36 "isbn_13",
37 "isbn_10",
38 "openlibrary_key",
39 "inventaire_id",
40 "goodreads_key",
41 "oclc_number",
42 "asin",
43 ]
44 widgets = {
45 "title": forms.TextInput(attrs={"aria-describedby": "desc_title"}),
46 "subtitle": forms.TextInput(attrs={"aria-describedby": "desc_subtitle"}),
47 "description": forms.Textarea(
48 attrs={"aria-describedby": "desc_description"}
49 ),
50 "series": forms.TextInput(attrs={"aria-describedby": "desc_series"}),
51 "series_number": forms.TextInput(
52 attrs={"aria-describedby": "desc_series_number"}
53 ),
54 "subjects": ArrayWidget(),
55 "languages": forms.TextInput(
56 attrs={"aria-describedby": "desc_languages_help desc_languages"}
57 ),
58 "publishers": forms.TextInput(
59 attrs={"aria-describedby": "desc_publishers_help desc_publishers"}
60 ),
61 "first_published_date": SelectDateWidget(
62 attrs={"aria-describedby": "desc_first_published_date"}
63 ),
64 "published_date": SelectDateWidget(
65 attrs={"aria-describedby": "desc_published_date"}
66 ),
67 "cover": ClearableFileInputWithWarning(
68 attrs={"aria-describedby": "desc_cover"}
69 ),
70 "physical_format": Select(
71 attrs={"aria-describedby": "desc_physical_format"}
72 ),
73 "physical_format_detail": forms.TextInput(
74 attrs={"aria-describedby": "desc_physical_format_detail"}
75 ),
76 "pages": forms.NumberInput(attrs={"aria-describedby": "desc_pages"}),
77 "isbn_13": forms.TextInput(attrs={"aria-describedby": "desc_isbn_13"}),
78 "isbn_10": forms.TextInput(attrs={"aria-describedby": "desc_isbn_10"}),
79 "openlibrary_key": forms.TextInput(
80 attrs={"aria-describedby": "desc_openlibrary_key"}
81 ),
82 "inventaire_id": forms.TextInput(
83 attrs={"aria-describedby": "desc_inventaire_id"}
84 ),
85 "goodreads_key": forms.TextInput(
86 attrs={"aria-describedby": "desc_goodreads_key"}
87 ),
88 "oclc_number": forms.TextInput(
89 attrs={"aria-describedby": "desc_oclc_number"}
90 ),
91 "ASIN": forms.TextInput(attrs={"aria-describedby": "desc_ASIN"}),
92 "AASIN": forms.TextInput(attrs={"aria-describedby": "desc_AASIN"}),
93 "isfdb": forms.TextInput(attrs={"aria-describedby": "desc_isfdb"}),
94 }
95
96
97 class EditionFromWorkForm(CustomForm):
98 def __init__(self, *args, **kwargs):
99 super().__init__(*args, **kwargs)
100 # make all fields hidden
101 for visible in self.visible_fields():
102 visible.field.widget = forms.HiddenInput()
103
104 class Meta:
105 model = models.Work
106 fields = [
107 "title",
108 "subtitle",
109 "authors",
110 "description",
111 "languages",
112 "series",
113 "series_number",
114 "subjects",
115 "subject_places",
116 "cover",
117 "first_published_date",
118 ]
119
[end of bookwyrm/forms/books.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/forms/books.py b/bookwyrm/forms/books.py
--- a/bookwyrm/forms/books.py
+++ b/bookwyrm/forms/books.py
@@ -40,6 +40,8 @@
"goodreads_key",
"oclc_number",
"asin",
+ "aasin",
+ "isfdb",
]
widgets = {
"title": forms.TextInput(attrs={"aria-describedby": "desc_title"}),
| {"golden_diff": "diff --git a/bookwyrm/forms/books.py b/bookwyrm/forms/books.py\n--- a/bookwyrm/forms/books.py\n+++ b/bookwyrm/forms/books.py\n@@ -40,6 +40,8 @@\n \"goodreads_key\",\n \"oclc_number\",\n \"asin\",\n+ \"aasin\",\n+ \"isfdb\",\n ]\n widgets = {\n \"title\": forms.TextInput(attrs={\"aria-describedby\": \"desc_title\"}),\n", "issue": "AASIN and isfdb not editable\nSomehow during the merge some code most be gone lost because...\r\n\r\n<img width=\"640\" alt=\"Bildschirm\u00adfoto 2022-12-11 um 21 29 47\" src=\"https://user-images.githubusercontent.com/2017105/206927195-f9b27bcc-2f3a-46eb-ab1d-84340e5fa061.png\">\r\n\n", "before_files": [{"content": "\"\"\" using django model forms \"\"\"\nfrom django import forms\n\nfrom bookwyrm import models\nfrom bookwyrm.models.fields import ClearableFileInputWithWarning\nfrom .custom_form import CustomForm\nfrom .widgets import ArrayWidget, SelectDateWidget, Select\n\n\n# pylint: disable=missing-class-docstring\nclass CoverForm(CustomForm):\n class Meta:\n model = models.Book\n fields = [\"cover\"]\n help_texts = {f: None for f in fields}\n\n\nclass EditionForm(CustomForm):\n class Meta:\n model = models.Edition\n fields = [\n \"title\",\n \"subtitle\",\n \"description\",\n \"series\",\n \"series_number\",\n \"languages\",\n \"subjects\",\n \"publishers\",\n \"first_published_date\",\n \"published_date\",\n \"cover\",\n \"physical_format\",\n \"physical_format_detail\",\n \"pages\",\n \"isbn_13\",\n \"isbn_10\",\n \"openlibrary_key\",\n \"inventaire_id\",\n \"goodreads_key\",\n \"oclc_number\",\n \"asin\",\n ]\n widgets = {\n \"title\": forms.TextInput(attrs={\"aria-describedby\": \"desc_title\"}),\n \"subtitle\": forms.TextInput(attrs={\"aria-describedby\": \"desc_subtitle\"}),\n \"description\": forms.Textarea(\n attrs={\"aria-describedby\": \"desc_description\"}\n ),\n \"series\": forms.TextInput(attrs={\"aria-describedby\": \"desc_series\"}),\n \"series_number\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_series_number\"}\n ),\n \"subjects\": ArrayWidget(),\n \"languages\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_languages_help desc_languages\"}\n ),\n \"publishers\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_publishers_help desc_publishers\"}\n ),\n \"first_published_date\": SelectDateWidget(\n attrs={\"aria-describedby\": \"desc_first_published_date\"}\n ),\n \"published_date\": SelectDateWidget(\n attrs={\"aria-describedby\": \"desc_published_date\"}\n ),\n \"cover\": ClearableFileInputWithWarning(\n attrs={\"aria-describedby\": \"desc_cover\"}\n ),\n \"physical_format\": Select(\n attrs={\"aria-describedby\": \"desc_physical_format\"}\n ),\n \"physical_format_detail\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_physical_format_detail\"}\n ),\n \"pages\": forms.NumberInput(attrs={\"aria-describedby\": \"desc_pages\"}),\n \"isbn_13\": forms.TextInput(attrs={\"aria-describedby\": \"desc_isbn_13\"}),\n \"isbn_10\": forms.TextInput(attrs={\"aria-describedby\": \"desc_isbn_10\"}),\n \"openlibrary_key\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_openlibrary_key\"}\n ),\n \"inventaire_id\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_inventaire_id\"}\n ),\n \"goodreads_key\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_goodreads_key\"}\n ),\n \"oclc_number\": forms.TextInput(\n attrs={\"aria-describedby\": \"desc_oclc_number\"}\n ),\n \"ASIN\": forms.TextInput(attrs={\"aria-describedby\": \"desc_ASIN\"}),\n \"AASIN\": 
forms.TextInput(attrs={\"aria-describedby\": \"desc_AASIN\"}),\n \"isfdb\": forms.TextInput(attrs={\"aria-describedby\": \"desc_isfdb\"}),\n }\n\n\nclass EditionFromWorkForm(CustomForm):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # make all fields hidden\n for visible in self.visible_fields():\n visible.field.widget = forms.HiddenInput()\n\n class Meta:\n model = models.Work\n fields = [\n \"title\",\n \"subtitle\",\n \"authors\",\n \"description\",\n \"languages\",\n \"series\",\n \"series_number\",\n \"subjects\",\n \"subject_places\",\n \"cover\",\n \"first_published_date\",\n ]\n", "path": "bookwyrm/forms/books.py"}]} | 1,730 | 98 |
gh_patches_debug_2626 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-7221 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-1943] Loosen pin on `jsonschema` (via `hologram`)
For more context on our latest thinking around dependencies (how & why we pin today, and how we want it to change):
- https://github.com/dbt-labs/dbt-core/discussions/6495
### Summary
`dbt-core` depends on `hologram`, and as such it also includes `hologram`'s transitive dependencies on `jsonschema` and `python-dateutil`. `hologram`'s upper bound on `jsonschema` in particular is causing issues for some folks trying to install `dbt-core` alongside other popular tools, such as Airflow:
- https://github.com/dbt-labs/hologram/issues/52
- https://github.com/dbt-labs/hologram/pull/51
### Short term
- Try removing upper bound on `jsonschema`
- Release a new version of `hologram` with no / looser upper bound
- Support the new version of `hologram` [in `dbt-core`](https://github.com/dbt-labs/dbt-core/blob/a8abc496323f741d3218d298d5d2bb118fa01017/core/setup.py#L54)
### Medium term
Remove `dbt-core`'s dependency on `hologram` entirely. It doesn't do nearly as much for us today as it used to, and the validation errors it raises aren't even all that nice.
- https://github.com/dbt-labs/dbt-core/issues/6776
</issue>
<code>
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 7, 2):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.7.2 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.5.0b4"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": ["dbt = dbt.cli.main:cli"],
47 },
48 install_requires=[
49 "Jinja2==3.1.2",
50 "agate>=1.6,<1.7.1",
51 "click>=7.0,<9",
52 "colorama>=0.3.9,<0.4.7",
53 "hologram>=0.0.14,<=0.0.15",
54 "isodate>=0.6,<0.7",
55 "logbook>=1.5,<1.6",
56 "mashumaro[msgpack]==3.3.1",
57 "minimal-snowplow-tracker==0.0.2",
58 "networkx>=2.3,<2.8.1;python_version<'3.8'",
59 "networkx>=2.3,<3;python_version>='3.8'",
60 "packaging>20.9",
61 "sqlparse>=0.2.3,<0.5",
62 "dbt-extractor~=0.4.1",
63 "typing-extensions>=3.7.4",
64 "werkzeug>=1,<3",
65 "pathspec>=0.9,<0.12",
66 "protobuf>=3.18.3",
67 "pytz>=2015.7",
68 # the following are all to match snowflake-connector-python
69 "requests<3.0.0",
70 "idna>=2.5,<4",
71 "cffi>=1.9,<2.0.0",
72 "pyyaml>=6.0",
73 ],
74 zip_safe=False,
75 classifiers=[
76 "Development Status :: 5 - Production/Stable",
77 "License :: OSI Approved :: Apache Software License",
78 "Operating System :: Microsoft :: Windows",
79 "Operating System :: MacOS :: MacOS X",
80 "Operating System :: POSIX :: Linux",
81 "Programming Language :: Python :: 3.7",
82 "Programming Language :: Python :: 3.8",
83 "Programming Language :: Python :: 3.9",
84 "Programming Language :: Python :: 3.10",
85 "Programming Language :: Python :: 3.11",
86 ],
87 python_requires=">=3.7.2",
88 )
89
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -50,7 +50,7 @@
"agate>=1.6,<1.7.1",
"click>=7.0,<9",
"colorama>=0.3.9,<0.4.7",
- "hologram>=0.0.14,<=0.0.15",
+ "hologram>=0.0.14,<=0.0.16",
"isodate>=0.6,<0.7",
"logbook>=1.5,<1.6",
"mashumaro[msgpack]==3.3.1",
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -50,7 +50,7 @@\n \"agate>=1.6,<1.7.1\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.7\",\n- \"hologram>=0.0.14,<=0.0.15\",\n+ \"hologram>=0.0.14,<=0.0.16\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro[msgpack]==3.3.1\",\n", "issue": "[CT-1943] Loosen pin on `jsonschema` (via `hologram`)\nFor more context on our latest thinking around dependencies (how & why we pin today, and how we want it to change):\r\n- https://github.com/dbt-labs/dbt-core/discussions/6495\r\n\r\n### Summary\r\n\r\n`dbt-core` depends on `hologram`, and as such it also includes `hologram`'s transitive dependencies on `jsonschema` and `python-dateutil`. `hologram`'s upper bound on `jsonschema` in particular is causing issues for some folks trying to install `dbt-core` alongside other popular tools, such as Airflow:\r\n- https://github.com/dbt-labs/hologram/issues/52\r\n- https://github.com/dbt-labs/hologram/pull/51\r\n\r\n### Short term\r\n\r\n- Try removing upper bound on `jsonschema`\r\n- Release a new version of `hologram` with no / looser upper bound\r\n- Support the new version of `hologram` [in `dbt-core`](https://github.com/dbt-labs/dbt-core/blob/a8abc496323f741d3218d298d5d2bb118fa01017/core/setup.py#L54)\r\n\r\n### Medium term\r\n\r\nRemove `dbt-core`'s dependency on `hologram` entirely. It doesn't do nearly as much for us today as it used to, and the validation errors it raises aren't even all that nice.\r\n- https://github.com/dbt-labs/dbt-core/issues/6776\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.5.0b4\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n \"agate>=1.6,<1.7.1\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.7\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro[msgpack]==3.3.1\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>20.9\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n \"pathspec>=0.9,<0.12\",\n \"protobuf>=3.18.3\",\n 
\"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n \"pyyaml>=6.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}]} | 1,871 | 163 |
gh_patches_debug_12289 | rasdani/github-patches | git_diff | modin-project__modin-794 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pyarrow is a dependency but is not in `install_requires`
### Describe the problem
<!-- Describe the problem clearly here. -->
The source comes from this file: https://github.com/modin-project/modin/blob/master/modin/experimental/engines/pyarrow_on_ray/io.py#L4-L5
### Source code / logs
<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
</issue>
<code>
[start of modin/experimental/engines/pyarrow_on_ray/io.py]
1 from io import BytesIO
2
3 import pandas
4 import pyarrow as pa
5 import pyarrow.csv as csv
6
7 from modin.backends.pyarrow.query_compiler import PyarrowQueryCompiler
8 from modin.data_management.utils import get_default_chunksize
9 from modin.engines.ray.generic.io import RayIO
10 from modin.experimental.engines.pyarrow_on_ray.frame.data import PyarrowOnRayFrame
11 from modin.experimental.engines.pyarrow_on_ray.frame.partition import (
12 PyarrowOnRayFramePartition,
13 )
14 from modin import __execution_engine__
15
16 if __execution_engine__ == "Ray":
17 import ray
18
19 @ray.remote
20 def _read_csv_with_offset_pyarrow_on_ray(
21 fname, num_splits, start, end, kwargs, header
22 ): # pragma: no cover
23 """Use a Ray task to read a chunk of a CSV into a pyarrow Table.
24 Note: Ray functions are not detected by codecov (thus pragma: no cover)
25 Args:
26 fname: The filename of the file to open.
27 num_splits: The number of splits (partitions) to separate the DataFrame into.
28 start: The start byte offset.
29 end: The end byte offset.
30 kwargs: The kwargs for the pyarrow `read_csv` function.
31 header: The header of the file.
32 Returns:
33 A list containing the split pyarrow Tables and the the number of
34 rows of the tables as the last element. This is used to determine
35 the total length of the DataFrame to build a default Index.
36 """
37 bio = open(fname, "rb")
38 # The header line for the CSV file
39 first_line = bio.readline()
40 bio.seek(start)
41 to_read = header + first_line + bio.read(end - start)
42 bio.close()
43 table = csv.read_csv(
44 BytesIO(to_read), parse_options=csv.ParseOptions(header_rows=1)
45 )
46 chunksize = get_default_chunksize(table.num_columns, num_splits)
47 chunks = [
48 pa.Table.from_arrays(table.columns[chunksize * i : chunksize * (i + 1)])
49 for i in range(num_splits)
50 ]
51 return chunks + [
52 table.num_rows,
53 pandas.Series(
54 [t.to_pandas_dtype() for t in table.schema.types],
55 index=table.schema.names,
56 ),
57 ]
58
59
60 class PyarrowOnRayIO(RayIO):
61 frame_cls = PyarrowOnRayFrame
62 frame_partition_cls = PyarrowOnRayFramePartition
63 query_compiler_cls = PyarrowQueryCompiler
64
65 read_parquet_remote_task = None
66 if __execution_engine__ == "Ray":
67 read_csv_remote_task = _read_csv_with_offset_pyarrow_on_ray
68 read_hdf_remote_task = None
69 read_feather_remote_task = None
70 read_sql_remote_task = None
71
[end of modin/experimental/engines/pyarrow_on_ray/io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modin/experimental/engines/pyarrow_on_ray/io.py b/modin/experimental/engines/pyarrow_on_ray/io.py
--- a/modin/experimental/engines/pyarrow_on_ray/io.py
+++ b/modin/experimental/engines/pyarrow_on_ray/io.py
@@ -1,8 +1,6 @@
from io import BytesIO
import pandas
-import pyarrow as pa
-import pyarrow.csv as csv
from modin.backends.pyarrow.query_compiler import PyarrowQueryCompiler
from modin.data_management.utils import get_default_chunksize
@@ -15,6 +13,8 @@
if __execution_engine__ == "Ray":
import ray
+ import pyarrow as pa
+ import pyarrow.csv as csv
@ray.remote
def _read_csv_with_offset_pyarrow_on_ray(
| {"golden_diff": "diff --git a/modin/experimental/engines/pyarrow_on_ray/io.py b/modin/experimental/engines/pyarrow_on_ray/io.py\n--- a/modin/experimental/engines/pyarrow_on_ray/io.py\n+++ b/modin/experimental/engines/pyarrow_on_ray/io.py\n@@ -1,8 +1,6 @@\n from io import BytesIO\n \n import pandas\n-import pyarrow as pa\n-import pyarrow.csv as csv\n \n from modin.backends.pyarrow.query_compiler import PyarrowQueryCompiler\n from modin.data_management.utils import get_default_chunksize\n@@ -15,6 +13,8 @@\n \n if __execution_engine__ == \"Ray\":\n import ray\n+ import pyarrow as pa\n+ import pyarrow.csv as csv\n \n @ray.remote\n def _read_csv_with_offset_pyarrow_on_ray(\n", "issue": "pyarrow is a dependency but is not in `install_requires`\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\nThe source comes from this file: https://github.com/modin-project/modin/blob/master/modin/experimental/engines/pyarrow_on_ray/io.py#L4-L5\r\n\r\n### Source code / logs\r\n<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->\r\n\n", "before_files": [{"content": "from io import BytesIO\n\nimport pandas\nimport pyarrow as pa\nimport pyarrow.csv as csv\n\nfrom modin.backends.pyarrow.query_compiler import PyarrowQueryCompiler\nfrom modin.data_management.utils import get_default_chunksize\nfrom modin.engines.ray.generic.io import RayIO\nfrom modin.experimental.engines.pyarrow_on_ray.frame.data import PyarrowOnRayFrame\nfrom modin.experimental.engines.pyarrow_on_ray.frame.partition import (\n PyarrowOnRayFramePartition,\n)\nfrom modin import __execution_engine__\n\nif __execution_engine__ == \"Ray\":\n import ray\n\n @ray.remote\n def _read_csv_with_offset_pyarrow_on_ray(\n fname, num_splits, start, end, kwargs, header\n ): # pragma: no cover\n \"\"\"Use a Ray task to read a chunk of a CSV into a pyarrow Table.\n Note: Ray functions are not detected by codecov (thus pragma: no cover)\n Args:\n fname: The filename of the file to open.\n num_splits: The number of splits (partitions) to separate the DataFrame into.\n start: The start byte offset.\n end: The end byte offset.\n kwargs: The kwargs for the pyarrow `read_csv` function.\n header: The header of the file.\n Returns:\n A list containing the split pyarrow Tables and the the number of\n rows of the tables as the last element. 
This is used to determine\n the total length of the DataFrame to build a default Index.\n \"\"\"\n bio = open(fname, \"rb\")\n # The header line for the CSV file\n first_line = bio.readline()\n bio.seek(start)\n to_read = header + first_line + bio.read(end - start)\n bio.close()\n table = csv.read_csv(\n BytesIO(to_read), parse_options=csv.ParseOptions(header_rows=1)\n )\n chunksize = get_default_chunksize(table.num_columns, num_splits)\n chunks = [\n pa.Table.from_arrays(table.columns[chunksize * i : chunksize * (i + 1)])\n for i in range(num_splits)\n ]\n return chunks + [\n table.num_rows,\n pandas.Series(\n [t.to_pandas_dtype() for t in table.schema.types],\n index=table.schema.names,\n ),\n ]\n\n\nclass PyarrowOnRayIO(RayIO):\n frame_cls = PyarrowOnRayFrame\n frame_partition_cls = PyarrowOnRayFramePartition\n query_compiler_cls = PyarrowQueryCompiler\n\n read_parquet_remote_task = None\n if __execution_engine__ == \"Ray\":\n read_csv_remote_task = _read_csv_with_offset_pyarrow_on_ray\n read_hdf_remote_task = None\n read_feather_remote_task = None\n read_sql_remote_task = None\n", "path": "modin/experimental/engines/pyarrow_on_ray/io.py"}]} | 1,411 | 183 |
gh_patches_debug_64991 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Autogenerated README.md missing final newline
## The Problem
As I've confirmed is the case on multiple repos here, including our own ``spyder-feedstock`` and ``spyder-kernels-feedstock`` as well as two arbitrary conda-forge repos I checked, the last line in README.md lacks a terminating newline (LF/``x0D``), and is thus ill-formed. I'd be happy to submit a PR to fix it since I imagine it is probably pretty trivial, if someone more knowledgeable than me can let me know how to approach it.
## Proposed Solutions
A naive hack would seem to be just writing an additional ``\n`` [here](https://github.com/conda-forge/conda-smithy/blob/855f23bb96efb1cbdbdc5e60dfb9bbdd3e142d31/conda_smithy/configure_feedstock.py#L718), but editing the [template ](https://github.com/conda-forge/conda-smithy/blob/master/conda_smithy/templates/README.md.tmpl) would seem to make far more sense. However, the template *has* a trailing newline and hasn't been edited in a while, so not sure what's going on—is it not writing the last one; is it getting stripped, or what?
Thanks!
</issue>
<code>
[start of conda_smithy/vendored/__init__.py]
[end of conda_smithy/vendored/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda_smithy/vendored/__init__.py b/conda_smithy/vendored/__init__.py
--- a/conda_smithy/vendored/__init__.py
+++ b/conda_smithy/vendored/__init__.py
@@ -0,0 +1 @@
+
| {"golden_diff": "diff --git a/conda_smithy/vendored/__init__.py b/conda_smithy/vendored/__init__.py\n--- a/conda_smithy/vendored/__init__.py\n+++ b/conda_smithy/vendored/__init__.py\n@@ -0,0 +1 @@\n+\n", "issue": "Autogenerated README.md missing final newline\n## The Problem\r\n\r\nAs I've confirmed is the case on multiple repos here, including our own ``spyder-feedstock`` and ``spyder-kernels-feedstock`` as well as two arbitrary conda-forge repos I checked conda-forge, the last line in README.md lacks a terminating newline (LF/``x0D``), and is thus ill-formed. I'd be happy to submit a PR to fix it since I imagine it is probably pretty trivial, if someone more knowlegable than me can let me know how to approach it. \r\n\r\n## Proposed Solutions\r\n\r\nA naive hack would seem to be just writing an additional ``\\n`` [here](https://github.com/conda-forge/conda-smithy/blob/855f23bb96efb1cbdbdc5e60dfb9bbdd3e142d31/conda_smithy/configure_feedstock.py#L718), but editing the [template ](https://github.com/conda-forge/conda-smithy/blob/master/conda_smithy/templates/README.md.tmpl) would seem to make far more sense. However, the template *has* a trailing newline and hasn't been edited in a while, so not sure what's going on\u2014is it not writing the last one; is it getting stripped, or what?\r\n\r\nThanks!\n", "before_files": [{"content": "", "path": "conda_smithy/vendored/__init__.py"}]} | 843 | 70 |
gh_patches_debug_3018 | rasdani/github-patches | git_diff | Mailu__Mailu-958 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Using external smtp relay server for outgoing emails
Hi,
I need to use mailchannels.com to relay all outgoing emails from my Mailu install. This doc describes what I need to change in Postfix:
https://mailchannels.zendesk.com/hc/en-us/articles/200262640-Setting-up-for-Postfix
Is there any way to do this in Mailu ?
Thanks,
</issue>
<code>
[start of core/postfix/start.py]
1 #!/usr/bin/python3
2
3 import os
4 import glob
5 import shutil
6 import multiprocessing
7 import logging as log
8 import sys
9 from mailustart import resolve, convert
10
11 from podop import run_server
12
13 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
14
15 def start_podop():
16 os.setuid(100)
17 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/postfix/"
18 # TODO: Remove verbosity setting from Podop?
19 run_server(0, "postfix", "/tmp/podop.socket", [
20 ("transport", "url", url + "transport/§"),
21 ("alias", "url", url + "alias/§"),
22 ("domain", "url", url + "domain/§"),
23 ("mailbox", "url", url + "mailbox/§"),
24 ("senderaccess", "url", url + "sender/access/§"),
25 ("senderlogin", "url", url + "sender/login/§")
26 ])
27
28 # Actual startup script
29 os.environ["FRONT_ADDRESS"] = resolve(os.environ.get("FRONT_ADDRESS", "front"))
30 os.environ["ADMIN_ADDRESS"] = resolve(os.environ.get("ADMIN_ADDRESS", "admin"))
31 os.environ["HOST_ANTISPAM"] = resolve(os.environ.get("HOST_ANTISPAM", "antispam:11332"))
32 os.environ["HOST_LMTP"] = resolve(os.environ.get("HOST_LMTP", "imap:2525"))
33
34 for postfix_file in glob.glob("/conf/*.cf"):
35 convert(postfix_file, os.path.join("/etc/postfix", os.path.basename(postfix_file)))
36
37 if os.path.exists("/overrides/postfix.cf"):
38 for line in open("/overrides/postfix.cf").read().strip().split("\n"):
39 os.system('postconf -e "{}"'.format(line))
40
41 if os.path.exists("/overrides/postfix.master"):
42 for line in open("/overrides/postfix.master").read().strip().split("\n"):
43 os.system('postconf -Me "{}"'.format(line))
44
45 for map_file in glob.glob("/overrides/*.map"):
46 destination = os.path.join("/etc/postfix", os.path.basename(map_file))
47 shutil.copyfile(map_file, destination)
48 os.system("postmap {}".format(destination))
49 os.remove(destination)
50
51 convert("/conf/rsyslog.conf", "/etc/rsyslog.conf")
52
53 # Run Podop and Postfix
54 multiprocessing.Process(target=start_podop).start()
55 if os.path.exists("/var/run/rsyslogd.pid"):
56 os.remove("/var/run/rsyslogd.pid")
57 os.system("/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing")
58 os.system("/usr/lib/postfix/master &")
59 os.execv("/usr/sbin/rsyslogd", ["rsyslogd", "-n"])
60
[end of core/postfix/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/postfix/start.py b/core/postfix/start.py
--- a/core/postfix/start.py
+++ b/core/postfix/start.py
@@ -48,6 +48,11 @@
os.system("postmap {}".format(destination))
os.remove(destination)
+if "RELAYUSER" in os.environ:
+ path = "/etc/postfix/sasl_passwd"
+ convert("/conf/sasl_passwd", path)
+ os.system("postmap {}".format(path))
+
convert("/conf/rsyslog.conf", "/etc/rsyslog.conf")
# Run Podop and Postfix
| {"golden_diff": "diff --git a/core/postfix/start.py b/core/postfix/start.py\n--- a/core/postfix/start.py\n+++ b/core/postfix/start.py\n@@ -48,6 +48,11 @@\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n \n+if \"RELAYUSER\" in os.environ:\n+ path = \"/etc/postfix/sasl_passwd\"\n+ convert(\"/conf/sasl_passwd\", path)\n+ os.system(\"postmap {}\".format(path))\n+\n convert(\"/conf/rsyslog.conf\", \"/etc/rsyslog.conf\")\n \n # Run Podop and Postfix\n", "issue": "Using external smtp relay server for outgoing emails\nHi,\r\n\r\nI need to use mailchannels.com to relay all outgoing emails from my Mailu install. In this doc is what I need to change in Postfix:\r\n\r\nhttps://mailchannels.zendesk.com/hc/en-us/articles/200262640-Setting-up-for-Postfix\r\n\r\nIs there any way to do this in Mailu ?\r\n\r\nThanks,\r\n\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\nfrom mailustart import resolve, convert\n\nfrom podop import run_server\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(100)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n\t\t(\"transport\", \"url\", url + \"transport/\u00a7\"),\n\t\t(\"alias\", \"url\", url + \"alias/\u00a7\"),\n\t\t(\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"senderaccess\", \"url\", url + \"sender/access/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\")\n ])\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"ADMIN_ADDRESS\"] = resolve(os.environ.get(\"ADMIN_ADDRESS\", \"admin\"))\nos.environ[\"HOST_ANTISPAM\"] = resolve(os.environ.get(\"HOST_ANTISPAM\", \"antispam:11332\"))\nos.environ[\"HOST_LMTP\"] = resolve(os.environ.get(\"HOST_LMTP\", \"imap:2525\"))\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n convert(postfix_file, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nconvert(\"/conf/rsyslog.conf\", \"/etc/rsyslog.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nif os.path.exists(\"/var/run/rsyslogd.pid\"):\n os.remove(\"/var/run/rsyslogd.pid\")\nos.system(\"/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing\")\nos.system(\"/usr/lib/postfix/master &\")\nos.execv(\"/usr/sbin/rsyslogd\", [\"rsyslogd\", \"-n\"])\n", "path": "core/postfix/start.py"}]} | 1,338 | 132 |
gh_patches_debug_8744 | rasdani/github-patches | git_diff | mindsdb__mindsdb-1749 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error when importing MindsDB in a Jupyter notebook
**Your Environment**
* Python version: 3.6
* Operating system: Ubuntu
* Mindsdb version: 2.12.2
**Describe the bug**
Importing MindsDB from a Jupyter Notebook fails, apparently because the HTTP API triggers.
**To Reproduce**
1. Run a new Jupyter notebook
2. Execute a cell with `import mindsdb`
The following error should occur:
```usage: ipykernel_launcher.py [-h] [--api API] [--config CONFIG] [--verbose] [-v]
ipykernel_launcher.py: error: unrecognized arguments: -f /home/user/.local/share/jupyter/runtime/kernel.json
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
/env/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3351: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)
```
**Expected behavior**
MindsDB should import successfully.
**Additional note**
`import mindsdb_native` works fine.
</issue>
<code>
[start of mindsdb/utilities/functions.py]
1 import argparse
2 import datetime
3 import requests
4 from functools import wraps
5
6 from mindsdb.utilities.fs import create_process_mark, delete_process_mark
7
8
9 def args_parse():
10 parser = argparse.ArgumentParser(description='CL argument for mindsdb server')
11 parser.add_argument('--api', type=str, default=None)
12 parser.add_argument('--config', type=str, default=None)
13 parser.add_argument('--verbose', action='store_true')
14 parser.add_argument('--no_studio', action='store_true')
15 parser.add_argument('-v', '--version', action='store_true')
16 parser.add_argument('--ray', action='store_true', default=None)
17 return parser.parse_args()
18
19
20 def cast_row_types(row, field_types):
21 '''
22 '''
23 keys = [x for x in row.keys() if x in field_types]
24 for key in keys:
25 t = field_types[key]
26 if t == 'Timestamp' and isinstance(row[key], (int, float)):
27 timestamp = datetime.datetime.utcfromtimestamp(row[key])
28 row[key] = timestamp.strftime('%Y-%m-%d %H:%M:%S')
29 elif t == 'Date' and isinstance(row[key], (int, float)):
30 timestamp = datetime.datetime.utcfromtimestamp(row[key])
31 row[key] = timestamp.strftime('%Y-%m-%d')
32 elif t == 'Int' and isinstance(row[key], (int, float, str)):
33 try:
34 print(f'cast {row[key]} to {int(row[key])}')
35 row[key] = int(row[key])
36 except Exception:
37 pass
38
39
40 def is_notebook():
41 try:
42 shell = get_ipython().__class__.__name__
43 if shell == 'ZMQInteractiveShell':
44 return True # Jupyter notebook or qtconsole
45 elif shell == 'TerminalInteractiveShell':
46 return False # Terminal running IPython
47 else:
48 return False # Other type (?)
49 except NameError:
50 return False # Probably standard Python interpreter
51
52
53 def mark_process(name):
54 def mark_process_wrapper(func):
55 @wraps(func)
56 def wrapper(*args, **kwargs):
57 mark = create_process_mark(name)
58 try:
59 return func(*args, **kwargs)
60 finally:
61 delete_process_mark(name, mark)
62 return wrapper
63 return mark_process_wrapper
64
65
66 def get_versions_where_predictors_become_obsolete():
67 """ Get list of MindsDB versions in which predictors should be retrained
68 Returns:
69 list of str or False
70 """
71 versions_for_updating_predictors = []
72 try:
73 try:
74 res = requests.get(
75 'https://mindsdb-cloud-public-service-files.s3.us-east-2.amazonaws.com/version_for_updating_predictors.txt',
76 timeout=0.5
77 )
78 except (ConnectionError, requests.exceptions.ConnectionError) as e:
79 print(f'Is no connection. {e}')
80 raise
81 except Exception as e:
82 print(f'Is something wrong with getting version_for_updating_predictors.txt: {e}')
83 raise
84
85 if res.status_code != 200:
86 print(f'Cant get version_for_updating_predictors.txt: returned status code = {res.status_code}')
87 raise
88
89 try:
90 versions_for_updating_predictors = res.text.replace(' \t\r', '').split('\n')
91 except Exception as e:
92 print(f'Cant decode compatible-config.json: {e}')
93 raise
94 except Exception:
95 return False, versions_for_updating_predictors
96
97 versions_for_updating_predictors = [x for x in versions_for_updating_predictors if len(x) > 0]
98 return True, versions_for_updating_predictors
99
[end of mindsdb/utilities/functions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mindsdb/utilities/functions.py b/mindsdb/utilities/functions.py
--- a/mindsdb/utilities/functions.py
+++ b/mindsdb/utilities/functions.py
@@ -39,13 +39,10 @@
def is_notebook():
try:
- shell = get_ipython().__class__.__name__
- if shell == 'ZMQInteractiveShell':
- return True # Jupyter notebook or qtconsole
- elif shell == 'TerminalInteractiveShell':
- return False # Terminal running IPython
+ if 'IPKernelApp' in get_ipython().config:
+ return True
else:
- return False # Other type (?)
+ return False
except NameError:
return False # Probably standard Python interpreter
| {"golden_diff": "diff --git a/mindsdb/utilities/functions.py b/mindsdb/utilities/functions.py\n--- a/mindsdb/utilities/functions.py\n+++ b/mindsdb/utilities/functions.py\n@@ -39,13 +39,10 @@\n \n def is_notebook():\n try:\n- shell = get_ipython().__class__.__name__\n- if shell == 'ZMQInteractiveShell':\n- return True # Jupyter notebook or qtconsole\n- elif shell == 'TerminalInteractiveShell':\n- return False # Terminal running IPython\n+ if 'IPKernelApp' in get_ipython().config:\n+ return True\n else:\n- return False # Other type (?)\n+ return False\n except NameError:\n return False # Probably standard Python interpreter\n", "issue": "Error when importing MindsDB in a Jupyter notebook\n**Your Environment**\r\n\r\n* Python version: 3.6\r\n* Operating system: Ubuntu\r\n* Mindsdb version: 2.12.2\r\n\r\n**Describe the bug**\r\nImporting MindsDB from a Jupyter Notebook fails, apparently because the HTTP API triggers.\r\n\r\n**To Reproduce**\r\n1. Run a new Jupyter notebook\r\n2. Execute a cell with `import mindsdb`\r\n\r\nThe following error should occur:\r\n```usage: ipykernel_launcher.py [-h] [--api API] [--config CONFIG] [--verbose] [-v]\r\nipykernel_launcher.py: error: unrecognized arguments: -f /home/user/.local/share/jupyter/runtime/kernel.json\r\n\r\nAn exception has occurred, use %tb to see the full traceback.\r\nSystemExit: 2\r\n\r\n/env/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3351: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.\r\n warn(\"To exit: use 'exit', 'quit', or Ctrl-D.\", stacklevel=1)\r\n```\r\n\r\n**Expected behavior**\r\nMindsDB should import successfully.\r\n\r\n**Additional note**\r\n`import mindsdb_native` works fine.\n", "before_files": [{"content": "import argparse\nimport datetime\nimport requests\nfrom functools import wraps\n\nfrom mindsdb.utilities.fs import create_process_mark, delete_process_mark\n\n\ndef args_parse():\n parser = argparse.ArgumentParser(description='CL argument for mindsdb server')\n parser.add_argument('--api', type=str, default=None)\n parser.add_argument('--config', type=str, default=None)\n parser.add_argument('--verbose', action='store_true')\n parser.add_argument('--no_studio', action='store_true')\n parser.add_argument('-v', '--version', action='store_true')\n parser.add_argument('--ray', action='store_true', default=None)\n return parser.parse_args()\n\n\ndef cast_row_types(row, field_types):\n '''\n '''\n keys = [x for x in row.keys() if x in field_types]\n for key in keys:\n t = field_types[key]\n if t == 'Timestamp' and isinstance(row[key], (int, float)):\n timestamp = datetime.datetime.utcfromtimestamp(row[key])\n row[key] = timestamp.strftime('%Y-%m-%d %H:%M:%S')\n elif t == 'Date' and isinstance(row[key], (int, float)):\n timestamp = datetime.datetime.utcfromtimestamp(row[key])\n row[key] = timestamp.strftime('%Y-%m-%d')\n elif t == 'Int' and isinstance(row[key], (int, float, str)):\n try:\n print(f'cast {row[key]} to {int(row[key])}')\n row[key] = int(row[key])\n except Exception:\n pass\n\n\ndef is_notebook():\n try:\n shell = get_ipython().__class__.__name__\n if shell == 'ZMQInteractiveShell':\n return True # Jupyter notebook or qtconsole\n elif shell == 'TerminalInteractiveShell':\n return False # Terminal running IPython\n else:\n return False # Other type (?)\n except NameError:\n return False # Probably standard Python interpreter\n\n\ndef mark_process(name):\n def mark_process_wrapper(func):\n @wraps(func)\n def wrapper(*args, **kwargs):\n mark = create_process_mark(name)\n try:\n return 
gh_patches_debug_3236 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1092 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Globally disable color?
I cannot find the way to globally disable color in the pre-commit output. Setting only the background color to green and not changing the foreground color does not work for my terminal with the following settings in the Xt resources (as set in the `${HOME}/.Xresources` file):
````properties
Rxvt.background: black
Rxvt.foreground: deepSkyBlue
````
Is there a way? It would be great to respect the `NO_COLOR` environment variable from https://no-color.org/. And, while we are here, maybe also understand the following git config setting:
````ini
[color]
ui = never
````
</issue>
<code>
[start of pre_commit/color.py]
1 from __future__ import unicode_literals
2
3 import os
4 import sys
5
6 terminal_supports_color = True
7 if os.name == 'nt': # pragma: no cover (windows)
8 from pre_commit.color_windows import enable_virtual_terminal_processing
9 try:
10 enable_virtual_terminal_processing()
11 except WindowsError:
12 terminal_supports_color = False
13
14 RED = '\033[41m'
15 GREEN = '\033[42m'
16 YELLOW = '\033[43;30m'
17 TURQUOISE = '\033[46;30m'
18 NORMAL = '\033[0m'
19
20
21 class InvalidColorSetting(ValueError):
22 pass
23
24
25 def format_color(text, color, use_color_setting):
26 """Format text with color.
27
28 Args:
29 text - Text to be formatted with color if `use_color`
30 color - The color start string
31 use_color_setting - Whether or not to color
32 """
33 if not use_color_setting:
34 return text
35 else:
36 return '{}{}{}'.format(color, text, NORMAL)
37
38
39 COLOR_CHOICES = ('auto', 'always', 'never')
40
41
42 def use_color(setting):
43 """Choose whether to use color based on the command argument.
44
45 Args:
46 setting - Either `auto`, `always`, or `never`
47 """
48 if setting not in COLOR_CHOICES:
49 raise InvalidColorSetting(setting)
50
51 return (
52 setting == 'always' or
53 (setting == 'auto' and sys.stdout.isatty() and terminal_supports_color)
54 )
55
[end of pre_commit/color.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/color.py b/pre_commit/color.py
--- a/pre_commit/color.py
+++ b/pre_commit/color.py
@@ -48,6 +48,9 @@
if setting not in COLOR_CHOICES:
raise InvalidColorSetting(setting)
+ if os.environ.get('NO_COLOR'):
+ return False
+
return (
setting == 'always' or
(setting == 'auto' and sys.stdout.isatty() and terminal_supports_color)
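
As a quick, illustrative check of the behaviour the hunk above introduces (this snippet is not part of the original report or fix; it only imports the module shown in the record):

```python
import os

from pre_commit import color  # the module patched above

os.environ['NO_COLOR'] = '1'               # the https://no-color.org/ convention
assert color.use_color('auto') is False    # colour suppressed even on a TTY
assert color.use_color('always') is False  # NO_COLOR also overrides 'always'

del os.environ['NO_COLOR']
assert color.use_color('never') is False   # the explicit opt-out keeps working
```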
| {"golden_diff": "diff --git a/pre_commit/color.py b/pre_commit/color.py\n--- a/pre_commit/color.py\n+++ b/pre_commit/color.py\n@@ -48,6 +48,9 @@\n if setting not in COLOR_CHOICES:\n raise InvalidColorSetting(setting)\n \n+ if os.environ.get('NO_COLOR'):\n+ return False\n+\n return (\n setting == 'always' or\n (setting == 'auto' and sys.stdout.isatty() and terminal_supports_color)\n", "issue": "Globally disable color?\nI cannot find the way to globally disable color in the pre-commit output. Setting only the background color to green and not changing the foreground color does not work for my terminal with the following settings in the Xt resources (as set in the `${HOME}/.Xresources` file):\r\n\r\n````properties\r\nRxvt.background: black\r\nRxvt.foreground: deepSkyBlue\r\n````\r\n\r\nIs there a way? It would be great to respect https://no-color.org/ environment variable. And, while we are here, maybe understand the following git config setting:\r\n\r\n````ini\r\n[color]\r\n ui = never\r\n````\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nimport sys\n\nterminal_supports_color = True\nif os.name == 'nt': # pragma: no cover (windows)\n from pre_commit.color_windows import enable_virtual_terminal_processing\n try:\n enable_virtual_terminal_processing()\n except WindowsError:\n terminal_supports_color = False\n\nRED = '\\033[41m'\nGREEN = '\\033[42m'\nYELLOW = '\\033[43;30m'\nTURQUOISE = '\\033[46;30m'\nNORMAL = '\\033[0m'\n\n\nclass InvalidColorSetting(ValueError):\n pass\n\n\ndef format_color(text, color, use_color_setting):\n \"\"\"Format text with color.\n\n Args:\n text - Text to be formatted with color if `use_color`\n color - The color start string\n use_color_setting - Whether or not to color\n \"\"\"\n if not use_color_setting:\n return text\n else:\n return '{}{}{}'.format(color, text, NORMAL)\n\n\nCOLOR_CHOICES = ('auto', 'always', 'never')\n\n\ndef use_color(setting):\n \"\"\"Choose whether to use color based on the command argument.\n\n Args:\n setting - Either `auto`, `always`, or `never`\n \"\"\"\n if setting not in COLOR_CHOICES:\n raise InvalidColorSetting(setting)\n\n return (\n setting == 'always' or\n (setting == 'auto' and sys.stdout.isatty() and terminal_supports_color)\n )\n", "path": "pre_commit/color.py"}]} | 1,105 | 103 |
gh_patches_debug_60893 | rasdani/github-patches | git_diff | webkom__lego-2342 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Phone number not saved from registration form
When creating a new user, LEGO ignores the phone number inserted into the registration form.
</issue>
<code>
[start of lego/apps/users/serializers/registration.py]
1 from django.contrib.auth import password_validation
2 from rest_framework import exceptions, serializers
3
4 from lego.apps.users.models import User
5 from lego.utils.functions import verify_captcha
6
7
8 class RegistrationSerializer(serializers.ModelSerializer):
9 captcha_response = serializers.CharField(required=True)
10
11 def validate_captcha_response(self, captcha_response):
12 if not verify_captcha(captcha_response):
13 raise exceptions.ValidationError("invalid_captcha")
14 return captcha_response
15
16 class Meta:
17 model = User
18 fields = ("email", "captcha_response")
19
20
21 class RegistrationConfirmationSerializer(serializers.ModelSerializer):
22
23 password = serializers.CharField(required=True, write_only=True)
24
25 def validate_username(self, username):
26 username_exists = User.objects.filter(username__iexact=username).exists()
27 if username_exists:
28 raise exceptions.ValidationError("Username exists")
29 return username
30
31 def validate_password(self, password):
32 password_validation.validate_password(password)
33 return password
34
35 class Meta:
36 model = User
37 fields = (
38 "username",
39 "first_name",
40 "last_name",
41 "gender",
42 "password",
43 "allergies",
44 )
45
[end of lego/apps/users/serializers/registration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lego/apps/users/serializers/registration.py b/lego/apps/users/serializers/registration.py
--- a/lego/apps/users/serializers/registration.py
+++ b/lego/apps/users/serializers/registration.py
@@ -41,4 +41,5 @@
"gender",
"password",
"allergies",
+ "phone_number",
)
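
A hypothetical payload to show the effect of the extra field (the values and the surrounding test context are invented): with `phone_number` listed in `Meta.fields`, the confirmation serializer validates and persists it instead of silently dropping it.

```python
# Assumes a configured Django test environment for the LEGO project.
payload = {
    'username': 'newuser',
    'first_name': 'New',
    'last_name': 'User',
    'gender': 'female',
    'password': 'a-sufficiently-strong-password',
    'allergies': '',
    'phone_number': '+4712345678',
}

serializer = RegistrationConfirmationSerializer(data=payload)
if serializer.is_valid():
    # phone_number is now part of validated_data and reaches the User model.
    print(serializer.validated_data['phone_number'])
```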
| {"golden_diff": "diff --git a/lego/apps/users/serializers/registration.py b/lego/apps/users/serializers/registration.py\n--- a/lego/apps/users/serializers/registration.py\n+++ b/lego/apps/users/serializers/registration.py\n@@ -41,4 +41,5 @@\n \"gender\",\n \"password\",\n \"allergies\",\n+ \"phone_number\",\n )\n", "issue": "Phone number not saved from registration form\nWhen creating a new user, LEGO ignores the phone number inserted into the registration form.\n", "before_files": [{"content": "from django.contrib.auth import password_validation\nfrom rest_framework import exceptions, serializers\n\nfrom lego.apps.users.models import User\nfrom lego.utils.functions import verify_captcha\n\n\nclass RegistrationSerializer(serializers.ModelSerializer):\n captcha_response = serializers.CharField(required=True)\n\n def validate_captcha_response(self, captcha_response):\n if not verify_captcha(captcha_response):\n raise exceptions.ValidationError(\"invalid_captcha\")\n return captcha_response\n\n class Meta:\n model = User\n fields = (\"email\", \"captcha_response\")\n\n\nclass RegistrationConfirmationSerializer(serializers.ModelSerializer):\n\n password = serializers.CharField(required=True, write_only=True)\n\n def validate_username(self, username):\n username_exists = User.objects.filter(username__iexact=username).exists()\n if username_exists:\n raise exceptions.ValidationError(\"Username exists\")\n return username\n\n def validate_password(self, password):\n password_validation.validate_password(password)\n return password\n\n class Meta:\n model = User\n fields = (\n \"username\",\n \"first_name\",\n \"last_name\",\n \"gender\",\n \"password\",\n \"allergies\",\n )\n", "path": "lego/apps/users/serializers/registration.py"}]} | 888 | 91 |
gh_patches_debug_24477 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-2529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
609: Allow __enter__() inside an __enter__()
### What's wrong
One design pattern is to wrap a context manager. It would be nice to avoid WPS609 errors with this code, which seems to require calling the wrapped object's magic methods directly.
### How it should be
Allow code like:
```
class Foo:
...
def __enter__(self):
self._conn.__enter__()
return self
def __exit__(self, exc_type, exc_value, traceback):
self._conn.__exit__(exc_type, exc_value, traceback)
```
I guess the same for aenter/aexit as well.
</issue>
<code>
[start of wemake_python_styleguide/visitors/ast/attributes.py]
1 import ast
2 from typing import ClassVar, FrozenSet
3
4 from typing_extensions import final
5
6 from wemake_python_styleguide.constants import ALL_MAGIC_METHODS
7 from wemake_python_styleguide.logic.naming import access
8 from wemake_python_styleguide.violations.best_practices import (
9 ProtectedAttributeViolation,
10 )
11 from wemake_python_styleguide.violations.oop import (
12 DirectMagicAttributeAccessViolation,
13 )
14 from wemake_python_styleguide.visitors.base import BaseNodeVisitor
15
16
17 @final
18 class WrongAttributeVisitor(BaseNodeVisitor):
19 """Ensures that attributes are used correctly."""
20
21 _allowed_to_use_protected: ClassVar[FrozenSet[str]] = frozenset((
22 'self',
23 'cls',
24 'mcs',
25 ))
26
27 def visit_Attribute(self, node: ast.Attribute) -> None:
28 """Checks the `Attribute` node."""
29 self._check_protected_attribute(node)
30 self._check_magic_attribute(node)
31 self.generic_visit(node)
32
33 def _is_super_called(self, node: ast.Call) -> bool:
34 return isinstance(node.func, ast.Name) and node.func.id == 'super'
35
36 def _ensure_attribute_type(self, node: ast.Attribute, exception) -> None:
37 if isinstance(node.value, ast.Name):
38 if node.value.id in self._allowed_to_use_protected:
39 return
40
41 if isinstance(node.value, ast.Call):
42 if self._is_super_called(node.value):
43 return
44
45 self.add_violation(exception(node, text=node.attr))
46
47 def _check_protected_attribute(self, node: ast.Attribute) -> None:
48 if access.is_protected(node.attr):
49 self._ensure_attribute_type(node, ProtectedAttributeViolation)
50
51 def _check_magic_attribute(self, node: ast.Attribute) -> None:
52 if access.is_magic(node.attr):
53 if node.attr in ALL_MAGIC_METHODS:
54 self._ensure_attribute_type(
55 node, DirectMagicAttributeAccessViolation,
56 )
57
[end of wemake_python_styleguide/visitors/ast/attributes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wemake_python_styleguide/visitors/ast/attributes.py b/wemake_python_styleguide/visitors/ast/attributes.py
--- a/wemake_python_styleguide/visitors/ast/attributes.py
+++ b/wemake_python_styleguide/visitors/ast/attributes.py
@@ -3,7 +3,9 @@
from typing_extensions import final
+from wemake_python_styleguide.compat.aliases import FunctionNodes
from wemake_python_styleguide.constants import ALL_MAGIC_METHODS
+from wemake_python_styleguide.logic import nodes
from wemake_python_styleguide.logic.naming import access
from wemake_python_styleguide.violations.best_practices import (
ProtectedAttributeViolation,
@@ -50,6 +52,15 @@
def _check_magic_attribute(self, node: ast.Attribute) -> None:
if access.is_magic(node.attr):
+ # If "magic" method being called has the same name as
+ # the enclosing function, then it is a "wrapper" and thus
+ # a "false positive".
+
+ ctx = nodes.get_context(node)
+ if isinstance(ctx, FunctionNodes):
+ if node.attr == ctx.name:
+ return
+
if node.attr in ALL_MAGIC_METHODS:
self._ensure_attribute_type(
node, DirectMagicAttributeAccessViolation,
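
To make the effect of the new check concrete, here is an invented wrapper in the style the issue describes; the first call is now accepted because the accessed magic method shares its name with the enclosing method, while the second is still reported:

```python
class ManagedConnection:
    """Context-manager wrapper, as sketched in the issue."""

    def __init__(self, conn):
        self._conn = conn

    def __enter__(self):
        self._conn.__enter__()   # same name as the enclosing method: no WPS609
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._conn.__exit__(exc_type, exc_value, traceback)

    def close(self):
        # Different enclosing name, so this is still flagged as direct
        # magic attribute access.
        self._conn.__exit__(None, None, None)
```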
| {"golden_diff": "diff --git a/wemake_python_styleguide/visitors/ast/attributes.py b/wemake_python_styleguide/visitors/ast/attributes.py\n--- a/wemake_python_styleguide/visitors/ast/attributes.py\n+++ b/wemake_python_styleguide/visitors/ast/attributes.py\n@@ -3,7 +3,9 @@\n \n from typing_extensions import final\n \n+from wemake_python_styleguide.compat.aliases import FunctionNodes\n from wemake_python_styleguide.constants import ALL_MAGIC_METHODS\n+from wemake_python_styleguide.logic import nodes\n from wemake_python_styleguide.logic.naming import access\n from wemake_python_styleguide.violations.best_practices import (\n ProtectedAttributeViolation,\n@@ -50,6 +52,15 @@\n \n def _check_magic_attribute(self, node: ast.Attribute) -> None:\n if access.is_magic(node.attr):\n+ # If \"magic\" method being called has the same name as\n+ # the enclosing function, then it is a \"wrapper\" and thus\n+ # a \"false positive\".\n+\n+ ctx = nodes.get_context(node)\n+ if isinstance(ctx, FunctionNodes):\n+ if node.attr == ctx.name:\n+ return\n+\n if node.attr in ALL_MAGIC_METHODS:\n self._ensure_attribute_type(\n node, DirectMagicAttributeAccessViolation,\n", "issue": "609: Allow __enter__() inside an __enter__()\n### What's wrong\r\n\r\nOne design pattern is to wrap a context manager. It would be nice to avoid WPS609 errors with this code, which seems to require accessing the direct magic methods.\r\n\r\n### How it should be\r\n\r\nAllow code like:\r\n```\r\nclass Foo:\r\n ...\r\n\r\n def __enter__(self):\r\n self._conn.__enter__()\r\n return self\r\n\r\n def __exit__(self, exc_type, exc_value, traceback):\r\n self._conn.__exit__(exc_type, exc_value, traceback)\r\n```\r\n\r\nI guess the same for aenter/aexit as well.\n", "before_files": [{"content": "import ast\nfrom typing import ClassVar, FrozenSet\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.constants import ALL_MAGIC_METHODS\nfrom wemake_python_styleguide.logic.naming import access\nfrom wemake_python_styleguide.violations.best_practices import (\n ProtectedAttributeViolation,\n)\nfrom wemake_python_styleguide.violations.oop import (\n DirectMagicAttributeAccessViolation,\n)\nfrom wemake_python_styleguide.visitors.base import BaseNodeVisitor\n\n\n@final\nclass WrongAttributeVisitor(BaseNodeVisitor):\n \"\"\"Ensures that attributes are used correctly.\"\"\"\n\n _allowed_to_use_protected: ClassVar[FrozenSet[str]] = frozenset((\n 'self',\n 'cls',\n 'mcs',\n ))\n\n def visit_Attribute(self, node: ast.Attribute) -> None:\n \"\"\"Checks the `Attribute` node.\"\"\"\n self._check_protected_attribute(node)\n self._check_magic_attribute(node)\n self.generic_visit(node)\n\n def _is_super_called(self, node: ast.Call) -> bool:\n return isinstance(node.func, ast.Name) and node.func.id == 'super'\n\n def _ensure_attribute_type(self, node: ast.Attribute, exception) -> None:\n if isinstance(node.value, ast.Name):\n if node.value.id in self._allowed_to_use_protected:\n return\n\n if isinstance(node.value, ast.Call):\n if self._is_super_called(node.value):\n return\n\n self.add_violation(exception(node, text=node.attr))\n\n def _check_protected_attribute(self, node: ast.Attribute) -> None:\n if access.is_protected(node.attr):\n self._ensure_attribute_type(node, ProtectedAttributeViolation)\n\n def _check_magic_attribute(self, node: ast.Attribute) -> None:\n if access.is_magic(node.attr):\n if node.attr in ALL_MAGIC_METHODS:\n self._ensure_attribute_type(\n node, DirectMagicAttributeAccessViolation,\n )\n", "path": 
"wemake_python_styleguide/visitors/ast/attributes.py"}]} | 1,214 | 298 |
gh_patches_debug_38138 | rasdani/github-patches | git_diff | aws__aws-cli-483 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Option to associate public ip address in ec2 run-instance
There doesn't seem to be any way to associate a public ip address without also adding a network interface with the --network-interfaces parameter. Is it possible for this to be a top level parameter?
</issue>
<code>
[start of awscli/customizations/ec2runinstances.py]
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """
14 This customization adds two new parameters to the ``ec2 run-instance``
15 command. The first, ``--secondary-private-ip-addresses`` allows a list
16 of IP addresses within the specified subnet to be associated with the
17 new instance. The second, ``--secondary-ip-address-count`` allows you
18 to specify how many additional IP addresses you want but the actual
19 address will be assigned for you.
20
21 This functionality (and much more) is also available using the
22 ``--network-interfaces`` complex argument. This just makes two of
23 the most commonly used features available more easily.
24 """
25 from awscli.arguments import CustomArgument
26
27
28 # --secondary-private-ip-address
29 SECONDARY_PRIVATE_IP_ADDRESSES_DOCS = (
30 '[EC2-VPC] A secondary private IP address for the network interface '
31 'or instance. You can specify this multiple times to assign multiple '
32 'secondary IP addresses. If you want additional private IP addresses '
33 'but do not need a specific address, use the '
34 '--secondary-private-ip-address-count option.')
35
36 # --secondary-private-ip-address-count
37 SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS = (
38 '[EC2-VPC] The number of secondary IP addresses to assign to '
39 'the network interface or instance.')
40
41
42 def _add_params(argument_table, operation, **kwargs):
43 arg = SecondaryPrivateIpAddressesArgument(
44 name='secondary-private-ip-addresses',
45 help_text=SECONDARY_PRIVATE_IP_ADDRESSES_DOCS)
46 argument_table['secondary-private-ip-addresses'] = arg
47 arg = SecondaryPrivateIpAddressCountArgument(
48 name='secondary-private-ip-address-count',
49 help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS)
50 argument_table['secondary-private-ip-address-count'] = arg
51
52
53 def _check_args(parsed_args, **kwargs):
54 # This function checks the parsed args. If the user specified
55 # the --network-interfaces option with any of the scalar options we
56 # raise an error.
57 arg_dict = vars(parsed_args)
58 if arg_dict['network_interfaces']:
59 for key in ('secondary_private_ip_addresses',
60 'secondary_private_ip_address_count'):
61 if arg_dict[key]:
62 msg = ('Mixing the --network-interfaces option '
63 'with the simple, scalar options is '
64 'not supported.')
65 raise ValueError(msg)
66
67 EVENTS = [
68 ('building-argument-table.ec2.run-instances', _add_params),
69 ('operation-args-parsed.ec2.run-instances', _check_args),
70 ]
71
72
73 def register_runinstances(event_handler):
74 # Register all of the events for customizing BundleInstance
75 for event, handler in EVENTS:
76 event_handler.register(event, handler)
77
78
79 def _build_network_interfaces(params, key, value):
80 # Build up the NetworkInterfaces data structure
81 if 'network_interfaces' not in params:
82 params['network_interfaces'] = [{'DeviceIndex': 0}]
83
84 if key == 'PrivateIpAddresses':
85 if 'PrivateIpAddresses' not in params['network_interfaces'][0]:
86 params['network_interfaces'][0]['PrivateIpAddresses'] = value
87 else:
88 params['network_interfaces'][0][key] = value
89
90
91 class SecondaryPrivateIpAddressesArgument(CustomArgument):
92
93 def add_to_parser(self, parser, cli_name=None):
94 parser.add_argument(self.cli_name, dest=self.py_name,
95 default=self._default, nargs='*')
96
97 def add_to_params(self, parameters, value):
98 if value:
99 value = [{'PrivateIpAddress': v, 'Primary': False} for
100 v in value]
101 _build_network_interfaces(parameters,
102 'PrivateIpAddresses',
103 value)
104
105
106 class SecondaryPrivateIpAddressCountArgument(CustomArgument):
107
108 def add_to_parser(self, parser, cli_name=None):
109 parser.add_argument(self.cli_name, dest=self.py_name,
110 default=self._default, type=int)
111
112 def add_to_params(self, parameters, value):
113 if value:
114 _build_network_interfaces(parameters,
115 'SecondaryPrivateIpAddressCount',
116 value)
117
[end of awscli/customizations/ec2runinstances.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/awscli/customizations/ec2runinstances.py b/awscli/customizations/ec2runinstances.py
--- a/awscli/customizations/ec2runinstances.py
+++ b/awscli/customizations/ec2runinstances.py
@@ -38,6 +38,12 @@
'[EC2-VPC] The number of secondary IP addresses to assign to '
'the network interface or instance.')
+# --associate-public-ip-address
+ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS = (
+ '[EC2-VPC] If specified a public IP address will be assigned '
+ 'to the new instance in a VPC.')
+
+
def _add_params(argument_table, operation, **kwargs):
arg = SecondaryPrivateIpAddressesArgument(
@@ -48,6 +54,16 @@
name='secondary-private-ip-address-count',
help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS)
argument_table['secondary-private-ip-address-count'] = arg
+ arg = AssociatePublicIpAddressArgument(
+ name='associate-public-ip-address',
+ help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,
+ action='store_true', group_name='associate_public_ip')
+ argument_table['associate-public-ip-address'] = arg
+ arg = NoAssociatePublicIpAddressArgument(
+ name='no-associate-public-ip-address',
+ help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,
+ action='store_false', group_name='associate_public_ip')
+ argument_table['no-associate-public-ip-address'] = arg
def _check_args(parsed_args, **kwargs):
@@ -57,7 +73,8 @@
arg_dict = vars(parsed_args)
if arg_dict['network_interfaces']:
for key in ('secondary_private_ip_addresses',
- 'secondary_private_ip_address_count'):
+ 'secondary_private_ip_address_count',
+ 'associate_public_ip_address'):
if arg_dict[key]:
msg = ('Mixing the --network-interfaces option '
'with the simple, scalar options is '
@@ -114,3 +131,21 @@
_build_network_interfaces(parameters,
'SecondaryPrivateIpAddressCount',
value)
+
+
+class AssociatePublicIpAddressArgument(CustomArgument):
+
+ def add_to_params(self, parameters, value):
+ if value is True:
+ _build_network_interfaces(parameters,
+ 'AssociatePublicIpAddress',
+ value)
+
+
+class NoAssociatePublicIpAddressArgument(CustomArgument):
+
+ def add_to_params(self, parameters, value):
+ if value is False:
+ _build_network_interfaces(parameters,
+ 'AssociatePublicIpAddress',
+ value)
| {"golden_diff": "diff --git a/awscli/customizations/ec2runinstances.py b/awscli/customizations/ec2runinstances.py\n--- a/awscli/customizations/ec2runinstances.py\n+++ b/awscli/customizations/ec2runinstances.py\n@@ -38,6 +38,12 @@\n '[EC2-VPC] The number of secondary IP addresses to assign to '\n 'the network interface or instance.')\n \n+# --associate-public-ip-address\n+ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS = (\n+ '[EC2-VPC] If specified a public IP address will be assigned '\n+ 'to the new instance in a VPC.')\n+\n+\n \n def _add_params(argument_table, operation, **kwargs):\n arg = SecondaryPrivateIpAddressesArgument(\n@@ -48,6 +54,16 @@\n name='secondary-private-ip-address-count',\n help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS)\n argument_table['secondary-private-ip-address-count'] = arg\n+ arg = AssociatePublicIpAddressArgument(\n+ name='associate-public-ip-address',\n+ help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,\n+ action='store_true', group_name='associate_public_ip')\n+ argument_table['associate-public-ip-address'] = arg\n+ arg = NoAssociatePublicIpAddressArgument(\n+ name='no-associate-public-ip-address',\n+ help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,\n+ action='store_false', group_name='associate_public_ip')\n+ argument_table['no-associate-public-ip-address'] = arg\n \n \n def _check_args(parsed_args, **kwargs):\n@@ -57,7 +73,8 @@\n arg_dict = vars(parsed_args)\n if arg_dict['network_interfaces']:\n for key in ('secondary_private_ip_addresses',\n- 'secondary_private_ip_address_count'):\n+ 'secondary_private_ip_address_count',\n+ 'associate_public_ip_address'):\n if arg_dict[key]:\n msg = ('Mixing the --network-interfaces option '\n 'with the simple, scalar options is '\n@@ -114,3 +131,21 @@\n _build_network_interfaces(parameters,\n 'SecondaryPrivateIpAddressCount',\n value)\n+\n+\n+class AssociatePublicIpAddressArgument(CustomArgument):\n+\n+ def add_to_params(self, parameters, value):\n+ if value is True:\n+ _build_network_interfaces(parameters,\n+ 'AssociatePublicIpAddress',\n+ value)\n+\n+\n+class NoAssociatePublicIpAddressArgument(CustomArgument):\n+\n+ def add_to_params(self, parameters, value):\n+ if value is False:\n+ _build_network_interfaces(parameters,\n+ 'AssociatePublicIpAddress',\n+ value)\n", "issue": "Option to associate public ip address in ec2 run-instance\nThere doesn't seem to be any way to associate a public ip address without also adding a network interface with the --network-interfaces parameter. Is it possible for this to be a top level parameter?\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\nThis customization adds two new parameters to the ``ec2 run-instance``\ncommand. The first, ``--secondary-private-ip-addresses`` allows a list\nof IP addresses within the specified subnet to be associated with the\nnew instance. 
The second, ``--secondary-ip-address-count`` allows you\nto specify how many additional IP addresses you want but the actual\naddress will be assigned for you.\n\nThis functionality (and much more) is also available using the\n``--network-interfaces`` complex argument. This just makes two of\nthe most commonly used features available more easily.\n\"\"\"\nfrom awscli.arguments import CustomArgument\n\n\n# --secondary-private-ip-address\nSECONDARY_PRIVATE_IP_ADDRESSES_DOCS = (\n '[EC2-VPC] A secondary private IP address for the network interface '\n 'or instance. You can specify this multiple times to assign multiple '\n 'secondary IP addresses. If you want additional private IP addresses '\n 'but do not need a specific address, use the '\n '--secondary-private-ip-address-count option.')\n\n# --secondary-private-ip-address-count\nSECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS = (\n '[EC2-VPC] The number of secondary IP addresses to assign to '\n 'the network interface or instance.')\n\n\ndef _add_params(argument_table, operation, **kwargs):\n arg = SecondaryPrivateIpAddressesArgument(\n name='secondary-private-ip-addresses',\n help_text=SECONDARY_PRIVATE_IP_ADDRESSES_DOCS)\n argument_table['secondary-private-ip-addresses'] = arg\n arg = SecondaryPrivateIpAddressCountArgument(\n name='secondary-private-ip-address-count',\n help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS)\n argument_table['secondary-private-ip-address-count'] = arg\n\n\ndef _check_args(parsed_args, **kwargs):\n # This function checks the parsed args. If the user specified\n # the --network-interfaces option with any of the scalar options we\n # raise an error.\n arg_dict = vars(parsed_args)\n if arg_dict['network_interfaces']:\n for key in ('secondary_private_ip_addresses',\n 'secondary_private_ip_address_count'):\n if arg_dict[key]:\n msg = ('Mixing the --network-interfaces option '\n 'with the simple, scalar options is '\n 'not supported.')\n raise ValueError(msg)\n\nEVENTS = [\n ('building-argument-table.ec2.run-instances', _add_params),\n ('operation-args-parsed.ec2.run-instances', _check_args),\n ]\n\n\ndef register_runinstances(event_handler):\n # Register all of the events for customizing BundleInstance\n for event, handler in EVENTS:\n event_handler.register(event, handler)\n\n\ndef _build_network_interfaces(params, key, value):\n # Build up the NetworkInterfaces data structure\n if 'network_interfaces' not in params:\n params['network_interfaces'] = [{'DeviceIndex': 0}]\n\n if key == 'PrivateIpAddresses':\n if 'PrivateIpAddresses' not in params['network_interfaces'][0]:\n params['network_interfaces'][0]['PrivateIpAddresses'] = value\n else:\n params['network_interfaces'][0][key] = value\n\n\nclass SecondaryPrivateIpAddressesArgument(CustomArgument):\n\n def add_to_parser(self, parser, cli_name=None):\n parser.add_argument(self.cli_name, dest=self.py_name,\n default=self._default, nargs='*')\n\n def add_to_params(self, parameters, value):\n if value:\n value = [{'PrivateIpAddress': v, 'Primary': False} for\n v in value]\n _build_network_interfaces(parameters,\n 'PrivateIpAddresses',\n value)\n\n\nclass SecondaryPrivateIpAddressCountArgument(CustomArgument):\n\n def add_to_parser(self, parser, cli_name=None):\n parser.add_argument(self.cli_name, dest=self.py_name,\n default=self._default, type=int)\n\n def add_to_params(self, parameters, value):\n if value:\n _build_network_interfaces(parameters,\n 'SecondaryPrivateIpAddressCount',\n value)\n", "path": "awscli/customizations/ec2runinstances.py"}]} | 1,828 | 570 |
gh_patches_debug_31329 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-1174 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
don't rely on master branch for latest version number
## Feature
### Feature description
The `master` branch of dbt isn't really a thing anymore. Instead of relying on the [master](https://github.com/fishtown-analytics/dbt/blob/51f68e3aabcda57afbe5051983d1d17e092be665/dbt/version.py#L12) branch to grab the latest release number, we should pull it from PyPi.
We can use [this api](https://warehouse.readthedocs.io/api-reference/json/) to fetch [some JSON info](https://pypi.org/pypi/dbt/json) about dbt releases.
We need to confirm that pre-releases are not shown as the latest version for a package on PyPi.
### Who will this benefit?
dbt maintainers :)
</issue>
<code>
[start of dbt/version.py]
1 import re
2
3 import dbt.semver
4
5 try:
6 # For Python 3.0 and later
7 from urllib.request import urlopen
8 except ImportError:
9 # Fall back to Python 2's urllib2
10 from urllib2 import urlopen
11
12 REMOTE_VERSION_FILE = \
13 'https://raw.githubusercontent.com/fishtown-analytics/dbt/' \
14 'master/.bumpversion.cfg'
15
16
17 def get_version_string_from_text(contents):
18 matches = re.search(r"current_version = ([\.0-9a-z]+)", contents)
19 if matches is None or len(matches.groups()) != 1:
20 return ""
21 version = matches.groups()[0]
22 return version
23
24
25 def get_remote_version_file_contents(url=REMOTE_VERSION_FILE):
26 try:
27 f = urlopen(url)
28 contents = f.read()
29 except Exception:
30 contents = ''
31 if hasattr(contents, 'decode'):
32 contents = contents.decode('utf-8')
33 return contents
34
35
36 def get_latest_version():
37 contents = get_remote_version_file_contents()
38 if contents == '':
39 return None
40 version_string = get_version_string_from_text(contents)
41 return dbt.semver.VersionSpecifier.from_version_string(version_string)
42
43
44 def get_installed_version():
45 return dbt.semver.VersionSpecifier.from_version_string(__version__)
46
47
48 def get_version_information():
49 installed = get_installed_version()
50 latest = get_latest_version()
51
52 installed_s = installed.to_version_string(skip_matcher=True)
53 if latest is None:
54 latest_s = 'unknown'
55 else:
56 latest_s = latest.to_version_string(skip_matcher=True)
57
58 version_msg = ("installed version: {}\n"
59 " latest version: {}\n\n".format(installed_s, latest_s))
60
61 if latest is None:
62 return ("{}The latest version of dbt could not be determined!\n"
63 "Make sure that the following URL is accessible:\n{}"
64 .format(version_msg, REMOTE_VERSION_FILE))
65
66 if installed == latest:
67 return "{}Up to date!".format(version_msg)
68
69 elif installed > latest:
70 return ("{}Your version of dbt is ahead of the latest "
71 "release!".format(version_msg))
72
73 else:
74 return ("{}Your version of dbt is out of date! "
75 "You can find instructions for upgrading here:\n"
76 "https://docs.getdbt.com/docs/installation"
77 .format(version_msg))
78
79
80 __version__ = '0.12.1'
81 installed = get_installed_version()
82
[end of dbt/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dbt/version.py b/dbt/version.py
--- a/dbt/version.py
+++ b/dbt/version.py
@@ -1,43 +1,23 @@
+import json
import re
-import dbt.semver
-
-try:
- # For Python 3.0 and later
- from urllib.request import urlopen
-except ImportError:
- # Fall back to Python 2's urllib2
- from urllib2 import urlopen
-
-REMOTE_VERSION_FILE = \
- 'https://raw.githubusercontent.com/fishtown-analytics/dbt/' \
- 'master/.bumpversion.cfg'
-
+import requests
-def get_version_string_from_text(contents):
- matches = re.search(r"current_version = ([\.0-9a-z]+)", contents)
- if matches is None or len(matches.groups()) != 1:
- return ""
- version = matches.groups()[0]
- return version
+import dbt.exceptions
+import dbt.semver
-def get_remote_version_file_contents(url=REMOTE_VERSION_FILE):
- try:
- f = urlopen(url)
- contents = f.read()
- except Exception:
- contents = ''
- if hasattr(contents, 'decode'):
- contents = contents.decode('utf-8')
- return contents
+PYPI_VERSION_URL = 'https://pypi.org/pypi/dbt/json'
def get_latest_version():
- contents = get_remote_version_file_contents()
- if contents == '':
+ try:
+ resp = requests.get(PYPI_VERSION_URL)
+ data = resp.json()
+ version_string = data['info']['version']
+ except (json.JSONDecodeError, KeyError, requests.RequestException):
return None
- version_string = get_version_string_from_text(contents)
+
return dbt.semver.VersionSpecifier.from_version_string(version_string)
@@ -61,7 +41,7 @@
if latest is None:
return ("{}The latest version of dbt could not be determined!\n"
"Make sure that the following URL is accessible:\n{}"
- .format(version_msg, REMOTE_VERSION_FILE))
+ .format(version_msg, PYPI_VERSION_URL))
if installed == latest:
return "{}Up to date!".format(version_msg)
| {"golden_diff": "diff --git a/dbt/version.py b/dbt/version.py\n--- a/dbt/version.py\n+++ b/dbt/version.py\n@@ -1,43 +1,23 @@\n+import json\n import re\n \n-import dbt.semver\n-\n-try:\n- # For Python 3.0 and later\n- from urllib.request import urlopen\n-except ImportError:\n- # Fall back to Python 2's urllib2\n- from urllib2 import urlopen\n-\n-REMOTE_VERSION_FILE = \\\n- 'https://raw.githubusercontent.com/fishtown-analytics/dbt/' \\\n- 'master/.bumpversion.cfg'\n-\n+import requests\n \n-def get_version_string_from_text(contents):\n- matches = re.search(r\"current_version = ([\\.0-9a-z]+)\", contents)\n- if matches is None or len(matches.groups()) != 1:\n- return \"\"\n- version = matches.groups()[0]\n- return version\n+import dbt.exceptions\n+import dbt.semver\n \n \n-def get_remote_version_file_contents(url=REMOTE_VERSION_FILE):\n- try:\n- f = urlopen(url)\n- contents = f.read()\n- except Exception:\n- contents = ''\n- if hasattr(contents, 'decode'):\n- contents = contents.decode('utf-8')\n- return contents\n+PYPI_VERSION_URL = 'https://pypi.org/pypi/dbt/json'\n \n \n def get_latest_version():\n- contents = get_remote_version_file_contents()\n- if contents == '':\n+ try:\n+ resp = requests.get(PYPI_VERSION_URL)\n+ data = resp.json()\n+ version_string = data['info']['version']\n+ except (json.JSONDecodeError, KeyError, requests.RequestException):\n return None\n- version_string = get_version_string_from_text(contents)\n+\n return dbt.semver.VersionSpecifier.from_version_string(version_string)\n \n \n@@ -61,7 +41,7 @@\n if latest is None:\n return (\"{}The latest version of dbt could not be determined!\\n\"\n \"Make sure that the following URL is accessible:\\n{}\"\n- .format(version_msg, REMOTE_VERSION_FILE))\n+ .format(version_msg, PYPI_VERSION_URL))\n \n if installed == latest:\n return \"{}Up to date!\".format(version_msg)\n", "issue": "don't rely on master branch for latest version number\n## Feature\r\n\r\n### Feature description\r\nThe `master` branch of dbt isn't really a thing anymore. 
Instead of relying on the [master](https://github.com/fishtown-analytics/dbt/blob/51f68e3aabcda57afbe5051983d1d17e092be665/dbt/version.py#L12) branch to grab the latest release number, we should pull it from PyPi.\r\n\r\nWe can use [this api](https://warehouse.readthedocs.io/api-reference/json/) to fetch [some JSON info](https://pypi.org/pypi/dbt/json) about dbt releases.\r\n\r\nWe need to confirm that pre-releases are not shown as the latest version for a package on PyPi.\r\n\r\n### Who will this benefit?\r\ndbt maintainers :) \n", "before_files": [{"content": "import re\n\nimport dbt.semver\n\ntry:\n # For Python 3.0 and later\n from urllib.request import urlopen\nexcept ImportError:\n # Fall back to Python 2's urllib2\n from urllib2 import urlopen\n\nREMOTE_VERSION_FILE = \\\n 'https://raw.githubusercontent.com/fishtown-analytics/dbt/' \\\n 'master/.bumpversion.cfg'\n\n\ndef get_version_string_from_text(contents):\n matches = re.search(r\"current_version = ([\\.0-9a-z]+)\", contents)\n if matches is None or len(matches.groups()) != 1:\n return \"\"\n version = matches.groups()[0]\n return version\n\n\ndef get_remote_version_file_contents(url=REMOTE_VERSION_FILE):\n try:\n f = urlopen(url)\n contents = f.read()\n except Exception:\n contents = ''\n if hasattr(contents, 'decode'):\n contents = contents.decode('utf-8')\n return contents\n\n\ndef get_latest_version():\n contents = get_remote_version_file_contents()\n if contents == '':\n return None\n version_string = get_version_string_from_text(contents)\n return dbt.semver.VersionSpecifier.from_version_string(version_string)\n\n\ndef get_installed_version():\n return dbt.semver.VersionSpecifier.from_version_string(__version__)\n\n\ndef get_version_information():\n installed = get_installed_version()\n latest = get_latest_version()\n\n installed_s = installed.to_version_string(skip_matcher=True)\n if latest is None:\n latest_s = 'unknown'\n else:\n latest_s = latest.to_version_string(skip_matcher=True)\n\n version_msg = (\"installed version: {}\\n\"\n \" latest version: {}\\n\\n\".format(installed_s, latest_s))\n\n if latest is None:\n return (\"{}The latest version of dbt could not be determined!\\n\"\n \"Make sure that the following URL is accessible:\\n{}\"\n .format(version_msg, REMOTE_VERSION_FILE))\n\n if installed == latest:\n return \"{}Up to date!\".format(version_msg)\n\n elif installed > latest:\n return (\"{}Your version of dbt is ahead of the latest \"\n \"release!\".format(version_msg))\n\n else:\n return (\"{}Your version of dbt is out of date! \"\n \"You can find instructions for upgrading here:\\n\"\n \"https://docs.getdbt.com/docs/installation\"\n .format(version_msg))\n\n\n__version__ = '0.12.1'\ninstalled = get_installed_version()\n", "path": "dbt/version.py"}]} | 1,416 | 499 |
gh_patches_debug_27759 | rasdani/github-patches | git_diff | mdn__kuma-6029 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Search json endpoint is not available in new front-end
**Summary**
https://twitter.com/klaascuvelier/status/1182203293117886464
**Steps To Reproduce (STR)**
_How can we reproduce the problem?_
Go to https://developer.mozilla.org/en-US/search.json?q=array
**Actual behavior**
Blank page
**Expected behavior**
JSON is returned like it is now only at https://wiki.developer.mozilla.org/en-US/search.json?q=array
**Additional context**
There might be a few external services, twitter bots etc. that depend on this endpoint.
</issue>
<code>
[start of kuma/search/views.py]
1 from django.shortcuts import render
2 from django.views.decorators.cache import never_cache
3 from django.views.decorators.http import require_GET
4 from ratelimit.decorators import ratelimit
5
6 from kuma.api.v1.views import search as search_api
7 from kuma.core.decorators import shared_cache_control
8 from kuma.core.utils import is_wiki
9
10 from .search import SearchView
11
12 # Since the search endpoint accepts user input (via query parameters) and its
13 # response is compressed, use rate limiting to mitigate the BREACH attack
14 # (see http://breachattack.com/). It still needs to allow a user to click
15 # the filter switches (bug 1426968).
16 # Alternate: forbid gzip by setting Content-Encoding: identity
17 @never_cache
18 @require_GET
19 @ratelimit(key='user_or_ip', rate='25/m', block=True)
20 def search(request, *args, **kwargs):
21 """
22 The search view.
23 """
24 if is_wiki(request):
25 return wiki_search(request, *args, **kwargs)
26
27 results = search_api(request, *args, **kwargs).data
28 context = {
29 'results': {
30 'results': None if results.get('error') else results
31 }
32 }
33
34 return render(request, 'search/react.html', context)
35
36
37 wiki_search = SearchView.as_view()
38
39
40 @shared_cache_control(s_maxage=60 * 60 * 24 * 7)
41 def plugin(request):
42 """Render an OpenSearch Plugin."""
43 return render(request, 'search/plugin.html', {
44 'locale': request.LANGUAGE_CODE
45 }, content_type='application/opensearchdescription+xml')
46
[end of kuma/search/views.py]
[start of kuma/search/urls.py]
1 from django.conf.urls import url
2 from rest_framework.urlpatterns import format_suffix_patterns
3
4 from . import views
5
6 # this allows using ".json" extensions for the view to force json output
7 lang_base_urlpatterns = format_suffix_patterns(
8 [url(r'^$', views.search, name='search')])
9
10
11 lang_urlpatterns = [
12 url(r'^xml$',
13 views.plugin,
14 name='search.plugin'),
15 ]
16
[end of kuma/search/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kuma/search/urls.py b/kuma/search/urls.py
--- a/kuma/search/urls.py
+++ b/kuma/search/urls.py
@@ -1,11 +1,11 @@
from django.conf.urls import url
-from rest_framework.urlpatterns import format_suffix_patterns
from . import views
-# this allows using ".json" extensions for the view to force json output
-lang_base_urlpatterns = format_suffix_patterns(
- [url(r'^$', views.search, name='search')])
+lang_base_urlpatterns = [
+ url(r'^$', views.search, name='search'),
+ url(r'^.(?P<format>json)$', views.SearchRedirectView.as_view())
+]
lang_urlpatterns = [
diff --git a/kuma/search/views.py b/kuma/search/views.py
--- a/kuma/search/views.py
+++ b/kuma/search/views.py
@@ -1,6 +1,8 @@
from django.shortcuts import render
+from django.urls import reverse_lazy
from django.views.decorators.cache import never_cache
from django.views.decorators.http import require_GET
+from django.views.generic import RedirectView
from ratelimit.decorators import ratelimit
from kuma.api.v1.views import search as search_api
@@ -37,6 +39,17 @@
wiki_search = SearchView.as_view()
+class SearchRedirectView(RedirectView):
+ permanent = True
+
+ def get_redirect_url(self, *args, **kwargs):
+ query_string = self.request.META.get('QUERY_STRING')
+ url = reverse_lazy('api.v1.search', kwargs={'locale': self.request.LANGUAGE_CODE})
+ if query_string:
+ url += '?' + query_string
+ return url
+
+
@shared_cache_control(s_maxage=60 * 60 * 24 * 7)
def plugin(request):
"""Render an OpenSearch Plugin."""
| {"golden_diff": "diff --git a/kuma/search/urls.py b/kuma/search/urls.py\n--- a/kuma/search/urls.py\n+++ b/kuma/search/urls.py\n@@ -1,11 +1,11 @@\n from django.conf.urls import url\n-from rest_framework.urlpatterns import format_suffix_patterns\n \n from . import views\n \n-# this allows using \".json\" extensions for the view to force json output\n-lang_base_urlpatterns = format_suffix_patterns(\n- [url(r'^$', views.search, name='search')])\n+lang_base_urlpatterns = [\n+ url(r'^$', views.search, name='search'),\n+ url(r'^.(?P<format>json)$', views.SearchRedirectView.as_view())\n+]\n \n \n lang_urlpatterns = [\ndiff --git a/kuma/search/views.py b/kuma/search/views.py\n--- a/kuma/search/views.py\n+++ b/kuma/search/views.py\n@@ -1,6 +1,8 @@\n from django.shortcuts import render\n+from django.urls import reverse_lazy\n from django.views.decorators.cache import never_cache\n from django.views.decorators.http import require_GET\n+from django.views.generic import RedirectView\n from ratelimit.decorators import ratelimit\n \n from kuma.api.v1.views import search as search_api\n@@ -37,6 +39,17 @@\n wiki_search = SearchView.as_view()\n \n \n+class SearchRedirectView(RedirectView):\n+ permanent = True\n+\n+ def get_redirect_url(self, *args, **kwargs):\n+ query_string = self.request.META.get('QUERY_STRING')\n+ url = reverse_lazy('api.v1.search', kwargs={'locale': self.request.LANGUAGE_CODE})\n+ if query_string:\n+ url += '?' + query_string\n+ return url\n+\n+\n @shared_cache_control(s_maxage=60 * 60 * 24 * 7)\n def plugin(request):\n \"\"\"Render an OpenSearch Plugin.\"\"\"\n", "issue": "Search json endpoint is not available in new front-end\n**Summary**\r\nhttps://twitter.com/klaascuvelier/status/1182203293117886464\r\n\r\n\r\n**Steps To Reproduce (STR)**\r\n_How can we reproduce the problem?_\r\n\r\nGo to https://developer.mozilla.org/en-US/search.json?q=array\r\n \r\n\r\n\r\n**Actual behavior**\r\nBlank page\r\n\r\n\r\n**Expected behavior**\r\nJSON is returned like it is now only at https://wiki.developer.mozilla.org/en-US/search.json?q=array\r\n\r\n\r\n**Additional context**\r\nThere might be a few external services, twitter bots etc. that depend on this endpoint.\r\n\n", "before_files": [{"content": "from django.shortcuts import render\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.http import require_GET\nfrom ratelimit.decorators import ratelimit\n\nfrom kuma.api.v1.views import search as search_api\nfrom kuma.core.decorators import shared_cache_control\nfrom kuma.core.utils import is_wiki\n\nfrom .search import SearchView\n\n# Since the search endpoint accepts user input (via query parameters) and its\n# response is compressed, use rate limiting to mitigate the BREACH attack\n# (see http://breachattack.com/). 
gh_patches_debug_27319 | rasdani/github-patches | git_diff | medtagger__MedTagger-306 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow backend to return Slices in reverse order
## Expected Behavior
When user moves slider down, backend should send Slices in reverse order, so that UI will be able to show them first.
## Actual Behavior
Backend always sends Slices in ascending order.
## Steps to Reproduce the Problem
1. Go to the marker page.
2. Move to the bottom of current view (let's assume that the last Slice on which you are now has index N).
3. UI will request backend to send Slices from range (N-10, N-1).
4. Backend will send Slices **in order**: (N-10, N-9, N-8, ..., N-1).
5. Marker will add (N-10)th Slice to the view from above response.
6. Marker will allow user to move between all Slices in range from N-10 but Slices (N-9, N-8, ...) won't be loaded yet!
## Additional comment
Marker should request backend to send Slices in descending order, so that it will be able to load them to the marker first. Such case should be enabled **only** if user wants to go back/down!
To debug this case, slow your Internet connection down in your browser's dev tools or apply huge load on the backend server.
</issue>
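A minimal sketch of the request shape this would imply on the client side. The `reversed` flag and the exact field names are assumptions for illustration only; the current protocol (shown in the code below) only understands `scan_id`, `begin`, `count` and `orientation`.

```python
# Hypothetical WebSocket payload for scrolling back up from slice N.
# Only 'scan_id', 'begin', 'count' and 'orientation' exist today; 'reversed'
# is the proposed flag that would make the backend emit (N-1, N-2, ..., N-10).
N = 57
request_slices_payload = {
    "scan_id": "EXAMPLE_SCAN_ID",  # placeholder value
    "begin": N - 10,
    "count": 10,
    "reversed": True,
}
```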
<code>
[start of backend/medtagger/api/scans/service_web_socket.py]
1 """Module responsible for definition of Scans service available via WebSockets."""
2 from typing import Dict
3
4 from flask_socketio import Namespace, emit
5
6 from medtagger.api import web_socket
7 from medtagger.database.models import SliceOrientation
8 from medtagger.types import ScanID
9 from medtagger.api.exceptions import InvalidArgumentsException
10 from medtagger.api.scans import business
11
12
13 class Slices(Namespace):
14 """WebSocket handler for /slices namespace."""
15
16 MAX_NUMBER_OF_SLICES_PER_REQUEST = 25
17
18 def on_request_slices(self, request: Dict) -> None:
19 """Handle slices request triggered by `request_slices` event."""
20 assert request.get('scan_id'), 'ScanID is required!'
21 scan_id = ScanID(str(request['scan_id']))
22 begin = max(0, request.get('begin', 0))
23 count = request.get('count', 1)
24 orientation = request.get('orientation', SliceOrientation.Z.value)
25 self._raise_on_invalid_request_slices(count, orientation)
26
27 orientation = SliceOrientation[orientation]
28 slices = business.get_slices_for_scan(scan_id, begin, count, orientation=orientation)
29 for index, (_slice, image) in enumerate(slices):
30 emit('slice', {'scan_id': scan_id, 'index': begin + index, 'image': image})
31
32 def _raise_on_invalid_request_slices(self, count: int, orientation: str) -> None:
33 """Validate incoming request and raise an exception if there are issues with given arguments.
34
35 :param count: number of slices that should be returned
36 :param orientation: Slice's orientation as a string
37 """
38 # Make sure that passed orientation is proper one
39 if orientation not in SliceOrientation.__members__:
40 raise InvalidArgumentsException('Invalid Slice orientation.')
41
42 # Make sure that nobody will fetch whole scan at once. It could freeze our backend application.
43 if count > self.MAX_NUMBER_OF_SLICES_PER_REQUEST:
44 message = 'Cannot return more than {} slices per request.'.format(self.MAX_NUMBER_OF_SLICES_PER_REQUEST)
45 raise InvalidArgumentsException(message)
46
47
48 # Register above namespace
49 web_socket.on_namespace(Slices('/slices'))
50
[end of backend/medtagger/api/scans/service_web_socket.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/medtagger/api/scans/service_web_socket.py b/backend/medtagger/api/scans/service_web_socket.py
--- a/backend/medtagger/api/scans/service_web_socket.py
+++ b/backend/medtagger/api/scans/service_web_socket.py
@@ -21,13 +21,21 @@
scan_id = ScanID(str(request['scan_id']))
begin = max(0, request.get('begin', 0))
count = request.get('count', 1)
+ reversed_order = request.get('reversed', False)
orientation = request.get('orientation', SliceOrientation.Z.value)
self._raise_on_invalid_request_slices(count, orientation)
orientation = SliceOrientation[orientation]
slices = business.get_slices_for_scan(scan_id, begin, count, orientation=orientation)
- for index, (_slice, image) in enumerate(slices):
- emit('slice', {'scan_id': scan_id, 'index': begin + index, 'image': image})
+ slices_to_send = reversed(list(enumerate(slices))) if reversed_order else enumerate(slices)
+ last_in_batch = begin if reversed_order else begin + count - 1
+ for index, (_slice, image) in slices_to_send:
+ emit('slice', {
+ 'scan_id': scan_id,
+ 'index': begin + index,
+ 'last_in_batch': last_in_batch,
+ 'image': image,
+ })
def _raise_on_invalid_request_slices(self, count: int, orientation: str) -> None:
"""Validate incoming request and raise an exception if there are issues with given arguments.
| {"golden_diff": "diff --git a/backend/medtagger/api/scans/service_web_socket.py b/backend/medtagger/api/scans/service_web_socket.py\n--- a/backend/medtagger/api/scans/service_web_socket.py\n+++ b/backend/medtagger/api/scans/service_web_socket.py\n@@ -21,13 +21,21 @@\n scan_id = ScanID(str(request['scan_id']))\n begin = max(0, request.get('begin', 0))\n count = request.get('count', 1)\n+ reversed_order = request.get('reversed', False)\n orientation = request.get('orientation', SliceOrientation.Z.value)\n self._raise_on_invalid_request_slices(count, orientation)\n \n orientation = SliceOrientation[orientation]\n slices = business.get_slices_for_scan(scan_id, begin, count, orientation=orientation)\n- for index, (_slice, image) in enumerate(slices):\n- emit('slice', {'scan_id': scan_id, 'index': begin + index, 'image': image})\n+ slices_to_send = reversed(list(enumerate(slices))) if reversed_order else enumerate(slices)\n+ last_in_batch = begin if reversed_order else begin + count - 1\n+ for index, (_slice, image) in slices_to_send:\n+ emit('slice', {\n+ 'scan_id': scan_id,\n+ 'index': begin + index,\n+ 'last_in_batch': last_in_batch,\n+ 'image': image,\n+ })\n \n def _raise_on_invalid_request_slices(self, count: int, orientation: str) -> None:\n \"\"\"Validate incoming request and raise an exception if there are issues with given arguments.\n", "issue": "Allow backend to return Slices in reverse order\n## Expected Behavior\r\n\r\nWhen user moves slider down, backend should send Slices in reverse order, so that UI will be able to show them first.\r\n\r\n## Actual Behavior\r\n\r\nBackend always send Slices in ascending order.\r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. Go to the marker page.\r\n 2. Move to the bottom of current view (let's assume that the last Slice on which you are now has index N).\r\n 3. UI will request backend to send Slices from range (N-10, N-1).\r\n 4. Backend will send Slices **in order**: (N-10, N-9, N-8, ..., N-1).\r\n 5. Marker will add (N-10)th Slice to the view from above response.\r\n 6. Marker will allow user to move between all Slices in range from N-10 but Slices (N-9, N-8, ...) won't be loaded yet!\r\n\r\n## Additional comment\r\n\r\nMarker should request backend to send Slices in descending order, so that it will be able to load them to the marker first. 
Such case should be enabled **only** if user wants to go back/down!\r\n\r\nTo debug this case, slow your Internet connection down in your browser's dev tools or apply huge load on the backend server.\n", "before_files": [{"content": "\"\"\"Module responsible for definition of Scans service available via WebSockets.\"\"\"\nfrom typing import Dict\n\nfrom flask_socketio import Namespace, emit\n\nfrom medtagger.api import web_socket\nfrom medtagger.database.models import SliceOrientation\nfrom medtagger.types import ScanID\nfrom medtagger.api.exceptions import InvalidArgumentsException\nfrom medtagger.api.scans import business\n\n\nclass Slices(Namespace):\n \"\"\"WebSocket handler for /slices namespace.\"\"\"\n\n MAX_NUMBER_OF_SLICES_PER_REQUEST = 25\n\n def on_request_slices(self, request: Dict) -> None:\n \"\"\"Handle slices request triggered by `request_slices` event.\"\"\"\n assert request.get('scan_id'), 'ScanID is required!'\n scan_id = ScanID(str(request['scan_id']))\n begin = max(0, request.get('begin', 0))\n count = request.get('count', 1)\n orientation = request.get('orientation', SliceOrientation.Z.value)\n self._raise_on_invalid_request_slices(count, orientation)\n\n orientation = SliceOrientation[orientation]\n slices = business.get_slices_for_scan(scan_id, begin, count, orientation=orientation)\n for index, (_slice, image) in enumerate(slices):\n emit('slice', {'scan_id': scan_id, 'index': begin + index, 'image': image})\n\n def _raise_on_invalid_request_slices(self, count: int, orientation: str) -> None:\n \"\"\"Validate incoming request and raise an exception if there are issues with given arguments.\n\n :param count: number of slices that should be returned\n :param orientation: Slice's orientation as a string\n \"\"\"\n # Make sure that passed orientation is proper one\n if orientation not in SliceOrientation.__members__:\n raise InvalidArgumentsException('Invalid Slice orientation.')\n\n # Make sure that nobody will fetch whole scan at once. It could freeze our backend application.\n if count > self.MAX_NUMBER_OF_SLICES_PER_REQUEST:\n message = 'Cannot return more than {} slices per request.'.format(self.MAX_NUMBER_OF_SLICES_PER_REQUEST)\n raise InvalidArgumentsException(message)\n\n\n# Register above namespace\nweb_socket.on_namespace(Slices('/slices'))\n", "path": "backend/medtagger/api/scans/service_web_socket.py"}]} | 1,395 | 358 |
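Editor's note on the accepted patch above: `reversed(list(enumerate(slices)))` flips the delivery order while keeping every slice paired with its original offset, so the emitted `index` values stay correct. A self-contained sketch with dummy data:

```python
# Dummy stand-ins for the (slice, image) pairs the backend would stream.
slices = [("slice_a", b"..."), ("slice_b", b"..."), ("slice_c", b"...")]
begin = 40

forward = list(enumerate(slices))
backward = list(reversed(list(enumerate(slices))))

print([begin + i for i, _ in forward])   # [40, 41, 42]  (ascending delivery)
print([begin + i for i, _ in backward])  # [42, 41, 40]  (descending delivery)

# 'last_in_batch' mirrors the patch: the final index the client should expect.
reversed_order = True
last_in_batch = begin if reversed_order else begin + len(slices) - 1
print(last_in_batch)                     # 40
```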
gh_patches_debug_23334 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-318 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to run the `poc` command if nvflare is installed with `pip install -e .`
</issue>
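The report is terse, so the following is an assumed reading rather than something stated in the issue: with an editable install (`pip install -e .`) the packaged `poc.zip` may not be present next to the installed module, so the path computed in `main()` points at a missing file. A quick check from a Python shell, assuming `nvflare` is importable:

```python
import pathlib

import nvflare.lighter.poc as poc

# Same lookup as main(): <.../nvflare/lighter>/.. / "poc.zip"
file_dir_path = pathlib.Path(poc.__file__).parent.absolute()
poc_zip_path = file_dir_path.parent / "poc.zip"

# A regular wheel install ships this file; an editable install may not,
# which would explain the failure described above.
print(poc_zip_path, poc_zip_path.exists())
```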
<code>
[start of nvflare/lighter/poc.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import argparse
16 import os
17 import pathlib
18 import shutil
19
20
21 def clone_client(num_clients: int):
22 current_path = os.getcwd()
23 poc_folder = os.path.join(current_path, "poc")
24 src_folder = os.path.join(poc_folder, "client")
25 for index in range(1, num_clients + 1):
26 dst_folder = os.path.join(poc_folder, f"site-{index}")
27 shutil.copytree(src_folder, dst_folder)
28 start_sh = open(os.path.join(dst_folder, "startup", "start.sh"), "rt")
29 content = start_sh.read()
30 start_sh.close()
31 content = content.replace("NNN", f"{index}")
32 with open(os.path.join(dst_folder, "startup", "start.sh"), "wt") as f:
33 f.write(content)
34 shutil.rmtree(src_folder)
35
36
37 def main():
38 parser = argparse.ArgumentParser()
39 parser.add_argument("-n", "--num_clients", type=int, default=1, help="number of client folders to create")
40
41 args = parser.parse_args()
42
43 file_dir_path = pathlib.Path(__file__).parent.absolute()
44 poc_zip_path = file_dir_path.parent / "poc.zip"
45 answer = input("This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) ")
46 if answer.strip().upper() == "Y":
47 dest_poc_folder = os.path.join(os.getcwd(), "poc")
48 shutil.rmtree(dest_poc_folder, ignore_errors=True)
49 shutil.unpack_archive(poc_zip_path)
50 for root, dirs, files in os.walk(dest_poc_folder):
51 for file in files:
52 if file.endswith(".sh"):
53 os.chmod(os.path.join(root, file), 0o755)
54 clone_client(args.num_clients)
55 print("Successfully creating poc folder. Please read poc/Readme.rst for user guide.")
56
57
58 if __name__ == "__main__":
59 main()
60
[end of nvflare/lighter/poc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nvflare/lighter/poc.py b/nvflare/lighter/poc.py
--- a/nvflare/lighter/poc.py
+++ b/nvflare/lighter/poc.py
@@ -42,11 +42,20 @@
file_dir_path = pathlib.Path(__file__).parent.absolute()
poc_zip_path = file_dir_path.parent / "poc.zip"
+ poc_folder_path = file_dir_path.parent / "poc"
answer = input("This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) ")
if answer.strip().upper() == "Y":
dest_poc_folder = os.path.join(os.getcwd(), "poc")
shutil.rmtree(dest_poc_folder, ignore_errors=True)
- shutil.unpack_archive(poc_zip_path)
+ try:
+ shutil.unpack_archive(poc_zip_path)
+ except shutil.ReadError:
+ print(f"poc.zip not found at {poc_zip_path}, try to use template poc folder")
+ try:
+ shutil.copytree(poc_folder_path, dest_poc_folder)
+ except BaseException:
+ print(f"Unable to copy poc folder from {poc_folder_path}. Exit")
+ exit(1)
for root, dirs, files in os.walk(dest_poc_folder):
for file in files:
if file.endswith(".sh"):
| {"golden_diff": "diff --git a/nvflare/lighter/poc.py b/nvflare/lighter/poc.py\n--- a/nvflare/lighter/poc.py\n+++ b/nvflare/lighter/poc.py\n@@ -42,11 +42,20 @@\n \n file_dir_path = pathlib.Path(__file__).parent.absolute()\n poc_zip_path = file_dir_path.parent / \"poc.zip\"\n+ poc_folder_path = file_dir_path.parent / \"poc\"\n answer = input(\"This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) \")\n if answer.strip().upper() == \"Y\":\n dest_poc_folder = os.path.join(os.getcwd(), \"poc\")\n shutil.rmtree(dest_poc_folder, ignore_errors=True)\n- shutil.unpack_archive(poc_zip_path)\n+ try:\n+ shutil.unpack_archive(poc_zip_path)\n+ except shutil.ReadError:\n+ print(f\"poc.zip not found at {poc_zip_path}, try to use template poc folder\")\n+ try:\n+ shutil.copytree(poc_folder_path, dest_poc_folder)\n+ except BaseException:\n+ print(f\"Unable to copy poc folder from {poc_folder_path}. Exit\")\n+ exit(1)\n for root, dirs, files in os.walk(dest_poc_folder):\n for file in files:\n if file.endswith(\".sh\"):\n", "issue": "Unable to run poc command if nvflare is installed by pip install -e .\n\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport os\nimport pathlib\nimport shutil\n\n\ndef clone_client(num_clients: int):\n current_path = os.getcwd()\n poc_folder = os.path.join(current_path, \"poc\")\n src_folder = os.path.join(poc_folder, \"client\")\n for index in range(1, num_clients + 1):\n dst_folder = os.path.join(poc_folder, f\"site-{index}\")\n shutil.copytree(src_folder, dst_folder)\n start_sh = open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"rt\")\n content = start_sh.read()\n start_sh.close()\n content = content.replace(\"NNN\", f\"{index}\")\n with open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"wt\") as f:\n f.write(content)\n shutil.rmtree(src_folder)\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-n\", \"--num_clients\", type=int, default=1, help=\"number of client folders to create\")\n\n args = parser.parse_args()\n\n file_dir_path = pathlib.Path(__file__).parent.absolute()\n poc_zip_path = file_dir_path.parent / \"poc.zip\"\n answer = input(\"This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) \")\n if answer.strip().upper() == \"Y\":\n dest_poc_folder = os.path.join(os.getcwd(), \"poc\")\n shutil.rmtree(dest_poc_folder, ignore_errors=True)\n shutil.unpack_archive(poc_zip_path)\n for root, dirs, files in os.walk(dest_poc_folder):\n for file in files:\n if file.endswith(\".sh\"):\n os.chmod(os.path.join(root, file), 0o755)\n clone_client(args.num_clients)\n print(\"Successfully creating poc folder. Please read poc/Readme.rst for user guide.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/lighter/poc.py"}]} | 1,230 | 306 |
gh_patches_debug_41144 | rasdani/github-patches | git_diff | streamlink__streamlink-4029 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
artetv: de/fr Livestreams aren't playable anymore
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
For about a week now, the live channels haven't been playable. VODs, however, are working fine.
### Debug log
```text
streamlink https://www.arte.tv/de/live/ worst -l debug
[cli][debug] OS: Linux-5.14.3-arch1-1-x86_64-with-glibc2.33
[cli][debug] Python: 3.9.7
[cli][debug] Streamlink: 2.4.0+17.g24c59a2
[cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(0.59.0)
[cli][debug] Arguments:
[cli][debug] url=https://www.arte.tv/de/live/
[cli][debug] stream=['worst']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin artetv for URL https://www.arte.tv/de/live/
error: No playable streams found on this URL: https://www.arte.tv/de/live/
streamlink https://www.arte.tv/fr/direct/ best -l debug
[cli][debug] OS: Linux-5.14.3-arch1-1-x86_64-with-glibc2.33
[cli][debug] Python: 3.9.7
[cli][debug] Streamlink: 2.4.0+17.g24c59a2
[cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(0.59.0)
[cli][debug] Arguments:
[cli][debug] url=https://www.arte.tv/fr/direct/
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin artetv for URL https://www.arte.tv/fr/direct/
error: No playable streams found on this URL: https://www.arte.tv/fr/direct/
```
plugins.arte: switch to arte.tv v2 API
The Arte.tv V1 API doesn't seem to work anymore for live streams (see #4026).
Both the web site and the mobile app use the V2 API, which requires an authentication token. The token from the web site is used here for this fix.
</issue>
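One way to confirm the diagnosis outside Streamlink is to query the v2 player-config endpoint directly. The URL pattern, the Bearer header and the response shape below are taken from the accepted patch further down; the token value itself is a placeholder.

```python
import requests

API_TOKEN = "<arte-player-token>"  # placeholder; the real token is embedded in the plugin
url = "https://api.arte.tv/api/player/v2/config/de/LIVE"

resp = requests.get(url, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=10)
resp.raise_for_status()
data = resp.json()

# The fields the plugin cares about: slot, protocol and the HLS URL.
for stream in data["data"]["attributes"]["streams"]:
    print(stream["slot"], stream["protocol"], stream["url"])
```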
<code>
[start of src/streamlink/plugins/artetv.py]
1 """Plugin for Arte.tv, bi-lingual art and culture channel."""
2
3 import logging
4 import re
5 from operator import itemgetter
6
7 from streamlink.plugin import Plugin, pluginmatcher
8 from streamlink.plugin.api import validate
9 from streamlink.stream import HLSStream
10
11 log = logging.getLogger(__name__)
12 JSON_VOD_URL = "https://api.arte.tv/api/player/v1/config/{0}/{1}?platform=ARTE_NEXT"
13 JSON_LIVE_URL = "https://api.arte.tv/api/player/v1/livestream/{0}"
14
15 _video_schema = validate.Schema({
16 "videoJsonPlayer": {
17 "VSR": validate.any(
18 [],
19 {
20 validate.text: {
21 "height": int,
22 "mediaType": validate.text,
23 "url": validate.text,
24 "versionProg": int,
25 "versionLibelle": validate.text
26 },
27 },
28 )
29 }
30 })
31
32
33 @pluginmatcher(re.compile(r"""
34 https?://(?:\w+\.)?arte\.tv/(?:guide/)?
35 (?P<language>[a-z]{2})/
36 (?:
37 (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+
38 |
39 (?:direct|live)
40 )
41 """, re.VERBOSE))
42 class ArteTV(Plugin):
43 def _create_stream(self, streams):
44 variant, variantname = min([(stream["versionProg"], stream["versionLibelle"]) for stream in streams.values()],
45 key=itemgetter(0))
46 log.debug(f"Using the '{variantname}' stream variant")
47 for sname, stream in streams.items():
48 if stream["versionProg"] == variant:
49 if stream["mediaType"] == "hls":
50 try:
51 streams = HLSStream.parse_variant_playlist(self.session, stream["url"])
52 yield from streams.items()
53 except OSError as err:
54 log.warning(f"Failed to extract HLS streams for {sname}/{stream['versionLibelle']}: {err}")
55
56 def _get_streams(self):
57 language = self.match.group('language')
58 video_id = self.match.group('video_id')
59 if video_id is None:
60 json_url = JSON_LIVE_URL.format(language)
61 else:
62 json_url = JSON_VOD_URL.format(language, video_id)
63 res = self.session.http.get(json_url)
64 video = self.session.http.json(res, schema=_video_schema)
65
66 if not video["videoJsonPlayer"]["VSR"]:
67 return
68
69 vsr = video["videoJsonPlayer"]["VSR"]
70 return self._create_stream(vsr)
71
72
73 __plugin__ = ArteTV
74
[end of src/streamlink/plugins/artetv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/artetv.py b/src/streamlink/plugins/artetv.py
--- a/src/streamlink/plugins/artetv.py
+++ b/src/streamlink/plugins/artetv.py
@@ -1,5 +1,3 @@
-"""Plugin for Arte.tv, bi-lingual art and culture channel."""
-
import logging
import re
from operator import itemgetter
@@ -9,25 +7,6 @@
from streamlink.stream import HLSStream
log = logging.getLogger(__name__)
-JSON_VOD_URL = "https://api.arte.tv/api/player/v1/config/{0}/{1}?platform=ARTE_NEXT"
-JSON_LIVE_URL = "https://api.arte.tv/api/player/v1/livestream/{0}"
-
-_video_schema = validate.Schema({
- "videoJsonPlayer": {
- "VSR": validate.any(
- [],
- {
- validate.text: {
- "height": int,
- "mediaType": validate.text,
- "url": validate.text,
- "versionProg": int,
- "versionLibelle": validate.text
- },
- },
- )
- }
-})
@pluginmatcher(re.compile(r"""
@@ -40,34 +19,49 @@
)
""", re.VERBOSE))
class ArteTV(Plugin):
- def _create_stream(self, streams):
- variant, variantname = min([(stream["versionProg"], stream["versionLibelle"]) for stream in streams.values()],
- key=itemgetter(0))
- log.debug(f"Using the '{variantname}' stream variant")
- for sname, stream in streams.items():
- if stream["versionProg"] == variant:
- if stream["mediaType"] == "hls":
- try:
- streams = HLSStream.parse_variant_playlist(self.session, stream["url"])
- yield from streams.items()
- except OSError as err:
- log.warning(f"Failed to extract HLS streams for {sname}/{stream['versionLibelle']}: {err}")
+ API_URL = "https://api.arte.tv/api/player/v2/config/{0}/{1}"
+ API_TOKEN = "MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ"
def _get_streams(self):
- language = self.match.group('language')
- video_id = self.match.group('video_id')
- if video_id is None:
- json_url = JSON_LIVE_URL.format(language)
- else:
- json_url = JSON_VOD_URL.format(language, video_id)
- res = self.session.http.get(json_url)
- video = self.session.http.json(res, schema=_video_schema)
+ language = self.match.group("language")
+ video_id = self.match.group("video_id")
- if not video["videoJsonPlayer"]["VSR"]:
+ json_url = self.API_URL.format(language, video_id or "LIVE")
+ headers = {
+ "Authorization": f"Bearer {self.API_TOKEN}"
+ }
+ streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(
+ validate.parse_json(),
+ {"data": {"attributes": {
+ "streams": validate.any(
+ [],
+ [
+ validate.all(
+ {
+ "url": validate.url(),
+ "slot": int,
+ "protocol": validate.any("HLS", "HLS_NG"),
+ },
+ validate.union_get("slot", "protocol", "url")
+ )
+ ]
+ ),
+ "metadata": {
+ "title": str,
+ "subtitle": validate.any(None, str)
+ }
+ }}},
+ validate.get(("data", "attributes")),
+ validate.union_get("streams", "metadata")
+ ))
+
+ if not streams:
return
- vsr = video["videoJsonPlayer"]["VSR"]
- return self._create_stream(vsr)
+ self.title = f"{metadata['title']} - {metadata['subtitle']}" if metadata["subtitle"] else metadata["title"]
+
+ for slot, protocol, url in sorted(streams, key=itemgetter(0)):
+ return HLSStream.parse_variant_playlist(self.session, url)
__plugin__ = ArteTV
| {"golden_diff": "diff --git a/src/streamlink/plugins/artetv.py b/src/streamlink/plugins/artetv.py\n--- a/src/streamlink/plugins/artetv.py\n+++ b/src/streamlink/plugins/artetv.py\n@@ -1,5 +1,3 @@\n-\"\"\"Plugin for Arte.tv, bi-lingual art and culture channel.\"\"\"\n-\n import logging\n import re\n from operator import itemgetter\n@@ -9,25 +7,6 @@\n from streamlink.stream import HLSStream\n \n log = logging.getLogger(__name__)\n-JSON_VOD_URL = \"https://api.arte.tv/api/player/v1/config/{0}/{1}?platform=ARTE_NEXT\"\n-JSON_LIVE_URL = \"https://api.arte.tv/api/player/v1/livestream/{0}\"\n-\n-_video_schema = validate.Schema({\n- \"videoJsonPlayer\": {\n- \"VSR\": validate.any(\n- [],\n- {\n- validate.text: {\n- \"height\": int,\n- \"mediaType\": validate.text,\n- \"url\": validate.text,\n- \"versionProg\": int,\n- \"versionLibelle\": validate.text\n- },\n- },\n- )\n- }\n-})\n \n \n @pluginmatcher(re.compile(r\"\"\"\n@@ -40,34 +19,49 @@\n )\n \"\"\", re.VERBOSE))\n class ArteTV(Plugin):\n- def _create_stream(self, streams):\n- variant, variantname = min([(stream[\"versionProg\"], stream[\"versionLibelle\"]) for stream in streams.values()],\n- key=itemgetter(0))\n- log.debug(f\"Using the '{variantname}' stream variant\")\n- for sname, stream in streams.items():\n- if stream[\"versionProg\"] == variant:\n- if stream[\"mediaType\"] == \"hls\":\n- try:\n- streams = HLSStream.parse_variant_playlist(self.session, stream[\"url\"])\n- yield from streams.items()\n- except OSError as err:\n- log.warning(f\"Failed to extract HLS streams for {sname}/{stream['versionLibelle']}: {err}\")\n+ API_URL = \"https://api.arte.tv/api/player/v2/config/{0}/{1}\"\n+ API_TOKEN = \"MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ\"\n \n def _get_streams(self):\n- language = self.match.group('language')\n- video_id = self.match.group('video_id')\n- if video_id is None:\n- json_url = JSON_LIVE_URL.format(language)\n- else:\n- json_url = JSON_VOD_URL.format(language, video_id)\n- res = self.session.http.get(json_url)\n- video = self.session.http.json(res, schema=_video_schema)\n+ language = self.match.group(\"language\")\n+ video_id = self.match.group(\"video_id\")\n \n- if not video[\"videoJsonPlayer\"][\"VSR\"]:\n+ json_url = self.API_URL.format(language, video_id or \"LIVE\")\n+ headers = {\n+ \"Authorization\": f\"Bearer {self.API_TOKEN}\"\n+ }\n+ streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(\n+ validate.parse_json(),\n+ {\"data\": {\"attributes\": {\n+ \"streams\": validate.any(\n+ [],\n+ [\n+ validate.all(\n+ {\n+ \"url\": validate.url(),\n+ \"slot\": int,\n+ \"protocol\": validate.any(\"HLS\", \"HLS_NG\"),\n+ },\n+ validate.union_get(\"slot\", \"protocol\", \"url\")\n+ )\n+ ]\n+ ),\n+ \"metadata\": {\n+ \"title\": str,\n+ \"subtitle\": validate.any(None, str)\n+ }\n+ }}},\n+ validate.get((\"data\", \"attributes\")),\n+ validate.union_get(\"streams\", \"metadata\")\n+ ))\n+\n+ if not streams:\n return\n \n- vsr = video[\"videoJsonPlayer\"][\"VSR\"]\n- return self._create_stream(vsr)\n+ self.title = f\"{metadata['title']} - {metadata['subtitle']}\" if metadata[\"subtitle\"] else metadata[\"title\"]\n+\n+ for slot, protocol, url in sorted(streams, key=itemgetter(0)):\n+ return HLSStream.parse_variant_playlist(self.session, url)\n \n \n __plugin__ = ArteTV\n", "issue": "artetv: de/fr Livestreams aren't playable anymore\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution 
guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest build from the master branch\n\n### Description\n\nSince about a week the live channels aren't playable anymore. However VODs working fine.\r\n\n\n### Debug log\n\n```text\nstreamlink https://www.arte.tv/de/live/ worst -l debug\r\n[cli][debug] OS: Linux-5.14.3-arch1-1-x86_64-with-glibc2.33\r\n[cli][debug] Python: 3.9.7\r\n[cli][debug] Streamlink: 2.4.0+17.g24c59a2\r\n[cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(0.59.0)\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.arte.tv/de/live/\r\n[cli][debug] stream=['worst']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin artetv for URL https://www.arte.tv/de/live/\r\nerror: No playable streams found on this URL: https://www.arte.tv/de/live/\r\n\r\nstreamlink https://www.arte.tv/fr/direct/ best -l debug\r\n[cli][debug] OS: Linux-5.14.3-arch1-1-x86_64-with-glibc2.33\r\n[cli][debug] Python: 3.9.7\r\n[cli][debug] Streamlink: 2.4.0+17.g24c59a2\r\n[cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(0.59.0)\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.arte.tv/fr/direct/\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin artetv for URL https://www.arte.tv/fr/direct/\r\nerror: No playable streams found on this URL: https://www.arte.tv/fr/direct/\n```\n\nplugins.arte: switch to arte.tv v2 API\nThe Arte.tv V1 API doens't seem to work anymore for live streams (see #4026).\r\n\r\nBoth web site and mobile app use the V2 API, which requires an authentication token. 
The one from the website is used here for this fix.\n", "before_files": [{"content": "\"\"\"Plugin for Arte.tv, bi-lingual art and culture channel.\"\"\"\n\nimport logging\nimport re\nfrom operator import itemgetter\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\n\nlog = logging.getLogger(__name__)\nJSON_VOD_URL = \"https://api.arte.tv/api/player/v1/config/{0}/{1}?platform=ARTE_NEXT\"\nJSON_LIVE_URL = \"https://api.arte.tv/api/player/v1/livestream/{0}\"\n\n_video_schema = validate.Schema({\n \"videoJsonPlayer\": {\n \"VSR\": validate.any(\n [],\n {\n validate.text: {\n \"height\": int,\n \"mediaType\": validate.text,\n \"url\": validate.text,\n \"versionProg\": int,\n \"versionLibelle\": validate.text\n },\n },\n )\n }\n})\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?\n (?P<language>[a-z]{2})/\n (?:\n (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\n |\n (?:direct|live)\n )\n\"\"\", re.VERBOSE))\nclass ArteTV(Plugin):\n def _create_stream(self, streams):\n variant, variantname = min([(stream[\"versionProg\"], stream[\"versionLibelle\"]) for stream in streams.values()],\n key=itemgetter(0))\n log.debug(f\"Using the '{variantname}' stream variant\")\n for sname, stream in streams.items():\n if stream[\"versionProg\"] == variant:\n if stream[\"mediaType\"] == \"hls\":\n try:\n streams = HLSStream.parse_variant_playlist(self.session, stream[\"url\"])\n yield from streams.items()\n except OSError as err:\n log.warning(f\"Failed to extract HLS streams for {sname}/{stream['versionLibelle']}: {err}\")\n\n def _get_streams(self):\n language = self.match.group('language')\n video_id = self.match.group('video_id')\n if video_id is None:\n json_url = JSON_LIVE_URL.format(language)\n else:\n json_url = JSON_VOD_URL.format(language, video_id)\n res = self.session.http.get(json_url)\n video = self.session.http.json(res, schema=_video_schema)\n\n if not video[\"videoJsonPlayer\"][\"VSR\"]:\n return\n\n vsr = video[\"videoJsonPlayer\"][\"VSR\"]\n return self._create_stream(vsr)\n\n\n__plugin__ = ArteTV\n", "path": "src/streamlink/plugins/artetv.py"}]} | 1,926 | 991 |
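Editor's note: the rewritten `_get_streams` above sorts the v2 streams by `slot` and returns the variant playlist of the first entry. A tiny, self-contained illustration of that selection logic with made-up data:

```python
from operator import itemgetter

# Fake (slot, protocol, url) tuples in the shape produced by the new validator.
streams = [
    (2, "HLS_NG", "https://example.invalid/alt.m3u8"),
    (1, "HLS", "https://example.invalid/main.m3u8"),
]

for slot, protocol, url in sorted(streams, key=itemgetter(0)):
    # The plugin would call HLSStream.parse_variant_playlist(session, url) here
    # and return immediately, so only the lowest slot is ever used.
    print(f"selected slot={slot} protocol={protocol} url={url}")
    break
```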
gh_patches_debug_31351 | rasdani/github-patches | git_diff | iterative__dvc-2646 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
get/import: could not perform a HEAD request
```
DVC version: 0.62.1
Python version: 3.7.3
Platform: Darwin-18.7.0-x86_64-i386-64bit
Binary: False
Cache: reflink - True, hardlink - True, symlink - True
Filesystem type (cache directory): ('apfs', '/dev/disk1s1')
Filesystem type (workspace): ('apfs', '/dev/disk1s1')
```
I'm trying to import a directory versioned in our own [dataset registry](https://github.com/iterative/dataset-registry) project into an empty, non-Git DVC project, but getting this cryptic error:
```console
$ dvc import --rev 0547f58 \
[email protected]:iterative/dataset-registry.git \
use-cases/data
Importing 'use-cases/data ([email protected]:iterative/dataset-registry.git)' -> 'data'
ERROR: failed to import 'use-cases/data' from '[email protected]:iterative/dataset-registry.git'. - unable to find DVC-file with output '../../../../private/var/folders/_c/3mt_xn_d4xl2ddsx2m98h_r40000gn/T/tmphs83czecdvc-repo/use-cases/data'
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
```
The directory in question has file name `b6923e1e4ad16ea1a7e2b328842d56a2.dir` (See [use-cases/cats-dogs.dvc](https://github.com/iterative/dataset-registry/blob/0547f58/use-cases/cats-dogs.dvc) of that version). And the default remote is [configured](https://github.com/iterative/dataset-registry/blob/master/.dvc/config) to https://remote.dvc.org/dataset-registry (which is an HTTP redirect to the s3://dvc-public/remote/dataset-registry bucket). ~~The file seems to be in the remote~~
Am I just doing something wrong here (hopefully), or is `dvc import` broken?
</issue>
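A quick sanity check for reports like this (not part of the issue itself) is to issue the same kind of HEAD request the HTTP remote uses for existence checks and see whether it succeeds, hangs or times out. The checksum comes from the issue; the two-character prefix layout of the remote path is an assumption about the remote's structure.

```python
import requests

checksum = "b6923e1e4ad16ea1a7e2b328842d56a2.dir"
# Assumed cache layout: <remote url>/<first two chars>/<rest of checksum>
url = f"https://remote.dvc.org/dataset-registry/{checksum[:2]}/{checksum[2:]}"

resp = requests.head(url, allow_redirects=True, timeout=10)
print(resp.status_code, resp.headers.get("ETag"))
```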
<code>
[start of dvc/remote/http.py]
1 from __future__ import unicode_literals
2
3 import logging
4 from dvc.scheme import Schemes
5 from dvc.utils.compat import open
6
7 from dvc.progress import Tqdm
8 from dvc.exceptions import DvcException
9 from dvc.config import Config, ConfigError
10 from dvc.remote.base import RemoteBASE
11
12 logger = logging.getLogger(__name__)
13
14
15 class RemoteHTTP(RemoteBASE):
16 scheme = Schemes.HTTP
17 REQUEST_TIMEOUT = 10
18 CHUNK_SIZE = 2 ** 16
19 PARAM_CHECKSUM = "etag"
20
21 def __init__(self, repo, config):
22 super(RemoteHTTP, self).__init__(repo, config)
23
24 url = config.get(Config.SECTION_REMOTE_URL)
25 self.path_info = self.path_cls(url) if url else None
26
27 if not self.no_traverse:
28 raise ConfigError(
29 "HTTP doesn't support traversing the remote to list existing "
30 "files. Use: `dvc remote modify <name> no_traverse true`"
31 )
32
33 def _download(self, from_info, to_file, name=None, no_progress_bar=False):
34 request = self._request("GET", from_info.url, stream=True)
35 with Tqdm(
36 total=None if no_progress_bar else self._content_length(from_info),
37 leave=False,
38 bytes=True,
39 desc=from_info.url if name is None else name,
40 disable=no_progress_bar,
41 ) as pbar:
42 with open(to_file, "wb") as fd:
43 for chunk in request.iter_content(chunk_size=self.CHUNK_SIZE):
44 fd.write(chunk)
45 fd.flush()
46 pbar.update(len(chunk))
47
48 def exists(self, path_info):
49 return bool(self._request("HEAD", path_info.url))
50
51 def _content_length(self, url_or_request):
52 headers = getattr(
53 url_or_request,
54 "headers",
55 self._request("HEAD", url_or_request).headers,
56 )
57 res = headers.get("Content-Length")
58 return int(res) if res else None
59
60 def get_file_checksum(self, path_info):
61 url = path_info.url
62 headers = self._request("HEAD", url).headers
63 etag = headers.get("ETag") or headers.get("Content-MD5")
64
65 if not etag:
66 raise DvcException(
67 "could not find an ETag or "
68 "Content-MD5 header for '{url}'".format(url=url)
69 )
70
71 if etag.startswith("W/"):
72 raise DvcException(
73 "Weak ETags are not supported."
74 " (Etag: '{etag}', URL: '{url}')".format(etag=etag, url=url)
75 )
76
77 return etag
78
79 def _request(self, method, url, **kwargs):
80 import requests
81
82 kwargs.setdefault("allow_redirects", True)
83 kwargs.setdefault("timeout", self.REQUEST_TIMEOUT)
84
85 try:
86 return requests.request(method, url, **kwargs)
87 except requests.exceptions.RequestException:
88 raise DvcException("could not perform a {} request".format(method))
89
90 def gc(self):
91 raise NotImplementedError
92
[end of dvc/remote/http.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/remote/http.py b/dvc/remote/http.py
--- a/dvc/remote/http.py
+++ b/dvc/remote/http.py
@@ -1,9 +1,11 @@
from __future__ import unicode_literals
import logging
+
+from funcy import cached_property
+
from dvc.scheme import Schemes
from dvc.utils.compat import open
-
from dvc.progress import Tqdm
from dvc.exceptions import DvcException
from dvc.config import Config, ConfigError
@@ -14,6 +16,8 @@
class RemoteHTTP(RemoteBASE):
scheme = Schemes.HTTP
+ SESSION_RETRIES = 5
+ SESSION_BACKOFF_FACTOR = 0.1
REQUEST_TIMEOUT = 10
CHUNK_SIZE = 2 ** 16
PARAM_CHECKSUM = "etag"
@@ -76,6 +80,24 @@
return etag
+ @cached_property
+ def _session(self):
+ import requests
+ from requests.adapters import HTTPAdapter
+ from urllib3.util.retry import Retry
+
+ session = requests.Session()
+
+ retries = Retry(
+ total=self.SESSION_RETRIES,
+ backoff_factor=self.SESSION_BACKOFF_FACTOR,
+ )
+
+ session.mount("http://", HTTPAdapter(max_retries=retries))
+ session.mount("https://", HTTPAdapter(max_retries=retries))
+
+ return session
+
def _request(self, method, url, **kwargs):
import requests
@@ -83,7 +105,7 @@
kwargs.setdefault("timeout", self.REQUEST_TIMEOUT)
try:
- return requests.request(method, url, **kwargs)
+ return self._session.request(method, url, **kwargs)
except requests.exceptions.RequestException:
raise DvcException("could not perform a {} request".format(method))
| {"golden_diff": "diff --git a/dvc/remote/http.py b/dvc/remote/http.py\n--- a/dvc/remote/http.py\n+++ b/dvc/remote/http.py\n@@ -1,9 +1,11 @@\n from __future__ import unicode_literals\n \n import logging\n+\n+from funcy import cached_property\n+\n from dvc.scheme import Schemes\n from dvc.utils.compat import open\n-\n from dvc.progress import Tqdm\n from dvc.exceptions import DvcException\n from dvc.config import Config, ConfigError\n@@ -14,6 +16,8 @@\n \n class RemoteHTTP(RemoteBASE):\n scheme = Schemes.HTTP\n+ SESSION_RETRIES = 5\n+ SESSION_BACKOFF_FACTOR = 0.1\n REQUEST_TIMEOUT = 10\n CHUNK_SIZE = 2 ** 16\n PARAM_CHECKSUM = \"etag\"\n@@ -76,6 +80,24 @@\n \n return etag\n \n+ @cached_property\n+ def _session(self):\n+ import requests\n+ from requests.adapters import HTTPAdapter\n+ from urllib3.util.retry import Retry\n+\n+ session = requests.Session()\n+\n+ retries = Retry(\n+ total=self.SESSION_RETRIES,\n+ backoff_factor=self.SESSION_BACKOFF_FACTOR,\n+ )\n+\n+ session.mount(\"http://\", HTTPAdapter(max_retries=retries))\n+ session.mount(\"https://\", HTTPAdapter(max_retries=retries))\n+\n+ return session\n+\n def _request(self, method, url, **kwargs):\n import requests\n \n@@ -83,7 +105,7 @@\n kwargs.setdefault(\"timeout\", self.REQUEST_TIMEOUT)\n \n try:\n- return requests.request(method, url, **kwargs)\n+ return self._session.request(method, url, **kwargs)\n except requests.exceptions.RequestException:\n raise DvcException(\"could not perform a {} request\".format(method))\n", "issue": "get/import: could not perform a HEAD request\n```\r\nDVC version: 0.62.1\r\nPython version: 3.7.3\r\nPlatform: Darwin-18.7.0-x86_64-i386-64bit\r\nBinary: False\r\nCache: reflink - True, hardlink - True, symlink - True\r\nFilesystem type (cache directory): ('apfs', '/dev/disk1s1')\r\nFilesystem type (workspace): ('apfs', '/dev/disk1s1')\r\n```\r\n\r\nI'm trying to import a directory versioned in our own [dataset registry](https://github.com/iterative/dataset-registry) project into an empty, non-Git DVC project, but getting this cryptic error:\r\n\r\n```console\r\n$ dvc import --rev 0547f58 \\ \r\n [email protected]:iterative/dataset-registry.git \\\r\n use-cases/data\r\nImporting 'use-cases/data ([email protected]:iterative/dataset-registry.git)' -> 'data'\r\nERROR: failed to import 'use-cases/data' from '[email protected]:iterative/dataset-registry.git'. - unable to find DVC-file with output '../../../../private/var/folders/_c/3mt_xn_d4xl2ddsx2m98h_r40000gn/T/tmphs83czecdvc-repo/use-cases/data'\r\n\r\nHaving any troubles? Hit us up at https://dvc.org/support, we are always happy to help!\r\n```\r\n\r\nThe directory in question has file name `b6923e1e4ad16ea1a7e2b328842d56a2.dir ` (See [use-cases/cats-dogs.dvc](https://github.com/iterative/dataset-registry/blob/0547f58/use-cases/cats-dogs.dvc) of that version). And the default remote is [configured[(https://github.com/iterative/dataset-registry/blob/master/.dvc/config) to https://remote.dvc.org/dataset-registry (which is an HTTP redirect to the s3://dvc-public/remote/dataset-registry bucket). 
~~The file seems to be in the remote~~\r\n\r\nAm I just doing something wrong here (hopefully), or is `dvc import` broken?\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nfrom dvc.scheme import Schemes\nfrom dvc.utils.compat import open\n\nfrom dvc.progress import Tqdm\nfrom dvc.exceptions import DvcException\nfrom dvc.config import Config, ConfigError\nfrom dvc.remote.base import RemoteBASE\n\nlogger = logging.getLogger(__name__)\n\n\nclass RemoteHTTP(RemoteBASE):\n scheme = Schemes.HTTP\n REQUEST_TIMEOUT = 10\n CHUNK_SIZE = 2 ** 16\n PARAM_CHECKSUM = \"etag\"\n\n def __init__(self, repo, config):\n super(RemoteHTTP, self).__init__(repo, config)\n\n url = config.get(Config.SECTION_REMOTE_URL)\n self.path_info = self.path_cls(url) if url else None\n\n if not self.no_traverse:\n raise ConfigError(\n \"HTTP doesn't support traversing the remote to list existing \"\n \"files. Use: `dvc remote modify <name> no_traverse true`\"\n )\n\n def _download(self, from_info, to_file, name=None, no_progress_bar=False):\n request = self._request(\"GET\", from_info.url, stream=True)\n with Tqdm(\n total=None if no_progress_bar else self._content_length(from_info),\n leave=False,\n bytes=True,\n desc=from_info.url if name is None else name,\n disable=no_progress_bar,\n ) as pbar:\n with open(to_file, \"wb\") as fd:\n for chunk in request.iter_content(chunk_size=self.CHUNK_SIZE):\n fd.write(chunk)\n fd.flush()\n pbar.update(len(chunk))\n\n def exists(self, path_info):\n return bool(self._request(\"HEAD\", path_info.url))\n\n def _content_length(self, url_or_request):\n headers = getattr(\n url_or_request,\n \"headers\",\n self._request(\"HEAD\", url_or_request).headers,\n )\n res = headers.get(\"Content-Length\")\n return int(res) if res else None\n\n def get_file_checksum(self, path_info):\n url = path_info.url\n headers = self._request(\"HEAD\", url).headers\n etag = headers.get(\"ETag\") or headers.get(\"Content-MD5\")\n\n if not etag:\n raise DvcException(\n \"could not find an ETag or \"\n \"Content-MD5 header for '{url}'\".format(url=url)\n )\n\n if etag.startswith(\"W/\"):\n raise DvcException(\n \"Weak ETags are not supported.\"\n \" (Etag: '{etag}', URL: '{url}')\".format(etag=etag, url=url)\n )\n\n return etag\n\n def _request(self, method, url, **kwargs):\n import requests\n\n kwargs.setdefault(\"allow_redirects\", True)\n kwargs.setdefault(\"timeout\", self.REQUEST_TIMEOUT)\n\n try:\n return requests.request(method, url, **kwargs)\n except requests.exceptions.RequestException:\n raise DvcException(\"could not perform a {} request\".format(method))\n\n def gc(self):\n raise NotImplementedError\n", "path": "dvc/remote/http.py"}]} | 1,896 | 418 |
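Editor's note on the accepted patch above: it mitigates flaky connections by routing every request through a session with urllib3 retries. The same pattern, reduced to a standalone sketch for reuse elsewhere:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_retrying_session(total=5, backoff_factor=0.1):
    """Return a requests.Session that retries transient failures with backoff."""
    session = requests.Session()
    adapter = HTTPAdapter(max_retries=Retry(total=total, backoff_factor=backoff_factor))
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session


# Usage: behaves like plain requests, but transparently retries.
session = make_retrying_session()
resp = session.head("https://remote.dvc.org/dataset-registry", timeout=10)
print(resp.status_code)
```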
gh_patches_debug_17448 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2064 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Block Google from Indexing our Development Environments
## Test plan
GIVEN all dev environments (Test / UAT)
WHEN looking at the _head_ tag
THEN a _meta name="robots" content="none"_ node should be added
GIVEN the live environment
WHEN looking at the _head_ tag
THEN a _meta name="robots" content="none"_ node should not be added
## Issue description
We should add a robots.txt to all NON LIVE machines that prevents Google from indexing the site and displaying the content in search results.
This looks to be pretty simple: https://support.google.com/webmasters/answer/156449?rd=1
</issue>
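The test plan above talks about a `meta name="robots"` tag rather than a literal robots.txt, which suggests exposing a flag to the templates. A hypothetical template fragment (shown as a Python string because the concrete template files are not part of this record) that such a flag could drive:

```python
# Hypothetical base-template snippet; the accepted patch below exposes `debug`,
# and a dedicated `is_live`/`robots_none` flag would work the same way.
ROBOTS_META_SNIPPET = """
{% if debug %}
  <meta name="robots" content="none" />
{% endif %}
"""

# On environments where the flag is true (assumed to be Test/UAT) the tag blocks
# indexing; on live it is simply omitted.
print(ROBOTS_META_SNIPPET.strip())
```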
<code>
[start of akvo/rsr/context_processors.py]
1 # -*- coding: utf-8 -*-
2 """
3 Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the
6 Akvo RSR module. For additional details on the GNU license please see
7 < http://www.gnu.org/licenses/agpl.html >.
8 """
9
10 import django
11
12 from django.conf import settings
13 from django.core.exceptions import DisallowedHost
14 from django.contrib.sites.models import get_current_site
15
16
17 def extra_context(request, protocol="http"):
18 """Add information to the request context."""
19 try:
20 current_site = get_current_site(request)
21 except DisallowedHost:
22 current_site = None
23
24 django_version = django.get_version()
25 deploy_tag = getattr(settings, 'DEPLOY_TAG', 'Unknown')
26 deploy_branch = getattr(settings, 'DEPLOY_BRANCH', 'Unknown')
27 deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')
28 deploy_commit_full_id = getattr(settings, 'DEPLOY_COMMIT_FULL_ID', 'Unknown')
29
30 return dict(
31 current_site=current_site,
32 django_version=django_version,
33 deploy_tag=deploy_tag,
34 deploy_branch=deploy_branch,
35 deploy_commit_id=deploy_commit_id,
36 deploy_commit_full_id=deploy_commit_full_id
37 )
38
39
40 def get_current_path_without_lang(request):
41 """Return current path without lang."""
42 path = request.get_full_path()
43 path_bits = path.split('/')
44 path = '/'.join(path_bits[2:])
45 return {'current_path_without_lang': path}
46
47
48 def extra_pages_context(request):
49 """Add context information of an RSR Page."""
50 if request.rsr_page:
51 page = request.rsr_page
52 return {
53 'rsr_page': page,
54 'favicon': page.favicon,
55 'logo': page.logo,
56 'organisation': page.organisation,
57 'return_url': page.return_url,
58 'return_url_text': page.custom_return_url_text,
59 'stylesheet': page.stylesheet,
60 'akvoapp_root_url': '//{}'.format(settings.AKVOAPP_DOMAIN),
61 'domain_url': '//{}'.format(settings.RSR_DOMAIN),
62 'no_facebook': not page.facebook_button,
63 'facebook_app_id': page.facebook_app_id,
64 'no_twitter': not page.twitter_button,
65 }
66
67 return {}
68
[end of akvo/rsr/context_processors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rsr/context_processors.py b/akvo/rsr/context_processors.py
--- a/akvo/rsr/context_processors.py
+++ b/akvo/rsr/context_processors.py
@@ -22,6 +22,7 @@
current_site = None
django_version = django.get_version()
+ debug = getattr(settings, 'DEBUG', False)
deploy_tag = getattr(settings, 'DEPLOY_TAG', 'Unknown')
deploy_branch = getattr(settings, 'DEPLOY_BRANCH', 'Unknown')
deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')
@@ -30,6 +31,7 @@
return dict(
current_site=current_site,
django_version=django_version,
+ debug=debug,
deploy_tag=deploy_tag,
deploy_branch=deploy_branch,
deploy_commit_id=deploy_commit_id,
| {"golden_diff": "diff --git a/akvo/rsr/context_processors.py b/akvo/rsr/context_processors.py\n--- a/akvo/rsr/context_processors.py\n+++ b/akvo/rsr/context_processors.py\n@@ -22,6 +22,7 @@\n current_site = None\n \n django_version = django.get_version()\n+ debug = getattr(settings, 'DEBUG', False)\n deploy_tag = getattr(settings, 'DEPLOY_TAG', 'Unknown')\n deploy_branch = getattr(settings, 'DEPLOY_BRANCH', 'Unknown')\n deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')\n@@ -30,6 +31,7 @@\n return dict(\n current_site=current_site,\n django_version=django_version,\n+ debug=debug,\n deploy_tag=deploy_tag,\n deploy_branch=deploy_branch,\n deploy_commit_id=deploy_commit_id,\n", "issue": "Block Google from Indexing our Development Environments\n## Test plan\n\nGIVEN all dev environments (Test / UAT)\nWHEN looking at the _head_ tag\nTHEN a _meta name=\"robots\" content=\"none\"_ node should be added\n\nGIVEN the live environment\nWHEN looking at the _head_ tag\nTHEN a _meta name=\"robots\" content=\"none\"_ node should not be added\n## Issue description\n\nWe should add a robots.txt to all NON LIVE machines that prevents Google from indexing the site and displaying the content in search results.\n\nThis looks to be pretty simple: https://support.google.com/webmasters/answer/156449?rd=1\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nAkvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please see\n< http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nimport django\n\nfrom django.conf import settings\nfrom django.core.exceptions import DisallowedHost\nfrom django.contrib.sites.models import get_current_site\n\n\ndef extra_context(request, protocol=\"http\"):\n \"\"\"Add information to the request context.\"\"\"\n try:\n current_site = get_current_site(request)\n except DisallowedHost:\n current_site = None\n\n django_version = django.get_version()\n deploy_tag = getattr(settings, 'DEPLOY_TAG', 'Unknown')\n deploy_branch = getattr(settings, 'DEPLOY_BRANCH', 'Unknown')\n deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')\n deploy_commit_full_id = getattr(settings, 'DEPLOY_COMMIT_FULL_ID', 'Unknown')\n\n return dict(\n current_site=current_site,\n django_version=django_version,\n deploy_tag=deploy_tag,\n deploy_branch=deploy_branch,\n deploy_commit_id=deploy_commit_id,\n deploy_commit_full_id=deploy_commit_full_id\n )\n\n\ndef get_current_path_without_lang(request):\n \"\"\"Return current path without lang.\"\"\"\n path = request.get_full_path()\n path_bits = path.split('/')\n path = '/'.join(path_bits[2:])\n return {'current_path_without_lang': path}\n\n\ndef extra_pages_context(request):\n \"\"\"Add context information of an RSR Page.\"\"\"\n if request.rsr_page:\n page = request.rsr_page\n return {\n 'rsr_page': page,\n 'favicon': page.favicon,\n 'logo': page.logo,\n 'organisation': page.organisation,\n 'return_url': page.return_url,\n 'return_url_text': page.custom_return_url_text,\n 'stylesheet': page.stylesheet,\n 'akvoapp_root_url': '//{}'.format(settings.AKVOAPP_DOMAIN),\n 'domain_url': '//{}'.format(settings.RSR_DOMAIN),\n 'no_facebook': not page.facebook_button,\n 'facebook_app_id': page.facebook_app_id,\n 'no_twitter': not page.twitter_button,\n }\n\n return {}\n", "path": "akvo/rsr/context_processors.py"}]} | 1,303 | 188 |
gh_patches_debug_13551 | rasdani/github-patches | git_diff | vyperlang__vyper-3287 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`FunctionNodeVisitor` visits twice `sqrt` body
### Version Information
* vyper Version (output of `vyper --version`): 0.3.8+commit.d76c6ed2
* OS: OSX
* Python Version (output of `python --version`): 3.8.0
### What's your issue about?
The `FunctionNodeVisitor` seems to visit the body of the `sqrt` builtin twice: the first time in the `__init__` function of the `FunctionNodeVisitor`, and the second time after its creation, via a `for` loop over its body.
https://github.com/vyperlang/vyper/blob/187ab0eec8efbe19ed5046e4e947249e9d43141c/vyper/builtins/_utils.py#L28-L30
https://github.com/vyperlang/vyper/blob/187ab0eec8efbe19ed5046e4e947249e9d43141c/vyper/semantics/analysis/local.py#L178-L179
</issue>
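A stripped-down analogy of the double visit described above (entirely synthetic, no Vyper imports): if a visitor's constructor already walks the body, an extra explicit loop repeats every side effect.

```python
class ToyVisitor:
    """Mimics a visitor whose __init__ already visits every statement."""

    def __init__(self, body):
        self.visited = []
        for node in body:          # first pass, done by the constructor
            self.visit(node)

    def visit(self, node):
        self.visited.append(node)


body = ["stmt_1", "stmt_2"]
visitor = ToyVisitor(body)
for node in body:                  # second, redundant pass flagged by the issue
    visitor.visit(node)

print(visitor.visited)             # ['stmt_1', 'stmt_2', 'stmt_1', 'stmt_2']
```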
<code>
[start of vyper/builtins/_utils.py]
1 from vyper.ast import parse_to_ast
2 from vyper.codegen.context import Context
3 from vyper.codegen.global_context import GlobalContext
4 from vyper.codegen.stmt import parse_body
5 from vyper.semantics.analysis.local import FunctionNodeVisitor
6 from vyper.semantics.namespace import Namespace, override_global_namespace
7 from vyper.semantics.types.function import ContractFunctionT, FunctionVisibility, StateMutability
8
9
10 def _strip_source_pos(ir_node):
11 ir_node.source_pos = None
12 for x in ir_node.args:
13 _strip_source_pos(x)
14
15
16 def generate_inline_function(code, variables, variables_2, memory_allocator):
17 ast_code = parse_to_ast(code, add_fn_node="dummy_fn")
18 # Annotate the AST with a temporary old (i.e. typecheck) namespace
19 namespace = Namespace()
20 namespace.update(variables_2)
21 with override_global_namespace(namespace):
22 # Initialise a placeholder `FunctionDef` AST node and corresponding
23 # `ContractFunctionT` type to rely on the annotation visitors in semantics
24 # module.
25 ast_code.body[0]._metadata["type"] = ContractFunctionT(
26 "sqrt_builtin", {}, 0, 0, None, FunctionVisibility.INTERNAL, StateMutability.NONPAYABLE
27 )
28 sv = FunctionNodeVisitor(ast_code, ast_code.body[0], namespace)
29 for n in ast_code.body[0].body:
30 sv.visit(n)
31
32 new_context = Context(
33 vars_=variables, global_ctx=GlobalContext(), memory_allocator=memory_allocator
34 )
35 generated_ir = parse_body(ast_code.body[0].body, new_context)
36 # strip source position info from the generated_ir since
37 # it doesn't make any sense (e.g. the line numbers will start from 0
38 # instead of where we are in the code)
39 # NOTE if we ever use this for inlining user-code, it would make
40 # sense to fix the offsets of the source positions in the generated
41 # code instead of stripping them.
42 _strip_source_pos(generated_ir)
43 return new_context, generated_ir
44
[end of vyper/builtins/_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vyper/builtins/_utils.py b/vyper/builtins/_utils.py
--- a/vyper/builtins/_utils.py
+++ b/vyper/builtins/_utils.py
@@ -25,9 +25,9 @@
ast_code.body[0]._metadata["type"] = ContractFunctionT(
"sqrt_builtin", {}, 0, 0, None, FunctionVisibility.INTERNAL, StateMutability.NONPAYABLE
)
- sv = FunctionNodeVisitor(ast_code, ast_code.body[0], namespace)
- for n in ast_code.body[0].body:
- sv.visit(n)
+ # The FunctionNodeVisitor's constructor performs semantic checks
+ # annotate the AST as side effects.
+ FunctionNodeVisitor(ast_code, ast_code.body[0], namespace)
new_context = Context(
vars_=variables, global_ctx=GlobalContext(), memory_allocator=memory_allocator
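
For context on the fix above: the redundancy the issue describes is easy to reproduce with a toy visitor. The class and body below are illustrative stand-ins, not the real vyper `FunctionNodeVisitor`, but they show why constructing the visitor and then looping over the body analyzes every statement twice:

```python
# Hypothetical visitor that, like the vyper one, walks the body in its constructor.
class BodyVisitor:
    def __init__(self, body):
        self.visited = []
        for stmt in body:      # the constructor already visits every statement
            self.visit(stmt)

    def visit(self, stmt):
        self.visited.append(stmt)


body = ["assign", "if", "return"]
visitor = BodyVisitor(body)
for stmt in body:              # the redundant second pass removed by the fix
    visitor.visit(stmt)

print(visitor.visited)         # every statement appears twice -> duplicated analysis
```
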
| {"golden_diff": "diff --git a/vyper/builtins/_utils.py b/vyper/builtins/_utils.py\n--- a/vyper/builtins/_utils.py\n+++ b/vyper/builtins/_utils.py\n@@ -25,9 +25,9 @@\n ast_code.body[0]._metadata[\"type\"] = ContractFunctionT(\n \"sqrt_builtin\", {}, 0, 0, None, FunctionVisibility.INTERNAL, StateMutability.NONPAYABLE\n )\n- sv = FunctionNodeVisitor(ast_code, ast_code.body[0], namespace)\n- for n in ast_code.body[0].body:\n- sv.visit(n)\n+ # The FunctionNodeVisitor's constructor performs semantic checks\n+ # annotate the AST as side effects.\n+ FunctionNodeVisitor(ast_code, ast_code.body[0], namespace)\n \n new_context = Context(\n vars_=variables, global_ctx=GlobalContext(), memory_allocator=memory_allocator\n", "issue": "`FunctionNodeVisitor` visits twice `sqrt` body\n### Version Information\r\n\r\n* vyper Version (output of `vyper --version`): 0.3.8+commit.d76c6ed2\r\n* OS: OSX\r\n* Python Version (output of `python --version`): 3.8.0\r\n\r\n### What's your issue about?\r\n\r\nThe `FunctionNodeVisitor` seems to visit twice the body of `sqrt` builtin, the first time is in the `__init__` function of the `FunctionNodeVisitor` and the second time after its creation using a `for` loop over its body.\r\nhttps://github.com/vyperlang/vyper/blob/187ab0eec8efbe19ed5046e4e947249e9d43141c/vyper/builtins/_utils.py#L28-L30\r\n\r\nhttps://github.com/vyperlang/vyper/blob/187ab0eec8efbe19ed5046e4e947249e9d43141c/vyper/semantics/analysis/local.py#L178-L179\r\n\n", "before_files": [{"content": "from vyper.ast import parse_to_ast\nfrom vyper.codegen.context import Context\nfrom vyper.codegen.global_context import GlobalContext\nfrom vyper.codegen.stmt import parse_body\nfrom vyper.semantics.analysis.local import FunctionNodeVisitor\nfrom vyper.semantics.namespace import Namespace, override_global_namespace\nfrom vyper.semantics.types.function import ContractFunctionT, FunctionVisibility, StateMutability\n\n\ndef _strip_source_pos(ir_node):\n ir_node.source_pos = None\n for x in ir_node.args:\n _strip_source_pos(x)\n\n\ndef generate_inline_function(code, variables, variables_2, memory_allocator):\n ast_code = parse_to_ast(code, add_fn_node=\"dummy_fn\")\n # Annotate the AST with a temporary old (i.e. typecheck) namespace\n namespace = Namespace()\n namespace.update(variables_2)\n with override_global_namespace(namespace):\n # Initialise a placeholder `FunctionDef` AST node and corresponding\n # `ContractFunctionT` type to rely on the annotation visitors in semantics\n # module.\n ast_code.body[0]._metadata[\"type\"] = ContractFunctionT(\n \"sqrt_builtin\", {}, 0, 0, None, FunctionVisibility.INTERNAL, StateMutability.NONPAYABLE\n )\n sv = FunctionNodeVisitor(ast_code, ast_code.body[0], namespace)\n for n in ast_code.body[0].body:\n sv.visit(n)\n\n new_context = Context(\n vars_=variables, global_ctx=GlobalContext(), memory_allocator=memory_allocator\n )\n generated_ir = parse_body(ast_code.body[0].body, new_context)\n # strip source position info from the generated_ir since\n # it doesn't make any sense (e.g. the line numbers will start from 0\n # instead of where we are in the code)\n # NOTE if we ever use this for inlining user-code, it would make\n # sense to fix the offsets of the source positions in the generated\n # code instead of stripping them.\n _strip_source_pos(generated_ir)\n return new_context, generated_ir\n", "path": "vyper/builtins/_utils.py"}]} | 1,327 | 201 |
gh_patches_debug_22916 | rasdani/github-patches | git_diff | quantumlib__Cirq-1018 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `_phase_by_` magic method to `ControlledGate`
Comes after https://github.com/quantumlib/Cirq/issues/947
The logic is as follows: if the qubit index is 0 (the control), the operation is returned unchanged. If it is larger, we delegate to phasing the sub gate with `cirq.phase_by` and a default result of NotImplemented. If that result is NotImplemented, we return NotImplemented. Otherwise we return a controlled gate with the phased sub gate.
</issue>
<code>
[start of cirq/ops/controlled_gate.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Union, Sequence, Any
16
17 import numpy as np
18
19 from cirq import linalg, protocols
20 from cirq.ops import raw_types
21 from cirq.type_workarounds import NotImplementedType
22
23
24 class ControlledGate(raw_types.Gate):
25 """Augments existing gates with a control qubit."""
26
27 def __init__(self, sub_gate: raw_types.Gate) -> None:
28 """Initializes the controlled gate.
29
30 Args:
31 sub_gate: The gate to add a control qubit to.
32 default_extensions: The extensions method that should be used when
33 determining if the controlled gate supports certain gate
34 features. For example, if this extensions instance is able to
35 cast sub_gate to a ExtrapolatableEffect then the controlled gate
36 can also be cast to a ExtrapolatableEffect. When this value is
37 None, an empty extensions instance is used instead.
38 """
39 self.sub_gate = sub_gate
40
41 def validate_args(self, qubits) -> None:
42 if len(qubits) < 1:
43 raise ValueError('No control qubit specified.')
44 self.sub_gate.validate_args(qubits[1:])
45
46 def __eq__(self, other):
47 if not isinstance(other, type(self)):
48 return NotImplemented
49 return self.sub_gate == other.sub_gate
50
51 def __ne__(self, other):
52 return not self == other
53
54 def __hash__(self):
55 return hash((ControlledGate, self.sub_gate))
56
57 def _apply_unitary_to_tensor_(self,
58 target_tensor: np.ndarray,
59 available_buffer: np.ndarray,
60 axes: Sequence[int],
61 ) -> np.ndarray:
62 control = axes[0]
63 rest = axes[1:]
64 active = linalg.slice_for_qubits_equal_to([control], 1)
65 sub_axes = [r - int(r > control) for r in rest]
66 target_view = target_tensor[active]
67 buffer_view = available_buffer[active]
68 result = protocols.apply_unitary_to_tensor(
69 self.sub_gate,
70 target_view,
71 buffer_view,
72 sub_axes,
73 default=NotImplemented)
74
75 if result is NotImplemented:
76 return NotImplemented
77
78 if result is target_view:
79 return target_tensor
80
81 if result is buffer_view:
82 inactive = linalg.slice_for_qubits_equal_to([control], 0)
83 available_buffer[inactive] = target_tensor[inactive]
84 return available_buffer
85
86 # HACK: assume they didn't somehow escape the slice view and edit the
87 # rest of target_tensor.
88 target_tensor[active] = result
89 return target_tensor
90
91 def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:
92 sub_matrix = protocols.unitary(self.sub_gate, None)
93 if sub_matrix is None:
94 return NotImplemented
95 return linalg.block_diag(np.eye(sub_matrix.shape[0]), sub_matrix)
96
97 def __pow__(self, exponent: Any) -> 'ControlledGate':
98 new_sub_gate = protocols.pow(self.sub_gate,
99 exponent,
100 NotImplemented)
101 if new_sub_gate is NotImplemented:
102 return NotImplemented
103 return ControlledGate(new_sub_gate)
104
105 def _is_parameterized_(self):
106 return protocols.is_parameterized(self.sub_gate)
107
108 def _resolve_parameters_(self, param_resolver):
109 new_sub_gate = protocols.resolve_parameters(self.sub_gate,
110 param_resolver)
111 return ControlledGate(new_sub_gate)
112
113 def _trace_distance_bound_(self):
114 return protocols.trace_distance_bound(self.sub_gate)
115
116 def _circuit_diagram_info_(self,
117 args: protocols.CircuitDiagramInfoArgs
118 ) -> protocols.CircuitDiagramInfo:
119 sub_info = protocols.circuit_diagram_info(self.sub_gate, args, None)
120 if sub_info is None:
121 return NotImplemented
122 return protocols.CircuitDiagramInfo(
123 wire_symbols=('@',) + sub_info.wire_symbols,
124 exponent=sub_info.exponent)
125
126 def __str__(self):
127 return 'C' + str(self.sub_gate)
128
129 def __repr__(self):
130 return 'cirq.ControlledGate(sub_gate={!r})'.format(self.sub_gate)
131
[end of cirq/ops/controlled_gate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/ops/controlled_gate.py b/cirq/ops/controlled_gate.py
--- a/cirq/ops/controlled_gate.py
+++ b/cirq/ops/controlled_gate.py
@@ -29,12 +29,6 @@
Args:
sub_gate: The gate to add a control qubit to.
- default_extensions: The extensions method that should be used when
- determining if the controlled gate supports certain gate
- features. For example, if this extensions instance is able to
- cast sub_gate to a ExtrapolatableEffect then the controlled gate
- can also be cast to a ExtrapolatableEffect. When this value is
- None, an empty extensions instance is used instead.
"""
self.sub_gate = sub_gate
@@ -102,6 +96,15 @@
return NotImplemented
return ControlledGate(new_sub_gate)
+ def _phase_by_(self, phase_turns: float, qubit_index: int):
+ if qubit_index == 0:
+ return self
+ phased_gate = protocols.phase_by(
+ self.sub_gate, phase_turns, qubit_index-1, None)
+ if phased_gate is None:
+ return NotImplemented
+ return ControlledGate(phased_gate)
+
def _is_parameterized_(self):
return protocols.is_parameterized(self.sub_gate)
| {"golden_diff": "diff --git a/cirq/ops/controlled_gate.py b/cirq/ops/controlled_gate.py\n--- a/cirq/ops/controlled_gate.py\n+++ b/cirq/ops/controlled_gate.py\n@@ -29,12 +29,6 @@\n \n Args:\n sub_gate: The gate to add a control qubit to.\n- default_extensions: The extensions method that should be used when\n- determining if the controlled gate supports certain gate\n- features. For example, if this extensions instance is able to\n- cast sub_gate to a ExtrapolatableEffect then the controlled gate\n- can also be cast to a ExtrapolatableEffect. When this value is\n- None, an empty extensions instance is used instead.\n \"\"\"\n self.sub_gate = sub_gate\n \n@@ -102,6 +96,15 @@\n return NotImplemented\n return ControlledGate(new_sub_gate)\n \n+ def _phase_by_(self, phase_turns: float, qubit_index: int):\n+ if qubit_index == 0:\n+ return self\n+ phased_gate = protocols.phase_by(\n+ self.sub_gate, phase_turns, qubit_index-1, None)\n+ if phased_gate is None:\n+ return NotImplemented\n+ return ControlledGate(phased_gate)\n+\n def _is_parameterized_(self):\n return protocols.is_parameterized(self.sub_gate)\n", "issue": "Add `_phase_by_` magic method to `ControlledGate`\nComes after https://github.com/quantumlib/Cirq/issues/947\r\n\r\nThe logic is as follows: if the qubit index is 0 (the control), the operation is returned unchanged. If it is larger then we delegate to phasing the sub gate with `cirq.phase_by` and a default result of NotImplemented. If it's NotImplemented, we return NotImplemented. Otherwise we return a controlled gate with the phased sub gate.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Union, Sequence, Any\n\nimport numpy as np\n\nfrom cirq import linalg, protocols\nfrom cirq.ops import raw_types\nfrom cirq.type_workarounds import NotImplementedType\n\n\nclass ControlledGate(raw_types.Gate):\n \"\"\"Augments existing gates with a control qubit.\"\"\"\n\n def __init__(self, sub_gate: raw_types.Gate) -> None:\n \"\"\"Initializes the controlled gate.\n\n Args:\n sub_gate: The gate to add a control qubit to.\n default_extensions: The extensions method that should be used when\n determining if the controlled gate supports certain gate\n features. For example, if this extensions instance is able to\n cast sub_gate to a ExtrapolatableEffect then the controlled gate\n can also be cast to a ExtrapolatableEffect. 
When this value is\n None, an empty extensions instance is used instead.\n \"\"\"\n self.sub_gate = sub_gate\n\n def validate_args(self, qubits) -> None:\n if len(qubits) < 1:\n raise ValueError('No control qubit specified.')\n self.sub_gate.validate_args(qubits[1:])\n\n def __eq__(self, other):\n if not isinstance(other, type(self)):\n return NotImplemented\n return self.sub_gate == other.sub_gate\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash((ControlledGate, self.sub_gate))\n\n def _apply_unitary_to_tensor_(self,\n target_tensor: np.ndarray,\n available_buffer: np.ndarray,\n axes: Sequence[int],\n ) -> np.ndarray:\n control = axes[0]\n rest = axes[1:]\n active = linalg.slice_for_qubits_equal_to([control], 1)\n sub_axes = [r - int(r > control) for r in rest]\n target_view = target_tensor[active]\n buffer_view = available_buffer[active]\n result = protocols.apply_unitary_to_tensor(\n self.sub_gate,\n target_view,\n buffer_view,\n sub_axes,\n default=NotImplemented)\n\n if result is NotImplemented:\n return NotImplemented\n\n if result is target_view:\n return target_tensor\n\n if result is buffer_view:\n inactive = linalg.slice_for_qubits_equal_to([control], 0)\n available_buffer[inactive] = target_tensor[inactive]\n return available_buffer\n\n # HACK: assume they didn't somehow escape the slice view and edit the\n # rest of target_tensor.\n target_tensor[active] = result\n return target_tensor\n\n def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:\n sub_matrix = protocols.unitary(self.sub_gate, None)\n if sub_matrix is None:\n return NotImplemented\n return linalg.block_diag(np.eye(sub_matrix.shape[0]), sub_matrix)\n\n def __pow__(self, exponent: Any) -> 'ControlledGate':\n new_sub_gate = protocols.pow(self.sub_gate,\n exponent,\n NotImplemented)\n if new_sub_gate is NotImplemented:\n return NotImplemented\n return ControlledGate(new_sub_gate)\n\n def _is_parameterized_(self):\n return protocols.is_parameterized(self.sub_gate)\n\n def _resolve_parameters_(self, param_resolver):\n new_sub_gate = protocols.resolve_parameters(self.sub_gate,\n param_resolver)\n return ControlledGate(new_sub_gate)\n\n def _trace_distance_bound_(self):\n return protocols.trace_distance_bound(self.sub_gate)\n\n def _circuit_diagram_info_(self,\n args: protocols.CircuitDiagramInfoArgs\n ) -> protocols.CircuitDiagramInfo:\n sub_info = protocols.circuit_diagram_info(self.sub_gate, args, None)\n if sub_info is None:\n return NotImplemented\n return protocols.CircuitDiagramInfo(\n wire_symbols=('@',) + sub_info.wire_symbols,\n exponent=sub_info.exponent)\n\n def __str__(self):\n return 'C' + str(self.sub_gate)\n\n def __repr__(self):\n return 'cirq.ControlledGate(sub_gate={!r})'.format(self.sub_gate)\n", "path": "cirq/ops/controlled_gate.py"}]} | 1,946 | 304 |
gh_patches_debug_4315 | rasdani/github-patches | git_diff | frappe__frappe-21985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create block in workspace
### Information about bug
### App Versions
```
{
"erpnext": "14.27.2",
"frappe": "14.39.0",
"hrms": "14.4.3",
"india_compliance": "14.10.1",
"payments": "0.0.1"
}
```
### Route
```
Workspaces/Home
```
### Traceback
```
Traceback (most recent call last):
File "apps/frappe/frappe/app.py", line 66, in application
response = frappe.api.handle()
File "apps/frappe/frappe/api.py", line 54, in handle
return frappe.handler.handle()
File "apps/frappe/frappe/handler.py", line 47, in handle
data = execute_cmd(cmd)
File "apps/frappe/frappe/handler.py", line 85, in execute_cmd
return frappe.call(method, **frappe.form_dict)
File "apps/frappe/frappe/__init__.py", line 1608, in call
return fn(*args, **newargs)
File "apps/frappe/frappe/desk/search.py", line 35, in search_link
search_widget(
File "apps/frappe/frappe/desk/search.py", line 106, in search_widget
raise e
File "apps/frappe/frappe/desk/search.py", line 83, in search_widget
frappe.response["values"] = frappe.call(
File "apps/frappe/frappe/__init__.py", line 1608, in call
return fn(*args, **newargs)
File "apps/frappe/frappe/desk/doctype/custom_html_block/custom_html_block.py", line 18, in get_custom_blocks_for_user
condition_query = frappe.qb.get_query(customHTMLBlock)
AttributeError: type object 'MariaDB' has no attribute 'get_query'
```
### Request Data
```
{
"type": "POST",
"args": {
"txt": "",
"doctype": "Custom HTML Block",
"reference_doctype": "",
"query": "frappe.desk.doctype.custom_html_block.custom_html_block.get_custom_blocks_for_user"
},
"headers": {},
"error_handlers": {},
"url": "/api/method/frappe.desk.search.search_link"
}
```
### Response Data
```
{
"exception": "AttributeError: type object 'MariaDB' has no attribute 'get_query'"
}
```
### Module
accounts, other
### Version
{
"erpnext": "14.27.2",
"frappe": "14.39.0",
"hrms": "14.4.3",
"india_compliance": "14.10.1",
"payments": "0.0.1"
}
### Installation method
FrappeCloud
### Relevant log output / Stack trace / Full Error Message.
_No response_
</issue>
<code>
[start of frappe/desk/doctype/custom_html_block/custom_html_block.py]
1 # Copyright (c) 2023, Frappe Technologies and contributors
2 # For license information, please see license.txt
3
4 import frappe
5 from frappe.model.document import Document
6 from frappe.query_builder.utils import DocType
7
8
9 class CustomHTMLBlock(Document):
10 # begin: auto-generated types
11 # This code is auto-generated. Do not modify anything in this block.
12
13 from typing import TYPE_CHECKING
14
15 if TYPE_CHECKING:
16 from frappe.core.doctype.has_role.has_role import HasRole
17 from frappe.types import DF
18
19 html: DF.Code | None
20 private: DF.Check
21 roles: DF.Table[HasRole]
22 script: DF.Code | None
23 style: DF.Code | None
24 # end: auto-generated types
25 pass
26
27
28 @frappe.whitelist()
29 def get_custom_blocks_for_user(doctype, txt, searchfield, start, page_len, filters):
30 # return logged in users private blocks and all public blocks
31 customHTMLBlock = DocType("Custom HTML Block")
32
33 condition_query = frappe.qb.get_query(customHTMLBlock)
34
35 return (
36 condition_query.select(customHTMLBlock.name).where(
37 (customHTMLBlock.private == 0)
38 | ((customHTMLBlock.owner == frappe.session.user) & (customHTMLBlock.private == 1))
39 )
40 ).run()
41
[end of frappe/desk/doctype/custom_html_block/custom_html_block.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/frappe/desk/doctype/custom_html_block/custom_html_block.py b/frappe/desk/doctype/custom_html_block/custom_html_block.py
--- a/frappe/desk/doctype/custom_html_block/custom_html_block.py
+++ b/frappe/desk/doctype/custom_html_block/custom_html_block.py
@@ -30,7 +30,7 @@
# return logged in users private blocks and all public blocks
customHTMLBlock = DocType("Custom HTML Block")
- condition_query = frappe.qb.get_query(customHTMLBlock)
+ condition_query = frappe.qb.from_(customHTMLBlock)
return (
condition_query.select(customHTMLBlock.name).where(
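
The fix replaces `frappe.qb.get_query` (which the traceback shows does not exist on this Frappe version's `MariaDB` query-builder class) with `frappe.qb.from_`. Frappe's `qb` is built on the pypika query builder, so the same select/where chain can be sketched with pypika on its own; the table name and user value below are illustrative assumptions, not Frappe internals:

```python
from pypika import Query, Table

block = Table("tabCustom HTML Block")   # hypothetical table name
user = "user@example.com"               # hypothetical session user

query = (
    Query.from_(block)
    .select(block.name)
    .where((block.private == 0) | ((block.owner == user) & (block.private == 1)))
)
print(query.get_sql())
```
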
| {"golden_diff": "diff --git a/frappe/desk/doctype/custom_html_block/custom_html_block.py b/frappe/desk/doctype/custom_html_block/custom_html_block.py\n--- a/frappe/desk/doctype/custom_html_block/custom_html_block.py\n+++ b/frappe/desk/doctype/custom_html_block/custom_html_block.py\n@@ -30,7 +30,7 @@\n \t# return logged in users private blocks and all public blocks\n \tcustomHTMLBlock = DocType(\"Custom HTML Block\")\n \n-\tcondition_query = frappe.qb.get_query(customHTMLBlock)\n+\tcondition_query = frappe.qb.from_(customHTMLBlock)\n \n \treturn (\n \t\tcondition_query.select(customHTMLBlock.name).where(\n", "issue": "Create block in workspace\n### Information about bug\n\n### App Versions\r\n```\r\n{\r\n\t\"erpnext\": \"14.27.2\",\r\n\t\"frappe\": \"14.39.0\",\r\n\t\"hrms\": \"14.4.3\",\r\n\t\"india_compliance\": \"14.10.1\",\r\n\t\"payments\": \"0.0.1\"\r\n}\r\n```\r\n### Route\r\n```\r\nWorkspaces/Home\r\n```\r\n### Traceback\r\n```\r\nTraceback (most recent call last):\r\n File \"apps/frappe/frappe/app.py\", line 66, in application\r\n response = frappe.api.handle()\r\n File \"apps/frappe/frappe/api.py\", line 54, in handle\r\n return frappe.handler.handle()\r\n File \"apps/frappe/frappe/handler.py\", line 47, in handle\r\n data = execute_cmd(cmd)\r\n File \"apps/frappe/frappe/handler.py\", line 85, in execute_cmd\r\n return frappe.call(method, **frappe.form_dict)\r\n File \"apps/frappe/frappe/__init__.py\", line 1608, in call\r\n return fn(*args, **newargs)\r\n File \"apps/frappe/frappe/desk/search.py\", line 35, in search_link\r\n search_widget(\r\n File \"apps/frappe/frappe/desk/search.py\", line 106, in search_widget\r\n raise e\r\n File \"apps/frappe/frappe/desk/search.py\", line 83, in search_widget\r\n frappe.response[\"values\"] = frappe.call(\r\n File \"apps/frappe/frappe/__init__.py\", line 1608, in call\r\n return fn(*args, **newargs)\r\n File \"apps/frappe/frappe/desk/doctype/custom_html_block/custom_html_block.py\", line 18, in get_custom_blocks_for_user\r\n condition_query = frappe.qb.get_query(customHTMLBlock)\r\nAttributeError: type object 'MariaDB' has no attribute 'get_query'\r\n\r\n```\r\n### Request Data\r\n```\r\n{\r\n\t\"type\": \"POST\",\r\n\t\"args\": {\r\n\t\t\"txt\": \"\",\r\n\t\t\"doctype\": \"Custom HTML Block\",\r\n\t\t\"reference_doctype\": \"\",\r\n\t\t\"query\": \"frappe.desk.doctype.custom_html_block.custom_html_block.get_custom_blocks_for_user\"\r\n\t},\r\n\t\"headers\": {},\r\n\t\"error_handlers\": {},\r\n\t\"url\": \"/api/method/frappe.desk.search.search_link\"\r\n}\r\n```\r\n### Response Data\r\n```\r\n{\r\n\t\"exception\": \"AttributeError: type object 'MariaDB' has no attribute 'get_query'\"\r\n}\r\n```\n\n### Module\n\naccounts, other\n\n### Version\n\n{\r\n\t\"erpnext\": \"14.27.2\",\r\n\t\"frappe\": \"14.39.0\",\r\n\t\"hrms\": \"14.4.3\",\r\n\t\"india_compliance\": \"14.10.1\",\r\n\t\"payments\": \"0.0.1\"\r\n}\n\n### Installation method\n\nFrappeCloud\n\n### Relevant log output / Stack trace / Full Error Message.\n\n_No response_\n", "before_files": [{"content": "# Copyright (c) 2023, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nimport frappe\nfrom frappe.model.document import Document\nfrom frappe.query_builder.utils import DocType\n\n\nclass CustomHTMLBlock(Document):\n\t# begin: auto-generated types\n\t# This code is auto-generated. 
Do not modify anything in this block.\n\n\tfrom typing import TYPE_CHECKING\n\n\tif TYPE_CHECKING:\n\t\tfrom frappe.core.doctype.has_role.has_role import HasRole\n\t\tfrom frappe.types import DF\n\n\t\thtml: DF.Code | None\n\t\tprivate: DF.Check\n\t\troles: DF.Table[HasRole]\n\t\tscript: DF.Code | None\n\t\tstyle: DF.Code | None\n\t# end: auto-generated types\n\tpass\n\n\[email protected]()\ndef get_custom_blocks_for_user(doctype, txt, searchfield, start, page_len, filters):\n\t# return logged in users private blocks and all public blocks\n\tcustomHTMLBlock = DocType(\"Custom HTML Block\")\n\n\tcondition_query = frappe.qb.get_query(customHTMLBlock)\n\n\treturn (\n\t\tcondition_query.select(customHTMLBlock.name).where(\n\t\t\t(customHTMLBlock.private == 0)\n\t\t\t| ((customHTMLBlock.owner == frappe.session.user) & (customHTMLBlock.private == 1))\n\t\t)\n\t).run()\n", "path": "frappe/desk/doctype/custom_html_block/custom_html_block.py"}]} | 1,609 | 151 |
gh_patches_debug_9470 | rasdani/github-patches | git_diff | nextcloud__appstore-372 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
schema does not allow digits in app ids
Apparently app ids like ``twofactor_u2f`` are not allowed by the info.xml schema. Could we change that regex to allow digits too or are there any strong arguments against that?
ref https://github.com/nextcloud/appstore/blob/e4567ce707b332ca14eb35e322bff5ec4397191b/nextcloudappstore/core/api/v1/release/info.xsd#L245-L250
</issue>
<code>
[start of nextcloudappstore/core/api/v1/urls.py]
1 from django.conf.urls import url
2 from django.views.decorators.http import etag
3 from nextcloudappstore.core.api.v1.views import AppView, AppReleaseView, \
4 CategoryView, SessionObtainAuthToken, RegenerateAuthToken, AppRatingView, \
5 AppRegisterView
6 from nextcloudappstore.core.caching import app_ratings_etag, categories_etag, \
7 apps_etag
8 from nextcloudappstore.core.versioning import SEMVER_REGEX
9
10 urlpatterns = [
11 url(r'^platform/(?P<version>\d+\.\d+\.\d+)/apps\.json$',
12 etag(apps_etag)(AppView.as_view()), name='app'),
13 url(r'^apps/releases/?$', AppReleaseView.as_view(),
14 name='app-release-create'),
15 url(r'^apps/?$', AppRegisterView.as_view(), name='app-register'),
16 url(r'^apps/(?P<pk>[a-z_]+)/?$', AppView.as_view(), name='app-delete'),
17 url(r'^ratings.json$',
18 etag(app_ratings_etag)(AppRatingView.as_view()),
19 name='app-ratings'),
20 url(r'^apps/(?P<app>[a-z_]+)/releases/(?:(?P<nightly>nightly)/)?'
21 r'(?P<version>' + SEMVER_REGEX + ')/?$',
22 AppReleaseView.as_view(), name='app-release-delete'),
23 url(r'^token/?$', SessionObtainAuthToken.as_view(), name='user-token'),
24 url(r'^token/new/?$', RegenerateAuthToken.as_view(),
25 name='user-token-new'),
26 url(r'^categories.json$',
27 etag(categories_etag)(CategoryView.as_view()), name='category'),
28 ]
29
[end of nextcloudappstore/core/api/v1/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nextcloudappstore/core/api/v1/urls.py b/nextcloudappstore/core/api/v1/urls.py
--- a/nextcloudappstore/core/api/v1/urls.py
+++ b/nextcloudappstore/core/api/v1/urls.py
@@ -13,7 +13,7 @@
url(r'^apps/releases/?$', AppReleaseView.as_view(),
name='app-release-create'),
url(r'^apps/?$', AppRegisterView.as_view(), name='app-register'),
- url(r'^apps/(?P<pk>[a-z_]+)/?$', AppView.as_view(), name='app-delete'),
+ url(r'^apps/(?P<pk>[a-z0-9_]+)/?$', AppView.as_view(), name='app-delete'),
url(r'^ratings.json$',
etag(app_ratings_etag)(AppRatingView.as_view()),
name='app-ratings'),
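
The whole fix is widening the character class in the URL pattern from `[a-z_]+` to `[a-z0-9_]+`. A quick check with Python's `re` module shows why the original pattern rejected ids such as `twofactor_u2f`:

```python
import re

app_id = "twofactor_u2f"
print(re.fullmatch(r"[a-z_]+", app_id))      # None: the digit '2' breaks the match
print(re.fullmatch(r"[a-z0-9_]+", app_id))   # <re.Match object ...>: digits allowed
```
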
| {"golden_diff": "diff --git a/nextcloudappstore/core/api/v1/urls.py b/nextcloudappstore/core/api/v1/urls.py\n--- a/nextcloudappstore/core/api/v1/urls.py\n+++ b/nextcloudappstore/core/api/v1/urls.py\n@@ -13,7 +13,7 @@\n url(r'^apps/releases/?$', AppReleaseView.as_view(),\n name='app-release-create'),\n url(r'^apps/?$', AppRegisterView.as_view(), name='app-register'),\n- url(r'^apps/(?P<pk>[a-z_]+)/?$', AppView.as_view(), name='app-delete'),\n+ url(r'^apps/(?P<pk>[a-z0-9_]+)/?$', AppView.as_view(), name='app-delete'),\n url(r'^ratings.json$',\n etag(app_ratings_etag)(AppRatingView.as_view()),\n name='app-ratings'),\n", "issue": "schema does not allow digits in app ids\nApparently app ids like ``twofactor_u2f`` are not allowed by the info.xml schema. Could we change that regex to allow digits too or are there any strong arguments against that?\r\n\r\nref https://github.com/nextcloud/appstore/blob/e4567ce707b332ca14eb35e322bff5ec4397191b/nextcloudappstore/core/api/v1/release/info.xsd#L245-L250\n", "before_files": [{"content": "from django.conf.urls import url\nfrom django.views.decorators.http import etag\nfrom nextcloudappstore.core.api.v1.views import AppView, AppReleaseView, \\\n CategoryView, SessionObtainAuthToken, RegenerateAuthToken, AppRatingView, \\\n AppRegisterView\nfrom nextcloudappstore.core.caching import app_ratings_etag, categories_etag, \\\n apps_etag\nfrom nextcloudappstore.core.versioning import SEMVER_REGEX\n\nurlpatterns = [\n url(r'^platform/(?P<version>\\d+\\.\\d+\\.\\d+)/apps\\.json$',\n etag(apps_etag)(AppView.as_view()), name='app'),\n url(r'^apps/releases/?$', AppReleaseView.as_view(),\n name='app-release-create'),\n url(r'^apps/?$', AppRegisterView.as_view(), name='app-register'),\n url(r'^apps/(?P<pk>[a-z_]+)/?$', AppView.as_view(), name='app-delete'),\n url(r'^ratings.json$',\n etag(app_ratings_etag)(AppRatingView.as_view()),\n name='app-ratings'),\n url(r'^apps/(?P<app>[a-z_]+)/releases/(?:(?P<nightly>nightly)/)?'\n r'(?P<version>' + SEMVER_REGEX + ')/?$',\n AppReleaseView.as_view(), name='app-release-delete'),\n url(r'^token/?$', SessionObtainAuthToken.as_view(), name='user-token'),\n url(r'^token/new/?$', RegenerateAuthToken.as_view(),\n name='user-token-new'),\n url(r'^categories.json$',\n etag(categories_etag)(CategoryView.as_view()), name='category'),\n]\n", "path": "nextcloudappstore/core/api/v1/urls.py"}]} | 1,071 | 198 |
gh_patches_debug_6773 | rasdani/github-patches | git_diff | spacetelescope__jwql-517 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DEPENDENCY_LINKS in setup.py causing bug in logging_functions
With the introduction of the `DEPENDENCY_LINKS` variable in `setup.py`, the logging of monitors is now failing to log the versions of the dependencies listed, since `REQUIRES` is no longer immediately followed by `setup()`:
```python
for i, line in enumerate(data):
if 'REQUIRES = [' in line:
begin = i + 1
elif 'setup(' in line:
end = i - 2
```
The solution is simple: move `DEPENDENCY_LINKS` so that it is defined before `REQUIRES`.
</issue>
<code>
[start of setup.py]
1 import numpy as np
2 from setuptools import setup
3 from setuptools import find_packages
4
5 VERSION = '0.22.0'
6
7 AUTHORS = 'Matthew Bourque, Misty Cracraft, Joe Filippazzo, Bryan Hilbert, '
8 AUTHORS += 'Graham Kanarek, Catherine Martlin, Johannes Sahlmann, Ben Sunnquist'
9
10 DESCRIPTION = 'The James Webb Space Telescope Quicklook Project'
11
12 REQUIRES = [
13 'asdf>=2.3.3',
14 'astropy>=3.2.1',
15 'astroquery>=0.3.9',
16 'authlib',
17 'bokeh>=1.0',
18 'codecov',
19 'django>=2.0',
20 'flake8',
21 'inflection',
22 'ipython',
23 'jinja2',
24 'jsonschema==2.6.0',
25 'jwedb>=0.0.3',
26 'matplotlib',
27 'numpy',
28 'numpydoc',
29 'pandas',
30 'psycopg2',
31 'pysiaf',
32 'pytest',
33 'pytest-cov',
34 'scipy',
35 'sphinx',
36 'sqlalchemy',
37 'stsci_rtd_theme',
38 'twine'
39 ]
40
41 DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst#0.13.0']
42
43 setup(
44 name='jwql',
45 version=VERSION,
46 description=DESCRIPTION,
47 url='https://github.com/spacetelescope/jwql.git',
48 author=AUTHORS,
49 author_email='[email protected]',
50 license='BSD',
51 keywords=['astronomy', 'python'],
52 classifiers=['Programming Language :: Python'],
53 packages=find_packages(),
54 install_requires=REQUIRES,
55 dependency_links=DEPENDENCY_LINKS,
56 include_package_data=True,
57 include_dirs=[np.get_include()],
58 )
59
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,6 +9,7 @@
DESCRIPTION = 'The James Webb Space Telescope Quicklook Project'
+DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst#0.13.0']
REQUIRES = [
'asdf>=2.3.3',
'astropy>=3.2.1',
@@ -38,8 +39,6 @@
'twine'
]
-DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst#0.13.0']
-
setup(
name='jwql',
version=VERSION,
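
To see why moving the assignment is enough, the bookkeeping loop quoted in the issue can be reproduced on two miniature `setup.py` layouts. The helper below is a simplified stand-in for the real `logging_functions` code (only the `begin`/`end` scan is reproduced):

```python
def find_bounds(lines):
    # Mirrors the loop quoted in the issue: mark the line after 'REQUIRES = ['
    # and the line two above 'setup('.
    begin = end = None
    for i, line in enumerate(lines):
        if 'REQUIRES = [' in line:
            begin = i + 1
        elif 'setup(' in line:
            end = i - 2
    return begin, end

broken = ["REQUIRES = [", "    'numpy',", "]", "",
          "DEPENDENCY_LINKS = ['git+https://example.org/pkg']", "", "setup("]
fixed = ["DEPENDENCY_LINKS = ['git+https://example.org/pkg']", "",
         "REQUIRES = [", "    'numpy',", "]", "", "setup("]

_, end = find_bounds(broken)
print(broken[end])   # the DEPENDENCY_LINKS line: 'end' no longer marks the requirements list
_, end = find_bounds(fixed)
print(fixed[end])    # ']' again: the boundary sits right at the end of the list
```
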
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,6 +9,7 @@\n \n DESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n \n+DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst#0.13.0']\n REQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n@@ -38,8 +39,6 @@\n 'twine'\n ]\n \n-DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst#0.13.0']\n-\n setup(\n name='jwql',\n version=VERSION,\n", "issue": "DEPENDENCY_LINKS in setup.py causing bug in logging_functions\nWith the introduction of the `DEPENDENCY_LINKS` variable in `setup.py`, the logging of monitors is now failing to log the versions of depenencies listed, since the `REQUIRES` is not immediately followed by `setup()`:\r\n\r\n```python\r\nfor i, line in enumerate(data):\r\n if 'REQUIRES = [' in line:\r\n begin = i + 1\r\n elif 'setup(' in line:\r\n end = i - 2\r\n```\r\n\r\nThe solution is so simple move `DEPENDENCY _LINKS` to be defined before `REQUIRES`.\n", "before_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.22.0'\n\nAUTHORS = 'Matthew Bourque, Misty Cracraft, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Catherine Martlin, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh>=1.0',\n 'codecov',\n 'django>=2.0',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema==2.6.0',\n 'jwedb>=0.0.3',\n 'matplotlib',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine'\n]\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst#0.13.0']\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}]} | 1,175 | 160 |
gh_patches_debug_35261 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-3264 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
First try at building a parser using data from Quebec
Hopefully this will show up on the map somehow. I look forward to seeing what changes will be made in order to make this parser functional.
</issue>
<code>
[start of parsers/CA_QC.py]
1 import requests
2 import logging
3 from pprint import pprint
4 # The arrow library is used to handle datetimes
5 import arrow
6
7 PRODUCTION_URL = "https://www.hydroquebec.com/data/documents-donnees/donnees-ouvertes/json/production.json"
8 CONSUMPTION_URL = "https://www.hydroquebec.com/data/documents-donnees/donnees-ouvertes/json/demande.json"
9 # Reluctant to call it 'timezone', since we are importing 'timezone' from datetime
10 timezone_id = 'America/Montreal'
11
12 def fetch_production(
13 zone_key="CA-QC",
14 session=None,
15 target_datetime=None,
16 logger=logging.getLogger(__name__),
17 ) -> dict:
18 """Requests the last known production mix (in MW) of a given region.
19 In this particular case, translated mapping of JSON keys are also required"""
20
21 def if_exists(elem: dict, etype: str):
22
23 english = {
24 "hydraulique": "hydro",
25 "thermique": "thermal",
26 "solaire": "solar",
27 "eolien": "wind",
28 "autres": "unknown",
29 "valeurs": "values",
30 }
31 english = {v: k for k, v in english.items()}
32 try:
33 return elem["valeurs"][english[etype]]
34 except KeyError:
35 return 0.0
36
37 data = _fetch_quebec_production()
38 for elem in reversed(data["details"]):
39 if elem["valeurs"]["total"] != 0:
40
41 return {
42 "zoneKey": zone_key,
43 "datetime": arrow.get(elem["date"], tzinfo=timezone_id).datetime,
44 "production": {
45 "biomass": 0.0,
46 "coal": 0.0,
47
48 # per https://github.com/tmrowco/electricitymap-contrib/issues/3218 , thermal generation
49 # is at Bécancour gas turbine. It is reported with a delay, and data source returning 0.0
50 # can indicate either no generation or not-yet-reported generation.
51 # To handle this, if reported value is 0.0, overwrite it to None, so that backend can know
52 # this is not entirely reliable and might be updated later.
53 "gas": if_exists(elem, "thermal") or None,
54
55 "hydro": if_exists(elem, "hydro"),
56 "nuclear": 0.0,
57 "oil": 0.0,
58 "solar": if_exists(elem, "solar"),
59 "wind": if_exists(elem, "wind"),
60 "geothermal": 0.0,
61 "unknown": if_exists(elem, "unknown"),
62 },
63 "source": "hydroquebec.com",
64 }
65
66
67 def fetch_consumption(zone_key="CA-QC", session=None, target_datetime=None, logger=None):
68 data = _fetch_quebec_consumption()
69 for elem in reversed(data["details"]):
70 if "demandeTotal" in elem["valeurs"]:
71 return {
72 "zoneKey": zone_key,
73 "datetime": arrow.get(elem["date"], tzinfo=timezone_id).datetime,
74 "consumption": elem["valeurs"]["demandeTotal"],
75 "source": "hydroquebec.com",
76 }
77
78
79 def _fetch_quebec_production(logger=logging.getLogger(__name__)) -> str:
80 response = requests.get(PRODUCTION_URL)
81
82 if not response.ok:
83 logger.info('CA-QC: failed getting requested production data from hydroquebec - URL {}'.format(PRODUCTION_URL))
84 return response.json()
85
86
87 def _fetch_quebec_consumption(logger=logging.getLogger(__name__)) -> str:
88 response = requests.get(CONSUMPTION_URL)
89
90 if not response.ok:
91 logger.info('CA-QC: failed getting requested consumption data from hydroquebec - URL {}'.format(CONSUMPTION_URL))
92 return response.json()
93
94
95 if __name__ == '__main__':
96 """Main method, never used by the Electricity Map backend, but handy for testing."""
97
98 test_logger = logging.getLogger()
99
100 print('fetch_production() ->')
101 pprint(fetch_production(logger=test_logger))
102
103 print('fetch_consumption() ->')
104 pprint(fetch_consumption(logger=test_logger))
105
[end of parsers/CA_QC.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsers/CA_QC.py b/parsers/CA_QC.py
--- a/parsers/CA_QC.py
+++ b/parsers/CA_QC.py
@@ -25,8 +25,9 @@
"thermique": "thermal",
"solaire": "solar",
"eolien": "wind",
- "autres": "unknown",
- "valeurs": "values",
+ # autres is all renewable, and mostly biomass. See Github #3218
+ "autres": "biomass",
+ "valeurs": "values"
}
english = {v: k for k, v in english.items()}
try:
@@ -42,21 +43,18 @@
"zoneKey": zone_key,
"datetime": arrow.get(elem["date"], tzinfo=timezone_id).datetime,
"production": {
- "biomass": 0.0,
+ "biomass": if_exists(elem, "biomass"),
"coal": 0.0,
-
- # per https://github.com/tmrowco/electricitymap-contrib/issues/3218 , thermal generation
- # is at Bécancour gas turbine. It is reported with a delay, and data source returning 0.0
- # can indicate either no generation or not-yet-reported generation.
- # To handle this, if reported value is 0.0, overwrite it to None, so that backend can know
- # this is not entirely reliable and might be updated later.
- "gas": if_exists(elem, "thermal") or None,
-
"hydro": if_exists(elem, "hydro"),
"nuclear": 0.0,
"oil": 0.0,
"solar": if_exists(elem, "solar"),
"wind": if_exists(elem, "wind"),
+ # See Github issue #3218, Québec's thermal generation is at Bécancour gas turbine.
+ # It is reported with a delay, and data source returning 0.0 can indicate either no generation or not-yet-reported generation.
+ # Thus, if value is 0.0, overwrite it to None, so that backend can know this is not entirely reliable and might be updated later.
+ "gas": if_exists(elem, "thermal") or None,
+ # There are no geothermal electricity generation stations in Québec (and all of Canada for that matter).
"geothermal": 0.0,
"unknown": if_exists(elem, "unknown"),
},
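
Two idioms in the patched parser deserve a note: the English-to-French key lookup works by inverting the translation dict, and `if_exists(elem, "thermal") or None` relies on `0.0` being falsy to turn an unreported value into `None` for the backend. Both can be checked standalone with made-up numbers:

```python
english = {"hydraulique": "hydro", "thermique": "thermal", "eolien": "wind"}
french_for = {v: k for k, v in english.items()}   # invert: english name -> french key
print(french_for["thermal"])                      # 'thermique'

values = {"thermique": 0.0, "eolien": 123.4}
print(values[french_for["thermal"]] or None)      # None  -> 0.0 treated as "not reported"
print(values[french_for["wind"]] or None)         # 123.4 -> real readings pass through
```
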
| {"golden_diff": "diff --git a/parsers/CA_QC.py b/parsers/CA_QC.py\n--- a/parsers/CA_QC.py\n+++ b/parsers/CA_QC.py\n@@ -25,8 +25,9 @@\n \"thermique\": \"thermal\",\n \"solaire\": \"solar\",\n \"eolien\": \"wind\",\n- \"autres\": \"unknown\",\n- \"valeurs\": \"values\",\n+ # autres is all renewable, and mostly biomass. See Github #3218\n+ \"autres\": \"biomass\",\n+ \"valeurs\": \"values\"\n }\n english = {v: k for k, v in english.items()}\n try:\n@@ -42,21 +43,18 @@\n \"zoneKey\": zone_key,\n \"datetime\": arrow.get(elem[\"date\"], tzinfo=timezone_id).datetime,\n \"production\": {\n- \"biomass\": 0.0,\n+ \"biomass\": if_exists(elem, \"biomass\"),\n \"coal\": 0.0,\n-\n- # per https://github.com/tmrowco/electricitymap-contrib/issues/3218 , thermal generation\n- # is at B\u00e9cancour gas turbine. It is reported with a delay, and data source returning 0.0\n- # can indicate either no generation or not-yet-reported generation.\n- # To handle this, if reported value is 0.0, overwrite it to None, so that backend can know\n- # this is not entirely reliable and might be updated later.\n- \"gas\": if_exists(elem, \"thermal\") or None,\n-\n \"hydro\": if_exists(elem, \"hydro\"),\n \"nuclear\": 0.0,\n \"oil\": 0.0,\n \"solar\": if_exists(elem, \"solar\"),\n \"wind\": if_exists(elem, \"wind\"),\n+ # See Github issue #3218, Qu\u00e9bec's thermal generation is at B\u00e9cancour gas turbine.\n+ # It is reported with a delay, and data source returning 0.0 can indicate either no generation or not-yet-reported generation.\n+ # Thus, if value is 0.0, overwrite it to None, so that backend can know this is not entirely reliable and might be updated later.\n+ \"gas\": if_exists(elem, \"thermal\") or None,\n+ # There are no geothermal electricity generation stations in Qu\u00e9bec (and all of Canada for that matter).\n \"geothermal\": 0.0,\n \"unknown\": if_exists(elem, \"unknown\"),\n },\n", "issue": "First try at building a parser using data from Quebec\nHopefully this will show up on the map somehow. I look forward to seeing what changes will be made in order to make this parser functional. 
\n", "before_files": [{"content": "import requests\nimport logging\nfrom pprint import pprint\n# The arrow library is used to handle datetimes\nimport arrow\n\nPRODUCTION_URL = \"https://www.hydroquebec.com/data/documents-donnees/donnees-ouvertes/json/production.json\"\nCONSUMPTION_URL = \"https://www.hydroquebec.com/data/documents-donnees/donnees-ouvertes/json/demande.json\"\n# Reluctant to call it 'timezone', since we are importing 'timezone' from datetime\ntimezone_id = 'America/Montreal'\n\ndef fetch_production(\n zone_key=\"CA-QC\",\n session=None,\n target_datetime=None,\n logger=logging.getLogger(__name__),\n) -> dict:\n \"\"\"Requests the last known production mix (in MW) of a given region.\n In this particular case, translated mapping of JSON keys are also required\"\"\"\n\n def if_exists(elem: dict, etype: str):\n\n english = {\n \"hydraulique\": \"hydro\",\n \"thermique\": \"thermal\",\n \"solaire\": \"solar\",\n \"eolien\": \"wind\",\n \"autres\": \"unknown\",\n \"valeurs\": \"values\",\n }\n english = {v: k for k, v in english.items()}\n try:\n return elem[\"valeurs\"][english[etype]]\n except KeyError:\n return 0.0\n\n data = _fetch_quebec_production()\n for elem in reversed(data[\"details\"]):\n if elem[\"valeurs\"][\"total\"] != 0:\n\n return {\n \"zoneKey\": zone_key,\n \"datetime\": arrow.get(elem[\"date\"], tzinfo=timezone_id).datetime,\n \"production\": {\n \"biomass\": 0.0,\n \"coal\": 0.0,\n\n # per https://github.com/tmrowco/electricitymap-contrib/issues/3218 , thermal generation\n # is at B\u00e9cancour gas turbine. It is reported with a delay, and data source returning 0.0\n # can indicate either no generation or not-yet-reported generation.\n # To handle this, if reported value is 0.0, overwrite it to None, so that backend can know\n # this is not entirely reliable and might be updated later.\n \"gas\": if_exists(elem, \"thermal\") or None,\n\n \"hydro\": if_exists(elem, \"hydro\"),\n \"nuclear\": 0.0,\n \"oil\": 0.0,\n \"solar\": if_exists(elem, \"solar\"),\n \"wind\": if_exists(elem, \"wind\"),\n \"geothermal\": 0.0,\n \"unknown\": if_exists(elem, \"unknown\"),\n },\n \"source\": \"hydroquebec.com\",\n }\n\n\ndef fetch_consumption(zone_key=\"CA-QC\", session=None, target_datetime=None, logger=None):\n data = _fetch_quebec_consumption()\n for elem in reversed(data[\"details\"]):\n if \"demandeTotal\" in elem[\"valeurs\"]:\n return {\n \"zoneKey\": zone_key,\n \"datetime\": arrow.get(elem[\"date\"], tzinfo=timezone_id).datetime,\n \"consumption\": elem[\"valeurs\"][\"demandeTotal\"],\n \"source\": \"hydroquebec.com\",\n }\n\n\ndef _fetch_quebec_production(logger=logging.getLogger(__name__)) -> str:\n response = requests.get(PRODUCTION_URL)\n\n if not response.ok:\n logger.info('CA-QC: failed getting requested production data from hydroquebec - URL {}'.format(PRODUCTION_URL))\n return response.json()\n\n\ndef _fetch_quebec_consumption(logger=logging.getLogger(__name__)) -> str:\n response = requests.get(CONSUMPTION_URL)\n\n if not response.ok:\n logger.info('CA-QC: failed getting requested consumption data from hydroquebec - URL {}'.format(CONSUMPTION_URL))\n return response.json()\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n test_logger = logging.getLogger()\n\n print('fetch_production() ->')\n pprint(fetch_production(logger=test_logger))\n\n print('fetch_consumption() ->')\n pprint(fetch_consumption(logger=test_logger))\n", "path": "parsers/CA_QC.py"}]} | 1,695 | 575 |
gh_patches_debug_13280 | rasdani/github-patches | git_diff | pyca__cryptography-10345 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow verifying an x509 cert chain without making assertions about the subject name
Thanks to all who worked on the X.509 verification support in version 42.
I am trying to use this API for verifying a signing certificate, and realizing that the API requires me to assert a subject name (DNS name or IP address) to get the validation output. The subject name is not defined/not relevant in this application.
How can I verify that a certificate is in the chain of trust without asserting on the subject name?
</issue>
<code>
[start of src/cryptography/x509/verification.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import annotations
6
7 import typing
8
9 from cryptography.hazmat.bindings._rust import x509 as rust_x509
10 from cryptography.x509.general_name import DNSName, IPAddress
11
12 __all__ = [
13 "Store",
14 "Subject",
15 "ServerVerifier",
16 "PolicyBuilder",
17 "VerificationError",
18 ]
19
20 Store = rust_x509.Store
21 Subject = typing.Union[DNSName, IPAddress]
22 ServerVerifier = rust_x509.ServerVerifier
23 PolicyBuilder = rust_x509.PolicyBuilder
24 VerificationError = rust_x509.VerificationError
25
[end of src/cryptography/x509/verification.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/x509/verification.py b/src/cryptography/x509/verification.py
--- a/src/cryptography/x509/verification.py
+++ b/src/cryptography/x509/verification.py
@@ -12,6 +12,8 @@
__all__ = [
"Store",
"Subject",
+ "VerifiedClient",
+ "ClientVerifier",
"ServerVerifier",
"PolicyBuilder",
"VerificationError",
@@ -19,6 +21,8 @@
Store = rust_x509.Store
Subject = typing.Union[DNSName, IPAddress]
+VerifiedClient = rust_x509.VerifiedClient
+ClientVerifier = rust_x509.ClientVerifier
ServerVerifier = rust_x509.ServerVerifier
PolicyBuilder = rust_x509.PolicyBuilder
VerificationError = rust_x509.VerificationError
| {"golden_diff": "diff --git a/src/cryptography/x509/verification.py b/src/cryptography/x509/verification.py\n--- a/src/cryptography/x509/verification.py\n+++ b/src/cryptography/x509/verification.py\n@@ -12,6 +12,8 @@\n __all__ = [\n \"Store\",\n \"Subject\",\n+ \"VerifiedClient\",\n+ \"ClientVerifier\",\n \"ServerVerifier\",\n \"PolicyBuilder\",\n \"VerificationError\",\n@@ -19,6 +21,8 @@\n \n Store = rust_x509.Store\n Subject = typing.Union[DNSName, IPAddress]\n+VerifiedClient = rust_x509.VerifiedClient\n+ClientVerifier = rust_x509.ClientVerifier\n ServerVerifier = rust_x509.ServerVerifier\n PolicyBuilder = rust_x509.PolicyBuilder\n VerificationError = rust_x509.VerificationError\n", "issue": "Allow verifying an x509 cert chain without making assertions about the subject name\nThanks to all who worked on the X.509 verification support in version 42.\r\n\r\nI am trying to use this API for verifying a signing certificate, and realizing that the API requires me to assert a subject name (DNS name or IP address) to get the validation output. The subject name is not defined/not relevant in this application.\r\n\r\nHow can I verify that a certificate is in the chain of trust without asserting on the subject name?\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import annotations\n\nimport typing\n\nfrom cryptography.hazmat.bindings._rust import x509 as rust_x509\nfrom cryptography.x509.general_name import DNSName, IPAddress\n\n__all__ = [\n \"Store\",\n \"Subject\",\n \"ServerVerifier\",\n \"PolicyBuilder\",\n \"VerificationError\",\n]\n\nStore = rust_x509.Store\nSubject = typing.Union[DNSName, IPAddress]\nServerVerifier = rust_x509.ServerVerifier\nPolicyBuilder = rust_x509.PolicyBuilder\nVerificationError = rust_x509.VerificationError\n", "path": "src/cryptography/x509/verification.py"}]} | 871 | 198 |
gh_patches_debug_2909 | rasdani/github-patches | git_diff | mirumee__ariadne-799 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support Starlette 0.18.0
Was just released: https://github.com/encode/starlette/releases/tag/0.18.0
and currently the dependency is pinned at `<0.18.0`.
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Development Status :: 4 - Beta",
7 "Intended Audience :: Developers",
8 "License :: OSI Approved :: BSD License",
9 "Operating System :: OS Independent",
10 "Programming Language :: Python",
11 "Programming Language :: Python :: 3.7",
12 "Programming Language :: Python :: 3.8",
13 "Programming Language :: Python :: 3.9",
14 "Programming Language :: Python :: 3.10",
15 "Topic :: Software Development :: Libraries :: Python Modules",
16 ]
17
18 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
19 with open(README_PATH, "r", encoding="utf8") as f:
20 README = f.read()
21
22 setup(
23 name="ariadne",
24 author="Mirumee Software",
25 author_email="[email protected]",
26 description="Ariadne is a Python library for implementing GraphQL servers.",
27 long_description=README,
28 long_description_content_type="text/markdown",
29 license="BSD",
30 version="0.15.0.dev3",
31 url="https://github.com/mirumee/ariadne",
32 packages=["ariadne"],
33 include_package_data=True,
34 install_requires=[
35 "graphql-core>=3.2.0,<3.3",
36 "starlette<0.18",
37 "typing_extensions>=3.6.0",
38 ],
39 extras_require={"asgi-file-uploads": ["python-multipart>=0.0.5"]},
40 classifiers=CLASSIFIERS,
41 platforms=["any"],
42 zip_safe=False,
43 )
44
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,7 +33,7 @@
include_package_data=True,
install_requires=[
"graphql-core>=3.2.0,<3.3",
- "starlette<0.18",
+ "starlette<0.19",
"typing_extensions>=3.6.0",
],
extras_require={"asgi-file-uploads": ["python-multipart>=0.0.5"]},
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,7 +33,7 @@\n include_package_data=True,\n install_requires=[\n \"graphql-core>=3.2.0,<3.3\",\n- \"starlette<0.18\",\n+ \"starlette<0.19\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n", "issue": "Support Starlette 0.18.0\nWas just released: https://github.com/encode/starlette/releases/tag/0.18.0\r\nand currently the dependency is pinned at `<0.18.0`.\n", "before_files": [{"content": "#! /usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\", encoding=\"utf8\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.15.0.dev3\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n include_package_data=True,\n install_requires=[\n \"graphql-core>=3.2.0,<3.3\",\n \"starlette<0.18\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 1,016 | 115 |
gh_patches_debug_2549 | rasdani/github-patches | git_diff | streamlit__streamlit-724 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix Danny's S3 sharing issue
It looks like `[s3] keyPrefix=...` isn't making it into the URLs being fetched from S3.
This is the address of a manifest protobuf we want to fetch:
`https://yelp-people-dev.s3-us-west-2.amazonaws.com/~dqn/st/0.49.0-A8NT/reports/NJphBiGR4twz88mU9wTegn/manifest.pb`
And this is the address that's being generated:
`https://yelp-people-dev.s3.amazonaws.com/~dqn/reports/NJphBiGR4twz88mU9wTegn/manifest.pb`
The generated address is missing the `st/<streamlit version>` bits. Looks like we're splitting on a forward slash on the pathname in `ConnectionManager.fetchManifest`, which is giving us the wrong result because the keyPrefix itself has a forward slash.
</issue>
<code>
[start of examples/bart_vs_bikes.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2018-2019 Streamlit Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import copy
17 from urllib.parse import urljoin
18 import pandas as pd
19 import streamlit as st
20
21
22 st.title("BART stops vs. bike rentals")
23
24 st.write(
25 """
26 This plot shows two things:
27 * Bay Area Rapit Transit (BART) train lines plotted as arcs connecting the
28 stations.
29 * A 3D hexagonal histogram plot of bike-sharing rentals (origin locations).
30 """
31 )
32
33
34 @st.cache
35 def from_data_file(filename):
36 dirname = "https://raw.githubusercontent.com/streamlit/streamlit/develop/examples/data/"
37 url = urljoin(dirname, filename)
38 return pd.read_json(url)
39
40
41 # Grab some data
42 bart_stop_stats = copy.deepcopy(from_data_file("bart_stop_stats.json"))
43 bart_path_stats = from_data_file("bart_path_stats.json")
44 bike_rental_stats = from_data_file("bike_rental_stats.json")
45
46 # Move bart stop name to the 1st column, so it looks nicer when printed as a
47 # table.
48 bart_stop_names = bart_stop_stats["name"]
49 bart_stop_stats.drop(labels=["name"], axis=1, inplace=True)
50 bart_stop_stats.insert(0, "name", bart_stop_names)
51
52 st.deck_gl_chart(
53 viewport={"latitude": 37.76, "longitude": -122.4, "zoom": 11, "pitch": 50},
54 layers=[
55 {
56 # Plot number of bike rentals throughtout the city
57 "type": "HexagonLayer",
58 "data": bike_rental_stats,
59 "radius": 200,
60 "elevationScale": 4,
61 "elevationRange": [0, 1000],
62 "pickable": True,
63 "extruded": True,
64 },
65 {
66 # Now plot locations of Bart stops
67 # ...and let's size the stops according to traffic
68 "type": "ScatterplotLayer",
69 "data": bart_stop_stats,
70 "radiusScale": 10,
71 "getRadius": 50,
72 },
73 {
74 # Now Add names of Bart stops
75 "type": "TextLayer",
76 "data": bart_stop_stats,
77 "getText": "name",
78 "getColor": [0, 0, 0, 200],
79 "getSize": 15,
80 },
81 {
82 # And draw some arcs connecting the stops
83 "type": "ArcLayer",
84 "data": bart_path_stats,
85 "pickable": True,
86 "autoHighlight": True,
87 "getStrokeWidth": 10,
88 },
89 ],
90 )
91
[end of examples/bart_vs_bikes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/bart_vs_bikes.py b/examples/bart_vs_bikes.py
--- a/examples/bart_vs_bikes.py
+++ b/examples/bart_vs_bikes.py
@@ -33,7 +33,9 @@
@st.cache
def from_data_file(filename):
- dirname = "https://raw.githubusercontent.com/streamlit/streamlit/develop/examples/data/"
+ dirname = (
+ "https://raw.githubusercontent.com/streamlit/streamlit/develop/examples/data/"
+ )
url = urljoin(dirname, filename)
return pd.read_json(url)
| {"golden_diff": "diff --git a/examples/bart_vs_bikes.py b/examples/bart_vs_bikes.py\n--- a/examples/bart_vs_bikes.py\n+++ b/examples/bart_vs_bikes.py\n@@ -33,7 +33,9 @@\n \n @st.cache\n def from_data_file(filename):\n- dirname = \"https://raw.githubusercontent.com/streamlit/streamlit/develop/examples/data/\" \n+ dirname = (\n+ \"https://raw.githubusercontent.com/streamlit/streamlit/develop/examples/data/\"\n+ )\n url = urljoin(dirname, filename)\n return pd.read_json(url)\n", "issue": "Fix Danny's S3 sharing issue\nIt looks like `[s3] keyPrefix=...` isn't making it into the URLs being fetched from S3.\r\n\r\nThis is the address of a manifest protobuf we want to fetch:\r\n`https://yelp-people-dev.s3-us-west-2.amazonaws.com/~dqn/st/0.49.0-A8NT/reports/NJphBiGR4twz88mU9wTegn/manifest.pb`\r\n\r\nAnd this is the address that's being generated:\r\n`https://yelp-people-dev.s3.amazonaws.com/~dqn/reports/NJphBiGR4twz88mU9wTegn/manifest.pb`\r\n\r\nThe generated address is missing the `st/<streamlit version>` bits. Looks like we're splitting on a forward slash on the pathname in `ConnectionManager.fetchManifest`, which is giving us the wrong result because the keyPrefix itself has a forward slash.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2018-2019 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom urllib.parse import urljoin\nimport pandas as pd\nimport streamlit as st\n\n\nst.title(\"BART stops vs. 
bike rentals\")\n\nst.write(\n \"\"\"\n This plot shows two things:\n * Bay Area Rapit Transit (BART) train lines plotted as arcs connecting the\n stations.\n * A 3D hexagonal histogram plot of bike-sharing rentals (origin locations).\n\"\"\"\n)\n\n\[email protected]\ndef from_data_file(filename):\n dirname = \"https://raw.githubusercontent.com/streamlit/streamlit/develop/examples/data/\" \n url = urljoin(dirname, filename)\n return pd.read_json(url)\n\n\n# Grab some data\nbart_stop_stats = copy.deepcopy(from_data_file(\"bart_stop_stats.json\"))\nbart_path_stats = from_data_file(\"bart_path_stats.json\")\nbike_rental_stats = from_data_file(\"bike_rental_stats.json\")\n\n# Move bart stop name to the 1st column, so it looks nicer when printed as a\n# table.\nbart_stop_names = bart_stop_stats[\"name\"]\nbart_stop_stats.drop(labels=[\"name\"], axis=1, inplace=True)\nbart_stop_stats.insert(0, \"name\", bart_stop_names)\n\nst.deck_gl_chart(\n viewport={\"latitude\": 37.76, \"longitude\": -122.4, \"zoom\": 11, \"pitch\": 50},\n layers=[\n {\n # Plot number of bike rentals throughtout the city\n \"type\": \"HexagonLayer\",\n \"data\": bike_rental_stats,\n \"radius\": 200,\n \"elevationScale\": 4,\n \"elevationRange\": [0, 1000],\n \"pickable\": True,\n \"extruded\": True,\n },\n {\n # Now plot locations of Bart stops\n # ...and let's size the stops according to traffic\n \"type\": \"ScatterplotLayer\",\n \"data\": bart_stop_stats,\n \"radiusScale\": 10,\n \"getRadius\": 50,\n },\n {\n # Now Add names of Bart stops\n \"type\": \"TextLayer\",\n \"data\": bart_stop_stats,\n \"getText\": \"name\",\n \"getColor\": [0, 0, 0, 200],\n \"getSize\": 15,\n },\n {\n # And draw some arcs connecting the stops\n \"type\": \"ArcLayer\",\n \"data\": bart_path_stats,\n \"pickable\": True,\n \"autoHighlight\": True,\n \"getStrokeWidth\": 10,\n },\n ],\n)\n", "path": "examples/bart_vs_bikes.py"}]} | 1,616 | 125 |
gh_patches_debug_28660 | rasdani/github-patches | git_diff | mozilla__pontoon-2675 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No warnings when trying to submit empty translations
I've noticed an increase in the number of empty strings in Firefox, where I have [special checks](https://test.flod.org/checks/).
Apparently, we don't warn anymore when someone tries to submit an empty translation.
</issue>
<code>
[start of pontoon/checks/libraries/pontoon_db.py]
1 import html
2 import re
3
4 import bleach
5
6 from collections import defaultdict
7 from fluent.syntax import FluentParser, ast
8
9 from pontoon.sync.formats.ftl import localizable_entries
10
11
12 MAX_LENGTH_RE = re.compile(r"MAX_LENGTH:( *)(\d+)", re.MULTILINE)
13 parser = FluentParser()
14
15
16 def get_max_length(comment):
17 """
18 Return max length value for an entity with MAX_LENTH.
19 """
20 max_length = re.findall(MAX_LENGTH_RE, comment or "")
21
22 if max_length:
23 return int(max_length[0][1])
24
25 return None
26
27
28 def run_checks(entity, original, string):
29 """
30 Group all checks related to the base UI that get stored in the DB
31 :arg pontoon.base.models.Entity entity: Source entity
32 :arg basestring original: an original string
33 :arg basestring string: a translation
34 """
35 checks = defaultdict(list)
36 resource_ext = entity.resource.format
37
38 if resource_ext == "lang":
39 # Newlines are not allowed in .lang files (bug 1190754)
40 if "\n" in string:
41 checks["pErrors"].append("Newline characters are not allowed")
42
43 # Prevent translations exceeding the given length limit
44 max_length = get_max_length(entity.comment)
45
46 if max_length:
47 string_length = len(
48 html.unescape(bleach.clean(string, strip=True, tags=()))
49 )
50
51 if string_length > max_length:
52 checks["pErrors"].append("Translation too long")
53
54 # Bug 1599056: Original and translation must either both end in a newline,
55 # or none of them should.
56 if resource_ext == "po":
57 if original.endswith("\n") != string.endswith("\n"):
58 checks["pErrors"].append("Ending newline mismatch")
59
60 # Prevent empty translation submissions if not supported
61 if string == "" and not entity.resource.allows_empty_translations:
62 checks["pErrors"].append("Empty translations are not allowed")
63
64 # FTL checks
65 if resource_ext == "ftl" and string != "":
66 translation_ast = parser.parse_entry(string)
67 entity_ast = parser.parse_entry(entity.string)
68
69 # Parse error
70 if isinstance(translation_ast, ast.Junk):
71 checks["pErrors"].append(translation_ast.annotations[0].message)
72
73 # Not a localizable entry
74 elif not isinstance(translation_ast, localizable_entries):
75 checks["pErrors"].append(
76 "Translation needs to be a valid localizable entry"
77 )
78
79 # Message ID mismatch
80 elif entity_ast.id.name != translation_ast.id.name:
81 checks["pErrors"].append("Translation key needs to match source string key")
82
83 return checks
84
[end of pontoon/checks/libraries/pontoon_db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pontoon/checks/libraries/pontoon_db.py b/pontoon/checks/libraries/pontoon_db.py
--- a/pontoon/checks/libraries/pontoon_db.py
+++ b/pontoon/checks/libraries/pontoon_db.py
@@ -5,6 +5,7 @@
from collections import defaultdict
from fluent.syntax import FluentParser, ast
+from fluent.syntax.visitor import Visitor
from pontoon.sync.formats.ftl import localizable_entries
@@ -25,6 +26,24 @@
return None
+class IsEmptyVisitor(Visitor):
+ def __init__(self):
+ self.is_empty = True
+
+ def visit_Placeable(self, node):
+ if isinstance(node.expression, ast.Literal):
+ if node.expression.parse()["value"]:
+ self.is_empty = False
+ elif isinstance(node.expression, ast.SelectExpression):
+ self.generic_visit(node.expression)
+ else:
+ self.is_empty = False
+
+ def visit_TextElement(self, node):
+ if node.value:
+ self.is_empty = False
+
+
def run_checks(entity, original, string):
"""
Group all checks related to the base UI that get stored in the DB
@@ -80,4 +99,12 @@
elif entity_ast.id.name != translation_ast.id.name:
checks["pErrors"].append("Translation key needs to match source string key")
+ # Empty translation entry warning; set here rather than pontoon_non_db.py
+ # to avoid needing to parse the Fluent message twice.
+ else:
+ visitor = IsEmptyVisitor()
+ visitor.visit(translation_ast)
+ if visitor.is_empty:
+ checks["pndbWarnings"].append("Empty translation")
+
return checks
| {"golden_diff": "diff --git a/pontoon/checks/libraries/pontoon_db.py b/pontoon/checks/libraries/pontoon_db.py\n--- a/pontoon/checks/libraries/pontoon_db.py\n+++ b/pontoon/checks/libraries/pontoon_db.py\n@@ -5,6 +5,7 @@\n \n from collections import defaultdict\n from fluent.syntax import FluentParser, ast\n+from fluent.syntax.visitor import Visitor\n \n from pontoon.sync.formats.ftl import localizable_entries\n \n@@ -25,6 +26,24 @@\n return None\n \n \n+class IsEmptyVisitor(Visitor):\n+ def __init__(self):\n+ self.is_empty = True\n+\n+ def visit_Placeable(self, node):\n+ if isinstance(node.expression, ast.Literal):\n+ if node.expression.parse()[\"value\"]:\n+ self.is_empty = False\n+ elif isinstance(node.expression, ast.SelectExpression):\n+ self.generic_visit(node.expression)\n+ else:\n+ self.is_empty = False\n+\n+ def visit_TextElement(self, node):\n+ if node.value:\n+ self.is_empty = False\n+\n+\n def run_checks(entity, original, string):\n \"\"\"\n Group all checks related to the base UI that get stored in the DB\n@@ -80,4 +99,12 @@\n elif entity_ast.id.name != translation_ast.id.name:\n checks[\"pErrors\"].append(\"Translation key needs to match source string key\")\n \n+ # Empty translation entry warning; set here rather than pontoon_non_db.py\n+ # to avoid needing to parse the Fluent message twice.\n+ else:\n+ visitor = IsEmptyVisitor()\n+ visitor.visit(translation_ast)\n+ if visitor.is_empty:\n+ checks[\"pndbWarnings\"].append(\"Empty translation\")\n+\n return checks\n", "issue": "No warnings when trying to submit empty translations\nI've noticed an increase in the number of empty strings in Firefox, where I have [special checks](https://test.flod.org/checks/).\r\n\r\nApparently, we don't warn anymore when someone tries to submit an empty translation.\n", "before_files": [{"content": "import html\nimport re\n\nimport bleach\n\nfrom collections import defaultdict\nfrom fluent.syntax import FluentParser, ast\n\nfrom pontoon.sync.formats.ftl import localizable_entries\n\n\nMAX_LENGTH_RE = re.compile(r\"MAX_LENGTH:( *)(\\d+)\", re.MULTILINE)\nparser = FluentParser()\n\n\ndef get_max_length(comment):\n \"\"\"\n Return max length value for an entity with MAX_LENTH.\n \"\"\"\n max_length = re.findall(MAX_LENGTH_RE, comment or \"\")\n\n if max_length:\n return int(max_length[0][1])\n\n return None\n\n\ndef run_checks(entity, original, string):\n \"\"\"\n Group all checks related to the base UI that get stored in the DB\n :arg pontoon.base.models.Entity entity: Source entity\n :arg basestring original: an original string\n :arg basestring string: a translation\n \"\"\"\n checks = defaultdict(list)\n resource_ext = entity.resource.format\n\n if resource_ext == \"lang\":\n # Newlines are not allowed in .lang files (bug 1190754)\n if \"\\n\" in string:\n checks[\"pErrors\"].append(\"Newline characters are not allowed\")\n\n # Prevent translations exceeding the given length limit\n max_length = get_max_length(entity.comment)\n\n if max_length:\n string_length = len(\n html.unescape(bleach.clean(string, strip=True, tags=()))\n )\n\n if string_length > max_length:\n checks[\"pErrors\"].append(\"Translation too long\")\n\n # Bug 1599056: Original and translation must either both end in a newline,\n # or none of them should.\n if resource_ext == \"po\":\n if original.endswith(\"\\n\") != string.endswith(\"\\n\"):\n checks[\"pErrors\"].append(\"Ending newline mismatch\")\n\n # Prevent empty translation submissions if not supported\n if string == \"\" and not 
entity.resource.allows_empty_translations:\n checks[\"pErrors\"].append(\"Empty translations are not allowed\")\n\n # FTL checks\n if resource_ext == \"ftl\" and string != \"\":\n translation_ast = parser.parse_entry(string)\n entity_ast = parser.parse_entry(entity.string)\n\n # Parse error\n if isinstance(translation_ast, ast.Junk):\n checks[\"pErrors\"].append(translation_ast.annotations[0].message)\n\n # Not a localizable entry\n elif not isinstance(translation_ast, localizable_entries):\n checks[\"pErrors\"].append(\n \"Translation needs to be a valid localizable entry\"\n )\n\n # Message ID mismatch\n elif entity_ast.id.name != translation_ast.id.name:\n checks[\"pErrors\"].append(\"Translation key needs to match source string key\")\n\n return checks\n", "path": "pontoon/checks/libraries/pontoon_db.py"}]} | 1,355 | 395 |
gh_patches_debug_6668 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-635 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Natsort import error
# Brief Description
The module `natsort` isn't found because it was added (in #627) to dev requirements but needs to be in the main requirements file. It is imported with all functions from the init script.
Rather than requiring it, perhaps it could also be brought in with a `try`, `except` per #97
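For illustration, the optional-import pattern suggested above might look roughly like this (the fallback behaviour and error message below are assumptions, not pyjanitor's actual code):
```python
# Guarded import: fall back to None instead of failing at import time.
try:
    from natsort import index_natsorted, natsorted
except ImportError:
    index_natsorted = None
    natsorted = None


def _require_natsort():
    # Raise a clear error only when a natural-sort feature is actually used.
    if natsorted is None:
        raise ImportError(
            "natsort is required for this feature; install it with "
            "`pip install natsort`."
        )
```
Functions that need natural sorting could then call `_require_natsort()` first, so a missing dependency fails loudly at the call site rather than on `import janitor`.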
# Error Messages
```
/usr/local/lib/python3.7/site-packages/janitor/functions.py:25: in <module>
from natsort import index_natsorted, natsorted
E ModuleNotFoundError: No module named 'natsort'
```
</issue>
<code>
[start of setup.py]
1 import re
2 from pathlib import Path
3
4 from setuptools import setup
5
6
7 def requirements():
8 with open("requirements.txt", "r+") as f:
9 return f.read()
10
11
12 def generate_long_description() -> str:
13 """
14 Extra chunks from README for PyPI description.
15
16 Target chunks must be contained within `.. pypi-doc` pair comments,
17 so there must be an even number of comments in README.
18
19 :returns: Extracted description from README
20
21 """
22 # Read the contents of README file
23 this_directory = Path(__file__).parent
24 with open(this_directory / "README.rst", encoding="utf-8") as f:
25 readme = f.read()
26
27 # Find pypi-doc comments in README
28 indices = [m.start() for m in re.finditer(".. pypi-doc", readme)]
29 if len(indices) % 2 != 0:
30 raise Exception("Odd number of `.. pypi-doc` comments in README")
31
32 # Loop through pairs of comments and save text between pairs
33 long_description = ""
34 for i in range(0, len(indices), 2):
35 start_index = indices[i] + 11
36 end_index = indices[i + 1]
37 long_description += readme[start_index:end_index]
38 return long_description
39
40
41 extra_spark = ["pyspark"]
42 extra_biology = ["biopython"]
43 extra_chemistry = ["rdkit"]
44 extra_engineering = ["unyt"]
45 extra_all = extra_biology + extra_engineering + extra_spark
46
47 setup(
48 name="pyjanitor",
49 version="0.20.1",
50 description="Tools for cleaning pandas DataFrames",
51 author="Eric J. Ma",
52 author_email="[email protected]",
53 url="https://github.com/ericmjl/pyjanitor",
54 license="MIT",
55 packages=["janitor"],
56 install_requires=requirements(),
57 extras_require={
58 "all": extra_all,
59 "biology": extra_biology,
60 # "chemistry": extra_chemistry, should be inserted once rdkit
61 # fixes https://github.com/rdkit/rdkit/issues/1812
62 "engineering": extra_engineering,
63 "spark": extra_spark,
64 },
65 python_requires=">=3.6",
66 long_description=generate_long_description(),
67 long_description_content_type="text/x-rst",
68 )
69
[end of setup.py]
[start of janitor/__init__.py]
1 try:
2 import janitor.xarray
3 except ImportError:
4 pass
5
6 from .functions import * # noqa: F403, F401
7 from .math import *
8 from .ml import get_features_targets as _get_features_targets
9 from .utils import refactored_function
10
11 # from .dataframe import JanitorDataFrame as DataFrame # noqa: F401
12 # from .dataframe import JanitorSeries as Series # noqa: F401
13
14
15 @refactored_function(
16 "get_features_targets() has moved. Please use ml.get_features_targets()."
17 )
18 def get_features_targets(*args, **kwargs):
19 return _get_features_targets(*args, **kwargs)
20
21
22 __version__ = "0.20.1"
23
[end of janitor/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/janitor/__init__.py b/janitor/__init__.py
--- a/janitor/__init__.py
+++ b/janitor/__init__.py
@@ -19,4 +19,4 @@
return _get_features_targets(*args, **kwargs)
-__version__ = "0.20.1"
+__version__ = "0.20.2"
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,7 +46,7 @@
setup(
name="pyjanitor",
- version="0.20.1",
+ version="0.20.2",
description="Tools for cleaning pandas DataFrames",
author="Eric J. Ma",
author_email="[email protected]",
| {"golden_diff": "diff --git a/janitor/__init__.py b/janitor/__init__.py\n--- a/janitor/__init__.py\n+++ b/janitor/__init__.py\n@@ -19,4 +19,4 @@\n return _get_features_targets(*args, **kwargs)\n \n \n-__version__ = \"0.20.1\"\n+__version__ = \"0.20.2\"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,7 +46,7 @@\n \n setup(\n name=\"pyjanitor\",\n- version=\"0.20.1\",\n+ version=\"0.20.2\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n", "issue": "Natsort import error\n# Brief Description\r\n\r\nThe module `natsort` isn't found because it was added (in #627) to dev requirements but needs to be in the main requirements file. It is imported with all functions from the init script. \r\n\r\nRather than requiring it, perhaps it could also be brought in with a `try`, `except` per #97 \r\n\r\n# Error Messages\r\n\r\n```\r\n /usr/local/lib/python3.7/site-packages/janitor/functions.py:25: in <module>\r\n from natsort import index_natsorted, natsorted\r\n E ModuleNotFoundError: No module named 'natsort'\r\n```\n", "before_files": [{"content": "import re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\ndef generate_long_description() -> str:\n \"\"\"\n Extra chunks from README for PyPI description.\n\n Target chunks must be contained within `.. pypi-doc` pair comments,\n so there must be an even number of comments in README.\n\n :returns: Extracted description from README\n\n \"\"\"\n # Read the contents of README file\n this_directory = Path(__file__).parent\n with open(this_directory / \"README.rst\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n # Find pypi-doc comments in README\n indices = [m.start() for m in re.finditer(\".. pypi-doc\", readme)]\n if len(indices) % 2 != 0:\n raise Exception(\"Odd number of `.. pypi-doc` comments in README\")\n\n # Loop through pairs of comments and save text between pairs\n long_description = \"\"\n for i in range(0, len(indices), 2):\n start_index = indices[i] + 11\n end_index = indices[i + 1]\n long_description += readme[start_index:end_index]\n return long_description\n\n\nextra_spark = [\"pyspark\"]\nextra_biology = [\"biopython\"]\nextra_chemistry = [\"rdkit\"]\nextra_engineering = [\"unyt\"]\nextra_all = extra_biology + extra_engineering + extra_spark\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.20.1\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. 
Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n license=\"MIT\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n extras_require={\n \"all\": extra_all,\n \"biology\": extra_biology,\n # \"chemistry\": extra_chemistry, should be inserted once rdkit\n # fixes https://github.com/rdkit/rdkit/issues/1812\n \"engineering\": extra_engineering,\n \"spark\": extra_spark,\n },\n python_requires=\">=3.6\",\n long_description=generate_long_description(),\n long_description_content_type=\"text/x-rst\",\n)\n", "path": "setup.py"}, {"content": "try:\n import janitor.xarray\nexcept ImportError:\n pass\n\nfrom .functions import * # noqa: F403, F401\nfrom .math import *\nfrom .ml import get_features_targets as _get_features_targets\nfrom .utils import refactored_function\n\n# from .dataframe import JanitorDataFrame as DataFrame # noqa: F401\n# from .dataframe import JanitorSeries as Series # noqa: F401\n\n\n@refactored_function(\n \"get_features_targets() has moved. Please use ml.get_features_targets().\"\n)\ndef get_features_targets(*args, **kwargs):\n return _get_features_targets(*args, **kwargs)\n\n\n__version__ = \"0.20.1\"\n", "path": "janitor/__init__.py"}]} | 1,533 | 185 |
gh_patches_debug_23123 | rasdani/github-patches | git_diff | streamlink__streamlink-5762 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.vidio: 403 Client Error on stream token acquirement
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Unable to open URL: https://www.vidio.com/live/204/tokens (403 Client Error: Forbidden for url: https://www.vidio.com/live/204/tokens?type=hls)
### Description
The live stream: https://www.vidio.com/live/204-sctv
the output: https://www.vidio.com/live/204/tokens (403 Client Error: Forbidden for url: https://www.vidio.com/live/204/tokens?type=hls)
It is missing sctv
### Debug log
```text
streamlink https://www.vidio.com/live/204-sctv best
[cli][info] Found matching plugin vidio for URL https://www.vidio.com/live/204-sctv
error: Unable to open URL: https://www.vidio.com/live/204/tokens (403 Client Error: Forbidden for url: https://www.vidio.com/live/204/tokens?type=hls)
```
</issue>
<code>
[start of src/streamlink/plugins/vidio.py]
1 """
2 $description Indonesian & international live TV channels and video on-demand service. OTT service from Vidio.
3 $url vidio.com
4 $type live, vod
5 """
6 import logging
7 import re
8 from urllib.parse import urlsplit, urlunsplit
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.plugin.api import validate
12 from streamlink.stream.dash import DASHStream
13 from streamlink.stream.hls import HLSStream
14
15
16 log = logging.getLogger(__name__)
17
18
19 @pluginmatcher(re.compile(
20 r"https?://(?:www\.)?vidio\.com/",
21 ))
22 class Vidio(Plugin):
23 tokens_url = "https://www.vidio.com/live/{id}/tokens"
24
25 def _get_stream_token(self, stream_id, stream_type):
26 log.debug("Getting stream token")
27 return self.session.http.post(
28 self.tokens_url.format(id=stream_id),
29 params={"type": stream_type},
30 headers={"Referer": self.url},
31 schema=validate.Schema(
32 validate.parse_json(),
33 {"token": str},
34 validate.get("token"),
35 ),
36 )
37
38 def _get_streams(self):
39 stream_id, has_token, hls_url, dash_url = self.session.http.get(
40 self.url,
41 schema=validate.Schema(
42 validate.parse_html(),
43 validate.xml_find(".//*[@data-video-id]"),
44 validate.union((
45 validate.get("data-video-id"),
46 validate.all(
47 validate.get("data-video-has-token"),
48 validate.transform(lambda val: val and val != "false"),
49 ),
50 validate.get("data-vjs-clip-hls-url"),
51 validate.get("data-vjs-clip-dash-url"),
52 )),
53 ),
54 )
55
56 if dash_url and has_token:
57 token = self._get_stream_token(stream_id, "dash")
58 parsed = urlsplit(dash_url)
59 dash_url = urlunsplit(parsed._replace(path=f"{token}{parsed.path}"))
60 return DASHStream.parse_manifest(
61 self.session,
62 dash_url,
63 headers={"Referer": "https://www.vidio.com/"},
64 )
65
66 if not hls_url:
67 return
68
69 if has_token:
70 token = self._get_stream_token(stream_id, "hls")
71 hls_url = f"{hls_url}?{token}"
72
73 return HLSStream.parse_variant_playlist(
74 self.session,
75 hls_url,
76 headers={"Referer": "https://www.vidio.com/"},
77 )
78
79
80 __plugin__ = Vidio
81
[end of src/streamlink/plugins/vidio.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/vidio.py b/src/streamlink/plugins/vidio.py
--- a/src/streamlink/plugins/vidio.py
+++ b/src/streamlink/plugins/vidio.py
@@ -6,6 +6,7 @@
import logging
import re
from urllib.parse import urlsplit, urlunsplit
+from uuid import uuid4
from streamlink.plugin import Plugin, pluginmatcher
from streamlink.plugin.api import validate
@@ -17,7 +18,7 @@
@pluginmatcher(re.compile(
- r"https?://(?:www\.)?vidio\.com/",
+ r"https?://(?:www\.)?vidio\.com/.+",
))
class Vidio(Plugin):
tokens_url = "https://www.vidio.com/live/{id}/tokens"
@@ -28,6 +29,10 @@
self.tokens_url.format(id=stream_id),
params={"type": stream_type},
headers={"Referer": self.url},
+ cookies={
+ "ahoy_visit": str(uuid4()),
+ "ahoy_visitor": str(uuid4()),
+ },
schema=validate.Schema(
validate.parse_json(),
{"token": str},
| {"golden_diff": "diff --git a/src/streamlink/plugins/vidio.py b/src/streamlink/plugins/vidio.py\n--- a/src/streamlink/plugins/vidio.py\n+++ b/src/streamlink/plugins/vidio.py\n@@ -6,6 +6,7 @@\n import logging\n import re\n from urllib.parse import urlsplit, urlunsplit\n+from uuid import uuid4\n \n from streamlink.plugin import Plugin, pluginmatcher\n from streamlink.plugin.api import validate\n@@ -17,7 +18,7 @@\n \n \n @pluginmatcher(re.compile(\n- r\"https?://(?:www\\.)?vidio\\.com/\",\n+ r\"https?://(?:www\\.)?vidio\\.com/.+\",\n ))\n class Vidio(Plugin):\n tokens_url = \"https://www.vidio.com/live/{id}/tokens\"\n@@ -28,6 +29,10 @@\n self.tokens_url.format(id=stream_id),\n params={\"type\": stream_type},\n headers={\"Referer\": self.url},\n+ cookies={\n+ \"ahoy_visit\": str(uuid4()),\n+ \"ahoy_visitor\": str(uuid4()),\n+ },\n schema=validate.Schema(\n validate.parse_json(),\n {\"token\": str},\n", "issue": "plugins.vidio: 403 Client Error on stream token acquirement\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nUnable to open URL: https://www.vidio.com/live/204/tokens (403 Client Error: Forbidden for url: https://www.vidio.com/live/204/tokens?type=hls)\n\n### Description\n\nThe live stream: https://www.vidio.com/live/204-sctv\r\nthe output: https://www.vidio.com/live/204/tokens (403 Client Error: Forbidden for url: https://www.vidio.com/live/204/tokens?type=hls)\r\n\r\nIt is missing sctv\n\n### Debug log\n\n```text\nstreamlink https://www.vidio.com/live/204-sctv best\r\n[cli][info] Found matching plugin vidio for URL https://www.vidio.com/live/204-sctv\r\nerror: Unable to open URL: https://www.vidio.com/live/204/tokens (403 Client Error: Forbidden for url: https://www.vidio.com/live/204/tokens?type=hls)\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Indonesian & international live TV channels and video on-demand service. 
OTT service from Vidio.\n$url vidio.com\n$type live, vod\n\"\"\"\nimport logging\nimport re\nfrom urllib.parse import urlsplit, urlunsplit\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?vidio\\.com/\",\n))\nclass Vidio(Plugin):\n tokens_url = \"https://www.vidio.com/live/{id}/tokens\"\n\n def _get_stream_token(self, stream_id, stream_type):\n log.debug(\"Getting stream token\")\n return self.session.http.post(\n self.tokens_url.format(id=stream_id),\n params={\"type\": stream_type},\n headers={\"Referer\": self.url},\n schema=validate.Schema(\n validate.parse_json(),\n {\"token\": str},\n validate.get(\"token\"),\n ),\n )\n\n def _get_streams(self):\n stream_id, has_token, hls_url, dash_url = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.xml_find(\".//*[@data-video-id]\"),\n validate.union((\n validate.get(\"data-video-id\"),\n validate.all(\n validate.get(\"data-video-has-token\"),\n validate.transform(lambda val: val and val != \"false\"),\n ),\n validate.get(\"data-vjs-clip-hls-url\"),\n validate.get(\"data-vjs-clip-dash-url\"),\n )),\n ),\n )\n\n if dash_url and has_token:\n token = self._get_stream_token(stream_id, \"dash\")\n parsed = urlsplit(dash_url)\n dash_url = urlunsplit(parsed._replace(path=f\"{token}{parsed.path}\"))\n return DASHStream.parse_manifest(\n self.session,\n dash_url,\n headers={\"Referer\": \"https://www.vidio.com/\"},\n )\n\n if not hls_url:\n return\n\n if has_token:\n token = self._get_stream_token(stream_id, \"hls\")\n hls_url = f\"{hls_url}?{token}\"\n\n return HLSStream.parse_variant_playlist(\n self.session,\n hls_url,\n headers={\"Referer\": \"https://www.vidio.com/\"},\n )\n\n\n__plugin__ = Vidio\n", "path": "src/streamlink/plugins/vidio.py"}]} | 1,640 | 261 |
gh_patches_debug_50124 | rasdani/github-patches | git_diff | scrapy__scrapy-2649 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
After adding request flags subclasses of logformatter that rely on 'flags' format string are broken
#2082 added flags to request but it also renamed formatting string key from flags to response_flags/request_flags
```
CRAWLEDMSG = u"Crawled (%(status)s) %(request)s (referer: %(referer)s)%(flags)s"
+CRAWLEDMSG = u"Crawled (%(status)s) %(request)s%(request_flags)s (referer: %(referer)s)%(response_flags)s"
```
Scrapy allows you to override the logformatter and this is what I have in my project. I have a logformatter looking roughly like this
```python
# dirbot/logf.py
from scrapy.logformatter import LogFormatter
class CustomLogFormatter(LogFormatter):
def crawled(self, request, response, spider):
kwargs = super(CustomLogFormatter, self).crawled(
request, response, spider)
kwargs['msg'] = (
u"Crawled (%(status)s) %(request)s "
u"(referer: %(referer)s, latency: %(latency).2f s)%(flags)s"
)
kwargs['args']['latency'] = response.meta.get('download_latency', 0)
return kwargs
```
now if you enable it in settings (`LOG_FORMATTER = 'dirbot.logf.CustomLogFormatter'`) and try to run it with recent master you'll get a KeyError
```
2017-03-13 14:15:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
KeyError: u'flags'
Logged from file engine.py, line 238
Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
KeyError: u'flags'
Logged from file engine.py, line 238
2017-03-13 14:15:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/>
```
So this change that renamed `flags` to `response_flags/request_flags` seems backward incompatible.
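For what it's worth, a subclass can be made to work with the renamed keys by switching to the new placeholders — a sketch of one possible user-side workaround, not an official fix:
```python
from scrapy.logformatter import LogFormatter


class CustomLogFormatter(LogFormatter):
    def crawled(self, request, response, spider):
        # The base class already fills request_flags/response_flags in args.
        kwargs = super(CustomLogFormatter, self).crawled(
            request, response, spider)
        kwargs['msg'] = (
            u"Crawled (%(status)s) %(request)s%(request_flags)s "
            u"(referer: %(referer)s, latency: %(latency).2f s)"
            u"%(response_flags)s"
        )
        kwargs['args']['latency'] = response.meta.get('download_latency', 0)
        return kwargs
```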
</issue>
<code>
[start of scrapy/logformatter.py]
1 import os
2 import logging
3
4 from twisted.python.failure import Failure
5
6 from scrapy.utils.request import referer_str
7
8 SCRAPEDMSG = u"Scraped from %(src)s" + os.linesep + "%(item)s"
9 DROPPEDMSG = u"Dropped: %(exception)s" + os.linesep + "%(item)s"
10 CRAWLEDMSG = u"Crawled (%(status)s) %(request)s%(request_flags)s (referer: %(referer)s)%(response_flags)s"
11
12
13 class LogFormatter(object):
14 """Class for generating log messages for different actions.
15
16 All methods must return a dictionary listing the parameters `level`, `msg`
17 and `args` which are going to be used for constructing the log message when
18 calling logging.log.
19
20 Dictionary keys for the method outputs:
21 * `level` should be the log level for that action, you can use those
22 from the python logging library: logging.DEBUG, logging.INFO,
23 logging.WARNING, logging.ERROR and logging.CRITICAL.
24
25 * `msg` should be a string that can contain different formatting
26 placeholders. This string, formatted with the provided `args`, is going
27 to be the log message for that action.
28
29 * `args` should be a tuple or dict with the formatting placeholders for
30 `msg`. The final log message is computed as output['msg'] %
31 output['args'].
32 """
33
34 def crawled(self, request, response, spider):
35 request_flags = ' %s' % str(request.flags) if request.flags else ''
36 response_flags = ' %s' % str(response.flags) if response.flags else ''
37 return {
38 'level': logging.DEBUG,
39 'msg': CRAWLEDMSG,
40 'args': {
41 'status': response.status,
42 'request': request,
43 'request_flags' : request_flags,
44 'referer': referer_str(request),
45 'response_flags': response_flags,
46 }
47 }
48
49 def scraped(self, item, response, spider):
50 if isinstance(response, Failure):
51 src = response.getErrorMessage()
52 else:
53 src = response
54 return {
55 'level': logging.DEBUG,
56 'msg': SCRAPEDMSG,
57 'args': {
58 'src': src,
59 'item': item,
60 }
61 }
62
63 def dropped(self, item, exception, response, spider):
64 return {
65 'level': logging.WARNING,
66 'msg': DROPPEDMSG,
67 'args': {
68 'exception': exception,
69 'item': item,
70 }
71 }
72
73 @classmethod
74 def from_crawler(cls, crawler):
75 return cls()
76
[end of scrapy/logformatter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/logformatter.py b/scrapy/logformatter.py
--- a/scrapy/logformatter.py
+++ b/scrapy/logformatter.py
@@ -43,6 +43,8 @@
'request_flags' : request_flags,
'referer': referer_str(request),
'response_flags': response_flags,
+ # backward compatibility with Scrapy logformatter below 1.4 version
+ 'flags': response_flags
}
}
| {"golden_diff": "diff --git a/scrapy/logformatter.py b/scrapy/logformatter.py\n--- a/scrapy/logformatter.py\n+++ b/scrapy/logformatter.py\n@@ -43,6 +43,8 @@\n 'request_flags' : request_flags,\n 'referer': referer_str(request),\n 'response_flags': response_flags,\n+ # backward compatibility with Scrapy logformatter below 1.4 version\n+ 'flags': response_flags\n }\n }\n", "issue": "After adding request flags subclasses of logformatter that rely on 'flags' format string are broken\n#2082 added flags to request but it also renamed formatting string key from flags to response_flags/request_flags\r\n```\r\nCRAWLEDMSG = u\"Crawled (%(status)s) %(request)s (referer: %(referer)s)%(flags)s\"\r\n +CRAWLEDMSG = u\"Crawled (%(status)s) %(request)s%(request_flags)s (referer: %(referer)s)%(response_flags)s\" \r\n```\r\n\r\nScrapy allows you to override logformatter and this is what I have in my project. I have logformatter looking rouhgly like this\r\n\r\n\r\n```python\r\n# dirbot/logf.py\r\nfrom scrapy.logformatter import LogFormatter\r\n\r\n\r\nclass CustomLogFormatter(LogFormatter):\r\n def crawled(self, request, response, spider):\r\n kwargs = super(CustomLogFormatter, self).crawled(\r\n request, response, spider)\r\n kwargs['msg'] = (\r\n u\"Crawled (%(status)s) %(request)s \"\r\n u\"(referer: %(referer)s, latency: %(latency).2f s)%(flags)s\"\r\n )\r\n kwargs['args']['latency'] = response.meta.get('download_latency', 0)\r\n return kwargs\r\n```\r\n\r\nnow if you enable it in settings `LOG_FORMATTER = 'dirbot.logf.CustomLogFormatter'\r\n` and try to run it with recent master you'll get KeyError\r\n\r\n```\r\n2017-03-13 14:15:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 851, in emit\r\n msg = self.format(record)\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 724, in format\r\n return fmt.format(record)\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 464, in format\r\n record.message = record.getMessage()\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 328, in getMessage\r\n msg = msg % self.args\r\nKeyError: u'flags'\r\nLogged from file engine.py, line 238\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 851, in emit\r\n msg = self.format(record)\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 724, in format\r\n return fmt.format(record)\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 464, in format\r\n record.message = record.getMessage()\r\n File \"/usr/lib/python2.7/logging/__init__.py\", line 328, in getMessage\r\n msg = msg % self.args\r\nKeyError: u'flags'\r\nLogged from file engine.py, line 238\r\n2017-03-13 14:15:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/>\r\n```\r\n\r\nSo this change that renamed `flags` to `response_flags/request_flags` seems backward incompatible. 
\n", "before_files": [{"content": "import os\nimport logging\n\nfrom twisted.python.failure import Failure\n\nfrom scrapy.utils.request import referer_str\n\nSCRAPEDMSG = u\"Scraped from %(src)s\" + os.linesep + \"%(item)s\"\nDROPPEDMSG = u\"Dropped: %(exception)s\" + os.linesep + \"%(item)s\"\nCRAWLEDMSG = u\"Crawled (%(status)s) %(request)s%(request_flags)s (referer: %(referer)s)%(response_flags)s\"\n\n\nclass LogFormatter(object):\n \"\"\"Class for generating log messages for different actions.\n\n All methods must return a dictionary listing the parameters `level`, `msg`\n and `args` which are going to be used for constructing the log message when\n calling logging.log.\n\n Dictionary keys for the method outputs:\n * `level` should be the log level for that action, you can use those\n from the python logging library: logging.DEBUG, logging.INFO,\n logging.WARNING, logging.ERROR and logging.CRITICAL.\n\n * `msg` should be a string that can contain different formatting\n placeholders. This string, formatted with the provided `args`, is going\n to be the log message for that action.\n\n * `args` should be a tuple or dict with the formatting placeholders for\n `msg`. The final log message is computed as output['msg'] %\n output['args'].\n \"\"\"\n\n def crawled(self, request, response, spider):\n request_flags = ' %s' % str(request.flags) if request.flags else ''\n response_flags = ' %s' % str(response.flags) if response.flags else ''\n return {\n 'level': logging.DEBUG,\n 'msg': CRAWLEDMSG,\n 'args': {\n 'status': response.status,\n 'request': request,\n 'request_flags' : request_flags,\n 'referer': referer_str(request),\n 'response_flags': response_flags,\n }\n }\n\n def scraped(self, item, response, spider):\n if isinstance(response, Failure):\n src = response.getErrorMessage()\n else:\n src = response\n return {\n 'level': logging.DEBUG,\n 'msg': SCRAPEDMSG,\n 'args': {\n 'src': src,\n 'item': item,\n }\n }\n\n def dropped(self, item, exception, response, spider):\n return {\n 'level': logging.WARNING,\n 'msg': DROPPEDMSG,\n 'args': {\n 'exception': exception,\n 'item': item,\n }\n }\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls()\n", "path": "scrapy/logformatter.py"}]} | 1,972 | 100 |
gh_patches_debug_15426 | rasdani/github-patches | git_diff | airctic__icevision-734 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't save a full model using torch.save (at least with faster-RCNN)
It is not possible to save a full model using the default settings of `torch.save` (see stack trace below). This is because the implementation of `remove_internal_model_transforms` uses inner functions, and the default pickle module does not support inner functions.
Workaround: use the `dill` module instead, which does support inner functions.
Suggested fix: It does not look as if the internal functions are necessary. If they were moved to standard module-level functions, then the default pickle module should work.
`torch.save(model, 'mod.pth', pickle_module=pickle)` causes an error.
`torch.save(model, 'mod.pth', pickle_module=dill)` is a workaround.
**To Reproduce**
`torch.save(model, 'mod1-full.pth', pickle_module=pickle)`
results in:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-50f3761f4f3c> in <module>
----> 1 torch.save(model, 'mod1-full.pth', pickle_module=pickle)
~/anaconda3/envs/dlm/lib/python3.8/site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization)
370 if _use_new_zipfile_serialization:
371 with _open_zipfile_writer(opened_file) as opened_zipfile:
--> 372 _save(obj, opened_zipfile, pickle_module, pickle_protocol)
373 return
374 _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
~/anaconda3/envs/dlm/lib/python3.8/site-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol)
474 pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol)
475 pickler.persistent_id = persistent_id
--> 476 pickler.dump(obj)
477 data_value = data_buf.getvalue()
478 zip_file.write_record('data.pkl', data_value, len(data_value))
AttributeError: Can't pickle local object 'remove_internal_model_transforms.<locals>.noop_normalize'
```
Relevant definition:
```
def remove_internal_model_transforms(model: GeneralizedRCNN):
    def noop_normalize(image: Tensor) -> Tensor:
        return image

    def noop_resize(
        image: Tensor, target: Optional[Dict[str, Tensor]]
    ) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:
        return image, target

    model.transform.normalize = noop_normalize
    model.transform.resize = noop_resize
```
</issue>
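The limitation is easy to reproduce outside torchvision with a few throwaway functions (the names below are illustrative only, not icevision or torchvision API):

```python
import pickle

def top_level(x):      # importable by its qualified name, so picklable
    return x

def make_inner():
    def inner(x):      # lives under make_inner.<locals>, so not picklable
        return x
    return inner

pickle.dumps(top_level)  # works
try:
    pickle.dumps(make_inner())
except (AttributeError, pickle.PicklingError) as exc:
    print(exc)  # e.g. "Can't pickle local object 'make_inner.<locals>.inner'"
```

Moving the helpers to module scope, as suggested above, avoids the problem because pickle serializes functions by reference to an importable name.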
<code>
[start of icevision/models/torchvision/utils.py]
1 __all__ = [
2 "remove_internal_model_transforms",
3 "patch_rcnn_param_groups",
4 "patch_retinanet_param_groups",
5 ]
6
7 from icevision.imports import *
8 from icevision.utils import *
9 from torchvision.models.detection.generalized_rcnn import GeneralizedRCNN
10
11
12 def remove_internal_model_transforms(model: GeneralizedRCNN):
13 def noop_normalize(image: Tensor) -> Tensor:
14 return image
15
16 def noop_resize(
17 image: Tensor, target: Optional[Dict[str, Tensor]]
18 ) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:
19 return image, target
20
21 model.transform.normalize = noop_normalize
22 model.transform.resize = noop_resize
23
24
25 def patch_param_groups(
26 model: nn.Module,
27 head_layers: List[nn.Module],
28 backbone_param_groups: List[List[nn.Parameter]],
29 ):
30 def param_groups(model: nn.Module) -> List[List[nn.Parameter]]:
31 head_param_groups = [list(layer.parameters()) for layer in head_layers]
32
33 _param_groups = backbone_param_groups + head_param_groups
34 check_all_model_params_in_groups2(model, _param_groups)
35
36 return _param_groups
37
38 model.param_groups = MethodType(param_groups, model)
39
40
41 def patch_rcnn_param_groups(model: nn.Module):
42 return patch_param_groups(
43 model=model,
44 head_layers=[model.rpn, model.roi_heads],
45 backbone_param_groups=model.backbone.param_groups(),
46 )
47
48
49 def patch_retinanet_param_groups(model: nn.Module):
50 return patch_param_groups(
51 model=model,
52 head_layers=[model.head],
53 backbone_param_groups=model.backbone.param_groups(),
54 )
55
[end of icevision/models/torchvision/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/icevision/models/torchvision/utils.py b/icevision/models/torchvision/utils.py
--- a/icevision/models/torchvision/utils.py
+++ b/icevision/models/torchvision/utils.py
@@ -9,17 +9,19 @@
from torchvision.models.detection.generalized_rcnn import GeneralizedRCNN
-def remove_internal_model_transforms(model: GeneralizedRCNN):
- def noop_normalize(image: Tensor) -> Tensor:
- return image
+def _noop_normalize(image: Tensor) -> Tensor:
+ return image
+
- def noop_resize(
- image: Tensor, target: Optional[Dict[str, Tensor]]
- ) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:
- return image, target
+def _noop_resize(
+ image: Tensor, target: Optional[Dict[str, Tensor]]
+) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:
+ return image, target
- model.transform.normalize = noop_normalize
- model.transform.resize = noop_resize
+
+def remove_internal_model_transforms(model: GeneralizedRCNN):
+ model.transform.normalize = _noop_normalize
+ model.transform.resize = _noop_resize
def patch_param_groups(
| {"golden_diff": "diff --git a/icevision/models/torchvision/utils.py b/icevision/models/torchvision/utils.py\n--- a/icevision/models/torchvision/utils.py\n+++ b/icevision/models/torchvision/utils.py\n@@ -9,17 +9,19 @@\n from torchvision.models.detection.generalized_rcnn import GeneralizedRCNN\n \n \n-def remove_internal_model_transforms(model: GeneralizedRCNN):\n- def noop_normalize(image: Tensor) -> Tensor:\n- return image\n+def _noop_normalize(image: Tensor) -> Tensor:\n+ return image\n+\n \n- def noop_resize(\n- image: Tensor, target: Optional[Dict[str, Tensor]]\n- ) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:\n- return image, target\n+def _noop_resize(\n+ image: Tensor, target: Optional[Dict[str, Tensor]]\n+) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:\n+ return image, target\n \n- model.transform.normalize = noop_normalize\n- model.transform.resize = noop_resize\n+\n+def remove_internal_model_transforms(model: GeneralizedRCNN):\n+ model.transform.normalize = _noop_normalize\n+ model.transform.resize = _noop_resize\n \n \n def patch_param_groups(\n", "issue": "Can't save a full model using torch.save (at least with faster-RCNN)\nIt is not possible to save a full model using default settings of `torch.save` (see stack trace below). This is because of the implementation of `remove_internal_model_transforms`, which uses inner functions in its implementation. The default pickle module does not support inner functions.\r\n\r\nWorkaround: use the `dill` module instead, which does support inner functions.\r\n\r\nSuggested fix: It does not look as if the internal functions are necessary. If there were moved to standard functions, then the default pickle module should work.\r\n`torch.save(model, 'mod.pth', pickle_module=pickle)` causes an error.\r\n\r\n`torch.save(model, 'mod.pth', pickle_module=dill)` is a workaround.\r\n\r\n**To Reproduce**\r\n\r\n`torch.save(model, 'mod1-full.pth', pickle_module=pickle)`\r\nresults in:\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-12-50f3761f4f3c> in <module>\r\n----> 1 torch.save(model, 'mod1-full.pth', pickle_module=pickle)\r\n\r\n~/anaconda3/envs/dlm/lib/python3.8/site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization)\r\n 370 if _use_new_zipfile_serialization:\r\n 371 with _open_zipfile_writer(opened_file) as opened_zipfile:\r\n--> 372 _save(obj, opened_zipfile, pickle_module, pickle_protocol)\r\n 373 return\r\n 374 _legacy_save(obj, opened_file, pickle_module, pickle_protocol)\r\n\r\n~/anaconda3/envs/dlm/lib/python3.8/site-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol)\r\n 474 pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol)\r\n 475 pickler.persistent_id = persistent_id\r\n--> 476 pickler.dump(obj)\r\n 477 data_value = data_buf.getvalue()\r\n 478 zip_file.write_record('data.pkl', data_value, len(data_value))\r\n\r\nAttributeError: Can't pickle local object 'remove_internal_model_transforms.<locals>.noop_normalize'\r\n```\r\n\r\nRelevant definition:\r\n```\r\ndef remove_internal_model_transforms(model: GeneralizedRCNN):\r\n def noop_normalize(image: Tensor) -> Tensor:\r\n return image\r\n\r\n def noop_resize(\r\n image: Tensor, target: Optional[Dict[str, Tensor]]\r\n ) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:\r\n return image, target\r\n\r\n model.transform.normalize = noop_normalize\r\n 
model.transform.resize = noop_resize\r\n```\r\n\r\n\n", "before_files": [{"content": "__all__ = [\n \"remove_internal_model_transforms\",\n \"patch_rcnn_param_groups\",\n \"patch_retinanet_param_groups\",\n]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom torchvision.models.detection.generalized_rcnn import GeneralizedRCNN\n\n\ndef remove_internal_model_transforms(model: GeneralizedRCNN):\n def noop_normalize(image: Tensor) -> Tensor:\n return image\n\n def noop_resize(\n image: Tensor, target: Optional[Dict[str, Tensor]]\n ) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:\n return image, target\n\n model.transform.normalize = noop_normalize\n model.transform.resize = noop_resize\n\n\ndef patch_param_groups(\n model: nn.Module,\n head_layers: List[nn.Module],\n backbone_param_groups: List[List[nn.Parameter]],\n):\n def param_groups(model: nn.Module) -> List[List[nn.Parameter]]:\n head_param_groups = [list(layer.parameters()) for layer in head_layers]\n\n _param_groups = backbone_param_groups + head_param_groups\n check_all_model_params_in_groups2(model, _param_groups)\n\n return _param_groups\n\n model.param_groups = MethodType(param_groups, model)\n\n\ndef patch_rcnn_param_groups(model: nn.Module):\n return patch_param_groups(\n model=model,\n head_layers=[model.rpn, model.roi_heads],\n backbone_param_groups=model.backbone.param_groups(),\n )\n\n\ndef patch_retinanet_param_groups(model: nn.Module):\n return patch_param_groups(\n model=model,\n head_layers=[model.head],\n backbone_param_groups=model.backbone.param_groups(),\n )\n", "path": "icevision/models/torchvision/utils.py"}]} | 1,596 | 271 |
gh_patches_debug_3901 | rasdani/github-patches | git_diff | carpentries__amy-646 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: don't return todos with unknown start
This breaks the timeline.
</issue>
<code>
[start of api/views.py]
1 import datetime
2
3 from django.db.models import Q
4 from rest_framework.generics import ListAPIView
5 from rest_framework.metadata import SimpleMetadata
6 from rest_framework.permissions import (
7 IsAuthenticatedOrReadOnly, IsAuthenticated
8 )
9 from rest_framework.response import Response
10 from rest_framework.reverse import reverse
11 from rest_framework.views import APIView
12
13 from workshops.models import Badge, Airport, Event, TodoItem, Tag
14 from workshops.util import get_members, default_membership_cutoff
15
16 from .serializers import (
17 PersonNameEmailSerializer,
18 ExportBadgesSerializer,
19 ExportInstructorLocationsSerializer,
20 EventSerializer,
21 TodoSerializer,
22 )
23
24
25 class QueryMetadata(SimpleMetadata):
26 """Additionally include info about query parameters."""
27
28 def determine_metadata(self, request, view):
29 data = super().determine_metadata(request, view)
30
31 try:
32 data['query_params'] = view.get_query_params_description()
33 except AttributeError:
34 pass
35
36 return data
37
38
39 class ApiRoot(APIView):
40 def get(self, request, format=None):
41 return Response({
42 'export-badges': reverse('api:export-badges', request=request,
43 format=format),
44 'export-instructors': reverse('api:export-instructors',
45 request=request, format=format),
46 'export-members': reverse('api:export-members', request=request,
47 format=format),
48 'events-published': reverse('api:events-published',
49 request=request, format=format),
50 'user-todos': reverse('api:user-todos',
51 request=request, format=format),
52 })
53
54
55 class ExportBadgesView(ListAPIView):
56 """List all badges and people who have them."""
57 permission_classes = (IsAuthenticatedOrReadOnly, )
58 paginator = None # disable pagination
59
60 queryset = Badge.objects.prefetch_related('person_set')
61 serializer_class = ExportBadgesSerializer
62
63
64 class ExportInstructorLocationsView(ListAPIView):
65 """List all airports and instructors located near them."""
66 permission_classes = (IsAuthenticatedOrReadOnly, )
67 paginator = None # disable pagination
68
69 queryset = Airport.objects.exclude(person=None) \
70 .prefetch_related('person_set')
71 serializer_class = ExportInstructorLocationsSerializer
72
73
74 class ExportMembersView(ListAPIView):
75 """Show everyone who qualifies as an SCF member."""
76 permission_classes = (IsAuthenticatedOrReadOnly, )
77 paginator = None # disable pagination
78
79 serializer_class = PersonNameEmailSerializer
80
81 def get_queryset(self):
82 earliest_default, latest_default = default_membership_cutoff()
83
84 earliest = self.request.query_params.get('earliest', None)
85 if earliest is not None:
86 try:
87 earliest = datetime.datetime.strptime(earliest, '%Y-%m-%d') \
88 .date()
89 except ValueError:
90 earliest = earliest_default
91 else:
92 earliest = earliest_default
93
94 latest = self.request.query_params.get('latest', None)
95 if latest is not None:
96 try:
97 latest = datetime.datetime.strptime(latest, '%Y-%m-%d').date()
98 except ValueError:
99 latest = latest_default
100 else:
101 latest = latest_default
102
103 return get_members(earliest, latest)
104
105 def get_query_params_description(self):
106 return {
107 'earliest': 'Date of earliest workshop someone taught at.'
108 ' Defaults to -2*365 days from current date.',
109 'latest': 'Date of latest workshop someone taught at.'
110 ' Defaults to current date.',
111 }
112
113
114 class PublishedEvents(ListAPIView):
115 """List published events."""
116
117 # only events that have both a starting date and a URL
118 permission_classes = (IsAuthenticatedOrReadOnly, )
119 paginator = None # disable pagination
120
121 serializer_class = EventSerializer
122
123 metadata_class = QueryMetadata
124
125 def get_queryset(self):
126 """Optionally restrict the returned event set to events hosted by
127 specific host or administered by specific admin."""
128 queryset = Event.objects.published_events()
129
130 administrator = self.request.query_params.get('administrator', None)
131 if administrator is not None:
132 queryset = queryset.filter(administrator__pk=administrator)
133
134 host = self.request.query_params.get('host', None)
135 if host is not None:
136 queryset = queryset.filter(host__pk=host)
137
138 tags = self.request.query_params.getlist('tag', None)
139 if tags:
140 tags = Tag.objects.filter(name__in=tags)
141 for tag in tags:
142 queryset = queryset.filter(tags=tag)
143
144 return queryset
145
146 def get_query_params_description(self):
147 return {
148 'administrator': 'ID of the organization responsible for admin '
149 'work on events.',
150 'host': 'ID of the organization hosting the event.',
151 'tag': "Events' tag(s). You can use this parameter multiple "
152 "times.",
153 }
154
155
156 class UserTodoItems(ListAPIView):
157 permission_classes = (IsAuthenticated, )
158 paginator = None
159 serializer_class = TodoSerializer
160
161 def get_queryset(self):
162 """Return current TODOs for currently logged in user."""
163 return TodoItem.objects.user(self.request.user) \
164 .incomplete() \
165 .select_related('event')
166
[end of api/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api/views.py b/api/views.py
--- a/api/views.py
+++ b/api/views.py
@@ -162,4 +162,5 @@
"""Return current TODOs for currently logged in user."""
return TodoItem.objects.user(self.request.user) \
.incomplete() \
+ .exclude(due=None) \
.select_related('event')
| {"golden_diff": "diff --git a/api/views.py b/api/views.py\n--- a/api/views.py\n+++ b/api/views.py\n@@ -162,4 +162,5 @@\n \"\"\"Return current TODOs for currently logged in user.\"\"\"\n return TodoItem.objects.user(self.request.user) \\\n .incomplete() \\\n+ .exclude(due=None) \\\n .select_related('event')\n", "issue": "API: don't return todos with unknown start\nThis breaks the timeline.\n\n", "before_files": [{"content": "import datetime\n\nfrom django.db.models import Q\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.metadata import SimpleMetadata\nfrom rest_framework.permissions import (\n IsAuthenticatedOrReadOnly, IsAuthenticated\n)\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\nfrom rest_framework.views import APIView\n\nfrom workshops.models import Badge, Airport, Event, TodoItem, Tag\nfrom workshops.util import get_members, default_membership_cutoff\n\nfrom .serializers import (\n PersonNameEmailSerializer,\n ExportBadgesSerializer,\n ExportInstructorLocationsSerializer,\n EventSerializer,\n TodoSerializer,\n)\n\n\nclass QueryMetadata(SimpleMetadata):\n \"\"\"Additionally include info about query parameters.\"\"\"\n\n def determine_metadata(self, request, view):\n data = super().determine_metadata(request, view)\n\n try:\n data['query_params'] = view.get_query_params_description()\n except AttributeError:\n pass\n\n return data\n\n\nclass ApiRoot(APIView):\n def get(self, request, format=None):\n return Response({\n 'export-badges': reverse('api:export-badges', request=request,\n format=format),\n 'export-instructors': reverse('api:export-instructors',\n request=request, format=format),\n 'export-members': reverse('api:export-members', request=request,\n format=format),\n 'events-published': reverse('api:events-published',\n request=request, format=format),\n 'user-todos': reverse('api:user-todos',\n request=request, format=format),\n })\n\n\nclass ExportBadgesView(ListAPIView):\n \"\"\"List all badges and people who have them.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Badge.objects.prefetch_related('person_set')\n serializer_class = ExportBadgesSerializer\n\n\nclass ExportInstructorLocationsView(ListAPIView):\n \"\"\"List all airports and instructors located near them.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Airport.objects.exclude(person=None) \\\n .prefetch_related('person_set')\n serializer_class = ExportInstructorLocationsSerializer\n\n\nclass ExportMembersView(ListAPIView):\n \"\"\"Show everyone who qualifies as an SCF member.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n serializer_class = PersonNameEmailSerializer\n\n def get_queryset(self):\n earliest_default, latest_default = default_membership_cutoff()\n\n earliest = self.request.query_params.get('earliest', None)\n if earliest is not None:\n try:\n earliest = datetime.datetime.strptime(earliest, '%Y-%m-%d') \\\n .date()\n except ValueError:\n earliest = earliest_default\n else:\n earliest = earliest_default\n\n latest = self.request.query_params.get('latest', None)\n if latest is not None:\n try:\n latest = datetime.datetime.strptime(latest, '%Y-%m-%d').date()\n except ValueError:\n latest = latest_default\n else:\n latest = latest_default\n\n return get_members(earliest, latest)\n\n def get_query_params_description(self):\n return {\n 'earliest': 'Date 
of earliest workshop someone taught at.'\n ' Defaults to -2*365 days from current date.',\n 'latest': 'Date of latest workshop someone taught at.'\n ' Defaults to current date.',\n }\n\n\nclass PublishedEvents(ListAPIView):\n \"\"\"List published events.\"\"\"\n\n # only events that have both a starting date and a URL\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n serializer_class = EventSerializer\n\n metadata_class = QueryMetadata\n\n def get_queryset(self):\n \"\"\"Optionally restrict the returned event set to events hosted by\n specific host or administered by specific admin.\"\"\"\n queryset = Event.objects.published_events()\n\n administrator = self.request.query_params.get('administrator', None)\n if administrator is not None:\n queryset = queryset.filter(administrator__pk=administrator)\n\n host = self.request.query_params.get('host', None)\n if host is not None:\n queryset = queryset.filter(host__pk=host)\n\n tags = self.request.query_params.getlist('tag', None)\n if tags:\n tags = Tag.objects.filter(name__in=tags)\n for tag in tags:\n queryset = queryset.filter(tags=tag)\n\n return queryset\n\n def get_query_params_description(self):\n return {\n 'administrator': 'ID of the organization responsible for admin '\n 'work on events.',\n 'host': 'ID of the organization hosting the event.',\n 'tag': \"Events' tag(s). You can use this parameter multiple \"\n \"times.\",\n }\n\n\nclass UserTodoItems(ListAPIView):\n permission_classes = (IsAuthenticated, )\n paginator = None\n serializer_class = TodoSerializer\n\n def get_queryset(self):\n \"\"\"Return current TODOs for currently logged in user.\"\"\"\n return TodoItem.objects.user(self.request.user) \\\n .incomplete() \\\n .select_related('event')\n", "path": "api/views.py"}]} | 2,029 | 84 |
gh_patches_debug_10225 | rasdani/github-patches | git_diff | wagtail__wagtail-822 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unicode content in rich text causes error
Steps to reproduce:
1. Using the wagtail interface, create a new instance of a page-derived model that has a rich text field
2. In the rich text field, include unicode characters such as: `©` or `’`
3. Publish the page
The page will be published fine, and probably rendered normally through a template.
4. Return to the wagtail admin, and edit the newly created page
   The rich text field is not rendered on the editing page. Inspecting the HTML shows that wagtail is attempting to render the field (its heading is there), but the field itself is not rendered.
If you attempt to publish this new page, the form will submit a page with no content in the rich text field.
</issue>
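One detail worth noting in the widget code below, independent of whether it explains the Unicode failure: the guard compares the string literal `'id_'` to `None` rather than the variable `id_`, so the early return can never trigger. A quick illustrative check:

```python
>>> 'id_' is None   # compares the literal, which is never None
False
>>> id_ = None
>>> id_ is None     # the check that was presumably intended
True
```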
<code>
[start of wagtail/utils/widgets.py]
1 from django.forms.widgets import Widget
2 from django.utils.safestring import mark_safe
3
4
5 class WidgetWithScript(Widget):
6 def render(self, name, value, attrs=None):
7 widget = super(WidgetWithScript, self).render(name, value, attrs)
8
9 final_attrs = self.build_attrs(attrs, name=name)
10 id_ = final_attrs.get('id', None)
11 if 'id_' is None:
12 return widget
13
14 js = self.render_js_init(id_, name, value)
15 out = '{0}<script>{1}</script>'.format(widget, js)
16 return mark_safe(out)
17
18 def render_js_init(self, id_, name, value):
19 return ''
20
[end of wagtail/utils/widgets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/utils/widgets.py b/wagtail/utils/widgets.py
--- a/wagtail/utils/widgets.py
+++ b/wagtail/utils/widgets.py
@@ -1,3 +1,5 @@
+from __future__ import absolute_import, unicode_literals
+
from django.forms.widgets import Widget
from django.utils.safestring import mark_safe
@@ -8,7 +10,7 @@
final_attrs = self.build_attrs(attrs, name=name)
id_ = final_attrs.get('id', None)
- if 'id_' is None:
+ if id_ is None:
return widget
js = self.render_js_init(id_, name, value)
| {"golden_diff": "diff --git a/wagtail/utils/widgets.py b/wagtail/utils/widgets.py\n--- a/wagtail/utils/widgets.py\n+++ b/wagtail/utils/widgets.py\n@@ -1,3 +1,5 @@\n+from __future__ import absolute_import, unicode_literals\n+\n from django.forms.widgets import Widget\n from django.utils.safestring import mark_safe\n \n@@ -8,7 +10,7 @@\n \n final_attrs = self.build_attrs(attrs, name=name)\n id_ = final_attrs.get('id', None)\n- if 'id_' is None:\n+ if id_ is None:\n return widget\n \n js = self.render_js_init(id_, name, value)\n", "issue": "Unicode content in rich text causes error\nSteps to reproduce:\n1. Using the wagtail interface, create a new instance of a page derived model which has a rich text field\n2. In the rich text field, include unicode characters such as: `\u00a9` or `\u2019`\n3. Publish the page\n \n The page will be published fine, and probably rendered normally through a template.\n4. Return to the wagtail admin, and edit the newly created page\n \n The rich text field is not rendered in the editing page. Inspecting the html shows that wagtail is attempting to render the field, its heading is there. But the field is not rendered.\n \n If you attempt to publish this new page, the form will submit a page with no content in the rich text field.\n\n", "before_files": [{"content": "from django.forms.widgets import Widget\nfrom django.utils.safestring import mark_safe\n\n\nclass WidgetWithScript(Widget):\n def render(self, name, value, attrs=None):\n widget = super(WidgetWithScript, self).render(name, value, attrs)\n\n final_attrs = self.build_attrs(attrs, name=name)\n id_ = final_attrs.get('id', None)\n if 'id_' is None:\n return widget\n\n js = self.render_js_init(id_, name, value)\n out = '{0}<script>{1}</script>'.format(widget, js)\n return mark_safe(out)\n\n def render_js_init(self, id_, name, value):\n return ''\n", "path": "wagtail/utils/widgets.py"}]} | 876 | 147 |
gh_patches_debug_4403 | rasdani/github-patches | git_diff | learningequality__kolibri-5037 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Facing error while kolibri 0.12.0 deb file installation
### Observed behavior
After running the command below, it shows an error:
**sudo dpkg -i kolibri_0.12.0b4-0ubuntu1_all.deb**
File downloaded from https://github.com/learningequality/kolibri/releases.
(screenshot of the error output omitted)
### Context
Kolibri version : Kolibri 0.12.0
Operating system : Ubuntu 14.04
### Screenshots:
(screenshots omitted)
</issue>
<code>
[start of kolibri/__init__.py]
1 """
2 CAUTION! Keep everything here at at minimum. Do not import stuff.
3 This module is imported in setup.py, so you cannot for instance
4 import a dependency.
5 """
6 from __future__ import absolute_import
7 from __future__ import print_function
8 from __future__ import unicode_literals
9
10 from .utils import env
11 from .utils.version import get_version
12
13 # Setup the environment before loading anything else from the application
14 env.set_env()
15
16 #: This may not be the exact version as it's subject to modification with
17 #: get_version() - use ``kolibri.__version__`` for the exact version string.
18 VERSION = (0, 12, 0, 'alpha', 0)
19
20 __author__ = 'Learning Equality'
21 __email__ = '[email protected]'
22 __version__ = str(get_version(VERSION))
23
[end of kolibri/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/__init__.py b/kolibri/__init__.py
--- a/kolibri/__init__.py
+++ b/kolibri/__init__.py
@@ -15,7 +15,7 @@
#: This may not be the exact version as it's subject to modification with
#: get_version() - use ``kolibri.__version__`` for the exact version string.
-VERSION = (0, 12, 0, 'alpha', 0)
+VERSION = (0, 12, 0, 'beta', 0)
__author__ = 'Learning Equality'
__email__ = '[email protected]'
| {"golden_diff": "diff --git a/kolibri/__init__.py b/kolibri/__init__.py\n--- a/kolibri/__init__.py\n+++ b/kolibri/__init__.py\n@@ -15,7 +15,7 @@\n \n #: This may not be the exact version as it's subject to modification with\n #: get_version() - use ``kolibri.__version__`` for the exact version string.\n-VERSION = (0, 12, 0, 'alpha', 0)\n+VERSION = (0, 12, 0, 'beta', 0)\n \n __author__ = 'Learning Equality'\n __email__ = '[email protected]'\n", "issue": "Facing error while kolibri 0.12.0 deb file installation\n### Observed behavior\r\nAfter running below command it shows error:\r\n**sudo dpkg -i kolibri_0.12.0b4-0ubuntu1_all.deb**\r\n\r\nFile downloaded from https://github.com/learningequality/kolibri/releases.\r\n\r\n\r\n\r\n### Context\r\nKolibri version : Kolibri 0.12.0\r\nOperating system : Ubuntu 14.04\r\n\r\n### Screenshots:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCAUTION! Keep everything here at at minimum. Do not import stuff.\nThis module is imported in setup.py, so you cannot for instance\nimport a dependency.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom .utils import env\nfrom .utils.version import get_version\n\n# Setup the environment before loading anything else from the application\nenv.set_env()\n\n#: This may not be the exact version as it's subject to modification with\n#: get_version() - use ``kolibri.__version__`` for the exact version string.\nVERSION = (0, 12, 0, 'alpha', 0)\n\n__author__ = 'Learning Equality'\n__email__ = '[email protected]'\n__version__ = str(get_version(VERSION))\n", "path": "kolibri/__init__.py"}]} | 1,044 | 147 |
gh_patches_debug_21066 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-1428 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deprecation notice for 3.7 modifies global state (user warning filters) as import side-effect
It is impossible to filter Python37DeprecationWarning after PR https://github.com/googleapis/google-auth-library-python/pull/1371.
Custom libraries should not configure warning filters, because those are the user project's global state. Most of the time you cannot modify the import order and insert new warning filters after your library has modified them.
#### Environment details
- OS: Ubuntu 22.04.3 LTS linux 5.15.0-89-generic
- Python version: 3.7.17
- pip version: 23.3.1
- `google-auth` version: 2.24.0
#### Steps to reproduce
1. install google-auth into your python3.7 project
2. configure filterwarning rule `ignore::DeprecationWarning` in pytest.ini
3. use google.auth or google.oauth2 somewhere in your project
4. run pytest
5. get Python37DeprecationWarning that you cannot filter
</issue>
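The ordering problem can be sketched with nothing but the standard library; the class below is a stand-in for the one defined in `google.auth`/`google.oauth2`, not an actual import of it:

```python
import warnings

class Python37DeprecationWarning(DeprecationWarning):
    """Stand-in for the class the library defines."""

# 1) The user project (for example a pytest filterwarnings rule) asks to ignore these:
warnings.filterwarnings("ignore", category=DeprecationWarning)

# 2) Importing the library then runs simplefilter(), which prepends a filter
#    that is matched before the user's rule:
warnings.simplefilter("once", Python37DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.warn("Python 3.7 support ends soon", Python37DeprecationWarning)

print(len(caught))  # 1: the warning still surfaces despite the user's "ignore"
```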
<code>
[start of google/oauth2/__init__.py]
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Google OAuth 2.0 Library for Python."""
16
17 import sys
18 import warnings
19
20
21 class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER
22 """
23 Deprecation warning raised when Python 3.7 runtime is detected.
24 Python 3.7 support will be dropped after January 1, 2024. See
25 https://cloud.google.com/python/docs/python37-sunset/ for more information.
26 """
27
28 pass
29
30
31 # Checks if the current runtime is Python 3.7.
32 if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER
33 message = (
34 "After January 1, 2024, new releases of this library will drop support "
35 "for Python 3.7. More details about Python 3.7 support "
36 "can be found at https://cloud.google.com/python/docs/python37-sunset/"
37 )
38 # Configure the Python37DeprecationWarning warning so that it is only emitted once.
39 warnings.simplefilter("once", Python37DeprecationWarning)
40 warnings.warn(message, Python37DeprecationWarning)
41
[end of google/oauth2/__init__.py]
[start of google/auth/__init__.py]
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Google Auth Library for Python."""
16
17 import logging
18 import sys
19 import warnings
20
21 from google.auth import version as google_auth_version
22 from google.auth._default import (
23 default,
24 load_credentials_from_dict,
25 load_credentials_from_file,
26 )
27
28
29 __version__ = google_auth_version.__version__
30
31
32 __all__ = ["default", "load_credentials_from_file", "load_credentials_from_dict"]
33
34
35 class Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER
36 """
37 Deprecation warning raised when Python 3.7 runtime is detected.
38 Python 3.7 support will be dropped after January 1, 2024. See
39 https://cloud.google.com/python/docs/python37-sunset/ for more information.
40 """
41
42 pass
43
44
45 # Checks if the current runtime is Python 3.7.
46 if sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER
47 message = (
48 "After January 1, 2024, new releases of this library will drop support "
49 "for Python 3.7. More details about Python 3.7 support "
50 "can be found at https://cloud.google.com/python/docs/python37-sunset/"
51 )
52
53 # Configure the Python37DeprecationWarning warning so that it is only emitted once.
54 warnings.simplefilter("once", Python37DeprecationWarning)
55 warnings.warn(message, Python37DeprecationWarning)
56
57 # Set default logging handler to avoid "No handler found" warnings.
58 logging.getLogger(__name__).addHandler(logging.NullHandler())
59
[end of google/auth/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/google/auth/__init__.py b/google/auth/__init__.py
--- a/google/auth/__init__.py
+++ b/google/auth/__init__.py
@@ -49,9 +49,6 @@
"for Python 3.7. More details about Python 3.7 support "
"can be found at https://cloud.google.com/python/docs/python37-sunset/"
)
-
- # Configure the Python37DeprecationWarning warning so that it is only emitted once.
- warnings.simplefilter("once", Python37DeprecationWarning)
warnings.warn(message, Python37DeprecationWarning)
# Set default logging handler to avoid "No handler found" warnings.
diff --git a/google/oauth2/__init__.py b/google/oauth2/__init__.py
--- a/google/oauth2/__init__.py
+++ b/google/oauth2/__init__.py
@@ -35,6 +35,4 @@
"for Python 3.7. More details about Python 3.7 support "
"can be found at https://cloud.google.com/python/docs/python37-sunset/"
)
- # Configure the Python37DeprecationWarning warning so that it is only emitted once.
- warnings.simplefilter("once", Python37DeprecationWarning)
warnings.warn(message, Python37DeprecationWarning)
| {"golden_diff": "diff --git a/google/auth/__init__.py b/google/auth/__init__.py\n--- a/google/auth/__init__.py\n+++ b/google/auth/__init__.py\n@@ -49,9 +49,6 @@\n \"for Python 3.7. More details about Python 3.7 support \"\n \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n )\n-\n- # Configure the Python37DeprecationWarning warning so that it is only emitted once.\n- warnings.simplefilter(\"once\", Python37DeprecationWarning)\n warnings.warn(message, Python37DeprecationWarning)\n \n # Set default logging handler to avoid \"No handler found\" warnings.\ndiff --git a/google/oauth2/__init__.py b/google/oauth2/__init__.py\n--- a/google/oauth2/__init__.py\n+++ b/google/oauth2/__init__.py\n@@ -35,6 +35,4 @@\n \"for Python 3.7. More details about Python 3.7 support \"\n \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n )\n- # Configure the Python37DeprecationWarning warning so that it is only emitted once.\n- warnings.simplefilter(\"once\", Python37DeprecationWarning)\n warnings.warn(message, Python37DeprecationWarning)\n", "issue": "deprecation notice for 3.7 modifies global state (user warning filters) as import side-effect\nIt is impossible to filter Python37DeprecationWarning after PR https://github.com/googleapis/google-auth-library-python/pull/1371.\r\n\r\nCustom libraries should not configure warning filters, because it is user project's global state. Most of the times you cannot modify import order and insert new warning filters after your library modifies them.\r\n\r\n#### Environment details\r\n\r\n - OS: Ubuntu 22.04.3 LTS linux 5.15.0-89-generic\r\n - Python version: 3.7.17\r\n - pip version: 23.3.1\r\n - `google-auth` version: 2.24.0\r\n\r\n#### Steps to reproduce\r\n\r\n 1. install google-auth into your python3.7 project\r\n 2. configure filterwarning rule `ignore::DeprecationWarning` in pytest.ini\r\n 3. use google.auth or google.oauth2 somewhere in your project\r\n 4. run pytest\r\n 5. get Python37DeprecationWarning that you cannot filter \n", "before_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google OAuth 2.0 Library for Python.\"\"\"\n\nimport sys\nimport warnings\n\n\nclass Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n Python 3.7 support will be dropped after January 1, 2024. See\n https://cloud.google.com/python/docs/python37-sunset/ for more information.\n \"\"\"\n\n pass\n\n\n# Checks if the current runtime is Python 3.7.\nif sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n \"for Python 3.7. 
More details about Python 3.7 support \"\n \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n )\n # Configure the Python37DeprecationWarning warning so that it is only emitted once.\n warnings.simplefilter(\"once\", Python37DeprecationWarning)\n warnings.warn(message, Python37DeprecationWarning)\n", "path": "google/oauth2/__init__.py"}, {"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google Auth Library for Python.\"\"\"\n\nimport logging\nimport sys\nimport warnings\n\nfrom google.auth import version as google_auth_version\nfrom google.auth._default import (\n default,\n load_credentials_from_dict,\n load_credentials_from_file,\n)\n\n\n__version__ = google_auth_version.__version__\n\n\n__all__ = [\"default\", \"load_credentials_from_file\", \"load_credentials_from_dict\"]\n\n\nclass Python37DeprecationWarning(DeprecationWarning): # pragma: NO COVER\n \"\"\"\n Deprecation warning raised when Python 3.7 runtime is detected.\n Python 3.7 support will be dropped after January 1, 2024. See\n https://cloud.google.com/python/docs/python37-sunset/ for more information.\n \"\"\"\n\n pass\n\n\n# Checks if the current runtime is Python 3.7.\nif sys.version_info.major == 3 and sys.version_info.minor == 7: # pragma: NO COVER\n message = (\n \"After January 1, 2024, new releases of this library will drop support \"\n \"for Python 3.7. More details about Python 3.7 support \"\n \"can be found at https://cloud.google.com/python/docs/python37-sunset/\"\n )\n\n # Configure the Python37DeprecationWarning warning so that it is only emitted once.\n warnings.simplefilter(\"once\", Python37DeprecationWarning)\n warnings.warn(message, Python37DeprecationWarning)\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "google/auth/__init__.py"}]} | 1,859 | 297 |
gh_patches_debug_7882 | rasdani/github-patches | git_diff | numpy__numpy-15189 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TST: Add the first test using hypothesis
This pull request adds the first test that uses hypothesis and hence brings in hypothesis as an additional test dependency.
@mattip Could you take a look at this please?
</issue>
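For context, a property-based test written with hypothesis looks roughly like this (an illustrative property, not the test added by this pull request):

```python
import numpy as np
from hypothesis import given, strategies as st

@given(st.lists(st.integers(min_value=-10**9, max_value=10**9), min_size=1))
def test_sum_matches_builtin(xs):
    # np.sum over int64 should agree with Python's arbitrary-precision sum
    # for values comfortably inside the int64 range.
    assert np.sum(np.array(xs, dtype=np.int64)) == sum(xs)
```

hypothesis generates and shrinks the input lists automatically, which is why it becomes a test-time dependency rather than a runtime one.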
<code>
[start of numpy/conftest.py]
1 """
2 Pytest configuration and fixtures for the Numpy test suite.
3 """
4 import os
5
6 import pytest
7 import numpy
8
9 from numpy.core._multiarray_tests import get_fpu_mode
10
11
12 _old_fpu_mode = None
13 _collect_results = {}
14
15
16 def pytest_configure(config):
17 config.addinivalue_line("markers",
18 "valgrind_error: Tests that are known to error under valgrind.")
19 config.addinivalue_line("markers",
20 "leaks_references: Tests that are known to leak references.")
21 config.addinivalue_line("markers",
22 "slow: Tests that are very slow.")
23
24
25 def pytest_addoption(parser):
26 parser.addoption("--available-memory", action="store", default=None,
27 help=("Set amount of memory available for running the "
28 "test suite. This can result to tests requiring "
29 "especially large amounts of memory to be skipped. "
30 "Equivalent to setting environment variable "
31 "NPY_AVAILABLE_MEM. Default: determined"
32 "automatically."))
33
34
35 def pytest_sessionstart(session):
36 available_mem = session.config.getoption('available_memory')
37 if available_mem is not None:
38 os.environ['NPY_AVAILABLE_MEM'] = available_mem
39
40
41 #FIXME when yield tests are gone.
42 @pytest.hookimpl()
43 def pytest_itemcollected(item):
44 """
45 Check FPU precision mode was not changed during test collection.
46
47 The clumsy way we do it here is mainly necessary because numpy
48 still uses yield tests, which can execute code at test collection
49 time.
50 """
51 global _old_fpu_mode
52
53 mode = get_fpu_mode()
54
55 if _old_fpu_mode is None:
56 _old_fpu_mode = mode
57 elif mode != _old_fpu_mode:
58 _collect_results[item] = (_old_fpu_mode, mode)
59 _old_fpu_mode = mode
60
61
62 @pytest.fixture(scope="function", autouse=True)
63 def check_fpu_mode(request):
64 """
65 Check FPU precision mode was not changed during the test.
66 """
67 old_mode = get_fpu_mode()
68 yield
69 new_mode = get_fpu_mode()
70
71 if old_mode != new_mode:
72 raise AssertionError("FPU precision mode changed from {0:#x} to {1:#x}"
73 " during the test".format(old_mode, new_mode))
74
75 collect_result = _collect_results.get(request.node)
76 if collect_result is not None:
77 old_mode, new_mode = collect_result
78 raise AssertionError("FPU precision mode changed from {0:#x} to {1:#x}"
79 " when collecting the test".format(old_mode,
80 new_mode))
81
82
83 @pytest.fixture(autouse=True)
84 def add_np(doctest_namespace):
85 doctest_namespace['np'] = numpy
86
[end of numpy/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/numpy/conftest.py b/numpy/conftest.py
--- a/numpy/conftest.py
+++ b/numpy/conftest.py
@@ -3,6 +3,7 @@
"""
import os
+import hypothesis
import pytest
import numpy
@@ -12,6 +13,12 @@
_old_fpu_mode = None
_collect_results = {}
+# See https://hypothesis.readthedocs.io/en/latest/settings.html
+hypothesis.settings.register_profile(
+ name="numpy-profile", deadline=None, print_blob=True,
+)
+hypothesis.settings.load_profile("numpy-profile")
+
def pytest_configure(config):
config.addinivalue_line("markers",
| {"golden_diff": "diff --git a/numpy/conftest.py b/numpy/conftest.py\n--- a/numpy/conftest.py\n+++ b/numpy/conftest.py\n@@ -3,6 +3,7 @@\n \"\"\"\n import os\n \n+import hypothesis\n import pytest\n import numpy\n \n@@ -12,6 +13,12 @@\n _old_fpu_mode = None\n _collect_results = {}\n \n+# See https://hypothesis.readthedocs.io/en/latest/settings.html\n+hypothesis.settings.register_profile(\n+ name=\"numpy-profile\", deadline=None, print_blob=True,\n+)\n+hypothesis.settings.load_profile(\"numpy-profile\")\n+\n \n def pytest_configure(config):\n config.addinivalue_line(\"markers\",\n", "issue": "TST: Add the first test using hypothesis\nThis pull request adds the first test that uses hypothesis and hence brings in hypothesis as an additional test dependency.\r\n\r\n@mattip Could you take a look at this please?\r\n\n", "before_files": [{"content": "\"\"\"\nPytest configuration and fixtures for the Numpy test suite.\n\"\"\"\nimport os\n\nimport pytest\nimport numpy\n\nfrom numpy.core._multiarray_tests import get_fpu_mode\n\n\n_old_fpu_mode = None\n_collect_results = {}\n\n\ndef pytest_configure(config):\n config.addinivalue_line(\"markers\",\n \"valgrind_error: Tests that are known to error under valgrind.\")\n config.addinivalue_line(\"markers\",\n \"leaks_references: Tests that are known to leak references.\")\n config.addinivalue_line(\"markers\",\n \"slow: Tests that are very slow.\")\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--available-memory\", action=\"store\", default=None,\n help=(\"Set amount of memory available for running the \"\n \"test suite. This can result to tests requiring \"\n \"especially large amounts of memory to be skipped. \"\n \"Equivalent to setting environment variable \"\n \"NPY_AVAILABLE_MEM. Default: determined\"\n \"automatically.\"))\n\n\ndef pytest_sessionstart(session):\n available_mem = session.config.getoption('available_memory')\n if available_mem is not None:\n os.environ['NPY_AVAILABLE_MEM'] = available_mem\n\n\n#FIXME when yield tests are gone.\[email protected]()\ndef pytest_itemcollected(item):\n \"\"\"\n Check FPU precision mode was not changed during test collection.\n\n The clumsy way we do it here is mainly necessary because numpy\n still uses yield tests, which can execute code at test collection\n time.\n \"\"\"\n global _old_fpu_mode\n\n mode = get_fpu_mode()\n\n if _old_fpu_mode is None:\n _old_fpu_mode = mode\n elif mode != _old_fpu_mode:\n _collect_results[item] = (_old_fpu_mode, mode)\n _old_fpu_mode = mode\n\n\[email protected](scope=\"function\", autouse=True)\ndef check_fpu_mode(request):\n \"\"\"\n Check FPU precision mode was not changed during the test.\n \"\"\"\n old_mode = get_fpu_mode()\n yield\n new_mode = get_fpu_mode()\n\n if old_mode != new_mode:\n raise AssertionError(\"FPU precision mode changed from {0:#x} to {1:#x}\"\n \" during the test\".format(old_mode, new_mode))\n\n collect_result = _collect_results.get(request.node)\n if collect_result is not None:\n old_mode, new_mode = collect_result\n raise AssertionError(\"FPU precision mode changed from {0:#x} to {1:#x}\"\n \" when collecting the test\".format(old_mode,\n new_mode))\n\n\[email protected](autouse=True)\ndef add_np(doctest_namespace):\n doctest_namespace['np'] = numpy\n", "path": "numpy/conftest.py"}]} | 1,339 | 154 |
gh_patches_debug_5546 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-5695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mitmdump jumps to 100% CPU when parent process exits
#### Problem Description
It took me two days to make this reproduce in isolation. I hope someone with Python skills can figure out what is happening here. Depending on what the root cause is this might not even be related to my funny architecture.
I'm spawning `mitmdump` from Node.js. If the node process exits mitmdump will be re-assigned to become a child of `systemd` (some unix wizardry). It will then immediately jump to 100% CPU and stay there. This _only_ happens when an addon is using at least one network event (go figure...). E.g. I'm using `client_connected` (works with `clientconnect` on v6 as well). If the addon is only using sth. like `running` the bug does not occur. Even better: if the addon originally only has "running" nothing bad happens. But if I then add a `client_connected` and save the file (and the addon is automatically reloaded) it will instantly jump to 100% CPU.
My guess is that it might be related to stdout and the switcheroo with the parent process? In my actual architecture the mitmdump process will poll the parent via gRPC every second and shutdown if it's gone. But the 100% CPU prevents that.
Update: while trying to write down the exact steps it turns out this might only reproduce via a local venv and not if you download the binary. I'm not sure, it's confusing. I'm confused. But I have video proof, so I'm not completely insane.
#### Steps to reproduce the behavior:
index.js
```js
const path = require('path');
const { spawn } = require('child_process');

function handleStdOut(data) {
  console.log(`mitmdump stdout: ${data}`);
}

function handleStdError(data) {
  console.error(`mitmdump stderr: ${data}`);
}

function handleExit(code) {
  console.log(`mitm process exited with code ${code}`);
}

const mitm = spawn(
  // Adjust this path
  '/home/alex/Projects/Bandsalat/src/forks/mitmproxy/venv/bin/mitmdump',
  ['--quiet', '--set', 'connection_strategy=lazy', '--scripts', 'addon.py'],
  {
    detached: true,
    windowsHide: true,
    env: {
      PYTHONUNBUFFERED: '1',
    },
  }
);

console.log(mitm.spawnargs);

mitm.unref();
mitm.on('exit', handleExit);
mitm.stdout.on('data', handleStdOut);
mitm.stderr.on('data', handleStdError);
```
addon.py
```py
class MyAddon:
    def running(self):
        print('running')

    def client_connected(self, client):
        print('client_connected')


addons = [
    MyAddon()
]
```
1. I'm on Ubuntu
2. Adjust index.js to point to your local mitmproxy git venv
3. Launch `node index.js` (Node 14 or 16 work both for me)
4. Now open Chromium with mitmproxy configured. You don't need to enter any URL, Chromium will phone home anyway.
5. Keep Chromium open and ctrl+c the node process
6. Observe your fan getting louder and `top` showing mitmdump at 100% CPU
https://user-images.githubusercontent.com/679144/124594746-740a7080-de60-11eb-9ffb-a5fc4b3ba24a.mp4
#### System Information
Happens with both v6 and HEAD.
```
Mitmproxy: 7.0.0.dev (+492, commit af27556)
Python: 3.8.10
OpenSSL: OpenSSL 1.1.1i 8 Dec 2020
Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.29
```
</issue>
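One ingredient that is easy to verify in isolation: once the parent that owned the other end of the stdout pipe is gone, writes to that pipe fail with an `OSError` (`BrokenPipeError`), which is what the logging path has to tolerate. A minimal standalone sketch, not mitmproxy code:

```python
import os
import sys

r, w = os.pipe()
os.close(r)                      # simulate the reading side (the parent) disappearing
try:
    os.write(w, b"log line\n")
except OSError as exc:           # BrokenPipeError on POSIX platforms
    print(f"write failed: {exc!r}", file=sys.stderr)
    sys.exit(1)                  # roughly what the patch below does when print() fails
```

Whether this fully explains the 100% CPU spin is a separate question, but it is why writing log output needs to cope with a vanished stdout.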
<code>
[start of mitmproxy/addons/termlog.py]
1 from __future__ import annotations
2 import asyncio
3 import logging
4 from typing import IO
5
6 import sys
7
8 from mitmproxy import ctx, log
9 from mitmproxy.utils import vt_codes
10
11
12 class TermLog:
13 def __init__(
14 self,
15 out: IO[str] | None = None
16 ):
17 self.logger = TermLogHandler(out)
18 self.logger.install()
19
20 def load(self, loader):
21 loader.add_option(
22 "termlog_verbosity", str, "info", "Log verbosity.", choices=log.LogLevels
23 )
24 self.logger.setLevel(logging.INFO)
25
26 def configure(self, updated):
27 if "termlog_verbosity" in updated:
28 self.logger.setLevel(ctx.options.termlog_verbosity.upper())
29
30 def done(self):
31 t = self._teardown()
32 try:
33 # try to delay teardown a bit.
34 asyncio.create_task(t)
35 except RuntimeError:
36 # no event loop, we're in a test.
37 asyncio.run(t)
38
39 async def _teardown(self):
40 self.logger.uninstall()
41
42
43 class TermLogHandler(log.MitmLogHandler):
44 def __init__(
45 self,
46 out: IO[str] | None = None
47 ):
48 super().__init__()
49 self.file: IO[str] = out or sys.stdout
50 self.has_vt_codes = vt_codes.ensure_supported(self.file)
51 self.formatter = log.MitmFormatter(self.has_vt_codes)
52
53 def emit(self, record: logging.LogRecord) -> None:
54 print(
55 self.format(record),
56 file=self.file
57 )
58
[end of mitmproxy/addons/termlog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/addons/termlog.py b/mitmproxy/addons/termlog.py
--- a/mitmproxy/addons/termlog.py
+++ b/mitmproxy/addons/termlog.py
@@ -51,7 +51,9 @@
self.formatter = log.MitmFormatter(self.has_vt_codes)
def emit(self, record: logging.LogRecord) -> None:
- print(
- self.format(record),
- file=self.file
- )
+ try:
+ print(self.format(record), file=self.file)
+ except OSError:
+ # We cannot print, exit immediately.
+ # See https://github.com/mitmproxy/mitmproxy/issues/4669
+ sys.exit(1)
| {"golden_diff": "diff --git a/mitmproxy/addons/termlog.py b/mitmproxy/addons/termlog.py\n--- a/mitmproxy/addons/termlog.py\n+++ b/mitmproxy/addons/termlog.py\n@@ -51,7 +51,9 @@\n self.formatter = log.MitmFormatter(self.has_vt_codes)\n \n def emit(self, record: logging.LogRecord) -> None:\n- print(\n- self.format(record),\n- file=self.file\n- )\n+ try:\n+ print(self.format(record), file=self.file)\n+ except OSError:\n+ # We cannot print, exit immediately.\n+ # See https://github.com/mitmproxy/mitmproxy/issues/4669\n+ sys.exit(1)\n", "issue": "mitmdump jumps to 100% CPU when parent process exits\n#### Problem Description\r\n\r\nIt took me two days to make this reproduce in isolation. I hope someone with Python skills can figure out what is happening here. Depending on what the root cause is this might not even be related to my funny architecture.\r\n\r\nI'm spawning `mitmdump` from Node.js. If the node process exits mitmdump will be re-assigned to become a child of `systemd` (some unix wizardry). It will then immediately jump to 100% CPU and stay there. This _only_ happens when an addon is using at least one network event (go figure...). E.g. I'm using `client_connected` (works with `clientconnect` on v6 as well). If the addon is only using sth. like `running` the bug does not occur. Even better: if the addon originally only has \"running\" nothing bad happens. But if I then add a `client_connected` and save the file (and the addon is automatically reloaded) it will instantly jump to 100% CPU.\r\n\r\nMy guess is that it might be related to stdout and the switcheroo with the parent process? In my actual architecture the mitmdump process will poll the parent via gRPC every second and shutdown if it's gone. But the 100% CPU prevents that.\r\n\r\nUpdate: while trying to write down the exact steps it turns out this might only reproduce via local venv and and not if you download the binary. I'm not sure, it's confusing. I'm confused. But I have video proof, so I'm not completely insane.\r\n\r\n#### Steps to reproduce the behavior:\r\n\r\nindex.js\r\n\r\n```js\r\nconst path = require('path');\r\nconst { spawn } = require('child_process');\r\n\r\nfunction handleStdOut(data) {\r\n console.log(`mitmdump stdout: ${data}`);\r\n}\r\n\r\nfunction handleStdError(data) {\r\n console.error(`mitmdump stderr: ${data}`);\r\n}\r\n\r\nfunction handleExit(code) {\r\n console.log(`mitm process exited with code ${code}`);\r\n}\r\n\r\nconst mitm = spawn(\r\n // Adjust this path\r\n '/home/alex/Projects/Bandsalat/src/forks/mitmproxy/venv/bin/mitmdump',\r\n ['--quiet', '--set', 'connection_strategy=lazy', '--scripts', 'addon.py'],\r\n {\r\n detached: true,\r\n windowsHide: true,\r\n env: {\r\n PYTHONUNBUFFERED: '1',\r\n },\r\n }\r\n);\r\n\r\nconsole.log(mitm.spawnargs);\r\n\r\nmitm.unref();\r\nmitm.on('exit', handleExit);\r\nmitm.stdout.on('data', handleStdOut);\r\nmitm.stderr.on('data', handleStdError);\r\n```\r\naddon.py\r\n\r\n```py\r\nclass MyAddon:\r\n def running(self):\r\n print('running')\r\n\r\n def client_connected(self, client):\r\n print('client_connected')\r\n\r\naddons = [\r\n MyAddon()\r\n]\r\n```\r\n\r\n1. I'm on Ubuntu\r\n2. Adjust index.js to point to your local mitmproxy git venv\r\n3. Launch `node index.js` (Node 14 or 16 work both for me)\r\n4. Now open Chromium with mitmproxy configured. You don't need to enter any URL, Chromium will phone home anyway.\r\n5. Keep Chromium open and ctrl+c the node process\r\n6. 
Observe your fan getting louder and `top` showing mitmdump at 100% CPU\r\n\r\nhttps://user-images.githubusercontent.com/679144/124594746-740a7080-de60-11eb-9ffb-a5fc4b3ba24a.mp4\r\n\r\n#### System Information\r\n\r\nHappens with both v6 and HEAD.\r\n\r\n```\r\nMitmproxy: 7.0.0.dev (+492, commit af27556)\r\nPython: 3.8.10\r\nOpenSSL: OpenSSL 1.1.1i 8 Dec 2020\r\nPlatform: Linux-5.8.0-59-generic-x86_64-with-glibc2.29\r\n```\r\n\n", "before_files": [{"content": "from __future__ import annotations\nimport asyncio\nimport logging\nfrom typing import IO\n\nimport sys\n\nfrom mitmproxy import ctx, log\nfrom mitmproxy.utils import vt_codes\n\n\nclass TermLog:\n def __init__(\n self,\n out: IO[str] | None = None\n ):\n self.logger = TermLogHandler(out)\n self.logger.install()\n\n def load(self, loader):\n loader.add_option(\n \"termlog_verbosity\", str, \"info\", \"Log verbosity.\", choices=log.LogLevels\n )\n self.logger.setLevel(logging.INFO)\n\n def configure(self, updated):\n if \"termlog_verbosity\" in updated:\n self.logger.setLevel(ctx.options.termlog_verbosity.upper())\n\n def done(self):\n t = self._teardown()\n try:\n # try to delay teardown a bit.\n asyncio.create_task(t)\n except RuntimeError:\n # no event loop, we're in a test.\n asyncio.run(t)\n\n async def _teardown(self):\n self.logger.uninstall()\n\n\nclass TermLogHandler(log.MitmLogHandler):\n def __init__(\n self,\n out: IO[str] | None = None\n ):\n super().__init__()\n self.file: IO[str] = out or sys.stdout\n self.has_vt_codes = vt_codes.ensure_supported(self.file)\n self.formatter = log.MitmFormatter(self.has_vt_codes)\n\n def emit(self, record: logging.LogRecord) -> None:\n print(\n self.format(record),\n file=self.file\n )\n", "path": "mitmproxy/addons/termlog.py"}]} | 1,868 | 168 |
gh_patches_debug_27961 | rasdani/github-patches | git_diff | sunpy__sunpy-6926 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a "How do I..." page to our documentation
<!--
We know asking good questions takes effort, and we appreciate your time.
Thank you.
Please be aware that everyone has to follow our code of conduct:
https://sunpy.org/coc
These comments are hidden when you submit this github issue.
Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue!
-->
<!--
Provide a general description of the feature you would like.
If you prefer, you can also suggest a draft design or API.
-->
e.g. this page from the xarray docs: http://xarray.pydata.org/en/stable/howdoi.html
</issue>
<code>
[start of examples/acquiring_data/searching_multiple_wavelengths.py]
1 """
2 ==============================================
3 Searching for multiple wavelengths with Fido
4 ==============================================
5
6 This example shows how you can search for several wavelengths of AIA data with Fido.
7 """
8 from astropy import units as u
9
10 from sunpy.net import Fido
11 from sunpy.net import attrs as a
12
13 ###############################################################################
14 # Here we are demonstrating how you can search for specific wavelengths of
15 # AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
16 # and the `sunpy.net.attrs.AttrOr` function.
17 # For example, you may only want a single wavelength, say 171 Angstrom:
18
19 aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
20 a.Instrument("AIA"),
21 a.Wavelength(171*u.angstrom))
22
23 print(aia_search)
24
25 ###############################################################################
26 # But say you actually want to search for several wavelengths, rather than just one.
27 # You could use the "|" operator, or instead you can use the `sunpy.net.attrs.AttrOr`
28 # function.
29
30 wavelengths = [94, 131, 171, 193, 211]*u.angstrom
31 aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
32 a.Instrument("AIA"),
33 a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))
34
35 print(aia_search)
36
37 # This returns several searches for each of the wavelengths, which can be indexed.
38 # Here the first index is that of 94 angstrom.
39 print(aia_search[0])
40
41 ###############################################################################
42 # You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
43 # result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`
44 # to download the data, i.e., ``Fido.fetch(aia_search)``.
45
[end of examples/acquiring_data/searching_multiple_wavelengths.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/acquiring_data/searching_multiple_wavelengths.py b/examples/acquiring_data/searching_multiple_wavelengths.py
deleted file mode 100644
--- a/examples/acquiring_data/searching_multiple_wavelengths.py
+++ /dev/null
@@ -1,44 +0,0 @@
-"""
-==============================================
-Searching for multiple wavelengths with Fido
-==============================================
-
-This example shows how you can search for several wavelengths of AIA data with Fido.
-"""
-from astropy import units as u
-
-from sunpy.net import Fido
-from sunpy.net import attrs as a
-
-###############################################################################
-# Here we are demonstrating how you can search for specific wavelengths of
-# AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
-# and the `sunpy.net.attrs.AttrOr` function.
-# For example, you may only want a single wavelength, say 171 Angstrom:
-
-aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
- a.Instrument("AIA"),
- a.Wavelength(171*u.angstrom))
-
-print(aia_search)
-
-###############################################################################
-# But say you actually want to search for several wavelengths, rather than just one.
-# You could use the "|" operator, or instead you can use the `sunpy.net.attrs.AttrOr`
-# function.
-
-wavelengths = [94, 131, 171, 193, 211]*u.angstrom
-aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
- a.Instrument("AIA"),
- a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))
-
-print(aia_search)
-
-# This returns several searches for each of the wavelengths, which can be indexed.
-# Here the first index is that of 94 angstrom.
-print(aia_search[0])
-
-###############################################################################
-# You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
-# result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`
-# to download the data, i.e., ``Fido.fetch(aia_search)``.
| {"golden_diff": "diff --git a/examples/acquiring_data/searching_multiple_wavelengths.py b/examples/acquiring_data/searching_multiple_wavelengths.py\ndeleted file mode 100644\n--- a/examples/acquiring_data/searching_multiple_wavelengths.py\n+++ /dev/null\n@@ -1,44 +0,0 @@\n-\"\"\"\n-==============================================\n-Searching for multiple wavelengths with Fido\n-==============================================\n-\n-This example shows how you can search for several wavelengths of AIA data with Fido.\n-\"\"\"\n-from astropy import units as u\n-\n-from sunpy.net import Fido\n-from sunpy.net import attrs as a\n-\n-###############################################################################\n-# Here we are demonstrating how you can search for specific wavelengths of\n-# AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n-# and the `sunpy.net.attrs.AttrOr` function.\n-# For example, you may only want a single wavelength, say 171 Angstrom:\n-\n-aia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n- a.Instrument(\"AIA\"),\n- a.Wavelength(171*u.angstrom))\n-\n-print(aia_search)\n-\n-###############################################################################\n-# But say you actually want to search for several wavelengths, rather than just one.\n-# You could use the \"|\" operator, or instead you can use the `sunpy.net.attrs.AttrOr`\n-# function.\n-\n-wavelengths = [94, 131, 171, 193, 211]*u.angstrom\n-aia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n- a.Instrument(\"AIA\"),\n- a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))\n-\n-print(aia_search)\n-\n-# This returns several searches for each of the wavelengths, which can be indexed.\n-# Here the first index is that of 94 angstrom.\n-print(aia_search[0])\n-\n-###############################################################################\n-# You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n-# result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`\n-# to download the data, i.e., ``Fido.fetch(aia_search)``.\n", "issue": "Add a \"How do I...\" page to our documentation\n<!--\r\nWe know asking good questions takes effort, and we appreciate your time.\r\nThank you.\r\n\r\nPlease be aware that everyone has to follow our code of conduct:\r\nhttps://sunpy.org/coc\r\n\r\nThese comments are hidden when you submit this github issue.\r\n\r\nPlease have a search on our GitHub repository to see if a similar issue has already been posted.\r\nIf a similar issue is closed, have a quick look to see if you are satisfied by the resolution.\r\nIf not please go ahead and open an issue!\r\n-->\r\n\r\n\r\n<!--\r\nProvide a general description of the feature you would like.\r\nIf you prefer, you can also suggest a draft design or API.\r\n-->\r\n\r\ne.g. 
this page from the xarray docs: http://xarray.pydata.org/en/stable/howdoi.html\r\n\n", "before_files": [{"content": "\"\"\"\n==============================================\nSearching for multiple wavelengths with Fido\n==============================================\n\nThis example shows how you can search for several wavelengths of AIA data with Fido.\n\"\"\"\nfrom astropy import units as u\n\nfrom sunpy.net import Fido\nfrom sunpy.net import attrs as a\n\n###############################################################################\n# Here we are demonstrating how you can search for specific wavelengths of\n# AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n# and the `sunpy.net.attrs.AttrOr` function.\n# For example, you may only want a single wavelength, say 171 Angstrom:\n\naia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n a.Instrument(\"AIA\"),\n a.Wavelength(171*u.angstrom))\n\nprint(aia_search)\n\n###############################################################################\n# But say you actually want to search for several wavelengths, rather than just one.\n# You could use the \"|\" operator, or instead you can use the `sunpy.net.attrs.AttrOr`\n# function.\n\nwavelengths = [94, 131, 171, 193, 211]*u.angstrom\naia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n a.Instrument(\"AIA\"),\n a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))\n\nprint(aia_search)\n\n# This returns several searches for each of the wavelengths, which can be indexed.\n# Here the first index is that of 94 angstrom.\nprint(aia_search[0])\n\n###############################################################################\n# You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n# result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`\n# to download the data, i.e., ``Fido.fetch(aia_search)``.\n", "path": "examples/acquiring_data/searching_multiple_wavelengths.py"}]} | 1,257 | 552 |
gh_patches_debug_1057 | rasdani/github-patches | git_diff | StackStorm__st2-5091 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
St2Stream service broken when using SSL with mongodb
## SUMMARY
This issue is an extension to #4832; however, this time it is the st2stream service. I have looked at the code and can see that the same monkey patch code hasn't been applied to the st2stream app.
### STACKSTORM VERSION
Paste the output of ``st2 --version``: 3.3.0
##### OS, environment, install method
Docker Compose with the split services, with the mongo db references commented out so that an external db can be used: https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml
All other services connect correctly to the mongodb.net test instance, with the exception of st2stream.
## Steps to reproduce the problem
Use the docker-compose YAML at https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml, comment out the mongo container and its references, and adjust files/st2-docker.conf to point to an external DB with SSL = True enabled.
docker-compose up
## Expected Results
What did you expect to happen when running the steps above?
st2stream to operate correctly
## Actual Results
What happened? What output did you get?
2020-11-16 05:48:55,053 WARNING [-] Retry on ConnectionError - Cannot connect to database default :
maximum recursion depth exceeded
Adding the monkey patch code to the st2stream app resolves the issue (manually injected into the container to test).
file: st2stream/cmd/api.py
Code:
from st2common.util.monkey_patch import monkey_patch
monkey_patch()
</issue>
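The reporter's fix is an import-ordering change: the eventlet monkey patching has to run before anything imports ssl or pymongo. A minimal sketch of the top of `st2stream/cmd/api.py` with that ordering (assuming the same `monkey_patch` helper the other st2 services already use):
```python
from st2common.util.monkey_patch import monkey_patch

monkey_patch()  # must run before ssl/pymongo are imported anywhere

import os   # noqa: E402  (imports intentionally follow the patch call)
import sys  # noqa: E402
```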
<code>
[start of st2stream/st2stream/cmd/api.py]
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import sys
18
19 import eventlet
20 from oslo_config import cfg
21 from eventlet import wsgi
22
23 from st2common import log as logging
24 from st2common.service_setup import setup as common_setup
25 from st2common.service_setup import teardown as common_teardown
26 from st2common.stream.listener import get_listener_if_set
27 from st2common.util.wsgi import shutdown_server_kill_pending_requests
28 from st2stream.signal_handlers import register_stream_signal_handlers
29 from st2stream import config
30 config.register_opts()
31 from st2stream import app
32
33 __all__ = [
34 'main'
35 ]
36
37
38 eventlet.monkey_patch(
39 os=True,
40 select=True,
41 socket=True,
42 thread=False if '--use-debugger' in sys.argv else True,
43 time=True)
44
45 LOG = logging.getLogger(__name__)
46
47 # How much time to give to the request in progress to finish in seconds before killing them
48 WSGI_SERVER_REQUEST_SHUTDOWN_TIME = 2
49
50
51 def _setup():
52 capabilities = {
53 'name': 'stream',
54 'listen_host': cfg.CONF.stream.host,
55 'listen_port': cfg.CONF.stream.port,
56 'type': 'active'
57 }
58 common_setup(service='stream', config=config, setup_db=True, register_mq_exchanges=True,
59 register_signal_handlers=True, register_internal_trigger_types=False,
60 run_migrations=False, service_registry=True, capabilities=capabilities)
61
62
63 def _run_server():
64 host = cfg.CONF.stream.host
65 port = cfg.CONF.stream.port
66
67 LOG.info('(PID=%s) ST2 Stream API is serving on http://%s:%s.', os.getpid(), host, port)
68
69 max_pool_size = eventlet.wsgi.DEFAULT_MAX_SIMULTANEOUS_REQUESTS
70 worker_pool = eventlet.GreenPool(max_pool_size)
71 sock = eventlet.listen((host, port))
72
73 def queue_shutdown(signal_number, stack_frame):
74 eventlet.spawn_n(shutdown_server_kill_pending_requests, sock=sock,
75 worker_pool=worker_pool, wait_time=WSGI_SERVER_REQUEST_SHUTDOWN_TIME)
76
77 # We register a custom SIGINT handler which allows us to kill long running active requests.
78 # Note: Eventually we will support draining (waiting for short-running requests), but we
79 # will still want to kill long running stream requests.
80 register_stream_signal_handlers(handler_func=queue_shutdown)
81
82 wsgi.server(sock, app.setup_app(), custom_pool=worker_pool)
83 return 0
84
85
86 def _teardown():
87 common_teardown()
88
89
90 def main():
91 try:
92 _setup()
93 return _run_server()
94 except SystemExit as exit_code:
95 sys.exit(exit_code)
96 except KeyboardInterrupt:
97 listener = get_listener_if_set(name='stream')
98
99 if listener:
100 listener.shutdown()
101 except Exception:
102 LOG.exception('(PID=%s) ST2 Stream API quit due to exception.', os.getpid())
103 return 1
104 finally:
105 _teardown()
106
[end of st2stream/st2stream/cmd/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2stream/st2stream/cmd/api.py b/st2stream/st2stream/cmd/api.py
--- a/st2stream/st2stream/cmd/api.py
+++ b/st2stream/st2stream/cmd/api.py
@@ -13,6 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from st2common.util.monkey_patch import monkey_patch
+monkey_patch()
+
import os
import sys
| {"golden_diff": "diff --git a/st2stream/st2stream/cmd/api.py b/st2stream/st2stream/cmd/api.py\n--- a/st2stream/st2stream/cmd/api.py\n+++ b/st2stream/st2stream/cmd/api.py\n@@ -13,6 +13,9 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+from st2common.util.monkey_patch import monkey_patch\n+monkey_patch()\n+\n import os\n import sys\n", "issue": "St2Stream service broken when using SSL with mongodb\n## SUMMARY\r\n\r\nThis issue is an extension to #4832 however this time it is the st2stream service, I have looked that the code and can see the same monkey patch code hasn't been applied to the st2stream app\r\n\r\n### STACKSTORM VERSION\r\n\r\nPaste the output of ``st2 --version``: 3.3.0\r\n\r\n##### OS, environment, install method\r\n\r\nDocker compose with the split services and mongo db references commented out so that an external db can be used https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml\r\n\r\nAll other services correctly connected to mongodb.net test instance with the exception of st2stream.\r\n\r\n## Steps to reproduce the problem\r\n\r\nuse docker yaml at https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml, comment out mongo container and references, adjust files/st2-docker.conf to point to external DB with SSL = True enabled.\r\ndocker-compose up\r\n\r\n## Expected Results\r\n\r\nWhat did you expect to happen when running the steps above?\r\n\r\nst2stream to operate correctly\r\n\r\n## Actual Results\r\n\r\nWhat happened? What output did you get?\r\n\r\n2020-11-16 05:48:55,053 WARNING [-] Retry on ConnectionError - Cannot connect to database default :\r\nmaximum recursion depth exceeded\r\n\r\n\r\n\r\nAdding monkey patch code to st2stream app resolves the issue (manually injected into container to test).\r\n\r\nfile: st2stream/cmd/api.py\r\nCode: \r\nfrom st2common.util.monkey_patch import monkey_patch\r\nmonkey_patch()\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\n\nimport eventlet\nfrom oslo_config import cfg\nfrom eventlet import wsgi\n\nfrom st2common import log as logging\nfrom st2common.service_setup import setup as common_setup\nfrom st2common.service_setup import teardown as common_teardown\nfrom st2common.stream.listener import get_listener_if_set\nfrom st2common.util.wsgi import shutdown_server_kill_pending_requests\nfrom st2stream.signal_handlers import register_stream_signal_handlers\nfrom st2stream import config\nconfig.register_opts()\nfrom st2stream import app\n\n__all__ = [\n 'main'\n]\n\n\neventlet.monkey_patch(\n os=True,\n select=True,\n socket=True,\n thread=False if '--use-debugger' in sys.argv else True,\n time=True)\n\nLOG = logging.getLogger(__name__)\n\n# How much time to give to the request in progress to finish in seconds before killing them\nWSGI_SERVER_REQUEST_SHUTDOWN_TIME = 2\n\n\ndef 
_setup():\n capabilities = {\n 'name': 'stream',\n 'listen_host': cfg.CONF.stream.host,\n 'listen_port': cfg.CONF.stream.port,\n 'type': 'active'\n }\n common_setup(service='stream', config=config, setup_db=True, register_mq_exchanges=True,\n register_signal_handlers=True, register_internal_trigger_types=False,\n run_migrations=False, service_registry=True, capabilities=capabilities)\n\n\ndef _run_server():\n host = cfg.CONF.stream.host\n port = cfg.CONF.stream.port\n\n LOG.info('(PID=%s) ST2 Stream API is serving on http://%s:%s.', os.getpid(), host, port)\n\n max_pool_size = eventlet.wsgi.DEFAULT_MAX_SIMULTANEOUS_REQUESTS\n worker_pool = eventlet.GreenPool(max_pool_size)\n sock = eventlet.listen((host, port))\n\n def queue_shutdown(signal_number, stack_frame):\n eventlet.spawn_n(shutdown_server_kill_pending_requests, sock=sock,\n worker_pool=worker_pool, wait_time=WSGI_SERVER_REQUEST_SHUTDOWN_TIME)\n\n # We register a custom SIGINT handler which allows us to kill long running active requests.\n # Note: Eventually we will support draining (waiting for short-running requests), but we\n # will still want to kill long running stream requests.\n register_stream_signal_handlers(handler_func=queue_shutdown)\n\n wsgi.server(sock, app.setup_app(), custom_pool=worker_pool)\n return 0\n\n\ndef _teardown():\n common_teardown()\n\n\ndef main():\n try:\n _setup()\n return _run_server()\n except SystemExit as exit_code:\n sys.exit(exit_code)\n except KeyboardInterrupt:\n listener = get_listener_if_set(name='stream')\n\n if listener:\n listener.shutdown()\n except Exception:\n LOG.exception('(PID=%s) ST2 Stream API quit due to exception.', os.getpid())\n return 1\n finally:\n _teardown()\n", "path": "st2stream/st2stream/cmd/api.py"}]} | 1,871 | 104 |
gh_patches_debug_11914 | rasdani/github-patches | git_diff | pytorch__ignite-2984 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix warning in fast_neural_style example
Here is another good first issue to improve the ignite project. Currently, we have a warning on this line: https://github.com/pytorch/ignite/blob/master/examples/fast_neural_style/vgg.py#L10 (fast neural style example)
/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
</issue>
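The warning text itself spells out the replacement API: pass a weights enum instead of `pretrained=True`. An illustrative sketch of the torchvision >= 0.13 call (not the repository's code):
```python
from torchvision import models
from torchvision.models import VGG16_Weights

# Equivalent to the deprecated models.vgg16(pretrained=True)
vgg_features = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features

# Or opt into whatever the most up-to-date weights are
vgg_features_latest = models.vgg16(weights=VGG16_Weights.DEFAULT).features
```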
<code>
[start of examples/fast_neural_style/vgg.py]
1 from collections import namedtuple
2
3 import torch
4 from torchvision import models
5
6
7 class Vgg16(torch.nn.Module):
8 def __init__(self, requires_grad=False):
9 super(Vgg16, self).__init__()
10 vgg_pretrained_features = models.vgg16(pretrained=True).features
11 self.slice1 = torch.nn.Sequential()
12 self.slice2 = torch.nn.Sequential()
13 self.slice3 = torch.nn.Sequential()
14 self.slice4 = torch.nn.Sequential()
15 for x in range(4):
16 self.slice1.add_module(str(x), vgg_pretrained_features[x])
17 for x in range(4, 9):
18 self.slice2.add_module(str(x), vgg_pretrained_features[x])
19 for x in range(9, 16):
20 self.slice3.add_module(str(x), vgg_pretrained_features[x])
21 for x in range(16, 23):
22 self.slice4.add_module(str(x), vgg_pretrained_features[x])
23 if not requires_grad:
24 for param in self.parameters():
25 param.requires_grad = False
26
27 def forward(self, X):
28 h = self.slice1(X)
29 h_relu1_2 = h
30 h = self.slice2(h)
31 h_relu2_2 = h
32 h = self.slice3(h)
33 h_relu3_3 = h
34 h = self.slice4(h)
35 h_relu4_3 = h
36 vgg_outputs = namedtuple("VggOutputs", ["relu1_2", "relu2_2", "relu3_3", "relu4_3"])
37 out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3)
38 return out
39
[end of examples/fast_neural_style/vgg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/fast_neural_style/vgg.py b/examples/fast_neural_style/vgg.py
--- a/examples/fast_neural_style/vgg.py
+++ b/examples/fast_neural_style/vgg.py
@@ -2,12 +2,13 @@
import torch
from torchvision import models
+from torchvision.models.vgg import VGG16_Weights
class Vgg16(torch.nn.Module):
def __init__(self, requires_grad=False):
super(Vgg16, self).__init__()
- vgg_pretrained_features = models.vgg16(pretrained=True).features
+ vgg_pretrained_features = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
self.slice1 = torch.nn.Sequential()
self.slice2 = torch.nn.Sequential()
self.slice3 = torch.nn.Sequential()
| {"golden_diff": "diff --git a/examples/fast_neural_style/vgg.py b/examples/fast_neural_style/vgg.py\n--- a/examples/fast_neural_style/vgg.py\n+++ b/examples/fast_neural_style/vgg.py\n@@ -2,12 +2,13 @@\n \n import torch\n from torchvision import models\n+from torchvision.models.vgg import VGG16_Weights\n \n \n class Vgg16(torch.nn.Module):\n def __init__(self, requires_grad=False):\n super(Vgg16, self).__init__()\n- vgg_pretrained_features = models.vgg16(pretrained=True).features\n+ vgg_pretrained_features = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features\n self.slice1 = torch.nn.Sequential()\n self.slice2 = torch.nn.Sequential()\n self.slice3 = torch.nn.Sequential()\n", "issue": "Fix warning in fast_neural_style example\nHere is another good first issue to improve the ignite project. Currently, we have a warning on this line: https://github.com/pytorch/ignite/blob/master/examples/fast_neural_style/vgg.py#L10 (fast neural style example)\r\n /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.\r\n warnings.warn(\r\n/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.\n", "before_files": [{"content": "from collections import namedtuple\n\nimport torch\nfrom torchvision import models\n\n\nclass Vgg16(torch.nn.Module):\n def __init__(self, requires_grad=False):\n super(Vgg16, self).__init__()\n vgg_pretrained_features = models.vgg16(pretrained=True).features\n self.slice1 = torch.nn.Sequential()\n self.slice2 = torch.nn.Sequential()\n self.slice3 = torch.nn.Sequential()\n self.slice4 = torch.nn.Sequential()\n for x in range(4):\n self.slice1.add_module(str(x), vgg_pretrained_features[x])\n for x in range(4, 9):\n self.slice2.add_module(str(x), vgg_pretrained_features[x])\n for x in range(9, 16):\n self.slice3.add_module(str(x), vgg_pretrained_features[x])\n for x in range(16, 23):\n self.slice4.add_module(str(x), vgg_pretrained_features[x])\n if not requires_grad:\n for param in self.parameters():\n param.requires_grad = False\n\n def forward(self, X):\n h = self.slice1(X)\n h_relu1_2 = h\n h = self.slice2(h)\n h_relu2_2 = h\n h = self.slice3(h)\n h_relu3_3 = h\n h = self.slice4(h)\n h_relu4_3 = h\n vgg_outputs = namedtuple(\"VggOutputs\", [\"relu1_2\", \"relu2_2\", \"relu3_3\", \"relu4_3\"])\n out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3)\n return out\n", "path": "examples/fast_neural_style/vgg.py"}]} | 1,235 | 190 |
gh_patches_debug_4532 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-2789 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Truncate all the logs
We sometimes have very big log entries (one row > 5MB). They're not useful at all and they trigger warnings from infra. When we set up the logging configuration, we could try to set a maximum message length.
https://github.com/huggingface/dataset-viewer/blob/95527c2f1f0b8f077ed9ec74d3c75e45dbc1d00a/libs/libcommon/src/libcommon/log.py#L7-L9
</issue>
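One low-effort way to cap the length is the precision field that %-style logging formats already support; the 5000-character limit below is only an example value:
```python
import logging

logging.basicConfig(
    level=logging.INFO,
    # ".5000" caps the rendered message at 5000 characters
    format="%(levelname)s: %(asctime)s - %(name)s - %(message).5000s",
)

logging.getLogger(__name__).info("x" * 20_000)  # the emitted line stops at 5000 chars
```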
<code>
[start of libs/libcommon/src/libcommon/log.py]
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 import logging
5
6
7 def init_logging(level: int = logging.INFO) -> None:
8 logging.basicConfig(level=level, format="%(levelname)s: %(asctime)s - %(name)s - %(message)s")
9 logging.debug(f"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}")
10
[end of libs/libcommon/src/libcommon/log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libs/libcommon/src/libcommon/log.py b/libs/libcommon/src/libcommon/log.py
--- a/libs/libcommon/src/libcommon/log.py
+++ b/libs/libcommon/src/libcommon/log.py
@@ -5,5 +5,5 @@
def init_logging(level: int = logging.INFO) -> None:
- logging.basicConfig(level=level, format="%(levelname)s: %(asctime)s - %(name)s - %(message)s")
+ logging.basicConfig(level=level, format="%(levelname)s: %(asctime)s - %(name)s - %(message).5000s")
logging.debug(f"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}")
| {"golden_diff": "diff --git a/libs/libcommon/src/libcommon/log.py b/libs/libcommon/src/libcommon/log.py\n--- a/libs/libcommon/src/libcommon/log.py\n+++ b/libs/libcommon/src/libcommon/log.py\n@@ -5,5 +5,5 @@\n \n \n def init_logging(level: int = logging.INFO) -> None:\n- logging.basicConfig(level=level, format=\"%(levelname)s: %(asctime)s - %(name)s - %(message)s\")\n+ logging.basicConfig(level=level, format=\"%(levelname)s: %(asctime)s - %(name)s - %(message).5000s\")\n logging.debug(f\"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}\")\n", "issue": "Truncate all the logs\nWe sometimes have very big logs (one row > 5MB). It's not useful at all and triggers warnings from infra. When we setup the logs configuration, we could try to set a maximum length\r\n\r\nhttps://github.com/huggingface/dataset-viewer/blob/95527c2f1f0b8f077ed9ec74d3c75e45dbc1d00a/libs/libcommon/src/libcommon/log.py#L7-L9\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\n\n\ndef init_logging(level: int = logging.INFO) -> None:\n logging.basicConfig(level=level, format=\"%(levelname)s: %(asctime)s - %(name)s - %(message)s\")\n logging.debug(f\"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}\")\n", "path": "libs/libcommon/src/libcommon/log.py"}]} | 757 | 149 |
gh_patches_debug_23578 | rasdani/github-patches | git_diff | Flexget__Flexget-2271 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kodi API has been changed in v18 (Leia) such that HTTP POST is required
<!---
Before opening an issue, verify:
- Is this a feature request? Post it on https://feathub.com/Flexget/Flexget
- Did you recently upgrade? Look at the Change Log and Upgrade Actions to make sure that you don't need to make any changes to your config https://flexget.com/ChangeLog https://flexget.com/UpgradeActions
- Are you running FlexGet as a daemon? Stop it completely and then start it again https://flexget.com/CLI/daemon
- Did you search to see if the issue already exists? https://github.com/Flexget/Flexget/issues
- Did you fill out the issue template as completely as possible?
The issue template is here because it helps to ensure you submitted all the necessary information the first time, and allows us to more quickly review issues. Please fill it out correctly and do not ignore it, no matter how irrelevant you think it may be. Thanks in advance for your help with this!
--->
### Expected behaviour:
<!---
Please don't just say "it doesn't crash" or "it works". Explain what the expected result is.
--->
Updates should work
### Actual behaviour:
Error message: `JSONRPC failed. Error -32099: Bad client permission`
### Steps to reproduce:
- Step 1: Call a kodi library scan from a task
#### Config:
```
kodi_library:
  action: scan
  category: video
  url: http://192.168.1.214
  port: 80
```
### Details
The kodi API has been changed in v18 Leia and up. In the old API, all requests were HTTP GET (even API calls that update/mutate state). They've finally updated the API to require HTTP POST for updates, but they've completely failed to update the API version or even provide sensible error messages.
https://forum.kodi.tv/showthread.php?tid=324598
https://discuss.flexget.com/t/kodi-plugin-not-working-on-kodi-18/4196
**NOTE**: I no longer use Kodi, so I'm simply creating an issue based on a forum post to keep track of the issue in case other users begin to experience it.
</issue>
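Since Kodi v18 the library-mutating JSON-RPC methods have to arrive as an HTTP POST with a JSON body rather than as GET query parameters. A minimal sketch of such a call with `requests` (host, credentials and timeout are placeholders):
```python
import requests

url = "http://192.168.1.214:80/jsonrpc"
payload = {"jsonrpc": "2.0", "id": 1, "method": "VideoLibrary.Scan"}

response = requests.post(url, json=payload, auth=("kodi_user", "kodi_password"), timeout=30)
response.raise_for_status()
print(response.json())  # {"id": 1, "jsonrpc": "2.0", "result": "OK"} on success
```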
<code>
[start of flexget/plugins/services/kodi_library.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3
4 import logging
5 import json
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils.requests import RequestException
10
11 log = logging.getLogger('kodi_library')
12
13 JSON_URI = '/jsonrpc'
14
15
16 class KodiLibrary(object):
17 schema = {
18 'type': 'object',
19 'properties': {
20 'action': {'type': 'string', 'enum': ['clean', 'scan']},
21 'category': {'type': 'string', 'enum': ['audio', 'video']},
22 'url': {'type': 'string', 'format': 'url'},
23 'port': {'type': 'integer', 'default': 8080},
24 'username': {'type': 'string'},
25 'password': {'type': 'string'},
26 'only_on_accepted': {'type': 'boolean', 'default': True}
27 },
28 'required': ['url', 'action', 'category'],
29 'additionalProperties': False,
30 }
31
32 @plugin.priority(-255)
33 def on_task_exit(self, task, config):
34 if task.accepted or not config['only_on_accepted']:
35 # make the url without trailing slash
36 base_url = config['url'][:-1] if config['url'].endswith('/') else config['url']
37 base_url += ':{0}'.format(config['port'])
38
39 url = base_url + JSON_URI
40 # create the params
41 json_params = {"id": 1, "jsonrpc": "2.0",
42 'method': '{category}Library.{action}'.format(category=config['category'].title(),
43 action=config['action'].title())}
44 params = {'request': json.dumps(json_params)}
45 log.debug('Sending request params %s', params)
46
47 try:
48 r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()
49 if r.get('result') == 'OK':
50 log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])
51 else:
52 if r.get('error'):
53 log.error('Kodi JSONRPC failed. Error %s: %s', r['error']['code'], r['error']['message'])
54 else:
55 # this should never happen as Kodi say they follow the JSON-RPC 2.0 spec
56 log.debug('Received error response %s', json.dumps(r))
57 log.error('Kodi JSONRPC failed with unrecognized message: %s', json.dumps(r))
58 except RequestException as e:
59 raise plugin.PluginError('Failed to send request to Kodi: %s' % e.args[0])
60 else:
61 log.info('No entries were accepted. No request is sent.')
62
63
64 @event('plugin.register')
65 def register_plugin():
66 plugin.register(KodiLibrary, 'kodi_library', api_ver=2)
67
[end of flexget/plugins/services/kodi_library.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/services/kodi_library.py b/flexget/plugins/services/kodi_library.py
--- a/flexget/plugins/services/kodi_library.py
+++ b/flexget/plugins/services/kodi_library.py
@@ -38,14 +38,13 @@
url = base_url + JSON_URI
# create the params
- json_params = {"id": 1, "jsonrpc": "2.0",
- 'method': '{category}Library.{action}'.format(category=config['category'].title(),
- action=config['action'].title())}
- params = {'request': json.dumps(json_params)}
+ params = {"id": 1, "jsonrpc": "2.0",
+ 'method': '{category}Library.{action}'.format(category=config['category'].title(),
+ action=config['action'].title())}
log.debug('Sending request params %s', params)
try:
- r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()
+ r = task.requests.post(url, json=params, auth=(config.get('username'), config.get('password'))).json()
if r.get('result') == 'OK':
log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])
else:
| {"golden_diff": "diff --git a/flexget/plugins/services/kodi_library.py b/flexget/plugins/services/kodi_library.py\n--- a/flexget/plugins/services/kodi_library.py\n+++ b/flexget/plugins/services/kodi_library.py\n@@ -38,14 +38,13 @@\n \n url = base_url + JSON_URI\n # create the params\n- json_params = {\"id\": 1, \"jsonrpc\": \"2.0\",\n- 'method': '{category}Library.{action}'.format(category=config['category'].title(),\n- action=config['action'].title())}\n- params = {'request': json.dumps(json_params)}\n+ params = {\"id\": 1, \"jsonrpc\": \"2.0\",\n+ 'method': '{category}Library.{action}'.format(category=config['category'].title(),\n+ action=config['action'].title())}\n log.debug('Sending request params %s', params)\n \n try:\n- r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()\n+ r = task.requests.post(url, json=params, auth=(config.get('username'), config.get('password'))).json()\n if r.get('result') == 'OK':\n log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])\n else:\n", "issue": "Kodi API has been changed in v18 (Leia) such that HTTP POST is required\n<!---\r\nBefore opening an issue, verify:\r\n\r\n- Is this a feature request? Post it on https://feathub.com/Flexget/Flexget\r\n- Did you recently upgrade? Look at the Change Log and Upgrade Actions to make sure that you don't need to make any changes to your config https://flexget.com/ChangeLog https://flexget.com/UpgradeActions\r\n- Are you running FlexGet as a daemon? Stop it completely and then start it again https://flexget.com/CLI/daemon\r\n- Did you search to see if the issue already exists? https://github.com/Flexget/Flexget/issues\r\n- Did you fill out the issue template as completely as possible?\r\n\r\nThe issue template is here because it helps to ensure you submitted all the necessary information the first time, and allows us to more quickly review issues. Please fill it out correctly and do not ignore it, no matter how irrelevant you think it may be. Thanks in advance for your help with this!\r\n--->\r\n### Expected behaviour:\r\n<!---\r\nPlease don't just say \"it doesn't crash\" or \"it works\". Explain what the expected result is.\r\n--->\r\nUpdates should work\r\n### Actual behaviour:\r\nError message: `JSONRPC failed. Error -32099: Bad client permission`\r\n### Steps to reproduce:\r\n- Step 1: Call a kodi library scan from a task\r\n\r\n#### Config:\r\n```\r\nkodi_library:\r\n action: scan\r\n category: video\r\n url: http://192.168.1.214\r\n port: 80\r\n```\r\n\r\n### Details\r\nThe kodi API has been changed in v18 Leia and up. In the old API, all requests were HTTP GET (even API calls that update/mutate state). 
They've finally updated the API to require HTTP POST for updates, but they've completely failed to update the API version or even provide sensible error messages.\r\n\r\nhttps://forum.kodi.tv/showthread.php?tid=324598\r\nhttps://discuss.flexget.com/t/kodi-plugin-not-working-on-kodi-18/4196\r\n\r\n**NOTE**: I no longer use Kodi, so I'm simply creating an issue based on a forum post to keep track of the issue in case other users begin to experience it.\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport json\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.requests import RequestException\n\nlog = logging.getLogger('kodi_library')\n\nJSON_URI = '/jsonrpc'\n\n\nclass KodiLibrary(object):\n schema = {\n 'type': 'object',\n 'properties': {\n 'action': {'type': 'string', 'enum': ['clean', 'scan']},\n 'category': {'type': 'string', 'enum': ['audio', 'video']},\n 'url': {'type': 'string', 'format': 'url'},\n 'port': {'type': 'integer', 'default': 8080},\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'only_on_accepted': {'type': 'boolean', 'default': True}\n },\n 'required': ['url', 'action', 'category'],\n 'additionalProperties': False,\n }\n\n @plugin.priority(-255)\n def on_task_exit(self, task, config):\n if task.accepted or not config['only_on_accepted']:\n # make the url without trailing slash\n base_url = config['url'][:-1] if config['url'].endswith('/') else config['url']\n base_url += ':{0}'.format(config['port'])\n\n url = base_url + JSON_URI\n # create the params\n json_params = {\"id\": 1, \"jsonrpc\": \"2.0\",\n 'method': '{category}Library.{action}'.format(category=config['category'].title(),\n action=config['action'].title())}\n params = {'request': json.dumps(json_params)}\n log.debug('Sending request params %s', params)\n\n try:\n r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()\n if r.get('result') == 'OK':\n log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])\n else:\n if r.get('error'):\n log.error('Kodi JSONRPC failed. Error %s: %s', r['error']['code'], r['error']['message'])\n else:\n # this should never happen as Kodi say they follow the JSON-RPC 2.0 spec\n log.debug('Received error response %s', json.dumps(r))\n log.error('Kodi JSONRPC failed with unrecognized message: %s', json.dumps(r))\n except RequestException as e:\n raise plugin.PluginError('Failed to send request to Kodi: %s' % e.args[0])\n else:\n log.info('No entries were accepted. No request is sent.')\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(KodiLibrary, 'kodi_library', api_ver=2)\n", "path": "flexget/plugins/services/kodi_library.py"}]} | 1,811 | 298 |
gh_patches_debug_17345 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-3074 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The smt combiner is raising IndexError exceptions in production.
The CpuTopology combiner is throwing a large number of IndexError('list index out of range',) exceptions in production.
</issue>
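The IndexError suggests an unguarded `online[0]`-style lookup: a likely culprit is that on some machines cpu0 has no `online` sysfs file (it can never be taken offline), so the per-core match list comes back empty. A hypothetical helper showing the kind of guard needed (not the shipped fix):
```python
def core_online(cpu_online, core_id):
    """Return the online flag for a core, tolerating a missing cpu0 entry."""
    matches = [core for core in cpu_online if core.core_id == core_id]
    if matches:
        return matches[0].on
    # cpu0 often lacks the "online" file because it cannot be taken offline,
    # so treat an empty match for core 0 as online.
    return core_id == 0
```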
<code>
[start of insights/combiners/smt.py]
1 """
2 Simultaneous Multithreading (SMT) combiner
3 ==========================================
4
5 Combiner for Simultaneous Multithreading (SMT). It uses the results of the following parsers:
6 :class:`insights.parsers.smt.CpuCoreOnline`,
7 :class:`insights.parsers.smt.CpuSiblings`.
8 """
9
10 from insights.core.plugins import combiner
11 from insights.parsers.smt import CpuCoreOnline, CpuSiblings
12
13
14 @combiner(CpuCoreOnline, CpuSiblings)
15 class CpuTopology(object):
16 """
17 Class for collecting the online/siblings status for all CPU cores.
18
19 Sample output of the ``CpuCoreOnline`` parser is::
20
21 [[Core 0: Online], [Core 1: Online], [Core 2: Online], [Core 3: Online]]
22
23 Sample output of the ``CpuSiblings`` parser is::
24
25 [[Core 0 Siblings: [0, 2]], [Core 1 Siblings: [1, 3]], [Core 2 Siblings: [0, 2]], [Core 3 Siblings: [1, 3]]]
26
27 Attributes:
28 cores (list of dictionaries): List of all cores.
29 all_solitary (bool): True, if hyperthreading is not used.
30
31 Examples:
32 >>> type(cpu_topology)
33 <class 'insights.combiners.smt.CpuTopology'>
34 >>> cpu_topology.cores == [{'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}, {'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}]
35 True
36 >>> cpu_topology.all_solitary
37 False
38 """
39
40 def __init__(self, cpu_online, cpu_siblings):
41 self.cores = []
42
43 max_cpu_core_id = max([core.core_id for core in cpu_online])
44 for n in range(max_cpu_core_id + 1):
45 online = [core for core in cpu_online if core.core_id == n]
46 online = online[0].on
47 siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]
48 if len(siblings) != 0:
49 siblings = siblings[0].siblings
50
51 one_core = {"online": online, "siblings": siblings}
52 self.cores.append(one_core)
53
54 self.all_solitary = all([len(core["siblings"]) <= 1 for core in self.cores])
55
56 def online(self, core_id):
57 """
58 Returns bool value obtained from "online" file for given core_id.
59 """
60 if core_id >= len(self.cores) or core_id < 0:
61 return None
62 return self.cores[core_id]["online"]
63
64 def siblings(self, core_id):
65 """
66 Returns list of siblings for given core_id.
67 """
68 if core_id >= len(self.cores) or core_id < 0:
69 return None
70 return self.cores[core_id]["siblings"]
71
[end of insights/combiners/smt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/insights/combiners/smt.py b/insights/combiners/smt.py
--- a/insights/combiners/smt.py
+++ b/insights/combiners/smt.py
@@ -43,7 +43,13 @@
max_cpu_core_id = max([core.core_id for core in cpu_online])
for n in range(max_cpu_core_id + 1):
online = [core for core in cpu_online if core.core_id == n]
- online = online[0].on
+ # On some boxes cpu0 doesn't have the online file, since technically cpu0 will always
+ # be online. So check if online returns anything before trying to access online[0].
+ # If it returns nothing and n is 0 set online to True.
+ if online:
+ online = online[0].on
+ elif not online and n == 0:
+ online = True
siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]
if len(siblings) != 0:
siblings = siblings[0].siblings
| {"golden_diff": "diff --git a/insights/combiners/smt.py b/insights/combiners/smt.py\n--- a/insights/combiners/smt.py\n+++ b/insights/combiners/smt.py\n@@ -43,7 +43,13 @@\n max_cpu_core_id = max([core.core_id for core in cpu_online])\n for n in range(max_cpu_core_id + 1):\n online = [core for core in cpu_online if core.core_id == n]\n- online = online[0].on\n+ # On some boxes cpu0 doesn't have the online file, since technically cpu0 will always\n+ # be online. So check if online returns anything before trying to access online[0].\n+ # If it returns nothing and n is 0 set online to True.\n+ if online:\n+ online = online[0].on\n+ elif not online and n == 0:\n+ online = True\n siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]\n if len(siblings) != 0:\n siblings = siblings[0].siblings\n", "issue": "The smt combiner is raising IndexError exceptions in production.\nThe CpuTopology combiner is throwing a large number of the exception IndexError('list index out of range',) in production.\n", "before_files": [{"content": "\"\"\"\nSimultaneous Multithreading (SMT) combiner\n==========================================\n\nCombiner for Simultaneous Multithreading (SMT). It uses the results of the following parsers:\n:class:`insights.parsers.smt.CpuCoreOnline`,\n:class:`insights.parsers.smt.CpuSiblings`.\n\"\"\"\n\nfrom insights.core.plugins import combiner\nfrom insights.parsers.smt import CpuCoreOnline, CpuSiblings\n\n\n@combiner(CpuCoreOnline, CpuSiblings)\nclass CpuTopology(object):\n \"\"\"\n Class for collecting the online/siblings status for all CPU cores.\n\n Sample output of the ``CpuCoreOnline`` parser is::\n\n [[Core 0: Online], [Core 1: Online], [Core 2: Online], [Core 3: Online]]\n\n Sample output of the ``CpuSiblings`` parser is::\n\n [[Core 0 Siblings: [0, 2]], [Core 1 Siblings: [1, 3]], [Core 2 Siblings: [0, 2]], [Core 3 Siblings: [1, 3]]]\n\n Attributes:\n cores (list of dictionaries): List of all cores.\n all_solitary (bool): True, if hyperthreading is not used.\n\n Examples:\n >>> type(cpu_topology)\n <class 'insights.combiners.smt.CpuTopology'>\n >>> cpu_topology.cores == [{'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}, {'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}]\n True\n >>> cpu_topology.all_solitary\n False\n \"\"\"\n\n def __init__(self, cpu_online, cpu_siblings):\n self.cores = []\n\n max_cpu_core_id = max([core.core_id for core in cpu_online])\n for n in range(max_cpu_core_id + 1):\n online = [core for core in cpu_online if core.core_id == n]\n online = online[0].on\n siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]\n if len(siblings) != 0:\n siblings = siblings[0].siblings\n\n one_core = {\"online\": online, \"siblings\": siblings}\n self.cores.append(one_core)\n\n self.all_solitary = all([len(core[\"siblings\"]) <= 1 for core in self.cores])\n\n def online(self, core_id):\n \"\"\"\n Returns bool value obtained from \"online\" file for given core_id.\n \"\"\"\n if core_id >= len(self.cores) or core_id < 0:\n return None\n return self.cores[core_id][\"online\"]\n\n def siblings(self, core_id):\n \"\"\"\n Returns list of siblings for given core_id.\n \"\"\"\n if core_id >= len(self.cores) or core_id < 0:\n return None\n return self.cores[core_id][\"siblings\"]\n", "path": "insights/combiners/smt.py"}]} | 1,372 | 248 |