problem_id (string, 18-22 chars) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, 13-58 chars) | prompt (string, 1.71k-9.01k chars) | golden_diff (string, 151-4.94k chars) | verification_info (string, 465-11.3k chars) | num_tokens_prompt (int64, 557-2.05k) | num_tokens_diff (int64, 48-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_53384 | rasdani/github-patches | git_diff | chainer__chainer-271 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FunctionSet.copy_parameters_from()
Hi all!
The code in 'FunctionSet.copy_parameters_from()' does not work, when 'src' and 'dst' are both numpy.ndarrays?
``` python
if isinstance(dst, numpy.ndarray):
if isinstance(src, numpy.ndarray):
dst.copy(src) # this gives a ValueError
```
I think this should read
``` python
if isinstance(dst, numpy.ndarray):
if isinstance(src, numpy.ndarray):
numpy.copyto(dst, src)
```
My numpy.version.full_version is 1.9.2, the 'copyto' method exists since 1.7.0.
Cheers,
-r
</issue>
<code>
[start of chainer/function_set.py]
1 import numpy
2 import six
3
4 from chainer import cuda
5
6
7 class FunctionSet(object):
8
9 """Set of objects with ``parameters`` and ``gradients`` properties.
10
11 :class:`FunctionSet` is useful to collect parameters and gradients of
12 multiple parameterized :class:`Function` objects. :class:`FunctionSet`
13 itself also implements :attr:`~FunctionSet.parameters` and
14 :attr:`~FunctionSet.gradients`, so it can be nested in another
15 :class:`FunctionSet` object.
16
17 Function registration is done by just adding an attribute to
18 :class:`FunctionSet` object.
19
20 """
21
22 def __init__(self, **functions):
23 """Initializes the function set by given functions.
24
25 Args:
26 **functions: ``dict`` of ``str`` key and :class:`Function` values.
27 The key-value pairs are just set to the :class:`FunctionSet`
28 object as attributes.
29
30 """
31 for name, func in six.iteritems(functions):
32 setattr(self, name, func)
33
34 def collect_parameters(self):
35 """Returns a tuple of parameters and gradients.
36
37 Returns:
38 Tuple (pair) of two tuples. The first element is a tuple of
39 parameter arrays, and the second is a tuple of gradient arrays.
40
41 """
42 return self.parameters, self.gradients
43
44 def to_gpu(self, device=None):
45 """Migrates all parameters and gradients onto GPU.
46
47 This method calls ``to_gpu`` method of each registered object.
48
49 Args:
50 device (int or :class:`pycuda.driver.Device` or ``None``): Device
51 ID of GPU. If ``None`` is given, it uses the current device.
52
53 Returns:
54 self
55
56 """
57 for func in six.itervalues(self.__dict__):
58 func.to_gpu(device=device)
59 return self
60
61 def to_cpu(self):
62 """Migrates all parameters and gradients onto CPU.
63
64 This method calls ``to_cpu`` method of each registered object.
65
66 Returns:
67 self
68
69 """
70 for func in six.itervalues(self.__dict__):
71 func.to_cpu()
72 return self
73
74 def copy_parameters_from(self, params):
75 """Copies parameters from another source without reallocation.
76
77 Args:
78 params (Iterable): Iterable of parameter arrays.
79
80 """
81 for dst, src in zip(self.parameters, params):
82 if isinstance(dst, numpy.ndarray):
83 if isinstance(src, numpy.ndarray):
84 dst.copy(src)
85 else:
86 src.get(dst)
87 elif isinstance(src, numpy.ndarray):
88 dst.set(src)
89 else:
90 cuda.copy(src, out=dst)
91
92 @property
93 def parameters(self):
94 """Tuple of parameter arrays of all registered functions.
95
96 The order of parameters is consistent with :meth:`gradients` property.
97
98 """
99 return sum((func.parameters for _, func in self._get_sorted_funcs()),
100 ())
101
102 @parameters.setter
103 def parameters(self, params):
104 param_iter = iter(params)
105 for _, func in self._get_sorted_funcs():
106 func.parameters = param_iter
107
108 @property
109 def gradients(self):
110 """Tuple of gradient arrays of all registered functions.
111
112 The order of gradients is consistent with :meth:`parameters` property.
113
114 """
115 return sum((func.gradients for _, func in self._get_sorted_funcs()),
116 ())
117
118 @gradients.setter
119 def gradients(self, grads):
120 grad_iter = iter(grads)
121 for _, func in self._get_sorted_funcs():
122 func.gradients = grad_iter
123
124 def _get_sorted_funcs(self):
125 return sorted(six.iteritems(self.__dict__))
126
[end of chainer/function_set.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/function_set.py b/chainer/function_set.py
--- a/chainer/function_set.py
+++ b/chainer/function_set.py
@@ -81,7 +81,7 @@
for dst, src in zip(self.parameters, params):
if isinstance(dst, numpy.ndarray):
if isinstance(src, numpy.ndarray):
- dst.copy(src)
+ numpy.copyto(dst, src)
else:
src.get(dst)
elif isinstance(src, numpy.ndarray):
| {"golden_diff": "diff --git a/chainer/function_set.py b/chainer/function_set.py\n--- a/chainer/function_set.py\n+++ b/chainer/function_set.py\n@@ -81,7 +81,7 @@\n for dst, src in zip(self.parameters, params):\n if isinstance(dst, numpy.ndarray):\n if isinstance(src, numpy.ndarray):\n- dst.copy(src)\n+ numpy.copyto(dst, src)\n else:\n src.get(dst)\n elif isinstance(src, numpy.ndarray):\n", "issue": "FunctionSet.copy_parameters_from()\nHi all!\n\nThe code in 'FunctionSet.copy_parameters_from()' does not work, when 'src' and 'dst' are both numpy.ndarrays?\n\n``` python\nif isinstance(dst, numpy.ndarray):\n if isinstance(src, numpy.ndarray):\n dst.copy(src) # this gives a ValueError\n```\n\nI think this should read\n\n``` python\nif isinstance(dst, numpy.ndarray):\n if isinstance(src, numpy.ndarray):\n numpy.copyto(dst, src)\n```\n\nMy numpy.version.full_version is 1.9.2, the 'copyto' method exists since 1.7.0.\n\nCheers,\n-r\n\n", "before_files": [{"content": "import numpy\nimport six\n\nfrom chainer import cuda\n\n\nclass FunctionSet(object):\n\n \"\"\"Set of objects with ``parameters`` and ``gradients`` properties.\n\n :class:`FunctionSet` is useful to collect parameters and gradients of\n multiple parameterized :class:`Function` objects. :class:`FunctionSet`\n itself also implements :attr:`~FunctionSet.parameters` and\n :attr:`~FunctionSet.gradients`, so it can be nested in another\n :class:`FunctionSet` object.\n\n Function registration is done by just adding an attribute to\n :class:`FunctionSet` object.\n\n \"\"\"\n\n def __init__(self, **functions):\n \"\"\"Initializes the function set by given functions.\n\n Args:\n **functions: ``dict`` of ``str`` key and :class:`Function` values.\n The key-value pairs are just set to the :class:`FunctionSet`\n object as attributes.\n\n \"\"\"\n for name, func in six.iteritems(functions):\n setattr(self, name, func)\n\n def collect_parameters(self):\n \"\"\"Returns a tuple of parameters and gradients.\n\n Returns:\n Tuple (pair) of two tuples. The first element is a tuple of\n parameter arrays, and the second is a tuple of gradient arrays.\n\n \"\"\"\n return self.parameters, self.gradients\n\n def to_gpu(self, device=None):\n \"\"\"Migrates all parameters and gradients onto GPU.\n\n This method calls ``to_gpu`` method of each registered object.\n\n Args:\n device (int or :class:`pycuda.driver.Device` or ``None``): Device\n ID of GPU. 
If ``None`` is given, it uses the current device.\n\n Returns:\n self\n\n \"\"\"\n for func in six.itervalues(self.__dict__):\n func.to_gpu(device=device)\n return self\n\n def to_cpu(self):\n \"\"\"Migrates all parameters and gradients onto CPU.\n\n This method calls ``to_cpu`` method of each registered object.\n\n Returns:\n self\n\n \"\"\"\n for func in six.itervalues(self.__dict__):\n func.to_cpu()\n return self\n\n def copy_parameters_from(self, params):\n \"\"\"Copies parameters from another source without reallocation.\n\n Args:\n params (Iterable): Iterable of parameter arrays.\n\n \"\"\"\n for dst, src in zip(self.parameters, params):\n if isinstance(dst, numpy.ndarray):\n if isinstance(src, numpy.ndarray):\n dst.copy(src)\n else:\n src.get(dst)\n elif isinstance(src, numpy.ndarray):\n dst.set(src)\n else:\n cuda.copy(src, out=dst)\n\n @property\n def parameters(self):\n \"\"\"Tuple of parameter arrays of all registered functions.\n\n The order of parameters is consistent with :meth:`gradients` property.\n\n \"\"\"\n return sum((func.parameters for _, func in self._get_sorted_funcs()),\n ())\n\n @parameters.setter\n def parameters(self, params):\n param_iter = iter(params)\n for _, func in self._get_sorted_funcs():\n func.parameters = param_iter\n\n @property\n def gradients(self):\n \"\"\"Tuple of gradient arrays of all registered functions.\n\n The order of gradients is consistent with :meth:`parameters` property.\n\n \"\"\"\n return sum((func.gradients for _, func in self._get_sorted_funcs()),\n ())\n\n @gradients.setter\n def gradients(self, grads):\n grad_iter = iter(grads)\n for _, func in self._get_sorted_funcs():\n func.gradients = grad_iter\n\n def _get_sorted_funcs(self):\n return sorted(six.iteritems(self.__dict__))\n", "path": "chainer/function_set.py"}]} | 1,729 | 104 |
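
As an aside on the chainer row above: the one-line fix works because `ndarray.copy()` returns a new array and only accepts a memory-layout argument, whereas `numpy.copyto()` writes into an existing buffer. A minimal, self-contained sketch (the array shapes are invented for illustration):

```python
import numpy as np

dst = np.zeros((2, 3), dtype=np.float32)
src = np.ones((2, 3), dtype=np.float32)

# ndarray.copy() expects an order flag ('C', 'F', 'A', 'K') and returns a new
# array, so dst.copy(src) cannot copy src into dst; it raises an error instead
# (the issue reports a ValueError on NumPy 1.9.2).
# dst.copy(src)

# numpy.copyto() (available since NumPy 1.7) copies src into the already
# allocated dst in place, which is what copy_parameters_from() needs.
np.copyto(dst, src)
assert (dst == src).all()
```
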
gh_patches_debug_15485 | rasdani/github-patches | git_diff | keras-team__autokeras-1145 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IO API, multi-modal classification, predict method problem
### Bug Description
IO API, multi-modal classification, predict method problem
### Bug Reproduction
https://github.com/datamllab/automl-in-action-notebooks/blob/master/3.4.2-Functional-API-Multi-Input.ipynb
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python:
- autokeras: 1.0.2
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow: 2.1.0
### Additional context
<!---
If applicable, add any other context about the problem.
-->
</issue>
<code>
[start of autokeras/keras_layers.py]
1 import inspect
2
3 import tensorflow as tf
4 from tensorflow.keras.layers.experimental import preprocessing
5 from tensorflow.python.keras.layers.preprocessing import index_lookup
6 from tensorflow.python.util import nest
7
8 CombinerPreprocessingLayer = inspect.getmro(preprocessing.Normalization)[1]
9 Combiner = inspect.getmro(preprocessing.Normalization()._combiner.__class__)[1]
10
11 INT = 'int'
12 NONE = 'none'
13 ONE_HOT = 'one-hot'
14
15
16 class MultiColumnCategoricalEncoding(preprocessing.PreprocessingLayer):
17 """Encode the categorical features to numerical features.
18
19 # Arguments
20 encoding: A list of strings, which has the same number of elements as the
21 columns in the structured data. Each of the strings specifies the
22 encoding method used for the corresponding column. Use 'int' for
23 categorical columns and 'none' for numerical columns.
24 """
25
26 # TODO: Support one-hot encoding.
27 # TODO: Support frequency encoding.
28
29 def __init__(self, encoding, **kwargs):
30 super().__init__(**kwargs)
31 self.encoding = encoding
32 self.encoding_layers = []
33 for encoding in self.encoding:
34 if encoding == NONE:
35 self.encoding_layers.append(None)
36 elif encoding == INT:
37 self.encoding_layers.append(index_lookup.IndexLookup())
38 elif encoding == ONE_HOT:
39 self.encoding_layers.append(None)
40
41 def build(self, input_shape):
42 for encoding_layer in self.encoding_layers:
43 if encoding_layer is not None:
44 encoding_layer.build(tf.TensorShape([1]))
45
46 def call(self, inputs):
47 input_nodes = nest.flatten(inputs)[0]
48 split_inputs = tf.split(input_nodes, [1] * len(self.encoding), axis=-1)
49 output_nodes = []
50 for input_node, encoding_layer in zip(split_inputs, self.encoding_layers):
51 if encoding_layer is None:
52 output_nodes.append(tf.strings.to_number(input_node, tf.float32))
53 else:
54 output_nodes.append(tf.cast(encoding_layer(input_node), tf.float32))
55 return tf.keras.layers.Concatenate()(output_nodes)
56
57 def adapt(self, data):
58 for index, encoding_layer in enumerate(self.encoding_layers):
59 if encoding_layer is None:
60 continue
61 data_column = data.map(lambda x: tf.slice(x, [0, index], [-1, 1]))
62 encoding_layer.adapt(data_column)
63
64 def get_config(self):
65 config = {
66 'encoding': self.encoding,
67 }
68 base_config = super().get_config()
69 return dict(list(base_config.items()) + list(config.items()))
70
71
72 CUSTOM_OBJECTS = {
73 'MultiColumnCategoricalEncoding': MultiColumnCategoricalEncoding,
74 }
75
[end of autokeras/keras_layers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/autokeras/keras_layers.py b/autokeras/keras_layers.py
--- a/autokeras/keras_layers.py
+++ b/autokeras/keras_layers.py
@@ -49,7 +49,12 @@
output_nodes = []
for input_node, encoding_layer in zip(split_inputs, self.encoding_layers):
if encoding_layer is None:
- output_nodes.append(tf.strings.to_number(input_node, tf.float32))
+ number = tf.strings.to_number(input_node, tf.float32)
+ # Replace NaN with 0.
+ imputed = tf.where(tf.math.is_nan(number),
+ tf.zeros_like(number),
+ number)
+ output_nodes.append(imputed)
else:
output_nodes.append(tf.cast(encoding_layer(input_node), tf.float32))
return tf.keras.layers.Concatenate()(output_nodes)
| {"golden_diff": "diff --git a/autokeras/keras_layers.py b/autokeras/keras_layers.py\n--- a/autokeras/keras_layers.py\n+++ b/autokeras/keras_layers.py\n@@ -49,7 +49,12 @@\n output_nodes = []\n for input_node, encoding_layer in zip(split_inputs, self.encoding_layers):\n if encoding_layer is None:\n- output_nodes.append(tf.strings.to_number(input_node, tf.float32))\n+ number = tf.strings.to_number(input_node, tf.float32)\n+ # Replace NaN with 0.\n+ imputed = tf.where(tf.math.is_nan(number),\n+ tf.zeros_like(number),\n+ number)\n+ output_nodes.append(imputed)\n else:\n output_nodes.append(tf.cast(encoding_layer(input_node), tf.float32))\n return tf.keras.layers.Concatenate()(output_nodes)\n", "issue": "IO API, multi-modal classification, predict method problem\n### Bug Description\r\nIO API, multi-modal classification, predict method problem\r\n\r\n\r\n### Bug Reproduction\r\n\r\nhttps://github.com/datamllab/automl-in-action-notebooks/blob/master/3.4.2-Functional-API-Multi-Input.ipynb\r\n\r\n\r\n### Setup Details\r\nInclude the details about the versions of:\r\n - OS type and version:\r\n - Python: \r\n - autokeras: 1.0.2\r\n - keras-tuner:\r\n - scikit-learn:\r\n - numpy:\r\n - pandas:\r\n - tensorflow: 2.1.0\r\n\r\n### Additional context\r\n<!---\r\nIf applicable, add any other context about the problem.\r\n-->\r\n\n", "before_files": [{"content": "import inspect\n\nimport tensorflow as tf\nfrom tensorflow.keras.layers.experimental import preprocessing\nfrom tensorflow.python.keras.layers.preprocessing import index_lookup\nfrom tensorflow.python.util import nest\n\nCombinerPreprocessingLayer = inspect.getmro(preprocessing.Normalization)[1]\nCombiner = inspect.getmro(preprocessing.Normalization()._combiner.__class__)[1]\n\nINT = 'int'\nNONE = 'none'\nONE_HOT = 'one-hot'\n\n\nclass MultiColumnCategoricalEncoding(preprocessing.PreprocessingLayer):\n \"\"\"Encode the categorical features to numerical features.\n\n # Arguments\n encoding: A list of strings, which has the same number of elements as the\n columns in the structured data. Each of the strings specifies the\n encoding method used for the corresponding column. 
Use 'int' for\n categorical columns and 'none' for numerical columns.\n \"\"\"\n\n # TODO: Support one-hot encoding.\n # TODO: Support frequency encoding.\n\n def __init__(self, encoding, **kwargs):\n super().__init__(**kwargs)\n self.encoding = encoding\n self.encoding_layers = []\n for encoding in self.encoding:\n if encoding == NONE:\n self.encoding_layers.append(None)\n elif encoding == INT:\n self.encoding_layers.append(index_lookup.IndexLookup())\n elif encoding == ONE_HOT:\n self.encoding_layers.append(None)\n\n def build(self, input_shape):\n for encoding_layer in self.encoding_layers:\n if encoding_layer is not None:\n encoding_layer.build(tf.TensorShape([1]))\n\n def call(self, inputs):\n input_nodes = nest.flatten(inputs)[0]\n split_inputs = tf.split(input_nodes, [1] * len(self.encoding), axis=-1)\n output_nodes = []\n for input_node, encoding_layer in zip(split_inputs, self.encoding_layers):\n if encoding_layer is None:\n output_nodes.append(tf.strings.to_number(input_node, tf.float32))\n else:\n output_nodes.append(tf.cast(encoding_layer(input_node), tf.float32))\n return tf.keras.layers.Concatenate()(output_nodes)\n\n def adapt(self, data):\n for index, encoding_layer in enumerate(self.encoding_layers):\n if encoding_layer is None:\n continue\n data_column = data.map(lambda x: tf.slice(x, [0, index], [-1, 1]))\n encoding_layer.adapt(data_column)\n\n def get_config(self):\n config = {\n 'encoding': self.encoding,\n }\n base_config = super().get_config()\n return dict(list(base_config.items()) + list(config.items()))\n\n\nCUSTOM_OBJECTS = {\n 'MultiColumnCategoricalEncoding': MultiColumnCategoricalEncoding,\n}\n", "path": "autokeras/keras_layers.py"}]} | 1,387 | 194 |
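
A quick illustration of the NaN-imputation pattern in the autokeras patch above. The input strings are invented, but `tf.strings.to_number`, `tf.math.is_nan`, and `tf.where` are the same calls used in the diff:

```python
import tensorflow as tf

# Structured-data columns reach the layer as strings; a missing numeric value
# often arrives as the literal string 'nan', which parses to a float NaN.
raw = tf.constant([['1.5'], ['nan'], ['3.0']])
number = tf.strings.to_number(raw, tf.float32)       # [[1.5], [nan], [3.0]]

# Replace NaN with 0 so downstream layers never see NaN, mirroring the patch.
imputed = tf.where(tf.math.is_nan(number), tf.zeros_like(number), number)
print(imputed.numpy())                               # [[1.5], [0.0], [3.0]]
```
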
gh_patches_debug_21069 | rasdani/github-patches | git_diff | fossasia__open-event-server-9044 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot save the badge field
```
HINT: You will need to rewrite or cast the expression.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/data/app/app/api/helpers/db.py", line 27, in save_to_db
db.session.commit()
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/scoping.py", line 163, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1046, in commit
self.transaction.commit()
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 504, in commit
self._prepare_impl()
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 483, in _prepare_impl
self.session.flush()
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2540, in flush
self._flush(objects)
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2682, in _flush
transaction.rollback(_capture_exception=True)
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.raise_(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2642, in _flush
flush_context.execute()
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 586, in execute
persistence.save_obj(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 239, in save_obj
_emit_insert_statements(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1135, in _emit_insert_statements
result = cached_connections[connection].execute(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) column "font_weight" is of type integer but expression is of type json[]
LINE 1: ...e', 'Last Name', 'Sample Text', 14, 'Arial', CAST(ARRAY['{"n...
```
</issue>
<code>
[start of migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py]
1 """empty message
2
3 Revision ID: 8b5bc48e1d4c
4 Revises: 21c79d253f21
5 Create Date: 2023-08-01 14:10:12.187180
6
7 """
8
9 from alembic import op
10 import sqlalchemy as sa
11 from sqlalchemy.dialects import postgresql
12
13 # revision identifiers, used by Alembic.
14 revision = '8b5bc48e1d4c'
15 down_revision = '21c79d253f21'
16
17
18 def upgrade():
19 # ### commands auto generated by Alembic - please adjust! ###
20 op.alter_column('badge_field_forms', 'font_weight',
21 existing_type=sa.TEXT(),
22 type_=postgresql.ARRAY(sa.JSON()),
23 postgresql_using='font_weight::json[]',
24 existing_nullable=True)
25 # ### end Alembic commands ###
26
27
28 def downgrade():
29 # ### commands auto generated by Alembic - please adjust! ###
30 op.alter_column('badge_field_forms', 'font_weight',
31 existing_type=postgresql.ARRAY(sa.JSON()),
32 type_=sa.TEXT(),
33 existing_nullable=True)
34 # ### end Alembic commands ###
35
[end of migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py b/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py
--- a/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py
+++ b/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py
@@ -17,18 +17,15 @@
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
- op.alter_column('badge_field_forms', 'font_weight',
- existing_type=sa.TEXT(),
- type_=postgresql.ARRAY(sa.JSON()),
- postgresql_using='font_weight::json[]',
- existing_nullable=True)
+ op.drop_column('badge_field_forms', 'font_weight')
+ op.add_column('badge_field_forms', sa.Column('font_weight',
+ postgresql.ARRAY(sa.JSON()), nullable=True))
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
- op.alter_column('badge_field_forms', 'font_weight',
- existing_type=postgresql.ARRAY(sa.JSON()),
- type_=sa.TEXT(),
- existing_nullable=True)
+ op.drop_column('badge_field_forms', 'font_weight')
+ op.add_column('badge_field_forms', sa.Column('font_weight',
+ sa.Integer(), nullable=True))
# ### end Alembic commands ###
| {"golden_diff": "diff --git a/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py b/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py\n--- a/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py\n+++ b/migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py\n@@ -17,18 +17,15 @@\n \n def upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n- op.alter_column('badge_field_forms', 'font_weight',\n- existing_type=sa.TEXT(),\n- type_=postgresql.ARRAY(sa.JSON()),\n- postgresql_using='font_weight::json[]',\n- existing_nullable=True)\n+ op.drop_column('badge_field_forms', 'font_weight')\n+ op.add_column('badge_field_forms', sa.Column('font_weight',\n+ postgresql.ARRAY(sa.JSON()), nullable=True))\n # ### end Alembic commands ###\n \n \n def downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n- op.alter_column('badge_field_forms', 'font_weight',\n- existing_type=postgresql.ARRAY(sa.JSON()),\n- type_=sa.TEXT(),\n- existing_nullable=True)\n+ op.drop_column('badge_field_forms', 'font_weight')\n+ op.add_column('badge_field_forms', sa.Column('font_weight',\n+ sa.Integer(), nullable=True))\n # ### end Alembic commands ###\n", "issue": "Cannot save the badge field\n```\r\nHINT: You will need to rewrite or cast the expression.\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/data/app/app/api/helpers/db.py\", line 27, in save_to_db\r\n db.session.commit()\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/scoping.py\", line 163, in do\r\n return getattr(self.registry(), name)(*args, **kwargs)\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py\", line 1046, in commit\r\n self.transaction.commit()\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py\", line 504, in commit\r\n self._prepare_impl()\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py\", line 483, in _prepare_impl\r\n self.session.flush()\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py\", line 2540, in flush\r\n self._flush(objects)\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py\", line 2682, in _flush\r\n transaction.rollback(_capture_exception=True)\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py\", line 68, in __exit__\r\n compat.raise_(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py\", line 182, in raise_\r\n raise exception\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py\", line 2642, in _flush\r\n flush_context.execute()\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py\", line 422, in execute\r\n rec.execute(self)\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py\", line 586, in execute\r\n persistence.save_obj(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py\", line 239, in save_obj\r\n _emit_insert_statements(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py\", line 1135, in _emit_insert_statements\r\n result = cached_connections[connection].execute(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py\", line 1011, in execute\r\n return meth(self, multiparams, params)\r\n File 
\"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/sql/elements.py\", line 298, in _execute_on_connection\r\n return connection._execute_clauseelement(self, multiparams, params)\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py\", line 1124, in _execute_clauseelement\r\n ret = self._execute_context(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py\", line 1316, in _execute_context\r\n self._handle_dbapi_exception(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py\", line 1510, in _handle_dbapi_exception\r\n util.raise_(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py\", line 182, in raise_\r\n raise exception\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py\", line 1276, in _execute_context\r\n self.dialect.do_execute(\r\n File \"/opt/pysetup/.venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py\", line 608, in do_execute\r\n cursor.execute(statement, parameters)\r\nsqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) column \"font_weight\" is of type integer but expression is of type json[]\r\nLINE 1: ...e', 'Last Name', 'Sample Text', 14, 'Arial', CAST(ARRAY['{\"n...\r\n```\n", "before_files": [{"content": "\"\"\"empty message\n\nRevision ID: 8b5bc48e1d4c\nRevises: 21c79d253f21\nCreate Date: 2023-08-01 14:10:12.187180\n\n\"\"\"\n\nfrom alembic import op\nimport sqlalchemy as sa\nfrom sqlalchemy.dialects import postgresql\n\n# revision identifiers, used by Alembic.\nrevision = '8b5bc48e1d4c'\ndown_revision = '21c79d253f21'\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.alter_column('badge_field_forms', 'font_weight',\n existing_type=sa.TEXT(),\n type_=postgresql.ARRAY(sa.JSON()),\n postgresql_using='font_weight::json[]',\n existing_nullable=True)\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.alter_column('badge_field_forms', 'font_weight',\n existing_type=postgresql.ARRAY(sa.JSON()),\n type_=sa.TEXT(),\n existing_nullable=True)\n # ### end Alembic commands ###\n", "path": "migrations/versions/rev-2023-08-01-14:10:12-8b5bc48e1d4c_.py"}]} | 1,929 | 413 |
gh_patches_debug_4684 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-1026 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
metainfo file installed in the wrong place
**Information**
- Solaar version (`solaar --version` or `git describe --tags` if cloned from this repository): 1.0.4-57-g69f889e
- Distribution: Fedora
- Kernel version (ex. `uname -srmo`): N/A
- Output of `solaar show`: N/A
**Describe the bug**
The `metainfo.xml` file gets installed into the wrong location, i.e. `/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml/metainfo.xml`.
**To Reproduce**
Steps to reproduce the behavior (this is part of RPM package build process, hence the `--root xxx` option):
```
...
/usr/bin/python3 setup.py install -O1 --skip-build --root /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64
...
running install_data
creating /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share
...
creating /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share/metainfo
creating /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml
copying share/solaar/metainfo.xml -> /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml
...
```
**Screenshots**
N/A
**Additional context**
The correct location is: `/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml` (a file under `/usr/share/metainfo`, not a directory).
The solution is to rename `metainfo.xml` to `io.github.pwr_solaar.solaar.metainfo.xml` and install it under `/usr/share/metainfo` in `setup.py`. I'll send a PR shortly.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4
5 try:
6 from setuptools import setup
7 except ImportError:
8 from distutils.core import setup
9
10 # from solaar import NAME, __version__
11 __version__ = '1.0.4'
12 NAME = 'Solaar'
13
14
15 def _data_files():
16 from os.path import dirname as _dirname
17
18 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
19 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
20 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
21
22 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
23 yield _dirname(mo), [mo]
24
25 yield 'share/applications', ['share/applications/solaar.desktop']
26 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
27 yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']
28
29 del _dirname
30
31
32 setup(
33 name=NAME.lower(),
34 version=__version__,
35 description='Linux devices manager for the Logitech Unifying Receiver.',
36 long_description='''
37 Solaar is a Linux device manager for Logitech's Unifying Receiver peripherals.
38 It is able to pair/unpair devices with the receiver, for many devices show
39 battery status, and show and modify some of the modifiable features of devices.
40 '''.strip(),
41 author='Daniel Pavel',
42 license='GPLv2',
43 url='http://pwr-solaar.github.io/Solaar/',
44 classifiers=[
45 'Development Status :: 4 - Beta',
46 'Environment :: X11 Applications :: GTK',
47 'Environment :: Console',
48 'Intended Audience :: End Users/Desktop',
49 'License :: DFSG approved',
50 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
51 'Natural Language :: English',
52 'Programming Language :: Python :: 3 :: Only',
53 'Operating System :: POSIX :: Linux',
54 'Topic :: Utilities',
55 ],
56 platforms=['linux'],
57
58 # sudo apt install python-gi python3-gi \
59 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
60 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
61 python_requires='>=3.6',
62 install_requires=[
63 'pyudev (>= 0.13)',
64 'PyYAML (>= 5.1)',
65 'python-xlib (>= 0.27)',
66 'psutil (>= 5.6.0)',
67 ],
68 package_dir={'': 'lib'},
69 packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
70 data_files=list(_data_files()),
71 scripts=_glob('bin/*'),
72 )
73
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,7 @@
yield 'share/applications', ['share/applications/solaar.desktop']
yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
- yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']
+ yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']
del _dirname
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,7 +24,7 @@\n \n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n- yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n+ yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n \n del _dirname\n", "issue": "metainfo file installed in the wrong place\n**Information**\r\n- Solaar version (`solaar --version` or `git describe --tags` if cloned from this repository): 1.0.4-57-g69f889e\r\n- Distribution: Fedora\r\n- Kernel version (ex. `uname -srmo`): N/A\r\n- Output of `solaar show`: N/A\r\n\r\n**Describe the bug**\r\nThe `metainfo.xml` file gets installed into the wrong location, i.e. `/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml/metainfo.xml`.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior (this is part of RPM package build process, hence the `--root xxx` option):\r\n```\r\n...\r\n/usr/bin/python3 setup.py install -O1 --skip-build --root /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64\r\n...\r\nrunning install_data\r\ncreating /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share\r\n...\r\ncreating /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share/metainfo\r\ncreating /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml\r\ncopying share/solaar/metainfo.xml -> /builddir/build/BUILDROOT/solaar-1.0.4-3.fc33.x86_64/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml\r\n...\r\n```\r\n\r\n**Screenshots**\r\nN/A\r\n\r\n**Additional context**\r\nThe correct location is: `/usr/share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml` (a file under `/usr/share/metainfo`, not a directory).\r\nThe solution is to rename `metainfo.xml` to `io.github.pwr_solaar.solaar.metainfo.xml` and install it under `/usr/share/metainfo` in `setup.py`. 
I'll send a PR shortly.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.6.0)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]} | 1,861 | 145 |
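
The solaar row above comes down to how `data_files` entries are interpreted: each entry is a `(target_directory, [source_files])` pair, and the first element is always treated as a directory. A small sketch of the broken and fixed variants, with the paths taken from the diff:

```python
# Broken: the "directory" is really the desired file name, so installation
# creates share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml/metainfo.xml
data_files = [
    ('share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml',
     ['share/solaar/metainfo.xml']),
]

# Fixed: install an already correctly named file into share/metainfo, giving
# share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml
data_files = [
    ('share/metainfo',
     ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']),
]
```

Renaming the source file in the repository (rather than at install time) is what makes the second form possible, since `data_files` copies files under their original basenames.
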
gh_patches_debug_17795 | rasdani/github-patches | git_diff | goauthentik__authentik-5727 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OCI Registry Blueprint port ignored
**Describe the bug**
When I try to load blueprints from a registry running on a custom port (e.g. port 5050) the connection fails.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to `Costomization > Blueprints`
2. Create a new `OCI Registry` Blueprint with a non default Port
3. Example `oci://larsl.dev:5050/larsl-net/authentik-config/blueprints/larsl-stages-base:latest`
4. A connection error occurs
**Expected behavior**
authentik connects on the port specified in the URL (5050). What happens according to the error message is that authentik uses port 443.
**Logs**
```
HTTPSConnectionPool(host='larsl.dev', port=443): Max retries exceeded with url: /v2/larsl-net/authentik-config/blueprints/larsl-stages-base/manifests/latest (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f2e26efa690>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
**Version and Deployment (please complete the following information):**
- authentik version: 2023.5.1
- Deployment: docker-compose
</issue>
<code>
[start of authentik/blueprints/v1/oci.py]
1 """OCI Client"""
2 from typing import Any
3 from urllib.parse import ParseResult, urlparse
4
5 from opencontainers.distribution.reggie import (
6 NewClient,
7 WithDebug,
8 WithDefaultName,
9 WithDigest,
10 WithReference,
11 WithUserAgent,
12 WithUsernamePassword,
13 )
14 from requests.exceptions import RequestException
15 from structlog import get_logger
16 from structlog.stdlib import BoundLogger
17
18 from authentik.lib.sentry import SentryIgnoredException
19 from authentik.lib.utils.http import authentik_user_agent
20
21 OCI_MEDIA_TYPE = "application/vnd.goauthentik.blueprint.v1+yaml"
22
23
24 class OCIException(SentryIgnoredException):
25 """OCI-related errors"""
26
27
28 class BlueprintOCIClient:
29 """Blueprint OCI Client"""
30
31 url: ParseResult
32 sanitized_url: str
33 logger: BoundLogger
34 ref: str
35 client: NewClient
36
37 def __init__(self, url: str) -> None:
38 self._parse_url(url)
39 self.logger = get_logger().bind(url=self.sanitized_url)
40
41 self.ref = "latest"
42 path = self.url.path[1:]
43 if ":" in self.url.path:
44 path, _, self.ref = path.partition(":")
45 self.client = NewClient(
46 f"https://{self.url.hostname}",
47 WithUserAgent(authentik_user_agent()),
48 WithUsernamePassword(self.url.username, self.url.password),
49 WithDefaultName(path),
50 WithDebug(True),
51 )
52
53 def _parse_url(self, url: str):
54 self.url = urlparse(url)
55 netloc = self.url.netloc
56 if "@" in netloc:
57 netloc = netloc[netloc.index("@") + 1 :]
58 self.sanitized_url = self.url._replace(netloc=netloc).geturl()
59
60 def fetch_manifests(self) -> dict[str, Any]:
61 """Fetch manifests for ref"""
62 self.logger.info("Fetching OCI manifests for blueprint")
63 manifest_request = self.client.NewRequest(
64 "GET",
65 "/v2/<name>/manifests/<reference>",
66 WithReference(self.ref),
67 ).SetHeader("Accept", "application/vnd.oci.image.manifest.v1+json")
68 try:
69 manifest_response = self.client.Do(manifest_request)
70 manifest_response.raise_for_status()
71 except RequestException as exc:
72 raise OCIException(exc) from exc
73 manifest = manifest_response.json()
74 if "errors" in manifest:
75 raise OCIException(manifest["errors"])
76 return manifest
77
78 def fetch_blobs(self, manifest: dict[str, Any]):
79 """Fetch blob based on manifest info"""
80 blob = None
81 for layer in manifest.get("layers", []):
82 if layer.get("mediaType", "") == OCI_MEDIA_TYPE:
83 blob = layer.get("digest")
84 self.logger.debug("Found layer with matching media type", blob=blob)
85 if not blob:
86 raise OCIException("Blob not found")
87
88 blob_request = self.client.NewRequest(
89 "GET",
90 "/v2/<name>/blobs/<digest>",
91 WithDigest(blob),
92 )
93 try:
94 blob_response = self.client.Do(blob_request)
95 blob_response.raise_for_status()
96 return blob_response.text
97 except RequestException as exc:
98 raise OCIException(exc) from exc
99
[end of authentik/blueprints/v1/oci.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/blueprints/v1/oci.py b/authentik/blueprints/v1/oci.py
--- a/authentik/blueprints/v1/oci.py
+++ b/authentik/blueprints/v1/oci.py
@@ -39,11 +39,16 @@
self.logger = get_logger().bind(url=self.sanitized_url)
self.ref = "latest"
+ # Remove the leading slash of the path to convert it to an image name
path = self.url.path[1:]
- if ":" in self.url.path:
+ if ":" in path:
+ # if there's a colon in the path, use everything after it as a ref
path, _, self.ref = path.partition(":")
+ base_url = f"https://{self.url.hostname}"
+ if self.url.port:
+ base_url += f":{self.url.port}"
self.client = NewClient(
- f"https://{self.url.hostname}",
+ base_url,
WithUserAgent(authentik_user_agent()),
WithUsernamePassword(self.url.username, self.url.password),
WithDefaultName(path),
| {"golden_diff": "diff --git a/authentik/blueprints/v1/oci.py b/authentik/blueprints/v1/oci.py\n--- a/authentik/blueprints/v1/oci.py\n+++ b/authentik/blueprints/v1/oci.py\n@@ -39,11 +39,16 @@\n self.logger = get_logger().bind(url=self.sanitized_url)\n \n self.ref = \"latest\"\n+ # Remove the leading slash of the path to convert it to an image name\n path = self.url.path[1:]\n- if \":\" in self.url.path:\n+ if \":\" in path:\n+ # if there's a colon in the path, use everything after it as a ref\n path, _, self.ref = path.partition(\":\")\n+ base_url = f\"https://{self.url.hostname}\"\n+ if self.url.port:\n+ base_url += f\":{self.url.port}\"\n self.client = NewClient(\n- f\"https://{self.url.hostname}\",\n+ base_url,\n WithUserAgent(authentik_user_agent()),\n WithUsernamePassword(self.url.username, self.url.password),\n WithDefaultName(path),\n", "issue": "OCI Registry Blueprint port ignored\n**Describe the bug**\r\nWhen I try to load blueprints from a registry running on a custom port (e.g. port 5050) the connection fails.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Go to `Costomization > Blueprints`\r\n2. Create a new `OCI Registry` Blueprint with a non default Port\r\n3. Example `oci://larsl.dev:5050/larsl-net/authentik-config/blueprints/larsl-stages-base:latest`\r\n4. A connection error occurs\r\n\r\n**Expected behavior**\r\nauthentik connects on the port specified in the URL (5050). What happens according to the error message is that authentik uses port 443.\r\n\r\n**Logs**\r\n```\r\nHTTPSConnectionPool(host='larsl.dev', port=443): Max retries exceeded with url: /v2/larsl-net/authentik-config/blueprints/larsl-stages-base/manifests/latest (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f2e26efa690>: Failed to establish a new connection: [Errno 111] Connection refused'))\r\n```\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.5.1\r\n- Deployment: docker-compose\r\n\n", "before_files": [{"content": "\"\"\"OCI Client\"\"\"\nfrom typing import Any\nfrom urllib.parse import ParseResult, urlparse\n\nfrom opencontainers.distribution.reggie import (\n NewClient,\n WithDebug,\n WithDefaultName,\n WithDigest,\n WithReference,\n WithUserAgent,\n WithUsernamePassword,\n)\nfrom requests.exceptions import RequestException\nfrom structlog import get_logger\nfrom structlog.stdlib import BoundLogger\n\nfrom authentik.lib.sentry import SentryIgnoredException\nfrom authentik.lib.utils.http import authentik_user_agent\n\nOCI_MEDIA_TYPE = \"application/vnd.goauthentik.blueprint.v1+yaml\"\n\n\nclass OCIException(SentryIgnoredException):\n \"\"\"OCI-related errors\"\"\"\n\n\nclass BlueprintOCIClient:\n \"\"\"Blueprint OCI Client\"\"\"\n\n url: ParseResult\n sanitized_url: str\n logger: BoundLogger\n ref: str\n client: NewClient\n\n def __init__(self, url: str) -> None:\n self._parse_url(url)\n self.logger = get_logger().bind(url=self.sanitized_url)\n\n self.ref = \"latest\"\n path = self.url.path[1:]\n if \":\" in self.url.path:\n path, _, self.ref = path.partition(\":\")\n self.client = NewClient(\n f\"https://{self.url.hostname}\",\n WithUserAgent(authentik_user_agent()),\n WithUsernamePassword(self.url.username, self.url.password),\n WithDefaultName(path),\n WithDebug(True),\n )\n\n def _parse_url(self, url: str):\n self.url = urlparse(url)\n netloc = self.url.netloc\n if \"@\" in netloc:\n netloc = netloc[netloc.index(\"@\") + 1 :]\n self.sanitized_url = 
self.url._replace(netloc=netloc).geturl()\n\n def fetch_manifests(self) -> dict[str, Any]:\n \"\"\"Fetch manifests for ref\"\"\"\n self.logger.info(\"Fetching OCI manifests for blueprint\")\n manifest_request = self.client.NewRequest(\n \"GET\",\n \"/v2/<name>/manifests/<reference>\",\n WithReference(self.ref),\n ).SetHeader(\"Accept\", \"application/vnd.oci.image.manifest.v1+json\")\n try:\n manifest_response = self.client.Do(manifest_request)\n manifest_response.raise_for_status()\n except RequestException as exc:\n raise OCIException(exc) from exc\n manifest = manifest_response.json()\n if \"errors\" in manifest:\n raise OCIException(manifest[\"errors\"])\n return manifest\n\n def fetch_blobs(self, manifest: dict[str, Any]):\n \"\"\"Fetch blob based on manifest info\"\"\"\n blob = None\n for layer in manifest.get(\"layers\", []):\n if layer.get(\"mediaType\", \"\") == OCI_MEDIA_TYPE:\n blob = layer.get(\"digest\")\n self.logger.debug(\"Found layer with matching media type\", blob=blob)\n if not blob:\n raise OCIException(\"Blob not found\")\n\n blob_request = self.client.NewRequest(\n \"GET\",\n \"/v2/<name>/blobs/<digest>\",\n WithDigest(blob),\n )\n try:\n blob_response = self.client.Do(blob_request)\n blob_response.raise_for_status()\n return blob_response.text\n except RequestException as exc:\n raise OCIException(exc) from exc\n", "path": "authentik/blueprints/v1/oci.py"}]} | 1,732 | 244 |
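
For context on the authentik patch above: `urllib.parse.urlparse` exposes the port separately from the hostname, so building the base URL from `hostname` alone silently drops any non-default port. A standalone sketch (the registry URL is made up):

```python
from urllib.parse import urlparse

url = urlparse("oci://registry.example:5050/org/config/blueprints/base:latest")

# .hostname never includes the port, which is why the old code always ended
# up connecting to https on port 443.
old_base = f"https://{url.hostname}"        # "https://registry.example"

# .port is None when the URL has no explicit port, so append it only if set,
# exactly as the patched __init__ does.
new_base = f"https://{url.hostname}"
if url.port:
    new_base += f":{url.port}"              # "https://registry.example:5050"
print(old_base, new_base)
```
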
gh_patches_debug_24709 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1535 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add compat with pytest-capturelog for note()-style logging capture?
Using hypothesis with code that has logging statements makes it very difficult to use those log calls to debug.
Having the ability to have logging output captured in the style of `note()` would be extremely useful. The [pytest-capturelog](https://pypi.python.org/pypi/pytest-capturelog) plugin collects the logging output into the test failure message. It would be really nice to have some kind of cross-compatibility with them so that it can group captured logs by example rather than at the test-function level
</issue>
<code>
[start of hypothesis-python/src/hypothesis/control.py]
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import traceback
21
22 from hypothesis import Verbosity, settings
23 from hypothesis.errors import CleanupFailed, InvalidArgument, \
24 UnsatisfiedAssumption
25 from hypothesis.reporting import report
26 from hypothesis.utils.dynamicvariables import DynamicVariable
27
28 if False:
29 from typing import Any, AnyStr # noqa
30
31
32 def reject():
33 raise UnsatisfiedAssumption()
34
35
36 def assume(condition):
37 # type: (Any) -> bool
38 """Calling ``assume`` is like an :ref:`assert <python:assert>` that marks
39 the example as bad, rather than failing the test.
40
41 This allows you to specify properties that you *assume* will be
42 true, and let Hypothesis try to avoid similar examples in future.
43 """
44 if not condition:
45 raise UnsatisfiedAssumption()
46 return True
47
48
49 _current_build_context = DynamicVariable(None)
50
51
52 def current_build_context():
53 context = _current_build_context.value
54 if context is None:
55 raise InvalidArgument(
56 u'No build context registered')
57 return context
58
59
60 class BuildContext(object):
61
62 def __init__(self, data, is_final=False, close_on_capture=True):
63 self.data = data
64 self.tasks = []
65 self.is_final = is_final
66 self.close_on_capture = close_on_capture
67 self.close_on_del = False
68 self.notes = []
69
70 def __enter__(self):
71 self.assign_variable = _current_build_context.with_value(self)
72 self.assign_variable.__enter__()
73 return self
74
75 def __exit__(self, exc_type, exc_value, tb):
76 self.assign_variable.__exit__(exc_type, exc_value, tb)
77 if self.close() and exc_type is None:
78 raise CleanupFailed()
79
80 def local(self):
81 return _current_build_context.with_value(self)
82
83 def close(self):
84 any_failed = False
85 for task in self.tasks:
86 try:
87 task()
88 except BaseException:
89 any_failed = True
90 report(traceback.format_exc())
91 return any_failed
92
93
94 def cleanup(teardown):
95 """Register a function to be called when the current test has finished
96 executing. Any exceptions thrown in teardown will be printed but not
97 rethrown.
98
99 Inside a test this isn't very interesting, because you can just use
100 a finally block, but note that you can use this inside map, flatmap,
101 etc. in order to e.g. insist that a value is closed at the end.
102 """
103 context = _current_build_context.value
104 if context is None:
105 raise InvalidArgument(
106 u'Cannot register cleanup outside of build context')
107 context.tasks.append(teardown)
108
109
110 def note(value):
111 # type: (AnyStr) -> None
112 """Report this value in the final execution."""
113 context = _current_build_context.value
114 if context is None:
115 raise InvalidArgument(
116 'Cannot make notes outside of a test')
117 context.notes.append(value)
118 if context.is_final or settings.default.verbosity >= Verbosity.verbose:
119 report(value)
120
121
122 def event(value):
123 # type: (AnyStr) -> None
124 """Record an event that occurred this test. Statistics on number of test
125 runs with each event will be reported at the end if you run Hypothesis in
126 statistics reporting mode.
127
128 Events should be strings or convertible to them.
129 """
130 context = _current_build_context.value
131 if context is None:
132 raise InvalidArgument(
133 'Cannot make record events outside of a test')
134
135 if context.data is not None:
136 context.data.note_event(value)
137
[end of hypothesis-python/src/hypothesis/control.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hypothesis-python/src/hypothesis/control.py b/hypothesis-python/src/hypothesis/control.py
--- a/hypothesis-python/src/hypothesis/control.py
+++ b/hypothesis-python/src/hypothesis/control.py
@@ -17,6 +17,7 @@
from __future__ import division, print_function, absolute_import
+import logging
import traceback
from hypothesis import Verbosity, settings
@@ -66,14 +67,20 @@
self.close_on_capture = close_on_capture
self.close_on_del = False
self.notes = []
+ self.original_logging_disable = logging.NOTSET
def __enter__(self):
+ if not self.is_final:
+ self.original_logging_disable = logging.root.manager.disable
+ logging.disable(logging.CRITICAL)
self.assign_variable = _current_build_context.with_value(self)
self.assign_variable.__enter__()
return self
def __exit__(self, exc_type, exc_value, tb):
self.assign_variable.__exit__(exc_type, exc_value, tb)
+ if not self.is_final:
+ logging.disable(self.original_logging_disable)
if self.close() and exc_type is None:
raise CleanupFailed()
| {"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/control.py b/hypothesis-python/src/hypothesis/control.py\n--- a/hypothesis-python/src/hypothesis/control.py\n+++ b/hypothesis-python/src/hypothesis/control.py\n@@ -17,6 +17,7 @@\n \n from __future__ import division, print_function, absolute_import\n \n+import logging\n import traceback\n \n from hypothesis import Verbosity, settings\n@@ -66,14 +67,20 @@\n self.close_on_capture = close_on_capture\n self.close_on_del = False\n self.notes = []\n+ self.original_logging_disable = logging.NOTSET\n \n def __enter__(self):\n+ if not self.is_final:\n+ self.original_logging_disable = logging.root.manager.disable\n+ logging.disable(logging.CRITICAL)\n self.assign_variable = _current_build_context.with_value(self)\n self.assign_variable.__enter__()\n return self\n \n def __exit__(self, exc_type, exc_value, tb):\n self.assign_variable.__exit__(exc_type, exc_value, tb)\n+ if not self.is_final:\n+ logging.disable(self.original_logging_disable)\n if self.close() and exc_type is None:\n raise CleanupFailed()\n", "issue": "Add compat with pytest-capturelog for note()-style logging capture?\nUsing hypothesis with code that has logging statements makes it very difficult to use those log calls to debug.\n\nHaving the ability to have logging output captured in the style of `note()` would be extremely useful. The [pytest-capturelog](https://pypi.python.org/pypi/pytest-capturelog) plugin collects the logging output into the test failure message. It would be really nice to have some kind of cross-compatibility with them so that it can group captured logs by example rather than at the test-function level\n\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport traceback\n\nfrom hypothesis import Verbosity, settings\nfrom hypothesis.errors import CleanupFailed, InvalidArgument, \\\n UnsatisfiedAssumption\nfrom hypothesis.reporting import report\nfrom hypothesis.utils.dynamicvariables import DynamicVariable\n\nif False:\n from typing import Any, AnyStr # noqa\n\n\ndef reject():\n raise UnsatisfiedAssumption()\n\n\ndef assume(condition):\n # type: (Any) -> bool\n \"\"\"Calling ``assume`` is like an :ref:`assert <python:assert>` that marks\n the example as bad, rather than failing the test.\n\n This allows you to specify properties that you *assume* will be\n true, and let Hypothesis try to avoid similar examples in future.\n \"\"\"\n if not condition:\n raise UnsatisfiedAssumption()\n return True\n\n\n_current_build_context = DynamicVariable(None)\n\n\ndef current_build_context():\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n u'No build context registered')\n return context\n\n\nclass BuildContext(object):\n\n def __init__(self, data, is_final=False, close_on_capture=True):\n self.data = data\n self.tasks = []\n self.is_final = is_final\n self.close_on_capture = close_on_capture\n self.close_on_del = False\n self.notes = []\n\n def __enter__(self):\n self.assign_variable = _current_build_context.with_value(self)\n self.assign_variable.__enter__()\n return self\n\n def __exit__(self, exc_type, exc_value, tb):\n self.assign_variable.__exit__(exc_type, exc_value, tb)\n if self.close() and exc_type is None:\n raise CleanupFailed()\n\n def local(self):\n return _current_build_context.with_value(self)\n\n def close(self):\n any_failed = False\n for task in self.tasks:\n try:\n task()\n except BaseException:\n any_failed = True\n report(traceback.format_exc())\n return any_failed\n\n\ndef cleanup(teardown):\n \"\"\"Register a function to be called when the current test has finished\n executing. Any exceptions thrown in teardown will be printed but not\n rethrown.\n\n Inside a test this isn't very interesting, because you can just use\n a finally block, but note that you can use this inside map, flatmap,\n etc. in order to e.g. insist that a value is closed at the end.\n \"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n u'Cannot register cleanup outside of build context')\n context.tasks.append(teardown)\n\n\ndef note(value):\n # type: (AnyStr) -> None\n \"\"\"Report this value in the final execution.\"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n 'Cannot make notes outside of a test')\n context.notes.append(value)\n if context.is_final or settings.default.verbosity >= Verbosity.verbose:\n report(value)\n\n\ndef event(value):\n # type: (AnyStr) -> None\n \"\"\"Record an event that occurred this test. Statistics on number of test\n runs with each event will be reported at the end if you run Hypothesis in\n statistics reporting mode.\n\n Events should be strings or convertible to them.\n \"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n 'Cannot make record events outside of a test')\n\n if context.data is not None:\n context.data.note_event(value)\n", "path": "hypothesis-python/src/hypothesis/control.py"}]} | 1,923 | 264 |
gh_patches_debug_14070 | rasdani/github-patches | git_diff | huggingface__diffusers-7013 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Color channel order for watermark embedding
### Describe the bug
The encoder from the invisible watermark library expects input images with the channel order BGR, which is the default in OpenCV. This can be seen [here](https://github.com/ShieldMnt/invisible-watermark/blob/68d0376d94a4701ed240af0841ec12e00676e325/imwatermark/maxDct.py#L21).
As far as I can see from [here](https://github.com/huggingface/diffusers/blob/3369bc810a09a52521bbf8cc1ec77df3a8c682a8/src/diffusers/pipelines/stable_diffusion_xl/watermark.py#L24), diffusers passes the images in RGB order.
The watermark encoder then converts the given image from BGR to YUV. When the image is passed with the wrong channel order, this will lead to unexpected U and V channel values.
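A minimal sketch of the channel-order round trip implied above (hedged: `image_rgb` is assumed to be an HxWx3 `uint8` NumPy array and `encoder` an already-configured `imwatermark.WatermarkEncoder`; the function name is illustrative):
```
def encode_watermark_rgb(encoder, image_rgb):
    # The encoder expects BGR (OpenCV convention), so reverse the channel axis first.
    image_bgr = image_rgb[:, :, ::-1]
    encoded_bgr = encoder.encode(image_bgr, "dwtDct")
    # Reverse again so the rest of the pipeline keeps receiving RGB.
    return encoded_bgr[:, :, ::-1]
```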
### Reproduction
n/a
### Logs
_No response_
### System Info
Python 3.10, diffusers 0.24.0, invisible-watermark-0.2.0
### Who can help?
_No response_
</issue>
<code>
[start of src/diffusers/pipelines/stable_diffusion_xl/watermark.py]
1 import numpy as np
2 import torch
3
4 from ...utils import is_invisible_watermark_available
5
6
7 if is_invisible_watermark_available():
8 from imwatermark import WatermarkEncoder
9
10
11 # Copied from https://github.com/Stability-AI/generative-models/blob/613af104c6b85184091d42d374fef420eddb356d/scripts/demo/streamlit_helpers.py#L66
12 WATERMARK_MESSAGE = 0b101100111110110010010000011110111011000110011110
13 # bin(x)[2:] gives bits of x as str, use int to convert them to 0/1
14 WATERMARK_BITS = [int(bit) for bit in bin(WATERMARK_MESSAGE)[2:]]
15
16
17 class StableDiffusionXLWatermarker:
18 def __init__(self):
19 self.watermark = WATERMARK_BITS
20 self.encoder = WatermarkEncoder()
21
22 self.encoder.set_watermark("bits", self.watermark)
23
24 def apply_watermark(self, images: torch.FloatTensor):
25 # can't encode images that are smaller than 256
26 if images.shape[-1] < 256:
27 return images
28
29 images = (255 * (images / 2 + 0.5)).cpu().permute(0, 2, 3, 1).float().numpy()
30
31 images = [self.encoder.encode(image, "dwtDct") for image in images]
32
33 images = torch.from_numpy(np.array(images)).permute(0, 3, 1, 2)
34
35 images = torch.clamp(2 * (images / 255 - 0.5), min=-1.0, max=1.0)
36 return images
37
[end of src/diffusers/pipelines/stable_diffusion_xl/watermark.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/diffusers/pipelines/stable_diffusion_xl/watermark.py b/src/diffusers/pipelines/stable_diffusion_xl/watermark.py
--- a/src/diffusers/pipelines/stable_diffusion_xl/watermark.py
+++ b/src/diffusers/pipelines/stable_diffusion_xl/watermark.py
@@ -28,9 +28,15 @@
images = (255 * (images / 2 + 0.5)).cpu().permute(0, 2, 3, 1).float().numpy()
- images = [self.encoder.encode(image, "dwtDct") for image in images]
+ # Convert RGB to BGR, which is the channel order expected by the watermark encoder.
+ images = images[:, :, :, ::-1]
- images = torch.from_numpy(np.array(images)).permute(0, 3, 1, 2)
+ # Add watermark and convert BGR back to RGB
+ images = [self.encoder.encode(image, "dwtDct")[:, :, ::-1] for image in images]
+
+ images = np.array(images)
+
+ images = torch.from_numpy(images).permute(0, 3, 1, 2)
images = torch.clamp(2 * (images / 255 - 0.5), min=-1.0, max=1.0)
return images
| {"golden_diff": "diff --git a/src/diffusers/pipelines/stable_diffusion_xl/watermark.py b/src/diffusers/pipelines/stable_diffusion_xl/watermark.py\n--- a/src/diffusers/pipelines/stable_diffusion_xl/watermark.py\n+++ b/src/diffusers/pipelines/stable_diffusion_xl/watermark.py\n@@ -28,9 +28,15 @@\n \n images = (255 * (images / 2 + 0.5)).cpu().permute(0, 2, 3, 1).float().numpy()\n \n- images = [self.encoder.encode(image, \"dwtDct\") for image in images]\n+ # Convert RGB to BGR, which is the channel order expected by the watermark encoder.\n+ images = images[:, :, :, ::-1]\n \n- images = torch.from_numpy(np.array(images)).permute(0, 3, 1, 2)\n+ # Add watermark and convert BGR back to RGB\n+ images = [self.encoder.encode(image, \"dwtDct\")[:, :, ::-1] for image in images]\n+\n+ images = np.array(images)\n+\n+ images = torch.from_numpy(images).permute(0, 3, 1, 2)\n \n images = torch.clamp(2 * (images / 255 - 0.5), min=-1.0, max=1.0)\n return images\n", "issue": "Color channel order for watermark embedding\n### Describe the bug\n\nThe encoder from the invisible watermark library expects input images with the channel order BGR, which is the default in OpenCV. This can be seen [here](https://github.com/ShieldMnt/invisible-watermark/blob/68d0376d94a4701ed240af0841ec12e00676e325/imwatermark/maxDct.py#L21).\r\n\r\nAs far as I can see from [here](https://github.com/huggingface/diffusers/blob/3369bc810a09a52521bbf8cc1ec77df3a8c682a8/src/diffusers/pipelines/stable_diffusion_xl/watermark.py#L24), diffusers passes the images in RGB order.\r\n\r\nThe watermark encoder then converts the given image from BGR to YUV. When the image is passed with the wrong channel order, this will lead to unexpected U and V channel values.\n\n### Reproduction\n\nn/a\n\n### Logs\n\n_No response_\n\n### System Info\n\nPython 3.10, diffusers 0.24.0, invisible-watermark-0.2.0\n\n### Who can help?\n\n_No response_\n", "before_files": [{"content": "import numpy as np\nimport torch\n\nfrom ...utils import is_invisible_watermark_available\n\n\nif is_invisible_watermark_available():\n from imwatermark import WatermarkEncoder\n\n\n# Copied from https://github.com/Stability-AI/generative-models/blob/613af104c6b85184091d42d374fef420eddb356d/scripts/demo/streamlit_helpers.py#L66\nWATERMARK_MESSAGE = 0b101100111110110010010000011110111011000110011110\n# bin(x)[2:] gives bits of x as str, use int to convert them to 0/1\nWATERMARK_BITS = [int(bit) for bit in bin(WATERMARK_MESSAGE)[2:]]\n\n\nclass StableDiffusionXLWatermarker:\n def __init__(self):\n self.watermark = WATERMARK_BITS\n self.encoder = WatermarkEncoder()\n\n self.encoder.set_watermark(\"bits\", self.watermark)\n\n def apply_watermark(self, images: torch.FloatTensor):\n # can't encode images that are smaller than 256\n if images.shape[-1] < 256:\n return images\n\n images = (255 * (images / 2 + 0.5)).cpu().permute(0, 2, 3, 1).float().numpy()\n\n images = [self.encoder.encode(image, \"dwtDct\") for image in images]\n\n images = torch.from_numpy(np.array(images)).permute(0, 3, 1, 2)\n\n images = torch.clamp(2 * (images / 255 - 0.5), min=-1.0, max=1.0)\n return images\n", "path": "src/diffusers/pipelines/stable_diffusion_xl/watermark.py"}]} | 1,331 | 322 |
gh_patches_debug_7983 | rasdani/github-patches | git_diff | pulp__pulpcore-4095 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Gunicorn consuming excessive amounts of memory
**Version**
3.16.z
**Describe the bug**
Gunicorn consumes excessive amounts of memory, 3.5-4 GB
**To Reproduce**
Unclear
**Expected behavior**
A single gunicorn process should not use 4 GB of memory
**Additional context**
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2035873
Katello forum discussion: https://community.theforeman.org/t/katello-4-5-foreman-3-3-memory-leak-in-gunicorn/29658/22
</issue>
<code>
[start of pulpcore/app/access_policy.py]
1 from functools import lru_cache
2 from rest_access_policy import AccessPolicy
3 from rest_framework.exceptions import APIException
4
5 from pulpcore.app.models import AccessPolicy as AccessPolicyModel
6 from pulpcore.app.util import get_view_urlpattern, get_viewset_for_model
7
8
9 class AccessPolicyFromDB(AccessPolicy):
10 """
11 An AccessPolicy that loads statements from an `AccessPolicy` model instance.
12 """
13
14 @staticmethod
15 @lru_cache
16 def get_access_policy(view):
17 """
18 Retrieves the AccessPolicy from the DB or None if it doesn't exist.
19
20 Args:
21 view (subclass of rest_framework.view.APIView): The view or viewset to receive the
22 AccessPolicy model for.
23
24 Returns:
25 Either a `pulpcore.app.models.AccessPolicy` or None.
26 """
27 try:
28 urlpattern = get_view_urlpattern(view)
29 except AttributeError:
30 # The view does not define a `urlpattern()` method, e.g. it's not a NamedModelViewset
31 return None
32
33 try:
34 return AccessPolicyModel.objects.get(viewset_name=urlpattern)
35 except AccessPolicyModel.DoesNotExist:
36 return None
37
38 @classmethod
39 def handle_creation_hooks(cls, obj):
40 """
41 Handle the creation hooks defined in this policy for the passed in `obj`.
42
43 Args:
44 cls: The class this method belongs to.
45 obj: The model instance to have its creation hooks handled for.
46
47 """
48 viewset = get_viewset_for_model(obj)
49 access_policy = cls.get_access_policy(viewset)
50 if access_policy and access_policy.creation_hooks is not None:
51 for creation_hook in access_policy.creation_hooks:
52 hook_name = creation_hook["function"]
53 try:
54 function = obj.REGISTERED_CREATION_HOOKS[hook_name]
55 except KeyError:
56 raise APIException(
57 f"Creation hook '{hook_name}' was not registered for this view set."
58 )
59
60 kwargs = creation_hook.get("parameters") or {}
61 function(**kwargs)
62
63 def scope_queryset(self, view, qs):
64 """
65 Scope the queryset based on the access policy `scope_queryset` method if present.
66 """
67 if access_policy := self.get_access_policy(view):
68 if access_policy.queryset_scoping:
69 scope = access_policy.queryset_scoping["function"]
70 if not (function := getattr(view, scope, None)):
71 raise APIException(
72 f"Queryset scoping method {scope} is not present on this view set."
73 )
74 kwargs = access_policy.queryset_scoping.get("parameters") or {}
75 qs = function(qs, **kwargs)
76 return qs
77
78 def get_policy_statements(self, request, view):
79 """
80 Return the policy statements from an AccessPolicy instance matching the viewset name.
81
82 This is an implementation of a method that will be called by
83 `rest_access_policy.AccessPolicy`. See the drf-access-policy docs for more info:
84
85 https://rsinger86.github.io/drf-access-policy/loading_external_source/
86
87 The `pulpcore.plugin.models.AccessPolicy` instance is looked up by the `viewset_name`
88 attribute using::
89
90 AccessPolicyModel.objects.get(viewset_name=get_view_urlpattern(view))
91
92 If a matching `pulpcore.plugin.models.AccessPolicy` cannot be found, a default behavior of
93 allowing only admin users to perform any operation is used. This fallback allows the Pulp
94 RBAC implementation to be turned on endpoint-by-endpoint with less effort.
95
96 Args:
97 request (rest_framework.request.Request): The request being checked for authorization.
98 view (subclass rest_framework.viewsets.GenericViewSet): The view name being requested.
99
100 Returns:
101 The access policy statements in drf-access-policy policy structure.
102 """
103 if access_policy_obj := self.get_access_policy(view):
104 return access_policy_obj.statements
105 else:
106 default_statement = [{"action": "*", "principal": "admin", "effect": "allow"}]
107 policy = getattr(view, "DEFAULT_ACCESS_POLICY", {"statements": default_statement})
108 return policy["statements"]
109
[end of pulpcore/app/access_policy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/access_policy.py b/pulpcore/app/access_policy.py
--- a/pulpcore/app/access_policy.py
+++ b/pulpcore/app/access_policy.py
@@ -1,4 +1,3 @@
-from functools import lru_cache
from rest_access_policy import AccessPolicy
from rest_framework.exceptions import APIException
@@ -12,7 +11,6 @@
"""
@staticmethod
- @lru_cache
def get_access_policy(view):
"""
Retrieves the AccessPolicy from the DB or None if it doesn't exist.
| {"golden_diff": "diff --git a/pulpcore/app/access_policy.py b/pulpcore/app/access_policy.py\n--- a/pulpcore/app/access_policy.py\n+++ b/pulpcore/app/access_policy.py\n@@ -1,4 +1,3 @@\n-from functools import lru_cache\n from rest_access_policy import AccessPolicy\n from rest_framework.exceptions import APIException\n \n@@ -12,7 +11,6 @@\n \"\"\"\n \n @staticmethod\n- @lru_cache\n def get_access_policy(view):\n \"\"\"\n Retrieves the AccessPolicy from the DB or None if it doesn't exist.\n", "issue": "Gunicorn consuming excessive amounts of memory\n**Version**\r\n3.16.z\r\n\r\n**Describe the bug**\r\nGunicorn consuming excessive amounts of memory, 3.5-4gb\r\n\r\n**To Reproduce**\r\nUnclear\r\n\r\n**Expected behavior**\r\nProbably not to have a single gunicorn process use 4gb of memory\r\n\r\n**Additional context**\r\n\r\nBZ: https://bugzilla.redhat.com/show_bug.cgi?id=2035873\r\nKatello forum discussion: https://community.theforeman.org/t/katello-4-5-foreman-3-3-memory-leak-in-gunicorn/29658/22\n", "before_files": [{"content": "from functools import lru_cache\nfrom rest_access_policy import AccessPolicy\nfrom rest_framework.exceptions import APIException\n\nfrom pulpcore.app.models import AccessPolicy as AccessPolicyModel\nfrom pulpcore.app.util import get_view_urlpattern, get_viewset_for_model\n\n\nclass AccessPolicyFromDB(AccessPolicy):\n \"\"\"\n An AccessPolicy that loads statements from an `AccessPolicy` model instance.\n \"\"\"\n\n @staticmethod\n @lru_cache\n def get_access_policy(view):\n \"\"\"\n Retrieves the AccessPolicy from the DB or None if it doesn't exist.\n\n Args:\n view (subclass of rest_framework.view.APIView): The view or viewset to receive the\n AccessPolicy model for.\n\n Returns:\n Either a `pulpcore.app.models.AccessPolicy` or None.\n \"\"\"\n try:\n urlpattern = get_view_urlpattern(view)\n except AttributeError:\n # The view does not define a `urlpattern()` method, e.g. 
it's not a NamedModelViewset\n return None\n\n try:\n return AccessPolicyModel.objects.get(viewset_name=urlpattern)\n except AccessPolicyModel.DoesNotExist:\n return None\n\n @classmethod\n def handle_creation_hooks(cls, obj):\n \"\"\"\n Handle the creation hooks defined in this policy for the passed in `obj`.\n\n Args:\n cls: The class this method belongs to.\n obj: The model instance to have its creation hooks handled for.\n\n \"\"\"\n viewset = get_viewset_for_model(obj)\n access_policy = cls.get_access_policy(viewset)\n if access_policy and access_policy.creation_hooks is not None:\n for creation_hook in access_policy.creation_hooks:\n hook_name = creation_hook[\"function\"]\n try:\n function = obj.REGISTERED_CREATION_HOOKS[hook_name]\n except KeyError:\n raise APIException(\n f\"Creation hook '{hook_name}' was not registered for this view set.\"\n )\n\n kwargs = creation_hook.get(\"parameters\") or {}\n function(**kwargs)\n\n def scope_queryset(self, view, qs):\n \"\"\"\n Scope the queryset based on the access policy `scope_queryset` method if present.\n \"\"\"\n if access_policy := self.get_access_policy(view):\n if access_policy.queryset_scoping:\n scope = access_policy.queryset_scoping[\"function\"]\n if not (function := getattr(view, scope, None)):\n raise APIException(\n f\"Queryset scoping method {scope} is not present on this view set.\"\n )\n kwargs = access_policy.queryset_scoping.get(\"parameters\") or {}\n qs = function(qs, **kwargs)\n return qs\n\n def get_policy_statements(self, request, view):\n \"\"\"\n Return the policy statements from an AccessPolicy instance matching the viewset name.\n\n This is an implementation of a method that will be called by\n `rest_access_policy.AccessPolicy`. See the drf-access-policy docs for more info:\n\n https://rsinger86.github.io/drf-access-policy/loading_external_source/\n\n The `pulpcore.plugin.models.AccessPolicy` instance is looked up by the `viewset_name`\n attribute using::\n\n AccessPolicyModel.objects.get(viewset_name=get_view_urlpattern(view))\n\n If a matching `pulpcore.plugin.models.AccessPolicy` cannot be found, a default behavior of\n allowing only admin users to perform any operation is used. This fallback allows the Pulp\n RBAC implementation to be turned on endpoint-by-endpoint with less effort.\n\n Args:\n request (rest_framework.request.Request): The request being checked for authorization.\n view (subclass rest_framework.viewsets.GenericViewSet): The view name being requested.\n\n Returns:\n The access policy statements in drf-access-policy policy structure.\n \"\"\"\n if access_policy_obj := self.get_access_policy(view):\n return access_policy_obj.statements\n else:\n default_statement = [{\"action\": \"*\", \"principal\": \"admin\", \"effect\": \"allow\"}]\n policy = getattr(view, \"DEFAULT_ACCESS_POLICY\", {\"statements\": default_statement})\n return policy[\"statements\"]\n", "path": "pulpcore/app/access_policy.py"}]} | 1,759 | 125 |
gh_patches_debug_32313 | rasdani/github-patches | git_diff | sopel-irc__sopel-725 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make .seen persist over bot restarts
Currently, when a Willie-based bot is restarted, it loses all the info about who it has seen. This is quite inconvenient, as restarts are needed quite often, especially on networks with frequent netsplits where the bot keeps losing its nick. The proposed solution is to keep a persistent DB containing the relevant info, from which old records could optionally be auto-deleted at regular intervals.
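A minimal sketch of how this can be done with Willie's per-nick key-value store, which is the approach the diff below takes (assuming `bot.db` exposes `set_nick_value`/`get_nick_value`; the function names are illustrative):
```
import time


def remember(bot, trigger):
    # Persist when and where the nick was last seen; survives bot restarts.
    bot.db.set_nick_value(trigger.nick, 'seen_timestamp', time.time())
    bot.db.set_nick_value(trigger.nick, 'seen_channel', trigger.sender)


def last_seen(bot, nick):
    # Returns None if the nick has never been recorded.
    return bot.db.get_nick_value(nick, 'seen_timestamp')
```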
</issue>
<code>
[start of willie/modules/seen.py]
1 # coding=utf8
2 """
3 seen.py - Willie Seen Module
4 Copyright 2008, Sean B. Palmer, inamidst.com
5 Copyright © 2012, Elad Alfassa <[email protected]>
6 Licensed under the Eiffel Forum License 2.
7
8 http://willie.dftba.net
9 """
10 from __future__ import unicode_literals
11
12 import time
13 import datetime
14 from willie.tools import Ddict, Identifier, get_timezone, format_time
15 from willie.module import commands, rule, priority
16
17 seen_dict = Ddict(dict)
18
19
20 @commands('seen')
21 def seen(bot, trigger):
22 """Reports when and where the user was last seen."""
23 if not trigger.group(2):
24 bot.say(".seen <nick> - Reports when <nick> was last seen.")
25 return
26 nick = Identifier(trigger.group(2).strip())
27 if nick in seen_dict:
28 timestamp = seen_dict[nick]['timestamp']
29 channel = seen_dict[nick]['channel']
30 message = seen_dict[nick]['message']
31
32 tz = get_timezone(bot.db, bot.config, None, trigger.nick,
33 trigger.sender)
34 saw = datetime.datetime.utcfromtimestamp(timestamp)
35 timestamp = format_time(bot.db, bot.config, tz, trigger.nick,
36 trigger.sender, saw)
37
38 msg = "I last saw {} at {}".format(nick, timestamp)
39 if Identifier(channel) == trigger.sender:
40 msg = msg + " in here, saying " + message
41 else:
42 msg += " in another channel."
43 bot.say(str(trigger.nick) + ': ' + msg)
44 else:
45 bot.say("Sorry, I haven't seen %s around." % nick)
46
47
48 @rule('(.*)')
49 @priority('low')
50 def note(bot, trigger):
51 if not trigger.is_privmsg:
52 nick = Identifier(trigger.nick)
53 seen_dict[nick]['timestamp'] = time.time()
54 seen_dict[nick]['channel'] = trigger.sender
55 seen_dict[nick]['message'] = trigger
56
[end of willie/modules/seen.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/willie/modules/seen.py b/willie/modules/seen.py
--- a/willie/modules/seen.py
+++ b/willie/modules/seen.py
@@ -11,11 +11,9 @@
import time
import datetime
-from willie.tools import Ddict, Identifier, get_timezone, format_time
+from willie.tools import Identifier, get_timezone, format_time
from willie.module import commands, rule, priority
-seen_dict = Ddict(dict)
-
@commands('seen')
def seen(bot, trigger):
@@ -23,11 +21,11 @@
if not trigger.group(2):
bot.say(".seen <nick> - Reports when <nick> was last seen.")
return
- nick = Identifier(trigger.group(2).strip())
- if nick in seen_dict:
- timestamp = seen_dict[nick]['timestamp']
- channel = seen_dict[nick]['channel']
- message = seen_dict[nick]['message']
+ nick = trigger.group(2).strip()
+ timestamp = bot.db.get_nick_value(nick, 'seen_timestamp')
+ if timestamp:
+ channel = bot.db.get_nick_value(nick, 'seen_channel')
+ message = bot.db.get_nick_value(nick, 'seen_message')
tz = get_timezone(bot.db, bot.config, None, trigger.nick,
trigger.sender)
@@ -42,14 +40,13 @@
msg += " in another channel."
bot.say(str(trigger.nick) + ': ' + msg)
else:
- bot.say("Sorry, I haven't seen %s around." % nick)
+ bot.say("Sorry, I haven't seen {} around.".format(nick))
@rule('(.*)')
@priority('low')
def note(bot, trigger):
if not trigger.is_privmsg:
- nick = Identifier(trigger.nick)
- seen_dict[nick]['timestamp'] = time.time()
- seen_dict[nick]['channel'] = trigger.sender
- seen_dict[nick]['message'] = trigger
+ bot.db.set_nick_value(trigger.nick, 'seen_timestamp', time.time())
+ bot.db.set_nick_value(trigger.nick, 'seen_channel', trigger.sender)
+ bot.db.set_nick_value(trigger.nick, 'seen_message', trigger)
| {"golden_diff": "diff --git a/willie/modules/seen.py b/willie/modules/seen.py\n--- a/willie/modules/seen.py\n+++ b/willie/modules/seen.py\n@@ -11,11 +11,9 @@\n \n import time\n import datetime\n-from willie.tools import Ddict, Identifier, get_timezone, format_time\n+from willie.tools import Identifier, get_timezone, format_time\n from willie.module import commands, rule, priority\n \n-seen_dict = Ddict(dict)\n-\n \n @commands('seen')\n def seen(bot, trigger):\n@@ -23,11 +21,11 @@\n if not trigger.group(2):\n bot.say(\".seen <nick> - Reports when <nick> was last seen.\")\n return\n- nick = Identifier(trigger.group(2).strip())\n- if nick in seen_dict:\n- timestamp = seen_dict[nick]['timestamp']\n- channel = seen_dict[nick]['channel']\n- message = seen_dict[nick]['message']\n+ nick = trigger.group(2).strip()\n+ timestamp = bot.db.get_nick_value(nick, 'seen_timestamp')\n+ if timestamp:\n+ channel = bot.db.get_nick_value(nick, 'seen_channel')\n+ message = bot.db.get_nick_value(nick, 'seen_message')\n \n tz = get_timezone(bot.db, bot.config, None, trigger.nick,\n trigger.sender)\n@@ -42,14 +40,13 @@\n msg += \" in another channel.\"\n bot.say(str(trigger.nick) + ': ' + msg)\n else:\n- bot.say(\"Sorry, I haven't seen %s around.\" % nick)\n+ bot.say(\"Sorry, I haven't seen {} around.\".format(nick))\n \n \n @rule('(.*)')\n @priority('low')\n def note(bot, trigger):\n if not trigger.is_privmsg:\n- nick = Identifier(trigger.nick)\n- seen_dict[nick]['timestamp'] = time.time()\n- seen_dict[nick]['channel'] = trigger.sender\n- seen_dict[nick]['message'] = trigger\n+ bot.db.set_nick_value(trigger.nick, 'seen_timestamp', time.time())\n+ bot.db.set_nick_value(trigger.nick, 'seen_channel', trigger.sender)\n+ bot.db.set_nick_value(trigger.nick, 'seen_message', trigger)\n", "issue": "Make .seen persist over bot restarts\nCurrently, when a Willie-based bot is restarted, it loses all the info about who it saw. It is quite inconvenient, as restarts are required quite often, especially on networks where there are lots of netsplits going on and the bot loses its nick to them all the time. The proposed solution would be to keep a persistent DB containing the relevant info from which old records may or may not be auto-deleted at regular intervals.\n\n", "before_files": [{"content": "# coding=utf8\n\"\"\"\nseen.py - Willie Seen Module\nCopyright 2008, Sean B. 
Palmer, inamidst.com\nCopyright \u00a9 2012, Elad Alfassa <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttp://willie.dftba.net\n\"\"\"\nfrom __future__ import unicode_literals\n\nimport time\nimport datetime\nfrom willie.tools import Ddict, Identifier, get_timezone, format_time\nfrom willie.module import commands, rule, priority\n\nseen_dict = Ddict(dict)\n\n\n@commands('seen')\ndef seen(bot, trigger):\n \"\"\"Reports when and where the user was last seen.\"\"\"\n if not trigger.group(2):\n bot.say(\".seen <nick> - Reports when <nick> was last seen.\")\n return\n nick = Identifier(trigger.group(2).strip())\n if nick in seen_dict:\n timestamp = seen_dict[nick]['timestamp']\n channel = seen_dict[nick]['channel']\n message = seen_dict[nick]['message']\n\n tz = get_timezone(bot.db, bot.config, None, trigger.nick,\n trigger.sender)\n saw = datetime.datetime.utcfromtimestamp(timestamp)\n timestamp = format_time(bot.db, bot.config, tz, trigger.nick,\n trigger.sender, saw)\n\n msg = \"I last saw {} at {}\".format(nick, timestamp)\n if Identifier(channel) == trigger.sender:\n msg = msg + \" in here, saying \" + message\n else:\n msg += \" in another channel.\"\n bot.say(str(trigger.nick) + ': ' + msg)\n else:\n bot.say(\"Sorry, I haven't seen %s around.\" % nick)\n\n\n@rule('(.*)')\n@priority('low')\ndef note(bot, trigger):\n if not trigger.is_privmsg:\n nick = Identifier(trigger.nick)\n seen_dict[nick]['timestamp'] = time.time()\n seen_dict[nick]['channel'] = trigger.sender\n seen_dict[nick]['message'] = trigger\n", "path": "willie/modules/seen.py"}]} | 1,170 | 500 |
gh_patches_debug_16434 | rasdani/github-patches | git_diff | lightly-ai__lightly-655 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Did cifar10 dataset need gaussian blur
In https://github.com/lightly-ai/lightly/blob/master/lightly/data/collate.py, the class SimCLRCollateFunction() presents an example of how to use it:
>>> collate_fn = SimCLRCollateFunction(
>>>     input_size=32,
>>>     gaussian_blur=0.,
>>> )
but in https://docs.lightly.ai/examples/simclr.html
collate_fn = SimCLRCollateFunction(input_size=32)
so I wonder which of the two is the recommended setting?
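For reference, the diff below aligns the examples with the docstring, so the CIFAR-10 call becomes:
```
collate_fn = SimCLRCollateFunction(
    input_size=32,
    gaussian_blur=0.,
)
```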
</issue>
<code>
[start of examples/pytorch/simclr.py]
1 import torch
2 from torch import nn
3 import torchvision
4
5 from lightly.data import LightlyDataset
6 from lightly.data import SimCLRCollateFunction
7 from lightly.loss import NTXentLoss
8 from lightly.models.modules import SimCLRProjectionHead
9
10
11 class SimCLR(nn.Module):
12 def __init__(self, backbone):
13 super().__init__()
14 self.backbone = backbone
15 self.projection_head = SimCLRProjectionHead(512, 512, 128)
16
17 def forward(self, x):
18 x = self.backbone(x).flatten(start_dim=1)
19 z = self.projection_head(x)
20 return z
21
22
23 resnet = torchvision.models.resnet18()
24 backbone = nn.Sequential(*list(resnet.children())[:-1])
25 model = SimCLR(backbone)
26
27 device = "cuda" if torch.cuda.is_available() else "cpu"
28 model.to(device)
29
30 cifar10 = torchvision.datasets.CIFAR10("datasets/cifar10", download=True)
31 dataset = LightlyDataset.from_torch_dataset(cifar10)
32 # or create a dataset from a folder containing images or videos:
33 # dataset = LightlyDataset("path/to/folder")
34
35 collate_fn = SimCLRCollateFunction(input_size=32)
36
37 dataloader = torch.utils.data.DataLoader(
38 dataset,
39 batch_size=256,
40 collate_fn=collate_fn,
41 shuffle=True,
42 drop_last=True,
43 num_workers=8,
44 )
45
46 criterion = NTXentLoss()
47 optimizer = torch.optim.SGD(model.parameters(), lr=0.06)
48
49 print("Starting Training")
50 for epoch in range(10):
51 total_loss = 0
52 for (x0, x1), _, _ in dataloader:
53 x0 = x0.to(device)
54 x1 = x1.to(device)
55 z0 = model(x0)
56 z1 = model(x1)
57 loss = criterion(z0, z1)
58 total_loss += loss.detach()
59 loss.backward()
60 optimizer.step()
61 optimizer.zero_grad()
62 avg_loss = total_loss / len(dataloader)
63 print(f"epoch: {epoch:>02}, loss: {avg_loss:.5f}")
64
[end of examples/pytorch/simclr.py]
[start of examples/pytorch_lightning/simclr.py]
1 import torch
2 from torch import nn
3 import torchvision
4 import pytorch_lightning as pl
5
6 from lightly.data import LightlyDataset
7 from lightly.data import SimCLRCollateFunction
8 from lightly.loss import NTXentLoss
9 from lightly.models.modules import SimCLRProjectionHead
10
11
12 class SimCLR(pl.LightningModule):
13 def __init__(self):
14 super().__init__()
15 resnet = torchvision.models.resnet18()
16 self.backbone = nn.Sequential(*list(resnet.children())[:-1])
17 self.projection_head = SimCLRProjectionHead(512, 2048, 2048)
18 self.criterion = NTXentLoss()
19
20 def forward(self, x):
21 x = self.backbone(x).flatten(start_dim=1)
22 z = self.projection_head(x)
23 return z
24
25 def training_step(self, batch, batch_index):
26 (x0, x1), _, _ = batch
27 z0 = self.forward(x0)
28 z1 = self.forward(x1)
29 loss = self.criterion(z0, z1)
30 return loss
31
32 def configure_optimizers(self):
33 optim = torch.optim.SGD(self.parameters(), lr=0.06)
34 return optim
35
36
37 model = SimCLR()
38
39 cifar10 = torchvision.datasets.CIFAR10("datasets/cifar10", download=True)
40 dataset = LightlyDataset.from_torch_dataset(cifar10)
41 # or create a dataset from a folder containing images or videos:
42 # dataset = LightlyDataset("path/to/folder")
43
44 collate_fn = SimCLRCollateFunction(input_size=32)
45
46 dataloader = torch.utils.data.DataLoader(
47 dataset,
48 batch_size=256,
49 collate_fn=collate_fn,
50 shuffle=True,
51 drop_last=True,
52 num_workers=8,
53 )
54
55 gpus = 1 if torch.cuda.is_available() else 0
56
57 trainer = pl.Trainer(max_epochs=10, gpus=gpus)
58 trainer.fit(model=model, train_dataloaders=dataloader)
59
[end of examples/pytorch_lightning/simclr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/pytorch/simclr.py b/examples/pytorch/simclr.py
--- a/examples/pytorch/simclr.py
+++ b/examples/pytorch/simclr.py
@@ -32,7 +32,10 @@
# or create a dataset from a folder containing images or videos:
# dataset = LightlyDataset("path/to/folder")
-collate_fn = SimCLRCollateFunction(input_size=32)
+collate_fn = SimCLRCollateFunction(
+ input_size=32,
+ gaussian_blur=0.,
+)
dataloader = torch.utils.data.DataLoader(
dataset,
diff --git a/examples/pytorch_lightning/simclr.py b/examples/pytorch_lightning/simclr.py
--- a/examples/pytorch_lightning/simclr.py
+++ b/examples/pytorch_lightning/simclr.py
@@ -41,7 +41,10 @@
# or create a dataset from a folder containing images or videos:
# dataset = LightlyDataset("path/to/folder")
-collate_fn = SimCLRCollateFunction(input_size=32)
+collate_fn = SimCLRCollateFunction(
+ input_size=32,
+ gaussian_blur=0.,
+)
dataloader = torch.utils.data.DataLoader(
dataset,
| {"golden_diff": "diff --git a/examples/pytorch/simclr.py b/examples/pytorch/simclr.py\n--- a/examples/pytorch/simclr.py\n+++ b/examples/pytorch/simclr.py\n@@ -32,7 +32,10 @@\n # or create a dataset from a folder containing images or videos:\n # dataset = LightlyDataset(\"path/to/folder\")\n \n-collate_fn = SimCLRCollateFunction(input_size=32)\n+collate_fn = SimCLRCollateFunction(\n+ input_size=32,\n+ gaussian_blur=0.,\n+)\n \n dataloader = torch.utils.data.DataLoader(\n dataset,\ndiff --git a/examples/pytorch_lightning/simclr.py b/examples/pytorch_lightning/simclr.py\n--- a/examples/pytorch_lightning/simclr.py\n+++ b/examples/pytorch_lightning/simclr.py\n@@ -41,7 +41,10 @@\n # or create a dataset from a folder containing images or videos:\n # dataset = LightlyDataset(\"path/to/folder\")\n \n-collate_fn = SimCLRCollateFunction(input_size=32)\n+collate_fn = SimCLRCollateFunction(\n+ input_size=32,\n+ gaussian_blur=0.,\n+)\n \n dataloader = torch.utils.data.DataLoader(\n dataset,\n", "issue": "Did cifar10 dataset need gaussian blur\nIn the https://github.com/lightly-ai/lightly/blob/master/lightly/data/collate.py, class SimCLRCollateFunction() presents a n example for using it,\r\ncollate_fn = SimCLRCollateFunction(\r\n >>> input_size=32,\r\n >>> gaussian_blur=0.,\r\n >>> )\r\nbut in https://docs.lightly.ai/examples/simclr.html\r\ncollate_fn = SimCLRCollateFunction(input_size=32)\r\nso I wonder which one is the one you suggested?\n", "before_files": [{"content": "import torch\nfrom torch import nn\nimport torchvision\n\nfrom lightly.data import LightlyDataset\nfrom lightly.data import SimCLRCollateFunction\nfrom lightly.loss import NTXentLoss\nfrom lightly.models.modules import SimCLRProjectionHead\n\n\nclass SimCLR(nn.Module):\n def __init__(self, backbone):\n super().__init__()\n self.backbone = backbone\n self.projection_head = SimCLRProjectionHead(512, 512, 128)\n\n def forward(self, x):\n x = self.backbone(x).flatten(start_dim=1)\n z = self.projection_head(x)\n return z\n\n\nresnet = torchvision.models.resnet18()\nbackbone = nn.Sequential(*list(resnet.children())[:-1])\nmodel = SimCLR(backbone)\n\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmodel.to(device)\n\ncifar10 = torchvision.datasets.CIFAR10(\"datasets/cifar10\", download=True)\ndataset = LightlyDataset.from_torch_dataset(cifar10)\n# or create a dataset from a folder containing images or videos:\n# dataset = LightlyDataset(\"path/to/folder\")\n\ncollate_fn = SimCLRCollateFunction(input_size=32)\n\ndataloader = torch.utils.data.DataLoader(\n dataset,\n batch_size=256,\n collate_fn=collate_fn,\n shuffle=True,\n drop_last=True,\n num_workers=8,\n)\n\ncriterion = NTXentLoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.06)\n\nprint(\"Starting Training\")\nfor epoch in range(10):\n total_loss = 0\n for (x0, x1), _, _ in dataloader:\n x0 = x0.to(device)\n x1 = x1.to(device)\n z0 = model(x0)\n z1 = model(x1)\n loss = criterion(z0, z1)\n total_loss += loss.detach()\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n avg_loss = total_loss / len(dataloader)\n print(f\"epoch: {epoch:>02}, loss: {avg_loss:.5f}\")\n", "path": "examples/pytorch/simclr.py"}, {"content": "import torch\nfrom torch import nn\nimport torchvision\nimport pytorch_lightning as pl\n\nfrom lightly.data import LightlyDataset\nfrom lightly.data import SimCLRCollateFunction\nfrom lightly.loss import NTXentLoss\nfrom lightly.models.modules import SimCLRProjectionHead\n\n\nclass SimCLR(pl.LightningModule):\n def __init__(self):\n 
super().__init__()\n resnet = torchvision.models.resnet18()\n self.backbone = nn.Sequential(*list(resnet.children())[:-1])\n self.projection_head = SimCLRProjectionHead(512, 2048, 2048)\n self.criterion = NTXentLoss()\n\n def forward(self, x):\n x = self.backbone(x).flatten(start_dim=1)\n z = self.projection_head(x)\n return z\n\n def training_step(self, batch, batch_index):\n (x0, x1), _, _ = batch\n z0 = self.forward(x0)\n z1 = self.forward(x1)\n loss = self.criterion(z0, z1)\n return loss\n\n def configure_optimizers(self):\n optim = torch.optim.SGD(self.parameters(), lr=0.06)\n return optim\n\n\nmodel = SimCLR()\n\ncifar10 = torchvision.datasets.CIFAR10(\"datasets/cifar10\", download=True)\ndataset = LightlyDataset.from_torch_dataset(cifar10)\n# or create a dataset from a folder containing images or videos:\n# dataset = LightlyDataset(\"path/to/folder\")\n\ncollate_fn = SimCLRCollateFunction(input_size=32)\n\ndataloader = torch.utils.data.DataLoader(\n dataset,\n batch_size=256,\n collate_fn=collate_fn,\n shuffle=True,\n drop_last=True,\n num_workers=8,\n)\n\ngpus = 1 if torch.cuda.is_available() else 0\n\ntrainer = pl.Trainer(max_epochs=10, gpus=gpus)\ntrainer.fit(model=model, train_dataloaders=dataloader)\n", "path": "examples/pytorch_lightning/simclr.py"}]} | 1,837 | 289 |
gh_patches_debug_3496 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-858 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
from_pydantic() converts False values to None
When calling `from_pydantic()`, falsy values (any value for which `bool(value) == False`, such as `''` or `False`) may be replaced with `None`.
This recreates the issue:
```
from pydantic import BaseModel
import strawberry
class PydanticClass(BaseModel):
str1: str
str2: str
bool1: bool
bool2: bool
@strawberry.experimental.pydantic.type(
model=PydanticClass,
fields=['str1', 'str2', 'bool1', 'bool2']
)
class StrawberryClass:
pass
str1 = 'nonempty'
str2 = ''
bool1 = True
bool2 = False
myobj = PydanticClass(
str1=str1,
str2=str2,
bool1=bool1,
bool2=bool2
)
print('pydantic obj:', myobj)
converted = StrawberryClass.from_pydantic(myobj)
print('converted obj:', converted)
```
The output:
```
pydantic obj: str1='nonempty' str2='' bool1=True bool2=False
converted obj: StrawberryClass(str1='nonempty', str2=None, bool1=True, bool2=None)
```
Both str2 and bool2 were converted to None.
Location of the bug: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/experimental/pydantic/conversion.py#L10
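The root cause is the truthiness-based fallback at that line; a small, self-contained illustration of the difference (the second pattern is the one the diff below adopts):
```
data_from_model = ''        # a legitimate, falsy value coming from the model
extra = 'fallback'

# Buggy pattern: any falsy value ('', False, 0) is treated as "missing".
data = data_from_model or extra
assert data == 'fallback'   # the empty string was silently replaced

# Fixed pattern: only fall back when the value is actually absent.
data = data_from_model if data_from_model is not None else extra
assert data == ''
```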
</issue>
<code>
[start of strawberry/experimental/pydantic/conversion.py]
1 from typing import cast
2
3 from strawberry.field import StrawberryField
4 from strawberry.scalars import is_scalar
5
6
7 def _convert_from_pydantic_to_strawberry_field(
8 field: StrawberryField, data_from_model=None, extra=None
9 ):
10 data = data_from_model or extra
11
12 if field.is_list:
13 assert field.child is not None
14
15 items = [None for _ in data]
16
17 for index, item in enumerate(data):
18 items[index] = _convert_from_pydantic_to_strawberry_field(
19 field.child,
20 data_from_model=item,
21 extra=extra[index] if extra else None,
22 )
23
24 return items
25 elif is_scalar(field.type): # type: ignore
26 return data
27 else:
28 return convert_pydantic_model_to_strawberry_class(
29 field.type, model_instance=data_from_model, extra=extra
30 )
31
32
33 def convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):
34 extra = extra or {}
35 kwargs = {}
36
37 for field in cls._type_definition.fields:
38 field = cast(StrawberryField, field)
39 python_name = field.python_name
40
41 data_from_extra = extra.get(python_name, None)
42 data_from_model = (
43 getattr(model_instance, python_name, None) if model_instance else None
44 )
45 kwargs[python_name] = _convert_from_pydantic_to_strawberry_field(
46 field, data_from_model, extra=data_from_extra
47 )
48
49 return cls(**kwargs)
50
[end of strawberry/experimental/pydantic/conversion.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py
--- a/strawberry/experimental/pydantic/conversion.py
+++ b/strawberry/experimental/pydantic/conversion.py
@@ -7,7 +7,7 @@
def _convert_from_pydantic_to_strawberry_field(
field: StrawberryField, data_from_model=None, extra=None
):
- data = data_from_model or extra
+ data = data_from_model if data_from_model is not None else extra
if field.is_list:
assert field.child is not None
| {"golden_diff": "diff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py\n--- a/strawberry/experimental/pydantic/conversion.py\n+++ b/strawberry/experimental/pydantic/conversion.py\n@@ -7,7 +7,7 @@\n def _convert_from_pydantic_to_strawberry_field(\n field: StrawberryField, data_from_model=None, extra=None\n ):\n- data = data_from_model or extra\n+ data = data_from_model if data_from_model is not None else extra\n \n if field.is_list:\n assert field.child is not None\n", "issue": "from_pydantic() converts False values to None\nWhen calling `from_pydantic()`, values consistent with `bool(value) == False` may be replaced with None. \r\n\r\nThis recreates the issue:\r\n```\r\nfrom pydantic import BaseModel\r\nimport strawberry\r\n\r\nclass PydanticClass(BaseModel):\r\n str1: str\r\n str2: str\r\n bool1: bool\r\n bool2: bool\r\n\r\[email protected](\r\n model=PydanticClass,\r\n fields=['str1', 'str2', 'bool1', 'bool2']\r\n)\r\nclass StrawberryClass:\r\n pass\r\n\r\nstr1 = 'nonempty'\r\nstr2 = ''\r\nbool1 = True\r\nbool2 = False\r\n\r\nmyobj = PydanticClass(\r\n str1=str1,\r\n str2=str2,\r\n bool1=bool1,\r\n bool2=bool2\r\n)\r\nprint('pydantic obj:', myobj)\r\n\r\nconverted = StrawberryClass.from_pydantic(myobj)\r\nprint('converted:', converted)\r\n```\r\n\r\nThe output:\r\n```\r\npydantic obj: str1='nonempty' str2='' bool1=True bool2=False\r\nconverted obj: StrawberryClass(str1='nonempty', str2=None, bool1=True, bool2=None)\r\n```\r\nBoth str2 and bool2 were converted to None.\r\n\r\nLocation of the bug: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/experimental/pydantic/conversion.py#L10\r\n\r\n\n", "before_files": [{"content": "from typing import cast\n\nfrom strawberry.field import StrawberryField\nfrom strawberry.scalars import is_scalar\n\n\ndef _convert_from_pydantic_to_strawberry_field(\n field: StrawberryField, data_from_model=None, extra=None\n):\n data = data_from_model or extra\n\n if field.is_list:\n assert field.child is not None\n\n items = [None for _ in data]\n\n for index, item in enumerate(data):\n items[index] = _convert_from_pydantic_to_strawberry_field(\n field.child,\n data_from_model=item,\n extra=extra[index] if extra else None,\n )\n\n return items\n elif is_scalar(field.type): # type: ignore\n return data\n else:\n return convert_pydantic_model_to_strawberry_class(\n field.type, model_instance=data_from_model, extra=extra\n )\n\n\ndef convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):\n extra = extra or {}\n kwargs = {}\n\n for field in cls._type_definition.fields:\n field = cast(StrawberryField, field)\n python_name = field.python_name\n\n data_from_extra = extra.get(python_name, None)\n data_from_model = (\n getattr(model_instance, python_name, None) if model_instance else None\n )\n kwargs[python_name] = _convert_from_pydantic_to_strawberry_field(\n field, data_from_model, extra=data_from_extra\n )\n\n return cls(**kwargs)\n", "path": "strawberry/experimental/pydantic/conversion.py"}]} | 1,282 | 140 |
gh_patches_debug_24313 | rasdani/github-patches | git_diff | microsoft__ptvsd-111 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pip Installing PTVSD fails
* Python 2.7
* `pip install` of ptvsd from local source fails with the following error:
```
running build_ext
building 'ptvsd.pydevd._pydevd_bundle.pydevd_cython' extension
error: Microsoft Visual C++ 9.0 is required. Get it from http://aka.ms/vcpython27
```
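For context, the fix below keeps the Cython accelerator optional by swallowing build failures on Python 2; a minimal sketch of that setuptools pattern (class name illustrative):
```
from setuptools.command.build_ext import build_ext


class optional_build_ext(build_ext):
    """Build C extensions, but keep going when no compiler is available."""

    def build_extension(self, ext):
        try:
            build_ext.build_extension(self, ext)
        except Exception:
            # e.g. missing MSVC for Python 2.7: skip the accelerator extension.
            pass

# Hooked up via: setup(..., cmdclass={'build_ext': optional_build_ext})
```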
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # Copyright (c) Microsoft Corporation. All rights reserved.
4 # Licensed under the MIT License. See LICENSE in the project root
5 # for license information.
6
7 import os
8 import os.path
9 from setuptools import setup, Extension
10
11 ROOT = os.path.dirname(os.path.abspath(__file__))
12
13 # Add pydevd files as data files for this package. They are not treated as a package of their own,
14 # because we don't actually want to provide pydevd - just use our own copy internally.
15 def get_pydevd_package_data():
16 ptvsd_prefix = os.path.join(ROOT, 'ptvsd')
17 pydevd_prefix = os.path.join(ptvsd_prefix, 'pydevd')
18 for root, dirs, files in os.walk(pydevd_prefix):
19 # From the root of pydevd repo, we want only scripts and subdirectories that
20 # constitute the package itself (not helper scripts, tests etc). But when
21 # walking down into those subdirectories, we want everything below.
22 if os.path.normcase(root) == os.path.normcase(pydevd_prefix):
23 dirs[:] = [d for d in dirs if d.startswith('pydev') or d.startswith('_pydev')]
24 files[:] = [f for f in files if f.endswith('.py') and 'pydev' in f]
25 for f in files:
26 yield os.path.join(root[len(ptvsd_prefix) + 1:], f)
27
28 setup(name='ptvsd',
29 version='4.0.0a1',
30 description='Visual Studio remote debugging server for Python',
31 license='MIT',
32 author='Microsoft Corporation',
33 author_email='[email protected]',
34 url='https://aka.ms/ptvs',
35 classifiers=[
36 'Development Status :: 3 - Alpha',
37 'Programming Language :: Python',
38 'Programming Language :: Python :: 2',
39 'Programming Language :: Python :: 3',
40 'License :: OSI Approved :: MIT License'],
41 packages=['ptvsd'],
42 package_data={'ptvsd': list(get_pydevd_package_data()) + ['ThirdPartyNotices.txt']},
43 ext_modules=[Extension('ptvsd.pydevd._pydevd_bundle.pydevd_cython',
44 ['ptvsd/pydevd/_pydevd_bundle/pydevd_cython.c'],
45 optional=True)],
46 )
47
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,6 +6,7 @@
import os
import os.path
+import sys
from setuptools import setup, Extension
ROOT = os.path.dirname(os.path.abspath(__file__))
@@ -25,6 +26,18 @@
for f in files:
yield os.path.join(root[len(ptvsd_prefix) + 1:], f)
+cmdclass = {}
+
+if sys.version_info[0] == 2:
+ from setuptools.command.build_ext import build_ext
+ class build_optional_ext(build_ext):
+ def build_extension(self, ext):
+ try:
+ super(build_optional_ext, self).build_extension(ext)
+ except:
+ pass
+ cmdclass = { 'build_ext': build_optional_ext }
+
setup(name='ptvsd',
version='4.0.0a1',
description='Visual Studio remote debugging server for Python',
@@ -43,4 +56,5 @@
ext_modules=[Extension('ptvsd.pydevd._pydevd_bundle.pydevd_cython',
['ptvsd/pydevd/_pydevd_bundle/pydevd_cython.c'],
optional=True)],
+ cmdclass=cmdclass,
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,6 +6,7 @@\n \n import os\n import os.path\n+import sys\n from setuptools import setup, Extension\n \n ROOT = os.path.dirname(os.path.abspath(__file__))\n@@ -25,6 +26,18 @@\n for f in files:\n yield os.path.join(root[len(ptvsd_prefix) + 1:], f)\n \n+cmdclass = {}\n+\n+if sys.version_info[0] == 2:\n+ from setuptools.command.build_ext import build_ext\n+ class build_optional_ext(build_ext):\n+ def build_extension(self, ext):\n+ try:\n+ super(build_optional_ext, self).build_extension(ext)\n+ except:\n+ pass\n+ cmdclass = { 'build_ext': build_optional_ext }\n+\n setup(name='ptvsd',\n version='4.0.0a1',\n description='Visual Studio remote debugging server for Python',\n@@ -43,4 +56,5 @@\n ext_modules=[Extension('ptvsd.pydevd._pydevd_bundle.pydevd_cython',\n ['ptvsd/pydevd/_pydevd_bundle/pydevd_cython.c'],\n optional=True)],\n+ cmdclass=cmdclass,\n )\n", "issue": "Pip Installing PTVSD fails \n* Python2.7\r\n* Pip install ptvsd from local source fails with the following error:\r\n```\r\nrunning build_ext\r\n building 'ptvsd.pydevd._pydevd_bundle.pydevd_cython' extension\r\n error: Microsoft Visual C++ 9.0 is required. Get it from http://aka.ms/vcpython27\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport os\nimport os.path\nfrom setuptools import setup, Extension\n\nROOT = os.path.dirname(os.path.abspath(__file__))\n\n# Add pydevd files as data files for this package. They are not treated as a package of their own,\n# because we don't actually want to provide pydevd - just use our own copy internally.\ndef get_pydevd_package_data():\n ptvsd_prefix = os.path.join(ROOT, 'ptvsd')\n pydevd_prefix = os.path.join(ptvsd_prefix, 'pydevd')\n for root, dirs, files in os.walk(pydevd_prefix):\n # From the root of pydevd repo, we want only scripts and subdirectories that\n # constitute the package itself (not helper scripts, tests etc). But when\n # walking down into those subdirectories, we want everything below.\n if os.path.normcase(root) == os.path.normcase(pydevd_prefix):\n dirs[:] = [d for d in dirs if d.startswith('pydev') or d.startswith('_pydev')]\n files[:] = [f for f in files if f.endswith('.py') and 'pydev' in f]\n for f in files:\n yield os.path.join(root[len(ptvsd_prefix) + 1:], f)\n\nsetup(name='ptvsd',\n version='4.0.0a1',\n description='Visual Studio remote debugging server for Python',\n license='MIT',\n author='Microsoft Corporation',\n author_email='[email protected]',\n url='https://aka.ms/ptvs',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'License :: OSI Approved :: MIT License'],\n packages=['ptvsd'],\n package_data={'ptvsd': list(get_pydevd_package_data()) + ['ThirdPartyNotices.txt']},\n ext_modules=[Extension('ptvsd.pydevd._pydevd_bundle.pydevd_cython',\n ['ptvsd/pydevd/_pydevd_bundle/pydevd_cython.c'],\n optional=True)],\n )\n", "path": "setup.py"}]} | 1,205 | 284 |
gh_patches_debug_19118 | rasdani/github-patches | git_diff | pfnet__pytorch-pfn-extras-788 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix nightly CPU test failures
https://github.com/pfnet/pytorch-pfn-extras/actions/workflows/nightly-test-cpu.yml
</issue>
<code>
[start of pytorch_pfn_extras/distributed/_distributed_validation_sampler.py]
1 from typing import Iterator, Optional, Sized, TypeVar
2
3 import numpy as np
4 import torch
5 import torch.distributed as dist
6
7 T_co = TypeVar("T_co", covariant=True)
8
9
10 class DistributedValidationSampler(torch.utils.data.Sampler):
11 """Distributed sampler without duplication
12
13 This sampler splits the input dataset to each worker process in distributed setup
14 without allowing repetition.
15 It is for evaluation purpose such as :class:`~DistributedEvaluator`.
16 This does not guarantee each worker to get the same number of samples,
17 so for training do not use this sampler (use PyTorch DistributedSampler instead).
18 """
19
20 def __init__(
21 self,
22 dataset: Sized,
23 num_replicas: Optional[int] = None,
24 rank: Optional[int] = None,
25 shuffle: bool = True,
26 seed: int = 0,
27 ) -> None:
28 if num_replicas is None:
29 if not dist.is_available(): # type: ignore[no-untyped-call]
30 raise RuntimeError(
31 "Requires distributed package to be available"
32 )
33 num_replicas = dist.get_world_size() # type: ignore[no-untyped-call]
34 if rank is None:
35 if not dist.is_available(): # type: ignore[no-untyped-call]
36 raise RuntimeError(
37 "Requires distributed package to be available"
38 )
39 rank = dist.get_rank() # type: ignore[no-untyped-call]
40 if rank >= num_replicas or rank < 0:
41 raise ValueError(
42 "Invalid rank {}, rank should be in the interval"
43 " [0, {}]".format(rank, num_replicas - 1)
44 )
45 self.dataset = dataset
46 self.num_replicas = num_replicas
47 self.rank = rank
48 self.shuffle = shuffle
49 self.seed = seed
50
51 self.dataset_len = len(dataset)
52 self.num_samples = len(
53 np.array_split(range(self.dataset_len), num_replicas)[rank]
54 )
55
56 def __iter__(self) -> Iterator[T_co]:
57 if self.shuffle:
58 # deterministically shuffle based on epoch and seed
59 g = torch.Generator()
60 g.manual_seed(self.seed)
61 indices = torch.randperm(self.dataset_len, generator=g).tolist()
62 else:
63 indices = list(range(self.dataset_len))
64
65 return iter(np.array_split(indices, self.num_replicas)[self.rank])
66
67 def __len__(self) -> int:
68 return self.num_samples
69
[end of pytorch_pfn_extras/distributed/_distributed_validation_sampler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py b/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py
--- a/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py
+++ b/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py
@@ -26,13 +26,13 @@
seed: int = 0,
) -> None:
if num_replicas is None:
- if not dist.is_available(): # type: ignore[no-untyped-call]
+ if not dist.is_available() or not dist.is_initialized(): # type: ignore[no-untyped-call]
raise RuntimeError(
"Requires distributed package to be available"
)
num_replicas = dist.get_world_size() # type: ignore[no-untyped-call]
if rank is None:
- if not dist.is_available(): # type: ignore[no-untyped-call]
+ if not dist.is_available() or not dist.is_initialized(): # type: ignore[no-untyped-call]
raise RuntimeError(
"Requires distributed package to be available"
)
| {"golden_diff": "diff --git a/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py b/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py\n--- a/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py\n+++ b/pytorch_pfn_extras/distributed/_distributed_validation_sampler.py\n@@ -26,13 +26,13 @@\n seed: int = 0,\n ) -> None:\n if num_replicas is None:\n- if not dist.is_available(): # type: ignore[no-untyped-call]\n+ if not dist.is_available() or not dist.is_initialized(): # type: ignore[no-untyped-call]\n raise RuntimeError(\n \"Requires distributed package to be available\"\n )\n num_replicas = dist.get_world_size() # type: ignore[no-untyped-call]\n if rank is None:\n- if not dist.is_available(): # type: ignore[no-untyped-call]\n+ if not dist.is_available() or not dist.is_initialized(): # type: ignore[no-untyped-call]\n raise RuntimeError(\n \"Requires distributed package to be available\"\n )\n", "issue": "Fix nightly CPU test failures\nhttps://github.com/pfnet/pytorch-pfn-extras/actions/workflows/nightly-test-cpu.yml\n", "before_files": [{"content": "from typing import Iterator, Optional, Sized, TypeVar\n\nimport numpy as np\nimport torch\nimport torch.distributed as dist\n\nT_co = TypeVar(\"T_co\", covariant=True)\n\n\nclass DistributedValidationSampler(torch.utils.data.Sampler):\n \"\"\"Distributed sampler without duplication\n\n This sampler splits the input dataset to each worker process in distributed setup\n without allowing repetition.\n It is for evaluation purpose such as :class:`~DistributedEvaluator`.\n This does not guarantee each worker to get the same number of samples,\n so for training do not use this sampler (use PyTorch DistributedSampler instead).\n \"\"\"\n\n def __init__(\n self,\n dataset: Sized,\n num_replicas: Optional[int] = None,\n rank: Optional[int] = None,\n shuffle: bool = True,\n seed: int = 0,\n ) -> None:\n if num_replicas is None:\n if not dist.is_available(): # type: ignore[no-untyped-call]\n raise RuntimeError(\n \"Requires distributed package to be available\"\n )\n num_replicas = dist.get_world_size() # type: ignore[no-untyped-call]\n if rank is None:\n if not dist.is_available(): # type: ignore[no-untyped-call]\n raise RuntimeError(\n \"Requires distributed package to be available\"\n )\n rank = dist.get_rank() # type: ignore[no-untyped-call]\n if rank >= num_replicas or rank < 0:\n raise ValueError(\n \"Invalid rank {}, rank should be in the interval\"\n \" [0, {}]\".format(rank, num_replicas - 1)\n )\n self.dataset = dataset\n self.num_replicas = num_replicas\n self.rank = rank\n self.shuffle = shuffle\n self.seed = seed\n\n self.dataset_len = len(dataset)\n self.num_samples = len(\n np.array_split(range(self.dataset_len), num_replicas)[rank]\n )\n\n def __iter__(self) -> Iterator[T_co]:\n if self.shuffle:\n # deterministically shuffle based on epoch and seed\n g = torch.Generator()\n g.manual_seed(self.seed)\n indices = torch.randperm(self.dataset_len, generator=g).tolist()\n else:\n indices = list(range(self.dataset_len))\n\n return iter(np.array_split(indices, self.num_replicas)[self.rank])\n\n def __len__(self) -> int:\n return self.num_samples\n", "path": "pytorch_pfn_extras/distributed/_distributed_validation_sampler.py"}]} | 1,237 | 245 |
gh_patches_debug_11912 | rasdani/github-patches | git_diff | ibis-project__ibis-2558 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
link to documentation on http://ibis-project.org/ is broken
Everything under /docs/ (including the tutorial) 404's as of 2020-12-02.
</issue>
<code>
[start of ibis/backends/impala/__init__.py]
1 """Impala backend"""
2 import ibis.common.exceptions as com
3 import ibis.config
4 from ibis.config import options
5
6 # these objects are exposed in the public API and are not used in the module
7 from .client import ( # noqa: F401
8 ImpalaClient,
9 ImpalaConnection,
10 ImpalaDatabase,
11 ImpalaTable,
12 )
13 from .compiler import dialect # noqa: F401
14 from .hdfs import HDFS, WebHDFS, hdfs_connect # noqa: F401
15 from .udf import * # noqa: F401,F403
16
17 with ibis.config.config_prefix('impala'):
18 ibis.config.register_option(
19 'temp_db',
20 '__ibis_tmp',
21 'Database to use for temporary tables, views. functions, etc.',
22 )
23 ibis.config.register_option(
24 'temp_hdfs_path',
25 '/tmp/ibis',
26 'HDFS path for storage of temporary data',
27 )
28
29
30 def compile(expr, params=None):
31 """Force compilation of expression.
32
33 Returns
34 -------
35 str
36
37 """
38 from .compiler import to_sql
39
40 return to_sql(expr, dialect.make_context(params=params))
41
42
43 def verify(expr, params=None):
44 """
45 Determine if expression can be successfully translated to execute on Impala
46 """
47 try:
48 compile(expr, params=params)
49 return True
50 except com.TranslationError:
51 return False
52
53
54 def connect(
55 host='localhost',
56 port=21050,
57 database='default',
58 timeout=45,
59 use_ssl=False,
60 ca_cert=None,
61 user=None,
62 password=None,
63 auth_mechanism='NOSASL',
64 kerberos_service_name='impala',
65 pool_size=8,
66 hdfs_client=None,
67 ):
68 """Create an ImpalaClient for use with Ibis.
69
70 Parameters
71 ----------
72 host : str, optional
73 Host name of the impalad or HiveServer2 in Hive
74 port : int, optional
75 Impala's HiveServer2 port
76 database : str, optional
77 Default database when obtaining new cursors
78 timeout : int, optional
79 Connection timeout in seconds when communicating with HiveServer2
80 use_ssl : bool, optional
81 Use SSL when connecting to HiveServer2
82 ca_cert : str, optional
83 Local path to 3rd party CA certificate or copy of server certificate
84 for self-signed certificates. If SSL is enabled, but this argument is
85 ``None``, then certificate validation is skipped.
86 user : str, optional
87 LDAP user to authenticate
88 password : str, optional
89 LDAP password to authenticate
90 auth_mechanism : str, optional
91 {'NOSASL' <- default, 'PLAIN', 'GSSAPI', 'LDAP'}.
92 Use NOSASL for non-secured Impala connections. Use PLAIN for
93 non-secured Hive clusters. Use LDAP for LDAP authenticated
94 connections. Use GSSAPI for Kerberos-secured clusters.
95 kerberos_service_name : str, optional
96 Specify particular impalad service principal.
97
98 Examples
99 --------
100 >>> import ibis
101 >>> import os
102 >>> hdfs_host = os.environ.get('IBIS_TEST_NN_HOST', 'localhost')
103 >>> hdfs_port = int(os.environ.get('IBIS_TEST_NN_PORT', 50070))
104 >>> impala_host = os.environ.get('IBIS_TEST_IMPALA_HOST', 'localhost')
105 >>> impala_port = int(os.environ.get('IBIS_TEST_IMPALA_PORT', 21050))
106 >>> hdfs = ibis.hdfs_connect(host=hdfs_host, port=hdfs_port)
107 >>> hdfs # doctest: +ELLIPSIS
108 <ibis.filesystems.WebHDFS object at 0x...>
109 >>> client = ibis.impala.connect(
110 ... host=impala_host,
111 ... port=impala_port,
112 ... hdfs_client=hdfs,
113 ... )
114 >>> client # doctest: +ELLIPSIS
115 <ibis.impala.client.ImpalaClient object at 0x...>
116
117 Returns
118 -------
119 ImpalaClient
120 """
121 params = {
122 'host': host,
123 'port': port,
124 'database': database,
125 'timeout': timeout,
126 'use_ssl': use_ssl,
127 'ca_cert': ca_cert,
128 'user': user,
129 'password': password,
130 'auth_mechanism': auth_mechanism,
131 'kerberos_service_name': kerberos_service_name,
132 }
133
134 con = ImpalaConnection(pool_size=pool_size, **params)
135 try:
136 client = ImpalaClient(con, hdfs_client=hdfs_client)
137 except Exception:
138 con.close()
139 raise
140 else:
141 if options.default_backend is None:
142 options.default_backend = client
143
144 return client
145
[end of ibis/backends/impala/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ibis/backends/impala/__init__.py b/ibis/backends/impala/__init__.py
--- a/ibis/backends/impala/__init__.py
+++ b/ibis/backends/impala/__init__.py
@@ -103,7 +103,7 @@
>>> hdfs_port = int(os.environ.get('IBIS_TEST_NN_PORT', 50070))
>>> impala_host = os.environ.get('IBIS_TEST_IMPALA_HOST', 'localhost')
>>> impala_port = int(os.environ.get('IBIS_TEST_IMPALA_PORT', 21050))
- >>> hdfs = ibis.hdfs_connect(host=hdfs_host, port=hdfs_port)
+ >>> hdfs = ibis.impala.hdfs_connect(host=hdfs_host, port=hdfs_port)
>>> hdfs # doctest: +ELLIPSIS
<ibis.filesystems.WebHDFS object at 0x...>
>>> client = ibis.impala.connect(
| {"golden_diff": "diff --git a/ibis/backends/impala/__init__.py b/ibis/backends/impala/__init__.py\n--- a/ibis/backends/impala/__init__.py\n+++ b/ibis/backends/impala/__init__.py\n@@ -103,7 +103,7 @@\n >>> hdfs_port = int(os.environ.get('IBIS_TEST_NN_PORT', 50070))\n >>> impala_host = os.environ.get('IBIS_TEST_IMPALA_HOST', 'localhost')\n >>> impala_port = int(os.environ.get('IBIS_TEST_IMPALA_PORT', 21050))\n- >>> hdfs = ibis.hdfs_connect(host=hdfs_host, port=hdfs_port)\n+ >>> hdfs = ibis.impala.hdfs_connect(host=hdfs_host, port=hdfs_port)\n >>> hdfs # doctest: +ELLIPSIS\n <ibis.filesystems.WebHDFS object at 0x...>\n >>> client = ibis.impala.connect(\n", "issue": "link to documentation on http://ibis-project.org/ is broken\nEverything under /docs/ (including the tutorial) 404's as of 2020-12-02.\n", "before_files": [{"content": "\"\"\"Impala backend\"\"\"\nimport ibis.common.exceptions as com\nimport ibis.config\nfrom ibis.config import options\n\n# these objects are exposed in the public API and are not used in the module\nfrom .client import ( # noqa: F401\n ImpalaClient,\n ImpalaConnection,\n ImpalaDatabase,\n ImpalaTable,\n)\nfrom .compiler import dialect # noqa: F401\nfrom .hdfs import HDFS, WebHDFS, hdfs_connect # noqa: F401\nfrom .udf import * # noqa: F401,F403\n\nwith ibis.config.config_prefix('impala'):\n ibis.config.register_option(\n 'temp_db',\n '__ibis_tmp',\n 'Database to use for temporary tables, views. functions, etc.',\n )\n ibis.config.register_option(\n 'temp_hdfs_path',\n '/tmp/ibis',\n 'HDFS path for storage of temporary data',\n )\n\n\ndef compile(expr, params=None):\n \"\"\"Force compilation of expression.\n\n Returns\n -------\n str\n\n \"\"\"\n from .compiler import to_sql\n\n return to_sql(expr, dialect.make_context(params=params))\n\n\ndef verify(expr, params=None):\n \"\"\"\n Determine if expression can be successfully translated to execute on Impala\n \"\"\"\n try:\n compile(expr, params=params)\n return True\n except com.TranslationError:\n return False\n\n\ndef connect(\n host='localhost',\n port=21050,\n database='default',\n timeout=45,\n use_ssl=False,\n ca_cert=None,\n user=None,\n password=None,\n auth_mechanism='NOSASL',\n kerberos_service_name='impala',\n pool_size=8,\n hdfs_client=None,\n):\n \"\"\"Create an ImpalaClient for use with Ibis.\n\n Parameters\n ----------\n host : str, optional\n Host name of the impalad or HiveServer2 in Hive\n port : int, optional\n Impala's HiveServer2 port\n database : str, optional\n Default database when obtaining new cursors\n timeout : int, optional\n Connection timeout in seconds when communicating with HiveServer2\n use_ssl : bool, optional\n Use SSL when connecting to HiveServer2\n ca_cert : str, optional\n Local path to 3rd party CA certificate or copy of server certificate\n for self-signed certificates. If SSL is enabled, but this argument is\n ``None``, then certificate validation is skipped.\n user : str, optional\n LDAP user to authenticate\n password : str, optional\n LDAP password to authenticate\n auth_mechanism : str, optional\n {'NOSASL' <- default, 'PLAIN', 'GSSAPI', 'LDAP'}.\n Use NOSASL for non-secured Impala connections. Use PLAIN for\n non-secured Hive clusters. Use LDAP for LDAP authenticated\n connections. 
Use GSSAPI for Kerberos-secured clusters.\n kerberos_service_name : str, optional\n Specify particular impalad service principal.\n\n Examples\n --------\n >>> import ibis\n >>> import os\n >>> hdfs_host = os.environ.get('IBIS_TEST_NN_HOST', 'localhost')\n >>> hdfs_port = int(os.environ.get('IBIS_TEST_NN_PORT', 50070))\n >>> impala_host = os.environ.get('IBIS_TEST_IMPALA_HOST', 'localhost')\n >>> impala_port = int(os.environ.get('IBIS_TEST_IMPALA_PORT', 21050))\n >>> hdfs = ibis.hdfs_connect(host=hdfs_host, port=hdfs_port)\n >>> hdfs # doctest: +ELLIPSIS\n <ibis.filesystems.WebHDFS object at 0x...>\n >>> client = ibis.impala.connect(\n ... host=impala_host,\n ... port=impala_port,\n ... hdfs_client=hdfs,\n ... )\n >>> client # doctest: +ELLIPSIS\n <ibis.impala.client.ImpalaClient object at 0x...>\n\n Returns\n -------\n ImpalaClient\n \"\"\"\n params = {\n 'host': host,\n 'port': port,\n 'database': database,\n 'timeout': timeout,\n 'use_ssl': use_ssl,\n 'ca_cert': ca_cert,\n 'user': user,\n 'password': password,\n 'auth_mechanism': auth_mechanism,\n 'kerberos_service_name': kerberos_service_name,\n }\n\n con = ImpalaConnection(pool_size=pool_size, **params)\n try:\n client = ImpalaClient(con, hdfs_client=hdfs_client)\n except Exception:\n con.close()\n raise\n else:\n if options.default_backend is None:\n options.default_backend = client\n\n return client\n", "path": "ibis/backends/impala/__init__.py"}]} | 1,993 | 231 |
gh_patches_debug_26835 | rasdani/github-patches | git_diff | azavea__raster-vision-536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Avoid error when working with subtypes for SemanticSegmentationRasterSource
Here: https://github.com/azavea/raster-vision/blob/f6ea64a37fd4d09375da1838cd679e6cbce5b35b/rastervision/data/label_source/semantic_segmentation_raster_source_config.py#L123
We check for the type explicitly. We should use `isinstance` instead so that subclasses can pass this check - or figure out a more general way of handling other types that lets them bypass having to set the rgb class map.
</issue>
<code>
[start of rastervision/data/label_source/semantic_segmentation_raster_source_config.py]
1 from copy import deepcopy
2
3 import rastervision as rv
4 from rastervision.core.class_map import ClassMap
5 from rastervision.data.label_source import (LabelSourceConfig,
6 LabelSourceConfigBuilder,
7 SemanticSegmentationRasterSource)
8 from rastervision.protos.label_source_pb2 import LabelSourceConfig as LabelSourceConfigMsg
9 from rastervision.data.raster_source import RasterSourceConfig, GeoJSONSourceConfig
10
11
12 class SemanticSegmentationRasterSourceConfig(LabelSourceConfig):
13 def __init__(self, source, rgb_class_map=None):
14 super().__init__(source_type=rv.SEMANTIC_SEGMENTATION_RASTER)
15 self.source = source
16 self.rgb_class_map = rgb_class_map
17
18 def to_proto(self):
19 msg = super().to_proto()
20
21 rgb_class_items = None
22 if self.rgb_class_map is not None:
23 rgb_class_items = self.rgb_class_map.to_proto()
24 opts = LabelSourceConfigMsg.SemanticSegmentationRasterSource(
25 source=self.source.to_proto(), rgb_class_items=rgb_class_items)
26 msg.semantic_segmentation_raster_source.CopyFrom(opts)
27 return msg
28
29 def create_source(self, task_config, extent, crs_transformer, tmp_dir):
30 return SemanticSegmentationRasterSource(
31 self.source.create_source(tmp_dir, extent, crs_transformer),
32 self.rgb_class_map)
33
34 def update_for_command(self, command_type, experiment_config, context=[]):
35 if context is None:
36 context = []
37 context = context + [self]
38 io_def = rv.core.CommandIODefinition()
39
40 b = self.to_builder()
41
42 (new_raster_source, sub_io_def) = self.source.update_for_command(
43 command_type, experiment_config, context)
44
45 io_def.merge(sub_io_def)
46 b = b.with_raster_source(new_raster_source)
47
48 return (b.build(), io_def)
49
50
51 class SemanticSegmentationRasterSourceConfigBuilder(LabelSourceConfigBuilder):
52 def __init__(self, prev=None):
53 config = {}
54 if prev:
55 config = {
56 'source': prev.source,
57 'rgb_class_map': prev.rgb_class_map
58 }
59
60 super().__init__(SemanticSegmentationRasterSourceConfig, config)
61
62 def from_proto(self, msg):
63 b = SemanticSegmentationRasterSourceConfigBuilder()
64
65 raster_source_config = rv.RasterSourceConfig.from_proto(
66 msg.semantic_segmentation_raster_source.source)
67
68 b = b.with_raster_source(raster_source_config)
69 rgb_class_items = msg.semantic_segmentation_raster_source.rgb_class_items
70 if rgb_class_items:
71 b = b.with_rgb_class_map(
72 ClassMap.construct_from(list(rgb_class_items)))
73
74 return b
75
76 def with_raster_source(self, source, channel_order=None):
77 """Set raster_source.
78
79 Args:
80 source: (RasterSourceConfig) A RasterSource assumed to have RGB values that
81 are mapped to class_ids using the rgb_class_map.
82
83 Returns:
84 SemanticSegmentationRasterSourceConfigBuilder
85 """
86 b = deepcopy(self)
87 if isinstance(source, RasterSourceConfig):
88 b.config['source'] = source
89 elif isinstance(source, str):
90 provider = rv._registry.get_raster_source_default_provider(source)
91 source = provider.construct(source, channel_order=channel_order)
92 b.config['source'] = source
93 else:
94 raise rv.ConfigError(
95 'source must be either string or RasterSourceConfig, '
96 ' not {}'.format(str(type(source))))
97
98 return b
99
100 def with_rgb_class_map(self, rgb_class_map):
101 """Set rgb_class_map.
102
103 Args:
104 rgb_class_map: (something accepted by ClassMap.construct_from) a class
105 map with color values used to map RGB values to class ids
106
107 Returns:
108 SemanticSegmentationRasterSourceConfigBuilder
109 """
110 b = deepcopy(self)
111 b.config['rgb_class_map'] = ClassMap.construct_from(rgb_class_map)
112 return b
113
114 def validate(self):
115 source = self.config.get('source')
116 rgb_class_map = self.config.get('rgb_class_map')
117
118 if source is None:
119 raise rv.ConfigError(
120 'You must set the source for SemanticSegmentationRasterSourceConfig'
121 ' Use "with_raster_source".')
122
123 if type(source) != GeoJSONSourceConfig and rgb_class_map is None:
124 raise rv.ConfigError(
125 'You must set the rgb_class_map for '
126 'SemanticSegmentationRasterSourceConfig. Use "with_rgb_class_map".'
127 )
128
[end of rastervision/data/label_source/semantic_segmentation_raster_source_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rastervision/data/label_source/semantic_segmentation_raster_source_config.py b/rastervision/data/label_source/semantic_segmentation_raster_source_config.py
--- a/rastervision/data/label_source/semantic_segmentation_raster_source_config.py
+++ b/rastervision/data/label_source/semantic_segmentation_raster_source_config.py
@@ -6,7 +6,7 @@
LabelSourceConfigBuilder,
SemanticSegmentationRasterSource)
from rastervision.protos.label_source_pb2 import LabelSourceConfig as LabelSourceConfigMsg
-from rastervision.data.raster_source import RasterSourceConfig, GeoJSONSourceConfig
+from rastervision.data.raster_source import RasterSourceConfig
class SemanticSegmentationRasterSourceConfig(LabelSourceConfig):
@@ -113,15 +113,8 @@
def validate(self):
source = self.config.get('source')
- rgb_class_map = self.config.get('rgb_class_map')
if source is None:
raise rv.ConfigError(
'You must set the source for SemanticSegmentationRasterSourceConfig'
' Use "with_raster_source".')
-
- if type(source) != GeoJSONSourceConfig and rgb_class_map is None:
- raise rv.ConfigError(
- 'You must set the rgb_class_map for '
- 'SemanticSegmentationRasterSourceConfig. Use "with_rgb_class_map".'
- )
| {"golden_diff": "diff --git a/rastervision/data/label_source/semantic_segmentation_raster_source_config.py b/rastervision/data/label_source/semantic_segmentation_raster_source_config.py\n--- a/rastervision/data/label_source/semantic_segmentation_raster_source_config.py\n+++ b/rastervision/data/label_source/semantic_segmentation_raster_source_config.py\n@@ -6,7 +6,7 @@\n LabelSourceConfigBuilder,\n SemanticSegmentationRasterSource)\n from rastervision.protos.label_source_pb2 import LabelSourceConfig as LabelSourceConfigMsg\n-from rastervision.data.raster_source import RasterSourceConfig, GeoJSONSourceConfig\n+from rastervision.data.raster_source import RasterSourceConfig\n \n \n class SemanticSegmentationRasterSourceConfig(LabelSourceConfig):\n@@ -113,15 +113,8 @@\n \n def validate(self):\n source = self.config.get('source')\n- rgb_class_map = self.config.get('rgb_class_map')\n \n if source is None:\n raise rv.ConfigError(\n 'You must set the source for SemanticSegmentationRasterSourceConfig'\n ' Use \"with_raster_source\".')\n-\n- if type(source) != GeoJSONSourceConfig and rgb_class_map is None:\n- raise rv.ConfigError(\n- 'You must set the rgb_class_map for '\n- 'SemanticSegmentationRasterSourceConfig. Use \"with_rgb_class_map\".'\n- )\n", "issue": "Avoid error when working with subtypes for SemanticSegmentationRasterSource\nHere: https://github.com/azavea/raster-vision/blob/f6ea64a37fd4d09375da1838cd679e6cbce5b35b/rastervision/data/label_source/semantic_segmentation_raster_source_config.py#L123\r\n\r\nWe check for the type explicitly. We should use `isinstance` instead to allow for subclasses to pass this check - or figure out a more general way of not having other types and allowing them to bypass having to set the rgb class map.\n", "before_files": [{"content": "from copy import deepcopy\n\nimport rastervision as rv\nfrom rastervision.core.class_map import ClassMap\nfrom rastervision.data.label_source import (LabelSourceConfig,\n LabelSourceConfigBuilder,\n SemanticSegmentationRasterSource)\nfrom rastervision.protos.label_source_pb2 import LabelSourceConfig as LabelSourceConfigMsg\nfrom rastervision.data.raster_source import RasterSourceConfig, GeoJSONSourceConfig\n\n\nclass SemanticSegmentationRasterSourceConfig(LabelSourceConfig):\n def __init__(self, source, rgb_class_map=None):\n super().__init__(source_type=rv.SEMANTIC_SEGMENTATION_RASTER)\n self.source = source\n self.rgb_class_map = rgb_class_map\n\n def to_proto(self):\n msg = super().to_proto()\n\n rgb_class_items = None\n if self.rgb_class_map is not None:\n rgb_class_items = self.rgb_class_map.to_proto()\n opts = LabelSourceConfigMsg.SemanticSegmentationRasterSource(\n source=self.source.to_proto(), rgb_class_items=rgb_class_items)\n msg.semantic_segmentation_raster_source.CopyFrom(opts)\n return msg\n\n def create_source(self, task_config, extent, crs_transformer, tmp_dir):\n return SemanticSegmentationRasterSource(\n self.source.create_source(tmp_dir, extent, crs_transformer),\n self.rgb_class_map)\n\n def update_for_command(self, command_type, experiment_config, context=[]):\n if context is None:\n context = []\n context = context + [self]\n io_def = rv.core.CommandIODefinition()\n\n b = self.to_builder()\n\n (new_raster_source, sub_io_def) = self.source.update_for_command(\n command_type, experiment_config, context)\n\n io_def.merge(sub_io_def)\n b = b.with_raster_source(new_raster_source)\n\n return (b.build(), io_def)\n\n\nclass SemanticSegmentationRasterSourceConfigBuilder(LabelSourceConfigBuilder):\n def 
__init__(self, prev=None):\n config = {}\n if prev:\n config = {\n 'source': prev.source,\n 'rgb_class_map': prev.rgb_class_map\n }\n\n super().__init__(SemanticSegmentationRasterSourceConfig, config)\n\n def from_proto(self, msg):\n b = SemanticSegmentationRasterSourceConfigBuilder()\n\n raster_source_config = rv.RasterSourceConfig.from_proto(\n msg.semantic_segmentation_raster_source.source)\n\n b = b.with_raster_source(raster_source_config)\n rgb_class_items = msg.semantic_segmentation_raster_source.rgb_class_items\n if rgb_class_items:\n b = b.with_rgb_class_map(\n ClassMap.construct_from(list(rgb_class_items)))\n\n return b\n\n def with_raster_source(self, source, channel_order=None):\n \"\"\"Set raster_source.\n\n Args:\n source: (RasterSourceConfig) A RasterSource assumed to have RGB values that\n are mapped to class_ids using the rgb_class_map.\n\n Returns:\n SemanticSegmentationRasterSourceConfigBuilder\n \"\"\"\n b = deepcopy(self)\n if isinstance(source, RasterSourceConfig):\n b.config['source'] = source\n elif isinstance(source, str):\n provider = rv._registry.get_raster_source_default_provider(source)\n source = provider.construct(source, channel_order=channel_order)\n b.config['source'] = source\n else:\n raise rv.ConfigError(\n 'source must be either string or RasterSourceConfig, '\n ' not {}'.format(str(type(source))))\n\n return b\n\n def with_rgb_class_map(self, rgb_class_map):\n \"\"\"Set rgb_class_map.\n\n Args:\n rgb_class_map: (something accepted by ClassMap.construct_from) a class\n map with color values used to map RGB values to class ids\n\n Returns:\n SemanticSegmentationRasterSourceConfigBuilder\n \"\"\"\n b = deepcopy(self)\n b.config['rgb_class_map'] = ClassMap.construct_from(rgb_class_map)\n return b\n\n def validate(self):\n source = self.config.get('source')\n rgb_class_map = self.config.get('rgb_class_map')\n\n if source is None:\n raise rv.ConfigError(\n 'You must set the source for SemanticSegmentationRasterSourceConfig'\n ' Use \"with_raster_source\".')\n\n if type(source) != GeoJSONSourceConfig and rgb_class_map is None:\n raise rv.ConfigError(\n 'You must set the rgb_class_map for '\n 'SemanticSegmentationRasterSourceConfig. Use \"with_rgb_class_map\".'\n )\n", "path": "rastervision/data/label_source/semantic_segmentation_raster_source_config.py"}]} | 1,949 | 314 |
gh_patches_debug_28879 | rasdani/github-patches | git_diff | python-geeks__Automation-scripts-885 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
yaml_to_json add command line interface
**Describe the bug**
yaml_to_json currently only allows entering the filename interactively. This is not convenient and cannot be used with bash autocomplete.
**To Reproduce**
**Expected behavior**
Application should accept command line arguments with filenames
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
</issue>
<code>
[start of yaml_to_json/yaml_to_json.py]
1 from ruyaml import YAML
2 import json
3
4
5 def get_yaml_data():
6 yaml_name = input("Enter the yaml file name: ")
7
8 try:
9 with open(yaml_name, "r+") as f:
10 yaml_data = YAML().load(f)
11 return yaml_data
12 except: # noqa
13 print("Invalid input enter a valid yaml file name e.g. example.yaml")
14 yaml_data = get_yaml_data()
15
16
17 def convert_to_json(yaml_data):
18 json_name = input("Enter the name of output json file: ")
19
20 try:
21 with open(json_name, "w+") as o:
22 o.write(json.dumps(yaml_data))
23 except: # noqa
24 print("Invalid input enter a valid json file name e.g. example.json")
25 convert_to_json(yaml_data)
26
27
28 yaml_data = get_yaml_data()
29 convert_to_json(yaml_data)
30
31 print("Your yaml file has been converted and saved as json")
32
[end of yaml_to_json/yaml_to_json.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/yaml_to_json/yaml_to_json.py b/yaml_to_json/yaml_to_json.py
--- a/yaml_to_json/yaml_to_json.py
+++ b/yaml_to_json/yaml_to_json.py
@@ -1,9 +1,11 @@
from ruyaml import YAML
+import argparse
import json
-def get_yaml_data():
- yaml_name = input("Enter the yaml file name: ")
+def get_yaml_data(yaml_name=None):
+ if not yaml_name:
+ yaml_name = input("Enter the yaml file name: ")
try:
with open(yaml_name, "r+") as f:
@@ -14,18 +16,34 @@
yaml_data = get_yaml_data()
-def convert_to_json(yaml_data):
- json_name = input("Enter the name of output json file: ")
+def convert_to_json(yaml_data, json_name=None, intent=None):
+ if not json_name:
+ json_name = input("Enter the name of output json file: ")
try:
with open(json_name, "w+") as o:
- o.write(json.dumps(yaml_data))
+ o.write(json.dumps(yaml_data, indent=intent))
except: # noqa
print("Invalid input enter a valid json file name e.g. example.json")
convert_to_json(yaml_data)
-yaml_data = get_yaml_data()
-convert_to_json(yaml_data)
+def main():
+ parser = argparse.ArgumentParser(description='Convert YAML file to JSON')
+ parser.add_argument('--yaml', type=str, help='YAML filename')
+ parser.add_argument('--json', type=str, help='JSON filename')
+ parser.add_argument('--intent', type=int, help="intent value for JSON")
+ args = parser.parse_args()
-print("Your yaml file has been converted and saved as json")
+ yaml_name = args.yaml
+ json_name = args.json
+ intent = args.intent
+
+ yaml_data = get_yaml_data(yaml_name)
+ convert_to_json(yaml_data, json_name, intent=intent)
+
+ print("Your yaml file has been converted and saved as json")
+
+
+if __name__ == "__main__":
+ main()
| {"golden_diff": "diff --git a/yaml_to_json/yaml_to_json.py b/yaml_to_json/yaml_to_json.py\n--- a/yaml_to_json/yaml_to_json.py\n+++ b/yaml_to_json/yaml_to_json.py\n@@ -1,9 +1,11 @@\n from ruyaml import YAML\n+import argparse\n import json\n \n \n-def get_yaml_data():\n- yaml_name = input(\"Enter the yaml file name: \")\n+def get_yaml_data(yaml_name=None):\n+ if not yaml_name:\n+ yaml_name = input(\"Enter the yaml file name: \")\n \n try:\n with open(yaml_name, \"r+\") as f:\n@@ -14,18 +16,34 @@\n yaml_data = get_yaml_data()\n \n \n-def convert_to_json(yaml_data):\n- json_name = input(\"Enter the name of output json file: \")\n+def convert_to_json(yaml_data, json_name=None, intent=None):\n+ if not json_name:\n+ json_name = input(\"Enter the name of output json file: \")\n \n try:\n with open(json_name, \"w+\") as o:\n- o.write(json.dumps(yaml_data))\n+ o.write(json.dumps(yaml_data, indent=intent))\n except: # noqa\n print(\"Invalid input enter a valid json file name e.g. example.json\")\n convert_to_json(yaml_data)\n \n \n-yaml_data = get_yaml_data()\n-convert_to_json(yaml_data)\n+def main():\n+ parser = argparse.ArgumentParser(description='Convert YAML file to JSON')\n+ parser.add_argument('--yaml', type=str, help='YAML filename')\n+ parser.add_argument('--json', type=str, help='JSON filename')\n+ parser.add_argument('--intent', type=int, help=\"intent value for JSON\")\n+ args = parser.parse_args()\n \n-print(\"Your yaml file has been converted and saved as json\")\n+ yaml_name = args.yaml\n+ json_name = args.json\n+ intent = args.intent\n+\n+ yaml_data = get_yaml_data(yaml_name)\n+ convert_to_json(yaml_data, json_name, intent=intent)\n+\n+ print(\"Your yaml file has been converted and saved as json\")\n+\n+\n+if __name__ == \"__main__\":\n+ main()\n", "issue": "yaml_to_json add command line interface\n**Describe the bug**\r\nyaml_to_json currently only allow to enter filename. It is not convenient and cannot be used with bash autocomplete \r\n\r\n**To Reproduce**\r\n\r\n**Expected behavior**\r\nApplication should accept command line arguments with filenames\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\n", "before_files": [{"content": "from ruyaml import YAML\nimport json\n\n\ndef get_yaml_data():\n yaml_name = input(\"Enter the yaml file name: \")\n\n try:\n with open(yaml_name, \"r+\") as f:\n yaml_data = YAML().load(f)\n return yaml_data\n except: # noqa\n print(\"Invalid input enter a valid yaml file name e.g. example.yaml\")\n yaml_data = get_yaml_data()\n\n\ndef convert_to_json(yaml_data):\n json_name = input(\"Enter the name of output json file: \")\n\n try:\n with open(json_name, \"w+\") as o:\n o.write(json.dumps(yaml_data))\n except: # noqa\n print(\"Invalid input enter a valid json file name e.g. example.json\")\n convert_to_json(yaml_data)\n\n\nyaml_data = get_yaml_data()\nconvert_to_json(yaml_data)\n\nprint(\"Your yaml file has been converted and saved as json\")\n", "path": "yaml_to_json/yaml_to_json.py"}]} | 884 | 491 |
gh_patches_debug_12937 | rasdani/github-patches | git_diff | conan-io__conan-center-index-1973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] xorg/system: Can you add support for dnf package manager?
<!--
Please don't forget to update the issue title.
Include all applicable information to help us reproduce your problem.
-->
Fedora uses the dnf package manager instead of yum, although yum exists in Fedora too. Also, dnf uses the same package names as yum, so maybe you could change line 42 like this:
```python
elif tools.os_info.with_yum or tools.os_info.with_dnf:
...
```
In addition, could you also add support for `FreeBSD pkg`? I think in `pkg` this package name is just `xorg`.
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **xorg/system**
* Operating System+version: **Fedora 32**
* Compiler+version: **GCC 10**
* Conan version: **conan 1.26.0**
* Python version: **Python 3.8.3**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=gcc
compiler.libcxx=libstdc++
compiler.version=10
os=Linux
os_build=Linux
```
### Steps to reproduce (Include if Applicable)
When I try to install xorg/system
`conan install xorg/system@ --build missing`
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=gcc
compiler.libcxx=libstdc++
compiler.version=10
os=Linux
os_build=Linux
[options]
[build_requires]
[env]
Installing package: xorg/system
Requirements
xorg/system from 'conan-center' - Cache
Packages
xorg/system:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache
Installing (downloading, building) binaries...
xorg/system: Already installed!
ERROR: xorg/system: Error in package_info() method, line 57
self._fill_cppinfo_from_pkgconfig(name)
while calling '_fill_cppinfo_from_pkgconfig', line 18
if not pkg_config.provides:
ConanException: pkg-config command ['pkg-config', '--print-provides', 'sm', '--print-errors'] failed with error: Command 'pkg-config --print-provides sm --print-errors' returned non-zero exit status 1.
Package sm was not found in the pkg-config search path.
Perhaps you should add the directory containing `sm.pc'
to the PKG_CONFIG_PATH environment variable
Package 'sm', required by 'virtual:world', not found
```
</details>
</issue>
<code>
[start of recipes/xorg/all/conanfile.py]
1 from conans import ConanFile, tools
2 from conans.errors import ConanException
3
4
5 class ConanXOrg(ConanFile):
6 name = "xorg"
7 url = "https://github.com/conan-io/conan-center-index"
8 license = "MIT"
9 homepage = "https://www.x.org/wiki/"
10 description = "The X.Org project provides an open source implementation of the X Window System."
11 settings = {"os": "Linux"}
12
13 def package_id(self):
14 self.info.header_only()
15
16 def _fill_cppinfo_from_pkgconfig(self, name):
17 pkg_config = tools.PkgConfig(name)
18 if not pkg_config.provides:
19 raise ConanException("OpenGL development files aren't available, give up")
20 libs = [lib[2:] for lib in pkg_config.libs_only_l]
21 lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]
22 ldflags = [flag for flag in pkg_config.libs_only_other]
23 include_dirs = [include[2:] for include in pkg_config.cflags_only_I]
24 cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith("-D")]
25 defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith("-D")]
26
27 self.cpp_info.system_libs.extend(libs)
28 self.cpp_info.libdirs.extend(lib_dirs)
29 self.cpp_info.sharedlinkflags.extend(ldflags)
30 self.cpp_info.exelinkflags.extend(ldflags)
31 self.cpp_info.defines.extend(defines)
32 self.cpp_info.includedirs.extend(include_dirs)
33 self.cpp_info.cflags.extend(cflags)
34 self.cpp_info.cxxflags.extend(cflags)
35
36
37 def system_requirements(self):
38 if tools.os_info.is_linux and self.settings.os == "Linux":
39 package_tool = tools.SystemPackageTool(conanfile=self, default_mode="verify")
40 if tools.os_info.with_apt:
41 packages = ["xorg-dev", "libx11-xcb-dev", "libxcb-render0-dev", "libxcb-render-util0-dev"]
42 elif tools.os_info.with_yum:
43 packages = ["xorg-x11-server-devel"]
44 elif tools.os_info.with_pacman:
45 packages = ["xorg-server-devel"]
46 elif tools.os_info.with_zypper:
47 packages = ["Xorg-x11-devel"]
48 else:
49 self.output.warn("Do not know how to install 'xorg' for {}.".format(tools.os_info.linux_distro))
50 for p in packages:
51 package_tool.install(update=True, packages=p)
52
53 def package_info(self):
54 for name in ["x11", "x11-xcb", "dmx", "fontenc", "libfs", "ice", "sm", "xau", "xaw7",
55 "xcomposite","xcursor", "xdamage", "xdmcp", "xext", "xfixes", "xft", "xi",
56 "xinerama", "xkbfile", "xmu", "xmuu", "xpm", "xrandr", "xrender", "xres",
57 "xscrnsaver", "xt", "xtst", "xv", "xvmc", "xxf86dga", "xxf86vm", "xtrans"]:
58 self._fill_cppinfo_from_pkgconfig(name)
59
[end of recipes/xorg/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/xorg/all/conanfile.py b/recipes/xorg/all/conanfile.py
--- a/recipes/xorg/all/conanfile.py
+++ b/recipes/xorg/all/conanfile.py
@@ -39,7 +39,7 @@
package_tool = tools.SystemPackageTool(conanfile=self, default_mode="verify")
if tools.os_info.with_apt:
packages = ["xorg-dev", "libx11-xcb-dev", "libxcb-render0-dev", "libxcb-render-util0-dev"]
- elif tools.os_info.with_yum:
+ elif tools.os_info.with_yum or tools.os_info.with_dnf:
packages = ["xorg-x11-server-devel"]
elif tools.os_info.with_pacman:
packages = ["xorg-server-devel"]
| {"golden_diff": "diff --git a/recipes/xorg/all/conanfile.py b/recipes/xorg/all/conanfile.py\n--- a/recipes/xorg/all/conanfile.py\n+++ b/recipes/xorg/all/conanfile.py\n@@ -39,7 +39,7 @@\n package_tool = tools.SystemPackageTool(conanfile=self, default_mode=\"verify\")\n if tools.os_info.with_apt:\n packages = [\"xorg-dev\", \"libx11-xcb-dev\", \"libxcb-render0-dev\", \"libxcb-render-util0-dev\"]\n- elif tools.os_info.with_yum:\n+ elif tools.os_info.with_yum or tools.os_info.with_dnf:\n packages = [\"xorg-x11-server-devel\"]\n elif tools.os_info.with_pacman:\n packages = [\"xorg-server-devel\"]\n", "issue": "[package] xorg/system: Can you add support for dnf package manager?\n<!-- \r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n-->\r\n\r\nFedora uses the dnf package manager instead of yum, although yum exists in Fedora too. Also, dnf uses the same package names as yum, so maybe you could change line 42 like this:\r\n```python\r\nelif tools.os_info.with_yum or tools.os_info.with_dnf:\r\n ...\r\n```\r\nIn addition, could you also add support for `FreeBSD pkg`? I think in `pkg` this package name is just `xorg`.\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **xorg/system**\r\n * Operating System+version: **Fedora 32**\r\n * Compiler+version: **GCC 10**\r\n * Conan version: **conan 1.26.0**\r\n * Python version: **Python 3.8.3**\r\n\r\n\r\n### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)\r\n```\r\narch=x86_64\r\narch_build=x86_64\r\nbuild_type=Release\r\ncompiler=gcc\r\ncompiler.libcxx=libstdc++\r\ncompiler.version=10\r\nos=Linux\r\nos_build=Linux\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nWhen I try to install xorg/system\r\n`conan install xorg/system@ --build missing`\r\n\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nConfiguration:\r\n[settings]\r\narch=x86_64\r\narch_build=x86_64\r\nbuild_type=Release\r\ncompiler=gcc\r\ncompiler.libcxx=libstdc++\r\ncompiler.version=10\r\nos=Linux\r\nos_build=Linux\r\n[options]\r\n[build_requires]\r\n[env]\r\n\r\nInstalling package: xorg/system\r\nRequirements\r\n xorg/system from 'conan-center' - Cache\r\nPackages\r\n xorg/system:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache\r\n\r\nInstalling (downloading, building) binaries...\r\nxorg/system: Already installed!\r\nERROR: xorg/system: Error in package_info() method, line 57\r\n\tself._fill_cppinfo_from_pkgconfig(name)\r\nwhile calling '_fill_cppinfo_from_pkgconfig', line 18\r\n\tif not pkg_config.provides:\r\n\tConanException: pkg-config command ['pkg-config', '--print-provides', 'sm', '--print-errors'] failed with error: Command 'pkg-config --print-provides sm --print-errors' returned non-zero exit status 1.\r\nPackage sm was not found in the pkg-config search path.\r\nPerhaps you should add the directory containing `sm.pc'\r\nto the PKG_CONFIG_PATH environment variable\r\nPackage 'sm', required by 'virtual:world', not found\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from conans import ConanFile, tools\nfrom conans.errors import ConanException\n\n\nclass ConanXOrg(ConanFile):\n name = \"xorg\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"MIT\"\n homepage = \"https://www.x.org/wiki/\"\n description = \"The X.Org project provides an open source implementation 
of the X Window System.\"\n settings = {\"os\": \"Linux\"}\n\n def package_id(self):\n self.info.header_only()\n\n def _fill_cppinfo_from_pkgconfig(self, name):\n pkg_config = tools.PkgConfig(name)\n if not pkg_config.provides:\n raise ConanException(\"OpenGL development files aren't available, give up\")\n libs = [lib[2:] for lib in pkg_config.libs_only_l]\n lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]\n ldflags = [flag for flag in pkg_config.libs_only_other]\n include_dirs = [include[2:] for include in pkg_config.cflags_only_I]\n cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith(\"-D\")]\n defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith(\"-D\")]\n\n self.cpp_info.system_libs.extend(libs)\n self.cpp_info.libdirs.extend(lib_dirs)\n self.cpp_info.sharedlinkflags.extend(ldflags)\n self.cpp_info.exelinkflags.extend(ldflags)\n self.cpp_info.defines.extend(defines)\n self.cpp_info.includedirs.extend(include_dirs)\n self.cpp_info.cflags.extend(cflags)\n self.cpp_info.cxxflags.extend(cflags)\n\n\n def system_requirements(self):\n if tools.os_info.is_linux and self.settings.os == \"Linux\":\n package_tool = tools.SystemPackageTool(conanfile=self, default_mode=\"verify\")\n if tools.os_info.with_apt:\n packages = [\"xorg-dev\", \"libx11-xcb-dev\", \"libxcb-render0-dev\", \"libxcb-render-util0-dev\"]\n elif tools.os_info.with_yum:\n packages = [\"xorg-x11-server-devel\"]\n elif tools.os_info.with_pacman:\n packages = [\"xorg-server-devel\"]\n elif tools.os_info.with_zypper:\n packages = [\"Xorg-x11-devel\"]\n else:\n self.output.warn(\"Do not know how to install 'xorg' for {}.\".format(tools.os_info.linux_distro))\n for p in packages:\n package_tool.install(update=True, packages=p)\n\n def package_info(self):\n for name in [\"x11\", \"x11-xcb\", \"dmx\", \"fontenc\", \"libfs\", \"ice\", \"sm\", \"xau\", \"xaw7\",\n \"xcomposite\",\"xcursor\", \"xdamage\", \"xdmcp\", \"xext\", \"xfixes\", \"xft\", \"xi\",\n \"xinerama\", \"xkbfile\", \"xmu\", \"xmuu\", \"xpm\", \"xrandr\", \"xrender\", \"xres\",\n \"xscrnsaver\", \"xt\", \"xtst\", \"xv\", \"xvmc\", \"xxf86dga\", \"xxf86vm\", \"xtrans\"]:\n self._fill_cppinfo_from_pkgconfig(name)\n", "path": "recipes/xorg/all/conanfile.py"}]} | 2,028 | 177 |
gh_patches_debug_29497 | rasdani/github-patches | git_diff | python__peps-2533 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Waste less vertical space at top of rendered PEP
This is about usability of peps rendered on peps.python.org.
At the top of a PEP (e.g. https://peps.python.org/pep-0687/) there's a table with metadata. Most of that I ignore or is even duplicate (the title). I usually have to scroll right past that to the Abstract. Maybe the metadata could be collapsed, like the ToC? Or moved to the sidebar, like the ToC?
</issue>
<code>
[start of pep_sphinx_extensions/pep_processor/transforms/pep_title.py]
1 from pathlib import Path
2
3 from docutils import nodes
4 from docutils import transforms
5 from docutils import utils
6 from docutils.parsers.rst import roles
7 from docutils.parsers.rst import states
8
9
10 class PEPTitle(transforms.Transform):
11 """Add PEP title and organise document hierarchy."""
12
13 # needs to run before docutils.transforms.frontmatter.DocInfo and after
14 # pep_processor.transforms.pep_title.PEPTitle
15 default_priority = 335
16
17 def apply(self) -> None:
18 if not Path(self.document["source"]).match("pep-*"):
19 return # not a PEP file, exit early
20
21 # Directory to hold the PEP's RFC2822 header details, to extract a title string
22 pep_header_details = {}
23
24 # Iterate through the header fields, which are the first section of the document
25 for field in self.document[0]:
26 # Hold details of the attribute's tag against its details
27 row_attributes = {sub.tagname: sub.rawsource for sub in field}
28 pep_header_details[row_attributes["field_name"]] = row_attributes["field_body"]
29
30 # We only need the PEP number and title
31 if pep_header_details.keys() >= {"PEP", "Title"}:
32 break
33
34 # Create the title string for the PEP
35 pep_number = int(pep_header_details["PEP"])
36 pep_title = pep_header_details["Title"]
37 pep_title_string = f"PEP {pep_number} -- {pep_title}" # double hyphen for en dash
38
39 # Generate the title section node and its properties
40 title_nodes = _line_to_nodes(pep_title_string)
41 pep_title_node = nodes.section("", nodes.title("", "", *title_nodes, classes=["page-title"]), names=["pep-content"])
42
43 # Insert the title node as the root element, move children down
44 document_children = self.document.children
45 self.document.children = [pep_title_node]
46 pep_title_node.extend(document_children)
47 self.document.note_implicit_target(pep_title_node, pep_title_node)
48
49
50 def _line_to_nodes(text: str) -> list[nodes.Node]:
51 """Parse RST string to nodes."""
52 document = utils.new_document("<inline-rst>")
53 document.settings.pep_references = document.settings.rfc_references = False # patch settings
54 states.RSTStateMachine(state_classes=states.state_classes, initial_state="Body").run([text], document) # do parsing
55 roles._roles.pop("", None) # restore the "default" default role after parsing a document
56 return document[0].children
57
[end of pep_sphinx_extensions/pep_processor/transforms/pep_title.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pep_sphinx_extensions/pep_processor/transforms/pep_title.py b/pep_sphinx_extensions/pep_processor/transforms/pep_title.py
--- a/pep_sphinx_extensions/pep_processor/transforms/pep_title.py
+++ b/pep_sphinx_extensions/pep_processor/transforms/pep_title.py
@@ -22,13 +22,19 @@
pep_header_details = {}
# Iterate through the header fields, which are the first section of the document
+ desired_fields = {"PEP", "Title"}
+ fields_to_remove = []
for field in self.document[0]:
# Hold details of the attribute's tag against its details
row_attributes = {sub.tagname: sub.rawsource for sub in field}
pep_header_details[row_attributes["field_name"]] = row_attributes["field_body"]
+ # Store the redundant fields in the table for removal
+ if row_attributes["field_name"] in desired_fields:
+ fields_to_remove.append(field)
+
# We only need the PEP number and title
- if pep_header_details.keys() >= {"PEP", "Title"}:
+ if pep_header_details.keys() >= desired_fields:
break
# Create the title string for the PEP
@@ -46,6 +52,10 @@
pep_title_node.extend(document_children)
self.document.note_implicit_target(pep_title_node, pep_title_node)
+ # Remove the now-redundant fields
+ for field in fields_to_remove:
+ field.parent.remove(field)
+
def _line_to_nodes(text: str) -> list[nodes.Node]:
"""Parse RST string to nodes."""
| {"golden_diff": "diff --git a/pep_sphinx_extensions/pep_processor/transforms/pep_title.py b/pep_sphinx_extensions/pep_processor/transforms/pep_title.py\n--- a/pep_sphinx_extensions/pep_processor/transforms/pep_title.py\n+++ b/pep_sphinx_extensions/pep_processor/transforms/pep_title.py\n@@ -22,13 +22,19 @@\n pep_header_details = {}\n \n # Iterate through the header fields, which are the first section of the document\n+ desired_fields = {\"PEP\", \"Title\"}\n+ fields_to_remove = []\n for field in self.document[0]:\n # Hold details of the attribute's tag against its details\n row_attributes = {sub.tagname: sub.rawsource for sub in field}\n pep_header_details[row_attributes[\"field_name\"]] = row_attributes[\"field_body\"]\n \n+ # Store the redundant fields in the table for removal\n+ if row_attributes[\"field_name\"] in desired_fields:\n+ fields_to_remove.append(field)\n+\n # We only need the PEP number and title\n- if pep_header_details.keys() >= {\"PEP\", \"Title\"}:\n+ if pep_header_details.keys() >= desired_fields:\n break\n \n # Create the title string for the PEP\n@@ -46,6 +52,10 @@\n pep_title_node.extend(document_children)\n self.document.note_implicit_target(pep_title_node, pep_title_node)\n \n+ # Remove the now-redundant fields\n+ for field in fields_to_remove:\n+ field.parent.remove(field)\n+\n \n def _line_to_nodes(text: str) -> list[nodes.Node]:\n \"\"\"Parse RST string to nodes.\"\"\"\n", "issue": "Waste less vertical space at top of rendered PEP\nThis is about usability of peps rendered on peps.python.org.\r\n\r\nAt the top of a PEP (e.g. https://peps.python.org/pep-0687/) there's a table with metadata. Most of that I ignore or is even duplicate (the title). I usually have to scroll right past that to the Abstract. Maybe the metadata could be collapsed, like the ToC? 
Or moved to the sidebar, like the ToC?\n", "before_files": [{"content": "from pathlib import Path\n\nfrom docutils import nodes\nfrom docutils import transforms\nfrom docutils import utils\nfrom docutils.parsers.rst import roles\nfrom docutils.parsers.rst import states\n\n\nclass PEPTitle(transforms.Transform):\n \"\"\"Add PEP title and organise document hierarchy.\"\"\"\n\n # needs to run before docutils.transforms.frontmatter.DocInfo and after\n # pep_processor.transforms.pep_title.PEPTitle\n default_priority = 335\n\n def apply(self) -> None:\n if not Path(self.document[\"source\"]).match(\"pep-*\"):\n return # not a PEP file, exit early\n\n # Directory to hold the PEP's RFC2822 header details, to extract a title string\n pep_header_details = {}\n\n # Iterate through the header fields, which are the first section of the document\n for field in self.document[0]:\n # Hold details of the attribute's tag against its details\n row_attributes = {sub.tagname: sub.rawsource for sub in field}\n pep_header_details[row_attributes[\"field_name\"]] = row_attributes[\"field_body\"]\n\n # We only need the PEP number and title\n if pep_header_details.keys() >= {\"PEP\", \"Title\"}:\n break\n\n # Create the title string for the PEP\n pep_number = int(pep_header_details[\"PEP\"])\n pep_title = pep_header_details[\"Title\"]\n pep_title_string = f\"PEP {pep_number} -- {pep_title}\" # double hyphen for en dash\n\n # Generate the title section node and its properties\n title_nodes = _line_to_nodes(pep_title_string)\n pep_title_node = nodes.section(\"\", nodes.title(\"\", \"\", *title_nodes, classes=[\"page-title\"]), names=[\"pep-content\"])\n\n # Insert the title node as the root element, move children down\n document_children = self.document.children\n self.document.children = [pep_title_node]\n pep_title_node.extend(document_children)\n self.document.note_implicit_target(pep_title_node, pep_title_node)\n\n\ndef _line_to_nodes(text: str) -> list[nodes.Node]:\n \"\"\"Parse RST string to nodes.\"\"\"\n document = utils.new_document(\"<inline-rst>\")\n document.settings.pep_references = document.settings.rfc_references = False # patch settings\n states.RSTStateMachine(state_classes=states.state_classes, initial_state=\"Body\").run([text], document) # do parsing\n roles._roles.pop(\"\", None) # restore the \"default\" default role after parsing a document\n return document[0].children\n", "path": "pep_sphinx_extensions/pep_processor/transforms/pep_title.py"}]} | 1,332 | 377 |
gh_patches_debug_4941 | rasdani/github-patches | git_diff | Parsl__parsl-1156 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Monitoring should be an optional extra
We should push this out as 0.8.1. This is needed by the NERSC DESC stack.
</issue>
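A minimal sketch of the packaging mechanism the issue is asking for — declaring an optional dependency group ("extra") with setuptools. This is illustrative only, not Parsl's actual packaging: the group name `monitoring`, the example packages, and the project name are assumptions.

```python
# Illustrative setup.py fragment: an optional "extra" dependency group.
# Group name and package list are assumptions, not Parsl's real metadata.
from setuptools import setup, find_packages

extras_require = {
    "monitoring": [
        "sqlalchemy",
        "Flask>=1.0.2",
        "plotly",
    ],
}
extras_require["all"] = sum(extras_require.values(), [])  # umbrella extra pulling in every group

setup(
    name="example-workflows",            # placeholder project name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["requests"],       # core dependencies, always installed
    extras_require=extras_require,       # optional groups, e.g. `pip install example-workflows[monitoring]`
)
```

Users who do not need the optional feature then never pull in its extra dependencies, which is the point of making monitoring an optional extra.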
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2
3 with open('parsl/version.py') as f:
4 exec(f.read())
5
6 with open('requirements.txt') as f:
7 install_requires = f.readlines()
8
9 extras_require = {
10 'aws' : ['boto3'],
11 'kubernetes' : ['kubernetes'],
12 'oauth_ssh' : ['oauth-ssh>=0.9'],
13 'extreme_scale' : ['mpi4py'],
14 'docs' : ['nbsphinx', 'sphinx_rtd_theme'],
15 'google_cloud' : ['google-auth', 'google-api-python-client'],
16 'gssapi' : ['python-gssapi'],
17 'azure' : ['azure', 'msrestazure'],
18 'workqueue': ['work_queue'],
19 }
20 extras_require['all'] = sum(extras_require.values(), [])
21
22 setup(
23 name='parsl',
24 version=VERSION,
25 description='Simple data dependent workflows in Python',
26 long_description='Simple parallel workflows system for Python',
27 url='https://github.com/Parsl/parsl',
28 author='The Parsl Team',
29 author_email='[email protected]',
30 license='Apache 2.0',
31 download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),
32 include_package_data=True,
33 packages=find_packages(),
34 install_requires=install_requires,
35 scripts = ['parsl/executors/high_throughput/process_worker_pool.py',
36 'parsl/executors/extreme_scale/mpi_worker_pool.py',
37 'parsl/executors/low_latency/lowlatency_worker.py',
38 'parsl/executors/workqueue/workqueue_worker.py',
39 ],
40
41 extras_require=extras_require,
42 classifiers=[
43 # Maturity
44 'Development Status :: 3 - Alpha',
45 # Intended audience
46 'Intended Audience :: Developers',
47 # Licence, must match with licence above
48 'License :: OSI Approved :: Apache Software License',
49 # Python versions supported
50 'Programming Language :: Python :: 3.5',
51 'Programming Language :: Python :: 3.6',
52 ],
53 keywords=['Workflows', 'Scientific computing'],
54 entry_points={'console_scripts':
55 [
56 'parsl-globus-auth=parsl.data_provider.globus:cli_run',
57 'parsl-visualize=parsl.monitoring.visualization.app:cli_run',
58 ]}
59 )
60
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,6 +7,17 @@
install_requires = f.readlines()
extras_require = {
+ 'monitoring' : [
+ 'sqlalchemy>=1.3.0,!=1.3.4',
+ 'sqlalchemy_utils',
+ 'pydot',
+ 'networkx',
+ 'Flask>=1.0.2',
+ 'flask_sqlalchemy',
+ 'pandas',
+ 'plotly',
+ 'python-daemon'
+ ],
'aws' : ['boto3'],
'kubernetes' : ['kubernetes'],
'oauth_ssh' : ['oauth-ssh>=0.9'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,6 +7,17 @@\n install_requires = f.readlines()\n \n extras_require = {\n+ 'monitoring' : [\n+ 'sqlalchemy>=1.3.0,!=1.3.4',\n+ 'sqlalchemy_utils',\n+ 'pydot',\n+ 'networkx',\n+ 'Flask>=1.0.2',\n+ 'flask_sqlalchemy',\n+ 'pandas',\n+ 'plotly',\n+ 'python-daemon'\n+ ],\n 'aws' : ['boto3'],\n 'kubernetes' : ['kubernetes'],\n 'oauth_ssh' : ['oauth-ssh>=0.9'],\n", "issue": "Monitoring should be an optional extra\nWe should push this out as 0.8.1. This is needed by the nersc DESC stack.\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nwith open('parsl/version.py') as f:\n exec(f.read())\n\nwith open('requirements.txt') as f:\n install_requires = f.readlines()\n\nextras_require = {\n 'aws' : ['boto3'],\n 'kubernetes' : ['kubernetes'],\n 'oauth_ssh' : ['oauth-ssh>=0.9'],\n 'extreme_scale' : ['mpi4py'],\n 'docs' : ['nbsphinx', 'sphinx_rtd_theme'],\n 'google_cloud' : ['google-auth', 'google-api-python-client'],\n 'gssapi' : ['python-gssapi'],\n 'azure' : ['azure', 'msrestazure'],\n 'workqueue': ['work_queue'],\n}\nextras_require['all'] = sum(extras_require.values(), [])\n\nsetup(\n name='parsl',\n version=VERSION,\n description='Simple data dependent workflows in Python',\n long_description='Simple parallel workflows system for Python',\n url='https://github.com/Parsl/parsl',\n author='The Parsl Team',\n author_email='[email protected]',\n license='Apache 2.0',\n download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),\n include_package_data=True,\n packages=find_packages(),\n install_requires=install_requires,\n scripts = ['parsl/executors/high_throughput/process_worker_pool.py',\n 'parsl/executors/extreme_scale/mpi_worker_pool.py',\n 'parsl/executors/low_latency/lowlatency_worker.py',\n 'parsl/executors/workqueue/workqueue_worker.py',\n ],\n\n extras_require=extras_require,\n classifiers=[\n # Maturity\n 'Development Status :: 3 - Alpha',\n # Intended audience\n 'Intended Audience :: Developers',\n # Licence, must match with licence above\n 'License :: OSI Approved :: Apache Software License',\n # Python versions supported\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords=['Workflows', 'Scientific computing'],\n entry_points={'console_scripts':\n [\n 'parsl-globus-auth=parsl.data_provider.globus:cli_run',\n 'parsl-visualize=parsl.monitoring.visualization.app:cli_run',\n ]}\n)\n", "path": "setup.py"}]} | 1,183 | 167 |
gh_patches_debug_36603 | rasdani/github-patches | git_diff | getsentry__sentry-59486 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User avatars don't show in emails
At least for comment notifications, the avatar of the user who commented is just a blue box with a question mark regardless of whether they have a custom avatar or the default gravatar. We should check if this is happening for other notifications or if it's just the comment workflow email.
</issue>
<code>
[start of src/sentry/notifications/utils/avatar.py]
1 from __future__ import annotations
2
3 from django.urls import reverse
4 from django.utils.html import format_html
5 from django.utils.safestring import SafeString
6
7 from sentry.models.avatars.user_avatar import UserAvatar
8 from sentry.models.user import User
9 from sentry.services.hybrid_cloud.user import RpcUser
10 from sentry.utils.assets import get_asset_url
11 from sentry.utils.avatar import get_email_avatar
12 from sentry.utils.http import absolute_uri
13
14
15 def get_user_avatar_url(user: User | RpcUser, size: int = 20) -> str:
16 ident: str
17 if isinstance(user, User):
18 try:
19 avatar = UserAvatar.objects.get(user=user)
20 ident = avatar.ident
21 except UserAvatar.DoesNotExist:
22 return ""
23 elif user.avatar:
24 if user.avatar is None:
25 return ""
26 ident = user.avatar.ident
27 else:
28 return ""
29
30 url = reverse("sentry-user-avatar-url", args=[ident])
31 if size:
32 url = f"{url}?s={int(size)}"
33 return str(absolute_uri(url))
34
35
36 def get_sentry_avatar_url() -> str:
37 url = "/images/sentry-email-avatar.png"
38 return str(absolute_uri(get_asset_url("sentry", url)))
39
40
41 def avatar_as_html(user: User | RpcUser) -> SafeString:
42 if not user:
43 return format_html(
44 '<img class="avatar" src="{}" width="20px" height="20px" />', get_sentry_avatar_url()
45 )
46 avatar_type = user.get_avatar_type()
47 if avatar_type == "upload":
48 return format_html('<img class="avatar" src="{}" />', get_user_avatar_url(user))
49 elif avatar_type == "letter_avatar":
50 return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)
51 else:
52 return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)
53
[end of src/sentry/notifications/utils/avatar.py]
[start of src/sentry/notifications/notifications/activity/note.py]
1 from __future__ import annotations
2
3 from typing import Any, Mapping, Optional
4
5 from sentry.services.hybrid_cloud.actor import RpcActor
6 from sentry.types.integrations import ExternalProviders
7
8 from .base import GroupActivityNotification
9
10
11 class NoteActivityNotification(GroupActivityNotification):
12 message_builder = "SlackNotificationsMessageBuilder"
13 metrics_key = "note_activity"
14 template_path = "sentry/emails/activity/note"
15
16 def get_description(self) -> tuple[str, Optional[str], Mapping[str, Any]]:
17 # Notes may contain {} characters so we should escape them.
18 text = str(self.activity.data["text"]).replace("{", "{{").replace("}", "}}")
19 return text, None, {}
20
21 @property
22 def title(self) -> str:
23 if self.user:
24 author = self.user.get_display_name()
25 else:
26 author = "Unknown"
27 return f"New comment by {author}"
28
29 def get_notification_title(
30 self, provider: ExternalProviders, context: Mapping[str, Any] | None = None
31 ) -> str:
32 return self.title
33
34 def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:
35 return self.get_context()["text_description"]
36
[end of src/sentry/notifications/notifications/activity/note.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/notifications/notifications/activity/note.py b/src/sentry/notifications/notifications/activity/note.py
--- a/src/sentry/notifications/notifications/activity/note.py
+++ b/src/sentry/notifications/notifications/activity/note.py
@@ -2,6 +2,10 @@
from typing import Any, Mapping, Optional
+from django.utils.html import format_html
+from django.utils.safestring import SafeString
+
+from sentry.notifications.utils.avatar import avatar_as_html
from sentry.services.hybrid_cloud.actor import RpcActor
from sentry.types.integrations import ExternalProviders
@@ -33,3 +37,15 @@
def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:
return self.get_context()["text_description"]
+
+ def description_as_html(self, description: str, params: Mapping[str, Any]) -> SafeString:
+ """Note emails are formatted differently from almost all other activity emails.
+ Rather than passing the `description` as a string to be formatted into HTML with
+ `author` and `an_issue` (see base definition and resolved.py's `get_description`
+ as an example) we are simply passed the comment as a string that needs no formatting,
+ and want the avatar on it's own rather than bundled with the author's display name
+ because the display name is already shown in the notification title."""
+ fmt = '<span class="avatar-container">{}</span>'
+ if self.user:
+ return format_html(fmt, avatar_as_html(self.user, 48))
+ return format_html(description)
diff --git a/src/sentry/notifications/utils/avatar.py b/src/sentry/notifications/utils/avatar.py
--- a/src/sentry/notifications/utils/avatar.py
+++ b/src/sentry/notifications/utils/avatar.py
@@ -38,15 +38,18 @@
return str(absolute_uri(get_asset_url("sentry", url)))
-def avatar_as_html(user: User | RpcUser) -> SafeString:
+def avatar_as_html(user: User | RpcUser, size: int = 20) -> SafeString:
if not user:
return format_html(
- '<img class="avatar" src="{}" width="20px" height="20px" />', get_sentry_avatar_url()
+ '<img class="avatar" src="{}" width="{}px" height="{}px" />',
+ get_sentry_avatar_url(),
+ size,
+ size,
)
avatar_type = user.get_avatar_type()
if avatar_type == "upload":
return format_html('<img class="avatar" src="{}" />', get_user_avatar_url(user))
elif avatar_type == "letter_avatar":
- return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)
+ return get_email_avatar(user.get_display_name(), user.get_label(), size, False)
else:
- return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)
+ return get_email_avatar(user.get_display_name(), user.get_label(), size, True)
| {"golden_diff": "diff --git a/src/sentry/notifications/notifications/activity/note.py b/src/sentry/notifications/notifications/activity/note.py\n--- a/src/sentry/notifications/notifications/activity/note.py\n+++ b/src/sentry/notifications/notifications/activity/note.py\n@@ -2,6 +2,10 @@\n \n from typing import Any, Mapping, Optional\n \n+from django.utils.html import format_html\n+from django.utils.safestring import SafeString\n+\n+from sentry.notifications.utils.avatar import avatar_as_html\n from sentry.services.hybrid_cloud.actor import RpcActor\n from sentry.types.integrations import ExternalProviders\n \n@@ -33,3 +37,15 @@\n \n def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:\n return self.get_context()[\"text_description\"]\n+\n+ def description_as_html(self, description: str, params: Mapping[str, Any]) -> SafeString:\n+ \"\"\"Note emails are formatted differently from almost all other activity emails.\n+ Rather than passing the `description` as a string to be formatted into HTML with\n+ `author` and `an_issue` (see base definition and resolved.py's `get_description`\n+ as an example) we are simply passed the comment as a string that needs no formatting,\n+ and want the avatar on it's own rather than bundled with the author's display name\n+ because the display name is already shown in the notification title.\"\"\"\n+ fmt = '<span class=\"avatar-container\">{}</span>'\n+ if self.user:\n+ return format_html(fmt, avatar_as_html(self.user, 48))\n+ return format_html(description)\ndiff --git a/src/sentry/notifications/utils/avatar.py b/src/sentry/notifications/utils/avatar.py\n--- a/src/sentry/notifications/utils/avatar.py\n+++ b/src/sentry/notifications/utils/avatar.py\n@@ -38,15 +38,18 @@\n return str(absolute_uri(get_asset_url(\"sentry\", url)))\n \n \n-def avatar_as_html(user: User | RpcUser) -> SafeString:\n+def avatar_as_html(user: User | RpcUser, size: int = 20) -> SafeString:\n if not user:\n return format_html(\n- '<img class=\"avatar\" src=\"{}\" width=\"20px\" height=\"20px\" />', get_sentry_avatar_url()\n+ '<img class=\"avatar\" src=\"{}\" width=\"{}px\" height=\"{}px\" />',\n+ get_sentry_avatar_url(),\n+ size,\n+ size,\n )\n avatar_type = user.get_avatar_type()\n if avatar_type == \"upload\":\n return format_html('<img class=\"avatar\" src=\"{}\" />', get_user_avatar_url(user))\n elif avatar_type == \"letter_avatar\":\n- return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)\n+ return get_email_avatar(user.get_display_name(), user.get_label(), size, False)\n else:\n- return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)\n+ return get_email_avatar(user.get_display_name(), user.get_label(), size, True)\n", "issue": "User avatars don't show in emails\nAt least for comment notifications, the avatar of the user who commented is just a blue box with a question mark regardless of whether they have a custom avatar or the default gravatar. 
We should check if this is happening for other notifications or if it's just the comment workflow email.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom django.urls import reverse\nfrom django.utils.html import format_html\nfrom django.utils.safestring import SafeString\n\nfrom sentry.models.avatars.user_avatar import UserAvatar\nfrom sentry.models.user import User\nfrom sentry.services.hybrid_cloud.user import RpcUser\nfrom sentry.utils.assets import get_asset_url\nfrom sentry.utils.avatar import get_email_avatar\nfrom sentry.utils.http import absolute_uri\n\n\ndef get_user_avatar_url(user: User | RpcUser, size: int = 20) -> str:\n ident: str\n if isinstance(user, User):\n try:\n avatar = UserAvatar.objects.get(user=user)\n ident = avatar.ident\n except UserAvatar.DoesNotExist:\n return \"\"\n elif user.avatar:\n if user.avatar is None:\n return \"\"\n ident = user.avatar.ident\n else:\n return \"\"\n\n url = reverse(\"sentry-user-avatar-url\", args=[ident])\n if size:\n url = f\"{url}?s={int(size)}\"\n return str(absolute_uri(url))\n\n\ndef get_sentry_avatar_url() -> str:\n url = \"/images/sentry-email-avatar.png\"\n return str(absolute_uri(get_asset_url(\"sentry\", url)))\n\n\ndef avatar_as_html(user: User | RpcUser) -> SafeString:\n if not user:\n return format_html(\n '<img class=\"avatar\" src=\"{}\" width=\"20px\" height=\"20px\" />', get_sentry_avatar_url()\n )\n avatar_type = user.get_avatar_type()\n if avatar_type == \"upload\":\n return format_html('<img class=\"avatar\" src=\"{}\" />', get_user_avatar_url(user))\n elif avatar_type == \"letter_avatar\":\n return get_email_avatar(user.get_display_name(), user.get_label(), 20, False)\n else:\n return get_email_avatar(user.get_display_name(), user.get_label(), 20, True)\n", "path": "src/sentry/notifications/utils/avatar.py"}, {"content": "from __future__ import annotations\n\nfrom typing import Any, Mapping, Optional\n\nfrom sentry.services.hybrid_cloud.actor import RpcActor\nfrom sentry.types.integrations import ExternalProviders\n\nfrom .base import GroupActivityNotification\n\n\nclass NoteActivityNotification(GroupActivityNotification):\n message_builder = \"SlackNotificationsMessageBuilder\"\n metrics_key = \"note_activity\"\n template_path = \"sentry/emails/activity/note\"\n\n def get_description(self) -> tuple[str, Optional[str], Mapping[str, Any]]:\n # Notes may contain {} characters so we should escape them.\n text = str(self.activity.data[\"text\"]).replace(\"{\", \"{{\").replace(\"}\", \"}}\")\n return text, None, {}\n\n @property\n def title(self) -> str:\n if self.user:\n author = self.user.get_display_name()\n else:\n author = \"Unknown\"\n return f\"New comment by {author}\"\n\n def get_notification_title(\n self, provider: ExternalProviders, context: Mapping[str, Any] | None = None\n ) -> str:\n return self.title\n\n def get_message_description(self, recipient: RpcActor, provider: ExternalProviders) -> Any:\n return self.get_context()[\"text_description\"]\n", "path": "src/sentry/notifications/notifications/activity/note.py"}]} | 1,477 | 683 |
gh_patches_debug_7838 | rasdani/github-patches | git_diff | facebookresearch__hydra-472 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] setuptools finds and installs tests/ as a top-level package in site-packages/
</issue>
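For context, `find_packages()` with no arguments collects every package directory (anything with an `__init__.py`) under the project root, so a top-level `tests/` package gets swept into the installed distribution. A hedged sketch of the two usual remedies follows; `mypackage` is a placeholder name, and this is not necessarily the exact change the repository adopted.

```python
# Illustrative setup.py fragment: keep setuptools from packaging tests/.
# "mypackage" is a placeholder, not a real project name.
from setuptools import setup, find_packages

# Option 1: whitelist only the real package tree.
packages = find_packages(include=["mypackage", "mypackage.*"])

# Option 2: blacklist the unwanted directories instead.
# packages = find_packages(exclude=["tests", "tests.*"])

setup(
    name="mypackage",
    version="0.1.0",
    packages=packages,
)
```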
<code>
[start of setup.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 import codecs
4 import os
5 import pathlib
6 import re
7 import shutil
8 from distutils import cmd
9 from os.path import exists, isdir, join
10 from typing import Any, List
11
12 import pkg_resources
13 from setuptools import find_packages, setup
14
15 here = os.path.abspath(os.path.dirname(__file__))
16
17
18 def read(*parts):
19 with codecs.open(os.path.join(here, *parts), "r") as fp:
20 return fp.read()
21
22
23 def find_version(*file_paths):
24 version_file = read(*file_paths)
25 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
26 if version_match:
27 return version_match.group(1)
28 raise RuntimeError("Unable to find version string.")
29
30
31 with pathlib.Path("requirements/requirements.txt").open() as requirements_txt:
32 install_requires = [
33 str(requirement)
34 for requirement in pkg_resources.parse_requirements(requirements_txt)
35 ]
36
37
38 class CleanCommand(cmd.Command):
39 """
40 Our custom command to clean out junk files.
41 """
42
43 description = "Cleans out junk files we don't want in the repo"
44 user_options: List[Any] = []
45
46 def initialize_options(self):
47 pass
48
49 def finalize_options(self):
50 pass
51
52 @staticmethod
53 def find(root, includes, excludes=[]):
54 res = []
55 for parent, dirs, files in os.walk(root):
56 for f in dirs + files:
57 add = list()
58 for include in includes:
59 if re.findall(include, f):
60 add.append(join(parent, f))
61 res.extend(add)
62 final_list = []
63 # Exclude things that matches an exclude pattern
64 for ex in excludes:
65 for file in res:
66 if not re.findall(ex, file):
67 final_list.append(file)
68 return final_list
69
70 def run(self):
71 delete_patterns = [
72 ".eggs",
73 ".egg-info",
74 ".pytest_cache",
75 "build",
76 "dist",
77 "__pycache__",
78 ".pyc",
79 ]
80 deletion_list = CleanCommand.find(
81 ".", includes=delete_patterns, excludes=["\\.nox/.*"]
82 )
83
84 for f in deletion_list:
85 if exists(f):
86 if isdir(f):
87 shutil.rmtree(f, ignore_errors=True)
88 else:
89 os.unlink(f)
90
91
92 with open("README.md", "r") as fh:
93 LONG_DESC = fh.read()
94 setup(
95 cmdclass={"clean": CleanCommand},
96 name="hydra-core",
97 version=find_version("hydra", "__init__.py"),
98 author="Omry Yadan",
99 author_email="[email protected]",
100 description="A framework for elegantly configuring complex applications",
101 long_description=LONG_DESC,
102 long_description_content_type="text/markdown",
103 url="https://github.com/facebookresearch/hydra",
104 keywords="command-line configuration yaml tab-completion",
105 packages=find_packages(),
106 include_package_data=True,
107 classifiers=[
108 "License :: OSI Approved :: MIT License",
109 "Development Status :: 4 - Beta",
110 "Programming Language :: Python :: 3.6",
111 "Programming Language :: Python :: 3.7",
112 "Programming Language :: Python :: 3.8",
113 "Operating System :: POSIX :: Linux",
114 "Operating System :: MacOS",
115 "Operating System :: Microsoft :: Windows",
116 ],
117 install_requires=install_requires,
118 # Install development dependencies with
119 # pip install -r requirements/dev.txt -e .
120 )
121
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -102,7 +102,7 @@
long_description_content_type="text/markdown",
url="https://github.com/facebookresearch/hydra",
keywords="command-line configuration yaml tab-completion",
- packages=find_packages(),
+ packages=find_packages(include=["hydra"]),
include_package_data=True,
classifiers=[
"License :: OSI Approved :: MIT License",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -102,7 +102,7 @@\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra\",\n keywords=\"command-line configuration yaml tab-completion\",\n- packages=find_packages(),\n+ packages=find_packages(include=[\"hydra\"]),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n", "issue": "[Bug] setuptools finds and installs tests/ as a top-level package in site-packages/\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nimport codecs\nimport os\nimport pathlib\nimport re\nimport shutil\nfrom distutils import cmd\nfrom os.path import exists, isdir, join\nfrom typing import Any, List\n\nimport pkg_resources\nfrom setuptools import find_packages, setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n with codecs.open(os.path.join(here, *parts), \"r\") as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nwith pathlib.Path(\"requirements/requirements.txt\").open() as requirements_txt:\n install_requires = [\n str(requirement)\n for requirement in pkg_resources.parse_requirements(requirements_txt)\n ]\n\n\nclass CleanCommand(cmd.Command):\n \"\"\"\n Our custom command to clean out junk files.\n \"\"\"\n\n description = \"Cleans out junk files we don't want in the repo\"\n user_options: List[Any] = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n @staticmethod\n def find(root, includes, excludes=[]):\n res = []\n for parent, dirs, files in os.walk(root):\n for f in dirs + files:\n add = list()\n for include in includes:\n if re.findall(include, f):\n add.append(join(parent, f))\n res.extend(add)\n final_list = []\n # Exclude things that matches an exclude pattern\n for ex in excludes:\n for file in res:\n if not re.findall(ex, file):\n final_list.append(file)\n return final_list\n\n def run(self):\n delete_patterns = [\n \".eggs\",\n \".egg-info\",\n \".pytest_cache\",\n \"build\",\n \"dist\",\n \"__pycache__\",\n \".pyc\",\n ]\n deletion_list = CleanCommand.find(\n \".\", includes=delete_patterns, excludes=[\"\\\\.nox/.*\"]\n )\n\n for f in deletion_list:\n if exists(f):\n if isdir(f):\n shutil.rmtree(f, ignore_errors=True)\n else:\n os.unlink(f)\n\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n cmdclass={\"clean\": CleanCommand},\n name=\"hydra-core\",\n version=find_version(\"hydra\", \"__init__.py\"),\n author=\"Omry Yadan\",\n author_email=\"[email protected]\",\n description=\"A framework for elegantly configuring complex applications\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra\",\n keywords=\"command-line configuration yaml tab-completion\",\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS\",\n \"Operating System :: 
Microsoft :: Windows\",\n ],\n install_requires=install_requires,\n # Install development dependencies with\n # pip install -r requirements/dev.txt -e .\n )\n", "path": "setup.py"}]} | 1,572 | 104 |
gh_patches_debug_2075 | rasdani/github-patches | git_diff | litestar-org__litestar-2433 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: `2.2.0` does not have `[full]` group
### Description
The move from `poetry` to `pdm` in 2.2.0 has a regression for the `[full]` group.
### URL to code causing the issue
_No response_
### MCVE
```python
pip install litestar[full]==2.2.0 && pip show pydantic
```
### Steps to reproduce
- `pip install litestar[full]`
- Observe no `[full]` group is available, and `pip show $package` does not show expected packages
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.2.0
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
> [!NOTE]
> Check out all issues funded or available for funding here: https://polar.sh/litestar-org
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
<a href="https://polar.sh/litestar-org/litestar/issues/2434">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/2434/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/2434/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
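To make the missing-extras symptom easy to check, the sketch below lists the optional-dependency groups an installed distribution advertises, using only the standard library (Python 3.8+). It assumes the distribution is already installed; the values in the comments are illustrative.

```python
# Sketch: inspect which extras an installed distribution declares.
from importlib.metadata import metadata

meta = metadata("litestar")               # raises PackageNotFoundError if not installed
print(meta.get_all("Provides-Extra"))     # e.g. ['full', 'standard', ...]; None if no extras were published
print(meta.get_all("Requires-Dist"))      # requirements, including ones gated behind `extra == "..."` markers
```

A build that dropped the `[full]` group would show no matching `Provides-Extra` entry, which is consistent with `pip show pydantic` coming back empty after installing `litestar[full]`.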
<code>
[start of litestar/types/internal_types.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any, Callable, Literal, NamedTuple
4
5 from litestar.utils.deprecation import warn_deprecation
6
7 __all__ = (
8 "ControllerRouterHandler",
9 "PathParameterDefinition",
10 "PathParameterDefinition",
11 "ReservedKwargs",
12 "ResponseType",
13 "RouteHandlerMapItem",
14 "RouteHandlerType",
15 )
16
17 if TYPE_CHECKING:
18 from typing_extensions import TypeAlias
19
20 from litestar.app import Litestar
21 from litestar.controller import Controller
22 from litestar.handlers.asgi_handlers import ASGIRouteHandler
23 from litestar.handlers.http_handlers import HTTPRouteHandler
24 from litestar.handlers.websocket_handlers import WebsocketRouteHandler
25 from litestar.response import Response
26 from litestar.router import Router
27 from litestar.types import Method
28
29 ReservedKwargs: TypeAlias = Literal["request", "socket", "headers", "query", "cookies", "state", "data"]
30 RouteHandlerType: TypeAlias = "HTTPRouteHandler | WebsocketRouteHandler | ASGIRouteHandler"
31 ResponseType: TypeAlias = "type[Response]"
32 ControllerRouterHandler: TypeAlias = "type[Controller] | RouteHandlerType | Router | Callable[..., Any]"
33 RouteHandlerMapItem: TypeAlias = 'dict[Method | Literal["websocket", "asgi"], RouteHandlerType]'
34
35 # deprecated
36 _LitestarType: TypeAlias = "Litestar"
37
38
39 class PathParameterDefinition(NamedTuple):
40 """Path parameter tuple."""
41
42 name: str
43 full: str
44 type: type
45 parser: Callable[[str], Any] | None
46
47
48 def __getattr__(name: str) -> Any:
49 if name == "LitestarType":
50 warn_deprecation(
51 "2.3.0",
52 "LitestarType",
53 "import",
54 removal_in="3.0.0",
55 alternative="Litestar",
56 )
57 return _LitestarType
58 raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
59
[end of litestar/types/internal_types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/types/internal_types.py b/litestar/types/internal_types.py
--- a/litestar/types/internal_types.py
+++ b/litestar/types/internal_types.py
@@ -48,7 +48,7 @@
def __getattr__(name: str) -> Any:
if name == "LitestarType":
warn_deprecation(
- "2.3.0",
+ "2.2.1",
"LitestarType",
"import",
removal_in="3.0.0",
| {"golden_diff": "diff --git a/litestar/types/internal_types.py b/litestar/types/internal_types.py\n--- a/litestar/types/internal_types.py\n+++ b/litestar/types/internal_types.py\n@@ -48,7 +48,7 @@\n def __getattr__(name: str) -> Any:\n if name == \"LitestarType\":\n warn_deprecation(\n- \"2.3.0\",\n+ \"2.2.1\",\n \"LitestarType\",\n \"import\",\n removal_in=\"3.0.0\",\n", "issue": "Bug: `2.2.0` does not have `[full]` group\n### Description\r\n\r\nThe move from `poetry` to `pdm` in 2.2.0 has a regression for the `[full]` group.\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\npip install litestar[full]==2.2.0 && pip show pydantic\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n- `pip install litestar[full]`\r\n- Observe no `[full]` group is available, and `pip show $package` does not show expected pacakges\r\n\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Litestar Version\r\n\r\n2.2.0\r\n\r\n### Platform\r\n\r\n- [ ] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [X] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n> [!NOTE] \r\n> Check out all issues funded or available for funding here: https://polar.sh/litestar-org\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2434\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2434/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2434/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, Literal, NamedTuple\n\nfrom litestar.utils.deprecation import warn_deprecation\n\n__all__ = (\n \"ControllerRouterHandler\",\n \"PathParameterDefinition\",\n \"PathParameterDefinition\",\n \"ReservedKwargs\",\n \"ResponseType\",\n \"RouteHandlerMapItem\",\n \"RouteHandlerType\",\n)\n\nif TYPE_CHECKING:\n from typing_extensions import TypeAlias\n\n from litestar.app import Litestar\n from litestar.controller import Controller\n from litestar.handlers.asgi_handlers import ASGIRouteHandler\n from litestar.handlers.http_handlers import HTTPRouteHandler\n from litestar.handlers.websocket_handlers import WebsocketRouteHandler\n from litestar.response import Response\n from litestar.router import Router\n from litestar.types import Method\n\nReservedKwargs: TypeAlias = Literal[\"request\", \"socket\", \"headers\", \"query\", \"cookies\", \"state\", \"data\"]\nRouteHandlerType: TypeAlias = \"HTTPRouteHandler | WebsocketRouteHandler | ASGIRouteHandler\"\nResponseType: TypeAlias = \"type[Response]\"\nControllerRouterHandler: TypeAlias = \"type[Controller] | RouteHandlerType | Router | Callable[..., Any]\"\nRouteHandlerMapItem: TypeAlias = 'dict[Method | Literal[\"websocket\", \"asgi\"], RouteHandlerType]'\n\n# deprecated\n_LitestarType: TypeAlias = \"Litestar\"\n\n\nclass PathParameterDefinition(NamedTuple):\n \"\"\"Path parameter tuple.\"\"\"\n\n name: str\n full: str\n type: type\n parser: Callable[[str], Any] | None\n\n\ndef __getattr__(name: str) -> Any:\n if name == \"LitestarType\":\n warn_deprecation(\n \"2.3.0\",\n \"LitestarType\",\n \"import\",\n removal_in=\"3.0.0\",\n 
alternative=\"Litestar\",\n )\n return _LitestarType\n raise AttributeError(f\"module {__name__!r} has no attribute {name!r}\")\n", "path": "litestar/types/internal_types.py"}]} | 1,473 | 116 |
gh_patches_debug_32626 | rasdani/github-patches | git_diff | uccser__cs-unplugged-147 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Store custom Kordac templates
The custom Kordac templates for Markdown conversion need to be stored within the repository.
Gut instinct is to store these within the `templates` directory under `markdown_templates` and then exclude this folder from the Django template loader (to avoid loading unused templates in serving webpages).
These can then be loaded for Kordac (possibly a Django loader would do the job).
</issue>
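One minimal way to do the loading step described above is to read every `.html` file in the chosen folder into the `html_templates` dict that the `Kordac(...)` constructor already receives in this codebase. The directory path below is an assumption, not necessarily the layout the project settled on.

```python
# Hypothetical loader: map "<name>.html" files to their contents for Kordac.
import os

def load_template_files(template_dir="templates/markdown_templates"):
    templates = {}
    for filename in os.listdir(template_dir):
        name, ext = os.path.splitext(filename)
        if ext == ".html":
            with open(os.path.join(template_dir, filename), encoding="UTF-8") as f:
                templates[name] = f.read()
    return templates

# converter = Kordac(html_templates=load_template_files(), extensions=[...])
```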
<code>
[start of csunplugged/utils/BaseLoader.py]
1 import yaml
2 import mdx_math
3 import abc
4 import sys
5 from kordac import Kordac
6 from .check_converter_required_files import check_required_files
7
8
9 class BaseLoader():
10 """Base loader class for individual loaders"""
11
12 def __init__(self, BASE_PATH='', load_log=[]):
13 if load_log:
14 self.load_log = load_log
15 else:
16 self.load_log = list(load_log)
17 self.BASE_PATH = BASE_PATH
18 self.setup_md_to_html_converter()
19
20 def setup_md_to_html_converter(self):
21 """Create Kordac converter with custom processors, html templates,
22 and extensions.
23 """
24 templates = dict()
25 templates['scratch'] = '<div><object data="{% autoescape false -%}{{ "{% get_static_prefix %}" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}" type="image/svg+xml" /></div>' # noqa: E501 Fixed in #77
26 templates['iframe'] = '<iframe allowtransparency="true" width="485" height="402" src="{{ link }}" frameborder="0" allowfullscreen="true"></iframe>' # noqa: E501 Fixed in #77
27 templates['heading'] = '<{{ heading_type }} id="{{ title_slug }}">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77
28 extensions = [
29 'markdown.extensions.fenced_code',
30 'markdown.extensions.codehilite',
31 'markdown.extensions.sane_lists',
32 'markdown.extensions.tables',
33 mdx_math.MathExtension(enable_dollar_delimiter=True)
34 ]
35 self.converter = Kordac(html_templates=templates, extensions=extensions)
36 custom_processors = self.converter.processor_defaults()
37 custom_processors.add('remove-title')
38 self.converter.update_processors(custom_processors)
39
40 def convert_md_file(self, md_file_path):
41 """Returns the Kordac object for a given Markdown file
42
43 Args:
44 file_path: location of md file to convert
45
46 Returns:
47 Kordac result object
48 """
49 content = open(md_file_path, encoding='UTF-8').read()
50 result = self.converter.convert(content)
51 check_required_files(result.required_files)
52 return result
53
54 def log(self, log_message, indent_amount=0):
55 """Adds the log message to the load log with the specified indent"""
56 self.load_log.append((log_message, indent_amount))
57
58 def print_load_log(self):
59 """Output log messages from loader to console"""
60 for (log, indent_amount) in self.load_log:
61 indent = ' ' * indent_amount
62 sys.stdout.write('{indent}{text}\n'.format(indent=indent, text=log))
63 sys.stdout.write('\n')
64 self.load_log = []
65
66 def load_yaml_file(self, yaml_file_path):
67 """Loads and reads yaml file
68
69 Args:
70 file_path: location of yaml file to read
71
72 Returns:
73 Either list or string, depending on structure of given yaml file
74 """
75 yaml_file = open(yaml_file_path, encoding='UTF-8').read()
76 return yaml.load(yaml_file)
77
78 @abc.abstractmethod
79 def load(self):
80 raise NotImplementedError('subclass does not implement this method')
81
[end of csunplugged/utils/BaseLoader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/csunplugged/utils/BaseLoader.py b/csunplugged/utils/BaseLoader.py
--- a/csunplugged/utils/BaseLoader.py
+++ b/csunplugged/utils/BaseLoader.py
@@ -2,6 +2,9 @@
import mdx_math
import abc
import sys
+import re
+import os.path
+from os import listdir
from kordac import Kordac
from .check_converter_required_files import check_required_files
@@ -21,10 +24,7 @@
"""Create Kordac converter with custom processors, html templates,
and extensions.
"""
- templates = dict()
- templates['scratch'] = '<div><object data="{% autoescape false -%}{{ "{% get_static_prefix %}" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}" type="image/svg+xml" /></div>' # noqa: E501 Fixed in #77
- templates['iframe'] = '<iframe allowtransparency="true" width="485" height="402" src="{{ link }}" frameborder="0" allowfullscreen="true"></iframe>' # noqa: E501 Fixed in #77
- templates['heading'] = '<{{ heading_type }} id="{{ title_slug }}">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77
+ templates = self.load_template_files()
extensions = [
'markdown.extensions.fenced_code',
'markdown.extensions.codehilite',
@@ -75,6 +75,19 @@
yaml_file = open(yaml_file_path, encoding='UTF-8').read()
return yaml.load(yaml_file)
+ def load_template_files(self):
+ templates = dict()
+ template_path = os.path.join(
+ os.path.dirname(__file__),
+ 'custom_converter_templates/'
+ )
+ for file in listdir(template_path):
+ template_file = re.search(r'(.*?).html$', file)
+ if template_file:
+ template_name = template_file.groups()[0]
+ templates[template_name] = open(template_path + file).read()
+ return templates
+
@abc.abstractmethod
def load(self):
raise NotImplementedError('subclass does not implement this method')
| {"golden_diff": "diff --git a/csunplugged/utils/BaseLoader.py b/csunplugged/utils/BaseLoader.py\n--- a/csunplugged/utils/BaseLoader.py\n+++ b/csunplugged/utils/BaseLoader.py\n@@ -2,6 +2,9 @@\n import mdx_math\n import abc\n import sys\n+import re\n+import os.path\n+from os import listdir\n from kordac import Kordac\n from .check_converter_required_files import check_required_files\n \n@@ -21,10 +24,7 @@\n \"\"\"Create Kordac converter with custom processors, html templates,\n and extensions.\n \"\"\"\n- templates = dict()\n- templates['scratch'] = '<div><object data=\"{% autoescape false -%}{{ \"{% get_static_prefix %}\" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}\" type=\"image/svg+xml\" /></div>' # noqa: E501 Fixed in #77\n- templates['iframe'] = '<iframe allowtransparency=\"true\" width=\"485\" height=\"402\" src=\"{{ link }}\" frameborder=\"0\" allowfullscreen=\"true\"></iframe>' # noqa: E501 Fixed in #77\n- templates['heading'] = '<{{ heading_type }} id=\"{{ title_slug }}\">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77\n+ templates = self.load_template_files()\n extensions = [\n 'markdown.extensions.fenced_code',\n 'markdown.extensions.codehilite',\n@@ -75,6 +75,19 @@\n yaml_file = open(yaml_file_path, encoding='UTF-8').read()\n return yaml.load(yaml_file)\n \n+ def load_template_files(self):\n+ templates = dict()\n+ template_path = os.path.join(\n+ os.path.dirname(__file__),\n+ 'custom_converter_templates/'\n+ )\n+ for file in listdir(template_path):\n+ template_file = re.search(r'(.*?).html$', file)\n+ if template_file:\n+ template_name = template_file.groups()[0]\n+ templates[template_name] = open(template_path + file).read()\n+ return templates\n+\n @abc.abstractmethod\n def load(self):\n raise NotImplementedError('subclass does not implement this method')\n", "issue": "Store custom Kordac templates\nThe custom Kordac templates for Markdown conversion need to be stored within the repository.\r\nGut instinct is to store these within the `templates` directory under `markdown_templates` and then exclude this folder from the Django template loader (to avoid loading unused templates in serving webpages).\r\n\r\nThese can then be loaded for Kordac (possibly a Django loader would do the job).\n", "before_files": [{"content": "import yaml\nimport mdx_math\nimport abc\nimport sys\nfrom kordac import Kordac\nfrom .check_converter_required_files import check_required_files\n\n\nclass BaseLoader():\n \"\"\"Base loader class for individual loaders\"\"\"\n\n def __init__(self, BASE_PATH='', load_log=[]):\n if load_log:\n self.load_log = load_log\n else:\n self.load_log = list(load_log)\n self.BASE_PATH = BASE_PATH\n self.setup_md_to_html_converter()\n\n def setup_md_to_html_converter(self):\n \"\"\"Create Kordac converter with custom processors, html templates,\n and extensions.\n \"\"\"\n templates = dict()\n templates['scratch'] = '<div><object data=\"{% autoescape false -%}{{ \"{% get_static_prefix %}\" }}img/scratch-blocks-{{ hash }}.svg{%- endautoescape %}\" type=\"image/svg+xml\" /></div>' # noqa: E501 Fixed in #77\n templates['iframe'] = '<iframe allowtransparency=\"true\" width=\"485\" height=\"402\" src=\"{{ link }}\" frameborder=\"0\" allowfullscreen=\"true\"></iframe>' # noqa: E501 Fixed in #77\n templates['heading'] = '<{{ heading_type }} id=\"{{ title_slug }}\">{{ title }}</{{ heading_type }}>' # noqa: E501 Fixed in #77\n extensions = [\n 'markdown.extensions.fenced_code',\n 'markdown.extensions.codehilite',\n 'markdown.extensions.sane_lists',\n 
'markdown.extensions.tables',\n mdx_math.MathExtension(enable_dollar_delimiter=True)\n ]\n self.converter = Kordac(html_templates=templates, extensions=extensions)\n custom_processors = self.converter.processor_defaults()\n custom_processors.add('remove-title')\n self.converter.update_processors(custom_processors)\n\n def convert_md_file(self, md_file_path):\n \"\"\"Returns the Kordac object for a given Markdown file\n\n Args:\n file_path: location of md file to convert\n\n Returns:\n Kordac result object\n \"\"\"\n content = open(md_file_path, encoding='UTF-8').read()\n result = self.converter.convert(content)\n check_required_files(result.required_files)\n return result\n\n def log(self, log_message, indent_amount=0):\n \"\"\"Adds the log message to the load log with the specified indent\"\"\"\n self.load_log.append((log_message, indent_amount))\n\n def print_load_log(self):\n \"\"\"Output log messages from loader to console\"\"\"\n for (log, indent_amount) in self.load_log:\n indent = ' ' * indent_amount\n sys.stdout.write('{indent}{text}\\n'.format(indent=indent, text=log))\n sys.stdout.write('\\n')\n self.load_log = []\n\n def load_yaml_file(self, yaml_file_path):\n \"\"\"Loads and reads yaml file\n\n Args:\n file_path: location of yaml file to read\n\n Returns:\n Either list or string, depending on structure of given yaml file\n \"\"\"\n yaml_file = open(yaml_file_path, encoding='UTF-8').read()\n return yaml.load(yaml_file)\n\n @abc.abstractmethod\n def load(self):\n raise NotImplementedError('subclass does not implement this method')\n", "path": "csunplugged/utils/BaseLoader.py"}]} | 1,477 | 502 |
gh_patches_debug_2679 | rasdani/github-patches | git_diff | TileDB-Inc__TileDB-Py-501 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Four components should be three components?
In the recently created example "writing_dense_rgb.py" there is this fragment:
https://github.com/TileDB-Inc/TileDB-Py/blob/75ddcf56ed80ba5e1a1237b7e527ec4fbd87abb9/examples/writing_dense_rgb.py#L56-L57
It says four int32 components where it seems like it should be three int32 components. After all, the values of the attribute are RGB and not RGBA.
</issue>
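For reference, the component count of a NumPy structured dtype can be checked directly. This is a minimal, illustrative sketch that is independent of TileDB and not part of the example file below:

```python
import numpy as np

# The structured dtype used in the example: one int32 field per colour channel.
rgb_dtype = np.dtype("i4, i4, i4")
print(len(rgb_dtype.names))  # 3 -> three components (R, G, B), not four
```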
<code>
[start of examples/writing_dense_rgb.py]
1 # writing_dense_rgb.py
2 #
3 # LICENSE
4 #
5 # The MIT License
6 #
7 # Copyright (c) 2021 TileDB, Inc.
8 #
9 # Permission is hereby granted, free of charge, to any person obtaining a copy
10 # of this software and associated documentation files (the "Software"), to deal
11 # in the Software without restriction, including without limitation the rights
12 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
13 # copies of the Software, and to permit persons to whom the Software is
14 # furnished to do so, subject to the following conditions:
15 #
16 # The above copyright notice and this permission notice shall be included in
17 # all copies or substantial portions of the Software.
18 #
19 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
20 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
21 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
22 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
23 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
24 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
25 # THE SOFTWARE.
26 #
27 # DESCRIPTION
28 #
29 # Please see the TileDB documentation for more information:
30 # https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/writing-arrays/writing-in-dense-subarrays
31 #
32 # When run, this program will create a 2D+1 multi-component (eg RGB) dense array, write some
33 # data to it, and read the entire array data.
34
35 import tiledb, numpy as np
36
37 img_shape = (100, 224, 224)
38 img_uri = "writing_dense_rgb"
39
40 image_data = np.random.randint(low=0, high=100, size=(*img_shape, 3), dtype=np.int32)
41
42
43 def create_array():
44 domain = tiledb.Domain(
45 tiledb.Dim(
46 name="image_id", domain=(0, img_shape[0] - 1), tile=4, dtype=np.int32
47 ),
48 tiledb.Dim(
49 name="x", domain=(0, img_shape[1] - 1), tile=img_shape[1], dtype=np.int32
50 ),
51 tiledb.Dim(
52 name="y", domain=(0, img_shape[2] - 1), tile=img_shape[2], dtype=np.int32
53 ),
54 )
55
56 # create multi-component attribute with four int32 components
57 attr = tiledb.Attr(dtype=np.dtype("i4, i4, i4"))
58
59 schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])
60
61 tiledb.Array.create(img_uri, schema)
62
63 image_data_rgb = image_data.view(np.dtype("i4, i4, i4"))
64
65 with tiledb.open(img_uri, "w") as A:
66 # write data to 1st image_id slot
67 A[:] = image_data_rgb
68
69
70 def read_array():
71 with tiledb.open(img_uri) as A:
72 print(A[:].shape)
73
74
75 if __name__ == "__main__":
76 create_array()
77 read_array()
78
[end of examples/writing_dense_rgb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/writing_dense_rgb.py b/examples/writing_dense_rgb.py
--- a/examples/writing_dense_rgb.py
+++ b/examples/writing_dense_rgb.py
@@ -53,7 +53,7 @@
),
)
- # create multi-component attribute with four int32 components
+ # create multi-component attribute with three int32 components
attr = tiledb.Attr(dtype=np.dtype("i4, i4, i4"))
schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])
| {"golden_diff": "diff --git a/examples/writing_dense_rgb.py b/examples/writing_dense_rgb.py\n--- a/examples/writing_dense_rgb.py\n+++ b/examples/writing_dense_rgb.py\n@@ -53,7 +53,7 @@\n ),\n )\n \n- # create multi-component attribute with four int32 components\n+ # create multi-component attribute with three int32 components\n attr = tiledb.Attr(dtype=np.dtype(\"i4, i4, i4\"))\n \n schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])\n", "issue": "Four components should be three components?\nIn the recently created example \"writing_dense_rgb.py\" there is this fragment:\r\nhttps://github.com/TileDB-Inc/TileDB-Py/blob/75ddcf56ed80ba5e1a1237b7e527ec4fbd87abb9/examples/writing_dense_rgb.py#L56-L57\r\n\r\nIt says four int32 components where it seems like it should be three int32 components. After all the values of the attribute are RGB and not RGBA.\n", "before_files": [{"content": "# writing_dense_rgb.py\n#\n# LICENSE\n#\n# The MIT License\n#\n# Copyright (c) 2021 TileDB, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n# DESCRIPTION\n#\n# Please see the TileDB documentation for more information:\n# https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/writing-arrays/writing-in-dense-subarrays\n#\n# When run, this program will create a 2D+1 multi-component (eg RGB) dense array, write some\n# data to it, and read the entire array data.\n\nimport tiledb, numpy as np\n\nimg_shape = (100, 224, 224)\nimg_uri = \"writing_dense_rgb\"\n\nimage_data = np.random.randint(low=0, high=100, size=(*img_shape, 3), dtype=np.int32)\n\n\ndef create_array():\n domain = tiledb.Domain(\n tiledb.Dim(\n name=\"image_id\", domain=(0, img_shape[0] - 1), tile=4, dtype=np.int32\n ),\n tiledb.Dim(\n name=\"x\", domain=(0, img_shape[1] - 1), tile=img_shape[1], dtype=np.int32\n ),\n tiledb.Dim(\n name=\"y\", domain=(0, img_shape[2] - 1), tile=img_shape[2], dtype=np.int32\n ),\n )\n\n # create multi-component attribute with four int32 components\n attr = tiledb.Attr(dtype=np.dtype(\"i4, i4, i4\"))\n\n schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])\n\n tiledb.Array.create(img_uri, schema)\n\n image_data_rgb = image_data.view(np.dtype(\"i4, i4, i4\"))\n\n with tiledb.open(img_uri, \"w\") as A:\n # write data to 1st image_id slot\n A[:] = image_data_rgb\n\n\ndef read_array():\n with tiledb.open(img_uri) as A:\n print(A[:].shape)\n\n\nif __name__ == \"__main__\":\n create_array()\n read_array()\n", "path": "examples/writing_dense_rgb.py"}]} | 1,499 | 121 |
gh_patches_debug_7792 | rasdani/github-patches | git_diff | locustio__locust-401 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
requests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))
I wanted to offer this up not as an issue, but as a solution to one that I found today.
I had a test that when run on a specific server would always fail with this unhelpful message:
requests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))
The test had multiple requests to the same client within a single task and a colleague suspected it was something to do with the connection from the first request not being properly closed.
After a lot of playing around with timeouts and attempting to close out the first connection before the next one was sent (neither of which solved the issue), I found a Stack Overflow article describing the same issue:
http://stackoverflow.com/questions/30033516/single-session-multiple-post-get-in-python-requests
The quick and dirty solution was to update to requests 2.7.0. At the time of getting this error I was on 2.6.2. I also noticed that the default version for locust is on 2.4. If you are experiencing this issue, simply update to 2.7 and you should be good!
</issue>
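As a quick, illustrative sanity check (the exact minimum version a project should pin may differ from the one quoted above), the installed requests version can be inspected before deciding whether an upgrade is needed:

```python
import requests

# The reporter above saw the aborted-connection error on 2.6.x but not on 2.7.0+.
print(requests.__version__)
```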
<code>
[start of setup.py]
1 # encoding: utf-8
2
3 from setuptools import setup, find_packages, Command
4 import sys, os
5
6 version = '0.7.3'
7
8
9 class Unit2Discover(Command):
10 user_options = []
11
12 def initialize_options(self):
13 pass
14
15 def finalize_options(self):
16 pass
17
18 def run(self):
19 import sys, subprocess
20 basecmd = ['unit2', 'discover']
21 errno = subprocess.call(basecmd)
22 raise SystemExit(errno)
23
24
25 setup(
26 name='locustio',
27 version=version,
28 description="Website load testing framework",
29 long_description="""Locust is a python utility for doing easy, distributed load testing of a web site""",
30 classifiers=[
31 "Topic :: Software Development :: Testing :: Traffic Generation",
32 "Development Status :: 4 - Beta",
33 "License :: OSI Approved :: MIT License",
34 "Operating System :: OS Independent",
35 "Programming Language :: Python",
36 "Programming Language :: Python :: 2",
37 "Programming Language :: Python :: 2.6",
38 "Programming Language :: Python :: 2.7",
39 "Intended Audience :: Developers",
40 "Intended Audience :: System Administrators",
41 ],
42 keywords='',
43 author='Jonatan Heyman, Carl Bystrom, Joakim Hamrén, Hugo Heyman',
44 author_email='',
45 url='http://locust.io',
46 license='MIT',
47 packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
48 include_package_data=True,
49 zip_safe=False,
50 install_requires=["gevent==1.0.1", "flask>=0.10.1", "requests>=2.4.1", "msgpack-python>=0.4.2"],
51 tests_require=['unittest2', 'mock', 'pyzmq'],
52 entry_points={
53 'console_scripts': [
54 'locust = locust.main:main',
55 ]
56 },
57 test_suite='unittest2.collector',
58 )
59
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -47,7 +47,7 @@
packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
include_package_data=True,
zip_safe=False,
- install_requires=["gevent==1.0.1", "flask>=0.10.1", "requests>=2.4.1", "msgpack-python>=0.4.2"],
+ install_requires=["gevent==1.0.1", "flask>=0.10.1", "requests>=2.9.1", "msgpack-python>=0.4.2"],
tests_require=['unittest2', 'mock', 'pyzmq'],
entry_points={
'console_scripts': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -47,7 +47,7 @@\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n- install_requires=[\"gevent==1.0.1\", \"flask>=0.10.1\", \"requests>=2.4.1\", \"msgpack-python>=0.4.2\"],\n+ install_requires=[\"gevent==1.0.1\", \"flask>=0.10.1\", \"requests>=2.9.1\", \"msgpack-python>=0.4.2\"],\n tests_require=['unittest2', 'mock', 'pyzmq'],\n entry_points={\n 'console_scripts': [\n", "issue": "requests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))\nI wanted to offer this up not as an issue, but as a solution to one that I found today.\n\nI had a test that when run on a specific server would always fail with this unhelpful message:\nrequests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))\n\nThe test had multiple requests to the same client within a single task and a colleague suspected it was something to do with the connection from the first request not being properly closed.\n\nAfter a lot of playing around with timeouts and attempting to close out the first connection before the next one was sent (both of which did not solve the issue), I found a stackoverflow article with the same issue:\nhttp://stackoverflow.com/questions/30033516/single-session-multiple-post-get-in-python-requests\n\nThe quick and dirty solution was to update to requests 2.7.0. At the time of getting this error I was on 2.6.2. I also noticed that the default version for locust is on 2.4. If you are experiencing this issue, simply update to 2.7 and you should be good!\n\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom setuptools import setup, find_packages, Command\nimport sys, os\n\nversion = '0.7.3'\n\n\nclass Unit2Discover(Command):\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n import sys, subprocess\n basecmd = ['unit2', 'discover']\n errno = subprocess.call(basecmd)\n raise SystemExit(errno)\n\n\nsetup(\n name='locustio',\n version=version,\n description=\"Website load testing framework\",\n long_description=\"\"\"Locust is a python utility for doing easy, distributed load testing of a web site\"\"\",\n classifiers=[\n \"Topic :: Software Development :: Testing :: Traffic Generation\",\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n ],\n keywords='',\n author='Jonatan Heyman, Carl Bystrom, Joakim Hamr\u00e9n, Hugo Heyman',\n author_email='',\n url='http://locust.io',\n license='MIT',\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\"gevent==1.0.1\", \"flask>=0.10.1\", \"requests>=2.4.1\", \"msgpack-python>=0.4.2\"],\n tests_require=['unittest2', 'mock', 'pyzmq'],\n entry_points={\n 'console_scripts': [\n 'locust = locust.main:main',\n ]\n },\n test_suite='unittest2.collector',\n)\n", "path": "setup.py"}]} | 1,310 | 175 |
gh_patches_debug_35304 | rasdani/github-patches | git_diff | vaexio__vaex-757 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bool values get flipped when converting Arrow table to DataFrame
Using the latest version:
`vaex==2.6.1`
Just realised that when converting an Arrow table to a DataFrame, bool columns get flipped and converted to integers:
```python
import vaex
from pyarrow import feather
bool_array = [False, True, True, False]
pdf = pd.DataFrame({"col1": bool_array})
pdf.to_feather("test_data.feather")
arrow_table = feather.read_table("test_data.feather")
vaex.from_arrow_table(arrow_table)
```
```
# | col1
-- | --
0 | 1
1 | 0
2 | 0
3 | 1
```
</issue>
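For background, Arrow stores boolean columns in a bit-packed buffer with the least-significant bit first, so a plain byte view cannot be reinterpreted as booleans without unpacking. A minimal NumPy sketch of that packing, offered only as an illustration and unrelated to the vaex code itself:

```python
import numpy as np

values = [False, True, True, False]
packed = np.packbits(values, bitorder="little")           # a single byte: 0b00000110
print(np.unpackbits(packed, count=4, bitorder="little"))  # [0 1 1 0]
```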
<code>
[start of packages/vaex-arrow/vaex_arrow/convert.py]
1 """Convert between arrow and vaex/numpy columns/arrays without doing memory copies."""
2 import pyarrow
3 import numpy as np
4 from vaex.column import ColumnStringArrow
5
6 def arrow_array_from_numpy_array(array):
7 dtype = array.dtype
8 mask = None
9 if np.ma.isMaskedArray(array):
10 mask = array.mask
11 # arrow 0.16 behaves weird in this case https://github.com/vaexio/vaex/pull/639
12 if mask is np.False_:
13 mask = None
14 elif mask is np.True_:
15 raise ValueError('not sure what pyarrow does with mask=True')
16 array = array.data
17 if dtype.kind == 'S':
18 type = pyarrow.binary(dtype.itemsize)
19 arrow_array = pyarrow.array(array, type, mask=mask)
20 else:
21 if not dtype.isnative:
22 array = array.astype(dtype.newbyteorder('='))
23 arrow_array = pyarrow.Array.from_pandas(array, mask=mask)
24 return arrow_array
25
26 from vaex.dataframe import Column
27
28
29 def column_from_arrow_array(arrow_array):
30 arrow_type = arrow_array.type
31 buffers = arrow_array.buffers()
32 if len(buffers) == 2:
33 return numpy_array_from_arrow_array(arrow_array)
34 elif len(buffers) == 3 and isinstance(arrow_array.type, type(pyarrow.string())):
35 bitmap_buffer, offsets, string_bytes = arrow_array.buffers()
36 if arrow_array.null_count == 0:
37 null_bitmap = None # we drop any null_bitmap when there are no null counts
38 else:
39 null_bitmap = np.frombuffer(bitmap_buffer, 'uint8', len(bitmap_buffer))
40 offsets = np.frombuffer(offsets, np.int32, len(offsets)//4)
41 if string_bytes is None:
42 string_bytes = np.array([], dtype='S1')
43 else:
44 string_bytes = np.frombuffer(string_bytes, 'S1', len(string_bytes))
45 column = ColumnStringArrow(offsets, string_bytes, len(arrow_array), null_bitmap=null_bitmap)
46 return column
47 else:
48 raise TypeError('type unsupported: %r' % arrow_type)
49
50
51 def numpy_array_from_arrow_array(arrow_array):
52 arrow_type = arrow_array.type
53 buffers = arrow_array.buffers()
54 assert len(buffers) == 2
55 bitmap_buffer, data_buffer = buffers
56 if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?
57 # mimics python/pyarrow/array.pxi::Array::to_numpy
58 assert len(buffers) == 2
59 dtype = "S" + str(arrow_type.byte_width)
60 # arrow seems to do padding, check if it is all ok
61 expected_length = arrow_type.byte_width * len(arrow_array)
62 actual_length = len(buffers[-1])
63 if actual_length < expected_length:
64 raise ValueError('buffer is smaller (%d) than expected (%d)' % (actual_length, expected_length))
65 array = np.frombuffer(buffers[-1], dtype, len(arrow_array))# TODO: deal with offset ? [arrow_array.offset:arrow_array.offset + len(arrow_array)]
66 else:
67 dtype = arrow_array.type.to_pandas_dtype()
68 if np.bool_ == dtype:
69 # TODO: this will also be a copy, we probably want to support bitmasks as well
70 bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))
71 array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
72 else:
73 array = np.frombuffer(data_buffer, dtype, len(arrow_array))
74
75 if bitmap_buffer is not None:
76 bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))
77 mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
78 array = np.ma.MaskedArray(array, mask=mask)
79 return array
80
81 def numpy_mask_from_arrow_mask(bitmap, length):
82 # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md
83 # we do have to change the ordering of the bits
84 return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]
85
86
87
88 def arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):
89 """Implementation of Dataset.to_arrow_table"""
90 names = []
91 arrays = []
92 for name, array in ds.to_items(column_names=column_names, selection=selection, strings=strings, virtual=virtual):
93 names.append(name)
94 arrays.append(arrow_array_from_numpy_array(array))
95 return pyarrow.Table.from_arrays(arrays, names)
96
97 def vaex_df_from_arrow_table(table):
98 from .dataset import DatasetArrow
99 return DatasetArrow(table=table)
100
[end of packages/vaex-arrow/vaex_arrow/convert.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/packages/vaex-arrow/vaex_arrow/convert.py b/packages/vaex-arrow/vaex_arrow/convert.py
--- a/packages/vaex-arrow/vaex_arrow/convert.py
+++ b/packages/vaex-arrow/vaex_arrow/convert.py
@@ -53,6 +53,7 @@
buffers = arrow_array.buffers()
assert len(buffers) == 2
bitmap_buffer, data_buffer = buffers
+ offset = arrow_array.offset
if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?
# mimics python/pyarrow/array.pxi::Array::to_numpy
assert len(buffers) == 2
@@ -68,13 +69,13 @@
if np.bool_ == dtype:
# TODO: this will also be a copy, we probably want to support bitmasks as well
bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))
- array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
+ array = numpy_bool_from_arrow_bitmap(bitmap, len(arrow_array) + offset)[offset:]
else:
- array = np.frombuffer(data_buffer, dtype, len(arrow_array))
+ array = np.frombuffer(data_buffer, dtype, len(arrow_array) + offset)[offset:]
if bitmap_buffer is not None:
bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))
- mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))
+ mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array) + offset)[offset:]
array = np.ma.MaskedArray(array, mask=mask)
return array
@@ -83,7 +84,10 @@
# we do have to change the ordering of the bits
return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]
-
+def numpy_bool_from_arrow_bitmap(bitmap, length):
+ # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md
+ # we do have to change the ordering of the bits
+ return np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length].view(np.bool_)
def arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):
"""Implementation of Dataset.to_arrow_table"""
| {"golden_diff": "diff --git a/packages/vaex-arrow/vaex_arrow/convert.py b/packages/vaex-arrow/vaex_arrow/convert.py\n--- a/packages/vaex-arrow/vaex_arrow/convert.py\n+++ b/packages/vaex-arrow/vaex_arrow/convert.py\n@@ -53,6 +53,7 @@\n buffers = arrow_array.buffers()\n assert len(buffers) == 2\n bitmap_buffer, data_buffer = buffers\n+ offset = arrow_array.offset\n if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?\n # mimics python/pyarrow/array.pxi::Array::to_numpy\n assert len(buffers) == 2\n@@ -68,13 +69,13 @@\n if np.bool_ == dtype:\n # TODO: this will also be a copy, we probably want to support bitmasks as well\n bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))\n- array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n+ array = numpy_bool_from_arrow_bitmap(bitmap, len(arrow_array) + offset)[offset:]\n else:\n- array = np.frombuffer(data_buffer, dtype, len(arrow_array))\n+ array = np.frombuffer(data_buffer, dtype, len(arrow_array) + offset)[offset:]\n \n if bitmap_buffer is not None:\n bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))\n- mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n+ mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array) + offset)[offset:]\n array = np.ma.MaskedArray(array, mask=mask)\n return array\n \n@@ -83,7 +84,10 @@\n # we do have to change the ordering of the bits\n return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]\n \n-\n+def numpy_bool_from_arrow_bitmap(bitmap, length):\n+ # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md\n+ # we do have to change the ordering of the bits\n+ return np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length].view(np.bool_)\n \n def arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):\n \"\"\"Implementation of Dataset.to_arrow_table\"\"\"\n", "issue": "Bool values get flipped when converting Arrow table to DataFrame\nUsing the latest version:\r\n`vaex==2.6.1`\r\n\r\nJust realised that when converting an Arrow table to a DataFrame, bool columns get flipped and converted to integers:\r\n\r\n```python\r\nimport vaex\r\nfrom pyarrow import feather\r\n\r\nbool_array = [False, True, True, False]\r\npdf = pd.DataFrame({\"col1\": bool_array})\r\npdf.to_feather(\"test_data.feather\")\r\narrow_table = feather.read_table(\"test_data.feather\")\r\nvaex.from_arrow_table(arrow_table)\r\n```\r\n\r\n```\r\n# | col1\r\n-- | --\r\n0 | 1\r\n1 | 0\r\n2 | 0\r\n3 | 1\r\n```\n", "before_files": [{"content": "\"\"\"Convert between arrow and vaex/numpy columns/arrays without doing memory copies.\"\"\"\nimport pyarrow\nimport numpy as np\nfrom vaex.column import ColumnStringArrow\n\ndef arrow_array_from_numpy_array(array):\n dtype = array.dtype\n mask = None\n if np.ma.isMaskedArray(array):\n mask = array.mask\n # arrow 0.16 behaves weird in this case https://github.com/vaexio/vaex/pull/639\n if mask is np.False_:\n mask = None\n elif mask is np.True_:\n raise ValueError('not sure what pyarrow does with mask=True')\n array = array.data\n if dtype.kind == 'S':\n type = pyarrow.binary(dtype.itemsize)\n arrow_array = pyarrow.array(array, type, mask=mask)\n else:\n if not dtype.isnative:\n array = array.astype(dtype.newbyteorder('='))\n arrow_array = pyarrow.Array.from_pandas(array, mask=mask)\n return arrow_array\n\nfrom vaex.dataframe import Column\n\n\ndef column_from_arrow_array(arrow_array):\n arrow_type = arrow_array.type\n 
buffers = arrow_array.buffers()\n if len(buffers) == 2:\n return numpy_array_from_arrow_array(arrow_array)\n elif len(buffers) == 3 and isinstance(arrow_array.type, type(pyarrow.string())):\n bitmap_buffer, offsets, string_bytes = arrow_array.buffers()\n if arrow_array.null_count == 0:\n null_bitmap = None # we drop any null_bitmap when there are no null counts\n else:\n null_bitmap = np.frombuffer(bitmap_buffer, 'uint8', len(bitmap_buffer))\n offsets = np.frombuffer(offsets, np.int32, len(offsets)//4)\n if string_bytes is None:\n string_bytes = np.array([], dtype='S1')\n else:\n string_bytes = np.frombuffer(string_bytes, 'S1', len(string_bytes))\n column = ColumnStringArrow(offsets, string_bytes, len(arrow_array), null_bitmap=null_bitmap)\n return column\n else:\n raise TypeError('type unsupported: %r' % arrow_type)\n\n\ndef numpy_array_from_arrow_array(arrow_array):\n arrow_type = arrow_array.type\n buffers = arrow_array.buffers()\n assert len(buffers) == 2\n bitmap_buffer, data_buffer = buffers\n if isinstance(arrow_type, type(pyarrow.binary(1))): # todo, is there a better way to typecheck?\n # mimics python/pyarrow/array.pxi::Array::to_numpy\n assert len(buffers) == 2\n dtype = \"S\" + str(arrow_type.byte_width)\n # arrow seems to do padding, check if it is all ok\n expected_length = arrow_type.byte_width * len(arrow_array)\n actual_length = len(buffers[-1])\n if actual_length < expected_length:\n raise ValueError('buffer is smaller (%d) than expected (%d)' % (actual_length, expected_length))\n array = np.frombuffer(buffers[-1], dtype, len(arrow_array))# TODO: deal with offset ? [arrow_array.offset:arrow_array.offset + len(arrow_array)]\n else:\n dtype = arrow_array.type.to_pandas_dtype()\n if np.bool_ == dtype:\n # TODO: this will also be a copy, we probably want to support bitmasks as well\n bitmap = np.frombuffer(data_buffer, np.uint8, len(data_buffer))\n array = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n else:\n array = np.frombuffer(data_buffer, dtype, len(arrow_array))\n\n if bitmap_buffer is not None:\n bitmap = np.frombuffer(bitmap_buffer, np.uint8, len(bitmap_buffer))\n mask = numpy_mask_from_arrow_mask(bitmap, len(arrow_array))\n array = np.ma.MaskedArray(array, mask=mask)\n return array\n\ndef numpy_mask_from_arrow_mask(bitmap, length):\n # arrow uses a bitmap https://github.com/apache/arrow/blob/master/format/Layout.md\n # we do have to change the ordering of the bits\n return 1-np.unpackbits(bitmap).reshape((len(bitmap),8))[:,::-1].reshape(-1)[:length]\n\n\n\ndef arrow_table_from_vaex_df(ds, column_names=None, selection=None, strings=True, virtual=False):\n \"\"\"Implementation of Dataset.to_arrow_table\"\"\"\n names = []\n arrays = []\n for name, array in ds.to_items(column_names=column_names, selection=selection, strings=strings, virtual=virtual):\n names.append(name)\n arrays.append(arrow_array_from_numpy_array(array))\n return pyarrow.Table.from_arrays(arrays, names)\n\ndef vaex_df_from_arrow_table(table):\n from .dataset import DatasetArrow\n return DatasetArrow(table=table)\n", "path": "packages/vaex-arrow/vaex_arrow/convert.py"}]} | 1,933 | 544 |
gh_patches_debug_64467 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-3019 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
testing 2958: bplan verification mail
**URL:** mail
**user:** administration staff working via imperia
**expected behaviour:** /
**behaviour:** wording changed, see below
**important screensize:**/
**device & browser:** /
**Comment/Question:**
- cross out the word "Betreff" in e-mail-subject
- correct "Projektü**n**ersicht" to "Projektübersicht"
- can you write "Uhr" behind date and time?
- I already know that it is complicated to separate date and time via comma, I guess this hasn't changed?
- the word "identifier" shouldn't be there but I guess it is only there because you entered it into the field together with the identifier itself, right?
Screenshot?
<img width="707" alt="Bildschirmfoto 2020-07-02 um 12 25 14" src="https://user-images.githubusercontent.com/35491681/86348098-7ccdd280-bc5f-11ea-9fb7-010f71c1a1a9.png">
</issue>
<code>
[start of meinberlin/apps/bplan/signals.py]
1 from django.db.models.signals import post_save
2 from django.dispatch import receiver
3
4 from . import emails
5 from . import tasks
6 from .models import Bplan
7 from .models import Statement
8
9
10 @receiver(post_save, sender=Bplan)
11 def get_location(sender, instance, update_fields, **kwargs):
12 if instance.identifier and (not update_fields or
13 'point' not in update_fields):
14 tasks.get_location_information(instance.pk)
15
16
17 @receiver(post_save, sender=Statement)
18 def send_notification(sender, instance, created, **kwargs):
19 if created:
20 emails.OfficeWorkerNotification.send(instance)
21
22 if instance.email:
23 emails.SubmitterConfirmation.send(instance)
24
25
26 @receiver(post_save, sender=Bplan)
27 def send_update(sender, instance, update_fields, **kwargs):
28 if update_fields:
29 emails.OfficeWorkerUpdateConfirmation.send(instance)
30
[end of meinberlin/apps/bplan/signals.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/bplan/signals.py b/meinberlin/apps/bplan/signals.py
--- a/meinberlin/apps/bplan/signals.py
+++ b/meinberlin/apps/bplan/signals.py
@@ -25,5 +25,5 @@
@receiver(post_save, sender=Bplan)
def send_update(sender, instance, update_fields, **kwargs):
- if update_fields:
+ if not update_fields or 'point' not in update_fields:
emails.OfficeWorkerUpdateConfirmation.send(instance)
| {"golden_diff": "diff --git a/meinberlin/apps/bplan/signals.py b/meinberlin/apps/bplan/signals.py\n--- a/meinberlin/apps/bplan/signals.py\n+++ b/meinberlin/apps/bplan/signals.py\n@@ -25,5 +25,5 @@\n \n @receiver(post_save, sender=Bplan)\n def send_update(sender, instance, update_fields, **kwargs):\n- if update_fields:\n+ if not update_fields or 'point' not in update_fields:\n emails.OfficeWorkerUpdateConfirmation.send(instance)\n", "issue": "testing 2958: bplan verification mail\n**URL:** mail\r\n**user:** administration staff working via imperia\r\n**expected behaviour:** /\r\n**behaviour:** wording changed, see below\r\n**important screensize:**/\r\n**device & browser:** /\r\n**Comment/Question:**\r\n\r\n- cross out the word \"Betreff\" in e-mail-subject\r\n\r\n- correct \"Projekt\u00fc**n**ersicht\" to \"Projekt\u00fcbersicht\"\r\n\r\n- can you write \"Uhr\" behind date and time?\r\n\r\n- I already know that it is complicated to separate date and time via comma, I guess this hasn't changed?\r\n\r\n- the word \"identifier\" shouldn't be there but I guess it is only there because you entered it into the field together with the identifier itself, right?\r\n\r\nScreenshot?\r\n<img width=\"707\" alt=\"Bildschirmfoto 2020-07-02 um 12 25 14\" src=\"https://user-images.githubusercontent.com/35491681/86348098-7ccdd280-bc5f-11ea-9fb7-010f71c1a1a9.png\">\r\n\r\n\r\n\n", "before_files": [{"content": "from django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom . import emails\nfrom . import tasks\nfrom .models import Bplan\nfrom .models import Statement\n\n\n@receiver(post_save, sender=Bplan)\ndef get_location(sender, instance, update_fields, **kwargs):\n if instance.identifier and (not update_fields or\n 'point' not in update_fields):\n tasks.get_location_information(instance.pk)\n\n\n@receiver(post_save, sender=Statement)\ndef send_notification(sender, instance, created, **kwargs):\n if created:\n emails.OfficeWorkerNotification.send(instance)\n\n if instance.email:\n emails.SubmitterConfirmation.send(instance)\n\n\n@receiver(post_save, sender=Bplan)\ndef send_update(sender, instance, update_fields, **kwargs):\n if update_fields:\n emails.OfficeWorkerUpdateConfirmation.send(instance)\n", "path": "meinberlin/apps/bplan/signals.py"}]} | 1,042 | 118 |
gh_patches_debug_61112 | rasdani/github-patches | git_diff | pre-commit__pre-commit-933 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pre-commit autoupdate fails when config is empty
Running `pre-commit autoupdate` with an empty `.pre-commit-config.yaml` results in the below error:
```An unexpected error has occurred: IndexError: list index out of range
Traceback (most recent call last):
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/error_handler.py", line 46, in error_handler
yield
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/main.py", line 286, in main
repos=args.repos,
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/autoupdate.py", line 117, in autoupdate
migrate_config(config_file, quiet=True)
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py", line 52, in migrate_config
contents = _migrate_map(contents)
File "/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py", line 24, in _migrate_map
while _is_header_line(lines[i]):
IndexError: list index out of range
```
</issue>
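A minimal reproduction of the failure mode, assuming an empty configuration file (a hypothetical snippet, not taken from the pre-commit sources):

```python
contents = ""                       # an empty .pre-commit-config.yaml
lines = contents.splitlines(True)
print(lines)                        # [] -- there is no lines[0] to inspect
lines[0]                            # IndexError: list index out of range
```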
<code>
[start of pre_commit/commands/migrate_config.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import io
5 import re
6
7 import yaml
8 from aspy.yaml import ordered_load
9
10
11 def _indent(s):
12 lines = s.splitlines(True)
13 return ''.join(' ' * 4 + line if line.strip() else line for line in lines)
14
15
16 def _is_header_line(line):
17 return (line.startswith(('#', '---')) or not line.strip())
18
19
20 def _migrate_map(contents):
21 # Find the first non-header line
22 lines = contents.splitlines(True)
23 i = 0
24 while _is_header_line(lines[i]):
25 i += 1
26
27 header = ''.join(lines[:i])
28 rest = ''.join(lines[i:])
29
30 if isinstance(ordered_load(contents), list):
31 # If they are using the "default" flow style of yaml, this operation
32 # will yield a valid configuration
33 try:
34 trial_contents = header + 'repos:\n' + rest
35 ordered_load(trial_contents)
36 contents = trial_contents
37 except yaml.YAMLError:
38 contents = header + 'repos:\n' + _indent(rest)
39
40 return contents
41
42
43 def _migrate_sha_to_rev(contents):
44 reg = re.compile(r'(\n\s+)sha:')
45 return reg.sub(r'\1rev:', contents)
46
47
48 def migrate_config(config_file, quiet=False):
49 with io.open(config_file) as f:
50 orig_contents = contents = f.read()
51
52 contents = _migrate_map(contents)
53 contents = _migrate_sha_to_rev(contents)
54
55 if contents != orig_contents:
56 with io.open(config_file, 'w') as f:
57 f.write(contents)
58
59 print('Configuration has been migrated.')
60 elif not quiet:
61 print('Configuration is already migrated.')
62
[end of pre_commit/commands/migrate_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py
--- a/pre_commit/commands/migrate_config.py
+++ b/pre_commit/commands/migrate_config.py
@@ -21,7 +21,8 @@
# Find the first non-header line
lines = contents.splitlines(True)
i = 0
- while _is_header_line(lines[i]):
+ # Only loop on non empty configuration file
+ while i < len(lines) and _is_header_line(lines[i]):
i += 1
header = ''.join(lines[:i])
| {"golden_diff": "diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py\n--- a/pre_commit/commands/migrate_config.py\n+++ b/pre_commit/commands/migrate_config.py\n@@ -21,7 +21,8 @@\n # Find the first non-header line\n lines = contents.splitlines(True)\n i = 0\n- while _is_header_line(lines[i]):\n+ # Only loop on non empty configuration file\n+ while i < len(lines) and _is_header_line(lines[i]):\n i += 1\n \n header = ''.join(lines[:i])\n", "issue": "pre-commit autoupdate fails when config is empty\nRunning `pre-commit autoupdate` with an empty `.pre-commit-config.yaml` results in the below error:\r\n```An unexpected error has occurred: IndexError: list index out of range\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/error_handler.py\", line 46, in error_handler\r\n yield\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/main.py\", line 286, in main\r\n repos=args.repos,\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/autoupdate.py\", line 117, in autoupdate\r\n migrate_config(config_file, quiet=True)\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py\", line 52, in migrate_config\r\n contents = _migrate_map(contents)\r\n File \"/usr/local/Cellar/pre-commit/1.14.2/libexec/lib/python3.7/site-packages/pre_commit/commands/migrate_config.py\", line 24, in _migrate_map\r\n while _is_header_line(lines[i]):\r\nIndexError: list index out of range\r\n```\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport re\n\nimport yaml\nfrom aspy.yaml import ordered_load\n\n\ndef _indent(s):\n lines = s.splitlines(True)\n return ''.join(' ' * 4 + line if line.strip() else line for line in lines)\n\n\ndef _is_header_line(line):\n return (line.startswith(('#', '---')) or not line.strip())\n\n\ndef _migrate_map(contents):\n # Find the first non-header line\n lines = contents.splitlines(True)\n i = 0\n while _is_header_line(lines[i]):\n i += 1\n\n header = ''.join(lines[:i])\n rest = ''.join(lines[i:])\n\n if isinstance(ordered_load(contents), list):\n # If they are using the \"default\" flow style of yaml, this operation\n # will yield a valid configuration\n try:\n trial_contents = header + 'repos:\\n' + rest\n ordered_load(trial_contents)\n contents = trial_contents\n except yaml.YAMLError:\n contents = header + 'repos:\\n' + _indent(rest)\n\n return contents\n\n\ndef _migrate_sha_to_rev(contents):\n reg = re.compile(r'(\\n\\s+)sha:')\n return reg.sub(r'\\1rev:', contents)\n\n\ndef migrate_config(config_file, quiet=False):\n with io.open(config_file) as f:\n orig_contents = contents = f.read()\n\n contents = _migrate_map(contents)\n contents = _migrate_sha_to_rev(contents)\n\n if contents != orig_contents:\n with io.open(config_file, 'w') as f:\n f.write(contents)\n\n print('Configuration has been migrated.')\n elif not quiet:\n print('Configuration is already migrated.')\n", "path": "pre_commit/commands/migrate_config.py"}]} | 1,369 | 132 |
gh_patches_debug_8989 | rasdani/github-patches | git_diff | certbot__certbot-4248 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
returned non-string (type Error)
Hey there, I installed certbot as per the docs from letsencrypt on Debian Jessie, and certbot in manual mode returns:
certbot certonly --manual -d mydomain.com
```
An unexpected error occurred:
TypeError: __str__ returned non-string (type Error)
```
```
pip2 list
acme (0.9.3)
...
certbot (0.9.3)
cryptography (1.5.3)
...
pyOpenSSL (16.0.0)
```
Anyone seen this before and can offer a solution? Thanks
</issue>
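The error class itself is reproducible outside certbot: Python raises this TypeError whenever an object's `__str__` returns something other than a string. A stand-alone illustration with invented names, not certbot code:

```python
class Oops(Exception):
    def __str__(self):
        # Returning an exception object instead of text triggers the TypeError.
        return ValueError("not text")

try:
    str(Oops())
except TypeError as exc:
    print(exc)  # __str__ returned non-string (type ValueError)
```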
<code>
[start of acme/setup.py]
1 import sys
2
3 from setuptools import setup
4 from setuptools import find_packages
5
6
7 version = '0.12.0.dev0'
8
9 # Please update tox.ini when modifying dependency version requirements
10 install_requires = [
11 # load_pem_private/public_key (>=0.6)
12 # rsa_recover_prime_factors (>=0.8)
13 'cryptography>=0.8',
14 # Connection.set_tlsext_host_name (>=0.13)
15 'PyOpenSSL>=0.13',
16 'pyrfc3339',
17 'pytz',
18 'requests[security]>=2.4.1', # security extras added in 2.4.1
19 # For pkg_resources. >=1.0 so pip resolves it to a version cryptography
20 # will tolerate; see #2599:
21 'setuptools>=1.0',
22 'six',
23 ]
24
25 # env markers in extras_require cause problems with older pip: #517
26 # Keep in sync with conditional_requirements.py.
27 if sys.version_info < (2, 7):
28 install_requires.extend([
29 # only some distros recognize stdlib argparse as already satisfying
30 'argparse',
31 'mock<1.1.0',
32 ])
33 else:
34 install_requires.append('mock')
35
36 dev_extras = [
37 'nose',
38 'tox',
39 ]
40
41 docs_extras = [
42 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
43 'sphinx_rtd_theme',
44 ]
45
46
47 setup(
48 name='acme',
49 version=version,
50 description='ACME protocol implementation in Python',
51 url='https://github.com/letsencrypt/letsencrypt',
52 author="Certbot Project",
53 author_email='[email protected]',
54 license='Apache License 2.0',
55 classifiers=[
56 'Development Status :: 3 - Alpha',
57 'Intended Audience :: Developers',
58 'License :: OSI Approved :: Apache Software License',
59 'Programming Language :: Python',
60 'Programming Language :: Python :: 2',
61 'Programming Language :: Python :: 2.6',
62 'Programming Language :: Python :: 2.7',
63 'Programming Language :: Python :: 3',
64 'Programming Language :: Python :: 3.3',
65 'Programming Language :: Python :: 3.4',
66 'Programming Language :: Python :: 3.5',
67 'Topic :: Internet :: WWW/HTTP',
68 'Topic :: Security',
69 ],
70
71 packages=find_packages(),
72 include_package_data=True,
73 install_requires=install_requires,
74 extras_require={
75 'dev': dev_extras,
76 'docs': docs_extras,
77 },
78 entry_points={
79 'console_scripts': [
80 'jws = acme.jose.jws:CLI.run',
81 ],
82 },
83 test_suite='acme',
84 )
85
[end of acme/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/acme/setup.py b/acme/setup.py
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -15,7 +15,11 @@
'PyOpenSSL>=0.13',
'pyrfc3339',
'pytz',
- 'requests[security]>=2.4.1', # security extras added in 2.4.1
+ # requests>=2.10 is required to fix
+ # https://github.com/shazow/urllib3/issues/556. This requirement can be
+ # relaxed to 'requests[security]>=2.4.1', however, less useful errors
+ # will be raised for some network/SSL errors.
+ 'requests[security]>=2.10',
# For pkg_resources. >=1.0 so pip resolves it to a version cryptography
# will tolerate; see #2599:
'setuptools>=1.0',
| {"golden_diff": "diff --git a/acme/setup.py b/acme/setup.py\n--- a/acme/setup.py\n+++ b/acme/setup.py\n@@ -15,7 +15,11 @@\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n- 'requests[security]>=2.4.1', # security extras added in 2.4.1\n+ # requests>=2.10 is required to fix\n+ # https://github.com/shazow/urllib3/issues/556. This requirement can be\n+ # relaxed to 'requests[security]>=2.4.1', however, less useful errors\n+ # will be raised for some network/SSL errors.\n+ 'requests[security]>=2.10',\n # For pkg_resources. >=1.0 so pip resolves it to a version cryptography\n # will tolerate; see #2599:\n 'setuptools>=1.0',\n", "issue": "returned non-string (type Error)\nHey there, I installed certbot as per the doc's from letsencrypt on Debian Jessie and certbot in manual mode returns:\r\n\r\ncertbot certonly --manual -d mydomain.com\r\n\r\n```\r\nAn unexpected error occurred:\r\nTypeError: __str__ returned non-string (type Error)\r\n```\r\n\r\n```\r\npip2 list\r\nacme (0.9.3)\r\n...\r\ncertbot (0.9.3)\r\ncryptography (1.5.3)\r\n...\r\npyOpenSSL (16.0.0)\r\n```\r\n\r\nAnyone seen this before and can offer a solution? Thanks\r\n\n", "before_files": [{"content": "import sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.12.0.dev0'\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n # load_pem_private/public_key (>=0.6)\n # rsa_recover_prime_factors (>=0.8)\n 'cryptography>=0.8',\n # Connection.set_tlsext_host_name (>=0.13)\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n 'requests[security]>=2.4.1', # security extras added in 2.4.1\n # For pkg_resources. >=1.0 so pip resolves it to a version cryptography\n # will tolerate; see #2599:\n 'setuptools>=1.0',\n 'six',\n]\n\n# env markers in extras_require cause problems with older pip: #517\n# Keep in sync with conditional_requirements.py.\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.append('mock')\n\ndev_extras = [\n 'nose',\n 'tox',\n]\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n]\n\n\nsetup(\n name='acme',\n version=version,\n description='ACME protocol implementation in Python',\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n entry_points={\n 'console_scripts': [\n 'jws = acme.jose.jws:CLI.run',\n ],\n },\n test_suite='acme',\n)\n", "path": "acme/setup.py"}]} | 1,440 | 220 |
gh_patches_debug_21404 | rasdani/github-patches | git_diff | matrix-org__synapse-6578 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fatal 'Failed to upgrade database' error on startup
As of Synapse 1.7.0, when I start synapse with an old database version, I get this rather cryptic error.
</issue>
<code>
[start of synapse/storage/engines/sqlite.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015, 2016 OpenMarket Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import struct
17 import threading
18
19 from synapse.storage.prepare_database import prepare_database
20
21
22 class Sqlite3Engine(object):
23 single_threaded = True
24
25 def __init__(self, database_module, database_config):
26 self.module = database_module
27
28 # The current max state_group, or None if we haven't looked
29 # in the DB yet.
30 self._current_state_group_id = None
31 self._current_state_group_id_lock = threading.Lock()
32
33 @property
34 def can_native_upsert(self):
35 """
36 Do we support native UPSERTs? This requires SQLite3 3.24+, plus some
37 more work we haven't done yet to tell what was inserted vs updated.
38 """
39 return self.module.sqlite_version_info >= (3, 24, 0)
40
41 @property
42 def supports_tuple_comparison(self):
43 """
44 Do we support comparing tuples, i.e. `(a, b) > (c, d)`? This requires
45 SQLite 3.15+.
46 """
47 return self.module.sqlite_version_info >= (3, 15, 0)
48
49 @property
50 def supports_using_any_list(self):
51 """Do we support using `a = ANY(?)` and passing a list
52 """
53 return False
54
55 def check_database(self, txn):
56 pass
57
58 def convert_param_style(self, sql):
59 return sql
60
61 def on_new_connection(self, db_conn):
62 prepare_database(db_conn, self, config=None)
63 db_conn.create_function("rank", 1, _rank)
64
65 def is_deadlock(self, error):
66 return False
67
68 def is_connection_closed(self, conn):
69 return False
70
71 def lock_table(self, txn, table):
72 return
73
74 def get_next_state_group_id(self, txn):
75 """Returns an int that can be used as a new state_group ID
76 """
77 # We do application locking here since if we're using sqlite then
78 # we are a single process synapse.
79 with self._current_state_group_id_lock:
80 if self._current_state_group_id is None:
81 txn.execute("SELECT COALESCE(max(id), 0) FROM state_groups")
82 self._current_state_group_id = txn.fetchone()[0]
83
84 self._current_state_group_id += 1
85 return self._current_state_group_id
86
87 @property
88 def server_version(self):
89 """Gets a string giving the server version. For example: '3.22.0'
90
91 Returns:
92 string
93 """
94 return "%i.%i.%i" % self.module.sqlite_version_info
95
96
97 # Following functions taken from: https://github.com/coleifer/peewee
98
99
100 def _parse_match_info(buf):
101 bufsize = len(buf)
102 return [struct.unpack("@I", buf[i : i + 4])[0] for i in range(0, bufsize, 4)]
103
104
105 def _rank(raw_match_info):
106 """Handle match_info called w/default args 'pcx' - based on the example rank
107 function http://sqlite.org/fts3.html#appendix_a
108 """
109 match_info = _parse_match_info(raw_match_info)
110 score = 0.0
111 p, c = match_info[:2]
112 for phrase_num in range(p):
113 phrase_info_idx = 2 + (phrase_num * c * 3)
114 for col_num in range(c):
115 col_idx = phrase_info_idx + (col_num * 3)
116 x1, x2 = match_info[col_idx : col_idx + 2]
117 if x1 > 0:
118 score += float(x1) / x2
119 return score
120
[end of synapse/storage/engines/sqlite.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/synapse/storage/engines/sqlite.py b/synapse/storage/engines/sqlite.py
--- a/synapse/storage/engines/sqlite.py
+++ b/synapse/storage/engines/sqlite.py
@@ -25,6 +25,9 @@
def __init__(self, database_module, database_config):
self.module = database_module
+ database = database_config.get("args", {}).get("database")
+ self._is_in_memory = database in (None, ":memory:",)
+
# The current max state_group, or None if we haven't looked
# in the DB yet.
self._current_state_group_id = None
@@ -59,7 +62,12 @@
return sql
def on_new_connection(self, db_conn):
- prepare_database(db_conn, self, config=None)
+ if self._is_in_memory:
+ # In memory databases need to be rebuilt each time. Ideally we'd
+ # reuse the same connection as we do when starting up, but that
+ # would involve using adbapi before we have started the reactor.
+ prepare_database(db_conn, self, config=None)
+
db_conn.create_function("rank", 1, _rank)
def is_deadlock(self, error):
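Read in isolation, the guard this patch introduces boils down to the stand-alone check below (the example config dictionaries are hypothetical, inferred from the `database_config.get("args", {})` access in the diff rather than taken from Synapse documentation):

```python
def is_in_memory(database_config: dict) -> bool:
    # Mirrors the check added to Sqlite3Engine.__init__ above.
    database = database_config.get("args", {}).get("database")
    return database in (None, ":memory:")

assert is_in_memory({"name": "sqlite3", "args": {"database": ":memory:"}})
assert not is_in_memory({"name": "sqlite3", "args": {"database": "/data/homeserver.db"}})
```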
| {"golden_diff": "diff --git a/synapse/storage/engines/sqlite.py b/synapse/storage/engines/sqlite.py\n--- a/synapse/storage/engines/sqlite.py\n+++ b/synapse/storage/engines/sqlite.py\n@@ -25,6 +25,9 @@\n def __init__(self, database_module, database_config):\n self.module = database_module\n \n+ database = database_config.get(\"args\", {}).get(\"database\")\n+ self._is_in_memory = database in (None, \":memory:\",)\n+\n # The current max state_group, or None if we haven't looked\n # in the DB yet.\n self._current_state_group_id = None\n@@ -59,7 +62,12 @@\n return sql\n \n def on_new_connection(self, db_conn):\n- prepare_database(db_conn, self, config=None)\n+ if self._is_in_memory:\n+ # In memory databases need to be rebuilt each time. Ideally we'd\n+ # reuse the same connection as we do when starting up, but that\n+ # would involve using adbapi before we have started the reactor.\n+ prepare_database(db_conn, self, config=None)\n+\n db_conn.create_function(\"rank\", 1, _rank)\n \n def is_deadlock(self, error):\n", "issue": "Fatal 'Failed to upgrade database' error on startup\nAs of Synapse 1.7.0, when I start synapse with an old database version, I get this rather cryptic error.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015, 2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport struct\nimport threading\n\nfrom synapse.storage.prepare_database import prepare_database\n\n\nclass Sqlite3Engine(object):\n single_threaded = True\n\n def __init__(self, database_module, database_config):\n self.module = database_module\n\n # The current max state_group, or None if we haven't looked\n # in the DB yet.\n self._current_state_group_id = None\n self._current_state_group_id_lock = threading.Lock()\n\n @property\n def can_native_upsert(self):\n \"\"\"\n Do we support native UPSERTs? This requires SQLite3 3.24+, plus some\n more work we haven't done yet to tell what was inserted vs updated.\n \"\"\"\n return self.module.sqlite_version_info >= (3, 24, 0)\n\n @property\n def supports_tuple_comparison(self):\n \"\"\"\n Do we support comparing tuples, i.e. `(a, b) > (c, d)`? 
This requires\n SQLite 3.15+.\n \"\"\"\n return self.module.sqlite_version_info >= (3, 15, 0)\n\n @property\n def supports_using_any_list(self):\n \"\"\"Do we support using `a = ANY(?)` and passing a list\n \"\"\"\n return False\n\n def check_database(self, txn):\n pass\n\n def convert_param_style(self, sql):\n return sql\n\n def on_new_connection(self, db_conn):\n prepare_database(db_conn, self, config=None)\n db_conn.create_function(\"rank\", 1, _rank)\n\n def is_deadlock(self, error):\n return False\n\n def is_connection_closed(self, conn):\n return False\n\n def lock_table(self, txn, table):\n return\n\n def get_next_state_group_id(self, txn):\n \"\"\"Returns an int that can be used as a new state_group ID\n \"\"\"\n # We do application locking here since if we're using sqlite then\n # we are a single process synapse.\n with self._current_state_group_id_lock:\n if self._current_state_group_id is None:\n txn.execute(\"SELECT COALESCE(max(id), 0) FROM state_groups\")\n self._current_state_group_id = txn.fetchone()[0]\n\n self._current_state_group_id += 1\n return self._current_state_group_id\n\n @property\n def server_version(self):\n \"\"\"Gets a string giving the server version. For example: '3.22.0'\n\n Returns:\n string\n \"\"\"\n return \"%i.%i.%i\" % self.module.sqlite_version_info\n\n\n# Following functions taken from: https://github.com/coleifer/peewee\n\n\ndef _parse_match_info(buf):\n bufsize = len(buf)\n return [struct.unpack(\"@I\", buf[i : i + 4])[0] for i in range(0, bufsize, 4)]\n\n\ndef _rank(raw_match_info):\n \"\"\"Handle match_info called w/default args 'pcx' - based on the example rank\n function http://sqlite.org/fts3.html#appendix_a\n \"\"\"\n match_info = _parse_match_info(raw_match_info)\n score = 0.0\n p, c = match_info[:2]\n for phrase_num in range(p):\n phrase_info_idx = 2 + (phrase_num * c * 3)\n for col_num in range(c):\n col_idx = phrase_info_idx + (col_num * 3)\n x1, x2 = match_info[col_idx : col_idx + 2]\n if x1 > 0:\n score += float(x1) / x2\n return score\n", "path": "synapse/storage/engines/sqlite.py"}]} | 1,780 | 285 |
gh_patches_debug_17006 | rasdani/github-patches | git_diff | sublimelsp__LSP-1950 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Side by side option for symbol action links in hover popup doesn't work if location is in same file
**Describe the bug**
The side by side icon link for "Definition" / "Type Definition" / "Declaration" from the hover popup doesn't work if the location of the definition/declaration is in the same file.
**To Reproduce**
Steps to reproduce the behavior:
1. Have `"show_symbol_action_links": true` in the settings (this is the default value)
2. Hover over symbol (e.g. function call) which has a definition in the same file
3. Click on ◨ next to "Definition", or use <kbd>Ctrl</kbd> + click on the text link
4. See that the view scrolls to the location, instead of opening the location in a new tab to the right
**Expected behavior**
LSP should open the definition in a new tab to the right, similar to how the built-in definitions popup from ST does
**Environment (please complete the following information):**
- OS: Win 10
- LSP version: main
**Additional context**
Seems like the `flags` argument which includes the "side_by_side" information is lost/ignored here:
https://github.com/sublimelsp/LSP/blob/1bcd518102c1516c9d808c974b7d2a5eba7d0b80/plugin/core/open.py#L30-L31
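To make the suspected failure mode concrete, here is a small self-contained toy; `FakeWindow` is a made-up stand-in, not the real Sublime Text API. When the file is already open, the early return means `flags` and `group` never reach `window.open_file`, so a side-by-side request degrades into a plain focus/scroll:

```python
class FakeWindow:
    def find_open_file(self, name):
        return f"<existing view of {name}>"          # pretend the file is already open

    def open_file(self, name, flags, group):
        return f"<new view of {name}, flags={flags}, group={group}>"

def open_file(window, name, flags=0, group=-1):
    view = window.find_open_file(name)
    if view:
        return view                                   # early return: flags/group silently dropped
    return window.open_file(name, flags, group)

print(open_file(FakeWindow(), "a.py", flags=0b101, group=1))
```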
</issue>
<code>
[start of plugin/core/open.py]
1 from .logging import exception_log
2 from .promise import Promise
3 from .promise import ResolveFunc
4 from .protocol import DocumentUri
5 from .protocol import Range
6 from .protocol import RangeLsp
7 from .typing import Dict, Tuple, Optional
8 from .url import parse_uri
9 from .views import range_to_region
10 import os
11 import sublime
12 import subprocess
13
14
15 opening_files = {} # type: Dict[str, Tuple[Promise[Optional[sublime.View]], ResolveFunc[Optional[sublime.View]]]]
16
17
18 def open_file(
19 window: sublime.Window, uri: DocumentUri, flags: int = 0, group: int = -1
20 ) -> Promise[Optional[sublime.View]]:
21 """
22 Open a file asynchronously.
23 It is only safe to call this function from the UI thread.
24 The provided uri MUST be a file URI
25 """
26 file = parse_uri(uri)[1]
27 # window.open_file brings the file to focus if it's already opened, which we don't want.
28 # So we first check if there's already a view for that file.
29 view = window.find_open_file(file)
30 if view:
31 return Promise.resolve(view)
32
33 view = window.open_file(file, flags, group)
34 if not view.is_loading():
35 # It's already loaded. Possibly already open in a tab.
36 return Promise.resolve(view)
37
38 # Is the view opening right now? Then return the associated unresolved promise
39 for fn, value in opening_files.items():
40 if fn == file or os.path.samefile(fn, file):
41 # Return the unresolved promise. A future on_load event will resolve the promise.
42 return value[0]
43
44 # Prepare a new promise to be resolved by a future on_load event (see the event listener in main.py)
45 def fullfill(resolve: ResolveFunc[Optional[sublime.View]]) -> None:
46 global opening_files
47 # Save the promise in the first element of the tuple -- except we cannot yet do that here
48 opening_files[file] = (None, resolve) # type: ignore
49
50 promise = Promise(fullfill)
51 tup = opening_files[file]
52 # Save the promise in the first element of the tuple so that the for-loop above can return it
53 opening_files[file] = (promise, tup[1])
54 return promise
55
56
57 def center_selection(v: sublime.View, r: RangeLsp) -> sublime.View:
58 selection = range_to_region(Range.from_lsp(r), v)
59 v.run_command("lsp_selection_set", {"regions": [(selection.a, selection.a)]})
60 window = v.window()
61 if window:
62 window.focus_view(v)
63 if int(sublime.version()) >= 4124:
64 v.show_at_center(selection, animate=False)
65 else:
66 # TODO: remove later when a stable build lands
67 v.show_at_center(selection) # type: ignore
68 return v
69
70
71 def open_externally(uri: str, take_focus: bool) -> bool:
72 """
73 A blocking function that invokes the OS's "open with default extension"
74 """
75 try:
76 # TODO: handle take_focus
77 if sublime.platform() == "windows":
78 os.startfile(uri) # type: ignore
79 elif sublime.platform() == "osx":
80 subprocess.check_call(("/usr/bin/open", uri))
81 else: # linux
82 subprocess.check_call(("xdg-open", uri))
83 return True
84 except Exception as ex:
85 exception_log("Failed to open {}".format(uri), ex)
86 return False
87
[end of plugin/core/open.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/core/open.py b/plugin/core/open.py
--- a/plugin/core/open.py
+++ b/plugin/core/open.py
@@ -24,11 +24,15 @@
The provided uri MUST be a file URI
"""
file = parse_uri(uri)[1]
- # window.open_file brings the file to focus if it's already opened, which we don't want.
- # So we first check if there's already a view for that file.
+ # window.open_file brings the file to focus if it's already opened, which we don't want (unless it's supposed
+ # to open as a separate view).
view = window.find_open_file(file)
if view:
- return Promise.resolve(view)
+ opens_in_current_group = group == -1 or window.active_group() == group
+ opens_as_new_selection = (flags & (sublime.ADD_TO_SELECTION | sublime.REPLACE_MRU)) != 0
+ return_existing_view = opens_in_current_group and not opens_as_new_selection
+ if return_existing_view:
+ return Promise.resolve(view)
view = window.open_file(file, flags, group)
if not view.is_loading():
| {"golden_diff": "diff --git a/plugin/core/open.py b/plugin/core/open.py\n--- a/plugin/core/open.py\n+++ b/plugin/core/open.py\n@@ -24,11 +24,15 @@\n The provided uri MUST be a file URI\n \"\"\"\n file = parse_uri(uri)[1]\n- # window.open_file brings the file to focus if it's already opened, which we don't want.\n- # So we first check if there's already a view for that file.\n+ # window.open_file brings the file to focus if it's already opened, which we don't want (unless it's supposed\n+ # to open as a separate view).\n view = window.find_open_file(file)\n if view:\n- return Promise.resolve(view)\n+ opens_in_current_group = group == -1 or window.active_group() == group\n+ opens_as_new_selection = (flags & (sublime.ADD_TO_SELECTION | sublime.REPLACE_MRU)) != 0\n+ return_existing_view = opens_in_current_group and not opens_as_new_selection\n+ if return_existing_view:\n+ return Promise.resolve(view)\n \n view = window.open_file(file, flags, group)\n if not view.is_loading():\n", "issue": "Side by side option for symbol action links in hover popup doesn't work if location is in same file\n**Describe the bug**\r\nThe side by side icon link for \"Definition\" / \"Type Definition\" / \"Declaration\" from the hover popup doesn't work if the location of the definition/declaration is in the same file.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Have `\"show_symbol_action_links\": true` in the settings (this is the default value)\r\n2. Hover over symbol (e.g. function call) which has a definition in the same file\r\n3. Click on \u25e8 next to \"Definition\", or use <kbd>Ctrl</kbd> + click on the text link\r\n4. See that the view scrolls to the location, instead of opening the location in a new tab to the right\r\n\r\n**Expected behavior**\r\nLSP should open the definition in a new new to the right, similar to how the built-in definitions popup from ST does\r\n\r\n**Environment (please complete the following information):**\r\n- OS: Win 10\r\n- LSP version: main\r\n\r\n**Additional context**\r\n\r\nSeems like the `flags` argument which includes the \"side_by_side\" information is lost/ignored here:\r\nhttps://github.com/sublimelsp/LSP/blob/1bcd518102c1516c9d808c974b7d2a5eba7d0b80/plugin/core/open.py#L30-L31\n", "before_files": [{"content": "from .logging import exception_log\nfrom .promise import Promise\nfrom .promise import ResolveFunc\nfrom .protocol import DocumentUri\nfrom .protocol import Range\nfrom .protocol import RangeLsp\nfrom .typing import Dict, Tuple, Optional\nfrom .url import parse_uri\nfrom .views import range_to_region\nimport os\nimport sublime\nimport subprocess\n\n\nopening_files = {} # type: Dict[str, Tuple[Promise[Optional[sublime.View]], ResolveFunc[Optional[sublime.View]]]]\n\n\ndef open_file(\n window: sublime.Window, uri: DocumentUri, flags: int = 0, group: int = -1\n) -> Promise[Optional[sublime.View]]:\n \"\"\"\n Open a file asynchronously.\n It is only safe to call this function from the UI thread.\n The provided uri MUST be a file URI\n \"\"\"\n file = parse_uri(uri)[1]\n # window.open_file brings the file to focus if it's already opened, which we don't want.\n # So we first check if there's already a view for that file.\n view = window.find_open_file(file)\n if view:\n return Promise.resolve(view)\n\n view = window.open_file(file, flags, group)\n if not view.is_loading():\n # It's already loaded. Possibly already open in a tab.\n return Promise.resolve(view)\n\n # Is the view opening right now? 
Then return the associated unresolved promise\n for fn, value in opening_files.items():\n if fn == file or os.path.samefile(fn, file):\n # Return the unresolved promise. A future on_load event will resolve the promise.\n return value[0]\n\n # Prepare a new promise to be resolved by a future on_load event (see the event listener in main.py)\n def fullfill(resolve: ResolveFunc[Optional[sublime.View]]) -> None:\n global opening_files\n # Save the promise in the first element of the tuple -- except we cannot yet do that here\n opening_files[file] = (None, resolve) # type: ignore\n\n promise = Promise(fullfill)\n tup = opening_files[file]\n # Save the promise in the first element of the tuple so that the for-loop above can return it\n opening_files[file] = (promise, tup[1])\n return promise\n\n\ndef center_selection(v: sublime.View, r: RangeLsp) -> sublime.View:\n selection = range_to_region(Range.from_lsp(r), v)\n v.run_command(\"lsp_selection_set\", {\"regions\": [(selection.a, selection.a)]})\n window = v.window()\n if window:\n window.focus_view(v)\n if int(sublime.version()) >= 4124:\n v.show_at_center(selection, animate=False)\n else:\n # TODO: remove later when a stable build lands\n v.show_at_center(selection) # type: ignore\n return v\n\n\ndef open_externally(uri: str, take_focus: bool) -> bool:\n \"\"\"\n A blocking function that invokes the OS's \"open with default extension\"\n \"\"\"\n try:\n # TODO: handle take_focus\n if sublime.platform() == \"windows\":\n os.startfile(uri) # type: ignore\n elif sublime.platform() == \"osx\":\n subprocess.check_call((\"/usr/bin/open\", uri))\n else: # linux\n subprocess.check_call((\"xdg-open\", uri))\n return True\n except Exception as ex:\n exception_log(\"Failed to open {}\".format(uri), ex)\n return False\n", "path": "plugin/core/open.py"}]} | 1,766 | 259 |
gh_patches_debug_42600 | rasdani/github-patches | git_diff | pyro-ppl__pyro-145 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide default implementation of batch_log_pdf
Could we provide a default implementation of `batch_log_pdf` as a simple for loop?
```py
class Distribution(object):
...
def batch_log_pdf(self, x, batch_size, *args, **kwargs):
result = torch.Tensor([batch_size])
for i in range(batch_size):
result[i] = self.log_pdf(x[i], *args, **kwargs)
return torch.autograd.Variable(result) # Caller decides whether to .sum().
```
Or do we want to instead implement correct handling of `NotImplementedError`s everywhere `batch_log_pdf` is used?
Disclaimer: I don't understand what `batch_log_pdf` does, and there is no docstring.
Edited to not sum the result.
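A self-contained version of that loop, runnable as-is (two caveats: `torch.zeros(batch_size)` is assumed here, since `torch.Tensor([batch_size])` in the snippet above would allocate a one-element tensor, and the Gaussian-like `log_pdf` is only a placeholder so the example executes):

```python
import torch

class MyDistribution:
    def log_pdf(self, x):
        return -0.5 * (x * x).sum()          # placeholder density; real subclasses implement this

    def batch_log_pdf(self, x, batch_size):
        result = torch.zeros(batch_size)     # one slot per sample
        for i in range(batch_size):
            result[i] = self.log_pdf(x[i])
        return result                        # caller decides whether to .sum()

print(MyDistribution().batch_log_pdf(torch.randn(4, 3), 4))
```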
</issue>
<code>
[start of pyro/distributions/distribution.py]
1 class Distribution(object):
2 """
3 Distribution abstract base class
4 """
5
6 def __init__(self, *args, **kwargs):
7 """
8 Constructor for base distribution class.
9
10 Currently takes no explicit arguments.
11 """
12 self.reparameterized = False
13
14 def __call__(self, *args, **kwargs):
15 """
16 Samples on call
17 """
18 return self.sample(*args, **kwargs)
19
20 def sample(self, *args, **kwargs):
21 """
22 Virtual sample method.
23 """
24 raise NotImplementedError()
25
26 def log_pdf(self, x):
27 raise NotImplementedError()
28
29 def batch_log_pdf(self, x, batch_size):
30 raise NotImplementedError()
31
32 def support(self):
33 raise NotImplementedError("Support not supported for {}".format(str(type(self))))
34
35 def analytic_mean(self, *args, **kwargs):
36 """
37 Analytic mean of the distribution, to be implemented by derived classes.
38 Note that this is optional, and currently only used for testing distributions.
39 :return: Analytic mean, assuming it can be computed analytically given the distribution parameters
40 :rtype: torch.autograd.Variable.
41 """
42 raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
43
44 def analytic_var(self, *args, **kwargs):
45 """
46 Analytic variance of the distribution, to be implemented by derived classes.
47 Note that this is optional, and currently only used for testing distributions.
48 :return: Analytic variance, assuming it can be computed analytically given the distribution parameters
49 :rtype: torch.autograd.Variable.
50 """
51 raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
52
[end of pyro/distributions/distribution.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyro/distributions/distribution.py b/pyro/distributions/distribution.py
--- a/pyro/distributions/distribution.py
+++ b/pyro/distributions/distribution.py
@@ -1,6 +1,17 @@
+import torch
+
+
class Distribution(object):
"""
- Distribution abstract base class
+ Abstract base class for probability distributions.
+
+ Instances can either be constructed from a fixed parameter and called without paramters,
+ or constructed without a parameter and called with a paramter.
+ It is not allowed to specify a parameter both during construction and when calling.
+ When calling with a parameter, it is preferred to use one of the singleton instances
+ in pyro.distributions rather than constructing a new instance without a parameter.
+
+ Derived classes must implement the `sample`, and `batch_log_pdf` methods.
"""
def __init__(self, *args, **kwargs):
@@ -13,39 +24,69 @@
def __call__(self, *args, **kwargs):
"""
- Samples on call
+ Samples a random value.
+
+ :return: A random value.
+ :rtype: torch.autograd.Variable
"""
return self.sample(*args, **kwargs)
def sample(self, *args, **kwargs):
"""
- Virtual sample method.
+ Samples a random value.
+
+ :return: A random value.
+ :rtype: torch.autograd.Variable
"""
- raise NotImplementedError()
+ raise NotImplementedError
- def log_pdf(self, x):
- raise NotImplementedError()
+ def log_pdf(self, x, *args, **kwargs):
+ """
+ Evaluates total log probability density for one or a batch of samples and parameters.
- def batch_log_pdf(self, x, batch_size):
- raise NotImplementedError()
+ :param torch.autograd.Variable x: A value.
+ :return: total log probability density as a one-dimensional torch.autograd.Variable of size 1.
+ :rtype: torch.autograd.Variable
+ """
+ return torch.sum(self.batch_log_pdf(x, *args, **kwargs))
- def support(self):
- raise NotImplementedError("Support not supported for {}".format(str(type(self))))
+ def batch_log_pdf(self, x, *args, **kwargs):
+ """
+ Evaluates log probability densities for one or a batch of samples and parameters.
+
+ :param torch.autograd.Variable x: A single value or a batch of values batched along axis 0.
+ :return: log probability densities as a one-dimensional torch.autograd.Variable.
+ :rtype: torch.autograd.Variable
+ """
+ raise NotImplementedError
+
+ def support(self, *args, **kwargs):
+ """
+ Returns a representation of the distribution's support.
+
+ :return: A representation of the distribution's support.
+ :rtype: torch.Tensor
+ """
+ raise NotImplementedError("Support not implemented for {}".format(type(self)))
def analytic_mean(self, *args, **kwargs):
"""
Analytic mean of the distribution, to be implemented by derived classes.
+
Note that this is optional, and currently only used for testing distributions.
+
:return: Analytic mean, assuming it can be computed analytically given the distribution parameters
:rtype: torch.autograd.Variable.
"""
- raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
+ raise NotImplementedError("Method not implemented by the subclass {}".format(type(self)))
def analytic_var(self, *args, **kwargs):
"""
Analytic variance of the distribution, to be implemented by derived classes.
+
Note that this is optional, and currently only used for testing distributions.
+
:return: Analytic variance, assuming it can be computed analytically given the distribution parameters
:rtype: torch.autograd.Variable.
"""
- raise NotImplementedError("Method not implemented by the subclass {}".format(str(type(self))))
+ raise NotImplementedError("Method not implemented by the subclass {}".format(type(self)))
| {"golden_diff": "diff --git a/pyro/distributions/distribution.py b/pyro/distributions/distribution.py\n--- a/pyro/distributions/distribution.py\n+++ b/pyro/distributions/distribution.py\n@@ -1,6 +1,17 @@\n+import torch\n+\n+\n class Distribution(object):\n \"\"\"\n- Distribution abstract base class\n+ Abstract base class for probability distributions.\n+\n+ Instances can either be constructed from a fixed parameter and called without paramters,\n+ or constructed without a parameter and called with a paramter.\n+ It is not allowed to specify a parameter both during construction and when calling.\n+ When calling with a parameter, it is preferred to use one of the singleton instances\n+ in pyro.distributions rather than constructing a new instance without a parameter.\n+\n+ Derived classes must implement the `sample`, and `batch_log_pdf` methods.\n \"\"\"\n \n def __init__(self, *args, **kwargs):\n@@ -13,39 +24,69 @@\n \n def __call__(self, *args, **kwargs):\n \"\"\"\n- Samples on call\n+ Samples a random value.\n+\n+ :return: A random value.\n+ :rtype: torch.autograd.Variable\n \"\"\"\n return self.sample(*args, **kwargs)\n \n def sample(self, *args, **kwargs):\n \"\"\"\n- Virtual sample method.\n+ Samples a random value.\n+\n+ :return: A random value.\n+ :rtype: torch.autograd.Variable\n \"\"\"\n- raise NotImplementedError()\n+ raise NotImplementedError\n \n- def log_pdf(self, x):\n- raise NotImplementedError()\n+ def log_pdf(self, x, *args, **kwargs):\n+ \"\"\"\n+ Evaluates total log probability density for one or a batch of samples and parameters.\n \n- def batch_log_pdf(self, x, batch_size):\n- raise NotImplementedError()\n+ :param torch.autograd.Variable x: A value.\n+ :return: total log probability density as a one-dimensional torch.autograd.Variable of size 1.\n+ :rtype: torch.autograd.Variable\n+ \"\"\"\n+ return torch.sum(self.batch_log_pdf(x, *args, **kwargs))\n \n- def support(self):\n- raise NotImplementedError(\"Support not supported for {}\".format(str(type(self))))\n+ def batch_log_pdf(self, x, *args, **kwargs):\n+ \"\"\"\n+ Evaluates log probability densities for one or a batch of samples and parameters.\n+\n+ :param torch.autograd.Variable x: A single value or a batch of values batched along axis 0.\n+ :return: log probability densities as a one-dimensional torch.autograd.Variable.\n+ :rtype: torch.autograd.Variable\n+ \"\"\"\n+ raise NotImplementedError\n+\n+ def support(self, *args, **kwargs):\n+ \"\"\"\n+ Returns a representation of the distribution's support.\n+\n+ :return: A representation of the distribution's support.\n+ :rtype: torch.Tensor\n+ \"\"\"\n+ raise NotImplementedError(\"Support not implemented for {}\".format(type(self)))\n \n def analytic_mean(self, *args, **kwargs):\n \"\"\"\n Analytic mean of the distribution, to be implemented by derived classes.\n+\n Note that this is optional, and currently only used for testing distributions.\n+\n :return: Analytic mean, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n- raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n+ raise NotImplementedError(\"Method not implemented by the subclass {}\".format(type(self)))\n \n def analytic_var(self, *args, **kwargs):\n \"\"\"\n Analytic variance of the distribution, to be implemented by derived classes.\n+\n Note that this is optional, and currently only used for testing distributions.\n+\n :return: Analytic variance, assuming it can be computed 
analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n- raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n+ raise NotImplementedError(\"Method not implemented by the subclass {}\".format(type(self)))\n", "issue": "Provide default implementation of batch_log_pdf\nCould we provide a default implementation of `batch_log_pdf` as a simple for loop?\r\n```py\r\nclass Distribution(object):\r\n ...\r\n def batch_log_pdf(self, x, batch_size, *args, **kwargs):\r\n result = torch.Tensor([batch_size])\r\n for i in range(batch_size):\r\n result[i] = self.log_pdf(x[i], *args, **kwargs)\r\n return torch.autograd.Variable(result) # Caller decides whether to .sum().\r\n```\r\nOr do we want to instead implement correct handling of `NotImplementedError`s everywhere `batch_log_pdf` is used?\r\n\r\nDisclaimer: I don't understand what `batch_log_pdf` does, and there is no docstring.\r\n\r\nEdited to not sum the result.\n", "before_files": [{"content": "class Distribution(object):\n \"\"\"\n Distribution abstract base class\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Constructor for base distribution class.\n\n Currently takes no explicit arguments.\n \"\"\"\n self.reparameterized = False\n\n def __call__(self, *args, **kwargs):\n \"\"\"\n Samples on call\n \"\"\"\n return self.sample(*args, **kwargs)\n\n def sample(self, *args, **kwargs):\n \"\"\"\n Virtual sample method.\n \"\"\"\n raise NotImplementedError()\n\n def log_pdf(self, x):\n raise NotImplementedError()\n\n def batch_log_pdf(self, x, batch_size):\n raise NotImplementedError()\n\n def support(self):\n raise NotImplementedError(\"Support not supported for {}\".format(str(type(self))))\n\n def analytic_mean(self, *args, **kwargs):\n \"\"\"\n Analytic mean of the distribution, to be implemented by derived classes.\n Note that this is optional, and currently only used for testing distributions.\n :return: Analytic mean, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n\n def analytic_var(self, *args, **kwargs):\n \"\"\"\n Analytic variance of the distribution, to be implemented by derived classes.\n Note that this is optional, and currently only used for testing distributions.\n :return: Analytic variance, assuming it can be computed analytically given the distribution parameters\n :rtype: torch.autograd.Variable.\n \"\"\"\n raise NotImplementedError(\"Method not implemented by the subclass {}\".format(str(type(self))))\n", "path": "pyro/distributions/distribution.py"}]} | 1,144 | 877 |
gh_patches_debug_32842 | rasdani/github-patches | git_diff | TheAlgorithms__Python-9069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delete base85 algorithm
### Describe your change:
Re #6216
Normally, I'm not in favour of just deleting algorithms, but I would make the argument that this is not an algorithm, rather just a snippet of code that utilises another library.
Per `CONTRIBUTING.md`:
> Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values
This `base85` algorithm has essentially got two lines of code that purely utilise a singular library. The doctests only test an external library
This repository should not contain examples of how to use a certain library; that would be the library documentation, here:
https://docs.python.org/3/library/base64.html
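For reference, the two standard-library calls being wrapped can be exercised directly; the expected byte strings below are the ones from the doctests in `ciphers/base85.py` further down:

```python
import base64

assert base64.a85encode("12345".encode("utf-8")) == b"0etOA2#"
assert base64.a85decode(b"@UX=h+?24").decode("utf-8") == "base 85"
print("base64.a85encode/a85decode already cover everything the module does")
```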
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Delete an algorithm
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
</issue>
<code>
[start of ciphers/base85.py]
1 import base64
2
3
4 def base85_encode(string: str) -> bytes:
5 """
6 >>> base85_encode("")
7 b''
8 >>> base85_encode("12345")
9 b'0etOA2#'
10 >>> base85_encode("base 85")
11 b'@UX=h+?24'
12 """
13 # encoded the input to a bytes-like object and then a85encode that
14 return base64.a85encode(string.encode("utf-8"))
15
16
17 def base85_decode(a85encoded: bytes) -> str:
18 """
19 >>> base85_decode(b"")
20 ''
21 >>> base85_decode(b"0etOA2#")
22 '12345'
23 >>> base85_decode(b"@UX=h+?24")
24 'base 85'
25 """
26 # a85decode the input into bytes and decode that into a human readable string
27 return base64.a85decode(a85encoded).decode("utf-8")
28
29
30 if __name__ == "__main__":
31 import doctest
32
33 doctest.testmod()
34
[end of ciphers/base85.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ciphers/base85.py b/ciphers/base85.py
--- a/ciphers/base85.py
+++ b/ciphers/base85.py
@@ -1,30 +1,55 @@
-import base64
+"""
+Base85 (Ascii85) encoding and decoding
+https://en.wikipedia.org/wiki/Ascii85
+"""
-def base85_encode(string: str) -> bytes:
+
+def _base10_to_85(d: int) -> str:
+ return "".join(chr(d % 85 + 33)) + _base10_to_85(d // 85) if d > 0 else ""
+
+
+def _base85_to_10(digits: list) -> int:
+ return sum(char * 85**i for i, char in enumerate(reversed(digits)))
+
+
+def ascii85_encode(data: bytes) -> bytes:
"""
- >>> base85_encode("")
+ >>> ascii85_encode(b"")
b''
- >>> base85_encode("12345")
+ >>> ascii85_encode(b"12345")
b'0etOA2#'
- >>> base85_encode("base 85")
+ >>> ascii85_encode(b"base 85")
b'@UX=h+?24'
"""
- # encoded the input to a bytes-like object and then a85encode that
- return base64.a85encode(string.encode("utf-8"))
+ binary_data = "".join(bin(ord(d))[2:].zfill(8) for d in data.decode("utf-8"))
+ null_values = (32 * ((len(binary_data) // 32) + 1) - len(binary_data)) // 8
+ binary_data = binary_data.ljust(32 * ((len(binary_data) // 32) + 1), "0")
+ b85_chunks = [int(_s, 2) for _s in map("".join, zip(*[iter(binary_data)] * 32))]
+ result = "".join(_base10_to_85(chunk)[::-1] for chunk in b85_chunks)
+ return bytes(result[:-null_values] if null_values % 4 != 0 else result, "utf-8")
-def base85_decode(a85encoded: bytes) -> str:
+def ascii85_decode(data: bytes) -> bytes:
"""
- >>> base85_decode(b"")
- ''
- >>> base85_decode(b"0etOA2#")
- '12345'
- >>> base85_decode(b"@UX=h+?24")
- 'base 85'
+ >>> ascii85_decode(b"")
+ b''
+ >>> ascii85_decode(b"0etOA2#")
+ b'12345'
+ >>> ascii85_decode(b"@UX=h+?24")
+ b'base 85'
"""
- # a85decode the input into bytes and decode that into a human readable string
- return base64.a85decode(a85encoded).decode("utf-8")
+ null_values = 5 * ((len(data) // 5) + 1) - len(data)
+ binary_data = data.decode("utf-8") + "u" * null_values
+ b85_chunks = map("".join, zip(*[iter(binary_data)] * 5))
+ b85_segments = [[ord(_s) - 33 for _s in chunk] for chunk in b85_chunks]
+ results = [bin(_base85_to_10(chunk))[2::].zfill(32) for chunk in b85_segments]
+ char_chunks = [
+ [chr(int(_s, 2)) for _s in map("".join, zip(*[iter(r)] * 8))] for r in results
+ ]
+ result = "".join("".join(char) for char in char_chunks)
+ offset = int(null_values % 5 == 0)
+ return bytes(result[: offset - null_values], "utf-8")
if __name__ == "__main__":
| {"golden_diff": "diff --git a/ciphers/base85.py b/ciphers/base85.py\n--- a/ciphers/base85.py\n+++ b/ciphers/base85.py\n@@ -1,30 +1,55 @@\n-import base64\n+\"\"\"\n+Base85 (Ascii85) encoding and decoding\n \n+https://en.wikipedia.org/wiki/Ascii85\n+\"\"\"\n \n-def base85_encode(string: str) -> bytes:\n+\n+def _base10_to_85(d: int) -> str:\n+ return \"\".join(chr(d % 85 + 33)) + _base10_to_85(d // 85) if d > 0 else \"\"\n+\n+\n+def _base85_to_10(digits: list) -> int:\n+ return sum(char * 85**i for i, char in enumerate(reversed(digits)))\n+\n+\n+def ascii85_encode(data: bytes) -> bytes:\n \"\"\"\n- >>> base85_encode(\"\")\n+ >>> ascii85_encode(b\"\")\n b''\n- >>> base85_encode(\"12345\")\n+ >>> ascii85_encode(b\"12345\")\n b'0etOA2#'\n- >>> base85_encode(\"base 85\")\n+ >>> ascii85_encode(b\"base 85\")\n b'@UX=h+?24'\n \"\"\"\n- # encoded the input to a bytes-like object and then a85encode that\n- return base64.a85encode(string.encode(\"utf-8\"))\n+ binary_data = \"\".join(bin(ord(d))[2:].zfill(8) for d in data.decode(\"utf-8\"))\n+ null_values = (32 * ((len(binary_data) // 32) + 1) - len(binary_data)) // 8\n+ binary_data = binary_data.ljust(32 * ((len(binary_data) // 32) + 1), \"0\")\n+ b85_chunks = [int(_s, 2) for _s in map(\"\".join, zip(*[iter(binary_data)] * 32))]\n+ result = \"\".join(_base10_to_85(chunk)[::-1] for chunk in b85_chunks)\n+ return bytes(result[:-null_values] if null_values % 4 != 0 else result, \"utf-8\")\n \n \n-def base85_decode(a85encoded: bytes) -> str:\n+def ascii85_decode(data: bytes) -> bytes:\n \"\"\"\n- >>> base85_decode(b\"\")\n- ''\n- >>> base85_decode(b\"0etOA2#\")\n- '12345'\n- >>> base85_decode(b\"@UX=h+?24\")\n- 'base 85'\n+ >>> ascii85_decode(b\"\")\n+ b''\n+ >>> ascii85_decode(b\"0etOA2#\")\n+ b'12345'\n+ >>> ascii85_decode(b\"@UX=h+?24\")\n+ b'base 85'\n \"\"\"\n- # a85decode the input into bytes and decode that into a human readable string\n- return base64.a85decode(a85encoded).decode(\"utf-8\")\n+ null_values = 5 * ((len(data) // 5) + 1) - len(data)\n+ binary_data = data.decode(\"utf-8\") + \"u\" * null_values\n+ b85_chunks = map(\"\".join, zip(*[iter(binary_data)] * 5))\n+ b85_segments = [[ord(_s) - 33 for _s in chunk] for chunk in b85_chunks]\n+ results = [bin(_base85_to_10(chunk))[2::].zfill(32) for chunk in b85_segments]\n+ char_chunks = [\n+ [chr(int(_s, 2)) for _s in map(\"\".join, zip(*[iter(r)] * 8))] for r in results\n+ ]\n+ result = \"\".join(\"\".join(char) for char in char_chunks)\n+ offset = int(null_values % 5 == 0)\n+ return bytes(result[: offset - null_values], \"utf-8\")\n \n \n if __name__ == \"__main__\":\n", "issue": "Delete base85 algorithm\n### Describe your change:\r\nRe #6216\r\n\r\nNormally, I'm not in favour of just deleting algorithms, but I would make the argument that this is not an algorithm, rather just a snippet of code that utilises another library.\r\n\r\nPer `CONTRIBTUING.md`\r\n> Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values\r\nThis `base85` algorithm has essentially got two lines of code that purely utilise a singular library. 
The doctests only test an external library\r\n\r\nThis repository should not contains examples on how to use a certain library, that would be the library documentation here\r\nhttps://docs.python.org/3/library/base64.html\r\n\r\n\r\n* [ ] Add an algorithm?\r\n* [ ] Fix a bug or typo in an existing algorithm?\r\n* [ ] Documentation change?\r\n* [x] Delete an algorithm\r\n\r\n### Checklist:\r\n* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).\r\n* [x] This pull request is all my own work -- I have not plagiarized.\r\n* [x] I know that pull requests will not be merged if they fail the automated tests.\r\n* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.\r\n* [x] All new Python files are placed inside an existing directory.\r\n* [x] All filenames are in all lowercase characters with no spaces or dashes.\r\n* [x] All functions and variable names follow Python naming conventions.\r\n* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).\r\n* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.\r\n* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.\r\n* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): \"Fixes #ISSUE-NUMBER\".\r\n\n", "before_files": [{"content": "import base64\n\n\ndef base85_encode(string: str) -> bytes:\n \"\"\"\n >>> base85_encode(\"\")\n b''\n >>> base85_encode(\"12345\")\n b'0etOA2#'\n >>> base85_encode(\"base 85\")\n b'@UX=h+?24'\n \"\"\"\n # encoded the input to a bytes-like object and then a85encode that\n return base64.a85encode(string.encode(\"utf-8\"))\n\n\ndef base85_decode(a85encoded: bytes) -> str:\n \"\"\"\n >>> base85_decode(b\"\")\n ''\n >>> base85_decode(b\"0etOA2#\")\n '12345'\n >>> base85_decode(b\"@UX=h+?24\")\n 'base 85'\n \"\"\"\n # a85decode the input into bytes and decode that into a human readable string\n return base64.a85decode(a85encoded).decode(\"utf-8\")\n\n\nif __name__ == \"__main__\":\n import doctest\n\n doctest.testmod()\n", "path": "ciphers/base85.py"}]} | 1,338 | 952 |
gh_patches_debug_28897 | rasdani/github-patches | git_diff | privacyidea__privacyidea-2663 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Database migration fails if the URI contains '%' signs
If the `SQLALCHEMY_DATABASE_URI` contains query parameters like `ssl_ca=/path/to/cert` the path separators will be url-encoded with `%` signs.
This fails when passing the URI to the alembic configuration (https://alembic.sqlalchemy.org/en/latest/api/config.html#alembic.config.Config.set_main_option).
The `%` signs should be escaped in the URI string before passing it to alembic.
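A minimal sketch of that escaping (the connection URI is invented for illustration; `set_main_option()` hands the value to a ConfigParser-style layer in which a literal `%` has to be doubled):

```python
url = "mysql+pymysql://pi:secret@db/pi?ssl_ca=%2Fetc%2Fssl%2Fcerts%2Fca.pem"
escaped = url.replace("%", "%%")   # "%2F" becomes "%%2F", which the config layer reads back as "%2F"
print(escaped)
```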
</issue>
<code>
[start of migrations/env.py]
1 from __future__ import with_statement
2 from alembic import context
3 from sqlalchemy import engine_from_config, pool
4 from sqlalchemy.engine.url import make_url
5 from logging.config import fileConfig
6
7 # this is the Alembic Config object, which provides
8 # access to the values within the .ini file in use.
9
10 config = context.config
11
12 # Interpret the config file for Python logging.
13 # This line sets up loggers basically.
14 fileConfig(config.config_file_name)
15
16 # add your model's MetaData object here
17 # for 'autogenerate' support
18 # from myapp import mymodel
19 # target_metadata = mymodel.Base.metadata
20 from flask import current_app
21
22
23 def set_database_url(config):
24 url = current_app.config.get('SQLALCHEMY_DATABASE_URI')
25 try:
26 # In case of MySQL, add ``charset=utf8`` to the parameters (if no charset is set),
27 # because this is what Flask-SQLAlchemy does
28 if url.startswith("mysql"):
29 parsed_url = make_url(url)
30 parsed_url.query.setdefault("charset", "utf8")
31 url = str(parsed_url)
32 except Exception as exx:
33 print(u"Attempted to set charset=utf8 on connection, but failed: {}".format(exx))
34 config.set_main_option('sqlalchemy.url', url)
35
36
37 set_database_url(config)
38 target_metadata = current_app.extensions['migrate'].db.metadata
39
40 # other values from the config, defined by the needs of env.py,
41 # can be acquired:
42 # my_important_option = config.get_main_option("my_important_option")
43 # ... etc.
44
45
46 def run_migrations_offline():
47 """Run migrations in 'offline' mode.
48
49 This configures the context with just a URL
50 and not an Engine, though an Engine is acceptable
51 here as well. By skipping the Engine creation
52 we don't even need a DBAPI to be available.
53
54 Calls to context.execute() here emit the given string to the
55 script output.
56
57 """
58 url = config.get_main_option("sqlalchemy.url")
59 context.configure(url=url)
60
61 with context.begin_transaction():
62 context.run_migrations()
63
64
65 def run_migrations_online():
66 """Run migrations in 'online' mode.
67
68 In this scenario we need to create an Engine
69 and associate a connection with the context.
70
71 """
72 # FIX for Postgres updates
73 url = config.get_section(config.config_ini_section).get("sqlalchemy.url")
74 driver = url.split(":")[0]
75
76 if driver == "postgresql+psycopg2":
77 engine = engine_from_config(
78 config.get_section(config.config_ini_section),
79 prefix='sqlalchemy.',
80 isolation_level="AUTOCOMMIT",
81 poolclass=pool.NullPool)
82 else:
83 engine = engine_from_config(
84 config.get_section(config.config_ini_section),
85 prefix='sqlalchemy.',
86 poolclass=pool.NullPool)
87
88 connection = engine.connect()
89 context.configure(
90 connection=connection,
91 target_metadata=target_metadata,
92 compare_type=True
93 )
94
95 try:
96 with context.begin_transaction():
97 context.run_migrations()
98 finally:
99 connection.close()
100
101 if context.is_offline_mode():
102 print("Running offline")
103 run_migrations_offline()
104 else:
105 print("Running online")
106 run_migrations_online()
107
108
[end of migrations/env.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/migrations/env.py b/migrations/env.py
--- a/migrations/env.py
+++ b/migrations/env.py
@@ -3,6 +3,7 @@
from sqlalchemy import engine_from_config, pool
from sqlalchemy.engine.url import make_url
from logging.config import fileConfig
+from six.moves.urllib.parse import quote
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
@@ -28,10 +29,13 @@
if url.startswith("mysql"):
parsed_url = make_url(url)
parsed_url.query.setdefault("charset", "utf8")
+ # We need to quote the password in case it contains special chars
+ parsed_url.password = quote(parsed_url.password)
url = str(parsed_url)
except Exception as exx:
print(u"Attempted to set charset=utf8 on connection, but failed: {}".format(exx))
- config.set_main_option('sqlalchemy.url', url)
+ # set_main_option() requires escaped "%" signs in the string
+ config.set_main_option('sqlalchemy.url', url.replace('%', '%%'))
set_database_url(config)
@@ -98,10 +102,10 @@
finally:
connection.close()
+
if context.is_offline_mode():
print("Running offline")
run_migrations_offline()
else:
print("Running online")
run_migrations_online()
-
| {"golden_diff": "diff --git a/migrations/env.py b/migrations/env.py\n--- a/migrations/env.py\n+++ b/migrations/env.py\n@@ -3,6 +3,7 @@\n from sqlalchemy import engine_from_config, pool\n from sqlalchemy.engine.url import make_url\n from logging.config import fileConfig\n+from six.moves.urllib.parse import quote\n \n # this is the Alembic Config object, which provides\n # access to the values within the .ini file in use.\n@@ -28,10 +29,13 @@\n if url.startswith(\"mysql\"):\n parsed_url = make_url(url)\n parsed_url.query.setdefault(\"charset\", \"utf8\")\n+ # We need to quote the password in case it contains special chars\n+ parsed_url.password = quote(parsed_url.password)\n url = str(parsed_url)\n except Exception as exx:\n print(u\"Attempted to set charset=utf8 on connection, but failed: {}\".format(exx))\n- config.set_main_option('sqlalchemy.url', url)\n+ # set_main_option() requires escaped \"%\" signs in the string\n+ config.set_main_option('sqlalchemy.url', url.replace('%', '%%'))\n \n \n set_database_url(config)\n@@ -98,10 +102,10 @@\n finally:\n connection.close()\n \n+\n if context.is_offline_mode():\n print(\"Running offline\")\n run_migrations_offline()\n else:\n print(\"Running online\")\n run_migrations_online()\n-\n", "issue": "Database migration fails if the URI contains '%' signs\nIf the `SQLALCHEMY_DATABASE_URI` contains query parameters like `ssl_ca=/path/to/cert` the path separators will be url-encoded with `%` signs.\r\nThis fails when passing the URI to the alembic configuration (https://alembic.sqlalchemy.org/en/latest/api/config.html#alembic.config.Config.set_main_option).\r\nThe `%` signs should be escaped in the URI string before passing it to alembic.\n", "before_files": [{"content": "from __future__ import with_statement\nfrom alembic import context\nfrom sqlalchemy import engine_from_config, pool\nfrom sqlalchemy.engine.url import make_url\nfrom logging.config import fileConfig\n\n# this is the Alembic Config object, which provides\n# access to the values within the .ini file in use.\n\nconfig = context.config\n\n# Interpret the config file for Python logging.\n# This line sets up loggers basically.\nfileConfig(config.config_file_name)\n\n# add your model's MetaData object here\n# for 'autogenerate' support\n# from myapp import mymodel\n# target_metadata = mymodel.Base.metadata\nfrom flask import current_app\n\n\ndef set_database_url(config):\n url = current_app.config.get('SQLALCHEMY_DATABASE_URI')\n try:\n # In case of MySQL, add ``charset=utf8`` to the parameters (if no charset is set),\n # because this is what Flask-SQLAlchemy does\n if url.startswith(\"mysql\"):\n parsed_url = make_url(url)\n parsed_url.query.setdefault(\"charset\", \"utf8\")\n url = str(parsed_url)\n except Exception as exx:\n print(u\"Attempted to set charset=utf8 on connection, but failed: {}\".format(exx))\n config.set_main_option('sqlalchemy.url', url)\n\n\nset_database_url(config)\ntarget_metadata = current_app.extensions['migrate'].db.metadata\n\n# other values from the config, defined by the needs of env.py,\n# can be acquired:\n# my_important_option = config.get_main_option(\"my_important_option\")\n# ... etc.\n\n\ndef run_migrations_offline():\n \"\"\"Run migrations in 'offline' mode.\n\n This configures the context with just a URL\n and not an Engine, though an Engine is acceptable\n here as well. 
By skipping the Engine creation\n we don't even need a DBAPI to be available.\n\n Calls to context.execute() here emit the given string to the\n script output.\n\n \"\"\"\n url = config.get_main_option(\"sqlalchemy.url\")\n context.configure(url=url)\n\n with context.begin_transaction():\n context.run_migrations()\n\n\ndef run_migrations_online():\n \"\"\"Run migrations in 'online' mode.\n\n In this scenario we need to create an Engine\n and associate a connection with the context.\n\n \"\"\"\n # FIX for Postgres updates\n url = config.get_section(config.config_ini_section).get(\"sqlalchemy.url\")\n driver = url.split(\":\")[0]\n\n if driver == \"postgresql+psycopg2\":\n engine = engine_from_config(\n config.get_section(config.config_ini_section),\n prefix='sqlalchemy.',\n isolation_level=\"AUTOCOMMIT\",\n poolclass=pool.NullPool)\n else:\n engine = engine_from_config(\n config.get_section(config.config_ini_section),\n prefix='sqlalchemy.',\n poolclass=pool.NullPool)\n\n connection = engine.connect()\n context.configure(\n connection=connection,\n target_metadata=target_metadata,\n compare_type=True\n )\n\n try:\n with context.begin_transaction():\n context.run_migrations()\n finally:\n connection.close()\n\nif context.is_offline_mode():\n print(\"Running offline\")\n run_migrations_offline()\nelse:\n print(\"Running online\")\n run_migrations_online()\n\n", "path": "migrations/env.py"}]} | 1,551 | 314 |
gh_patches_debug_11387 | rasdani/github-patches | git_diff | encode__uvicorn-592 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Uvicorn via gunicorn worker doesn't respect `--forwarded-allow-ips`
I use uvicorn in docker as uvicorn-worker for gunicorn for my fastapi app. My application needs to know the real client IP of each request, so I use proxy-server with the `X-Forwarded-For` header.
Gunicorn has a special option to change proxy-ip to real-ip, so I'm running gunicorn like this:
```
gunicorn \
ppm_telegram_bot.api:app \
    --forwarded-allow-ips="*" \
--worker-class=uvicorn.workers.UvicornWorker \
--bind=0.0.0.0:$PORT
```
Because I'm in a container, my WSGI/ASGI server receives requests not from the localhost, but from the docker network.
But uvicorn-worker doesn't respect gunicorn's `forwarded-allow-ips`, so in `ProxyHeadersMiddleware.trusted_hosts` I receive default `127.0.0.1` and proxy-ip instead of real-ip.
https://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/middleware/proxy_headers.py#L14-L17
It looks like uvicorn-worker can forward this information to config via `config_kwargs`: https://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/workers.py#L28-L35
I could do a PR with this change, if required 🙌
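A sketch of what that forwarding could look like (`FakeGunicornConfig` and `build_config_kwargs` are hypothetical names used only for illustration; in the worker itself this would be one more entry in the `config_kwargs` dict built in `UvicornWorker.__init__`, shown below):

```python
class FakeGunicornConfig:
    forwarded_allow_ips = "*"   # what --forwarded-allow-ips="*" ends up as on the gunicorn config

def build_config_kwargs(cfg):
    # timeouts, limits and SSL options omitted; see config_kwargs in uvicorn/workers.py below
    return {"forwarded_allow_ips": cfg.forwarded_allow_ips}

print(build_config_kwargs(FakeGunicornConfig()))
```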
</issue>
<code>
[start of uvicorn/workers.py]
1 import asyncio
2 import logging
3
4 from gunicorn.workers.base import Worker
5 from uvicorn.config import Config
6 from uvicorn.main import Server
7
8
9 class UvicornWorker(Worker):
10 """
11 A worker class for Gunicorn that interfaces with an ASGI consumer callable,
12 rather than a WSGI callable.
13 """
14
15 CONFIG_KWARGS = {"loop": "uvloop", "http": "httptools"}
16
17 def __init__(self, *args, **kwargs):
18 super(UvicornWorker, self).__init__(*args, **kwargs)
19
20 logger = logging.getLogger("uvicorn.error")
21 logger.handlers = self.log.error_log.handlers
22 logger.setLevel(self.log.error_log.level)
23
24 logger = logging.getLogger("uvicorn.access")
25 logger.handlers = self.log.access_log.handlers
26 logger.setLevel(self.log.access_log.level)
27
28 config_kwargs = {
29 "app": None,
30 "log_config": None,
31 "timeout_keep_alive": self.cfg.keepalive,
32 "timeout_notify": self.timeout,
33 "callback_notify": self.callback_notify,
34 "limit_max_requests": self.max_requests,
35 }
36
37 if self.cfg.is_ssl:
38 ssl_kwargs = {
39 "ssl_keyfile": self.cfg.ssl_options.get("keyfile"),
40 "ssl_certfile": self.cfg.ssl_options.get("certfile"),
41 "ssl_version": self.cfg.ssl_options.get("ssl_version"),
42 "ssl_cert_reqs": self.cfg.ssl_options.get("cert_reqs"),
43 "ssl_ca_certs": self.cfg.ssl_options.get("ca_certs"),
44 "ssl_ciphers": self.cfg.ssl_options.get("ciphers"),
45 }
46 config_kwargs.update(ssl_kwargs)
47
48 if self.cfg.settings["backlog"].value:
49 config_kwargs["backlog"] = self.cfg.settings["backlog"].value
50
51 config_kwargs.update(self.CONFIG_KWARGS)
52
53 self.config = Config(**config_kwargs)
54
55 def init_process(self):
56 self.config.setup_event_loop()
57 super(UvicornWorker, self).init_process()
58
59 def init_signals(self):
60 pass
61
62 def run(self):
63 self.config.app = self.wsgi
64 server = Server(config=self.config)
65 loop = asyncio.get_event_loop()
66 loop.run_until_complete(server.serve(sockets=self.sockets))
67
68 async def callback_notify(self):
69 self.notify()
70
71
72 class UvicornH11Worker(UvicornWorker):
73 CONFIG_KWARGS = {"loop": "asyncio", "http": "h11"}
74
[end of uvicorn/workers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/uvicorn/workers.py b/uvicorn/workers.py
--- a/uvicorn/workers.py
+++ b/uvicorn/workers.py
@@ -2,6 +2,7 @@
import logging
from gunicorn.workers.base import Worker
+
from uvicorn.config import Config
from uvicorn.main import Server
@@ -32,6 +33,7 @@
"timeout_notify": self.timeout,
"callback_notify": self.callback_notify,
"limit_max_requests": self.max_requests,
+ "forwarded_allow_ips": self.cfg.forwarded_allow_ips,
}
if self.cfg.is_ssl:
| {"golden_diff": "diff --git a/uvicorn/workers.py b/uvicorn/workers.py\n--- a/uvicorn/workers.py\n+++ b/uvicorn/workers.py\n@@ -2,6 +2,7 @@\n import logging\n \n from gunicorn.workers.base import Worker\n+\n from uvicorn.config import Config\n from uvicorn.main import Server\n \n@@ -32,6 +33,7 @@\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n+ \"forwarded_allow_ips\": self.cfg.forwarded_allow_ips,\n }\n \n if self.cfg.is_ssl:\n", "issue": "Uvicorn via gunicorn worker doesn't respect `--forwarded-allow-ips`\nI use uvicorn in docker as uvicorn-worker for gunicorn for my fastapi app. My application needs to know the real client IP of each request, so I use proxy-server with the `X-Forwarded-For` header.\r\n\r\nGunicorn has a special option to change proxy-ip to real-ip, so I running gunicorn like this:\r\n```\r\ngunicorn \\\r\n ppm_telegram_bot.api:app \\\r\n --forwarded-allow-ips=\"*\" \r\n --worker-class=uvicorn.workers.UvicornWorker \\\r\n --bind=0.0.0.0:$PORT\r\n```\r\n\r\nBecause I'm in a container, my WSGI/ASGI server receives requests not from the localhost, but from the docker network.\r\n\r\nBut uvicorn-worker doesn't respect gunicorn's `forwarded-allow-ips`, so in `ProxyHeadersMiddleware.trusted_hosts` I receive default `127.0.0.1` and proxy-ip instead of real-ip.\r\nhttps://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/middleware/proxy_headers.py#L14-L17\r\n\r\nIt looks like uvicorn-worker can forward this information to config via `config_kwargs`: https://github.com/encode/uvicorn/blob/9d9f8820a8155e36dcb5e4d4023f470e51aa4e03/uvicorn/workers.py#L28-L35\r\n\r\nI could do PR with this change, if required \ud83d\ude4c \n", "before_files": [{"content": "import asyncio\nimport logging\n\nfrom gunicorn.workers.base import Worker\nfrom uvicorn.config import Config\nfrom uvicorn.main import Server\n\n\nclass UvicornWorker(Worker):\n \"\"\"\n A worker class for Gunicorn that interfaces with an ASGI consumer callable,\n rather than a WSGI callable.\n \"\"\"\n\n CONFIG_KWARGS = {\"loop\": \"uvloop\", \"http\": \"httptools\"}\n\n def __init__(self, *args, **kwargs):\n super(UvicornWorker, self).__init__(*args, **kwargs)\n\n logger = logging.getLogger(\"uvicorn.error\")\n logger.handlers = self.log.error_log.handlers\n logger.setLevel(self.log.error_log.level)\n\n logger = logging.getLogger(\"uvicorn.access\")\n logger.handlers = self.log.access_log.handlers\n logger.setLevel(self.log.access_log.level)\n\n config_kwargs = {\n \"app\": None,\n \"log_config\": None,\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n }\n\n if self.cfg.is_ssl:\n ssl_kwargs = {\n \"ssl_keyfile\": self.cfg.ssl_options.get(\"keyfile\"),\n \"ssl_certfile\": self.cfg.ssl_options.get(\"certfile\"),\n \"ssl_version\": self.cfg.ssl_options.get(\"ssl_version\"),\n \"ssl_cert_reqs\": self.cfg.ssl_options.get(\"cert_reqs\"),\n \"ssl_ca_certs\": self.cfg.ssl_options.get(\"ca_certs\"),\n \"ssl_ciphers\": self.cfg.ssl_options.get(\"ciphers\"),\n }\n config_kwargs.update(ssl_kwargs)\n\n if self.cfg.settings[\"backlog\"].value:\n config_kwargs[\"backlog\"] = self.cfg.settings[\"backlog\"].value\n\n config_kwargs.update(self.CONFIG_KWARGS)\n\n self.config = Config(**config_kwargs)\n\n def init_process(self):\n self.config.setup_event_loop()\n super(UvicornWorker, self).init_process()\n\n def 
init_signals(self):\n pass\n\n def run(self):\n self.config.app = self.wsgi\n server = Server(config=self.config)\n loop = asyncio.get_event_loop()\n loop.run_until_complete(server.serve(sockets=self.sockets))\n\n async def callback_notify(self):\n self.notify()\n\n\nclass UvicornH11Worker(UvicornWorker):\n CONFIG_KWARGS = {\"loop\": \"asyncio\", \"http\": \"h11\"}\n", "path": "uvicorn/workers.py"}]} | 1,593 | 138 |
gh_patches_debug_1288 | rasdani/github-patches | git_diff | archlinux__archinstall-555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version Bump in conf.py?
https://github.com/archlinux/archinstall/blob/a4033a7d3a94916f2b4972d212f9d0069fca39cd/docs/conf.py#L44
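For reference, the linked line is the manually pinned release string (it also appears at line 44 of the listing below), so the change is a one-line edit; the target version is not stated here:
```python
# docs/conf.py, line 44 of the linked revision
release = 'v2.1.0'  # presumably needs bumping to match the current release
```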
</issue>
<code>
[start of docs/conf.py]
1 import os
2 import re
3 import sys
4
5 sys.path.insert(0, os.path.abspath('..'))
6
7
8 def process_docstring(app, what, name, obj, options, lines):
9 spaces_pat = re.compile(r"( {8})")
10 ll = []
11 for line in lines:
12 ll.append(spaces_pat.sub(" ", line))
13 lines[:] = ll
14
15
16 def setup(app):
17 app.connect('autodoc-process-docstring', process_docstring)
18
19
20 # Configuration file for the Sphinx documentation builder.
21 #
22 # This file only contains a selection of the most common options. For a full
23 # list see the documentation:
24 # https://www.sphinx-doc.org/en/master/usage/configuration.html
25
26 # -- Path setup --------------------------------------------------------------
27
28 # If extensions (or modules to document with autodoc) are in another directory,
29 # add these directories to sys.path here. If the directory is relative to the
30 # documentation root, use os.path.abspath to make it absolute, like shown here.
31 #
32 # import os
33 # import sys
34 # sys.path.insert(0, os.path.abspath('.'))
35
36
37 # -- Project information -----------------------------------------------------
38
39 project = 'python-archinstall'
40 copyright = '2020, Anton Hvornum'
41 author = 'Anton Hvornum'
42
43 # The full version, including alpha/beta/rc tags
44 release = 'v2.1.0'
45
46 # -- General configuration ---------------------------------------------------
47
48 master_doc = 'index'
49 # Add any Sphinx extension module names here, as strings. They can be
50 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
51 # ones.
52 extensions = [
53 'sphinx.ext.autodoc',
54 'sphinx.ext.inheritance_diagram',
55 'sphinx.ext.todo'
56 ]
57
58 # Add any paths that contain templates here, relative to this directory.
59 templates_path = ['_templates']
60
61 # List of patterns, relative to source directory, that match files and
62 # directories to ignore when looking for source files.
63 # This pattern also affects html_static_path and html_extra_path.
64 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
65
66 # -- Options for HTML output -------------------------------------------------
67
68 # The theme to use for HTML and HTML Help pages. See the documentation for
69 # a list of builtin themes.
70 #
71 # html_theme = 'alabaster'
72 html_theme = 'sphinx_rtd_theme'
73
74 html_logo = "_static/logo.png"
75
76 # Add any paths that contain custom static files (such as style sheets) here,
77 # relative to this directory. They are copied after the builtin static files,
78 # so a file named "default.css" will overwrite the builtin "default.css".
79 html_static_path = ['_static']
80
81 # If false, no module index is generated.
82 html_domain_indices = True
83
84 # If false, no index is generated.
85 html_use_index = True
86
87 # If true, the index is split into individual pages for each letter.
88 html_split_index = True
89
90 # If true, links to the reST sources are added to the pages.
91 html_show_sourcelink = False
92
93 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
94 # html_show_sphinx = True
95
96 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
97 # html_show_copyright = True
98
99 # If true, an OpenSearch description file will be output, and all pages will
100 # contain a <link> tag referring to it. The value of this option must be the
101 # base URL from which the finished HTML is served.
102 # html_use_opensearch = ''
103
104 # This is the file name suffix for HTML files (e.g. ".xhtml").
105 # html_file_suffix = None
106
107 # Output file base name for HTML help builder.
108 htmlhelp_basename = 'archinstalldoc'
109
110 # -- Options for manual page output --------------------------------------------
111
112 # One entry per manual page. List of tuples
113 # (source start file, name, description, authors, manual section).
114 man_pages = [("index", "archinstall", u"archinstall Documentation", [u"Anton Hvornum"], 1)]
115
116 # If true, show URL addresses after external links.
117 # man_show_urls = False
118
119
120 # -- Options for Texinfo output ------------------------------------------------
121
122 # Grouping the document tree into Texinfo files. List of tuples
123 # (source start file, target name, title, author,
124 # dir menu entry, description, category)
125 texinfo_documents = [
126 ("index", "archinstall", u"archinstall Documentation", u"Anton Hvornum", "archinstall", "Simple and minimal HTTP server."),
127 ]
128
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -41,7 +41,7 @@
author = 'Anton Hvornum'
# The full version, including alpha/beta/rc tags
-release = 'v2.1.0'
+release = 'v2.3.0.dev0'
# -- General configuration ---------------------------------------------------
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -41,7 +41,7 @@\n author = 'Anton Hvornum'\n \n # The full version, including alpha/beta/rc tags\n-release = 'v2.1.0'\n+release = 'v2.3.0.dev0'\n \n # -- General configuration ---------------------------------------------------\n", "issue": "Version Bump in conf.py?\nhttps://github.com/archlinux/archinstall/blob/a4033a7d3a94916f2b4972d212f9d0069fca39cd/docs/conf.py#L44\n", "before_files": [{"content": "import os\nimport re\nimport sys\n\nsys.path.insert(0, os.path.abspath('..'))\n\n\ndef process_docstring(app, what, name, obj, options, lines):\n\tspaces_pat = re.compile(r\"( {8})\")\n\tll = []\n\tfor line in lines:\n\t\tll.append(spaces_pat.sub(\" \", line))\n\tlines[:] = ll\n\n\ndef setup(app):\n\tapp.connect('autodoc-process-docstring', process_docstring)\n\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'python-archinstall'\ncopyright = '2020, Anton Hvornum'\nauthor = 'Anton Hvornum'\n\n# The full version, including alpha/beta/rc tags\nrelease = 'v2.1.0'\n\n# -- General configuration ---------------------------------------------------\n\nmaster_doc = 'index'\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.inheritance_diagram',\n\t'sphinx.ext.todo'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\nhtml_theme = 'sphinx_rtd_theme'\n\nhtml_logo = \"_static/logo.png\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If false, no module index is generated.\nhtml_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\nhtml_split_index = True\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. 
Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'archinstalldoc'\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(\"index\", \"archinstall\", u\"archinstall Documentation\", [u\"Anton Hvornum\"], 1)]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n\t(\"index\", \"archinstall\", u\"archinstall Documentation\", u\"Anton Hvornum\", \"archinstall\", \"Simple and minimal HTTP server.\"),\n]\n", "path": "docs/conf.py"}]} | 1,864 | 89 |
gh_patches_debug_27326 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-207 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
regression: fallback if streaming fails is disabled
This causes https://github.com/huggingface/datasets/issues/3185, for example: the fallback should have loaded the dataset in normal (non-streaming) mode.
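The regression is visible in `get_rows()` (shown below): the function accepts a `streaming` argument but always calls `load_dataset(..., streaming=True)`, so calling it with `streaming=False` still goes through the streaming path. A sketch of the intended call:
```python
dataset = load_dataset(
    dataset_name,
    name=config_name,
    split=split_name,
    streaming=streaming,  # honor the caller's flag instead of hard-coding True
    download_mode=DownloadMode.FORCE_REDOWNLOAD,
    use_auth_token=hf_token,
)
```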
</issue>
<code>
[start of src/datasets_preview_backend/config.py]
1 import os
2
3 from dotenv import load_dotenv
4
5 from datasets_preview_backend.constants import (
6 DEFAULT_APP_HOSTNAME,
7 DEFAULT_APP_PORT,
8 DEFAULT_ASSETS_DIRECTORY,
9 DEFAULT_DATASETS_ENABLE_PRIVATE,
10 DEFAULT_DATASETS_REVISION,
11 DEFAULT_HF_TOKEN,
12 DEFAULT_LOG_LEVEL,
13 DEFAULT_MAX_AGE_LONG_SECONDS,
14 DEFAULT_MAX_AGE_SHORT_SECONDS,
15 DEFAULT_MONGO_CACHE_DATABASE,
16 DEFAULT_MONGO_QUEUE_DATABASE,
17 DEFAULT_MONGO_URL,
18 DEFAULT_ROWS_MAX_BYTES,
19 DEFAULT_ROWS_MAX_NUMBER,
20 DEFAULT_ROWS_MIN_NUMBER,
21 DEFAULT_WEB_CONCURRENCY,
22 )
23 from datasets_preview_backend.utils import (
24 get_bool_value,
25 get_int_value,
26 get_str_or_none_value,
27 get_str_value,
28 )
29
30 # Load environment variables defined in .env, if any
31 load_dotenv()
32
33 APP_HOSTNAME = get_str_value(d=os.environ, key="APP_HOSTNAME", default=DEFAULT_APP_HOSTNAME)
34 APP_PORT = get_int_value(d=os.environ, key="APP_PORT", default=DEFAULT_APP_PORT)
35 ASSETS_DIRECTORY = get_str_or_none_value(d=os.environ, key="ASSETS_DIRECTORY", default=DEFAULT_ASSETS_DIRECTORY)
36 DATASETS_ENABLE_PRIVATE = get_bool_value(
37 d=os.environ, key="DATASETS_ENABLE_PRIVATE", default=DEFAULT_DATASETS_ENABLE_PRIVATE
38 )
39 DATASETS_REVISION = get_str_value(d=os.environ, key="DATASETS_REVISION", default=DEFAULT_DATASETS_REVISION)
40 HF_TOKEN = get_str_or_none_value(d=os.environ, key="HF_TOKEN", default=DEFAULT_HF_TOKEN)
41 LOG_LEVEL = get_str_value(d=os.environ, key="LOG_LEVEL", default=DEFAULT_LOG_LEVEL)
42 MAX_AGE_LONG_SECONDS = get_int_value(d=os.environ, key="MAX_AGE_LONG_SECONDS", default=DEFAULT_MAX_AGE_LONG_SECONDS)
43 MAX_AGE_SHORT_SECONDS = get_int_value(d=os.environ, key="MAX_AGE_SHORT_SECONDS", default=DEFAULT_MAX_AGE_SHORT_SECONDS)
44 MONGO_CACHE_DATABASE = get_str_value(d=os.environ, key="MONGO_CACHE_DATABASE", default=DEFAULT_MONGO_CACHE_DATABASE)
45 MONGO_QUEUE_DATABASE = get_str_value(d=os.environ, key="MONGO_QUEUE_DATABASE", default=DEFAULT_MONGO_QUEUE_DATABASE)
46 MONGO_URL = get_str_value(d=os.environ, key="MONGO_URL", default=DEFAULT_MONGO_URL)
47 WEB_CONCURRENCY = get_int_value(d=os.environ, key="WEB_CONCURRENCY", default=DEFAULT_WEB_CONCURRENCY)
48
49 # Ensure datasets library uses the expected revision for canonical datasets
50 os.environ["HF_SCRIPTS_VERSION"] = DATASETS_REVISION
51
52 # for tests - to be removed
53 ROWS_MAX_BYTES = get_int_value(d=os.environ, key="ROWS_MAX_BYTES", default=DEFAULT_ROWS_MAX_BYTES)
54 ROWS_MAX_NUMBER = get_int_value(d=os.environ, key="ROWS_MAX_NUMBER", default=DEFAULT_ROWS_MAX_NUMBER)
55 ROWS_MIN_NUMBER = get_int_value(d=os.environ, key="ROWS_MIN_NUMBER", default=DEFAULT_ROWS_MIN_NUMBER)
56
[end of src/datasets_preview_backend/config.py]
[start of src/datasets_preview_backend/models/row.py]
1 import itertools
2 import logging
3 from typing import Any, Dict, List, Optional
4
5 from datasets import Dataset, DownloadMode, IterableDataset, load_dataset
6
7 from datasets_preview_backend.constants import DEFAULT_ROWS_MAX_NUMBER
8 from datasets_preview_backend.utils import retry
9
10 logger = logging.getLogger(__name__)
11
12
13 Row = Dict[str, Any]
14
15
16 @retry(logger=logger)
17 def get_rows(
18 dataset_name: str,
19 config_name: str,
20 split_name: str,
21 hf_token: Optional[str] = None,
22 streaming: bool = True,
23 rows_max_number: Optional[int] = None,
24 ) -> List[Row]:
25 if rows_max_number is None:
26 rows_max_number = DEFAULT_ROWS_MAX_NUMBER
27 dataset = load_dataset(
28 dataset_name,
29 name=config_name,
30 split=split_name,
31 streaming=True,
32 download_mode=DownloadMode.FORCE_REDOWNLOAD,
33 use_auth_token=hf_token,
34 )
35 if streaming:
36 if not isinstance(dataset, IterableDataset):
37 raise TypeError("load_dataset should return an IterableDataset")
38 elif not isinstance(dataset, Dataset):
39 raise TypeError("load_dataset should return a Dataset")
40 rows_plus_one = list(itertools.islice(dataset, rows_max_number + 1))
41 # ^^ to be able to detect if a split has exactly ROWS_MAX_NUMBER rows
42 if len(rows_plus_one) <= rows_max_number:
43 logger.debug(f"all the rows in the split have been fetched ({len(rows_plus_one)})")
44 else:
45 logger.debug(f"the rows in the split have been truncated ({rows_max_number} rows)")
46 return rows_plus_one[:rows_max_number]
47
[end of src/datasets_preview_backend/models/row.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/datasets_preview_backend/config.py b/src/datasets_preview_backend/config.py
--- a/src/datasets_preview_backend/config.py
+++ b/src/datasets_preview_backend/config.py
@@ -12,6 +12,7 @@
DEFAULT_LOG_LEVEL,
DEFAULT_MAX_AGE_LONG_SECONDS,
DEFAULT_MAX_AGE_SHORT_SECONDS,
+ DEFAULT_MAX_SIZE_FALLBACK,
DEFAULT_MONGO_CACHE_DATABASE,
DEFAULT_MONGO_QUEUE_DATABASE,
DEFAULT_MONGO_URL,
@@ -50,6 +51,7 @@
os.environ["HF_SCRIPTS_VERSION"] = DATASETS_REVISION
# for tests - to be removed
+MAX_SIZE_FALLBACK = get_int_value(os.environ, "MAX_SIZE_FALLBACK", DEFAULT_MAX_SIZE_FALLBACK)
ROWS_MAX_BYTES = get_int_value(d=os.environ, key="ROWS_MAX_BYTES", default=DEFAULT_ROWS_MAX_BYTES)
ROWS_MAX_NUMBER = get_int_value(d=os.environ, key="ROWS_MAX_NUMBER", default=DEFAULT_ROWS_MAX_NUMBER)
ROWS_MIN_NUMBER = get_int_value(d=os.environ, key="ROWS_MIN_NUMBER", default=DEFAULT_ROWS_MIN_NUMBER)
diff --git a/src/datasets_preview_backend/models/row.py b/src/datasets_preview_backend/models/row.py
--- a/src/datasets_preview_backend/models/row.py
+++ b/src/datasets_preview_backend/models/row.py
@@ -28,7 +28,7 @@
dataset_name,
name=config_name,
split=split_name,
- streaming=True,
+ streaming=streaming,
download_mode=DownloadMode.FORCE_REDOWNLOAD,
use_auth_token=hf_token,
)
| {"golden_diff": "diff --git a/src/datasets_preview_backend/config.py b/src/datasets_preview_backend/config.py\n--- a/src/datasets_preview_backend/config.py\n+++ b/src/datasets_preview_backend/config.py\n@@ -12,6 +12,7 @@\n DEFAULT_LOG_LEVEL,\n DEFAULT_MAX_AGE_LONG_SECONDS,\n DEFAULT_MAX_AGE_SHORT_SECONDS,\n+ DEFAULT_MAX_SIZE_FALLBACK,\n DEFAULT_MONGO_CACHE_DATABASE,\n DEFAULT_MONGO_QUEUE_DATABASE,\n DEFAULT_MONGO_URL,\n@@ -50,6 +51,7 @@\n os.environ[\"HF_SCRIPTS_VERSION\"] = DATASETS_REVISION\n \n # for tests - to be removed\n+MAX_SIZE_FALLBACK = get_int_value(os.environ, \"MAX_SIZE_FALLBACK\", DEFAULT_MAX_SIZE_FALLBACK)\n ROWS_MAX_BYTES = get_int_value(d=os.environ, key=\"ROWS_MAX_BYTES\", default=DEFAULT_ROWS_MAX_BYTES)\n ROWS_MAX_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MAX_NUMBER\", default=DEFAULT_ROWS_MAX_NUMBER)\n ROWS_MIN_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MIN_NUMBER\", default=DEFAULT_ROWS_MIN_NUMBER)\ndiff --git a/src/datasets_preview_backend/models/row.py b/src/datasets_preview_backend/models/row.py\n--- a/src/datasets_preview_backend/models/row.py\n+++ b/src/datasets_preview_backend/models/row.py\n@@ -28,7 +28,7 @@\n dataset_name,\n name=config_name,\n split=split_name,\n- streaming=True,\n+ streaming=streaming,\n download_mode=DownloadMode.FORCE_REDOWNLOAD,\n use_auth_token=hf_token,\n )\n", "issue": "regression: fallback if streaming fails is disabled\nCauses https://github.com/huggingface/datasets/issues/3185 for example: the fallback should have loaded the dataset in normal mode.\n", "before_files": [{"content": "import os\n\nfrom dotenv import load_dotenv\n\nfrom datasets_preview_backend.constants import (\n DEFAULT_APP_HOSTNAME,\n DEFAULT_APP_PORT,\n DEFAULT_ASSETS_DIRECTORY,\n DEFAULT_DATASETS_ENABLE_PRIVATE,\n DEFAULT_DATASETS_REVISION,\n DEFAULT_HF_TOKEN,\n DEFAULT_LOG_LEVEL,\n DEFAULT_MAX_AGE_LONG_SECONDS,\n DEFAULT_MAX_AGE_SHORT_SECONDS,\n DEFAULT_MONGO_CACHE_DATABASE,\n DEFAULT_MONGO_QUEUE_DATABASE,\n DEFAULT_MONGO_URL,\n DEFAULT_ROWS_MAX_BYTES,\n DEFAULT_ROWS_MAX_NUMBER,\n DEFAULT_ROWS_MIN_NUMBER,\n DEFAULT_WEB_CONCURRENCY,\n)\nfrom datasets_preview_backend.utils import (\n get_bool_value,\n get_int_value,\n get_str_or_none_value,\n get_str_value,\n)\n\n# Load environment variables defined in .env, if any\nload_dotenv()\n\nAPP_HOSTNAME = get_str_value(d=os.environ, key=\"APP_HOSTNAME\", default=DEFAULT_APP_HOSTNAME)\nAPP_PORT = get_int_value(d=os.environ, key=\"APP_PORT\", default=DEFAULT_APP_PORT)\nASSETS_DIRECTORY = get_str_or_none_value(d=os.environ, key=\"ASSETS_DIRECTORY\", default=DEFAULT_ASSETS_DIRECTORY)\nDATASETS_ENABLE_PRIVATE = get_bool_value(\n d=os.environ, key=\"DATASETS_ENABLE_PRIVATE\", default=DEFAULT_DATASETS_ENABLE_PRIVATE\n)\nDATASETS_REVISION = get_str_value(d=os.environ, key=\"DATASETS_REVISION\", default=DEFAULT_DATASETS_REVISION)\nHF_TOKEN = get_str_or_none_value(d=os.environ, key=\"HF_TOKEN\", default=DEFAULT_HF_TOKEN)\nLOG_LEVEL = get_str_value(d=os.environ, key=\"LOG_LEVEL\", default=DEFAULT_LOG_LEVEL)\nMAX_AGE_LONG_SECONDS = get_int_value(d=os.environ, key=\"MAX_AGE_LONG_SECONDS\", default=DEFAULT_MAX_AGE_LONG_SECONDS)\nMAX_AGE_SHORT_SECONDS = get_int_value(d=os.environ, key=\"MAX_AGE_SHORT_SECONDS\", default=DEFAULT_MAX_AGE_SHORT_SECONDS)\nMONGO_CACHE_DATABASE = get_str_value(d=os.environ, key=\"MONGO_CACHE_DATABASE\", default=DEFAULT_MONGO_CACHE_DATABASE)\nMONGO_QUEUE_DATABASE = get_str_value(d=os.environ, key=\"MONGO_QUEUE_DATABASE\", default=DEFAULT_MONGO_QUEUE_DATABASE)\nMONGO_URL = get_str_value(d=os.environ, 
key=\"MONGO_URL\", default=DEFAULT_MONGO_URL)\nWEB_CONCURRENCY = get_int_value(d=os.environ, key=\"WEB_CONCURRENCY\", default=DEFAULT_WEB_CONCURRENCY)\n\n# Ensure datasets library uses the expected revision for canonical datasets\nos.environ[\"HF_SCRIPTS_VERSION\"] = DATASETS_REVISION\n\n# for tests - to be removed\nROWS_MAX_BYTES = get_int_value(d=os.environ, key=\"ROWS_MAX_BYTES\", default=DEFAULT_ROWS_MAX_BYTES)\nROWS_MAX_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MAX_NUMBER\", default=DEFAULT_ROWS_MAX_NUMBER)\nROWS_MIN_NUMBER = get_int_value(d=os.environ, key=\"ROWS_MIN_NUMBER\", default=DEFAULT_ROWS_MIN_NUMBER)\n", "path": "src/datasets_preview_backend/config.py"}, {"content": "import itertools\nimport logging\nfrom typing import Any, Dict, List, Optional\n\nfrom datasets import Dataset, DownloadMode, IterableDataset, load_dataset\n\nfrom datasets_preview_backend.constants import DEFAULT_ROWS_MAX_NUMBER\nfrom datasets_preview_backend.utils import retry\n\nlogger = logging.getLogger(__name__)\n\n\nRow = Dict[str, Any]\n\n\n@retry(logger=logger)\ndef get_rows(\n dataset_name: str,\n config_name: str,\n split_name: str,\n hf_token: Optional[str] = None,\n streaming: bool = True,\n rows_max_number: Optional[int] = None,\n) -> List[Row]:\n if rows_max_number is None:\n rows_max_number = DEFAULT_ROWS_MAX_NUMBER\n dataset = load_dataset(\n dataset_name,\n name=config_name,\n split=split_name,\n streaming=True,\n download_mode=DownloadMode.FORCE_REDOWNLOAD,\n use_auth_token=hf_token,\n )\n if streaming:\n if not isinstance(dataset, IterableDataset):\n raise TypeError(\"load_dataset should return an IterableDataset\")\n elif not isinstance(dataset, Dataset):\n raise TypeError(\"load_dataset should return a Dataset\")\n rows_plus_one = list(itertools.islice(dataset, rows_max_number + 1))\n # ^^ to be able to detect if a split has exactly ROWS_MAX_NUMBER rows\n if len(rows_plus_one) <= rows_max_number:\n logger.debug(f\"all the rows in the split have been fetched ({len(rows_plus_one)})\")\n else:\n logger.debug(f\"the rows in the split have been truncated ({rows_max_number} rows)\")\n return rows_plus_one[:rows_max_number]\n", "path": "src/datasets_preview_backend/models/row.py"}]} | 1,769 | 345 |
gh_patches_debug_51276 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3848 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
lint takes a long time
Fix that.
</issue>
<code>
[start of exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from os import getpid
16 from socket import gethostname
17 from time import time
18
19 # pylint: disable=wrong-import-position
20 from google.protobuf.timestamp_pb2 import Timestamp
21 from opencensus.proto.agent.common.v1 import common_pb2
22 from opencensus.proto.trace.v1 import trace_pb2
23
24 from opentelemetry.exporter.opencensus.version import (
25 __version__ as opencensusexporter_exporter_version,
26 )
27 from opentelemetry.trace import SpanKind
28 from opentelemetry.util._importlib_metadata import version
29
30 OPENTELEMETRY_VERSION = version("opentelemetry-api")
31
32
33 def proto_timestamp_from_time_ns(time_ns):
34 """Converts datetime to protobuf timestamp.
35
36 Args:
37 time_ns: Time in nanoseconds
38
39 Returns:
40 Returns protobuf timestamp.
41 """
42 ts = Timestamp()
43 if time_ns is not None:
44 # pylint: disable=no-member
45 ts.FromNanoseconds(time_ns)
46 return ts
47
48
49 # pylint: disable=no-member
50 def get_collector_span_kind(kind: SpanKind):
51 if kind is SpanKind.SERVER:
52 return trace_pb2.Span.SpanKind.SERVER
53 if kind is SpanKind.CLIENT:
54 return trace_pb2.Span.SpanKind.CLIENT
55 return trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED
56
57
58 def add_proto_attribute_value(pb_attributes, key, value):
59 """Sets string, int, boolean or float value on protobuf
60 span, link or annotation attributes.
61
62 Args:
63 pb_attributes: protobuf Span's attributes property.
64 key: attribute key to set.
65 value: attribute value
66 """
67
68 if isinstance(value, bool):
69 pb_attributes.attribute_map[key].bool_value = value
70 elif isinstance(value, int):
71 pb_attributes.attribute_map[key].int_value = value
72 elif isinstance(value, str):
73 pb_attributes.attribute_map[key].string_value.value = value
74 elif isinstance(value, float):
75 pb_attributes.attribute_map[key].double_value = value
76 else:
77 pb_attributes.attribute_map[key].string_value.value = str(value)
78
79
80 # pylint: disable=no-member
81 def get_node(service_name, host_name):
82 """Generates Node message from params and system information.
83
84 Args:
85 service_name: Name of Collector service.
86 host_name: Host name.
87 """
88 return common_pb2.Node(
89 identifier=common_pb2.ProcessIdentifier(
90 host_name=gethostname() if host_name is None else host_name,
91 pid=getpid(),
92 start_timestamp=proto_timestamp_from_time_ns(int(time() * 1e9)),
93 ),
94 library_info=common_pb2.LibraryInfo(
95 language=common_pb2.LibraryInfo.Language.Value("PYTHON"),
96 exporter_version=opencensusexporter_exporter_version,
97 core_library_version=OPENTELEMETRY_VERSION,
98 ),
99 service_info=common_pb2.ServiceInfo(name=service_name),
100 )
101
[end of exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
--- a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
+++ b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
@@ -17,7 +17,9 @@
from time import time
# pylint: disable=wrong-import-position
-from google.protobuf.timestamp_pb2 import Timestamp
+from google.protobuf.timestamp_pb2 import ( # pylint: disable=no-name-in-module
+ Timestamp,
+)
from opencensus.proto.agent.common.v1 import common_pb2
from opencensus.proto.trace.v1 import trace_pb2
| {"golden_diff": "diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py\n--- a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py\n+++ b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py\n@@ -17,7 +17,9 @@\n from time import time\n \n # pylint: disable=wrong-import-position\n-from google.protobuf.timestamp_pb2 import Timestamp\n+from google.protobuf.timestamp_pb2 import ( # pylint: disable=no-name-in-module\n+ Timestamp,\n+)\n from opencensus.proto.agent.common.v1 import common_pb2\n from opencensus.proto.trace.v1 import trace_pb2\n", "issue": "lint takes a long time\nFix that.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom os import getpid\nfrom socket import gethostname\nfrom time import time\n\n# pylint: disable=wrong-import-position\nfrom google.protobuf.timestamp_pb2 import Timestamp\nfrom opencensus.proto.agent.common.v1 import common_pb2\nfrom opencensus.proto.trace.v1 import trace_pb2\n\nfrom opentelemetry.exporter.opencensus.version import (\n __version__ as opencensusexporter_exporter_version,\n)\nfrom opentelemetry.trace import SpanKind\nfrom opentelemetry.util._importlib_metadata import version\n\nOPENTELEMETRY_VERSION = version(\"opentelemetry-api\")\n\n\ndef proto_timestamp_from_time_ns(time_ns):\n \"\"\"Converts datetime to protobuf timestamp.\n\n Args:\n time_ns: Time in nanoseconds\n\n Returns:\n Returns protobuf timestamp.\n \"\"\"\n ts = Timestamp()\n if time_ns is not None:\n # pylint: disable=no-member\n ts.FromNanoseconds(time_ns)\n return ts\n\n\n# pylint: disable=no-member\ndef get_collector_span_kind(kind: SpanKind):\n if kind is SpanKind.SERVER:\n return trace_pb2.Span.SpanKind.SERVER\n if kind is SpanKind.CLIENT:\n return trace_pb2.Span.SpanKind.CLIENT\n return trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED\n\n\ndef add_proto_attribute_value(pb_attributes, key, value):\n \"\"\"Sets string, int, boolean or float value on protobuf\n span, link or annotation attributes.\n\n Args:\n pb_attributes: protobuf Span's attributes property.\n key: attribute key to set.\n value: attribute value\n \"\"\"\n\n if isinstance(value, bool):\n pb_attributes.attribute_map[key].bool_value = value\n elif isinstance(value, int):\n pb_attributes.attribute_map[key].int_value = value\n elif isinstance(value, str):\n pb_attributes.attribute_map[key].string_value.value = value\n elif isinstance(value, float):\n pb_attributes.attribute_map[key].double_value = value\n else:\n pb_attributes.attribute_map[key].string_value.value = str(value)\n\n\n# pylint: disable=no-member\ndef get_node(service_name, host_name):\n \"\"\"Generates Node message from params and system information.\n\n Args:\n service_name: Name of Collector service.\n host_name: Host name.\n \"\"\"\n return common_pb2.Node(\n 
identifier=common_pb2.ProcessIdentifier(\n host_name=gethostname() if host_name is None else host_name,\n pid=getpid(),\n start_timestamp=proto_timestamp_from_time_ns(int(time() * 1e9)),\n ),\n library_info=common_pb2.LibraryInfo(\n language=common_pb2.LibraryInfo.Language.Value(\"PYTHON\"),\n exporter_version=opencensusexporter_exporter_version,\n core_library_version=OPENTELEMETRY_VERSION,\n ),\n service_info=common_pb2.ServiceInfo(name=service_name),\n )\n", "path": "exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py"}]} | 1,511 | 184 |
gh_patches_debug_6860 | rasdani/github-patches | git_diff | scrapy__scrapy-5858 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TLS logging broken with new cryptography
https://github.com/pyca/cryptography/pull/8391 dropped `SSL_get_server_tmp_key()` so we need to disable the code that uses it if it's not available.
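One way to handle that (a sketch, not necessarily the final patch): guard the lookup in `get_temp_key_info()` so it degrades gracefully when the binding is missing:
```python
def get_temp_key_info(ssl_object):
    # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()
    if not hasattr(pyOpenSSLutil.lib, "SSL_get_server_tmp_key"):
        # newer cryptography releases no longer expose this binding
        return None
    # ... existing logic continues unchanged ...
```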
</issue>
<code>
[start of scrapy/utils/ssl.py]
1 import OpenSSL._util as pyOpenSSLutil
2 import OpenSSL.SSL
3
4 from scrapy.utils.python import to_unicode
5
6
7 def ffi_buf_to_string(buf):
8 return to_unicode(pyOpenSSLutil.ffi.string(buf))
9
10
11 def x509name_to_string(x509name):
12 # from OpenSSL.crypto.X509Name.__repr__
13 result_buffer = pyOpenSSLutil.ffi.new("char[]", 512)
14 pyOpenSSLutil.lib.X509_NAME_oneline(
15 x509name._name, result_buffer, len(result_buffer)
16 )
17
18 return ffi_buf_to_string(result_buffer)
19
20
21 def get_temp_key_info(ssl_object):
22 # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()
23 temp_key_p = pyOpenSSLutil.ffi.new("EVP_PKEY **")
24 if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):
25 return None
26 temp_key = temp_key_p[0]
27 if temp_key == pyOpenSSLutil.ffi.NULL:
28 return None
29 temp_key = pyOpenSSLutil.ffi.gc(temp_key, pyOpenSSLutil.lib.EVP_PKEY_free)
30 key_info = []
31 key_type = pyOpenSSLutil.lib.EVP_PKEY_id(temp_key)
32 if key_type == pyOpenSSLutil.lib.EVP_PKEY_RSA:
33 key_info.append("RSA")
34 elif key_type == pyOpenSSLutil.lib.EVP_PKEY_DH:
35 key_info.append("DH")
36 elif key_type == pyOpenSSLutil.lib.EVP_PKEY_EC:
37 key_info.append("ECDH")
38 ec_key = pyOpenSSLutil.lib.EVP_PKEY_get1_EC_KEY(temp_key)
39 ec_key = pyOpenSSLutil.ffi.gc(ec_key, pyOpenSSLutil.lib.EC_KEY_free)
40 nid = pyOpenSSLutil.lib.EC_GROUP_get_curve_name(
41 pyOpenSSLutil.lib.EC_KEY_get0_group(ec_key)
42 )
43 cname = pyOpenSSLutil.lib.EC_curve_nid2nist(nid)
44 if cname == pyOpenSSLutil.ffi.NULL:
45 cname = pyOpenSSLutil.lib.OBJ_nid2sn(nid)
46 key_info.append(ffi_buf_to_string(cname))
47 else:
48 key_info.append(ffi_buf_to_string(pyOpenSSLutil.lib.OBJ_nid2sn(key_type)))
49 key_info.append(f"{pyOpenSSLutil.lib.EVP_PKEY_bits(temp_key)} bits")
50 return ", ".join(key_info)
51
52
53 def get_openssl_version():
54 system_openssl = OpenSSL.SSL.SSLeay_version(OpenSSL.SSL.SSLEAY_VERSION).decode(
55 "ascii", errors="replace"
56 )
57 return f"{OpenSSL.version.__version__} ({system_openssl})"
58
[end of scrapy/utils/ssl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/utils/ssl.py b/scrapy/utils/ssl.py
--- a/scrapy/utils/ssl.py
+++ b/scrapy/utils/ssl.py
@@ -20,6 +20,9 @@
def get_temp_key_info(ssl_object):
# adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()
+ if not hasattr(pyOpenSSLutil.lib, "SSL_get_server_tmp_key"):
+ # removed in cryptography 40.0.0
+ return None
temp_key_p = pyOpenSSLutil.ffi.new("EVP_PKEY **")
if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):
return None
| {"golden_diff": "diff --git a/scrapy/utils/ssl.py b/scrapy/utils/ssl.py\n--- a/scrapy/utils/ssl.py\n+++ b/scrapy/utils/ssl.py\n@@ -20,6 +20,9 @@\n \n def get_temp_key_info(ssl_object):\n # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()\n+ if not hasattr(pyOpenSSLutil.lib, \"SSL_get_server_tmp_key\"):\n+ # removed in cryptography 40.0.0\n+ return None\n temp_key_p = pyOpenSSLutil.ffi.new(\"EVP_PKEY **\")\n if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):\n return None\n", "issue": "TLS logging broken with new cryptography\nhttps://github.com/pyca/cryptography/pull/8391 dropped `SSL_get_server_tmp_key()` so we need to disable the code that uses it if it's not available.\n", "before_files": [{"content": "import OpenSSL._util as pyOpenSSLutil\nimport OpenSSL.SSL\n\nfrom scrapy.utils.python import to_unicode\n\n\ndef ffi_buf_to_string(buf):\n return to_unicode(pyOpenSSLutil.ffi.string(buf))\n\n\ndef x509name_to_string(x509name):\n # from OpenSSL.crypto.X509Name.__repr__\n result_buffer = pyOpenSSLutil.ffi.new(\"char[]\", 512)\n pyOpenSSLutil.lib.X509_NAME_oneline(\n x509name._name, result_buffer, len(result_buffer)\n )\n\n return ffi_buf_to_string(result_buffer)\n\n\ndef get_temp_key_info(ssl_object):\n # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()\n temp_key_p = pyOpenSSLutil.ffi.new(\"EVP_PKEY **\")\n if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):\n return None\n temp_key = temp_key_p[0]\n if temp_key == pyOpenSSLutil.ffi.NULL:\n return None\n temp_key = pyOpenSSLutil.ffi.gc(temp_key, pyOpenSSLutil.lib.EVP_PKEY_free)\n key_info = []\n key_type = pyOpenSSLutil.lib.EVP_PKEY_id(temp_key)\n if key_type == pyOpenSSLutil.lib.EVP_PKEY_RSA:\n key_info.append(\"RSA\")\n elif key_type == pyOpenSSLutil.lib.EVP_PKEY_DH:\n key_info.append(\"DH\")\n elif key_type == pyOpenSSLutil.lib.EVP_PKEY_EC:\n key_info.append(\"ECDH\")\n ec_key = pyOpenSSLutil.lib.EVP_PKEY_get1_EC_KEY(temp_key)\n ec_key = pyOpenSSLutil.ffi.gc(ec_key, pyOpenSSLutil.lib.EC_KEY_free)\n nid = pyOpenSSLutil.lib.EC_GROUP_get_curve_name(\n pyOpenSSLutil.lib.EC_KEY_get0_group(ec_key)\n )\n cname = pyOpenSSLutil.lib.EC_curve_nid2nist(nid)\n if cname == pyOpenSSLutil.ffi.NULL:\n cname = pyOpenSSLutil.lib.OBJ_nid2sn(nid)\n key_info.append(ffi_buf_to_string(cname))\n else:\n key_info.append(ffi_buf_to_string(pyOpenSSLutil.lib.OBJ_nid2sn(key_type)))\n key_info.append(f\"{pyOpenSSLutil.lib.EVP_PKEY_bits(temp_key)} bits\")\n return \", \".join(key_info)\n\n\ndef get_openssl_version():\n system_openssl = OpenSSL.SSL.SSLeay_version(OpenSSL.SSL.SSLEAY_VERSION).decode(\n \"ascii\", errors=\"replace\"\n )\n return f\"{OpenSSL.version.__version__} ({system_openssl})\"\n", "path": "scrapy/utils/ssl.py"}]} | 1,295 | 156 |
gh_patches_debug_50223 | rasdani/github-patches | git_diff | pex-tool__pex-916 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.6
On the docket:
+ [x] Don't delete the root `__init__.py` when devendoring. #915
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '2.1.5'
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '2.1.5'
+__version__ = '2.1.6'
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '2.1.5'\n+__version__ = '2.1.6'\n", "issue": "Release 2.1.6\nOn the docket:\r\n+ [x] Don't delete the root `__init__.py` when devendoring. #915\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.1.5'\n", "path": "pex/version.py"}]} | 621 | 95 |
gh_patches_debug_236 | rasdani/github-patches | git_diff | jazzband__pip-tools-2042 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken build due to failed `linkcheck` job
I've noticed that matrix badges are frequently inaccessible, see README:
<img width="893" alt="image" src="https://github.com/jazzband/pip-tools/assets/7377671/94c2d45a-12ef-4237-8a85-434ee1bd7c05">
Sometimes, a certain issue even results in CI builds [breaking](https://github.com/jazzband/pip-tools/actions/runs/5920050370/job/16051009863#step:10:446) (caught in #1973):
```
broken https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat - 408 Client Error: Request Timeout for url: https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat
```
Perhaps we should consider [ignoring](https://github.com/jazzband/pip-tools/blob/04d2235716bc43cad3c10288081a4d2b7ee56944/docs/conf.py#L55-L57) `https://img.shields.io/matrix` as well?
/cc @webknjaz
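Concretely, that would be a one-line addition to the existing `linkcheck_ignore` list in `docs/conf.py` (sketch):
```python
linkcheck_ignore = [
    r"^https://matrix\.to/#",
    r"^https://img.shields.io/matrix",  # the badge endpoint times out intermittently
]
```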
</issue>
<code>
[start of docs/conf.py]
1 # https://www.sphinx-doc.org/en/master/usage/configuration.html
2 """Configuration file for the Sphinx documentation builder."""
3
4 from __future__ import annotations
5
6 from importlib.metadata import version as get_version
7 from pathlib import Path
8
9 from sphinx.util import logging
10 from sphinx.util.console import bold
11
12 logger = logging.getLogger(__name__)
13
14 # -- Path setup --------------------------------------------------------------
15
16 PROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()
17
18
19 # -- Project information -----------------------------------------------------
20
21 project = "pip-tools"
22 author = f"{project} Contributors"
23 copyright = f"The {author}"
24
25 # The full version, including alpha/beta/rc tags
26 release = get_version(project)
27
28 # The short X.Y version
29 version = ".".join(release.split(".")[:3])
30
31 logger.info(bold("%s version: %s"), project, version)
32 logger.info(bold("%s release: %s"), project, release)
33
34 # -- General configuration ---------------------------------------------------
35
36 # Add any Sphinx extension module names here, as strings. They can be
37 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
38 # ones.
39 extensions = ["myst_parser", "sphinxcontrib.programoutput"]
40
41
42 # -- Options for HTML output -------------------------------------------------
43
44 # The theme to use for HTML and HTML Help pages. See the documentation for
45 # a list of builtin themes.
46 #
47 html_theme = "furo"
48 html_title = f"<nobr>{project}</nobr> documentation v{release}"
49
50
51 # -------------------------------------------------------------------------
52 default_role = "any"
53 nitpicky = True
54
55 linkcheck_ignore = [
56 r"^https://matrix\.to/#",
57 ]
58
59 suppress_warnings = ["myst.xref_missing"]
60
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -54,6 +54,7 @@
linkcheck_ignore = [
r"^https://matrix\.to/#",
+ r"^https://img.shields.io/matrix",
]
suppress_warnings = ["myst.xref_missing"]
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -54,6 +54,7 @@\n \n linkcheck_ignore = [\n r\"^https://matrix\\.to/#\",\n+ r\"^https://img.shields.io/matrix\",\n ]\n \n suppress_warnings = [\"myst.xref_missing\"]\n", "issue": "Broken build due to failed `linkcheck` job\nI've noticed that matrix badges are frequently inaccessible, see README:\r\n<img width=\"893\" alt=\"image\" src=\"https://github.com/jazzband/pip-tools/assets/7377671/94c2d45a-12ef-4237-8a85-434ee1bd7c05\">\r\n\r\nSometimes, a certain issue even results in CI builds [breaking](https://github.com/jazzband/pip-tools/actions/runs/5920050370/job/16051009863#step:10:446) (caught in #1973):\r\n\r\n```\r\nbroken https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat - 408 Client Error: Request Timeout for url: https://img.shields.io/matrix/pip-tools:matrix.org?label=Discuss%20on%20Matrix%20at%20%23pip-tools%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat\r\n```\r\n\r\nPerhaps we should consider [ignoring](https://github.com/jazzband/pip-tools/blob/04d2235716bc43cad3c10288081a4d2b7ee56944/docs/conf.py#L55-L57) `https://img.shields.io/matrix` as well?\r\n\r\n/cc @webknjaz \r\n\n", "before_files": [{"content": "# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\nfrom __future__ import annotations\n\nfrom importlib.metadata import version as get_version\nfrom pathlib import Path\n\nfrom sphinx.util import logging\nfrom sphinx.util.console import bold\n\nlogger = logging.getLogger(__name__)\n\n# -- Path setup --------------------------------------------------------------\n\nPROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"pip-tools\"\nauthor = f\"{project} Contributors\"\ncopyright = f\"The {author}\"\n\n# The full version, including alpha/beta/rc tags\nrelease = get_version(project)\n\n# The short X.Y version\nversion = \".\".join(release.split(\".\")[:3])\n\nlogger.info(bold(\"%s version: %s\"), project, version)\nlogger.info(bold(\"%s release: %s\"), project, release)\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\"myst_parser\", \"sphinxcontrib.programoutput\"]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\nhtml_title = f\"<nobr>{project}</nobr> documentation v{release}\"\n\n\n# -------------------------------------------------------------------------\ndefault_role = \"any\"\nnitpicky = True\n\nlinkcheck_ignore = [\n r\"^https://matrix\\.to/#\",\n]\n\nsuppress_warnings = [\"myst.xref_missing\"]\n", "path": "docs/conf.py"}]} | 1,389 | 78 |
gh_patches_debug_65929 | rasdani/github-patches | git_diff | iterative__dvc-985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Trouble installing dvc with pip: No matching distribution found for futures>=3.2.0 (from dvc)
I'm on a fresh Ubuntu 18.04 install and I want to install DVC, but I run into dependency problems that I have never had before.
```
➤ virtualenv -p python3 .venv
➤ source .venv/bin/activate.fish
➤ pip install dvc
Collecting dvc
Using cached https://files.pythonhosted.org/packages/d2/2d/117b6e99f4e7f0760d99944919d9dcaaeabfb6c6182a9c890b7260eec697/dvc-0.15.2-py2.py3-none-any.whl
Collecting pyasn1>=0.4.1 (from dvc)
Using cached https://files.pythonhosted.org/packages/d1/a1/7790cc85db38daa874f6a2e6308131b9953feb1367f2ae2d1123bb93a9f5/pyasn1-0.4.4-py2.py3-none-any.whl
Collecting ply>=3.9 (from dvc)
Using cached https://files.pythonhosted.org/packages/a3/58/35da89ee790598a0700ea49b2a66594140f44dec458c07e8e3d4979137fc/ply-3.11-py2.py3-none-any.whl
Collecting futures>=3.2.0 (from dvc)
Could not find a version that satisfies the requirement futures>=3.2.0 (from dvc) (from versions: 0.2.python3, 0.1, 0.2, 1.0, 2.0, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.2.0, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.1.0, 3.1.1)
No matching distribution found for futures>=3.2.0 (from dvc)
```
Here are all the relevant versions:
```
➤ pip --version
pip 18.0 from /home/PATH/.venv/lib/python3.6/site-packages/pip (python 3.6)
➤ python --version
Python 3.6.5
➤ virtualenv --version
16.0.0
```
</issue>
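The `futures` package is the Python 2 backport of `concurrent.futures` and is only meant for Python 2, so requiring it unconditionally breaks installation on Python 3. Rather than appending it at runtime based on `sys.version_info` (a choice that also gets frozen into built wheels), the usual fix is a PEP 508 environment marker. A minimal sketch of the `install_requires` fragment:

```python
# setup.py (sketch): pip evaluates the marker at install time, so the
# backport is only resolved on Python 2.7 and never on Python 3.
install_requires = [
    # ... other requirements unchanged ...
    'futures; python_version == "2.7"',
]
```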
<code>
[start of setup.py]
1 import sys
2 import platform
3 from setuptools import setup, find_packages
4 from distutils.errors import DistutilsPlatformError
5 from dvc import VERSION
6
7
8 install_requires = [
9 "ply>=3.9", # See https://github.com/pyinstaller/pyinstaller/issues/1945
10 "configparser>=3.5.0",
11 "zc.lockfile>=1.2.1",
12 "future>=0.16.0",
13 "colorama>=0.3.9",
14 "configobj>=5.0.6",
15 "networkx==2.1",
16 "pyyaml>=3.12",
17 "gitpython>=2.1.8",
18 "ntfsutils>=0.1.4",
19 "setuptools>=34.0.0",
20 "nanotime>=0.5.2",
21 "pyasn1>=0.4.1",
22 "schema>=0.6.7",
23 "jsonpath-rw==1.4.0",
24 "reflink==0.2.0",
25 "requests>=2.18.4",
26 ]
27
28 if sys.version_info[0] == 2:
29 install_requires.append("futures>=3.2.0")
30
31 # Extra dependencies for remote integrations
32 gs = [
33 "google-cloud==0.32.0",
34 ]
35 s3 = [
36 "boto3==1.7.4",
37 ]
38 azure = [
39 "azure-storage-blob==1.3.0"
40 ]
41 ssh = [
42 "paramiko>=2.4.1",
43 ]
44 all_remotes = gs + s3 + azure + ssh
45
46 setup(
47 name='dvc',
48 version=VERSION,
49 description='Git for data scientists - manage your code and data together',
50 long_description=open('README.rst', 'r').read(),
51 author='Dmitry Petrov',
52 author_email='[email protected]',
53 download_url='https://github.com/iterative/dvc',
54 license='Apache License 2.0',
55 install_requires=install_requires,
56 extras_require={
57 'all': all_remotes,
58 'gs': gs,
59 's3': s3,
60 'azure': azure,
61 'ssh': ssh,
62 },
63 keywords='data science, data version control, machine learning',
64 classifiers=[
65 'Development Status :: 4 - Beta',
66 'Programming Language :: Python :: 2',
67 'Programming Language :: Python :: 3',
68 ],
69 packages=find_packages(exclude=['bin', 'tests', 'functests']),
70 include_package_data=True,
71 url='http://dataversioncontrol.com',
72 entry_points={
73 'console_scripts': ['dvc = dvc.main:main']
74 },
75 zip_safe=False
76 )
77
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,11 +23,9 @@
"jsonpath-rw==1.4.0",
"reflink==0.2.0",
"requests>=2.18.4",
+ 'futures; python_version == "2.7"',
]
-if sys.version_info[0] == 2:
- install_requires.append("futures>=3.2.0")
-
# Extra dependencies for remote integrations
gs = [
"google-cloud==0.32.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,11 +23,9 @@\n \"jsonpath-rw==1.4.0\",\n \"reflink==0.2.0\",\n \"requests>=2.18.4\",\n+ 'futures; python_version == \"2.7\"',\n ]\n \n-if sys.version_info[0] == 2:\n- install_requires.append(\"futures>=3.2.0\")\n-\n # Extra dependencies for remote integrations\n gs = [\n \"google-cloud==0.32.0\",\n", "issue": "Trouble installing dvc with pip: No matching distribution found for futures>=3.2.0 (from dvc)\nI'm on a fresh ubuntu 18.04 and I want to install DVC. But I run into some dependency problems. Never had that problem before.\r\n```\r\n\u27a4 virtualenv -p python3 .venv\r\n\u27a4 source .venv/bin/activate.fish\r\n\u27a4 pip install dvc\r\nCollecting dvc\r\n Using cached https://files.pythonhosted.org/packages/d2/2d/117b6e99f4e7f0760d99944919d9dcaaeabfb6c6182a9c890b7260eec697/dvc-0.15.2-py2.py3-none-any.whl\r\nCollecting pyasn1>=0.4.1 (from dvc)\r\n Using cached https://files.pythonhosted.org/packages/d1/a1/7790cc85db38daa874f6a2e6308131b9953feb1367f2ae2d1123bb93a9f5/pyasn1-0.4.4-py2.py3-none-any.whl\r\nCollecting ply>=3.9 (from dvc)\r\n Using cached https://files.pythonhosted.org/packages/a3/58/35da89ee790598a0700ea49b2a66594140f44dec458c07e8e3d4979137fc/ply-3.11-py2.py3-none-any.whl\r\nCollecting futures>=3.2.0 (from dvc)\r\n Could not find a version that satisfies the requirement futures>=3.2.0 (from dvc) (from versions: 0.2.python3, 0.1, 0.2, 1.0, 2.0, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.2.0, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.1.0, 3.1.1)\r\nNo matching distribution found for futures>=3.2.0 (from dvc)\r\n```\r\nHere are all relevant version\r\n```\r\n\u27a4 pip --version\r\npip 18.0 from /home/PATH/.venv/lib/python3.6/site-packages/pip (python 3.6)\r\n\u27a4 python --version\r\nPython 3.6.5\r\n\u27a4 virtualenv --version\r\n16.0.0\r\n```\n", "before_files": [{"content": "import sys\nimport platform\nfrom setuptools import setup, find_packages\nfrom distutils.errors import DistutilsPlatformError\nfrom dvc import VERSION\n\n\ninstall_requires = [\n \"ply>=3.9\", # See https://github.com/pyinstaller/pyinstaller/issues/1945\n \"configparser>=3.5.0\",\n \"zc.lockfile>=1.2.1\",\n \"future>=0.16.0\",\n \"colorama>=0.3.9\",\n \"configobj>=5.0.6\",\n \"networkx==2.1\",\n \"pyyaml>=3.12\",\n \"gitpython>=2.1.8\",\n \"ntfsutils>=0.1.4\",\n \"setuptools>=34.0.0\",\n \"nanotime>=0.5.2\",\n \"pyasn1>=0.4.1\",\n \"schema>=0.6.7\",\n \"jsonpath-rw==1.4.0\",\n \"reflink==0.2.0\",\n \"requests>=2.18.4\",\n]\n\nif sys.version_info[0] == 2:\n install_requires.append(\"futures>=3.2.0\")\n\n# Extra dependencies for remote integrations\ngs = [\n \"google-cloud==0.32.0\",\n]\ns3 = [\n \"boto3==1.7.4\",\n]\nazure = [\n \"azure-storage-blob==1.3.0\"\n]\nssh = [\n \"paramiko>=2.4.1\",\n]\nall_remotes = gs + s3 + azure + ssh\n\nsetup(\n name='dvc',\n version=VERSION,\n description='Git for data scientists - manage your code and data together',\n long_description=open('README.rst', 'r').read(),\n author='Dmitry Petrov',\n author_email='[email protected]',\n download_url='https://github.com/iterative/dvc',\n license='Apache License 2.0',\n install_requires=install_requires,\n extras_require={\n 'all': all_remotes,\n 'gs': gs,\n 's3': s3,\n 'azure': azure,\n 'ssh': ssh,\n },\n keywords='data science, data version control, machine learning',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n ],\n 
packages=find_packages(exclude=['bin', 'tests', 'functests']),\n include_package_data=True,\n url='http://dataversioncontrol.com',\n entry_points={\n 'console_scripts': ['dvc = dvc.main:main']\n },\n zip_safe=False\n)\n", "path": "setup.py"}]} | 1,929 | 135 |
gh_patches_debug_23587 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-359 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
McDonald's
JSON endpoint: http://rl.mcdonalds.com/googleapps/GoogleSearchUSAction.do?method=searchLocation&searchTxtLatlng=(43.1272254%2C-87.9432837)&actionType=searchRestaurant&language=en&country=us
Search appears to be by lat/lon only. It looks like they geocode the address with the Google Maps API and then call this endpoint with a lat/lon.
</issue>
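The endpoint in the report takes the coordinates directly in the query string, so it can be exercised without any geocoding step. The sketch below simply replays the URL quoted above with `requests`; the parameter names come verbatim from that URL, and nothing about the response format is assumed beyond the title's claim that it returns JSON:

```python
import requests

# Replay the example search from the report: a (lat, lng) pair plus the fixed
# action/language/country parameters already present in the quoted URL.
url = (
    "http://rl.mcdonalds.com/googleapps/GoogleSearchUSAction.do"
    "?method=searchLocation"
    "&searchTxtLatlng=(43.1272254%2C-87.9432837)"
    "&actionType=searchRestaurant&language=en&country=us"
)
response = requests.get(url, timeout=10)
print(response.status_code, response.headers.get("Content-Type"))
```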
<code>
[start of locations/spiders/mcdonalds_localizer.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 from locations.items import GeojsonPointItem
5
6
7 class McLocalizer(scrapy.Spider):
8
9 name = "mclocalizer"
10 allowed_domains = ["www.mcdonalds.com", "www.mcdonalds.com.pr", "www.mcdonalds.co.cr", "www.mcdonalds.com.ar"]
11 start_urls = (
12 'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',
13 'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',
14 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'
15 )
16
17 def parse(self, response):
18 data = response.body_as_unicode()
19 data.replace('" ', '"')
20 data.replace(' "', '"')
21 results = json.loads(data)
22 results = results["content"]["restaurants"]
23 for data in results:
24 properties = {
25 'ref': data['id'],
26 'lon': float(data['longitude']),
27 'lat': float(data['latitude']),
28
29 }
30
31 contact_info = data['name'][:data['name'].find("<br")]
32 name = contact_info[:contact_info.find("</br")]
33
34 properties["name"] = name
35 properties["addr_full"] = data['name'][data['name'].find("<small>"):-8][8:]
36 # = address[8:]
37
38 yield GeojsonPointItem(**properties)
[end of locations/spiders/mcdonalds_localizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/mcdonalds_localizer.py b/locations/spiders/mcdonalds_localizer.py
--- a/locations/spiders/mcdonalds_localizer.py
+++ b/locations/spiders/mcdonalds_localizer.py
@@ -7,11 +7,12 @@
class McLocalizer(scrapy.Spider):
name = "mclocalizer"
- allowed_domains = ["www.mcdonalds.com", "www.mcdonalds.com.pr", "www.mcdonalds.co.cr", "www.mcdonalds.com.ar"]
+ allowed_domains = ["www.mcdonalds.com", "www.mcdonalds.com.pr", "www.mcdonalds.co.cr", "www.mcdonalds.com.ar", "www.mcdonalds.com.pa"]
start_urls = (
'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',
'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',
- 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'
+ 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR',
+ 'http://www.mcdonalds.com.pa/api/restaurantsByCountry?country=PA'
)
def parse(self, response):
@@ -33,6 +34,5 @@
properties["name"] = name
properties["addr_full"] = data['name'][data['name'].find("<small>"):-8][8:]
- # = address[8:]
yield GeojsonPointItem(**properties)
\ No newline at end of file
| {"golden_diff": "diff --git a/locations/spiders/mcdonalds_localizer.py b/locations/spiders/mcdonalds_localizer.py\n--- a/locations/spiders/mcdonalds_localizer.py\n+++ b/locations/spiders/mcdonalds_localizer.py\n@@ -7,11 +7,12 @@\n class McLocalizer(scrapy.Spider):\n \n name = \"mclocalizer\"\n- allowed_domains = [\"www.mcdonalds.com\", \"www.mcdonalds.com.pr\", \"www.mcdonalds.co.cr\", \"www.mcdonalds.com.ar\"]\n+ allowed_domains = [\"www.mcdonalds.com\", \"www.mcdonalds.com.pr\", \"www.mcdonalds.co.cr\", \"www.mcdonalds.com.ar\", \"www.mcdonalds.com.pa\"]\n start_urls = (\n 'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',\n 'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',\n- 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'\n+ 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR',\n+ 'http://www.mcdonalds.com.pa/api/restaurantsByCountry?country=PA'\n )\n \n def parse(self, response):\n@@ -33,6 +34,5 @@\n \n properties[\"name\"] = name\n properties[\"addr_full\"] = data['name'][data['name'].find(\"<small>\"):-8][8:]\n- # = address[8:]\n \n yield GeojsonPointItem(**properties)\n\\ No newline at end of file\n", "issue": "McDonald's\nJSON endpoint: http://rl.mcdonalds.com/googleapps/GoogleSearchUSAction.do?method=searchLocation&searchTxtLatlng=(43.1272254%2C-87.9432837)&actionType=searchRestaurant&language=en&country=us\n\nSearch by lat/lon only? Looks like they geocode using Google Maps API and then call this endpoint with a lat/lon.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nfrom locations.items import GeojsonPointItem\n\n\nclass McLocalizer(scrapy.Spider):\n\n name = \"mclocalizer\"\n allowed_domains = [\"www.mcdonalds.com\", \"www.mcdonalds.com.pr\", \"www.mcdonalds.co.cr\", \"www.mcdonalds.com.ar\"]\n start_urls = (\n 'http://www.mcdonalds.com.pr/api/restaurantsByCountry?country=PR',\n 'http://www.mcdonalds.co.cr/api/restaurantsByCountry?country=CR',\n 'http://www.mcdonalds.com.ar/api/restaurantsByCountry?country=AR'\n )\n\n def parse(self, response):\n data = response.body_as_unicode()\n data.replace('\" ', '\"')\n data.replace(' \"', '\"')\n results = json.loads(data)\n results = results[\"content\"][\"restaurants\"]\n for data in results:\n properties = {\n 'ref': data['id'],\n 'lon': float(data['longitude']),\n 'lat': float(data['latitude']),\n \n }\n\n contact_info = data['name'][:data['name'].find(\"<br\")]\n name = contact_info[:contact_info.find(\"</br\")]\n\n properties[\"name\"] = name\n properties[\"addr_full\"] = data['name'][data['name'].find(\"<small>\"):-8][8:]\n # = address[8:]\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/mcdonalds_localizer.py"}]} | 1,027 | 371 |
gh_patches_debug_8064 | rasdani/github-patches | git_diff | getsentry__sentry-37553 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot edit or delete alerts
### Environment
SaaS (https://sentry.io/)
### Version
_No response_
### Steps to Reproduce
1. Have alerts that were set up a while ago
2. Get a bunch of emails from one alert that is too touchy
3. Try to edit alert (fails)
4. Try to delete alert (fails)
### Expected Result
Can edit or delete alerts that I created on an account that I am the only user for
### Actual Result
Cannot edit or delete alerts


</issue>
<code>
[start of src/sentry/incidents/endpoints/bases.py]
1 from rest_framework.exceptions import PermissionDenied
2 from rest_framework.request import Request
3
4 from sentry import features
5 from sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint
6 from sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint
7 from sentry.api.exceptions import ResourceDoesNotExist
8 from sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction
9
10
11 class ProjectAlertRuleEndpoint(ProjectEndpoint):
12 permission_classes = (ProjectAlertRulePermission,)
13
14 def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):
15 args, kwargs = super().convert_args(request, *args, **kwargs)
16 project = kwargs["project"]
17
18 if not features.has("organizations:incidents", project.organization, actor=request.user):
19 raise ResourceDoesNotExist
20
21 if not request.access.has_project_access(project):
22 raise PermissionDenied
23
24 try:
25 kwargs["alert_rule"] = AlertRule.objects.get(
26 snuba_query__subscriptions__project=project, id=alert_rule_id
27 )
28 except AlertRule.DoesNotExist:
29 raise ResourceDoesNotExist
30
31 return args, kwargs
32
33
34 class OrganizationAlertRuleEndpoint(OrganizationEndpoint):
35 permission_classes = (OrganizationAlertRulePermission,)
36
37 def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):
38 args, kwargs = super().convert_args(request, *args, **kwargs)
39 organization = kwargs["organization"]
40
41 if not features.has("organizations:incidents", organization, actor=request.user):
42 raise ResourceDoesNotExist
43
44 try:
45 kwargs["alert_rule"] = AlertRule.objects.get(
46 organization=organization, id=alert_rule_id
47 )
48 except AlertRule.DoesNotExist:
49 raise ResourceDoesNotExist
50
51 return args, kwargs
52
53
54 class OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):
55 def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):
56 args, kwargs = super().convert_args(request, *args, **kwargs)
57 organization = kwargs["organization"]
58 alert_rule = kwargs["alert_rule"]
59
60 if not features.has("organizations:incidents", organization, actor=request.user):
61 raise ResourceDoesNotExist
62
63 try:
64 kwargs["alert_rule_trigger"] = AlertRuleTrigger.objects.get(
65 alert_rule=alert_rule, id=alert_rule_trigger_id
66 )
67 except AlertRuleTrigger.DoesNotExist:
68 raise ResourceDoesNotExist
69
70 return args, kwargs
71
72
73 class OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):
74 def convert_args(self, request: Request, alert_rule_trigger_action_id, *args, **kwargs):
75 args, kwargs = super().convert_args(request, *args, **kwargs)
76 organization = kwargs["organization"]
77 trigger = kwargs["alert_rule_trigger"]
78
79 if not features.has("organizations:incidents", organization, actor=request.user):
80 raise ResourceDoesNotExist
81
82 try:
83 kwargs["alert_rule_trigger_action"] = AlertRuleTriggerAction.objects.get(
84 alert_rule_trigger=trigger, id=alert_rule_trigger_action_id
85 )
86 except AlertRuleTriggerAction.DoesNotExist:
87 raise ResourceDoesNotExist
88
89 return args, kwargs
90
[end of src/sentry/incidents/endpoints/bases.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/incidents/endpoints/bases.py b/src/sentry/incidents/endpoints/bases.py
--- a/src/sentry/incidents/endpoints/bases.py
+++ b/src/sentry/incidents/endpoints/bases.py
@@ -38,7 +38,10 @@
args, kwargs = super().convert_args(request, *args, **kwargs)
organization = kwargs["organization"]
- if not features.has("organizations:incidents", organization, actor=request.user):
+ # Allow orgs that have downgraded plans to delete metric alerts
+ if request.method != "DELETE" and not features.has(
+ "organizations:incidents", organization, actor=request.user
+ ):
raise ResourceDoesNotExist
try:
| {"golden_diff": "diff --git a/src/sentry/incidents/endpoints/bases.py b/src/sentry/incidents/endpoints/bases.py\n--- a/src/sentry/incidents/endpoints/bases.py\n+++ b/src/sentry/incidents/endpoints/bases.py\n@@ -38,7 +38,10 @@\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n \n- if not features.has(\"organizations:incidents\", organization, actor=request.user):\n+ # Allow orgs that have downgraded plans to delete metric alerts\n+ if request.method != \"DELETE\" and not features.has(\n+ \"organizations:incidents\", organization, actor=request.user\n+ ):\n raise ResourceDoesNotExist\n \n try:\n", "issue": "Cannot edit or delete alerts\n### Environment\n\nSaaS (https://sentry.io/)\n\n### Version\n\n_No response_\n\n### Steps to Reproduce\n\n1. Have alerts that were set up a while ago\r\n2. Get a bunch of emails from one alert that is too touchy\r\n3. Try to edit alert (fails)\r\n4. Try to delete alert (fails)\n\n### Expected Result\n\nCan edit or delete alerts that I created on an account that I am the only user for\n\n### Actual Result\n\nCannot edit or delete alerts\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from rest_framework.exceptions import PermissionDenied\nfrom rest_framework.request import Request\n\nfrom sentry import features\nfrom sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint\nfrom sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction\n\n\nclass ProjectAlertRuleEndpoint(ProjectEndpoint):\n permission_classes = (ProjectAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n project = kwargs[\"project\"]\n\n if not features.has(\"organizations:incidents\", project.organization, actor=request.user):\n raise ResourceDoesNotExist\n\n if not request.access.has_project_access(project):\n raise PermissionDenied\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n snuba_query__subscriptions__project=project, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleEndpoint(OrganizationEndpoint):\n permission_classes = (OrganizationAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n organization=organization, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n alert_rule = kwargs[\"alert_rule\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger\"] = AlertRuleTrigger.objects.get(\n alert_rule=alert_rule, id=alert_rule_trigger_id\n )\n except 
AlertRuleTrigger.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_action_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n trigger = kwargs[\"alert_rule_trigger\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger_action\"] = AlertRuleTriggerAction.objects.get(\n alert_rule_trigger=trigger, id=alert_rule_trigger_action_id\n )\n except AlertRuleTriggerAction.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n", "path": "src/sentry/incidents/endpoints/bases.py"}]} | 1,665 | 165 |
gh_patches_debug_15468 | rasdani/github-patches | git_diff | codespell-project__codespell-2477 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Build] QA warning about codespell_lib.tests being installed as data
While packaging v2.2.0 for Gentoo Linux, I got a QA notice about this:
```
* QA Notice: setuptools warnings detected:
*
* Installing 'codespell_lib.tests' as data is deprecated, please list it in `packages`.
```
The actual setuptools warning is as follows (shown here for Python 3.11; it is the same for 3.10):
```
/usr/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Instal
ling 'codespell_lib.tests' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'codespell_lib.tests' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'codespell_lib.tests' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'codespell_lib.tests' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
```
Find attached the full build log.
[codespell-2.2.0:20220818-083735.log](https://github.com/codespell-project/codespell/files/9371941/codespell-2.2.0.20220818-083735.log)
</issue>
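The warning text already names the remedy: `codespell_lib.tests` is an importable package, so it has to appear in the `packages` argument (or be picked up by automatic discovery) instead of riding along as data. A minimal sketch of both options, using only package names that already occur in `setup.py` and in the warning:

```python
from setuptools import setup

setup(
    name="codespell",
    # Name every importable package explicitly; the missing entry here is
    # exactly the one the warning complains about.
    packages=[
        "codespell_lib",
        "codespell_lib.tests",
        "codespell_lib.data",
    ],
    # Alternatively, replace the list above with automatic discovery, e.g.
    # packages=find_packages(), assuming every directory that should be a
    # package contains an __init__.py.
)
```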
<code>
[start of setup.py]
1 #! /usr/bin/env python
2
3 # adapted from mne-python
4
5 import os
6
7 from setuptools import setup
8
9 from codespell_lib import __version__
10
11 DISTNAME = 'codespell'
12 DESCRIPTION = """Codespell"""
13 MAINTAINER = 'Lucas De Marchi'
14 MAINTAINER_EMAIL = '[email protected]'
15 URL = 'https://github.com/codespell-project/codespell/'
16 LICENSE = 'GPL v2'
17 DOWNLOAD_URL = 'https://github.com/codespell-project/codespell/'
18 with open('README.rst', 'r') as f:
19 LONG_DESCRIPTION = f.read()
20
21 if __name__ == "__main__":
22 if os.path.exists('MANIFEST'):
23 os.remove('MANIFEST')
24
25 setup(name=DISTNAME,
26 maintainer=MAINTAINER,
27 include_package_data=True,
28 maintainer_email=MAINTAINER_EMAIL,
29 description=DESCRIPTION,
30 license=LICENSE,
31 url=URL,
32 version=__version__,
33 download_url=DOWNLOAD_URL,
34 long_description=LONG_DESCRIPTION,
35 long_description_content_type='text/x-rst',
36 zip_safe=False,
37 classifiers=['Intended Audience :: Developers',
38 'License :: OSI Approved',
39 'Programming Language :: Python',
40 'Topic :: Software Development',
41 'Operating System :: Microsoft :: Windows',
42 'Operating System :: POSIX',
43 'Operating System :: Unix',
44 'Operating System :: MacOS'],
45 platforms='any',
46 python_requires='>=3.6',
47 packages=[
48 'codespell_lib',
49 'codespell_lib.data',
50 ],
51 package_data={'codespell_lib': [
52 os.path.join('data', 'dictionary*.txt'),
53 os.path.join('data', 'linux-kernel.exclude'),
54 ]},
55 exclude_package_data={'codespell_lib': [
56 os.path.join('tests', '*'),
57 ]},
58 entry_points={
59 'console_scripts': [
60 'codespell = codespell_lib:_script_main'
61 ],
62 },
63 extras_require={
64 "dev": ["check-manifest", "flake8", "pytest", "pytest-cov",
65 "pytest-dependency"],
66 "hard-encoding-detection": ["chardet"],
67 }
68 )
69
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,15 +46,13 @@
python_requires='>=3.6',
packages=[
'codespell_lib',
+ 'codespell_lib.tests',
'codespell_lib.data',
],
package_data={'codespell_lib': [
os.path.join('data', 'dictionary*.txt'),
os.path.join('data', 'linux-kernel.exclude'),
]},
- exclude_package_data={'codespell_lib': [
- os.path.join('tests', '*'),
- ]},
entry_points={
'console_scripts': [
'codespell = codespell_lib:_script_main'
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,15 +46,13 @@\n python_requires='>=3.6',\n packages=[\n 'codespell_lib',\n+ 'codespell_lib.tests',\n 'codespell_lib.data',\n ],\n package_data={'codespell_lib': [\n os.path.join('data', 'dictionary*.txt'),\n os.path.join('data', 'linux-kernel.exclude'),\n ]},\n- exclude_package_data={'codespell_lib': [\n- os.path.join('tests', '*'),\n- ]},\n entry_points={\n 'console_scripts': [\n 'codespell = codespell_lib:_script_main'\n", "issue": "[Build] QA warning about codespell_lib.tests being installed as data\nWhile packaging v2.2.0 for Gentoo Linux, I got a QA notice about this:\r\n\r\n```\r\n* QA Notice: setuptools warnings detected:\r\n * \r\n * Installing 'codespell_lib.tests' as data is deprecated, please list it in `packages`.\r\n```\r\n\r\nThe actual setuptools warning is as (here shown for Python 3.11, but same for 3.10)\r\n\r\n```\r\n/usr/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Instal\r\nling 'codespell_lib.tests' as data is deprecated, please list it in `packages`.\r\n !!\r\n\r\n\r\n ############################\r\n # Package would be ignored #\r\n ############################\r\n Python recognizes 'codespell_lib.tests' as an importable package,\r\n but it is not listed in the `packages` configuration of setuptools.\r\n\r\n 'codespell_lib.tests' has been automatically added to the distribution only\r\n because it may contain data files, but this behavior is likely to change\r\n in future versions of setuptools (and therefore is considered deprecated).\r\n\r\n Please make sure that 'codespell_lib.tests' is included as a package by using\r\n the `packages` configuration field or the proper discovery methods\r\n (for example by using `find_namespace_packages(...)`/`find_namespace:`\r\n instead of `find_packages(...)`/`find:`).\r\n\r\n You can read more about \"package discovery\" and \"data files\" on setuptools\r\n documentation page.\r\n\r\n\r\n!!\r\n\r\n check.warn(importable)\r\n```\r\n\r\nFind attached the full build log.\r\n[codespell-2.2.0:20220818-083735.log](https://github.com/codespell-project/codespell/files/9371941/codespell-2.2.0.20220818-083735.log)\r\n\n", "before_files": [{"content": "#! 
/usr/bin/env python\n\n# adapted from mne-python\n\nimport os\n\nfrom setuptools import setup\n\nfrom codespell_lib import __version__\n\nDISTNAME = 'codespell'\nDESCRIPTION = \"\"\"Codespell\"\"\"\nMAINTAINER = 'Lucas De Marchi'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'https://github.com/codespell-project/codespell/'\nLICENSE = 'GPL v2'\nDOWNLOAD_URL = 'https://github.com/codespell-project/codespell/'\nwith open('README.rst', 'r') as f:\n LONG_DESCRIPTION = f.read()\n\nif __name__ == \"__main__\":\n if os.path.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n include_package_data=True,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=__version__,\n download_url=DOWNLOAD_URL,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/x-rst',\n zip_safe=False,\n classifiers=['Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS'],\n platforms='any',\n python_requires='>=3.6',\n packages=[\n 'codespell_lib',\n 'codespell_lib.data',\n ],\n package_data={'codespell_lib': [\n os.path.join('data', 'dictionary*.txt'),\n os.path.join('data', 'linux-kernel.exclude'),\n ]},\n exclude_package_data={'codespell_lib': [\n os.path.join('tests', '*'),\n ]},\n entry_points={\n 'console_scripts': [\n 'codespell = codespell_lib:_script_main'\n ],\n },\n extras_require={\n \"dev\": [\"check-manifest\", \"flake8\", \"pytest\", \"pytest-cov\",\n \"pytest-dependency\"],\n \"hard-encoding-detection\": [\"chardet\"],\n }\n )\n", "path": "setup.py"}]} | 1,549 | 154 |
gh_patches_debug_9172 | rasdani/github-patches | git_diff | Gallopsled__pwntools-201 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pwnlib loads very slowly
On my system it takes two thirds of a second to load pwnlib:
```
~> time python -c "import pwn"
real 0m0.641s
user 0m0.576s
sys 0m0.044s
```
I've tracked down the culprit: `pwnlib.util.web` imports the `requests` module which takes forever (https://github.com/Gallopsled/pwntools/blob/master/pwnlib/util/web.py#L3).
I suggest we load `requests` lazily in `pwnlib.util.web.wget()`.
</issue>
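Moving the import into the function body defers the cost until `wget()` is first called; after that first call the module lives in `sys.modules`, so repeated calls pay only a dictionary lookup. A minimal sketch of the deferred-import shape (simplified, not the full download logic of `wget()`):

```python
def wget(url, save=None, timeout=5, **kwargs):
    """Simplified sketch of pwnlib.util.web.wget with a deferred import."""
    # Deferred import: 'requests' is only loaded the first time wget() runs,
    # so 'import pwn' no longer pays for it at startup.
    import requests

    response = requests.get(url, stream=True, timeout=timeout, **kwargs)
    response.raise_for_status()
    return response.content
```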
<code>
[start of pwnlib/util/web.py]
1 # -*- coding: utf-8 -*-
2 import os, tempfile, logging
3 from requests import *
4 from .misc import size
5 log = logging.getLogger(__name__)
6
7 def wget(url, save=None, timeout=5, **kwargs):
8 """wget(url, save=None, timeout=5) -> str
9
10 Downloads a file via HTTP/HTTPS.
11
12 Args:
13 url (str): URL to download
14 save (str or bool): Name to save as. Any truthy value
15 will auto-generate a name based on the URL.
16 timeout (int): Timeout, in seconds
17
18 Example:
19
20 >>> url = 'http://httpbin.org/robots.txt'
21 >>> with context.local(log_level='ERROR'): result = wget(url)
22 >>> result
23 'User-agent: *\nDisallow: /deny\n'
24 >>> with context.local(log_level='ERROR'): wget(url, True)
25 >>> result == file('robots.txt').read()
26 True
27 """
28 with log.progress("Downloading '%s'" % url) as w:
29 w.status("Making request...")
30
31 response = get(url, stream=True, **kwargs)
32
33 if not response.ok:
34 w.failure("Got code %s" % response.status_code)
35 return
36
37 total_size = int(response.headers.get('content-length',0))
38
39 w.status('0 / %s' % size(total_size))
40
41 # Find out the next largest size we can represent as
42 chunk_size = 1
43 while chunk_size < (total_size/10):
44 chunk_size *= 1000
45
46 # Count chunks as they're received
47 total_data = ''
48
49 # Loop until we have all of the data
50 for chunk in response.iter_content(chunk_size = 2**10):
51 total_data += chunk
52 if total_size:
53 w.status('%s / %s' % (size(total_data), size(total_size)))
54 else:
55 w.status('%s' % size(total_data))
56
57 # Save to the target file if provided
58 if save:
59 if not isinstance(save, (str, unicode)):
60 save = os.path.basename(url)
61 save = save or NamedTemporaryFile(dir='.', delete=False).name
62 with file(save,'wb+') as f:
63 f.write(total_data)
64 w.success('Saved %r (%s)' % (f.name, size(total_data)))
65 else:
66 w.success('%s' % size(total_data))
67
68 return total_data
69
70
[end of pwnlib/util/web.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwnlib/util/web.py b/pwnlib/util/web.py
--- a/pwnlib/util/web.py
+++ b/pwnlib/util/web.py
@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
import os, tempfile, logging
-from requests import *
from .misc import size
log = logging.getLogger(__name__)
@@ -25,6 +24,8 @@
>>> result == file('robots.txt').read()
True
"""
+ from requests import *
+
with log.progress("Downloading '%s'" % url) as w:
w.status("Making request...")
| {"golden_diff": "diff --git a/pwnlib/util/web.py b/pwnlib/util/web.py\n--- a/pwnlib/util/web.py\n+++ b/pwnlib/util/web.py\n@@ -1,6 +1,5 @@\n # -*- coding: utf-8 -*-\n import os, tempfile, logging\n-from requests import *\n from .misc import size\n log = logging.getLogger(__name__)\n \n@@ -25,6 +24,8 @@\n >>> result == file('robots.txt').read()\n True\n \"\"\"\n+ from requests import *\n+\n with log.progress(\"Downloading '%s'\" % url) as w:\n w.status(\"Making request...\")\n", "issue": "Pwnlib loads very slowly\nOn my system it takes two thirds of a second to load pwnlib:\n\n```\n~> time python -c \"import pwn\"\n\nreal 0m0.641s\nuser 0m0.576s\nsys 0m0.044s\n```\n\nI've tracked down the culprit: `pwnlib.util.web` imports the `requests` module which takes forever (https://github.com/Gallopsled/pwntools/blob/master/pwnlib/util/web.py#L3).\n\nI suggest we load `requests` lazily in `pwnlib.util.web.wget()`.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os, tempfile, logging\nfrom requests import *\nfrom .misc import size\nlog = logging.getLogger(__name__)\n\ndef wget(url, save=None, timeout=5, **kwargs):\n \"\"\"wget(url, save=None, timeout=5) -> str\n\n Downloads a file via HTTP/HTTPS.\n\n Args:\n url (str): URL to download\n save (str or bool): Name to save as. Any truthy value\n will auto-generate a name based on the URL.\n timeout (int): Timeout, in seconds\n\n Example:\n\n >>> url = 'http://httpbin.org/robots.txt'\n >>> with context.local(log_level='ERROR'): result = wget(url)\n >>> result\n 'User-agent: *\\nDisallow: /deny\\n'\n >>> with context.local(log_level='ERROR'): wget(url, True)\n >>> result == file('robots.txt').read()\n True\n \"\"\"\n with log.progress(\"Downloading '%s'\" % url) as w:\n w.status(\"Making request...\")\n\n response = get(url, stream=True, **kwargs)\n\n if not response.ok:\n w.failure(\"Got code %s\" % response.status_code)\n return\n\n total_size = int(response.headers.get('content-length',0))\n\n w.status('0 / %s' % size(total_size))\n\n # Find out the next largest size we can represent as\n chunk_size = 1\n while chunk_size < (total_size/10):\n chunk_size *= 1000\n\n # Count chunks as they're received\n total_data = ''\n\n # Loop until we have all of the data\n for chunk in response.iter_content(chunk_size = 2**10):\n total_data += chunk\n if total_size:\n w.status('%s / %s' % (size(total_data), size(total_size)))\n else:\n w.status('%s' % size(total_data))\n\n # Save to the target file if provided\n if save:\n if not isinstance(save, (str, unicode)):\n save = os.path.basename(url)\n save = save or NamedTemporaryFile(dir='.', delete=False).name\n with file(save,'wb+') as f:\n f.write(total_data)\n w.success('Saved %r (%s)' % (f.name, size(total_data)))\n else:\n w.success('%s' % size(total_data))\n\n return total_data\n\n", "path": "pwnlib/util/web.py"}]} | 1,346 | 137 |
gh_patches_debug_34055 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-1552 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Read the Docs documentation for the kafka-python instrumentation
Part of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Instrument `kafka-python` to report instrumentation-kafka produced and consumed messages
17
18 Usage
19 -----
20
21 ..code:: python
22
23 from opentelemetry.instrumentation.kafka import KafkaInstrumentor
24 from kafka import KafkaProducer, KafkaConsumer
25
26 # Instrument kafka
27 KafkaInstrumentor().instrument()
28
29 # report a span of type producer with the default settings
30 producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
31 producer.send('my-topic', b'raw_bytes')
32
33
34 # report a span of type consumer with the default settings
35 consumer = KafkaConsumer('my-topic',
36 group_id='my-group',
37 bootstrap_servers=['localhost:9092'])
38 for message in consumer:
39 # process message
40
41 The `_instrument` method accepts the following keyword args:
42 tracer_provider (TracerProvider) - an optional tracer provider
43 produce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message
44 this function signature is:
45 def produce_hook(span: Span, args, kwargs)
46 consume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message
47 this function signature is:
48 def consume
49 _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)
50 for example:
51 .. code: python
52 from opentelemetry.instrumentation.kafka import KafkaInstrumentor
53 from kafka import KafkaProducer, KafkaConsumer
54
55 def produce_hook(span, args, kwargs):
56 if span and span.is_recording():
57 span.set_attribute("custom_user_attribute_from_produce_hook", "some-value")
58 def consume_hook(span, record, args, kwargs):
59 if span and span.is_recording():
60 span.set_attribute("custom_user_attribute_from_consume_hook", "some-value")
61
62 # instrument kafka with produce and consume hooks
63 KafkaInstrumentor().instrument(produce_hook=produce_hook, consume_hook=consume_hook)
64
65 # Using kafka as normal now will automatically generate spans,
66 # including user custom attributes added from the hooks
67 producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
68 producer.send('my-topic', b'raw_bytes')
69
70 API
71 ___
72 """
73 from typing import Collection
74
75 import kafka
76 from wrapt import wrap_function_wrapper
77
78 from opentelemetry import trace
79 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
80 from opentelemetry.instrumentation.kafka.package import _instruments
81 from opentelemetry.instrumentation.kafka.utils import _wrap_next, _wrap_send
82 from opentelemetry.instrumentation.kafka.version import __version__
83 from opentelemetry.instrumentation.utils import unwrap
84
85
86 class KafkaInstrumentor(BaseInstrumentor):
87 """An instrumentor for kafka module
88 See `BaseInstrumentor`
89 """
90
91 def instrumentation_dependencies(self) -> Collection[str]:
92 return _instruments
93
94 def _instrument(self, **kwargs):
95 """Instruments the kafka module
96
97 Args:
98 **kwargs: Optional arguments
99 ``tracer_provider``: a TracerProvider, defaults to global.
100 ``produce_hook``: a callable to be executed just before producing a message
101 ``consume_hook``: a callable to be executed just after consuming a message
102 """
103 tracer_provider = kwargs.get("tracer_provider")
104 produce_hook = kwargs.get("produce_hook")
105 consume_hook = kwargs.get("consume_hook")
106
107 tracer = trace.get_tracer(
108 __name__, __version__, tracer_provider=tracer_provider
109 )
110
111 wrap_function_wrapper(
112 kafka.KafkaProducer, "send", _wrap_send(tracer, produce_hook)
113 )
114 wrap_function_wrapper(
115 kafka.KafkaConsumer,
116 "__next__",
117 _wrap_next(tracer, consume_hook),
118 )
119
120 def _uninstrument(self, **kwargs):
121 unwrap(kafka.KafkaProducer, "send")
122 unwrap(kafka.KafkaConsumer, "__next__")
123
[end of instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py
@@ -13,7 +13,7 @@
# limitations under the License.
"""
-Instrument `kafka-python` to report instrumentation-kafka produced and consumed messages
+Instrument kafka-python to report instrumentation-kafka produced and consumed messages
Usage
-----
@@ -30,24 +30,21 @@
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
producer.send('my-topic', b'raw_bytes')
-
# report a span of type consumer with the default settings
- consumer = KafkaConsumer('my-topic',
- group_id='my-group',
- bootstrap_servers=['localhost:9092'])
+ consumer = KafkaConsumer('my-topic', group_id='my-group', bootstrap_servers=['localhost:9092'])
for message in consumer:
- # process message
+ # process message
-The `_instrument` method accepts the following keyword args:
+The _instrument() method accepts the following keyword args:
tracer_provider (TracerProvider) - an optional tracer provider
produce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message
- this function signature is:
- def produce_hook(span: Span, args, kwargs)
+this function signature is:
+def produce_hook(span: Span, args, kwargs)
consume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message
- this function signature is:
- def consume
- _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)
+this function signature is:
+def consume_hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)
for example:
+
.. code: python
from opentelemetry.instrumentation.kafka import KafkaInstrumentor
from kafka import KafkaProducer, KafkaConsumer
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py\n@@ -13,7 +13,7 @@\n # limitations under the License.\n \n \"\"\"\n-Instrument `kafka-python` to report instrumentation-kafka produced and consumed messages\n+Instrument kafka-python to report instrumentation-kafka produced and consumed messages\n \n Usage\n -----\n@@ -30,24 +30,21 @@\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n \n-\n # report a span of type consumer with the default settings\n- consumer = KafkaConsumer('my-topic',\n- group_id='my-group',\n- bootstrap_servers=['localhost:9092'])\n+ consumer = KafkaConsumer('my-topic', group_id='my-group', bootstrap_servers=['localhost:9092'])\n for message in consumer:\n- # process message\n+ # process message\n \n-The `_instrument` method accepts the following keyword args:\n+The _instrument() method accepts the following keyword args:\n tracer_provider (TracerProvider) - an optional tracer provider\n produce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message\n- this function signature is:\n- def produce_hook(span: Span, args, kwargs)\n+this function signature is:\n+def produce_hook(span: Span, args, kwargs)\n consume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message\n- this function signature is:\n- def consume\n- _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)\n+this function signature is:\n+def consume_hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)\n for example:\n+\n .. 
code: python\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n", "issue": "Add readthedocs documentation for kafka python instrumentation\nPart of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nInstrument `kafka-python` to report instrumentation-kafka produced and consumed messages\n\nUsage\n-----\n\n..code:: python\n\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n\n # Instrument kafka\n KafkaInstrumentor().instrument()\n\n # report a span of type producer with the default settings\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n\n\n # report a span of type consumer with the default settings\n consumer = KafkaConsumer('my-topic',\n group_id='my-group',\n bootstrap_servers=['localhost:9092'])\n for message in consumer:\n # process message\n\nThe `_instrument` method accepts the following keyword args:\ntracer_provider (TracerProvider) - an optional tracer provider\nproduce_hook (Callable) - a function with extra user-defined logic to be performed before sending the message\n this function signature is:\n def produce_hook(span: Span, args, kwargs)\nconsume_hook (Callable) - a function with extra user-defined logic to be performed after consuming a message\n this function signature is:\n def consume\n _hook(span: Span, record: kafka.record.ABCRecord, args, kwargs)\nfor example:\n.. 
code: python\n from opentelemetry.instrumentation.kafka import KafkaInstrumentor\n from kafka import KafkaProducer, KafkaConsumer\n\n def produce_hook(span, args, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_produce_hook\", \"some-value\")\n def consume_hook(span, record, args, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_consume_hook\", \"some-value\")\n\n # instrument kafka with produce and consume hooks\n KafkaInstrumentor().instrument(produce_hook=produce_hook, consume_hook=consume_hook)\n\n # Using kafka as normal now will automatically generate spans,\n # including user custom attributes added from the hooks\n producer = KafkaProducer(bootstrap_servers=['localhost:9092'])\n producer.send('my-topic', b'raw_bytes')\n\nAPI\n___\n\"\"\"\nfrom typing import Collection\n\nimport kafka\nfrom wrapt import wrap_function_wrapper\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.kafka.package import _instruments\nfrom opentelemetry.instrumentation.kafka.utils import _wrap_next, _wrap_send\nfrom opentelemetry.instrumentation.kafka.version import __version__\nfrom opentelemetry.instrumentation.utils import unwrap\n\n\nclass KafkaInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for kafka module\n See `BaseInstrumentor`\n \"\"\"\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"Instruments the kafka module\n\n Args:\n **kwargs: Optional arguments\n ``tracer_provider``: a TracerProvider, defaults to global.\n ``produce_hook``: a callable to be executed just before producing a message\n ``consume_hook``: a callable to be executed just after consuming a message\n \"\"\"\n tracer_provider = kwargs.get(\"tracer_provider\")\n produce_hook = kwargs.get(\"produce_hook\")\n consume_hook = kwargs.get(\"consume_hook\")\n\n tracer = trace.get_tracer(\n __name__, __version__, tracer_provider=tracer_provider\n )\n\n wrap_function_wrapper(\n kafka.KafkaProducer, \"send\", _wrap_send(tracer, produce_hook)\n )\n wrap_function_wrapper(\n kafka.KafkaConsumer,\n \"__next__\",\n _wrap_next(tracer, consume_hook),\n )\n\n def _uninstrument(self, **kwargs):\n unwrap(kafka.KafkaProducer, \"send\")\n unwrap(kafka.KafkaConsumer, \"__next__\")\n", "path": "instrumentation/opentelemetry-instrumentation-kafka-python/src/opentelemetry/instrumentation/kafka/__init__.py"}]} | 1,823 | 504 |
gh_patches_debug_37763 | rasdani/github-patches | git_diff | kornia__kornia-1853 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LoFTR does not work with some image sizes (not a memory issue)
### Describe the bug
LoFTR's sinusoidal positional encoding is pre-computed for a fixed maximum feature-map size (256×256 at 1/8 resolution, i.e. 2048×2048 input images), so larger inputs fail with a size-mismatch error when the encoding is added to the feature map.
```
RuntimeError Traceback (most recent call last)
[<ipython-input-1-54d246337ab1>](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in <module>()
10 "image1": torch.rand(1,1, 1704, 2272).cuda()}
11 with torch.no_grad():
---> 12 correspondences = matcher(input_dict)
3 frames
[/usr/local/lib/python3.7/dist-packages/kornia/feature/loftr/utils/position_encoding.py](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in forward(self, x)
39 x: [N, C, H, W]
40 """
---> 41 return x + self.pe[:, :, :x.size(2), :x.size(3)]
RuntimeError: The size of tensor a (284) must match the size of tensor b (256) at non-singleton dimension 3
```
### Reproduction steps
```python
import kornia as K
import kornia.feature as KF
import numpy as np
import torch
matcher = KF.LoFTR(pretrained='outdoor').cuda()
input_dict = {"image0": torch.rand(1,1, 1704, 2272),
"image1": torch.rand(1,1, 1704, 2272)}
with torch.no_grad():
correspondences = matcher(input_dict)
```
### Expected behavior
Not an error
### Environment
```shell
not relevant
```
### Additional context
_No response_
</issue>
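To make the failure concrete: a 1704×2272 input becomes a 213×284 feature map at 1/8 resolution, while the positional-encoding buffer is pre-computed for at most 256×256. The stand-in below is illustrative only (it is not kornia's actual class, and the channel count is an assumption), but it reproduces the same broadcasting error without loading any LoFTR weights:

```python
import torch

# Stand-in for the cached buffer: built once for max_shape=(256, 256) on the
# 1/8-resolution feature map, i.e. inputs of at most 2048 x 2048 pixels.
pe = torch.zeros(1, 256, 256, 256)  # [1, C, H_max, W_max]

# A 1704 x 2272 image yields a 213 x 284 feature map after 8x downsampling.
feat = torch.rand(1, 256, 1704 // 8, 2272 // 8)  # [1, C, 213, 284]

try:
    # Mirrors `x + self.pe[:, :, :x.size(2), :x.size(3)]`: slicing can only
    # shrink the buffer, never grow it, so width 284 cannot broadcast with 256.
    out = feat + pe[:, :, : feat.size(2), : feat.size(3)]
except RuntimeError as err:
    print(err)  # size mismatch at the last (width) dimension: 284 vs 256
```

The diff at the end of this record resolves this by regenerating the buffer whenever an incoming feature map exceeds the cached shape.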
<code>
[start of kornia/feature/loftr/utils/position_encoding.py]
1 import math
2
3 import torch
4 from torch import nn
5
6
7 class PositionEncodingSine(nn.Module):
8 """This is a sinusoidal position encoding that generalized to 2-dimensional images."""
9
10 def __init__(self, d_model, max_shape=(256, 256), temp_bug_fix=True):
11 """
12 Args:
13 max_shape (tuple): for 1/8 featmap, the max length of 256 corresponds to 2048 pixels
14 temp_bug_fix (bool): As noted in this [issue](https://github.com/zju3dv/LoFTR/issues/41),
15 the original implementation of LoFTR includes a bug in the pos-enc impl, which has little impact
16 on the final performance. For now, we keep both impls for backward compatibility.
17 We will remove the buggy impl after re-training all variants of our released models.
18 """
19 super().__init__()
20
21 pe = torch.zeros((d_model, *max_shape))
22 y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)
23 x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)
24 if temp_bug_fix:
25 div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))
26 else: # a buggy implementation (for backward compatibility only)
27 div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))
28 div_term = div_term[:, None, None] # [C//4, 1, 1]
29 pe[0::4, :, :] = torch.sin(x_position * div_term)
30 pe[1::4, :, :] = torch.cos(x_position * div_term)
31 pe[2::4, :, :] = torch.sin(y_position * div_term)
32 pe[3::4, :, :] = torch.cos(y_position * div_term)
33
34 self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]
35
36 def forward(self, x):
37 """
38 Args:
39 x: [N, C, H, W]
40 """
41 return x + self.pe[:, :, : x.size(2), : x.size(3)]
42
[end of kornia/feature/loftr/utils/position_encoding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kornia/feature/loftr/utils/position_encoding.py b/kornia/feature/loftr/utils/position_encoding.py
--- a/kornia/feature/loftr/utils/position_encoding.py
+++ b/kornia/feature/loftr/utils/position_encoding.py
@@ -17,25 +17,51 @@
We will remove the buggy impl after re-training all variants of our released models.
"""
super().__init__()
+ self.d_model = d_model
+ self.temp_bug_fix = temp_bug_fix
- pe = torch.zeros((d_model, *max_shape))
+ pe = self._create_position_encoding(max_shape)
+ self.register_buffer('pe', pe, persistent=False) # [1, C, H, W]
+
+ def _create_position_encoding(self, max_shape):
+ """Creates a position encoding from scratch.
+
+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape
+ should be (H//8, W//8).
+ """
+ pe = torch.zeros((self.d_model, *max_shape))
y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)
x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)
- if temp_bug_fix:
- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))
+ if self.temp_bug_fix:
+ div_term = torch.exp(
+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / (self.d_model // 2))
+ )
else: # a buggy implementation (for backward compatibility only)
- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))
+ div_term = torch.exp(
+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / self.d_model // 2)
+ )
div_term = div_term[:, None, None] # [C//4, 1, 1]
pe[0::4, :, :] = torch.sin(x_position * div_term)
pe[1::4, :, :] = torch.cos(x_position * div_term)
pe[2::4, :, :] = torch.sin(y_position * div_term)
pe[3::4, :, :] = torch.cos(y_position * div_term)
+ return pe.unsqueeze(0)
- self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]
+ def update_position_encoding_size(self, max_shape):
+ """Updates position encoding to new max_shape.
+
+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape
+ should be (H//8, W//8).
+ """
+ self.pe = self._create_position_encoding(max_shape).to(self.pe.device)
def forward(self, x):
"""
Args:
x: [N, C, H, W]
"""
+ if x.size(2) > self.pe.size(2) or x.size(3) > self.pe.size(3):
+ max_shape = (max(x.size(2), self.pe.size(2)), max(x.size(3), self.pe.size(3)))
+ self.update_position_encoding_size(max_shape)
+
return x + self.pe[:, :, : x.size(2), : x.size(3)]
| {"golden_diff": "diff --git a/kornia/feature/loftr/utils/position_encoding.py b/kornia/feature/loftr/utils/position_encoding.py\n--- a/kornia/feature/loftr/utils/position_encoding.py\n+++ b/kornia/feature/loftr/utils/position_encoding.py\n@@ -17,25 +17,51 @@\n We will remove the buggy impl after re-training all variants of our released models.\n \"\"\"\n super().__init__()\n+ self.d_model = d_model\n+ self.temp_bug_fix = temp_bug_fix\n \n- pe = torch.zeros((d_model, *max_shape))\n+ pe = self._create_position_encoding(max_shape)\n+ self.register_buffer('pe', pe, persistent=False) # [1, C, H, W]\n+\n+ def _create_position_encoding(self, max_shape):\n+ \"\"\"Creates a position encoding from scratch.\n+\n+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape\n+ should be (H//8, W//8).\n+ \"\"\"\n+ pe = torch.zeros((self.d_model, *max_shape))\n y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)\n x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)\n- if temp_bug_fix:\n- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))\n+ if self.temp_bug_fix:\n+ div_term = torch.exp(\n+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / (self.d_model // 2))\n+ )\n else: # a buggy implementation (for backward compatibility only)\n- div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))\n+ div_term = torch.exp(\n+ torch.arange(0, self.d_model // 2, 2).float() * (-math.log(10000.0) / self.d_model // 2)\n+ )\n div_term = div_term[:, None, None] # [C//4, 1, 1]\n pe[0::4, :, :] = torch.sin(x_position * div_term)\n pe[1::4, :, :] = torch.cos(x_position * div_term)\n pe[2::4, :, :] = torch.sin(y_position * div_term)\n pe[3::4, :, :] = torch.cos(y_position * div_term)\n+ return pe.unsqueeze(0)\n \n- self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]\n+ def update_position_encoding_size(self, max_shape):\n+ \"\"\"Updates position encoding to new max_shape.\n+\n+ For 1/8 feature map (which is standard): If the input image size is H, W (both divisible by 8), the max_shape\n+ should be (H//8, W//8).\n+ \"\"\"\n+ self.pe = self._create_position_encoding(max_shape).to(self.pe.device)\n \n def forward(self, x):\n \"\"\"\n Args:\n x: [N, C, H, W]\n \"\"\"\n+ if x.size(2) > self.pe.size(2) or x.size(3) > self.pe.size(3):\n+ max_shape = (max(x.size(2), self.pe.size(2)), max(x.size(3), self.pe.size(3)))\n+ self.update_position_encoding_size(max_shape)\n+\n return x + self.pe[:, :, : x.size(2), : x.size(3)]\n", "issue": "Loftr does not work with some image size (not a memory issue)\n### Describe the bug\n\nLoFTR incorrectly does something with positional embeddings\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n[<ipython-input-1-54d246337ab1>](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in <module>()\r\n 10 \"image1\": torch.rand(1,1, 1704, 2272).cuda()}\r\n 11 with torch.no_grad():\r\n---> 12 correspondences = matcher(input_dict)\r\n\r\n3 frames\r\n[/usr/local/lib/python3.7/dist-packages/kornia/feature/loftr/utils/position_encoding.py](https://9t3p2yszpxn-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220613-060046-RC00_454553376#) in forward(self, x)\r\n 39 x: [N, C, H, W]\r\n 40 \"\"\"\r\n---> 41 return x + self.pe[:, :, :x.size(2), :x.size(3)]\r\n\r\nRuntimeError: The size of 
tensor a (284) must match the size of tensor b (256) at non-singleton dimension 3\r\n```\n\n### Reproduction steps\n\n```bash\nimport kornia as K\r\nimport kornia.feature as KF\r\nimport numpy as np\r\nimport torch\r\n\r\nmatcher = KF.LoFTR(pretrained='outdoor').cuda()\r\n\r\ninput_dict = {\"image0\": torch.rand(1,1, 1704, 2272),\r\n \"image1\": torch.rand(1,1, 1704, 2272)}\r\nwith torch.no_grad():\r\n correspondences = matcher(input_dict)\n```\n\n\n### Expected behavior\n\nNot an error \n\n### Environment\n\n```shell\nnot relevant\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "import math\n\nimport torch\nfrom torch import nn\n\n\nclass PositionEncodingSine(nn.Module):\n \"\"\"This is a sinusoidal position encoding that generalized to 2-dimensional images.\"\"\"\n\n def __init__(self, d_model, max_shape=(256, 256), temp_bug_fix=True):\n \"\"\"\n Args:\n max_shape (tuple): for 1/8 featmap, the max length of 256 corresponds to 2048 pixels\n temp_bug_fix (bool): As noted in this [issue](https://github.com/zju3dv/LoFTR/issues/41),\n the original implementation of LoFTR includes a bug in the pos-enc impl, which has little impact\n on the final performance. For now, we keep both impls for backward compatibility.\n We will remove the buggy impl after re-training all variants of our released models.\n \"\"\"\n super().__init__()\n\n pe = torch.zeros((d_model, *max_shape))\n y_position = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)\n x_position = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)\n if temp_bug_fix:\n div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / (d_model // 2)))\n else: # a buggy implementation (for backward compatibility only)\n div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * (-math.log(10000.0) / d_model // 2))\n div_term = div_term[:, None, None] # [C//4, 1, 1]\n pe[0::4, :, :] = torch.sin(x_position * div_term)\n pe[1::4, :, :] = torch.cos(x_position * div_term)\n pe[2::4, :, :] = torch.sin(y_position * div_term)\n pe[3::4, :, :] = torch.cos(y_position * div_term)\n\n self.register_buffer('pe', pe.unsqueeze(0), persistent=False) # [1, C, H, W]\n\n def forward(self, x):\n \"\"\"\n Args:\n x: [N, C, H, W]\n \"\"\"\n return x + self.pe[:, :, : x.size(2), : x.size(3)]\n", "path": "kornia/feature/loftr/utils/position_encoding.py"}]} | 1,683 | 865 |
gh_patches_debug_8026 | rasdani/github-patches | git_diff | dmlc__dgl-3696 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] Intermittent FileExistsError when importing dgl from multiprocess training
## 🐛 Bug
Sometimes, when I launch my PyTorch distributed trainer (which spawns multiple trainer processes, e.g. one per GPU for multi-GPU model training), my training job fails with the following error:
```
# pardon the possibly out-of-order stack trace, multiple processes are interleaving the stdout
import dgl
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dgl/__init__.py", line 13, in <module>
from .backend import load_backend, backend_name
File "/usr/local/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
File "trainer/utils/cli.py", line 137, in <module>
locals()["run_" + args.which](args, extra)
File "/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py", line 107, in <module>
load_backend(get_preferred_backend())
File "trainer/utils/cli.py", line 27, in run_local
trainer_class = locate(args.trainer)
FileExistsError: [Errno 17] File exists: '/root/.dgl'
File "/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py", line 103, in get_preferred_backend
set_default_backend(default_dir, 'pytorch')
FileExistsError: [Errno 17] File exists: '/root/.dgl'
```
I see this occur fairly often, say ~10-20% of the time. Usually, retrying the train command fixes things.
For what it's worth: I am running this within a Docker container, using a DGL nightly build from `2021-10-18`
## To Reproduce
Steps to reproduce the behavior:
I don't have a repro script. But, hopefully this stack trace can point out a diagnosis + fix.
## Expected behavior
Importing dgl shouldn't cause an error.
## Environment
- DGL Version (e.g., 1.0): >0.7 (Nightly build from 2021-10-18).
- Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3):
- OS (e.g., Linux): Linux
- How you installed DGL (`conda`, `pip`, source): From nightly
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version (if applicable):
- GPU models and configuration (e.g. V100):
- Any other relevant information:
## Additional context
</issue>
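The trigger is a time-of-check/time-of-use race: several freshly spawned trainer processes import dgl at the same moment, each sees `~/.dgl` as missing, and all but one then fail inside `os.makedirs`. The sketch below is illustrative only (the directory name and pool size are made up) and contrasts the racy check-then-create pattern with the idempotent form of `os.makedirs`:

```python
import os
import tempfile
from multiprocessing import Pool

CONFIG_DIR = os.path.join(tempfile.gettempdir(), "dgl-race-demo")

def racy_create(_):
    # Check-then-create is not atomic: two processes can both observe the
    # directory as missing and then collide inside makedirs().
    if not os.path.exists(CONFIG_DIR):
        os.makedirs(CONFIG_DIR)

def safe_create(_):
    # Idempotent variant (Python >= 3.2): creation succeeds even if another
    # process created the directory a moment earlier.
    os.makedirs(CONFIG_DIR, exist_ok=True)

if __name__ == "__main__":
    with Pool(8) as pool:
        pool.map(safe_create, range(8))  # never raises FileExistsError
    print("created", CONFIG_DIR)
```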
<code>
[start of python/dgl/backend/set_default_backend.py]
1 import argparse
2 import os
3 import json
4
5 def set_default_backend(default_dir, backend_name):
6 if not os.path.exists(default_dir):
7 os.makedirs(default_dir)
8 config_path = os.path.join(default_dir, 'config.json')
9 with open(config_path, "w") as config_file:
10 json.dump({'backend': backend_name.lower()}, config_file)
11 print('Setting the default backend to "{}". You can change it in the '
12 '~/.dgl/config.json file or export the DGLBACKEND environment variable. '
13 'Valid options are: pytorch, mxnet, tensorflow (all lowercase)'.format(
14 backend_name))
15
16 if __name__ == "__main__":
17 parser = argparse.ArgumentParser()
18 parser.add_argument("default_dir", type=str, default=os.path.join(os.path.expanduser('~'), '.dgl'))
19 parser.add_argument("backend", nargs=1, type=str, choices=[
20 'pytorch', 'tensorflow', 'mxnet'], help="Set default backend")
21 args = parser.parse_args()
22 set_default_backend(args.default_dir, args.backend[0])
23
[end of python/dgl/backend/set_default_backend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/dgl/backend/set_default_backend.py b/python/dgl/backend/set_default_backend.py
--- a/python/dgl/backend/set_default_backend.py
+++ b/python/dgl/backend/set_default_backend.py
@@ -3,8 +3,8 @@
import json
def set_default_backend(default_dir, backend_name):
- if not os.path.exists(default_dir):
- os.makedirs(default_dir)
+ # the exists_ok requires python >= 3.2
+ os.makedirs(default_dir, exists_ok=True)
config_path = os.path.join(default_dir, 'config.json')
with open(config_path, "w") as config_file:
json.dump({'backend': backend_name.lower()}, config_file)
| {"golden_diff": "diff --git a/python/dgl/backend/set_default_backend.py b/python/dgl/backend/set_default_backend.py\n--- a/python/dgl/backend/set_default_backend.py\n+++ b/python/dgl/backend/set_default_backend.py\n@@ -3,8 +3,8 @@\n import json\n \n def set_default_backend(default_dir, backend_name):\n- if not os.path.exists(default_dir):\n- os.makedirs(default_dir)\n+ # the exists_ok requires python >= 3.2\n+ os.makedirs(default_dir, exists_ok=True)\n config_path = os.path.join(default_dir, 'config.json')\n with open(config_path, \"w\") as config_file: \n json.dump({'backend': backend_name.lower()}, config_file)\n", "issue": "[Bug] FileExistsError when sometimes importing dgl from multiprocess training\n## \ud83d\udc1b Bug\r\nSometimes, when I launch my Pytorch distributed trainer (which spawns multiple trainer processes, eg once for each GPU for multi-gpu model training), my training job fails with the following error:\r\n\r\n```\r\n# pardon the possibly out-of-order stack trace, multiple processes are interleaving the stdout\r\n import dgl\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/dgl/__init__.py\", line 13, in <module>\r\n from .backend import load_backend, backend_name\r\n File \"/usr/local/lib/python3.7/os.py\", line 221, in makedirs\r\n mkdir(name, mode)\r\n File \"trainer/utils/cli.py\", line 137, in <module>\r\n locals()[\"run_\" + args.which](args, extra)\r\n File \"/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py\", line 107, in <module>\r\n load_backend(get_preferred_backend())\r\n File \"trainer/utils/cli.py\", line 27, in run_local\r\n trainer_class = locate(args.trainer)\r\nFileExistsError: [Errno 17] File exists: '/root/.dgl'\r\n File \"/usr/local/lib/python3.7/site-packages/dgl/backend/__init__.py\", line 103, in get_preferred_backend\r\n set_default_backend(default_dir, 'pytorch')\r\nFileExistsError: [Errno 17] File exists: '/root/.dgl'\r\n```\r\n\r\nI see this occur fairly often, say ~10-20% of the time. Usually, retrying the train command fixes things.\r\n\r\nFor what it's worth: I am running this within a Docker container, using a DGL nightly build from `2021-10-18`\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nI don't have a repro script. But, hopefully this stack trace can point out a diagnosis + fix.\r\n\r\n## Expected behavior\r\n\r\nImporting dgl shouldn't cause an error.\r\n\r\n## Environment\r\n\r\n - DGL Version (e.g., 1.0): >0.7 (Nightly build from 2021-10-18).\r\n - Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3):\r\n - OS (e.g., Linux): Linux\r\n - How you installed DGL (`conda`, `pip`, source): From nightly\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7\r\n - CUDA/cuDNN version (if applicable):\r\n - GPU models and configuration (e.g. V100):\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\n", "before_files": [{"content": "import argparse\nimport os\nimport json\n\ndef set_default_backend(default_dir, backend_name):\n if not os.path.exists(default_dir):\n os.makedirs(default_dir)\n config_path = os.path.join(default_dir, 'config.json')\n with open(config_path, \"w\") as config_file: \n json.dump({'backend': backend_name.lower()}, config_file)\n print('Setting the default backend to \"{}\". You can change it in the '\n '~/.dgl/config.json file or export the DGLBACKEND environment variable. 
'\n 'Valid options are: pytorch, mxnet, tensorflow (all lowercase)'.format(\n backend_name))\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\"default_dir\", type=str, default=os.path.join(os.path.expanduser('~'), '.dgl'))\n parser.add_argument(\"backend\", nargs=1, type=str, choices=[\n 'pytorch', 'tensorflow', 'mxnet'], help=\"Set default backend\")\n args = parser.parse_args()\n set_default_backend(args.default_dir, args.backend[0])\n", "path": "python/dgl/backend/set_default_backend.py"}]} | 1,410 | 151 |
gh_patches_debug_43122 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1627 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
False alarm from new W4002
*cfn-lint version: 0.34.0*
[Here](https://gist.github.com/schmiddy/44a779032a930995d22ee2722a18f163) is an example template which causes a false alarm like this:
```
$ cfn-lint /tmp/example.yml
W4002 As the resource "metadata" section contains reference to a "NoEcho" parameter DBUser, CloudFormation will display the parameter value in plaintext
/tmp/example.yml:21:7
W4002 As the resource "metadata" section contains reference to a "NoEcho" parameter DBPass, CloudFormation will display the parameter value in plaintext
/tmp/example.yml:21:7
```
The problem seems to be that the rule is looking for any mention of the parameter name, even as a text description that is not actually referencing the parameter.
</issue>
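A toy sketch of the distinction at stake (plain dictionaries stand in for the template, and this is not cfn-lint's internal API): matching the parameter name as a substring flags harmless prose, whereas walking the structure for actual `Ref` nodes only flags real references.

```python
no_echo_params = {"DBUser", "DBPass"}

# Metadata that only *talks about* a NoEcho parameter in prose.
metadata = {"instructions": "Ask the DBA which DBUser account to provision"}

def substring_hits(node):
    # Over-eager heuristic: flag any parameter name appearing anywhere in str(node).
    return {param for param in no_echo_params if param in str(node)}

def ref_hits(node):
    # Flag only genuine {"Ref": <param>} nodes, walking nested dicts and lists.
    found = set()
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "Ref" and value in no_echo_params:
                found.add(value)
            found |= ref_hits(value)
    elif isinstance(node, list):
        for item in node:
            found |= ref_hits(item)
    return found

print(substring_hits(metadata))                 # {'DBUser'} -> false positive on prose
print(ref_hits(metadata))                       # set()      -> nothing actually referenced
print(ref_hits({"config": {"Ref": "DBPass"}}))  # {'DBPass'} -> a real reference
```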
<code>
[start of src/cfnlint/rules/resources/NoEcho.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.helpers import bool_compare
6 from cfnlint.rules import CloudFormationLintRule
7 from cfnlint.rules import RuleMatch
8
9
10 class NoEcho(CloudFormationLintRule):
11 id = 'W4002'
12 shortdesc = 'Check for NoEcho References'
13 description = 'Check if there is a NoEcho enabled parameter referenced within a resources Metadata section'
14 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'
15 tags = ['resources', 'NoEcho']
16
17 def match(self, cfn):
18 matches = []
19 no_echo_params = []
20 parameters = cfn.get_parameters()
21 for parameter_name, parameter_value in parameters.items():
22 noecho = parameter_value.get('NoEcho', default=False)
23 if bool_compare(noecho, True):
24 no_echo_params.append(parameter_name)
25
26 if not no_echo_params:
27 return no_echo_params
28
29 resource_properties = cfn.get_resources()
30 resource_dict = {key: resource_properties[key] for key in resource_properties if
31 isinstance(resource_properties[key], dict)}
32 for resource_name, resource_values in resource_dict.items():
33 resource_values = {key: resource_values[key] for key in resource_values if
34 isinstance(resource_values[key], dict)}
35 metadata = resource_values.get('Metadata', {})
36 if metadata is not None:
37 for prop_name, properties in metadata.items():
38 if isinstance(properties, dict):
39 for property_value in properties.values():
40 for param in no_echo_params and no_echo_params:
41 if str(property_value).find(str(param)) > -1:
42 path = ['Resources', resource_name, 'Metadata', prop_name]
43 matches.append(RuleMatch(path, 'As the resource "metadata" section contains '
44 'reference to a "NoEcho" parameter ' + str(param)
45 + ', CloudFormation will display the parameter value in '
46 'plaintext'))
47 return matches
48
[end of src/cfnlint/rules/resources/NoEcho.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/NoEcho.py b/src/cfnlint/rules/resources/NoEcho.py
--- a/src/cfnlint/rules/resources/NoEcho.py
+++ b/src/cfnlint/rules/resources/NoEcho.py
@@ -2,6 +2,7 @@
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
"""
+import six
from cfnlint.helpers import bool_compare
from cfnlint.rules import CloudFormationLintRule
from cfnlint.rules import RuleMatch
@@ -14,34 +15,58 @@
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'
tags = ['resources', 'NoEcho']
- def match(self, cfn):
- matches = []
+ def _get_no_echo_params(self, cfn):
+ """ Get no Echo Params"""
no_echo_params = []
- parameters = cfn.get_parameters()
- for parameter_name, parameter_value in parameters.items():
+ for parameter_name, parameter_value in cfn.get_parameters().items():
noecho = parameter_value.get('NoEcho', default=False)
if bool_compare(noecho, True):
no_echo_params.append(parameter_name)
+ return no_echo_params
+
+ def _check_ref(self, cfn, no_echo_params):
+ """ Check Refs """
+ matches = []
+ refs = cfn.search_deep_keys('Ref')
+ for ref in refs:
+ if ref[-1] in no_echo_params:
+ if len(ref) > 3:
+ if ref[0] == 'Resources' and ref[2] == 'Metadata':
+ matches.append(RuleMatch(ref, 'As the resource "metadata" section contains ' +
+ 'reference to a "NoEcho" parameter ' +
+ str(ref[-1]) +
+ ', CloudFormation will display the parameter value in ' +
+ 'plaintext'))
+
+ return matches
+
+ def _check_sub(self, cfn, no_echo_params):
+ """ Check Subs """
+ matches = []
+ subs = cfn.search_deep_keys('Fn::Sub')
+ for sub in subs:
+ if isinstance(sub[-1], six.string_types):
+ params = cfn.get_sub_parameters(sub[-1])
+ for param in params:
+ if param in no_echo_params:
+ if len(sub) > 2:
+ if sub[0] == 'Resources' and sub[2] == 'Metadata':
+
+ matches.append(RuleMatch(sub[:-1], 'As the resource "metadata" section contains ' +
+ 'reference to a "NoEcho" parameter ' +
+ str(param) +
+ ', CloudFormation will display the parameter value in ' +
+ 'plaintext'))
+
+ return matches
+
+ def match(self, cfn):
+ matches = []
+ no_echo_params = self._get_no_echo_params(cfn)
if not no_echo_params:
- return no_echo_params
-
- resource_properties = cfn.get_resources()
- resource_dict = {key: resource_properties[key] for key in resource_properties if
- isinstance(resource_properties[key], dict)}
- for resource_name, resource_values in resource_dict.items():
- resource_values = {key: resource_values[key] for key in resource_values if
- isinstance(resource_values[key], dict)}
- metadata = resource_values.get('Metadata', {})
- if metadata is not None:
- for prop_name, properties in metadata.items():
- if isinstance(properties, dict):
- for property_value in properties.values():
- for param in no_echo_params and no_echo_params:
- if str(property_value).find(str(param)) > -1:
- path = ['Resources', resource_name, 'Metadata', prop_name]
- matches.append(RuleMatch(path, 'As the resource "metadata" section contains '
- 'reference to a "NoEcho" parameter ' + str(param)
- + ', CloudFormation will display the parameter value in '
- 'plaintext'))
+ return matches
+ matches.extend(self._check_ref(cfn, no_echo_params))
+ matches.extend(self._check_sub(cfn, no_echo_params))
+
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/NoEcho.py b/src/cfnlint/rules/resources/NoEcho.py\n--- a/src/cfnlint/rules/resources/NoEcho.py\n+++ b/src/cfnlint/rules/resources/NoEcho.py\n@@ -2,6 +2,7 @@\n Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n SPDX-License-Identifier: MIT-0\n \"\"\"\n+import six\n from cfnlint.helpers import bool_compare\n from cfnlint.rules import CloudFormationLintRule\n from cfnlint.rules import RuleMatch\n@@ -14,34 +15,58 @@\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'\n tags = ['resources', 'NoEcho']\n \n- def match(self, cfn):\n- matches = []\n+ def _get_no_echo_params(self, cfn):\n+ \"\"\" Get no Echo Params\"\"\"\n no_echo_params = []\n- parameters = cfn.get_parameters()\n- for parameter_name, parameter_value in parameters.items():\n+ for parameter_name, parameter_value in cfn.get_parameters().items():\n noecho = parameter_value.get('NoEcho', default=False)\n if bool_compare(noecho, True):\n no_echo_params.append(parameter_name)\n \n+ return no_echo_params\n+\n+ def _check_ref(self, cfn, no_echo_params):\n+ \"\"\" Check Refs \"\"\"\n+ matches = []\n+ refs = cfn.search_deep_keys('Ref')\n+ for ref in refs:\n+ if ref[-1] in no_echo_params:\n+ if len(ref) > 3:\n+ if ref[0] == 'Resources' and ref[2] == 'Metadata':\n+ matches.append(RuleMatch(ref, 'As the resource \"metadata\" section contains ' +\n+ 'reference to a \"NoEcho\" parameter ' +\n+ str(ref[-1]) +\n+ ', CloudFormation will display the parameter value in ' +\n+ 'plaintext'))\n+\n+ return matches\n+\n+ def _check_sub(self, cfn, no_echo_params):\n+ \"\"\" Check Subs \"\"\"\n+ matches = []\n+ subs = cfn.search_deep_keys('Fn::Sub')\n+ for sub in subs:\n+ if isinstance(sub[-1], six.string_types):\n+ params = cfn.get_sub_parameters(sub[-1])\n+ for param in params:\n+ if param in no_echo_params:\n+ if len(sub) > 2:\n+ if sub[0] == 'Resources' and sub[2] == 'Metadata':\n+\n+ matches.append(RuleMatch(sub[:-1], 'As the resource \"metadata\" section contains ' +\n+ 'reference to a \"NoEcho\" parameter ' +\n+ str(param) +\n+ ', CloudFormation will display the parameter value in ' +\n+ 'plaintext'))\n+\n+ return matches\n+\n+ def match(self, cfn):\n+ matches = []\n+ no_echo_params = self._get_no_echo_params(cfn)\n if not no_echo_params:\n- return no_echo_params\n-\n- resource_properties = cfn.get_resources()\n- resource_dict = {key: resource_properties[key] for key in resource_properties if\n- isinstance(resource_properties[key], dict)}\n- for resource_name, resource_values in resource_dict.items():\n- resource_values = {key: resource_values[key] for key in resource_values if\n- isinstance(resource_values[key], dict)}\n- metadata = resource_values.get('Metadata', {})\n- if metadata is not None:\n- for prop_name, properties in metadata.items():\n- if isinstance(properties, dict):\n- for property_value in properties.values():\n- for param in no_echo_params and no_echo_params:\n- if str(property_value).find(str(param)) > -1:\n- path = ['Resources', resource_name, 'Metadata', prop_name]\n- matches.append(RuleMatch(path, 'As the resource \"metadata\" section contains '\n- 'reference to a \"NoEcho\" parameter ' + str(param)\n- + ', CloudFormation will display the parameter value in '\n- 'plaintext'))\n+ return matches\n+ matches.extend(self._check_ref(cfn, no_echo_params))\n+ matches.extend(self._check_sub(cfn, no_echo_params))\n+\n return matches\n", "issue": "False alarm from 
new W4002\n*cfn-lint version: 0.34.0*\r\n\r\n[Here](https://gist.github.com/schmiddy/44a779032a930995d22ee2722a18f163) is an example template which causes a false alarm like this:\r\n\r\n```\r\n$ cfn-lint /tmp/example.yml \r\nW4002 As the resource \"metadata\" section contains reference to a \"NoEcho\" parameter DBUser, CloudFormation will display the parameter value in plaintext\r\n/tmp/example.yml:21:7\r\n\r\nW4002 As the resource \"metadata\" section contains reference to a \"NoEcho\" parameter DBPass, CloudFormation will display the parameter value in plaintext\r\n/tmp/example.yml:21:7\r\n```\r\n\r\nThe problem seems to be that the rule is looking for any mention of the parameter name, even as a text description that is not actually referencing the parameter.\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.helpers import bool_compare\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass NoEcho(CloudFormationLintRule):\n id = 'W4002'\n shortdesc = 'Check for NoEcho References'\n description = 'Check if there is a NoEcho enabled parameter referenced within a resources Metadata section'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties'\n tags = ['resources', 'NoEcho']\n\n def match(self, cfn):\n matches = []\n no_echo_params = []\n parameters = cfn.get_parameters()\n for parameter_name, parameter_value in parameters.items():\n noecho = parameter_value.get('NoEcho', default=False)\n if bool_compare(noecho, True):\n no_echo_params.append(parameter_name)\n\n if not no_echo_params:\n return no_echo_params\n\n resource_properties = cfn.get_resources()\n resource_dict = {key: resource_properties[key] for key in resource_properties if\n isinstance(resource_properties[key], dict)}\n for resource_name, resource_values in resource_dict.items():\n resource_values = {key: resource_values[key] for key in resource_values if\n isinstance(resource_values[key], dict)}\n metadata = resource_values.get('Metadata', {})\n if metadata is not None:\n for prop_name, properties in metadata.items():\n if isinstance(properties, dict):\n for property_value in properties.values():\n for param in no_echo_params and no_echo_params:\n if str(property_value).find(str(param)) > -1:\n path = ['Resources', resource_name, 'Metadata', prop_name]\n matches.append(RuleMatch(path, 'As the resource \"metadata\" section contains '\n 'reference to a \"NoEcho\" parameter ' + str(param)\n + ', CloudFormation will display the parameter value in '\n 'plaintext'))\n return matches\n", "path": "src/cfnlint/rules/resources/NoEcho.py"}]} | 1,287 | 940 |
gh_patches_debug_9061 | rasdani/github-patches | git_diff | modin-project__modin-506 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Numpy 1.16 support for future read_hdf
Due to this issue (https://github.com/PyTables/PyTables/issues/717) it seems that, at least on my host machine, the latest version of numpy is needed to store and work with large datasets using HDF5. Naturally, I would love to use modin (ray) for these purposes, but realized that modin pins numpy<=1.15.
I downloaded the source of Ray from GitHub to check whether numpy 1.15+ was supported, and it seems that tests were failing for numpy 1.16.1. I was curious whether modin plans to support higher versions of numpy in the near term, as would be required to interoperate with PyTables.
</issue>
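As a quick, illustrative sanity check (not part of modin), the snippet below reports whether the installed numpy already satisfies the >= 1.16 requirement implied by the PyTables issue; the `numpy<=1.15.0` pin in the `setup.py` listed below is what currently blocks that combination.

```python
import numpy as np

# Compare only the major/minor components of the installed numpy version.
major, minor = (int(part) for part in np.__version__.split(".")[:2])
print("numpy", np.__version__)
if (major, minor) < (1, 16):
    print("numpy is older than 1.16; reading large HDF5 files via PyTables may fail")
else:
    print("numpy is new enough for the PyTables fix referenced above")
```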
<code>
[start of setup.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 from setuptools import setup, find_packages
6
7 with open("README.md", "r", encoding="utf8") as fh:
8 long_description = fh.read()
9
10 setup(
11 name="modin",
12 version="0.4.0",
13 description="Modin: Make your pandas code run faster by changing one line of code.",
14 packages=find_packages(),
15 url="https://github.com/modin-project/modin",
16 long_description=long_description,
17 long_description_content_type="text/markdown",
18 install_requires=["pandas==0.24.1", "ray==0.6.2", "numpy<=1.15.0", "typing"],
19 extras_require={
20 # can be installed by pip install modin[dask]
21 "dask": ["dask==1.0.0", "distributed==1.25.0"],
22 },
23 )
24
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,9 +15,9 @@
url="https://github.com/modin-project/modin",
long_description=long_description,
long_description_content_type="text/markdown",
- install_requires=["pandas==0.24.1", "ray==0.6.2", "numpy<=1.15.0", "typing"],
+ install_requires=["pandas==0.24.1", "ray==0.6.2", "typing"],
extras_require={
# can be installed by pip install modin[dask]
- "dask": ["dask==1.0.0", "distributed==1.25.0"],
+ "dask": ["dask==1.1.0", "distributed==1.25.0"],
},
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,9 +15,9 @@\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n- install_requires=[\"pandas==0.24.1\", \"ray==0.6.2\", \"numpy<=1.15.0\", \"typing\"],\n+ install_requires=[\"pandas==0.24.1\", \"ray==0.6.2\", \"typing\"],\n extras_require={\n # can be installed by pip install modin[dask]\n- \"dask\": [\"dask==1.0.0\", \"distributed==1.25.0\"],\n+ \"dask\": [\"dask==1.1.0\", \"distributed==1.25.0\"],\n },\n )\n", "issue": "Numpy 1.16 support for future read_hdf\nDue to this issue (https://github.com/PyTables/PyTables/issues/717) it seems that, at least on my host machine, the latest version of numpy is needed to store and play with large datasets using hdf5. Naturally, I would love to use modin (ray) for these purposes and but realized that modin runs with numpy<=1.15.\r\n\r\nI downloaded the source of Ray from github to test to see if numpy 1.15+ was supported and it seems that tests were failing for numpy 1.16.1. I was curious if modin planned to support higher versions of numpy in the near term as would be required to interplay with py tables.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom setuptools import setup, find_packages\n\nwith open(\"README.md\", \"r\", encoding=\"utf8\") as fh:\n long_description = fh.read()\n\nsetup(\n name=\"modin\",\n version=\"0.4.0\",\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==0.24.1\", \"ray==0.6.2\", \"numpy<=1.15.0\", \"typing\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": [\"dask==1.0.0\", \"distributed==1.25.0\"],\n },\n)\n", "path": "setup.py"}]} | 940 | 199 |
gh_patches_debug_55117 | rasdani/github-patches | git_diff | netbox-community__netbox-15725 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PROTECTION_RULES: Custom Validator does not show error message on object deletion
### Deployment Type
Self-hosted
### NetBox Version
v4.0-beta1 (commit c7f6c206cf5068f890b89da9ca04d4d3583f5107)
### Python Version
3.11
### Steps to Reproduce
1. Create a custom validator with the following code:
```python
from extras.validators import CustomValidator
from utilities.exceptions import AbortRequest
class IPAddressDeleteValidator(CustomValidator):
def validate(self, instance, request):
raise AbortRequest("Do not delete IP addresses!")
```
and store as `/opt/netbox/validators/test.py`
2. Add the custom validator as a protect rule for `IPAddress` objects:
```python
PROTECTION_RULES = {
"ipam.ipaddress": [
"validators.test.IPAddressDeleteValidator",
]
}
```
3. Navigate to IPAM/IP Addresses
4. Create an arbitrary IP address
5. Click on "Delete" in the new address's detail view and confirm deletion
### Expected Behavior
The IP address is not deleted, and an error message is shown saying "Do not delete IP addresses!"
### Observed Behavior
The IP address is not deleted, but there is no error message.
The error message is, however, displayed when one tries to delete an IP address using the bulk edit view:

</issue>
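For intuition, the sketch below contrasts a target-based partial-render rule with a boosted-based one, matching the before and after versions of `htmx_partial` that appear later in this record. The request scenarios are invented and this is not NetBox's actual view code; it only shows which kinds of HTMX requests each rule would answer with a page fragment rather than a full page, the full page presumably being where banner messages like the validator error would appear.

```python
from collections import namedtuple

# Attribute names loosely mirror django-htmx's HtmxDetails (target, boosted).
Req = namedtuple("Req", "label target boosted")

def partial_original(req):
    # Original rule: partial whenever an HTMX target other than the page
    # container is present.
    return req.target is not None and req.target != "page-content"

def partial_patched(req):
    # Patched rule: only hx-boosted navigations receive the full page.
    return not req.boosted

scenarios = [
    Req("boosted full-page navigation", "page-content", True),
    Req("fragment refresh targeting a form", "form-fields", False),
    Req("htmx request with no explicit target", None, False),
]

for req in scenarios:
    print(f"{req.label:40s} original={partial_original(req)!s:6s} patched={partial_patched(req)!s}")
```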
<code>
[start of netbox/utilities/htmx.py]
1 __all__ = (
2 'htmx_partial',
3 )
4
5 PAGE_CONTAINER_ID = 'page-content'
6
7
8 def htmx_partial(request):
9 """
10 Determines whether to render partial (versus complete) HTML content
11 in response to an HTMX request, based on the target element.
12 """
13 return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID
14
[end of netbox/utilities/htmx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/utilities/htmx.py b/netbox/utilities/htmx.py
--- a/netbox/utilities/htmx.py
+++ b/netbox/utilities/htmx.py
@@ -2,12 +2,10 @@
'htmx_partial',
)
-PAGE_CONTAINER_ID = 'page-content'
-
def htmx_partial(request):
"""
Determines whether to render partial (versus complete) HTML content
in response to an HTMX request, based on the target element.
"""
- return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID
+ return request.htmx and not request.htmx.boosted
| {"golden_diff": "diff --git a/netbox/utilities/htmx.py b/netbox/utilities/htmx.py\n--- a/netbox/utilities/htmx.py\n+++ b/netbox/utilities/htmx.py\n@@ -2,12 +2,10 @@\n 'htmx_partial',\n )\n \n-PAGE_CONTAINER_ID = 'page-content'\n-\n \n def htmx_partial(request):\n \"\"\"\n Determines whether to render partial (versus complete) HTML content\n in response to an HTMX request, based on the target element.\n \"\"\"\n- return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID\n+ return request.htmx and not request.htmx.boosted\n", "issue": "PROTECTION_RULES: Custom Validator does not show error message on object deletion\n### Deployment Type\r\n\r\nSelf-hosted\r\n\r\n### NetBox Version\r\n\r\nv4.0-beta1 (commit c7f6c206cf5068f890b89da9ca04d4d3583f5107)\r\n\r\n### Python Version\r\n\r\n3.11\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create a custom validator with the following code:\r\n```python\r\nfrom extras.validators import CustomValidator\r\nfrom utilities.exceptions import AbortRequest\r\n\r\n\r\nclass IPAddressDeleteValidator(CustomValidator):\r\n\r\n def validate(self, instance, request):\r\n raise AbortRequest(\"Do not delete IP addresses!\")\r\n```\r\nand store as `/opt/netbox/validators/test.py`\r\n\r\n2. Add the custom validator as a protect rule for `IPAddress` objects:\r\n```python\r\nPROTECTION_RULES = {\r\n \"ipam.ipaddress\": [\r\n \"validators.test.IPAddressDeleteValidator\",\r\n ]\r\n}\r\n```\r\n3. Navigate to IPAM/IP Addresses\r\n4. Create an arbitrary IP address\r\n5. Click on \"Delete\" in the new address's detail view and confirm deletion\r\n\r\n### Expected Behavior\r\n\r\nThe IP address is not deleted, an error message is shown saying \"Do not delete IP addresses!\"\r\n\r\n### Observed Behavior\r\n\r\nThe IP address is not deleted, but there is no error message. \r\n\r\nThe error message is, however, displayed when one tries to delete an IP address using the bulk edit view:\r\n\r\n\n", "before_files": [{"content": "__all__ = (\n 'htmx_partial',\n)\n\nPAGE_CONTAINER_ID = 'page-content'\n\n\ndef htmx_partial(request):\n \"\"\"\n Determines whether to render partial (versus complete) HTML content\n in response to an HTMX request, based on the target element.\n \"\"\"\n return request.htmx and request.htmx.target and request.htmx.target != PAGE_CONTAINER_ID\n", "path": "netbox/utilities/htmx.py"}]} | 1,008 | 151 |
gh_patches_debug_3077 | rasdani/github-patches | git_diff | SeldonIO__MLServer-1168 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Expected XGBoost model file "model.bst" extension is undocumented?
On https://github.com/SeldonIO/MLServer/blob/master/runtimes/xgboost/mlserver_xgboost/xgboost.py#L21 you can see that MLServer is looking for an XGBoost model file called "model.bst". However, I cannot find any reference to that file extension in the XGBoost documentation. As far as I can see, XGBoost's documented file extensions are:
- ".json" added in 1.0.0, an "open format that can be easily reused"
- ".ubj" for Universal Binary JSON format, available in 1.6.0
- ".model" for the "old binary internal format" prior to 1.0.0, as shown in examples
Where does MLServer get the ".bst" extension from, and what model format does it use? Shouldn't it use one of the extensions mentioned in the XGBoost documentation instead, to avoid ambiguity?
</issue>
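For reference, here is a minimal sketch (assuming an installed `xgboost` with its scikit-learn wrapper; the file names are arbitrary) showing that `save_model` picks the serialization format from the file extension, so the same estimator can be written in either of the documented open formats:

```python
import numpy as np
import xgboost as xgb

# Train a tiny throwaway classifier on random data.
X = np.random.rand(64, 4)
y = np.random.randint(0, 2, size=64)
clf = xgb.XGBClassifier(n_estimators=5)
clf.fit(X, y)

# The extension selects the on-disk format.
clf.save_model("model.json")  # open JSON format (XGBoost >= 1.0)
clf.save_model("model.ubj")   # Universal Binary JSON (XGBoost >= 1.6)
```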
<code>
[start of runtimes/xgboost/mlserver_xgboost/xgboost.py]
1 import xgboost as xgb
2
3 from typing import List
4 from xgboost.sklearn import XGBModel
5
6 from mlserver.errors import InferenceError
7 from mlserver.model import MLModel
8 from mlserver.utils import get_model_uri
9 from mlserver.codecs import NumpyRequestCodec, NumpyCodec
10 from mlserver.types import (
11 InferenceRequest,
12 InferenceResponse,
13 RequestOutput,
14 ResponseOutput,
15 )
16
17 PREDICT_OUTPUT = "predict"
18 PREDICT_PROBA_OUTPUT = "predict_proba"
19 VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]
20
21 WELLKNOWN_MODEL_FILENAMES = ["model.bst", "model.json"]
22
23
24 def _load_sklearn_interface(model_uri: str) -> XGBModel:
25 try:
26 regressor = xgb.XGBRegressor()
27 regressor.load_model(model_uri)
28 return regressor
29 except TypeError:
30 # If there was an error, it's likely due to the model being a
31 # classifier
32 classifier = xgb.XGBClassifier()
33 classifier.load_model(model_uri)
34 return classifier
35
36
37 class XGBoostModel(MLModel):
38 """
39     Implementation of the MLModel interface to load and serve `xgboost` models.
40 """
41
42 async def load(self) -> bool:
43 model_uri = await get_model_uri(
44 self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES
45 )
46
47 self._model = _load_sklearn_interface(model_uri)
48
49 return True
50
51 def _check_request(self, payload: InferenceRequest) -> InferenceRequest:
52 if not payload.outputs:
53 # By default, only return the result of `predict()`
54 payload.outputs = [RequestOutput(name=PREDICT_OUTPUT)]
55 else:
56 for request_output in payload.outputs:
57 if request_output.name not in VALID_OUTPUTS:
58 raise InferenceError(
59 f"XGBoostModel only supports '{PREDICT_OUTPUT}' and "
60 f"'{PREDICT_PROBA_OUTPUT}' as outputs "
61 f"({request_output.name} was received)"
62 )
63
64 # Regression models do not support `predict_proba`
65 if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:
66 if isinstance(self._model, xgb.XGBRegressor):
67 raise InferenceError(
68 f"XGBRegressor models do not support '{PREDICT_PROBA_OUTPUT}"
69 )
70
71 return payload
72
73 def _get_model_outputs(self, payload: InferenceRequest) -> List[ResponseOutput]:
74 decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)
75
76 outputs = []
77 for request_output in payload.outputs: # type: ignore
78 predict_fn = getattr(self._model, request_output.name)
79 y = predict_fn(decoded_request)
80
81 output = self.encode(y, request_output, default_codec=NumpyCodec)
82 outputs.append(output)
83
84 return outputs
85
86 async def predict(self, payload: InferenceRequest) -> InferenceResponse:
87 payload = self._check_request(payload)
88 outputs = self._get_model_outputs(payload)
89
90 return InferenceResponse(
91 model_name=self.name,
92 model_version=self.version,
93 outputs=outputs,
94 )
95
[end of runtimes/xgboost/mlserver_xgboost/xgboost.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/runtimes/xgboost/mlserver_xgboost/xgboost.py b/runtimes/xgboost/mlserver_xgboost/xgboost.py
--- a/runtimes/xgboost/mlserver_xgboost/xgboost.py
+++ b/runtimes/xgboost/mlserver_xgboost/xgboost.py
@@ -18,7 +18,7 @@
PREDICT_PROBA_OUTPUT = "predict_proba"
VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]
-WELLKNOWN_MODEL_FILENAMES = ["model.bst", "model.json"]
+WELLKNOWN_MODEL_FILENAMES = ["model.bst", "model.json", "model.ubj"]
def _load_sklearn_interface(model_uri: str) -> XGBModel:
| {"golden_diff": "diff --git a/runtimes/xgboost/mlserver_xgboost/xgboost.py b/runtimes/xgboost/mlserver_xgboost/xgboost.py\n--- a/runtimes/xgboost/mlserver_xgboost/xgboost.py\n+++ b/runtimes/xgboost/mlserver_xgboost/xgboost.py\n@@ -18,7 +18,7 @@\n PREDICT_PROBA_OUTPUT = \"predict_proba\"\n VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n \n-WELLKNOWN_MODEL_FILENAMES = [\"model.bst\", \"model.json\"]\n+WELLKNOWN_MODEL_FILENAMES = [\"model.bst\", \"model.json\", \"model.ubj\"]\n \n \n def _load_sklearn_interface(model_uri: str) -> XGBModel:\n", "issue": "Expected XGBoost model file \"model.bst\" extension is undocumented? \nOn https://github.com/SeldonIO/MLServer/blob/master/runtimes/xgboost/mlserver_xgboost/xgboost.py#L21 you can see that MLServer is looking for an XGBoost model file called \"model.bst\". However, I cannot find any reference to that file extension in the XGBoost documentation. As far as I can see, XGBoost's documented file extensions are:\r\n\r\n- \".json\" added in 1.0.0, an \"open format that can be easily reused\"\r\n- \".ubj\" for Universal Binary JSON format, available in 1.6.0\r\n- \".model\" for the \"old binary internal format\" prior to 1.0.0, as shown in examples\r\n\r\nWhere does MLServer get the \".bst\" extension from, and what model format does it use? Shouldn't it use one of the extensions mentioned in the XGBoost documentation instead, to avoid ambiguity?\n", "before_files": [{"content": "import xgboost as xgb\n\nfrom typing import List\nfrom xgboost.sklearn import XGBModel\n\nfrom mlserver.errors import InferenceError\nfrom mlserver.model import MLModel\nfrom mlserver.utils import get_model_uri\nfrom mlserver.codecs import NumpyRequestCodec, NumpyCodec\nfrom mlserver.types import (\n InferenceRequest,\n InferenceResponse,\n RequestOutput,\n ResponseOutput,\n)\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.bst\", \"model.json\"]\n\n\ndef _load_sklearn_interface(model_uri: str) -> XGBModel:\n try:\n regressor = xgb.XGBRegressor()\n regressor.load_model(model_uri)\n return regressor\n except TypeError:\n # If there was an error, it's likely due to the model being a\n # classifier\n classifier = xgb.XGBClassifier()\n classifier.load_model(model_uri)\n return classifier\n\n\nclass XGBoostModel(MLModel):\n \"\"\"\n Implementationof the MLModel interface to load and serve `xgboost` models.\n \"\"\"\n\n async def load(self) -> bool:\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n\n self._model = _load_sklearn_interface(model_uri)\n\n return True\n\n def _check_request(self, payload: InferenceRequest) -> InferenceRequest:\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"XGBoostModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n # Regression models do not support `predict_proba`\n if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:\n if isinstance(self._model, xgb.XGBRegressor):\n raise InferenceError(\n f\"XGBRegressor models do not support '{PREDICT_PROBA_OUTPUT}\"\n )\n\n return payload\n\n def _get_model_outputs(self, payload: InferenceRequest) -> List[ResponseOutput]:\n 
decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(decoded_request)\n\n output = self.encode(y, request_output, default_codec=NumpyCodec)\n outputs.append(output)\n\n return outputs\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n payload = self._check_request(payload)\n outputs = self._get_model_outputs(payload)\n\n return InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=outputs,\n )\n", "path": "runtimes/xgboost/mlserver_xgboost/xgboost.py"}]} | 1,639 | 171 |
gh_patches_debug_18799 | rasdani/github-patches | git_diff | mindee__doctr-30 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[documents] Add basic document reader
For documents to be analyzed, we first need to add a utility for document reading (PDF mostly). The following specs would be nice to have:
- inherit from a shared reader class ("DocumentReader" for instance)
- to be located in the `doctr.documents.reader` module
The following formats should be handled:
- [x] PDF (#8, #25): this resource would be nice to check: https://github.com/pymupdf/PyMuPDF
- [x] PNG (#30)
- [x] JPG (#30)
cc @charlesmindee
</issue>
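
A minimal sketch of the kind of PNG/JPG reader the issue asks for, using OpenCV. The function name, argument names, and the resize/RGB behaviour mirror the existing `read_pdf` conventions and the patch further below; treat them as assumptions rather than the final doctr API.

```python
from typing import Optional, Tuple

import cv2
import numpy as np


def read_img(file_path: str,
             output_size: Optional[Tuple[int, int]] = None,
             rgb_output: bool = True) -> np.ndarray:
    # Decode the PNG/JPG from disk; OpenCV returns an H x W x 3 BGR array.
    img = cv2.imread(file_path, cv2.IMREAD_COLOR)
    if img is None:
        raise FileNotFoundError(f"unable to read image: {file_path}")
    # Optional resize; cv2.resize expects (W, H), hence the reversed tuple.
    if output_size is not None:
        img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)
    # Switch to RGB channel order if requested.
    if rgb_output:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img
```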
<code>
[start of doctr/documents/reader.py]
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import fitz
7 import numpy as np
8 import cv2
9 from typing import List, Tuple, Optional, Any
10
11 __all__ = ['read_pdf']
12
13
14 def read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:
15 """Read a PDF file and convert it into an image in numpy format
16
17 Example::
18 >>> from doctr.documents import read_pdf
19 >>> doc = read_pdf("path/to/your/doc.pdf")
20
21 Args:
22 file_path: the path to the PDF file
23 Returns:
24 the list of pages decoded as numpy ndarray of shape H x W x 3
25 """
26
27 # Read pages with fitz and convert them to numpy ndarrays
28 return [convert_page_to_numpy(page, **kwargs) for page in fitz.open(file_path)]
29
30
31 def convert_page_to_numpy(
32 page: fitz.fitz.Page,
33 output_size: Optional[Tuple[int, int]] = None,
34 rgb_output: bool = True,
35 ) -> np.ndarray:
36 """Convert a fitz page to a numpy-formatted image
37
38 Args:
39 page: the page of a file read with PyMuPDF
40 output_size: the expected output size of each page in format H x W
41 rgb_output: whether the output ndarray channel order should be RGB instead of BGR.
42
43 Returns:
44 the rendered image in numpy format
45 """
46
47 transform_matrix = None
48
49 # If no output size is specified, keep the origin one
50 if output_size is not None:
51 scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])
52 transform_matrix = fitz.Matrix(*scales)
53
54 # Generate the pixel map using the transformation matrix
55 stream = page.getPixmap(matrix=transform_matrix).getImageData()
56 # Decode it into a numpy
57 img = cv2.imdecode(np.frombuffer(stream, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
58
59 # Switch the channel order
60 if rgb_output:
61 img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
62
63 return img
64
[end of doctr/documents/reader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doctr/documents/reader.py b/doctr/documents/reader.py
--- a/doctr/documents/reader.py
+++ b/doctr/documents/reader.py
@@ -8,7 +8,36 @@
import cv2
from typing import List, Tuple, Optional, Any
-__all__ = ['read_pdf']
+__all__ = ['read_pdf', 'read_img']
+
+
+def read_img(
+ file_path: str,
+ output_size: Optional[Tuple[int, int]] = None,
+ rgb_output: bool = True,
+) -> np.ndarray:
+ """Read an image file into numpy format
+
+ Example::
+ >>> from doctr.documents import read_img
+ >>> page = read_img("path/to/your/doc.jpg")
+
+ Args:
+ file_path: the path to the image file
+ output_size: the expected output size of each page in format H x W
+ rgb_output: whether the output ndarray channel order should be RGB instead of BGR.
+ Returns:
+ the page decoded as numpy ndarray of shape H x W x 3
+ """
+
+ img = cv2.imread(file_path, cv2.IMREAD_COLOR)
+ # Resizing
+ if isinstance(output_size, tuple):
+ img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)
+ # Switch the channel order
+ if rgb_output:
+ img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+ return img
def read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:
| {"golden_diff": "diff --git a/doctr/documents/reader.py b/doctr/documents/reader.py\n--- a/doctr/documents/reader.py\n+++ b/doctr/documents/reader.py\n@@ -8,7 +8,36 @@\n import cv2\n from typing import List, Tuple, Optional, Any\n \n-__all__ = ['read_pdf']\n+__all__ = ['read_pdf', 'read_img']\n+\n+\n+def read_img(\n+ file_path: str,\n+ output_size: Optional[Tuple[int, int]] = None,\n+ rgb_output: bool = True,\n+) -> np.ndarray:\n+ \"\"\"Read an image file into numpy format\n+\n+ Example::\n+ >>> from doctr.documents import read_img\n+ >>> page = read_img(\"path/to/your/doc.jpg\")\n+\n+ Args:\n+ file_path: the path to the image file\n+ output_size: the expected output size of each page in format H x W\n+ rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n+ Returns:\n+ the page decoded as numpy ndarray of shape H x W x 3\n+ \"\"\"\n+\n+ img = cv2.imread(file_path, cv2.IMREAD_COLOR)\n+ # Resizing\n+ if isinstance(output_size, tuple):\n+ img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)\n+ # Switch the channel order\n+ if rgb_output:\n+ img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n+ return img\n \n \n def read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:\n", "issue": "[documents] Add basic document reader\nFor documents to be analyzed, we first need to add a utility for document reading (PDF mostly). The following specs would be nice to have:\r\n- inherit for a shared reader class (\"DocumentReader\" for instance)\r\n- to be located in the `doctr.documents.reader` module\r\n\r\nThe following formats should be handled:\r\n- [x] PDF (#8, #25): this resource would be nice to check: https://github.com/pymupdf/PyMuPDF\r\n- [x] PNG (#30)\r\n- [x] JPG (#30)\r\n\r\n\r\ncc @charlesmindee \n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport fitz\nimport numpy as np\nimport cv2\nfrom typing import List, Tuple, Optional, Any\n\n__all__ = ['read_pdf']\n\n\ndef read_pdf(file_path: str, **kwargs: Any) -> List[np.ndarray]:\n \"\"\"Read a PDF file and convert it into an image in numpy format\n\n Example::\n >>> from doctr.documents import read_pdf\n >>> doc = read_pdf(\"path/to/your/doc.pdf\")\n\n Args:\n file_path: the path to the PDF file\n Returns:\n the list of pages decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n\n # Read pages with fitz and convert them to numpy ndarrays\n return [convert_page_to_numpy(page, **kwargs) for page in fitz.open(file_path)]\n\n\ndef convert_page_to_numpy(\n page: fitz.fitz.Page,\n output_size: Optional[Tuple[int, int]] = None,\n rgb_output: bool = True,\n) -> np.ndarray:\n \"\"\"Convert a fitz page to a numpy-formatted image\n\n Args:\n page: the page of a file read with PyMuPDF\n output_size: the expected output size of each page in format H x W\n rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n\n Returns:\n the rendered image in numpy format\n \"\"\"\n\n transform_matrix = None\n\n # If no output size is specified, keep the origin one\n if output_size is not None:\n scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])\n transform_matrix = fitz.Matrix(*scales)\n\n # Generate the pixel map using the transformation matrix\n stream = page.getPixmap(matrix=transform_matrix).getImageData()\n # Decode it into a numpy\n img = cv2.imdecode(np.frombuffer(stream, 
dtype=np.uint8), cv2.IMREAD_UNCHANGED)\n\n # Switch the channel order\n if rgb_output:\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n return img\n", "path": "doctr/documents/reader.py"}]} | 1,297 | 353 |
gh_patches_debug_11528 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-590 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PyTorch security vulnerability
See https://github.com/advisories/GHSA-47fc-vmwq-366v
Need to upgrade to PyTorch 1.13.1
</issue>
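
A sketch of the dependency change the advisory calls for: pin the patched release once in the shared requirements list instead of appending a platform-specific 1.11.0 pin afterwards. The surrounding entries are elided; only the pin shown is the point.

```python
requirements = [
    "torch==1.13.1",  # patched release addressing GHSA-47fc-vmwq-366v
    # ... remaining dependencies unchanged ...
]
```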
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import sys, re
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 try:
13 with open("README.md") as readme_file:
14 readme = readme_file.read()
15 except Exception as error:
16 readme = "No README information found."
17 sys.stderr.write("Warning: Could not open '%s' due %s\n" % ("README.md", error))
18
19
20 class CustomInstallCommand(install):
21 def run(self):
22 install.run(self)
23
24
25 class CustomDevelopCommand(develop):
26 def run(self):
27 develop.run(self)
28
29
30 class CustomEggInfoCommand(egg_info):
31 def run(self):
32 egg_info.run(self)
33
34
35 try:
36 filepath = "GANDLF/version.py"
37 version_file = open(filepath)
38 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
39
40 except Exception as error:
41 __version__ = "0.0.1"
42 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
43
44 requirements = [
45 "black",
46 "numpy==1.22.0",
47 "scipy",
48 "SimpleITK!=2.0.*",
49 "SimpleITK!=2.2.1", # https://github.com/mlcommons/GaNDLF/issues/536
50 "torchvision",
51 "tqdm",
52 "torchio==0.18.75",
53 "pandas",
54 "scikit-learn>=0.23.2",
55 "scikit-image>=0.19.1",
56 "setuptools",
57 "seaborn",
58 "pyyaml",
59 "tiffslide",
60 "matplotlib",
61 "requests>=2.25.0",
62 "pytest",
63 "coverage",
64 "pytest-cov",
65 "psutil",
66 "medcam",
67 "opencv-python",
68 "torchmetrics==0.5.1", # newer versions have changed api for f1 invocation
69 "OpenPatchMiner==0.1.8",
70 "zarr==2.10.3",
71 "pydicom",
72 "onnx",
73 "torchinfo==1.7.0",
74 "segmentation-models-pytorch==0.3.0",
75 "ACSConv==0.1.1",
76 "docker",
77 "dicom-anonymizer",
78 "twine",
79 "zarr",
80 "keyring",
81 ]
82
83 # pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389
84 if sys.platform == "darwin":
85 requirements.append("torch==1.11.0")
86 else:
87 requirements.append("torch==1.11.0")
88
89 if __name__ == "__main__":
90 setup(
91 name="GANDLF",
92 version=__version__,
93 author="MLCommons",
94 author_email="[email protected]",
95 python_requires=">=3.8",
96 packages=find_packages(),
97 cmdclass={
98 "install": CustomInstallCommand,
99 "develop": CustomDevelopCommand,
100 "egg_info": CustomEggInfoCommand,
101 },
102 scripts=[
103 "gandlf_run",
104 "gandlf_constructCSV",
105 "gandlf_collectStats",
106 "gandlf_patchMiner",
107 "gandlf_preprocess",
108 "gandlf_anonymizer",
109 "gandlf_verifyInstall",
110 "gandlf_configGenerator",
111 "gandlf_recoverConfig",
112 "gandlf_deploy",
113 ],
114 classifiers=[
115 "Development Status :: 3 - Alpha",
116 "Intended Audience :: Science/Research",
117 "License :: OSI Approved :: Apache Software License",
118 "Natural Language :: English",
119 "Operating System :: OS Independent",
120 "Programming Language :: Python :: 3.8",
121 "Programming Language :: Python :: 3.9",
122 "Programming Language :: Python :: 3.10",
123 "Topic :: Scientific/Engineering :: Medical Science Apps",
124 ],
125 description=(
126 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
127 ),
128 install_requires=requirements,
129 license="Apache-2.0",
130 long_description=readme,
131 long_description_content_type="text/markdown",
132 include_package_data=True,
133 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch",
134 zip_safe=False,
135 )
136
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,6 +42,7 @@
sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
requirements = [
+ "torch==1.13.1",
"black",
"numpy==1.22.0",
"scipy",
@@ -80,12 +81,6 @@
"keyring",
]
-# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389
-if sys.platform == "darwin":
- requirements.append("torch==1.11.0")
-else:
- requirements.append("torch==1.11.0")
-
if __name__ == "__main__":
setup(
name="GANDLF",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,6 +42,7 @@\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n \n requirements = [\n+ \"torch==1.13.1\",\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n@@ -80,12 +81,6 @@\n \"keyring\",\n ]\n \n-# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389\n-if sys.platform == \"darwin\":\n- requirements.append(\"torch==1.11.0\")\n-else:\n- requirements.append(\"torch==1.11.0\")\n-\n if __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n", "issue": "PyTorch security vulnerability\nSee https://github.com/advisories/GHSA-47fc-vmwq-366v\r\n\r\nNeed to upgrade to PyTorch 1.13.1\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport sys, re\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\ntry:\n with open(\"README.md\") as readme_file:\n readme = readme_file.read()\nexcept Exception as error:\n readme = \"No README information found.\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (\"README.md\", error))\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.75\",\n \"pandas\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==0.5.1\", # newer versions have changed api for f1 invocation\n \"OpenPatchMiner==0.1.8\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n \"segmentation-models-pytorch==0.3.0\",\n \"ACSConv==0.1.1\",\n \"docker\",\n \"dicom-anonymizer\",\n \"twine\",\n \"zarr\",\n \"keyring\",\n]\n\n# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389\nif sys.platform == \"darwin\":\n requirements.append(\"torch==1.11.0\")\nelse:\n requirements.append(\"torch==1.11.0\")\n\nif __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">=3.8\",\n packages=find_packages(),\n cmdclass={\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n \"gandlf_configGenerator\",\n \"gandlf_recoverConfig\",\n \"gandlf_deploy\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n 
\"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n )\n", "path": "setup.py"}]} | 1,884 | 198 |
gh_patches_debug_16711 | rasdani/github-patches | git_diff | google__TensorNetwork-489 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tn.set_default_backend should raise exception
`tn.set_default_backend(backend_name)` should raise if `backend_name` is not a valid backend.
</issue>
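
A standalone sketch of the requested validation. `VALID_BACKENDS` stands in for the library's real registry (the patch below consults `backend_factory._BACKENDS` instead), and the backend names listed are illustrative rather than exhaustive.

```python
VALID_BACKENDS = ("numpy", "tensorflow", "pytorch", "jax")  # illustrative stand-in


def set_default_backend(backend: str) -> None:
    # Reject unknown backend names up front instead of failing later.
    if backend not in VALID_BACKENDS:
        raise ValueError(f"Backend '{backend}' was not found.")
    print(f"default backend set to {backend}")  # stand-in for the real bookkeeping
```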
<code>
[start of tensornetwork/backend_contextmanager.py]
1 from typing import Text, Union
2 from tensornetwork.backends.base_backend import BaseBackend
3
4 class DefaultBackend():
5 """Context manager for setting up backend for nodes"""
6
7 def __init__(self, backend: Union[Text, BaseBackend]) -> None:
8 if not isinstance(backend, (Text, BaseBackend)):
9 raise ValueError("Item passed to DefaultBackend "
10 "must be Text or BaseBackend")
11 self.backend = backend
12
13 def __enter__(self):
14 _default_backend_stack.stack.append(self)
15
16 def __exit__(self, exc_type, exc_val, exc_tb):
17 _default_backend_stack.stack.pop()
18
19 class _DefaultBackendStack():
20 """A stack to keep track default backends context manager"""
21
22 def __init__(self):
23 self.stack = []
24 self.default_backend = "numpy"
25
26 def get_current_backend(self):
27 return self.stack[-1].backend if self.stack else self.default_backend
28
29 _default_backend_stack = _DefaultBackendStack()
30
31 def get_default_backend():
32 return _default_backend_stack.get_current_backend()
33
34 def set_default_backend(backend: Union[Text, BaseBackend]) -> None:
35 if _default_backend_stack.stack:
36 raise AssertionError("The default backend should not be changed "
37 "inside the backend context manager")
38 if not isinstance(backend, (Text, BaseBackend)):
39 raise ValueError("Item passed to set_default_backend "
40 "must be Text or BaseBackend")
41 _default_backend_stack.default_backend = backend
42
[end of tensornetwork/backend_contextmanager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensornetwork/backend_contextmanager.py b/tensornetwork/backend_contextmanager.py
--- a/tensornetwork/backend_contextmanager.py
+++ b/tensornetwork/backend_contextmanager.py
@@ -1,5 +1,6 @@
from typing import Text, Union
from tensornetwork.backends.base_backend import BaseBackend
+from tensornetwork.backends import backend_factory
class DefaultBackend():
"""Context manager for setting up backend for nodes"""
@@ -38,4 +39,6 @@
if not isinstance(backend, (Text, BaseBackend)):
raise ValueError("Item passed to set_default_backend "
"must be Text or BaseBackend")
+ if isinstance(backend, Text) and backend not in backend_factory._BACKENDS:
+ raise ValueError(f"Backend '{backend}' was not found.")
_default_backend_stack.default_backend = backend
| {"golden_diff": "diff --git a/tensornetwork/backend_contextmanager.py b/tensornetwork/backend_contextmanager.py\n--- a/tensornetwork/backend_contextmanager.py\n+++ b/tensornetwork/backend_contextmanager.py\n@@ -1,5 +1,6 @@\n from typing import Text, Union\n from tensornetwork.backends.base_backend import BaseBackend\n+from tensornetwork.backends import backend_factory\n \n class DefaultBackend():\n \"\"\"Context manager for setting up backend for nodes\"\"\"\n@@ -38,4 +39,6 @@\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to set_default_backend \"\n \"must be Text or BaseBackend\")\n+ if isinstance(backend, Text) and backend not in backend_factory._BACKENDS:\n+ raise ValueError(f\"Backend '{backend}' was not found.\")\n _default_backend_stack.default_backend = backend\n", "issue": "tn.set_default_backend should raise exception\n`tn.set_default_backend(backend_name)` should raise if `backend_name` is not a valid backend.\n", "before_files": [{"content": "from typing import Text, Union\nfrom tensornetwork.backends.base_backend import BaseBackend\n\nclass DefaultBackend():\n \"\"\"Context manager for setting up backend for nodes\"\"\"\n\n def __init__(self, backend: Union[Text, BaseBackend]) -> None:\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to DefaultBackend \"\n \"must be Text or BaseBackend\")\n self.backend = backend\n\n def __enter__(self):\n _default_backend_stack.stack.append(self)\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n _default_backend_stack.stack.pop()\n\nclass _DefaultBackendStack():\n \"\"\"A stack to keep track default backends context manager\"\"\"\n\n def __init__(self):\n self.stack = []\n self.default_backend = \"numpy\"\n\n def get_current_backend(self):\n return self.stack[-1].backend if self.stack else self.default_backend\n\n_default_backend_stack = _DefaultBackendStack()\n\ndef get_default_backend():\n return _default_backend_stack.get_current_backend()\n\ndef set_default_backend(backend: Union[Text, BaseBackend]) -> None:\n if _default_backend_stack.stack:\n raise AssertionError(\"The default backend should not be changed \"\n \"inside the backend context manager\")\n if not isinstance(backend, (Text, BaseBackend)):\n raise ValueError(\"Item passed to set_default_backend \"\n \"must be Text or BaseBackend\")\n _default_backend_stack.default_backend = backend\n", "path": "tensornetwork/backend_contextmanager.py"}]} | 960 | 187 |
gh_patches_debug_9249 | rasdani/github-patches | git_diff | sublimelsp__LSP-490 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] CamelCase instead of snake_case
`documentChanges` argument on the left https://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/rename.py#L69
should be `document_changes`, like `LspApplyWorkspaceEditCommand` expects:
https://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/core/edit.py#L19
When doing a rename, this popped up in the console
```
LSP: --> textDocument/rename
Traceback (most recent call last):
File "/opt/sublime_text/sublime_plugin.py", line 1034, in run_
return self.run(**args)
TypeError: run() got an unexpected keyword argument 'documentChanges'
```
</issue>
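
A minimal, self-contained illustration of why the traceback appears: Sublime expands the argument dictionary into `run()`'s keyword parameters, so an undeclared camelCase key raises `TypeError`. The parameter names here are illustrative.

```python
def run(edit=None, changes=None, document_changes=None):
    return changes, document_changes


run(changes={}, document_changes={})        # key matches the declared parameter
try:
    run(changes={}, documentChanges={})     # reproduces the reported failure
except TypeError as exc:
    print(exc)  # run() got an unexpected keyword argument 'documentChanges'
```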
<code>
[start of plugin/rename.py]
1 import sublime_plugin
2 from .core.registry import client_for_view, LspTextCommand
3 from .core.protocol import Request
4 from .core.documents import get_document_position, get_position, is_at_word
5 try:
6 from typing import List, Dict, Optional
7 assert List and Dict and Optional
8 except ImportError:
9 pass
10
11
12 class RenameSymbolInputHandler(sublime_plugin.TextInputHandler):
13 def __init__(self, view):
14 self.view = view
15
16 def name(self):
17 return "new_name"
18
19 def placeholder(self):
20 return self.get_current_symbol_name()
21
22 def initial_text(self):
23 return self.get_current_symbol_name()
24
25 def validate(self, name):
26 return len(name) > 0
27
28 def get_current_symbol_name(self):
29 pos = get_position(self.view)
30 current_name = self.view.substr(self.view.word(pos))
31 # Is this check necessary?
32 if not current_name:
33 current_name = ""
34 return current_name
35
36
37 class LspSymbolRenameCommand(LspTextCommand):
38 def __init__(self, view):
39 super().__init__(view)
40
41 def is_enabled(self, event=None):
42 # TODO: check what kind of scope we're in.
43 if self.has_client_with_capability('renameProvider'):
44 return is_at_word(self.view, event)
45 return False
46
47 def input(self, args):
48 if "new_name" not in args:
49 return RenameSymbolInputHandler(self.view)
50 else:
51 return None
52
53 def run(self, edit, new_name, event=None):
54 pos = get_position(self.view, event)
55 params = get_document_position(self.view, pos)
56
57 self.request_rename(params, new_name)
58
59 def request_rename(self, params, new_name) -> None:
60 client = client_for_view(self.view)
61 if client:
62 params["newName"] = new_name
63 client.send_request(Request.rename(params), self.handle_response)
64
65 def handle_response(self, response: 'Optional[Dict]') -> None:
66 if response:
67 self.view.window().run_command('lsp_apply_workspace_edit',
68 {'changes': response.get('changes'),
69 'documentChanges': response.get('documentChanges')})
70 else:
71 self.view.window().status_message('No rename edits returned')
72
73 def want_event(self):
74 return True
75
[end of plugin/rename.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/rename.py b/plugin/rename.py
--- a/plugin/rename.py
+++ b/plugin/rename.py
@@ -66,7 +66,7 @@
if response:
self.view.window().run_command('lsp_apply_workspace_edit',
{'changes': response.get('changes'),
- 'documentChanges': response.get('documentChanges')})
+ 'document_changes': response.get('documentChanges')})
else:
self.view.window().status_message('No rename edits returned')
| {"golden_diff": "diff --git a/plugin/rename.py b/plugin/rename.py\n--- a/plugin/rename.py\n+++ b/plugin/rename.py\n@@ -66,7 +66,7 @@\n if response:\n self.view.window().run_command('lsp_apply_workspace_edit',\n {'changes': response.get('changes'),\n- 'documentChanges': response.get('documentChanges')})\n+ 'document_changes': response.get('documentChanges')})\n else:\n self.view.window().status_message('No rename edits returned')\n", "issue": "[bug] CamelCase instead of snace_case \n`documentChanges` argument on the left https://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/rename.py#L69\r\nshould be `document_changes`, like `LspApplyWorkspaceEditCommand` expects:\r\nhttps://github.com/tomv564/LSP/blob/5a472ba6f23d70f6f8f1ebaabb83015c066ce198/plugin/core/edit.py#L19\r\n\r\nWhen doing a rename, this popped up in the console\r\n```\r\nLSP: --> textDocument/rename\r\nTraceback (most recent call last):\r\n File \"/opt/sublime_text/sublime_plugin.py\", line 1034, in run_\r\n return self.run(**args)\r\nTypeError: run() got an unexpected keyword argument 'documentChanges'\r\n```\n", "before_files": [{"content": "import sublime_plugin\nfrom .core.registry import client_for_view, LspTextCommand\nfrom .core.protocol import Request\nfrom .core.documents import get_document_position, get_position, is_at_word\ntry:\n from typing import List, Dict, Optional\n assert List and Dict and Optional\nexcept ImportError:\n pass\n\n\nclass RenameSymbolInputHandler(sublime_plugin.TextInputHandler):\n def __init__(self, view):\n self.view = view\n\n def name(self):\n return \"new_name\"\n\n def placeholder(self):\n return self.get_current_symbol_name()\n\n def initial_text(self):\n return self.get_current_symbol_name()\n\n def validate(self, name):\n return len(name) > 0\n\n def get_current_symbol_name(self):\n pos = get_position(self.view)\n current_name = self.view.substr(self.view.word(pos))\n # Is this check necessary?\n if not current_name:\n current_name = \"\"\n return current_name\n\n\nclass LspSymbolRenameCommand(LspTextCommand):\n def __init__(self, view):\n super().__init__(view)\n\n def is_enabled(self, event=None):\n # TODO: check what kind of scope we're in.\n if self.has_client_with_capability('renameProvider'):\n return is_at_word(self.view, event)\n return False\n\n def input(self, args):\n if \"new_name\" not in args:\n return RenameSymbolInputHandler(self.view)\n else:\n return None\n\n def run(self, edit, new_name, event=None):\n pos = get_position(self.view, event)\n params = get_document_position(self.view, pos)\n\n self.request_rename(params, new_name)\n\n def request_rename(self, params, new_name) -> None:\n client = client_for_view(self.view)\n if client:\n params[\"newName\"] = new_name\n client.send_request(Request.rename(params), self.handle_response)\n\n def handle_response(self, response: 'Optional[Dict]') -> None:\n if response:\n self.view.window().run_command('lsp_apply_workspace_edit',\n {'changes': response.get('changes'),\n 'documentChanges': response.get('documentChanges')})\n else:\n self.view.window().status_message('No rename edits returned')\n\n def want_event(self):\n return True\n", "path": "plugin/rename.py"}]} | 1,396 | 110 |
gh_patches_debug_41023 | rasdani/github-patches | git_diff | pyload__pyload-180 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implemented StreamcloudEu plugin based on XFileSharingPro
Resolves #128
</issue>
<code>
[start of module/plugins/hoster/StreamcloudEu.py]
1 # -*- coding: utf-8 -*-
2 from module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo
3 import re
4
5 class StreamcloudEu(XFileSharingPro):
6 __name__ = "StreamcloudEu"
7 __type__ = "hoster"
8 __pattern__ = r"http://(www\.)?streamcloud\.eu/\S+"
9 __version__ = "0.01"
10 __description__ = """Streamcloud.eu hoster plugin"""
11 __author_name__ = ("seoester")
12 __author_mail__ = ("[email protected]")
13
14 HOSTER_NAME = "streamcloud.eu"
15 DIRECT_LINK_PATTERN = r'file: "(http://(stor|cdn)\d+\.streamcloud.eu:?\d*/.*/video\.mp4)",'
16
17 def setup(self):
18 super(XFileSharingPro, self).setup()
19 self.multiDL = True
20
21 def getDownloadLink(self):
22 found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)
23 if found:
24 return found.group(1)
25
26 return super(XFileSharingPro, self).getDownloadLink()
27
28 getInfo = create_getInfo(StreamcloudEu)
29
[end of module/plugins/hoster/StreamcloudEu.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/module/plugins/hoster/StreamcloudEu.py b/module/plugins/hoster/StreamcloudEu.py
--- a/module/plugins/hoster/StreamcloudEu.py
+++ b/module/plugins/hoster/StreamcloudEu.py
@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
from module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo
+from module.network.HTTPRequest import HTTPRequest
+from time import sleep
import re
class StreamcloudEu(XFileSharingPro):
@@ -15,7 +17,7 @@
DIRECT_LINK_PATTERN = r'file: "(http://(stor|cdn)\d+\.streamcloud.eu:?\d*/.*/video\.mp4)",'
def setup(self):
- super(XFileSharingPro, self).setup()
+ super(StreamcloudEu, self).setup()
self.multiDL = True
def getDownloadLink(self):
@@ -23,6 +25,87 @@
if found:
return found.group(1)
- return super(XFileSharingPro, self).getDownloadLink()
+ for i in range(5):
+ self.logDebug("Getting download link: #%d" % i)
+ data = self.getPostParameters()
+ httpRequest = HTTPRequest(options=self.req.options)
+ httpRequest.cj = self.req.cj
+ sleep(10)
+ self.html = httpRequest.load(self.pyfile.url, post = data, referer=False, cookies=True, decode = True)
+ self.header = httpRequest.header
+
+ found = re.search("Location\s*:\s*(.*)", self.header, re.I)
+ if found:
+ break
+
+ found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)
+ if found:
+ break
+
+ else:
+ if self.errmsg and 'captcha' in self.errmsg:
+ self.fail("No valid captcha code entered")
+ else:
+ self.fail("Download link not found")
+
+ return found.group(1)
+
+ def getPostParameters(self):
+ for i in range(3):
+ if not self.errmsg: self.checkErrors()
+
+ if hasattr(self,"FORM_PATTERN"):
+ action, inputs = self.parseHtmlForm(self.FORM_PATTERN)
+ else:
+ action, inputs = self.parseHtmlForm(input_names={"op": re.compile("^download")})
+
+ if not inputs:
+ action, inputs = self.parseHtmlForm('F1')
+ if not inputs:
+ if self.errmsg:
+ self.retry()
+ else:
+ self.parseError("Form not found")
+
+ self.logDebug(self.HOSTER_NAME, inputs)
+
+ if 'op' in inputs and inputs['op'] in ('download1', 'download2', 'download3'):
+ if "password" in inputs:
+ if self.passwords:
+ inputs['password'] = self.passwords.pop(0)
+ else:
+ self.fail("No or invalid passport")
+
+ if not self.premium:
+ found = re.search(self.WAIT_PATTERN, self.html)
+ if found:
+ wait_time = int(found.group(1)) + 1
+ self.setWait(wait_time, False)
+ else:
+ wait_time = 0
+
+ self.captcha = self.handleCaptcha(inputs)
+
+ if wait_time: self.wait()
+
+ self.errmsg = None
+ self.logDebug("getPostParameters {0}".format(i))
+ return inputs
+
+ else:
+ inputs['referer'] = self.pyfile.url
+
+ if self.premium:
+ inputs['method_premium'] = "Premium Download"
+ if 'method_free' in inputs: del inputs['method_free']
+ else:
+ inputs['method_free'] = "Free Download"
+ if 'method_premium' in inputs: del inputs['method_premium']
+
+ self.html = self.load(self.pyfile.url, post = inputs, ref = False)
+ self.errmsg = None
+
+ else: self.parseError('FORM: %s' % (inputs['op'] if 'op' in inputs else 'UNKNOWN'))
+
getInfo = create_getInfo(StreamcloudEu)
| {"golden_diff": "diff --git a/module/plugins/hoster/StreamcloudEu.py b/module/plugins/hoster/StreamcloudEu.py\n--- a/module/plugins/hoster/StreamcloudEu.py\n+++ b/module/plugins/hoster/StreamcloudEu.py\n@@ -1,5 +1,7 @@\n # -*- coding: utf-8 -*-\n from module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo\n+from module.network.HTTPRequest import HTTPRequest\n+from time import sleep\n import re\n \n class StreamcloudEu(XFileSharingPro):\n@@ -15,7 +17,7 @@\n DIRECT_LINK_PATTERN = r'file: \"(http://(stor|cdn)\\d+\\.streamcloud.eu:?\\d*/.*/video\\.mp4)\",'\n \n def setup(self):\n- super(XFileSharingPro, self).setup()\n+ super(StreamcloudEu, self).setup()\n self.multiDL = True\n \n def getDownloadLink(self):\n@@ -23,6 +25,87 @@\n if found:\n return found.group(1)\n \n- return super(XFileSharingPro, self).getDownloadLink()\n+ for i in range(5):\n+ self.logDebug(\"Getting download link: #%d\" % i)\n+ data = self.getPostParameters()\n+ httpRequest = HTTPRequest(options=self.req.options)\n+ httpRequest.cj = self.req.cj\n+ sleep(10)\n+ self.html = httpRequest.load(self.pyfile.url, post = data, referer=False, cookies=True, decode = True)\n+ self.header = httpRequest.header\n+\n+ found = re.search(\"Location\\s*:\\s*(.*)\", self.header, re.I)\n+ if found:\n+ break\n+\n+ found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)\n+ if found:\n+ break\n+\n+ else:\n+ if self.errmsg and 'captcha' in self.errmsg:\n+ self.fail(\"No valid captcha code entered\")\n+ else:\n+ self.fail(\"Download link not found\")\n+\n+ return found.group(1)\n+\n+ def getPostParameters(self):\n+ for i in range(3):\n+ if not self.errmsg: self.checkErrors()\n+\n+ if hasattr(self,\"FORM_PATTERN\"):\n+ action, inputs = self.parseHtmlForm(self.FORM_PATTERN)\n+ else:\n+ action, inputs = self.parseHtmlForm(input_names={\"op\": re.compile(\"^download\")})\n+\n+ if not inputs:\n+ action, inputs = self.parseHtmlForm('F1')\n+ if not inputs:\n+ if self.errmsg:\n+ self.retry()\n+ else:\n+ self.parseError(\"Form not found\")\n+\n+ self.logDebug(self.HOSTER_NAME, inputs)\n+\n+ if 'op' in inputs and inputs['op'] in ('download1', 'download2', 'download3'):\n+ if \"password\" in inputs:\n+ if self.passwords:\n+ inputs['password'] = self.passwords.pop(0)\n+ else:\n+ self.fail(\"No or invalid passport\")\n+\n+ if not self.premium:\n+ found = re.search(self.WAIT_PATTERN, self.html)\n+ if found:\n+ wait_time = int(found.group(1)) + 1\n+ self.setWait(wait_time, False)\n+ else:\n+ wait_time = 0\n+\n+ self.captcha = self.handleCaptcha(inputs)\n+\n+ if wait_time: self.wait()\n+\n+ self.errmsg = None\n+ self.logDebug(\"getPostParameters {0}\".format(i))\n+ return inputs\n+\n+ else:\n+ inputs['referer'] = self.pyfile.url\n+\n+ if self.premium:\n+ inputs['method_premium'] = \"Premium Download\"\n+ if 'method_free' in inputs: del inputs['method_free']\n+ else:\n+ inputs['method_free'] = \"Free Download\"\n+ if 'method_premium' in inputs: del inputs['method_premium']\n+\n+ self.html = self.load(self.pyfile.url, post = inputs, ref = False)\n+ self.errmsg = None\n+\n+ else: self.parseError('FORM: %s' % (inputs['op'] if 'op' in inputs else 'UNKNOWN'))\n+\n \n getInfo = create_getInfo(StreamcloudEu)\n", "issue": "Implemented StreamcloudEu plugin based on XFileSharingPro\nResolves #128\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom module.plugins.hoster.XFileSharingPro import XFileSharingPro, create_getInfo\nimport re\n\nclass StreamcloudEu(XFileSharingPro):\n __name__ = \"StreamcloudEu\"\n __type__ = \"hoster\"\n 
__pattern__ = r\"http://(www\\.)?streamcloud\\.eu/\\S+\"\n __version__ = \"0.01\"\n __description__ = \"\"\"Streamcloud.eu hoster plugin\"\"\"\n __author_name__ = (\"seoester\")\n __author_mail__ = (\"[email protected]\")\n\n HOSTER_NAME = \"streamcloud.eu\"\n DIRECT_LINK_PATTERN = r'file: \"(http://(stor|cdn)\\d+\\.streamcloud.eu:?\\d*/.*/video\\.mp4)\",'\n\n def setup(self):\n super(XFileSharingPro, self).setup()\n self.multiDL = True\n\n def getDownloadLink(self):\n found = re.search(self.DIRECT_LINK_PATTERN, self.html, re.S)\n if found:\n return found.group(1)\n\n return super(XFileSharingPro, self).getDownloadLink()\n\ngetInfo = create_getInfo(StreamcloudEu)\n", "path": "module/plugins/hoster/StreamcloudEu.py"}]} | 871 | 947 |
gh_patches_debug_36759 | rasdani/github-patches | git_diff | tensorflow__addons-206 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Generate API docs
As our repository matures it's important to have api docs to improve user experience. As discussed in #38 we will also be able to remove the table of contents off the main README.
Should we host on https://readthedocs.org/ or is there something else recommended @ewilderj @dynamicwebpaige @karmel ?
</issue>
<code>
[start of tools/docs/build_docs.py]
1 # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """ Modified from the tfdocs example api reference docs generation script.
16
17 This script generates API reference docs.
18
19 Install pre-requisites:
20 $> pip install -U git+https://github.com/tensorflow/docs
21 $> pip install artifacts/tensorflow_addons-*.whl
22
23 Generate Docs:
24 $> from the repo root run: python tools/docs/build_docs.py
25 """
26
27 from __future__ import absolute_import
28 from __future__ import division
29 from __future__ import print_function
30
31 from absl import app
32 from absl import flags
33
34 import tensorflow_addons
35 from tensorflow_docs.api_generator import generate_lib
36 from tensorflow_docs.api_generator import public_api
37
38 PROJECT_SHORT_NAME = 'tfaddons'
39 PROJECT_FULL_NAME = 'TensorFlow Addons'
40 CODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'
41
42 FLAGS = flags.FLAGS
43
44 flags.DEFINE_string(
45 'output_dir',
46 default='/addons/docs/api_docs/python/',
47 help='Where to write the resulting docs to.')
48
49
50 def main(argv):
51 if argv[1:]:
52 raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))
53
54 doc_generator = generate_lib.DocGenerator(
55 root_title=PROJECT_FULL_NAME,
56 # Replace `tensorflow_docs` with your module, here.
57 py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],
58 code_url_prefix=CODE_URL_PREFIX,
59 # This callback cleans up a lot of aliases caused by internal imports.
60 callbacks=[public_api.local_definitions_filter])
61
62 doc_generator.build(FLAGS.output_dir)
63
64 print('Output docs to: ', FLAGS.output_dir)
65
66
67 if __name__ == '__main__':
68 app.run(main)
69
[end of tools/docs/build_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/docs/build_docs.py b/tools/docs/build_docs.py
--- a/tools/docs/build_docs.py
+++ b/tools/docs/build_docs.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
-""" Modified from the tfdocs example api reference docs generation script.
+"""Modified from the tfdocs example api reference docs generation script.
This script generates API reference docs.
@@ -31,19 +31,30 @@
from absl import app
from absl import flags
-import tensorflow_addons
+import tensorflow_addons as tfa
+
from tensorflow_docs.api_generator import generate_lib
+from tensorflow_docs.api_generator import parser
from tensorflow_docs.api_generator import public_api
-PROJECT_SHORT_NAME = 'tfaddons'
+from tensorflow.python.util import tf_inspect
+
+# Use tensorflow's `tf_inspect`, which is aware of `tf_decorator`.
+parser.tf_inspect = tf_inspect
+
+PROJECT_SHORT_NAME = 'tfa'
PROJECT_FULL_NAME = 'TensorFlow Addons'
-CODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'
FLAGS = flags.FLAGS
+flags.DEFINE_string(
+ 'git_branch',
+ default='master',
+ help='The name of the corresponding branch on github.')
+
flags.DEFINE_string(
'output_dir',
- default='/addons/docs/api_docs/python/',
+ default='docs/api_docs/python/',
help='Where to write the resulting docs to.')
@@ -51,11 +62,16 @@
if argv[1:]:
raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))
+ code_url_prefix = ('https://github.com/tensorflow/addons/tree/'
+ '{git_branch}/tensorflow_addons'.format(
+ git_branch=FLAGS.git_branch))
+
doc_generator = generate_lib.DocGenerator(
root_title=PROJECT_FULL_NAME,
# Replace `tensorflow_docs` with your module, here.
- py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],
- code_url_prefix=CODE_URL_PREFIX,
+ py_modules=[(PROJECT_SHORT_NAME, tfa)],
+ code_url_prefix=code_url_prefix,
+ private_map={'tfa': ['__version__', 'utils', 'version']},
# This callback cleans up a lot of aliases caused by internal imports.
callbacks=[public_api.local_definitions_filter])
| {"golden_diff": "diff --git a/tools/docs/build_docs.py b/tools/docs/build_docs.py\n--- a/tools/docs/build_docs.py\n+++ b/tools/docs/build_docs.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n # ==============================================================================\n-\"\"\" Modified from the tfdocs example api reference docs generation script.\n+\"\"\"Modified from the tfdocs example api reference docs generation script.\n \n This script generates API reference docs.\n \n@@ -31,19 +31,30 @@\n from absl import app\n from absl import flags\n \n-import tensorflow_addons\n+import tensorflow_addons as tfa\n+\n from tensorflow_docs.api_generator import generate_lib\n+from tensorflow_docs.api_generator import parser\n from tensorflow_docs.api_generator import public_api\n \n-PROJECT_SHORT_NAME = 'tfaddons'\n+from tensorflow.python.util import tf_inspect\n+\n+# Use tensorflow's `tf_inspect`, which is aware of `tf_decorator`.\n+parser.tf_inspect = tf_inspect\n+\n+PROJECT_SHORT_NAME = 'tfa'\n PROJECT_FULL_NAME = 'TensorFlow Addons'\n-CODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'\n \n FLAGS = flags.FLAGS\n \n+flags.DEFINE_string(\n+ 'git_branch',\n+ default='master',\n+ help='The name of the corresponding branch on github.')\n+\n flags.DEFINE_string(\n 'output_dir',\n- default='/addons/docs/api_docs/python/',\n+ default='docs/api_docs/python/',\n help='Where to write the resulting docs to.')\n \n \n@@ -51,11 +62,16 @@\n if argv[1:]:\n raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))\n \n+ code_url_prefix = ('https://github.com/tensorflow/addons/tree/'\n+ '{git_branch}/tensorflow_addons'.format(\n+ git_branch=FLAGS.git_branch))\n+\n doc_generator = generate_lib.DocGenerator(\n root_title=PROJECT_FULL_NAME,\n # Replace `tensorflow_docs` with your module, here.\n- py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],\n- code_url_prefix=CODE_URL_PREFIX,\n+ py_modules=[(PROJECT_SHORT_NAME, tfa)],\n+ code_url_prefix=code_url_prefix,\n+ private_map={'tfa': ['__version__', 'utils', 'version']},\n # This callback cleans up a lot of aliases caused by internal imports.\n callbacks=[public_api.local_definitions_filter])\n", "issue": "Generate API docs\nAs our repository matures it's important to have api docs to improve user experience. As discussed in #38 we will also be able to remove the table of contents off the main README.\r\n\r\nShould we host on https://readthedocs.org/ or is there something else recommended @ewilderj @dynamicwebpaige @karmel ?\n", "before_files": [{"content": "# Copyright 2015 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\" Modified from the tfdocs example api reference docs generation script.\n\nThis script generates API reference docs.\n\nInstall pre-requisites:\n$> pip install -U git+https://github.com/tensorflow/docs\n$> pip install artifacts/tensorflow_addons-*.whl\n\nGenerate Docs:\n$> from the repo root run: python tools/docs/build_docs.py\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import app\nfrom absl import flags\n\nimport tensorflow_addons\nfrom tensorflow_docs.api_generator import generate_lib\nfrom tensorflow_docs.api_generator import public_api\n\nPROJECT_SHORT_NAME = 'tfaddons'\nPROJECT_FULL_NAME = 'TensorFlow Addons'\nCODE_URL_PREFIX = 'https://github.com/tensorflow/addons/tree/master/tensorflow_addons'\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n 'output_dir',\n default='/addons/docs/api_docs/python/',\n help='Where to write the resulting docs to.')\n\n\ndef main(argv):\n if argv[1:]:\n raise ValueError('Unrecognized arguments: {}'.format(argv[1:]))\n\n doc_generator = generate_lib.DocGenerator(\n root_title=PROJECT_FULL_NAME,\n # Replace `tensorflow_docs` with your module, here.\n py_modules=[(PROJECT_SHORT_NAME, tensorflow_addons)],\n code_url_prefix=CODE_URL_PREFIX,\n # This callback cleans up a lot of aliases caused by internal imports.\n callbacks=[public_api.local_definitions_filter])\n\n doc_generator.build(FLAGS.output_dir)\n\n print('Output docs to: ', FLAGS.output_dir)\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "tools/docs/build_docs.py"}]} | 1,233 | 532 |
gh_patches_debug_35287 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-6529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pkgutil.iter_modules with arbitrary path
## Description of the issue
The iter_modules patch implemented in #5959 has a bug where the path must start with the _MEIPASS or it will throw an assertion error.
The normal iter_modules function can take any valid path. Your code first calls that:
https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L37
and later asserts it starts with _MEIPASS
https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L59
which means that a path outside of the executable will throw the assertion error.
I think when implementing it was overlooked that this function could be used to look at a path outside the executable path.
### Context information (for bug reports)
* PyInstaller Version 4.8
* All OS and python versions
I will have a look into creating a pull request to fix this issue.
I think the solution is to change the assertion to an if statement to only run the code below that if it starts with _MEIPASS and thus could be bundled in the executable.
</issue>
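
A self-contained sketch of the guard the reporter proposes: treat a path as PYZ-embedded only when it actually lives under `sys._MEIPASS`, and otherwise leave it to the original `pkgutil` machinery. The helper name is an assumption; the real hook inlines the check rather than calling a function like this.

```python
import os
import sys


def _is_meipass_path(p: str) -> bool:
    # Not running from a frozen bundle at all, so nothing can be PYZ-embedded.
    meipass = getattr(sys, "_MEIPASS", None)
    if meipass is None:
        return False
    prefix = os.path.normpath(meipass) + os.path.sep
    # Replaces the hard assert: paths outside the bundle simply return False.
    return os.path.normpath(p).startswith(prefix)
```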
<code>
[start of PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2021, PyInstaller Development Team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: Apache-2.0
10 #-----------------------------------------------------------------------------
11 #
12 # This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's FrozenImporter to list
13 # sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in noarchive
14 # build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).
15 #
16 # The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to
17 # FrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while
18 # PyInstaller's FrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning FrozenImporter
19 # into path entry finder, would seemingly require the latter to support on-filesystem resources (e.g., extension
20 # modules) in addition to PYZ-embedded ones.
21 #
22 # Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of
23 # original implementation with contents of PYZ archive from FrozenImporter's TOC.
24
25 import os
26 import pkgutil
27 import sys
28
29 from pyimod03_importers import FrozenImporter
30
31 _orig_pkgutil_iter_modules = pkgutil.iter_modules
32
33
34 def _pyi_pkgutil_iter_modules(path=None, prefix=''):
35 # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both binary
36 # extensions and compiled pyc modules in noarchive debug builds).
37 yield from _orig_pkgutil_iter_modules(path, prefix)
38
39 # Find the instance of PyInstaller's FrozenImporter.
40 for importer in pkgutil.iter_importers():
41 if isinstance(importer, FrozenImporter):
42 break
43 else:
44 return
45
46 if not path:
47 # Search for all top-level packages/modules. These will have no dots in their entry names.
48 for entry in importer.toc:
49 if entry.count('.') != 0:
50 continue
51 is_pkg = importer.is_package(entry)
52 yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)
53 else:
54 # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.
55 SYS_PREFIX = sys._MEIPASS + os.path.sep
56 SYS_PREFIXLEN = len(SYS_PREFIX)
57 # Only single path is supported, and it must start with sys._MEIPASS.
58 pkg_path = os.path.normpath(path[0])
59 assert pkg_path.startswith(SYS_PREFIX)
60 # Construct package prefix from path...
61 pkg_prefix = pkg_path[SYS_PREFIXLEN:]
62 pkg_prefix = pkg_prefix.replace(os.path.sep, '.')
63 # ... and ensure it ends with a dot (so we can directly filter out the package itself).
64 if not pkg_prefix.endswith('.'):
65 pkg_prefix += '.'
66 pkg_prefix_len = len(pkg_prefix)
67
68 for entry in importer.toc:
69 if not entry.startswith(pkg_prefix):
70 continue
71 name = entry[pkg_prefix_len:]
72 if name.count('.') != 0:
73 continue
74 is_pkg = importer.is_package(entry)
75 yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)
76
77
78 pkgutil.iter_modules = _pyi_pkgutil_iter_modules
79
[end of PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
@@ -43,7 +43,7 @@
else:
return
- if not path:
+ if path is None:
# Search for all top-level packages/modules. These will have no dots in their entry names.
for entry in importer.toc:
if entry.count('.') != 0:
@@ -54,25 +54,28 @@
# Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.
SYS_PREFIX = sys._MEIPASS + os.path.sep
SYS_PREFIXLEN = len(SYS_PREFIX)
- # Only single path is supported, and it must start with sys._MEIPASS.
- pkg_path = os.path.normpath(path[0])
- assert pkg_path.startswith(SYS_PREFIX)
- # Construct package prefix from path...
- pkg_prefix = pkg_path[SYS_PREFIXLEN:]
- pkg_prefix = pkg_prefix.replace(os.path.sep, '.')
- # ... and ensure it ends with a dot (so we can directly filter out the package itself).
- if not pkg_prefix.endswith('.'):
- pkg_prefix += '.'
- pkg_prefix_len = len(pkg_prefix)
- for entry in importer.toc:
- if not entry.startswith(pkg_prefix):
- continue
- name = entry[pkg_prefix_len:]
- if name.count('.') != 0:
+ for pkg_path in path:
+ pkg_path = os.path.normpath(pkg_path)
+ if not pkg_path.startswith(SYS_PREFIX):
+ # if the path does not start with sys._MEIPASS then it cannot be a bundled package.
continue
- is_pkg = importer.is_package(entry)
- yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)
+ # Construct package prefix from path...
+ pkg_prefix = pkg_path[SYS_PREFIXLEN:]
+ pkg_prefix = pkg_prefix.replace(os.path.sep, '.')
+ # ... and ensure it ends with a dot (so we can directly filter out the package itself).
+ if not pkg_prefix.endswith('.'):
+ pkg_prefix += '.'
+ pkg_prefix_len = len(pkg_prefix)
+
+ for entry in importer.toc:
+ if not entry.startswith(pkg_prefix):
+ continue
+ name = entry[pkg_prefix_len:]
+ if name.count('.') != 0:
+ continue
+ is_pkg = importer.is_package(entry)
+ yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)
pkgutil.iter_modules = _pyi_pkgutil_iter_modules
| {"golden_diff": "diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n@@ -43,7 +43,7 @@\n else:\n return\n \n- if not path:\n+ if path is None:\n # Search for all top-level packages/modules. These will have no dots in their entry names.\n for entry in importer.toc:\n if entry.count('.') != 0:\n@@ -54,25 +54,28 @@\n # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.\n SYS_PREFIX = sys._MEIPASS + os.path.sep\n SYS_PREFIXLEN = len(SYS_PREFIX)\n- # Only single path is supported, and it must start with sys._MEIPASS.\n- pkg_path = os.path.normpath(path[0])\n- assert pkg_path.startswith(SYS_PREFIX)\n- # Construct package prefix from path...\n- pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n- pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n- # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n- if not pkg_prefix.endswith('.'):\n- pkg_prefix += '.'\n- pkg_prefix_len = len(pkg_prefix)\n \n- for entry in importer.toc:\n- if not entry.startswith(pkg_prefix):\n- continue\n- name = entry[pkg_prefix_len:]\n- if name.count('.') != 0:\n+ for pkg_path in path:\n+ pkg_path = os.path.normpath(pkg_path)\n+ if not pkg_path.startswith(SYS_PREFIX):\n+ # if the path does not start with sys._MEIPASS then it cannot be a bundled package.\n continue\n- is_pkg = importer.is_package(entry)\n- yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n+ # Construct package prefix from path...\n+ pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n+ pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n+ # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n+ if not pkg_prefix.endswith('.'):\n+ pkg_prefix += '.'\n+ pkg_prefix_len = len(pkg_prefix)\n+\n+ for entry in importer.toc:\n+ if not entry.startswith(pkg_prefix):\n+ continue\n+ name = entry[pkg_prefix_len:]\n+ if name.count('.') != 0:\n+ continue\n+ is_pkg = importer.is_package(entry)\n+ yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n \n \n pkgutil.iter_modules = _pyi_pkgutil_iter_modules\n", "issue": "pkgutil.iter_modules with arbitrary path\n## Description of the issue\r\nThe iter_modules patch implemented in #5959 has a bug where the path must start with the _MEIPASS or it will throw an assertion error.\r\n\r\nThe normal iter_modules function can take any valid path. 
Your code first calls that:\r\nhttps://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L37\r\n\r\nand later asserts it starts with _MEIPASS\r\nhttps://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py#L59\r\n\r\nwhich means that a path outside of the executable will throw the assertion error.\r\n\r\nI think when implementing it was overlooked that this function could be used to look at a path outside the executable path.\r\n\r\n### Context information (for bug reports)\r\n\r\n* PyInstaller Version 4.8\r\n* All OS and python versions\r\n\r\nI will have a look into creating a pull request to fix this issue.\r\nI think the solution is to change the assertion to an if statement to only run the code below that if it starts with _MEIPASS and thus could be bundled in the executable.\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2021, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n#\n# This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's FrozenImporter to list\n# sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in noarchive\n# build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).\n#\n# The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to\n# FrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while\n# PyInstaller's FrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning FrozenImporter\n# into path entry finder, would seemingly require the latter to support on-filesystem resources (e.g., extension\n# modules) in addition to PYZ-embedded ones.\n#\n# Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of\n# original implementation with contents of PYZ archive from FrozenImporter's TOC.\n\nimport os\nimport pkgutil\nimport sys\n\nfrom pyimod03_importers import FrozenImporter\n\n_orig_pkgutil_iter_modules = pkgutil.iter_modules\n\n\ndef _pyi_pkgutil_iter_modules(path=None, prefix=''):\n # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both binary\n # extensions and compiled pyc modules in noarchive debug builds).\n yield from _orig_pkgutil_iter_modules(path, prefix)\n\n # Find the instance of PyInstaller's FrozenImporter.\n for importer in pkgutil.iter_importers():\n if isinstance(importer, FrozenImporter):\n break\n else:\n return\n\n if not path:\n # Search for all top-level packages/modules. 
These will have no dots in their entry names.\n for entry in importer.toc:\n if entry.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)\n else:\n # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.\n SYS_PREFIX = sys._MEIPASS + os.path.sep\n SYS_PREFIXLEN = len(SYS_PREFIX)\n # Only single path is supported, and it must start with sys._MEIPASS.\n pkg_path = os.path.normpath(path[0])\n assert pkg_path.startswith(SYS_PREFIX)\n # Construct package prefix from path...\n pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n if not pkg_prefix.endswith('.'):\n pkg_prefix += '.'\n pkg_prefix_len = len(pkg_prefix)\n\n for entry in importer.toc:\n if not entry.startswith(pkg_prefix):\n continue\n name = entry[pkg_prefix_len:]\n if name.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n\n\npkgutil.iter_modules = _pyi_pkgutil_iter_modules\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py"}]} | 1,714 | 621 |
gh_patches_debug_1650 | rasdani/github-patches | git_diff | ivy-llc__ivy-13273 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
unravel_index
</issue>
<code>
[start of ivy/functional/frontends/jax/numpy/indexing.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import (
4 to_ivy_arrays_and_back,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def diagonal(a, offset=0, axis1=0, axis2=1):
10 return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)
11
12
13 @to_ivy_arrays_and_back
14 def diag(v, k=0):
15 return ivy.diag(v, k=k)
16
17
18 @to_ivy_arrays_and_back
19 def diag_indices(n, ndim=2):
20 idx = ivy.arange(n, dtype=int)
21 return (idx,) * ndim
22
23
24 # take_along_axis
25 @to_ivy_arrays_and_back
26 def take_along_axis(arr, indices, axis, mode="fill"):
27 return ivy.take_along_axis(arr, indices, axis, mode=mode)
28
29
30 @to_ivy_arrays_and_back
31 def tril_indices(n_rows, n_cols=None, k=0):
32 return ivy.tril_indices(n_rows, n_cols, k)
33
34
35 @to_ivy_arrays_and_back
36 def triu_indices(n, k=0, m=None):
37 return ivy.triu_indices(n, m, k)
38
39
40 @to_ivy_arrays_and_back
41 def triu_indices_from(arr, k=0):
42 return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)
43
44
45 def tril_indices_from(arr, k=0):
46 return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)
47
[end of ivy/functional/frontends/jax/numpy/indexing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py
--- a/ivy/functional/frontends/jax/numpy/indexing.py
+++ b/ivy/functional/frontends/jax/numpy/indexing.py
@@ -44,3 +44,10 @@
def tril_indices_from(arr, k=0):
return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)
+
+
+# unravel_index
+@to_ivy_arrays_and_back
+def unravel_index(indices, shape):
+ ret = [x.astype("int64") for x in ivy.unravel_index(indices, shape)]
+ return tuple(ret)
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py\n--- a/ivy/functional/frontends/jax/numpy/indexing.py\n+++ b/ivy/functional/frontends/jax/numpy/indexing.py\n@@ -44,3 +44,10 @@\n \n def tril_indices_from(arr, k=0):\n return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n+\n+\n+# unravel_index\n+@to_ivy_arrays_and_back\n+def unravel_index(indices, shape):\n+ ret = [x.astype(\"int64\") for x in ivy.unravel_index(indices, shape)]\n+ return tuple(ret)\n", "issue": "unravel_index\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@to_ivy_arrays_and_back\ndef diagonal(a, offset=0, axis1=0, axis2=1):\n return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag_indices(n, ndim=2):\n idx = ivy.arange(n, dtype=int)\n return (idx,) * ndim\n\n\n# take_along_axis\n@to_ivy_arrays_and_back\ndef take_along_axis(arr, indices, axis, mode=\"fill\"):\n return ivy.take_along_axis(arr, indices, axis, mode=mode)\n\n\n@to_ivy_arrays_and_back\ndef tril_indices(n_rows, n_cols=None, k=0):\n return ivy.tril_indices(n_rows, n_cols, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices(n, k=0, m=None):\n return ivy.triu_indices(n, m, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices_from(arr, k=0):\n return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)\n\n\ndef tril_indices_from(arr, k=0):\n return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n", "path": "ivy/functional/frontends/jax/numpy/indexing.py"}]} | 987 | 163 |
gh_patches_debug_4175 | rasdani/github-patches | git_diff | cleanlab__cleanlab-965 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Revert #961 before release
Tensorflow version temporarily has an upper bound (`tensorflow<2.16.0`) in requirements-dev.txt.
scikit-learn version temporarily has an upper bound (`scikit-learn>=1.0,<1.4.0`) in setup.py
This needs to be reverted before releasing v2.6.0.
_Originally posted by @elisno in https://github.com/cleanlab/cleanlab/issues/961#issuecomment-1898968097_
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 from setuptools.command.egg_info import egg_info
3
4 # To use a consistent encoding
5 from codecs import open
6 from os import path
7
8
9 class egg_info_ex(egg_info):
10 """Includes license file into `.egg-info` folder."""
11
12 def run(self):
13 # don't duplicate license into `.egg-info` when building a distribution
14 if not self.distribution.have_run.get("install", True):
15 # `install` command is in progress, copy license
16 self.mkpath(self.egg_info)
17 self.copy_file("LICENSE", self.egg_info)
18
19 egg_info.run(self)
20
21
22 here = path.abspath(path.dirname(__file__))
23
24 # Get the long description from the README file
25 with open(path.join(here, "README.md"), encoding="utf-8") as f:
26 long_description = f.read()
27
28 # Get version number and store it in __version__
29 exec(open("cleanlab/version.py").read())
30
31 DATALAB_REQUIRE = [
32 # Mainly for Datalab's data storage class.
33 # Still some type hints that require datasets
34 "datasets>=2.7.0",
35 ]
36
37 IMAGE_REQUIRE = DATALAB_REQUIRE + ["cleanvision>=0.3.2"]
38
39 EXTRAS_REQUIRE = {
40 "datalab": DATALAB_REQUIRE,
41 "image": IMAGE_REQUIRE,
42 "all": ["matplotlib>=3.5.1"],
43 }
44 EXTRAS_REQUIRE["all"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))
45
46 setup(
47 name="cleanlab",
48 version=__version__,
49 license="AGPLv3+",
50 long_description=long_description,
51 long_description_content_type="text/markdown",
52 description="The standard package for data-centric AI, machine learning with label errors, "
53 "and automatically finding and fixing dataset issues in Python.",
54 url="https://cleanlab.ai",
55 project_urls={
56 "Documentation": "https://docs.cleanlab.ai",
57 "Bug Tracker": "https://github.com/cleanlab/cleanlab/issues",
58 "Source Code": "https://github.com/cleanlab/cleanlab",
59 },
60 author="Cleanlab Inc.",
61 author_email="[email protected]",
62 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
63 classifiers=[
64 "Development Status :: 4 - Beta",
65 "Intended Audience :: Developers",
66 "Intended Audience :: Education",
67 "Intended Audience :: Science/Research",
68 "Intended Audience :: Information Technology",
69 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
70 "Natural Language :: English",
71 # We believe this package works will these versions, but we do not guarantee it!
72 "Programming Language :: Python :: 3",
73 "Programming Language :: Python :: 3.7",
74 "Programming Language :: Python :: 3.8",
75 "Programming Language :: Python :: 3.9",
76 "Programming Language :: Python :: 3.10",
77 "Programming Language :: Python",
78 "Topic :: Software Development",
79 "Topic :: Scientific/Engineering",
80 "Topic :: Scientific/Engineering :: Mathematics",
81 "Topic :: Scientific/Engineering :: Artificial Intelligence",
82 "Topic :: Software Development :: Libraries",
83 "Topic :: Software Development :: Libraries :: Python Modules",
84 ],
85 python_requires=">=3.7",
86 # What does your project relate to?
87 keywords="machine_learning data_cleaning confident_learning classification weak_supervision "
88 "learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric",
89 # You can just specify the packages manually here if your project is
90 # simple. Or you can use find_packages().
91 packages=find_packages(exclude=[]),
92 # Include cleanlab license file.
93 include_package_data=True,
94 package_data={
95 "": ["LICENSE"],
96 },
97 license_files=("LICENSE",),
98 cmdclass={"egg_info": egg_info_ex},
99 # List run-time dependencies here. These will be installed by pip when
100 # your project is installed. For an analysis of "install_requires" vs pip's
101 # requirements files see:
102 # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/
103 install_requires=[
104 "numpy>=1.20.0",
105 "scikit-learn>=1.0,<1.4.0",
106 "tqdm>=4.53.0",
107 "pandas>=1.1.5",
108 "termcolor>=2.0.0,<2.4.0",
109 ],
110 extras_require=EXTRAS_REQUIRE,
111 )
112
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -102,7 +102,7 @@
# https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/
install_requires=[
"numpy>=1.20.0",
- "scikit-learn>=1.0,<1.4.0",
+ "scikit-learn>=1.0",
"tqdm>=4.53.0",
"pandas>=1.1.5",
"termcolor>=2.0.0,<2.4.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -102,7 +102,7 @@\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n \"numpy>=1.20.0\",\n- \"scikit-learn>=1.0,<1.4.0\",\n+ \"scikit-learn>=1.0\",\n \"tqdm>=4.53.0\",\n \"pandas>=1.1.5\",\n \"termcolor>=2.0.0,<2.4.0\",\n", "issue": "Revert #961 before release\nTensorflow version temporarily has an upper bound (`tensorflow<2.16.0`) in requirements-dev.txt.\r\nscikit-learn version temporarily has an upper bound (`scikit-learn>=1.0,<1.4.0`) in setup.py\r\n\r\nThis needs to be reverted before releasing v2.6.0.\r\n\r\n\r\n _Originally posted by @elisno in https://github.com/cleanlab/cleanlab/issues/961#issuecomment-1898968097_\r\n \n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom setuptools.command.egg_info import egg_info\n\n# To use a consistent encoding\nfrom codecs import open\nfrom os import path\n\n\nclass egg_info_ex(egg_info):\n \"\"\"Includes license file into `.egg-info` folder.\"\"\"\n\n def run(self):\n # don't duplicate license into `.egg-info` when building a distribution\n if not self.distribution.have_run.get(\"install\", True):\n # `install` command is in progress, copy license\n self.mkpath(self.egg_info)\n self.copy_file(\"LICENSE\", self.egg_info)\n\n egg_info.run(self)\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith open(path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n# Get version number and store it in __version__\nexec(open(\"cleanlab/version.py\").read())\n\nDATALAB_REQUIRE = [\n # Mainly for Datalab's data storage class.\n # Still some type hints that require datasets\n \"datasets>=2.7.0\",\n]\n\nIMAGE_REQUIRE = DATALAB_REQUIRE + [\"cleanvision>=0.3.2\"]\n\nEXTRAS_REQUIRE = {\n \"datalab\": DATALAB_REQUIRE,\n \"image\": IMAGE_REQUIRE,\n \"all\": [\"matplotlib>=3.5.1\"],\n}\nEXTRAS_REQUIRE[\"all\"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))\n\nsetup(\n name=\"cleanlab\",\n version=__version__,\n license=\"AGPLv3+\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n description=\"The standard package for data-centric AI, machine learning with label errors, \"\n \"and automatically finding and fixing dataset issues in Python.\",\n url=\"https://cleanlab.ai\",\n project_urls={\n \"Documentation\": \"https://docs.cleanlab.ai\",\n \"Bug Tracker\": \"https://github.com/cleanlab/cleanlab/issues\",\n \"Source Code\": \"https://github.com/cleanlab/cleanlab\",\n },\n author=\"Cleanlab Inc.\",\n author_email=\"[email protected]\",\n # See https://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Natural Language :: English\",\n # We believe this package works will these versions, but we do not guarantee it!\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n \"Topic :: Scientific/Engineering\",\n 
\"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n python_requires=\">=3.7\",\n # What does your project relate to?\n keywords=\"machine_learning data_cleaning confident_learning classification weak_supervision \"\n \"learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric\",\n # You can just specify the packages manually here if your project is\n # simple. Or you can use find_packages().\n packages=find_packages(exclude=[]),\n # Include cleanlab license file.\n include_package_data=True,\n package_data={\n \"\": [\"LICENSE\"],\n },\n license_files=(\"LICENSE\",),\n cmdclass={\"egg_info\": egg_info_ex},\n # List run-time dependencies here. These will be installed by pip when\n # your project is installed. For an analysis of \"install_requires\" vs pip's\n # requirements files see:\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n \"numpy>=1.20.0\",\n \"scikit-learn>=1.0,<1.4.0\",\n \"tqdm>=4.53.0\",\n \"pandas>=1.1.5\",\n \"termcolor>=2.0.0,<2.4.0\",\n ],\n extras_require=EXTRAS_REQUIRE,\n)\n", "path": "setup.py"}]} | 1,866 | 140 |
gh_patches_debug_31014 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3608 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement `tables.delete` RPC method
</issue>
<code>
[start of mathesar/rpc/tables.py]
1 from typing import Optional, TypedDict
2
3 from modernrpc.core import rpc_method, REQUEST_KEY
4 from modernrpc.auth.basic import http_basic_auth_login_required
5
6 from db.tables.operations.select import get_table_info
7 from mathesar.rpc.exceptions.handlers import handle_rpc_exceptions
8 from mathesar.rpc.utils import connect
9
10
11 class TableInfo(TypedDict):
12 """
13 Information about a table.
14
15 Attributes:
16 oid: The `oid` of the table in the schema.
17 name: The name of the table.
18 schema: The `oid` of the schema where the table lives.
19 description: The description of the table.
20 """
21 oid: int
22 name: str
23 schema: int
24 description: Optional[str]
25
26
27 @rpc_method(name="tables.list")
28 @http_basic_auth_login_required
29 @handle_rpc_exceptions
30 def list_(*, schema_oid: int, database_id: int, **kwargs) -> list[TableInfo]:
31 """
32 List information about tables for a schema. Exposed as `list`.
33
34 Args:
35 schema_oid: Identity of the schema in the user's database.
36 database_id: The Django id of the database containing the table.
37
38 Returns:
39 A list of table details.
40 """
41 user = kwargs.get(REQUEST_KEY).user
42 with connect(database_id, user) as conn:
43 raw_table_info = get_table_info(schema_oid, conn)
44 return [
45 TableInfo(tab) for tab in raw_table_info
46 ]
47
[end of mathesar/rpc/tables.py]
[start of db/tables/operations/drop.py]
1 from db.connection import execute_msar_func_with_engine
2
3
4 def drop_table(name, schema, engine, cascade=False, if_exists=False):
5 execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)
6
[end of db/tables/operations/drop.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/tables/operations/drop.py b/db/tables/operations/drop.py
--- a/db/tables/operations/drop.py
+++ b/db/tables/operations/drop.py
@@ -1,5 +1,21 @@
-from db.connection import execute_msar_func_with_engine
+from db.connection import execute_msar_func_with_engine, exec_msar_func
def drop_table(name, schema, engine, cascade=False, if_exists=False):
execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)
+
+
+def drop_table_from_database(table_oid, conn, cascade=False):
+ """
+ Drop a table.
+
+ Args:
+ table_oid: OID of the table to drop.
+ cascade: Whether to drop the dependent objects.
+
+ Returns:
+ Returns the fully qualified name of the dropped table.
+ """
+ return exec_msar_func(
+ conn, 'drop_table', table_oid, cascade
+ ).fetchone()[0]
diff --git a/mathesar/rpc/tables.py b/mathesar/rpc/tables.py
--- a/mathesar/rpc/tables.py
+++ b/mathesar/rpc/tables.py
@@ -4,6 +4,7 @@
from modernrpc.auth.basic import http_basic_auth_login_required
from db.tables.operations.select import get_table_info
+from db.tables.operations.drop import drop_table_from_database
from mathesar.rpc.exceptions.handlers import handle_rpc_exceptions
from mathesar.rpc.utils import connect
@@ -44,3 +45,25 @@
return [
TableInfo(tab) for tab in raw_table_info
]
+
+
+@rpc_method(name="tables.delete")
+@http_basic_auth_login_required
+@handle_rpc_exceptions
+def delete(
+ *, table_oid: int, database_id: int, cascade: bool = False, **kwargs
+) -> str:
+ """
+ Delete a table from a schema.
+
+ Args:
+ table_oid: Identity of the table in the user's database.
+ database_id: The Django id of the database containing the table.
+ cascade: Whether to drop the dependent objects.
+
+ Returns:
+ The name of the dropped table.
+ """
+ user = kwargs.get(REQUEST_KEY).user
+ with connect(database_id, user) as conn:
+ return drop_table_from_database(table_oid, conn, cascade)
| {"golden_diff": "diff --git a/db/tables/operations/drop.py b/db/tables/operations/drop.py\n--- a/db/tables/operations/drop.py\n+++ b/db/tables/operations/drop.py\n@@ -1,5 +1,21 @@\n-from db.connection import execute_msar_func_with_engine\n+from db.connection import execute_msar_func_with_engine, exec_msar_func\n \n \n def drop_table(name, schema, engine, cascade=False, if_exists=False):\n execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)\n+\n+\n+def drop_table_from_database(table_oid, conn, cascade=False):\n+ \"\"\"\n+ Drop a table.\n+\n+ Args:\n+ table_oid: OID of the table to drop.\n+ cascade: Whether to drop the dependent objects.\n+\n+ Returns:\n+ Returns the fully qualified name of the dropped table.\n+ \"\"\"\n+ return exec_msar_func(\n+ conn, 'drop_table', table_oid, cascade\n+ ).fetchone()[0]\ndiff --git a/mathesar/rpc/tables.py b/mathesar/rpc/tables.py\n--- a/mathesar/rpc/tables.py\n+++ b/mathesar/rpc/tables.py\n@@ -4,6 +4,7 @@\n from modernrpc.auth.basic import http_basic_auth_login_required\n \n from db.tables.operations.select import get_table_info\n+from db.tables.operations.drop import drop_table_from_database\n from mathesar.rpc.exceptions.handlers import handle_rpc_exceptions\n from mathesar.rpc.utils import connect\n \n@@ -44,3 +45,25 @@\n return [\n TableInfo(tab) for tab in raw_table_info\n ]\n+\n+\n+@rpc_method(name=\"tables.delete\")\n+@http_basic_auth_login_required\n+@handle_rpc_exceptions\n+def delete(\n+ *, table_oid: int, database_id: int, cascade: bool = False, **kwargs\n+) -> str:\n+ \"\"\"\n+ Delete a table from a schema.\n+\n+ Args:\n+ table_oid: Identity of the table in the user's database.\n+ database_id: The Django id of the database containing the table.\n+ cascade: Whether to drop the dependent objects.\n+\n+ Returns:\n+ The name of the dropped table.\n+ \"\"\"\n+ user = kwargs.get(REQUEST_KEY).user\n+ with connect(database_id, user) as conn:\n+ return drop_table_from_database(table_oid, conn, cascade)\n", "issue": "Implement `tables.delete` RPC method\n\n", "before_files": [{"content": "from typing import Optional, TypedDict\n\nfrom modernrpc.core import rpc_method, REQUEST_KEY\nfrom modernrpc.auth.basic import http_basic_auth_login_required\n\nfrom db.tables.operations.select import get_table_info\nfrom mathesar.rpc.exceptions.handlers import handle_rpc_exceptions\nfrom mathesar.rpc.utils import connect\n\n\nclass TableInfo(TypedDict):\n \"\"\"\n Information about a table.\n\n Attributes:\n oid: The `oid` of the table in the schema.\n name: The name of the table.\n schema: The `oid` of the schema where the table lives.\n description: The description of the table.\n \"\"\"\n oid: int\n name: str\n schema: int\n description: Optional[str]\n\n\n@rpc_method(name=\"tables.list\")\n@http_basic_auth_login_required\n@handle_rpc_exceptions\ndef list_(*, schema_oid: int, database_id: int, **kwargs) -> list[TableInfo]:\n \"\"\"\n List information about tables for a schema. 
Exposed as `list`.\n\n Args:\n schema_oid: Identity of the schema in the user's database.\n database_id: The Django id of the database containing the table.\n\n Returns:\n A list of table details.\n \"\"\"\n user = kwargs.get(REQUEST_KEY).user\n with connect(database_id, user) as conn:\n raw_table_info = get_table_info(schema_oid, conn)\n return [\n TableInfo(tab) for tab in raw_table_info\n ]\n", "path": "mathesar/rpc/tables.py"}, {"content": "from db.connection import execute_msar_func_with_engine\n\n\ndef drop_table(name, schema, engine, cascade=False, if_exists=False):\n execute_msar_func_with_engine(engine, 'drop_table', schema, name, cascade, if_exists)\n", "path": "db/tables/operations/drop.py"}]} | 1,026 | 526 |
gh_patches_debug_2811 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4707 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rules/participate in project
As you can see in the test, the participate_project rule behaves a bit weird for project group members. I think they should also be allowed to participate. The question is what it is used for.
Cool! The participate_project rule is a bit unexpected, so we should check that out. Like where it is used and what for. But anyway, will merge for now and add an issue.
_Originally posted by @fuzzylogic2000 in https://github.com/liqd/a4-meinberlin/pull/4077#pullrequestreview-837466549_
</issue>
<code>
[start of meinberlin/apps/projects/rules.py]
1 import rules
2 from rules.predicates import is_superuser
3
4 from adhocracy4.organisations.predicates import is_initiator
5 from adhocracy4.projects.predicates import is_live
6 from adhocracy4.projects.predicates import is_moderator
7 from adhocracy4.projects.predicates import is_prj_group_member
8 from adhocracy4.projects.predicates import is_project_member
9 from adhocracy4.projects.predicates import is_public
10 from adhocracy4.projects.predicates import is_semipublic
11
12 rules.remove_perm('a4projects.view_project')
13 rules.add_perm('a4projects.view_project',
14 is_superuser | is_initiator |
15 is_moderator | is_prj_group_member |
16 ((is_public | is_semipublic | is_project_member)
17 & is_live))
18
19 rules.set_perm('a4projects.participate_in_project',
20 is_superuser | is_initiator | is_moderator |
21 ((is_public | is_project_member) & is_live))
22
[end of meinberlin/apps/projects/rules.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/projects/rules.py b/meinberlin/apps/projects/rules.py
--- a/meinberlin/apps/projects/rules.py
+++ b/meinberlin/apps/projects/rules.py
@@ -17,5 +17,6 @@
& is_live))
rules.set_perm('a4projects.participate_in_project',
- is_superuser | is_initiator | is_moderator |
+ is_superuser | is_initiator |
+ is_moderator | is_prj_group_member |
((is_public | is_project_member) & is_live))
| {"golden_diff": "diff --git a/meinberlin/apps/projects/rules.py b/meinberlin/apps/projects/rules.py\n--- a/meinberlin/apps/projects/rules.py\n+++ b/meinberlin/apps/projects/rules.py\n@@ -17,5 +17,6 @@\n & is_live))\n \n rules.set_perm('a4projects.participate_in_project',\n- is_superuser | is_initiator | is_moderator |\n+ is_superuser | is_initiator |\n+ is_moderator | is_prj_group_member |\n ((is_public | is_project_member) & is_live))\n", "issue": "rules/participate in project\nAs you can see in the test, the paricipate_project rule behaves a bit weird for project group members. I think, they should also be allowed to participate. The question is what it is used for.\r\n\r\nCool! The participate_project rule is a bit unexpected, so we should check that out. Like where it is used and what for. But anyway, will merge for now and add an issue.\r\n\r\n_Originally posted by @fuzzylogic2000 in https://github.com/liqd/a4-meinberlin/pull/4077#pullrequestreview-837466549_\n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.organisations.predicates import is_initiator\nfrom adhocracy4.projects.predicates import is_live\nfrom adhocracy4.projects.predicates import is_moderator\nfrom adhocracy4.projects.predicates import is_prj_group_member\nfrom adhocracy4.projects.predicates import is_project_member\nfrom adhocracy4.projects.predicates import is_public\nfrom adhocracy4.projects.predicates import is_semipublic\n\nrules.remove_perm('a4projects.view_project')\nrules.add_perm('a4projects.view_project',\n is_superuser | is_initiator |\n is_moderator | is_prj_group_member |\n ((is_public | is_semipublic | is_project_member)\n & is_live))\n\nrules.set_perm('a4projects.participate_in_project',\n is_superuser | is_initiator | is_moderator |\n ((is_public | is_project_member) & is_live))\n", "path": "meinberlin/apps/projects/rules.py"}]} | 917 | 125 |
gh_patches_debug_23064 | rasdani/github-patches | git_diff | modoboa__modoboa-515 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
handle_mailbox_operations command not working
Hello,
This is a new Modoboa 1.1.0 installation. When I try to run:
```
python /opt/modoboa_admin/manage.py handle_mailbox_operations
```
I get the following error:
```
NotDefined: Application 'admin' and/or parameter 'HANDLE_MAILBOXES' not defined
```
According to the [documentation](http://modoboa.readthedocs.org/en/1.1.0/getting_started/configuration.html#admin-params) there should be an option in Modoboa->Parameters->General to activate this HANDLE_MAILBOXES. But I don't see it anywhere.
I tried to outsmart the system by inserting the value in the lib_parameter table but no luck. I guess something else is required.
```
insert into lib_parameter (name, value) values ('admin.HANDLE_MAILBOXES', 'yes')
```
Am I missing something? Here is a screenshot of my admin interface, logged in as the default admin user:

</issue>
<code>
[start of modoboa/extensions/admin/app_settings.py]
1 from django import forms
2 from django.utils.translation import ugettext_lazy
3 from modoboa.lib.formutils import YesNoField, SeparatorField
4 from modoboa.lib.sysutils import exec_cmd
5 from modoboa.lib import parameters
6
7
8 class AdminParametersForm(parameters.AdminParametersForm):
9 app = "admin"
10
11 mbsep = SeparatorField(label=ugettext_lazy("Mailboxes"))
12
13 handle_mailboxes = YesNoField(
14 label=ugettext_lazy("Handle mailboxes on filesystem"),
15 initial="no",
16 help_text=ugettext_lazy("Rename or remove mailboxes on the filesystem when they get renamed or removed within Modoboa")
17 )
18
19 mailboxes_owner = forms.CharField(
20 label=ugettext_lazy("Mailboxes ower"),
21 initial="vmail",
22 help_text=ugettext_lazy("The UNIX account who owns mailboxes on the filesystem")
23 )
24
25 default_domain_quota = forms.IntegerField(
26 label=ugettext_lazy("Default domain quota"),
27 initial=0,
28 help_text=ugettext_lazy(
29 "Default quota (in MB) applied to freshly created domains with no "
30 "value specified. A value of 0 means no quota."
31 ),
32 widget=forms.TextInput(attrs={'class': 'span2'})
33 )
34
35 auto_account_removal = YesNoField(
36 label=ugettext_lazy("Automatic account removal"),
37 initial="no",
38 help_text=ugettext_lazy("When a mailbox is removed, also remove the associated account")
39 )
40
41 # Visibility rules
42 visibility_rules = {
43 "mailboxes_owner": "handle_mailboxes=yes",
44 }
45
46 def __init__(self, *args, **kwargs):
47 super(AdminParametersForm, self).__init__(*args, **kwargs)
48 hide_fields = False
49 code, output = exec_cmd("which dovecot")
50 if not code:
51 dpath = output.strip()
52 try:
53 code, version = exec_cmd("%s --version" % dpath)
54 except OSError:
55 hide_fields = True
56 else:
57 if code or not version.strip().startswith("2"):
58 hide_fields = True
59 else:
60 hide_fields = True
61 if hide_fields:
62 del self.fields["handle_mailboxes"]
63 del self.fields["mailboxes_owner"]
64
65 def clean_default_domain_quota(self):
66 """Ensure quota is a positive integer."""
67 if self.cleaned_data['default_domain_quota'] < 0:
68 raise forms.ValidationError(
69 ugettext_lazy('Must be a positive integer')
70 )
71 return self.cleaned_data['default_domain_quota']
72
[end of modoboa/extensions/admin/app_settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modoboa/extensions/admin/app_settings.py b/modoboa/extensions/admin/app_settings.py
--- a/modoboa/extensions/admin/app_settings.py
+++ b/modoboa/extensions/admin/app_settings.py
@@ -3,6 +3,7 @@
from modoboa.lib.formutils import YesNoField, SeparatorField
from modoboa.lib.sysutils import exec_cmd
from modoboa.lib import parameters
+import os
class AdminParametersForm(parameters.AdminParametersForm):
@@ -46,9 +47,16 @@
def __init__(self, *args, **kwargs):
super(AdminParametersForm, self).__init__(*args, **kwargs)
hide_fields = False
+ dpath = None
code, output = exec_cmd("which dovecot")
+ known_paths = ("/usr/sbin/dovecot", "/usr/local/sbin/dovecot")
if not code:
dpath = output.strip()
+ else:
+ for fpath in known_paths:
+ if os.path.isfile(fpath) and os.access(fpath, os.X_OK):
+ dpath = fpath
+ if dpath:
try:
code, version = exec_cmd("%s --version" % dpath)
except OSError:
| {"golden_diff": "diff --git a/modoboa/extensions/admin/app_settings.py b/modoboa/extensions/admin/app_settings.py\n--- a/modoboa/extensions/admin/app_settings.py\n+++ b/modoboa/extensions/admin/app_settings.py\n@@ -3,6 +3,7 @@\n from modoboa.lib.formutils import YesNoField, SeparatorField\n from modoboa.lib.sysutils import exec_cmd\n from modoboa.lib import parameters\n+import os\n \n \n class AdminParametersForm(parameters.AdminParametersForm):\n@@ -46,9 +47,16 @@\n def __init__(self, *args, **kwargs):\n super(AdminParametersForm, self).__init__(*args, **kwargs)\n hide_fields = False\n+ dpath = None\n code, output = exec_cmd(\"which dovecot\")\n+ known_paths = (\"/usr/sbin/dovecot\", \"/usr/local/sbin/dovecot\")\n if not code:\n dpath = output.strip()\n+ else:\n+ for fpath in known_paths:\n+ if os.path.isfile(fpath) and os.access(fpath, os.X_OK):\n+ dpath = fpath\n+ if dpath:\n try:\n code, version = exec_cmd(\"%s --version\" % dpath)\n except OSError:\n", "issue": "handle_mailbox_operations command not working\nHello,\n\nThis is a new Modoboa 1.1.0 installation. When I try to run:\n\n```\npython /opt/modoboa_admin/manage.py handle_mailbox_operations\n```\n\nI get the following error:\n\n```\nNotDefined: Application 'admin' and/or parameter 'HANDLE_MAILBOXES' not defined\n```\n\nAccording to the [documentation](http://modoboa.readthedocs.org/en/1.1.0/getting_started/configuration.html#admin-params) there should be an option in Modoboa->Parameters->General to activate this HANDLE_MAILBOXES. But I don't see it anywhere.\n\nI tried to outsmart the system by inserting the value in the lib_parameter table but no luck. I guess something else is required.\n\n```\ninsert into lib_parameter (name, value) values ('admin.HANDLE_MAILBOXES', 'yes')\n```\n\nAm I missing something ? Here is the screenshot of my admin interface, logged as the default admin user:\n\n\n", "before_files": [{"content": "from django import forms\nfrom django.utils.translation import ugettext_lazy\nfrom modoboa.lib.formutils import YesNoField, SeparatorField\nfrom modoboa.lib.sysutils import exec_cmd\nfrom modoboa.lib import parameters\n\n\nclass AdminParametersForm(parameters.AdminParametersForm):\n app = \"admin\"\n\n mbsep = SeparatorField(label=ugettext_lazy(\"Mailboxes\"))\n\n handle_mailboxes = YesNoField(\n label=ugettext_lazy(\"Handle mailboxes on filesystem\"),\n initial=\"no\",\n help_text=ugettext_lazy(\"Rename or remove mailboxes on the filesystem when they get renamed or removed within Modoboa\")\n )\n\n mailboxes_owner = forms.CharField(\n label=ugettext_lazy(\"Mailboxes ower\"),\n initial=\"vmail\",\n help_text=ugettext_lazy(\"The UNIX account who owns mailboxes on the filesystem\")\n )\n\n default_domain_quota = forms.IntegerField(\n label=ugettext_lazy(\"Default domain quota\"),\n initial=0,\n help_text=ugettext_lazy(\n \"Default quota (in MB) applied to freshly created domains with no \"\n \"value specified. 
A value of 0 means no quota.\"\n ),\n widget=forms.TextInput(attrs={'class': 'span2'})\n )\n\n auto_account_removal = YesNoField(\n label=ugettext_lazy(\"Automatic account removal\"),\n initial=\"no\",\n help_text=ugettext_lazy(\"When a mailbox is removed, also remove the associated account\")\n )\n\n # Visibility rules\n visibility_rules = {\n \"mailboxes_owner\": \"handle_mailboxes=yes\",\n }\n\n def __init__(self, *args, **kwargs):\n super(AdminParametersForm, self).__init__(*args, **kwargs)\n hide_fields = False\n code, output = exec_cmd(\"which dovecot\")\n if not code:\n dpath = output.strip()\n try:\n code, version = exec_cmd(\"%s --version\" % dpath)\n except OSError:\n hide_fields = True\n else:\n if code or not version.strip().startswith(\"2\"):\n hide_fields = True\n else:\n hide_fields = True\n if hide_fields:\n del self.fields[\"handle_mailboxes\"]\n del self.fields[\"mailboxes_owner\"]\n\n def clean_default_domain_quota(self):\n \"\"\"Ensure quota is a positive integer.\"\"\"\n if self.cleaned_data['default_domain_quota'] < 0:\n raise forms.ValidationError(\n ugettext_lazy('Must be a positive integer')\n )\n return self.cleaned_data['default_domain_quota']\n", "path": "modoboa/extensions/admin/app_settings.py"}]} | 1,486 | 272 |
gh_patches_debug_25467 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-175 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IntegrityError in language tree
I just found a bug causing an `IntegrityError` in the language tree. The error can be reproduced reliably in the current state of the develop branch.
Steps to reproduce:
- In the network admin view:
- Create a new region
  - Create at least two languages (in the following steps, we assume the two languages to be German and English; this works with any other languages as well)
- In the region view (in the region we just created):
- Create a new language node for the base language (German in this example)
- **Bug occurs in the next steps, therefore I provide a more precise description of the following steps:** in the language tree view, click on "create language tree node"
- Choose "English" as language, "German" as source language, check the checkbox for language activation
- click on "save", a success message should show up
- click on "save" again without changing any form fields
- now the form fields should have the following contents:
- language: "English"
- source language: "German"
- activate language: is checked (`true`)
- change language field to "German", as all languages can be chosen again
- now the form fields should have the following contents:
- language: "German"
- source language: "German"
- activate language: is checked (`true`)
- click on "save" again
- `IntegrityError` occurs
</issue>
<code>
[start of backend/cms/views/language_tree/language_tree_node.py]
1 """
2
3 Returns:
4 [type]: [description]
5 """
6
7 from django.contrib import messages
8 from django.contrib.auth.decorators import login_required
9 from django.contrib.auth.mixins import PermissionRequiredMixin
10 from django.utils.translation import ugettext as _
11 from django.utils.decorators import method_decorator
12 from django.views.generic import TemplateView
13 from django.shortcuts import render, redirect
14
15 from .language_tree_node_form import LanguageTreeNodeForm
16 from ...models import Language, LanguageTreeNode, Site
17 from ...decorators import region_permission_required
18
19
20 @method_decorator(login_required, name='dispatch')
21 @method_decorator(region_permission_required, name='dispatch')
22 class LanguageTreeNodeView(PermissionRequiredMixin, TemplateView):
23 permission_required = 'cms.manage_language_tree'
24 raise_exception = True
25
26 template_name = 'language_tree/tree_node.html'
27 base_context = {'current_menu_item': 'language_tree'}
28
29 def get(self, request, *args, **kwargs):
30 language_tree_node_id = self.kwargs.get('language_tree_node_id')
31 # limit possible parents to nodes of current region
32 parent_queryset = Site.get_current_site(request).language_tree_nodes
33 # limit possible languages to those which are not yet included in the tree
34 language_queryset = Language.objects.exclude(
35 language_tree_nodes__in=parent_queryset.exclude(id=language_tree_node_id)
36 )
37 if language_tree_node_id:
38 language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)
39 children = language_tree_node.get_descendants(include_self=True)
40 parent_queryset = parent_queryset.difference(children)
41 form = LanguageTreeNodeForm(initial={
42 'language': language_tree_node.language,
43 'parent': language_tree_node.parent,
44 'active': language_tree_node.active,
45 })
46 else:
47 form = LanguageTreeNodeForm()
48 form.fields['parent'].queryset = parent_queryset
49 form.fields['language'].queryset = language_queryset
50 return render(request, self.template_name, {
51 'form': form, **self.base_context})
52
53 def post(self, request, site_slug, language_tree_node_id=None):
54 # TODO: error handling
55 form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)
56 if form.is_valid():
57 if language_tree_node_id:
58 form.save_language_node(
59 language_tree_node_id=language_tree_node_id,
60 )
61 messages.success(request, _('Language tree node was saved successfully.'))
62 else:
63 language_tree_node = form.save_language_node()
64 messages.success(request, _('Language tree node was created successfully.'))
65 return redirect('edit_language_tree_node', **{
66 'language_tree_node_id': language_tree_node.id,
67 'site_slug': site_slug,
68 })
69 # TODO: improve messages
70 else:
71 messages.error(request, _('Errors have occurred.'))
72
73 return render(request, self.template_name, {
74 'form': form, **self.base_context})
75
[end of backend/cms/views/language_tree/language_tree_node.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/cms/views/language_tree/language_tree_node.py b/backend/cms/views/language_tree/language_tree_node.py
--- a/backend/cms/views/language_tree/language_tree_node.py
+++ b/backend/cms/views/language_tree/language_tree_node.py
@@ -55,17 +55,17 @@
form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)
if form.is_valid():
if language_tree_node_id:
- form.save_language_node(
+ language_tree_node = form.save_language_node(
language_tree_node_id=language_tree_node_id,
)
messages.success(request, _('Language tree node was saved successfully.'))
else:
language_tree_node = form.save_language_node()
messages.success(request, _('Language tree node was created successfully.'))
- return redirect('edit_language_tree_node', **{
- 'language_tree_node_id': language_tree_node.id,
- 'site_slug': site_slug,
- })
+ return redirect('edit_language_tree_node', **{
+ 'language_tree_node_id': language_tree_node.id,
+ 'site_slug': site_slug,
+ })
# TODO: improve messages
else:
messages.error(request, _('Errors have occurred.'))
| {"golden_diff": "diff --git a/backend/cms/views/language_tree/language_tree_node.py b/backend/cms/views/language_tree/language_tree_node.py\n--- a/backend/cms/views/language_tree/language_tree_node.py\n+++ b/backend/cms/views/language_tree/language_tree_node.py\n@@ -55,17 +55,17 @@\n form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)\n if form.is_valid():\n if language_tree_node_id:\n- form.save_language_node(\n+ language_tree_node = form.save_language_node(\n language_tree_node_id=language_tree_node_id,\n )\n messages.success(request, _('Language tree node was saved successfully.'))\n else:\n language_tree_node = form.save_language_node()\n messages.success(request, _('Language tree node was created successfully.'))\n- return redirect('edit_language_tree_node', **{\n- 'language_tree_node_id': language_tree_node.id,\n- 'site_slug': site_slug,\n- })\n+ return redirect('edit_language_tree_node', **{\n+ 'language_tree_node_id': language_tree_node.id,\n+ 'site_slug': site_slug,\n+ })\n # TODO: improve messages\n else:\n messages.error(request, _('Errors have occurred.'))\n", "issue": "IntegrityError in language tree\nI just found a bug causing an `IntegrityError` in the language tree. The error can be reproduced reliably in the current state of the develop branch.\r\n\r\nSteps to reproduce:\r\n- In the network admin view:\r\n - Create a new region\r\n - Create at least two languages (in the following steps, we assume the two languages to be German and Englisch, works with any other languages as well)\r\n- In the region view (in the region we just created):\r\n - Create a new language node for the base language (German in this example)\r\n - **Bug occurs in the next steps, therefore I provide a more precise description of the following steps:** in the language tree view, click on \"create language tree node\"\r\n - Choose \"English\" as language, \"German\" as source language, check the checkbox for language activation\r\n - click on \"save\", a success message should show up\r\n - click on \"save\" again without changing any form fields\r\n - now the form fields should have the following contents:\r\n - language: \"English\"\r\n - source language: \"German\"\r\n - activate language: is checked (`true`)\r\n - change language field to \"German\", as all languages can be chosen again\r\n - now the form fields should have the following contents:\r\n - language: \"German\"\r\n - source language: \"German\"\r\n - activate language: is checked (`true`)\r\n - click on \"save\" again\r\n - `IntegrityError` occurs\n", "before_files": [{"content": "\"\"\"\n\nReturns:\n [type]: [description]\n\"\"\"\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.utils.translation import ugettext as _\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import TemplateView\nfrom django.shortcuts import render, redirect\n\nfrom .language_tree_node_form import LanguageTreeNodeForm\nfrom ...models import Language, LanguageTreeNode, Site\nfrom ...decorators import region_permission_required\n\n\n@method_decorator(login_required, name='dispatch')\n@method_decorator(region_permission_required, name='dispatch')\nclass LanguageTreeNodeView(PermissionRequiredMixin, TemplateView):\n permission_required = 'cms.manage_language_tree'\n raise_exception = True\n\n template_name = 'language_tree/tree_node.html'\n base_context = {'current_menu_item': 
'language_tree'}\n\n def get(self, request, *args, **kwargs):\n language_tree_node_id = self.kwargs.get('language_tree_node_id')\n # limit possible parents to nodes of current region\n parent_queryset = Site.get_current_site(request).language_tree_nodes\n # limit possible languages to those which are not yet included in the tree\n language_queryset = Language.objects.exclude(\n language_tree_nodes__in=parent_queryset.exclude(id=language_tree_node_id)\n )\n if language_tree_node_id:\n language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)\n children = language_tree_node.get_descendants(include_self=True)\n parent_queryset = parent_queryset.difference(children)\n form = LanguageTreeNodeForm(initial={\n 'language': language_tree_node.language,\n 'parent': language_tree_node.parent,\n 'active': language_tree_node.active,\n })\n else:\n form = LanguageTreeNodeForm()\n form.fields['parent'].queryset = parent_queryset\n form.fields['language'].queryset = language_queryset\n return render(request, self.template_name, {\n 'form': form, **self.base_context})\n\n def post(self, request, site_slug, language_tree_node_id=None):\n # TODO: error handling\n form = LanguageTreeNodeForm(data=request.POST, site_slug=site_slug)\n if form.is_valid():\n if language_tree_node_id:\n form.save_language_node(\n language_tree_node_id=language_tree_node_id,\n )\n messages.success(request, _('Language tree node was saved successfully.'))\n else:\n language_tree_node = form.save_language_node()\n messages.success(request, _('Language tree node was created successfully.'))\n return redirect('edit_language_tree_node', **{\n 'language_tree_node_id': language_tree_node.id,\n 'site_slug': site_slug,\n })\n # TODO: improve messages\n else:\n messages.error(request, _('Errors have occurred.'))\n\n return render(request, self.template_name, {\n 'form': form, **self.base_context})\n", "path": "backend/cms/views/language_tree/language_tree_node.py"}]} | 1,604 | 259 |
gh_patches_debug_43552 | rasdani/github-patches | git_diff | streamlink__streamlink-4202 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.ard_mediathek: rewrite plugin
Resolves #4191
One issue I couldn't fix is the text encoding of the metadata which gets messed up by `validate.parse_html()`. See the VOD title down below...
https://github.com/streamlink/streamlink/blob/175d4748561c7154bb80c5a47dae22039e45d4ce/src/streamlink/utils/parse.py#L54-L55
Some VODs also have a second title, eg. if it's a TV show, but I couldn't be bothered to implement this. Not important.
----
Das Erste - Live:
```
$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/' best
[cli.output][debug] Opening subprocess: mpv "--force-media-title=Das Erste - Das Erste" -
```
WDR - Live:
```
$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/live/Y3JpZDovL3dkci5kZS9CZWl0cmFnLTNkYTY2NGRlLTE4YzItNDY1MC1hNGZmLTRmNjQxNDcyMDcyYg/' best
[cli.output][debug] Opening subprocess: mpv "--force-media-title=WDR - WDR Fernsehen im Livestream" -
```
VOD
```
$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/video/dokus-im-ersten/wirecard-die-milliarden-luege/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3JlcG9ydGFnZSBfIGRva3VtZW50YXRpb24gaW0gZXJzdGVuL2NlMjQ0OWM4LTQ4YTUtNGIyNC1iMTdlLWNhOTNjMDQ5OTc4Zg/' best
[cli.output][debug] Opening subprocess: mpv "--force-media-title=Das Erste - Wirecard - Die Milliarden-Lüge" -
```
</issue>
<code>
[start of src/streamlink/plugins/ard_mediathek.py]
1 import logging
2 import re
3
4 from streamlink.plugin import Plugin, pluginmatcher
5 from streamlink.plugin.api import validate
6 from streamlink.stream.hls import HLSStream
7
8
9 log = logging.getLogger(__name__)
10
11
12 @pluginmatcher(re.compile(
13 r"https?://(?:(\w+\.)?ardmediathek\.de/|mediathek\.daserste\.de/)"
14 ))
15 class ARDMediathek(Plugin):
16 def _get_streams(self):
17 data_json = self.session.http.get(self.url, schema=validate.Schema(
18 validate.parse_html(),
19 validate.xml_findtext(".//script[@id='fetchedContextValue'][@type='application/json']"),
20 validate.any(None, validate.all(
21 validate.parse_json(),
22 {str: dict},
23 validate.transform(lambda obj: list(obj.items())),
24 validate.filter(lambda item: item[0].startswith("https://api.ardmediathek.de/page-gateway/pages/")),
25 validate.any(validate.get((0, 1)), [])
26 ))
27 ))
28 if not data_json:
29 return
30
31 schema_data = validate.Schema({
32 "id": str,
33 "widgets": validate.all(
34 [dict],
35 validate.filter(lambda item: item.get("mediaCollection")),
36 validate.get(0),
37 {
38 "geoblocked": bool,
39 "publicationService": {
40 "name": str,
41 },
42 "title": str,
43 "mediaCollection": {
44 "embedded": {
45 "_mediaArray": [{
46 "_mediaStreamArray": [{
47 "_quality": validate.any(str, int),
48 "_stream": validate.url()
49 }]
50 }]
51 }
52 }
53 }
54 )
55 })
56 data = schema_data.validate(data_json)
57
58 log.debug(f"Found media id: {data['id']}")
59 data_media = data["widgets"]
60
61 if data_media["geoblocked"]:
62 log.info("The content is not available in your region")
63 return
64
65 self.author = data_media["publicationService"]["name"]
66 self.title = data_media["title"]
67
68 for media in data_media["mediaCollection"]["embedded"]["_mediaArray"]:
69 for stream in media["_mediaStreamArray"]:
70 if stream["_quality"] != "auto" or ".m3u8" not in stream["_stream"]:
71 continue
72 return HLSStream.parse_variant_playlist(self.session, stream["_stream"])
73
74
75 __plugin__ = ARDMediathek
76
[end of src/streamlink/plugins/ard_mediathek.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py
--- a/src/streamlink/plugins/ard_mediathek.py
+++ b/src/streamlink/plugins/ard_mediathek.py
@@ -4,6 +4,7 @@
from streamlink.plugin import Plugin, pluginmatcher
from streamlink.plugin.api import validate
from streamlink.stream.hls import HLSStream
+from streamlink.stream.http import HTTPStream
log = logging.getLogger(__name__)
@@ -13,6 +14,14 @@
r"https?://(?:(\w+\.)?ardmediathek\.de/|mediathek\.daserste\.de/)"
))
class ARDMediathek(Plugin):
+ _QUALITY_MAP = {
+ 4: "1080p",
+ 3: "720p",
+ 2: "540p",
+ 1: "360p",
+ 0: "270p"
+ }
+
def _get_streams(self):
data_json = self.session.http.get(self.url, schema=validate.Schema(
validate.parse_html(),
@@ -34,42 +43,64 @@
[dict],
validate.filter(lambda item: item.get("mediaCollection")),
validate.get(0),
- {
- "geoblocked": bool,
- "publicationService": {
- "name": str,
+ validate.any(None, validate.all(
+ {
+ "geoblocked": bool,
+ "publicationService": {
+ "name": str,
+ },
+ "show": validate.any(None, validate.all(
+ {"title": str},
+ validate.get("title")
+ )),
+ "title": str,
+ "mediaCollection": {
+ "embedded": {
+ "_mediaArray": [validate.all(
+ {
+ "_mediaStreamArray": [validate.all(
+ {
+ "_quality": validate.any(str, int),
+ "_stream": validate.url(),
+ },
+ validate.union_get("_quality", "_stream")
+ )]
+ },
+ validate.get("_mediaStreamArray"),
+ validate.transform(dict)
+ )]
+ }
+ },
},
- "title": str,
- "mediaCollection": {
- "embedded": {
- "_mediaArray": [{
- "_mediaStreamArray": [{
- "_quality": validate.any(str, int),
- "_stream": validate.url()
- }]
- }]
- }
- }
- }
+ validate.union_get(
+ "geoblocked",
+ ("mediaCollection", "embedded", "_mediaArray", 0),
+ ("publicationService", "name"),
+ "title",
+ "show",
+ )
+ ))
)
})
data = schema_data.validate(data_json)
log.debug(f"Found media id: {data['id']}")
- data_media = data["widgets"]
+ if not data["widgets"]:
+ log.info("The content is unavailable")
+ return
- if data_media["geoblocked"]:
+ geoblocked, media, self.author, self.title, show = data["widgets"]
+ if geoblocked:
log.info("The content is not available in your region")
return
+ if show:
+ self.title = f"{show}: {self.title}"
- self.author = data_media["publicationService"]["name"]
- self.title = data_media["title"]
-
- for media in data_media["mediaCollection"]["embedded"]["_mediaArray"]:
- for stream in media["_mediaStreamArray"]:
- if stream["_quality"] != "auto" or ".m3u8" not in stream["_stream"]:
- continue
- return HLSStream.parse_variant_playlist(self.session, stream["_stream"])
+ if media.get("auto"):
+ yield from HLSStream.parse_variant_playlist(self.session, media.get("auto")).items()
+ else:
+ for quality, stream in media.items():
+ yield self._QUALITY_MAP.get(quality, quality), HTTPStream(self.session, stream)
__plugin__ = ARDMediathek
| {"golden_diff": "diff --git a/src/streamlink/plugins/ard_mediathek.py b/src/streamlink/plugins/ard_mediathek.py\n--- a/src/streamlink/plugins/ard_mediathek.py\n+++ b/src/streamlink/plugins/ard_mediathek.py\n@@ -4,6 +4,7 @@\n from streamlink.plugin import Plugin, pluginmatcher\n from streamlink.plugin.api import validate\n from streamlink.stream.hls import HLSStream\n+from streamlink.stream.http import HTTPStream\n \n \n log = logging.getLogger(__name__)\n@@ -13,6 +14,14 @@\n r\"https?://(?:(\\w+\\.)?ardmediathek\\.de/|mediathek\\.daserste\\.de/)\"\n ))\n class ARDMediathek(Plugin):\n+ _QUALITY_MAP = {\n+ 4: \"1080p\",\n+ 3: \"720p\",\n+ 2: \"540p\",\n+ 1: \"360p\",\n+ 0: \"270p\"\n+ }\n+\n def _get_streams(self):\n data_json = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n@@ -34,42 +43,64 @@\n [dict],\n validate.filter(lambda item: item.get(\"mediaCollection\")),\n validate.get(0),\n- {\n- \"geoblocked\": bool,\n- \"publicationService\": {\n- \"name\": str,\n+ validate.any(None, validate.all(\n+ {\n+ \"geoblocked\": bool,\n+ \"publicationService\": {\n+ \"name\": str,\n+ },\n+ \"show\": validate.any(None, validate.all(\n+ {\"title\": str},\n+ validate.get(\"title\")\n+ )),\n+ \"title\": str,\n+ \"mediaCollection\": {\n+ \"embedded\": {\n+ \"_mediaArray\": [validate.all(\n+ {\n+ \"_mediaStreamArray\": [validate.all(\n+ {\n+ \"_quality\": validate.any(str, int),\n+ \"_stream\": validate.url(),\n+ },\n+ validate.union_get(\"_quality\", \"_stream\")\n+ )]\n+ },\n+ validate.get(\"_mediaStreamArray\"),\n+ validate.transform(dict)\n+ )]\n+ }\n+ },\n },\n- \"title\": str,\n- \"mediaCollection\": {\n- \"embedded\": {\n- \"_mediaArray\": [{\n- \"_mediaStreamArray\": [{\n- \"_quality\": validate.any(str, int),\n- \"_stream\": validate.url()\n- }]\n- }]\n- }\n- }\n- }\n+ validate.union_get(\n+ \"geoblocked\",\n+ (\"mediaCollection\", \"embedded\", \"_mediaArray\", 0),\n+ (\"publicationService\", \"name\"),\n+ \"title\",\n+ \"show\",\n+ )\n+ ))\n )\n })\n data = schema_data.validate(data_json)\n \n log.debug(f\"Found media id: {data['id']}\")\n- data_media = data[\"widgets\"]\n+ if not data[\"widgets\"]:\n+ log.info(\"The content is unavailable\")\n+ return\n \n- if data_media[\"geoblocked\"]:\n+ geoblocked, media, self.author, self.title, show = data[\"widgets\"]\n+ if geoblocked:\n log.info(\"The content is not available in your region\")\n return\n+ if show:\n+ self.title = f\"{show}: {self.title}\"\n \n- self.author = data_media[\"publicationService\"][\"name\"]\n- self.title = data_media[\"title\"]\n-\n- for media in data_media[\"mediaCollection\"][\"embedded\"][\"_mediaArray\"]:\n- for stream in media[\"_mediaStreamArray\"]:\n- if stream[\"_quality\"] != \"auto\" or \".m3u8\" not in stream[\"_stream\"]:\n- continue\n- return HLSStream.parse_variant_playlist(self.session, stream[\"_stream\"])\n+ if media.get(\"auto\"):\n+ yield from HLSStream.parse_variant_playlist(self.session, media.get(\"auto\")).items()\n+ else:\n+ for quality, stream in media.items():\n+ yield self._QUALITY_MAP.get(quality, quality), HTTPStream(self.session, stream)\n \n \n __plugin__ = ARDMediathek\n", "issue": "plugins.ard_mediathek: rewrite plugin\nResolves #4191 \r\n\r\nOne issue I couldn't fix is the text encoding of the metadata which gets messed up by `validate.parse_html()`. See the VOD title down below...\r\nhttps://github.com/streamlink/streamlink/blob/175d4748561c7154bb80c5a47dae22039e45d4ce/src/streamlink/utils/parse.py#L54-L55\r\n\r\nSome VODs also have a second title, eg. 
if it's a TV show, but I couldn't be bothered to implement this. Not important.\r\n\r\n----\r\n\r\nDas Erste - Live:\r\n```\r\n$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/daserste/live/Y3JpZDovL2Rhc2Vyc3RlLmRlL0xpdmVzdHJlYW0tRGFzRXJzdGU/' best\r\n[cli.output][debug] Opening subprocess: mpv \"--force-media-title=Das Erste - Das Erste\" -\r\n```\r\n\r\nWDR - Live:\r\n```\r\n$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/live/Y3JpZDovL3dkci5kZS9CZWl0cmFnLTNkYTY2NGRlLTE4YzItNDY1MC1hNGZmLTRmNjQxNDcyMDcyYg/' best\r\n[cli.output][debug] Opening subprocess: mpv \"--force-media-title=WDR - WDR Fernsehen im Livestream\" -\r\n```\r\n\r\nVOD\r\n```\r\n$ streamlink -l debug --title '{author} - {title}' 'https://www.ardmediathek.de/video/dokus-im-ersten/wirecard-die-milliarden-luege/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3JlcG9ydGFnZSBfIGRva3VtZW50YXRpb24gaW0gZXJzdGVuL2NlMjQ0OWM4LTQ4YTUtNGIyNC1iMTdlLWNhOTNjMDQ5OTc4Zg/' best\r\n[cli.output][debug] Opening subprocess: mpv \"--force-media-title=Das Erste - Wirecard - Die Milliarden-L\u00c3\u00bcge\" -\r\n```\n", "before_files": [{"content": "import logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:(\\w+\\.)?ardmediathek\\.de/|mediathek\\.daserste\\.de/)\"\n))\nclass ARDMediathek(Plugin):\n def _get_streams(self):\n data_json = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_findtext(\".//script[@id='fetchedContextValue'][@type='application/json']\"),\n validate.any(None, validate.all(\n validate.parse_json(),\n {str: dict},\n validate.transform(lambda obj: list(obj.items())),\n validate.filter(lambda item: item[0].startswith(\"https://api.ardmediathek.de/page-gateway/pages/\")),\n validate.any(validate.get((0, 1)), [])\n ))\n ))\n if not data_json:\n return\n\n schema_data = validate.Schema({\n \"id\": str,\n \"widgets\": validate.all(\n [dict],\n validate.filter(lambda item: item.get(\"mediaCollection\")),\n validate.get(0),\n {\n \"geoblocked\": bool,\n \"publicationService\": {\n \"name\": str,\n },\n \"title\": str,\n \"mediaCollection\": {\n \"embedded\": {\n \"_mediaArray\": [{\n \"_mediaStreamArray\": [{\n \"_quality\": validate.any(str, int),\n \"_stream\": validate.url()\n }]\n }]\n }\n }\n }\n )\n })\n data = schema_data.validate(data_json)\n\n log.debug(f\"Found media id: {data['id']}\")\n data_media = data[\"widgets\"]\n\n if data_media[\"geoblocked\"]:\n log.info(\"The content is not available in your region\")\n return\n\n self.author = data_media[\"publicationService\"][\"name\"]\n self.title = data_media[\"title\"]\n\n for media in data_media[\"mediaCollection\"][\"embedded\"][\"_mediaArray\"]:\n for stream in media[\"_mediaStreamArray\"]:\n if stream[\"_quality\"] != \"auto\" or \".m3u8\" not in stream[\"_stream\"]:\n continue\n return HLSStream.parse_variant_playlist(self.session, stream[\"_stream\"])\n\n\n__plugin__ = ARDMediathek\n", "path": "src/streamlink/plugins/ard_mediathek.py"}]} | 1,765 | 923 |
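For the row above, the heart of the rewritten plugin is the last step of `_get_streams()`: the validated `media` dict either carries an `"auto"` HLS master playlist or a set of numeric qualities pointing at progressive files. A condensed sketch of that selection logic, using the same names and imports as the patch (illustrative only):

```python
from streamlink.stream.hls import HLSStream
from streamlink.stream.http import HTTPStream

# Maps ARD's numeric quality levels to resolution labels, as in the patch.
_QUALITY_MAP = {4: "1080p", 3: "720p", 2: "540p", 1: "360p", 0: "270p"}

def streams_from_media(session, media):
    # `media` is the dict built from "_mediaStreamArray",
    # e.g. {"auto": "<hls master url>"} or {2: "<mp4 url>", 1: "<mp4 url>"}.
    if media.get("auto"):
        # "auto" points at an HLS master playlist -> expand into variant streams.
        yield from HLSStream.parse_variant_playlist(session, media["auto"]).items()
    else:
        for quality, url in media.items():
            yield _QUALITY_MAP.get(quality, quality), HTTPStream(session, url)
```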
gh_patches_debug_13091 | rasdani/github-patches | git_diff | PrefectHQ__prefect-11999 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
no import statement for wait_for_flow_run
### First check
- [X] I added a descriptive title to this issue.
- [X] I used GitHub search to find a similar request and didn't find it 😇
### Describe the issue
There is no import statement for wait_for_flow_run so typing this code into pycharm shows wait_for_flow_run as an error. Searching the internets, the import statement used to be
_from prefect.tasks.prefect import wait_for_flow_run_
yeah, that doesn't work anymore.
### Describe the proposed change
put the correct import statement in the docs which is
_from prefect.flow_runs import wait_for_flow_run_
</issue>
<code>
[start of src/prefect/flow_runs.py]
1 from typing import Optional
2 from uuid import UUID
3
4 import anyio
5
6 from prefect.client.orchestration import PrefectClient
7 from prefect.client.schemas import FlowRun
8 from prefect.client.utilities import inject_client
9 from prefect.exceptions import FlowRunWaitTimeout
10 from prefect.logging import get_logger
11
12
13 @inject_client
14 async def wait_for_flow_run(
15 flow_run_id: UUID,
16 timeout: Optional[int] = 10800,
17 poll_interval: int = 5,
18 client: Optional[PrefectClient] = None,
19 log_states: bool = False,
20 ) -> FlowRun:
21 """
22 Waits for the prefect flow run to finish and returns the FlowRun
23
24 Args:
25 flow_run_id: The flow run ID for the flow run to wait for.
26 timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).
27 poll_interval: The poll interval in seconds. Defaults to 5.
28
29 Returns:
30 FlowRun: The finished flow run.
31
32 Raises:
33 prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.
34
35 Examples:
36 Create a flow run for a deployment and wait for it to finish:
37 ```python
38 import asyncio
39
40 from prefect import get_client
41
42 async def main():
43 async with get_client() as client:
44 flow_run = await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")
45 flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)
46 print(flow_run.state)
47
48 if __name__ == "__main__":
49 asyncio.run(main())
50
51 ```
52
53 Trigger multiple flow runs and wait for them to finish:
54 ```python
55 import asyncio
56
57 from prefect import get_client
58
59 async def main(num_runs: int):
60 async with get_client() as client:
61 flow_runs = [
62 await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")
63 for _
64 in range(num_runs)
65 ]
66 coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]
67 finished_flow_runs = await asyncio.gather(*coros)
68 print([flow_run.state for flow_run in finished_flow_runs])
69
70 if __name__ == "__main__":
71 asyncio.run(main(num_runs=10))
72
73 ```
74 """
75 assert client is not None, "Client injection failed"
76 logger = get_logger()
77 with anyio.move_on_after(timeout):
78 while True:
79 flow_run = await client.read_flow_run(flow_run_id)
80 flow_state = flow_run.state
81 if log_states:
82 logger.info(f"Flow run is in state {flow_run.state.name!r}")
83 if flow_state and flow_state.is_final():
84 return flow_run
85 await anyio.sleep(poll_interval)
86 raise FlowRunWaitTimeout(
87 f"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds"
88 )
89
[end of src/prefect/flow_runs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/flow_runs.py b/src/prefect/flow_runs.py
--- a/src/prefect/flow_runs.py
+++ b/src/prefect/flow_runs.py
@@ -38,6 +38,7 @@
import asyncio
from prefect import get_client
+ from prefect.flow_runs import wait_for_flow_run
async def main():
async with get_client() as client:
@@ -55,6 +56,7 @@
import asyncio
from prefect import get_client
+ from prefect.flow_runs import wait_for_flow_run
async def main(num_runs: int):
async with get_client() as client:
| {"golden_diff": "diff --git a/src/prefect/flow_runs.py b/src/prefect/flow_runs.py\n--- a/src/prefect/flow_runs.py\n+++ b/src/prefect/flow_runs.py\n@@ -38,6 +38,7 @@\n import asyncio\n \n from prefect import get_client\n+ from prefect.flow_runs import wait_for_flow_run\n \n async def main():\n async with get_client() as client:\n@@ -55,6 +56,7 @@\n import asyncio\n \n from prefect import get_client\n+ from prefect.flow_runs import wait_for_flow_run\n \n async def main(num_runs: int):\n async with get_client() as client:\n", "issue": "no import statement for wait_for_flow_run\n### First check\r\n\r\n- [X] I added a descriptive title to this issue.\r\n- [X] I used GitHub search to find a similar request and didn't find it \ud83d\ude07\r\n\r\n### Describe the issue\r\n\r\nThere is no import statement for wait_for_flow_run so typing this code into pycharm shows wait_for_flow_run as an error. Searching the internets, the import statement used to be\r\n\r\n_from prefect.tasks.prefect import wait_for_flow_run_\r\n\r\nyeah, that doesn't work anymore.\r\n\r\n### Describe the proposed change\r\n\r\nput the correct import statement in the docs which is \r\n\r\n_from prefect.flow_runs import wait_for_flow_run_\r\n\n", "before_files": [{"content": "from typing import Optional\nfrom uuid import UUID\n\nimport anyio\n\nfrom prefect.client.orchestration import PrefectClient\nfrom prefect.client.schemas import FlowRun\nfrom prefect.client.utilities import inject_client\nfrom prefect.exceptions import FlowRunWaitTimeout\nfrom prefect.logging import get_logger\n\n\n@inject_client\nasync def wait_for_flow_run(\n flow_run_id: UUID,\n timeout: Optional[int] = 10800,\n poll_interval: int = 5,\n client: Optional[PrefectClient] = None,\n log_states: bool = False,\n) -> FlowRun:\n \"\"\"\n Waits for the prefect flow run to finish and returns the FlowRun\n\n Args:\n flow_run_id: The flow run ID for the flow run to wait for.\n timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n poll_interval: The poll interval in seconds. 
Defaults to 5.\n\n Returns:\n FlowRun: The finished flow run.\n\n Raises:\n prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n Examples:\n Create a flow run for a deployment and wait for it to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n\n async def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\n if __name__ == \"__main__\":\n asyncio.run(main())\n\n ```\n\n Trigger multiple flow runs and wait for them to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n\n async def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\n if __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n\n ```\n \"\"\"\n assert client is not None, \"Client injection failed\"\n logger = get_logger()\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if log_states:\n logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n raise FlowRunWaitTimeout(\n f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n )\n", "path": "src/prefect/flow_runs.py"}]} | 1,485 | 147 |
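The patch for the row above is documentation-only: the docstring examples gain the import the issue asks for. For reference, a minimal runnable version of the corrected example (the deployment id is a placeholder):

```python
import asyncio

from prefect import get_client
from prefect.flow_runs import wait_for_flow_run  # the import the docs were missing

async def main():
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            deployment_id="my-deployment-id",  # placeholder deployment id
        )
        flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)
        print(flow_run.state)

if __name__ == "__main__":
    asyncio.run(main())
```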
gh_patches_debug_3915 | rasdani/github-patches | git_diff | fossasia__open-event-server-890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show return model of sponsor types list in Swagger spec
Currently no return model (or schema) is shown for the GET API to get sponsor types used in a Event

</issue>
<code>
[start of open_event/api/sponsors.py]
1 from flask.ext.restplus import Resource, Namespace
2
3 from open_event.models.sponsor import Sponsor as SponsorModel
4
5 from .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event
6 from .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \
7 PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES
8 from .helpers import custom_fields as fields
9
10 api = Namespace('sponsors', description='Sponsors', path='/')
11
12 SPONSOR = api.model('Sponsor', {
13 'id': fields.Integer(required=True),
14 'name': fields.String(),
15 'url': fields.Uri(),
16 'logo': fields.ImageUri(),
17 'description': fields.String(),
18 'level': fields.String(),
19 'sponsor_type': fields.String(),
20 })
21
22 SPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {
23 'results': fields.List(fields.Nested(SPONSOR))
24 })
25
26 SPONSOR_POST = api.clone('SponsorPost', SPONSOR)
27 del SPONSOR_POST['id']
28
29
30 # Create DAO
31 class SponsorDAO(ServiceDAO):
32 def list_types(self, event_id):
33 sponsors = self.list(event_id)
34 return list(set(
35 sponsor.sponsor_type for sponsor in sponsors
36 if sponsor.sponsor_type))
37
38
39 DAO = SponsorDAO(SponsorModel, SPONSOR_POST)
40
41
42 @api.route('/events/<int:event_id>/sponsors/<int:sponsor_id>')
43 @api.response(404, 'Sponsor not found')
44 @api.response(400, 'Sponsor does not belong to event')
45 class Sponsor(Resource):
46 @api.doc('get_sponsor')
47 @api.marshal_with(SPONSOR)
48 def get(self, event_id, sponsor_id):
49 """Fetch a sponsor given its id"""
50 return DAO.get(event_id, sponsor_id)
51
52 @requires_auth
53 @api.doc('delete_sponsor')
54 @api.marshal_with(SPONSOR)
55 def delete(self, event_id, sponsor_id):
56 """Delete a sponsor given its id"""
57 return DAO.delete(event_id, sponsor_id)
58
59 @requires_auth
60 @api.doc('update_sponsor', responses=PUT_RESPONSES)
61 @api.marshal_with(SPONSOR)
62 @api.expect(SPONSOR_POST)
63 def put(self, event_id, sponsor_id):
64 """Update a sponsor given its id"""
65 return DAO.update(event_id, sponsor_id, self.api.payload)
66
67
68 @api.route('/events/<int:event_id>/sponsors')
69 class SponsorList(Resource):
70 @api.doc('list_sponsors')
71 @api.marshal_list_with(SPONSOR)
72 def get(self, event_id):
73 """List all sponsors"""
74 return DAO.list(event_id)
75
76 @requires_auth
77 @api.doc('create_sponsor', responses=POST_RESPONSES)
78 @api.marshal_with(SPONSOR)
79 @api.expect(SPONSOR_POST)
80 def post(self, event_id):
81 """Create a sponsor"""
82 return DAO.create(
83 event_id,
84 self.api.payload,
85 self.api.url_for(self, event_id=event_id)
86 )
87
88
89 @api.route('/events/<int:event_id>/sponsors/types')
90 class SponsorTypesList(Resource):
91 @api.doc('list_sponsor_types')
92 def get(self, event_id):
93 """List all sponsor types"""
94 return DAO.list_types(event_id)
95
96
97 @api.route('/events/<int:event_id>/sponsors/page')
98 class SponsorListPaginated(Resource, PaginatedResourceBase):
99 @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)
100 @api.marshal_with(SPONSOR_PAGINATED)
101 def get(self, event_id):
102 """List sponsors in a paginated manner"""
103 return get_paginated_list(
104 SponsorModel,
105 self.api.url_for(self, event_id=event_id),
106 args=self.parser.parse_args(),
107 event_id=event_id
108 )
109
[end of open_event/api/sponsors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/open_event/api/sponsors.py b/open_event/api/sponsors.py
--- a/open_event/api/sponsors.py
+++ b/open_event/api/sponsors.py
@@ -88,7 +88,7 @@
@api.route('/events/<int:event_id>/sponsors/types')
class SponsorTypesList(Resource):
- @api.doc('list_sponsor_types')
+ @api.doc('list_sponsor_types', model=[fields.String()])
def get(self, event_id):
"""List all sponsor types"""
return DAO.list_types(event_id)
| {"golden_diff": "diff --git a/open_event/api/sponsors.py b/open_event/api/sponsors.py\n--- a/open_event/api/sponsors.py\n+++ b/open_event/api/sponsors.py\n@@ -88,7 +88,7 @@\n \n @api.route('/events/<int:event_id>/sponsors/types')\n class SponsorTypesList(Resource):\n- @api.doc('list_sponsor_types')\n+ @api.doc('list_sponsor_types', model=[fields.String()])\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n", "issue": "Show return model of sponsor types list in Swagger spec\nCurrently no return model (or schema) is shown for the GET API to get sponsor types used in a Event\n\n\n\n", "before_files": [{"content": "from flask.ext.restplus import Resource, Namespace\n\nfrom open_event.models.sponsor import Sponsor as SponsorModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\n\napi = Namespace('sponsors', description='Sponsors', path='/')\n\nSPONSOR = api.model('Sponsor', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'url': fields.Uri(),\n 'logo': fields.ImageUri(),\n 'description': fields.String(),\n 'level': fields.String(),\n 'sponsor_type': fields.String(),\n})\n\nSPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SPONSOR))\n})\n\nSPONSOR_POST = api.clone('SponsorPost', SPONSOR)\ndel SPONSOR_POST['id']\n\n\n# Create DAO\nclass SponsorDAO(ServiceDAO):\n def list_types(self, event_id):\n sponsors = self.list(event_id)\n return list(set(\n sponsor.sponsor_type for sponsor in sponsors\n if sponsor.sponsor_type))\n\n\nDAO = SponsorDAO(SponsorModel, SPONSOR_POST)\n\n\[email protected]('/events/<int:event_id>/sponsors/<int:sponsor_id>')\[email protected](404, 'Sponsor not found')\[email protected](400, 'Sponsor does not belong to event')\nclass Sponsor(Resource):\n @api.doc('get_sponsor')\n @api.marshal_with(SPONSOR)\n def get(self, event_id, sponsor_id):\n \"\"\"Fetch a sponsor given its id\"\"\"\n return DAO.get(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('delete_sponsor')\n @api.marshal_with(SPONSOR)\n def delete(self, event_id, sponsor_id):\n \"\"\"Delete a sponsor given its id\"\"\"\n return DAO.delete(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('update_sponsor', responses=PUT_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def put(self, event_id, sponsor_id):\n \"\"\"Update a sponsor given its id\"\"\"\n return DAO.update(event_id, sponsor_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sponsors')\nclass SponsorList(Resource):\n @api.doc('list_sponsors')\n @api.marshal_list_with(SPONSOR)\n def get(self, event_id):\n \"\"\"List all sponsors\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_sponsor', responses=POST_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def post(self, event_id):\n \"\"\"Create a sponsor\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sponsors/types')\nclass SponsorTypesList(Resource):\n @api.doc('list_sponsor_types')\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n\n\[email protected]('/events/<int:event_id>/sponsors/page')\nclass SponsorListPaginated(Resource, 
PaginatedResourceBase):\n @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SPONSOR_PAGINATED)\n def get(self, event_id):\n \"\"\"List sponsors in a paginated manner\"\"\"\n return get_paginated_list(\n SponsorModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", "path": "open_event/api/sponsors.py"}]} | 1,693 | 120 |
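The one-line fix in the row above gives Flask-RESTPlus a response model so Swagger can render a schema: passing `model=[fields.String()]` to `@api.doc` declares that the endpoint returns a list of strings. The decorated resource after the patch looks like this (same module-level names as the file shown above):

```python
@api.route('/events/<int:event_id>/sponsors/types')
class SponsorTypesList(Resource):
    # Declaring the return model as a list of strings makes Swagger show a
    # response schema for GET /events/{event_id}/sponsors/types.
    @api.doc('list_sponsor_types', model=[fields.String()])
    def get(self, event_id):
        """List all sponsor types"""
        return DAO.list_types(event_id)
```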
gh_patches_debug_14502 | rasdani/github-patches | git_diff | conan-io__conan-3839 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Conan doesn't keep the username to log to server anymore
From conan 1.8,
When authentication is required by the conan server, the username is now always asked, even though it was specified by conan user. In older versions, only the password was required.
To reproduce:
```
$ conan user -c
$ conan user username
Changed user of remote 'server' from 'None' (anonymous) to 'username'
$ conan search -r server *
Please log in to "server" to perform this action. Execute "conan user" command.
Remote 'server' username:
```
To help us debug your issue please explain:
- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [X] I've specified the Conan version, operating system version and any tool that can be relevant.
- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
</issue>
<code>
[start of conans/client/userio.py]
1 import os
2 import sys
3 from conans.client.output import ConanOutput
4 from conans.errors import InvalidNameException, ConanException
5 import getpass
6 from six.moves import input as raw_input
7
8
9 class UserIO(object):
10 """Class to interact with the user, used to show messages and ask for information"""
11
12 def __init__(self, ins=sys.stdin, out=None):
13 """
14 Params:
15 ins: input stream
16 out: ConanOutput, should have "write" method
17 """
18 self._ins = ins
19 if not out:
20 out = ConanOutput(sys.stdout)
21 self.out = out
22 self._interactive = True
23
24 def disable_input(self):
25 self._interactive = False
26
27 def _raise_if_non_interactive(self):
28 if not self._interactive:
29 raise ConanException("Conan interactive mode disabled")
30
31 def raw_input(self):
32 self._raise_if_non_interactive()
33 return raw_input()
34
35 def get_pass(self):
36 self._raise_if_non_interactive()
37 return getpass.getpass("")
38
39 def request_login(self, remote_name, username=None):
40 """Request user to input their name and password
41 :param username If username is specified it only request password"""
42 if self._interactive:
43 self.out.write("Remote '%s' username: " % remote_name)
44 username = self.get_username(remote_name)
45
46 if self._interactive:
47 self.out.write('Please enter a password for "%s" account: ' % username)
48 try:
49 pwd = self.get_password(remote_name)
50 except ConanException:
51 raise
52 except Exception as e:
53 raise ConanException('Cancelled pass %s' % e)
54 return username, pwd
55
56 def get_username(self, remote_name):
57 """Overridable for testing purpose"""
58 return self._get_env_username(remote_name) or self.raw_input()
59
60 def get_password(self, remote_name):
61 """Overridable for testing purpose"""
62 return self._get_env_password(remote_name) or self.get_pass()
63
64 def request_string(self, msg, default_value=None):
65 """Request user to input a msg
66 :param msg Name of the msg
67 """
68 self._raise_if_non_interactive()
69
70 if default_value:
71 self.out.input_text('%s (%s): ' % (msg, default_value))
72 else:
73 self.out.input_text('%s: ' % msg)
74 s = self._ins.readline().replace("\n", "")
75 if default_value is not None and s == '':
76 return default_value
77 return s
78
79 def request_boolean(self, msg, default_option=None):
80 """Request user to input a boolean"""
81 ret = None
82 while ret is None:
83 if default_option is True:
84 s = self.request_string("%s (YES/no)" % msg)
85 elif default_option is False:
86 s = self.request_string("%s (NO/yes)" % msg)
87 else:
88 s = self.request_string("%s (yes/no)" % msg)
89 if default_option is not None and s == '':
90 return default_option
91 if s.lower() in ['yes', 'y']:
92 ret = True
93 elif s.lower() in ['no', 'n']:
94 ret = False
95 else:
96 self.out.error("%s is not a valid answer" % s)
97 return ret
98
99 def _get_env_password(self, remote_name):
100 """
101 Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None
102 """
103 remote_name = remote_name.replace("-", "_").upper()
104 var_name = "CONAN_PASSWORD_%s" % remote_name
105 ret = os.getenv(var_name, None) or os.getenv("CONAN_PASSWORD", None)
106 if ret:
107 self.out.info("Got password '******' from environment")
108 return ret
109
110 def _get_env_username(self, remote_name):
111 """
112 Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None
113 """
114 remote_name = remote_name.replace("-", "_").upper()
115 var_name = "CONAN_LOGIN_USERNAME_%s" % remote_name
116 ret = os.getenv(var_name, None) or os.getenv("CONAN_LOGIN_USERNAME", None)
117
118 if ret:
119 self.out.info("Got username '%s' from environment" % ret)
120 return ret
121
[end of conans/client/userio.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/userio.py b/conans/client/userio.py
--- a/conans/client/userio.py
+++ b/conans/client/userio.py
@@ -39,9 +39,11 @@
def request_login(self, remote_name, username=None):
"""Request user to input their name and password
:param username If username is specified it only request password"""
- if self._interactive:
- self.out.write("Remote '%s' username: " % remote_name)
- username = self.get_username(remote_name)
+
+ if not username:
+ if self._interactive:
+ self.out.write("Remote '%s' username: " % remote_name)
+ username = self.get_username(remote_name)
if self._interactive:
self.out.write('Please enter a password for "%s" account: ' % username)
| {"golden_diff": "diff --git a/conans/client/userio.py b/conans/client/userio.py\n--- a/conans/client/userio.py\n+++ b/conans/client/userio.py\n@@ -39,9 +39,11 @@\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n- if self._interactive:\n- self.out.write(\"Remote '%s' username: \" % remote_name)\n- username = self.get_username(remote_name)\n+\n+ if not username:\n+ if self._interactive:\n+ self.out.write(\"Remote '%s' username: \" % remote_name)\n+ username = self.get_username(remote_name)\n \n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n", "issue": "Conan doesn't keep the username to log to server anymore\nFrom conan 1.8,\r\n\r\nWhen an authentication is required by the conan server, the username is now always asked event though it was specified by conan user. In older version, only the password was required.\r\n\r\nTo reproduce:\r\n```\r\n$ conan user -c\r\n$ conan user username\r\nChanged user of remote 'server' from 'None' (anonymous) to 'username'\r\n$ conan search -r server *\r\nPlease log in to \"server\" to perform this action. Execute \"conan user\" command.\r\nRemote 'server' username:\r\n```\r\n\r\nTo help us debug your issue please explain:\r\n\r\n- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).\r\n- [X] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\nfrom conans.client.output import ConanOutput\nfrom conans.errors import InvalidNameException, ConanException\nimport getpass\nfrom six.moves import input as raw_input\n\n\nclass UserIO(object):\n \"\"\"Class to interact with the user, used to show messages and ask for information\"\"\"\n\n def __init__(self, ins=sys.stdin, out=None):\n \"\"\"\n Params:\n ins: input stream\n out: ConanOutput, should have \"write\" method\n \"\"\"\n self._ins = ins\n if not out:\n out = ConanOutput(sys.stdout)\n self.out = out\n self._interactive = True\n\n def disable_input(self):\n self._interactive = False\n\n def _raise_if_non_interactive(self):\n if not self._interactive:\n raise ConanException(\"Conan interactive mode disabled\")\n\n def raw_input(self):\n self._raise_if_non_interactive()\n return raw_input()\n\n def get_pass(self):\n self._raise_if_non_interactive()\n return getpass.getpass(\"\")\n\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n if self._interactive:\n self.out.write(\"Remote '%s' username: \" % remote_name)\n username = self.get_username(remote_name)\n\n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n try:\n pwd = self.get_password(remote_name)\n except ConanException:\n raise\n except Exception as e:\n raise ConanException('Cancelled pass %s' % e)\n return username, pwd\n\n def get_username(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_username(remote_name) or self.raw_input()\n\n def get_password(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_password(remote_name) or 
self.get_pass()\n\n def request_string(self, msg, default_value=None):\n \"\"\"Request user to input a msg\n :param msg Name of the msg\n \"\"\"\n self._raise_if_non_interactive()\n\n if default_value:\n self.out.input_text('%s (%s): ' % (msg, default_value))\n else:\n self.out.input_text('%s: ' % msg)\n s = self._ins.readline().replace(\"\\n\", \"\")\n if default_value is not None and s == '':\n return default_value\n return s\n\n def request_boolean(self, msg, default_option=None):\n \"\"\"Request user to input a boolean\"\"\"\n ret = None\n while ret is None:\n if default_option is True:\n s = self.request_string(\"%s (YES/no)\" % msg)\n elif default_option is False:\n s = self.request_string(\"%s (NO/yes)\" % msg)\n else:\n s = self.request_string(\"%s (yes/no)\" % msg)\n if default_option is not None and s == '':\n return default_option\n if s.lower() in ['yes', 'y']:\n ret = True\n elif s.lower() in ['no', 'n']:\n ret = False\n else:\n self.out.error(\"%s is not a valid answer\" % s)\n return ret\n\n def _get_env_password(self, remote_name):\n \"\"\"\n Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_PASSWORD_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_PASSWORD\", None)\n if ret:\n self.out.info(\"Got password '******' from environment\")\n return ret\n\n def _get_env_username(self, remote_name):\n \"\"\"\n Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_LOGIN_USERNAME_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_LOGIN_USERNAME\", None)\n\n if ret:\n self.out.info(\"Got username '%s' from environment\" % ret)\n return ret\n", "path": "conans/client/userio.py"}]} | 1,937 | 187 |
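The patch in the row above restores the pre-1.8 behaviour by prompting for the username only when none was supplied: a username stored via `conan user <name>` is passed straight through and only the password is asked for. A condensed sketch of the guarded prompt (error handling from the real method omitted):

```python
# Condensed sketch of UserIO.request_login after the patch.
def request_login(self, remote_name, username=None):
    if not username:  # only prompt when no username is already known
        if self._interactive:
            self.out.write("Remote '%s' username: " % remote_name)
        username = self.get_username(remote_name)

    if self._interactive:
        self.out.write('Please enter a password for "%s" account: ' % username)
    pwd = self.get_password(remote_name)
    return username, pwd
```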
gh_patches_debug_3843 | rasdani/github-patches | git_diff | jazzband__pip-tools-1105 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
--upgrade-package downgrades unrelated pre-release package when --pre not given
<!-- Describe the issue briefly here. -->
#### Environment Versions
1. OS Type: macOS 10.15.4
1. Python version: 3.7.7
1. pip version: 20.0.2
1. pip-tools version: 4.5.1
#### Steps to replicate
(Note: this example will stop working when `gevent` releases 1.5 final but it can be replicated with any other package that currently has a pre-release version.)
1. Example `req.in` file:
```
click<7
gevent
```
2. `pip-compile req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile req.in
#
click==6.7 # via -r req.in
gevent==1.4.0 # via -r req.in
greenlet==0.4.15 # via gevent
```
3. Upgrade gevent to pre-relese
`pip-compile --pre --upgrade-package gevent req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --pre req.in
#
click==6.7 # via -r req.in
gevent==1.5a4 # via -r req.in
greenlet==0.4.15 # via gevent
```
4. Remove version pin of `click` in `.in` file:
```
click
gevent
```
5. Upgrade click:
`pip-compile --upgrade-package click req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile req.in
#
click==6.7 # via -r req.in
gevent==1.4.0 # via -r req.in
greenlet==0.4.15 # via gevent
```
#### Expected result
Once a package has been resolved to a pre-release version it should never "magically" be downgraded. Especially if only unrelated other packages are concerned.
I could see that there may be an argument for a plain `pip-compile` run to revert to the non-prerelease version, but I would disagree even there. But for `--upgrade-package` I see no way where this is correct behaviour.
#### Actual result
Not giving `--pre` at any time after it has been used once and a package is resolved to a pre-release version will downgrade it back to the last released version.
</issue>
<code>
[start of piptools/repositories/local.py]
1 # coding: utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from contextlib import contextmanager
5
6 from pip._internal.utils.hashes import FAVORITE_HASH
7
8 from .._compat import PIP_VERSION
9 from .base import BaseRepository
10
11 from piptools.utils import as_tuple, key_from_ireq, make_install_requirement
12
13
14 def ireq_satisfied_by_existing_pin(ireq, existing_pin):
15 """
16 Return True if the given InstallationRequirement is satisfied by the
17 previously encountered version pin.
18 """
19 version = next(iter(existing_pin.req.specifier)).version
20 return version in ireq.req.specifier
21
22
23 class LocalRequirementsRepository(BaseRepository):
24 """
25 The LocalRequirementsRepository proxied the _real_ repository by first
26 checking if a requirement can be satisfied by existing pins (i.e. the
27 result of a previous compile step).
28
29 In effect, if a requirement can be satisfied with a version pinned in the
30 requirements file, we prefer that version over the best match found in
31 PyPI. This keeps updates to the requirements.txt down to a minimum.
32 """
33
34 def __init__(self, existing_pins, proxied_repository):
35 self.repository = proxied_repository
36 self.existing_pins = existing_pins
37
38 @property
39 def options(self):
40 return self.repository.options
41
42 @property
43 def finder(self):
44 return self.repository.finder
45
46 @property
47 def session(self):
48 return self.repository.session
49
50 @property
51 def DEFAULT_INDEX_URL(self):
52 return self.repository.DEFAULT_INDEX_URL
53
54 def clear_caches(self):
55 self.repository.clear_caches()
56
57 def freshen_build_caches(self):
58 self.repository.freshen_build_caches()
59
60 def find_best_match(self, ireq, prereleases=None):
61 key = key_from_ireq(ireq)
62 existing_pin = self.existing_pins.get(key)
63 if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):
64 project, version, _ = as_tuple(existing_pin)
65 return make_install_requirement(
66 project, version, ireq.extras, constraint=ireq.constraint
67 )
68 else:
69 return self.repository.find_best_match(ireq, prereleases)
70
71 def get_dependencies(self, ireq):
72 return self.repository.get_dependencies(ireq)
73
74 def get_hashes(self, ireq):
75 key = key_from_ireq(ireq)
76 existing_pin = self.existing_pins.get(key)
77 if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):
78 if PIP_VERSION[:2] <= (20, 0):
79 hashes = existing_pin.options.get("hashes", {})
80 else:
81 hashes = existing_pin.hash_options
82 hexdigests = hashes.get(FAVORITE_HASH)
83 if hexdigests:
84 return {
85 ":".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests
86 }
87 return self.repository.get_hashes(ireq)
88
89 @contextmanager
90 def allow_all_wheels(self):
91 with self.repository.allow_all_wheels():
92 yield
93
[end of piptools/repositories/local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/piptools/repositories/local.py b/piptools/repositories/local.py
--- a/piptools/repositories/local.py
+++ b/piptools/repositories/local.py
@@ -17,7 +17,9 @@
previously encountered version pin.
"""
version = next(iter(existing_pin.req.specifier)).version
- return version in ireq.req.specifier
+ return ireq.req.specifier.contains(
+ version, prereleases=existing_pin.req.specifier.prereleases
+ )
class LocalRequirementsRepository(BaseRepository):
| {"golden_diff": "diff --git a/piptools/repositories/local.py b/piptools/repositories/local.py\n--- a/piptools/repositories/local.py\n+++ b/piptools/repositories/local.py\n@@ -17,7 +17,9 @@\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n- return version in ireq.req.specifier\n+ return ireq.req.specifier.contains(\n+ version, prereleases=existing_pin.req.specifier.prereleases\n+ )\n \n \n class LocalRequirementsRepository(BaseRepository):\n", "issue": "--upgrade-package downgrades unrelated pre-release package when --pre not given\n<!-- Describe the issue briefly here. -->\r\n\r\n#### Environment Versions\r\n\r\n1. OS Type: macOS 10.15.4\r\n1. Python version: 3.7.7\r\n1. pip version: 20.0.2\r\n1. pip-tools version: 4.5.1\r\n\r\n#### Steps to replicate\r\n\r\n(Note: this example will stop working when `gevent` releases 1.5 final but it can be replicated with any other package that currently has a pre-release version.)\r\n\r\n1. Example `req.in` file:\r\n ```\r\n click<7\r\n gevent\r\n ```\r\n2. `pip-compile req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.4.0 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n3. Upgrade gevent to pre-relese\r\n `pip-compile --pre --upgrade-package gevent req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile --pre req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.5a4 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n4. Remove version pin of `click` in `.in` file:\r\n ```\r\n click\r\n gevent\r\n ```\r\n5. Upgrade click:\r\n `pip-compile --upgrade-package click req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.4.0 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n\r\n#### Expected result\r\n\r\nOnce a package has been resolved to a pre-release version it should never \"magically\" be downgraded. Especially if only unrelated other packages are concerned.\r\n\r\nI could see that there may be an argument for a plain `pip-compile` run to revert to the non-prerelease version, but I would disagree even there. 
But for `--upgrade-package` I see no way where this is correct behaviour.\r\n\r\n#### Actual result\r\n\r\nNot giving `--pre` at any time after it has been used once and a package is resolved to a pre-release version will downgrade it back to the last released version.\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom contextlib import contextmanager\n\nfrom pip._internal.utils.hashes import FAVORITE_HASH\n\nfrom .._compat import PIP_VERSION\nfrom .base import BaseRepository\n\nfrom piptools.utils import as_tuple, key_from_ireq, make_install_requirement\n\n\ndef ireq_satisfied_by_existing_pin(ireq, existing_pin):\n \"\"\"\n Return True if the given InstallationRequirement is satisfied by the\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n return version in ireq.req.specifier\n\n\nclass LocalRequirementsRepository(BaseRepository):\n \"\"\"\n The LocalRequirementsRepository proxied the _real_ repository by first\n checking if a requirement can be satisfied by existing pins (i.e. the\n result of a previous compile step).\n\n In effect, if a requirement can be satisfied with a version pinned in the\n requirements file, we prefer that version over the best match found in\n PyPI. This keeps updates to the requirements.txt down to a minimum.\n \"\"\"\n\n def __init__(self, existing_pins, proxied_repository):\n self.repository = proxied_repository\n self.existing_pins = existing_pins\n\n @property\n def options(self):\n return self.repository.options\n\n @property\n def finder(self):\n return self.repository.finder\n\n @property\n def session(self):\n return self.repository.session\n\n @property\n def DEFAULT_INDEX_URL(self):\n return self.repository.DEFAULT_INDEX_URL\n\n def clear_caches(self):\n self.repository.clear_caches()\n\n def freshen_build_caches(self):\n self.repository.freshen_build_caches()\n\n def find_best_match(self, ireq, prereleases=None):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n project, version, _ = as_tuple(existing_pin)\n return make_install_requirement(\n project, version, ireq.extras, constraint=ireq.constraint\n )\n else:\n return self.repository.find_best_match(ireq, prereleases)\n\n def get_dependencies(self, ireq):\n return self.repository.get_dependencies(ireq)\n\n def get_hashes(self, ireq):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n if PIP_VERSION[:2] <= (20, 0):\n hashes = existing_pin.options.get(\"hashes\", {})\n else:\n hashes = existing_pin.hash_options\n hexdigests = hashes.get(FAVORITE_HASH)\n if hexdigests:\n return {\n \":\".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests\n }\n return self.repository.get_hashes(ireq)\n\n @contextmanager\n def allow_all_wheels(self):\n with self.repository.allow_all_wheels():\n yield\n", "path": "piptools/repositories/local.py"}]} | 2,028 | 122 |
gh_patches_debug_60613 | rasdani/github-patches | git_diff | cloudtools__troposphere-552 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support AutoScalingCreationPolicy
From the docs, this is a top-level property of a [CreationPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html#cfn-attributes-creationpolicy-properties). It is used for the [AutoScalingReplacingPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-replacingupdate) to specify the MinSuccessfulInstancesPercent property.
The docs have a good example of this:
``` json
"UpdatePolicy" : {
"AutoScalingReplacingUpdate" : {
"WillReplace" : "true"
},
"CreationPolicy" : {
"ResourceSignal" : {
"Count" : { "Ref" : "ResourceSignalsOnCreate"},
"Timeout" : "PT10M"
},
"AutoScalingCreationPolicy" : {
"MinSuccessfulInstancesPercent" : { "Ref" : "MinSuccessfulPercentParameter" }
}
}
```
I might take a crack at this but I figured I'd file an issue first if only so that I can reference it.
</issue>
<code>
[start of troposphere/policies.py]
1 from . import AWSProperty, AWSAttribute, validate_pausetime
2 from .validators import positive_integer, integer, boolean
3
4
5 class AutoScalingRollingUpdate(AWSProperty):
6 props = {
7 'MaxBatchSize': (positive_integer, False),
8 'MinInstancesInService': (integer, False),
9 'MinSuccessfulInstancesPercent': (integer, False),
10 'PauseTime': (validate_pausetime, False),
11 'SuspendProcesses': ([basestring], False),
12 'WaitOnResourceSignals': (boolean, False),
13 }
14
15
16 class AutoScalingScheduledAction(AWSProperty):
17 props = {
18 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),
19 }
20
21
22 class AutoScalingReplacingUpdate(AWSProperty):
23 props = {
24 'WillReplace': (boolean, False),
25 }
26
27
28 class UpdatePolicy(AWSAttribute):
29 props = {
30 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),
31 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),
32 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),
33 }
34
35
36 class ResourceSignal(AWSProperty):
37 props = {
38 'Count': (positive_integer, False),
39 'Timeout': (validate_pausetime, False),
40 }
41
42
43 class CreationPolicy(AWSAttribute):
44 props = {
45 'ResourceSignal': (ResourceSignal, True),
46 }
47
[end of troposphere/policies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/troposphere/policies.py b/troposphere/policies.py
--- a/troposphere/policies.py
+++ b/troposphere/policies.py
@@ -40,7 +40,14 @@
}
+class AutoScalingCreationPolicy(AWSProperty):
+ props = {
+ 'MinSuccessfulInstancesPercent': (integer, False),
+ }
+
+
class CreationPolicy(AWSAttribute):
props = {
+ 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),
'ResourceSignal': (ResourceSignal, True),
}
| {"golden_diff": "diff --git a/troposphere/policies.py b/troposphere/policies.py\n--- a/troposphere/policies.py\n+++ b/troposphere/policies.py\n@@ -40,7 +40,14 @@\n }\n \n \n+class AutoScalingCreationPolicy(AWSProperty):\n+ props = {\n+ 'MinSuccessfulInstancesPercent': (integer, False),\n+ }\n+\n+\n class CreationPolicy(AWSAttribute):\n props = {\n+ 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "issue": "Support AutoScalingCreationPolicy\nFrom the docs, this is a top-level property of a [CreationPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html#cfn-attributes-creationpolicy-properties). It is used for the [AutoScalingReplacingPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-replacingupdate) to specify the MinSuccessfulInstancesPercent property.\n\nThe docs have a good example of this:\n\n``` json\n\"UpdatePolicy\" : {\n \"AutoScalingReplacingUpdate\" : {\n \"WillReplace\" : \"true\"\n },\n\"CreationPolicy\" : {\n \"ResourceSignal\" : {\n \"Count\" : { \"Ref\" : \"ResourceSignalsOnCreate\"},\n \"Timeout\" : \"PT10M\"\n },\n \"AutoScalingCreationPolicy\" : {\n \"MinSuccessfulInstancesPercent\" : { \"Ref\" : \"MinSuccessfulPercentParameter\" }\n }\n}\n```\n\nI might take a crack at this but I figured I'd file an issue first if only so that I can reference it.\n\n", "before_files": [{"content": "from . import AWSProperty, AWSAttribute, validate_pausetime\nfrom .validators import positive_integer, integer, boolean\n\n\nclass AutoScalingRollingUpdate(AWSProperty):\n props = {\n 'MaxBatchSize': (positive_integer, False),\n 'MinInstancesInService': (integer, False),\n 'MinSuccessfulInstancesPercent': (integer, False),\n 'PauseTime': (validate_pausetime, False),\n 'SuspendProcesses': ([basestring], False),\n 'WaitOnResourceSignals': (boolean, False),\n }\n\n\nclass AutoScalingScheduledAction(AWSProperty):\n props = {\n 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),\n }\n\n\nclass AutoScalingReplacingUpdate(AWSProperty):\n props = {\n 'WillReplace': (boolean, False),\n }\n\n\nclass UpdatePolicy(AWSAttribute):\n props = {\n 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),\n 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),\n 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),\n }\n\n\nclass ResourceSignal(AWSProperty):\n props = {\n 'Count': (positive_integer, False),\n 'Timeout': (validate_pausetime, False),\n }\n\n\nclass CreationPolicy(AWSAttribute):\n props = {\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "path": "troposphere/policies.py"}]} | 1,159 | 126 |
gh_patches_debug_30995 | rasdani/github-patches | git_diff | pypa__pip-4224 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip search picks older version if returned list of versions are not ordered
* Pip version: 9.0.1
* Python version: 2.7
* Operating System: Ubuntu/CentOS
### Description:
For a list of versions returned by local pypi server that was ill-ordered like
```[{...'versions': ['1.0.249', '1.0.251', '1.0.250'], 'name':...}...]```
search picks the top element among all the versions returned to it.
```version = hit.get('versions', ['-'])[-1]```
at https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L107 and https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L99
Rather it should do something like
```version = highest_version(hit.get('versions', ['-']))```
</issue>
<code>
[start of pip/commands/search.py]
1 from __future__ import absolute_import
2
3 import logging
4 import sys
5 import textwrap
6
7 from pip.basecommand import Command, SUCCESS
8 from pip.compat import OrderedDict
9 from pip.download import PipXmlrpcTransport
10 from pip.models import PyPI
11 from pip.utils import get_terminal_size
12 from pip.utils.logging import indent_log
13 from pip.exceptions import CommandError
14 from pip.status_codes import NO_MATCHES_FOUND
15 from pip._vendor.packaging.version import parse as parse_version
16 from pip._vendor import pkg_resources
17 from pip._vendor.six.moves import xmlrpc_client
18
19
20 logger = logging.getLogger(__name__)
21
22
23 class SearchCommand(Command):
24 """Search for PyPI packages whose name or summary contains <query>."""
25 name = 'search'
26 usage = """
27 %prog [options] <query>"""
28 summary = 'Search PyPI for packages.'
29
30 def __init__(self, *args, **kw):
31 super(SearchCommand, self).__init__(*args, **kw)
32 self.cmd_opts.add_option(
33 '-i', '--index',
34 dest='index',
35 metavar='URL',
36 default=PyPI.pypi_url,
37 help='Base URL of Python Package Index (default %default)')
38
39 self.parser.insert_option_group(0, self.cmd_opts)
40
41 def run(self, options, args):
42 if not args:
43 raise CommandError('Missing required argument (search query).')
44 query = args
45 pypi_hits = self.search(query, options)
46 hits = transform_hits(pypi_hits)
47
48 terminal_width = None
49 if sys.stdout.isatty():
50 terminal_width = get_terminal_size()[0]
51
52 print_results(hits, terminal_width=terminal_width)
53 if pypi_hits:
54 return SUCCESS
55 return NO_MATCHES_FOUND
56
57 def search(self, query, options):
58 index_url = options.index
59 with self._build_session(options) as session:
60 transport = PipXmlrpcTransport(index_url, session)
61 pypi = xmlrpc_client.ServerProxy(index_url, transport)
62 hits = pypi.search({'name': query, 'summary': query}, 'or')
63 return hits
64
65
66 def transform_hits(hits):
67 """
68 The list from pypi is really a list of versions. We want a list of
69 packages with the list of versions stored inline. This converts the
70 list from pypi into one we can use.
71 """
72 packages = OrderedDict()
73 for hit in hits:
74 name = hit['name']
75 summary = hit['summary']
76 version = hit['version']
77
78 if name not in packages.keys():
79 packages[name] = {
80 'name': name,
81 'summary': summary,
82 'versions': [version],
83 }
84 else:
85 packages[name]['versions'].append(version)
86
87 # if this is the highest version, replace summary and score
88 if version == highest_version(packages[name]['versions']):
89 packages[name]['summary'] = summary
90
91 return list(packages.values())
92
93
94 def print_results(hits, name_column_width=None, terminal_width=None):
95 if not hits:
96 return
97 if name_column_width is None:
98 name_column_width = max([
99 len(hit['name']) + len(hit.get('versions', ['-'])[-1])
100 for hit in hits
101 ]) + 4
102
103 installed_packages = [p.project_name for p in pkg_resources.working_set]
104 for hit in hits:
105 name = hit['name']
106 summary = hit['summary'] or ''
107 version = hit.get('versions', ['-'])[-1]
108 if terminal_width is not None:
109 target_width = terminal_width - name_column_width - 5
110 if target_width > 10:
111 # wrap and indent summary to fit terminal
112 summary = textwrap.wrap(summary, target_width)
113 summary = ('\n' + ' ' * (name_column_width + 3)).join(summary)
114
115 line = '%-*s - %s' % (name_column_width,
116 '%s (%s)' % (name, version), summary)
117 try:
118 logger.info(line)
119 if name in installed_packages:
120 dist = pkg_resources.get_distribution(name)
121 with indent_log():
122 latest = highest_version(hit['versions'])
123 if dist.version == latest:
124 logger.info('INSTALLED: %s (latest)', dist.version)
125 else:
126 logger.info('INSTALLED: %s', dist.version)
127 logger.info('LATEST: %s', latest)
128 except UnicodeEncodeError:
129 pass
130
131
132 def highest_version(versions):
133 return max(versions, key=parse_version)
134
[end of pip/commands/search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pip/commands/search.py b/pip/commands/search.py
--- a/pip/commands/search.py
+++ b/pip/commands/search.py
@@ -96,7 +96,7 @@
return
if name_column_width is None:
name_column_width = max([
- len(hit['name']) + len(hit.get('versions', ['-'])[-1])
+ len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))
for hit in hits
]) + 4
@@ -104,7 +104,7 @@
for hit in hits:
name = hit['name']
summary = hit['summary'] or ''
- version = hit.get('versions', ['-'])[-1]
+ latest = highest_version(hit.get('versions', ['-']))
if terminal_width is not None:
target_width = terminal_width - name_column_width - 5
if target_width > 10:
@@ -113,13 +113,12 @@
summary = ('\n' + ' ' * (name_column_width + 3)).join(summary)
line = '%-*s - %s' % (name_column_width,
- '%s (%s)' % (name, version), summary)
+ '%s (%s)' % (name, latest), summary)
try:
logger.info(line)
if name in installed_packages:
dist = pkg_resources.get_distribution(name)
with indent_log():
- latest = highest_version(hit['versions'])
if dist.version == latest:
logger.info('INSTALLED: %s (latest)', dist.version)
else:
| {"golden_diff": "diff --git a/pip/commands/search.py b/pip/commands/search.py\n--- a/pip/commands/search.py\n+++ b/pip/commands/search.py\n@@ -96,7 +96,7 @@\n return\n if name_column_width is None:\n name_column_width = max([\n- len(hit['name']) + len(hit.get('versions', ['-'])[-1])\n+ len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))\n for hit in hits\n ]) + 4\n \n@@ -104,7 +104,7 @@\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n- version = hit.get('versions', ['-'])[-1]\n+ latest = highest_version(hit.get('versions', ['-']))\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n@@ -113,13 +113,12 @@\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n \n line = '%-*s - %s' % (name_column_width,\n- '%s (%s)' % (name, version), summary)\n+ '%s (%s)' % (name, latest), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n- latest = highest_version(hit['versions'])\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n", "issue": "pip search picks older version if returned list of versions are not ordered\n* Pip version: 9.0.1\r\n* Python version: 2.7\r\n* Operating System: Ubuntu/CentOS\r\n\r\n### Description:\r\n\r\nFor a list of versions returned by local pypi server that was ill-ordered like\r\n```[{...'versions': ['1.0.249', '1.0.251', '1.0.250'], 'name':...}...]```\r\n\r\nsearch picks the top element among all the versions returned to it.\r\n```version = hit.get('versions', ['-'])[-1]```\r\n at https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L107 and https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L99\r\n\r\nRather it should do something like\r\n```version = highest_version(hit.get('versions', ['-']))```\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport sys\nimport textwrap\n\nfrom pip.basecommand import Command, SUCCESS\nfrom pip.compat import OrderedDict\nfrom pip.download import PipXmlrpcTransport\nfrom pip.models import PyPI\nfrom pip.utils import get_terminal_size\nfrom pip.utils.logging import indent_log\nfrom pip.exceptions import CommandError\nfrom pip.status_codes import NO_MATCHES_FOUND\nfrom pip._vendor.packaging.version import parse as parse_version\nfrom pip._vendor import pkg_resources\nfrom pip._vendor.six.moves import xmlrpc_client\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass SearchCommand(Command):\n \"\"\"Search for PyPI packages whose name or summary contains <query>.\"\"\"\n name = 'search'\n usage = \"\"\"\n %prog [options] <query>\"\"\"\n summary = 'Search PyPI for packages.'\n\n def __init__(self, *args, **kw):\n super(SearchCommand, self).__init__(*args, **kw)\n self.cmd_opts.add_option(\n '-i', '--index',\n dest='index',\n metavar='URL',\n default=PyPI.pypi_url,\n help='Base URL of Python Package Index (default %default)')\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n if not args:\n raise CommandError('Missing required argument (search query).')\n query = args\n pypi_hits = self.search(query, options)\n hits = transform_hits(pypi_hits)\n\n terminal_width = None\n if sys.stdout.isatty():\n terminal_width = get_terminal_size()[0]\n\n print_results(hits, terminal_width=terminal_width)\n if pypi_hits:\n return SUCCESS\n return NO_MATCHES_FOUND\n\n def search(self, query, options):\n 
index_url = options.index\n with self._build_session(options) as session:\n transport = PipXmlrpcTransport(index_url, session)\n pypi = xmlrpc_client.ServerProxy(index_url, transport)\n hits = pypi.search({'name': query, 'summary': query}, 'or')\n return hits\n\n\ndef transform_hits(hits):\n \"\"\"\n The list from pypi is really a list of versions. We want a list of\n packages with the list of versions stored inline. This converts the\n list from pypi into one we can use.\n \"\"\"\n packages = OrderedDict()\n for hit in hits:\n name = hit['name']\n summary = hit['summary']\n version = hit['version']\n\n if name not in packages.keys():\n packages[name] = {\n 'name': name,\n 'summary': summary,\n 'versions': [version],\n }\n else:\n packages[name]['versions'].append(version)\n\n # if this is the highest version, replace summary and score\n if version == highest_version(packages[name]['versions']):\n packages[name]['summary'] = summary\n\n return list(packages.values())\n\n\ndef print_results(hits, name_column_width=None, terminal_width=None):\n if not hits:\n return\n if name_column_width is None:\n name_column_width = max([\n len(hit['name']) + len(hit.get('versions', ['-'])[-1])\n for hit in hits\n ]) + 4\n\n installed_packages = [p.project_name for p in pkg_resources.working_set]\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n version = hit.get('versions', ['-'])[-1]\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n # wrap and indent summary to fit terminal\n summary = textwrap.wrap(summary, target_width)\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n\n line = '%-*s - %s' % (name_column_width,\n '%s (%s)' % (name, version), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n latest = highest_version(hit['versions'])\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n logger.info('INSTALLED: %s', dist.version)\n logger.info('LATEST: %s', latest)\n except UnicodeEncodeError:\n pass\n\n\ndef highest_version(versions):\n return max(versions, key=parse_version)\n", "path": "pip/commands/search.py"}]} | 2,017 | 360 |
gh_patches_debug_16875 | rasdani/github-patches | git_diff | getsentry__sentry-python-2105 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.19.1
### Steps to Reproduce
I'm trying to use the `asyncio` integration like this:
```python
sentry_sdk.init(dsn=os.environ.get("SENTRY_DSN"), traces_sample_rate=0.1, integrations=[AsyncioIntegration()])
```
I keep on getting a traceback that seems to be a Sentry-specific issue.
### Expected Result
No tracebacks repeatedly occur
### Actual Result
I see this traceback repeatedly printed in the logs:
```python
Task exception was never retrieved
future: <Task finished name='Task-1512' coro=<patch_asyncio.<locals>._sentry_task_factory.<locals>._coro_creating_hub_and_span() done, defined at /usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py:34> exception=AttributeError("'async_generator_athrow' object has no attribute '__qualname__'")>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py", line 40, in _coro_creating_hub_and_span
with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'
```
</issue>
<code>
[start of sentry_sdk/integrations/asyncio.py]
1 from __future__ import absolute_import
2 import sys
3
4 from sentry_sdk._compat import reraise
5 from sentry_sdk.consts import OP
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.integrations import Integration, DidNotEnable
8 from sentry_sdk._types import TYPE_CHECKING
9 from sentry_sdk.utils import event_from_exception
10
11 try:
12 import asyncio
13 from asyncio.tasks import Task
14 except ImportError:
15 raise DidNotEnable("asyncio not available")
16
17
18 if TYPE_CHECKING:
19 from typing import Any
20
21 from sentry_sdk._types import ExcInfo
22
23
24 def patch_asyncio():
25 # type: () -> None
26 orig_task_factory = None
27 try:
28 loop = asyncio.get_running_loop()
29 orig_task_factory = loop.get_task_factory()
30
31 def _sentry_task_factory(loop, coro):
32 # type: (Any, Any) -> Any
33
34 async def _coro_creating_hub_and_span():
35 # type: () -> Any
36 hub = Hub(Hub.current)
37 result = None
38
39 with hub:
40 with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
41 try:
42 result = await coro
43 except Exception:
44 reraise(*_capture_exception(hub))
45
46 return result
47
48 # Trying to use user set task factory (if there is one)
49 if orig_task_factory:
50 return orig_task_factory(loop, _coro_creating_hub_and_span())
51
52 # The default task factory in `asyncio` does not have its own function
53 # but is just a couple of lines in `asyncio.base_events.create_task()`
54 # Those lines are copied here.
55
56 # WARNING:
57 # If the default behavior of the task creation in asyncio changes,
58 # this will break!
59 task = Task(_coro_creating_hub_and_span(), loop=loop)
60 if task._source_traceback: # type: ignore
61 del task._source_traceback[-1] # type: ignore
62
63 return task
64
65 loop.set_task_factory(_sentry_task_factory)
66 except RuntimeError:
67 # When there is no running loop, we have nothing to patch.
68 pass
69
70
71 def _capture_exception(hub):
72 # type: (Hub) -> ExcInfo
73 exc_info = sys.exc_info()
74
75 integration = hub.get_integration(AsyncioIntegration)
76 if integration is not None:
77 # If an integration is there, a client has to be there.
78 client = hub.client # type: Any
79
80 event, hint = event_from_exception(
81 exc_info,
82 client_options=client.options,
83 mechanism={"type": "asyncio", "handled": False},
84 )
85 hub.capture_event(event, hint=hint)
86
87 return exc_info
88
89
90 class AsyncioIntegration(Integration):
91 identifier = "asyncio"
92
93 @staticmethod
94 def setup_once():
95 # type: () -> None
96 patch_asyncio()
97
[end of sentry_sdk/integrations/asyncio.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py
--- a/sentry_sdk/integrations/asyncio.py
+++ b/sentry_sdk/integrations/asyncio.py
@@ -21,6 +21,15 @@
from sentry_sdk._types import ExcInfo
+def get_name(coro):
+ # type: (Any) -> str
+ return (
+ getattr(coro, "__qualname__", None)
+ or getattr(coro, "__name__", None)
+ or "coroutine without __name__"
+ )
+
+
def patch_asyncio():
# type: () -> None
orig_task_factory = None
@@ -37,7 +46,7 @@
result = None
with hub:
- with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
+ with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):
try:
result = await coro
except Exception:
| {"golden_diff": "diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py\n--- a/sentry_sdk/integrations/asyncio.py\n+++ b/sentry_sdk/integrations/asyncio.py\n@@ -21,6 +21,15 @@\n from sentry_sdk._types import ExcInfo\n \n \n+def get_name(coro):\n+ # type: (Any) -> str\n+ return (\n+ getattr(coro, \"__qualname__\", None)\n+ or getattr(coro, \"__name__\", None)\n+ or \"coroutine without __name__\"\n+ )\n+\n+\n def patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n@@ -37,7 +46,7 @@\n result = None\n \n with hub:\n- with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n+ with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):\n try:\n result = await coro\n except Exception:\n", "issue": "AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n1.19.1\r\n\r\n### Steps to Reproduce\r\n\r\nI'm trying to use the `asyncio` integration like this:\r\n\r\n```python\r\nsentry_sdk.init(dsn=os.environ.get(\"SENTRY_DSN\"), traces_sample_rate=0.1, integrations=[AsyncioIntegration()])\r\n```\r\n\r\nI keep on getting a traceback that seems to be a Sentry-specific issue.\r\n\r\n### Expected Result\r\n\r\nNo tracebacks repeatedly occur\r\n\r\n### Actual Result\r\n\r\nI see this traceback repeatedly printed in the logs:\r\n\r\n```python\r\nTask exception was never retrieved\r\nfuture: <Task finished name='Task-1512' coro=<patch_asyncio.<locals>._sentry_task_factory.<locals>._coro_creating_hub_and_span() done, defined at /usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py:34> exception=AttributeError(\"'async_generator_athrow' object has no attribute '__qualname__'\")>\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py\", line 40, in _coro_creating_hub_and_span\r\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\r\nAttributeError: 'async_generator_athrow' object has no attribute '__qualname__'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import TYPE_CHECKING\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif TYPE_CHECKING:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> Any\n hub = Hub(Hub.current)\n result = None\n\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n return result\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span())\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the 
default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}]} | 1,704 | 236 |
gh_patches_debug_8350 | rasdani/github-patches | git_diff | getsentry__sentry-5984 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Auto assign should occur as actor
When using 'Fixes XXX' annotation in a commit, I noticed that while Sentry auto assigned to me (expected), it did so on behalf of itself instead of my user.

</issue>
<code>
[start of src/sentry/receivers/releases.py]
1 from __future__ import absolute_import, print_function
2
3 from django.db import IntegrityError, transaction
4 from django.db.models.signals import post_save
5
6 from sentry.models import (
7 Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue
8 )
9 from sentry.tasks.clear_expired_resolutions import clear_expired_resolutions
10
11
12 def ensure_release_exists(instance, created, **kwargs):
13 if instance.key != 'sentry:release':
14 return
15
16 if instance.data and instance.data.get('release_id'):
17 return
18
19 try:
20 with transaction.atomic():
21 release = Release.objects.create(
22 organization_id=instance.project.organization_id,
23 version=instance.value,
24 date_added=instance.first_seen,
25 )
26 except IntegrityError:
27 release = Release.objects.get(
28 organization_id=instance.project.organization_id,
29 version=instance.value,
30 )
31 release.update(date_added=instance.first_seen)
32 else:
33 instance.update(data={'release_id': release.id})
34
35 release.add_project(instance.project)
36
37
38 def resolve_group_resolutions(instance, created, **kwargs):
39 if not created:
40 return
41
42 clear_expired_resolutions.delay(release_id=instance.id)
43
44
45 def resolved_in_commit(instance, created, **kwargs):
46 # TODO(dcramer): we probably should support an updated message
47 if not created:
48 return
49
50 groups = instance.find_referenced_groups()
51 for group in groups:
52 try:
53 with transaction.atomic():
54 GroupCommitResolution.objects.create(
55 group_id=group.id,
56 commit_id=instance.id,
57 )
58 if instance.author:
59 user_list = list(instance.author.find_users())
60 else:
61 user_list = ()
62 if user_list:
63 Activity.objects.create(
64 project_id=group.project_id,
65 group=group,
66 type=Activity.SET_RESOLVED_IN_COMMIT,
67 ident=instance.id,
68 user=user_list[0],
69 data={
70 'commit': instance.id,
71 }
72 )
73 GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])
74 else:
75 Activity.objects.create(
76 project_id=group.project_id,
77 group=group,
78 type=Activity.SET_RESOLVED_IN_COMMIT,
79 ident=instance.id,
80 data={
81 'commit': instance.id,
82 }
83 )
84 except IntegrityError:
85 pass
86
87
88 post_save.connect(
89 resolve_group_resolutions, sender=Release, dispatch_uid="resolve_group_resolutions", weak=False
90 )
91
92 post_save.connect(
93 ensure_release_exists, sender=TagValue, dispatch_uid="ensure_release_exists", weak=False
94 )
95
96 post_save.connect(
97 resolved_in_commit,
98 sender=Commit,
99 dispatch_uid="resolved_in_commit",
100 weak=False,
101 )
102
[end of src/sentry/receivers/releases.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/receivers/releases.py b/src/sentry/receivers/releases.py
--- a/src/sentry/receivers/releases.py
+++ b/src/sentry/receivers/releases.py
@@ -70,7 +70,8 @@
'commit': instance.id,
}
)
- GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])
+ GroupAssignee.objects.assign(
+ group=group, assigned_to=user_list[0], acting_user=user_list[0])
else:
Activity.objects.create(
project_id=group.project_id,
| {"golden_diff": "diff --git a/src/sentry/receivers/releases.py b/src/sentry/receivers/releases.py\n--- a/src/sentry/receivers/releases.py\n+++ b/src/sentry/receivers/releases.py\n@@ -70,7 +70,8 @@\n 'commit': instance.id,\n }\n )\n- GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])\n+ GroupAssignee.objects.assign(\n+ group=group, assigned_to=user_list[0], acting_user=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n", "issue": "Auto assign should occur as actor\nWhen using 'Fixes XXX' annotation in a commit, I noticed that while Sentry auto assigned to me (expected), it did so on behalf of itself instead of my user.\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nfrom django.db import IntegrityError, transaction\nfrom django.db.models.signals import post_save\n\nfrom sentry.models import (\n Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue\n)\nfrom sentry.tasks.clear_expired_resolutions import clear_expired_resolutions\n\n\ndef ensure_release_exists(instance, created, **kwargs):\n if instance.key != 'sentry:release':\n return\n\n if instance.data and instance.data.get('release_id'):\n return\n\n try:\n with transaction.atomic():\n release = Release.objects.create(\n organization_id=instance.project.organization_id,\n version=instance.value,\n date_added=instance.first_seen,\n )\n except IntegrityError:\n release = Release.objects.get(\n organization_id=instance.project.organization_id,\n version=instance.value,\n )\n release.update(date_added=instance.first_seen)\n else:\n instance.update(data={'release_id': release.id})\n\n release.add_project(instance.project)\n\n\ndef resolve_group_resolutions(instance, created, **kwargs):\n if not created:\n return\n\n clear_expired_resolutions.delay(release_id=instance.id)\n\n\ndef resolved_in_commit(instance, created, **kwargs):\n # TODO(dcramer): we probably should support an updated message\n if not created:\n return\n\n groups = instance.find_referenced_groups()\n for group in groups:\n try:\n with transaction.atomic():\n GroupCommitResolution.objects.create(\n group_id=group.id,\n commit_id=instance.id,\n )\n if instance.author:\n user_list = list(instance.author.find_users())\n else:\n user_list = ()\n if user_list:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n user=user_list[0],\n data={\n 'commit': instance.id,\n }\n )\n GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n data={\n 'commit': instance.id,\n }\n )\n except IntegrityError:\n pass\n\n\npost_save.connect(\n resolve_group_resolutions, sender=Release, dispatch_uid=\"resolve_group_resolutions\", weak=False\n)\n\npost_save.connect(\n ensure_release_exists, sender=TagValue, dispatch_uid=\"ensure_release_exists\", weak=False\n)\n\npost_save.connect(\n resolved_in_commit,\n sender=Commit,\n dispatch_uid=\"resolved_in_commit\",\n weak=False,\n)\n", "path": "src/sentry/receivers/releases.py"}]} | 1,441 | 130 |
gh_patches_debug_61141 | rasdani/github-patches | git_diff | e2nIEE__pandapower-2263 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] __format_version__ not increased.
### Issue Description
The `__format_version__` in `_version.py` has not been increased eventhough the format got changed!
This is an issue in the develop branch **not** in master!
In my fork I made an update to many test cases since I changed the format, so I saved many networks in as files, they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed eventhough my code should not mess with them. So I did a little diging and found that the expected and actual results differ in `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.
This is because the expected results are loaded form file using the `pandapower.from_json` function and since then format version is the same as the current format verison in `_version.py` the conversion to the newest format is not done. So the network is returned as loaded from file.
The actual results however are a product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.
If new columns are added `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release as a new format version most likely breaks backwards compatibility. On a bugfix version I would expect I go backwards and forwards without issue. But this is not the case if the format version changes! A 2.13.1 Network should sucessfully load on 2.13.0 but this will not work if new columns are added. So this change should be reflected by an increase of the format verison to at least 2.15.0 in my opinion.
The breaking commit is 516f8af as it changed the format without changeing the format version.
</issue>
<code>
[start of pandapower/_version.py]
1 import importlib.metadata
2
3 __version__ = importlib.metadata.version("pandapower")
4 __format_version__ = "2.14.0"
5
[end of pandapower/_version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandapower/_version.py b/pandapower/_version.py
--- a/pandapower/_version.py
+++ b/pandapower/_version.py
@@ -1,4 +1,4 @@
import importlib.metadata
__version__ = importlib.metadata.version("pandapower")
-__format_version__ = "2.14.0"
+__format_version__ = "2.15.0"
| {"golden_diff": "diff --git a/pandapower/_version.py b/pandapower/_version.py\n--- a/pandapower/_version.py\n+++ b/pandapower/_version.py\n@@ -1,4 +1,4 @@\n import importlib.metadata\n \n __version__ = importlib.metadata.version(\"pandapower\")\n-__format_version__ = \"2.14.0\"\n+__format_version__ = \"2.15.0\"\n", "issue": "[bug] __format_version__ not increased.\n### Issue Description\r\n\r\nThe `__format_version__` in `_version.py` has not been increased eventhough the format got changed!\r\n\r\nThis is an issue in the develop branch **not** in master!\r\n\r\nIn my fork I made an update to many test cases since I changed the format, so I saved many networks in as files, they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed eventhough my code should not mess with them. So I did a little diging and found that the expected and actual results differ in `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.\r\n\r\nThis is because the expected results are loaded form file using the `pandapower.from_json` function and since then format version is the same as the current format verison in `_version.py` the conversion to the newest format is not done. So the network is returned as loaded from file.\r\nThe actual results however are a product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.\r\n\r\nIf new columns are added `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release as a new format version most likely breaks backwards compatibility. On a bugfix version I would expect I go backwards and forwards without issue. But this is not the case if the format version changes! A 2.13.1 Network should sucessfully load on 2.13.0 but this will not work if new columns are added. So this change should be reflected by an increase of the format verison to at least 2.15.0 in my opinion.\r\n\r\nThe breaking commit is 516f8af as it changed the format without changeing the format version.\r\n\n", "before_files": [{"content": "import importlib.metadata\n\n__version__ = importlib.metadata.version(\"pandapower\")\n__format_version__ = \"2.14.0\"\n", "path": "pandapower/_version.py"}]} | 998 | 98 |
gh_patches_debug_9115 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2637 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider cvs is broken
During the global build at 2021-08-18-14-42-26, spider **cvs** failed with **0 features** and **9870 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/cvs.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson))
</issue>
<code>
[start of locations/spiders/cvs.py]
1 import json
2 import scrapy
3 import re
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 DAYS = [
8 'Mo',
9 'Tu',
10 'We',
11 'Th',
12 'Fr',
13 'Sa',
14 'Su'
15 ]
16
17
18 class CVSSpider(scrapy.Spider):
19
20 name = "cvs"
21 item_attributes = { 'brand': "CVS", 'brand_wikidata': "Q2078880" }
22 allowed_domains = ["www.cvs.com"]
23 download_delay = 0.5
24 start_urls = (
25 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',
26 )
27
28 def parse_hours(self, hours):
29 opening_hours = OpeningHours()
30
31 for group in hours:
32 if 'closed' in group:
33 continue
34 if 'open 24 hours' in group:
35 days = re.search(r'([a-zA-Z\-]+)\s+open 24 hours', group).groups()[0]
36 open_time, close_time = '00:00:00', '23:59:00'
37 else:
38 try:
39 days, open_time, close_time = re.search(r'([a-zA-Z\-]+)\s+([\d:\sapm]+)-([\d:\sapm]+)', group).groups()
40 except AttributeError:
41 continue # no hours listed, just day
42 try:
43 start_day, end_day = days.split('-')
44 except ValueError:
45 start_day, end_day = days, days
46 for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:
47 if 'm' in open_time:
48 open_time = open_time.strip(' apm') + ":00"
49 if 'm' in close_time:
50 close_time = close_time.strip(' apm') + ":00"
51 opening_hours.add_range(day=day,
52 open_time=open_time.strip(),
53 close_time=close_time.strip(),
54 time_format='%H:%M:%S')
55
56 return opening_hours.as_opening_hours()
57
58 def parse_stores(self, response):
59 try:
60 data = json.loads(response.xpath('//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()').extract_first())[0]
61 except json.decoder.JSONDecodeError:
62 # one malformed json body on this store:
63 # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076
64 data = response.xpath('//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()').extract_first()
65 data = re.sub(r'"tops Plaza\s*"', '', data)
66 data = json.loads(data)[0]
67 except TypeError:
68 return # empty store page
69
70 properties = {
71 'name': data["name"],
72 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),
73 'addr_full': data["address"]["streetAddress"].strip(', '),
74 'city': data["address"]["addressLocality"],
75 'state': data["address"]["addressRegion"],
76 'postcode': data["address"]["postalCode"],
77 'country': data["address"]["addressCountry"],
78 'phone': data["address"].get("telephone"),
79 'website': data.get("url") or response.url,
80 'lat': float(data["geo"]["latitude"]),
81 'lon': float(data["geo"]["longitude"]),
82 }
83
84 hours = self.parse_hours(data["openingHours"])
85 if hours:
86 properties["opening_hours"] = hours
87
88 yield GeojsonPointItem(**properties)
89
90 def parse_city_stores(self, response):
91 stores = response.xpath('//div[@class="each-store"]')
92
93 for store in stores:
94
95 direction = store.xpath('normalize-space(.//span[@class="store-number"]/a/@href)').extract_first()
96 if direction:
97 yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)
98
99 def parse_state(self, response):
100 city_urls = response.xpath('//div[@class="states"]/ul/li/a/@href').extract()
101 for path in city_urls:
102 yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)
103
104 def parse(self, response):
105 urls = response.xpath('//div[@class="states"]/ul/li/a/@href').extract()
106 for path in urls:
107 yield scrapy.Request(response.urljoin(path), callback=self.parse_state)
108
[end of locations/spiders/cvs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/cvs.py b/locations/spiders/cvs.py
--- a/locations/spiders/cvs.py
+++ b/locations/spiders/cvs.py
@@ -77,8 +77,8 @@
'country': data["address"]["addressCountry"],
'phone': data["address"].get("telephone"),
'website': data.get("url") or response.url,
- 'lat': float(data["geo"]["latitude"]),
- 'lon': float(data["geo"]["longitude"]),
+ 'lat': data["geo"]["latitude"] or None,
+ 'lon': data["geo"]["longitude"] or None,
}
hours = self.parse_hours(data["openingHours"])
| {"golden_diff": "diff --git a/locations/spiders/cvs.py b/locations/spiders/cvs.py\n--- a/locations/spiders/cvs.py\n+++ b/locations/spiders/cvs.py\n@@ -77,8 +77,8 @@\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data[\"address\"].get(\"telephone\"),\n 'website': data.get(\"url\") or response.url,\n- 'lat': float(data[\"geo\"][\"latitude\"]),\n- 'lon': float(data[\"geo\"][\"longitude\"]),\n+ 'lat': data[\"geo\"][\"latitude\"] or None,\n+ 'lon': data[\"geo\"][\"longitude\"] or None,\n }\n \n hours = self.parse_hours(data[\"openingHours\"])\n", "issue": "Spider cvs is broken\nDuring the global build at 2021-08-18-14-42-26, spider **cvs** failed with **0 features** and **9870 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/cvs.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson))\n", "before_files": [{"content": "import json\nimport scrapy\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = [\n 'Mo',\n 'Tu',\n 'We',\n 'Th',\n 'Fr',\n 'Sa',\n 'Su'\n]\n\n\nclass CVSSpider(scrapy.Spider):\n\n name = \"cvs\"\n item_attributes = { 'brand': \"CVS\", 'brand_wikidata': \"Q2078880\" }\n allowed_domains = [\"www.cvs.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for group in hours:\n if 'closed' in group:\n continue\n if 'open 24 hours' in group:\n days = re.search(r'([a-zA-Z\\-]+)\\s+open 24 hours', group).groups()[0]\n open_time, close_time = '00:00:00', '23:59:00'\n else:\n try:\n days, open_time, close_time = re.search(r'([a-zA-Z\\-]+)\\s+([\\d:\\sapm]+)-([\\d:\\sapm]+)', group).groups()\n except AttributeError:\n continue # no hours listed, just day\n try:\n start_day, end_day = days.split('-')\n except ValueError:\n start_day, end_day = days, days\n for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:\n if 'm' in open_time:\n open_time = open_time.strip(' apm') + \":00\"\n if 'm' in close_time:\n close_time = close_time.strip(' apm') + \":00\"\n opening_hours.add_range(day=day,\n open_time=open_time.strip(),\n close_time=close_time.strip(),\n time_format='%H:%M:%S')\n\n return opening_hours.as_opening_hours()\n\n def parse_stores(self, response):\n try:\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first())[0]\n except json.decoder.JSONDecodeError:\n # one malformed json body on this store:\n # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076\n data = response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first()\n data = re.sub(r'\"tops Plaza\\s*\"', '', data)\n data = json.loads(data)[0]\n except TypeError:\n return # empty store page\n\n properties = {\n 'name': data[\"name\"],\n 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(', '),\n 'city': data[\"address\"][\"addressLocality\"],\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data[\"address\"].get(\"telephone\"),\n 
'website': data.get(\"url\") or response.url,\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n }\n\n hours = self.parse_hours(data[\"openingHours\"])\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse_city_stores(self, response):\n stores = response.xpath('//div[@class=\"each-store\"]')\n\n for store in stores:\n\n direction = store.xpath('normalize-space(.//span[@class=\"store-number\"]/a/@href)').extract_first()\n if direction:\n yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)\n\n def parse_state(self, response):\n city_urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in city_urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)\n\n def parse(self, response):\n urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n", "path": "locations/spiders/cvs.py"}]} | 1,932 | 155 |
gh_patches_debug_29204 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-426 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in 45.state-management sample
To create the user profile property, the sample should use the UserState, but it is currently referring to the ConversationState.
Current code: self.user_profile = self.conversation_state.create_property("UserProfile")
Expected code: self.user_profile = self.user_state.create_property("UserProfile")
</issue>
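For reference, a minimal sketch of the wiring the issue asks for (attribute names follow the sample; this is an illustration, not the repository's final patch):

```python
# Conversation-scoped data stays on ConversationState; the user profile is backed by UserState.
self.conversation_data = self.conversation_state.create_property("ConversationData")
self.user_profile = self.user_state.create_property("UserProfile")
```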
<code>
[start of samples/45.state-management/bots/state_management_bot.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import time
5 import pytz
6 from datetime import datetime
7
8 from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState
9 from botbuilder.schema import ChannelAccount
10
11 from data_models import ConversationData, UserProfile
12
13
14 class StateManagementBot(ActivityHandler):
15 def __init__(self, conversation_state: ConversationState, user_state: UserState):
16 if conversation_state is None:
17 raise TypeError(
18 "[StateManagementBot]: Missing parameter. conversation_state is required but None was given"
19 )
20 if user_state is None:
21 raise TypeError(
22 "[StateManagementBot]: Missing parameter. user_state is required but None was given"
23 )
24
25 self.conversation_state = conversation_state
26 self.user_state = user_state
27
28 self.conversation_data = self.conversation_state.create_property(
29 "ConversationData"
30 )
31 self.user_profile = self.conversation_state.create_property("UserProfile")
32
33 async def on_turn(self, turn_context: TurnContext):
34 await super().on_turn(turn_context)
35
36 await self.conversation_state.save_changes(turn_context)
37 await self.user_state.save_changes(turn_context)
38
39 async def on_members_added_activity(
40 self, members_added: [ChannelAccount], turn_context: TurnContext
41 ):
42 for member in members_added:
43 if member.id != turn_context.activity.recipient.id:
44 await turn_context.send_activity(
45 "Welcome to State Bot Sample. Type anything to get started."
46 )
47
48 async def on_message_activity(self, turn_context: TurnContext):
49 # Get the state properties from the turn context.
50 user_profile = await self.user_profile.get(turn_context, UserProfile)
51 conversation_data = await self.conversation_data.get(
52 turn_context, ConversationData
53 )
54
55 if user_profile.name is None:
56 # First time around this is undefined, so we will prompt user for name.
57 if conversation_data.prompted_for_user_name:
58 # Set the name to what the user provided.
59 user_profile.name = turn_context.activity.text
60
61 # Acknowledge that we got their name.
62 await turn_context.send_activity(
63 f"Thanks { user_profile.name }. To see conversation data, type anything."
64 )
65
66 # Reset the flag to allow the bot to go though the cycle again.
67 conversation_data.prompted_for_user_name = False
68 else:
69 # Prompt the user for their name.
70 await turn_context.send_activity("What is your name?")
71
72 # Set the flag to true, so we don't prompt in the next turn.
73 conversation_data.prompted_for_user_name = True
74 else:
75 # Add message details to the conversation data.
76 conversation_data.timestamp = self.__datetime_from_utc_to_local(
77 turn_context.activity.timestamp
78 )
79 conversation_data.channel_id = turn_context.activity.channel_id
80
81 # Display state data.
82 await turn_context.send_activity(
83 f"{ user_profile.name } sent: { turn_context.activity.text }"
84 )
85 await turn_context.send_activity(
86 f"Message received at: { conversation_data.timestamp }"
87 )
88 await turn_context.send_activity(
89 f"Message received from: { conversation_data.channel_id }"
90 )
91
92 def __datetime_from_utc_to_local(self, utc_datetime):
93 now_timestamp = time.time()
94 offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(
95 now_timestamp
96 )
97 result = utc_datetime + offset
98 return result.strftime("%I:%M:%S %p, %A, %B %d of %Y")
99
[end of samples/45.state-management/bots/state_management_bot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py
--- a/samples/45.state-management/bots/state_management_bot.py
+++ b/samples/45.state-management/bots/state_management_bot.py
@@ -2,7 +2,6 @@
# Licensed under the MIT License.
import time
-import pytz
from datetime import datetime
from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState
@@ -25,10 +24,10 @@
self.conversation_state = conversation_state
self.user_state = user_state
- self.conversation_data = self.conversation_state.create_property(
+ self.conversation_data_accessor = self.conversation_state.create_property(
"ConversationData"
)
- self.user_profile = self.conversation_state.create_property("UserProfile")
+ self.user_profile_accessor = self.user_state.create_property("UserProfile")
async def on_turn(self, turn_context: TurnContext):
await super().on_turn(turn_context)
@@ -47,8 +46,8 @@
async def on_message_activity(self, turn_context: TurnContext):
# Get the state properties from the turn context.
- user_profile = await self.user_profile.get(turn_context, UserProfile)
- conversation_data = await self.conversation_data.get(
+ user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)
+ conversation_data = await self.conversation_data_accessor.get(
turn_context, ConversationData
)
| {"golden_diff": "diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py\n--- a/samples/45.state-management/bots/state_management_bot.py\n+++ b/samples/45.state-management/bots/state_management_bot.py\n@@ -2,7 +2,6 @@\n # Licensed under the MIT License.\n \n import time\n-import pytz\n from datetime import datetime\n \n from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\n@@ -25,10 +24,10 @@\n self.conversation_state = conversation_state\n self.user_state = user_state\n \n- self.conversation_data = self.conversation_state.create_property(\n+ self.conversation_data_accessor = self.conversation_state.create_property(\n \"ConversationData\"\n )\n- self.user_profile = self.conversation_state.create_property(\"UserProfile\")\n+ self.user_profile_accessor = self.user_state.create_property(\"UserProfile\")\n \n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n@@ -47,8 +46,8 @@\n \n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n- user_profile = await self.user_profile.get(turn_context, UserProfile)\n- conversation_data = await self.conversation_data.get(\n+ user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)\n+ conversation_data = await self.conversation_data_accessor.get(\n turn_context, ConversationData\n )\n", "issue": "Bug in 45.state-management sample\n\r\nTo create the user profile property , it should refer the UserState but in the sample its referring the \r\nconversationstate.\r\n\r\nCurrent code : self.user_profile = self.conversation_state.create_property(\"UserProfile\")\r\n\r\nExpected code : self.user_profile = self.user_state.create_property(\"UserProfile\")\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport time\nimport pytz\nfrom datetime import datetime\n\nfrom botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\nfrom botbuilder.schema import ChannelAccount\n\nfrom data_models import ConversationData, UserProfile\n\n\nclass StateManagementBot(ActivityHandler):\n def __init__(self, conversation_state: ConversationState, user_state: UserState):\n if conversation_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. conversation_state is required but None was given\"\n )\n if user_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. user_state is required but None was given\"\n )\n\n self.conversation_state = conversation_state\n self.user_state = user_state\n\n self.conversation_data = self.conversation_state.create_property(\n \"ConversationData\"\n )\n self.user_profile = self.conversation_state.create_property(\"UserProfile\")\n\n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n\n await self.conversation_state.save_changes(turn_context)\n await self.user_state.save_changes(turn_context)\n\n async def on_members_added_activity(\n self, members_added: [ChannelAccount], turn_context: TurnContext\n ):\n for member in members_added:\n if member.id != turn_context.activity.recipient.id:\n await turn_context.send_activity(\n \"Welcome to State Bot Sample. 
Type anything to get started.\"\n )\n\n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n user_profile = await self.user_profile.get(turn_context, UserProfile)\n conversation_data = await self.conversation_data.get(\n turn_context, ConversationData\n )\n\n if user_profile.name is None:\n # First time around this is undefined, so we will prompt user for name.\n if conversation_data.prompted_for_user_name:\n # Set the name to what the user provided.\n user_profile.name = turn_context.activity.text\n\n # Acknowledge that we got their name.\n await turn_context.send_activity(\n f\"Thanks { user_profile.name }. To see conversation data, type anything.\"\n )\n\n # Reset the flag to allow the bot to go though the cycle again.\n conversation_data.prompted_for_user_name = False\n else:\n # Prompt the user for their name.\n await turn_context.send_activity(\"What is your name?\")\n\n # Set the flag to true, so we don't prompt in the next turn.\n conversation_data.prompted_for_user_name = True\n else:\n # Add message details to the conversation data.\n conversation_data.timestamp = self.__datetime_from_utc_to_local(\n turn_context.activity.timestamp\n )\n conversation_data.channel_id = turn_context.activity.channel_id\n\n # Display state data.\n await turn_context.send_activity(\n f\"{ user_profile.name } sent: { turn_context.activity.text }\"\n )\n await turn_context.send_activity(\n f\"Message received at: { conversation_data.timestamp }\"\n )\n await turn_context.send_activity(\n f\"Message received from: { conversation_data.channel_id }\"\n )\n\n def __datetime_from_utc_to_local(self, utc_datetime):\n now_timestamp = time.time()\n offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(\n now_timestamp\n )\n result = utc_datetime + offset\n return result.strftime(\"%I:%M:%S %p, %A, %B %d of %Y\")\n", "path": "samples/45.state-management/bots/state_management_bot.py"}]} | 1,564 | 335 |
gh_patches_debug_5466 | rasdani/github-patches | git_diff | docker__docker-py-820 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
POST /volumes is now POST /volumes/create
https://github.com/docker/docker/pull/17136
</issue>
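A minimal sketch of the corresponding client-side change, based on the `create_volume` method shown below (illustrative, not the final patch):

```python
# Inside docker.api.volume.VolumeApiMixin (sketch)
def create_volume(self, name, driver=None, driver_opts=None):
    url = self._url('/volumes/create')  # the daemon moved volume creation to /volumes/create
    if driver_opts is not None and not isinstance(driver_opts, dict):
        raise TypeError('driver_opts must be a dictionary')

    data = {
        'Name': name,
        'Driver': driver,
        'DriverOpts': driver_opts,
    }
    return self._result(self._post_json(url, data=data), True)
```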
<code>
[start of docker/api/volume.py]
1 from .. import utils
2
3
4 class VolumeApiMixin(object):
5 @utils.minimum_version('1.21')
6 def volumes(self, filters=None):
7 params = {
8 'filter': utils.convert_filters(filters) if filters else None
9 }
10 url = self._url('/volumes')
11 return self._result(self._get(url, params=params), True)
12
13 @utils.minimum_version('1.21')
14 def create_volume(self, name, driver=None, driver_opts=None):
15 url = self._url('/volumes')
16 if driver_opts is not None and not isinstance(driver_opts, dict):
17 raise TypeError('driver_opts must be a dictionary')
18
19 data = {
20 'Name': name,
21 'Driver': driver,
22 'DriverOpts': driver_opts,
23 }
24 return self._result(self._post_json(url, data=data), True)
25
26 @utils.minimum_version('1.21')
27 def inspect_volume(self, name):
28 url = self._url('/volumes/{0}', name)
29 return self._result(self._get(url), True)
30
31 @utils.minimum_version('1.21')
32 def remove_volume(self, name):
33 url = self._url('/volumes/{0}', name)
34 resp = self._delete(url)
35 self._raise_for_status(resp)
36 return True
37
[end of docker/api/volume.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/api/volume.py b/docker/api/volume.py
--- a/docker/api/volume.py
+++ b/docker/api/volume.py
@@ -12,7 +12,7 @@
@utils.minimum_version('1.21')
def create_volume(self, name, driver=None, driver_opts=None):
- url = self._url('/volumes')
+ url = self._url('/volumes/create')
if driver_opts is not None and not isinstance(driver_opts, dict):
raise TypeError('driver_opts must be a dictionary')
| {"golden_diff": "diff --git a/docker/api/volume.py b/docker/api/volume.py\n--- a/docker/api/volume.py\n+++ b/docker/api/volume.py\n@@ -12,7 +12,7 @@\n \n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n- url = self._url('/volumes')\n+ url = self._url('/volumes/create')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n", "issue": "POST /volumes is now POST /volumes/create\nhttps://github.com/docker/docker/pull/17136\n\n", "before_files": [{"content": "from .. import utils\n\n\nclass VolumeApiMixin(object):\n @utils.minimum_version('1.21')\n def volumes(self, filters=None):\n params = {\n 'filter': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/volumes')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n url = self._url('/volumes')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'DriverOpts': driver_opts,\n }\n return self._result(self._post_json(url, data=data), True)\n\n @utils.minimum_version('1.21')\n def inspect_volume(self, name):\n url = self._url('/volumes/{0}', name)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.21')\n def remove_volume(self, name):\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/volume.py"}]} | 911 | 121 |
gh_patches_debug_5879 | rasdani/github-patches | git_diff | inventree__InvenTree-1860 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Migration warns for phantom part changes
Here is the warning:
```
Your models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
Running `manage.py makemigrations` does **not** generate a new migration file...
Running `manage.py showmigrations part` shows all part migrations are complete.
I found this warning with both PostgreSQL and SQLite3, in case the database backend has anything to do with it.
</issue>
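For context, a minimal sketch of the pattern that keeps the generated field definition identical for both management commands (this mirrors the fix that was eventually applied; illustrative only):

```python
# Strip the dynamic currency settings whenever Django is comparing models to
# migrations, so 'migrate' and 'makemigrations' both see the same field kwargs.
if 'migrate' in sys.argv or 'makemigrations' in sys.argv:
    kwargs['default_currency'] = ''
    kwargs['currency_choices'] = []
else:
    kwargs.update(money_kwargs())
```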
<code>
[start of InvenTree/InvenTree/fields.py]
1 """ Custom fields used in InvenTree """
2
3 # -*- coding: utf-8 -*-
4 from __future__ import unicode_literals
5 import sys
6
7 from .validators import allowable_url_schemes
8
9 from django.utils.translation import ugettext_lazy as _
10
11 from django.forms.fields import URLField as FormURLField
12 from django.db import models as models
13 from django.core import validators
14 from django import forms
15
16 from decimal import Decimal
17
18 from djmoney.models.fields import MoneyField as ModelMoneyField
19 from djmoney.forms.fields import MoneyField
20 from djmoney.models.validators import MinMoneyValidator
21
22 import InvenTree.helpers
23
24
25 class InvenTreeURLFormField(FormURLField):
26 """ Custom URL form field with custom scheme validators """
27
28 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
29
30
31 class InvenTreeURLField(models.URLField):
32 """ Custom URL field which has custom scheme validators """
33
34 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
35
36 def formfield(self, **kwargs):
37 return super().formfield(**{
38 'form_class': InvenTreeURLFormField
39 })
40
41
42 def money_kwargs():
43 """ returns the database settings for MoneyFields """
44 from common.settings import currency_code_mappings, currency_code_default
45
46 kwargs = {}
47 kwargs['currency_choices'] = currency_code_mappings()
48 kwargs['default_currency'] = currency_code_default()
49 return kwargs
50
51
52 class InvenTreeModelMoneyField(ModelMoneyField):
53 """
54 Custom MoneyField for clean migrations while using dynamic currency settings
55 """
56
57 def __init__(self, **kwargs):
58 # detect if creating migration
59 if 'makemigrations' in sys.argv:
60 # remove currency information for a clean migration
61 kwargs['default_currency'] = ''
62 kwargs['currency_choices'] = []
63 else:
64 # set defaults
65 kwargs.update(money_kwargs())
66
67 # Set a minimum value validator
68 validators = kwargs.get('validators', [])
69
70 if len(validators) == 0:
71 validators.append(
72 MinMoneyValidator(0),
73 )
74
75 kwargs['validators'] = validators
76
77 super().__init__(**kwargs)
78
79 def formfield(self, **kwargs):
80 """ override form class to use own function """
81 kwargs['form_class'] = InvenTreeMoneyField
82 return super().formfield(**kwargs)
83
84
85 class InvenTreeMoneyField(MoneyField):
86 """ custom MoneyField for clean migrations while using dynamic currency settings """
87 def __init__(self, *args, **kwargs):
88 # override initial values with the real info from database
89 kwargs.update(money_kwargs())
90 super().__init__(*args, **kwargs)
91
92
93 class DatePickerFormField(forms.DateField):
94 """
95 Custom date-picker field
96 """
97
98 def __init__(self, **kwargs):
99
100 help_text = kwargs.get('help_text', _('Enter date'))
101 label = kwargs.get('label', None)
102 required = kwargs.get('required', False)
103 initial = kwargs.get('initial', None)
104
105 widget = forms.DateInput(
106 attrs={
107 'type': 'date',
108 }
109 )
110
111 forms.DateField.__init__(
112 self,
113 required=required,
114 initial=initial,
115 help_text=help_text,
116 widget=widget,
117 label=label
118 )
119
120
121 def round_decimal(value, places):
122 """
123 Round value to the specified number of places.
124 """
125
126 if value is not None:
127 # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options
128 return value.quantize(Decimal(10) ** -places)
129 return value
130
131
132 class RoundingDecimalFormField(forms.DecimalField):
133 def to_python(self, value):
134 value = super(RoundingDecimalFormField, self).to_python(value)
135 value = round_decimal(value, self.decimal_places)
136 return value
137
138 def prepare_value(self, value):
139 """
140 Override the 'prepare_value' method, to remove trailing zeros when displaying.
141 Why? It looks nice!
142 """
143
144 if type(value) == Decimal:
145 return InvenTree.helpers.normalize(value)
146 else:
147 return value
148
149
150 class RoundingDecimalField(models.DecimalField):
151 def to_python(self, value):
152 value = super(RoundingDecimalField, self).to_python(value)
153 return round_decimal(value, self.decimal_places)
154
155 def formfield(self, **kwargs):
156 defaults = {
157 'form_class': RoundingDecimalFormField
158 }
159
160 defaults.update(kwargs)
161
162 return super().formfield(**kwargs)
163
[end of InvenTree/InvenTree/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py
--- a/InvenTree/InvenTree/fields.py
+++ b/InvenTree/InvenTree/fields.py
@@ -55,7 +55,7 @@
def __init__(self, **kwargs):
# detect if creating migration
- if 'makemigrations' in sys.argv:
+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:
# remove currency information for a clean migration
kwargs['default_currency'] = ''
kwargs['currency_choices'] = []
| {"golden_diff": "diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py\n--- a/InvenTree/InvenTree/fields.py\n+++ b/InvenTree/InvenTree/fields.py\n@@ -55,7 +55,7 @@\n \n def __init__(self, **kwargs):\n # detect if creating migration\n- if 'makemigrations' in sys.argv:\n+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n", "issue": "Migration warns for phantom part changes \nHere is the warning:\r\n\r\n```\r\nYour models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.\r\nRun 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.\r\n```\r\n\r\nRunning `manage.py makemigrations` does **not** generate new migration file...\r\n\r\nRunning `manage.py showmigrations part` shows all part migrations are complete.\r\n\r\nI found this warning both with PostGreSQL and SQLite3, if that has anything to do with backend dependency.\n", "before_files": [{"content": "\"\"\" Custom fields used in InvenTree \"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .validators import allowable_url_schemes\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.forms.fields import URLField as FormURLField\nfrom django.db import models as models\nfrom django.core import validators\nfrom django import forms\n\nfrom decimal import Decimal\n\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.validators import MinMoneyValidator\n\nimport InvenTree.helpers\n\n\nclass InvenTreeURLFormField(FormURLField):\n \"\"\" Custom URL form field with custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\" Custom URL field which has custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n def formfield(self, **kwargs):\n return super().formfield(**{\n 'form_class': InvenTreeURLFormField\n })\n\n\ndef money_kwargs():\n \"\"\" returns the database settings for MoneyFields \"\"\"\n from common.settings import currency_code_mappings, currency_code_default\n\n kwargs = {}\n kwargs['currency_choices'] = currency_code_mappings()\n kwargs['default_currency'] = currency_code_default()\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"\n Custom MoneyField for clean migrations while using dynamic currency settings\n \"\"\"\n \n def __init__(self, **kwargs):\n # detect if creating migration\n if 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n if len(validators) == 0:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\" override form class to use own function \"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\" custom MoneyField for clean migrations while using dynamic currency settings \"\"\"\n def __init__(self, *args, 
**kwargs):\n # override initial values with the real info from database\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"\n Custom date-picker field\n \"\"\"\n\n def __init__(self, **kwargs):\n\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"\n Round value to the specified number of places.\n \"\"\"\n\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalFormField, self).to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"\n Override the 'prepare_value' method, to remove trailing zeros when displaying.\n Why? It looks nice!\n \"\"\"\n\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalField, self).to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n defaults = {\n 'form_class': RoundingDecimalFormField\n }\n\n defaults.update(kwargs)\n\n return super().formfield(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}]} | 2,030 | 145 |
gh_patches_debug_14841 | rasdani/github-patches | git_diff | koxudaxi__datamodel-code-generator-421 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IPv4Address doesn't import from pydantic.validators
**Describe the bug**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic import IPv4Address
```
This isn't a valid import.
**To Reproduce**
Example schema:
```yaml
openapi: 3.0.0
info:
version: 0.0.1
title: Foo API
paths:
/foo:
get:
responses:
"200":
description: Success
components:
schemas:
Foo:
type: object
properties:
ip:
type: string
format: ipv4
```
Used commandline:
```
$ datamodel-codegen --input openapi.yaml
```
**Expected behavior**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic.validators import IPv4Address
```
**Version:**
- OS: MacOS
- Python version: `3.9.2`
- datamodel-code-generator version: `0.8.2`
**Additional context**
None
</issue>
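Note that the fix that was eventually merged points these imports at the standard library rather than at `pydantic.validators`; a sketch of the resulting generated import (illustrative):

```python
from ipaddress import IPv4Address  # valid, unlike "from pydantic import IPv4Address"
```

pydantic accepts `ipaddress.IPv4Address` directly as a field type, so the generated model keeps working.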
<code>
[start of datamodel_code_generator/model/pydantic/imports.py]
1 from datamodel_code_generator.imports import Import
2
3 IMPORT_CONSTR = Import.from_full_path('pydantic.constr')
4 IMPORT_CONINT = Import.from_full_path('pydantic.conint')
5 IMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')
6 IMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')
7 IMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')
8 IMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')
9 IMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')
10 IMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')
11 IMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')
12 IMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')
13 IMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')
14 IMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')
15 IMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')
16 IMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')
17 IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
18 IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
19 IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
20 IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
21 IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
22 IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
23 IMPORT_FIELD = Import.from_full_path('pydantic.Field')
24 IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
25 IMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')
26 IMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')
27 IMPORT_STRICT_BOOL = Import.from_full_path('pydantic.StrictBool')
28 IMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')
29 IMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')
30
[end of datamodel_code_generator/model/pydantic/imports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py
--- a/datamodel_code_generator/model/pydantic/imports.py
+++ b/datamodel_code_generator/model/pydantic/imports.py
@@ -17,8 +17,8 @@
IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')
+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')
IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
IMPORT_FIELD = Import.from_full_path('pydantic.Field')
IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
| {"golden_diff": "diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py\n--- a/datamodel_code_generator/model/pydantic/imports.py\n+++ b/datamodel_code_generator/model/pydantic/imports.py\n@@ -17,8 +17,8 @@\n IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\n IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\n IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\n-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\n-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\n+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')\n+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')\n IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\n IMPORT_FIELD = Import.from_full_path('pydantic.Field')\n IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\n", "issue": "IPv4Address doesn't import from pydantic.validators\n**Describe the bug**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic import IPv4Address\r\n```\r\n\r\nThis isn't a valid import.\r\n\r\n**To Reproduce**\r\n\r\nExample schema:\r\n```yaml\r\nopenapi: 3.0.0\r\n\r\ninfo:\r\n version: 0.0.1\r\n title: Foo API\r\n\r\npaths:\r\n /foo:\r\n get:\r\n responses:\r\n \"200\":\r\n description: Success\r\n\r\ncomponents:\r\n schemas:\r\n Foo:\r\n type: object\r\n properties:\r\n ip:\r\n type: string\r\n format: ipv4\r\n```\r\n\r\nUsed commandline:\r\n```\r\n$ datamodel-codegen --input openapi.yaml\r\n```\r\n\r\n**Expected behavior**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic.validators import IPv4Address\r\n```\r\n\r\n**Version:**\r\n - OS: MacOS\r\n - Python version: `3.9.2`\r\n - datamodel-code-generator version: `0.8.2`\r\n\r\n**Additional context**\r\nNone\r\n\n", "before_files": [{"content": "from datamodel_code_generator.imports import Import\n\nIMPORT_CONSTR = Import.from_full_path('pydantic.constr')\nIMPORT_CONINT = Import.from_full_path('pydantic.conint')\nIMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')\nIMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')\nIMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')\nIMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')\nIMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')\nIMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')\nIMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')\nIMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')\nIMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')\nIMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')\nIMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')\nIMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')\nIMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\nIMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\nIMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\nIMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\nIMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\nIMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\nIMPORT_FIELD = Import.from_full_path('pydantic.Field')\nIMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\nIMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')\nIMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')\nIMPORT_STRICT_BOOL = 
Import.from_full_path('pydantic.StrictBool')\nIMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')\nIMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')\n", "path": "datamodel_code_generator/model/pydantic/imports.py"}]} | 1,282 | 231 |
gh_patches_debug_417 | rasdani/github-patches | git_diff | python__python-docs-es-1712 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Translate 'library/base64.po'
This needs to reach 100% translated.
The rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.
Meanwhile, the English version is shown.
Current stats for `library/base64.po`:
* Fuzzy: 4
* Percent translated: 90.9%
* Entries: 50 / 55
* Untranslated: 5
Please comment here if you want this file to be assigned to you, and a member will assign it to you as soon as possible so you can start working on it.
Remember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).
</issue>
<code>
[start of scripts/translate.py]
1 import os
2 import re
3 import sys
4 from typing import Dict, Tuple
5
6 import polib
7
8 VERBOSE = False
9 DEBUG = False
10 SKIP_TRANSLATED_ENTRIES = True
11
12 try:
13 from deep_translator import GoogleTranslator
14 except ImportError:
15 print("Error: This util script needs `deep_translator` to be installed")
16 sys.exit(1)
17
18 _patterns = [
19 ":c:func:`[^`]+`",
20 ":c:type:`[^`]+`",
21 ":c:macro:`[^`]+`",
22 ":c:member:`[^`]+`",
23 ":c:data:`[^`]+`",
24 ":py:data:`[^`]+`",
25 ":py:mod:`[^`]+`",
26 ":func:`[^`]+`",
27 ":mod:`[^`]+`",
28 ":ref:`[^`]+`",
29 ":class:`[^`]+`",
30 ":pep:`[^`]+`",
31 ":data:`[^`]+`",
32 ":exc:`[^`]+`",
33 ":term:`[^`]+`",
34 ":meth:`[^`]+`",
35 ":envvar:`[^`]+`",
36 ":file:`[^`]+`",
37 ":attr:`[^`]+`",
38 ":const:`[^`]+`",
39 ":issue:`[^`]+`",
40 ":opcode:`[^`]+`",
41 ":option:`[^`]+`",
42 ":program:`[^`]+`",
43 ":keyword:`[^`]+`",
44 ":RFC:`[^`]+`",
45 ":doc:`[^`]+`",
46 "``[^`]+``",
47 "`[^`]+`__",
48 "`[^`]+`_",
49 "\*\*.+\*\*", # bold text between **
50 "\*.+\*", # italic text between *
51 ]
52
53 _exps = [re.compile(e) for e in _patterns]
54
55 def protect_sphinx_directives(s: str) -> Tuple[dict, str]:
56 """
57 Parameters:
58 string containing the text to translate
59
60 Returns:
61 dictionary containing all the placeholder text as keys
62 and the correct value.
63 """
64
65 i = 0
66 d: Dict[str, str] = {}
67 for exp in _exps:
68 matches = exp.findall(s)
69 if DEBUG:
70 print(exp, matches)
71 for match in matches:
72 ph = f"XASDF{str(i).zfill(2)}"
73 s = s.replace(match, ph)
74 if ph in d and VERBOSE:
75 print(f"Error: {ph} is already in the dictionary")
76 print("new", match)
77 print("old", d[ph])
78 d[ph] = match
79 i += 1
80 return d, s
81
82
83 def undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:
84 for ph, value in placeholders.items():
85 translated_text = translated_text.replace(ph, value)
86 if DEBUG:
87 print(ph, value)
88 print(translated_text)
89 return translated_text
90
91
92 if __name__ == "__main__":
93 filename = sys.argv[1]
94 if not os.path.isfile(filename):
95 print(f"File not found: '{filename}'")
96 sys.exit(-1)
97
98 po = polib.pofile(filename)
99 translator = GoogleTranslator(source="en", target="es")
100
101 for entry in po:
102 # If the entry has already a translation, skip.
103 if SKIP_TRANSLATED_ENTRIES and entry.msgstr:
104 continue
105
106 print("\nEN|", entry.msgid)
107 placeholders, temp_text = protect_sphinx_directives(entry.msgid)
108 if VERBOSE:
109 print(temp_text)
110 print(placeholders)
111
112 # Translate the temporary text without sphinx statements
113 translated_text = translator.translate(temp_text)
114
115 # Recover sphinx statements
116 real_text = undo_sphinx_directives_protection(placeholders, translated_text)
117 print("ES|", real_text)
118
119 # Replace the po file translated entry
120 entry.msgstr = real_text
121
122 # Save the file after all the entries are translated
123 po.save()
124
[end of scripts/translate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/translate.py b/scripts/translate.py
--- a/scripts/translate.py
+++ b/scripts/translate.py
@@ -42,6 +42,7 @@
":program:`[^`]+`",
":keyword:`[^`]+`",
":RFC:`[^`]+`",
+ ":rfc:`[^`]+`",
":doc:`[^`]+`",
"``[^`]+``",
"`[^`]+`__",
| {"golden_diff": "diff --git a/scripts/translate.py b/scripts/translate.py\n--- a/scripts/translate.py\n+++ b/scripts/translate.py\n@@ -42,6 +42,7 @@\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n+ \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n", "issue": "Translate 'library/base64.po'\nThis needs to reach 100% translated.\n\nThe rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.\nMeanwhile, the English version is shown.\n\nCurrent stats for `library/base64.po`:\n\n* Fuzzy: 4\n* Percent translated: 90.9%\n* Entries: 50 / 55\n* Untranslated: 5\n\nPlease, comment here if you want this file to be assigned to you and an member will assign it to you as soon as possible, so you can start working on it.\n\nRemember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*.+\\*\\*\", # bold text between **\n \"\\*.+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx 
statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}]} | 1,848 | 104 |
gh_patches_debug_5952 | rasdani/github-patches | git_diff | Kinto__kinto-386 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Activate POST on collections
```
$ curl -H "Content-Type: application/json" \
-X POST -d '{"data": {"test": "some_data"}}' --user testuser:abc123 \
https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections
{"errno":115,"message":"Method not allowed on this endpoint.","code":405,"error":"Method Not Allowed"}
```
</issue>
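A minimal sketch of the registration change the issue implies, matching the resource declaration shown below (illustrative only):

```python
@resource.register(name='collection',
                   collection_methods=('GET', 'POST'),  # allow creating collections via POST
                   collection_path='/buckets/{{bucket_id}}/collections',
                   record_path='/buckets/{{bucket_id}}/collections/{{id}}')
class Collection(resource.ProtectedResource):
    ...
```

With that in place, the curl request above should create the collection instead of returning 405.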
<code>
[start of kinto/views/collections.py]
1 import colander
2 import jsonschema
3 from cliquet import resource
4 from jsonschema import exceptions as jsonschema_exceptions
5
6 from kinto.views import NameGenerator
7
8
9 class JSONSchemaMapping(colander.SchemaNode):
10 def schema_type(self, **kw):
11 return colander.Mapping(unknown='preserve')
12
13 def deserialize(self, cstruct=colander.null):
14 # Start by deserializing a simple mapping.
15 validated = super(JSONSchemaMapping, self).deserialize(cstruct)
16
17 # In case it is optional in parent schema.
18 if not validated or validated in (colander.null, colander.drop):
19 return validated
20
21 try:
22 jsonschema.Draft4Validator.check_schema(validated)
23 except jsonschema_exceptions.SchemaError as e:
24 self.raise_invalid(e.path.pop() + e.message)
25 return validated
26
27
28 class CollectionSchema(resource.ResourceSchema):
29 schema = JSONSchemaMapping(missing=colander.drop)
30 cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)
31
32 class Options:
33 preserve_unknown = True
34
35
36 @resource.register(name='collection',
37 collection_methods=('GET',),
38 collection_path='/buckets/{{bucket_id}}/collections',
39 record_path='/buckets/{{bucket_id}}/collections/{{id}}')
40 class Collection(resource.ProtectedResource):
41 mapping = CollectionSchema()
42 permissions = ('read', 'write', 'record:create')
43
44 def __init__(self, *args, **kwargs):
45 super(Collection, self).__init__(*args, **kwargs)
46 self.model.id_generator = NameGenerator()
47
48 def get_parent_id(self, request):
49 bucket_id = request.matchdict['bucket_id']
50 parent_id = '/buckets/%s' % bucket_id
51 return parent_id
52
53 def delete(self):
54 result = super(Collection, self).delete()
55
56 # Delete records.
57 storage = self.model.storage
58 parent_id = '%s/collections/%s' % (self.model.parent_id,
59 self.record_id)
60 storage.delete_all(collection_id='record',
61 parent_id=parent_id,
62 with_deleted=False)
63 storage.purge_deleted(collection_id='record', parent_id=parent_id)
64
65 return result
66
[end of kinto/views/collections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/collections.py b/kinto/views/collections.py
--- a/kinto/views/collections.py
+++ b/kinto/views/collections.py
@@ -34,7 +34,7 @@
@resource.register(name='collection',
- collection_methods=('GET',),
+ collection_methods=('GET', 'POST'),
collection_path='/buckets/{{bucket_id}}/collections',
record_path='/buckets/{{bucket_id}}/collections/{{id}}')
class Collection(resource.ProtectedResource):
| {"golden_diff": "diff --git a/kinto/views/collections.py b/kinto/views/collections.py\n--- a/kinto/views/collections.py\n+++ b/kinto/views/collections.py\n@@ -34,7 +34,7 @@\n \n \n @resource.register(name='collection',\n- collection_methods=('GET',),\n+ collection_methods=('GET', 'POST'),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\n class Collection(resource.ProtectedResource):\n", "issue": "Activate POST on collections\n```\n$ curl -H \"Content-Type: application/json\" \\\n -X POST -d '{\"data\": {\"test\": \"some_data\"}}' --user testuser:abc123 \\\n https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections\n\n{\"errno\":115,\"message\":\"Method not allowed on this endpoint.\",\"code\":405,\"error\":\"Method Not Allowed\"}\n```\n\n", "before_files": [{"content": "import colander\nimport jsonschema\nfrom cliquet import resource\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import NameGenerator\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown='preserve')\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super(JSONSchemaMapping, self).deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n\n try:\n jsonschema.Draft4Validator.check_schema(validated)\n except jsonschema_exceptions.SchemaError as e:\n self.raise_invalid(e.path.pop() + e.message)\n return validated\n\n\nclass CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)\n\n class Options:\n preserve_unknown = True\n\n\[email protected](name='collection',\n collection_methods=('GET',),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\nclass Collection(resource.ProtectedResource):\n mapping = CollectionSchema()\n permissions = ('read', 'write', 'record:create')\n\n def __init__(self, *args, **kwargs):\n super(Collection, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = '/buckets/%s' % bucket_id\n return parent_id\n\n def delete(self):\n result = super(Collection, self).delete()\n\n # Delete records.\n storage = self.model.storage\n parent_id = '%s/collections/%s' % (self.model.parent_id,\n self.record_id)\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/collections.py"}]} | 1,221 | 109 |
gh_patches_debug_5335 | rasdani/github-patches | git_diff | Nitrate__Nitrate-415 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Import XML reports wrong xml_version
Importing XML is not working; it reports a wrong xml_version of 1.1.
I exported a test case to generate the XML and then tried to import that same file, but it does not work.
Thanks in advance.
</issue>
<code>
[start of src/tcms/settings/product.py]
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 # For Kerberos authentication, uncomment out RemoteUserMiddleware.
22 # MIDDLEWARE += (
23 # 'django.contrib.auth.middleware.RemoteUserMiddleware',
24 # )
25
26 # Remote kerberos authentication backends
27 # AUTHENTICATION_BACKENDS = (
28 # 'tcms.auth.backends.ModAuthKerbBackend',
29 # )
30
31 # To enable database routers for read/write separation.
32 # DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']
33
34 # Kerberos realm
35 # KRB5_REALM = 'EXAMPLE.COM'
36
37 # User authentication by Bugzilla settings
38 # BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'
39
40
41 TEMPLATES[0].update({
42 'DIRS': ['/usr/share/nitrate/templates'],
43 })
44
45 # Set the default send mail address
46 EMAIL_HOST = 'smtp.example.com'
47 EMAIL_FROM = '[email protected]'
48
49 # Site-specific messages
50
51 # First run - to determine if it needs to prompt user or not.
52 FIRST_RUN = False
53
54 # You can add a help link on the footer of home page as following format:
55 # ('http://foo.com', 'foo')
56 FOOTER_LINKS = (
57 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
58 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
59 )
60
61 # added for nitrate3.4 compatibility
62 DEFAULT_GROUPS = ['default']
63 TESTOPIA_XML_VERSION = '1.0'
64
65 # admin settings
66 ADMINS = (
67 # ('Your Name', '[email protected]'),
68 )
69
70 DEFAULT_PAGE_SIZE = 100
71
[end of src/tcms/settings/product.py]
[start of docker/released/product.py]
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 AUTHENTICATION_BACKENDS = (
22 'django.contrib.auth.backends.ModelBackend',
23 )
24
25 TEMPLATES[0].update({
26 'DIRS': ['/usr/share/nitrate/templates'],
27 })
28
29 # Set the default send mail address
30 EMAIL_HOST = 'smtp.example.com'
31 EMAIL_FROM = '[email protected]'
32
33 # Site-specific messages
34
35 # First run - to determine if it needs to prompt user or not.
36 FIRST_RUN = False
37
38 # You can add a help link on the footer of home page as following format:
39 # ('http://foo.com', 'foo')
40 FOOTER_LINKS = (
41 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
42 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
43 )
44
45 # added for nitrate3.4 compatibility
46 DEFAULT_GROUPS = ['default']
47 TESTOPIA_XML_VERSION = '1.0'
48
49 ADMINS = (
50 )
51
52 DEFAULT_PAGE_SIZE = 100
53
[end of docker/released/product.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/released/product.py b/docker/released/product.py
--- a/docker/released/product.py
+++ b/docker/released/product.py
@@ -44,7 +44,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
ADMINS = (
)
diff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py
--- a/src/tcms/settings/product.py
+++ b/src/tcms/settings/product.py
@@ -60,7 +60,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
# admin settings
ADMINS = (
| {"golden_diff": "diff --git a/docker/released/product.py b/docker/released/product.py\n--- a/docker/released/product.py\n+++ b/docker/released/product.py\n@@ -44,7 +44,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n ADMINS = (\n )\ndiff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py\n--- a/src/tcms/settings/product.py\n+++ b/src/tcms/settings/product.py\n@@ -60,7 +60,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n # admin settings\n ADMINS = (\n", "issue": "import xml says Worng xml_version\nimport xml in not working says worng xml_version 1.1\r\n\r\ni export the test case and generate xml and try to import same not work\r\n\r\nthanks in advance\n", "before_files": [{"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\n# For Kerberos authentication, uncomment out RemoteUserMiddleware.\n# MIDDLEWARE += (\n# 'django.contrib.auth.middleware.RemoteUserMiddleware',\n# )\n\n# Remote kerberos authentication backends\n# AUTHENTICATION_BACKENDS = (\n# 'tcms.auth.backends.ModAuthKerbBackend',\n# )\n\n# To enable database routers for read/write separation.\n# DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']\n\n# Kerberos realm\n# KRB5_REALM = 'EXAMPLE.COM'\n\n# User authentication by Bugzilla settings\n# BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'\n\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\n# admin settings\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "src/tcms/settings/product.py"}, {"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\nAUTHENTICATION_BACKENDS = (\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or 
not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\nADMINS = (\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "docker/released/product.py"}]} | 1,680 | 168 |
gh_patches_debug_25743 | rasdani/github-patches | git_diff | getsentry__sentry-python-3099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
2.2.1
### Steps to Reproduce
```console
$ docker run --rm -it ubuntu:22.04
root@e264f830878b:/# apt update
root@e264f830878b:/# apt install -y python3-apport virtualenv
root@e264f830878b:/# virtualenv venv
root@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk
…
Successfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1
root@e264f830878b:/# cat > test.py <<EOF
exec(open("venv/bin/activate_this.py").read(), {"__file__": "venv/bin/activate_this.py"})
import sentry_sdk
sentry_sdk.init(dsn="https://[email protected]/1234")
import exceptiongroup
EOF
root@e264f830878b:/# python3 test.py
```
### Expected Result
No error.
### Actual Result
```pytb
Traceback (most recent call last):
File "//test.py", line 4, in <module>
import exceptiongroup
File "/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py", line 20, in <module>
from ._formatting import (
File "/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py", line 394, in <module>
assert sys.excepthook is apport_python_hook.apport_excepthook
AssertionError
Sentry is attempting to send 2 pending events
Waiting up to 2 seconds
Press Ctrl-C to quit
```
The [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is
```python
if getattr(sys.excepthook, "__name__", None) in (
"apport_excepthook",
# on ubuntu 22.10 the hook was renamed to partial_apport_excepthook
"partial_apport_excepthook",
):
…
import apport_python_hook
assert sys.excepthook is apport_python_hook.apport_excepthook
```
which fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of
- #2906
(cc @sentrivana)
This is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it since it’s a popular library; for example, it’s a dependency of IPython.
</issue>
<code>
[start of sentry_sdk/integrations/excepthook.py]
1 import sys
2
3 import sentry_sdk
4 from sentry_sdk.utils import (
5 capture_internal_exceptions,
6 ensure_integration_enabled,
7 event_from_exception,
8 )
9 from sentry_sdk.integrations import Integration
10
11 from sentry_sdk._types import TYPE_CHECKING
12
13 if TYPE_CHECKING:
14 from typing import Callable
15 from typing import Any
16 from typing import Type
17 from typing import Optional
18
19 from types import TracebackType
20
21 Excepthook = Callable[
22 [Type[BaseException], BaseException, Optional[TracebackType]],
23 Any,
24 ]
25
26
27 class ExcepthookIntegration(Integration):
28 identifier = "excepthook"
29
30 always_run = False
31
32 def __init__(self, always_run=False):
33 # type: (bool) -> None
34
35 if not isinstance(always_run, bool):
36 raise ValueError(
37 "Invalid value for always_run: %s (must be type boolean)"
38 % (always_run,)
39 )
40 self.always_run = always_run
41
42 @staticmethod
43 def setup_once():
44 # type: () -> None
45 sys.excepthook = _make_excepthook(sys.excepthook)
46
47
48 def _make_excepthook(old_excepthook):
49 # type: (Excepthook) -> Excepthook
50 @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
51 def sentry_sdk_excepthook(type_, value, traceback):
52 # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
53 integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
54
55 if _should_send(integration.always_run):
56 with capture_internal_exceptions():
57 event, hint = event_from_exception(
58 (type_, value, traceback),
59 client_options=sentry_sdk.get_client().options,
60 mechanism={"type": "excepthook", "handled": False},
61 )
62 sentry_sdk.capture_event(event, hint=hint)
63
64 return old_excepthook(type_, value, traceback)
65
66 return sentry_sdk_excepthook
67
68
69 def _should_send(always_run=False):
70 # type: (bool) -> bool
71 if always_run:
72 return True
73
74 if hasattr(sys, "ps1"):
75 # Disable the excepthook for interactive Python shells, otherwise
76 # every typo gets sent to Sentry.
77 return False
78
79 return True
80
[end of sentry_sdk/integrations/excepthook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py
--- a/sentry_sdk/integrations/excepthook.py
+++ b/sentry_sdk/integrations/excepthook.py
@@ -3,7 +3,6 @@
import sentry_sdk
from sentry_sdk.utils import (
capture_internal_exceptions,
- ensure_integration_enabled,
event_from_exception,
)
from sentry_sdk.integrations import Integration
@@ -47,11 +46,16 @@
def _make_excepthook(old_excepthook):
# type: (Excepthook) -> Excepthook
- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
def sentry_sdk_excepthook(type_, value, traceback):
# type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
+ # Note: If we replace this with ensure_integration_enabled then
+ # we break the exceptiongroup backport;
+ # See: https://github.com/getsentry/sentry-python/issues/3097
+ if integration is None:
+ return old_excepthook(type_, value, traceback)
+
if _should_send(integration.always_run):
with capture_internal_exceptions():
event, hint = event_from_exception(
| {"golden_diff": "diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py\n--- a/sentry_sdk/integrations/excepthook.py\n+++ b/sentry_sdk/integrations/excepthook.py\n@@ -3,7 +3,6 @@\n import sentry_sdk\n from sentry_sdk.utils import (\n capture_internal_exceptions,\n- ensure_integration_enabled,\n event_from_exception,\n )\n from sentry_sdk.integrations import Integration\n@@ -47,11 +46,16 @@\n \n def _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n \n+ # Note: If we replace this with ensure_integration_enabled then\n+ # we break the exceptiongroup backport;\n+ # See: https://github.com/getsentry/sentry-python/issues/3097\n+ if integration is None:\n+ return old_excepthook(type_, value, traceback)\n+\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n", "issue": "`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n2.2.1\r\n\r\n### Steps to Reproduce\r\n\r\n```console\r\n$ docker run --rm -it ubuntu:22.04\r\nroot@e264f830878b:/# apt update\r\nroot@e264f830878b:/# apt install -y python3-apport virtualenv\r\nroot@e264f830878b:/# virtualenv venv\r\nroot@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk\r\n\u2026\r\nSuccessfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1\r\nroot@e264f830878b:/# cat > test.py <<EOF\r\nexec(open(\"venv/bin/activate_this.py\").read(), {\"__file__\": \"venv/bin/activate_this.py\"})\r\nimport sentry_sdk\r\nsentry_sdk.init(dsn=\"https://[email protected]/1234\")\r\nimport exceptiongroup\r\nEOF\r\nroot@e264f830878b:/# python3 test.py\r\n```\r\n\r\n### Expected Result\r\n\r\nNo error.\r\n\r\n### Actual Result\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"//test.py\", line 4, in <module>\r\n import exceptiongroup\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py\", line 20, in <module>\r\n from ._formatting import (\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py\", line 394, in <module>\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\nAssertionError\r\nSentry is attempting to send 2 pending events\r\nWaiting up to 2 seconds\r\nPress Ctrl-C to quit\r\n```\r\n\r\nThe [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is\r\n\r\n```python\r\nif getattr(sys.excepthook, \"__name__\", None) in (\r\n \"apport_excepthook\",\r\n # on ubuntu 22.10 the hook was renamed to partial_apport_excepthook\r\n \"partial_apport_excepthook\",\r\n):\r\n \u2026\r\n import apport_python_hook\r\n\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\n```\r\n\r\nwhich fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of\r\n\r\n- #2906\r\n\r\n(cc @sentrivana)\r\n\r\nThis is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it 
since it\u2019s a popular library; for example, it\u2019s a dependency of IPython.\n", "before_files": [{"content": "import sys\n\nimport sentry_sdk\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n ensure_integration_enabled,\n event_from_exception,\n)\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Callable\n from typing import Any\n from typing import Type\n from typing import Optional\n\n from types import TracebackType\n\n Excepthook = Callable[\n [Type[BaseException], BaseException, Optional[TracebackType]],\n Any,\n ]\n\n\nclass ExcepthookIntegration(Integration):\n identifier = \"excepthook\"\n\n always_run = False\n\n def __init__(self, always_run=False):\n # type: (bool) -> None\n\n if not isinstance(always_run, bool):\n raise ValueError(\n \"Invalid value for always_run: %s (must be type boolean)\"\n % (always_run,)\n )\n self.always_run = always_run\n\n @staticmethod\n def setup_once():\n # type: () -> None\n sys.excepthook = _make_excepthook(sys.excepthook)\n\n\ndef _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n (type_, value, traceback),\n client_options=sentry_sdk.get_client().options,\n mechanism={\"type\": \"excepthook\", \"handled\": False},\n )\n sentry_sdk.capture_event(event, hint=hint)\n\n return old_excepthook(type_, value, traceback)\n\n return sentry_sdk_excepthook\n\n\ndef _should_send(always_run=False):\n # type: (bool) -> bool\n if always_run:\n return True\n\n if hasattr(sys, \"ps1\"):\n # Disable the excepthook for interactive Python shells, otherwise\n # every typo gets sent to Sentry.\n return False\n\n return True\n", "path": "sentry_sdk/integrations/excepthook.py"}]} | 1,958 | 322 |
gh_patches_debug_30470 | rasdani/github-patches | git_diff | vega__altair-2643 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
x-axis tick labels in the Natural Disasters case study need cleanup
See:

</issue>
<code>
[start of altair/examples/natural_disasters.py]
1 """
2 Natural Disasters
3 -----------------
4 This example shows a visualization of global deaths from natural disasters.
5 """
6 # category: case studies
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.disasters.url
11
12 alt.Chart(source).mark_circle(
13 opacity=0.8,
14 stroke='black',
15 strokeWidth=1
16 ).encode(
17 alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
18 alt.Y('Entity:N'),
19 alt.Size('Deaths:Q',
20 scale=alt.Scale(range=[0, 4000]),
21 legend=alt.Legend(title='Annual Global Deaths')
22 ),
23 alt.Color('Entity:N', legend=None)
24 ).properties(
25 width=450,
26 height=320
27 ).transform_filter(
28 alt.datum.Entity != 'All natural disasters'
29 )
30
[end of altair/examples/natural_disasters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py
--- a/altair/examples/natural_disasters.py
+++ b/altair/examples/natural_disasters.py
@@ -1,7 +1,7 @@
"""
-Natural Disasters
------------------
-This example shows a visualization of global deaths from natural disasters.
+Global Deaths from Natural Disasters
+------------------------------------
+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.
"""
# category: case studies
import altair as alt
@@ -9,21 +9,44 @@
source = data.disasters.url
-alt.Chart(source).mark_circle(
+alt.Chart(source).transform_filter(
+ alt.datum.Entity != 'All natural disasters'
+).mark_circle(
opacity=0.8,
stroke='black',
- strokeWidth=1
+ strokeWidth=1,
+ strokeOpacity=0.4
).encode(
- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
- alt.Y('Entity:N'),
- alt.Size('Deaths:Q',
- scale=alt.Scale(range=[0, 4000]),
- legend=alt.Legend(title='Annual Global Deaths')
+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),
+ y=alt.Y(
+ 'Entity:N',
+ sort=alt.EncodingSortField(field="Deaths", op="sum", order='descending'),
+ title=None
+ ),
+ size=alt.Size('Deaths:Q',
+ scale=alt.Scale(range=[0, 2500]),
+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')
),
- alt.Color('Entity:N', legend=None)
+ color=alt.Color('Entity:N', legend=None),
+ tooltip=[
+ "Entity:N",
+ alt.Tooltip("Year:T", format='%Y'),
+ alt.Tooltip("Deaths:Q", format='~s')
+ ],
).properties(
width=450,
- height=320
-).transform_filter(
- alt.datum.Entity != 'All natural disasters'
+ height=320,
+ title=alt.TitleParams(
+ text="Global Deaths from Natural Disasters (1900-2017)",
+ subtitle="The size of the bubble represents the total death count per year, by type of disaster",
+ anchor='start'
+ )
+).configure_axisY(
+ domain=False,
+ ticks=False,
+ offset=10
+).configure_axisX(
+ grid=False,
+).configure_view(
+ stroke=None
)
| {"golden_diff": "diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py\n--- a/altair/examples/natural_disasters.py\n+++ b/altair/examples/natural_disasters.py\n@@ -1,7 +1,7 @@\n \"\"\"\n-Natural Disasters\n------------------\n-This example shows a visualization of global deaths from natural disasters.\n+Global Deaths from Natural Disasters\n+------------------------------------\n+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.\n \"\"\"\n # category: case studies\n import altair as alt\n@@ -9,21 +9,44 @@\n \n source = data.disasters.url\n \n-alt.Chart(source).mark_circle(\n+alt.Chart(source).transform_filter(\n+ alt.datum.Entity != 'All natural disasters'\n+).mark_circle(\n opacity=0.8,\n stroke='black',\n- strokeWidth=1\n+ strokeWidth=1,\n+ strokeOpacity=0.4\n ).encode(\n- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n- alt.Y('Entity:N'),\n- alt.Size('Deaths:Q',\n- scale=alt.Scale(range=[0, 4000]),\n- legend=alt.Legend(title='Annual Global Deaths')\n+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),\n+ y=alt.Y(\n+ 'Entity:N',\n+ sort=alt.EncodingSortField(field=\"Deaths\", op=\"sum\", order='descending'),\n+ title=None\n+ ),\n+ size=alt.Size('Deaths:Q',\n+ scale=alt.Scale(range=[0, 2500]),\n+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')\n ),\n- alt.Color('Entity:N', legend=None)\n+ color=alt.Color('Entity:N', legend=None),\n+ tooltip=[\n+ \"Entity:N\", \n+ alt.Tooltip(\"Year:T\", format='%Y'), \n+ alt.Tooltip(\"Deaths:Q\", format='~s')\n+ ],\n ).properties(\n width=450,\n- height=320\n-).transform_filter(\n- alt.datum.Entity != 'All natural disasters'\n+ height=320,\n+ title=alt.TitleParams(\n+ text=\"Global Deaths from Natural Disasters (1900-2017)\",\n+ subtitle=\"The size of the bubble represents the total death count per year, by type of disaster\",\n+ anchor='start'\n+ )\n+).configure_axisY(\n+ domain=False,\n+ ticks=False,\n+ offset=10\n+).configure_axisX(\n+ grid=False,\n+).configure_view(\n+ stroke=None\n )\n", "issue": "x-axis tick labels in Natural Disasters case study need clean up\nSee:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nNatural Disasters\n-----------------\nThis example shows a visualization of global deaths from natural disasters.\n\"\"\"\n# category: case studies\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.disasters.url\n\nalt.Chart(source).mark_circle(\n opacity=0.8,\n stroke='black',\n strokeWidth=1\n).encode(\n alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n alt.Y('Entity:N'),\n alt.Size('Deaths:Q',\n scale=alt.Scale(range=[0, 4000]),\n legend=alt.Legend(title='Annual Global Deaths')\n ),\n alt.Color('Entity:N', legend=None)\n).properties(\n width=450,\n height=320\n).transform_filter(\n alt.datum.Entity != 'All natural disasters'\n)\n", "path": "altair/examples/natural_disasters.py"}]} | 870 | 610 |
gh_patches_debug_15874 | rasdani/github-patches | git_diff | kubeflow__pipelines-4104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific MinIO service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
See: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make it flexible, e.g. to allow using S3, or to change namespace or bucket names.
I suggest making it configurable; I can open such a PR if we agree it's needed.
flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); this seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
</issue>
<code>
[start of sdk/python/kfp/dsl/_component_bridge.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 from typing import Any, Mapping
17 from ..components.structures import ComponentSpec, ComponentReference
18 from ..components._components import _default_component_name, _resolve_command_line_and_paths
19 from ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table
20 from .. import dsl
21
22
23 def _create_container_op_from_component_and_arguments(
24 component_spec: ComponentSpec,
25 arguments: Mapping[str, Any],
26 component_ref: ComponentReference = None,
27 ) -> 'dsl.ContainerOp':
28 # Check types of the reference arguments and serialize PipelineParams
29 arguments = arguments.copy()
30 for input_name, argument_value in arguments.items():
31 if isinstance(argument_value, dsl.PipelineParam):
32 input_type = component_spec._inputs_dict[input_name].type
33 reference_type = argument_value.param_type
34 dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input "{}" of component "{}": '.format(input_name, component_spec.name))
35
36 arguments[input_name] = str(argument_value)
37
38 resolved_cmd = _resolve_command_line_and_paths(
39 component_spec=component_spec,
40 arguments=arguments,
41 )
42
43 container_spec = component_spec.implementation.container
44
45 task = dsl.ContainerOp(
46 name=component_spec.name or _default_component_name,
47 image=container_spec.image,
48 command=resolved_cmd.command,
49 arguments=resolved_cmd.args,
50 file_outputs=resolved_cmd.output_paths,
51 artifact_argument_paths=[
52 dsl.InputArgumentPath(
53 argument=arguments[input_name],
54 input=input_name,
55 path=path,
56 )
57 for input_name, path in resolved_cmd.input_paths.items()
58 ],
59 )
60
61 component_meta = copy.copy(component_spec)
62 task._set_metadata(component_meta)
63 component_ref_without_spec = copy.copy(component_ref)
64 component_ref_without_spec.spec = None
65 task._component_ref = component_ref_without_spec
66
67 # Previously, ContainerOp had strict requirements for the output names, so we had to
68 # convert all the names before passing them to the ContainerOp constructor.
69 # Outputs with non-pythonic names could not be accessed using their original names.
70 # Now ContainerOp supports any output names, so we're now using the original output names.
71 # However to support legacy pipelines, we're also adding output references with pythonic names.
72 # TODO: Add warning when people use the legacy output names.
73 output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering
74 output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)
75 for output_name in output_names:
76 pythonic_output_name = output_name_to_python[output_name]
77 # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)
78 if pythonic_output_name not in task.outputs and output_name in task.outputs:
79 task.outputs[pythonic_output_name] = task.outputs[output_name]
80
81 if container_spec.env:
82 from kubernetes import client as k8s_client
83 for name, value in container_spec.env.items():
84 task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
85
86 if component_spec.metadata:
87 for key, value in (component_spec.metadata.annotations or {}).items():
88 task.add_pod_annotation(key, value)
89 for key, value in (component_spec.metadata.labels or {}).items():
90 task.add_pod_label(key, value)
91
92 return task
93
[end of sdk/python/kfp/dsl/_component_bridge.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py
--- a/sdk/python/kfp/dsl/_component_bridge.py
+++ b/sdk/python/kfp/dsl/_component_bridge.py
@@ -84,9 +84,13 @@
task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
if component_spec.metadata:
- for key, value in (component_spec.metadata.annotations or {}).items():
+ annotations = component_spec.metadata.annotations or {}
+ for key, value in annotations.items():
task.add_pod_annotation(key, value)
for key, value in (component_spec.metadata.labels or {}).items():
task.add_pod_label(key, value)
+ # Disabling the caching for the volatile components by default
+ if annotations.get('volatile_component', 'false') == 'true':
+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'
return task
| {"golden_diff": "diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py\n--- a/sdk/python/kfp/dsl/_component_bridge.py\n+++ b/sdk/python/kfp/dsl/_component_bridge.py\n@@ -84,9 +84,13 @@\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n \n if component_spec.metadata:\n- for key, value in (component_spec.metadata.annotations or {}).items():\n+ annotations = component_spec.metadata.annotations or {}\n+ for key, value in annotations.items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n+ # Disabling the caching for the volatile components by default\n+ if annotations.get('volatile_component', 'false') == 'true':\n+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'\n \n return task\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Any, Mapping\nfrom ..components.structures import ComponentSpec, ComponentReference\nfrom ..components._components import _default_component_name, _resolve_command_line_and_paths\nfrom ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table\nfrom .. 
import dsl\n\n\ndef _create_container_op_from_component_and_arguments(\n component_spec: ComponentSpec,\n arguments: Mapping[str, Any],\n component_ref: ComponentReference = None,\n) -> 'dsl.ContainerOp':\n # Check types of the reference arguments and serialize PipelineParams\n arguments = arguments.copy()\n for input_name, argument_value in arguments.items():\n if isinstance(argument_value, dsl.PipelineParam):\n input_type = component_spec._inputs_dict[input_name].type\n reference_type = argument_value.param_type\n dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input \"{}\" of component \"{}\": '.format(input_name, component_spec.name))\n\n arguments[input_name] = str(argument_value)\n\n resolved_cmd = _resolve_command_line_and_paths(\n component_spec=component_spec,\n arguments=arguments,\n )\n\n container_spec = component_spec.implementation.container\n\n task = dsl.ContainerOp(\n name=component_spec.name or _default_component_name,\n image=container_spec.image,\n command=resolved_cmd.command,\n arguments=resolved_cmd.args,\n file_outputs=resolved_cmd.output_paths,\n artifact_argument_paths=[\n dsl.InputArgumentPath(\n argument=arguments[input_name],\n input=input_name,\n path=path,\n )\n for input_name, path in resolved_cmd.input_paths.items()\n ],\n )\n\n component_meta = copy.copy(component_spec)\n task._set_metadata(component_meta)\n component_ref_without_spec = copy.copy(component_ref)\n component_ref_without_spec.spec = None\n task._component_ref = component_ref_without_spec\n\n # Previously, ContainerOp had strict requirements for the output names, so we had to\n # convert all the names before passing them to the ContainerOp constructor.\n # Outputs with non-pythonic names could not be accessed using their original names.\n # Now ContainerOp supports any output names, so we're now using the original output names.\n # However to support legacy pipelines, we're also adding output references with pythonic names.\n # TODO: Add warning when people use the legacy output names.\n output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering\n output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)\n for output_name in output_names:\n pythonic_output_name = output_name_to_python[output_name]\n # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)\n if pythonic_output_name not in task.outputs and output_name in task.outputs:\n task.outputs[pythonic_output_name] = task.outputs[output_name]\n\n if container_spec.env:\n from kubernetes import client as k8s_client\n for name, value in container_spec.env.items():\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n\n if component_spec.metadata:\n for key, value in (component_spec.metadata.annotations or {}).items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n\n return task\n", "path": "sdk/python/kfp/dsl/_component_bridge.py"}]} | 1,949 | 216 |
gh_patches_debug_5825 | rasdani/github-patches | git_diff | Kinto__kinto-500 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
POST with If-None-Match: * and provided id in body always returns 412
Detected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205
See https://github.com/mozilla-services/cliquet/issues/673
</issue>
<code>
[start of setup.py]
1 import codecs
2 import os
3 import sys
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8
9 def read_file(filename):
10 """Open a related file and return its content."""
11 with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:
12 content = f.read()
13 return content
14
15 README = read_file('README.rst')
16 CHANGELOG = read_file('CHANGELOG.rst')
17 CONTRIBUTORS = read_file('CONTRIBUTORS.rst')
18
19 REQUIREMENTS = [
20 'waitress',
21 'cliquet>=3,<4',
22 'jsonschema',
23 ]
24
25 POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
26 'cliquet[postgresql]>=3,<4'
27 ]
28
29 MONITORING_REQUIREMENTS = REQUIREMENTS + [
30 'cliquet[monitoring]>=3,<4'
31 ]
32
33 FXA_REQUIREMENTS = REQUIREMENTS + [
34 'cliquet-fxa<2'
35 ]
36
37 ENTRY_POINTS = {
38 'paste.app_factory': [
39 'main = kinto:main',
40 ],
41 'console_scripts': [
42 'kinto = kinto.__main__:main'
43 ],
44 }
45
46 DEPENDENCY_LINKS = [
47 ]
48
49 setup(name='kinto',
50 version='1.12.0.dev0',
51 description='Kinto Web Service - Store, Sync, Share, and Self-Host.',
52 long_description=README + "\n\n" + CHANGELOG + "\n\n" + CONTRIBUTORS,
53 license='Apache License (2.0)',
54 classifiers=[
55 "Programming Language :: Python",
56 "Programming Language :: Python :: 2",
57 "Programming Language :: Python :: 2.7",
58 "Programming Language :: Python :: 3",
59 "Programming Language :: Python :: 3.4",
60 "Programming Language :: Python :: 3.5",
61 "Programming Language :: Python :: Implementation :: CPython",
62 "Programming Language :: Python :: Implementation :: PyPy",
63 "Topic :: Internet :: WWW/HTTP",
64 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
65 "License :: OSI Approved :: Apache Software License"
66 ],
67 keywords="web sync json storage",
68 author='Mozilla Services',
69 author_email='[email protected]',
70 url='https://github.com/Kinto/kinto',
71 packages=find_packages(),
72 include_package_data=True,
73 zip_safe=False,
74 install_requires=REQUIREMENTS,
75 extras_require={
76 'postgresql': POSTGRESQL_REQUIREMENTS,
77 'monitoring': MONITORING_REQUIREMENTS,
78 'fxa': FXA_REQUIREMENTS,
79 ":python_version=='2.7'": ["functools32"],
80 },
81 test_suite="kinto.tests",
82 entry_points=ENTRY_POINTS,
83 dependency_links=DEPENDENCY_LINKS)
84
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,16 +18,16 @@
REQUIREMENTS = [
'waitress',
- 'cliquet>=3,<4',
+ 'cliquet>=3.1,<4',
'jsonschema',
]
POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
- 'cliquet[postgresql]>=3,<4'
+ 'cliquet[postgresql]>=3.1,<4'
]
MONITORING_REQUIREMENTS = REQUIREMENTS + [
- 'cliquet[monitoring]>=3,<4'
+ 'cliquet[monitoring]>=3.1,<4'
]
FXA_REQUIREMENTS = REQUIREMENTS + [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,16 +18,16 @@\n \n REQUIREMENTS = [\n 'waitress',\n- 'cliquet>=3,<4',\n+ 'cliquet>=3.1,<4',\n 'jsonschema',\n ]\n \n POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[postgresql]>=3,<4'\n+ 'cliquet[postgresql]>=3.1,<4'\n ]\n \n MONITORING_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[monitoring]>=3,<4'\n+ 'cliquet[monitoring]>=3.1,<4'\n ]\n \n FXA_REQUIREMENTS = REQUIREMENTS + [\n", "issue": "POST with If-None-Match: * and provided id in body always return 412\nDetected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205\n\nSee https://github.com/mozilla-services/cliquet/issues/673\n\n", "before_files": [{"content": "import codecs\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\nREQUIREMENTS = [\n 'waitress',\n 'cliquet>=3,<4',\n 'jsonschema',\n]\n\nPOSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[postgresql]>=3,<4'\n]\n\nMONITORING_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[monitoring]>=3,<4'\n]\n\nFXA_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet-fxa<2'\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\nDEPENDENCY_LINKS = [\n]\n\nsetup(name='kinto',\n version='1.12.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'postgresql': POSTGRESQL_REQUIREMENTS,\n 'monitoring': MONITORING_REQUIREMENTS,\n 'fxa': FXA_REQUIREMENTS,\n \":python_version=='2.7'\": [\"functools32\"],\n },\n test_suite=\"kinto.tests\",\n entry_points=ENTRY_POINTS,\n dependency_links=DEPENDENCY_LINKS)\n", "path": "setup.py"}]} | 1,365 | 167 |
gh_patches_debug_4727 | rasdani/github-patches | git_diff | kserve__kserve-658 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Help wanted] Add e2e test for canary rollout
/kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
</issue>
<code>
[start of python/kfserving/kfserving/constants/constants.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 # KFServing K8S constants
18 KFSERVING_GROUP = 'serving.kubeflow.org'
19 KFSERVING_KIND = 'InferenceService'
20 KFSERVING_PLURAL = 'inferenceservices'
21 KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
22
23 KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
24
25 # INFERENCESERVICE credentials common constants
26 INFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'
27 INFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'
28 DEFAULT_SECRET_NAME = "kfserving-secret-"
29 DEFAULT_SA_NAME = "kfserving-service-credentials"
30
31 # S3 credentials constants
32 S3_ACCESS_KEY_ID_DEFAULT_NAME = "awsAccessKeyID"
33 S3_SECRET_ACCESS_KEY_DEFAULT_NAME = "awsSecretAccessKey"
34 S3_DEFAULT_CREDS_FILE = '~/.aws/credentials'
35
36 # GCS credentials constants
37 GCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'
38 GCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'
39
40 # Azure credentials constants
41 AZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'
42
[end of python/kfserving/kfserving/constants/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py
--- a/python/kfserving/kfserving/constants/constants.py
+++ b/python/kfserving/kfserving/constants/constants.py
@@ -19,6 +19,7 @@
KFSERVING_KIND = 'InferenceService'
KFSERVING_PLURAL = 'inferenceservices'
KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION
KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
| {"golden_diff": "diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py\n--- a/python/kfserving/kfserving/constants/constants.py\n+++ b/python/kfserving/kfserving/constants/constants.py\n@@ -19,6 +19,7 @@\n KFSERVING_KIND = 'InferenceService'\n KFSERVING_PLURAL = 'inferenceservices'\n KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION\n \n KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n", "issue": "[Help wanted] Add e2e test for canary rollout\n/kind feature\r\n\r\n**Describe the solution you'd like**\r\n[A clear and concise description of what you want to happen.]\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\n# KFServing K8S constants\nKFSERVING_GROUP = 'serving.kubeflow.org'\nKFSERVING_KIND = 'InferenceService'\nKFSERVING_PLURAL = 'inferenceservices'\nKFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n\nKFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n\n# INFERENCESERVICE credentials common constants\nINFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'\nINFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'\nDEFAULT_SECRET_NAME = \"kfserving-secret-\"\nDEFAULT_SA_NAME = \"kfserving-service-credentials\"\n\n# S3 credentials constants\nS3_ACCESS_KEY_ID_DEFAULT_NAME = \"awsAccessKeyID\"\nS3_SECRET_ACCESS_KEY_DEFAULT_NAME = \"awsSecretAccessKey\"\nS3_DEFAULT_CREDS_FILE = '~/.aws/credentials'\n\n# GCS credentials constants\nGCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'\nGCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'\n\n# Azure credentials constants\nAZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'\n", "path": "python/kfserving/kfserving/constants/constants.py"}]} | 1,083 | 155 |
gh_patches_debug_31066 | rasdani/github-patches | git_diff | getsentry__sentry-python-434 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception: raise OSError("handle is closed")
When I initialize sentry_sdk and use concurrent.futures.process.ProcessPoolExecutor, the following exception is raised after Python exits.
```
from concurrent.futures.process import ProcessPoolExecutor
import sentry_sdk
sentry_sdk.init(dsn="")
def test():
...
if __name__ == "__main__":
with ProcessPoolExecutor(max_workers=4) as worker:
worker.submit(test)
```
The exception:
```
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
thread_wakeup.wakeup()
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
self._writer.send_bytes(b"")
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
self._check_closed()
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
```
</issue>
<code>
[start of sentry_sdk/integrations/threading.py]
1 from __future__ import absolute_import
2
3 import sys
4
5 from threading import Thread
6
7 from sentry_sdk import Hub
8 from sentry_sdk._compat import reraise
9 from sentry_sdk.utils import event_from_exception
10 from sentry_sdk.integrations import Integration
11
12 from sentry_sdk._types import MYPY
13
14 if MYPY:
15 from typing import Any
16
17
18 class ThreadingIntegration(Integration):
19 identifier = "threading"
20
21 def __init__(self, propagate_hub=False):
22 self.propagate_hub = propagate_hub
23
24 @staticmethod
25 def setup_once():
26 # type: () -> None
27 old_start = Thread.start
28
29 def sentry_start(self, *a, **kw):
30 hub = Hub.current
31 integration = hub.get_integration(ThreadingIntegration)
32 if integration is not None:
33 if not integration.propagate_hub:
34 hub_ = None
35 else:
36 hub_ = Hub(hub)
37
38 self.run = _wrap_run(hub_, self.run)
39
40 return old_start(self, *a, **kw) # type: ignore
41
42 Thread.start = sentry_start # type: ignore
43
44
45 def _wrap_run(parent_hub, old_run):
46 def run(*a, **kw):
47 hub = parent_hub or Hub.current
48
49 with hub:
50 try:
51 return old_run(*a, **kw)
52 except Exception:
53 reraise(*_capture_exception())
54
55 return run
56
57
58 def _capture_exception():
59 hub = Hub.current
60 exc_info = sys.exc_info()
61
62 if hub.get_integration(ThreadingIntegration) is not None:
63 # If an integration is there, a client has to be there.
64 client = hub.client # type: Any
65
66 event, hint = event_from_exception(
67 exc_info,
68 client_options=client.options,
69 mechanism={"type": "threading", "handled": False},
70 )
71 hub.capture_event(event, hint=hint)
72
73 return exc_info
74
[end of sentry_sdk/integrations/threading.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py
--- a/sentry_sdk/integrations/threading.py
+++ b/sentry_sdk/integrations/threading.py
@@ -1,15 +1,13 @@
from __future__ import absolute_import
import sys
-
-from threading import Thread
+from threading import Thread, current_thread
from sentry_sdk import Hub
from sentry_sdk._compat import reraise
-from sentry_sdk.utils import event_from_exception
-from sentry_sdk.integrations import Integration
-
from sentry_sdk._types import MYPY
+from sentry_sdk.integrations import Integration
+from sentry_sdk.utils import event_from_exception
if MYPY:
from typing import Any
@@ -34,21 +32,26 @@
hub_ = None
else:
hub_ = Hub(hub)
-
- self.run = _wrap_run(hub_, self.run)
+ # Patching instance methods in `start()` creates a reference cycle if
+ # done in a naive way. See
+ # https://github.com/getsentry/sentry-python/pull/434
+ #
+ # In threading module, using current_thread API will access current thread instance
+ # without holding it to avoid a reference cycle in an easier way.
+ self.run = _wrap_run(hub_, self.run.__func__)
return old_start(self, *a, **kw) # type: ignore
Thread.start = sentry_start # type: ignore
-def _wrap_run(parent_hub, old_run):
+def _wrap_run(parent_hub, old_run_func):
def run(*a, **kw):
hub = parent_hub or Hub.current
-
with hub:
try:
- return old_run(*a, **kw)
+ self = current_thread()
+ return old_run_func(self, *a, **kw)
except Exception:
reraise(*_capture_exception())
| {"golden_diff": "diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py\n--- a/sentry_sdk/integrations/threading.py\n+++ b/sentry_sdk/integrations/threading.py\n@@ -1,15 +1,13 @@\n from __future__ import absolute_import\n \n import sys\n-\n-from threading import Thread\n+from threading import Thread, current_thread\n \n from sentry_sdk import Hub\n from sentry_sdk._compat import reraise\n-from sentry_sdk.utils import event_from_exception\n-from sentry_sdk.integrations import Integration\n-\n from sentry_sdk._types import MYPY\n+from sentry_sdk.integrations import Integration\n+from sentry_sdk.utils import event_from_exception\n \n if MYPY:\n from typing import Any\n@@ -34,21 +32,26 @@\n hub_ = None\n else:\n hub_ = Hub(hub)\n-\n- self.run = _wrap_run(hub_, self.run)\n+ # Patching instance methods in `start()` creates a reference cycle if\n+ # done in a naive way. See\n+ # https://github.com/getsentry/sentry-python/pull/434\n+ #\n+ # In threading module, using current_thread API will access current thread instance\n+ # without holding it to avoid a reference cycle in an easier way.\n+ self.run = _wrap_run(hub_, self.run.__func__)\n \n return old_start(self, *a, **kw) # type: ignore\n \n Thread.start = sentry_start # type: ignore\n \n \n-def _wrap_run(parent_hub, old_run):\n+def _wrap_run(parent_hub, old_run_func):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n-\n with hub:\n try:\n- return old_run(*a, **kw)\n+ self = current_thread()\n+ return old_run_func(self, *a, **kw)\n except Exception:\n reraise(*_capture_exception())\n", "issue": "Exception: raise OSError(\"handle is closed\")\nWhen I initialized sentry_sdk and used concurrent.futures.process.ProcessPoolExecutor, the exception will be raised after python exit.\r\n\r\n```\r\nfrom concurrent.futures.process import ProcessPoolExecutor\r\n\r\nimport sentry_sdk\r\n\r\nsentry_sdk.init(dsn=\"\")\r\n\r\n\r\ndef test():\r\n ...\r\n\r\n\r\nif __name__ == \"__main__\":\r\n with ProcessPoolExecutor(max_workers=4) as worker:\r\n worker.submit(test)\r\n```\r\n\r\nThe exception:\r\n```\r\nError in atexit._run_exitfuncs:\r\nTraceback (most recent call last):\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 101, in _python_exit\r\n thread_wakeup.wakeup()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 89, in wakeup\r\n self._writer.send_bytes(b\"\")\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 183, in send_bytes\r\n self._check_closed()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 136, in _check_closed\r\n raise OSError(\"handle is closed\")\r\nOSError: handle is closed\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom threading import Thread\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.utils import event_from_exception\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n\n\nclass ThreadingIntegration(Integration):\n identifier = \"threading\"\n\n def __init__(self, propagate_hub=False):\n self.propagate_hub = propagate_hub\n\n @staticmethod\n def setup_once():\n # type: () -> None\n old_start = Thread.start\n\n def sentry_start(self, *a, **kw):\n hub = Hub.current\n integration = hub.get_integration(ThreadingIntegration)\n if integration is not None:\n if not 
integration.propagate_hub:\n hub_ = None\n else:\n hub_ = Hub(hub)\n\n self.run = _wrap_run(hub_, self.run)\n\n return old_start(self, *a, **kw) # type: ignore\n\n Thread.start = sentry_start # type: ignore\n\n\ndef _wrap_run(parent_hub, old_run):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n\n with hub:\n try:\n return old_run(*a, **kw)\n except Exception:\n reraise(*_capture_exception())\n\n return run\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(ThreadingIntegration) is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"threading\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/threading.py"}]} | 1,407 | 442 |
gh_patches_debug_18060 | rasdani/github-patches | git_diff | scrapy__scrapy-4378 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated
There is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.
It would be a good chance to review the rest of the lines in that file; some others may be worth removing as well.
Related to https://github.com/scrapy/scrapy/issues/4356
</issue>
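For context, this is how any entry left in `DEPRECATED_SETTINGS` surfaces to users. A hypothetical project that still sets the removed key would see a `ScrapyDeprecationWarning`; the settings value below is made up for illustration:
```python
import warnings

from scrapy.settings import Settings
from scrapy.settings.deprecated import check_deprecated_settings

settings = Settings({"SELECTORS_BACKEND": "lxml"})  # hypothetical leftover setting
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_deprecated_settings(settings)

# The recorded warning lists SELECTORS_BACKEND among the deprecated settings.
print([str(w.message) for w in caught])
```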
<code>
[start of scrapy/settings/deprecated.py]
1 import warnings
2 from scrapy.exceptions import ScrapyDeprecationWarning
3
4 DEPRECATED_SETTINGS = [
5 ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),
6 ('RESPONSE_CLASSES', 'no longer supported'),
7 ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),
8 ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),
9 ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
10 ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
11 ('SQLITE_DB', 'no longer supported'),
12 ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
13 ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
14 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
15 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
16 ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
17 ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
18 ]
19
20
21 def check_deprecated_settings(settings):
22 deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]
23 if deprecated:
24 msg = "You are using the following settings which are deprecated or obsolete"
25 msg += " (ask [email protected] for alternatives):"
26 msg = msg + "\n " + "\n ".join("%s: %s" % x for x in deprecated)
27 warnings.warn(msg, ScrapyDeprecationWarning)
28
[end of scrapy/settings/deprecated.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py
--- a/scrapy/settings/deprecated.py
+++ b/scrapy/settings/deprecated.py
@@ -9,10 +9,8 @@
('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
('SQLITE_DB', 'no longer supported'),
- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
]
| {"golden_diff": "diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py\n--- a/scrapy/settings/deprecated.py\n+++ b/scrapy/settings/deprecated.py\n@@ -9,10 +9,8 @@\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n ]\n", "issue": "Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated\nThere is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.\r\n\r\nIt would be a good chance to review the rest of the lines in that file, some others may be worth removing as well.\r\n\r\nRelated to https://github.com/scrapy/scrapy/issues/4356\n", "before_files": [{"content": "import warnings\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\nDEPRECATED_SETTINGS = [\n ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),\n ('RESPONSE_CLASSES', 'no longer supported'),\n ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),\n ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n]\n\n\ndef check_deprecated_settings(settings):\n deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]\n if deprecated:\n msg = \"You are using the following settings which are deprecated or obsolete\"\n msg += \" (ask [email protected] for alternatives):\"\n msg = msg + \"\\n \" + \"\\n \".join(\"%s: %s\" % x for x in deprecated)\n warnings.warn(msg, ScrapyDeprecationWarning)\n", "path": "scrapy/settings/deprecated.py"}]} | 1,016 | 219 |
gh_patches_debug_22000 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2442 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG]: colossalai run failed with unknown reason
### 🐛 Describe the bug
Some users have reported that they encounter a launch failure when using `colossalai run`, but there is no error message to tell exactly what went wrong. A sample output is given below.
```text
Error: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1
```
### Environment
_No response_
</issue>
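The report is essentially asking for the underlying exception to be surfaced instead of swallowed. A small, self-contained sketch of that pattern follows; the function and its parameters are illustrative rather than existing ColossalAI code, and the message format mirrors the merged fix:
```python
import click

def execute(cmds: str, runner, hostname: str, is_local: bool) -> str:
    """Run `cmds` via `runner` and report why it failed, not just that it failed."""
    try:
        runner(cmds)
        return "success"
    except Exception as e:
        # A bare `except:` with a generic message hides the root cause; including
        # the exception text (and whether the host is local) makes it diagnosable.
        click.echo(f"Error: failed to run {cmds} on {hostname}, "
                   f"is localhost: {is_local}, exception: {e}")
        return "failure"
```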
<code>
[start of colossalai/cli/launcher/multinode_runner.py]
1 import fabric
2 from .hostinfo import HostInfo, HostInfoList
3 from multiprocessing import Pipe, Process
4 from multiprocessing import connection as mp_connection
5 import click
6
7
8 def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
9 send_conn: mp_connection.Connection, env: dict) -> None:
10 """
11 Use fabric connection to execute command on local or remote hosts.
12
13 Args:
14 hostinfo (HostInfo): host information
15 workdir (str): the directory to execute the command
16 recv_conn (multiprocessing.connection.Connection): receive messages from the master sender
17 send_conn (multiprocessing.connection.Connection): send messages to the master receiver
18 env (dict): a dictionary for environment variables
19 """
20
21 fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)
22 finish = False
23 env_msg = ' '.join([f'{k}=\"{v}\"' for k, v in env.items()])
24
25 # keep listening until exit
26 while not finish:
27 # receive cmd
28 cmds = recv_conn.recv()
29
30 if cmds == 'exit':
31 # exit from the loop
32 finish = True
33 break
34 else:
35 # execute the commands
36 try:
37 # cd to execute directory
38 with fab_conn.cd(workdir):
39 # propagate the runtime environment
40 with fab_conn.prefix(f"export {env_msg}"):
41 if hostinfo.is_local_host:
42 # execute on the local machine
43 fab_conn.local(cmds, hide=False)
44 else:
45 # execute on the remote machine
46 fab_conn.run(cmds, hide=False)
47 send_conn.send('success')
48 except:
49 click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
50 send_conn.send('failure')
51
52 # shutdown
53 send_conn.send("finish")
54 fab_conn.close()
55
56
57 class MultiNodeRunner:
58 """
59 A runner to execute commands on an array of machines. This runner
60 is inspired by Nezha (https://github.com/zhuzilin/NeZha).
61 """
62
63 def __init__(self):
64 self.processes = {}
65 self.master_send_conns = {}
66 self.master_recv_conns = {}
67
68 def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:
69 """
70 Establish connections to a list of hosts
71
72 Args:
73 host_info_list (HostInfoList): a list of HostInfo objects
74 workdir (str): the directory where command is executed
75 env (dict): environment variables to propagate to hosts
76 """
77 for hostinfo in host_info_list:
78 master_send_conn, worker_recv_conn = Pipe()
79 master_recv_conn, worker_send_conn = Pipe()
80 p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))
81 p.start()
82 self.processes[hostinfo.hostname] = p
83 self.master_recv_conns[hostinfo.hostname] = master_recv_conn
84 self.master_send_conns[hostinfo.hostname] = master_send_conn
85
86 def send(self, hostinfo: HostInfo, cmd: str) -> None:
87 """
88 Send a command to a local/remote host.
89
90 Args:
91 hostinfo (HostInfo): host information
92 cmd (str): the command to execute
93 """
94
95 assert hostinfo.hostname in self.master_send_conns, \
96 f'{hostinfo} is not found in the current connections'
97 conn = self.master_send_conns[hostinfo.hostname]
98 conn.send(cmd)
99
100 def stop_all(self) -> None:
101 """
102 Stop connections to all hosts.
103 """
104
105 for hostname, conn in self.master_send_conns.items():
106 conn.send('exit')
107
108 def recv_from_all(self) -> dict:
109 """
110 Receive messages from all hosts
111
112 Returns:
113 msg_from_node (dict): a dictionry which contains messages from each node
114 """
115
116 msg_from_node = dict()
117 for hostname, conn in self.master_recv_conns.items():
118 msg_from_node[hostname] = conn.recv()
119 return msg_from_node
120
[end of colossalai/cli/launcher/multinode_runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py
--- a/colossalai/cli/launcher/multinode_runner.py
+++ b/colossalai/cli/launcher/multinode_runner.py
@@ -1,8 +1,10 @@
-import fabric
-from .hostinfo import HostInfo, HostInfoList
from multiprocessing import Pipe, Process
from multiprocessing import connection as mp_connection
+
import click
+import fabric
+
+from .hostinfo import HostInfo, HostInfoList
def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
@@ -45,8 +47,10 @@
# execute on the remote machine
fab_conn.run(cmds, hide=False)
send_conn.send('success')
- except:
- click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
+ except Exception as e:
+ click.echo(
+ f"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}"
+ )
send_conn.send('failure')
# shutdown
| {"golden_diff": "diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py\n--- a/colossalai/cli/launcher/multinode_runner.py\n+++ b/colossalai/cli/launcher/multinode_runner.py\n@@ -1,8 +1,10 @@\n-import fabric\n-from .hostinfo import HostInfo, HostInfoList\n from multiprocessing import Pipe, Process\n from multiprocessing import connection as mp_connection\n+\n import click\n+import fabric\n+\n+from .hostinfo import HostInfo, HostInfoList\n \n \n def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n@@ -45,8 +47,10 @@\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n- except:\n- click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n+ except Exception as e:\n+ click.echo(\n+ f\"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}\"\n+ )\n send_conn.send('failure')\n \n # shutdown\n", "issue": "[BUG]: colossalai run failed with unknown reason\n### \ud83d\udc1b Describe the bug\n\nSome users have reported that they encounter launch failure when using `colossalai run`, but there is no error message to tell exactly what went wrong. A sample output is given below.\r\n\r\n```text\r\nError: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1\r\n```\n\n### Environment\n\n_No response_\n", "before_files": [{"content": "import fabric\nfrom .hostinfo import HostInfo, HostInfoList\nfrom multiprocessing import Pipe, Process\nfrom multiprocessing import connection as mp_connection\nimport click\n\n\ndef run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n send_conn: mp_connection.Connection, env: dict) -> None:\n \"\"\"\n Use fabric connection to execute command on local or remote hosts.\n\n Args:\n hostinfo (HostInfo): host information\n workdir (str): the directory to execute the command\n recv_conn (multiprocessing.connection.Connection): receive messages from the master sender\n send_conn (multiprocessing.connection.Connection): send messages to the master receiver\n env (dict): a dictionary for environment variables\n \"\"\"\n\n fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)\n finish = False\n env_msg = ' '.join([f'{k}=\\\"{v}\\\"' for k, v in env.items()])\n\n # keep listening until exit\n while not finish:\n # receive cmd\n cmds = recv_conn.recv()\n\n if cmds == 'exit':\n # exit from the loop\n finish = True\n break\n else:\n # execute the commands\n try:\n # cd to execute directory\n with fab_conn.cd(workdir):\n # propagate the runtime environment\n with fab_conn.prefix(f\"export {env_msg}\"):\n if hostinfo.is_local_host:\n # execute on the local machine\n fab_conn.local(cmds, hide=False)\n else:\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n except:\n click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n send_conn.send('failure')\n\n # shutdown\n send_conn.send(\"finish\")\n fab_conn.close()\n\n\nclass MultiNodeRunner:\n \"\"\"\n A runner to execute commands on an array of machines. 
This runner\n is inspired by Nezha (https://github.com/zhuzilin/NeZha).\n \"\"\"\n\n def __init__(self):\n self.processes = {}\n self.master_send_conns = {}\n self.master_recv_conns = {}\n\n def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:\n \"\"\"\n Establish connections to a list of hosts\n\n Args:\n host_info_list (HostInfoList): a list of HostInfo objects\n workdir (str): the directory where command is executed\n env (dict): environment variables to propagate to hosts\n \"\"\"\n for hostinfo in host_info_list:\n master_send_conn, worker_recv_conn = Pipe()\n master_recv_conn, worker_send_conn = Pipe()\n p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))\n p.start()\n self.processes[hostinfo.hostname] = p\n self.master_recv_conns[hostinfo.hostname] = master_recv_conn\n self.master_send_conns[hostinfo.hostname] = master_send_conn\n\n def send(self, hostinfo: HostInfo, cmd: str) -> None:\n \"\"\"\n Send a command to a local/remote host.\n\n Args:\n hostinfo (HostInfo): host information\n cmd (str): the command to execute\n \"\"\"\n\n assert hostinfo.hostname in self.master_send_conns, \\\n f'{hostinfo} is not found in the current connections'\n conn = self.master_send_conns[hostinfo.hostname]\n conn.send(cmd)\n\n def stop_all(self) -> None:\n \"\"\"\n Stop connections to all hosts.\n \"\"\"\n\n for hostname, conn in self.master_send_conns.items():\n conn.send('exit')\n\n def recv_from_all(self) -> dict:\n \"\"\"\n Receive messages from all hosts\n\n Returns:\n msg_from_node (dict): a dictionry which contains messages from each node\n \"\"\"\n\n msg_from_node = dict()\n for hostname, conn in self.master_recv_conns.items():\n msg_from_node[hostname] = conn.recv()\n return msg_from_node\n", "path": "colossalai/cli/launcher/multinode_runner.py"}]} | 1,847 | 269 |
gh_patches_debug_32894 | rasdani/github-patches | git_diff | facebookresearch__hydra-609 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature Request] Allow @hydra.main() to take a config object and pass it through
# 🚀 Feature Request
Allow @hydra.main() to take a config and pass it through
</issue>
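A usage sketch of what the request implies: calling the decorated task function directly with an already-built config instead of letting Hydra compose one from the command line. The app name, config path, and config values below are invented for illustration:
```python
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="conf", config_name="config")  # placeholders
def my_app(cfg: DictConfig) -> None:
    print(cfg)

if __name__ == "__main__":
    # Today the decorated function takes no arguments and Hydra always composes
    # the config itself; the request is to also allow passing one through:
    cfg = OmegaConf.create({"db": {"driver": "mysql"}})
    my_app(cfg)  # desired: cfg is handed straight to my_app's body
```
The accepted change implements exactly this by giving the generated `decorated_main` an optional `cfg_passthrough` argument, as the diff below shows.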
<code>
[start of hydra/main.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import functools
3 from typing import Callable, Optional
4
5 from ._internal.utils import get_args_parser, run_hydra
6 from .types import TaskFunction
7
8
9 def main(
10 config_path: Optional[str] = None,
11 config_name: Optional[str] = None,
12 strict: Optional[bool] = None,
13 ) -> Callable[[TaskFunction], Callable[[], None]]:
14 """
15 :param config_path: the config path, a directory relative to the declaring python file.
16 :param config_name: the name of the config (usually the file name without the .yaml extension)
17 :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an
18 existing key or if the code is accessing a non existent key
19 """
20
21 def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
22 @functools.wraps(task_function)
23 def decorated_main() -> None:
24 run_hydra(
25 args_parser=get_args_parser(),
26 task_function=task_function,
27 config_path=config_path,
28 config_name=config_name,
29 strict=strict,
30 )
31
32 return decorated_main
33
34 return main_decorator
35
[end of hydra/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/main.py b/hydra/main.py
--- a/hydra/main.py
+++ b/hydra/main.py
@@ -1,6 +1,8 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import functools
-from typing import Callable, Optional
+from typing import Any, Callable, Optional
+
+from omegaconf import DictConfig
from ._internal.utils import get_args_parser, run_hydra
from .types import TaskFunction
@@ -10,7 +12,7 @@
config_path: Optional[str] = None,
config_name: Optional[str] = None,
strict: Optional[bool] = None,
-) -> Callable[[TaskFunction], Callable[[], None]]:
+) -> Callable[[TaskFunction], Any]:
"""
:param config_path: the config path, a directory relative to the declaring python file.
:param config_name: the name of the config (usually the file name without the .yaml extension)
@@ -20,14 +22,20 @@
def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
@functools.wraps(task_function)
- def decorated_main() -> None:
- run_hydra(
- args_parser=get_args_parser(),
- task_function=task_function,
- config_path=config_path,
- config_name=config_name,
- strict=strict,
- )
+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:
+ if cfg_passthrough is not None:
+ return task_function(cfg_passthrough)
+ else:
+ args = get_args_parser()
+ # no return value from run_hydra() as it may sometime actually run the task_function
+ # multiple times (--multirun)
+ run_hydra(
+ args_parser=args,
+ task_function=task_function,
+ config_path=config_path,
+ config_name=config_name,
+ strict=strict,
+ )
return decorated_main
| {"golden_diff": "diff --git a/hydra/main.py b/hydra/main.py\n--- a/hydra/main.py\n+++ b/hydra/main.py\n@@ -1,6 +1,8 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n import functools\n-from typing import Callable, Optional\n+from typing import Any, Callable, Optional\n+\n+from omegaconf import DictConfig\n \n from ._internal.utils import get_args_parser, run_hydra\n from .types import TaskFunction\n@@ -10,7 +12,7 @@\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n-) -> Callable[[TaskFunction], Callable[[], None]]:\n+) -> Callable[[TaskFunction], Any]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n@@ -20,14 +22,20 @@\n \n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n- def decorated_main() -> None:\n- run_hydra(\n- args_parser=get_args_parser(),\n- task_function=task_function,\n- config_path=config_path,\n- config_name=config_name,\n- strict=strict,\n- )\n+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:\n+ if cfg_passthrough is not None:\n+ return task_function(cfg_passthrough)\n+ else:\n+ args = get_args_parser()\n+ # no return value from run_hydra() as it may sometime actually run the task_function\n+ # multiple times (--multirun)\n+ run_hydra(\n+ args_parser=args,\n+ task_function=task_function,\n+ config_path=config_path,\n+ config_name=config_name,\n+ strict=strict,\n+ )\n \n return decorated_main\n", "issue": "[Feature Request] Allow @hydra.main() to take a config object and pass it through\n# \ud83d\ude80 Feature Request\r\n\r\nAllow @hydra.main() to take a config and pass it through\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport functools\nfrom typing import Callable, Optional\n\nfrom ._internal.utils import get_args_parser, run_hydra\nfrom .types import TaskFunction\n\n\ndef main(\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n) -> Callable[[TaskFunction], Callable[[], None]]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an\n existing key or if the code is accessing a non existent key\n \"\"\"\n\n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n def decorated_main() -> None:\n run_hydra(\n args_parser=get_args_parser(),\n task_function=task_function,\n config_path=config_path,\n config_name=config_name,\n strict=strict,\n )\n\n return decorated_main\n\n return main_decorator\n", "path": "hydra/main.py"}]} | 903 | 445 |
gh_patches_debug_354 | rasdani/github-patches | git_diff | sanic-org__sanic-1343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pin versions for LTS release
I think that (some) versions should be allowed to float, but when we are ready for an LTS release, the versions should be pinned at that time.
@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins
</issue>
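For illustration only: pinning for an LTS cut would mean replacing the open-ended ranges in `setup.py` with exact versions chosen at release time. The versions below are placeholders, not recommendations:
```python
# Hypothetical LTS-style pins (placeholder versions, keeping the same markers):
requirements = [
    'httptools==0.0.10',
    'uvloop==0.11.2; sys_platform != "win32" and implementation_name == "cpython"',
    'ujson==1.35; sys_platform != "win32" and implementation_name == "cpython"',
    'aiofiles==0.3.2',
    'websockets==5.0.1',
    'multidict==4.4.2',
]
```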
<code>
[start of setup.py]
1 """
2 Sanic
3 """
4 import codecs
5 import os
6 import re
7 from distutils.errors import DistutilsPlatformError
8 from distutils.util import strtobool
9
10 from setuptools import setup
11
12
13 def open_local(paths, mode='r', encoding='utf8'):
14 path = os.path.join(
15 os.path.abspath(os.path.dirname(__file__)),
16 *paths
17 )
18
19 return codecs.open(path, mode, encoding)
20
21
22 with open_local(['sanic', '__init__.py'], encoding='latin1') as fp:
23 try:
24 version = re.findall(r"^__version__ = '([^']+)'\r?$",
25 fp.read(), re.M)[0]
26 except IndexError:
27 raise RuntimeError('Unable to determine version.')
28
29
30 with open_local(['README.rst']) as rm:
31 long_description = rm.read()
32
33 setup_kwargs = {
34 'name': 'sanic',
35 'version': version,
36 'url': 'http://github.com/channelcat/sanic/',
37 'license': 'MIT',
38 'author': 'Channel Cat',
39 'author_email': '[email protected]',
40 'description': (
41 'A microframework based on uvloop, httptools, and learnings of flask'),
42 'long_description': long_description,
43 'packages': ['sanic'],
44 'platforms': 'any',
45 'classifiers': [
46 'Development Status :: 4 - Beta',
47 'Environment :: Web Environment',
48 'License :: OSI Approved :: MIT License',
49 'Programming Language :: Python :: 3.5',
50 'Programming Language :: Python :: 3.6',
51 ],
52 }
53
54 env_dependency = '; sys_platform != "win32" and implementation_name == "cpython"'
55 ujson = 'ujson>=1.35' + env_dependency
56 uvloop = 'uvloop>=0.5.3' + env_dependency
57
58 requirements = [
59 'httptools>=0.0.9',
60 uvloop,
61 ujson,
62 'aiofiles>=0.3.0',
63 'websockets>=5.0,<6.0',
64 'multidict>=4.0,<5.0',
65 ]
66 if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
67 print("Installing without uJSON")
68 requirements.remove(ujson)
69
70 # 'nt' means windows OS
71 if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
72 print("Installing without uvLoop")
73 requirements.remove(uvloop)
74
75 setup_kwargs['install_requires'] = requirements
76 setup(**setup_kwargs)
77
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@
uvloop = 'uvloop>=0.5.3' + env_dependency
requirements = [
- 'httptools>=0.0.9',
+ 'httptools>=0.0.10',
uvloop,
ujson,
'aiofiles>=0.3.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,7 +56,7 @@\n uvloop = 'uvloop>=0.5.3' + env_dependency\n \n requirements = [\n- 'httptools>=0.0.9',\n+ 'httptools>=0.0.10',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n", "issue": "Pin versions for LTS release\nI think that versions of (some) should be allowed to float but when we are ready for an LTS release, the versions should be pinned at that time.\r\n\r\n@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins \n", "before_files": [{"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nfrom distutils.errors import DistutilsPlatformError\nfrom distutils.util import strtobool\n\nfrom setuptools import setup\n\n\ndef open_local(paths, mode='r', encoding='utf8'):\n path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)),\n *paths\n )\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local(['sanic', '__init__.py'], encoding='latin1') as fp:\n try:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n fp.read(), re.M)[0]\n except IndexError:\n raise RuntimeError('Unable to determine version.')\n\n\nwith open_local(['README.rst']) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n 'name': 'sanic',\n 'version': version,\n 'url': 'http://github.com/channelcat/sanic/',\n 'license': 'MIT',\n 'author': 'Channel Cat',\n 'author_email': '[email protected]',\n 'description': (\n 'A microframework based on uvloop, httptools, and learnings of flask'),\n 'long_description': long_description,\n 'packages': ['sanic'],\n 'platforms': 'any',\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n}\n\nenv_dependency = '; sys_platform != \"win32\" and implementation_name == \"cpython\"'\nujson = 'ujson>=1.35' + env_dependency\nuvloop = 'uvloop>=0.5.3' + env_dependency\n\nrequirements = [\n 'httptools>=0.0.9',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n 'websockets>=5.0,<6.0',\n 'multidict>=4.0,<5.0',\n]\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n\nsetup_kwargs['install_requires'] = requirements\nsetup(**setup_kwargs)\n", "path": "setup.py"}]} | 1,293 | 100 |
gh_patches_debug_32361 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3420 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of applications/Chat/coati/models/gpt/gpt_actor.py]
1 from typing import Optional
2
3 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
4 from transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel
5
6 from ..base import Actor
7
8
9 class GPTActor(Actor):
10 """
11 GPT Actor model.
12
13 Args:
14 pretrained (str): Pretrained model name or path.
15 config (GPT2Config): Model config.
16 checkpoint (bool): Enable gradient checkpointing.
17 lora_rank (int): Rank of the LoRa layer.
18 lora_train_bias (str): Bias training strategy for the LoRa layer.
19 """
20
21 def __init__(self,
22 pretrained: Optional[str] = None,
23 config: Optional[GPT2Config] = None,
24 checkpoint: bool = False,
25 lora_rank: int = 0,
26 lora_train_bias: str = 'none') -> None:
27 if pretrained is not None:
28 model = GPT2LMHeadModel.from_pretrained(pretrained)
29 elif config is not None:
30 model = GPT2LMHeadModel(config)
31 else:
32 model = GPT2LMHeadModel(GPT2Config())
33 if checkpoint:
34 model.gradient_checkpointing_enable()
35 super().__init__(model, lora_rank, lora_train_bias)
36
[end of applications/Chat/coati/models/gpt/gpt_actor.py]
[start of applications/Chat/coati/models/gpt/gpt_critic.py]
1 from typing import Optional
2
3 import torch.nn as nn
4 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
5 from transformers.models.gpt2.modeling_gpt2 import GPT2Model
6
7 from ..base import Critic
8
9
10 class GPTCritic(Critic):
11 """
12 GPT Critic model.
13
14 Args:
15 pretrained (str): Pretrained model name or path.
16 config (GPT2Config): Model config.
17 checkpoint (bool): Enable gradient checkpointing.
18 lora_rank (int): Rank of the LO-RA decomposition.
19 lora_train_bias (str): LoRA bias training mode.
20 """
21
22 def __init__(self,
23 pretrained: Optional[str] = None,
24 config: Optional[GPT2Config] = None,
25 checkpoint: bool = False,
26 lora_rank: int = 0,
27 lora_train_bias: str = 'none') -> None:
28 if pretrained is not None:
29 model = GPT2Model.from_pretrained(pretrained)
30 elif config is not None:
31 model = GPT2Model(config)
32 else:
33 model = GPT2Model(GPT2Config())
34 if checkpoint:
35 model.gradient_checkpointing_enable()
36 value_head = nn.Linear(model.config.n_embd, 1)
37 super().__init__(model, value_head, lora_rank, lora_train_bias)
38
[end of applications/Chat/coati/models/gpt/gpt_critic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py
--- a/applications/Chat/coati/models/gpt/gpt_actor.py
+++ b/applications/Chat/coati/models/gpt/gpt_actor.py
@@ -23,7 +23,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2LMHeadModel.from_pretrained(pretrained)
elif config is not None:
@@ -32,4 +33,4 @@
model = GPT2LMHeadModel(GPT2Config())
if checkpoint:
model.gradient_checkpointing_enable()
- super().__init__(model, lora_rank, lora_train_bias)
+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)
diff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py
--- a/applications/Chat/coati/models/gpt/gpt_critic.py
+++ b/applications/Chat/coati/models/gpt/gpt_critic.py
@@ -24,7 +24,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2Model.from_pretrained(pretrained)
elif config is not None:
@@ -34,4 +35,4 @@
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.n_embd, 1)
- super().__init__(model, value_head, lora_rank, lora_train_bias)
+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
| {"golden_diff": "diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py\n--- a/applications/Chat/coati/models/gpt/gpt_actor.py\n+++ b/applications/Chat/coati/models/gpt/gpt_actor.py\n@@ -23,7 +23,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n@@ -32,4 +33,4 @@\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n- super().__init__(model, lora_rank, lora_train_bias)\n+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)\ndiff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py\n--- a/applications/Chat/coati/models/gpt/gpt_critic.py\n+++ b/applications/Chat/coati/models/gpt/gpt_critic.py\n@@ -24,7 +24,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n@@ -34,4 +35,4 @@\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n- super().__init__(model, value_head, lora_rank, lora_train_bias)\n+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LoRa layer.\n lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_actor.py"}, {"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LO-RA decomposition.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n 
if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_critic.py"}]} | 1,312 | 496 |
gh_patches_debug_25257 | rasdani/github-patches | git_diff | ESMCI__cime-4442 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cs.status reset to force rebuild
I would like an additional option to cs.status or perhaps create_test that
would reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that
all tests are rebuilt before being restarted.
</issue>
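A sketch of what such a reset could look like, built on the same `TestStatus` API used by `cs_status()` below. The standalone helper and its name are illustrative; the phase and status constants, and the `with` block around `set_status`, match the change that was eventually merged:
```python
import os

from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS

def force_rebuild(test_paths):
    """Mark every test as PEND for SHAREDLIB_BUILD so the next run rebuilds it."""
    for test_path in test_paths:
        ts = TestStatus(test_dir=os.path.dirname(test_path))
        # set_status is called inside the TestStatus context, as in the merged fix
        with ts:
            ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)
```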
<code>
[start of CIME/cs_status.py]
1 """
2 Implementation of the cs.status script, which prints the status of all
3 of the tests in one or more test suites
4 """
5
6 from __future__ import print_function
7 from CIME.XML.standard_module_setup import *
8 from CIME.XML.expected_fails_file import ExpectedFailsFile
9 from CIME.test_status import TestStatus
10 import os
11 import sys
12 from collections import defaultdict
13
14
15 def cs_status(
16 test_paths,
17 summary=False,
18 fails_only=False,
19 count_fails_phase_list=None,
20 check_throughput=False,
21 check_memory=False,
22 expected_fails_filepath=None,
23 out=sys.stdout,
24 ):
25 """Print the test statuses of all tests in test_paths. The default
26 is to print to stdout, but this can be overridden with the 'out'
27 argument.
28
29 If summary is True, then only the overall status of each test is printed
30
31 If fails_only is True, then only test failures are printed (this
32 includes PENDs as well as FAILs).
33
34 If count_fails_phase_list is provided, it should be a list of phases
35 (from the phases given by test_status.ALL_PHASES). For each phase in
36 this list: do not give line-by-line output; instead, just report the
37 total number of tests that have not PASSed this phase (this includes
38 PENDs and FAILs). (This is typically used with the fails_only
39 option, but it can also be used without that option.)
40
41 If expected_fails_filepath is provided, it should be a string giving
42 the full path to a file listing expected failures for this test
43 suite. Expected failures are then labeled as such in the output.
44 """
45 expect(not (summary and fails_only), "Cannot have both summary and fails_only")
46 expect(
47 not (summary and count_fails_phase_list),
48 "Cannot have both summary and count_fails_phase_list",
49 )
50 if count_fails_phase_list is None:
51 count_fails_phase_list = []
52 non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)
53 xfails = _get_xfails(expected_fails_filepath)
54 test_id_output = defaultdict(str)
55 test_id_counts = defaultdict(int)
56 for test_path in test_paths:
57 test_dir = os.path.dirname(test_path)
58 ts = TestStatus(test_dir=test_dir)
59 test_id = os.path.basename(test_dir).split(".")[-1]
60 if summary:
61 output = _overall_output(
62 ts, " {status} {test_name}\n", check_throughput, check_memory
63 )
64 else:
65 if fails_only:
66 output = ""
67 else:
68 output = _overall_output(
69 ts,
70 " {test_name} (Overall: {status}) details:\n",
71 check_throughput,
72 check_memory,
73 )
74 output += ts.phase_statuses_dump(
75 prefix=" ",
76 skip_passes=fails_only,
77 skip_phase_list=count_fails_phase_list,
78 xfails=xfails.get(ts.get_name()),
79 )
80 if count_fails_phase_list:
81 ts.increment_non_pass_counts(non_pass_counts)
82
83 test_id_output[test_id] += output
84 test_id_counts[test_id] += 1
85
86 for test_id in sorted(test_id_output):
87 count = test_id_counts[test_id]
88 print(
89 "{}: {} test{}".format(test_id, count, "s" if count > 1 else ""), file=out
90 )
91 print(test_id_output[test_id], file=out)
92 print(" ", file=out)
93
94 if count_fails_phase_list:
95 print(72 * "=", file=out)
96 print("Non-PASS results for select phases:", file=out)
97 for phase in count_fails_phase_list:
98 print("{} non-passes: {}".format(phase, non_pass_counts[phase]), file=out)
99
100
101 def _get_xfails(expected_fails_filepath):
102 """Returns a dictionary of ExpectedFails objects, where the keys are test names
103
104 expected_fails_filepath should be either a string giving the path to
105 the file containing expected failures, or None. If None, then this
106 returns an empty dictionary (as if expected_fails_filepath were
107 pointing to a file with no expected failures listed).
108 """
109 if expected_fails_filepath is not None:
110 expected_fails_file = ExpectedFailsFile(expected_fails_filepath)
111 xfails = expected_fails_file.get_expected_fails()
112 else:
113 xfails = {}
114 return xfails
115
116
117 def _overall_output(ts, format_str, check_throughput, check_memory):
118 """Returns a string giving the overall test status
119
120 Args:
121 ts: TestStatus object
122 format_str (string): string giving the format of the output; must
123 contain place-holders for status and test_name
124 """
125 test_name = ts.get_name()
126 status = ts.get_overall_test_status(
127 check_throughput=check_throughput,
128 check_memory=check_memory,
129 )[0]
130 return format_str.format(status=status, test_name=test_name)
131
[end of CIME/cs_status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CIME/cs_status.py b/CIME/cs_status.py
--- a/CIME/cs_status.py
+++ b/CIME/cs_status.py
@@ -6,7 +6,7 @@
from __future__ import print_function
from CIME.XML.standard_module_setup import *
from CIME.XML.expected_fails_file import ExpectedFailsFile
-from CIME.test_status import TestStatus
+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS
import os
import sys
from collections import defaultdict
@@ -20,6 +20,7 @@
check_throughput=False,
check_memory=False,
expected_fails_filepath=None,
+ force_rebuild=False,
out=sys.stdout,
):
"""Print the test statuses of all tests in test_paths. The default
@@ -56,6 +57,11 @@
for test_path in test_paths:
test_dir = os.path.dirname(test_path)
ts = TestStatus(test_dir=test_dir)
+
+ if force_rebuild:
+ with ts:
+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)
+
test_id = os.path.basename(test_dir).split(".")[-1]
if summary:
output = _overall_output(
| {"golden_diff": "diff --git a/CIME/cs_status.py b/CIME/cs_status.py\n--- a/CIME/cs_status.py\n+++ b/CIME/cs_status.py\n@@ -6,7 +6,7 @@\n from __future__ import print_function\n from CIME.XML.standard_module_setup import *\n from CIME.XML.expected_fails_file import ExpectedFailsFile\n-from CIME.test_status import TestStatus\n+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS\n import os\n import sys\n from collections import defaultdict\n@@ -20,6 +20,7 @@\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n+ force_rebuild=False,\n out=sys.stdout,\n ):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n@@ -56,6 +57,11 @@\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n+\n+ if force_rebuild:\n+ with ts:\n+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)\n+\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n", "issue": "cs.status reset to force rebuild\nI would like an additional option to cs.status or perhaps create_test that\r\nwould reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that \r\nall tests are rebuilt before being restarted. \n", "before_files": [{"content": "\"\"\"\nImplementation of the cs.status script, which prints the status of all\nof the tests in one or more test suites\n\"\"\"\n\nfrom __future__ import print_function\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.expected_fails_file import ExpectedFailsFile\nfrom CIME.test_status import TestStatus\nimport os\nimport sys\nfrom collections import defaultdict\n\n\ndef cs_status(\n test_paths,\n summary=False,\n fails_only=False,\n count_fails_phase_list=None,\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n out=sys.stdout,\n):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n is to print to stdout, but this can be overridden with the 'out'\n argument.\n\n If summary is True, then only the overall status of each test is printed\n\n If fails_only is True, then only test failures are printed (this\n includes PENDs as well as FAILs).\n\n If count_fails_phase_list is provided, it should be a list of phases\n (from the phases given by test_status.ALL_PHASES). For each phase in\n this list: do not give line-by-line output; instead, just report the\n total number of tests that have not PASSed this phase (this includes\n PENDs and FAILs). (This is typically used with the fails_only\n option, but it can also be used without that option.)\n\n If expected_fails_filepath is provided, it should be a string giving\n the full path to a file listing expected failures for this test\n suite. 
Expected failures are then labeled as such in the output.\n \"\"\"\n expect(not (summary and fails_only), \"Cannot have both summary and fails_only\")\n expect(\n not (summary and count_fails_phase_list),\n \"Cannot have both summary and count_fails_phase_list\",\n )\n if count_fails_phase_list is None:\n count_fails_phase_list = []\n non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)\n xfails = _get_xfails(expected_fails_filepath)\n test_id_output = defaultdict(str)\n test_id_counts = defaultdict(int)\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n ts, \" {status} {test_name}\\n\", check_throughput, check_memory\n )\n else:\n if fails_only:\n output = \"\"\n else:\n output = _overall_output(\n ts,\n \" {test_name} (Overall: {status}) details:\\n\",\n check_throughput,\n check_memory,\n )\n output += ts.phase_statuses_dump(\n prefix=\" \",\n skip_passes=fails_only,\n skip_phase_list=count_fails_phase_list,\n xfails=xfails.get(ts.get_name()),\n )\n if count_fails_phase_list:\n ts.increment_non_pass_counts(non_pass_counts)\n\n test_id_output[test_id] += output\n test_id_counts[test_id] += 1\n\n for test_id in sorted(test_id_output):\n count = test_id_counts[test_id]\n print(\n \"{}: {} test{}\".format(test_id, count, \"s\" if count > 1 else \"\"), file=out\n )\n print(test_id_output[test_id], file=out)\n print(\" \", file=out)\n\n if count_fails_phase_list:\n print(72 * \"=\", file=out)\n print(\"Non-PASS results for select phases:\", file=out)\n for phase in count_fails_phase_list:\n print(\"{} non-passes: {}\".format(phase, non_pass_counts[phase]), file=out)\n\n\ndef _get_xfails(expected_fails_filepath):\n \"\"\"Returns a dictionary of ExpectedFails objects, where the keys are test names\n\n expected_fails_filepath should be either a string giving the path to\n the file containing expected failures, or None. If None, then this\n returns an empty dictionary (as if expected_fails_filepath were\n pointing to a file with no expected failures listed).\n \"\"\"\n if expected_fails_filepath is not None:\n expected_fails_file = ExpectedFailsFile(expected_fails_filepath)\n xfails = expected_fails_file.get_expected_fails()\n else:\n xfails = {}\n return xfails\n\n\ndef _overall_output(ts, format_str, check_throughput, check_memory):\n \"\"\"Returns a string giving the overall test status\n\n Args:\n ts: TestStatus object\n format_str (string): string giving the format of the output; must\n contain place-holders for status and test_name\n \"\"\"\n test_name = ts.get_name()\n status = ts.get_overall_test_status(\n check_throughput=check_throughput,\n check_memory=check_memory,\n )[0]\n return format_str.format(status=status, test_name=test_name)\n", "path": "CIME/cs_status.py"}]} | 1,948 | 273 |
gh_patches_debug_13744 | rasdani/github-patches | git_diff | saleor__saleor-1471 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor displaying success messages in the dashboard
The code responsible for displaying success messages in the dashboard lives in [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS which isn't very elegant. Instead, there should be a function written entirely in JS that would take care of rendering those messages with data passed from backend through `data-*` attributes.
</issue>
<code>
[start of saleor/dashboard/templatetags/utils.py]
1 from urllib.parse import urlencode
2
3 from django import forms
4 from django.template import Library
5 from django_filters.fields import RangeField
6 from versatileimagefield.widgets import VersatileImagePPOIClickWidget
7
8 from ...product.utils import get_margin_for_variant, get_variant_costs_data
9 from ..product.widgets import ImagePreviewWidget
10 from .chips import (
11 handle_default, handle_multiple_choice, handle_multiple_model_choice,
12 handle_nullboolean, handle_range, handle_single_choice,
13 handle_single_model_choice)
14
15 register = Library()
16
17
18 @register.simple_tag(takes_context=True)
19 def construct_get_query(context, **params):
20 request_get = context['request'].GET.dict()
21 if not (request_get or params):
22 return ''
23 all_params = {}
24 all_params.update(request_get)
25 all_params.update(params)
26 all_params.update(context.get('default_pagination_params', {}))
27 return '?' + urlencode(all_params)
28
29
30 @register.filter
31 def is_versatile_image_ppoi_click_widget(field):
32 '''
33 This filter checks if image field widget is used when user wants to edit
34 existing product image.
35 '''
36 return isinstance(field.field.widget, VersatileImagePPOIClickWidget)
37
38
39 @register.filter
40 def is_image_preview_widget(field):
41 '''
42 This filter checks if image field widget is used when user wants to add new
43 product image.
44 '''
45 return isinstance(field.field.widget, ImagePreviewWidget)
46
47
48 @register.inclusion_tag('dashboard/product/product_variant/_image_select.html')
49 def render_image_choice(field):
50 choices = zip(field, field.field.queryset)
51 return {'field': field, 'choices_with_images': choices}
52
53
54 @register.inclusion_tag('dashboard/includes/_pagination.html',
55 takes_context=True)
56 def paginate(context, page_obj, num_of_pages=5):
57 context['page_obj'] = page_obj
58 context['n_forward'] = num_of_pages + 1
59 context['n_backward'] = -num_of_pages - 1
60 context['next_section'] = (2 * num_of_pages) + 1
61 context['previous_section'] = (-2 * num_of_pages) - 1
62 return context
63
64
65 @register.simple_tag
66 def margin_for_variant(stock):
67 return get_margin_for_variant(stock)
68
69
70 @register.simple_tag
71 def margins_for_variant(variant):
72 margins = get_variant_costs_data(variant)['margins']
73 return margins
74
75
76 @register.inclusion_tag('dashboard/includes/_filters.html', takes_context=True)
77 def add_filters(context, filter_set, sort_by_filter_name='sort_by'):
78 chips = []
79 request_get = context['request'].GET.copy()
80 for filter_name in filter_set.form.cleaned_data.keys():
81 if filter_name == sort_by_filter_name:
82 # Skip processing of sort_by filter, as it's rendered differently
83 continue
84
85 field = filter_set.form[filter_name]
86 if field.value() not in ['', None]:
87 if isinstance(field.field, forms.NullBooleanField):
88 items = handle_nullboolean(field, request_get)
89 elif isinstance(field.field, forms.ModelMultipleChoiceField):
90 items = handle_multiple_model_choice(field, request_get)
91 elif isinstance(field.field, forms.MultipleChoiceField):
92 items = handle_multiple_choice(field, request_get)
93 elif isinstance(field.field, forms.ModelChoiceField):
94 items = handle_single_model_choice(field, request_get)
95 elif isinstance(field.field, forms.ChoiceField):
96 items = handle_single_choice(field, request_get)
97 elif isinstance(field.field, RangeField):
98 items = handle_range(field, request_get)
99 else:
100 items = handle_default(field, request_get)
101 chips.extend(items)
102 return {
103 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
104 'sort_by': request_get.get(sort_by_filter_name, None)}
105
[end of saleor/dashboard/templatetags/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py
--- a/saleor/dashboard/templatetags/utils.py
+++ b/saleor/dashboard/templatetags/utils.py
@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+from json import dumps
from urllib.parse import urlencode
from django import forms
@@ -102,3 +104,13 @@
return {
'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
'sort_by': request_get.get(sort_by_filter_name, None)}
+
+
[email protected]_tag(takes_context=True)
+def serialize_messages(context):
+ """Serialize django.contrib.messages to JSON"""
+ messages = context.get('messages', [])
+ data = {}
+ for i, message in enumerate(messages):
+ data[i] = str(message)
+ return dumps(data)
| {"golden_diff": "diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py\n--- a/saleor/dashboard/templatetags/utils.py\n+++ b/saleor/dashboard/templatetags/utils.py\n@@ -1,3 +1,5 @@\n+from __future__ import unicode_literals\n+from json import dumps\n from urllib.parse import urlencode\n \n from django import forms\n@@ -102,3 +104,13 @@\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n+\n+\[email protected]_tag(takes_context=True)\n+def serialize_messages(context):\n+ \"\"\"Serialize django.contrib.messages to JSON\"\"\"\n+ messages = context.get('messages', [])\n+ data = {}\n+ for i, message in enumerate(messages):\n+ data[i] = str(message)\n+ return dumps(data)\n", "issue": "Refactor displaying success messages in the dashboard\nThe code responsible for displaying success messages in the dashboard lives in [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS which isn't very elegant. Instead, there should be a function written entirely in JS that would take care of rendering those messages with data passed from backend through `data-*` attributes.\n", "before_files": [{"content": "from urllib.parse import urlencode\n\nfrom django import forms\nfrom django.template import Library\nfrom django_filters.fields import RangeField\nfrom versatileimagefield.widgets import VersatileImagePPOIClickWidget\n\nfrom ...product.utils import get_margin_for_variant, get_variant_costs_data\nfrom ..product.widgets import ImagePreviewWidget\nfrom .chips import (\n handle_default, handle_multiple_choice, handle_multiple_model_choice,\n handle_nullboolean, handle_range, handle_single_choice,\n handle_single_model_choice)\n\nregister = Library()\n\n\[email protected]_tag(takes_context=True)\ndef construct_get_query(context, **params):\n request_get = context['request'].GET.dict()\n if not (request_get or params):\n return ''\n all_params = {}\n all_params.update(request_get)\n all_params.update(params)\n all_params.update(context.get('default_pagination_params', {}))\n return '?' 
+ urlencode(all_params)\n\n\[email protected]\ndef is_versatile_image_ppoi_click_widget(field):\n '''\n This filter checks if image field widget is used when user wants to edit\n existing product image.\n '''\n return isinstance(field.field.widget, VersatileImagePPOIClickWidget)\n\n\[email protected]\ndef is_image_preview_widget(field):\n '''\n This filter checks if image field widget is used when user wants to add new\n product image.\n '''\n return isinstance(field.field.widget, ImagePreviewWidget)\n\n\[email protected]_tag('dashboard/product/product_variant/_image_select.html')\ndef render_image_choice(field):\n choices = zip(field, field.field.queryset)\n return {'field': field, 'choices_with_images': choices}\n\n\[email protected]_tag('dashboard/includes/_pagination.html',\n takes_context=True)\ndef paginate(context, page_obj, num_of_pages=5):\n context['page_obj'] = page_obj\n context['n_forward'] = num_of_pages + 1\n context['n_backward'] = -num_of_pages - 1\n context['next_section'] = (2 * num_of_pages) + 1\n context['previous_section'] = (-2 * num_of_pages) - 1\n return context\n\n\[email protected]_tag\ndef margin_for_variant(stock):\n return get_margin_for_variant(stock)\n\n\[email protected]_tag\ndef margins_for_variant(variant):\n margins = get_variant_costs_data(variant)['margins']\n return margins\n\n\[email protected]_tag('dashboard/includes/_filters.html', takes_context=True)\ndef add_filters(context, filter_set, sort_by_filter_name='sort_by'):\n chips = []\n request_get = context['request'].GET.copy()\n for filter_name in filter_set.form.cleaned_data.keys():\n if filter_name == sort_by_filter_name:\n # Skip processing of sort_by filter, as it's rendered differently\n continue\n\n field = filter_set.form[filter_name]\n if field.value() not in ['', None]:\n if isinstance(field.field, forms.NullBooleanField):\n items = handle_nullboolean(field, request_get)\n elif isinstance(field.field, forms.ModelMultipleChoiceField):\n items = handle_multiple_model_choice(field, request_get)\n elif isinstance(field.field, forms.MultipleChoiceField):\n items = handle_multiple_choice(field, request_get)\n elif isinstance(field.field, forms.ModelChoiceField):\n items = handle_single_model_choice(field, request_get)\n elif isinstance(field.field, forms.ChoiceField):\n items = handle_single_choice(field, request_get)\n elif isinstance(field.field, RangeField):\n items = handle_range(field, request_get)\n else:\n items = handle_default(field, request_get)\n chips.extend(items)\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n", "path": "saleor/dashboard/templatetags/utils.py"}]} | 1,655 | 216 |
gh_patches_debug_12447 | rasdani/github-patches | git_diff | searxng__searxng-3204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: lingva engine / redirects & Key-Errors
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/return42/searxng
Branch: darmarit.org
Version: 2024.2.3+a6f5d690
**How did you install SearXNG?**
(unmodified fork/brand) from master branch
**What happened?**
With the default config / the "official instance" we have the errors reported below:
https://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041
**How To Reproduce**
```
!lingva en-de convenient
```
**Technical report**
```
Error
* Error: httpx.ReadTimeout
* Percentage: 50
* Parameters: `(None, None, 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:118`
* Function: `_send_http_request`
* Code: `response = req(params['url'], **request_args)`
```
```
Error
* Error: 1 redirects, maximum: 0
* Percentage: 50
* Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:127`
* Function: `_send_http_request`
* Code: `count_error(`
```
```
Error
* Error: KeyError
* Percentage: 50
* Parameters: `()`
* File name: `searx/engines/lingva.py:51`
* Function: `response`
* Code: `infobox += f"<b>{translation['type']}</b>"`
```
</issue>
<code>
[start of searx/engines/lingva.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Lingva (alternative Google Translate frontend)"""
4
5 from json import loads
6
7 about = {
8 "website": 'https://lingva.ml',
9 "wikidata_id": None,
10 "official_api_documentation": 'https://github.com/thedaviddelta/lingva-translate#public-apis',
11 "use_official_api": True,
12 "require_api_key": False,
13 "results": 'JSON',
14 }
15
16 engine_type = 'online_dictionary'
17 categories = ['general']
18
19 url = "https://lingva.thedaviddelta.com/"
20 search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
21
22
23 def request(_query, params):
24 params['url'] = search_url.format(
25 url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']
26 )
27 return params
28
29
30 def response(resp):
31 results = []
32
33 result = loads(resp.text)
34 info = result["info"]
35 from_to_prefix = "%s-%s " % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])
36
37 if "typo" in info:
38 results.append({"suggestion": from_to_prefix + info["typo"]})
39
40 if 'definitions' in info: # pylint: disable=too-many-nested-blocks
41 for definition in info['definitions']:
42 if 'list' in definition:
43 for item in definition['list']:
44 if 'synonyms' in item:
45 for synonym in item['synonyms']:
46 results.append({"suggestion": from_to_prefix + synonym})
47
48 infobox = ""
49
50 for translation in info["extraTranslations"]:
51 infobox += f"<b>{translation['type']}</b>"
52
53 for word in translation["list"]:
54 infobox += f"<dl><dt>{word['word']}</dt>"
55
56 for meaning in word["meanings"]:
57 infobox += f"<dd>{meaning}</dd>"
58
59 infobox += "</dl>"
60
61 results.append(
62 {
63 'infobox': result["translation"],
64 'content': infobox,
65 }
66 )
67
68 return results
69
[end of searx/engines/lingva.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py
--- a/searx/engines/lingva.py
+++ b/searx/engines/lingva.py
@@ -16,7 +16,7 @@
engine_type = 'online_dictionary'
categories = ['general']
-url = "https://lingva.thedaviddelta.com/"
+url = "https://lingva.thedaviddelta.com"
search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
@@ -48,8 +48,6 @@
infobox = ""
for translation in info["extraTranslations"]:
- infobox += f"<b>{translation['type']}</b>"
-
for word in translation["list"]:
infobox += f"<dl><dt>{word['word']}</dt>"
| {"golden_diff": "diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py\n--- a/searx/engines/lingva.py\n+++ b/searx/engines/lingva.py\n@@ -16,7 +16,7 @@\n engine_type = 'online_dictionary'\n categories = ['general']\n \n-url = \"https://lingva.thedaviddelta.com/\"\n+url = \"https://lingva.thedaviddelta.com\"\n search_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n \n \n@@ -48,8 +48,6 @@\n infobox = \"\"\n \n for translation in info[\"extraTranslations\"]:\n- infobox += f\"<b>{translation['type']}</b>\"\n-\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n", "issue": "Bug: lingva engine / redirects & Key-Errors\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n\r\nRepository: https://github.com/return42/searxng\r\nBranch: darmarit.org\r\nVersion: 2024.2.3+a6f5d690\r\n\r\n**How did you install SearXNG?**\r\n\r\n(unmodified fork/brand) from master branch\r\n\r\n**What happened?**\r\n\r\nWith the default config / the \"official instance\" we have the errors reported below:\r\n\r\nhttps://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041\r\n\r\n**How To Reproduce**\r\n\r\n```\r\n!lingva en-de convenient\r\n```\r\n\r\n**Technical report**\r\n\r\n```\r\nError\r\n * Error: httpx.ReadTimeout\r\n * Percentage: 50\r\n * Parameters: `(None, None, 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:118`\r\n * Function: `_send_http_request`\r\n * Code: `response = req(params['url'], **request_args)`\r\n```\r\n\r\n```\r\nError\r\n * Error: 1 redirects, maximum: 0\r\n * Percentage: 50\r\n * Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:127`\r\n * Function: `_send_http_request`\r\n * Code: `count_error(`\r\n```\r\n\r\n```\r\nError\r\n * Error: KeyError\r\n * Percentage: 50\r\n * Parameters: `()`\r\n * File name: `searx/engines/lingva.py:51`\r\n * Function: `response`\r\n * Code: `infobox += f\"<b>{translation['type']}</b>\"`\r\n```\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Lingva (alternative Google Translate frontend)\"\"\"\n\nfrom json import loads\n\nabout = {\n \"website\": 'https://lingva.ml',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://github.com/thedaviddelta/lingva-translate#public-apis',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\nengine_type = 'online_dictionary'\ncategories = ['general']\n\nurl = \"https://lingva.thedaviddelta.com/\"\nsearch_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n\n\ndef request(_query, params):\n params['url'] = search_url.format(\n url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']\n )\n return params\n\n\ndef response(resp):\n results = []\n\n result = loads(resp.text)\n info = result[\"info\"]\n from_to_prefix = \"%s-%s \" % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])\n\n if \"typo\" in info:\n results.append({\"suggestion\": from_to_prefix + info[\"typo\"]})\n\n if 'definitions' in info: # pylint: disable=too-many-nested-blocks\n for definition in info['definitions']:\n if 'list' in definition:\n for item in definition['list']:\n if 'synonyms' in item:\n for synonym in item['synonyms']:\n results.append({\"suggestion\": from_to_prefix + synonym})\n\n infobox = \"\"\n\n for translation in 
info[\"extraTranslations\"]:\n infobox += f\"<b>{translation['type']}</b>\"\n\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n\n for meaning in word[\"meanings\"]:\n infobox += f\"<dd>{meaning}</dd>\"\n\n infobox += \"</dl>\"\n\n results.append(\n {\n 'infobox': result[\"translation\"],\n 'content': infobox,\n }\n )\n\n return results\n", "path": "searx/engines/lingva.py"}]} | 1,632 | 193 |
gh_patches_debug_254 | rasdani/github-patches | git_diff | mindee__doctr-123 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[docs] Enable documentation of multiple versions at once
As of now, the documentation that would be deployed publicly is only the latest version. The better alternative would be:
- having the latest version by default
- having the documentation of each release accessible as well using a displayed selector
Hugginface transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 import sphinx_rtd_theme
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17 sys.path.insert(0, os.path.abspath('../..'))
18 import doctr
19
20 # -- Project information -----------------------------------------------------
21
22 master_doc = 'index'
23 project = 'doctr'
24 copyright = '2021, Mindee'
25 author = 'François-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'
26
27 # The full version, including alpha/beta/rc tags
28 version = doctr.__version__
29 release = doctr.__version__ + '-git'
30
31
32 # -- General configuration ---------------------------------------------------
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = [
38 'sphinx.ext.autodoc',
39 'sphinx.ext.napoleon',
40 'sphinx.ext.viewcode',
41 'sphinx.ext.coverage',
42 'sphinx.ext.mathjax',
43 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/
44 'sphinx_copybutton',
45 ]
46
47 napoleon_use_ivar = True
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ['_templates']
51
52 # List of patterns, relative to source directory, that match files and
53 # directories to ignore when looking for source files.
54 # This pattern also affects html_static_path and html_extra_path.
55 exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']
56
57
58 # The name of the Pygments (syntax highlighting) style to use.
59 pygments_style = 'sphinx'
60 highlight_language = 'python3'
61
62 # -- Options for HTML output -------------------------------------------------
63
64 # The theme to use for HTML and HTML Help pages. See the documentation for
65 # a list of builtin themes.
66 #
67 html_theme = 'sphinx_rtd_theme'
68 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
69
70 # Theme options are theme-specific and customize the look and feel of a theme
71 # further. For a list of options available for each theme, see the
72 # documentation.
73 #
74 html_theme_options = {
75 'collapse_navigation': False,
76 'display_version': True,
77 'logo_only': False,
78 }
79
80 # html_logo = '_static/images/logo.png'
81
82
83 # Add any paths that contain custom static files (such as style sheets) here,
84 # relative to this directory. They are copied after the builtin static files,
85 # so a file named "default.css" will overwrite the builtin "default.css".
86 html_static_path = ['_static']
87
88 # A list of files that should not be packed into the epub file.
89 epub_exclude_files = ['search.html']
90
91 def setup(app):
92 app.add_css_file('css/mindee.css')
93 app.add_js_file('js/custom.js')
94
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -73,7 +73,7 @@
#
html_theme_options = {
'collapse_navigation': False,
- 'display_version': True,
+ 'display_version': False,
'logo_only': False,
}
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -73,7 +73,7 @@\n #\n html_theme_options = {\n 'collapse_navigation': False,\n- 'display_version': True,\n+ 'display_version': False,\n 'logo_only': False,\n }\n", "issue": "[docs] Enable documentation of multiple versions at once\nAs of now, the documentation that would be deployed publicly is only the latest version. The better alternative would be:\r\n- having the latest version by default\r\n- having the documentation of each release accessible as well using a displayed selector\r\n\r\nHugginface transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport sphinx_rtd_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'doctr'\ncopyright = '2021, Mindee'\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': False,\n}\n\n# html_logo = '_static/images/logo.png'\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}]} | 1,505 | 78 |
gh_patches_debug_49809 | rasdani/github-patches | git_diff | plotly__plotly.py-699 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
jsonschema.SchemaError when a figure is validated
Here is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020
The notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:
_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:
Notebook Validation failed_:
`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:
`{
"data": [
{
"colorscale": "Viridis",
"z": [
[
2,
27,
105,
100
],
[
87,
14,
121,
102
],
[
26,
121,
73,
34
],
[
44,
105,
111,
127
]
],
"type": "heatmap",
"zsmooth": "best"
}
],
"layout": {
"width": 400,
"height": 400
}
}`
Initially I formulated this issue only for heatmaps, but meanwhile I realized that this behaviour manifests for any type of plot.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup
2
3 exec (open('plotly/version.py').read())
4
5
6 def readme():
7 with open('README.rst') as f:
8 return f.read()
9
10
11 setup(name='plotly',
12 version=__version__,
13 use_2to3=False,
14 author='Chris P',
15 author_email='[email protected]',
16 maintainer='Chris P',
17 maintainer_email='[email protected]',
18 url='https://plot.ly/python/',
19 description="Python plotting library for collaborative, "
20 "interactive, publication-quality graphs.",
21 long_description=readme(),
22 classifiers=[
23 'Development Status :: 4 - Beta',
24 'Programming Language :: Python :: 2',
25 'Programming Language :: Python :: 2.7',
26 'Programming Language :: Python :: 3',
27 'Programming Language :: Python :: 3.3',
28 'Programming Language :: Python :: 3.4',
29 'Programming Language :: Python :: 3.5',
30 'Topic :: Scientific/Engineering :: Visualization',
31 ],
32 license='MIT',
33 packages=['plotly',
34 'plotly/api',
35 'plotly/api/v1',
36 'plotly/api/v2',
37 'plotly/plotly',
38 'plotly/plotly/chunked_requests',
39 'plotly/figure_factory',
40 'plotly/graph_objs',
41 'plotly/grid_objs',
42 'plotly/widgets',
43 'plotly/offline',
44 'plotly/matplotlylib',
45 'plotly/matplotlylib/mplexporter',
46 'plotly/matplotlylib/mplexporter/renderers'],
47 package_data={'plotly': ['package_data/*']},
48 install_requires=['decorator', 'requests', 'six', 'pytz'],
49 zip_safe=False)
50
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,5 +45,9 @@
'plotly/matplotlylib/mplexporter',
'plotly/matplotlylib/mplexporter/renderers'],
package_data={'plotly': ['package_data/*']},
- install_requires=['decorator', 'requests', 'six', 'pytz'],
+ install_requires=['decorator',
+ 'nbformat>=4.2',
+ 'pytz',
+ 'requests',
+ 'six'],
zip_safe=False)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,5 +45,9 @@\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n- install_requires=['decorator', 'requests', 'six', 'pytz'],\n+ install_requires=['decorator',\n+ 'nbformat>=4.2',\n+ 'pytz',\n+ 'requests',\n+ 'six'],\n zip_safe=False)\n", "issue": "jsonschema.SchemaError when a figure is validated\nHere is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020\r\n\r\nThe notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:\r\n\r\n_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:\r\nNotebook Validation failed_:\r\n`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:\r\n\r\n`{\r\n \"data\": [\r\n {\r\n \"colorscale\": \"Viridis\",\r\n \"z\": [\r\n [\r\n 2,\r\n 27,\r\n 105,\r\n 100\r\n ],\r\n [\r\n 87,\r\n 14,\r\n 121,\r\n 102\r\n ],\r\n [\r\n 26,\r\n 121,\r\n 73,\r\n 34\r\n ],\r\n [\r\n 44,\r\n 105,\r\n 111,\r\n 127\r\n ]\r\n ],\r\n \"type\": \"heatmap\",\r\n \"zsmooth\": \"best\"\r\n }\r\n ],\r\n \"layout\": {\r\n \"width\": 400,\r\n \"height\": 400\r\n }\r\n}`\r\n\r\nInitially I formulated this issue only for heatmaps, but meanwhile I realized that this behaviour manifests for any type of plot.\n", "before_files": [{"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n url='https://plot.ly/python/',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/api',\n 'plotly/api/v1',\n 'plotly/api/v2',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/figure_factory',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/offline',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n install_requires=['decorator', 'requests', 'six', 'pytz'],\n zip_safe=False)\n", "path": "setup.py"}]} | 1,487 | 128 |