problem_id
stringlengths 18
22
| source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 13
58
| prompt
stringlengths 1.71k
9.01k
| golden_diff
stringlengths 151
4.94k
| verification_info
stringlengths 465
11.3k
| num_tokens_prompt
int64 557
2.05k
| num_tokens_diff
int64 48
1.02k
|
---|---|---|---|---|---|---|---|---|
gh_patches_debug_13425 | rasdani/github-patches | git_diff | python-poetry__poetry-7140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make config file relocation instructions more explicit
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
## Issue
After upgrading from `1.1` to `1.2` I received the following message:
```
Configuration file exists at /Users/xxx/Library/Application Support/pypoetry, reusing this directory.
Consider moving configuration to /Users/xxx/Library/Preferences/pypoetry, as support for the legacy directory will be removed in an upcoming release.
```
Similar to #6854 I (naively) assumed (based on above message) that the entire directory was configuration related and therefore moved it from `~/Library/Application Support/` to `~/Library/Preferences`.
Of course this lead to poetry no longer functioning.
If an automatic move of the config file is not in the cards, at least the warning message needs to be more explicit what file(s) actually need to be moved.
</issue>
<code>
[start of src/poetry/locations.py]
1 from __future__ import annotations
2
3 import logging
4 import os
5 import sys
6
7 from pathlib import Path
8
9 from platformdirs import user_cache_path
10 from platformdirs import user_config_path
11 from platformdirs import user_data_path
12
13
14 logger = logging.getLogger(__name__)
15
16 _APP_NAME = "pypoetry"
17
18 DEFAULT_CACHE_DIR = user_cache_path(_APP_NAME, appauthor=False)
19 CONFIG_DIR = Path(
20 os.getenv("POETRY_CONFIG_DIR")
21 or user_config_path(_APP_NAME, appauthor=False, roaming=True)
22 )
23
24 # platformdirs 2.0.0 corrected the OSX/macOS config directory from
25 # /Users/<user>/Library/Application Support/<appname> to
26 # /Users/<user>/Library/Preferences/<appname>.
27 #
28 # For now we only deprecate use of the old directory.
29 if sys.platform == "darwin":
30 _LEGACY_CONFIG_DIR = CONFIG_DIR.parent.parent / "Application Support" / _APP_NAME
31 config_toml = _LEGACY_CONFIG_DIR / "config.toml"
32 auth_toml = _LEGACY_CONFIG_DIR / "auth.toml"
33
34 if any(file.exists() for file in (auth_toml, config_toml)):
35 logger.warning(
36 (
37 "Configuration file exists at %s, reusing this directory.\n\nConsider"
38 " moving configuration to %s, as support for the legacy directory will"
39 " be removed in an upcoming release."
40 ),
41 _LEGACY_CONFIG_DIR,
42 CONFIG_DIR,
43 )
44 CONFIG_DIR = _LEGACY_CONFIG_DIR
45
46
47 def data_dir() -> Path:
48 poetry_home = os.getenv("POETRY_HOME")
49 if poetry_home:
50 return Path(poetry_home).expanduser()
51
52 return user_data_path(_APP_NAME, appauthor=False, roaming=True)
53
[end of src/poetry/locations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/poetry/locations.py b/src/poetry/locations.py
--- a/src/poetry/locations.py
+++ b/src/poetry/locations.py
@@ -34,9 +34,12 @@
if any(file.exists() for file in (auth_toml, config_toml)):
logger.warning(
(
- "Configuration file exists at %s, reusing this directory.\n\nConsider"
- " moving configuration to %s, as support for the legacy directory will"
- " be removed in an upcoming release."
+ (
+ "Configuration file exists at %s, reusing this"
+ " directory.\n\nConsider moving TOML configuration files to %s, as"
+ " support for the legacy directory will be removed in an upcoming"
+ " release."
+ ),
),
_LEGACY_CONFIG_DIR,
CONFIG_DIR,
| {"golden_diff": "diff --git a/src/poetry/locations.py b/src/poetry/locations.py\n--- a/src/poetry/locations.py\n+++ b/src/poetry/locations.py\n@@ -34,9 +34,12 @@\n if any(file.exists() for file in (auth_toml, config_toml)):\n logger.warning(\n (\n- \"Configuration file exists at %s, reusing this directory.\\n\\nConsider\"\n- \" moving configuration to %s, as support for the legacy directory will\"\n- \" be removed in an upcoming release.\"\n+ (\n+ \"Configuration file exists at %s, reusing this\"\n+ \" directory.\\n\\nConsider moving TOML configuration files to %s, as\"\n+ \" support for the legacy directory will be removed in an upcoming\"\n+ \" release.\"\n+ ),\n ),\n _LEGACY_CONFIG_DIR,\n CONFIG_DIR,\n", "issue": "Make config file relocation instructions more explicit\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n\r\n## Issue\r\n\r\nAfter upgrading from `1.1` to `1.2` I received the following message:\r\n```\r\nConfiguration file exists at /Users/xxx/Library/Application Support/pypoetry, reusing this directory.\r\n\r\nConsider moving configuration to /Users/xxx/Library/Preferences/pypoetry, as support for the legacy directory will be removed in an upcoming release.\r\n```\r\n\r\nSimilar to #6854 I (naively) assumed (based on above message) that the entire directory was configuration related and therefore moved it from `~/Library/Application Support/` to `~/Library/Preferences`.\r\n\r\nOf course this lead to poetry no longer functioning.\r\n\r\nIf an automatic move of the config file is not in the cards, at least the warning message needs to be more explicit what file(s) actually need to be moved.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport os\nimport sys\n\nfrom pathlib import Path\n\nfrom platformdirs import user_cache_path\nfrom platformdirs import user_config_path\nfrom platformdirs import user_data_path\n\n\nlogger = logging.getLogger(__name__)\n\n_APP_NAME = \"pypoetry\"\n\nDEFAULT_CACHE_DIR = user_cache_path(_APP_NAME, appauthor=False)\nCONFIG_DIR = Path(\n os.getenv(\"POETRY_CONFIG_DIR\")\n or user_config_path(_APP_NAME, appauthor=False, roaming=True)\n)\n\n# platformdirs 2.0.0 corrected the OSX/macOS config directory from\n# /Users/<user>/Library/Application Support/<appname> to\n# /Users/<user>/Library/Preferences/<appname>.\n#\n# For now we only deprecate use of the old directory.\nif sys.platform == \"darwin\":\n _LEGACY_CONFIG_DIR = CONFIG_DIR.parent.parent / \"Application Support\" / _APP_NAME\n config_toml = _LEGACY_CONFIG_DIR / \"config.toml\"\n auth_toml = _LEGACY_CONFIG_DIR / \"auth.toml\"\n\n if any(file.exists() for file in (auth_toml, config_toml)):\n logger.warning(\n (\n \"Configuration file exists at %s, reusing this directory.\\n\\nConsider\"\n \" moving configuration to %s, as support for the legacy directory will\"\n \" be removed in an upcoming release.\"\n ),\n _LEGACY_CONFIG_DIR,\n CONFIG_DIR,\n )\n CONFIG_DIR = _LEGACY_CONFIG_DIR\n\n\ndef data_dir() -> Path:\n poetry_home = os.getenv(\"POETRY_HOME\")\n if poetry_home:\n return Path(poetry_home).expanduser()\n\n return user_data_path(_APP_NAME, appauthor=False, roaming=True)\n", "path": "src/poetry/locations.py"}]} | 1,233 | 198 |
gh_patches_debug_17079 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-2953 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use CommandParser for configmodel.bind
In the `new-config` branch, there's a `CommandParser` in `commands/runners.py` which got split off from `CommandRunner` and can be used standalone. Various places doing their own command parsing got updated accordingly, but `configmodel.bind` is still doing its own parsing. An example how it's used, from `:bind`:
https://github.com/qutebrowser/qutebrowser/blob/2117824cf9fdc47ea6fd9457c12cecbac117202e/qutebrowser/config/config.py#L179-L189
Split off from #2779, cc @rcorre - if you want to take a look at this, feel free to do a PR against `new-config`, or wait until that's merged and then do one against `master`.
</issue>
<code>
[start of qutebrowser/completion/models/configmodel.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Functions that return config-related completion models."""
21
22 from qutebrowser.config import configdata, configexc
23 from qutebrowser.completion.models import completionmodel, listcategory, util
24 from qutebrowser.commands import cmdutils
25
26
27 def option(*, info):
28 """A CompletionModel filled with settings and their descriptions."""
29 model = completionmodel.CompletionModel(column_widths=(20, 70, 10))
30 options = ((opt.name, opt.description, info.config.get_str(opt.name))
31 for opt in configdata.DATA.values())
32 model.add_category(listcategory.ListCategory("Options", sorted(options)))
33 return model
34
35
36 def value(optname, *_values, info):
37 """A CompletionModel filled with setting values.
38
39 Args:
40 optname: The name of the config option this model shows.
41 _values: The values already provided on the command line.
42 info: A CompletionInfo instance.
43 """
44 model = completionmodel.CompletionModel(column_widths=(30, 70, 0))
45
46 try:
47 current = info.config.get_str(optname) or '""'
48 except configexc.NoOptionError:
49 return None
50
51 opt = info.config.get_opt(optname)
52 default = opt.typ.to_str(opt.default)
53 cur_cat = listcategory.ListCategory("Current/Default",
54 [(current, "Current value"), (default, "Default value")])
55 model.add_category(cur_cat)
56
57 vals = opt.typ.complete()
58 if vals is not None:
59 model.add_category(listcategory.ListCategory("Completions",
60 sorted(vals)))
61 return model
62
63
64 def bind(key, *, info):
65 """A CompletionModel filled with all bindable commands and descriptions.
66
67 Args:
68 key: the key being bound.
69 """
70 model = completionmodel.CompletionModel(column_widths=(20, 60, 20))
71 cmd_text = info.keyconf.get_command(key, 'normal')
72
73 if cmd_text:
74 cmd_name = cmd_text.split(' ')[0]
75 cmd = cmdutils.cmd_dict.get(cmd_name)
76 data = [(cmd_text, cmd.desc, key)]
77 model.add_category(listcategory.ListCategory("Current", data))
78
79 cmdlist = util.get_cmd_completions(info, include_hidden=True,
80 include_aliases=True)
81 model.add_category(listcategory.ListCategory("Commands", cmdlist))
82 return model
83
[end of qutebrowser/completion/models/configmodel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutebrowser/completion/models/configmodel.py b/qutebrowser/completion/models/configmodel.py
--- a/qutebrowser/completion/models/configmodel.py
+++ b/qutebrowser/completion/models/configmodel.py
@@ -21,7 +21,7 @@
from qutebrowser.config import configdata, configexc
from qutebrowser.completion.models import completionmodel, listcategory, util
-from qutebrowser.commands import cmdutils
+from qutebrowser.commands import runners
def option(*, info):
@@ -71,8 +71,8 @@
cmd_text = info.keyconf.get_command(key, 'normal')
if cmd_text:
- cmd_name = cmd_text.split(' ')[0]
- cmd = cmdutils.cmd_dict.get(cmd_name)
+ parser = runners.CommandParser()
+ cmd = parser.parse(cmd_text).cmd
data = [(cmd_text, cmd.desc, key)]
model.add_category(listcategory.ListCategory("Current", data))
| {"golden_diff": "diff --git a/qutebrowser/completion/models/configmodel.py b/qutebrowser/completion/models/configmodel.py\n--- a/qutebrowser/completion/models/configmodel.py\n+++ b/qutebrowser/completion/models/configmodel.py\n@@ -21,7 +21,7 @@\n \n from qutebrowser.config import configdata, configexc\n from qutebrowser.completion.models import completionmodel, listcategory, util\n-from qutebrowser.commands import cmdutils\n+from qutebrowser.commands import runners\n \n \n def option(*, info):\n@@ -71,8 +71,8 @@\n cmd_text = info.keyconf.get_command(key, 'normal')\n \n if cmd_text:\n- cmd_name = cmd_text.split(' ')[0]\n- cmd = cmdutils.cmd_dict.get(cmd_name)\n+ parser = runners.CommandParser()\n+ cmd = parser.parse(cmd_text).cmd\n data = [(cmd_text, cmd.desc, key)]\n model.add_category(listcategory.ListCategory(\"Current\", data))\n", "issue": "Use CommandParser for configmodel.bind\nIn the `new-config` branch, there's a `CommandParser` in `commands/runners.py` which got split off from `CommandRunner` and can be used standalone. Various places doing their own command parsing got updated accordingly, but `configmodel.bind` is still doing its own parsing. An example how it's used, from `:bind`:\r\n\r\nhttps://github.com/qutebrowser/qutebrowser/blob/2117824cf9fdc47ea6fd9457c12cecbac117202e/qutebrowser/config/config.py#L179-L189\r\n\r\nSplit off from #2779, cc @rcorre - if you want to take a look at this, feel free to do a PR against `new-config`, or wait until that's merged and then do one against `master`.\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Functions that return config-related completion models.\"\"\"\n\nfrom qutebrowser.config import configdata, configexc\nfrom qutebrowser.completion.models import completionmodel, listcategory, util\nfrom qutebrowser.commands import cmdutils\n\n\ndef option(*, info):\n \"\"\"A CompletionModel filled with settings and their descriptions.\"\"\"\n model = completionmodel.CompletionModel(column_widths=(20, 70, 10))\n options = ((opt.name, opt.description, info.config.get_str(opt.name))\n for opt in configdata.DATA.values())\n model.add_category(listcategory.ListCategory(\"Options\", sorted(options)))\n return model\n\n\ndef value(optname, *_values, info):\n \"\"\"A CompletionModel filled with setting values.\n\n Args:\n optname: The name of the config option this model shows.\n _values: The values already provided on the command line.\n info: A CompletionInfo instance.\n \"\"\"\n model = completionmodel.CompletionModel(column_widths=(30, 70, 0))\n\n try:\n current = info.config.get_str(optname) or '\"\"'\n except configexc.NoOptionError:\n return None\n\n opt = info.config.get_opt(optname)\n default = opt.typ.to_str(opt.default)\n cur_cat = listcategory.ListCategory(\"Current/Default\",\n [(current, \"Current value\"), (default, \"Default value\")])\n model.add_category(cur_cat)\n\n vals = opt.typ.complete()\n if vals is not None:\n model.add_category(listcategory.ListCategory(\"Completions\",\n sorted(vals)))\n return model\n\n\ndef bind(key, *, info):\n \"\"\"A CompletionModel filled with all bindable commands and descriptions.\n\n Args:\n key: the key being bound.\n \"\"\"\n model = completionmodel.CompletionModel(column_widths=(20, 60, 20))\n cmd_text = info.keyconf.get_command(key, 'normal')\n\n if cmd_text:\n cmd_name = cmd_text.split(' ')[0]\n cmd = cmdutils.cmd_dict.get(cmd_name)\n data = [(cmd_text, cmd.desc, key)]\n model.add_category(listcategory.ListCategory(\"Current\", data))\n\n cmdlist = util.get_cmd_completions(info, include_hidden=True,\n include_aliases=True)\n model.add_category(listcategory.ListCategory(\"Commands\", cmdlist))\n return model\n", "path": "qutebrowser/completion/models/configmodel.py"}]} | 1,604 | 215 |
gh_patches_debug_16863 | rasdani/github-patches | git_diff | TencentBlueKing__bk-user-1192 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feat: add stringify_pydantic_error util
</issue>
<code>
[start of src/bk-user/bkuser/common/passwd/__init__.py]
1 # -*- coding: utf-8 -*-
2 """
3 TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-用户管理(Bk-User) available.
4 Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
5 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at http://opensource.org/licenses/MIT
7 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
8 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
9 specific language governing permissions and limitations under the License.
10 """
11 from .exceptions import PasswordStrengthError
12 from .generator import PasswordGenerator
13 from .models import PasswordRule, ValidateResult
14 from .validator import PasswordValidator
15
16 __all__ = [
17 # 密码规则
18 "PasswordRule",
19 # 密码生成器
20 "PasswordGenerator",
21 # 密码强度校验器
22 "PasswordValidator",
23 # 密码校验结果
24 "ValidateResult",
25 # 密码强度过低异常
26 "PasswordStrengthError",
27 ]
28
[end of src/bk-user/bkuser/common/passwd/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/bk-user/bkuser/common/passwd/__init__.py b/src/bk-user/bkuser/common/passwd/__init__.py
--- a/src/bk-user/bkuser/common/passwd/__init__.py
+++ b/src/bk-user/bkuser/common/passwd/__init__.py
@@ -8,7 +8,7 @@
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
-from .exceptions import PasswordStrengthError
+from .exceptions import PasswordGenerateError, PasswordStrengthError
from .generator import PasswordGenerator
from .models import PasswordRule, ValidateResult
from .validator import PasswordValidator
@@ -24,4 +24,6 @@
"ValidateResult",
# 密码强度过低异常
"PasswordStrengthError",
+ # 不合理的规则导致生成密码失败
+ "PasswordGenerateError",
]
| {"golden_diff": "diff --git a/src/bk-user/bkuser/common/passwd/__init__.py b/src/bk-user/bkuser/common/passwd/__init__.py\n--- a/src/bk-user/bkuser/common/passwd/__init__.py\n+++ b/src/bk-user/bkuser/common/passwd/__init__.py\n@@ -8,7 +8,7 @@\n an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n specific language governing permissions and limitations under the License.\n \"\"\"\n-from .exceptions import PasswordStrengthError\n+from .exceptions import PasswordGenerateError, PasswordStrengthError\n from .generator import PasswordGenerator\n from .models import PasswordRule, ValidateResult\n from .validator import PasswordValidator\n@@ -24,4 +24,6 @@\n \"ValidateResult\",\n # \u5bc6\u7801\u5f3a\u5ea6\u8fc7\u4f4e\u5f02\u5e38\n \"PasswordStrengthError\",\n+ # \u4e0d\u5408\u7406\u7684\u89c4\u5219\u5bfc\u81f4\u751f\u6210\u5bc6\u7801\u5931\u8d25\n+ \"PasswordGenerateError\",\n ]\n", "issue": "feat: add stringify_pydantic_error util\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nfrom .exceptions import PasswordStrengthError\nfrom .generator import PasswordGenerator\nfrom .models import PasswordRule, ValidateResult\nfrom .validator import PasswordValidator\n\n__all__ = [\n # \u5bc6\u7801\u89c4\u5219\n \"PasswordRule\",\n # \u5bc6\u7801\u751f\u6210\u5668\n \"PasswordGenerator\",\n # \u5bc6\u7801\u5f3a\u5ea6\u6821\u9a8c\u5668\n \"PasswordValidator\",\n # \u5bc6\u7801\u6821\u9a8c\u7ed3\u679c\n \"ValidateResult\",\n # \u5bc6\u7801\u5f3a\u5ea6\u8fc7\u4f4e\u5f02\u5e38\n \"PasswordStrengthError\",\n]\n", "path": "src/bk-user/bkuser/common/passwd/__init__.py"}]} | 883 | 213 |
gh_patches_debug_23596 | rasdani/github-patches | git_diff | rasterio__rasterio-1851 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WarpedVRT does not permit boundless reads (sample.py)
## Expected behavior and actual behavior.
```
def test_rasterio_vrt(self):
import rasterio
# tmp_file default crs is UTM: CRS({'init': 'epsg:32618'}
with create_tmp_geotiff() as (tmp_file, expected):
with rasterio.open(tmp_file) as src:
with rasterio.vrt.WarpedVRT(src, crs="epsg:4326") as vrt:
expected_shape = (vrt.width, vrt.height)
expected_crs = vrt.crs
expected_res = vrt.res
# Value of single pixel in center of image
lon, lat = vrt.xy(vrt.width // 2, vrt.height // 2)
> expected_val = next(vrt.sample([(lon, lat)]))
test/integration/test_integration__io.py:799:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../miniconda/envs/test/lib/python3.7/site-packages/rasterio/sample.py:43: in sample_gen
data = read(indexes, window=window, masked=masked, boundless=True)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E ValueError: WarpedVRT does not permit boundless reads
```
## Operating system
For example: Mac OS, Windows, Linux
## Rasterio version and provenance
1.1.1 from conda-forge
</issue>
<code>
[start of rasterio/sample.py]
1 # Workaround for issue #378. A pure Python generator.
2
3 import numpy
4
5 from rasterio.windows import Window
6
7
8 def sample_gen(dataset, xy, indexes=None, masked=False):
9 """Sample pixels from a dataset
10
11 Parameters
12 ----------
13 dataset : rasterio Dataset
14 Opened in "r" mode.
15 xy : iterable
16 Pairs of x, y coordinates in the dataset's reference system.
17 indexes : int or list of int
18 Indexes of dataset bands to sample.
19 masked : bool, default: False
20 Whether to mask samples that fall outside the extent of the
21 dataset.
22
23 Yields
24 ------
25 array
26 A array of length equal to the number of specified indexes
27 containing the dataset values for the bands corresponding to
28 those indexes.
29
30 """
31 index = dataset.index
32 read = dataset.read
33
34 if isinstance(indexes, int):
35 indexes = [indexes]
36
37 for x, y in xy:
38 row_off, col_off = index(x, y)
39 # if row_off < 0 or col_off < 0:
40 # yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata
41 # else:
42 window = Window(col_off, row_off, 1, 1)
43 data = read(indexes, window=window, masked=masked, boundless=True)
44 yield data[:, 0, 0]
45
[end of rasterio/sample.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/sample.py b/rasterio/sample.py
--- a/rasterio/sample.py
+++ b/rasterio/sample.py
@@ -2,6 +2,7 @@
import numpy
+from rasterio.enums import MaskFlags
from rasterio.windows import Window
@@ -31,14 +32,24 @@
index = dataset.index
read = dataset.read
- if isinstance(indexes, int):
+ if indexes is None:
+ indexes = dataset.indexes
+ elif isinstance(indexes, int):
indexes = [indexes]
for x, y in xy:
+
row_off, col_off = index(x, y)
-# if row_off < 0 or col_off < 0:
-# yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata
-# else:
- window = Window(col_off, row_off, 1, 1)
- data = read(indexes, window=window, masked=masked, boundless=True)
- yield data[:, 0, 0]
+
+ if row_off < 0 or col_off < 0 or row_off >= dataset.height or col_off >= dataset.width:
+ data = numpy.ones((len(indexes),), dtype=dataset.dtypes[0]) * (dataset.nodata or 0)
+ if masked:
+ mask = [False if MaskFlags.all_valid in dataset.mask_flag_enums[i - 1] else True for i in indexes]
+ yield numpy.ma.array(data, mask=mask)
+ else:
+ yield data
+
+ else:
+ window = Window(col_off, row_off, 1, 1)
+ data = read(indexes, window=window, masked=masked)
+ yield data[:, 0, 0]
| {"golden_diff": "diff --git a/rasterio/sample.py b/rasterio/sample.py\n--- a/rasterio/sample.py\n+++ b/rasterio/sample.py\n@@ -2,6 +2,7 @@\n \n import numpy\n \n+from rasterio.enums import MaskFlags\n from rasterio.windows import Window\n \n \n@@ -31,14 +32,24 @@\n index = dataset.index\n read = dataset.read\n \n- if isinstance(indexes, int):\n+ if indexes is None:\n+ indexes = dataset.indexes\n+ elif isinstance(indexes, int):\n indexes = [indexes]\n \n for x, y in xy:\n+\n row_off, col_off = index(x, y)\n-# if row_off < 0 or col_off < 0:\n-# yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata\n-# else:\n- window = Window(col_off, row_off, 1, 1)\n- data = read(indexes, window=window, masked=masked, boundless=True)\n- yield data[:, 0, 0]\n+\n+ if row_off < 0 or col_off < 0 or row_off >= dataset.height or col_off >= dataset.width:\n+ data = numpy.ones((len(indexes),), dtype=dataset.dtypes[0]) * (dataset.nodata or 0)\n+ if masked:\n+ mask = [False if MaskFlags.all_valid in dataset.mask_flag_enums[i - 1] else True for i in indexes]\n+ yield numpy.ma.array(data, mask=mask)\n+ else:\n+ yield data\n+\n+ else:\n+ window = Window(col_off, row_off, 1, 1)\n+ data = read(indexes, window=window, masked=masked)\n+ yield data[:, 0, 0]\n", "issue": "WarpedVRT does not permit boundless reads (sample.py)\n## Expected behavior and actual behavior.\r\n\r\n```\r\n def test_rasterio_vrt(self):\r\n\r\n import rasterio\r\n\r\n \r\n\r\n # tmp_file default crs is UTM: CRS({'init': 'epsg:32618'}\r\n\r\n with create_tmp_geotiff() as (tmp_file, expected):\r\n\r\n with rasterio.open(tmp_file) as src:\r\n\r\n with rasterio.vrt.WarpedVRT(src, crs=\"epsg:4326\") as vrt:\r\n\r\n expected_shape = (vrt.width, vrt.height)\r\n\r\n expected_crs = vrt.crs\r\n\r\n expected_res = vrt.res\r\n\r\n # Value of single pixel in center of image\r\n\r\n lon, lat = vrt.xy(vrt.width // 2, vrt.height // 2)\r\n\r\n> expected_val = next(vrt.sample([(lon, lat)]))\r\n\r\ntest/integration/test_integration__io.py:799: \r\n\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n../../../miniconda/envs/test/lib/python3.7/site-packages/rasterio/sample.py:43: in sample_gen\r\n\r\n data = read(indexes, window=window, masked=masked, boundless=True)\r\n\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n> ???\r\n\r\nE ValueError: WarpedVRT does not permit boundless reads\r\n```\r\n\r\n## Operating system\r\n\r\nFor example: Mac OS, Windows, Linux\r\n\r\n## Rasterio version and provenance\r\n\r\n1.1.1 from conda-forge\n", "before_files": [{"content": "# Workaround for issue #378. A pure Python generator.\n\nimport numpy\n\nfrom rasterio.windows import Window\n\n\ndef sample_gen(dataset, xy, indexes=None, masked=False):\n \"\"\"Sample pixels from a dataset\n\n Parameters\n ----------\n dataset : rasterio Dataset\n Opened in \"r\" mode.\n xy : iterable\n Pairs of x, y coordinates in the dataset's reference system.\n indexes : int or list of int\n Indexes of dataset bands to sample.\n masked : bool, default: False\n Whether to mask samples that fall outside the extent of the\n dataset.\n\n Yields\n ------\n array\n A array of length equal to the number of specified indexes\n containing the dataset values for the bands corresponding to\n those indexes.\n\n \"\"\"\n index = dataset.index\n read = dataset.read\n\n if isinstance(indexes, int):\n indexes = [indexes]\n\n for x, y in xy:\n row_off, col_off = index(x, y)\n# if row_off < 0 or col_off < 0:\n# yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata\n# else:\n window = Window(col_off, row_off, 1, 1)\n data = read(indexes, window=window, masked=masked, boundless=True)\n yield data[:, 0, 0]\n", "path": "rasterio/sample.py"}]} | 1,318 | 404 |
gh_patches_debug_38933 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2802 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider anthonys_restaurants is broken
During the global build at 2021-06-16-14-42-20, spider **anthonys_restaurants** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/anthonys_restaurants.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson))
</issue>
<code>
[start of locations/spiders/anthonys_restaurants.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 from locations.items import GeojsonPointItem
4
5
6 class AnthonysRestaurantsSpiders(scrapy.Spider):
7 name = "anthonys_restaurants"
8 item_attributes = { 'brand': "Anthony's" }
9 allowed_domains = ["www.anthonys.com"]
10 start_urls = (
11 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',
12 )
13
14 def parse(self, response):
15 for match in response.xpath("//markers/marker"):
16 fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')
17
18 # Accounts for cases with second address line
19 if(len(fullAddress.split(",")) == 4):
20 cityString = fullAddress.split(",")[2].strip()
21 stateString = fullAddress.split(",")[3].strip().split(" ")[0].strip()
22 postString = fullAddress.split(",")[3].strip().split(" ")[1].strip()
23 addrLineOne = fullAddress.split(",")[0].strip()
24 addrLineTwo = fullAddress.split(",")[1].strip()
25 addrString = addrLineOne + ", " + addrLineTwo
26 else:
27 cityString = fullAddress.split(",")[1].strip()
28 stateString = fullAddress.split(",")[2].strip().split(" ")[0].strip()
29 postString = fullAddress.split(",")[2].strip().split(" ")[1].strip()
30 addrString = fullAddress.split(",")[0]
31
32 yield GeojsonPointItem(
33 ref=match.xpath('.//@title').extract_first().strip(),
34 lat=match.xpath('.//@lat').extract_first().strip(),
35 lon=match.xpath('.//@lng').extract_first().strip(),
36 addr_full=addrString,
37 city=cityString,
38 state=stateString,
39 postcode=postString,
40 phone=match.xpath('.//@phone').extract_first().replace(" ", ""),
41 )
42
[end of locations/spiders/anthonys_restaurants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/anthonys_restaurants.py b/locations/spiders/anthonys_restaurants.py
--- a/locations/spiders/anthonys_restaurants.py
+++ b/locations/spiders/anthonys_restaurants.py
@@ -1,41 +1,49 @@
# -*- coding: utf-8 -*-
+import json
+import re
+
import scrapy
+
from locations.items import GeojsonPointItem
class AnthonysRestaurantsSpiders(scrapy.Spider):
name = "anthonys_restaurants"
- item_attributes = { 'brand': "Anthony's" }
+ item_attributes = {"brand": "Anthony's"}
allowed_domains = ["www.anthonys.com"]
- start_urls = (
- 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',
- )
+ start_urls = ("https://www.anthonys.com/restaurants/",)
def parse(self, response):
- for match in response.xpath("//markers/marker"):
- fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')
-
- # Accounts for cases with second address line
- if(len(fullAddress.split(",")) == 4):
- cityString = fullAddress.split(",")[2].strip()
- stateString = fullAddress.split(",")[3].strip().split(" ")[0].strip()
- postString = fullAddress.split(",")[3].strip().split(" ")[1].strip()
- addrLineOne = fullAddress.split(",")[0].strip()
- addrLineTwo = fullAddress.split(",")[1].strip()
- addrString = addrLineOne + ", " + addrLineTwo
- else:
- cityString = fullAddress.split(",")[1].strip()
- stateString = fullAddress.split(",")[2].strip().split(" ")[0].strip()
- postString = fullAddress.split(",")[2].strip().split(" ")[1].strip()
- addrString = fullAddress.split(",")[0]
-
- yield GeojsonPointItem(
- ref=match.xpath('.//@title').extract_first().strip(),
- lat=match.xpath('.//@lat').extract_first().strip(),
- lon=match.xpath('.//@lng').extract_first().strip(),
- addr_full=addrString,
- city=cityString,
- state=stateString,
- postcode=postString,
- phone=match.xpath('.//@phone').extract_first().replace(" ", ""),
- )
+ script = response.css("#acf-block-locations-map-script-js-extra::text").get()
+ j = json.loads(script[script.find("{") : 1 + script.rfind("}")])
+ for row in j["restaurants"]:
+ meta = {"json": row}
+ yield scrapy.Request(row["link"], meta=meta, callback=self.parse_location)
+
+ def parse_location(self, response):
+ json_data = response.meta["json"]
+ address = json_data["address"]
+ # decode entities
+ name = scrapy.Selector(text=json_data["name"]).xpath("//text()").get()
+
+ # These are weird enough that there's no hope of parsing them, but
+ # clean the text up
+ hours = response.xpath('//strong[text()="Hours:"]/../text()').extract()
+ hours = ';'.join(s.strip().replace('\xa0', ' ') for s in hours)
+
+ properties = {
+ "ref": re.search(r"postid-(\d+)", response.css("body").attrib["class"])[1],
+ "lat": address["latitude"],
+ "lon": address["longitude"],
+ "addr_full": address["address"],
+ "city": address["city"],
+ "state": address["state"],
+ "postcode": address["zip_code"],
+ "name": name,
+ "website": response.url,
+ "phone": (
+ response.xpath("//*[starts-with(@href, 'tel:')]/@href").get() or ""
+ )[4:],
+ "opening_hours": hours,
+ }
+ return GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/anthonys_restaurants.py b/locations/spiders/anthonys_restaurants.py\n--- a/locations/spiders/anthonys_restaurants.py\n+++ b/locations/spiders/anthonys_restaurants.py\n@@ -1,41 +1,49 @@\n # -*- coding: utf-8 -*-\n+import json\n+import re\n+\n import scrapy\n+\n from locations.items import GeojsonPointItem\n \n \n class AnthonysRestaurantsSpiders(scrapy.Spider):\n name = \"anthonys_restaurants\"\n- item_attributes = { 'brand': \"Anthony's\" }\n+ item_attributes = {\"brand\": \"Anthony's\"}\n allowed_domains = [\"www.anthonys.com\"]\n- start_urls = (\n- 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',\n- )\n+ start_urls = (\"https://www.anthonys.com/restaurants/\",)\n \n def parse(self, response):\n- for match in response.xpath(\"//markers/marker\"):\n- fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')\n-\n- # Accounts for cases with second address line\n- if(len(fullAddress.split(\",\")) == 4):\n- cityString = fullAddress.split(\",\")[2].strip()\n- stateString = fullAddress.split(\",\")[3].strip().split(\" \")[0].strip()\n- postString = fullAddress.split(\",\")[3].strip().split(\" \")[1].strip()\n- addrLineOne = fullAddress.split(\",\")[0].strip()\n- addrLineTwo = fullAddress.split(\",\")[1].strip()\n- addrString = addrLineOne + \", \" + addrLineTwo\n- else:\n- cityString = fullAddress.split(\",\")[1].strip()\n- stateString = fullAddress.split(\",\")[2].strip().split(\" \")[0].strip()\n- postString = fullAddress.split(\",\")[2].strip().split(\" \")[1].strip()\n- addrString = fullAddress.split(\",\")[0]\n-\n- yield GeojsonPointItem(\n- ref=match.xpath('.//@title').extract_first().strip(),\n- lat=match.xpath('.//@lat').extract_first().strip(),\n- lon=match.xpath('.//@lng').extract_first().strip(),\n- addr_full=addrString,\n- city=cityString,\n- state=stateString,\n- postcode=postString,\n- phone=match.xpath('.//@phone').extract_first().replace(\" \", \"\"),\n- )\n+ script = response.css(\"#acf-block-locations-map-script-js-extra::text\").get()\n+ j = json.loads(script[script.find(\"{\") : 1 + script.rfind(\"}\")])\n+ for row in j[\"restaurants\"]:\n+ meta = {\"json\": row}\n+ yield scrapy.Request(row[\"link\"], meta=meta, callback=self.parse_location)\n+\n+ def parse_location(self, response):\n+ json_data = response.meta[\"json\"]\n+ address = json_data[\"address\"]\n+ # decode entities\n+ name = scrapy.Selector(text=json_data[\"name\"]).xpath(\"//text()\").get()\n+\n+ # These are weird enough that there's no hope of parsing them, but\n+ # clean the text up\n+ hours = response.xpath('//strong[text()=\"Hours:\"]/../text()').extract()\n+ hours = ';'.join(s.strip().replace('\\xa0', ' ') for s in hours)\n+\n+ properties = {\n+ \"ref\": re.search(r\"postid-(\\d+)\", response.css(\"body\").attrib[\"class\"])[1],\n+ \"lat\": address[\"latitude\"],\n+ \"lon\": address[\"longitude\"],\n+ \"addr_full\": address[\"address\"],\n+ \"city\": address[\"city\"],\n+ \"state\": address[\"state\"],\n+ \"postcode\": address[\"zip_code\"],\n+ \"name\": name,\n+ \"website\": response.url,\n+ \"phone\": (\n+ response.xpath(\"//*[starts-with(@href, 'tel:')]/@href\").get() or \"\"\n+ )[4:],\n+ \"opening_hours\": hours,\n+ }\n+ return GeojsonPointItem(**properties)\n", "issue": "Spider anthonys_restaurants is broken\nDuring the global build at 2021-06-16-14-42-20, spider **anthonys_restaurants** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/anthonys_restaurants.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\n\nclass AnthonysRestaurantsSpiders(scrapy.Spider):\n name = \"anthonys_restaurants\"\n item_attributes = { 'brand': \"Anthony's\" }\n allowed_domains = [\"www.anthonys.com\"]\n start_urls = (\n 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',\n )\n\n def parse(self, response):\n for match in response.xpath(\"//markers/marker\"):\n fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')\n\n # Accounts for cases with second address line\n if(len(fullAddress.split(\",\")) == 4):\n cityString = fullAddress.split(\",\")[2].strip()\n stateString = fullAddress.split(\",\")[3].strip().split(\" \")[0].strip()\n postString = fullAddress.split(\",\")[3].strip().split(\" \")[1].strip()\n addrLineOne = fullAddress.split(\",\")[0].strip()\n addrLineTwo = fullAddress.split(\",\")[1].strip()\n addrString = addrLineOne + \", \" + addrLineTwo\n else:\n cityString = fullAddress.split(\",\")[1].strip()\n stateString = fullAddress.split(\",\")[2].strip().split(\" \")[0].strip()\n postString = fullAddress.split(\",\")[2].strip().split(\" \")[1].strip()\n addrString = fullAddress.split(\",\")[0]\n\n yield GeojsonPointItem(\n ref=match.xpath('.//@title').extract_first().strip(),\n lat=match.xpath('.//@lat').extract_first().strip(),\n lon=match.xpath('.//@lng').extract_first().strip(),\n addr_full=addrString,\n city=cityString,\n state=stateString,\n postcode=postString,\n phone=match.xpath('.//@phone').extract_first().replace(\" \", \"\"),\n )\n", "path": "locations/spiders/anthonys_restaurants.py"}]} | 1,248 | 928 |
gh_patches_debug_11635 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-4320 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG]: Multi-rank on same device
### 🐛 Describe the bug
When I use colossalai CLI with 2 node, I got an error "rank 8 and rank 0 both on CUDA device d000"
I have examined my scripts and command. And torchrun works well.
The error msg is:
```
Error: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.64, is localhost: True, exception: I/O operation on closed file
Error: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.189, is localhost: True, exception: I/O operation on closed file
```
### Environment
_No response_
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/cli/launcher/hostinfo.py]
1 import socket
2 from typing import List
3
4
5 class HostInfo:
6 """
7 A data class to store host connection-related data.
8
9 Args:
10 hostname (str): name or IP address of the host
11 port (str): the port for ssh connection
12 """
13
14 def __init__(
15 self,
16 hostname: str,
17 port: str = None,
18 ):
19 self.hostname = hostname
20 self.port = port
21 self.is_local_host = HostInfo.is_host_localhost(hostname, port)
22
23 @staticmethod
24 def is_host_localhost(hostname: str, port: str = None) -> None:
25 """
26 Check if the host refers to the local machine.
27
28 Args:
29 hostname (str): name or IP address of the host
30 port (str): the port for ssh connection
31
32 Returns:
33 bool: True if it is local, False otherwise
34 """
35
36 if port is None:
37 port = 22 # no port specified, lets just use the ssh port
38
39 # socket.getfqdn("127.0.0.1") does not return localhost
40 # on some users' machines
41 # thus, we directly return True if hostname is localhost, 127.0.0.1 or 0.0.0.0
42 if hostname in ("localhost", "127.0.0.1", "0.0.0.0"):
43 return True
44
45 hostname = socket.getfqdn(hostname)
46 localhost = socket.gethostname()
47 localaddrs = socket.getaddrinfo(localhost, port)
48 targetaddrs = socket.getaddrinfo(hostname, port)
49 for (family, socktype, proto, canonname, sockaddr) in localaddrs:
50 for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:
51 if rsockaddr[0] == sockaddr[0]:
52 return True
53 return False
54
55 def __str__(self):
56 return f'hostname: {self.hostname}, port: {self.port}'
57
58 def __repr__(self):
59 return self.__str__()
60
61
62 class HostInfoList:
63 """
64 A data class to store a list of HostInfo objects.
65 """
66
67 def __init__(self):
68 self.hostinfo_list = []
69
70 def append(self, hostinfo: HostInfo) -> None:
71 """
72 Add an HostInfo object to the list.
73
74 Args:
75 hostinfo (HostInfo): host information
76 """
77
78 self.hostinfo_list.append(hostinfo)
79
80 def remove(self, hostname: str) -> None:
81 """
82 Add an HostInfo object to the list.
83
84 Args:
85 hostname (str): the name of the host
86 """
87
88 hostinfo = self.get_hostinfo(hostname)
89 self.hostinfo_list.remove(hostinfo)
90
91 def get_hostinfo(self, hostname: str) -> HostInfo:
92 """
93 Return the HostInfo object which matches with the hostname.
94
95 Args:
96 hostname (str): the name of the host
97
98 Returns:
99 hostinfo (HostInfo): the HostInfo object which matches with the hostname
100 """
101
102 for hostinfo in self.hostinfo_list:
103 if hostinfo.hostname == hostname:
104 return hostinfo
105
106 raise Exception(f"Hostname {hostname} is not found")
107
108 def has(self, hostname: str) -> bool:
109 """
110 Check if the hostname has been added.
111
112 Args:
113 hostname (str): the name of the host
114
115 Returns:
116 bool: True if added, False otherwise
117 """
118 for hostinfo in self.hostinfo_list:
119 if hostinfo.hostname == hostname:
120 return True
121 return False
122
123 def __iter__(self):
124 return iter(self.hostinfo_list)
125
126 def __len__(self):
127 return len(self.hostinfo_list)
128
[end of colossalai/cli/launcher/hostinfo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/cli/launcher/hostinfo.py b/colossalai/cli/launcher/hostinfo.py
--- a/colossalai/cli/launcher/hostinfo.py
+++ b/colossalai/cli/launcher/hostinfo.py
@@ -46,11 +46,8 @@
localhost = socket.gethostname()
localaddrs = socket.getaddrinfo(localhost, port)
targetaddrs = socket.getaddrinfo(hostname, port)
- for (family, socktype, proto, canonname, sockaddr) in localaddrs:
- for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:
- if rsockaddr[0] == sockaddr[0]:
- return True
- return False
+
+ return localaddrs == targetaddrs
def __str__(self):
return f'hostname: {self.hostname}, port: {self.port}'
| {"golden_diff": "diff --git a/colossalai/cli/launcher/hostinfo.py b/colossalai/cli/launcher/hostinfo.py\n--- a/colossalai/cli/launcher/hostinfo.py\n+++ b/colossalai/cli/launcher/hostinfo.py\n@@ -46,11 +46,8 @@\n localhost = socket.gethostname()\n localaddrs = socket.getaddrinfo(localhost, port)\n targetaddrs = socket.getaddrinfo(hostname, port)\n- for (family, socktype, proto, canonname, sockaddr) in localaddrs:\n- for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:\n- if rsockaddr[0] == sockaddr[0]:\n- return True\n- return False\n+\n+ return localaddrs == targetaddrs\n \n def __str__(self):\n return f'hostname: {self.hostname}, port: {self.port}'\n", "issue": "[BUG]: Multi-rank on same device\n### \ud83d\udc1b Describe the bug\n\nWhen I use colossalai CLI with 2 node, I got an error \"rank 8 and rank 0 both on CUDA device d000\"\r\nI have examined my scripts and command. And torchrun works well.\r\n\r\nThe error msg is:\r\n```\r\nError: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.64, is localhost: True, exception: I/O operation on closed file\r\nError: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.189, is localhost: True, exception: I/O operation on closed file\r\n```\r\n\n\n### Environment\n\n_No response_\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import socket\nfrom typing import List\n\n\nclass HostInfo:\n \"\"\"\n A data class to store host connection-related data.\n\n Args:\n hostname (str): name or IP address of the host\n port (str): the port for ssh connection\n \"\"\"\n\n def __init__(\n self,\n hostname: str,\n port: str = None,\n ):\n self.hostname = hostname\n self.port = port\n self.is_local_host = HostInfo.is_host_localhost(hostname, port)\n\n @staticmethod\n def is_host_localhost(hostname: str, port: str = None) -> None:\n \"\"\"\n Check if the host refers to the local machine.\n\n Args:\n hostname (str): name or IP address of the host\n port (str): the port for ssh connection\n\n Returns:\n bool: True if it is local, False otherwise\n \"\"\"\n\n if port is None:\n port = 22 # no port specified, lets just use the ssh port\n\n # socket.getfqdn(\"127.0.0.1\") does not return localhost\n # on some users' machines\n # thus, we directly return True if hostname is localhost, 127.0.0.1 or 0.0.0.0\n if hostname in (\"localhost\", \"127.0.0.1\", \"0.0.0.0\"):\n return True\n\n hostname = socket.getfqdn(hostname)\n localhost = socket.gethostname()\n localaddrs = socket.getaddrinfo(localhost, port)\n targetaddrs = socket.getaddrinfo(hostname, port)\n for (family, socktype, proto, canonname, sockaddr) in localaddrs:\n for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:\n if rsockaddr[0] == sockaddr[0]:\n return True\n return False\n\n def __str__(self):\n return f'hostname: {self.hostname}, port: {self.port}'\n\n def __repr__(self):\n return self.__str__()\n\n\nclass HostInfoList:\n \"\"\"\n A data class to store a list of HostInfo objects.\n \"\"\"\n\n def __init__(self):\n self.hostinfo_list = []\n\n def append(self, hostinfo: HostInfo) -> None:\n \"\"\"\n Add an HostInfo object to the list.\n\n Args:\n hostinfo (HostInfo): host information\n \"\"\"\n\n self.hostinfo_list.append(hostinfo)\n\n def remove(self, hostname: str) -> None:\n \"\"\"\n Add an HostInfo object to the list.\n\n Args:\n hostname (str): the name of the host\n \"\"\"\n\n hostinfo = self.get_hostinfo(hostname)\n self.hostinfo_list.remove(hostinfo)\n\n def get_hostinfo(self, hostname: str) -> HostInfo:\n \"\"\"\n Return the HostInfo object which matches with the hostname.\n\n Args:\n hostname (str): the name of the host\n\n Returns:\n hostinfo (HostInfo): the HostInfo object which matches with the hostname\n \"\"\"\n\n for hostinfo in self.hostinfo_list:\n if hostinfo.hostname == hostname:\n return hostinfo\n\n raise Exception(f\"Hostname {hostname} is not found\")\n\n def has(self, hostname: str) -> bool:\n \"\"\"\n Check if the hostname has been added.\n\n Args:\n hostname (str): the name of the host\n\n Returns:\n bool: True if added, False otherwise\n \"\"\"\n for hostinfo in self.hostinfo_list:\n if hostinfo.hostname == hostname:\n return True\n return False\n\n def __iter__(self):\n return iter(self.hostinfo_list)\n\n def __len__(self):\n return len(self.hostinfo_list)\n", "path": "colossalai/cli/launcher/hostinfo.py"}]} | 1,962 | 208 |
gh_patches_debug_6423 | rasdani/github-patches | git_diff | pytorch__examples-229 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unused import of math in time_sequence_prediction example
The generate_sine_wave.py module imports math on the first line, but doesn't use it. This import should be removed.
</issue>
<code>
[start of time_sequence_prediction/generate_sine_wave.py]
1 import math
2 import numpy as np
3 import torch
4 T = 20
5 L = 1000
6 N = 100
7 np.random.seed(2)
8 x = np.empty((N, L), 'int64')
9 x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)
10 data = np.sin(x / 1.0 / T).astype('float64')
11 torch.save(data, open('traindata.pt', 'wb'))
12
13
[end of time_sequence_prediction/generate_sine_wave.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/time_sequence_prediction/generate_sine_wave.py b/time_sequence_prediction/generate_sine_wave.py
--- a/time_sequence_prediction/generate_sine_wave.py
+++ b/time_sequence_prediction/generate_sine_wave.py
@@ -1,12 +1,13 @@
-import math
import numpy as np
import torch
+
+np.random.seed(2)
+
T = 20
L = 1000
N = 100
-np.random.seed(2)
+
x = np.empty((N, L), 'int64')
-x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)
+x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)
data = np.sin(x / 1.0 / T).astype('float64')
torch.save(data, open('traindata.pt', 'wb'))
-
| {"golden_diff": "diff --git a/time_sequence_prediction/generate_sine_wave.py b/time_sequence_prediction/generate_sine_wave.py\n--- a/time_sequence_prediction/generate_sine_wave.py\n+++ b/time_sequence_prediction/generate_sine_wave.py\n@@ -1,12 +1,13 @@\n-import math\n import numpy as np\n import torch\n+\n+np.random.seed(2)\n+\n T = 20\n L = 1000\n N = 100\n-np.random.seed(2)\n+\n x = np.empty((N, L), 'int64')\n-x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)\n+x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)\n data = np.sin(x / 1.0 / T).astype('float64')\n torch.save(data, open('traindata.pt', 'wb'))\n-\n", "issue": "Unused import of math in time_sequence_prediction example\nThe generate_sine_wave.py module imports math on the first line, but doesn't use it. This import should be removed.\n", "before_files": [{"content": "import math\nimport numpy as np\nimport torch\nT = 20\nL = 1000\nN = 100\nnp.random.seed(2)\nx = np.empty((N, L), 'int64')\nx[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)\ndata = np.sin(x / 1.0 / T).astype('float64')\ntorch.save(data, open('traindata.pt', 'wb'))\n\n", "path": "time_sequence_prediction/generate_sine_wave.py"}]} | 710 | 215 |
gh_patches_debug_8809 | rasdani/github-patches | git_diff | conan-io__conan-center-index-16928 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] xorg-makedepend/any: Homepage url incorrect
### Description
In the `xorg-makedepend` recipe the homepage url is incorrectly set to "https://gitlab.freedesktop.org/xorg/util/cf" which is a different repository in the same group, the correct repository url is "https://gitlab.freedesktop.org/xorg/util/makedepend". This should be changed accordingly.
To be fixed in https://github.com/conan-io/conan-center-index/blob/master/recipes/xorg-makedepend/all/conanfile.py
### Package and Environment Details
* Package Name/Version: xorg-makedepend/any
* Operating System+version: n/a
* Compiler+version: n/a
* Docker image: n/a
* Conan version: n/a
* Python version: n/a
### Conan profile
n/a
### Steps to reproduce
n/a
### Logs
n/a
</issue>
<code>
[start of recipes/xorg-makedepend/all/conanfile.py]
1 from conan import ConanFile
2 from conan.errors import ConanInvalidConfiguration
3 from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, load, rmdir, save
4 from conan.tools.gnu import Autotools, AutotoolsToolchain, PkgConfigDeps
5 from conan.tools.layout import basic_layout
6 import os
7 import re
8
9 required_conan_version = ">=1.53.0"
10
11
12 class XorgMakedepend(ConanFile):
13 name = "xorg-makedepend"
14 description = "Utility to parse C source files to make dependency lists for Makefiles"
15 topics = ("xorg", "dependency", "obsolete")
16 license = "MIT"
17 homepage = "https://gitlab.freedesktop.org/xorg/util/cf"
18 url = "https://github.com/conan-io/conan-center-index"
19 settings = "os", "arch", "compiler", "build_type"
20
21 @property
22 def _settings_build(self):
23 return getattr(self, "settings_build", self.settings)
24
25 def export_sources(self):
26 export_conandata_patches(self)
27
28 def requirements(self):
29 self.requires("xorg-macros/1.19.3")
30 self.requires("xorg-proto/2022.2")
31
32 def build_requirements(self):
33 self.build_requires("pkgconf/1.7.4")
34
35 def validate(self):
36 if self.settings.os == "Windows":
37 raise ConanInvalidConfiguration("Windows is not supported by xorg-makedepend")
38
39 def configure(self):
40 self.settings.rm_safe("compiler.cppstd")
41 self.settings.rm_safe("compiler.libcxx")
42
43 def package_id(self):
44 del self.info.settings.compiler
45
46 def layout(self):
47 basic_layout(self, src_folder="src")
48
49 def source(self):
50 get(self, **self.conan_data["sources"][self.version],
51 destination=self.source_folder, strip_root=True)
52
53 @property
54 def _user_info_build(self):
55 return getattr(self, "user_info_build", self.deps_user_info)
56
57 def generate(self):
58 tc = AutotoolsToolchain(self)
59 tc.generate()
60
61 deps = PkgConfigDeps(self)
62 deps.generate()
63
64 def build(self):
65 apply_conandata_patches(self)
66 autotools = Autotools(self)
67 autotools.configure()
68 autotools.make()
69
70 def package(self):
71 copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
72 def_h_text = load(self, os.path.join(self.source_folder, "def.h"))
73 license_text = next(re.finditer(r"/\*([^*]+)\*/", def_h_text)).group(1)
74 save(self, os.path.join(self.package_folder, "licenses", "LICENSE"), license_text)
75
76 autotools = Autotools(self)
77 autotools.install()
78 rmdir(self, os.path.join(self.package_folder, "share"))
79
80 def package_info(self):
81 self.cpp_info.libdirs = []
82 self.cpp_info.includedirs = []
83
84 bin_path = os.path.join(self.package_folder, "bin")
85 self.output.info("Appending PATH environment variable: {}".format(bin_path))
86 self.env_info.PATH.append(bin_path)
87
[end of recipes/xorg-makedepend/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/xorg-makedepend/all/conanfile.py b/recipes/xorg-makedepend/all/conanfile.py
--- a/recipes/xorg-makedepend/all/conanfile.py
+++ b/recipes/xorg-makedepend/all/conanfile.py
@@ -14,7 +14,7 @@
description = "Utility to parse C source files to make dependency lists for Makefiles"
topics = ("xorg", "dependency", "obsolete")
license = "MIT"
- homepage = "https://gitlab.freedesktop.org/xorg/util/cf"
+ homepage = "https://gitlab.freedesktop.org/xorg/util/makedepend"
url = "https://github.com/conan-io/conan-center-index"
settings = "os", "arch", "compiler", "build_type"
| {"golden_diff": "diff --git a/recipes/xorg-makedepend/all/conanfile.py b/recipes/xorg-makedepend/all/conanfile.py\n--- a/recipes/xorg-makedepend/all/conanfile.py\n+++ b/recipes/xorg-makedepend/all/conanfile.py\n@@ -14,7 +14,7 @@\n description = \"Utility to parse C source files to make dependency lists for Makefiles\"\n topics = (\"xorg\", \"dependency\", \"obsolete\")\n license = \"MIT\"\n- homepage = \"https://gitlab.freedesktop.org/xorg/util/cf\"\n+ homepage = \"https://gitlab.freedesktop.org/xorg/util/makedepend\"\n url = \"https://github.com/conan-io/conan-center-index\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n", "issue": "[package] xorg-makedepend/any: Homepage url incorrect\n### Description\n\nIn the `xorg-makedepend` recipe the homepage url is incorrectly set to \"https://gitlab.freedesktop.org/xorg/util/cf\" which is a different repository in the same group, the correct repository url is \"https://gitlab.freedesktop.org/xorg/util/makedepend\". This should be changed accordingly.\r\n\r\nTo be fixed in https://github.com/conan-io/conan-center-index/blob/master/recipes/xorg-makedepend/all/conanfile.py\n\n### Package and Environment Details\n\n* Package Name/Version: xorg-makedepend/any\r\n* Operating System+version: n/a\r\n* Compiler+version: n/a\r\n* Docker image: n/a\r\n* Conan version: n/a\r\n* Python version: n/a\r\n\n\n### Conan profile\n\nn/a\n\n### Steps to reproduce\n\nn/a\n\n### Logs\n\nn/a\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, load, rmdir, save\nfrom conan.tools.gnu import Autotools, AutotoolsToolchain, PkgConfigDeps\nfrom conan.tools.layout import basic_layout\nimport os\nimport re\n\nrequired_conan_version = \">=1.53.0\"\n\n\nclass XorgMakedepend(ConanFile):\n name = \"xorg-makedepend\"\n description = \"Utility to parse C source files to make dependency lists for Makefiles\"\n topics = (\"xorg\", \"dependency\", \"obsolete\")\n license = \"MIT\"\n homepage = \"https://gitlab.freedesktop.org/xorg/util/cf\"\n url = \"https://github.com/conan-io/conan-center-index\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n\n @property\n def _settings_build(self):\n return getattr(self, \"settings_build\", self.settings)\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def requirements(self):\n self.requires(\"xorg-macros/1.19.3\")\n self.requires(\"xorg-proto/2022.2\")\n\n def build_requirements(self):\n self.build_requires(\"pkgconf/1.7.4\")\n\n def validate(self):\n if self.settings.os == \"Windows\":\n raise ConanInvalidConfiguration(\"Windows is not supported by xorg-makedepend\")\n\n def configure(self):\n self.settings.rm_safe(\"compiler.cppstd\")\n self.settings.rm_safe(\"compiler.libcxx\")\n\n def package_id(self):\n del self.info.settings.compiler\n\n def layout(self):\n basic_layout(self, src_folder=\"src\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version],\n destination=self.source_folder, strip_root=True)\n\n @property\n def _user_info_build(self):\n return getattr(self, \"user_info_build\", self.deps_user_info)\n\n def generate(self):\n tc = AutotoolsToolchain(self)\n tc.generate()\n\n deps = PkgConfigDeps(self)\n deps.generate()\n\n def build(self):\n apply_conandata_patches(self)\n autotools = Autotools(self)\n autotools.configure()\n autotools.make()\n\n def package(self):\n copy(self, \"COPYING\", src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n def_h_text = load(self, os.path.join(self.source_folder, \"def.h\"))\n license_text = next(re.finditer(r\"/\\*([^*]+)\\*/\", def_h_text)).group(1)\n save(self, os.path.join(self.package_folder, \"licenses\", \"LICENSE\"), license_text)\n\n autotools = Autotools(self)\n autotools.install()\n rmdir(self, os.path.join(self.package_folder, \"share\"))\n\n def package_info(self):\n self.cpp_info.libdirs = []\n self.cpp_info.includedirs = []\n\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/xorg-makedepend/all/conanfile.py"}]} | 1,626 | 181 |
gh_patches_debug_17607 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-4246 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot call sqlite3.backup(db) on a TracedSQLite object
Thanks for taking the time for reporting an issue!
Before reporting an issue on dd-trace-py, please be sure to provide all
necessary information.
If you're hitting a bug, make sure that you're using the latest version of this
library.
### Which version of dd-trace-py are you using?
1.5.0
### Which version of pip are you using?
21.1.1
_ddtrace requires pip>=18 to install one of our pre-built wheels_
### Which version of the libraries are you using?
You can copy/paste the output of `pip freeze` here.
```
ddtrace==1.5.0
```
### How can we reproduce your problem?
```
from ddtrace import config, patch_all
import sqlite3
config.env = "test" # the environment the application is in
config.service = "app" # name of your application
config.version = "v1" # version of your application
patch_all()
src = sqlite3.connect("1.db")
dst = sqlite3.connect("2.db")
with dst:
src.backup(dst, pages=1)
dst.close()
src.close()
```
### What is the result that you get?
The following TypeError
```
TypeError: backup() argument 1 must be sqlite3.Connection, not TracedSQLite
```
### What is the result that you expected?
The function should succeed without error.
</issue>
<code>
[start of ddtrace/contrib/sqlite3/patch.py]
1 import os
2 import sqlite3
3 import sqlite3.dbapi2
4
5 from ddtrace import config
6 from ddtrace.vendor import wrapt
7
8 from ...contrib.dbapi import FetchTracedCursor
9 from ...contrib.dbapi import TracedConnection
10 from ...contrib.dbapi import TracedCursor
11 from ...internal.utils.formats import asbool
12 from ...pin import Pin
13
14
15 # Original connect method
16 _connect = sqlite3.connect
17
18 config._add(
19 "sqlite",
20 dict(
21 _default_service="sqlite",
22 _dbapi_span_name_prefix="sqlite",
23 trace_fetch_methods=asbool(os.getenv("DD_SQLITE_TRACE_FETCH_METHODS", default=False)),
24 ),
25 )
26
27
28 def patch():
29 wrapped = wrapt.FunctionWrapper(_connect, traced_connect)
30
31 setattr(sqlite3, "connect", wrapped)
32 setattr(sqlite3.dbapi2, "connect", wrapped)
33
34
35 def unpatch():
36 sqlite3.connect = _connect
37 sqlite3.dbapi2.connect = _connect
38
39
40 def traced_connect(func, _, args, kwargs):
41 conn = func(*args, **kwargs)
42 return patch_conn(conn)
43
44
45 def patch_conn(conn):
46 wrapped = TracedSQLite(conn)
47 Pin().onto(wrapped)
48 return wrapped
49
50
51 class TracedSQLiteCursor(TracedCursor):
52 def executemany(self, *args, **kwargs):
53 # DEV: SQLite3 Cursor.execute always returns back the cursor instance
54 super(TracedSQLiteCursor, self).executemany(*args, **kwargs)
55 return self
56
57 def execute(self, *args, **kwargs):
58 # DEV: SQLite3 Cursor.execute always returns back the cursor instance
59 super(TracedSQLiteCursor, self).execute(*args, **kwargs)
60 return self
61
62
63 class TracedSQLiteFetchCursor(TracedSQLiteCursor, FetchTracedCursor):
64 pass
65
66
67 class TracedSQLite(TracedConnection):
68 def __init__(self, conn, pin=None, cursor_cls=None):
69 if not cursor_cls:
70 # Do not trace `fetch*` methods by default
71 cursor_cls = TracedSQLiteFetchCursor if config.sqlite.trace_fetch_methods else TracedSQLiteCursor
72
73 super(TracedSQLite, self).__init__(conn, pin=pin, cfg=config.sqlite, cursor_cls=cursor_cls)
74
75 def execute(self, *args, **kwargs):
76 # sqlite has a few extra sugar functions
77 return self.cursor().execute(*args, **kwargs)
78
[end of ddtrace/contrib/sqlite3/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/sqlite3/patch.py b/ddtrace/contrib/sqlite3/patch.py
--- a/ddtrace/contrib/sqlite3/patch.py
+++ b/ddtrace/contrib/sqlite3/patch.py
@@ -1,6 +1,7 @@
import os
import sqlite3
import sqlite3.dbapi2
+import sys
from ddtrace import config
from ddtrace.vendor import wrapt
@@ -75,3 +76,13 @@
def execute(self, *args, **kwargs):
# sqlite has a few extra sugar functions
return self.cursor().execute(*args, **kwargs)
+
+ # backup was added in Python 3.7
+ if sys.version_info >= (3, 7, 0):
+
+ def backup(self, target, *args, **kwargs):
+ # sqlite3 checks the type of `target`, it cannot be a wrapped connection
+ # https://github.com/python/cpython/blob/4652093e1b816b78e9a585d671a807ce66427417/Modules/_sqlite/connection.c#L1897-L1899
+ if isinstance(target, TracedConnection):
+ target = target.__wrapped__
+ return self.__wrapped__.backup(target, *args, **kwargs)
| {"golden_diff": "diff --git a/ddtrace/contrib/sqlite3/patch.py b/ddtrace/contrib/sqlite3/patch.py\n--- a/ddtrace/contrib/sqlite3/patch.py\n+++ b/ddtrace/contrib/sqlite3/patch.py\n@@ -1,6 +1,7 @@\n import os\n import sqlite3\n import sqlite3.dbapi2\n+import sys\n \n from ddtrace import config\n from ddtrace.vendor import wrapt\n@@ -75,3 +76,13 @@\n def execute(self, *args, **kwargs):\n # sqlite has a few extra sugar functions\n return self.cursor().execute(*args, **kwargs)\n+\n+ # backup was added in Python 3.7\n+ if sys.version_info >= (3, 7, 0):\n+\n+ def backup(self, target, *args, **kwargs):\n+ # sqlite3 checks the type of `target`, it cannot be a wrapped connection\n+ # https://github.com/python/cpython/blob/4652093e1b816b78e9a585d671a807ce66427417/Modules/_sqlite/connection.c#L1897-L1899\n+ if isinstance(target, TracedConnection):\n+ target = target.__wrapped__\n+ return self.__wrapped__.backup(target, *args, **kwargs)\n", "issue": "Cannot call sqlite3.backup(db) on a TracedSQLite object\nThanks for taking the time for reporting an issue!\r\n\r\nBefore reporting an issue on dd-trace-py, please be sure to provide all\r\nnecessary information.\r\n\r\nIf you're hitting a bug, make sure that you're using the latest version of this\r\nlibrary.\r\n\r\n### Which version of dd-trace-py are you using?\r\n1.5.0\r\n### Which version of pip are you using?\r\n21.1.1\r\n_ddtrace requires pip>=18 to install one of our pre-built wheels_\r\n\r\n### Which version of the libraries are you using?\r\n\r\nYou can copy/paste the output of `pip freeze` here.\r\n\r\n```\r\nddtrace==1.5.0\r\n```\r\n\r\n### How can we reproduce your problem?\r\n\r\n```\r\nfrom ddtrace import config, patch_all\r\nimport sqlite3\r\n\r\nconfig.env = \"test\" # the environment the application is in\r\nconfig.service = \"app\" # name of your application\r\nconfig.version = \"v1\" # version of your application\r\npatch_all()\r\n\r\nsrc = sqlite3.connect(\"1.db\")\r\ndst = sqlite3.connect(\"2.db\")\r\nwith dst:\r\n src.backup(dst, pages=1)\r\ndst.close()\r\nsrc.close()\r\n```\r\n\r\n### What is the result that you get?\r\n\r\nThe following TypeError\r\n```\r\nTypeError: backup() argument 1 must be sqlite3.Connection, not TracedSQLite\r\n```\r\n\r\n### What is the result that you expected?\r\n\r\nThe function should succeed without error.\r\n\n", "before_files": [{"content": "import os\nimport sqlite3\nimport sqlite3.dbapi2\n\nfrom ddtrace import config\nfrom ddtrace.vendor import wrapt\n\nfrom ...contrib.dbapi import FetchTracedCursor\nfrom ...contrib.dbapi import TracedConnection\nfrom ...contrib.dbapi import TracedCursor\nfrom ...internal.utils.formats import asbool\nfrom ...pin import Pin\n\n\n# Original connect method\n_connect = sqlite3.connect\n\nconfig._add(\n \"sqlite\",\n dict(\n _default_service=\"sqlite\",\n _dbapi_span_name_prefix=\"sqlite\",\n trace_fetch_methods=asbool(os.getenv(\"DD_SQLITE_TRACE_FETCH_METHODS\", default=False)),\n ),\n)\n\n\ndef patch():\n wrapped = wrapt.FunctionWrapper(_connect, traced_connect)\n\n setattr(sqlite3, \"connect\", wrapped)\n setattr(sqlite3.dbapi2, \"connect\", wrapped)\n\n\ndef unpatch():\n sqlite3.connect = _connect\n sqlite3.dbapi2.connect = _connect\n\n\ndef traced_connect(func, _, args, kwargs):\n conn = func(*args, **kwargs)\n return patch_conn(conn)\n\n\ndef patch_conn(conn):\n wrapped = TracedSQLite(conn)\n Pin().onto(wrapped)\n return wrapped\n\n\nclass TracedSQLiteCursor(TracedCursor):\n def executemany(self, *args, **kwargs):\n # DEV: SQLite3 Cursor.execute always returns back the cursor instance\n super(TracedSQLiteCursor, self).executemany(*args, **kwargs)\n return self\n\n def execute(self, *args, **kwargs):\n # DEV: SQLite3 Cursor.execute always returns back the cursor instance\n super(TracedSQLiteCursor, self).execute(*args, **kwargs)\n return self\n\n\nclass TracedSQLiteFetchCursor(TracedSQLiteCursor, FetchTracedCursor):\n pass\n\n\nclass TracedSQLite(TracedConnection):\n def __init__(self, conn, pin=None, cursor_cls=None):\n if not cursor_cls:\n # Do not trace `fetch*` methods by default\n cursor_cls = TracedSQLiteFetchCursor if config.sqlite.trace_fetch_methods else TracedSQLiteCursor\n\n super(TracedSQLite, self).__init__(conn, pin=pin, cfg=config.sqlite, cursor_cls=cursor_cls)\n\n def execute(self, *args, **kwargs):\n # sqlite has a few extra sugar functions\n return self.cursor().execute(*args, **kwargs)\n", "path": "ddtrace/contrib/sqlite3/patch.py"}]} | 1,541 | 308 |
gh_patches_debug_21357 | rasdani/github-patches | git_diff | nextcloud__appstore-282 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: registering an app id and uploading an app release needs to check revoked certificates
In order to prevent old or lost certificates from being abused we need to check if the certificate has been revoked. This has to be done before validating the certificate on app release upload and before registering a new app id.
</issue>
<code>
[start of nextcloudappstore/core/certificate/validator.py]
1 import logging
2 from base64 import b64decode
3
4 import pem
5 from OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \
6 X509Store, X509StoreContext, load_crl
7 from django.conf import settings # type: ignore
8 from rest_framework.exceptions import APIException
9
10 logger = logging.getLogger(__name__)
11
12
13 class CertificateConfiguration:
14 def __init__(self) -> None:
15 self.digest = settings.CERTIFICATE_DIGEST
16
17
18 class InvalidSignatureException(APIException):
19 pass
20
21
22 class InvalidCertificateException(APIException):
23 pass
24
25
26 class CertificateAppIdMismatchException(APIException):
27 pass
28
29
30 class CertificateValidator:
31 """
32 See https://pyopenssl.readthedocs.io/en/stable/api/crypto.html#signing
33 -and-verifying-signatures
34 """
35
36 def __init__(self, config: CertificateConfiguration) -> None:
37 self.config = config
38
39 def validate_signature(self, certificate: str, signature: str,
40 data: bytes) -> None:
41 """
42 Tests if a value is a valid certificate using SHA512
43 :param certificate: the certificate to use as string
44 :param signature: the signature base64 encoded string to test
45 :param data: the binary file content that was signed
46 :raises: InvalidSignatureException if the signature is invalid
47 :return: None
48 """
49 cert = self._to_cert(certificate)
50 err_msg = 'Signature is invalid'
51 try:
52 result = verify(cert, b64decode(signature.encode()), data,
53 self.config.digest)
54 if result is not None:
55 raise InvalidSignatureException(err_msg)
56 except Exception as e:
57 raise InvalidSignatureException('%s: %s' % (err_msg, str(e)))
58
59 def validate_certificate(self, certificate: str, chain: str,
60 crl: str = None) -> None:
61 """
62 Tests if a certificate has been signed by the chain, is not revoked
63 and has not yet been expired.
64 :param certificate: the certificate to test as string
65 :param chain: the certificate chain file content as string
66 :param crl: the certificate revocation list file content as string
67 :raises: InvalidCertificateException if the certificate is invalid
68 :return: None
69 """
70 # root and intermediary certificate need to be split
71 cas = pem.parse(chain.encode())
72 store = X509Store()
73 for ca in cas:
74 store.add_cert(self._to_cert(str(ca)))
75
76 cert = self._to_cert(certificate)
77 ctx = X509StoreContext(store, cert)
78 err_msg = 'Certificate is invalid'
79
80 if crl:
81 crl = load_crl(FILETYPE_PEM, crl)
82 store.add_crl(crl)
83
84 try:
85 result = ctx.verify_certificate()
86 if result is not None:
87 raise InvalidCertificateException(err_msg)
88 except Exception as e:
89 raise InvalidCertificateException('%s: %s' % (err_msg, str(e)))
90
91 def get_cn(self, certificate: str) -> str:
92 """
93 Extracts the CN from a certificate and removes the leading
94 slash, e.g. /news should return news
95 :param certificate: certificate
96 :return: the certificate's subject without the leading slash
97 """
98 cert = self._to_cert(certificate)
99 return cert.get_subject().CN
100
101 def validate_app_id(self, certificate: str, app_id: str) -> None:
102 """
103 Validates if the CN matches the app id
104 :param certificate: app certificate
105 :param app_id: the app id
106 :raises CertificateAppIdMismatchException: if the app id and cert CN do
107 not match
108 :return: None
109 """
110 cn = self.get_cn(certificate)
111 if cn != app_id:
112 msg = 'App id %s does not match cert CN %s' % (app_id, cn)
113 raise CertificateAppIdMismatchException(msg)
114
115 def _to_cert(self, certificate: str) -> X509:
116 return load_certificate(FILETYPE_PEM, certificate.encode())
117
[end of nextcloudappstore/core/certificate/validator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nextcloudappstore/core/certificate/validator.py b/nextcloudappstore/core/certificate/validator.py
--- a/nextcloudappstore/core/certificate/validator.py
+++ b/nextcloudappstore/core/certificate/validator.py
@@ -3,7 +3,7 @@
import pem
from OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \
- X509Store, X509StoreContext, load_crl
+ X509Store, X509StoreContext, load_crl, X509StoreFlags
from django.conf import settings # type: ignore
from rest_framework.exceptions import APIException
@@ -74,12 +74,14 @@
store.add_cert(self._to_cert(str(ca)))
cert = self._to_cert(certificate)
- ctx = X509StoreContext(store, cert)
- err_msg = 'Certificate is invalid'
if crl:
- crl = load_crl(FILETYPE_PEM, crl)
- store.add_crl(crl)
+ parsed_crl = load_crl(FILETYPE_PEM, crl)
+ store.set_flags(X509StoreFlags.CRL_CHECK)
+ store.add_crl(parsed_crl)
+
+ ctx = X509StoreContext(store, cert)
+ err_msg = 'Certificate is invalid'
try:
result = ctx.verify_certificate()
| {"golden_diff": "diff --git a/nextcloudappstore/core/certificate/validator.py b/nextcloudappstore/core/certificate/validator.py\n--- a/nextcloudappstore/core/certificate/validator.py\n+++ b/nextcloudappstore/core/certificate/validator.py\n@@ -3,7 +3,7 @@\n \n import pem\n from OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \\\n- X509Store, X509StoreContext, load_crl\n+ X509Store, X509StoreContext, load_crl, X509StoreFlags\n from django.conf import settings # type: ignore\n from rest_framework.exceptions import APIException\n \n@@ -74,12 +74,14 @@\n store.add_cert(self._to_cert(str(ca)))\n \n cert = self._to_cert(certificate)\n- ctx = X509StoreContext(store, cert)\n- err_msg = 'Certificate is invalid'\n \n if crl:\n- crl = load_crl(FILETYPE_PEM, crl)\n- store.add_crl(crl)\n+ parsed_crl = load_crl(FILETYPE_PEM, crl)\n+ store.set_flags(X509StoreFlags.CRL_CHECK)\n+ store.add_crl(parsed_crl)\n+\n+ ctx = X509StoreContext(store, cert)\n+ err_msg = 'Certificate is invalid'\n \n try:\n result = ctx.verify_certificate()\n", "issue": "API: registering an app id and uploading an app release needs to check revoked certificates\nIn order to prevent old or lost certificates from being abused we need to check if the certificate has been revoked. This has to be done before validating the certificate on app release upload and before registering a new app id.\n\n", "before_files": [{"content": "import logging\nfrom base64 import b64decode\n\nimport pem\nfrom OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \\\n X509Store, X509StoreContext, load_crl\nfrom django.conf import settings # type: ignore\nfrom rest_framework.exceptions import APIException\n\nlogger = logging.getLogger(__name__)\n\n\nclass CertificateConfiguration:\n def __init__(self) -> None:\n self.digest = settings.CERTIFICATE_DIGEST\n\n\nclass InvalidSignatureException(APIException):\n pass\n\n\nclass InvalidCertificateException(APIException):\n pass\n\n\nclass CertificateAppIdMismatchException(APIException):\n pass\n\n\nclass CertificateValidator:\n \"\"\"\n See https://pyopenssl.readthedocs.io/en/stable/api/crypto.html#signing\n -and-verifying-signatures\n \"\"\"\n\n def __init__(self, config: CertificateConfiguration) -> None:\n self.config = config\n\n def validate_signature(self, certificate: str, signature: str,\n data: bytes) -> None:\n \"\"\"\n Tests if a value is a valid certificate using SHA512\n :param certificate: the certificate to use as string\n :param signature: the signature base64 encoded string to test\n :param data: the binary file content that was signed\n :raises: InvalidSignatureException if the signature is invalid\n :return: None\n \"\"\"\n cert = self._to_cert(certificate)\n err_msg = 'Signature is invalid'\n try:\n result = verify(cert, b64decode(signature.encode()), data,\n self.config.digest)\n if result is not None:\n raise InvalidSignatureException(err_msg)\n except Exception as e:\n raise InvalidSignatureException('%s: %s' % (err_msg, str(e)))\n\n def validate_certificate(self, certificate: str, chain: str,\n crl: str = None) -> None:\n \"\"\"\n Tests if a certificate has been signed by the chain, is not revoked\n and has not yet been expired.\n :param certificate: the certificate to test as string\n :param chain: the certificate chain file content as string\n :param crl: the certificate revocation list file content as string\n :raises: InvalidCertificateException if the certificate is invalid\n :return: None\n \"\"\"\n # root and intermediary certificate need to be split\n cas = pem.parse(chain.encode())\n store = X509Store()\n for ca in cas:\n store.add_cert(self._to_cert(str(ca)))\n\n cert = self._to_cert(certificate)\n ctx = X509StoreContext(store, cert)\n err_msg = 'Certificate is invalid'\n\n if crl:\n crl = load_crl(FILETYPE_PEM, crl)\n store.add_crl(crl)\n\n try:\n result = ctx.verify_certificate()\n if result is not None:\n raise InvalidCertificateException(err_msg)\n except Exception as e:\n raise InvalidCertificateException('%s: %s' % (err_msg, str(e)))\n\n def get_cn(self, certificate: str) -> str:\n \"\"\"\n Extracts the CN from a certificate and removes the leading\n slash, e.g. /news should return news\n :param certificate: certificate\n :return: the certificate's subject without the leading slash\n \"\"\"\n cert = self._to_cert(certificate)\n return cert.get_subject().CN\n\n def validate_app_id(self, certificate: str, app_id: str) -> None:\n \"\"\"\n Validates if the CN matches the app id\n :param certificate: app certificate\n :param app_id: the app id\n :raises CertificateAppIdMismatchException: if the app id and cert CN do\n not match\n :return: None\n \"\"\"\n cn = self.get_cn(certificate)\n if cn != app_id:\n msg = 'App id %s does not match cert CN %s' % (app_id, cn)\n raise CertificateAppIdMismatchException(msg)\n\n def _to_cert(self, certificate: str) -> X509:\n return load_certificate(FILETYPE_PEM, certificate.encode())\n", "path": "nextcloudappstore/core/certificate/validator.py"}]} | 1,749 | 324 |
gh_patches_debug_5815 | rasdani/github-patches | git_diff | pulp__pulpcore-4722 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
</issue>
<code>
[start of pulpcore/download/file.py]
1 import os
2
3 from urllib.parse import urlparse
4
5 import aiofiles
6
7 from .base import BaseDownloader, DownloadResult
8
9
10 class FileDownloader(BaseDownloader):
11 """
12 A downloader for downloading files from the filesystem.
13
14 It provides digest and size validation along with computation of the digests needed to save the
15 file as an Artifact. It writes a new file to the disk and the return path is included in the
16 :class:`~pulpcore.plugin.download.DownloadResult`.
17
18 This downloader has all of the attributes of
19 :class:`~pulpcore.plugin.download.BaseDownloader`
20 """
21
22 def __init__(self, url, *args, **kwargs):
23 """
24 Download files from a url that starts with `file://`
25
26 Args:
27 url (str): The url to the file. This is expected to begin with `file://`
28 kwargs (dict): This accepts the parameters of
29 :class:`~pulpcore.plugin.download.BaseDownloader`.
30
31 Raises:
32 ValidationError: When the url starts with `file://`, but is not a subfolder of a path in
33 the ALLOWED_IMPORT_PATH setting.
34 """
35 from pulpcore.app.serializers import RemoteSerializer
36
37 RemoteSerializer().validate_url(url)
38 p = urlparse(url)
39 self._path = os.path.abspath(os.path.join(p.netloc, p.path))
40 super().__init__(url, *args, **kwargs)
41
42 async def _run(self, extra_data=None):
43 """
44 Read, validate, and compute digests on the `url`. This is a coroutine.
45
46 This method provides the same return object type and documented in
47 :meth:`~pulpcore.plugin.download.BaseDownloader._run`.
48
49 Args:
50 extra_data (dict): Extra data passed to the downloader.
51 """
52 async with aiofiles.open(self._path, "rb") as f_handle:
53 while True:
54 chunk = await f_handle.read(1048576) # 1 megabyte
55 if not chunk:
56 await self.finalize()
57 break # the reading is done
58 await self.handle_data(chunk)
59 return DownloadResult(
60 path=self._path,
61 artifact_attributes=self.artifact_attributes,
62 url=self.url,
63 headers=None,
64 )
65
[end of pulpcore/download/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| {"golden_diff": "diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py\n--- a/pulpcore/download/file.py\n+++ b/pulpcore/download/file.py\n@@ -57,7 +57,7 @@\n break # the reading is done\n await self.handle_data(chunk)\n return DownloadResult(\n- path=self._path,\n+ path=self.path,\n artifact_attributes=self.artifact_attributes,\n url=self.url,\n headers=None,\n", "issue": "file:// sync deletes files from directory\n**Version**\r\nPulpcore 3.39\r\n\r\n**Describe the bug**\r\nWhen syncing file:// repositories, files are disappearing after the sync.\r\n\r\n**To Reproduce**\r\n1) Copy these two repositories to the FS:\r\n - https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1\r\n - https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2\r\n2) Sync one, then the other\r\n3) See that some files disappeared.\r\n - In my case, file2 lost every file except PULP_MANIFEST\r\n\r\n\r\n**Expected behavior**\r\nNo files disappear.\r\n\r\n**Additional context**\r\nThis also occurred with RPM content type files.\r\n\n", "before_files": [{"content": "import os\n\nfrom urllib.parse import urlparse\n\nimport aiofiles\n\nfrom .base import BaseDownloader, DownloadResult\n\n\nclass FileDownloader(BaseDownloader):\n \"\"\"\n A downloader for downloading files from the filesystem.\n\n It provides digest and size validation along with computation of the digests needed to save the\n file as an Artifact. It writes a new file to the disk and the return path is included in the\n :class:`~pulpcore.plugin.download.DownloadResult`.\n\n This downloader has all of the attributes of\n :class:`~pulpcore.plugin.download.BaseDownloader`\n \"\"\"\n\n def __init__(self, url, *args, **kwargs):\n \"\"\"\n Download files from a url that starts with `file://`\n\n Args:\n url (str): The url to the file. This is expected to begin with `file://`\n kwargs (dict): This accepts the parameters of\n :class:`~pulpcore.plugin.download.BaseDownloader`.\n\n Raises:\n ValidationError: When the url starts with `file://`, but is not a subfolder of a path in\n the ALLOWED_IMPORT_PATH setting.\n \"\"\"\n from pulpcore.app.serializers import RemoteSerializer\n\n RemoteSerializer().validate_url(url)\n p = urlparse(url)\n self._path = os.path.abspath(os.path.join(p.netloc, p.path))\n super().__init__(url, *args, **kwargs)\n\n async def _run(self, extra_data=None):\n \"\"\"\n Read, validate, and compute digests on the `url`. This is a coroutine.\n\n This method provides the same return object type and documented in\n :meth:`~pulpcore.plugin.download.BaseDownloader._run`.\n\n Args:\n extra_data (dict): Extra data passed to the downloader.\n \"\"\"\n async with aiofiles.open(self._path, \"rb\") as f_handle:\n while True:\n chunk = await f_handle.read(1048576) # 1 megabyte\n if not chunk:\n await self.finalize()\n break # the reading is done\n await self.handle_data(chunk)\n return DownloadResult(\n path=self._path,\n artifact_attributes=self.artifact_attributes,\n url=self.url,\n headers=None,\n )\n", "path": "pulpcore/download/file.py"}]} | 1,292 | 100 |
gh_patches_debug_2426 | rasdani/github-patches | git_diff | kserve__kserve-864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
explanations no longer working with 0.3.0
Am following the steps in with 0.3.0 of kfserving: https://github.com/kubeflow/kfserving/tree/master/docs/samples/explanation/alibi/income
When I execute the curl for the explain I get a 500 error and the container logs show the below. I'm guessing the [update to master](https://github.com/kubeflow/kfserving/pull/803) means that the explainer models have also been updated and so they no longer work with 0.3.0 (the latest release version)
```
[E 200605 17:15:14 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)
HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/tornado/web.py", line 1701, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/kfserving/kfserving/handlers/http.py", line 61, in post
response = model.explain(request)
File "/alibiexplainer/alibiexplainer/explainer.py", line 74, in explain
explanation = self.wrapper.explain(request["instances"])
File "/alibiexplainer/alibiexplainer/anchor_tabular.py", line 89, in explain
anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)
File "/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py", line 803, in explain
for sampler in self.samplers:
AttributeError: 'AnchorTabular' object has no attribute 'samplers'
[E 200605 17:15:14 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 58.80ms
[I 200605 17:18:22 anchor_tabular:83] Arr shape ((1, 12),)
[E 200605 17:18:22 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)
HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/tornado/web.py", line 1701, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/kfserving/kfserving/handlers/http.py", line 61, in post
response = model.explain(request)
File "/alibiexplainer/alibiexplainer/explainer.py", line 74, in explain
explanation = self.wrapper.explain(request["instances"])
File "/alibiexplainer/alibiexplainer/anchor_tabular.py", line 89, in explain
anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)
File "/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py", line 803, in explain
for sampler in self.samplers:
AttributeError: 'AnchorTabular' object has no attribute 'samplers'
[E 200605 17:18:22 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 31.17ms
```
Presumably it would work on master. Does that sound right @cliveseldon ? If so maybe we should just close this.
</issue>
<code>
[start of python/alibiexplainer/setup.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22
23 setup(
24 name='alibiexplainer',
25 version='0.3.0',
26 author_email='[email protected]',
27 license='../../LICENSE.txt',
28 url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',
29 description='Model Explaination Server. \
30 Not intended for use outside KFServing Frameworks Images',
31 long_description=open('README.md').read(),
32 python_requires='>=3.6',
33 packages=find_packages("alibiexplainer"),
34 install_requires=[
35 "kfserving>=0.3.0",
36 "alibi>=0.3",
37 "scikit-learn>=0.20.3",
38 "argparse>=1.4.0",
39 "requests>=2.22.0",
40 "joblib>=0.13.2",
41 "pandas>=0.24.2",
42 "numpy>=1.16.3",
43 "dill>=0.3.0",
44 "spacy>=2.1.4"
45 ],
46 tests_require=tests_require,
47 extras_require={'test': tests_require}
48 )
49
[end of python/alibiexplainer/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py
--- a/python/alibiexplainer/setup.py
+++ b/python/alibiexplainer/setup.py
@@ -33,7 +33,7 @@
packages=find_packages("alibiexplainer"),
install_requires=[
"kfserving>=0.3.0",
- "alibi>=0.3",
+ "alibi==0.3.2",
"scikit-learn>=0.20.3",
"argparse>=1.4.0",
"requests>=2.22.0",
| {"golden_diff": "diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py\n--- a/python/alibiexplainer/setup.py\n+++ b/python/alibiexplainer/setup.py\n@@ -33,7 +33,7 @@\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"kfserving>=0.3.0\",\n- \"alibi>=0.3\",\n+ \"alibi==0.3.2\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n", "issue": "explanations no longer working with 0.3.0\nAm following the steps in with 0.3.0 of kfserving: https://github.com/kubeflow/kfserving/tree/master/docs/samples/explanation/alibi/income\r\n\r\nWhen I execute the curl for the explain I get a 500 error and the container logs show the below. I'm guessing the [update to master](https://github.com/kubeflow/kfserving/pull/803) means that the explainer models have also been updated and so they no longer work with 0.3.0 (the latest release version)\r\n\r\n```\r\n[E 200605 17:15:14 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)\r\n HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')\r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/tornado/web.py\", line 1701, in _execute\r\n result = method(*self.path_args, **self.path_kwargs)\r\n File \"/kfserving/kfserving/handlers/http.py\", line 61, in post\r\n response = model.explain(request)\r\n File \"/alibiexplainer/alibiexplainer/explainer.py\", line 74, in explain\r\n explanation = self.wrapper.explain(request[\"instances\"])\r\n File \"/alibiexplainer/alibiexplainer/anchor_tabular.py\", line 89, in explain\r\n anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py\", line 803, in explain\r\n for sampler in self.samplers:\r\n AttributeError: 'AnchorTabular' object has no attribute 'samplers'\r\n[E 200605 17:15:14 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 58.80ms\r\n[I 200605 17:18:22 anchor_tabular:83] Arr shape ((1, 12),) \r\n[E 200605 17:18:22 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)\r\n HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')\r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/tornado/web.py\", line 1701, in _execute\r\n result = method(*self.path_args, **self.path_kwargs)\r\n File \"/kfserving/kfserving/handlers/http.py\", line 61, in post\r\n response = model.explain(request)\r\n File \"/alibiexplainer/alibiexplainer/explainer.py\", line 74, in explain\r\n explanation = self.wrapper.explain(request[\"instances\"])\r\n File \"/alibiexplainer/alibiexplainer/anchor_tabular.py\", line 89, in explain\r\n anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py\", line 803, in explain\r\n for sampler in self.samplers:\r\n AttributeError: 'AnchorTabular' object has no attribute 'samplers'\r\n[E 200605 17:18:22 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 31.17ms\r\n\r\n```\r\n\r\nPresumably it would work on master. Does that sound right @cliveseldon ? If so maybe we should just close this.\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='alibiexplainer',\n version='0.3.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',\n description='Model Explaination Server. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"kfserving>=0.3.0\",\n \"alibi>=0.3\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"dill>=0.3.0\",\n \"spacy>=2.1.4\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/alibiexplainer/setup.py"}]} | 1,991 | 137 |
gh_patches_debug_23599 | rasdani/github-patches | git_diff | svthalia__concrexit-1793 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Food order can be changed after paying
### Describe the bug
If you order a pizza and pay it, you can still change the product. If you change the product through the api, the payment is not removed.
### How to reproduce
Steps to reproduce the behaviour:
1. Order a pizza
2. Pay with Thalia Pay
3. Change the order through the api
4. Get an expensive pizza for little money
### Expected behaviour
Either changing the order after paying is impossible, or it removes the payment. I think removing the payment (as the website currently seems to do) would be strange, and for event registration we've decided not to enable this.
### Screenshots
<img width="569" alt="image" src="https://user-images.githubusercontent.com/41264528/123456318-01d59880-d5e3-11eb-86c8-9217e4720988.png">
There are probably no food events any time soon, so a hotfix may not be needed, though it might be good to double-check that similar stuff is not possible with registrations.
</issue>
<code>
[start of website/pizzas/api/v2/views.py]
1 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
2 from rest_framework.generics import (
3 ListAPIView,
4 RetrieveAPIView,
5 get_object_or_404,
6 CreateAPIView,
7 DestroyAPIView,
8 UpdateAPIView,
9 )
10
11 from rest_framework import filters as framework_filters, status
12 from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
13 from rest_framework.response import Response
14
15 from pizzas.api.v2 import filters
16 from pizzas.api.v2.serializers import (
17 ProductSerializer,
18 FoodOrderSerializer,
19 FoodOrderUpdateSerializer,
20 FoodOrderCreateSerializer,
21 )
22 from pizzas.api.v2.serializers.food_event import FoodEventSerializer
23 from pizzas.models import FoodEvent, Product, FoodOrder
24 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
25
26
27 class FoodEventListView(ListAPIView):
28 """Returns an overview of all food events."""
29
30 serializer_class = FoodEventSerializer
31 queryset = FoodEvent.objects.all()
32 filter_backends = (
33 framework_filters.OrderingFilter,
34 filters.FoodEventDateFilterBackend,
35 )
36 ordering_fields = ("start", "end")
37 permission_classes = [
38 IsAuthenticatedOrTokenHasScope,
39 DjangoModelPermissionsOrAnonReadOnly,
40 ]
41 required_scopes = ["food:read"]
42
43
44 class FoodEventDetailView(RetrieveAPIView):
45 """Returns one single food event."""
46
47 serializer_class = FoodEventSerializer
48 queryset = FoodEvent.objects.all()
49 permission_classes = [
50 IsAuthenticatedOrTokenHasScope,
51 DjangoModelPermissionsOrAnonReadOnly,
52 ]
53 required_scopes = ["food:read"]
54
55
56 class FoodEventProductsListView(ListAPIView):
57 """Returns an overview of all products."""
58
59 serializer_class = ProductSerializer
60 queryset = Product.available_products.all()
61 filter_backends = (framework_filters.SearchFilter,)
62 search_fields = ("name",)
63 permission_classes = [
64 IsAuthenticatedOrTokenHasScope,
65 DjangoModelPermissionsOrAnonReadOnly,
66 ]
67 required_scopes = ["food:read"]
68
69
70 class FoodEventOrderDetailView(
71 RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView
72 ):
73 """Returns details of a food order."""
74
75 permission_classes = [
76 IsAuthenticatedOrTokenHasScopeForMethod,
77 DjangoModelPermissionsOrAnonReadOnly,
78 ]
79 required_scopes_per_method = {
80 "GET": ["food:read"],
81 "POST": ["food:order"],
82 "PUT": ["food:order"],
83 "PATCH": ["food:order"],
84 "DELETE": ["food:order"],
85 }
86
87 def get_serializer_class(self):
88 if self.request.method.lower() == "get":
89 return FoodOrderSerializer
90 if self.request.method.lower() == "post":
91 return FoodOrderCreateSerializer
92 return FoodOrderUpdateSerializer
93
94 def get_queryset(self):
95 return FoodOrder.objects.filter(food_event=self.food_event)
96
97 def get_object(self):
98 queryset = self.filter_queryset(self.get_queryset())
99 obj = get_object_or_404(queryset, member=self.request.member)
100
101 # May raise a permission denied
102 self.check_object_permissions(self.request, obj)
103
104 return obj
105
106 def dispatch(self, request, *args, **kwargs):
107 self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get("pk"))
108 return super().dispatch(request, *args, **kwargs)
109
110 def update(self, request, *args, **kwargs):
111 super().update(request, *args, **kwargs)
112 instance = self.get_object()
113 return Response(
114 FoodOrderSerializer(instance, context=self.get_serializer_context()).data
115 )
116
117 def create(self, request, *args, **kwargs):
118 serializer = self.get_serializer(data=request.data)
119 serializer.is_valid(raise_exception=True)
120 self.perform_create(serializer)
121 return Response(
122 FoodOrderSerializer(
123 serializer.instance, context=self.get_serializer_context()
124 ).data,
125 status=status.HTTP_201_CREATED,
126 )
127
[end of website/pizzas/api/v2/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py
--- a/website/pizzas/api/v2/views.py
+++ b/website/pizzas/api/v2/views.py
@@ -12,6 +12,8 @@
from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
from rest_framework.response import Response
+from payments.exceptions import PaymentError
+from payments.services import delete_payment
from pizzas.api.v2 import filters
from pizzas.api.v2.serializers import (
ProductSerializer,
@@ -110,6 +112,18 @@
def update(self, request, *args, **kwargs):
super().update(request, *args, **kwargs)
instance = self.get_object()
+
+ if instance.payment:
+ try:
+ delete_payment(
+ instance, member=request.member, ignore_change_window=True
+ )
+ except PaymentError:
+ return Response(
+ "Your order could not be updated because it was already paid.",
+ status=status.HTTP_403_FORBIDDEN,
+ )
+
return Response(
FoodOrderSerializer(instance, context=self.get_serializer_context()).data
)
| {"golden_diff": "diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py\n--- a/website/pizzas/api/v2/views.py\n+++ b/website/pizzas/api/v2/views.py\n@@ -12,6 +12,8 @@\n from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\n from rest_framework.response import Response\n \n+from payments.exceptions import PaymentError\n+from payments.services import delete_payment\n from pizzas.api.v2 import filters\n from pizzas.api.v2.serializers import (\n ProductSerializer,\n@@ -110,6 +112,18 @@\n def update(self, request, *args, **kwargs):\n super().update(request, *args, **kwargs)\n instance = self.get_object()\n+\n+ if instance.payment:\n+ try:\n+ delete_payment(\n+ instance, member=request.member, ignore_change_window=True\n+ )\n+ except PaymentError:\n+ return Response(\n+ \"Your order could not be updated because it was already paid.\",\n+ status=status.HTTP_403_FORBIDDEN,\n+ )\n+\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n", "issue": "Food order can be changed after paying\n### Describe the bug\r\nIf you order a pizza and pay it, you can still change the product. If you change the product through the api, the payment is not removed.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Order a pizza\r\n2. Pay with Thalia Pay\r\n3. Change the order through the api\r\n4. Get an expensive pizza for little money\r\n\r\n### Expected behaviour\r\nEither changing the order after paying is impossible, or it removes the payment. I think removing the payment (as the website currently seems to do) would be strange, and for event registration we've decided not to enable this.\r\n\r\n### Screenshots\r\n<img width=\"569\" alt=\"image\" src=\"https://user-images.githubusercontent.com/41264528/123456318-01d59880-d5e3-11eb-86c8-9217e4720988.png\">\r\n\r\nThere are probably no food events any time soon, so a hotfix may not be needed, though it might be good to double-check that similar stuff is not possible with registrations.\r\n\n", "before_files": [{"content": "from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n get_object_or_404,\n CreateAPIView,\n DestroyAPIView,\n UpdateAPIView,\n)\n\nfrom rest_framework import filters as framework_filters, status\nfrom rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\nfrom rest_framework.response import Response\n\nfrom pizzas.api.v2 import filters\nfrom pizzas.api.v2.serializers import (\n ProductSerializer,\n FoodOrderSerializer,\n FoodOrderUpdateSerializer,\n FoodOrderCreateSerializer,\n)\nfrom pizzas.api.v2.serializers.food_event import FoodEventSerializer\nfrom pizzas.models import FoodEvent, Product, FoodOrder\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass FoodEventListView(ListAPIView):\n \"\"\"Returns an overview of all food events.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n )\n ordering_fields = (\"start\", \"end\")\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventDetailView(RetrieveAPIView):\n \"\"\"Returns one single food event.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n\n serializer_class = ProductSerializer\n queryset = Product.available_products.all()\n filter_backends = (framework_filters.SearchFilter,)\n search_fields = (\"name\",)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventOrderDetailView(\n RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView\n):\n \"\"\"Returns details of a food order.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes_per_method = {\n \"GET\": [\"food:read\"],\n \"POST\": [\"food:order\"],\n \"PUT\": [\"food:order\"],\n \"PATCH\": [\"food:order\"],\n \"DELETE\": [\"food:order\"],\n }\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"get\":\n return FoodOrderSerializer\n if self.request.method.lower() == \"post\":\n return FoodOrderCreateSerializer\n return FoodOrderUpdateSerializer\n\n def get_queryset(self):\n return FoodOrder.objects.filter(food_event=self.food_event)\n\n def get_object(self):\n queryset = self.filter_queryset(self.get_queryset())\n obj = get_object_or_404(queryset, member=self.request.member)\n\n # May raise a permission denied\n self.check_object_permissions(self.request, obj)\n\n return obj\n\n def dispatch(self, request, *args, **kwargs):\n self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get(\"pk\"))\n return super().dispatch(request, *args, **kwargs)\n\n def update(self, request, *args, **kwargs):\n super().update(request, *args, **kwargs)\n instance = self.get_object()\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return Response(\n FoodOrderSerializer(\n serializer.instance, context=self.get_serializer_context()\n ).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/pizzas/api/v2/views.py"}]} | 1,918 | 260 |
gh_patches_debug_9032 | rasdani/github-patches | git_diff | scikit-hep__pyhf-101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
speed up CI tests (do we need all conda packages?)
By using Conda, unfortunately the setup phase of the CI jobs have become a bit slower than without conda, maybe we can look into speeding them up again by checking whether we need all the packages that we install during CI
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 setup(
3 name = 'pyhf',
4 version = '0.0.8',
5 description = '(partial) pure python histfactory implementation',
6 url = '',
7 author = 'Lukas Heinrich',
8 author_email = '[email protected]',
9 packages = find_packages(),
10 include_package_data = True,
11 install_requires = [
12 'numpy',
13 'scipy'
14 ],
15 extras_require = {
16 'xmlimport': [
17 'uproot',
18 ],
19 'torch': [
20 'torch'
21 ],
22 'mxnet':[
23 'mxnet',
24 ],
25 'develop': [
26 'pyflakes',
27 'pytest>=3.2.0',
28 'pytest-cov>=2.5.1',
29 'pytest-benchmark[histogram]',
30 'python-coveralls',
31 'matplotlib',
32 'jupyter',
33 'uproot',
34 'papermill',
35 'torch',
36 'tensorflow',
37 'mxnet>=1.0.0',
38 'graphviz',
39 'sphinx',
40 'sphinxcontrib-napoleon',
41 'sphinx_rtd_theme',
42 'nbsphinx',
43 'jsonschema>=2.6.0'
44 ]
45 },
46 entry_points = {
47 },
48 dependency_links = [
49 ]
50 )
51
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
packages = find_packages(),
include_package_data = True,
install_requires = [
- 'numpy',
+ 'numpy>=1.14.3',
'scipy'
],
extras_require = {
@@ -24,7 +24,7 @@
],
'develop': [
'pyflakes',
- 'pytest>=3.2.0',
+ 'pytest>=3.5.1',
'pytest-cov>=2.5.1',
'pytest-benchmark[histogram]',
'python-coveralls',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n- 'numpy',\n+ 'numpy>=1.14.3',\n 'scipy'\n ],\n extras_require = {\n@@ -24,7 +24,7 @@\n ],\n 'develop': [\n 'pyflakes',\n- 'pytest>=3.2.0',\n+ 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'python-coveralls',\n", "issue": "speed up CI tests (do we need all conda packages?)\nBy using Conda, unfortunately the setup phase of the CI jobs have become a bit slower than without conda, maybe we can look into speeding them up again by checking whether we need all the packages that we install during CI\n", "before_files": [{"content": "from setuptools import setup, find_packages\nsetup(\n name = 'pyhf',\n version = '0.0.8',\n description = '(partial) pure python histfactory implementation',\n url = '',\n author = 'Lukas Heinrich',\n author_email = '[email protected]',\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n 'numpy',\n 'scipy'\n ],\n extras_require = {\n 'xmlimport': [\n 'uproot',\n ],\n 'torch': [\n 'torch'\n ],\n 'mxnet':[\n 'mxnet',\n ],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.2.0',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'python-coveralls',\n 'matplotlib',\n 'jupyter',\n 'uproot',\n 'papermill',\n 'torch',\n 'tensorflow',\n 'mxnet>=1.0.0',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'jsonschema>=2.6.0'\n ]\n },\n entry_points = {\n },\n dependency_links = [\n ]\n)\n", "path": "setup.py"}]} | 970 | 152 |
gh_patches_debug_15497 | rasdani/github-patches | git_diff | ipython__ipython-4363 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`?` may generate hundreds of cell
By mistake I have executed a cell like
```
for i in range(3):
x= range?
```
but with ~70 instead of 3
which generated 70 code cell with just `x= range` in it...
it was _really_ painfull to clean, it would be nice to prevent something like that
</issue>
<code>
[start of IPython/core/payload.py]
1 # -*- coding: utf-8 -*-
2 """Payload system for IPython.
3
4 Authors:
5
6 * Fernando Perez
7 * Brian Granger
8 """
9
10 #-----------------------------------------------------------------------------
11 # Copyright (C) 2008-2011 The IPython Development Team
12 #
13 # Distributed under the terms of the BSD License. The full license is in
14 # the file COPYING, distributed as part of this software.
15 #-----------------------------------------------------------------------------
16
17 #-----------------------------------------------------------------------------
18 # Imports
19 #-----------------------------------------------------------------------------
20
21 from IPython.config.configurable import Configurable
22 from IPython.utils.traitlets import List
23
24 #-----------------------------------------------------------------------------
25 # Main payload class
26 #-----------------------------------------------------------------------------
27
28 class PayloadManager(Configurable):
29
30 _payload = List([])
31
32 def write_payload(self, data):
33 if not isinstance(data, dict):
34 raise TypeError('Each payload write must be a dict, got: %r' % data)
35 self._payload.append(data)
36
37 def read_payload(self):
38 return self._payload
39
40 def clear_payload(self):
41 self._payload = []
42
[end of IPython/core/payload.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/core/payload.py b/IPython/core/payload.py
--- a/IPython/core/payload.py
+++ b/IPython/core/payload.py
@@ -29,9 +29,23 @@
_payload = List([])
- def write_payload(self, data):
+ def write_payload(self, data, single=True):
+ """Include or update the specified `data` payload in the PayloadManager.
+
+ If a previous payload with the same source exists and `single` is True,
+ it will be overwritten with the new one.
+ """
+
if not isinstance(data, dict):
raise TypeError('Each payload write must be a dict, got: %r' % data)
+
+ if single and 'source' in data:
+ source = data['source']
+ for i, pl in enumerate(self._payload):
+ if 'source' in pl and pl['source'] == source:
+ self._payload[i] = data
+ return
+
self._payload.append(data)
def read_payload(self):
| {"golden_diff": "diff --git a/IPython/core/payload.py b/IPython/core/payload.py\n--- a/IPython/core/payload.py\n+++ b/IPython/core/payload.py\n@@ -29,9 +29,23 @@\n \n _payload = List([])\n \n- def write_payload(self, data):\n+ def write_payload(self, data, single=True):\n+ \"\"\"Include or update the specified `data` payload in the PayloadManager.\n+\n+ If a previous payload with the same source exists and `single` is True,\n+ it will be overwritten with the new one.\n+ \"\"\"\n+\n if not isinstance(data, dict):\n raise TypeError('Each payload write must be a dict, got: %r' % data)\n+\n+ if single and 'source' in data:\n+ source = data['source']\n+ for i, pl in enumerate(self._payload):\n+ if 'source' in pl and pl['source'] == source:\n+ self._payload[i] = data\n+ return\n+\n self._payload.append(data)\n \n def read_payload(self):\n", "issue": "`?` may generate hundreds of cell \nBy mistake I have executed a cell like \r\n\r\n```\r\nfor i in range(3):\r\n x= range?\r\n```\r\n\r\nbut with ~70 instead of 3\r\nwhich generated 70 code cell with just `x= range` in it...\r\nit was _really_ painfull to clean, it would be nice to prevent something like that\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Payload system for IPython.\n\nAuthors:\n\n* Fernando Perez\n* Brian Granger\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (C) 2008-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom IPython.config.configurable import Configurable\nfrom IPython.utils.traitlets import List\n\n#-----------------------------------------------------------------------------\n# Main payload class\n#-----------------------------------------------------------------------------\n\nclass PayloadManager(Configurable):\n\n _payload = List([])\n\n def write_payload(self, data):\n if not isinstance(data, dict):\n raise TypeError('Each payload write must be a dict, got: %r' % data)\n self._payload.append(data)\n\n def read_payload(self):\n return self._payload\n\n def clear_payload(self):\n self._payload = []\n", "path": "IPython/core/payload.py"}]} | 913 | 235 |
gh_patches_debug_66590 | rasdani/github-patches | git_diff | StackStorm__st2-3843 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Action 'linux.service' fails on Centos7
When I tried to execute restart some service on the Centos7 server got the following error:
```
Traceback (most recent call last):
File "/tmp/5a0459bc07ac686fb813a920/service.py", line 24, in <module>
subprocess.call(cmd, shell=True)
NameError: name 'cmd' is not defined
```
After investigation the resolution has been found:
in file /opt/stackstorm/packs/linux/actions/service.py the entry
`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):`
fixed to
`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or re.search(distro, 'CentOS Linux'):`
The issue has gone
</issue>
<code>
[start of contrib/linux/actions/service.py]
1 #!/usr/bin/env python
2
3 import re
4 import sys
5 import os
6 import platform
7 import subprocess
8
9 distro = platform.linux_distribution()[0]
10
11 args = {'act': sys.argv[1], 'service': sys.argv[2]}
12
13 if re.search(distro, 'Ubuntu'):
14 if os.path.isfile("/etc/init/%s.conf" % args['service']):
15 cmd = args['act'] + " " + args['service']
16 elif os.path.isfile("/etc/init.d/%s" % args['service']):
17 cmd = "/etc/init.d/%s %s" % (args['service'], args['act'])
18 else:
19 print("Unknown service")
20 sys.exit(2)
21 elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):
22 cmd = "systemctl %s %s" % (args['act'], args['service'])
23
24 subprocess.call(cmd, shell=True)
25
[end of contrib/linux/actions/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/contrib/linux/actions/service.py b/contrib/linux/actions/service.py
--- a/contrib/linux/actions/service.py
+++ b/contrib/linux/actions/service.py
@@ -18,7 +18,8 @@
else:
print("Unknown service")
sys.exit(2)
-elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):
+elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or \
+ re.search(distro, 'CentOS Linux'):
cmd = "systemctl %s %s" % (args['act'], args['service'])
subprocess.call(cmd, shell=True)
| {"golden_diff": "diff --git a/contrib/linux/actions/service.py b/contrib/linux/actions/service.py\n--- a/contrib/linux/actions/service.py\n+++ b/contrib/linux/actions/service.py\n@@ -18,7 +18,8 @@\n else:\n print(\"Unknown service\")\n sys.exit(2)\n-elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):\n+elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or \\\n+ re.search(distro, 'CentOS Linux'):\n cmd = \"systemctl %s %s\" % (args['act'], args['service'])\n \n subprocess.call(cmd, shell=True)\n", "issue": "Action 'linux.service' fails on Centos7\nWhen I tried to execute restart some service on the Centos7 server got the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/tmp/5a0459bc07ac686fb813a920/service.py\", line 24, in <module>\r\n subprocess.call(cmd, shell=True)\r\nNameError: name 'cmd' is not defined\r\n```\r\nAfter investigation the resolution has been found:\r\nin file /opt/stackstorm/packs/linux/actions/service.py the entry\r\n\r\n`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):`\r\n\r\nfixed to \r\n\r\n`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or re.search(distro, 'CentOS Linux'):`\r\n\r\nThe issue has gone\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport re\nimport sys\nimport os\nimport platform\nimport subprocess\n\ndistro = platform.linux_distribution()[0]\n\nargs = {'act': sys.argv[1], 'service': sys.argv[2]}\n\nif re.search(distro, 'Ubuntu'):\n if os.path.isfile(\"/etc/init/%s.conf\" % args['service']):\n cmd = args['act'] + \" \" + args['service']\n elif os.path.isfile(\"/etc/init.d/%s\" % args['service']):\n cmd = \"/etc/init.d/%s %s\" % (args['service'], args['act'])\n else:\n print(\"Unknown service\")\n sys.exit(2)\nelif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):\n cmd = \"systemctl %s %s\" % (args['act'], args['service'])\n\nsubprocess.call(cmd, shell=True)\n", "path": "contrib/linux/actions/service.py"}]} | 965 | 149 |
gh_patches_debug_13852 | rasdani/github-patches | git_diff | ESMCI__cime-2700 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue downloading data (wildcards not supported in HTTP)
I was wanting to have a case download all of the data it needs. First create an empty tmp inputdata directory, set the DIN env vars. However, I got error below which seems like a problem with wget and wildcards?
```
Refcase not found in /global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01, will attempt to download from inputdata
Model refcase missing file refdir = '/global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/'
wget failed with output: and errput Warning: wildcards not supported in HTTP.
--2018-06-29 14:11:00-- https://web.lcrc.anl.gov/public/e3sm/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/*
Resolving web.lcrc.anl.gov (web.lcrc.anl.gov)... 140.221.74.23
Connecting to web.lcrc.anl.gov (web.lcrc.anl.gov)|140.221.74.23|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-06-29 14:11:00 ERROR 404: Not Found.
```
The test I was using: `SMS_Ld2.ne30_oECv3_ICG.A_WCYCL1850S_CMIP6.cori-knl_intel.allactive-v1cmip6`
</issue>
<code>
[start of scripts/lib/CIME/Servers/wget.py]
1 """
2 WGET Server class. Interact with a server using WGET protocol
3 """
4 # pylint: disable=super-init-not-called
5 from CIME.XML.standard_module_setup import *
6 from CIME.Servers.generic_server import GenericServer
7
8 logger = logging.getLogger(__name__)
9
10 class WGET(GenericServer):
11 def __init__(self, address, user='', passwd=''):
12 self._args = ''
13 if user:
14 self._args += "--user {}".format(user)
15 if passwd:
16 self._args += "--password {}".format(passwd)
17
18 err = run_cmd("wget {} --spider {}".format(self._args, address))[0]
19 expect(err == 0,"Could not connect to repo '{0}'\nThis is most likely either a proxy, or network issue .")
20 self._server_loc = address
21
22 def fileexists(self, rel_path):
23 full_url = os.path.join(self._server_loc, rel_path)
24 stat, out, err = run_cmd("wget {} --spider {}".format(self._args, full_url))
25 if (stat != 0):
26 logging.warning("FAIL: Repo '{}' does not have file '{}'\nReason:{}\n{}\n".format(self._server_loc, full_url, out.encode('utf-8'), err.encode('utf-8')))
27 return False
28 return True
29
30 def getfile(self, rel_path, full_path):
31 full_url = os.path.join(self._server_loc, rel_path)
32 stat, output, errput = \
33 run_cmd("wget {} {} -nc --output-document {}".format(self._args, full_url, full_path))
34 if (stat != 0):
35 logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
36 # wget puts an empty file if it fails.
37 try:
38 os.remove(full_path)
39 except OSError:
40 pass
41 return False
42 else:
43 logging.info("SUCCESS\n")
44 return True
45
46 def getdirectory(self, rel_path, full_path):
47 full_url = os.path.join(self._server_loc, rel_path)
48 stat, output, errput = \
49 run_cmd("wget {} {} -P {}".format(self._args, full_url+os.sep+'*', full_path+os.sep))
50 if (stat != 0):
51 logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
52 # wget puts an empty file if it fails.
53 try:
54 os.remove(full_path)
55 except OSError:
56 pass
57 return False
58 else:
59 logging.info("SUCCESS\n")
60 return True
61
[end of scripts/lib/CIME/Servers/wget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/lib/CIME/Servers/wget.py b/scripts/lib/CIME/Servers/wget.py
--- a/scripts/lib/CIME/Servers/wget.py
+++ b/scripts/lib/CIME/Servers/wget.py
@@ -46,7 +46,9 @@
def getdirectory(self, rel_path, full_path):
full_url = os.path.join(self._server_loc, rel_path)
stat, output, errput = \
- run_cmd("wget {} {} -P {}".format(self._args, full_url+os.sep+'*', full_path+os.sep))
+ run_cmd("wget {} {} -r -N --no-directories ".format(self._args, full_url+os.sep), from_dir=full_path)
+ logger.debug(output)
+ logger.debug(errput)
if (stat != 0):
logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
# wget puts an empty file if it fails.
| {"golden_diff": "diff --git a/scripts/lib/CIME/Servers/wget.py b/scripts/lib/CIME/Servers/wget.py\n--- a/scripts/lib/CIME/Servers/wget.py\n+++ b/scripts/lib/CIME/Servers/wget.py\n@@ -46,7 +46,9 @@\n def getdirectory(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n- run_cmd(\"wget {} {} -P {}\".format(self._args, full_url+os.sep+'*', full_path+os.sep))\n+ run_cmd(\"wget {} {} -r -N --no-directories \".format(self._args, full_url+os.sep), from_dir=full_path)\n+ logger.debug(output)\n+ logger.debug(errput)\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n", "issue": "Issue downloading data (wildcards not supported in HTTP)\nI was wanting to have a case download all of the data it needs. First create an empty tmp inputdata directory, set the DIN env vars. However, I got error below which seems like a problem with wget and wildcards?\r\n\r\n```\r\n Refcase not found in /global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01, will attempt to download from inputdata\r\n Model refcase missing file refdir = '/global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/'\r\n wget failed with output: and errput Warning: wildcards not supported in HTTP.\r\n --2018-06-29 14:11:00-- https://web.lcrc.anl.gov/public/e3sm/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/*\r\n Resolving web.lcrc.anl.gov (web.lcrc.anl.gov)... 140.221.74.23\r\n Connecting to web.lcrc.anl.gov (web.lcrc.anl.gov)|140.221.74.23|:443... connected.\r\n HTTP request sent, awaiting response... 404 Not Found\r\n 2018-06-29 14:11:00 ERROR 404: Not Found.\r\n```\r\n\r\nThe test I was using: `SMS_Ld2.ne30_oECv3_ICG.A_WCYCL1850S_CMIP6.cori-knl_intel.allactive-v1cmip6`\n", "before_files": [{"content": "\"\"\"\nWGET Server class. Interact with a server using WGET protocol\n\"\"\"\n# pylint: disable=super-init-not-called\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.Servers.generic_server import GenericServer\n\nlogger = logging.getLogger(__name__)\n\nclass WGET(GenericServer):\n def __init__(self, address, user='', passwd=''):\n self._args = ''\n if user:\n self._args += \"--user {}\".format(user)\n if passwd:\n self._args += \"--password {}\".format(passwd)\n\n err = run_cmd(\"wget {} --spider {}\".format(self._args, address))[0]\n expect(err == 0,\"Could not connect to repo '{0}'\\nThis is most likely either a proxy, or network issue .\")\n self._server_loc = address\n\n def fileexists(self, rel_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, out, err = run_cmd(\"wget {} --spider {}\".format(self._args, full_url))\n if (stat != 0):\n logging.warning(\"FAIL: Repo '{}' does not have file '{}'\\nReason:{}\\n{}\\n\".format(self._server_loc, full_url, out.encode('utf-8'), err.encode('utf-8')))\n return False\n return True\n\n def getfile(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -nc --output-document {}\".format(self._args, full_url, full_path))\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n\n def getdirectory(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -P {}\".format(self._args, full_url+os.sep+'*', full_path+os.sep))\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n", "path": "scripts/lib/CIME/Servers/wget.py"}]} | 1,685 | 216 |
gh_patches_debug_19063 | rasdani/github-patches | git_diff | streamlink__streamlink-185 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plugin for Livestream.com not working right? exit's quickly for hls and not at all for normal streams
I am trying to get a live stream on livestreamer.com to work and i can't get it to play more then about 35 seconds...
When I run this command:
streamlink "http://livestream.com/Miraclenet/events/5004281" 270p --fifo --player omxplayer
it gives me an error about an swf being needed. When I run this command:
streamlink "http://livestream.com/Miraclenet/events/5004281" 270p_hls --fifo --player omxplayer
it will play the stream but just for about 35 seconds or so... I kinda don't want to have to restart it every 35 seconds to watch this stream... I'd like it to run until I stop it myself...
Any help for this non-python, non-linux guy would be very helpful...
btw, this is running on a Raspberry Pi. Just got a nice little 7 inch lcd for it and set it up on my desk to be able to watch it while I work, but can't get it to play for long at a time...
(edited to correct commands used)
</issue>
<code>
[start of src/streamlink/plugins/livestream.py]
1 import re
2
3 from streamlink.compat import urljoin
4 from streamlink.plugin import Plugin
5 from streamlink.plugin.api import http, validate
6 from streamlink.plugin.api.utils import parse_json
7 from streamlink.stream import AkamaiHDStream, HLSStream
8
9 _url_re = re.compile("http(s)?://(www\.)?livestream.com/")
10 _stream_config_schema = validate.Schema({
11 "event": {
12 "stream_info": validate.any({
13 "is_live": bool,
14 "qualities": [{
15 "bitrate": int,
16 "height": int
17 }],
18 validate.optional("play_url"): validate.url(scheme="http"),
19 validate.optional("m3u8_url"): validate.url(
20 scheme="http",
21 path=validate.endswith(".m3u8")
22 ),
23 }, None)
24 },
25 validate.optional("playerUri"): validate.text
26 })
27 _smil_schema = validate.Schema(validate.union({
28 "http_base": validate.all(
29 validate.xml_find("{http://www.w3.org/2001/SMIL20/Language}head/"
30 "{http://www.w3.org/2001/SMIL20/Language}meta"
31 "[@name='httpBase']"),
32 validate.xml_element(attrib={
33 "content": validate.text
34 }),
35 validate.get("content")
36 ),
37 "videos": validate.all(
38 validate.xml_findall("{http://www.w3.org/2001/SMIL20/Language}body/"
39 "{http://www.w3.org/2001/SMIL20/Language}switch/"
40 "{http://www.w3.org/2001/SMIL20/Language}video"),
41 [
42 validate.all(
43 validate.xml_element(attrib={
44 "src": validate.text,
45 "system-bitrate": validate.all(
46 validate.text,
47 validate.transform(int)
48 )
49 }),
50 validate.transform(
51 lambda e: (e.attrib["src"], e.attrib["system-bitrate"])
52 )
53 )
54 ],
55 )
56 }))
57
58
59 class Livestream(Plugin):
60 @classmethod
61 def default_stream_types(cls, streams):
62 return ["akamaihd", "hls"]
63
64 @classmethod
65 def can_handle_url(self, url):
66 return _url_re.match(url)
67
68 def _get_stream_info(self):
69 res = http.get(self.url)
70 match = re.search("window.config = ({.+})", res.text)
71 if match:
72 config = match.group(1)
73 return parse_json(config, "config JSON",
74 schema=_stream_config_schema)
75
76 def _parse_smil(self, url, swf_url):
77 res = http.get(url)
78 smil = http.xml(res, "SMIL config", schema=_smil_schema)
79
80 for src, bitrate in smil["videos"]:
81 url = urljoin(smil["http_base"], src)
82 yield bitrate, AkamaiHDStream(self.session, url, swf=swf_url)
83
84 def _get_streams(self):
85 info = self._get_stream_info()
86 if not info:
87 return
88
89 stream_info = info["event"]["stream_info"]
90 if not (stream_info and stream_info["is_live"]):
91 # Stream is not live
92 return
93
94 play_url = stream_info.get("play_url")
95 if play_url:
96 swf_url = info.get("playerUri")
97 if swf_url:
98 if not swf_url.startswith("http"):
99 swf_url = "http://" + swf_url
100
101 # Work around broken SSL.
102 swf_url = swf_url.replace("https://", "http://")
103
104 qualities = stream_info["qualities"]
105 for bitrate, stream in self._parse_smil(play_url, swf_url):
106 name = "{0:d}k".format(int(bitrate / 1000))
107 for quality in qualities:
108 if quality["bitrate"] == bitrate:
109 name = "{0}p".format(quality["height"])
110
111 yield name, stream
112
113 m3u8_url = stream_info.get("m3u8_url")
114 if m3u8_url:
115 streams = HLSStream.parse_variant_playlist(self.session, m3u8_url,
116 namekey="pixels")
117 # TODO: Replace with "yield from" when dropping Python 2.
118 for stream in streams.items():
119 yield stream
120
121 __plugin__ = Livestream
122
[end of src/streamlink/plugins/livestream.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/livestream.py b/src/streamlink/plugins/livestream.py
--- a/src/streamlink/plugins/livestream.py
+++ b/src/streamlink/plugins/livestream.py
@@ -22,7 +22,10 @@
),
}, None)
},
- validate.optional("playerUri"): validate.text
+ validate.optional("playerUri"): validate.text,
+ validate.optional("viewerPlusSwfUrl"): validate.url(scheme="http"),
+ validate.optional("lsPlayerSwfUrl"): validate.text,
+ validate.optional("hdPlayerSwfUrl"): validate.text
})
_smil_schema = validate.Schema(validate.union({
"http_base": validate.all(
@@ -93,7 +96,7 @@
play_url = stream_info.get("play_url")
if play_url:
- swf_url = info.get("playerUri")
+ swf_url = info.get("playerUri") or info.get("hdPlayerSwfUrl") or info.get("lsPlayerSwfUrl") or info.get("viewerPlusSwfUrl")
if swf_url:
if not swf_url.startswith("http"):
swf_url = "http://" + swf_url
| {"golden_diff": "diff --git a/src/streamlink/plugins/livestream.py b/src/streamlink/plugins/livestream.py\n--- a/src/streamlink/plugins/livestream.py\n+++ b/src/streamlink/plugins/livestream.py\n@@ -22,7 +22,10 @@\n ),\n }, None)\n },\n- validate.optional(\"playerUri\"): validate.text\n+ validate.optional(\"playerUri\"): validate.text,\n+ validate.optional(\"viewerPlusSwfUrl\"): validate.url(scheme=\"http\"),\n+ validate.optional(\"lsPlayerSwfUrl\"): validate.text,\n+ validate.optional(\"hdPlayerSwfUrl\"): validate.text\n })\n _smil_schema = validate.Schema(validate.union({\n \"http_base\": validate.all(\n@@ -93,7 +96,7 @@\n \n play_url = stream_info.get(\"play_url\")\n if play_url:\n- swf_url = info.get(\"playerUri\")\n+ swf_url = info.get(\"playerUri\") or info.get(\"hdPlayerSwfUrl\") or info.get(\"lsPlayerSwfUrl\") or info.get(\"viewerPlusSwfUrl\")\n if swf_url:\n if not swf_url.startswith(\"http\"):\n swf_url = \"http://\" + swf_url\n", "issue": "Plugin for Livestream.com not working right? exit's quickly for hls and not at all for normal streams\nI am trying to get a live stream on livestreamer.com to work and i can't get it to play more then about 35 seconds...\r\n\r\nWhen I run this command:\r\nstreamlink \"http://livestream.com/Miraclenet/events/5004281\" 270p --fifo --player omxplayer\r\n\r\nit gives me an error about an swf being needed. When I run this command:\r\nstreamlink \"http://livestream.com/Miraclenet/events/5004281\" 270p_hls --fifo --player omxplayer\r\n\r\nit will play the stream but just for about 35 seconds or so... I kinda don't want to have to restart it every 35 seconds to watch this stream... I'd like it to run until I stop it myself...\r\n\r\nAny help for this non-python, non-linux guy would be very helpful...\r\n\r\nbtw, this is running on a Raspberry Pi. Just got a nice little 7 inch lcd for it and set it up on my desk to be able to watch it while I work, but can't get it to play for long at a time...\r\n\r\n(edited to correct commands used)\n", "before_files": [{"content": "import re\n\nfrom streamlink.compat import urljoin\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, validate\nfrom streamlink.plugin.api.utils import parse_json\nfrom streamlink.stream import AkamaiHDStream, HLSStream\n\n_url_re = re.compile(\"http(s)?://(www\\.)?livestream.com/\")\n_stream_config_schema = validate.Schema({\n \"event\": {\n \"stream_info\": validate.any({\n \"is_live\": bool,\n \"qualities\": [{\n \"bitrate\": int,\n \"height\": int\n }],\n validate.optional(\"play_url\"): validate.url(scheme=\"http\"),\n validate.optional(\"m3u8_url\"): validate.url(\n scheme=\"http\",\n path=validate.endswith(\".m3u8\")\n ),\n }, None)\n },\n validate.optional(\"playerUri\"): validate.text\n})\n_smil_schema = validate.Schema(validate.union({\n \"http_base\": validate.all(\n validate.xml_find(\"{http://www.w3.org/2001/SMIL20/Language}head/\"\n \"{http://www.w3.org/2001/SMIL20/Language}meta\"\n \"[@name='httpBase']\"),\n validate.xml_element(attrib={\n \"content\": validate.text\n }),\n validate.get(\"content\")\n ),\n \"videos\": validate.all(\n validate.xml_findall(\"{http://www.w3.org/2001/SMIL20/Language}body/\"\n \"{http://www.w3.org/2001/SMIL20/Language}switch/\"\n \"{http://www.w3.org/2001/SMIL20/Language}video\"),\n [\n validate.all(\n validate.xml_element(attrib={\n \"src\": validate.text,\n \"system-bitrate\": validate.all(\n validate.text,\n validate.transform(int)\n )\n }),\n validate.transform(\n lambda e: (e.attrib[\"src\"], e.attrib[\"system-bitrate\"])\n )\n )\n ],\n )\n}))\n\n\nclass Livestream(Plugin):\n @classmethod\n def default_stream_types(cls, streams):\n return [\"akamaihd\", \"hls\"]\n\n @classmethod\n def can_handle_url(self, url):\n return _url_re.match(url)\n\n def _get_stream_info(self):\n res = http.get(self.url)\n match = re.search(\"window.config = ({.+})\", res.text)\n if match:\n config = match.group(1)\n return parse_json(config, \"config JSON\",\n schema=_stream_config_schema)\n\n def _parse_smil(self, url, swf_url):\n res = http.get(url)\n smil = http.xml(res, \"SMIL config\", schema=_smil_schema)\n\n for src, bitrate in smil[\"videos\"]:\n url = urljoin(smil[\"http_base\"], src)\n yield bitrate, AkamaiHDStream(self.session, url, swf=swf_url)\n\n def _get_streams(self):\n info = self._get_stream_info()\n if not info:\n return\n\n stream_info = info[\"event\"][\"stream_info\"]\n if not (stream_info and stream_info[\"is_live\"]):\n # Stream is not live\n return\n\n play_url = stream_info.get(\"play_url\")\n if play_url:\n swf_url = info.get(\"playerUri\")\n if swf_url:\n if not swf_url.startswith(\"http\"):\n swf_url = \"http://\" + swf_url\n\n # Work around broken SSL.\n swf_url = swf_url.replace(\"https://\", \"http://\")\n\n qualities = stream_info[\"qualities\"]\n for bitrate, stream in self._parse_smil(play_url, swf_url):\n name = \"{0:d}k\".format(int(bitrate / 1000))\n for quality in qualities:\n if quality[\"bitrate\"] == bitrate:\n name = \"{0}p\".format(quality[\"height\"])\n\n yield name, stream\n\n m3u8_url = stream_info.get(\"m3u8_url\")\n if m3u8_url:\n streams = HLSStream.parse_variant_playlist(self.session, m3u8_url,\n namekey=\"pixels\")\n # TODO: Replace with \"yield from\" when dropping Python 2.\n for stream in streams.items():\n yield stream\n\n__plugin__ = Livestream\n", "path": "src/streamlink/plugins/livestream.py"}]} | 2,039 | 269 |
gh_patches_debug_26282 | rasdani/github-patches | git_diff | rotki__rotki-5256 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Think of a way to keep development accounts separately
## Abstract
Between multiple development machines and between `production`/`develop` it becomes harder to keep track of which database has been used by which branch. This can lead to partially updated databases being used.
As a proposal, we could somehow separate where the `develop` accounts and the production accounts are stored so that they are not sharing the same place.
We can always copy accounts from production to develop manually (this can go to the guide).
We should also consider how this affects nightlies and how nightlies are treated. For example we might want to treat nightlies as development to avoid having users mess accidentally with their production accounts.
## Motivation
Helps better track which accounts are used in `develop`/`production`.
## Specification
- TDB
</issue>
<code>
[start of rotkehlchen/config.py]
1 import logging
2 import os
3 import platform
4 import shutil
5 from pathlib import Path
6
7 from rotkehlchen.logging import RotkehlchenLogsAdapter
8
9 logger = logging.getLogger(__name__)
10 log = RotkehlchenLogsAdapter(logger)
11
12
13 def get_xdg_data_home() -> Path:
14 directory = os.environ.get('XDG_DATA_HOME', None)
15 if directory is None:
16 home = os.path.expanduser("~")
17 directory = os.path.join(home, '.local', 'share')
18
19 return Path(directory)
20
21
22 def get_win32_appdata() -> Path:
23 directory = os.environ.get('LOCALAPPDATA', None)
24 if not directory:
25 # In windows XP there is no localappdata
26 directory = os.environ.get('APPDATA', None)
27 if not directory:
28 raise AssertionError('Could not detect an APPDATA directory')
29
30 return Path(directory)
31
32
33 def old_data_directory() -> Path:
34 home = os.path.expanduser("~")
35 directory = os.path.join(home, '.rotkehlchen')
36 return Path(directory)
37
38
39 def default_data_directory() -> Path:
40 """Find the default data directory for rotki for each different OS
41
42 An interesting lirary that finds the data directories per OS is this:
43 https://github.com/ActiveState/appdirs/blob/master/appdirs.py
44 """
45 if platform.system() == 'Linux':
46 xdgconfig = get_xdg_data_home()
47 datadir = xdgconfig / 'rotki' / 'data'
48 elif platform.system() == 'Windows':
49 appdata = get_win32_appdata()
50 datadir = appdata / 'rotki' / 'data'
51 elif platform.system() == 'Darwin':
52 datadir = Path(os.path.expanduser('~/Library/Application Support/rotki/data'))
53 else:
54 raise AssertionError(f'rotki running in unknown system: {platform.system()}')
55
56 # If old data directory exists and new does not exist copy stuff
57 old_dir = old_data_directory()
58 if old_dir.exists() and not datadir.exists():
59 log.info(f'First time using standard data directory. Copying from {old_dir} to {datadir}')
60 shutil.copytree(old_dir, datadir)
61
62 datadir.mkdir(parents=True, exist_ok=True)
63 return datadir
64
[end of rotkehlchen/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rotkehlchen/config.py b/rotkehlchen/config.py
--- a/rotkehlchen/config.py
+++ b/rotkehlchen/config.py
@@ -2,6 +2,7 @@
import os
import platform
import shutil
+import sys
from pathlib import Path
from rotkehlchen.logging import RotkehlchenLogsAdapter
@@ -42,14 +43,18 @@
An interesting lirary that finds the data directories per OS is this:
https://github.com/ActiveState/appdirs/blob/master/appdirs.py
"""
+ data_dir_name = 'data'
+ if getattr(sys, 'frozen', False) is False:
+ data_dir_name = 'develop_data'
+
if platform.system() == 'Linux':
xdgconfig = get_xdg_data_home()
- datadir = xdgconfig / 'rotki' / 'data'
+ datadir = xdgconfig / 'rotki' / data_dir_name
elif platform.system() == 'Windows':
appdata = get_win32_appdata()
- datadir = appdata / 'rotki' / 'data'
+ datadir = appdata / 'rotki' / data_dir_name
elif platform.system() == 'Darwin':
- datadir = Path(os.path.expanduser('~/Library/Application Support/rotki/data'))
+ datadir = Path(os.path.expanduser(f'~/Library/Application Support/rotki/{data_dir_name}')) # noqa: E501
else:
raise AssertionError(f'rotki running in unknown system: {platform.system()}')
| {"golden_diff": "diff --git a/rotkehlchen/config.py b/rotkehlchen/config.py\n--- a/rotkehlchen/config.py\n+++ b/rotkehlchen/config.py\n@@ -2,6 +2,7 @@\n import os\n import platform\n import shutil\n+import sys\n from pathlib import Path\n \n from rotkehlchen.logging import RotkehlchenLogsAdapter\n@@ -42,14 +43,18 @@\n An interesting lirary that finds the data directories per OS is this:\n https://github.com/ActiveState/appdirs/blob/master/appdirs.py\n \"\"\"\n+ data_dir_name = 'data'\n+ if getattr(sys, 'frozen', False) is False:\n+ data_dir_name = 'develop_data'\n+\n if platform.system() == 'Linux':\n xdgconfig = get_xdg_data_home()\n- datadir = xdgconfig / 'rotki' / 'data'\n+ datadir = xdgconfig / 'rotki' / data_dir_name\n elif platform.system() == 'Windows':\n appdata = get_win32_appdata()\n- datadir = appdata / 'rotki' / 'data'\n+ datadir = appdata / 'rotki' / data_dir_name\n elif platform.system() == 'Darwin':\n- datadir = Path(os.path.expanduser('~/Library/Application Support/rotki/data'))\n+ datadir = Path(os.path.expanduser(f'~/Library/Application Support/rotki/{data_dir_name}')) # noqa: E501\n else:\n raise AssertionError(f'rotki running in unknown system: {platform.system()}')\n", "issue": "Think of a way to keep development accounts separately \n## Abstract\r\n\r\nBetween multiple development machines and between `production`/`develop` it becomes harder to keep track of which database has been used by which branch. This can lead to partially updated databases being used.\r\n\r\nAs a proposal, we could somehow separate where the `develop` accounts and the production accounts are stored so that they are not sharing the same place.\r\n\r\nWe can always copy accounts from production to develop manually (this can go to the guide).\r\n\r\nWe should also consider how this affects nightlies and how nightlies are treated. For example we might want to treat nightlies as development to avoid having users mess accidentally with their production accounts. \r\n\r\n## Motivation\r\n\r\nHelps better track which accounts are used in `develop`/`production`.\r\n\r\n## Specification\r\n\r\n- TDB\r\n\n", "before_files": [{"content": "import logging\nimport os\nimport platform\nimport shutil\nfrom pathlib import Path\n\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\ndef get_xdg_data_home() -> Path:\n directory = os.environ.get('XDG_DATA_HOME', None)\n if directory is None:\n home = os.path.expanduser(\"~\")\n directory = os.path.join(home, '.local', 'share')\n\n return Path(directory)\n\n\ndef get_win32_appdata() -> Path:\n directory = os.environ.get('LOCALAPPDATA', None)\n if not directory:\n # In windows XP there is no localappdata\n directory = os.environ.get('APPDATA', None)\n if not directory:\n raise AssertionError('Could not detect an APPDATA directory')\n\n return Path(directory)\n\n\ndef old_data_directory() -> Path:\n home = os.path.expanduser(\"~\")\n directory = os.path.join(home, '.rotkehlchen')\n return Path(directory)\n\n\ndef default_data_directory() -> Path:\n \"\"\"Find the default data directory for rotki for each different OS\n\n An interesting lirary that finds the data directories per OS is this:\n https://github.com/ActiveState/appdirs/blob/master/appdirs.py\n \"\"\"\n if platform.system() == 'Linux':\n xdgconfig = get_xdg_data_home()\n datadir = xdgconfig / 'rotki' / 'data'\n elif platform.system() == 'Windows':\n appdata = get_win32_appdata()\n datadir = appdata / 'rotki' / 'data'\n elif platform.system() == 'Darwin':\n datadir = Path(os.path.expanduser('~/Library/Application Support/rotki/data'))\n else:\n raise AssertionError(f'rotki running in unknown system: {platform.system()}')\n\n # If old data directory exists and new does not exist copy stuff\n old_dir = old_data_directory()\n if old_dir.exists() and not datadir.exists():\n log.info(f'First time using standard data directory. Copying from {old_dir} to {datadir}')\n shutil.copytree(old_dir, datadir)\n\n datadir.mkdir(parents=True, exist_ok=True)\n return datadir\n", "path": "rotkehlchen/config.py"}]} | 1,321 | 358 |
gh_patches_debug_12204 | rasdani/github-patches | git_diff | conda__conda-5273 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conda env export under python2 is ug
```
$ python2 -m conda_env export -p /conda
name: null
channels:
- !!python/unicode
'file:///Users/kfranz/.conda/conda-bld'
- !!python/unicode
'file:///conda/conda-bld'
- !!python/unicode
'bkreider'
- !!python/unicode
'conda-canary'
- !!python/unicode
'conda-forge'
- !!python/unicode
'defaults'
dependencies:
- !!python/unicode
'wget=1.15=2'
- !!python/unicode
'conda=4.3.0=py27_0'
- !!python/unicode
'conda-env=2.6.0=0'
- !!python/unicode
'filelock=2.0.6=py27_0'
- !!python/unicode
'boltons=16.3.1=py27_0'
- !!python/unicode
'ca-certificates=2016.8.31=0'
- !!python/unicode
'certifi=2016.8.31=py27_0'
- !!python/unicode
'functools32=3.2.3.2=py27_1'
...
```
</issue>
<code>
[start of conda_env/yaml.py]
1 """
2 Wrapper around yaml to ensure that everything is ordered correctly.
3
4 This is based on the answer at http://stackoverflow.com/a/16782282
5 """
6 from __future__ import absolute_import, print_function
7 from collections import OrderedDict
8
9 from conda.common.yaml import get_yaml
10 yaml = get_yaml()
11
12
13 def represent_ordereddict(dumper, data):
14 value = []
15
16 for item_key, item_value in data.items():
17 node_key = dumper.represent_data(item_key)
18 node_value = dumper.represent_data(item_value)
19
20 value.append((node_key, node_value))
21
22 return yaml.nodes.MappingNode(u'tag:yaml.org,2002:map', value)
23
24
25 yaml.add_representer(OrderedDict, represent_ordereddict)
26
27 dump = yaml.dump
28 load = yaml.load
29 dict = OrderedDict
30
[end of conda_env/yaml.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda_env/yaml.py b/conda_env/yaml.py
--- a/conda_env/yaml.py
+++ b/conda_env/yaml.py
@@ -6,6 +6,7 @@
from __future__ import absolute_import, print_function
from collections import OrderedDict
+from conda.common.compat import PY2
from conda.common.yaml import get_yaml
yaml = get_yaml()
@@ -24,6 +25,12 @@
yaml.add_representer(OrderedDict, represent_ordereddict)
+if PY2:
+ def represent_unicode(self, data):
+ return self.represent_str(data.encode('utf-8'))
+
+ yaml.add_representer(unicode, represent_unicode) # NOQA
+
dump = yaml.dump
load = yaml.load
dict = OrderedDict
| {"golden_diff": "diff --git a/conda_env/yaml.py b/conda_env/yaml.py\n--- a/conda_env/yaml.py\n+++ b/conda_env/yaml.py\n@@ -6,6 +6,7 @@\n from __future__ import absolute_import, print_function\n from collections import OrderedDict\n \n+from conda.common.compat import PY2\n from conda.common.yaml import get_yaml\n yaml = get_yaml()\n \n@@ -24,6 +25,12 @@\n \n yaml.add_representer(OrderedDict, represent_ordereddict)\n \n+if PY2:\n+ def represent_unicode(self, data):\n+ return self.represent_str(data.encode('utf-8'))\n+\n+ yaml.add_representer(unicode, represent_unicode) # NOQA\n+\n dump = yaml.dump\n load = yaml.load\n dict = OrderedDict\n", "issue": "conda env export under python2 is ug\n```\r\n$ python2 -m conda_env export -p /conda\r\nname: null\r\nchannels:\r\n- !!python/unicode\r\n 'file:///Users/kfranz/.conda/conda-bld'\r\n- !!python/unicode\r\n 'file:///conda/conda-bld'\r\n- !!python/unicode\r\n 'bkreider'\r\n- !!python/unicode\r\n 'conda-canary'\r\n- !!python/unicode\r\n 'conda-forge'\r\n- !!python/unicode\r\n 'defaults'\r\ndependencies:\r\n- !!python/unicode\r\n 'wget=1.15=2'\r\n- !!python/unicode\r\n 'conda=4.3.0=py27_0'\r\n- !!python/unicode\r\n 'conda-env=2.6.0=0'\r\n- !!python/unicode\r\n 'filelock=2.0.6=py27_0'\r\n- !!python/unicode\r\n 'boltons=16.3.1=py27_0'\r\n- !!python/unicode\r\n 'ca-certificates=2016.8.31=0'\r\n- !!python/unicode\r\n 'certifi=2016.8.31=py27_0'\r\n- !!python/unicode\r\n 'functools32=3.2.3.2=py27_1'\r\n...\r\n```\n", "before_files": [{"content": "\"\"\"\nWrapper around yaml to ensure that everything is ordered correctly.\n\nThis is based on the answer at http://stackoverflow.com/a/16782282\n\"\"\"\nfrom __future__ import absolute_import, print_function\nfrom collections import OrderedDict\n\nfrom conda.common.yaml import get_yaml\nyaml = get_yaml()\n\n\ndef represent_ordereddict(dumper, data):\n value = []\n\n for item_key, item_value in data.items():\n node_key = dumper.represent_data(item_key)\n node_value = dumper.represent_data(item_value)\n\n value.append((node_key, node_value))\n\n return yaml.nodes.MappingNode(u'tag:yaml.org,2002:map', value)\n\n\nyaml.add_representer(OrderedDict, represent_ordereddict)\n\ndump = yaml.dump\nload = yaml.load\ndict = OrderedDict\n", "path": "conda_env/yaml.py"}]} | 1,074 | 178 |
gh_patches_debug_11179 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2213 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add reporting template for Plan Finland
We should add the reporting template that @stellanl and @Geerts are working on to the "My reports" section. Preferably so that only superusers / admins / Plan Finland employees can see this, but we might need a little hack for that.
</issue>
<code>
[start of akvo/rsr/reports.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from django.utils.translation import ugettext_lazy as _
8
9 # Data for all available reports from ReportServer, with the following fields:
10 # - key: A unique identifier for the report
11 # - title: The title of the report, will be shown on the 'My reports' page as such
12 # - description: The description of the report, as displayed on the 'My reports' page
13 # - formats: The available formats for the report, see options below
14 # - parameters: The available parameters for the report, options; ['project', 'organisation']
15 # - url: The URL where the report is available. Parameter(s) should be indicated in between {..}'s.
16
17 REPORTS = [
18 {
19 'key': 'results-framework',
20 'title': unicode(_('Results and indicators overview')),
21 'description': unicode(_('This report gives an overview of the status of your project\'s '
22 'results and indicators.')),
23 'formats': ['pdf',],
24 'parameters': ['project', ],
25 'url': '/en/reports/project_results/{project}?format={format}&download=true'
26 },
27 {
28 'key': 'results-simple-table',
29 'title': unicode(_('Results and indicators table')),
30 'description': unicode(_('This report provides a view of your project\'s results and '
31 'indicators data in a table.')),
32 'formats': ['excel',],
33 'parameters': ['project', ],
34 'url': '/en/reports/project_results_simple_table/{project}?format={format}&download=true'
35 },
36 {
37 'key': 'projects-overview',
38 'title': unicode(_('Projects overview')),
39 'description': unicode(_('This report provides information about your organisation\'s '
40 'projects: amount of updates, country, total budgets, project '
41 'statuses, start- and end dates.')),
42 'formats': ['pdf', 'excel'],
43 'parameters': ['organisation', ],
44 'url': '/en/reports/project_overview/{organisation}?format={format}&download=true'
45 },
46 {
47 'key': 'data-quality',
48 'title': unicode(_('Data quality overview')),
49 'description': unicode(_('This report gives an overview of your organisation\'s projects '
50 'that have passed the planned end date, need funding or that '
51 'haven\'t been edited or updated for 3 months.')),
52 'formats': ['pdf', 'excel'],
53 'parameters': ['organisation', ],
54 'url': '/en/reports/data_quality/{organisation}?format={format}&download=true'
55 }
56 ]
57
58 # Data for all available formats from ReportServer, with the following fields:
59 # - key: A unique identifier for the format, also used in the formats field of the reports
60 # - displayName: The display name of the format, as displayed on the 'My reports' page
61 # - icon: The font awesome icon of the format, as displayed on the 'My reports' page
62
63 FORMATS = [
64 {
65 'key': 'pdf',
66 'displayName': 'PDF',
67 'icon': 'file-pdf-o',
68 },
69 {
70 'key': 'excel',
71 'displayName': 'Excel',
72 'icon': 'file-excel-o',
73 },
74 {
75 'key': 'word',
76 'displayName': 'Word',
77 'icon': 'file-word-o',
78 },
79 {
80 'key': 'html',
81 'displayName': 'HTML',
82 'icon': 'code',
83 },
84 ]
85
[end of akvo/rsr/reports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rsr/reports.py b/akvo/rsr/reports.py
--- a/akvo/rsr/reports.py
+++ b/akvo/rsr/reports.py
@@ -52,6 +52,15 @@
'formats': ['pdf', 'excel'],
'parameters': ['organisation', ],
'url': '/en/reports/data_quality/{organisation}?format={format}&download=true'
+ },
+ {
+ 'key': 'plan-finland',
+ 'title': unicode(_('Plan Finland report')),
+ 'description': unicode(_('This custom MFA report for Plan Finland gives an overview of the '
+ 'hierarchy of Plan Finland\'s projects and their results.')),
+ 'formats': ['pdf', ],
+ 'parameters': ['project', ],
+ 'url': '/en/reports/plan_finland/{project}?format={format}&download=true'
}
]
| {"golden_diff": "diff --git a/akvo/rsr/reports.py b/akvo/rsr/reports.py\n--- a/akvo/rsr/reports.py\n+++ b/akvo/rsr/reports.py\n@@ -52,6 +52,15 @@\n 'formats': ['pdf', 'excel'],\n 'parameters': ['organisation', ],\n 'url': '/en/reports/data_quality/{organisation}?format={format}&download=true'\n+ },\n+ {\n+ 'key': 'plan-finland',\n+ 'title': unicode(_('Plan Finland report')),\n+ 'description': unicode(_('This custom MFA report for Plan Finland gives an overview of the '\n+ 'hierarchy of Plan Finland\\'s projects and their results.')),\n+ 'formats': ['pdf', ],\n+ 'parameters': ['project', ],\n+ 'url': '/en/reports/plan_finland/{project}?format={format}&download=true'\n }\n ]\n", "issue": "Add reporting template for Plan Finland\nWe should add the reporting template that @stellanl and @Geerts are working on to the \"My reports\" section. Preferably so that only superusers / admins / Plan Finland employees can see this, but we might need a little hack for that.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.utils.translation import ugettext_lazy as _\n\n# Data for all available reports from ReportServer, with the following fields:\n# - key: A unique identifier for the report\n# - title: The title of the report, will be shown on the 'My reports' page as such\n# - description: The description of the report, as displayed on the 'My reports' page\n# - formats: The available formats for the report, see options below\n# - parameters: The available parameters for the report, options; ['project', 'organisation']\n# - url: The URL where the report is available. Parameter(s) should be indicated in between {..}'s.\n\nREPORTS = [\n {\n 'key': 'results-framework',\n 'title': unicode(_('Results and indicators overview')),\n 'description': unicode(_('This report gives an overview of the status of your project\\'s '\n 'results and indicators.')),\n 'formats': ['pdf',],\n 'parameters': ['project', ],\n 'url': '/en/reports/project_results/{project}?format={format}&download=true'\n },\n {\n 'key': 'results-simple-table',\n 'title': unicode(_('Results and indicators table')),\n 'description': unicode(_('This report provides a view of your project\\'s results and '\n 'indicators data in a table.')),\n 'formats': ['excel',],\n 'parameters': ['project', ],\n 'url': '/en/reports/project_results_simple_table/{project}?format={format}&download=true'\n },\n {\n 'key': 'projects-overview',\n 'title': unicode(_('Projects overview')),\n 'description': unicode(_('This report provides information about your organisation\\'s '\n 'projects: amount of updates, country, total budgets, project '\n 'statuses, start- and end dates.')),\n 'formats': ['pdf', 'excel'],\n 'parameters': ['organisation', ],\n 'url': '/en/reports/project_overview/{organisation}?format={format}&download=true'\n },\n {\n 'key': 'data-quality',\n 'title': unicode(_('Data quality overview')),\n 'description': unicode(_('This report gives an overview of your organisation\\'s projects '\n 'that have passed the planned end date, need funding or that '\n 'haven\\'t been edited or updated for 3 months.')),\n 'formats': ['pdf', 'excel'],\n 'parameters': ['organisation', ],\n 'url': '/en/reports/data_quality/{organisation}?format={format}&download=true'\n }\n]\n\n# Data for all available formats from ReportServer, with the following fields:\n# - key: A unique identifier for the format, also used in the formats field of the reports\n# - displayName: The display name of the format, as displayed on the 'My reports' page\n# - icon: The font awesome icon of the format, as displayed on the 'My reports' page\n\nFORMATS = [\n {\n 'key': 'pdf',\n 'displayName': 'PDF',\n 'icon': 'file-pdf-o',\n },\n {\n 'key': 'excel',\n 'displayName': 'Excel',\n 'icon': 'file-excel-o',\n },\n {\n 'key': 'word',\n 'displayName': 'Word',\n 'icon': 'file-word-o',\n },\n {\n 'key': 'html',\n 'displayName': 'HTML',\n 'icon': 'code',\n },\n]\n", "path": "akvo/rsr/reports.py"}]} | 1,556 | 204 |
gh_patches_debug_26692 | rasdani/github-patches | git_diff | google__fuzzbench-291 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[GCP] Runners are not started.
I pulled master and tried to evaluate libfuzzer against honggfuzz in 5 trials for 1 hour on 1 benchmark (mbedtls_fuzz_dtlsclient). It doesn't generate the report anymore. The web bucket is empty, the experiments-result folder does not exist in the data bucket, the SQL database is empty, and the Error Reporting gives the following error:
```
ValueError: Empty experiment data. Message: Error generating HTML report.
at validate_data (/work/src/analysis/data_utils.py:21)
at generate_report (/work/src/analysis/generate_report.py:132)
at output_report (/work/src/experiment/reporter.py:43)
```
I deleted authorization keys of the service account. I deleted the old and set up a new SQL database (incl. `alembic upgrade head`). I cleaned out the container registry (by deleting the `container` folder in the corresponding bucket). I cleaned out the Cloud Builds (by deleting `source` folder in the corresponding bucket). It recreates the containers and builds, when I start the dispatcher. The dispatcher runs properly. I SSH'ed into a random runner: `docker images` and `docker ps -a` return empty-handed. Is the recent setup gcr.io/fuzzbench-specific? Any suggestion to debug?
</issue>
<code>
[start of common/benchmark_utils.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Code for dealing with benchmarks."""
15 import os
16 import re
17
18 from common import experiment_utils
19 from common import fuzzer_utils
20 from common import logs
21 from common import oss_fuzz
22 from common import utils
23
24 VALID_BENCHMARK_REGEX = re.compile(r'^[A-Za-z0-9\._\-]+$')
25
26
27 def is_oss_fuzz(benchmark):
28 """Returns True if |benchmark| is OSS-Fuzz-based project."""
29 return os.path.isfile(oss_fuzz.get_config_file(benchmark))
30
31
32 def get_project(benchmark):
33 """Returns the OSS-Fuzz project of |benchmark| if it is based on an
34 OSS-Fuzz project, otherwise raises ValueError."""
35 if is_oss_fuzz(benchmark):
36 return oss_fuzz.get_config(benchmark)['project']
37 raise ValueError('Can only get project on OSS-Fuzz benchmarks.')
38
39
40 def get_fuzz_target(benchmark):
41 """Returns the fuzz target of |benchmark|"""
42 if is_oss_fuzz(benchmark):
43 return oss_fuzz.get_config(benchmark)['fuzz_target']
44 return fuzzer_utils.DEFAULT_FUZZ_TARGET_NAME
45
46
47 def get_runner_image_url(benchmark, fuzzer, cloud_project):
48 """Get the URL of the docker runner image for fuzzing the benchmark with
49 fuzzer."""
50 base_tag = experiment_utils.get_base_docker_tag(cloud_project)
51 if is_oss_fuzz(benchmark):
52 return '{base_tag}/oss-fuzz/runners/{fuzzer}/{project}'.format(
53 base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))
54 return '{base_tag}/runners/{fuzzer}/{benchmark}'.format(base_tag=base_tag,
55 fuzzer=fuzzer,
56 benchmark=benchmark)
57
58
59 def get_builder_image_url(benchmark, fuzzer, cloud_project):
60 """Get the URL of the docker builder image for fuzzing the benchmark with
61 fuzzer."""
62 base_tag = experiment_utils.get_base_docker_tag(cloud_project)
63 if is_oss_fuzz(benchmark):
64 return '{base_tag}/oss-fuzz/builders/{fuzzer}/{project}'.format(
65 base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))
66 return '{base_tag}/builders/{fuzzer}/{benchmark}'.format(
67 base_tag=base_tag, fuzzer=fuzzer, benchmark=benchmark)
68
69
70 def get_oss_fuzz_builder_hash(benchmark):
71 """Get the specified hash of the OSS-Fuzz builder for the OSS-Fuzz project
72 used by |benchmark|."""
73 if is_oss_fuzz(benchmark):
74 return oss_fuzz.get_config(benchmark)['oss_fuzz_builder_hash']
75 raise ValueError('Can only get project on OSS-Fuzz benchmarks.')
76
77
78 def validate(benchmark):
79 """Return True if |benchmark| is a valid fuzzbench fuzzer."""
80 if VALID_BENCHMARK_REGEX.match(benchmark) is None:
81 logs.error('%s does not conform to %s pattern.', benchmark,
82 VALID_BENCHMARK_REGEX.pattern)
83 return False
84 if benchmark in get_all_benchmarks():
85 return True
86 logs.error('%s must have a build.sh or oss-fuzz.yaml.', benchmark)
87 return False
88
89
90 def get_all_benchmarks():
91 """Returns the list of all benchmarks."""
92 benchmarks_dir = os.path.join(utils.ROOT_DIR, 'benchmarks')
93 all_benchmarks = []
94 for benchmark in os.listdir(benchmarks_dir):
95 benchmark_path = os.path.join(benchmarks_dir, benchmark)
96 if os.path.isfile(os.path.join(benchmark_path, 'oss-fuzz.yaml')):
97 # Benchmark is an OSS-Fuzz benchmark.
98 all_benchmarks.append(benchmark)
99 elif os.path.isfile(os.path.join(benchmark_path, 'build.sh')):
100 # Benchmark is a standard benchmark.
101 all_benchmarks.append(benchmark)
102 return all_benchmarks
103
[end of common/benchmark_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/common/benchmark_utils.py b/common/benchmark_utils.py
--- a/common/benchmark_utils.py
+++ b/common/benchmark_utils.py
@@ -48,9 +48,6 @@
"""Get the URL of the docker runner image for fuzzing the benchmark with
fuzzer."""
base_tag = experiment_utils.get_base_docker_tag(cloud_project)
- if is_oss_fuzz(benchmark):
- return '{base_tag}/oss-fuzz/runners/{fuzzer}/{project}'.format(
- base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))
return '{base_tag}/runners/{fuzzer}/{benchmark}'.format(base_tag=base_tag,
fuzzer=fuzzer,
benchmark=benchmark)
@@ -60,9 +57,6 @@
"""Get the URL of the docker builder image for fuzzing the benchmark with
fuzzer."""
base_tag = experiment_utils.get_base_docker_tag(cloud_project)
- if is_oss_fuzz(benchmark):
- return '{base_tag}/oss-fuzz/builders/{fuzzer}/{project}'.format(
- base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))
return '{base_tag}/builders/{fuzzer}/{benchmark}'.format(
base_tag=base_tag, fuzzer=fuzzer, benchmark=benchmark)
| {"golden_diff": "diff --git a/common/benchmark_utils.py b/common/benchmark_utils.py\n--- a/common/benchmark_utils.py\n+++ b/common/benchmark_utils.py\n@@ -48,9 +48,6 @@\n \"\"\"Get the URL of the docker runner image for fuzzing the benchmark with\n fuzzer.\"\"\"\n base_tag = experiment_utils.get_base_docker_tag(cloud_project)\n- if is_oss_fuzz(benchmark):\n- return '{base_tag}/oss-fuzz/runners/{fuzzer}/{project}'.format(\n- base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))\n return '{base_tag}/runners/{fuzzer}/{benchmark}'.format(base_tag=base_tag,\n fuzzer=fuzzer,\n benchmark=benchmark)\n@@ -60,9 +57,6 @@\n \"\"\"Get the URL of the docker builder image for fuzzing the benchmark with\n fuzzer.\"\"\"\n base_tag = experiment_utils.get_base_docker_tag(cloud_project)\n- if is_oss_fuzz(benchmark):\n- return '{base_tag}/oss-fuzz/builders/{fuzzer}/{project}'.format(\n- base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))\n return '{base_tag}/builders/{fuzzer}/{benchmark}'.format(\n base_tag=base_tag, fuzzer=fuzzer, benchmark=benchmark)\n", "issue": "[GCP] Runners are not started.\nI pulled master and tried to evaluate libfuzzer against honggfuzz in 5 trials for 1 hour on 1 benchmark (mbedtls_fuzz_dtlsclient). It doesn't generate the report anymore. The web bucket is empty, the experiments-result folder does not exist in the data bucket, the SQL database is empty, and the Error Reporting gives the following error:\r\n```\r\nValueError: Empty experiment data. Message: Error generating HTML report.\r\nat validate_data (/work/src/analysis/data_utils.py:21)\r\nat generate_report (/work/src/analysis/generate_report.py:132)\r\nat output_report (/work/src/experiment/reporter.py:43)\r\n```\r\n\r\nI deleted authorization keys of the service account. I deleted the old and set up a new SQL database (incl. `alembic upgrade head`). I cleaned out the container registry (by deleting the `container` folder in the corresponding bucket). I cleaned out the Cloud Builds (by deleting `source` folder in the corresponding bucket). It recreates the containers and builds, when I start the dispatcher. The dispatcher runs properly. I SSH'ed into a random runner: `docker images` and `docker ps -a` return empty-handed. Is the recent setup gcr.io/fuzzbench-specific? Any suggestion to debug?\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Code for dealing with benchmarks.\"\"\"\nimport os\nimport re\n\nfrom common import experiment_utils\nfrom common import fuzzer_utils\nfrom common import logs\nfrom common import oss_fuzz\nfrom common import utils\n\nVALID_BENCHMARK_REGEX = re.compile(r'^[A-Za-z0-9\\._\\-]+$')\n\n\ndef is_oss_fuzz(benchmark):\n \"\"\"Returns True if |benchmark| is OSS-Fuzz-based project.\"\"\"\n return os.path.isfile(oss_fuzz.get_config_file(benchmark))\n\n\ndef get_project(benchmark):\n \"\"\"Returns the OSS-Fuzz project of |benchmark| if it is based on an\n OSS-Fuzz project, otherwise raises ValueError.\"\"\"\n if is_oss_fuzz(benchmark):\n return oss_fuzz.get_config(benchmark)['project']\n raise ValueError('Can only get project on OSS-Fuzz benchmarks.')\n\n\ndef get_fuzz_target(benchmark):\n \"\"\"Returns the fuzz target of |benchmark|\"\"\"\n if is_oss_fuzz(benchmark):\n return oss_fuzz.get_config(benchmark)['fuzz_target']\n return fuzzer_utils.DEFAULT_FUZZ_TARGET_NAME\n\n\ndef get_runner_image_url(benchmark, fuzzer, cloud_project):\n \"\"\"Get the URL of the docker runner image for fuzzing the benchmark with\n fuzzer.\"\"\"\n base_tag = experiment_utils.get_base_docker_tag(cloud_project)\n if is_oss_fuzz(benchmark):\n return '{base_tag}/oss-fuzz/runners/{fuzzer}/{project}'.format(\n base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))\n return '{base_tag}/runners/{fuzzer}/{benchmark}'.format(base_tag=base_tag,\n fuzzer=fuzzer,\n benchmark=benchmark)\n\n\ndef get_builder_image_url(benchmark, fuzzer, cloud_project):\n \"\"\"Get the URL of the docker builder image for fuzzing the benchmark with\n fuzzer.\"\"\"\n base_tag = experiment_utils.get_base_docker_tag(cloud_project)\n if is_oss_fuzz(benchmark):\n return '{base_tag}/oss-fuzz/builders/{fuzzer}/{project}'.format(\n base_tag=base_tag, fuzzer=fuzzer, project=get_project(benchmark))\n return '{base_tag}/builders/{fuzzer}/{benchmark}'.format(\n base_tag=base_tag, fuzzer=fuzzer, benchmark=benchmark)\n\n\ndef get_oss_fuzz_builder_hash(benchmark):\n \"\"\"Get the specified hash of the OSS-Fuzz builder for the OSS-Fuzz project\n used by |benchmark|.\"\"\"\n if is_oss_fuzz(benchmark):\n return oss_fuzz.get_config(benchmark)['oss_fuzz_builder_hash']\n raise ValueError('Can only get project on OSS-Fuzz benchmarks.')\n\n\ndef validate(benchmark):\n \"\"\"Return True if |benchmark| is a valid fuzzbench fuzzer.\"\"\"\n if VALID_BENCHMARK_REGEX.match(benchmark) is None:\n logs.error('%s does not conform to %s pattern.', benchmark,\n VALID_BENCHMARK_REGEX.pattern)\n return False\n if benchmark in get_all_benchmarks():\n return True\n logs.error('%s must have a build.sh or oss-fuzz.yaml.', benchmark)\n return False\n\n\ndef get_all_benchmarks():\n \"\"\"Returns the list of all benchmarks.\"\"\"\n benchmarks_dir = os.path.join(utils.ROOT_DIR, 'benchmarks')\n all_benchmarks = []\n for benchmark in os.listdir(benchmarks_dir):\n benchmark_path = os.path.join(benchmarks_dir, benchmark)\n if os.path.isfile(os.path.join(benchmark_path, 'oss-fuzz.yaml')):\n # Benchmark is an OSS-Fuzz benchmark.\n all_benchmarks.append(benchmark)\n elif os.path.isfile(os.path.join(benchmark_path, 'build.sh')):\n # Benchmark is a standard benchmark.\n all_benchmarks.append(benchmark)\n return all_benchmarks\n", "path": "common/benchmark_utils.py"}]} | 1,963 | 292 |
gh_patches_debug_18754 | rasdani/github-patches | git_diff | streamlink__streamlink-4467 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove `streamlink.plugin.api.utils.itertags`
[`streamlink.plugin.api.utils.itertags`](https://github.com/streamlink/streamlink/blob/3.2.0/src/streamlink/plugin/api/utils.py#L16-L28) has become obsolete ever since `lxml` was added as a dependency to Streamlink for parsing HTML.
`itertags` is a hacky implementation via regexes, which is not only slow, but it's also impossible to correctly parse HTML nodes with regular expressions, so it shouldn't be used when better and much faster solutions are available. It also always requires unescaping tag values, which is annoying.
We've already updated and replaced lots of plugins which were previously using it, but there are still some left:
```
$ GIT_PAGER=cat git grep -F 'from streamlink.plugin.api.utils import' a1ce471f
a1ce471f:src/streamlink/plugins/cdnbg.py:from streamlink.plugin.api.utils import itertags
a1ce471f:src/streamlink/plugins/facebook.py:from streamlink.plugin.api.utils import itertags
a1ce471f:src/streamlink/plugins/funimationnow.py:from streamlink.plugin.api.utils import itertags
a1ce471f:src/streamlink/plugins/senategov.py:from streamlink.plugin.api.utils import itertags
a1ce471f:src/streamlink/plugins/vrtbe.py:from streamlink.plugin.api.utils import itertags
a1ce471f:tests/test_plugin_utils.py:from streamlink.plugin.api.utils import itertags
```
- [x] cdnbg
- [x] facebook
- [x] funimationnow
- [x] senategov
- [x] vrtbe
Once every last plugin has been updated, the entire `streamlink.plugin.api.utils` module can be removed, as it only contains the `itertags` function and some other useless export aliases which are not even used anymore in Streamlink's codebase.
If we care about plugin-API stability (something which has never been discussed), removing this would be considered a breaking change. Since we've just dropped py36, that's something which could be included in the 4.0.0 release.
</issue>
<code>
[start of src/streamlink/plugin/api/utils.py]
1 """Useful wrappers and other tools."""
2 import re
3 from collections import namedtuple
4
5 from streamlink.utils.parse import parse_json, parse_qsd as parse_query, parse_xml
6
7 __all__ = ["parse_json", "parse_xml", "parse_query"]
8
9
10 tag_re = re.compile(r'''(?=<(?P<tag>[a-zA-Z]+)(?P<attr>.*?)(?P<end>/)?>(?:(?P<inner>.*?)</\s*(?P=tag)\s*>)?)''',
11 re.MULTILINE | re.DOTALL)
12 attr_re = re.compile(r'''\s*(?P<key>[\w-]+)\s*(?:=\s*(?P<quote>["']?)(?P<value>.*?)(?P=quote)\s*)?''')
13 Tag = namedtuple("Tag", "tag attributes text")
14
15
16 def itertags(html, tag):
17 """
18 Brute force regex based HTML tag parser. This is a rough-and-ready searcher to find HTML tags when
19 standards compliance is not required. Will find tags that are commented out, or inside script tag etc.
20
21 :param html: HTML page
22 :param tag: tag name to find
23 :return: generator with Tags
24 """
25 for match in tag_re.finditer(html):
26 if match.group("tag") == tag:
27 attrs = {a.group("key").lower(): a.group("value") for a in attr_re.finditer(match.group("attr"))}
28 yield Tag(match.group("tag"), attrs, match.group("inner"))
29
[end of src/streamlink/plugin/api/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugin/api/utils.py b/src/streamlink/plugin/api/utils.py
deleted file mode 100644
--- a/src/streamlink/plugin/api/utils.py
+++ /dev/null
@@ -1,28 +0,0 @@
-"""Useful wrappers and other tools."""
-import re
-from collections import namedtuple
-
-from streamlink.utils.parse import parse_json, parse_qsd as parse_query, parse_xml
-
-__all__ = ["parse_json", "parse_xml", "parse_query"]
-
-
-tag_re = re.compile(r'''(?=<(?P<tag>[a-zA-Z]+)(?P<attr>.*?)(?P<end>/)?>(?:(?P<inner>.*?)</\s*(?P=tag)\s*>)?)''',
- re.MULTILINE | re.DOTALL)
-attr_re = re.compile(r'''\s*(?P<key>[\w-]+)\s*(?:=\s*(?P<quote>["']?)(?P<value>.*?)(?P=quote)\s*)?''')
-Tag = namedtuple("Tag", "tag attributes text")
-
-
-def itertags(html, tag):
- """
- Brute force regex based HTML tag parser. This is a rough-and-ready searcher to find HTML tags when
- standards compliance is not required. Will find tags that are commented out, or inside script tag etc.
-
- :param html: HTML page
- :param tag: tag name to find
- :return: generator with Tags
- """
- for match in tag_re.finditer(html):
- if match.group("tag") == tag:
- attrs = {a.group("key").lower(): a.group("value") for a in attr_re.finditer(match.group("attr"))}
- yield Tag(match.group("tag"), attrs, match.group("inner"))
| {"golden_diff": "diff --git a/src/streamlink/plugin/api/utils.py b/src/streamlink/plugin/api/utils.py\ndeleted file mode 100644\n--- a/src/streamlink/plugin/api/utils.py\n+++ /dev/null\n@@ -1,28 +0,0 @@\n-\"\"\"Useful wrappers and other tools.\"\"\"\n-import re\n-from collections import namedtuple\n-\n-from streamlink.utils.parse import parse_json, parse_qsd as parse_query, parse_xml\n-\n-__all__ = [\"parse_json\", \"parse_xml\", \"parse_query\"]\n-\n-\n-tag_re = re.compile(r'''(?=<(?P<tag>[a-zA-Z]+)(?P<attr>.*?)(?P<end>/)?>(?:(?P<inner>.*?)</\\s*(?P=tag)\\s*>)?)''',\n- re.MULTILINE | re.DOTALL)\n-attr_re = re.compile(r'''\\s*(?P<key>[\\w-]+)\\s*(?:=\\s*(?P<quote>[\"']?)(?P<value>.*?)(?P=quote)\\s*)?''')\n-Tag = namedtuple(\"Tag\", \"tag attributes text\")\n-\n-\n-def itertags(html, tag):\n- \"\"\"\n- Brute force regex based HTML tag parser. This is a rough-and-ready searcher to find HTML tags when\n- standards compliance is not required. Will find tags that are commented out, or inside script tag etc.\n-\n- :param html: HTML page\n- :param tag: tag name to find\n- :return: generator with Tags\n- \"\"\"\n- for match in tag_re.finditer(html):\n- if match.group(\"tag\") == tag:\n- attrs = {a.group(\"key\").lower(): a.group(\"value\") for a in attr_re.finditer(match.group(\"attr\"))}\n- yield Tag(match.group(\"tag\"), attrs, match.group(\"inner\"))\n", "issue": "Remove `streamlink.plugin.api.utils.itertags`\n[`streamlink.plugin.api.utils.itertags`](https://github.com/streamlink/streamlink/blob/3.2.0/src/streamlink/plugin/api/utils.py#L16-L28) has become obsolete ever since `lxml` was added as a dependency to Streamlink for parsing HTML.\r\n\r\n`itertags` is a hacky implementation via regexes, which is not only slow, but it's also impossible to correctly parse HTML nodes with regular expressions, so it shouldn't be used when better and much faster solutions are available. It also always requires unescaping tag values, which is annoying.\r\n\r\nWe've already updated and replaced lots of plugins which were previously using it, but there are still some left:\r\n```\r\n$ GIT_PAGER=cat git grep -F 'from streamlink.plugin.api.utils import' a1ce471f\r\na1ce471f:src/streamlink/plugins/cdnbg.py:from streamlink.plugin.api.utils import itertags\r\na1ce471f:src/streamlink/plugins/facebook.py:from streamlink.plugin.api.utils import itertags\r\na1ce471f:src/streamlink/plugins/funimationnow.py:from streamlink.plugin.api.utils import itertags\r\na1ce471f:src/streamlink/plugins/senategov.py:from streamlink.plugin.api.utils import itertags\r\na1ce471f:src/streamlink/plugins/vrtbe.py:from streamlink.plugin.api.utils import itertags\r\na1ce471f:tests/test_plugin_utils.py:from streamlink.plugin.api.utils import itertags\r\n```\r\n\r\n- [x] cdnbg\r\n- [x] facebook\r\n- [x] funimationnow\r\n- [x] senategov\r\n- [x] vrtbe\r\n\r\nOnce every last plugin has been updated, the entire `streamlink.plugin.api.utils` module can be removed, as it only contains the `itertags` function and some other useless export aliases which are not even used anymore in Streamlink's codebase.\r\n\r\nIf we care about plugin-API stability (something which has never been discussed), removing this would be considered a breaking change. Since we've just dropped py36, that's something which could be included in the 4.0.0 release.\n", "before_files": [{"content": "\"\"\"Useful wrappers and other tools.\"\"\"\nimport re\nfrom collections import namedtuple\n\nfrom streamlink.utils.parse import parse_json, parse_qsd as parse_query, parse_xml\n\n__all__ = [\"parse_json\", \"parse_xml\", \"parse_query\"]\n\n\ntag_re = re.compile(r'''(?=<(?P<tag>[a-zA-Z]+)(?P<attr>.*?)(?P<end>/)?>(?:(?P<inner>.*?)</\\s*(?P=tag)\\s*>)?)''',\n re.MULTILINE | re.DOTALL)\nattr_re = re.compile(r'''\\s*(?P<key>[\\w-]+)\\s*(?:=\\s*(?P<quote>[\"']?)(?P<value>.*?)(?P=quote)\\s*)?''')\nTag = namedtuple(\"Tag\", \"tag attributes text\")\n\n\ndef itertags(html, tag):\n \"\"\"\n Brute force regex based HTML tag parser. This is a rough-and-ready searcher to find HTML tags when\n standards compliance is not required. Will find tags that are commented out, or inside script tag etc.\n\n :param html: HTML page\n :param tag: tag name to find\n :return: generator with Tags\n \"\"\"\n for match in tag_re.finditer(html):\n if match.group(\"tag\") == tag:\n attrs = {a.group(\"key\").lower(): a.group(\"value\") for a in attr_re.finditer(match.group(\"attr\"))}\n yield Tag(match.group(\"tag\"), attrs, match.group(\"inner\"))\n", "path": "src/streamlink/plugin/api/utils.py"}]} | 1,415 | 408 |
gh_patches_debug_60797 | rasdani/github-patches | git_diff | engnadeau__pybotics-751 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create a way to add your own arm model[FEATURE]
## User Story
<!-- A clear and concise description of what the problem is.
I want to add my own arm configuration to the list of pre-trained models.
## Potential Solutions
<!-- A clear and concise description of what you want to happen. -->
If there was a comment next to each line of one of the arrays containing the pre-trained model saying what exactly each value was supposed to represent, that would help.
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
I tried looking at the spec sheets of the arms and matching up values but I couldn't figure much out.
</issue>
<code>
[start of pybotics/predefined_models.py]
1 """Predefined robot models."""
2 import numpy as np # type: ignore
3
4
5 def kuka_lbr_iiwa_7() -> np.ndarray: # pragma: no cover
6 """Get KUKA LBR iiwa 7 MDH model."""
7 return np.array(
8 [
9 [0, 0, 0, 340],
10 [-np.pi / 2, 0, 0, 0],
11 [np.pi / 2, 0, 0, 400],
12 [np.pi / 2, 0, 0, 0],
13 [-np.pi / 2, 0, 0, 400],
14 [-np.pi / 2, 0, 0, 0],
15 [np.pi / 2, 0, 0, 126],
16 ]
17 )
18
19
20 def mecademic_meca500() -> np.ndarray: # pragma: no cover
21 """Get Meca500 MDH model."""
22 return np.array(
23 [
24 [0, 0, 0, 135],
25 [-np.pi / 2, 0, -np.pi / 2, 0],
26 [0, 135, 0, 0],
27 [-np.pi / 2, 38, 0, 120],
28 [np.pi / 2, 0, 0, 0],
29 [-np.pi / 2, 0, np.pi, 72],
30 ]
31 )
32
33
34 def puma560() -> np.ndarray: # pragma: no cover
35 """Get PUMA560 MDH model."""
36 return np.array(
37 [
38 [0, 0, 0, 0],
39 [-np.pi / 2, 0, 0, 0],
40 [0, 612.7, 0, 0],
41 [0, 571.6, 0, 163.9],
42 [-np.pi / 2, 0, 0, 115.7],
43 [np.pi / 2, 0, np.pi, 92.2],
44 ]
45 )
46
47
48 def ur10() -> np.ndarray: # pragma: no cover
49 """Get UR10 MDH model."""
50 return np.array(
51 [
52 [0, 0, 0, 118],
53 [np.pi / 2, 0, np.pi, 0],
54 [0, 612.7, 0, 0],
55 [0, 571.6, 0, 163.9],
56 [-np.pi / 2, 0, 0, 115.7],
57 [np.pi / 2, 0, np.pi, 92.2],
58 ]
59 )
60
61
62 def abb_irb120() -> np.ndarray: # pragma: no cover
63 """Get ABB irb120 MDH model."""
64 return np.array(
65 [
66 [0, 0, 0, 290],
67 [-np.pi / 2, 0, -np.pi / 2, 0],
68 [0, 270, 0, 0],
69 [-np.pi / 2, 70, 0, 302],
70 [np.pi / 2, 0, 0, 0],
71 [-np.pi / 2, 0, np.pi, 72],
72 ]
73 )
74
[end of pybotics/predefined_models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pybotics/predefined_models.py b/pybotics/predefined_models.py
--- a/pybotics/predefined_models.py
+++ b/pybotics/predefined_models.py
@@ -1,4 +1,8 @@
-"""Predefined robot models."""
+"""Predefined robot models.
+
+These models correspond to the Modified Denavit–Hartenberg parameters:
+https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters
+"""
import numpy as np # type: ignore
| {"golden_diff": "diff --git a/pybotics/predefined_models.py b/pybotics/predefined_models.py\n--- a/pybotics/predefined_models.py\n+++ b/pybotics/predefined_models.py\n@@ -1,4 +1,8 @@\n-\"\"\"Predefined robot models.\"\"\"\n+\"\"\"Predefined robot models.\n+\n+These models correspond to the Modified Denavit\u2013Hartenberg parameters:\n+https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters\n+\"\"\"\n import numpy as np # type: ignore\n", "issue": "Create a way to add your own arm model[FEATURE]\n## User Story\r\n\r\n<!-- A clear and concise description of what the problem is. \r\nI want to add my own arm configuration to the list of pre-trained models.\r\n\r\n## Potential Solutions\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\nIf there was a comment next to each line of one of the arrays containing the pre-trained model saying what exactly each value was supposed to represent, that would help.\r\n<!-- A clear and concise description of any alternative solutions or features you've considered. -->\r\nI tried looking at the spec sheets of the arms and matching up values but I couldn't figure much out. \r\n\r\n\n", "before_files": [{"content": "\"\"\"Predefined robot models.\"\"\"\nimport numpy as np # type: ignore\n\n\ndef kuka_lbr_iiwa_7() -> np.ndarray: # pragma: no cover\n \"\"\"Get KUKA LBR iiwa 7 MDH model.\"\"\"\n return np.array(\n [\n [0, 0, 0, 340],\n [-np.pi / 2, 0, 0, 0],\n [np.pi / 2, 0, 0, 400],\n [np.pi / 2, 0, 0, 0],\n [-np.pi / 2, 0, 0, 400],\n [-np.pi / 2, 0, 0, 0],\n [np.pi / 2, 0, 0, 126],\n ]\n )\n\n\ndef mecademic_meca500() -> np.ndarray: # pragma: no cover\n \"\"\"Get Meca500 MDH model.\"\"\"\n return np.array(\n [\n [0, 0, 0, 135],\n [-np.pi / 2, 0, -np.pi / 2, 0],\n [0, 135, 0, 0],\n [-np.pi / 2, 38, 0, 120],\n [np.pi / 2, 0, 0, 0],\n [-np.pi / 2, 0, np.pi, 72],\n ]\n )\n\n\ndef puma560() -> np.ndarray: # pragma: no cover\n \"\"\"Get PUMA560 MDH model.\"\"\"\n return np.array(\n [\n [0, 0, 0, 0],\n [-np.pi / 2, 0, 0, 0],\n [0, 612.7, 0, 0],\n [0, 571.6, 0, 163.9],\n [-np.pi / 2, 0, 0, 115.7],\n [np.pi / 2, 0, np.pi, 92.2],\n ]\n )\n\n\ndef ur10() -> np.ndarray: # pragma: no cover\n \"\"\"Get UR10 MDH model.\"\"\"\n return np.array(\n [\n [0, 0, 0, 118],\n [np.pi / 2, 0, np.pi, 0],\n [0, 612.7, 0, 0],\n [0, 571.6, 0, 163.9],\n [-np.pi / 2, 0, 0, 115.7],\n [np.pi / 2, 0, np.pi, 92.2],\n ]\n )\n\n\ndef abb_irb120() -> np.ndarray: # pragma: no cover\n \"\"\"Get ABB irb120 MDH model.\"\"\"\n return np.array(\n [\n [0, 0, 0, 290],\n [-np.pi / 2, 0, -np.pi / 2, 0],\n [0, 270, 0, 0],\n [-np.pi / 2, 70, 0, 302],\n [np.pi / 2, 0, 0, 0],\n [-np.pi / 2, 0, np.pi, 72],\n ]\n )\n", "path": "pybotics/predefined_models.py"}]} | 1,614 | 116 |
gh_patches_debug_1536 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2525 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Demo template management command unexpected args.
## Description
<!-- A clear and concise description of what the bug is. -->
After starting dev environment, the management command to setup the demo DB is broken. Trying to run:
```sh
# docker exec -it mathesar_service_dev python manage.py setup_demo_template_db
```
results in:
```
Traceback (most recent call last):
File "/code/manage.py", line 22, in <module>
main()
File "/code/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/code/demo/management/commands/setup_demo_template_db.py", line 15, in handle
_setup_demo_template_db(*args, **options)
TypeError: _setup_demo_template_db() got an unexpected keyword argument 'verbosity'
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
You should be able to run the command listed above successfully in the `dev` environment.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
Start the dev environment with a fresh docker state. Run the above command.
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
The problem is in line 15 of `demo/management/commands/setup_demo_template.py`.
</issue>
<code>
[start of demo/management/commands/setup_demo_template_db.py]
1 from sqlalchemy import text
2
3 from django.conf import settings
4 from django.core.management import BaseCommand
5
6 from db.install import install_mathesar
7 from demo.install.datasets import load_datasets
8 from mathesar.database.base import create_mathesar_engine
9
10
11 class Command(BaseCommand):
12 help = 'Initialize the demo template database.'
13
14 def handle(self, *args, **options):
15 _setup_demo_template_db(*args, **options)
16
17
18 def _setup_demo_template_db():
19 print("Initializing demo template database...")
20
21 template_db_name = settings.MATHESAR_DEMO_TEMPLATE
22 root_engine = create_mathesar_engine(settings.DATABASES["default"]["NAME"])
23 with root_engine.connect() as conn:
24 conn.execution_options(isolation_level="AUTOCOMMIT")
25 conn.execute(text(f"DROP DATABASE IF EXISTS {template_db_name} WITH (FORCE)"))
26 root_engine.dispose()
27 install_mathesar(
28 database_name=template_db_name,
29 username=settings.DATABASES["default"]["USER"],
30 password=settings.DATABASES["default"]["PASSWORD"],
31 hostname=settings.DATABASES["default"]["HOST"],
32 port=settings.DATABASES["default"]["PORT"],
33 skip_confirm=True
34 )
35 user_engine = create_mathesar_engine(template_db_name)
36 load_datasets(user_engine)
37 user_engine.dispose()
38
[end of demo/management/commands/setup_demo_template_db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/demo/management/commands/setup_demo_template_db.py b/demo/management/commands/setup_demo_template_db.py
--- a/demo/management/commands/setup_demo_template_db.py
+++ b/demo/management/commands/setup_demo_template_db.py
@@ -12,7 +12,7 @@
help = 'Initialize the demo template database.'
def handle(self, *args, **options):
- _setup_demo_template_db(*args, **options)
+ _setup_demo_template_db()
def _setup_demo_template_db():
| {"golden_diff": "diff --git a/demo/management/commands/setup_demo_template_db.py b/demo/management/commands/setup_demo_template_db.py\n--- a/demo/management/commands/setup_demo_template_db.py\n+++ b/demo/management/commands/setup_demo_template_db.py\n@@ -12,7 +12,7 @@\n help = 'Initialize the demo template database.'\n \n def handle(self, *args, **options):\n- _setup_demo_template_db(*args, **options)\n+ _setup_demo_template_db()\n \n \n def _setup_demo_template_db():\n", "issue": "Demo template management command unexpected args.\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nAfter starting dev environment, the management command to setup the demo DB is broken. Trying to run:\r\n```sh\r\n# docker exec -it mathesar_service_dev python manage.py setup_demo_template_db\r\n```\r\nresults in:\r\n```\r\nTraceback (most recent call last):\r\n File \"/code/manage.py\", line 22, in <module>\r\n main()\r\n File \"/code/manage.py\", line 18, in main\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 401, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 395, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 330, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 371, in execute\r\n output = self.handle(*args, **options)\r\n File \"/code/demo/management/commands/setup_demo_template_db.py\", line 15, in handle\r\n _setup_demo_template_db(*args, **options)\r\nTypeError: _setup_demo_template_db() got an unexpected keyword argument 'verbosity'\r\n```\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nYou should be able to run the command listed above successfully in the `dev` environment.\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n\r\nStart the dev environment with a fresh docker state. Run the above command.\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. -->\r\n\r\nThe problem is in line 15 of `demo/management/commands/setup_demo_template.py`.\n", "before_files": [{"content": "from sqlalchemy import text\n\nfrom django.conf import settings\nfrom django.core.management import BaseCommand\n\nfrom db.install import install_mathesar\nfrom demo.install.datasets import load_datasets\nfrom mathesar.database.base import create_mathesar_engine\n\n\nclass Command(BaseCommand):\n help = 'Initialize the demo template database.'\n\n def handle(self, *args, **options):\n _setup_demo_template_db(*args, **options)\n\n\ndef _setup_demo_template_db():\n print(\"Initializing demo template database...\")\n\n template_db_name = settings.MATHESAR_DEMO_TEMPLATE\n root_engine = create_mathesar_engine(settings.DATABASES[\"default\"][\"NAME\"])\n with root_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE IF EXISTS {template_db_name} WITH (FORCE)\"))\n root_engine.dispose()\n install_mathesar(\n database_name=template_db_name,\n username=settings.DATABASES[\"default\"][\"USER\"],\n password=settings.DATABASES[\"default\"][\"PASSWORD\"],\n hostname=settings.DATABASES[\"default\"][\"HOST\"],\n port=settings.DATABASES[\"default\"][\"PORT\"],\n skip_confirm=True\n )\n user_engine = create_mathesar_engine(template_db_name)\n load_datasets(user_engine)\n user_engine.dispose()\n", "path": "demo/management/commands/setup_demo_template_db.py"}]} | 1,350 | 116 |
gh_patches_debug_5188 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Download Data Fails if Content Length Not Defined in Header
## 🐛 Bug
When I try to download a zip file using download_data from flash.core.data it fails because the response header does not contain a value for 'Content Length' this should be check for and handled in the code.
### To Reproduce
Steps to reproduce the behavior:
KeyError Traceback (most recent call last)
<ipython-input-7-aa10e89f3a8e> in <module>()
1 # 1. Download the data
----> 2 download_data("https://github.com/karoldvl/ESC-50/archive/master.zip", 'data/')
2 frames
/content/gdrive/MyDrive/lightning-flash/flash/core/data/utils.py in download_data(url, path)
75
76 """
---> 77 download_file(url, path)
78
79
/content/gdrive/MyDrive/lightning-flash/flash/core/data/utils.py in download_file(url, path, verbose)
36 local_filename = os.path.join(path, url.split('/')[-1])
37 r = requests.get(url, stream=True)
---> 38 file_size = int(r.headers['Content-Length'])
39 chunk = 1
40 chunk_size = 1024
/usr/local/lib/python3.6/dist-packages/requests/structures.py in __getitem__(self, key)
52
53 def __getitem__(self, key):
---> 54 return self._store[key.lower()][1]
55
56 def __delitem__(self, key):
KeyError: 'content-length'
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the decried issue.
Minimal means having the shortest code but still preserving the bug. -->
```python
import flash
from flash.core.data import download_data
download_data("https://github.com/karoldvl/ESC-50/archive/master.zip", 'data/')
```
### Expected behavior
File downloads and extracts ESC-50 data into datasets folder
### Environment
Default Collab Configuration
### Additional context
<!-- Add any other context about the problem here. -->
</issue>
<code>
[start of flash/core/data/utils.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os.path
16 import zipfile
17 from typing import Any, Type
18
19 import requests
20 import torch
21 from tqdm.auto import tqdm as tq
22
23
24 # Code taken from: https://gist.github.com/ruxi/5d6803c116ec1130d484a4ab8c00c603
25 # __author__ = "github.com/ruxi"
26 # __license__ = "MIT"
27 def download_file(url: str, path: str, verbose: bool = False) -> None:
28 """
29 Download file with progressbar
30
31 Usage:
32 download_file('http://web4host.net/5MB.zip')
33 """
34 if not os.path.exists(path):
35 os.makedirs(path)
36 local_filename = os.path.join(path, url.split('/')[-1])
37 r = requests.get(url, stream=True)
38 file_size = int(r.headers['Content-Length'])
39 chunk = 1
40 chunk_size = 1024
41 num_bars = int(file_size / chunk_size)
42 if verbose:
43 print(dict(file_size=file_size))
44 print(dict(num_bars=num_bars))
45
46 if not os.path.exists(local_filename):
47 with open(local_filename, 'wb') as fp:
48 for chunk in tq(
49 r.iter_content(chunk_size=chunk_size),
50 total=num_bars,
51 unit='KB',
52 desc=local_filename,
53 leave=True # progressbar stays
54 ):
55 fp.write(chunk) # type: ignore
56
57 if '.zip' in local_filename:
58 if os.path.exists(local_filename):
59 with zipfile.ZipFile(local_filename, 'r') as zip_ref:
60 zip_ref.extractall(path)
61
62
63 def download_data(url: str, path: str = "data/") -> None:
64 """
65 Downloads data automatically from the given url to the path. Defaults to data/ for the path.
66 Automatically handles .csv, .zip
67
68 Example::
69
70 from flash import download_data
71
72 Args:
73 url: path
74 path: local
75
76 """
77 download_file(url, path)
78
79
80 def _contains_any_tensor(value: Any, dtype: Type = torch.Tensor) -> bool:
81 # TODO: we should refactor FlashDatasetFolder to better integrate
82 # with DataPipeline. That way, we wouldn't need this check.
83 # This is because we are running transforms in both places.
84 if isinstance(value, dtype):
85 return True
86 if isinstance(value, (list, tuple)):
87 return any(_contains_any_tensor(v, dtype=dtype) for v in value)
88 elif isinstance(value, dict):
89 return any(_contains_any_tensor(v, dtype=dtype) for v in value.values())
90 return False
91
[end of flash/core/data/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flash/core/data/utils.py b/flash/core/data/utils.py
--- a/flash/core/data/utils.py
+++ b/flash/core/data/utils.py
@@ -35,7 +35,7 @@
os.makedirs(path)
local_filename = os.path.join(path, url.split('/')[-1])
r = requests.get(url, stream=True)
- file_size = int(r.headers['Content-Length'])
+ file_size = int(r.headers['Content-Length']) if 'Content-Length' in r.headers else 0
chunk = 1
chunk_size = 1024
num_bars = int(file_size / chunk_size)
| {"golden_diff": "diff --git a/flash/core/data/utils.py b/flash/core/data/utils.py\n--- a/flash/core/data/utils.py\n+++ b/flash/core/data/utils.py\n@@ -35,7 +35,7 @@\n os.makedirs(path)\n local_filename = os.path.join(path, url.split('/')[-1])\n r = requests.get(url, stream=True)\n- file_size = int(r.headers['Content-Length'])\n+ file_size = int(r.headers['Content-Length']) if 'Content-Length' in r.headers else 0\n chunk = 1\n chunk_size = 1024\n num_bars = int(file_size / chunk_size)\n", "issue": "Download Data Fails if Content Length Not Defined in Header\n## \ud83d\udc1b Bug\r\n\r\nWhen I try to download a zip file using download_data from flash.core.data it fails because the response header does not contain a value for 'Content Length' this should be check for and handled in the code. \r\n\r\n### To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-7-aa10e89f3a8e> in <module>()\r\n 1 # 1. Download the data\r\n----> 2 download_data(\"https://github.com/karoldvl/ESC-50/archive/master.zip\", 'data/')\r\n\r\n2 frames\r\n/content/gdrive/MyDrive/lightning-flash/flash/core/data/utils.py in download_data(url, path)\r\n 75 \r\n 76 \"\"\"\r\n---> 77 download_file(url, path)\r\n 78 \r\n 79 \r\n\r\n/content/gdrive/MyDrive/lightning-flash/flash/core/data/utils.py in download_file(url, path, verbose)\r\n 36 local_filename = os.path.join(path, url.split('/')[-1])\r\n 37 r = requests.get(url, stream=True)\r\n---> 38 file_size = int(r.headers['Content-Length'])\r\n 39 chunk = 1\r\n 40 chunk_size = 1024\r\n\r\n/usr/local/lib/python3.6/dist-packages/requests/structures.py in __getitem__(self, key)\r\n 52 \r\n 53 def __getitem__(self, key):\r\n---> 54 return self._store[key.lower()][1]\r\n 55 \r\n 56 def __delitem__(self, key):\r\n\r\nKeyError: 'content-length'\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n\r\n#### Code sample\r\n<!-- Ideally attach a minimal code sample to reproduce the decried issue.\r\nMinimal means having the shortest code but still preserving the bug. -->\r\n\r\n```python\r\nimport flash\r\nfrom flash.core.data import download_data\r\ndownload_data(\"https://github.com/karoldvl/ESC-50/archive/master.zip\", 'data/')\r\n```\r\n\r\n### Expected behavior\r\n\r\nFile downloads and extracts ESC-50 data into datasets folder\r\n\r\n### Environment\r\n\r\nDefault Collab Configuration \r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os.path\nimport zipfile\nfrom typing import Any, Type\n\nimport requests\nimport torch\nfrom tqdm.auto import tqdm as tq\n\n\n# Code taken from: https://gist.github.com/ruxi/5d6803c116ec1130d484a4ab8c00c603\n# __author__ = \"github.com/ruxi\"\n# __license__ = \"MIT\"\ndef download_file(url: str, path: str, verbose: bool = False) -> None:\n \"\"\"\n Download file with progressbar\n\n Usage:\n download_file('http://web4host.net/5MB.zip')\n \"\"\"\n if not os.path.exists(path):\n os.makedirs(path)\n local_filename = os.path.join(path, url.split('/')[-1])\n r = requests.get(url, stream=True)\n file_size = int(r.headers['Content-Length'])\n chunk = 1\n chunk_size = 1024\n num_bars = int(file_size / chunk_size)\n if verbose:\n print(dict(file_size=file_size))\n print(dict(num_bars=num_bars))\n\n if not os.path.exists(local_filename):\n with open(local_filename, 'wb') as fp:\n for chunk in tq(\n r.iter_content(chunk_size=chunk_size),\n total=num_bars,\n unit='KB',\n desc=local_filename,\n leave=True # progressbar stays\n ):\n fp.write(chunk) # type: ignore\n\n if '.zip' in local_filename:\n if os.path.exists(local_filename):\n with zipfile.ZipFile(local_filename, 'r') as zip_ref:\n zip_ref.extractall(path)\n\n\ndef download_data(url: str, path: str = \"data/\") -> None:\n \"\"\"\n Downloads data automatically from the given url to the path. Defaults to data/ for the path.\n Automatically handles .csv, .zip\n\n Example::\n\n from flash import download_data\n\n Args:\n url: path\n path: local\n\n \"\"\"\n download_file(url, path)\n\n\ndef _contains_any_tensor(value: Any, dtype: Type = torch.Tensor) -> bool:\n # TODO: we should refactor FlashDatasetFolder to better integrate\n # with DataPipeline. That way, we wouldn't need this check.\n # This is because we are running transforms in both places.\n if isinstance(value, dtype):\n return True\n if isinstance(value, (list, tuple)):\n return any(_contains_any_tensor(v, dtype=dtype) for v in value)\n elif isinstance(value, dict):\n return any(_contains_any_tensor(v, dtype=dtype) for v in value.values())\n return False\n", "path": "flash/core/data/utils.py"}]} | 1,937 | 142 |
gh_patches_debug_7848 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-977 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Maptopicprio: Polygon may not be set
When I try to create a maptopic in the dashboard, it is not guaranteed that a polygon is already set. The map for setting a point therefore fails to display.
</issue>
<code>
[start of meinberlin/apps/maptopicprio/dashboard.py]
1 from django.urls import reverse
2 from django.utils.translation import ugettext_lazy as _
3
4 from meinberlin.apps.dashboard2 import DashboardComponent
5 from meinberlin.apps.dashboard2 import components
6
7 from . import models
8 from . import views
9
10
11 class MapTopicEditComponent(DashboardComponent):
12 identifier = 'map_topic_edit'
13 weight = 20
14 label = _('Places')
15
16 def is_effective(self, module):
17 module_app = module.phases[0].content().app
18 return module_app == 'meinberlin_maptopicprio'
19
20 def get_progress(self, module):
21 if models.MapTopic.objects.filter(module=module).exists():
22 return 1, 1
23 return 0, 1
24
25 def get_base_url(self, module):
26 return reverse('a4dashboard:maptopic-list', kwargs={
27 'module_slug': module.slug
28 })
29
30 def get_urls(self):
31 return [
32 (r'^maptopics/module/(?P<module_slug>[-\w_]+)/$',
33 views.MapTopicListDashboardView.as_view(component=self),
34 'maptopic-list'),
35 (r'^maptopics/create/module/(?P<module_slug>[-\w_]+)/$',
36 views.MapTopicCreateView.as_view(component=self),
37 'maptopic-create'),
38 (r'^maptopics/(?P<slug>[-\w_]+)/update/$',
39 views.MapTopicUpdateView.as_view(component=self),
40 'maptopic-update'),
41 (r'^maptopics/(?P<slug>[-\w_]+)/delete/$',
42 views.MapTopicDeleteView.as_view(component=self),
43 'maptopic-delete')
44 ]
45
46
47 components.register_module(MapTopicEditComponent())
48
[end of meinberlin/apps/maptopicprio/dashboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/maptopicprio/dashboard.py b/meinberlin/apps/maptopicprio/dashboard.py
--- a/meinberlin/apps/maptopicprio/dashboard.py
+++ b/meinberlin/apps/maptopicprio/dashboard.py
@@ -15,7 +15,12 @@
def is_effective(self, module):
module_app = module.phases[0].content().app
- return module_app == 'meinberlin_maptopicprio'
+ if module_app != 'meinberlin_maptopicprio':
+ return False
+ elif module.settings_instance.polygon == '':
+ return False
+ else:
+ return True
def get_progress(self, module):
if models.MapTopic.objects.filter(module=module).exists():
| {"golden_diff": "diff --git a/meinberlin/apps/maptopicprio/dashboard.py b/meinberlin/apps/maptopicprio/dashboard.py\n--- a/meinberlin/apps/maptopicprio/dashboard.py\n+++ b/meinberlin/apps/maptopicprio/dashboard.py\n@@ -15,7 +15,12 @@\n \n def is_effective(self, module):\n module_app = module.phases[0].content().app\n- return module_app == 'meinberlin_maptopicprio'\n+ if module_app != 'meinberlin_maptopicprio':\n+ return False\n+ elif module.settings_instance.polygon == '':\n+ return False\n+ else:\n+ return True\n \n def get_progress(self, module):\n if models.MapTopic.objects.filter(module=module).exists():\n", "issue": "Maptopicprio: Polygon may not be set\nWhen I try to create a maptopic in the dashboard, it is not guaranteed that a polygon is already set. The map for setting a point therefore fails to display.\n", "before_files": [{"content": "from django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom meinberlin.apps.dashboard2 import DashboardComponent\nfrom meinberlin.apps.dashboard2 import components\n\nfrom . import models\nfrom . import views\n\n\nclass MapTopicEditComponent(DashboardComponent):\n identifier = 'map_topic_edit'\n weight = 20\n label = _('Places')\n\n def is_effective(self, module):\n module_app = module.phases[0].content().app\n return module_app == 'meinberlin_maptopicprio'\n\n def get_progress(self, module):\n if models.MapTopic.objects.filter(module=module).exists():\n return 1, 1\n return 0, 1\n\n def get_base_url(self, module):\n return reverse('a4dashboard:maptopic-list', kwargs={\n 'module_slug': module.slug\n })\n\n def get_urls(self):\n return [\n (r'^maptopics/module/(?P<module_slug>[-\\w_]+)/$',\n views.MapTopicListDashboardView.as_view(component=self),\n 'maptopic-list'),\n (r'^maptopics/create/module/(?P<module_slug>[-\\w_]+)/$',\n views.MapTopicCreateView.as_view(component=self),\n 'maptopic-create'),\n (r'^maptopics/(?P<slug>[-\\w_]+)/update/$',\n views.MapTopicUpdateView.as_view(component=self),\n 'maptopic-update'),\n (r'^maptopics/(?P<slug>[-\\w_]+)/delete/$',\n views.MapTopicDeleteView.as_view(component=self),\n 'maptopic-delete')\n ]\n\n\ncomponents.register_module(MapTopicEditComponent())\n", "path": "meinberlin/apps/maptopicprio/dashboard.py"}]} | 1,041 | 178 |
gh_patches_debug_14220 | rasdani/github-patches | git_diff | fossasia__open-event-server-5229 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow bank, cheque and onsite for payment_mode in orders schema
**Describe the bug**
Currently setting bank, cheque and onsite for payment_mode returns 422 error.
**Expected behavior**
Payment Mode should allow bank, cheque and onsite as options.
</issue>
<code>
[start of app/api/schema/orders.py]
1 from flask import request
2 from marshmallow import post_dump, validates_schema, validate
3 from marshmallow_jsonapi import fields
4 from marshmallow_jsonapi.flask import Relationship
5
6 from app import db
7 from app.api.helpers.utilities import dasherize
8 from app.api.schema.base import SoftDeletionSchema
9 from utils.common import use_defaults
10
11
12 class OnSiteTicketSchema(SoftDeletionSchema):
13 class Meta:
14 type_ = 'on-site-ticket'
15 inflect = dasherize
16
17 id = fields.Str(load_only=True, required=True)
18 quantity = fields.Str(load_only=True, required=True)
19
20
21 @use_defaults()
22 class OrderSchema(SoftDeletionSchema):
23 class Meta:
24 type_ = 'order'
25 self_view = 'v1.order_detail'
26 self_view_kwargs = {'order_identifier': '<identifier>'}
27 inflect = dasherize
28
29 @post_dump
30 def generate_payment_url(self, data):
31 """
32 generate payment url for an order
33 :param data:
34 :return:
35 """
36 if 'POST' in request.method or ('GET' in request.method and 'regenerate' in request.args) and 'completed' != \
37 data["status"]:
38 if data['payment_mode'] == 'stripe':
39 data['payment_url'] = 'stripe://payment'
40 return data
41
42 @validates_schema
43 def initial_values(self, data):
44 if data.get('payment_mode') is None and 'POST' in request.method:
45 data['payment_mode'] = 'free'
46 return data
47
48 id = fields.Str(dump_only=True)
49 identifier = fields.Str(dump_only=True)
50 amount = fields.Float(validate=lambda n: n > 0, allow_none=True)
51 address = fields.Str(allow_none=True)
52 city = fields.Str(allow_none=True)
53 state = fields.Str(db.String, allow_none=True)
54 country = fields.Str(allow_none=True)
55 zipcode = fields.Str(allow_none=True)
56 completed_at = fields.DateTime(dump_only=True)
57 created_at = fields.DateTime(dump_only=True)
58 transaction_id = fields.Str(dump_only=True)
59 payment_mode = fields.Str(default="free",
60 validate=validate.OneOf(choices=["free", "stripe", "paypal"]), allow_none=True)
61 paid_via = fields.Str(dump_only=True)
62 brand = fields.Str(dump_only=True)
63 exp_month = fields.Str(dump_only=True)
64 exp_year = fields.Str(dump_only=True)
65 last4 = fields.Str(dump_only=True)
66 status = fields.Str(validate=validate.OneOf(choices=["pending", "cancelled", "completed", "placed", "expired"]))
67 discount_code_id = fields.Str(allow_none=True)
68 payment_url = fields.Str(dump_only=True)
69 cancel_note = fields.Str(allow_none=True)
70 order_notes = fields.Str(allow_none=True)
71 tickets_pdf_url = fields.Url(dump_only=True)
72
73 # only used in the case of an on site attendee.
74 on_site_tickets = fields.List(cls_or_instance=fields.Nested(OnSiteTicketSchema), load_only=True, allow_none=True)
75
76 attendees = Relationship(attribute='ticket_holders',
77 self_view='v1.order_attendee',
78 self_view_kwargs={'order_identifier': '<identifier>'},
79 related_view='v1.attendee_list',
80 related_view_kwargs={'order_identifier': '<identifier>'},
81 schema='AttendeeSchemaPublic',
82 many=True,
83 type_='attendee')
84
85 tickets = Relationship(attribute='tickets',
86 self_view='v1.order_ticket',
87 self_view_kwargs={'order_identifier': '<identifier>'},
88 related_view='v1.ticket_list',
89 related_view_kwargs={'order_identifier': '<identifier>'},
90 schema='TicketSchemaPublic',
91 many=True,
92 type_="ticket")
93
94 user = Relationship(attribute='user',
95 self_view='v1.order_user',
96 self_view_kwargs={'order_identifier': '<identifier>'},
97 related_view='v1.user_detail',
98 related_view_kwargs={'id': '<user_id>'},
99 schema='UserSchemaPublic',
100 type_="user")
101
102 event = Relationship(attribute='event',
103 self_view='v1.order_event',
104 self_view_kwargs={'order_identifier': '<identifier>'},
105 related_view='v1.event_detail',
106 related_view_kwargs={'id': '<event_id>'},
107 schema='EventSchemaPublic',
108 type_="event")
109
110 marketer = Relationship(attribute='marketer',
111 self_view='v1.order_marketer',
112 self_view_kwargs={'order_identifier': '<identifier>'},
113 related_view='v1.user_detail',
114 related_view_kwargs={'id': '<marketer_id>'},
115 schema='UserSchemaPublic',
116 type_="user")
117
118 discount_code = Relationship(attribute='discount_code',
119 self_view='v1.order_discount',
120 self_view_kwargs={'order_identifier': '<identifier>'},
121 related_view='v1.discount_code_detail',
122 related_view_kwargs={'id': '<discount_code_id>'},
123 schema='DiscountCodeSchemaPublic',
124 type_="discount-code")
125
[end of app/api/schema/orders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/schema/orders.py b/app/api/schema/orders.py
--- a/app/api/schema/orders.py
+++ b/app/api/schema/orders.py
@@ -56,8 +56,10 @@
completed_at = fields.DateTime(dump_only=True)
created_at = fields.DateTime(dump_only=True)
transaction_id = fields.Str(dump_only=True)
- payment_mode = fields.Str(default="free",
- validate=validate.OneOf(choices=["free", "stripe", "paypal"]), allow_none=True)
+ payment_mode = fields.Str(
+ default="free",
+ validate=validate.OneOf(choices=["free", "stripe", "paypal", "bank", "cheque", "onsite"]),
+ allow_none=True)
paid_via = fields.Str(dump_only=True)
brand = fields.Str(dump_only=True)
exp_month = fields.Str(dump_only=True)
| {"golden_diff": "diff --git a/app/api/schema/orders.py b/app/api/schema/orders.py\n--- a/app/api/schema/orders.py\n+++ b/app/api/schema/orders.py\n@@ -56,8 +56,10 @@\n completed_at = fields.DateTime(dump_only=True)\n created_at = fields.DateTime(dump_only=True)\n transaction_id = fields.Str(dump_only=True)\n- payment_mode = fields.Str(default=\"free\",\n- validate=validate.OneOf(choices=[\"free\", \"stripe\", \"paypal\"]), allow_none=True)\n+ payment_mode = fields.Str(\n+ default=\"free\",\n+ validate=validate.OneOf(choices=[\"free\", \"stripe\", \"paypal\", \"bank\", \"cheque\", \"onsite\"]),\n+ allow_none=True)\n paid_via = fields.Str(dump_only=True)\n brand = fields.Str(dump_only=True)\n exp_month = fields.Str(dump_only=True)\n", "issue": "Allow bank, cheque and onsite for payment_mode in orders schema\n**Describe the bug**\r\nCurrently setting bank, cheque and onsite for payment_mode returns 422 error.\r\n\r\n**Expected behavior**\r\nPayment Mode should allow bank, cheque and onsite as options.\n", "before_files": [{"content": "from flask import request\nfrom marshmallow import post_dump, validates_schema, validate\nfrom marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Relationship\n\nfrom app import db\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.schema.base import SoftDeletionSchema\nfrom utils.common import use_defaults\n\n\nclass OnSiteTicketSchema(SoftDeletionSchema):\n class Meta:\n type_ = 'on-site-ticket'\n inflect = dasherize\n\n id = fields.Str(load_only=True, required=True)\n quantity = fields.Str(load_only=True, required=True)\n\n\n@use_defaults()\nclass OrderSchema(SoftDeletionSchema):\n class Meta:\n type_ = 'order'\n self_view = 'v1.order_detail'\n self_view_kwargs = {'order_identifier': '<identifier>'}\n inflect = dasherize\n\n @post_dump\n def generate_payment_url(self, data):\n \"\"\"\n generate payment url for an order\n :param data:\n :return:\n \"\"\"\n if 'POST' in request.method or ('GET' in request.method and 'regenerate' in request.args) and 'completed' != \\\n data[\"status\"]:\n if data['payment_mode'] == 'stripe':\n data['payment_url'] = 'stripe://payment'\n return data\n\n @validates_schema\n def initial_values(self, data):\n if data.get('payment_mode') is None and 'POST' in request.method:\n data['payment_mode'] = 'free'\n return data\n\n id = fields.Str(dump_only=True)\n identifier = fields.Str(dump_only=True)\n amount = fields.Float(validate=lambda n: n > 0, allow_none=True)\n address = fields.Str(allow_none=True)\n city = fields.Str(allow_none=True)\n state = fields.Str(db.String, allow_none=True)\n country = fields.Str(allow_none=True)\n zipcode = fields.Str(allow_none=True)\n completed_at = fields.DateTime(dump_only=True)\n created_at = fields.DateTime(dump_only=True)\n transaction_id = fields.Str(dump_only=True)\n payment_mode = fields.Str(default=\"free\",\n validate=validate.OneOf(choices=[\"free\", \"stripe\", \"paypal\"]), allow_none=True)\n paid_via = fields.Str(dump_only=True)\n brand = fields.Str(dump_only=True)\n exp_month = fields.Str(dump_only=True)\n exp_year = fields.Str(dump_only=True)\n last4 = fields.Str(dump_only=True)\n status = fields.Str(validate=validate.OneOf(choices=[\"pending\", \"cancelled\", \"completed\", \"placed\", \"expired\"]))\n discount_code_id = fields.Str(allow_none=True)\n payment_url = fields.Str(dump_only=True)\n cancel_note = fields.Str(allow_none=True)\n order_notes = fields.Str(allow_none=True)\n tickets_pdf_url = fields.Url(dump_only=True)\n\n # only used in the case of an on site attendee.\n on_site_tickets = fields.List(cls_or_instance=fields.Nested(OnSiteTicketSchema), load_only=True, allow_none=True)\n\n attendees = Relationship(attribute='ticket_holders',\n self_view='v1.order_attendee',\n self_view_kwargs={'order_identifier': '<identifier>'},\n related_view='v1.attendee_list',\n related_view_kwargs={'order_identifier': '<identifier>'},\n schema='AttendeeSchemaPublic',\n many=True,\n type_='attendee')\n\n tickets = Relationship(attribute='tickets',\n self_view='v1.order_ticket',\n self_view_kwargs={'order_identifier': '<identifier>'},\n related_view='v1.ticket_list',\n related_view_kwargs={'order_identifier': '<identifier>'},\n schema='TicketSchemaPublic',\n many=True,\n type_=\"ticket\")\n\n user = Relationship(attribute='user',\n self_view='v1.order_user',\n self_view_kwargs={'order_identifier': '<identifier>'},\n related_view='v1.user_detail',\n related_view_kwargs={'id': '<user_id>'},\n schema='UserSchemaPublic',\n type_=\"user\")\n\n event = Relationship(attribute='event',\n self_view='v1.order_event',\n self_view_kwargs={'order_identifier': '<identifier>'},\n related_view='v1.event_detail',\n related_view_kwargs={'id': '<event_id>'},\n schema='EventSchemaPublic',\n type_=\"event\")\n\n marketer = Relationship(attribute='marketer',\n self_view='v1.order_marketer',\n self_view_kwargs={'order_identifier': '<identifier>'},\n related_view='v1.user_detail',\n related_view_kwargs={'id': '<marketer_id>'},\n schema='UserSchemaPublic',\n type_=\"user\")\n\n discount_code = Relationship(attribute='discount_code',\n self_view='v1.order_discount',\n self_view_kwargs={'order_identifier': '<identifier>'},\n related_view='v1.discount_code_detail',\n related_view_kwargs={'id': '<discount_code_id>'},\n schema='DiscountCodeSchemaPublic',\n type_=\"discount-code\")\n", "path": "app/api/schema/orders.py"}]} | 1,925 | 191 |
gh_patches_debug_63639 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2239 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Logged out view of list of lists is empty
This is a 🤦 on my part -- it should directly query the list of public lists, instead of trying to use the redis cache, which relies on logged in users
</issue>
<code>
[start of bookwyrm/views/list/lists.py]
1 """ book list views"""
2 from django.contrib.auth.decorators import login_required
3 from django.core.paginator import Paginator
4 from django.shortcuts import redirect
5 from django.template.response import TemplateResponse
6 from django.utils.decorators import method_decorator
7 from django.views import View
8
9 from bookwyrm import forms, models
10 from bookwyrm.lists_stream import ListsStream
11 from bookwyrm.views.helpers import get_user_from_username
12
13
14 # pylint: disable=no-self-use
15 class Lists(View):
16 """book list page"""
17
18 def get(self, request):
19 """display a book list"""
20 lists = ListsStream().get_list_stream(request.user)
21 paginated = Paginator(lists, 12)
22 data = {
23 "lists": paginated.get_page(request.GET.get("page")),
24 "list_form": forms.ListForm(),
25 "path": "/list",
26 }
27 return TemplateResponse(request, "lists/lists.html", data)
28
29 @method_decorator(login_required, name="dispatch")
30 # pylint: disable=unused-argument
31 def post(self, request):
32 """create a book_list"""
33 form = forms.ListForm(request.POST)
34 if not form.is_valid():
35 return redirect("lists")
36 book_list = form.save()
37 # list should not have a group if it is not group curated
38 if not book_list.curation == "group":
39 book_list.group = None
40 book_list.save(broadcast=False)
41
42 return redirect(book_list.local_path)
43
44
45 @method_decorator(login_required, name="dispatch")
46 class SavedLists(View):
47 """saved book list page"""
48
49 def get(self, request):
50 """display book lists"""
51 # hide lists with no approved books
52 lists = request.user.saved_lists.order_by("-updated_date")
53
54 paginated = Paginator(lists, 12)
55 data = {
56 "lists": paginated.get_page(request.GET.get("page")),
57 "list_form": forms.ListForm(),
58 "path": "/list",
59 }
60 return TemplateResponse(request, "lists/lists.html", data)
61
62
63 @method_decorator(login_required, name="dispatch")
64 class UserLists(View):
65 """a user's book list page"""
66
67 def get(self, request, username):
68 """display a book list"""
69 user = get_user_from_username(request.user, username)
70 lists = models.List.privacy_filter(request.user).filter(user=user)
71 paginated = Paginator(lists, 12)
72
73 data = {
74 "user": user,
75 "is_self": request.user.id == user.id,
76 "lists": paginated.get_page(request.GET.get("page")),
77 "list_form": forms.ListForm(),
78 "path": user.local_path + "/lists",
79 }
80 return TemplateResponse(request, "user/lists.html", data)
81
[end of bookwyrm/views/list/lists.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/list/lists.py b/bookwyrm/views/list/lists.py
--- a/bookwyrm/views/list/lists.py
+++ b/bookwyrm/views/list/lists.py
@@ -17,7 +17,10 @@
def get(self, request):
"""display a book list"""
- lists = ListsStream().get_list_stream(request.user)
+ if request.user.is_authenticated:
+ lists = ListsStream().get_list_stream(request.user)
+ else:
+ lists = models.List.objects.filter(privacy="public")
paginated = Paginator(lists, 12)
data = {
"lists": paginated.get_page(request.GET.get("page")),
| {"golden_diff": "diff --git a/bookwyrm/views/list/lists.py b/bookwyrm/views/list/lists.py\n--- a/bookwyrm/views/list/lists.py\n+++ b/bookwyrm/views/list/lists.py\n@@ -17,7 +17,10 @@\n \n def get(self, request):\n \"\"\"display a book list\"\"\"\n- lists = ListsStream().get_list_stream(request.user)\n+ if request.user.is_authenticated:\n+ lists = ListsStream().get_list_stream(request.user)\n+ else:\n+ lists = models.List.objects.filter(privacy=\"public\")\n paginated = Paginator(lists, 12)\n data = {\n \"lists\": paginated.get_page(request.GET.get(\"page\")),\n", "issue": "Logged out view of list of lists is empty\nThis is a \ud83e\udd26 on my part -- it should directly query the list of public lists, instead of trying to use the redis cache, which relies on logged in users\n", "before_files": [{"content": "\"\"\" book list views\"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.lists_stream import ListsStream\nfrom bookwyrm.views.helpers import get_user_from_username\n\n\n# pylint: disable=no-self-use\nclass Lists(View):\n \"\"\"book list page\"\"\"\n\n def get(self, request):\n \"\"\"display a book list\"\"\"\n lists = ListsStream().get_list_stream(request.user)\n paginated = Paginator(lists, 12)\n data = {\n \"lists\": paginated.get_page(request.GET.get(\"page\")),\n \"list_form\": forms.ListForm(),\n \"path\": \"/list\",\n }\n return TemplateResponse(request, \"lists/lists.html\", data)\n\n @method_decorator(login_required, name=\"dispatch\")\n # pylint: disable=unused-argument\n def post(self, request):\n \"\"\"create a book_list\"\"\"\n form = forms.ListForm(request.POST)\n if not form.is_valid():\n return redirect(\"lists\")\n book_list = form.save()\n # list should not have a group if it is not group curated\n if not book_list.curation == \"group\":\n book_list.group = None\n book_list.save(broadcast=False)\n\n return redirect(book_list.local_path)\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass SavedLists(View):\n \"\"\"saved book list page\"\"\"\n\n def get(self, request):\n \"\"\"display book lists\"\"\"\n # hide lists with no approved books\n lists = request.user.saved_lists.order_by(\"-updated_date\")\n\n paginated = Paginator(lists, 12)\n data = {\n \"lists\": paginated.get_page(request.GET.get(\"page\")),\n \"list_form\": forms.ListForm(),\n \"path\": \"/list\",\n }\n return TemplateResponse(request, \"lists/lists.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass UserLists(View):\n \"\"\"a user's book list page\"\"\"\n\n def get(self, request, username):\n \"\"\"display a book list\"\"\"\n user = get_user_from_username(request.user, username)\n lists = models.List.privacy_filter(request.user).filter(user=user)\n paginated = Paginator(lists, 12)\n\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n \"lists\": paginated.get_page(request.GET.get(\"page\")),\n \"list_form\": forms.ListForm(),\n \"path\": user.local_path + \"/lists\",\n }\n return TemplateResponse(request, \"user/lists.html\", data)\n", "path": "bookwyrm/views/list/lists.py"}]} | 1,319 | 150 |
gh_patches_debug_17668 | rasdani/github-patches | git_diff | scikit-image__scikit-image-3642 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"[Errno 36] File name too long:" when using imread on remote resource with long querystring
## Description
When using skimage.io.imread with a remote resource, a long query string on the remote resource will cause a failure to read the remote resource, because the temporary file cannot be created.
e.g.
The following works fine
```
>>> im = imread('https://c1.staticflickr.com/9/8370/8429454143_1066b73c04_o.jpg?{}'.format(''.join(['s' for i in range(100)])))
```
while the one below fails
```
>>> im = imread('https://c1.staticflickr.com/9/8370/8429454143_1066b73c04_o.jpg?{}'.format(''.join(['s' for i in range(300)])))
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/skimage/io/util.py", line 28, in file_or_url_context
with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:
File "/usr/lib/python3.5/tempfile.py", line 688, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/usr/lib/python3.5/tempfile.py", line 399, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
OSError: [Errno 36] File name too long: '/tmp/tmpmfnujlq6.jpg?ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/skimage/io/_io.py", line 61, in imread
with file_or_url_context(fname) as fname:
File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/skimage/io/util.py", line 34, in file_or_url_context
os.remove(f.name)
UnboundLocalError: local variable 'f' referenced before assignment
```
## Way to reproduce
[If reporting a bug, please include the following important information:]
- [x] Code example
- [x] Operating system and version
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
```
- [x] Python version: Python 3.5.2
- [x] scikit-image version (run `skimage.__version__`): skimage.__version__
'0.14.1'
## Proposal
https://github.com/scikit-image/scikit-image/blob/d24634d86e4f90fe96377209d66ed114b9b601e4/skimage/io/util.py#L22
The querystring should be excluded from the temporary file name extension
</issue>
<code>
[start of skimage/io/util.py]
1 from urllib.request import urlopen
2
3 import os
4 import re
5 import tempfile
6 from contextlib import contextmanager
7
8
9 URL_REGEX = re.compile(r'http://|https://|ftp://|file://|file:\\')
10
11
12 def is_url(filename):
13 """Return True if string is an http or ftp path."""
14 return (isinstance(filename, str) and
15 URL_REGEX.match(filename) is not None)
16
17
18 @contextmanager
19 def file_or_url_context(resource_name):
20 """Yield name of file from the given resource (i.e. file or url)."""
21 if is_url(resource_name):
22 _, ext = os.path.splitext(resource_name)
23 try:
24 with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:
25 u = urlopen(resource_name)
26 f.write(u.read())
27 # f must be closed before yielding
28 yield f.name
29 finally:
30 os.remove(f.name)
31 else:
32 yield resource_name
33
[end of skimage/io/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/io/util.py b/skimage/io/util.py
--- a/skimage/io/util.py
+++ b/skimage/io/util.py
@@ -1,4 +1,4 @@
-from urllib.request import urlopen
+import urllib.parse
import os
import re
@@ -19,10 +19,11 @@
def file_or_url_context(resource_name):
"""Yield name of file from the given resource (i.e. file or url)."""
if is_url(resource_name):
- _, ext = os.path.splitext(resource_name)
+ url_components = urllib.parse.urlparse(resource_name)
+ _, ext = os.path.splitext(url_components.path)
try:
with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:
- u = urlopen(resource_name)
+ u = urllib.request.urlopen(resource_name)
f.write(u.read())
# f must be closed before yielding
yield f.name
| {"golden_diff": "diff --git a/skimage/io/util.py b/skimage/io/util.py\n--- a/skimage/io/util.py\n+++ b/skimage/io/util.py\n@@ -1,4 +1,4 @@\n-from urllib.request import urlopen\n+import urllib.parse\n \n import os\n import re\n@@ -19,10 +19,11 @@\n def file_or_url_context(resource_name):\n \"\"\"Yield name of file from the given resource (i.e. file or url).\"\"\"\n if is_url(resource_name):\n- _, ext = os.path.splitext(resource_name)\n+ url_components = urllib.parse.urlparse(resource_name)\n+ _, ext = os.path.splitext(url_components.path)\n try:\n with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:\n- u = urlopen(resource_name)\n+ u = urllib.request.urlopen(resource_name)\n f.write(u.read())\n # f must be closed before yielding\n yield f.name\n", "issue": "\"[Errno 36] File name too long:\" when using imread on remote resource with long querystring\n## Description\r\nWhen using skimage.io.imread with a remote resource, a long query string on the remote resource will cause a failure to read the remote resource, because the temporary file cannot be created.\r\n\r\ne.g. \r\n\r\nThe following works fine\r\n```\r\n>>> im = imread('https://c1.staticflickr.com/9/8370/8429454143_1066b73c04_o.jpg?{}'.format(''.join(['s' for i in range(100)])))\r\n\r\n```\r\n\r\nwhile the one below fails\r\n\r\n```\r\n>>> im = imread('https://c1.staticflickr.com/9/8370/8429454143_1066b73c04_o.jpg?{}'.format(''.join(['s' for i in range(300)])))\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/skimage/io/util.py\", line 28, in file_or_url_context\r\n with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:\r\n File \"/usr/lib/python3.5/tempfile.py\", line 688, in NamedTemporaryFile\r\n (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)\r\n File \"/usr/lib/python3.5/tempfile.py\", line 399, in _mkstemp_inner\r\n fd = _os.open(file, flags, 0o600)\r\nOSError: [Errno 36] File name too long: '/tmp/tmpmfnujlq6.jpg?ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.5/dist-packages/skimage/io/_io.py\", line 61, in imread\r\n with file_or_url_context(fname) as fname:\r\n File \"/usr/lib/python3.5/contextlib.py\", line 59, in __enter__\r\n return next(self.gen)\r\n File \"/usr/local/lib/python3.5/dist-packages/skimage/io/util.py\", line 34, in file_or_url_context\r\n os.remove(f.name)\r\nUnboundLocalError: local variable 'f' referenced before assignment\r\n\r\n```\r\n\r\n## Way to reproduce\r\n[If reporting a bug, please include the following important information:]\r\n- [x] Code example\r\n- [x] Operating system and version\r\n```\r\nDISTRIB_ID=Ubuntu\r\nDISTRIB_RELEASE=16.04\r\nDISTRIB_CODENAME=xenial\r\nDISTRIB_DESCRIPTION=\"Ubuntu 16.04.5 LTS\"\r\n```\r\n- [x] Python version: Python 3.5.2\r\n- [x] scikit-image version (run `skimage.__version__`): skimage.__version__\r\n'0.14.1'\r\n\r\n## Proposal\r\n\r\nhttps://github.com/scikit-image/scikit-image/blob/d24634d86e4f90fe96377209d66ed114b9b601e4/skimage/io/util.py#L22\r\n\r\nThe querystring should be excluded from the temporary file name extension\n", "before_files": [{"content": "from urllib.request import urlopen\n\nimport os\nimport re\nimport tempfile\nfrom contextlib import contextmanager\n\n\nURL_REGEX = re.compile(r'http://|https://|ftp://|file://|file:\\\\')\n\n\ndef is_url(filename):\n \"\"\"Return True if string is an http or ftp path.\"\"\"\n return (isinstance(filename, str) and\n URL_REGEX.match(filename) is not None)\n\n\n@contextmanager\ndef file_or_url_context(resource_name):\n \"\"\"Yield name of file from the given resource (i.e. file or url).\"\"\"\n if is_url(resource_name):\n _, ext = os.path.splitext(resource_name)\n try:\n with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:\n u = urlopen(resource_name)\n f.write(u.read())\n # f must be closed before yielding\n yield f.name\n finally:\n os.remove(f.name)\n else:\n yield resource_name\n", "path": "skimage/io/util.py"}]} | 1,679 | 207 |
gh_patches_debug_25033 | rasdani/github-patches | git_diff | apache__airflow-6783 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[AIRFLOW-3014] Fix multiple alembic heads
Make sure you have checked _all_ steps below.
### Jira
- [ ] My PR addresses the following [Airflow Jira](https://issues.apache.org/jira/browse/AIRFLOW/) issues and references them in the PR title. For example, "\[AIRFLOW-XXX\] My Airflow PR"
- https://issues.apache.org/jira/browse/AIRFLOW-6224
- In case you are fixing a typo in the documentation you can prepend your commit with \[AIRFLOW-XXX\], code changes always need a Jira issue.
- In case you are proposing a fundamental code change, you need to create an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)).
- In case you are adding a dependency, check if the license complies with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
### Description
- [ ] Here are some details about my PR, including screenshots of any UI changes:
### Tests
- [ ] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
### Commits
- [ ] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
1. Subject is separated from body by a blank line
1. Subject is limited to 50 characters (not including Jira issue reference)
1. Subject does not end with a period
1. Subject uses the imperative mood ("add", not "adding")
1. Body wraps at 72 characters
1. Body explains "what" and "why", not "how"
### Documentation
- [ ] In case of new functionality, my PR adds documentation that describes how to use it.
- All the public functions and the classes in the PR contain docstrings that explain what it does
- If you implement backwards incompatible changes, please leave a note in the [Updating.md](https://github.com/apache/airflow/blob/master/UPDATING.md) so we can assign it to a appropriate release
</issue>
<code>
[start of airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18
19 """Increase length of password column in connection table
20
21 Revision ID: c1840b4bcf1a
22 Revises: 004c1210f153
23 Create Date: 2019-10-02 16:56:54.865550
24
25 """
26
27 import sqlalchemy as sa
28 from alembic import op
29
30 # revision identifiers, used by Alembic.
31 revision = 'c1840b4bcf1a'
32 down_revision = '004c1210f153'
33 branch_labels = None
34 depends_on = None
35
36
37 def upgrade():
38 conn = op.get_bind()
39 if conn.dialect.name == 'sqlite':
40 # SQLite does not allow column modifications so we need to skip this migration
41 return
42
43 op.alter_column(table_name='connection',
44 column_name='password',
45 type_=sa.String(length=5000))
46
47
48 def downgrade():
49 # Can't be undone
50 pass
51
[end of airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py b/airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py
deleted file mode 100644
--- a/airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py
+++ /dev/null
@@ -1,50 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-"""Increase length of password column in connection table
-
-Revision ID: c1840b4bcf1a
-Revises: 004c1210f153
-Create Date: 2019-10-02 16:56:54.865550
-
-"""
-
-import sqlalchemy as sa
-from alembic import op
-
-# revision identifiers, used by Alembic.
-revision = 'c1840b4bcf1a'
-down_revision = '004c1210f153'
-branch_labels = None
-depends_on = None
-
-
-def upgrade():
- conn = op.get_bind()
- if conn.dialect.name == 'sqlite':
- # SQLite does not allow column modifications so we need to skip this migration
- return
-
- op.alter_column(table_name='connection',
- column_name='password',
- type_=sa.String(length=5000))
-
-
-def downgrade():
- # Can't be undone
- pass
| {"golden_diff": "diff --git a/airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py b/airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py\ndeleted file mode 100644\n--- a/airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py\n+++ /dev/null\n@@ -1,50 +0,0 @@\n-#\n-# Licensed to the Apache Software Foundation (ASF) under one\n-# or more contributor license agreements. See the NOTICE file\n-# distributed with this work for additional information\n-# regarding copyright ownership. The ASF licenses this file\n-# to you under the Apache License, Version 2.0 (the\n-# \"License\"); you may not use this file except in compliance\n-# with the License. You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing,\n-# software distributed under the License is distributed on an\n-# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-# KIND, either express or implied. See the License for the\n-# specific language governing permissions and limitations\n-# under the License.\n-\n-\"\"\"Increase length of password column in connection table\n-\n-Revision ID: c1840b4bcf1a\n-Revises: 004c1210f153\n-Create Date: 2019-10-02 16:56:54.865550\n-\n-\"\"\"\n-\n-import sqlalchemy as sa\n-from alembic import op\n-\n-# revision identifiers, used by Alembic.\n-revision = 'c1840b4bcf1a'\n-down_revision = '004c1210f153'\n-branch_labels = None\n-depends_on = None\n-\n-\n-def upgrade():\n- conn = op.get_bind()\n- if conn.dialect.name == 'sqlite':\n- # SQLite does not allow column modifications so we need to skip this migration\n- return\n-\n- op.alter_column(table_name='connection',\n- column_name='password',\n- type_=sa.String(length=5000))\n-\n-\n-def downgrade():\n- # Can't be undone\n- pass\n", "issue": "[AIRFLOW-3014] Fix multiple alembic heads\nMake sure you have checked _all_ steps below.\r\n\r\n### Jira\r\n\r\n- [ ] My PR addresses the following [Airflow Jira](https://issues.apache.org/jira/browse/AIRFLOW/) issues and references them in the PR title. For example, \"\\[AIRFLOW-XXX\\] My Airflow PR\"\r\n - https://issues.apache.org/jira/browse/AIRFLOW-6224\r\n - In case you are fixing a typo in the documentation you can prepend your commit with \\[AIRFLOW-XXX\\], code changes always need a Jira issue.\r\n - In case you are proposing a fundamental code change, you need to create an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)).\r\n - In case you are adding a dependency, check if the license complies with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).\r\n\r\n### Description\r\n\r\n- [ ] Here are some details about my PR, including screenshots of any UI changes:\r\n\r\n### Tests\r\n\r\n- [ ] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:\r\n\r\n### Commits\r\n\r\n- [ ] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from \"[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)\":\r\n 1. Subject is separated from body by a blank line\r\n 1. Subject is limited to 50 characters (not including Jira issue reference)\r\n 1. Subject does not end with a period\r\n 1. Subject uses the imperative mood (\"add\", not \"adding\")\r\n 1. Body wraps at 72 characters\r\n 1. Body explains \"what\" and \"why\", not \"how\"\r\n\r\n### Documentation\r\n\r\n- [ ] In case of new functionality, my PR adds documentation that describes how to use it.\r\n - All the public functions and the classes in the PR contain docstrings that explain what it does\r\n - If you implement backwards incompatible changes, please leave a note in the [Updating.md](https://github.com/apache/airflow/blob/master/UPDATING.md) so we can assign it to a appropriate release\r\n\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"Increase length of password column in connection table\n\nRevision ID: c1840b4bcf1a\nRevises: 004c1210f153\nCreate Date: 2019-10-02 16:56:54.865550\n\n\"\"\"\n\nimport sqlalchemy as sa\nfrom alembic import op\n\n# revision identifiers, used by Alembic.\nrevision = 'c1840b4bcf1a'\ndown_revision = '004c1210f153'\nbranch_labels = None\ndepends_on = None\n\n\ndef upgrade():\n conn = op.get_bind()\n if conn.dialect.name == 'sqlite':\n # SQLite does not allow column modifications so we need to skip this migration\n return\n\n op.alter_column(table_name='connection',\n column_name='password',\n type_=sa.String(length=5000))\n\n\ndef downgrade():\n # Can't be undone\n pass\n", "path": "airflow/migrations/versions/c1840b4bcf1a_increase_length_of_password_column_in_.py"}]} | 1,588 | 554 |
gh_patches_debug_67223 | rasdani/github-patches | git_diff | svthalia__concrexit-1867 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix payable registry cache using old data
### Describe the bug
Payments are a mess. For example, if you pay for an event registration, delete the payment (through the admin or api), creating a new TPay payment through the api fails with 409 Conflict, there is still a payment in the registration model, but not in the payments api. Furthermore, paying with a different payment type works, but those payments can then not be removed. Also see #1806. I suspect there are many more related problems.
### How to reproduce
Play around with payable things, deleting and recreating them, or deleting and recreating payments.
### Expected behaviour
When a payable is not paid and should be payable with TPay, paying does not fail. Deleting a payment makes the payable not-paid as it was before creating the payment. Deleting or changing a payable is either impossible, or also deletes a payment that belongs to it.
### Additional context
I think it would be a good idea to combine this with #1000. Some test-driven development would make sense for payments, and I think the expected behaviour should be well-testable. Of course the problems may not be entirely within the payments app, but also in the payables defined by other apps.
</issue>
<code>
[start of website/payments/payables.py]
1 from functools import lru_cache
2
3 from django.db.models import Model
4
5 _registry = {}
6
7
8 class NotRegistered(Exception):
9 pass
10
11
12 class Payable:
13 def __init__(self, model: Model):
14 self.model = model
15
16 @property
17 def pk(self):
18 return self.model.pk
19
20 @property
21 def payment(self):
22 return self.model.payment
23
24 @payment.setter
25 def payment(self, payment):
26 self.model.payment = payment
27
28 @property
29 def payment_amount(self):
30 raise NotImplementedError
31
32 @property
33 def payment_topic(self):
34 raise NotImplementedError
35
36 @property
37 def payment_notes(self):
38 raise NotImplementedError
39
40 @property
41 def payment_payer(self):
42 raise NotImplementedError
43
44 @property
45 def tpay_allowed(self):
46 return True
47
48 def can_manage_payment(self, member):
49 raise NotImplementedError
50
51
52 class Payables:
53 _registry = {}
54
55 @lru_cache(maxsize=None)
56 def _get_key(self, model):
57 return f"{model._meta.app_label}_{model._meta.model_name}"
58
59 @lru_cache(maxsize=None)
60 def get_payable(self, model: Model) -> Payable:
61 if self._get_key(model) not in self._registry:
62 raise NotRegistered(f"No Payable registered for {self._get_key(model)}")
63 return self._registry[self._get_key(model)](model)
64
65 def register(self, model: Model, payable_class: Payable):
66 self._registry[self._get_key(model)] = payable_class
67
68
69 payables = Payables()
70
[end of website/payments/payables.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/payments/payables.py b/website/payments/payables.py
--- a/website/payments/payables.py
+++ b/website/payments/payables.py
@@ -56,7 +56,6 @@
def _get_key(self, model):
return f"{model._meta.app_label}_{model._meta.model_name}"
- @lru_cache(maxsize=None)
def get_payable(self, model: Model) -> Payable:
if self._get_key(model) not in self._registry:
raise NotRegistered(f"No Payable registered for {self._get_key(model)}")
| {"golden_diff": "diff --git a/website/payments/payables.py b/website/payments/payables.py\n--- a/website/payments/payables.py\n+++ b/website/payments/payables.py\n@@ -56,7 +56,6 @@\n def _get_key(self, model):\n return f\"{model._meta.app_label}_{model._meta.model_name}\"\n \n- @lru_cache(maxsize=None)\n def get_payable(self, model: Model) -> Payable:\n if self._get_key(model) not in self._registry:\n raise NotRegistered(f\"No Payable registered for {self._get_key(model)}\")\n", "issue": "Fix payable registry cache using old data\n### Describe the bug\r\nPayments are a mess. For example, if you pay for an event registration, delete the payment (through the admin or api), creating a new TPay payment through the api fails with 409 Conflict, there is still a payment in the registration model, but not in the payments api. Furthermore, paying with a different payment type works, but those payments can then not be removed. Also see #1806. I suspect there are many more related problems.\r\n\r\n### How to reproduce\r\nPlay around with payable things, deleting and recreating them, or deleting and recreating payments.\r\n\r\n### Expected behaviour\r\nWhen a payable is not paid and should be payable with TPay, paying does not fail. Deleting a payment makes the payable not-paid as it was before creating the payment. Deleting or changing a payable is either impossible, or also deletes a payment that belongs to it.\r\n\r\n### Additional context\r\nI think it would be a good idea to combine this with #1000. Some test-driven development would make sense for payments, and I think the expected behaviour should be well-testable. Of course the problems may not be entirely within the payments app, but also in the payables defined by other apps.\r\n\n", "before_files": [{"content": "from functools import lru_cache\n\nfrom django.db.models import Model\n\n_registry = {}\n\n\nclass NotRegistered(Exception):\n pass\n\n\nclass Payable:\n def __init__(self, model: Model):\n self.model = model\n\n @property\n def pk(self):\n return self.model.pk\n\n @property\n def payment(self):\n return self.model.payment\n\n @payment.setter\n def payment(self, payment):\n self.model.payment = payment\n\n @property\n def payment_amount(self):\n raise NotImplementedError\n\n @property\n def payment_topic(self):\n raise NotImplementedError\n\n @property\n def payment_notes(self):\n raise NotImplementedError\n\n @property\n def payment_payer(self):\n raise NotImplementedError\n\n @property\n def tpay_allowed(self):\n return True\n\n def can_manage_payment(self, member):\n raise NotImplementedError\n\n\nclass Payables:\n _registry = {}\n\n @lru_cache(maxsize=None)\n def _get_key(self, model):\n return f\"{model._meta.app_label}_{model._meta.model_name}\"\n\n @lru_cache(maxsize=None)\n def get_payable(self, model: Model) -> Payable:\n if self._get_key(model) not in self._registry:\n raise NotRegistered(f\"No Payable registered for {self._get_key(model)}\")\n return self._registry[self._get_key(model)](model)\n\n def register(self, model: Model, payable_class: Payable):\n self._registry[self._get_key(model)] = payable_class\n\n\npayables = Payables()\n", "path": "website/payments/payables.py"}]} | 1,281 | 136 |
gh_patches_debug_37702 | rasdani/github-patches | git_diff | Textualize__textual-2605 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a `description` parameter to the work decorator, to use in place of the auto-generated description.
</issue>
<code>
[start of src/textual/_work_decorator.py]
1 """
2
3 A decorator used to create [workers](/guide/workers).
4 """
5
6
7 from __future__ import annotations
8
9 from functools import partial, wraps
10 from typing import TYPE_CHECKING, Callable, Coroutine, TypeVar, Union, cast, overload
11
12 from typing_extensions import ParamSpec, TypeAlias
13
14 if TYPE_CHECKING:
15 from .worker import Worker
16
17
18 FactoryParamSpec = ParamSpec("FactoryParamSpec")
19 DecoratorParamSpec = ParamSpec("DecoratorParamSpec")
20 ReturnType = TypeVar("ReturnType")
21
22 Decorator: TypeAlias = Callable[
23 [
24 Union[
25 Callable[DecoratorParamSpec, ReturnType],
26 Callable[DecoratorParamSpec, Coroutine[None, None, ReturnType]],
27 ]
28 ],
29 Callable[DecoratorParamSpec, "Worker[ReturnType]"],
30 ]
31
32
33 @overload
34 def work(
35 method: Callable[FactoryParamSpec, Coroutine[None, None, ReturnType]]
36 ) -> Callable[FactoryParamSpec, "Worker[ReturnType]"]:
37 ...
38
39
40 @overload
41 def work(
42 method: Callable[FactoryParamSpec, ReturnType]
43 ) -> Callable[FactoryParamSpec, "Worker[ReturnType]"]:
44 ...
45
46
47 @overload
48 def work(*, exclusive: bool = False) -> Decorator[..., ReturnType]:
49 ...
50
51
52 def work(
53 method: Callable[FactoryParamSpec, ReturnType]
54 | Callable[FactoryParamSpec, Coroutine[None, None, ReturnType]]
55 | None = None,
56 *,
57 name: str = "",
58 group: str = "default",
59 exit_on_error: bool = True,
60 exclusive: bool = False,
61 ) -> Callable[FactoryParamSpec, Worker[ReturnType]] | Decorator:
62 """A decorator used to create [workers](/guide/workers).
63
64 Args:
65 method: A function or coroutine.
66 name: A short string to identify the worker (in logs and debugging).
67 group: A short string to identify a group of workers.
68 exit_on_error: Exit the app if the worker raises an error. Set to `False` to suppress exceptions.
69 exclusive: Cancel all workers in the same group.
70 """
71
72 def decorator(
73 method: (
74 Callable[DecoratorParamSpec, ReturnType]
75 | Callable[DecoratorParamSpec, Coroutine[None, None, ReturnType]]
76 )
77 ) -> Callable[DecoratorParamSpec, Worker[ReturnType]]:
78 """The decorator."""
79
80 @wraps(method)
81 def decorated(
82 *args: DecoratorParamSpec.args, **kwargs: DecoratorParamSpec.kwargs
83 ) -> Worker[ReturnType]:
84 """The replaced callable."""
85 from .dom import DOMNode
86
87 self = args[0]
88 assert isinstance(self, DOMNode)
89
90 try:
91 positional_arguments = ", ".join(repr(arg) for arg in args[1:])
92 keyword_arguments = ", ".join(
93 f"{name}={value!r}" for name, value in kwargs.items()
94 )
95 tokens = [positional_arguments, keyword_arguments]
96 worker_description = f"{method.__name__}({', '.join(token for token in tokens if token)})"
97 except Exception:
98 worker_description = "<worker>"
99 worker = cast(
100 "Worker[ReturnType]",
101 self.run_worker(
102 partial(method, *args, **kwargs),
103 name=name or method.__name__,
104 group=group,
105 description=worker_description,
106 exclusive=exclusive,
107 exit_on_error=exit_on_error,
108 ),
109 )
110 return worker
111
112 return decorated
113
114 if method is None:
115 return decorator
116 else:
117 return decorator(method)
118
[end of src/textual/_work_decorator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/textual/_work_decorator.py b/src/textual/_work_decorator.py
--- a/src/textual/_work_decorator.py
+++ b/src/textual/_work_decorator.py
@@ -58,6 +58,7 @@
group: str = "default",
exit_on_error: bool = True,
exclusive: bool = False,
+ description: str | None = None,
) -> Callable[FactoryParamSpec, Worker[ReturnType]] | Decorator:
"""A decorator used to create [workers](/guide/workers).
@@ -67,6 +68,9 @@
group: A short string to identify a group of workers.
exit_on_error: Exit the app if the worker raises an error. Set to `False` to suppress exceptions.
exclusive: Cancel all workers in the same group.
+ description: Readable description of the worker for debugging purposes.
+ By default, it uses a string representation of the decorated method
+ and its arguments.
"""
def decorator(
@@ -87,22 +91,25 @@
self = args[0]
assert isinstance(self, DOMNode)
- try:
- positional_arguments = ", ".join(repr(arg) for arg in args[1:])
- keyword_arguments = ", ".join(
- f"{name}={value!r}" for name, value in kwargs.items()
- )
- tokens = [positional_arguments, keyword_arguments]
- worker_description = f"{method.__name__}({', '.join(token for token in tokens if token)})"
- except Exception:
- worker_description = "<worker>"
+ if description is not None:
+ debug_description = description
+ else:
+ try:
+ positional_arguments = ", ".join(repr(arg) for arg in args[1:])
+ keyword_arguments = ", ".join(
+ f"{name}={value!r}" for name, value in kwargs.items()
+ )
+ tokens = [positional_arguments, keyword_arguments]
+ debug_description = f"{method.__name__}({', '.join(token for token in tokens if token)})"
+ except Exception:
+ debug_description = "<worker>"
worker = cast(
"Worker[ReturnType]",
self.run_worker(
partial(method, *args, **kwargs),
name=name or method.__name__,
group=group,
- description=worker_description,
+ description=debug_description,
exclusive=exclusive,
exit_on_error=exit_on_error,
),
| {"golden_diff": "diff --git a/src/textual/_work_decorator.py b/src/textual/_work_decorator.py\n--- a/src/textual/_work_decorator.py\n+++ b/src/textual/_work_decorator.py\n@@ -58,6 +58,7 @@\n group: str = \"default\",\n exit_on_error: bool = True,\n exclusive: bool = False,\n+ description: str | None = None,\n ) -> Callable[FactoryParamSpec, Worker[ReturnType]] | Decorator:\n \"\"\"A decorator used to create [workers](/guide/workers).\n \n@@ -67,6 +68,9 @@\n group: A short string to identify a group of workers.\n exit_on_error: Exit the app if the worker raises an error. Set to `False` to suppress exceptions.\n exclusive: Cancel all workers in the same group.\n+ description: Readable description of the worker for debugging purposes.\n+ By default, it uses a string representation of the decorated method\n+ and its arguments.\n \"\"\"\n \n def decorator(\n@@ -87,22 +91,25 @@\n self = args[0]\n assert isinstance(self, DOMNode)\n \n- try:\n- positional_arguments = \", \".join(repr(arg) for arg in args[1:])\n- keyword_arguments = \", \".join(\n- f\"{name}={value!r}\" for name, value in kwargs.items()\n- )\n- tokens = [positional_arguments, keyword_arguments]\n- worker_description = f\"{method.__name__}({', '.join(token for token in tokens if token)})\"\n- except Exception:\n- worker_description = \"<worker>\"\n+ if description is not None:\n+ debug_description = description\n+ else:\n+ try:\n+ positional_arguments = \", \".join(repr(arg) for arg in args[1:])\n+ keyword_arguments = \", \".join(\n+ f\"{name}={value!r}\" for name, value in kwargs.items()\n+ )\n+ tokens = [positional_arguments, keyword_arguments]\n+ debug_description = f\"{method.__name__}({', '.join(token for token in tokens if token)})\"\n+ except Exception:\n+ debug_description = \"<worker>\"\n worker = cast(\n \"Worker[ReturnType]\",\n self.run_worker(\n partial(method, *args, **kwargs),\n name=name or method.__name__,\n group=group,\n- description=worker_description,\n+ description=debug_description,\n exclusive=exclusive,\n exit_on_error=exit_on_error,\n ),\n", "issue": "Add a `description` parameter to the work decorator, to use in place of the auto-generated description.\n\n", "before_files": [{"content": "\"\"\"\n\nA decorator used to create [workers](/guide/workers).\n\"\"\"\n\n\nfrom __future__ import annotations\n\nfrom functools import partial, wraps\nfrom typing import TYPE_CHECKING, Callable, Coroutine, TypeVar, Union, cast, overload\n\nfrom typing_extensions import ParamSpec, TypeAlias\n\nif TYPE_CHECKING:\n from .worker import Worker\n\n\nFactoryParamSpec = ParamSpec(\"FactoryParamSpec\")\nDecoratorParamSpec = ParamSpec(\"DecoratorParamSpec\")\nReturnType = TypeVar(\"ReturnType\")\n\nDecorator: TypeAlias = Callable[\n [\n Union[\n Callable[DecoratorParamSpec, ReturnType],\n Callable[DecoratorParamSpec, Coroutine[None, None, ReturnType]],\n ]\n ],\n Callable[DecoratorParamSpec, \"Worker[ReturnType]\"],\n]\n\n\n@overload\ndef work(\n method: Callable[FactoryParamSpec, Coroutine[None, None, ReturnType]]\n) -> Callable[FactoryParamSpec, \"Worker[ReturnType]\"]:\n ...\n\n\n@overload\ndef work(\n method: Callable[FactoryParamSpec, ReturnType]\n) -> Callable[FactoryParamSpec, \"Worker[ReturnType]\"]:\n ...\n\n\n@overload\ndef work(*, exclusive: bool = False) -> Decorator[..., ReturnType]:\n ...\n\n\ndef work(\n method: Callable[FactoryParamSpec, ReturnType]\n | Callable[FactoryParamSpec, Coroutine[None, None, ReturnType]]\n | None = None,\n *,\n name: str = \"\",\n group: str = \"default\",\n exit_on_error: bool = True,\n exclusive: bool = False,\n) -> Callable[FactoryParamSpec, Worker[ReturnType]] | Decorator:\n \"\"\"A decorator used to create [workers](/guide/workers).\n\n Args:\n method: A function or coroutine.\n name: A short string to identify the worker (in logs and debugging).\n group: A short string to identify a group of workers.\n exit_on_error: Exit the app if the worker raises an error. Set to `False` to suppress exceptions.\n exclusive: Cancel all workers in the same group.\n \"\"\"\n\n def decorator(\n method: (\n Callable[DecoratorParamSpec, ReturnType]\n | Callable[DecoratorParamSpec, Coroutine[None, None, ReturnType]]\n )\n ) -> Callable[DecoratorParamSpec, Worker[ReturnType]]:\n \"\"\"The decorator.\"\"\"\n\n @wraps(method)\n def decorated(\n *args: DecoratorParamSpec.args, **kwargs: DecoratorParamSpec.kwargs\n ) -> Worker[ReturnType]:\n \"\"\"The replaced callable.\"\"\"\n from .dom import DOMNode\n\n self = args[0]\n assert isinstance(self, DOMNode)\n\n try:\n positional_arguments = \", \".join(repr(arg) for arg in args[1:])\n keyword_arguments = \", \".join(\n f\"{name}={value!r}\" for name, value in kwargs.items()\n )\n tokens = [positional_arguments, keyword_arguments]\n worker_description = f\"{method.__name__}({', '.join(token for token in tokens if token)})\"\n except Exception:\n worker_description = \"<worker>\"\n worker = cast(\n \"Worker[ReturnType]\",\n self.run_worker(\n partial(method, *args, **kwargs),\n name=name or method.__name__,\n group=group,\n description=worker_description,\n exclusive=exclusive,\n exit_on_error=exit_on_error,\n ),\n )\n return worker\n\n return decorated\n\n if method is None:\n return decorator\n else:\n return decorator(method)\n", "path": "src/textual/_work_decorator.py"}]} | 1,561 | 547 |
gh_patches_debug_4548 | rasdani/github-patches | git_diff | capitalone__DataProfiler-739 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Windows Install error - ValueError: path 'resources/' cannot end with '/
https://github.com/capitalone/DataProfiler/blob/5b04b7fe5ee3556235c397efb69b32cd5d364a3b/setup.py#L33
Ran into an install isue
ValueError: path 'resources/' cannot end with '/
As per
https://stackoverflow.com/questions/20356482/valueerror-path-conf-cannot-end-with
resource_dir = "resources/"
needs to change to
resource_dir = "resources"
Thank you.
</issue>
<code>
[start of setup.py]
1 """A setuptools for the Data Profiler Application and Python Libraries."""
2
3 import os
4
5 # To use a consistent encoding
6 from codecs import open
7 from os import path
8
9 # Always prefer setuptools over distutils
10 from setuptools import find_packages, setup
11
12 # Load package version
13 from dataprofiler.version import __version__
14
15 here = path.abspath(path.dirname(__file__))
16
17 # Get the long description from the README file
18 with open(path.join(here, "README.md"), encoding="utf-8") as f:
19 LONG_DESCRIPTION = f.read()
20
21 # Get the install_requirements from requirements.txt
22 with open(path.join(here, "requirements.txt"), encoding="utf-8") as f:
23 required_packages = f.read().splitlines()
24
25 # Get the install_requirements from requirements-ml.txt
26 with open(path.join(here, "requirements-ml.txt"), encoding="utf-8") as f:
27 ml_packages = f.read().splitlines()
28
29 # Get the install_requirements from requirements-reports.txt
30 with open(path.join(here, "requirements-reports.txt"), encoding="utf-8") as f:
31 reports_packages = f.read().splitlines()
32
33 resource_dir = "resources/"
34 default_labeler_files = [
35 (d, [os.path.join(d, f) for f in files]) for d, _, files in os.walk(resource_dir)
36 ]
37
38
39 DESCRIPTION = (
40 "What is in your data? Detect schema, statistics and entities in almost any file."
41 )
42
43 setup(
44 name="DataProfiler",
45 version=__version__,
46 python_requires=">=3.8",
47 description=DESCRIPTION,
48 long_description=LONG_DESCRIPTION,
49 long_description_content_type="text/markdown",
50 # The project's main homepage.
51 url="https://github.com/capitalone/data-profiler",
52 # Author details
53 author="Jeremy Goodsitt, Taylor Turner, Michael Davis, Kenny Bean, Tyler Farnan",
54 # Choose your license
55 license="Apache License, Version 2.0",
56 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
57 classifiers=[
58 # How mature is this project? Common values are
59 # 3 - Alpha
60 # 4 - Beta
61 # 5 - Production/Stable
62 "Development Status :: 5 - Production/Stable",
63 # Indicate who your project is intended for
64 "Intended Audience :: Developers",
65 "Intended Audience :: Education",
66 "Intended Audience :: Information Technology",
67 "Intended Audience :: Science/Research",
68 "Intended Audience :: System Administrators",
69 "Topic :: Education",
70 "Topic :: Scientific/Engineering",
71 "Topic :: Scientific/Engineering :: Information Analysis",
72 "Topic :: Security",
73 "Topic :: Software Development :: Build Tools",
74 # Pick your license as you wish (should match "license" above)
75 "License :: OSI Approved :: Apache Software License",
76 # Specify the Python versions you support here. In particular, ensure
77 # that you indicate whether you support Python 3 or both.
78 "Programming Language :: Python :: 3",
79 ],
80 # What does your project relate to?
81 keywords="Data Investigation",
82 # You can just specify the packages manually here if your project is
83 # simple. Or you can use find_packages().
84 # packages=find_packages(exclude=['src/test', 'src/sample']),
85 packages=find_packages(exclude=["tests", "examples"]),
86 # List run-time dependencies here. These will be installed by pip when
87 # your project is installed. For an analysis of "install_requires" vs pip's
88 # requirements files see:
89 # https://packaging.python.org/en/latest/requirements.html
90 install_requires=required_packages,
91 # List of run-time dependencies for the labeler. These will be installed
92 # by pip when someone installs the project[<label>].
93 extras_require={
94 "ml": ml_packages,
95 "reports": reports_packages,
96 "full": ml_packages + reports_packages,
97 },
98 # # If there are data files included in your packages that need to be
99 # # installed, specify them here. If using Python 2.6 or less, then these
100 # # have to be included in MANIFEST.in as well.
101 # package_data={
102 # 'data': [],
103 # },
104 #
105 # # Although 'package_data' is the preferred approach, in some case you may
106 # # need to place data files outside of your packages. See:
107 # # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa
108 # # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
109 data_files=default_labeler_files,
110 include_package_data=True,
111 )
112
113 print("find_packages():", find_packages())
114
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,7 @@
with open(path.join(here, "requirements-reports.txt"), encoding="utf-8") as f:
reports_packages = f.read().splitlines()
-resource_dir = "resources/"
+resource_dir = "resources"
default_labeler_files = [
(d, [os.path.join(d, f) for f in files]) for d, _, files in os.walk(resource_dir)
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,7 +30,7 @@\n with open(path.join(here, \"requirements-reports.txt\"), encoding=\"utf-8\") as f:\n reports_packages = f.read().splitlines()\n \n-resource_dir = \"resources/\"\n+resource_dir = \"resources\"\n default_labeler_files = [\n (d, [os.path.join(d, f) for f in files]) for d, _, files in os.walk(resource_dir)\n ]\n", "issue": "Windows Install error - ValueError: path 'resources/' cannot end with '/\nhttps://github.com/capitalone/DataProfiler/blob/5b04b7fe5ee3556235c397efb69b32cd5d364a3b/setup.py#L33\r\n\r\nRan into an install isue \r\nValueError: path 'resources/' cannot end with '/\r\n\r\nAs per \r\nhttps://stackoverflow.com/questions/20356482/valueerror-path-conf-cannot-end-with\r\n\r\nresource_dir = \"resources/\"\r\nneeds to change to \r\nresource_dir = \"resources\"\r\n\r\nThank you. \n", "before_files": [{"content": "\"\"\"A setuptools for the Data Profiler Application and Python Libraries.\"\"\"\n\nimport os\n\n# To use a consistent encoding\nfrom codecs import open\nfrom os import path\n\n# Always prefer setuptools over distutils\nfrom setuptools import find_packages, setup\n\n# Load package version\nfrom dataprofiler.version import __version__\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith open(path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n LONG_DESCRIPTION = f.read()\n\n# Get the install_requirements from requirements.txt\nwith open(path.join(here, \"requirements.txt\"), encoding=\"utf-8\") as f:\n required_packages = f.read().splitlines()\n\n# Get the install_requirements from requirements-ml.txt\nwith open(path.join(here, \"requirements-ml.txt\"), encoding=\"utf-8\") as f:\n ml_packages = f.read().splitlines()\n\n# Get the install_requirements from requirements-reports.txt\nwith open(path.join(here, \"requirements-reports.txt\"), encoding=\"utf-8\") as f:\n reports_packages = f.read().splitlines()\n\nresource_dir = \"resources/\"\ndefault_labeler_files = [\n (d, [os.path.join(d, f) for f in files]) for d, _, files in os.walk(resource_dir)\n]\n\n\nDESCRIPTION = (\n \"What is in your data? Detect schema, statistics and entities in almost any file.\"\n)\n\nsetup(\n name=\"DataProfiler\",\n version=__version__,\n python_requires=\">=3.8\",\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n # The project's main homepage.\n url=\"https://github.com/capitalone/data-profiler\",\n # Author details\n author=\"Jeremy Goodsitt, Taylor Turner, Michael Davis, Kenny Bean, Tyler Farnan\",\n # Choose your license\n license=\"Apache License, Version 2.0\",\n # See https://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n # How mature is this project? Common values are\n # 3 - Alpha\n # 4 - Beta\n # 5 - Production/Stable\n \"Development Status :: 5 - Production/Stable\",\n # Indicate who your project is intended for\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Information Technology\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: System Administrators\",\n \"Topic :: Education\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n # Pick your license as you wish (should match \"license\" above)\n \"License :: OSI Approved :: Apache Software License\",\n # Specify the Python versions you support here. In particular, ensure\n # that you indicate whether you support Python 3 or both.\n \"Programming Language :: Python :: 3\",\n ],\n # What does your project relate to?\n keywords=\"Data Investigation\",\n # You can just specify the packages manually here if your project is\n # simple. Or you can use find_packages().\n # packages=find_packages(exclude=['src/test', 'src/sample']),\n packages=find_packages(exclude=[\"tests\", \"examples\"]),\n # List run-time dependencies here. These will be installed by pip when\n # your project is installed. For an analysis of \"install_requires\" vs pip's\n # requirements files see:\n # https://packaging.python.org/en/latest/requirements.html\n install_requires=required_packages,\n # List of run-time dependencies for the labeler. These will be installed\n # by pip when someone installs the project[<label>].\n extras_require={\n \"ml\": ml_packages,\n \"reports\": reports_packages,\n \"full\": ml_packages + reports_packages,\n },\n # # If there are data files included in your packages that need to be\n # # installed, specify them here. If using Python 2.6 or less, then these\n # # have to be included in MANIFEST.in as well.\n # package_data={\n # 'data': [],\n # },\n #\n # # Although 'package_data' is the preferred approach, in some case you may\n # # need to place data files outside of your packages. See:\n # # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa\n # # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'\n data_files=default_labeler_files,\n include_package_data=True,\n)\n\nprint(\"find_packages():\", find_packages())\n", "path": "setup.py"}]} | 1,932 | 114 |
gh_patches_debug_4972 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3342 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider superonefoods is broken
During the global build at 2021-09-22-14-42-27, spider **superonefoods** failed with **0 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-09-22-14-42-27/logs/superonefoods.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-22-14-42-27/output/superonefoods.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-22-14-42-27/output/superonefoods.geojson))
</issue>
<code>
[start of locations/spiders/superonefoods.py]
1 # -*- coding: utf-8 -*-
2 import json
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6
7
8 class SuperonefoodsSpider(scrapy.Spider):
9 name = "superonefoods"
10 item_attributes = { 'brand': "Super One Foods" }
11 allowed_domains = ["www.superonefoods.com"]
12 start_urls = (
13 'https://www.superonefoods.com/store-finder',
14 )
15
16 def parse(self, response):
17 # retrieve js data variable from script tag
18 items = response.xpath('//script/text()')[3].re("var stores =(.+?);\n")
19
20 # convert data variable from unicode to string
21 items = [str(x) for x in items]
22
23 # convert type string representation of list to type list
24 data = [items[0]]
25
26 # load list into json object for parsing
27 jsondata = json.loads(data[0])
28
29 # loop through json data object and retrieve values; yield the values to GeojsonPointItem
30 for item in jsondata:
31 yield GeojsonPointItem(
32 ref=item.get('_id'),
33 lat=float(item.get('latitude')),
34 lon=float(item.get('longitude')),
35 addr_full=item.get('address'),
36 city=item.get('city'),
37 state=item.get('state'),
38 postcode=item.get('zip'),
39 website='https://www.superonefoods.com/store-details/'+item.get('url'),
40 )
41
[end of locations/spiders/superonefoods.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/superonefoods.py b/locations/spiders/superonefoods.py
--- a/locations/spiders/superonefoods.py
+++ b/locations/spiders/superonefoods.py
@@ -15,7 +15,7 @@
def parse(self, response):
# retrieve js data variable from script tag
- items = response.xpath('//script/text()')[3].re("var stores =(.+?);\n")
+ items = response.xpath('//script/text()')[4].re("var stores =(.+?);\n")
# convert data variable from unicode to string
items = [str(x) for x in items]
| {"golden_diff": "diff --git a/locations/spiders/superonefoods.py b/locations/spiders/superonefoods.py\n--- a/locations/spiders/superonefoods.py\n+++ b/locations/spiders/superonefoods.py\n@@ -15,7 +15,7 @@\n \n def parse(self, response):\n # retrieve js data variable from script tag\n- items = response.xpath('//script/text()')[3].re(\"var stores =(.+?);\\n\")\n+ items = response.xpath('//script/text()')[4].re(\"var stores =(.+?);\\n\")\n \n # convert data variable from unicode to string\n items = [str(x) for x in items]\n", "issue": "Spider superonefoods is broken\nDuring the global build at 2021-09-22-14-42-27, spider **superonefoods** failed with **0 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-09-22-14-42-27/logs/superonefoods.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-22-14-42-27/output/superonefoods.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-22-14-42-27/output/superonefoods.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass SuperonefoodsSpider(scrapy.Spider):\n name = \"superonefoods\"\n item_attributes = { 'brand': \"Super One Foods\" }\n allowed_domains = [\"www.superonefoods.com\"]\n start_urls = (\n 'https://www.superonefoods.com/store-finder',\n )\n\n def parse(self, response):\n # retrieve js data variable from script tag\n items = response.xpath('//script/text()')[3].re(\"var stores =(.+?);\\n\")\n\n # convert data variable from unicode to string\n items = [str(x) for x in items]\n\n # convert type string representation of list to type list\n data = [items[0]]\n\n # load list into json object for parsing\n jsondata = json.loads(data[0])\n\n # loop through json data object and retrieve values; yield the values to GeojsonPointItem\n for item in jsondata:\n yield GeojsonPointItem(\n ref=item.get('_id'),\n lat=float(item.get('latitude')),\n lon=float(item.get('longitude')),\n addr_full=item.get('address'),\n city=item.get('city'),\n state=item.get('state'),\n postcode=item.get('zip'),\n website='https://www.superonefoods.com/store-details/'+item.get('url'),\n )\n", "path": "locations/spiders/superonefoods.py"}]} | 1,105 | 150 |
gh_patches_debug_31326 | rasdani/github-patches | git_diff | apluslms__a-plus-575 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Compress HTML pages in the cache
Exercise and chapter pages are stored in the cache and only update if the backend reports that there is a change. Some pages might be large (e.g. 1M), but do compress relatively well. Memcached API supports compression on the fly, but that is not usable over django API.
Thus, we should at least compress HTML content manually. Alternatively, we can specialize `CachedAbstract` for memcached, which would also allow us to use `cas` operation.
Relevant files:
* `lib/cache/cached.py`
* `exercise/cache/exercise.py` (`content` in `_generate_data(...)` and `content()`)
</issue>
<code>
[start of exercise/cache/exercise.py]
1 import time
2 from django.conf import settings
3 from django.db.models.signals import post_save, post_delete
4
5 from lib.cache import CachedAbstract
6 from lib.remote_page import RemotePageNotModified
7 from ..protocol.aplus import load_exercise_page
8
9
10 class ExerciseCache(CachedAbstract):
11 """ Exercise HTML content """
12 KEY_PREFIX = "exercise"
13
14 def __init__(self, exercise, language, request, students, url_name):
15 self.exercise = exercise
16 self.load_args = [language, request, students, url_name]
17 super().__init__(exercise, modifiers=[language])
18
19 def _needs_generation(self, data):
20 expires = data['expires'] if data else None
21 return not expires or time.time() > expires
22
23 def _generate_data(self, exercise, data=None):
24 try:
25 page = exercise.load_page(
26 *self.load_args,
27 last_modified=data['last_modified'] if data else None
28 )
29 return {
30 'head': page.head,
31 'content': page.content,
32 'last_modified': page.last_modified,
33 'expires': page.expires if page.is_loaded else 0,
34 }
35 except RemotePageNotModified as e:
36 if e.expires:
37 data['expires'] = e.expires
38 return data
39
40 def head(self):
41 return self.data['head']
42
43 def content(self):
44 return self.data['content']
45
46
47 def invalidate_instance(instance):
48 for module in instance.course_modules.all():
49 for exercise in module.learning_objects.all():
50 for language,_ in settings.LANGUAGES:
51 ExerciseCache.invalidate(exercise, modifiers=[language])
52
[end of exercise/cache/exercise.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/exercise/cache/exercise.py b/exercise/cache/exercise.py
--- a/exercise/cache/exercise.py
+++ b/exercise/cache/exercise.py
@@ -1,4 +1,6 @@
+import logging
import time
+
from django.conf import settings
from django.db.models.signals import post_save, post_delete
@@ -6,6 +8,18 @@
from lib.remote_page import RemotePageNotModified
from ..protocol.aplus import load_exercise_page
+logger = logging.getLogger('aplus.cached')
+
+try:
+ from lz4.block import compress as _compress, decompress
+ def compress(data):
+ return _compress(data, compression=1)
+except ImportError:
+ logger.warning("Unable to import lz4, using a slower zlib instead")
+ from zlib import compress as _compress, decompress
+ def compress(data):
+ return _compress(data, level=1)
+
class ExerciseCache(CachedAbstract):
""" Exercise HTML content """
@@ -26,9 +40,12 @@
*self.load_args,
last_modified=data['last_modified'] if data else None
)
+
+ content = compress(page.content.encode('utf-8'))
+
return {
'head': page.head,
- 'content': page.content,
+ 'content': content,
'last_modified': page.last_modified,
'expires': page.expires if page.is_loaded else 0,
}
@@ -41,7 +58,8 @@
return self.data['head']
def content(self):
- return self.data['content']
+ content = decompress(self.data['content']).decode('utf-8')
+ return content
def invalidate_instance(instance):
| {"golden_diff": "diff --git a/exercise/cache/exercise.py b/exercise/cache/exercise.py\n--- a/exercise/cache/exercise.py\n+++ b/exercise/cache/exercise.py\n@@ -1,4 +1,6 @@\n+import logging\n import time\n+\n from django.conf import settings\n from django.db.models.signals import post_save, post_delete\n \n@@ -6,6 +8,18 @@\n from lib.remote_page import RemotePageNotModified\n from ..protocol.aplus import load_exercise_page\n \n+logger = logging.getLogger('aplus.cached')\n+\n+try:\n+ from lz4.block import compress as _compress, decompress\n+ def compress(data):\n+ return _compress(data, compression=1)\n+except ImportError:\n+ logger.warning(\"Unable to import lz4, using a slower zlib instead\")\n+ from zlib import compress as _compress, decompress\n+ def compress(data):\n+ return _compress(data, level=1)\n+\n \n class ExerciseCache(CachedAbstract):\n \"\"\" Exercise HTML content \"\"\"\n@@ -26,9 +40,12 @@\n *self.load_args,\n last_modified=data['last_modified'] if data else None\n )\n+\n+ content = compress(page.content.encode('utf-8'))\n+\n return {\n 'head': page.head,\n- 'content': page.content,\n+ 'content': content,\n 'last_modified': page.last_modified,\n 'expires': page.expires if page.is_loaded else 0,\n }\n@@ -41,7 +58,8 @@\n return self.data['head']\n \n def content(self):\n- return self.data['content']\n+ content = decompress(self.data['content']).decode('utf-8')\n+ return content\n \n \n def invalidate_instance(instance):\n", "issue": "Compress HTML pages in the cache\nExercise and chapter pages are stored in the cache and only update if the backend reports that there is a change. Some pages might be large (e.g. 1M), but do compress relatively well. Memcached API supports compression on the fly, but that is not usable over django API.\r\n\r\nThus, we should at least compress HTML content manually. Alternatively, we can specialize `CachedAbstract` for memcached, which would also allow us to use `cas` operation.\r\n\r\nRelevant files:\r\n* `lib/cache/cached.py`\r\n* `exercise/cache/exercise.py` (`content` in `_generate_data(...)` and `content()`)\n", "before_files": [{"content": "import time\nfrom django.conf import settings\nfrom django.db.models.signals import post_save, post_delete\n\nfrom lib.cache import CachedAbstract\nfrom lib.remote_page import RemotePageNotModified\nfrom ..protocol.aplus import load_exercise_page\n\n\nclass ExerciseCache(CachedAbstract):\n \"\"\" Exercise HTML content \"\"\"\n KEY_PREFIX = \"exercise\"\n\n def __init__(self, exercise, language, request, students, url_name):\n self.exercise = exercise\n self.load_args = [language, request, students, url_name]\n super().__init__(exercise, modifiers=[language])\n\n def _needs_generation(self, data):\n expires = data['expires'] if data else None\n return not expires or time.time() > expires\n\n def _generate_data(self, exercise, data=None):\n try:\n page = exercise.load_page(\n *self.load_args,\n last_modified=data['last_modified'] if data else None\n )\n return {\n 'head': page.head,\n 'content': page.content,\n 'last_modified': page.last_modified,\n 'expires': page.expires if page.is_loaded else 0,\n }\n except RemotePageNotModified as e:\n if e.expires:\n data['expires'] = e.expires\n return data\n\n def head(self):\n return self.data['head']\n\n def content(self):\n return self.data['content']\n\n\ndef invalidate_instance(instance):\n for module in instance.course_modules.all():\n for exercise in module.learning_objects.all():\n for language,_ in settings.LANGUAGES:\n ExerciseCache.invalidate(exercise, modifiers=[language])\n", "path": "exercise/cache/exercise.py"}]} | 1,111 | 378 |
gh_patches_debug_8197 | rasdani/github-patches | git_diff | sanic-org__sanic-2438 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Easier websocket interface annotation
Right now, to properly annotate a websocket endpoint you need to do this:
```python
from sanic.server.websockets.impl import WebsocketImplProtocol
from sanic import Request
@app.websocket("")
async def handler(request: Request, ws: WebsocketImplProtocol):
...
```
That is not easy or intuitive.
This would be much nicer:
```python
from sanic import Request, Websocket
@app.websocket("")
async def handler(request: Request, ws: Websocket):
...
```
We should just alias and put it inside `__init__.py` for convenience.
</issue>
<code>
[start of sanic/__init__.py]
1 from sanic.__version__ import __version__
2 from sanic.app import Sanic
3 from sanic.blueprints import Blueprint
4 from sanic.constants import HTTPMethod
5 from sanic.request import Request
6 from sanic.response import HTTPResponse, html, json, text
7
8
9 __all__ = (
10 "__version__",
11 "Sanic",
12 "Blueprint",
13 "HTTPMethod",
14 "HTTPResponse",
15 "Request",
16 "html",
17 "json",
18 "text",
19 )
20
[end of sanic/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sanic/__init__.py b/sanic/__init__.py
--- a/sanic/__init__.py
+++ b/sanic/__init__.py
@@ -4,6 +4,7 @@
from sanic.constants import HTTPMethod
from sanic.request import Request
from sanic.response import HTTPResponse, html, json, text
+from sanic.server.websockets.impl import WebsocketImplProtocol as Websocket
__all__ = (
@@ -13,6 +14,7 @@
"HTTPMethod",
"HTTPResponse",
"Request",
+ "Websocket",
"html",
"json",
"text",
| {"golden_diff": "diff --git a/sanic/__init__.py b/sanic/__init__.py\n--- a/sanic/__init__.py\n+++ b/sanic/__init__.py\n@@ -4,6 +4,7 @@\n from sanic.constants import HTTPMethod\n from sanic.request import Request\n from sanic.response import HTTPResponse, html, json, text\n+from sanic.server.websockets.impl import WebsocketImplProtocol as Websocket\n \n \n __all__ = (\n@@ -13,6 +14,7 @@\n \"HTTPMethod\",\n \"HTTPResponse\",\n \"Request\",\n+ \"Websocket\",\n \"html\",\n \"json\",\n \"text\",\n", "issue": "Easier websocket interface annotation\nRight now, to properly annotate a websocket endpoint you need to do this:\r\n\r\n```python\r\nfrom sanic.server.websockets.impl import WebsocketImplProtocol\r\nfrom sanic import Request\r\n\r\[email protected](\"\")\r\nasync def handler(request: Request, ws: WebsocketImplProtocol):\r\n ...\r\n```\r\n\r\nThat is not easy or intuitive.\r\n\r\nThis would be much nicer:\r\n\r\n```python\r\nfrom sanic import Request, Websocket\r\n\r\[email protected](\"\")\r\nasync def handler(request: Request, ws: Websocket):\r\n ...\r\n```\r\n\r\nWe should just alias and put it inside `__init__.py` for convenience.\n", "before_files": [{"content": "from sanic.__version__ import __version__\nfrom sanic.app import Sanic\nfrom sanic.blueprints import Blueprint\nfrom sanic.constants import HTTPMethod\nfrom sanic.request import Request\nfrom sanic.response import HTTPResponse, html, json, text\n\n\n__all__ = (\n \"__version__\",\n \"Sanic\",\n \"Blueprint\",\n \"HTTPMethod\",\n \"HTTPResponse\",\n \"Request\",\n \"html\",\n \"json\",\n \"text\",\n)\n", "path": "sanic/__init__.py"}]} | 799 | 143 |
gh_patches_debug_25585 | rasdani/github-patches | git_diff | talonhub__community-758 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve phrase history mechanism
instead of this:
https://github.com/knausj85/knausj_talon/blob/3e57e0165257cf07b0e21880d44a91e79cb3ef16/code/history.py#L19-L29
consider something like this:
```py
def on_phrase(j):
global history
words = j.get('text')
if words:
text = ' '.join(words)
history.append(text)
history = history[-setting_command_history_size.get() :]
```
</issue>
<code>
[start of code/history.py]
1 from talon import imgui, Module, speech_system, actions, app
2
3 # We keep command_history_size lines of history, but by default display only
4 # command_history_display of them.
5 mod = Module()
6 setting_command_history_size = mod.setting("command_history_size", int, default=50)
7 setting_command_history_display = mod.setting(
8 "command_history_display", int, default=10
9 )
10
11 hist_more = False
12 history = []
13
14
15 def parse_phrase(word_list):
16 return " ".join(word.split("\\")[0] for word in word_list)
17
18
19 def on_phrase(j):
20 global history
21
22 try:
23 val = parse_phrase(getattr(j["parsed"], "_unmapped", j["phrase"]))
24 except:
25 val = parse_phrase(j["phrase"])
26
27 if val != "":
28 history.append(val)
29 history = history[-setting_command_history_size.get() :]
30
31
32 # todo: dynamic rect?
33 @imgui.open(y=0)
34 def gui(gui: imgui.GUI):
35 global history
36 gui.text("Command History")
37 gui.line()
38 text = (
39 history[:] if hist_more else history[-setting_command_history_display.get() :]
40 )
41 for line in text:
42 gui.text(line)
43
44 gui.spacer()
45 if gui.button("Command history close"):
46 actions.user.history_disable()
47
48
49 speech_system.register("phrase", on_phrase)
50
51
52 @mod.action_class
53 class Actions:
54 def history_toggle():
55 """Toggles viewing the history"""
56 if gui.showing:
57 gui.hide()
58 else:
59 gui.show()
60
61 def history_enable():
62 """Enables the history"""
63 gui.show()
64
65 def history_disable():
66 """Disables the history"""
67 gui.hide()
68
69 def history_clear():
70 """Clear the history"""
71 global history
72 history = []
73
74 def history_more():
75 """Show more history"""
76 global hist_more
77 hist_more = True
78
79 def history_less():
80 """Show less history"""
81 global hist_more
82 hist_more = False
83
84 def history_get(number: int):
85 """returns the history entry at the specified index"""
86 num = (0 - number) - 1
87 return history[num]
88
[end of code/history.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/code/history.py b/code/history.py
--- a/code/history.py
+++ b/code/history.py
@@ -1,3 +1,4 @@
+from typing import Optional
from talon import imgui, Module, speech_system, actions, app
# We keep command_history_size lines of history, but by default display only
@@ -12,20 +13,15 @@
history = []
-def parse_phrase(word_list):
- return " ".join(word.split("\\")[0] for word in word_list)
-
-
def on_phrase(j):
global history
- try:
- val = parse_phrase(getattr(j["parsed"], "_unmapped", j["phrase"]))
- except:
- val = parse_phrase(j["phrase"])
+ words = j.get('text')
+
+ text = actions.user.history_transform_phrase_text(words)
- if val != "":
- history.append(val)
+ if text is not None:
+ history.append(text)
history = history[-setting_command_history_size.get() :]
@@ -85,3 +81,11 @@
"""returns the history entry at the specified index"""
num = (0 - number) - 1
return history[num]
+
+ def history_transform_phrase_text(words: list[str]) -> Optional[str]:
+ """Transforms phrase text for presentation in history. Return `None` to omit from history"""
+
+ if not actions.speech.enabled():
+ return None
+
+ return ' '.join(words) if words else None
\ No newline at end of file
| {"golden_diff": "diff --git a/code/history.py b/code/history.py\n--- a/code/history.py\n+++ b/code/history.py\n@@ -1,3 +1,4 @@\n+from typing import Optional\n from talon import imgui, Module, speech_system, actions, app\n \n # We keep command_history_size lines of history, but by default display only\n@@ -12,20 +13,15 @@\n history = []\n \n \n-def parse_phrase(word_list):\n- return \" \".join(word.split(\"\\\\\")[0] for word in word_list)\n-\n-\n def on_phrase(j):\n global history\n \n- try:\n- val = parse_phrase(getattr(j[\"parsed\"], \"_unmapped\", j[\"phrase\"]))\n- except:\n- val = parse_phrase(j[\"phrase\"])\n+ words = j.get('text')\n+\n+ text = actions.user.history_transform_phrase_text(words)\n \n- if val != \"\":\n- history.append(val)\n+ if text is not None:\n+ history.append(text)\n history = history[-setting_command_history_size.get() :]\n \n \n@@ -85,3 +81,11 @@\n \"\"\"returns the history entry at the specified index\"\"\"\n num = (0 - number) - 1\n return history[num]\n+\n+ def history_transform_phrase_text(words: list[str]) -> Optional[str]:\n+ \"\"\"Transforms phrase text for presentation in history. Return `None` to omit from history\"\"\"\n+\n+ if not actions.speech.enabled():\n+ return None\n+\n+ return ' '.join(words) if words else None\n\\ No newline at end of file\n", "issue": "Improve phrase history mechanism\ninstead of this:\r\n\r\nhttps://github.com/knausj85/knausj_talon/blob/3e57e0165257cf07b0e21880d44a91e79cb3ef16/code/history.py#L19-L29\r\n\r\nconsider something like this:\r\n\r\n```py\r\ndef on_phrase(j):\r\n global history\r\n words = j.get('text')\r\n if words:\r\n text = ' '.join(words)\r\n history.append(text)\r\n history = history[-setting_command_history_size.get() :]\r\n```\n", "before_files": [{"content": "from talon import imgui, Module, speech_system, actions, app\n\n# We keep command_history_size lines of history, but by default display only\n# command_history_display of them.\nmod = Module()\nsetting_command_history_size = mod.setting(\"command_history_size\", int, default=50)\nsetting_command_history_display = mod.setting(\n \"command_history_display\", int, default=10\n)\n\nhist_more = False\nhistory = []\n\n\ndef parse_phrase(word_list):\n return \" \".join(word.split(\"\\\\\")[0] for word in word_list)\n\n\ndef on_phrase(j):\n global history\n\n try:\n val = parse_phrase(getattr(j[\"parsed\"], \"_unmapped\", j[\"phrase\"]))\n except:\n val = parse_phrase(j[\"phrase\"])\n\n if val != \"\":\n history.append(val)\n history = history[-setting_command_history_size.get() :]\n\n\n# todo: dynamic rect?\[email protected](y=0)\ndef gui(gui: imgui.GUI):\n global history\n gui.text(\"Command History\")\n gui.line()\n text = (\n history[:] if hist_more else history[-setting_command_history_display.get() :]\n )\n for line in text:\n gui.text(line)\n\n gui.spacer()\n if gui.button(\"Command history close\"):\n actions.user.history_disable()\n\n\nspeech_system.register(\"phrase\", on_phrase)\n\n\[email protected]_class\nclass Actions:\n def history_toggle():\n \"\"\"Toggles viewing the history\"\"\"\n if gui.showing:\n gui.hide()\n else:\n gui.show()\n\n def history_enable():\n \"\"\"Enables the history\"\"\"\n gui.show()\n\n def history_disable():\n \"\"\"Disables the history\"\"\"\n gui.hide()\n\n def history_clear():\n \"\"\"Clear the history\"\"\"\n global history\n history = []\n\n def history_more():\n \"\"\"Show more history\"\"\"\n global hist_more\n hist_more = True\n\n def history_less():\n \"\"\"Show less history\"\"\"\n global hist_more\n hist_more = False\n\n def history_get(number: int):\n \"\"\"returns the history entry at the specified index\"\"\"\n num = (0 - number) - 1\n return history[num]\n", "path": "code/history.py"}]} | 1,301 | 342 |
gh_patches_debug_54590 | rasdani/github-patches | git_diff | zulip__zulip-20491 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove "Send a reply" new user tip
After implementing #19900, there are two places where new users are told how to reply to a message: in the Welcome Bot text and in the "Send a reply" new user tip immediately below.
To simplify and avoid redundancy, we should remove the "Send a reply" new user tip.
<img width="909" alt="Screen_Shot_2021-12-06_at_10_08_14_AM" src="https://user-images.githubusercontent.com/2090066/144938995-080268ce-510d-4b76-b3c1-b691fbb814f4.png">
[CZO thread](https://chat.zulip.org/#narrow/stream/101-design/topic/.22click.20to.20reply.22.20whale)
</issue>
<code>
[start of zerver/lib/hotspots.py]
1 # See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html
2 # for documentation on this subsystem.
3 from typing import Dict, List
4
5 from django.conf import settings
6 from django.utils.functional import Promise
7 from django.utils.translation import gettext_lazy
8
9 from zerver.models import UserHotspot, UserProfile
10
11 INTRO_HOTSPOTS: Dict[str, Dict[str, Promise]] = {
12 "intro_reply": {
13 "title": gettext_lazy("Reply to a message"),
14 "description": gettext_lazy("Click anywhere on a message to reply."),
15 },
16 "intro_streams": {
17 "title": gettext_lazy("Catch up on a stream"),
18 "description": gettext_lazy(
19 "Messages sent to a stream are seen by everyone subscribed "
20 "to that stream. Try clicking on one of the stream links below."
21 ),
22 },
23 "intro_topics": {
24 "title": gettext_lazy("Topics"),
25 "description": gettext_lazy(
26 "Every message has a topic. Topics keep conversations "
27 "easy to follow, and make it easy to reply to conversations that start "
28 "while you are offline."
29 ),
30 },
31 "intro_gear": {
32 "title": gettext_lazy("Settings"),
33 "description": gettext_lazy(
34 "Go to Settings to configure your notifications and display settings."
35 ),
36 },
37 "intro_compose": {
38 "title": gettext_lazy("Compose"),
39 "description": gettext_lazy(
40 "Click here to start a new conversation. Pick a topic "
41 "(2-3 words is best), and give it a go!"
42 ),
43 },
44 }
45
46 # We would most likely implement new hotspots in the future that aren't
47 # a part of the initial tutorial. To that end, classifying them into
48 # categories which are aggregated in ALL_HOTSPOTS, seems like a good start.
49 ALL_HOTSPOTS: Dict[str, Dict[str, Promise]] = {
50 **INTRO_HOTSPOTS,
51 }
52
53
54 def get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:
55 # For manual testing, it can be convenient to set
56 # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to
57 # make it easy to click on all of the hotspots. Note that
58 # ALWAYS_SEND_ALL_HOTSPOTS has some bugs; see ReadTheDocs (link
59 # above) for details.
60 #
61 # Since this is just for development purposes, it's convenient for us to send
62 # all the hotspots rather than any specific category.
63 if settings.ALWAYS_SEND_ALL_HOTSPOTS:
64 return [
65 {
66 "name": hotspot,
67 "title": str(ALL_HOTSPOTS[hotspot]["title"]),
68 "description": str(ALL_HOTSPOTS[hotspot]["description"]),
69 "delay": 0,
70 }
71 for hotspot in ALL_HOTSPOTS
72 ]
73
74 # If a Zulip server has disabled the tutorial, never send hotspots.
75 if not settings.TUTORIAL_ENABLED:
76 return []
77
78 if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:
79 return []
80
81 seen_hotspots = frozenset(
82 UserHotspot.objects.filter(user=user).values_list("hotspot", flat=True)
83 )
84 for hotspot in INTRO_HOTSPOTS.keys():
85 if hotspot not in seen_hotspots:
86 return [
87 {
88 "name": hotspot,
89 "title": str(INTRO_HOTSPOTS[hotspot]["title"]),
90 "description": str(INTRO_HOTSPOTS[hotspot]["description"]),
91 "delay": 0.5,
92 }
93 ]
94
95 user.tutorial_status = UserProfile.TUTORIAL_FINISHED
96 user.save(update_fields=["tutorial_status"])
97 return []
98
99
100 def copy_hotspots(source_profile: UserProfile, target_profile: UserProfile) -> None:
101 for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):
102 UserHotspot.objects.create(
103 user=target_profile, hotspot=userhotspot.hotspot, timestamp=userhotspot.timestamp
104 )
105
106 target_profile.tutorial_status = source_profile.tutorial_status
107 target_profile.onboarding_steps = source_profile.onboarding_steps
108 target_profile.save(update_fields=["tutorial_status", "onboarding_steps"])
109
[end of zerver/lib/hotspots.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py
--- a/zerver/lib/hotspots.py
+++ b/zerver/lib/hotspots.py
@@ -9,10 +9,6 @@
from zerver.models import UserHotspot, UserProfile
INTRO_HOTSPOTS: Dict[str, Dict[str, Promise]] = {
- "intro_reply": {
- "title": gettext_lazy("Reply to a message"),
- "description": gettext_lazy("Click anywhere on a message to reply."),
- },
"intro_streams": {
"title": gettext_lazy("Catch up on a stream"),
"description": gettext_lazy(
| {"golden_diff": "diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py\n--- a/zerver/lib/hotspots.py\n+++ b/zerver/lib/hotspots.py\n@@ -9,10 +9,6 @@\n from zerver.models import UserHotspot, UserProfile\n \n INTRO_HOTSPOTS: Dict[str, Dict[str, Promise]] = {\n- \"intro_reply\": {\n- \"title\": gettext_lazy(\"Reply to a message\"),\n- \"description\": gettext_lazy(\"Click anywhere on a message to reply.\"),\n- },\n \"intro_streams\": {\n \"title\": gettext_lazy(\"Catch up on a stream\"),\n \"description\": gettext_lazy(\n", "issue": "Remove \"Send a reply\" new user tip\nAfter implementing #19900, there are two places where new users are told how to reply to a message: in the Welcome Bot text and in the \"Send a reply\" new user tip immediately below.\r\n\r\nTo simplify and avoid redundancy, we should remove the \"Send a reply\" new user tip.\r\n\r\n<img width=\"909\" alt=\"Screen_Shot_2021-12-06_at_10_08_14_AM\" src=\"https://user-images.githubusercontent.com/2090066/144938995-080268ce-510d-4b76-b3c1-b691fbb814f4.png\">\r\n\r\n[CZO thread](https://chat.zulip.org/#narrow/stream/101-design/topic/.22click.20to.20reply.22.20whale)\n", "before_files": [{"content": "# See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html\n# for documentation on this subsystem.\nfrom typing import Dict, List\n\nfrom django.conf import settings\nfrom django.utils.functional import Promise\nfrom django.utils.translation import gettext_lazy\n\nfrom zerver.models import UserHotspot, UserProfile\n\nINTRO_HOTSPOTS: Dict[str, Dict[str, Promise]] = {\n \"intro_reply\": {\n \"title\": gettext_lazy(\"Reply to a message\"),\n \"description\": gettext_lazy(\"Click anywhere on a message to reply.\"),\n },\n \"intro_streams\": {\n \"title\": gettext_lazy(\"Catch up on a stream\"),\n \"description\": gettext_lazy(\n \"Messages sent to a stream are seen by everyone subscribed \"\n \"to that stream. Try clicking on one of the stream links below.\"\n ),\n },\n \"intro_topics\": {\n \"title\": gettext_lazy(\"Topics\"),\n \"description\": gettext_lazy(\n \"Every message has a topic. Topics keep conversations \"\n \"easy to follow, and make it easy to reply to conversations that start \"\n \"while you are offline.\"\n ),\n },\n \"intro_gear\": {\n \"title\": gettext_lazy(\"Settings\"),\n \"description\": gettext_lazy(\n \"Go to Settings to configure your notifications and display settings.\"\n ),\n },\n \"intro_compose\": {\n \"title\": gettext_lazy(\"Compose\"),\n \"description\": gettext_lazy(\n \"Click here to start a new conversation. Pick a topic \"\n \"(2-3 words is best), and give it a go!\"\n ),\n },\n}\n\n# We would most likely implement new hotspots in the future that aren't\n# a part of the initial tutorial. To that end, classifying them into\n# categories which are aggregated in ALL_HOTSPOTS, seems like a good start.\nALL_HOTSPOTS: Dict[str, Dict[str, Promise]] = {\n **INTRO_HOTSPOTS,\n}\n\n\ndef get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:\n # For manual testing, it can be convenient to set\n # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to\n # make it easy to click on all of the hotspots. Note that\n # ALWAYS_SEND_ALL_HOTSPOTS has some bugs; see ReadTheDocs (link\n # above) for details.\n #\n # Since this is just for development purposes, it's convenient for us to send\n # all the hotspots rather than any specific category.\n if settings.ALWAYS_SEND_ALL_HOTSPOTS:\n return [\n {\n \"name\": hotspot,\n \"title\": str(ALL_HOTSPOTS[hotspot][\"title\"]),\n \"description\": str(ALL_HOTSPOTS[hotspot][\"description\"]),\n \"delay\": 0,\n }\n for hotspot in ALL_HOTSPOTS\n ]\n\n # If a Zulip server has disabled the tutorial, never send hotspots.\n if not settings.TUTORIAL_ENABLED:\n return []\n\n if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:\n return []\n\n seen_hotspots = frozenset(\n UserHotspot.objects.filter(user=user).values_list(\"hotspot\", flat=True)\n )\n for hotspot in INTRO_HOTSPOTS.keys():\n if hotspot not in seen_hotspots:\n return [\n {\n \"name\": hotspot,\n \"title\": str(INTRO_HOTSPOTS[hotspot][\"title\"]),\n \"description\": str(INTRO_HOTSPOTS[hotspot][\"description\"]),\n \"delay\": 0.5,\n }\n ]\n\n user.tutorial_status = UserProfile.TUTORIAL_FINISHED\n user.save(update_fields=[\"tutorial_status\"])\n return []\n\n\ndef copy_hotspots(source_profile: UserProfile, target_profile: UserProfile) -> None:\n for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):\n UserHotspot.objects.create(\n user=target_profile, hotspot=userhotspot.hotspot, timestamp=userhotspot.timestamp\n )\n\n target_profile.tutorial_status = source_profile.tutorial_status\n target_profile.onboarding_steps = source_profile.onboarding_steps\n target_profile.save(update_fields=[\"tutorial_status\", \"onboarding_steps\"])\n", "path": "zerver/lib/hotspots.py"}]} | 1,872 | 144 |
gh_patches_debug_5182 | rasdani/github-patches | git_diff | Gallopsled__pwntools-1852 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`atexit.register` does not work
### What I did
```python3
from pwn import *
atexit.register(print, "hello world")
exit()
```
### What I expected to see
```python3 test.py
hello world
```
### What I saw
Nothing
I noticed this because `asm()`, which adds an `atexit` handler to remove the `/tmp/pwn-asm-XXXXXX` folder, does not in fact remove it, meaning multiple script runs leads to many similar folders.
</issue>
<code>
[start of pwnlib/atexit.py]
1 """
2 Replacement for the Python standard library's atexit.py.
3
4 Whereas the standard :mod:`atexit` module only defines :func:`atexit.register`,
5 this replacement module also defines :func:`unregister`.
6
7 This module also fixes a the issue that exceptions raised by an exit handler is
8 printed twice when the standard :mod:`atexit` is used.
9 """
10 from __future__ import absolute_import
11 from __future__ import division
12
13 import sys
14 import threading
15 import traceback
16
17 from pwnlib.context import context
18
19 __all__ = ['register', 'unregister']
20
21 _lock = threading.Lock()
22 _ident = 0
23 _handlers = {}
24
25 def register(func, *args, **kwargs):
26 """register(func, *args, **kwargs)
27
28 Registers a function to be called on program termination. The function will
29 be called with positional arguments `args` and keyword arguments `kwargs`,
30 i.e. ``func(*args, **kwargs)``. The current `context` is recorded and will
31 be the one used when the handler is run.
32
33 E.g. to suppress logging output from an exit-handler one could write::
34
35 with context.local(log_level = 'error'):
36 atexit.register(handler)
37
38 An identifier is returned which can be used to unregister the exit-handler.
39
40 This function can be used as a decorator::
41
42 @atexit.register
43 def handler():
44 ...
45
46 Notice however that this will bind ``handler`` to the identifier and not the
47 actual exit-handler. The exit-handler can then be unregistered with::
48
49 atexit.unregister(handler)
50
51 This function is thread safe.
52
53 """
54 global _ident
55 with _lock:
56 ident = _ident
57 _ident += 1
58 _handlers[ident] = (func, args, kwargs, vars(context))
59 return ident
60
61 def unregister(ident):
62 """unregister(ident)
63
64 Remove the exit-handler identified by `ident` from the list of registered
65 handlers. If `ident` isn't registered this is a no-op.
66 """
67 if ident in _handlers:
68 del _handlers[ident]
69
70 def _run_handlers():
71 """_run_handlers()
72
73 Run registered exit-handlers. They run in the reverse order of which they
74 were registered.
75
76 If a handler raises an exception, it will be printed but nothing else
77 happens, i.e. other handlers will be run and `sys.excepthook` will not be
78 called for that reason.
79 """
80 context.clear()
81 for _ident, (func, args, kwargs, ctx) in \
82 sorted(_handlers.items(), reverse = True):
83 try:
84 with context.local(**ctx):
85 func(*args, **kwargs)
86 except SystemExit:
87 pass
88 except Exception:
89 # extract the current exception and rewind the traceback to where it
90 # originated
91 typ, val, tb = sys.exc_info()
92 traceback.print_exception(typ, val, tb.tb_next)
93
94 # if there's already an exitfunc registered be sure to run that too
95 if hasattr(sys, "exitfunc"):
96 register(sys.exitfunc)
97
98 sys.exitfunc = _run_handlers
99
[end of pwnlib/atexit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwnlib/atexit.py b/pwnlib/atexit.py
--- a/pwnlib/atexit.py
+++ b/pwnlib/atexit.py
@@ -13,6 +13,7 @@
import sys
import threading
import traceback
+import atexit as std_atexit
from pwnlib.context import context
@@ -95,4 +96,8 @@
if hasattr(sys, "exitfunc"):
register(sys.exitfunc)
-sys.exitfunc = _run_handlers
+if sys.version_info[0] < 3:
+ sys.exitfunc = _run_handlers
+else:
+ std_atexit.register(_run_handlers)
+
| {"golden_diff": "diff --git a/pwnlib/atexit.py b/pwnlib/atexit.py\n--- a/pwnlib/atexit.py\n+++ b/pwnlib/atexit.py\n@@ -13,6 +13,7 @@\n import sys\n import threading\n import traceback\n+import atexit as std_atexit\n \n from pwnlib.context import context\n \n@@ -95,4 +96,8 @@\n if hasattr(sys, \"exitfunc\"):\n register(sys.exitfunc)\n \n-sys.exitfunc = _run_handlers\n+if sys.version_info[0] < 3:\n+ sys.exitfunc = _run_handlers\n+else:\n+ std_atexit.register(_run_handlers)\n+\n", "issue": "`atexit.register` does not work\n### What I did\r\n```python3\r\nfrom pwn import *\r\natexit.register(print, \"hello world\")\r\nexit()\r\n```\r\n### What I expected to see\r\n```python3 test.py\r\nhello world\r\n```\r\n### What I saw\r\nNothing\r\n\r\nI noticed this because `asm()`, which adds an `atexit` handler to remove the `/tmp/pwn-asm-XXXXXX` folder, does not in fact remove it, meaning multiple script runs leads to many similar folders.\n", "before_files": [{"content": "\"\"\"\nReplacement for the Python standard library's atexit.py.\n\nWhereas the standard :mod:`atexit` module only defines :func:`atexit.register`,\nthis replacement module also defines :func:`unregister`.\n\nThis module also fixes a the issue that exceptions raised by an exit handler is\nprinted twice when the standard :mod:`atexit` is used.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\n\nimport sys\nimport threading\nimport traceback\n\nfrom pwnlib.context import context\n\n__all__ = ['register', 'unregister']\n\n_lock = threading.Lock()\n_ident = 0\n_handlers = {}\n\ndef register(func, *args, **kwargs):\n \"\"\"register(func, *args, **kwargs)\n\n Registers a function to be called on program termination. The function will\n be called with positional arguments `args` and keyword arguments `kwargs`,\n i.e. ``func(*args, **kwargs)``. The current `context` is recorded and will\n be the one used when the handler is run.\n\n E.g. to suppress logging output from an exit-handler one could write::\n\n with context.local(log_level = 'error'):\n atexit.register(handler)\n\n An identifier is returned which can be used to unregister the exit-handler.\n\n This function can be used as a decorator::\n\n @atexit.register\n def handler():\n ...\n\n Notice however that this will bind ``handler`` to the identifier and not the\n actual exit-handler. The exit-handler can then be unregistered with::\n\n atexit.unregister(handler)\n\n This function is thread safe.\n\n \"\"\"\n global _ident\n with _lock:\n ident = _ident\n _ident += 1\n _handlers[ident] = (func, args, kwargs, vars(context))\n return ident\n\ndef unregister(ident):\n \"\"\"unregister(ident)\n\n Remove the exit-handler identified by `ident` from the list of registered\n handlers. If `ident` isn't registered this is a no-op.\n \"\"\"\n if ident in _handlers:\n del _handlers[ident]\n\ndef _run_handlers():\n \"\"\"_run_handlers()\n\n Run registered exit-handlers. They run in the reverse order of which they\n were registered.\n\n If a handler raises an exception, it will be printed but nothing else\n happens, i.e. other handlers will be run and `sys.excepthook` will not be\n called for that reason.\n \"\"\"\n context.clear()\n for _ident, (func, args, kwargs, ctx) in \\\n sorted(_handlers.items(), reverse = True):\n try:\n with context.local(**ctx):\n func(*args, **kwargs)\n except SystemExit:\n pass\n except Exception:\n # extract the current exception and rewind the traceback to where it\n # originated\n typ, val, tb = sys.exc_info()\n traceback.print_exception(typ, val, tb.tb_next)\n\n# if there's already an exitfunc registered be sure to run that too\nif hasattr(sys, \"exitfunc\"):\n register(sys.exitfunc)\n\nsys.exitfunc = _run_handlers\n", "path": "pwnlib/atexit.py"}]} | 1,519 | 147 |
gh_patches_debug_26174 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-776 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove botan from our library
according to [this](https://github.com/botanio/sdk#py) botan has it's own implementation for python. No need to reinvent the wheel. I suggest we remove it from ptb in the next major (8.0) version.
</issue>
<code>
[start of telegram/contrib/__init__.py]
1 from .botan import Botan
2
3 __all__ = ['Botan']
4
[end of telegram/contrib/__init__.py]
[start of telegram/contrib/botan.py]
1 import logging
2
3 from future.moves.urllib.parse import quote
4 from future.moves.urllib.error import HTTPError, URLError
5 from future.moves.urllib.request import urlopen, Request
6
7 logging.getLogger(__name__).addHandler(logging.NullHandler())
8
9
10 class Botan(object):
11 """This class helps to send incoming events to your botan analytics account.
12 See more: https://github.com/botanio/sdk#botan-sdk
13 """
14
15 token = ''
16 url_template = 'https://api.botan.io/track?token={token}' \
17 '&uid={uid}&name={name}&src=python-telegram-bot'
18
19 def __init__(self, token):
20 self.token = token
21 self.logger = logging.getLogger(__name__)
22
23 def track(self, message, event_name='event'):
24 try:
25 uid = message.chat_id
26 except AttributeError:
27 self.logger.warn('No chat_id in message')
28 return False
29 data = message.to_json()
30 try:
31 url = self.url_template.format(
32 token=str(self.token), uid=str(uid), name=quote(event_name))
33 request = Request(
34 url, data=data.encode(), headers={'Content-Type': 'application/json'})
35 urlopen(request)
36 return True
37 except HTTPError as error:
38 self.logger.warn('Botan track error ' + str(error.code) + ':' + error.read().decode(
39 'utf-8'))
40 return False
41 except URLError as error:
42 self.logger.warn('Botan track error ' + str(error.reason))
43 return False
44
[end of telegram/contrib/botan.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/contrib/__init__.py b/telegram/contrib/__init__.py
deleted file mode 100644
--- a/telegram/contrib/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .botan import Botan
-
-__all__ = ['Botan']
diff --git a/telegram/contrib/botan.py b/telegram/contrib/botan.py
deleted file mode 100644
--- a/telegram/contrib/botan.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import logging
-
-from future.moves.urllib.parse import quote
-from future.moves.urllib.error import HTTPError, URLError
-from future.moves.urllib.request import urlopen, Request
-
-logging.getLogger(__name__).addHandler(logging.NullHandler())
-
-
-class Botan(object):
- """This class helps to send incoming events to your botan analytics account.
- See more: https://github.com/botanio/sdk#botan-sdk
- """
-
- token = ''
- url_template = 'https://api.botan.io/track?token={token}' \
- '&uid={uid}&name={name}&src=python-telegram-bot'
-
- def __init__(self, token):
- self.token = token
- self.logger = logging.getLogger(__name__)
-
- def track(self, message, event_name='event'):
- try:
- uid = message.chat_id
- except AttributeError:
- self.logger.warn('No chat_id in message')
- return False
- data = message.to_json()
- try:
- url = self.url_template.format(
- token=str(self.token), uid=str(uid), name=quote(event_name))
- request = Request(
- url, data=data.encode(), headers={'Content-Type': 'application/json'})
- urlopen(request)
- return True
- except HTTPError as error:
- self.logger.warn('Botan track error ' + str(error.code) + ':' + error.read().decode(
- 'utf-8'))
- return False
- except URLError as error:
- self.logger.warn('Botan track error ' + str(error.reason))
- return False
| {"golden_diff": "diff --git a/telegram/contrib/__init__.py b/telegram/contrib/__init__.py\ndeleted file mode 100644\n--- a/telegram/contrib/__init__.py\n+++ /dev/null\n@@ -1,3 +0,0 @@\n-from .botan import Botan\n-\n-__all__ = ['Botan']\ndiff --git a/telegram/contrib/botan.py b/telegram/contrib/botan.py\ndeleted file mode 100644\n--- a/telegram/contrib/botan.py\n+++ /dev/null\n@@ -1,43 +0,0 @@\n-import logging\n-\n-from future.moves.urllib.parse import quote\n-from future.moves.urllib.error import HTTPError, URLError\n-from future.moves.urllib.request import urlopen, Request\n-\n-logging.getLogger(__name__).addHandler(logging.NullHandler())\n-\n-\n-class Botan(object):\n- \"\"\"This class helps to send incoming events to your botan analytics account.\n- See more: https://github.com/botanio/sdk#botan-sdk\n- \"\"\"\n-\n- token = ''\n- url_template = 'https://api.botan.io/track?token={token}' \\\n- '&uid={uid}&name={name}&src=python-telegram-bot'\n-\n- def __init__(self, token):\n- self.token = token\n- self.logger = logging.getLogger(__name__)\n-\n- def track(self, message, event_name='event'):\n- try:\n- uid = message.chat_id\n- except AttributeError:\n- self.logger.warn('No chat_id in message')\n- return False\n- data = message.to_json()\n- try:\n- url = self.url_template.format(\n- token=str(self.token), uid=str(uid), name=quote(event_name))\n- request = Request(\n- url, data=data.encode(), headers={'Content-Type': 'application/json'})\n- urlopen(request)\n- return True\n- except HTTPError as error:\n- self.logger.warn('Botan track error ' + str(error.code) + ':' + error.read().decode(\n- 'utf-8'))\n- return False\n- except URLError as error:\n- self.logger.warn('Botan track error ' + str(error.reason))\n- return False\n", "issue": "Remove botan from our library\naccording to [this](https://github.com/botanio/sdk#py) botan has it's own implementation for python. No need to reinvent the wheel. I suggest we remove it from ptb in the next major (8.0) version.\n", "before_files": [{"content": "from .botan import Botan\n\n__all__ = ['Botan']\n", "path": "telegram/contrib/__init__.py"}, {"content": "import logging\n\nfrom future.moves.urllib.parse import quote\nfrom future.moves.urllib.error import HTTPError, URLError\nfrom future.moves.urllib.request import urlopen, Request\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n\n\nclass Botan(object):\n \"\"\"This class helps to send incoming events to your botan analytics account.\n See more: https://github.com/botanio/sdk#botan-sdk\n \"\"\"\n\n token = ''\n url_template = 'https://api.botan.io/track?token={token}' \\\n '&uid={uid}&name={name}&src=python-telegram-bot'\n\n def __init__(self, token):\n self.token = token\n self.logger = logging.getLogger(__name__)\n\n def track(self, message, event_name='event'):\n try:\n uid = message.chat_id\n except AttributeError:\n self.logger.warn('No chat_id in message')\n return False\n data = message.to_json()\n try:\n url = self.url_template.format(\n token=str(self.token), uid=str(uid), name=quote(event_name))\n request = Request(\n url, data=data.encode(), headers={'Content-Type': 'application/json'})\n urlopen(request)\n return True\n except HTTPError as error:\n self.logger.warn('Botan track error ' + str(error.code) + ':' + error.read().decode(\n 'utf-8'))\n return False\n except URLError as error:\n self.logger.warn('Botan track error ' + str(error.reason))\n return False\n", "path": "telegram/contrib/botan.py"}]} | 1,051 | 499 |
gh_patches_debug_16746 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1927 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC: Reduce size image in plot_inpaint
In reference to #1920
- Reduce the size of the image in doc/examples/filters/plot_inpaint.py to show more clearly the result of the algorithm.
</issue>
<code>
[start of doc/examples/filters/plot_inpaint.py]
1 """
2 ===========
3 Inpainting
4 ===========
5 Inpainting [1]_ is the process of reconstructing lost or deteriorated
6 parts of images and videos.
7
8 The reconstruction is supposed to be performed in fully automatic way by
9 exploiting the information presented in non-damaged regions.
10
11 In this example, we show how the masked pixels get inpainted by
12 inpainting algorithm based on 'biharmonic equation'-assumption [2]_ [3]_.
13
14 .. [1] Wikipedia. Inpainting
15 https://en.wikipedia.org/wiki/Inpainting
16 .. [2] Wikipedia. Biharmonic equation
17 https://en.wikipedia.org/wiki/Biharmonic_equation
18 .. [3] N.S.Hoang, S.B.Damelin, "On surface completion and image
19 inpainting by biharmonic functions: numerical aspects",
20 http://www.ima.umn.edu/~damelin/biharmonic
21 """
22
23 import numpy as np
24 import matplotlib.pyplot as plt
25
26 from skimage import data, color
27 from skimage.restoration import inpaint
28
29 image_orig = data.astronaut()
30
31 # Create mask with three defect regions: left, middle, right respectively
32 mask = np.zeros(image_orig.shape[:-1])
33 mask[20:60, 0:20] = 1
34 mask[200:300, 150:170] = 1
35 mask[50:100, 400:430] = 1
36
37 # Defect image over the same region in each color channel
38 image_defect = image_orig.copy()
39 for layer in range(image_defect.shape[-1]):
40 image_defect[np.where(mask)] = 0
41
42 image_result = inpaint.inpaint_biharmonic(image_defect, mask, multichannel=True)
43
44 fig, axes = plt.subplots(ncols=2, nrows=2)
45 ax0, ax1, ax2, ax3 = axes.ravel()
46
47 ax0.set_title('Original image')
48 ax0.imshow(image_orig)
49 ax0.axis('off')
50
51 ax1.set_title('Mask')
52 ax1.imshow(mask, cmap=plt.cm.gray)
53 ax1.axis('off')
54
55 ax2.set_title('Defected image')
56 ax2.imshow(image_defect)
57 ax2.axis('off')
58
59 ax3.set_title('Inpainted image')
60 ax3.imshow(image_result)
61 ax3.axis('off')
62
63 plt.tight_layout()
64 plt.show()
65
[end of doc/examples/filters/plot_inpaint.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/examples/filters/plot_inpaint.py b/doc/examples/filters/plot_inpaint.py
--- a/doc/examples/filters/plot_inpaint.py
+++ b/doc/examples/filters/plot_inpaint.py
@@ -26,13 +26,13 @@
from skimage import data, color
from skimage.restoration import inpaint
-image_orig = data.astronaut()
+image_orig = data.astronaut()[0:200, 0:200]
# Create mask with three defect regions: left, middle, right respectively
mask = np.zeros(image_orig.shape[:-1])
mask[20:60, 0:20] = 1
-mask[200:300, 150:170] = 1
-mask[50:100, 400:430] = 1
+mask[160:180, 70:155] = 1
+mask[30:60, 170:195] = 1
# Defect image over the same region in each color channel
image_defect = image_orig.copy()
@@ -60,5 +60,5 @@
ax3.imshow(image_result)
ax3.axis('off')
-plt.tight_layout()
+fig.tight_layout()
plt.show()
| {"golden_diff": "diff --git a/doc/examples/filters/plot_inpaint.py b/doc/examples/filters/plot_inpaint.py\n--- a/doc/examples/filters/plot_inpaint.py\n+++ b/doc/examples/filters/plot_inpaint.py\n@@ -26,13 +26,13 @@\n from skimage import data, color\n from skimage.restoration import inpaint\n \n-image_orig = data.astronaut()\n+image_orig = data.astronaut()[0:200, 0:200]\n \n # Create mask with three defect regions: left, middle, right respectively\n mask = np.zeros(image_orig.shape[:-1])\n mask[20:60, 0:20] = 1\n-mask[200:300, 150:170] = 1\n-mask[50:100, 400:430] = 1\n+mask[160:180, 70:155] = 1\n+mask[30:60, 170:195] = 1\n \n # Defect image over the same region in each color channel\n image_defect = image_orig.copy()\n@@ -60,5 +60,5 @@\n ax3.imshow(image_result)\n ax3.axis('off')\n \n-plt.tight_layout()\n+fig.tight_layout()\n plt.show()\n", "issue": "DOC: Reduce size image in plot_inpaint\nIn reference to #1920 \n- Reduce the size of the image in doc/examples/filters/plot_inpaint.py to show more clearly the result of the algorithm.\n\n", "before_files": [{"content": "\"\"\"\n===========\nInpainting\n===========\nInpainting [1]_ is the process of reconstructing lost or deteriorated\nparts of images and videos.\n\nThe reconstruction is supposed to be performed in fully automatic way by\nexploiting the information presented in non-damaged regions.\n\nIn this example, we show how the masked pixels get inpainted by\ninpainting algorithm based on 'biharmonic equation'-assumption [2]_ [3]_.\n\n.. [1] Wikipedia. Inpainting\n https://en.wikipedia.org/wiki/Inpainting\n.. [2] Wikipedia. Biharmonic equation\n https://en.wikipedia.org/wiki/Biharmonic_equation\n.. [3] N.S.Hoang, S.B.Damelin, \"On surface completion and image\n inpainting by biharmonic functions: numerical aspects\",\n http://www.ima.umn.edu/~damelin/biharmonic\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data, color\nfrom skimage.restoration import inpaint\n\nimage_orig = data.astronaut()\n\n# Create mask with three defect regions: left, middle, right respectively\nmask = np.zeros(image_orig.shape[:-1])\nmask[20:60, 0:20] = 1\nmask[200:300, 150:170] = 1\nmask[50:100, 400:430] = 1\n\n# Defect image over the same region in each color channel\nimage_defect = image_orig.copy()\nfor layer in range(image_defect.shape[-1]):\n image_defect[np.where(mask)] = 0\n\nimage_result = inpaint.inpaint_biharmonic(image_defect, mask, multichannel=True)\n\nfig, axes = plt.subplots(ncols=2, nrows=2)\nax0, ax1, ax2, ax3 = axes.ravel()\n\nax0.set_title('Original image')\nax0.imshow(image_orig)\nax0.axis('off')\n\nax1.set_title('Mask')\nax1.imshow(mask, cmap=plt.cm.gray)\nax1.axis('off')\n\nax2.set_title('Defected image')\nax2.imshow(image_defect)\nax2.axis('off')\n\nax3.set_title('Inpainted image')\nax3.imshow(image_result)\nax3.axis('off')\n\nplt.tight_layout()\nplt.show()\n", "path": "doc/examples/filters/plot_inpaint.py"}]} | 1,231 | 301 |
gh_patches_debug_37093 | rasdani/github-patches | git_diff | TheAlgorithms__Python-2221 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
maths/number_of_digits.py is naive!
<code>[maths/number_of_digits.py](https://github.com/TheAlgorithms/Python/blob/master/maths/number_of_digits.py)</code> has a naive method. The suggested method has a time complexity of O(n). We can count number of digits of a number in O(1).
</issue>
<code>
[start of maths/number_of_digits.py]
1 def num_digits(n: int) -> int:
2 """
3 Find the number of digits in a number.
4
5 >>> num_digits(12345)
6 5
7 >>> num_digits(123)
8 3
9 """
10 digits = 0
11 while n > 0:
12 n = n // 10
13 digits += 1
14 return digits
15
16
17 if __name__ == "__main__":
18 print(num_digits(12345)) # ===> 5
19
[end of maths/number_of_digits.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/maths/number_of_digits.py b/maths/number_of_digits.py
--- a/maths/number_of_digits.py
+++ b/maths/number_of_digits.py
@@ -1,3 +1,7 @@
+import math
+from timeit import timeit
+
+
def num_digits(n: int) -> int:
"""
Find the number of digits in a number.
@@ -14,5 +18,82 @@
return digits
+def num_digits_fast(n: int) -> int:
+ """
+ Find the number of digits in a number.
+ abs() is used as logarithm for negative numbers is not defined.
+
+ >>> num_digits_fast(12345)
+ 5
+ >>> num_digits_fast(123)
+ 3
+ """
+ return (math.floor(math.log(abs(n), 10) + 1))
+
+
+def num_digits_faster(n: int) -> int:
+ """
+ Find the number of digits in a number.
+ abs() is used for negative numbers
+
+ >>> num_digits_faster(12345)
+ 5
+ >>> num_digits_faster(123)
+ 3
+ """
+ return (len(str(abs(n))))
+
+
+def benchmark() -> None:
+ """
+ Benchmark code for comparing 3 functions,
+ with 3 different length int values.
+ """
+ print('\nFor small_num = ', small_num, ':')
+ print("> num_digits()",
+ '\t\tans =', num_digits(small_num),
+ '\ttime =', timeit("z.num_digits(z.small_num)",
+ setup="import __main__ as z"), "seconds")
+ print("> num_digits_fast()",
+ '\tans =', num_digits_fast(small_num),
+ '\ttime =', timeit("z.num_digits_fast(z.small_num)",
+ setup="import __main__ as z"), "seconds")
+ print("> num_digits_faster()",
+ '\tans =', num_digits_faster(small_num),
+ '\ttime =', timeit("z.num_digits_faster(z.small_num)",
+ setup="import __main__ as z"), "seconds")
+
+ print('\nFor medium_num = ', medium_num, ':')
+ print("> num_digits()",
+ '\t\tans =', num_digits(medium_num),
+ '\ttime =', timeit("z.num_digits(z.medium_num)",
+ setup="import __main__ as z"), "seconds")
+ print("> num_digits_fast()",
+ '\tans =', num_digits_fast(medium_num),
+ '\ttime =', timeit("z.num_digits_fast(z.medium_num)",
+ setup="import __main__ as z"), "seconds")
+ print("> num_digits_faster()",
+ '\tans =', num_digits_faster(medium_num),
+ '\ttime =', timeit("z.num_digits_faster(z.medium_num)",
+ setup="import __main__ as z"), "seconds")
+
+ print('\nFor large_num = ', large_num, ':')
+ print("> num_digits()",
+ '\t\tans =', num_digits(large_num),
+ '\ttime =', timeit("z.num_digits(z.large_num)",
+ setup="import __main__ as z"), "seconds")
+ print("> num_digits_fast()",
+ '\tans =', num_digits_fast(large_num),
+ '\ttime =', timeit("z.num_digits_fast(z.large_num)",
+ setup="import __main__ as z"), "seconds")
+ print("> num_digits_faster()",
+ '\tans =', num_digits_faster(large_num),
+ '\ttime =', timeit("z.num_digits_faster(z.large_num)",
+ setup="import __main__ as z"), "seconds")
+
+
if __name__ == "__main__":
- print(num_digits(12345)) # ===> 5
+ small_num = 262144
+ medium_num = 1125899906842624
+ large_num = 1267650600228229401496703205376
+ benchmark()
| {"golden_diff": "diff --git a/maths/number_of_digits.py b/maths/number_of_digits.py\n--- a/maths/number_of_digits.py\n+++ b/maths/number_of_digits.py\n@@ -1,3 +1,7 @@\n+import math\n+from timeit import timeit\n+\n+\n def num_digits(n: int) -> int:\n \"\"\"\n Find the number of digits in a number.\n@@ -14,5 +18,82 @@\n return digits\n \n \n+def num_digits_fast(n: int) -> int:\n+ \"\"\"\n+ Find the number of digits in a number.\n+ abs() is used as logarithm for negative numbers is not defined.\n+\n+ >>> num_digits_fast(12345)\n+ 5\n+ >>> num_digits_fast(123)\n+ 3\n+ \"\"\"\n+ return (math.floor(math.log(abs(n), 10) + 1))\n+\n+\n+def num_digits_faster(n: int) -> int:\n+ \"\"\"\n+ Find the number of digits in a number.\n+ abs() is used for negative numbers\n+\n+ >>> num_digits_faster(12345)\n+ 5\n+ >>> num_digits_faster(123)\n+ 3\n+ \"\"\"\n+ return (len(str(abs(n))))\n+\n+\n+def benchmark() -> None:\n+ \"\"\"\n+ Benchmark code for comparing 3 functions,\n+ with 3 different length int values.\n+ \"\"\"\n+ print('\\nFor small_num = ', small_num, ':')\n+ print(\"> num_digits()\",\n+ '\\t\\tans =', num_digits(small_num),\n+ '\\ttime =', timeit(\"z.num_digits(z.small_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+ print(\"> num_digits_fast()\",\n+ '\\tans =', num_digits_fast(small_num),\n+ '\\ttime =', timeit(\"z.num_digits_fast(z.small_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+ print(\"> num_digits_faster()\",\n+ '\\tans =', num_digits_faster(small_num),\n+ '\\ttime =', timeit(\"z.num_digits_faster(z.small_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+\n+ print('\\nFor medium_num = ', medium_num, ':')\n+ print(\"> num_digits()\",\n+ '\\t\\tans =', num_digits(medium_num),\n+ '\\ttime =', timeit(\"z.num_digits(z.medium_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+ print(\"> num_digits_fast()\",\n+ '\\tans =', num_digits_fast(medium_num),\n+ '\\ttime =', timeit(\"z.num_digits_fast(z.medium_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+ print(\"> num_digits_faster()\",\n+ '\\tans =', num_digits_faster(medium_num),\n+ '\\ttime =', timeit(\"z.num_digits_faster(z.medium_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+\n+ print('\\nFor large_num = ', large_num, ':')\n+ print(\"> num_digits()\",\n+ '\\t\\tans =', num_digits(large_num),\n+ '\\ttime =', timeit(\"z.num_digits(z.large_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+ print(\"> num_digits_fast()\",\n+ '\\tans =', num_digits_fast(large_num),\n+ '\\ttime =', timeit(\"z.num_digits_fast(z.large_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+ print(\"> num_digits_faster()\",\n+ '\\tans =', num_digits_faster(large_num),\n+ '\\ttime =', timeit(\"z.num_digits_faster(z.large_num)\",\n+ setup=\"import __main__ as z\"), \"seconds\")\n+\n+\n if __name__ == \"__main__\":\n- print(num_digits(12345)) # ===> 5\n+ small_num = 262144\n+ medium_num = 1125899906842624\n+ large_num = 1267650600228229401496703205376\n+ benchmark()\n", "issue": "maths/number_of_digits.py is naive!\n<code>[maths/number_of_digits.py](https://github.com/TheAlgorithms/Python/blob/master/maths/number_of_digits.py)</code> has a naive method. The suggested method has a time complexity of O(n). We can count number of digits of a number in O(1).\n", "before_files": [{"content": "def num_digits(n: int) -> int:\n \"\"\"\n Find the number of digits in a number.\n\n >>> num_digits(12345)\n 5\n >>> num_digits(123)\n 3\n \"\"\"\n digits = 0\n while n > 0:\n n = n // 10\n digits += 1\n return digits\n\n\nif __name__ == \"__main__\":\n print(num_digits(12345)) # ===> 5\n", "path": "maths/number_of_digits.py"}]} | 751 | 978 |
gh_patches_debug_3043 | rasdani/github-patches | git_diff | docker__docker-py-1250 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
attach is causing an "Invalid Argument" exception from os.read
``` python
stream = client.attach(container, stream=True, stdout=True, stderr=True)
for chunk in stream:
pass
```
Results in:
```
File "/Users/michael/work/oss/marina/marina/build.py", line 695, in watcher
for chunk in stream:
File ".venv/lib/python3.5/site-packages/docker/utils/socket.py", line 67, in frames_iter
yield read(socket, n)
File ".venv/lib/python3.5/site-packages/docker/utils/socket.py", line 25, in read
return os.read(socket.fileno(), n)
OSError: [Errno 22] Invalid argument
```
Using docker-py 1.10.2 on OS X 10.11.6 with docker for mac 1.12.0-rc3. Reverting to 1.9.0 fixes the issue.
</issue>
<code>
[start of docker/utils/socket.py]
1 import errno
2 import os
3 import select
4 import struct
5
6 import six
7
8 try:
9 from ..transport import NpipeSocket
10 except ImportError:
11 NpipeSocket = type(None)
12
13
14 class SocketError(Exception):
15 pass
16
17
18 def read(socket, n=4096):
19 """
20 Reads at most n bytes from socket
21 """
22
23 recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)
24
25 # wait for data to become available
26 if not isinstance(socket, NpipeSocket):
27 select.select([socket], [], [])
28
29 try:
30 if hasattr(socket, 'recv'):
31 return socket.recv(n)
32 return os.read(socket.fileno(), n)
33 except EnvironmentError as e:
34 if e.errno not in recoverable_errors:
35 raise
36
37
38 def read_exactly(socket, n):
39 """
40 Reads exactly n bytes from socket
41 Raises SocketError if there isn't enough data
42 """
43 data = six.binary_type()
44 while len(data) < n:
45 next_data = read(socket, n - len(data))
46 if not next_data:
47 raise SocketError("Unexpected EOF")
48 data += next_data
49 return data
50
51
52 def next_frame_size(socket):
53 """
54 Returns the size of the next frame of data waiting to be read from socket,
55 according to the protocol defined here:
56
57 https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/attach-to-a-container
58 """
59 try:
60 data = read_exactly(socket, 8)
61 except SocketError:
62 return 0
63
64 _, actual = struct.unpack('>BxxxL', data)
65 return actual
66
67
68 def frames_iter(socket):
69 """
70 Returns a generator of frames read from socket
71 """
72 n = next_frame_size(socket)
73 while n > 0:
74 yield read(socket, n)
75 n = next_frame_size(socket)
76
[end of docker/utils/socket.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/utils/socket.py b/docker/utils/socket.py
--- a/docker/utils/socket.py
+++ b/docker/utils/socket.py
@@ -69,7 +69,11 @@
"""
Returns a generator of frames read from socket
"""
- n = next_frame_size(socket)
- while n > 0:
- yield read(socket, n)
+ while True:
n = next_frame_size(socket)
+ if n == 0:
+ break
+ while n > 0:
+ result = read(socket, n)
+ n -= len(result)
+ yield result
| {"golden_diff": "diff --git a/docker/utils/socket.py b/docker/utils/socket.py\n--- a/docker/utils/socket.py\n+++ b/docker/utils/socket.py\n@@ -69,7 +69,11 @@\n \"\"\"\n Returns a generator of frames read from socket\n \"\"\"\n- n = next_frame_size(socket)\n- while n > 0:\n- yield read(socket, n)\n+ while True:\n n = next_frame_size(socket)\n+ if n == 0:\n+ break\n+ while n > 0:\n+ result = read(socket, n)\n+ n -= len(result)\n+ yield result\n", "issue": "attach is causing an \"Invalid Argument\" exception from os.read\n``` python\nstream = client.attach(container, stream=True, stdout=True, stderr=True)\nfor chunk in stream:\n pass\n```\n\nResults in:\n\n```\n File \"/Users/michael/work/oss/marina/marina/build.py\", line 695, in watcher\n for chunk in stream:\n File \".venv/lib/python3.5/site-packages/docker/utils/socket.py\", line 67, in frames_iter\n yield read(socket, n)\n File \".venv/lib/python3.5/site-packages/docker/utils/socket.py\", line 25, in read\n return os.read(socket.fileno(), n)\nOSError: [Errno 22] Invalid argument\n```\n\nUsing docker-py 1.10.2 on OS X 10.11.6 with docker for mac 1.12.0-rc3. Reverting to 1.9.0 fixes the issue.\n\n", "before_files": [{"content": "import errno\nimport os\nimport select\nimport struct\n\nimport six\n\ntry:\n from ..transport import NpipeSocket\nexcept ImportError:\n NpipeSocket = type(None)\n\n\nclass SocketError(Exception):\n pass\n\n\ndef read(socket, n=4096):\n \"\"\"\n Reads at most n bytes from socket\n \"\"\"\n\n recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)\n\n # wait for data to become available\n if not isinstance(socket, NpipeSocket):\n select.select([socket], [], [])\n\n try:\n if hasattr(socket, 'recv'):\n return socket.recv(n)\n return os.read(socket.fileno(), n)\n except EnvironmentError as e:\n if e.errno not in recoverable_errors:\n raise\n\n\ndef read_exactly(socket, n):\n \"\"\"\n Reads exactly n bytes from socket\n Raises SocketError if there isn't enough data\n \"\"\"\n data = six.binary_type()\n while len(data) < n:\n next_data = read(socket, n - len(data))\n if not next_data:\n raise SocketError(\"Unexpected EOF\")\n data += next_data\n return data\n\n\ndef next_frame_size(socket):\n \"\"\"\n Returns the size of the next frame of data waiting to be read from socket,\n according to the protocol defined here:\n\n https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/attach-to-a-container\n \"\"\"\n try:\n data = read_exactly(socket, 8)\n except SocketError:\n return 0\n\n _, actual = struct.unpack('>BxxxL', data)\n return actual\n\n\ndef frames_iter(socket):\n \"\"\"\n Returns a generator of frames read from socket\n \"\"\"\n n = next_frame_size(socket)\n while n > 0:\n yield read(socket, n)\n n = next_frame_size(socket)\n", "path": "docker/utils/socket.py"}]} | 1,304 | 135 |
gh_patches_debug_2193 | rasdani/github-patches | git_diff | ansible-collections__community.general-6695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
read_csv - Key 'Name' was not found in the CSV header fields
##### SUMMARY
The `read_csv` module fails to identify a field, yet displaces the field in the list of available fields.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
read_csv
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /home/anton/git/ansible-deploy-vmware-vm/ansible.cfg
configured module search path = ['/home/anton/git/ansible-deploy-vmware-vm/library']
ansible python module location = /home/anton/.local/lib/python3.6/site-packages/ansible
executable location = /home/anton/.local/bin/ansible
python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
```
##### CONFIGURATION
```
# config file for ansible -- http://ansible.com/
# ==============================================
# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first
[defaults]
host_key_checking = False
host_key_check = False
ansible_python_interpreter=/usr/bin/python3
log_path = ./ansible.log
#bin_ansible_callbacks=True
#stdout_callback = debug
# some basic default values...
library = ./library
# additional paths to search for roles in, colon separated
roles_path = ./roles
[ssh_connection]
# ssh arguments to use
ssh_args = -o StrictHostKeyChecking=no
timeout=60
```
##### OS / ENVIRONMENT
Ubuntu 20:04
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Right-size VMs
gather_facts: false
hosts: all
connection: local
tasks:
# Read a CSV file and access the first item
- name: Read users from CSV file and return a list
read_csv:
path: "files/vms/6-19-20 Optimization Report - Oversized Virtual Machines Prod2.csv"
key: Name
register: users
- debug:
msg: 'User {{ users.list.2.Name}}'
# msg: 'User {{ users.list.2.Name}} has UID {{ users.list.2.ReclaimablevCPUs}} and GID {{ users.list.2.ReclaimableMemory}}'
# msg: "{{ users }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Expect to be able to read CSV values by col name (field) as based on module documentation.
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "Key 'Name' was not found in the CSV header fields: Name, Configured-vCPU, ReclaimablevCPUs, ConfiguredMemory, ReclaimableMemory, ParentvCenter"}
```
</issue>
<code>
[start of plugins/module_utils/csv.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2021, Andrew Pantuso (@ajpantuso) <[email protected]>
4 # Copyright (c) 2018, Dag Wieers (@dagwieers) <[email protected]>
5 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
6 # SPDX-License-Identifier: GPL-3.0-or-later
7
8 from __future__ import absolute_import, division, print_function
9 __metaclass__ = type
10
11 import csv
12 from io import BytesIO, StringIO
13
14 from ansible.module_utils.common.text.converters import to_native
15 from ansible.module_utils.six import PY3
16
17
18 class CustomDialectFailureError(Exception):
19 pass
20
21
22 class DialectNotAvailableError(Exception):
23 pass
24
25
26 CSVError = csv.Error
27
28
29 def initialize_dialect(dialect, **kwargs):
30 # Add Unix dialect from Python 3
31 class unix_dialect(csv.Dialect):
32 """Describe the usual properties of Unix-generated CSV files."""
33 delimiter = ','
34 quotechar = '"'
35 doublequote = True
36 skipinitialspace = False
37 lineterminator = '\n'
38 quoting = csv.QUOTE_ALL
39
40 csv.register_dialect("unix", unix_dialect)
41
42 if dialect not in csv.list_dialects():
43 raise DialectNotAvailableError("Dialect '%s' is not supported by your version of python." % dialect)
44
45 # Create a dictionary from only set options
46 dialect_params = dict((k, v) for k, v in kwargs.items() if v is not None)
47 if dialect_params:
48 try:
49 csv.register_dialect('custom', dialect, **dialect_params)
50 except TypeError as e:
51 raise CustomDialectFailureError("Unable to create custom dialect: %s" % to_native(e))
52 dialect = 'custom'
53
54 return dialect
55
56
57 def read_csv(data, dialect, fieldnames=None):
58
59 data = to_native(data, errors='surrogate_or_strict')
60
61 if PY3:
62 fake_fh = StringIO(data)
63 else:
64 fake_fh = BytesIO(data)
65
66 reader = csv.DictReader(fake_fh, fieldnames=fieldnames, dialect=dialect)
67
68 return reader
69
[end of plugins/module_utils/csv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/module_utils/csv.py b/plugins/module_utils/csv.py
--- a/plugins/module_utils/csv.py
+++ b/plugins/module_utils/csv.py
@@ -55,8 +55,10 @@
def read_csv(data, dialect, fieldnames=None):
-
+ BOM = to_native(u'\ufeff')
data = to_native(data, errors='surrogate_or_strict')
+ if data.startswith(BOM):
+ data = data[len(BOM):]
if PY3:
fake_fh = StringIO(data)
| {"golden_diff": "diff --git a/plugins/module_utils/csv.py b/plugins/module_utils/csv.py\n--- a/plugins/module_utils/csv.py\n+++ b/plugins/module_utils/csv.py\n@@ -55,8 +55,10 @@\n \n \n def read_csv(data, dialect, fieldnames=None):\n-\n+ BOM = to_native(u'\\ufeff')\n data = to_native(data, errors='surrogate_or_strict')\n+ if data.startswith(BOM):\n+ data = data[len(BOM):]\n \n if PY3:\n fake_fh = StringIO(data)\n", "issue": "read_csv - Key 'Name' was not found in the CSV header fields\n##### SUMMARY\r\nThe `read_csv` module fails to identify a field, yet displaces the field in the list of available fields.\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nread_csv\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible 2.9.10\r\n config file = /home/anton/git/ansible-deploy-vmware-vm/ansible.cfg\r\n configured module search path = ['/home/anton/git/ansible-deploy-vmware-vm/library']\r\n ansible python module location = /home/anton/.local/lib/python3.6/site-packages/ansible\r\n executable location = /home/anton/.local/bin/ansible\r\n python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]\r\n\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```\r\n# config file for ansible -- http://ansible.com/\r\n# ==============================================\r\n\r\n# nearly all parameters can be overridden in ansible-playbook\r\n# or with command line flags. ansible will read ANSIBLE_CONFIG,\r\n# ansible.cfg in the current working directory, .ansible.cfg in\r\n# the home directory or /etc/ansible/ansible.cfg, whichever it\r\n# finds first\r\n\r\n[defaults]\r\nhost_key_checking = False\r\nhost_key_check = False\r\nansible_python_interpreter=/usr/bin/python3\r\nlog_path = ./ansible.log\r\n#bin_ansible_callbacks=True\r\n#stdout_callback = debug\r\n\r\n\r\n# some basic default values...\r\nlibrary = ./library\r\n\r\n# additional paths to search for roles in, colon separated\r\nroles_path = ./roles\r\n\r\n[ssh_connection]\r\n# ssh arguments to use\r\nssh_args = -o StrictHostKeyChecking=no\r\ntimeout=60\r\n\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 20:04\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```\r\n---\r\n- name: Right-size VMs\r\n gather_facts: false\r\n hosts: all\r\n connection: local\r\n tasks:\r\n # Read a CSV file and access the first item\r\n - name: Read users from CSV file and return a list\r\n read_csv:\r\n path: \"files/vms/6-19-20 Optimization Report - Oversized Virtual Machines Prod2.csv\"\r\n key: Name\r\n register: users\r\n\r\n - debug:\r\n msg: 'User {{ users.list.2.Name}}'\r\n # msg: 'User {{ users.list.2.Name}} has UID {{ users.list.2.ReclaimablevCPUs}} and GID {{ users.list.2.ReclaimableMemory}}'\r\n # msg: \"{{ users }}\"\r\n\r\n\r\n\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\nExpect to be able to read CSV values by col name (field) as based on module documentation.\r\n\r\n\r\n##### ACTUAL RESULTS\r\n```\r\nfatal: [localhost]: FAILED! => {\"ansible_facts\": {\"discovered_interpreter_python\": \"/usr/bin/python\"}, \"changed\": false, \"msg\": \"Key 'Name' was not found in the CSV header fields: \ufeffName, Configured-vCPU, ReclaimablevCPUs, ConfiguredMemory, ReclaimableMemory, ParentvCenter\"}\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2021, Andrew Pantuso (@ajpantuso) <[email protected]>\n# Copyright (c) 2018, Dag Wieers (@dagwieers) <[email protected]>\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nimport csv\nfrom io import BytesIO, StringIO\n\nfrom ansible.module_utils.common.text.converters import to_native\nfrom ansible.module_utils.six import PY3\n\n\nclass CustomDialectFailureError(Exception):\n pass\n\n\nclass DialectNotAvailableError(Exception):\n pass\n\n\nCSVError = csv.Error\n\n\ndef initialize_dialect(dialect, **kwargs):\n # Add Unix dialect from Python 3\n class unix_dialect(csv.Dialect):\n \"\"\"Describe the usual properties of Unix-generated CSV files.\"\"\"\n delimiter = ','\n quotechar = '\"'\n doublequote = True\n skipinitialspace = False\n lineterminator = '\\n'\n quoting = csv.QUOTE_ALL\n\n csv.register_dialect(\"unix\", unix_dialect)\n\n if dialect not in csv.list_dialects():\n raise DialectNotAvailableError(\"Dialect '%s' is not supported by your version of python.\" % dialect)\n\n # Create a dictionary from only set options\n dialect_params = dict((k, v) for k, v in kwargs.items() if v is not None)\n if dialect_params:\n try:\n csv.register_dialect('custom', dialect, **dialect_params)\n except TypeError as e:\n raise CustomDialectFailureError(\"Unable to create custom dialect: %s\" % to_native(e))\n dialect = 'custom'\n\n return dialect\n\n\ndef read_csv(data, dialect, fieldnames=None):\n\n data = to_native(data, errors='surrogate_or_strict')\n\n if PY3:\n fake_fh = StringIO(data)\n else:\n fake_fh = BytesIO(data)\n\n reader = csv.DictReader(fake_fh, fieldnames=fieldnames, dialect=dialect)\n\n return reader\n", "path": "plugins/module_utils/csv.py"}]} | 1,908 | 117 |
gh_patches_debug_11630 | rasdani/github-patches | git_diff | mozilla__bugbug-407 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not all training tasks need commits DB
Indeed I think none of the ones we currently run as part of the data pipeline need the commits.
We should:
- Make the trainer script only download the DBs which are necessary;
- Remove the dependency on the commit retrieval task in the data-pipeline.yml.
</issue>
<code>
[start of scripts/trainer.py]
1 # -*- coding: utf-8 -*-
2
3 import argparse
4 import lzma
5 import os
6 import shutil
7 from logging import INFO, basicConfig, getLogger
8 from urllib.request import urlretrieve
9
10 from bugbug.models.component import ComponentModel
11 from bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel
12 from bugbug.models.regression import RegressionModel
13 from bugbug.models.tracking import TrackingModel
14
15 basicConfig(level=INFO)
16 logger = getLogger(__name__)
17
18 BASE_URL = "https://index.taskcluster.net/v1/task/project.relman.bugbug.data_{}.latest/artifacts/public"
19
20
21 class Trainer(object):
22 def decompress_file(self, path):
23 with lzma.open(f"{path}.xz", "rb") as input_f:
24 with open(path, "wb") as output_f:
25 shutil.copyfileobj(input_f, output_f)
26
27 def compress_file(self, path):
28 with open(path, "rb") as input_f:
29 with lzma.open(f"{path}.xz", "wb") as output_f:
30 shutil.copyfileobj(input_f, output_f)
31
32 def train_defect_enhancement_task(self):
33 logger.info("Training *defect vs enhancement vs task* model")
34 model = DefectEnhancementTaskModel()
35 model.train()
36 self.compress_file("defectenhancementtaskmodel")
37
38 def train_component(self):
39 logger.info("Training *component* model")
40 model = ComponentModel()
41 model.train()
42 self.compress_file("componentmodel")
43
44 def train_regression(self):
45 logger.info("Training *regression vs non-regression* model")
46 model = RegressionModel()
47 model.train()
48 self.compress_file("regressionmodel")
49
50 def train_tracking(self):
51 logger.info("Training *tracking* model")
52 model = TrackingModel()
53 model.train()
54 self.compress_file("trackingmodel")
55
56 def go(self, model):
57 # TODO: Stop hard-coding them
58 valid_models = ["defect", "component", "regression", "tracking"]
59
60 if model not in valid_models:
61 exception = (
62 f"Invalid model {model!r} name, use one of {valid_models!r} instead"
63 )
64 raise ValueError(exception)
65
66 # Download datasets that were built by bugbug_data.
67 os.makedirs("data", exist_ok=True)
68
69 # Bugs.json
70 logger.info("Downloading bugs database")
71 bugs_url = BASE_URL.format("bugs")
72 urlretrieve(f"{bugs_url}/bugs.json.xz", "data/bugs.json.xz")
73 logger.info("Decompressing bugs database")
74 self.decompress_file("data/bugs.json")
75
76 # Commits.json
77 logger.info("Downloading commits database")
78 commits_url = BASE_URL.format("commits")
79 urlretrieve(f"{commits_url}/commits.json.xz", "data/commits.json.xz")
80 logger.info("Decompressing commits database")
81 self.decompress_file("data/commits.json")
82
83 if model == "defect":
84 # Train classifier for defect-vs-enhancement-vs-task.
85 self.train_defect_enhancement_task()
86 elif model == "component":
87 # Train classifier for the component of a bug.
88 self.train_component()
89 elif model == "regression":
90 # Train classifier for regression-vs-nonregression.
91 self.train_regression()
92 elif model == "tracking":
93 # Train classifier for tracking bugs.
94 self.train_tracking()
95 else:
96 # We shouldn't be here
97 raise Exception("valid_models is likely not up-to-date anymore")
98
99
100 def main():
101 description = "Train the models"
102 parser = argparse.ArgumentParser(description=description)
103
104 parser.add_argument("model", help="Which model to train.")
105
106 args = parser.parse_args()
107
108 retriever = Trainer()
109 retriever.go(args.model)
110
[end of scripts/trainer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/trainer.py b/scripts/trainer.py
--- a/scripts/trainer.py
+++ b/scripts/trainer.py
@@ -73,13 +73,6 @@
logger.info("Decompressing bugs database")
self.decompress_file("data/bugs.json")
- # Commits.json
- logger.info("Downloading commits database")
- commits_url = BASE_URL.format("commits")
- urlretrieve(f"{commits_url}/commits.json.xz", "data/commits.json.xz")
- logger.info("Decompressing commits database")
- self.decompress_file("data/commits.json")
-
if model == "defect":
# Train classifier for defect-vs-enhancement-vs-task.
self.train_defect_enhancement_task()
| {"golden_diff": "diff --git a/scripts/trainer.py b/scripts/trainer.py\n--- a/scripts/trainer.py\n+++ b/scripts/trainer.py\n@@ -73,13 +73,6 @@\n logger.info(\"Decompressing bugs database\")\n self.decompress_file(\"data/bugs.json\")\n \n- # Commits.json\n- logger.info(\"Downloading commits database\")\n- commits_url = BASE_URL.format(\"commits\")\n- urlretrieve(f\"{commits_url}/commits.json.xz\", \"data/commits.json.xz\")\n- logger.info(\"Decompressing commits database\")\n- self.decompress_file(\"data/commits.json\")\n-\n if model == \"defect\":\n # Train classifier for defect-vs-enhancement-vs-task.\n self.train_defect_enhancement_task()\n", "issue": "Not all training tasks need commits DB\nIndeed I think none of the ones we currently run as part of the data pipeline need the commits.\r\nWe should:\r\n- Make the trainer script only download the DBs which are necessary;\r\n- Remove the dependency on the commit retrieval task in the data-pipeline.yml.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport argparse\nimport lzma\nimport os\nimport shutil\nfrom logging import INFO, basicConfig, getLogger\nfrom urllib.request import urlretrieve\n\nfrom bugbug.models.component import ComponentModel\nfrom bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel\nfrom bugbug.models.regression import RegressionModel\nfrom bugbug.models.tracking import TrackingModel\n\nbasicConfig(level=INFO)\nlogger = getLogger(__name__)\n\nBASE_URL = \"https://index.taskcluster.net/v1/task/project.relman.bugbug.data_{}.latest/artifacts/public\"\n\n\nclass Trainer(object):\n def decompress_file(self, path):\n with lzma.open(f\"{path}.xz\", \"rb\") as input_f:\n with open(path, \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def compress_file(self, path):\n with open(path, \"rb\") as input_f:\n with lzma.open(f\"{path}.xz\", \"wb\") as output_f:\n shutil.copyfileobj(input_f, output_f)\n\n def train_defect_enhancement_task(self):\n logger.info(\"Training *defect vs enhancement vs task* model\")\n model = DefectEnhancementTaskModel()\n model.train()\n self.compress_file(\"defectenhancementtaskmodel\")\n\n def train_component(self):\n logger.info(\"Training *component* model\")\n model = ComponentModel()\n model.train()\n self.compress_file(\"componentmodel\")\n\n def train_regression(self):\n logger.info(\"Training *regression vs non-regression* model\")\n model = RegressionModel()\n model.train()\n self.compress_file(\"regressionmodel\")\n\n def train_tracking(self):\n logger.info(\"Training *tracking* model\")\n model = TrackingModel()\n model.train()\n self.compress_file(\"trackingmodel\")\n\n def go(self, model):\n # TODO: Stop hard-coding them\n valid_models = [\"defect\", \"component\", \"regression\", \"tracking\"]\n\n if model not in valid_models:\n exception = (\n f\"Invalid model {model!r} name, use one of {valid_models!r} instead\"\n )\n raise ValueError(exception)\n\n # Download datasets that were built by bugbug_data.\n os.makedirs(\"data\", exist_ok=True)\n\n # Bugs.json\n logger.info(\"Downloading bugs database\")\n bugs_url = BASE_URL.format(\"bugs\")\n urlretrieve(f\"{bugs_url}/bugs.json.xz\", \"data/bugs.json.xz\")\n logger.info(\"Decompressing bugs database\")\n self.decompress_file(\"data/bugs.json\")\n\n # Commits.json\n logger.info(\"Downloading commits database\")\n commits_url = BASE_URL.format(\"commits\")\n urlretrieve(f\"{commits_url}/commits.json.xz\", \"data/commits.json.xz\")\n logger.info(\"Decompressing commits database\")\n self.decompress_file(\"data/commits.json\")\n\n if model == \"defect\":\n # Train classifier for defect-vs-enhancement-vs-task.\n self.train_defect_enhancement_task()\n elif model == \"component\":\n # Train classifier for the component of a bug.\n self.train_component()\n elif model == \"regression\":\n # Train classifier for regression-vs-nonregression.\n self.train_regression()\n elif model == \"tracking\":\n # Train classifier for tracking bugs.\n self.train_tracking()\n else:\n # We shouldn't be here\n raise Exception(\"valid_models is likely not up-to-date anymore\")\n\n\ndef main():\n description = \"Train the models\"\n parser = argparse.ArgumentParser(description=description)\n\n parser.add_argument(\"model\", help=\"Which model to train.\")\n\n args = parser.parse_args()\n\n retriever = Trainer()\n retriever.go(args.model)\n", "path": "scripts/trainer.py"}]} | 1,639 | 174 |
gh_patches_debug_10182 | rasdani/github-patches | git_diff | getredash__redash-998 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Embed query description appearing larger than vizualization name
The query description is appearing larger then the visualization name:

</issue>
<code>
[start of redash/handlers/embed.py]
1 import json
2
3 from funcy import project
4 from flask import render_template, request
5 from flask_login import login_required, current_user
6 from flask_restful import abort
7
8 from redash import models, settings
9 from redash import serializers
10 from redash.utils import json_dumps
11 from redash.handlers import routes
12 from redash.handlers.base import org_scoped_rule
13 from redash.permissions import require_access, view_only
14 from authentication import current_org
15
16
17 @routes.route(org_scoped_rule('/embed/query/<query_id>/visualization/<visualization_id>'), methods=['GET'])
18 @login_required
19 def embed(query_id, visualization_id, org_slug=None):
20 # TODO: add event for embed access
21 query = models.Query.get_by_id_and_org(query_id, current_org)
22 require_access(query.groups, current_user, view_only)
23 vis = query.visualizations.where(models.Visualization.id == visualization_id).first()
24 qr = {}
25
26 if vis is not None:
27 vis = vis.to_dict()
28 qr = query.latest_query_data
29 if qr is None:
30 abort(400, message="No Results for this query")
31 else:
32 qr = qr.to_dict()
33 else:
34 abort(404, message="Visualization not found.")
35
36 client_config = {}
37 client_config.update(settings.COMMON_CLIENT_CONFIG)
38
39 qr = project(qr, ('data', 'id', 'retrieved_at'))
40 vis = project(vis, ('description', 'name', 'id', 'options', 'query', 'type', 'updated_at'))
41 vis['query'] = project(vis['query'], ('created_at', 'description', 'name', 'id', 'latest_query_data_id', 'name', 'updated_at'))
42
43 return render_template("embed.html",
44
45 client_config=json_dumps(client_config),
46 visualization=json_dumps(vis),
47 query_result=json_dumps(qr))
48
49
50 @routes.route(org_scoped_rule('/public/dashboards/<token>'), methods=['GET'])
51 @login_required
52 def public_dashboard(token, org_slug=None):
53 # TODO: verify object is a dashboard?
54 if not isinstance(current_user, models.ApiUser):
55 api_key = models.ApiKey.get_by_api_key(token)
56 dashboard = api_key.object
57 else:
58 dashboard = current_user.object
59
60 user = {
61 'permissions': [],
62 'apiKey': current_user.id
63 }
64
65 headers = {
66 'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate'
67 }
68
69 response = render_template("public.html",
70 headless='embed' in request.args,
71 user=json.dumps(user),
72 seed_data=json_dumps({
73 'dashboard': serializers.public_dashboard(dashboard)
74 }),
75 client_config=json.dumps(settings.COMMON_CLIENT_CONFIG))
76
77 return response, 200, headers
78
[end of redash/handlers/embed.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/handlers/embed.py b/redash/handlers/embed.py
--- a/redash/handlers/embed.py
+++ b/redash/handlers/embed.py
@@ -41,7 +41,6 @@
vis['query'] = project(vis['query'], ('created_at', 'description', 'name', 'id', 'latest_query_data_id', 'name', 'updated_at'))
return render_template("embed.html",
-
client_config=json_dumps(client_config),
visualization=json_dumps(vis),
query_result=json_dumps(qr))
| {"golden_diff": "diff --git a/redash/handlers/embed.py b/redash/handlers/embed.py\n--- a/redash/handlers/embed.py\n+++ b/redash/handlers/embed.py\n@@ -41,7 +41,6 @@\n vis['query'] = project(vis['query'], ('created_at', 'description', 'name', 'id', 'latest_query_data_id', 'name', 'updated_at'))\n \n return render_template(\"embed.html\",\n-\n client_config=json_dumps(client_config),\n visualization=json_dumps(vis),\n query_result=json_dumps(qr))\n", "issue": "Embed query description appearing larger than vizualization name\nThe query description is appearing larger then the visualization name:\n\n\n\n", "before_files": [{"content": "import json\n\nfrom funcy import project\nfrom flask import render_template, request\nfrom flask_login import login_required, current_user\nfrom flask_restful import abort\n\nfrom redash import models, settings\nfrom redash import serializers\nfrom redash.utils import json_dumps\nfrom redash.handlers import routes\nfrom redash.handlers.base import org_scoped_rule\nfrom redash.permissions import require_access, view_only\nfrom authentication import current_org\n\n\[email protected](org_scoped_rule('/embed/query/<query_id>/visualization/<visualization_id>'), methods=['GET'])\n@login_required\ndef embed(query_id, visualization_id, org_slug=None):\n # TODO: add event for embed access\n query = models.Query.get_by_id_and_org(query_id, current_org)\n require_access(query.groups, current_user, view_only)\n vis = query.visualizations.where(models.Visualization.id == visualization_id).first()\n qr = {}\n\n if vis is not None:\n vis = vis.to_dict()\n qr = query.latest_query_data\n if qr is None:\n abort(400, message=\"No Results for this query\")\n else:\n qr = qr.to_dict()\n else:\n abort(404, message=\"Visualization not found.\")\n\n client_config = {}\n client_config.update(settings.COMMON_CLIENT_CONFIG)\n\n qr = project(qr, ('data', 'id', 'retrieved_at'))\n vis = project(vis, ('description', 'name', 'id', 'options', 'query', 'type', 'updated_at'))\n vis['query'] = project(vis['query'], ('created_at', 'description', 'name', 'id', 'latest_query_data_id', 'name', 'updated_at'))\n\n return render_template(\"embed.html\",\n\n client_config=json_dumps(client_config),\n visualization=json_dumps(vis),\n query_result=json_dumps(qr))\n\n\[email protected](org_scoped_rule('/public/dashboards/<token>'), methods=['GET'])\n@login_required\ndef public_dashboard(token, org_slug=None):\n # TODO: verify object is a dashboard?\n if not isinstance(current_user, models.ApiUser):\n api_key = models.ApiKey.get_by_api_key(token)\n dashboard = api_key.object\n else:\n dashboard = current_user.object\n\n user = {\n 'permissions': [],\n 'apiKey': current_user.id\n }\n\n headers = {\n 'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate'\n }\n\n response = render_template(\"public.html\",\n headless='embed' in request.args,\n user=json.dumps(user),\n seed_data=json_dumps({\n 'dashboard': serializers.public_dashboard(dashboard)\n }),\n client_config=json.dumps(settings.COMMON_CLIENT_CONFIG))\n\n return response, 200, headers\n", "path": "redash/handlers/embed.py"}]} | 1,376 | 125 |
gh_patches_debug_12620 | rasdani/github-patches | git_diff | kivy__kivy-5187 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kivy breaks Clipboard
### Versions
* Python: 2.7.12
* OS: Windows 10
* Kivy: 1.9.2-dev0
* Kivy installation method: wheel
### Description
When pasting some data into a `TextInput`, the clipboard breaks across the system, and copying and pasting is not possible until the Kivy app is terminated. Specifically, I found the following steps to reproduce the problem:
1. Try copying a file into the `TextInput` box (nothing will paste in as expected)
2. Try copying some text somewhere else (does not have to be in the `TextInput`)
After step 1, nothing is copied or pasted and the Kivy application must be terminated before the clipboard starts working again.
</issue>
<code>
[start of kivy/core/clipboard/clipboard_winctypes.py]
1 '''
2 Clipboard windows: an implementation of the Clipboard using ctypes.
3 '''
4
5 __all__ = ('ClipboardWindows', )
6
7 from kivy.utils import platform
8 from kivy.core.clipboard import ClipboardBase
9
10 if platform != 'win':
11 raise SystemError('unsupported platform for Windows clipboard')
12
13 import ctypes
14 from ctypes import wintypes
15 user32 = ctypes.windll.user32
16 kernel32 = ctypes.windll.kernel32
17 msvcrt = ctypes.cdll.msvcrt
18 c_char_p = ctypes.c_char_p
19 c_wchar_p = ctypes.c_wchar_p
20
21
22 class ClipboardWindows(ClipboardBase):
23
24 def get(self, mimetype='text/plain'):
25 GetClipboardData = user32.GetClipboardData
26 GetClipboardData.argtypes = [wintypes.UINT]
27 GetClipboardData.restype = wintypes.HANDLE
28
29 user32.OpenClipboard(user32.GetActiveWindow())
30 # 1 is CF_TEXT
31 pcontents = GetClipboardData(13)
32 if not pcontents:
33 return ''
34 data = c_wchar_p(pcontents).value.encode(self._encoding)
35 user32.CloseClipboard()
36 return data
37
38 def put(self, text, mimetype='text/plain'):
39 text = text.decode(self._encoding) # auto converted later
40 text += u'\x00'
41
42 SetClipboardData = user32.SetClipboardData
43 SetClipboardData.argtypes = [wintypes.UINT, wintypes.HANDLE]
44 SetClipboardData.restype = wintypes.HANDLE
45
46 GlobalAlloc = kernel32.GlobalAlloc
47 GlobalAlloc.argtypes = [wintypes.UINT, ctypes.c_size_t]
48 GlobalAlloc.restype = wintypes.HGLOBAL
49
50 CF_UNICODETEXT = 13
51
52 user32.OpenClipboard(user32.GetActiveWindow())
53 user32.EmptyClipboard()
54 hCd = GlobalAlloc(0, len(text) * ctypes.sizeof(ctypes.c_wchar))
55 msvcrt.wcscpy_s(c_wchar_p(hCd), len(text), c_wchar_p(text))
56 SetClipboardData(CF_UNICODETEXT, hCd)
57 user32.CloseClipboard()
58
59 def get_types(self):
60 return ['text/plain']
61
[end of kivy/core/clipboard/clipboard_winctypes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kivy/core/clipboard/clipboard_winctypes.py b/kivy/core/clipboard/clipboard_winctypes.py
--- a/kivy/core/clipboard/clipboard_winctypes.py
+++ b/kivy/core/clipboard/clipboard_winctypes.py
@@ -27,9 +27,13 @@
GetClipboardData.restype = wintypes.HANDLE
user32.OpenClipboard(user32.GetActiveWindow())
- # 1 is CF_TEXT
+ # Standard Clipboard Format "1" is "CF_TEXT"
pcontents = GetClipboardData(13)
+
+ # if someone pastes a FILE, the content is None for SCF 13
+ # and the clipboard is locked if not closed properly
if not pcontents:
+ user32.CloseClipboard()
return ''
data = c_wchar_p(pcontents).value.encode(self._encoding)
user32.CloseClipboard()
| {"golden_diff": "diff --git a/kivy/core/clipboard/clipboard_winctypes.py b/kivy/core/clipboard/clipboard_winctypes.py\n--- a/kivy/core/clipboard/clipboard_winctypes.py\n+++ b/kivy/core/clipboard/clipboard_winctypes.py\n@@ -27,9 +27,13 @@\n GetClipboardData.restype = wintypes.HANDLE\n \n user32.OpenClipboard(user32.GetActiveWindow())\n- # 1 is CF_TEXT\n+ # Standard Clipboard Format \"1\" is \"CF_TEXT\"\n pcontents = GetClipboardData(13)\n+\n+ # if someone pastes a FILE, the content is None for SCF 13\n+ # and the clipboard is locked if not closed properly\n if not pcontents:\n+ user32.CloseClipboard()\n return ''\n data = c_wchar_p(pcontents).value.encode(self._encoding)\n user32.CloseClipboard()\n", "issue": "Kivy breaks Clipboard\n### Versions\r\n\r\n* Python: 2.7.12\r\n* OS: Windows 10\r\n* Kivy: 1.9.2-dev0\r\n* Kivy installation method: wheel\r\n\r\n### Description\r\n\r\nWhen pasting some data into a `TextInput`, the clipboard breaks across the system, and copying and pasting is not possible until the Kivy app is terminated. Specifically, I found the following steps to reproduce the problem:\r\n1. Try copying a file into the `TextInput` box (nothing will paste in as expected)\r\n2. Try copying some text somewhere else (does not have to be in the `TextInput`)\r\n\r\nAfter step 1, nothing is copied or pasted and the Kivy application must be terminated before the clipboard starts working again.\n", "before_files": [{"content": "'''\nClipboard windows: an implementation of the Clipboard using ctypes.\n'''\n\n__all__ = ('ClipboardWindows', )\n\nfrom kivy.utils import platform\nfrom kivy.core.clipboard import ClipboardBase\n\nif platform != 'win':\n raise SystemError('unsupported platform for Windows clipboard')\n\nimport ctypes\nfrom ctypes import wintypes\nuser32 = ctypes.windll.user32\nkernel32 = ctypes.windll.kernel32\nmsvcrt = ctypes.cdll.msvcrt\nc_char_p = ctypes.c_char_p\nc_wchar_p = ctypes.c_wchar_p\n\n\nclass ClipboardWindows(ClipboardBase):\n\n def get(self, mimetype='text/plain'):\n GetClipboardData = user32.GetClipboardData\n GetClipboardData.argtypes = [wintypes.UINT]\n GetClipboardData.restype = wintypes.HANDLE\n\n user32.OpenClipboard(user32.GetActiveWindow())\n # 1 is CF_TEXT\n pcontents = GetClipboardData(13)\n if not pcontents:\n return ''\n data = c_wchar_p(pcontents).value.encode(self._encoding)\n user32.CloseClipboard()\n return data\n\n def put(self, text, mimetype='text/plain'):\n text = text.decode(self._encoding) # auto converted later\n text += u'\\x00'\n\n SetClipboardData = user32.SetClipboardData\n SetClipboardData.argtypes = [wintypes.UINT, wintypes.HANDLE]\n SetClipboardData.restype = wintypes.HANDLE\n\n GlobalAlloc = kernel32.GlobalAlloc\n GlobalAlloc.argtypes = [wintypes.UINT, ctypes.c_size_t]\n GlobalAlloc.restype = wintypes.HGLOBAL\n\n CF_UNICODETEXT = 13\n\n user32.OpenClipboard(user32.GetActiveWindow())\n user32.EmptyClipboard()\n hCd = GlobalAlloc(0, len(text) * ctypes.sizeof(ctypes.c_wchar))\n msvcrt.wcscpy_s(c_wchar_p(hCd), len(text), c_wchar_p(text))\n SetClipboardData(CF_UNICODETEXT, hCd)\n user32.CloseClipboard()\n\n def get_types(self):\n return ['text/plain']\n", "path": "kivy/core/clipboard/clipboard_winctypes.py"}]} | 1,309 | 205 |
gh_patches_debug_5831 | rasdani/github-patches | git_diff | sherlock-project__sherlock-139 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sites sorting
It may be a good idea to sort the sites in sites.md and data.json alphabetically. When I'm looking for sites to add, I always have to Ctrl+F in this repo or just scroll through the file... Also when seeing the results, it's just chaos.
</issue>
<code>
[start of site_list.py]
1 """Sherlock: Supported Site Listing
2
3 This module generates the listing of supported sites.
4 """
5 import json
6
7 with open("data.json", "r", encoding="utf-8") as data_file:
8 data = json.load(data_file)
9
10 with open("sites.md", "w") as site_file:
11 site_file.write(f'## List Of Supported Sites ({len(data)} Sites In Total!)\n')
12
13 index = 1
14 for social_network in data:
15 url_main = data.get(social_network).get("urlMain")
16 site_file.write(f'{index}. [{social_network}]({url_main})\n')
17 index = index + 1
18
19 print("Finished updating supported site listing!")
20
[end of site_list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/site_list.py b/site_list.py
--- a/site_list.py
+++ b/site_list.py
@@ -7,6 +7,11 @@
with open("data.json", "r", encoding="utf-8") as data_file:
data = json.load(data_file)
+sorted_json_data = json.dumps(data, indent=2, sort_keys=True)
+
+with open("data.json", "w") as data_file:
+ data_file.write(sorted_json_data)
+
with open("sites.md", "w") as site_file:
site_file.write(f'## List Of Supported Sites ({len(data)} Sites In Total!)\n')
| {"golden_diff": "diff --git a/site_list.py b/site_list.py\n--- a/site_list.py\n+++ b/site_list.py\n@@ -7,6 +7,11 @@\n with open(\"data.json\", \"r\", encoding=\"utf-8\") as data_file:\n data = json.load(data_file)\n \n+sorted_json_data = json.dumps(data, indent=2, sort_keys=True)\n+\n+with open(\"data.json\", \"w\") as data_file:\n+ data_file.write(sorted_json_data)\n+\n with open(\"sites.md\", \"w\") as site_file:\n site_file.write(f'## List Of Supported Sites ({len(data)} Sites In Total!)\\n')\n", "issue": "Sites sorting\nIt may be a good idea to sort the sites in sites.md and data.json alphabetically. When I'm looking for sites to add, I always have to Ctrl+F in this repo or just scroll through the file... Also when seeing the results, it's just chaos.\n", "before_files": [{"content": "\"\"\"Sherlock: Supported Site Listing\n\nThis module generates the listing of supported sites.\n\"\"\"\nimport json\n\nwith open(\"data.json\", \"r\", encoding=\"utf-8\") as data_file:\n data = json.load(data_file)\n\nwith open(\"sites.md\", \"w\") as site_file:\n site_file.write(f'## List Of Supported Sites ({len(data)} Sites In Total!)\\n')\n\n index = 1\n for social_network in data:\n url_main = data.get(social_network).get(\"urlMain\")\n site_file.write(f'{index}. [{social_network}]({url_main})\\n')\n index = index + 1\n\nprint(\"Finished updating supported site listing!\")\n", "path": "site_list.py"}]} | 772 | 141 |
gh_patches_debug_5674 | rasdani/github-patches | git_diff | mozilla__bugbug-1214 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Log number of spam/non-spam bugs in SpamBug get_labels
</issue>
<code>
[start of bugbug/models/spambug.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import xgboost
7 from imblearn.under_sampling import RandomUnderSampler
8 from sklearn.compose import ColumnTransformer
9 from sklearn.feature_extraction import DictVectorizer
10 from sklearn.pipeline import Pipeline
11
12 from bugbug import bug_features, bugzilla, feature_cleanup
13 from bugbug.model import BugModel
14
15
16 class SpamBugModel(BugModel):
17 def __init__(self, lemmatization=False):
18 BugModel.__init__(self, lemmatization)
19
20 self.sampler = RandomUnderSampler(random_state=0)
21
22 feature_extractors = [
23 bug_features.has_str(),
24 bug_features.has_regression_range(),
25 bug_features.severity(),
26 bug_features.is_coverity_issue(),
27 bug_features.has_crash_signature(),
28 bug_features.has_url(),
29 bug_features.has_w3c_url(),
30 bug_features.has_github_url(),
31 bug_features.whiteboard(),
32 bug_features.patches(),
33 bug_features.landings(),
34 bug_features.product(),
35 bug_features.component(),
36 bug_features.num_words_title(),
37 bug_features.num_words_comments(),
38 bug_features.keywords(),
39 ]
40
41 cleanup_functions = [
42 feature_cleanup.fileref(),
43 feature_cleanup.url(),
44 feature_cleanup.synonyms(),
45 ]
46
47 self.extraction_pipeline = Pipeline(
48 [
49 (
50 "bug_extractor",
51 bug_features.BugExtractor(
52 feature_extractors, cleanup_functions, rollback=True
53 ),
54 ),
55 (
56 "union",
57 ColumnTransformer(
58 [
59 ("data", DictVectorizer(), "data"),
60 ("title", self.text_vectorizer(), "title"),
61 ("comments", self.text_vectorizer(), "comments"),
62 ]
63 ),
64 ),
65 ]
66 )
67
68 self.clf = xgboost.XGBClassifier(n_jobs=16)
69 self.clf.set_params(predictor="cpu_predictor")
70
71 def get_labels(self):
72 classes = {}
73
74 for bug_data in bugzilla.get_bugs(include_invalid=True):
75 bug_id = bug_data["id"]
76
77 # Legitimate bugs
78 if bug_data["resolution"] == "FIXED":
79 classes[bug_id] = 0
80
81 # Spam bugs
82 elif (
83 bug_data["product"] == "Invalid Bugs"
84 and bug_data["component"] == "General"
85 ):
86 classes[bug_id] = 1
87
88 return classes, [0, 1]
89
90 def items_gen(self, classes):
91 # Overwriting this method to add include_invalid=True to get_bugs to
92 # include spam bugs.
93 return (
94 (bug, classes[bug["id"]])
95 for bug in bugzilla.get_bugs(include_invalid=True)
96 if bug["id"] in classes
97 )
98
99 def get_feature_names(self):
100 return self.extraction_pipeline.named_steps["union"].get_feature_names()
101
102 def overwrite_classes(self, bugs, classes, probabilities):
103 for (i, bug) in enumerate(bugs):
104 if "@mozilla" in bug["creator"]:
105 if probabilities:
106 classes[i] = [1.0, 0.0]
107 else:
108 classes[i] = 0
109
110 return classes
111
[end of bugbug/models/spambug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bugbug/models/spambug.py b/bugbug/models/spambug.py
--- a/bugbug/models/spambug.py
+++ b/bugbug/models/spambug.py
@@ -85,6 +85,17 @@
):
classes[bug_id] = 1
+ print(
+ "{} bugs are classified as non-spam".format(
+ sum(1 for label in classes.values() if label == 0)
+ )
+ )
+ print(
+ "{} bugs are classified as spam".format(
+ sum(1 for label in classes.values() if label == 1)
+ )
+ )
+
return classes, [0, 1]
def items_gen(self, classes):
| {"golden_diff": "diff --git a/bugbug/models/spambug.py b/bugbug/models/spambug.py\n--- a/bugbug/models/spambug.py\n+++ b/bugbug/models/spambug.py\n@@ -85,6 +85,17 @@\n ):\n classes[bug_id] = 1\n \n+ print(\n+ \"{} bugs are classified as non-spam\".format(\n+ sum(1 for label in classes.values() if label == 0)\n+ )\n+ )\n+ print(\n+ \"{} bugs are classified as spam\".format(\n+ sum(1 for label in classes.values() if label == 1)\n+ )\n+ )\n+\n return classes, [0, 1]\n \n def items_gen(self, classes):\n", "issue": "Log number of spam/non-spam bugs in SpamBug get_labels\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features, bugzilla, feature_cleanup\nfrom bugbug.model import BugModel\n\n\nclass SpamBugModel(BugModel):\n def __init__(self, lemmatization=False):\n BugModel.__init__(self, lemmatization)\n\n self.sampler = RandomUnderSampler(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.product(),\n bug_features.component(),\n bug_features.num_words_title(),\n bug_features.num_words_comments(),\n bug_features.keywords(),\n ]\n\n cleanup_functions = [\n feature_cleanup.fileref(),\n feature_cleanup.url(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"bug_extractor\",\n bug_features.BugExtractor(\n feature_extractors, cleanup_functions, rollback=True\n ),\n ),\n (\n \"union\",\n ColumnTransformer(\n [\n (\"data\", DictVectorizer(), \"data\"),\n (\"title\", self.text_vectorizer(), \"title\"),\n (\"comments\", self.text_vectorizer(), \"comments\"),\n ]\n ),\n ),\n ]\n )\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor=\"cpu_predictor\")\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs(include_invalid=True):\n bug_id = bug_data[\"id\"]\n\n # Legitimate bugs\n if bug_data[\"resolution\"] == \"FIXED\":\n classes[bug_id] = 0\n\n # Spam bugs\n elif (\n bug_data[\"product\"] == \"Invalid Bugs\"\n and bug_data[\"component\"] == \"General\"\n ):\n classes[bug_id] = 1\n\n return classes, [0, 1]\n\n def items_gen(self, classes):\n # Overwriting this method to add include_invalid=True to get_bugs to\n # include spam bugs.\n return (\n (bug, classes[bug[\"id\"]])\n for bug in bugzilla.get_bugs(include_invalid=True)\n if bug[\"id\"] in classes\n )\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names()\n\n def overwrite_classes(self, bugs, classes, probabilities):\n for (i, bug) in enumerate(bugs):\n if \"@mozilla\" in bug[\"creator\"]:\n if probabilities:\n classes[i] = [1.0, 0.0]\n else:\n classes[i] = 0\n\n return classes\n", "path": "bugbug/models/spambug.py"}]} | 1,497 | 168 |
gh_patches_debug_16905 | rasdani/github-patches | git_diff | deeppavlov__DeepPavlov-101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Downloading requirements
I was trying to install deeppavlov and had a problem following the installation steps.
1) There is no download.py file in root folder, it is in `deeppavlov/download.py`
``` sh
python download.py [-all]
```
2) Even if I use that file it outputs the error:
``` sh
(env) root@mysexyhost:~/work/ipavlov/DeepPavlov# python3 deeppavlov/download.py
/home/ubuntu/work/ipavlov/env/local/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-03-12 07:34:11.490 ERROR in 'deeppavlov.core.models.serializable'['log'] at line 54: LOGGER ERROR: Can not initialise deeppavlov.core.models.serializable logger, logging to the stderr. Error traceback:
Traceback (most recent call last):
File "/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/common/log.py", line 32, in get_logger
with open(log_config_path) as log_config_json:
TypeError: invalid file: PosixPath('/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/log_config.json')
2018-03-12 07:34:11.491 ERROR in 'deeppavlov.core.models.keras_model'['log'] at line 54: LOGGER ERROR: Can not initialise deeppavlov.core.models.keras_model logger, logging to the stderr. Error traceback:
Traceback (most recent call last):
File "/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/common/log.py", line 32, in get_logger
with open(log_config_path) as log_config_json:
TypeError: invalid file: PosixPath('/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/log_config.json')
Traceback (most recent call last):
File "deeppavlov/download.py", line 24, in <module>
from deeppavlov.core.data.utils import download, download_decompress
File "/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/__init__.py", line 1, in <module>
import deeppavlov.core.models.keras_model
File "/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/models/keras_model.py", line 39, in <module>
class KerasModel(NNModel, metaclass=TfModelMeta):
File "/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/models/keras_model.py", line 143, in KerasModel
sample_weight_mode=None, weighted_metrics=None, target_tensors=None):
File "/home/ubuntu/work/ipavlov/env/local/lib/python3.5/site-packages/overrides/overrides.py", line 70, in overrides
method.__name__)
AssertionError: No super class method found for "load"
```
</issue>
<code>
[start of telegram_utils/telegram_ui.py]
1 """
2 Copyright 2017 Neural Networks and Deep Learning lab, MIPT
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 """
16 import telebot
17
18 from deeppavlov.core.common.file import read_json
19 from deeppavlov.core.commands.infer import build_model_from_config
20
21
22 def init_bot_for_model(token, model):
23 bot = telebot.TeleBot(token)
24
25 model_name = type(model).__name__
26 models_info = read_json('../telegram_utils/models_info.json')
27 model_info = models_info[model_name] if model_name in models_info else models_info['@default']
28
29 @bot.message_handler(commands=['start'])
30 def send_start_message(message):
31 chat_id = message.chat.id
32 out_message = model_info['start_message']
33 if hasattr(model, 'reset'):
34 model.reset()
35 bot.send_message(chat_id, out_message)
36
37 @bot.message_handler(commands=['help'])
38 def send_help_message(message):
39 chat_id = message.chat.id
40 out_message = model_info['help_message']
41 bot.send_message(chat_id, out_message)
42
43 @bot.message_handler()
44 def handle_inference(message):
45 chat_id = message.chat.id
46 context = message.text
47
48 pred = model([context])
49 reply_message = str(pred[0])
50 bot.send_message(chat_id, reply_message)
51
52 bot.polling()
53
54
55 def interact_model_by_telegram(config_path, token):
56 config = read_json(config_path)
57 model = build_model_from_config(config)
58 init_bot_for_model(token, model)
59
[end of telegram_utils/telegram_ui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram_utils/telegram_ui.py b/telegram_utils/telegram_ui.py
--- a/telegram_utils/telegram_ui.py
+++ b/telegram_utils/telegram_ui.py
@@ -13,6 +13,8 @@
See the License for the specific language governing permissions and
limitations under the License.
"""
+from pathlib import Path
+
import telebot
from deeppavlov.core.common.file import read_json
@@ -23,7 +25,8 @@
bot = telebot.TeleBot(token)
model_name = type(model).__name__
- models_info = read_json('../telegram_utils/models_info.json')
+ config_path = Path(__file__).parent / 'models_info.json'
+ models_info = read_json(str(config_path))
model_info = models_info[model_name] if model_name in models_info else models_info['@default']
@bot.message_handler(commands=['start'])
| {"golden_diff": "diff --git a/telegram_utils/telegram_ui.py b/telegram_utils/telegram_ui.py\n--- a/telegram_utils/telegram_ui.py\n+++ b/telegram_utils/telegram_ui.py\n@@ -13,6 +13,8 @@\n See the License for the specific language governing permissions and\n limitations under the License.\n \"\"\"\n+from pathlib import Path\n+\n import telebot\n \n from deeppavlov.core.common.file import read_json\n@@ -23,7 +25,8 @@\n bot = telebot.TeleBot(token)\n \n model_name = type(model).__name__\n- models_info = read_json('../telegram_utils/models_info.json')\n+ config_path = Path(__file__).parent / 'models_info.json'\n+ models_info = read_json(str(config_path))\n model_info = models_info[model_name] if model_name in models_info else models_info['@default']\n \n @bot.message_handler(commands=['start'])\n", "issue": "Downloading requirements\nI was trying to install deeppavlov and had a problem following the installation steps.\r\n\r\n1) There is no download.py file in root folder, it is in `deeppavlov/download.py`\r\n``` sh\r\npython download.py [-all] \r\n```\r\n\r\n2) Even if I use that file it outputs the error:\r\n``` sh\r\n(env) root@mysexyhost:~/work/ipavlov/DeepPavlov# python3 deeppavlov/download.py\r\n/home/ubuntu/work/ipavlov/env/local/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\nUsing TensorFlow backend.\r\n2018-03-12 07:34:11.490 ERROR in 'deeppavlov.core.models.serializable'['log'] at line 54: LOGGER ERROR: Can not initialise deeppavlov.core.models.serializable logger, logging to the stderr. Error traceback:\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/common/log.py\", line 32, in get_logger\r\n with open(log_config_path) as log_config_json:\r\nTypeError: invalid file: PosixPath('/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/log_config.json')\r\n2018-03-12 07:34:11.491 ERROR in 'deeppavlov.core.models.keras_model'['log'] at line 54: LOGGER ERROR: Can not initialise deeppavlov.core.models.keras_model logger, logging to the stderr. Error traceback:\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/common/log.py\", line 32, in get_logger\r\n with open(log_config_path) as log_config_json:\r\nTypeError: invalid file: PosixPath('/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/log_config.json')\r\nTraceback (most recent call last):\r\n File \"deeppavlov/download.py\", line 24, in <module>\r\n from deeppavlov.core.data.utils import download, download_decompress\r\n File \"/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/__init__.py\", line 1, in <module>\r\n import deeppavlov.core.models.keras_model\r\n File \"/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/models/keras_model.py\", line 39, in <module>\r\n class KerasModel(NNModel, metaclass=TfModelMeta):\r\n File \"/home/ubuntu/work/ipavlov/DeepPavlov/deeppavlov/core/models/keras_model.py\", line 143, in KerasModel\r\n sample_weight_mode=None, weighted_metrics=None, target_tensors=None):\r\n File \"/home/ubuntu/work/ipavlov/env/local/lib/python3.5/site-packages/overrides/overrides.py\", line 70, in overrides\r\n method.__name__)\r\nAssertionError: No super class method found for \"load\"\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright 2017 Neural Networks and Deep Learning lab, MIPT\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\"\"\"\nimport telebot\n\nfrom deeppavlov.core.common.file import read_json\nfrom deeppavlov.core.commands.infer import build_model_from_config\n\n\ndef init_bot_for_model(token, model):\n bot = telebot.TeleBot(token)\n\n model_name = type(model).__name__\n models_info = read_json('../telegram_utils/models_info.json')\n model_info = models_info[model_name] if model_name in models_info else models_info['@default']\n\n @bot.message_handler(commands=['start'])\n def send_start_message(message):\n chat_id = message.chat.id\n out_message = model_info['start_message']\n if hasattr(model, 'reset'):\n model.reset()\n bot.send_message(chat_id, out_message)\n\n @bot.message_handler(commands=['help'])\n def send_help_message(message):\n chat_id = message.chat.id\n out_message = model_info['help_message']\n bot.send_message(chat_id, out_message)\n\n @bot.message_handler()\n def handle_inference(message):\n chat_id = message.chat.id\n context = message.text\n\n pred = model([context])\n reply_message = str(pred[0])\n bot.send_message(chat_id, reply_message)\n\n bot.polling()\n\n\ndef interact_model_by_telegram(config_path, token):\n config = read_json(config_path)\n model = build_model_from_config(config)\n init_bot_for_model(token, model)\n", "path": "telegram_utils/telegram_ui.py"}]} | 1,807 | 197 |
gh_patches_debug_11304 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1450 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add example code for overlay segment configuration for workstation
</issue>
<code>
[start of app/grandchallenge/workstation_configs/forms.py]
1 from django.forms import ModelForm
2
3 from grandchallenge.core.forms import SaveFormInitMixin
4 from grandchallenge.core.widgets import JSONEditorWidget
5 from grandchallenge.workstation_configs.models import (
6 OVERLAY_SEGMENTS_SCHEMA,
7 WorkstationConfig,
8 )
9
10
11 class WorkstationConfigForm(SaveFormInitMixin, ModelForm):
12 class Meta:
13 model = WorkstationConfig
14 fields = (
15 "title",
16 "description",
17 "window_presets",
18 "default_window_preset",
19 "default_slab_thickness_mm",
20 "default_slab_render_method",
21 "default_orientation",
22 "default_overlay_alpha",
23 "default_overlay_lut",
24 "default_overlay_interpolation",
25 "overlay_segments",
26 "default_zoom_scale",
27 "show_image_info_plugin",
28 "show_display_plugin",
29 "show_invert_tool",
30 "show_flip_tool",
31 "show_window_level_tool",
32 "show_reset_tool",
33 )
34 widgets = {
35 "overlay_segments": JSONEditorWidget(
36 schema=OVERLAY_SEGMENTS_SCHEMA
37 ),
38 }
39
[end of app/grandchallenge/workstation_configs/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/grandchallenge/workstation_configs/forms.py b/app/grandchallenge/workstation_configs/forms.py
--- a/app/grandchallenge/workstation_configs/forms.py
+++ b/app/grandchallenge/workstation_configs/forms.py
@@ -36,3 +36,14 @@
schema=OVERLAY_SEGMENTS_SCHEMA
),
}
+ help_texts = {
+ "overlay_segments": (
+ "If an categorical overlay is shown, it is possible to show toggles "
+ "to change the visibility of the different overlay categories. To do "
+ "so, configure the categories that should be displayed. Data from the"
+ " algorithm's output.json can be added as an extra label to each "
+ "toggle using jinja templating. "
+ 'For example: [{ "voxel_value": 0, "name": "Level 0", "visible": '
+ 'false, "metric_template": "{{metrics.volumes[0]}} mm³"},]'
+ ),
+ }
| {"golden_diff": "diff --git a/app/grandchallenge/workstation_configs/forms.py b/app/grandchallenge/workstation_configs/forms.py\n--- a/app/grandchallenge/workstation_configs/forms.py\n+++ b/app/grandchallenge/workstation_configs/forms.py\n@@ -36,3 +36,14 @@\n schema=OVERLAY_SEGMENTS_SCHEMA\n ),\n }\n+ help_texts = {\n+ \"overlay_segments\": (\n+ \"If an categorical overlay is shown, it is possible to show toggles \"\n+ \"to change the visibility of the different overlay categories. To do \"\n+ \"so, configure the categories that should be displayed. Data from the\"\n+ \" algorithm's output.json can be added as an extra label to each \"\n+ \"toggle using jinja templating. \"\n+ 'For example: [{ \"voxel_value\": 0, \"name\": \"Level 0\", \"visible\": '\n+ 'false, \"metric_template\": \"{{metrics.volumes[0]}} mm\u00b3\"},]'\n+ ),\n+ }\n", "issue": "Add example code for overlay segment configuration for workstation\n\n", "before_files": [{"content": "from django.forms import ModelForm\n\nfrom grandchallenge.core.forms import SaveFormInitMixin\nfrom grandchallenge.core.widgets import JSONEditorWidget\nfrom grandchallenge.workstation_configs.models import (\n OVERLAY_SEGMENTS_SCHEMA,\n WorkstationConfig,\n)\n\n\nclass WorkstationConfigForm(SaveFormInitMixin, ModelForm):\n class Meta:\n model = WorkstationConfig\n fields = (\n \"title\",\n \"description\",\n \"window_presets\",\n \"default_window_preset\",\n \"default_slab_thickness_mm\",\n \"default_slab_render_method\",\n \"default_orientation\",\n \"default_overlay_alpha\",\n \"default_overlay_lut\",\n \"default_overlay_interpolation\",\n \"overlay_segments\",\n \"default_zoom_scale\",\n \"show_image_info_plugin\",\n \"show_display_plugin\",\n \"show_invert_tool\",\n \"show_flip_tool\",\n \"show_window_level_tool\",\n \"show_reset_tool\",\n )\n widgets = {\n \"overlay_segments\": JSONEditorWidget(\n schema=OVERLAY_SEGMENTS_SCHEMA\n ),\n }\n", "path": "app/grandchallenge/workstation_configs/forms.py"}]} | 845 | 221 |
gh_patches_debug_27161 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-18228 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for nzz.ch
rudolffischer@BueroPC-RF:~$ youtube-dl "http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209" -v
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209', '-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.12.06.1
[debug] Python version 2.7.6 - Linux-3.13.0-39-generic-x86_64-with-Ubuntu-14.04-trusty
[debug] exe versions: rtmpdump 2.4
[debug] Proxy map: {}
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Downloading webpage
[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Extracting information
ERROR: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 651, in _real_extract
doc = parse_xml(webpage)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 1425, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 2, column 42
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 553, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 241, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1044, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
rudolffischer@BueroPC-RF:~$
</issue>
<code>
[start of youtube_dl/extractor/nzz.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .common import InfoExtractor
7 from ..utils import (
8 extract_attributes,
9 )
10
11
12 class NZZIE(InfoExtractor):
13 _VALID_URL = r'https?://(?:www\.)?nzz\.ch/(?:[^/]+/)*[^/?#]+-ld\.(?P<id>\d+)'
14 _TEST = {
15 'url': 'http://www.nzz.ch/zuerich/gymizyte/gymizyte-schreiben-schueler-heute-noch-diktate-ld.9153',
16 'info_dict': {
17 'id': '9153',
18 },
19 'playlist_mincount': 6,
20 }
21
22 def _real_extract(self, url):
23 page_id = self._match_id(url)
24 webpage = self._download_webpage(url, page_id)
25
26 entries = []
27 for player_element in re.findall(r'(<[^>]+class="kalturaPlayer"[^>]*>)', webpage):
28 player_params = extract_attributes(player_element)
29 if player_params.get('data-type') not in ('kaltura_singleArticle',):
30 self.report_warning('Unsupported player type')
31 continue
32 entry_id = player_params['data-id']
33 entries.append(self.url_result(
34 'kaltura:1750922:' + entry_id, 'Kaltura', entry_id))
35
36 return self.playlist_result(entries, page_id)
37
[end of youtube_dl/extractor/nzz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/youtube_dl/extractor/nzz.py b/youtube_dl/extractor/nzz.py
--- a/youtube_dl/extractor/nzz.py
+++ b/youtube_dl/extractor/nzz.py
@@ -11,20 +11,27 @@
class NZZIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?nzz\.ch/(?:[^/]+/)*[^/?#]+-ld\.(?P<id>\d+)'
- _TEST = {
+ _TESTS = [{
'url': 'http://www.nzz.ch/zuerich/gymizyte/gymizyte-schreiben-schueler-heute-noch-diktate-ld.9153',
'info_dict': {
'id': '9153',
},
'playlist_mincount': 6,
- }
+ }, {
+ 'url': 'https://www.nzz.ch/video/nzz-standpunkte/cvp-auf-der-suche-nach-dem-mass-der-mitte-ld.1368112',
+ 'info_dict': {
+ 'id': '1368112',
+ },
+ 'playlist_count': 1,
+ }]
def _real_extract(self, url):
page_id = self._match_id(url)
webpage = self._download_webpage(url, page_id)
entries = []
- for player_element in re.findall(r'(<[^>]+class="kalturaPlayer"[^>]*>)', webpage):
+ for player_element in re.findall(
+ r'(<[^>]+class="kalturaPlayer[^"]*"[^>]*>)', webpage):
player_params = extract_attributes(player_element)
if player_params.get('data-type') not in ('kaltura_singleArticle',):
self.report_warning('Unsupported player type')
| {"golden_diff": "diff --git a/youtube_dl/extractor/nzz.py b/youtube_dl/extractor/nzz.py\n--- a/youtube_dl/extractor/nzz.py\n+++ b/youtube_dl/extractor/nzz.py\n@@ -11,20 +11,27 @@\n \n class NZZIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?nzz\\.ch/(?:[^/]+/)*[^/?#]+-ld\\.(?P<id>\\d+)'\n- _TEST = {\n+ _TESTS = [{\n 'url': 'http://www.nzz.ch/zuerich/gymizyte/gymizyte-schreiben-schueler-heute-noch-diktate-ld.9153',\n 'info_dict': {\n 'id': '9153',\n },\n 'playlist_mincount': 6,\n- }\n+ }, {\n+ 'url': 'https://www.nzz.ch/video/nzz-standpunkte/cvp-auf-der-suche-nach-dem-mass-der-mitte-ld.1368112',\n+ 'info_dict': {\n+ 'id': '1368112',\n+ },\n+ 'playlist_count': 1,\n+ }]\n \n def _real_extract(self, url):\n page_id = self._match_id(url)\n webpage = self._download_webpage(url, page_id)\n \n entries = []\n- for player_element in re.findall(r'(<[^>]+class=\"kalturaPlayer\"[^>]*>)', webpage):\n+ for player_element in re.findall(\n+ r'(<[^>]+class=\"kalturaPlayer[^\"]*\"[^>]*>)', webpage):\n player_params = extract_attributes(player_element)\n if player_params.get('data-type') not in ('kaltura_singleArticle',):\n self.report_warning('Unsupported player type')\n", "issue": "Add support for nzz.ch\nrudolffischer@BueroPC-RF:~$ youtube-dl \"http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209\" -v\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: ['http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209', '-v']\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\n[debug] youtube-dl version 2014.12.06.1\n[debug] Python version 2.7.6 - Linux-3.13.0-39-generic-x86_64-with-Ubuntu-14.04-trusty\n[debug] exe versions: rtmpdump 2.4\n[debug] Proxy map: {}\n[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Requesting header\nWARNING: Falling back on generic information extractor.\n[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Downloading webpage\n[generic] 30-jahre-herzschmerz-aus-saas-fee-1: Extracting information\nERROR: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py\", line 651, in _real_extract\n doc = parse_xml(webpage)\n File \"/usr/local/bin/youtube-dl/youtube_dl/utils.py\", line 1425, in parse_xml\n tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)\n File \"/usr/lib/python2.7/xml/etree/ElementTree.py\", line 1300, in XML\n parser.feed(text)\n File \"/usr/lib/python2.7/xml/etree/ElementTree.py\", line 1642, in feed\n self._raiseerror(v)\n File \"/usr/lib/python2.7/xml/etree/ElementTree.py\", line 1506, in _raiseerror\n raise err\nParseError: not well-formed (invalid token): line 2, column 42\nTraceback (most recent call last):\n File \"/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 553, in extract_info\n ie_result = ie.extract(url)\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 241, in extract\n return self._real_extract(url)\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py\", line 1044, in _real_extract\n raise ExtractorError('Unsupported URL: %s' % url)\nExtractorError: Unsupported URL: http://www.nzz.ch/panorama/30-jahre-herzschmerz-aus-saas-fee-1.18438209; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n\nrudolffischer@BueroPC-RF:~$ \n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n extract_attributes,\n)\n\n\nclass NZZIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?nzz\\.ch/(?:[^/]+/)*[^/?#]+-ld\\.(?P<id>\\d+)'\n _TEST = {\n 'url': 'http://www.nzz.ch/zuerich/gymizyte/gymizyte-schreiben-schueler-heute-noch-diktate-ld.9153',\n 'info_dict': {\n 'id': '9153',\n },\n 'playlist_mincount': 6,\n }\n\n def _real_extract(self, url):\n page_id = self._match_id(url)\n webpage = self._download_webpage(url, page_id)\n\n entries = []\n for player_element in re.findall(r'(<[^>]+class=\"kalturaPlayer\"[^>]*>)', webpage):\n player_params = extract_attributes(player_element)\n if player_params.get('data-type') not in ('kaltura_singleArticle',):\n self.report_warning('Unsupported player type')\n continue\n entry_id = player_params['data-id']\n entries.append(self.url_result(\n 'kaltura:1750922:' + entry_id, 'Kaltura', entry_id))\n\n return self.playlist_result(entries, page_id)\n", "path": "youtube_dl/extractor/nzz.py"}]} | 1,817 | 421 |
gh_patches_debug_25079 | rasdani/github-patches | git_diff | Kinto__kinto-630 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enabling the flush endpoint through env vars does not seem to work
I'm running Kinto + postgres with docker-compose (using the example docker-compose.yml in the repo).
Adding `KINTO_FLUSH_ENDPOINT_ENABLED` to the environment section in docker-compose.yml does not enable the flush endpoint for me. I instead had to add `kinto.flush_endpoint_enabled = true` to a custom ini file, that worked.
Can the flush endpoint be enabled through an env var like this?
</issue>
<code>
[start of kinto/__init__.py]
1 import pkg_resources
2 import logging
3
4 import kinto.core
5 from pyramid.config import Configurator
6 from pyramid.settings import asbool
7 from pyramid.security import Authenticated
8
9 from kinto.authorization import RouteFactory
10
11 # Module version, as defined in PEP-0396.
12 __version__ = pkg_resources.get_distribution(__package__).version
13
14 # Implemented HTTP API Version
15 HTTP_API_VERSION = '1.5'
16
17 # Main kinto logger
18 logger = logging.getLogger(__name__)
19
20
21 DEFAULT_SETTINGS = {
22 'retry_after_seconds': 3,
23 'cache_backend': 'kinto.core.cache.memory',
24 'permission_backend': 'kinto.core.permission.memory',
25 'storage_backend': 'kinto.core.storage.memory',
26 'project_docs': 'https://kinto.readthedocs.io/',
27 'bucket_create_principals': Authenticated,
28 'multiauth.authorization_policy': (
29 'kinto.authorization.AuthorizationPolicy'),
30 'experimental_collection_schema_validation': 'False',
31 'http_api_version': HTTP_API_VERSION
32 }
33
34
35 def main(global_config, config=None, **settings):
36 if not config:
37 config = Configurator(settings=settings, root_factory=RouteFactory)
38
39 # Force project name, since it determines settings prefix.
40 config.add_settings({'kinto.project_name': 'kinto'})
41
42 kinto.core.initialize(config,
43 version=__version__,
44 default_settings=DEFAULT_SETTINGS)
45
46 settings = config.get_settings()
47
48 # Expose capability
49 schema_enabled = asbool(
50 settings['experimental_collection_schema_validation']
51 )
52 if schema_enabled:
53 config.add_api_capability(
54 "schema",
55 description="Validates collection records with JSON schemas.",
56 url="http://kinto.readthedocs.io/en/latest/api/1.x/"
57 "collections.html#collection-json-schema")
58
59 # Scan Kinto views.
60 kwargs = {}
61 flush_enabled = asbool(settings.get('flush_endpoint_enabled'))
62
63 if flush_enabled:
64 config.add_api_capability(
65 "flush_endpoint",
66 description="The __flush__ endpoint can be used to remove all "
67 "data from all backends.",
68 url="http://kinto.readthedocs.io/en/latest/configuration/"
69 "settings.html#activating-the-flush-endpoint"
70 )
71 else:
72 kwargs['ignore'] = 'kinto.views.flush'
73 config.scan("kinto.views", **kwargs)
74
75 app = config.make_wsgi_app()
76
77 # Install middleware (idempotent if disabled)
78 return kinto.core.install_middlewares(app, settings)
79
[end of kinto/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/__init__.py b/kinto/__init__.py
--- a/kinto/__init__.py
+++ b/kinto/__init__.py
@@ -19,6 +19,7 @@
DEFAULT_SETTINGS = {
+ 'flush_endpoint_enabled': False,
'retry_after_seconds': 3,
'cache_backend': 'kinto.core.cache.memory',
'permission_backend': 'kinto.core.permission.memory',
@@ -58,18 +59,18 @@
# Scan Kinto views.
kwargs = {}
- flush_enabled = asbool(settings.get('flush_endpoint_enabled'))
+ flush_enabled = asbool(settings.get('flush_endpoint_enabled'))
if flush_enabled:
config.add_api_capability(
"flush_endpoint",
description="The __flush__ endpoint can be used to remove all "
"data from all backends.",
url="http://kinto.readthedocs.io/en/latest/configuration/"
- "settings.html#activating-the-flush-endpoint"
- )
+ "settings.html#activating-the-flush-endpoint")
else:
kwargs['ignore'] = 'kinto.views.flush'
+
config.scan("kinto.views", **kwargs)
app = config.make_wsgi_app()
| {"golden_diff": "diff --git a/kinto/__init__.py b/kinto/__init__.py\n--- a/kinto/__init__.py\n+++ b/kinto/__init__.py\n@@ -19,6 +19,7 @@\n \n \n DEFAULT_SETTINGS = {\n+ 'flush_endpoint_enabled': False,\n 'retry_after_seconds': 3,\n 'cache_backend': 'kinto.core.cache.memory',\n 'permission_backend': 'kinto.core.permission.memory',\n@@ -58,18 +59,18 @@\n \n # Scan Kinto views.\n kwargs = {}\n- flush_enabled = asbool(settings.get('flush_endpoint_enabled'))\n \n+ flush_enabled = asbool(settings.get('flush_endpoint_enabled'))\n if flush_enabled:\n config.add_api_capability(\n \"flush_endpoint\",\n description=\"The __flush__ endpoint can be used to remove all \"\n \"data from all backends.\",\n url=\"http://kinto.readthedocs.io/en/latest/configuration/\"\n- \"settings.html#activating-the-flush-endpoint\"\n- )\n+ \"settings.html#activating-the-flush-endpoint\")\n else:\n kwargs['ignore'] = 'kinto.views.flush'\n+\n config.scan(\"kinto.views\", **kwargs)\n \n app = config.make_wsgi_app()\n", "issue": "Enabling the flush endpoint through env vars does not seem to work\nI'm running Kinto + postgres with docker-compose (using the example docker-compose.yml in the repo). \n\nAdding `KINTO_FLUSH_ENDPOINT_ENABLED` to the environment section in docker-compose.yml does not enable the flush endpoint for me. I instead had to add `kinto.flush_endpoint_enabled = true` to a custom ini file, that worked.\n\nCan the flush endpoint be enabled through an env var like this?\n\n", "before_files": [{"content": "import pkg_resources\nimport logging\n\nimport kinto.core\nfrom pyramid.config import Configurator\nfrom pyramid.settings import asbool\nfrom pyramid.security import Authenticated\n\nfrom kinto.authorization import RouteFactory\n\n# Module version, as defined in PEP-0396.\n__version__ = pkg_resources.get_distribution(__package__).version\n\n# Implemented HTTP API Version\nHTTP_API_VERSION = '1.5'\n\n# Main kinto logger\nlogger = logging.getLogger(__name__)\n\n\nDEFAULT_SETTINGS = {\n 'retry_after_seconds': 3,\n 'cache_backend': 'kinto.core.cache.memory',\n 'permission_backend': 'kinto.core.permission.memory',\n 'storage_backend': 'kinto.core.storage.memory',\n 'project_docs': 'https://kinto.readthedocs.io/',\n 'bucket_create_principals': Authenticated,\n 'multiauth.authorization_policy': (\n 'kinto.authorization.AuthorizationPolicy'),\n 'experimental_collection_schema_validation': 'False',\n 'http_api_version': HTTP_API_VERSION\n}\n\n\ndef main(global_config, config=None, **settings):\n if not config:\n config = Configurator(settings=settings, root_factory=RouteFactory)\n\n # Force project name, since it determines settings prefix.\n config.add_settings({'kinto.project_name': 'kinto'})\n\n kinto.core.initialize(config,\n version=__version__,\n default_settings=DEFAULT_SETTINGS)\n\n settings = config.get_settings()\n\n # Expose capability\n schema_enabled = asbool(\n settings['experimental_collection_schema_validation']\n )\n if schema_enabled:\n config.add_api_capability(\n \"schema\",\n description=\"Validates collection records with JSON schemas.\",\n url=\"http://kinto.readthedocs.io/en/latest/api/1.x/\"\n \"collections.html#collection-json-schema\")\n\n # Scan Kinto views.\n kwargs = {}\n flush_enabled = asbool(settings.get('flush_endpoint_enabled'))\n\n if flush_enabled:\n config.add_api_capability(\n \"flush_endpoint\",\n description=\"The __flush__ endpoint can be used to remove all \"\n \"data from all backends.\",\n url=\"http://kinto.readthedocs.io/en/latest/configuration/\"\n \"settings.html#activating-the-flush-endpoint\"\n )\n else:\n kwargs['ignore'] = 'kinto.views.flush'\n config.scan(\"kinto.views\", **kwargs)\n\n app = config.make_wsgi_app()\n\n # Install middleware (idempotent if disabled)\n return kinto.core.install_middlewares(app, settings)\n", "path": "kinto/__init__.py"}]} | 1,321 | 273 |
gh_patches_debug_63214 | rasdani/github-patches | git_diff | ManimCommunity__manim-3108 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The documentation for the `--resolution` flag in the cli is wrong
The current documentation of the `--resolution` flag says the format is `(W,H)` which is confusing because the passed value needs to be of the form `"W,H"`so the documentation should be updatet accordingly such that it reflects the usage `-r "W,H"` best with an example of `-r "1920,1080"`
</issue>
<code>
[start of manim/cli/render/render_options.py]
1 from __future__ import annotations
2
3 import re
4
5 import click
6 from cloup import option, option_group
7
8 from manim.constants import QUALITIES, RendererType
9
10 from ... import logger
11
12
13 def validate_scene_range(ctx, param, value):
14 try:
15 start = int(value)
16 return (start,)
17 except Exception:
18 pass
19
20 if value:
21 try:
22 start, end = map(int, re.split(r"[;,\-]", value))
23 return start, end
24 except Exception:
25 logger.error("Couldn't determine a range for -n option.")
26 exit()
27
28
29 def validate_resolution(ctx, param, value):
30 if value:
31 try:
32 start, end = map(int, re.split(r"[;,\-]", value))
33 return (start, end)
34 except Exception:
35 logger.error("Resolution option is invalid.")
36 exit()
37
38
39 render_options = option_group(
40 "Render Options",
41 option(
42 "-n",
43 "--from_animation_number",
44 callback=validate_scene_range,
45 help="Start rendering from n_0 until n_1. If n_1 is left unspecified, "
46 "renders all scenes after n_0.",
47 default=None,
48 ),
49 option(
50 "-a",
51 "--write_all",
52 is_flag=True,
53 help="Render all scenes in the input file.",
54 default=None,
55 ),
56 option(
57 "--format",
58 type=click.Choice(["png", "gif", "mp4", "webm", "mov"], case_sensitive=False),
59 default=None,
60 ),
61 option("-s", "--save_last_frame", is_flag=True, default=None),
62 option(
63 "-q",
64 "--quality",
65 default=None,
66 type=click.Choice(
67 list(reversed([q["flag"] for q in QUALITIES.values() if q["flag"]])), # type: ignore
68 case_sensitive=False,
69 ),
70 help="Render quality at the follow resolution framerates, respectively: "
71 + ", ".join(
72 reversed(
73 [
74 f'{q["pixel_width"]}x{q["pixel_height"]} {q["frame_rate"]}FPS'
75 for q in QUALITIES.values()
76 if q["flag"]
77 ]
78 )
79 ),
80 ),
81 option(
82 "-r",
83 "--resolution",
84 callback=validate_resolution,
85 default=None,
86 help="Resolution in (W,H) for when 16:9 aspect ratio isn't possible.",
87 ),
88 option(
89 "--fps",
90 "--frame_rate",
91 "frame_rate",
92 type=float,
93 default=None,
94 help="Render at this frame rate.",
95 ),
96 option(
97 "--renderer",
98 type=click.Choice(
99 [renderer_type.value for renderer_type in RendererType],
100 case_sensitive=False,
101 ),
102 help="Select a renderer for your Scene.",
103 default="cairo",
104 ),
105 option(
106 "-g",
107 "--save_pngs",
108 is_flag=True,
109 default=None,
110 help="Save each frame as png (Deprecated).",
111 ),
112 option(
113 "-i",
114 "--save_as_gif",
115 default=None,
116 is_flag=True,
117 help="Save as a gif (Deprecated).",
118 ),
119 option(
120 "--save_sections",
121 default=None,
122 is_flag=True,
123 help="Save section videos in addition to movie file.",
124 ),
125 option(
126 "-s",
127 "--save_last_frame",
128 default=None,
129 is_flag=True,
130 help="Save last frame as png (Deprecated).",
131 ),
132 option(
133 "-t",
134 "--transparent",
135 is_flag=True,
136 help="Render scenes with alpha channel.",
137 ),
138 option(
139 "--use_projection_fill_shaders",
140 is_flag=True,
141 help="Use shaders for OpenGLVMobject fill which are compatible with transformation matrices.",
142 default=None,
143 ),
144 option(
145 "--use_projection_stroke_shaders",
146 is_flag=True,
147 help="Use shaders for OpenGLVMobject stroke which are compatible with transformation matrices.",
148 default=None,
149 ),
150 )
151
[end of manim/cli/render/render_options.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/manim/cli/render/render_options.py b/manim/cli/render/render_options.py
--- a/manim/cli/render/render_options.py
+++ b/manim/cli/render/render_options.py
@@ -83,7 +83,7 @@
"--resolution",
callback=validate_resolution,
default=None,
- help="Resolution in (W,H) for when 16:9 aspect ratio isn't possible.",
+ help='Resolution in "W,H" for when 16:9 aspect ratio isn\'t possible.',
),
option(
"--fps",
| {"golden_diff": "diff --git a/manim/cli/render/render_options.py b/manim/cli/render/render_options.py\n--- a/manim/cli/render/render_options.py\n+++ b/manim/cli/render/render_options.py\n@@ -83,7 +83,7 @@\n \"--resolution\",\n callback=validate_resolution,\n default=None,\n- help=\"Resolution in (W,H) for when 16:9 aspect ratio isn't possible.\",\n+ help='Resolution in \"W,H\" for when 16:9 aspect ratio isn\\'t possible.',\n ),\n option(\n \"--fps\",\n", "issue": "The documentation for the `--resolution` flag in the cli is wrong\nThe current documentation of the `--resolution` flag says the format is `(W,H)` which is confusing because the passed value needs to be of the form `\"W,H\"`so the documentation should be updatet accordingly such that it reflects the usage `-r \"W,H\"` best with an example of `-r \"1920,1080\"`\n", "before_files": [{"content": "from __future__ import annotations\n\nimport re\n\nimport click\nfrom cloup import option, option_group\n\nfrom manim.constants import QUALITIES, RendererType\n\nfrom ... import logger\n\n\ndef validate_scene_range(ctx, param, value):\n try:\n start = int(value)\n return (start,)\n except Exception:\n pass\n\n if value:\n try:\n start, end = map(int, re.split(r\"[;,\\-]\", value))\n return start, end\n except Exception:\n logger.error(\"Couldn't determine a range for -n option.\")\n exit()\n\n\ndef validate_resolution(ctx, param, value):\n if value:\n try:\n start, end = map(int, re.split(r\"[;,\\-]\", value))\n return (start, end)\n except Exception:\n logger.error(\"Resolution option is invalid.\")\n exit()\n\n\nrender_options = option_group(\n \"Render Options\",\n option(\n \"-n\",\n \"--from_animation_number\",\n callback=validate_scene_range,\n help=\"Start rendering from n_0 until n_1. If n_1 is left unspecified, \"\n \"renders all scenes after n_0.\",\n default=None,\n ),\n option(\n \"-a\",\n \"--write_all\",\n is_flag=True,\n help=\"Render all scenes in the input file.\",\n default=None,\n ),\n option(\n \"--format\",\n type=click.Choice([\"png\", \"gif\", \"mp4\", \"webm\", \"mov\"], case_sensitive=False),\n default=None,\n ),\n option(\"-s\", \"--save_last_frame\", is_flag=True, default=None),\n option(\n \"-q\",\n \"--quality\",\n default=None,\n type=click.Choice(\n list(reversed([q[\"flag\"] for q in QUALITIES.values() if q[\"flag\"]])), # type: ignore\n case_sensitive=False,\n ),\n help=\"Render quality at the follow resolution framerates, respectively: \"\n + \", \".join(\n reversed(\n [\n f'{q[\"pixel_width\"]}x{q[\"pixel_height\"]} {q[\"frame_rate\"]}FPS'\n for q in QUALITIES.values()\n if q[\"flag\"]\n ]\n )\n ),\n ),\n option(\n \"-r\",\n \"--resolution\",\n callback=validate_resolution,\n default=None,\n help=\"Resolution in (W,H) for when 16:9 aspect ratio isn't possible.\",\n ),\n option(\n \"--fps\",\n \"--frame_rate\",\n \"frame_rate\",\n type=float,\n default=None,\n help=\"Render at this frame rate.\",\n ),\n option(\n \"--renderer\",\n type=click.Choice(\n [renderer_type.value for renderer_type in RendererType],\n case_sensitive=False,\n ),\n help=\"Select a renderer for your Scene.\",\n default=\"cairo\",\n ),\n option(\n \"-g\",\n \"--save_pngs\",\n is_flag=True,\n default=None,\n help=\"Save each frame as png (Deprecated).\",\n ),\n option(\n \"-i\",\n \"--save_as_gif\",\n default=None,\n is_flag=True,\n help=\"Save as a gif (Deprecated).\",\n ),\n option(\n \"--save_sections\",\n default=None,\n is_flag=True,\n help=\"Save section videos in addition to movie file.\",\n ),\n option(\n \"-s\",\n \"--save_last_frame\",\n default=None,\n is_flag=True,\n help=\"Save last frame as png (Deprecated).\",\n ),\n option(\n \"-t\",\n \"--transparent\",\n is_flag=True,\n help=\"Render scenes with alpha channel.\",\n ),\n option(\n \"--use_projection_fill_shaders\",\n is_flag=True,\n help=\"Use shaders for OpenGLVMobject fill which are compatible with transformation matrices.\",\n default=None,\n ),\n option(\n \"--use_projection_stroke_shaders\",\n is_flag=True,\n help=\"Use shaders for OpenGLVMobject stroke which are compatible with transformation matrices.\",\n default=None,\n ),\n)\n", "path": "manim/cli/render/render_options.py"}]} | 1,830 | 123 |
gh_patches_debug_39951 | rasdani/github-patches | git_diff | liqd__a4-opin-346 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Choose template: Small issues
There are some small wording issues when you choose a template to create a project in the dashboard. See comments in screenshot.

</issue>
<code>
[start of euth/dashboard/templatetags/dashboard_templatetags.py]
1 from django import template
2
3 register = template.Library()
4
5
6 @register.simple_tag
7 def selected(request, pattern):
8 path = request.path
9 if path == pattern:
10 return 'selected'
11 return ''
12
[end of euth/dashboard/templatetags/dashboard_templatetags.py]
[start of euth/dashboard/urls.py]
1 from django.conf.urls import url
2
3 from . import views
4
5 urlpatterns = [
6 url(
7 r'^$',
8 views.dashboard,
9 name='dashboard'),
10 url(
11 r'^profile$',
12 views.DashboardProfileView.as_view(),
13 name='dashboard-profile'),
14 url(
15 r'^email/$',
16 views.DashboardEmailView.as_view(),
17 name='dashboard-email'
18 ),
19 url(
20 r'^connections/$',
21 views.DashboardAccountView.as_view(),
22 name='dashboard-connections'
23 ),
24 url(
25 r'^(?P<organisation_slug>[-\w_]+)/$',
26 views.DashboardOrganisationUpdateView.as_view(),
27 name='dashboard-organisation-edit'
28 ),
29 url(
30 r'^(?P<organisation_slug>[-\w_]+)/projects/$',
31 views.DashboardProjectListView.as_view(),
32 name='dashboard-project-list'),
33 url(r'^(?P<organisation_slug>[-\w_]+)/blueprints/$',
34 views.DashboardBlueprintListView.as_view(),
35 name='dashboard-blueprint-list'),
36 url(r'^(?P<organisation_slug>[-\w_]+)/blueprints/'
37 r'(?P<blueprint_slug>[-\w_]+)/$',
38 views.DashboardProjectCreateView.as_view(),
39 name='dashboard-project-create'),
40 url(
41 r'^(?P<organisation_slug>[-\w_]+)/projects/(?P<slug>[-\w_]+)/$',
42 views.DashboardProjectUpdateView.as_view(),
43 name='dashboard-project-edit'
44 ),
45 url(
46 r'^(?P<organisation_slug>[-\w_]+)/projects/(?P<slug>[-\w_]+)/users$',
47 views.DashboardProjectUserView.as_view(),
48 name='dashboard-project-users'
49 ),
50 url(
51 r'^(?P<organisation_slug>[-\w_]+)/projects/'
52 r'(?P<slug>[-\w_]+)/users/invite$',
53 views.DashboardProjectInviteView.as_view(),
54 name='dashboard-project-invite'
55 ),
56 ]
57
[end of euth/dashboard/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/dashboard/templatetags/dashboard_templatetags.py b/euth/dashboard/templatetags/dashboard_templatetags.py
deleted file mode 100644
--- a/euth/dashboard/templatetags/dashboard_templatetags.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from django import template
-
-register = template.Library()
-
-
[email protected]_tag
-def selected(request, pattern):
- path = request.path
- if path == pattern:
- return 'selected'
- return ''
diff --git a/euth/dashboard/urls.py b/euth/dashboard/urls.py
--- a/euth/dashboard/urls.py
+++ b/euth/dashboard/urls.py
@@ -10,47 +10,57 @@
url(
r'^profile$',
views.DashboardProfileView.as_view(),
+ {'dashboard_menu_item': 'profile'},
name='dashboard-profile'),
url(
r'^email/$',
views.DashboardEmailView.as_view(),
+ {'dashboard_menu_item': 'email'},
name='dashboard-email'
),
url(
r'^connections/$',
views.DashboardAccountView.as_view(),
+ {'dashboard_menu_item': 'connections'},
name='dashboard-connections'
),
url(
r'^(?P<organisation_slug>[-\w_]+)/$',
views.DashboardOrganisationUpdateView.as_view(),
+ {'dashboard_menu_item': 'organisation'},
name='dashboard-organisation-edit'
),
url(
r'^(?P<organisation_slug>[-\w_]+)/projects/$',
views.DashboardProjectListView.as_view(),
+ {'dashboard_menu_item': 'project'},
name='dashboard-project-list'),
url(r'^(?P<organisation_slug>[-\w_]+)/blueprints/$',
views.DashboardBlueprintListView.as_view(),
+ {'dashboard_menu_item': 'project'},
name='dashboard-blueprint-list'),
url(r'^(?P<organisation_slug>[-\w_]+)/blueprints/'
r'(?P<blueprint_slug>[-\w_]+)/$',
views.DashboardProjectCreateView.as_view(),
+ {'dashboard_menu_item': 'project'},
name='dashboard-project-create'),
url(
r'^(?P<organisation_slug>[-\w_]+)/projects/(?P<slug>[-\w_]+)/$',
views.DashboardProjectUpdateView.as_view(),
+ {'dashboard_menu_item': 'project'},
name='dashboard-project-edit'
),
url(
r'^(?P<organisation_slug>[-\w_]+)/projects/(?P<slug>[-\w_]+)/users$',
views.DashboardProjectUserView.as_view(),
+ {'dashboard_menu_item': 'project'},
name='dashboard-project-users'
),
url(
r'^(?P<organisation_slug>[-\w_]+)/projects/'
r'(?P<slug>[-\w_]+)/users/invite$',
views.DashboardProjectInviteView.as_view(),
+ {'dashboard_menu_item': 'project'},
name='dashboard-project-invite'
),
]
| {"golden_diff": "diff --git a/euth/dashboard/templatetags/dashboard_templatetags.py b/euth/dashboard/templatetags/dashboard_templatetags.py\ndeleted file mode 100644\n--- a/euth/dashboard/templatetags/dashboard_templatetags.py\n+++ /dev/null\n@@ -1,11 +0,0 @@\n-from django import template\n-\n-register = template.Library()\n-\n-\[email protected]_tag\n-def selected(request, pattern):\n- path = request.path\n- if path == pattern:\n- return 'selected'\n- return ''\ndiff --git a/euth/dashboard/urls.py b/euth/dashboard/urls.py\n--- a/euth/dashboard/urls.py\n+++ b/euth/dashboard/urls.py\n@@ -10,47 +10,57 @@\n url(\n r'^profile$',\n views.DashboardProfileView.as_view(),\n+ {'dashboard_menu_item': 'profile'},\n name='dashboard-profile'),\n url(\n r'^email/$',\n views.DashboardEmailView.as_view(),\n+ {'dashboard_menu_item': 'email'},\n name='dashboard-email'\n ),\n url(\n r'^connections/$',\n views.DashboardAccountView.as_view(),\n+ {'dashboard_menu_item': 'connections'},\n name='dashboard-connections'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/$',\n views.DashboardOrganisationUpdateView.as_view(),\n+ {'dashboard_menu_item': 'organisation'},\n name='dashboard-organisation-edit'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/$',\n views.DashboardProjectListView.as_view(),\n+ {'dashboard_menu_item': 'project'},\n name='dashboard-project-list'),\n url(r'^(?P<organisation_slug>[-\\w_]+)/blueprints/$',\n views.DashboardBlueprintListView.as_view(),\n+ {'dashboard_menu_item': 'project'},\n name='dashboard-blueprint-list'),\n url(r'^(?P<organisation_slug>[-\\w_]+)/blueprints/'\n r'(?P<blueprint_slug>[-\\w_]+)/$',\n views.DashboardProjectCreateView.as_view(),\n+ {'dashboard_menu_item': 'project'},\n name='dashboard-project-create'),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/(?P<slug>[-\\w_]+)/$',\n views.DashboardProjectUpdateView.as_view(),\n+ {'dashboard_menu_item': 'project'},\n name='dashboard-project-edit'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/(?P<slug>[-\\w_]+)/users$',\n views.DashboardProjectUserView.as_view(),\n+ {'dashboard_menu_item': 'project'},\n name='dashboard-project-users'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/'\n r'(?P<slug>[-\\w_]+)/users/invite$',\n views.DashboardProjectInviteView.as_view(),\n+ {'dashboard_menu_item': 'project'},\n name='dashboard-project-invite'\n ),\n ]\n", "issue": "Choose template: Small issues\nThere are some small wording issues when you choose a template to create a project in the dashboard. See comments in screenshot.\n\n\n\n", "before_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag\ndef selected(request, pattern):\n path = request.path\n if path == pattern:\n return 'selected'\n return ''\n", "path": "euth/dashboard/templatetags/dashboard_templatetags.py"}, {"content": "from django.conf.urls import url\n\nfrom . import views\n\nurlpatterns = [\n url(\n r'^$',\n views.dashboard,\n name='dashboard'),\n url(\n r'^profile$',\n views.DashboardProfileView.as_view(),\n name='dashboard-profile'),\n url(\n r'^email/$',\n views.DashboardEmailView.as_view(),\n name='dashboard-email'\n ),\n url(\n r'^connections/$',\n views.DashboardAccountView.as_view(),\n name='dashboard-connections'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/$',\n views.DashboardOrganisationUpdateView.as_view(),\n name='dashboard-organisation-edit'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/$',\n views.DashboardProjectListView.as_view(),\n name='dashboard-project-list'),\n url(r'^(?P<organisation_slug>[-\\w_]+)/blueprints/$',\n views.DashboardBlueprintListView.as_view(),\n name='dashboard-blueprint-list'),\n url(r'^(?P<organisation_slug>[-\\w_]+)/blueprints/'\n r'(?P<blueprint_slug>[-\\w_]+)/$',\n views.DashboardProjectCreateView.as_view(),\n name='dashboard-project-create'),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/(?P<slug>[-\\w_]+)/$',\n views.DashboardProjectUpdateView.as_view(),\n name='dashboard-project-edit'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/(?P<slug>[-\\w_]+)/users$',\n views.DashboardProjectUserView.as_view(),\n name='dashboard-project-users'\n ),\n url(\n r'^(?P<organisation_slug>[-\\w_]+)/projects/'\n r'(?P<slug>[-\\w_]+)/users/invite$',\n views.DashboardProjectInviteView.as_view(),\n name='dashboard-project-invite'\n ),\n]\n", "path": "euth/dashboard/urls.py"}]} | 1,271 | 686 |
gh_patches_debug_33519 | rasdani/github-patches | git_diff | TheAlgorithms__Python-10121 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve our test coverage
### Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.
### How to find low-coverage files
Go to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:
```
---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------------------------------
quantum/q_fourier_transform.py 30 30 0% 14-93
scripts/validate_solutions.py 54 54 0% 2-94
strings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129
...
```
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.
Some files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.
_**When you open your PR, put "Contributes to #9943" in the PR description.**_ Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.
### How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:
```py
def add(a: int, b: int) -> int:
"""
Adds two non-negative numbers.
>>> add(1, 1)
2
>>> add(2, 5)
7
>>> add(1, 0)
1
>>> add(-1, -1)
Traceback (most recent last):
...
ValueError: Numbers must be non-negative
"""
```
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.
_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_
</issue>
<code>
[start of maths/binary_exponentiation_2.py]
1 """
2 * Binary Exponentiation for Powers
3 * This is a method to find a^b in a time complexity of O(log b)
4 * This is one of the most commonly used methods of finding powers.
5 * Also useful in cases where solution to (a^b)%c is required,
6 * where a,b,c can be numbers over the computers calculation limits.
7 * Done using iteration, can also be done using recursion
8
9 * @author chinmoy159
10 * @version 1.0 dated 10/08/2017
11 """
12
13
14 def b_expo(a: int, b: int) -> int:
15 res = 1
16 while b > 0:
17 if b & 1:
18 res *= a
19
20 a *= a
21 b >>= 1
22
23 return res
24
25
26 def b_expo_mod(a: int, b: int, c: int) -> int:
27 res = 1
28 while b > 0:
29 if b & 1:
30 res = ((res % c) * (a % c)) % c
31
32 a *= a
33 b >>= 1
34
35 return res
36
37
38 """
39 * Wondering how this method works !
40 * It's pretty simple.
41 * Let's say you need to calculate a ^ b
42 * RULE 1 : a ^ b = (a*a) ^ (b/2) ---- example : 4 ^ 4 = (4*4) ^ (4/2) = 16 ^ 2
43 * RULE 2 : IF b is ODD, then ---- a ^ b = a * (a ^ (b - 1)) :: where (b - 1) is even.
44 * Once b is even, repeat the process to get a ^ b
45 * Repeat the process till b = 1 OR b = 0, because a^1 = a AND a^0 = 1
46 *
47 * As far as the modulo is concerned,
48 * the fact : (a*b) % c = ((a%c) * (b%c)) % c
49 * Now apply RULE 1 OR 2 whichever is required.
50 """
51
[end of maths/binary_exponentiation_2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/maths/binary_exponentiation_2.py b/maths/binary_exponentiation_2.py
--- a/maths/binary_exponentiation_2.py
+++ b/maths/binary_exponentiation_2.py
@@ -1,17 +1,33 @@
"""
-* Binary Exponentiation for Powers
-* This is a method to find a^b in a time complexity of O(log b)
-* This is one of the most commonly used methods of finding powers.
-* Also useful in cases where solution to (a^b)%c is required,
-* where a,b,c can be numbers over the computers calculation limits.
-* Done using iteration, can also be done using recursion
-
-* @author chinmoy159
-* @version 1.0 dated 10/08/2017
+Binary Exponentiation
+This is a method to find a^b in O(log b) time complexity
+This is one of the most commonly used methods of exponentiation
+It's also useful when the solution to (a^b) % c is required because a, b, c may be
+over the computer's calculation limits
+
+Let's say you need to calculate a ^ b
+- RULE 1 : a ^ b = (a*a) ^ (b/2) ---- example : 4 ^ 4 = (4*4) ^ (4/2) = 16 ^ 2
+- RULE 2 : IF b is odd, then a ^ b = a * (a ^ (b - 1)), where b - 1 is even
+Once b is even, repeat the process until b = 1 or b = 0, because a^1 = a and a^0 = 1
+
+For modular exponentiation, we use the fact that (a*b) % c = ((a%c) * (b%c)) % c
+Now apply RULE 1 or 2 as required
+
+@author chinmoy159
"""
def b_expo(a: int, b: int) -> int:
+ """
+ >>> b_expo(2, 10)
+ 1024
+ >>> b_expo(9, 0)
+ 1
+ >>> b_expo(0, 12)
+ 0
+ >>> b_expo(4, 12)
+ 16777216
+ """
res = 1
while b > 0:
if b & 1:
@@ -24,6 +40,16 @@
def b_expo_mod(a: int, b: int, c: int) -> int:
+ """
+ >>> b_expo_mod(2, 10, 1000000007)
+ 1024
+ >>> b_expo_mod(11, 13, 19)
+ 11
+ >>> b_expo_mod(0, 19, 20)
+ 0
+ >>> b_expo_mod(15, 5, 4)
+ 3
+ """
res = 1
while b > 0:
if b & 1:
@@ -33,18 +59,3 @@
b >>= 1
return res
-
-
-"""
-* Wondering how this method works !
-* It's pretty simple.
-* Let's say you need to calculate a ^ b
-* RULE 1 : a ^ b = (a*a) ^ (b/2) ---- example : 4 ^ 4 = (4*4) ^ (4/2) = 16 ^ 2
-* RULE 2 : IF b is ODD, then ---- a ^ b = a * (a ^ (b - 1)) :: where (b - 1) is even.
-* Once b is even, repeat the process to get a ^ b
-* Repeat the process till b = 1 OR b = 0, because a^1 = a AND a^0 = 1
-*
-* As far as the modulo is concerned,
-* the fact : (a*b) % c = ((a%c) * (b%c)) % c
-* Now apply RULE 1 OR 2 whichever is required.
-"""
| {"golden_diff": "diff --git a/maths/binary_exponentiation_2.py b/maths/binary_exponentiation_2.py\n--- a/maths/binary_exponentiation_2.py\n+++ b/maths/binary_exponentiation_2.py\n@@ -1,17 +1,33 @@\n \"\"\"\n-* Binary Exponentiation for Powers\n-* This is a method to find a^b in a time complexity of O(log b)\n-* This is one of the most commonly used methods of finding powers.\n-* Also useful in cases where solution to (a^b)%c is required,\n-* where a,b,c can be numbers over the computers calculation limits.\n-* Done using iteration, can also be done using recursion\n-\n-* @author chinmoy159\n-* @version 1.0 dated 10/08/2017\n+Binary Exponentiation\n+This is a method to find a^b in O(log b) time complexity\n+This is one of the most commonly used methods of exponentiation\n+It's also useful when the solution to (a^b) % c is required because a, b, c may be\n+over the computer's calculation limits\n+\n+Let's say you need to calculate a ^ b\n+- RULE 1 : a ^ b = (a*a) ^ (b/2) ---- example : 4 ^ 4 = (4*4) ^ (4/2) = 16 ^ 2\n+- RULE 2 : IF b is odd, then a ^ b = a * (a ^ (b - 1)), where b - 1 is even\n+Once b is even, repeat the process until b = 1 or b = 0, because a^1 = a and a^0 = 1\n+\n+For modular exponentiation, we use the fact that (a*b) % c = ((a%c) * (b%c)) % c\n+Now apply RULE 1 or 2 as required\n+\n+@author chinmoy159\n \"\"\"\n \n \n def b_expo(a: int, b: int) -> int:\n+ \"\"\"\n+ >>> b_expo(2, 10)\n+ 1024\n+ >>> b_expo(9, 0)\n+ 1\n+ >>> b_expo(0, 12)\n+ 0\n+ >>> b_expo(4, 12)\n+ 16777216\n+ \"\"\"\n res = 1\n while b > 0:\n if b & 1:\n@@ -24,6 +40,16 @@\n \n \n def b_expo_mod(a: int, b: int, c: int) -> int:\n+ \"\"\"\n+ >>> b_expo_mod(2, 10, 1000000007)\n+ 1024\n+ >>> b_expo_mod(11, 13, 19)\n+ 11\n+ >>> b_expo_mod(0, 19, 20)\n+ 0\n+ >>> b_expo_mod(15, 5, 4)\n+ 3\n+ \"\"\"\n res = 1\n while b > 0:\n if b & 1:\n@@ -33,18 +59,3 @@\n b >>= 1\n \n return res\n-\n-\n-\"\"\"\n-* Wondering how this method works !\n-* It's pretty simple.\n-* Let's say you need to calculate a ^ b\n-* RULE 1 : a ^ b = (a*a) ^ (b/2) ---- example : 4 ^ 4 = (4*4) ^ (4/2) = 16 ^ 2\n-* RULE 2 : IF b is ODD, then ---- a ^ b = a * (a ^ (b - 1)) :: where (b - 1) is even.\n-* Once b is even, repeat the process to get a ^ b\n-* Repeat the process till b = 1 OR b = 0, because a^1 = a AND a^0 = 1\n-*\n-* As far as the modulo is concerned,\n-* the fact : (a*b) % c = ((a%c) * (b%c)) % c\n-* Now apply RULE 1 OR 2 whichever is required.\n-\"\"\"\n", "issue": "Improve our test coverage\n### Feature description\r\n\r\nMany of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.\r\n\r\n### How to find low-coverage files\r\n\r\nGo to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under \"Run Tests\" and scroll down until you find the section on code coverage:\r\n```\r\n---------- coverage: platform linux, python 3.12.0-final-0 -----------\r\nName Stmts Miss Cover Missing\r\n-----------------------------------------------------------------------------------------------------------\r\nquantum/q_fourier_transform.py 30 30 0% 14-93\r\nscripts/validate_solutions.py 54 54 0% 2-94\r\nstrings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129\r\n...\r\n```\r\nThe \"Cover\" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.\r\n\r\nSome files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.\r\n\r\n_**When you open your PR, put \"Contributes to #9943\" in the PR description.**_ Do not use the word \"fixes\", \"resolves\", or \"closes\". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.\r\n\r\n### How to add doctests\r\n\r\nA doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:\r\n```py\r\ndef add(a: int, b: int) -> int:\r\n \"\"\"\r\n Adds two non-negative numbers.\r\n >>> add(1, 1)\r\n 2\r\n >>> add(2, 5)\r\n 7\r\n >>> add(1, 0)\r\n 1\r\n >>> add(-1, -1)\r\n Traceback (most recent last):\r\n ...\r\n ValueError: Numbers must be non-negative\r\n \"\"\"\r\n```\r\nFor every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).\r\n\r\nDo not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.\r\n\r\n_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_\n", "before_files": [{"content": "\"\"\"\n* Binary Exponentiation for Powers\n* This is a method to find a^b in a time complexity of O(log b)\n* This is one of the most commonly used methods of finding powers.\n* Also useful in cases where solution to (a^b)%c is required,\n* where a,b,c can be numbers over the computers calculation limits.\n* Done using iteration, can also be done using recursion\n\n* @author chinmoy159\n* @version 1.0 dated 10/08/2017\n\"\"\"\n\n\ndef b_expo(a: int, b: int) -> int:\n res = 1\n while b > 0:\n if b & 1:\n res *= a\n\n a *= a\n b >>= 1\n\n return res\n\n\ndef b_expo_mod(a: int, b: int, c: int) -> int:\n res = 1\n while b > 0:\n if b & 1:\n res = ((res % c) * (a % c)) % c\n\n a *= a\n b >>= 1\n\n return res\n\n\n\"\"\"\n* Wondering how this method works !\n* It's pretty simple.\n* Let's say you need to calculate a ^ b\n* RULE 1 : a ^ b = (a*a) ^ (b/2) ---- example : 4 ^ 4 = (4*4) ^ (4/2) = 16 ^ 2\n* RULE 2 : IF b is ODD, then ---- a ^ b = a * (a ^ (b - 1)) :: where (b - 1) is even.\n* Once b is even, repeat the process to get a ^ b\n* Repeat the process till b = 1 OR b = 0, because a^1 = a AND a^0 = 1\n*\n* As far as the modulo is concerned,\n* the fact : (a*b) % c = ((a%c) * (b%c)) % c\n* Now apply RULE 1 OR 2 whichever is required.\n\"\"\"\n", "path": "maths/binary_exponentiation_2.py"}]} | 1,936 | 956 |
gh_patches_debug_56718 | rasdani/github-patches | git_diff | mosaicml__composer-293 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ResNet56 default num_classes argument
## 🚀 Feature Request
The `num_classes` argument for [ResNet56_cifar10](https://github.com/mosaicml/composer/blob/main/composer/models/resnet56_cifar10/model.py) should have a default value `num_classes=10`.
## Motivation
It felt silly when writing a demo notebook to have to specify `num_classes=10` when calling `composer.models.CIFAR10_ResNet56(num_classes=10)`. The model has "cifar10" in its name, and even if it didn't, it's most common use is for cifar10.
## Implementation
Does it require any changes beyond the `__init__()` signature?
</issue>
<code>
[start of composer/models/resnet56_cifar10/model.py]
1 # Copyright 2021 MosaicML. All Rights Reserved.
2
3 from typing import List, Optional
4
5 from composer.models.base import MosaicClassifier
6 from composer.models.model_hparams import Initializer
7 from composer.models.resnets import CIFAR_ResNet
8
9
10 class CIFAR10_ResNet56(MosaicClassifier):
11 """A ResNet-56 model extending :class:`MosaicClassifier`.
12
13 See this `paper <https://arxiv.org/abs/1512.03385>`_ for details
14 on the residual network architecture.
15
16 Args:
17 num_classes (int): The number of classes for the model.
18 initializers (List[Initializer], optional): Initializers
19 for the model. ``None`` for no initialization.
20 (default: ``None``)
21 """
22
23 def __init__(
24 self,
25 num_classes: int,
26 initializers: Optional[List[Initializer]] = None,
27 ) -> None:
28 if initializers is None:
29 initializers = []
30
31 model = CIFAR_ResNet.get_model_from_name(
32 "cifar_resnet_56",
33 initializers,
34 num_classes,
35 )
36 super().__init__(module=model)
37
[end of composer/models/resnet56_cifar10/model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/composer/models/resnet56_cifar10/model.py b/composer/models/resnet56_cifar10/model.py
--- a/composer/models/resnet56_cifar10/model.py
+++ b/composer/models/resnet56_cifar10/model.py
@@ -22,7 +22,7 @@
def __init__(
self,
- num_classes: int,
+ num_classes: int = 10,
initializers: Optional[List[Initializer]] = None,
) -> None:
if initializers is None:
| {"golden_diff": "diff --git a/composer/models/resnet56_cifar10/model.py b/composer/models/resnet56_cifar10/model.py\n--- a/composer/models/resnet56_cifar10/model.py\n+++ b/composer/models/resnet56_cifar10/model.py\n@@ -22,7 +22,7 @@\n \n def __init__(\n self,\n- num_classes: int,\n+ num_classes: int = 10,\n initializers: Optional[List[Initializer]] = None,\n ) -> None:\n if initializers is None:\n", "issue": "ResNet56 default num_classes argument\n## \ud83d\ude80 Feature Request\r\nThe `num_classes` argument for [ResNet56_cifar10](https://github.com/mosaicml/composer/blob/main/composer/models/resnet56_cifar10/model.py) should have a default value `num_classes=10`.\r\n\r\n## Motivation\r\n\r\nIt felt silly when writing a demo notebook to have to specify `num_classes=10` when calling `composer.models.CIFAR10_ResNet56(num_classes=10)`. The model has \"cifar10\" in its name, and even if it didn't, it's most common use is for cifar10.\r\n\r\n## Implementation\r\n\r\nDoes it require any changes beyond the `__init__()` signature?\n", "before_files": [{"content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom typing import List, Optional\n\nfrom composer.models.base import MosaicClassifier\nfrom composer.models.model_hparams import Initializer\nfrom composer.models.resnets import CIFAR_ResNet\n\n\nclass CIFAR10_ResNet56(MosaicClassifier):\n \"\"\"A ResNet-56 model extending :class:`MosaicClassifier`.\n\n See this `paper <https://arxiv.org/abs/1512.03385>`_ for details\n on the residual network architecture.\n\n Args:\n num_classes (int): The number of classes for the model.\n initializers (List[Initializer], optional): Initializers\n for the model. ``None`` for no initialization.\n (default: ``None``)\n \"\"\"\n\n def __init__(\n self,\n num_classes: int,\n initializers: Optional[List[Initializer]] = None,\n ) -> None:\n if initializers is None:\n initializers = []\n\n model = CIFAR_ResNet.get_model_from_name(\n \"cifar_resnet_56\",\n initializers,\n num_classes,\n )\n super().__init__(module=model)\n", "path": "composer/models/resnet56_cifar10/model.py"}]} | 1,040 | 128 |
gh_patches_debug_25116 | rasdani/github-patches | git_diff | lutris__lutris-2682 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failure to read Steam's config.vdf due to wrong case
Lutris can't read Steam's config.vdf file because the "Steam" value is actually lowercase when Lutris expects it to be uppercase.

Same as #1966.
Failure to read Steam's config.vdf due to wrong case
Lutris can't read Steam's config.vdf file because the "Steam" value is actually lowercase when Lutris expects it to be uppercase.

Same as #1966.
</issue>
<code>
[start of lutris/util/steam/config.py]
1 """Handle Steam configuration"""
2 import os
3 from collections import OrderedDict, defaultdict
4
5 from lutris.util import system
6 from lutris.util.log import logger
7 from lutris.util.steam.vdf import vdf_parse
8
9
10 def get_default_acf(appid, name):
11 """Return a default configuration usable to
12 create a runnable game in Steam"""
13
14 userconfig = OrderedDict()
15 userconfig["name"] = name
16 userconfig["gameid"] = appid
17
18 appstate = OrderedDict()
19 appstate["appID"] = appid
20 appstate["Universe"] = "1"
21 appstate["StateFlags"] = "1026"
22 appstate["installdir"] = name
23 appstate["UserConfig"] = userconfig
24 return {"AppState": appstate}
25
26
27 def read_config(steam_data_dir):
28 """Read the Steam configuration and return it as an object"""
29 config_filename = os.path.join(steam_data_dir, "config/config.vdf")
30 if not system.path_exists(config_filename):
31 return None
32 with open(config_filename, "r") as steam_config_file:
33 config = vdf_parse(steam_config_file, {})
34 try:
35 return config["InstallConfigStore"]["Software"]["Valve"]["Steam"]
36 except KeyError:
37 try:
38 return config["InstallConfigStore"]["Software"]["valve"]["Steam"]
39 except KeyError as ex:
40 logger.error("Steam config %s is empty: %s", config_filename, ex)
41
42
43 def get_steamapps_paths_for_platform(platform_name):
44 """
45 """
46 from lutris.runners import winesteam, steam
47
48 runners = {"linux": steam.steam, "windows": winesteam.winesteam}
49 runner = runners[platform_name]()
50 return runner.get_steamapps_dirs()
51
52
53 def get_steamapps_paths(flat=False, platform=None):
54 base_platforms = ["linux", "windows"]
55 if flat:
56 steamapps_paths = []
57 else:
58 steamapps_paths = defaultdict(list)
59
60 if platform:
61 if platform not in base_platforms:
62 raise ValueError("Illegal value for Steam platform: %s" % platform)
63 platforms = [platform]
64 else:
65 platforms = base_platforms
66
67 for _platform in platforms:
68 folders = get_steamapps_paths_for_platform(_platform)
69 if flat:
70 steamapps_paths += folders
71 else:
72 steamapps_paths[_platform] = folders
73
74 return steamapps_paths
75
[end of lutris/util/steam/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lutris/util/steam/config.py b/lutris/util/steam/config.py
--- a/lutris/util/steam/config.py
+++ b/lutris/util/steam/config.py
@@ -26,18 +26,25 @@
def read_config(steam_data_dir):
"""Read the Steam configuration and return it as an object"""
+
+ def get_entry_case_insensitive(config_dict, path):
+ for key, value in config_dict.items():
+ if key.lower() == path[0].lower():
+ if len(path) <= 1:
+ return config_dict[key]
+
+ return get_entry_case_insensitive(config_dict[key], path[1:])
+ raise KeyError(path[0])
+
config_filename = os.path.join(steam_data_dir, "config/config.vdf")
if not system.path_exists(config_filename):
return None
with open(config_filename, "r") as steam_config_file:
config = vdf_parse(steam_config_file, {})
try:
- return config["InstallConfigStore"]["Software"]["Valve"]["Steam"]
- except KeyError:
- try:
- return config["InstallConfigStore"]["Software"]["valve"]["Steam"]
- except KeyError as ex:
- logger.error("Steam config %s is empty: %s", config_filename, ex)
+ return get_entry_case_insensitive(config, ["InstallConfigStore", "Software", "Valve", "Steam"])
+ except KeyError as ex:
+ logger.error("Steam config %s is empty: %s", config_filename, ex)
def get_steamapps_paths_for_platform(platform_name):
| {"golden_diff": "diff --git a/lutris/util/steam/config.py b/lutris/util/steam/config.py\n--- a/lutris/util/steam/config.py\n+++ b/lutris/util/steam/config.py\n@@ -26,18 +26,25 @@\n \n def read_config(steam_data_dir):\n \"\"\"Read the Steam configuration and return it as an object\"\"\"\n+\n+ def get_entry_case_insensitive(config_dict, path):\n+ for key, value in config_dict.items():\n+ if key.lower() == path[0].lower():\n+ if len(path) <= 1:\n+ return config_dict[key]\n+\n+ return get_entry_case_insensitive(config_dict[key], path[1:])\n+ raise KeyError(path[0])\n+\n config_filename = os.path.join(steam_data_dir, \"config/config.vdf\")\n if not system.path_exists(config_filename):\n return None\n with open(config_filename, \"r\") as steam_config_file:\n config = vdf_parse(steam_config_file, {})\n try:\n- return config[\"InstallConfigStore\"][\"Software\"][\"Valve\"][\"Steam\"]\n- except KeyError:\n- try:\n- return config[\"InstallConfigStore\"][\"Software\"][\"valve\"][\"Steam\"]\n- except KeyError as ex:\n- logger.error(\"Steam config %s is empty: %s\", config_filename, ex)\n+ return get_entry_case_insensitive(config, [\"InstallConfigStore\", \"Software\", \"Valve\", \"Steam\"])\n+ except KeyError as ex:\n+ logger.error(\"Steam config %s is empty: %s\", config_filename, ex)\n \n \n def get_steamapps_paths_for_platform(platform_name):\n", "issue": "Failure to read Steam's config.vdf due to wrong case\nLutris can't read Steam's config.vdf file because the \"Steam\" value is actually lowercase when Lutris expects it to be uppercase.\r\n\r\n\r\n\r\nSame as #1966.\nFailure to read Steam's config.vdf due to wrong case\nLutris can't read Steam's config.vdf file because the \"Steam\" value is actually lowercase when Lutris expects it to be uppercase.\r\n\r\n\r\n\r\nSame as #1966.\n", "before_files": [{"content": "\"\"\"Handle Steam configuration\"\"\"\nimport os\nfrom collections import OrderedDict, defaultdict\n\nfrom lutris.util import system\nfrom lutris.util.log import logger\nfrom lutris.util.steam.vdf import vdf_parse\n\n\ndef get_default_acf(appid, name):\n \"\"\"Return a default configuration usable to\n create a runnable game in Steam\"\"\"\n\n userconfig = OrderedDict()\n userconfig[\"name\"] = name\n userconfig[\"gameid\"] = appid\n\n appstate = OrderedDict()\n appstate[\"appID\"] = appid\n appstate[\"Universe\"] = \"1\"\n appstate[\"StateFlags\"] = \"1026\"\n appstate[\"installdir\"] = name\n appstate[\"UserConfig\"] = userconfig\n return {\"AppState\": appstate}\n\n\ndef read_config(steam_data_dir):\n \"\"\"Read the Steam configuration and return it as an object\"\"\"\n config_filename = os.path.join(steam_data_dir, \"config/config.vdf\")\n if not system.path_exists(config_filename):\n return None\n with open(config_filename, \"r\") as steam_config_file:\n config = vdf_parse(steam_config_file, {})\n try:\n return config[\"InstallConfigStore\"][\"Software\"][\"Valve\"][\"Steam\"]\n except KeyError:\n try:\n return config[\"InstallConfigStore\"][\"Software\"][\"valve\"][\"Steam\"]\n except KeyError as ex:\n logger.error(\"Steam config %s is empty: %s\", config_filename, ex)\n\n\ndef get_steamapps_paths_for_platform(platform_name):\n \"\"\"\n \"\"\"\n from lutris.runners import winesteam, steam\n\n runners = {\"linux\": steam.steam, \"windows\": winesteam.winesteam}\n runner = runners[platform_name]()\n return runner.get_steamapps_dirs()\n\n\ndef get_steamapps_paths(flat=False, platform=None):\n base_platforms = [\"linux\", \"windows\"]\n if flat:\n steamapps_paths = []\n else:\n steamapps_paths = defaultdict(list)\n\n if platform:\n if platform not in base_platforms:\n raise ValueError(\"Illegal value for Steam platform: %s\" % platform)\n platforms = [platform]\n else:\n platforms = base_platforms\n\n for _platform in platforms:\n folders = get_steamapps_paths_for_platform(_platform)\n if flat:\n steamapps_paths += folders\n else:\n steamapps_paths[_platform] = folders\n\n return steamapps_paths\n", "path": "lutris/util/steam/config.py"}]} | 1,450 | 354 |
gh_patches_debug_29875 | rasdani/github-patches | git_diff | streamlink__streamlink-1727 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Request: Add more functions to facebook plugin
### Checklist
- [x] This is a bug report.
- [ ] This is a feature request.
- [x] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
Reminder that with the new initial support of Mpeg Dash #880 and #990 might be fixable now, depending on what streamlink supports and how Facebook's videos and livestreaming has changed since this was last looked it.
</issue>
<code>
[start of src/streamlink/plugins/facebook.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.stream import HLSStream
5
6 _playlist_url = "https://www.facebook.com/video/playback/playlist.m3u8?v={0}"
7
8 _url_re = re.compile(r"http(s)?://(www\.)?facebook\.com/[^/]+/videos/(?P<video_id>\d+)")
9
10
11 class Facebook(Plugin):
12 @classmethod
13 def can_handle_url(cls, url):
14 return _url_re.match(url)
15
16 @Plugin.broken(990)
17 def _get_streams(self):
18 match = _url_re.match(self.url)
19 video = match.group("video_id")
20
21 playlist = _playlist_url.format(video)
22
23 return HLSStream.parse_variant_playlist(self.session, playlist)
24
25
26 __plugin__ = Facebook
27
[end of src/streamlink/plugins/facebook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/facebook.py b/src/streamlink/plugins/facebook.py
--- a/src/streamlink/plugins/facebook.py
+++ b/src/streamlink/plugins/facebook.py
@@ -1,26 +1,42 @@
import re
from streamlink.plugin import Plugin
-from streamlink.stream import HLSStream
-
-_playlist_url = "https://www.facebook.com/video/playback/playlist.m3u8?v={0}"
-
-_url_re = re.compile(r"http(s)?://(www\.)?facebook\.com/[^/]+/videos/(?P<video_id>\d+)")
+from streamlink.plugin.api import http, useragents
+from streamlink.stream import DASHStream, HTTPStream
+from streamlink.utils import parse_json
class Facebook(Plugin):
+ _url_re = re.compile(r"https?://(?:www\.)?facebook\.com/[^/]+/videos")
+ _mpd_re = re.compile(r'''(sd|hd)_src["']?\s*:\s*(?P<quote>["'])(?P<url>.+?)(?P=quote)''')
+ _playlist_re = re.compile(r'''video:\[({url:".+?}\])''')
+ _plurl_re = re.compile(r'''url:"(.*?)"''')
+
@classmethod
def can_handle_url(cls, url):
- return _url_re.match(url)
+ return cls._url_re.match(url)
- @Plugin.broken(990)
def _get_streams(self):
- match = _url_re.match(self.url)
- video = match.group("video_id")
+ res = http.get(self.url, headers={"User-Agent": useragents.CHROME})
+ with open("temp.html", "w") as f:
+ f.write(res.text)
+
+ for match in self._mpd_re.finditer(res.text):
+ manifest_url = match.group("url")
+ if "\\/" in manifest_url:
+ # if the URL is json encoded, decode it
+ manifest_url = parse_json("\"{}\"".format(manifest_url))
+ for s in DASHStream.parse_manifest(self.session, manifest_url).items():
+ yield s
+ else:
+ match = self._playlist_re.search(res.text)
+ playlist = match and match.group(1)
+ if playlist:
+ for url in {url.group(1) for url in self._plurl_re.finditer(playlist)}:
+ yield "live", HTTPStream(self.session, url)
+
- playlist = _playlist_url.format(video)
- return HLSStream.parse_variant_playlist(self.session, playlist)
__plugin__ = Facebook
| {"golden_diff": "diff --git a/src/streamlink/plugins/facebook.py b/src/streamlink/plugins/facebook.py\n--- a/src/streamlink/plugins/facebook.py\n+++ b/src/streamlink/plugins/facebook.py\n@@ -1,26 +1,42 @@\n import re\n \n from streamlink.plugin import Plugin\n-from streamlink.stream import HLSStream\n-\n-_playlist_url = \"https://www.facebook.com/video/playback/playlist.m3u8?v={0}\"\n-\n-_url_re = re.compile(r\"http(s)?://(www\\.)?facebook\\.com/[^/]+/videos/(?P<video_id>\\d+)\")\n+from streamlink.plugin.api import http, useragents\n+from streamlink.stream import DASHStream, HTTPStream\n+from streamlink.utils import parse_json\n \n \n class Facebook(Plugin):\n+ _url_re = re.compile(r\"https?://(?:www\\.)?facebook\\.com/[^/]+/videos\")\n+ _mpd_re = re.compile(r'''(sd|hd)_src[\"']?\\s*:\\s*(?P<quote>[\"'])(?P<url>.+?)(?P=quote)''')\n+ _playlist_re = re.compile(r'''video:\\[({url:\".+?}\\])''')\n+ _plurl_re = re.compile(r'''url:\"(.*?)\"''')\n+\n @classmethod\n def can_handle_url(cls, url):\n- return _url_re.match(url)\n+ return cls._url_re.match(url)\n \n- @Plugin.broken(990)\n def _get_streams(self):\n- match = _url_re.match(self.url)\n- video = match.group(\"video_id\")\n+ res = http.get(self.url, headers={\"User-Agent\": useragents.CHROME})\n+ with open(\"temp.html\", \"w\") as f:\n+ f.write(res.text)\n+\n+ for match in self._mpd_re.finditer(res.text):\n+ manifest_url = match.group(\"url\")\n+ if \"\\\\/\" in manifest_url:\n+ # if the URL is json encoded, decode it\n+ manifest_url = parse_json(\"\\\"{}\\\"\".format(manifest_url))\n+ for s in DASHStream.parse_manifest(self.session, manifest_url).items():\n+ yield s\n+ else:\n+ match = self._playlist_re.search(res.text)\n+ playlist = match and match.group(1)\n+ if playlist:\n+ for url in {url.group(1) for url in self._plurl_re.finditer(playlist)}:\n+ yield \"live\", HTTPStream(self.session, url)\n+\n \n- playlist = _playlist_url.format(video)\n \n- return HLSStream.parse_variant_playlist(self.session, playlist)\n \n \n __plugin__ = Facebook\n", "issue": "Request: Add more functions to facebook plugin\n### Checklist\r\n\r\n- [x] This is a bug report.\r\n- [ ] This is a feature request.\r\n- [x] This is a plugin (improvement) request.\r\n- [ ] I have read the contribution guidelines.\r\n\r\n### Description\r\nReminder that with the new initial support of Mpeg Dash #880 and #990 might be fixable now, depending on what streamlink supports and how Facebook's videos and livestreaming has changed since this was last looked it.\r\n\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.stream import HLSStream\n\n_playlist_url = \"https://www.facebook.com/video/playback/playlist.m3u8?v={0}\"\n\n_url_re = re.compile(r\"http(s)?://(www\\.)?facebook\\.com/[^/]+/videos/(?P<video_id>\\d+)\")\n\n\nclass Facebook(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n @Plugin.broken(990)\n def _get_streams(self):\n match = _url_re.match(self.url)\n video = match.group(\"video_id\")\n\n playlist = _playlist_url.format(video)\n\n return HLSStream.parse_variant_playlist(self.session, playlist)\n\n\n__plugin__ = Facebook\n", "path": "src/streamlink/plugins/facebook.py"}]} | 868 | 585 |
gh_patches_debug_126 | rasdani/github-patches | git_diff | holoviz__panel-3990 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clearing value of a DatetimePicker
#### Description of expected behavior and the observed behavior
Not sure if this is a bug or a new feature to Panel. Let's say I have a layout consisting of a button named "Edit", a DatetimePicker disabled with no default value, and a button named "Submit". At the time of initialization, the value of DatetimePicker is Null. The way these objects interact is as follows:
- Click "Edit" button, DatetimePicker is enabled so user can select a specific time value.
- Click "Submit" button, the selected time value will be pushed to the DB, and the DatetimePicker will be disabled and reset back to Null.
I have tried several ways with no success in clearing the value of the DatetimePicker.
#### Complete, minimal, self-contained example code that reproduces the issue
```
time_widget = pn.widgets.DatetimePicker(disabled=True)
time_widget.value = now()
# how to set value back to None?
time_widget.value = None/pandas.NaT/np.nan => all causes error
```
</issue>
<code>
[start of panel/models/datetime_picker.py]
1 from bokeh.core.enums import CalendarPosition
2 from bokeh.core.properties import (
3 Bool, Date, Datetime, Either, Enum, List, Nullable, String, Tuple,
4 )
5 from bokeh.models.widgets.inputs import InputWidget
6
7
8 class DatetimePicker(InputWidget):
9 ''' Calendar-based date picker widget.
10
11 '''
12
13 value = String(help="""
14 The initial or picked date.
15 """)
16
17 min_date = Nullable(Either(Date, Datetime), help="""
18 Optional earliest allowable date.
19 """)
20
21 max_date = Nullable(Either(Date, Datetime), help="""
22 Optional latest allowable date.
23 """)
24
25 disabled_dates = List(Either(Date, Datetime, Tuple(Date, Date), Tuple(Datetime, Datetime)), default=[], help="""
26 A list of dates of ``(start, end)`` date ranges to make unavailable for
27 selection. All other dates will be avalable.
28
29 .. note::
30 Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.
31 """)
32
33 enabled_dates = List(Either(Date, Datetime, Tuple(Date, Date), Tuple(Datetime, Datetime)), default=[], help="""
34 A list of dates of ``(start, end)`` date ranges to make available for
35 selection. All other dates will be unavailable.
36
37 .. note::
38 Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.
39 """)
40
41 position = Enum(CalendarPosition, default="auto", help="""
42 Where the calendar is rendered relative to the input when ``inline`` is False.
43 """)
44
45 inline = Bool(default=False, help="""
46 Whether the calendar sholud be displayed inline.
47 """)
48
49 enable_time = Bool(default=True)
50
51 enable_seconds = Bool(default=True)
52
53 military_time = Bool(default=True)
54
55 date_format = String("Y-m-d H:i:S")
56
57 mode = String(default="single", help="""
58 Should either be "single" or "range".""")
59
[end of panel/models/datetime_picker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/panel/models/datetime_picker.py b/panel/models/datetime_picker.py
--- a/panel/models/datetime_picker.py
+++ b/panel/models/datetime_picker.py
@@ -10,7 +10,7 @@
'''
- value = String(help="""
+ value = Nullable(String, help="""
The initial or picked date.
""")
| {"golden_diff": "diff --git a/panel/models/datetime_picker.py b/panel/models/datetime_picker.py\n--- a/panel/models/datetime_picker.py\n+++ b/panel/models/datetime_picker.py\n@@ -10,7 +10,7 @@\n \n '''\n \n- value = String(help=\"\"\"\n+ value = Nullable(String, help=\"\"\"\n The initial or picked date.\n \"\"\")\n", "issue": "Clearing value of a DatetimePicker\n#### Description of expected behavior and the observed behavior\r\nNot sure if this is a bug or a new feature to Panel. Let's say I have a layout consisting of a button named \"Edit\", a DatetimePicker disabled with no default value, and a button named \"Submit\". At the time of initialization, the value of DatetimePicker is Null. The way these objects interact is as follows:\r\n- Click \"Edit\" button, DatetimePicker is enabled so user can select a specific time value.\r\n- Click \"Submit\" button, the selected time value will be pushed to the DB, and the DatetimePicker will be disabled and reset back to Null.\r\n\r\nI have tried several ways with no success in clearing the value of the DatetimePicker.\r\n\r\n#### Complete, minimal, self-contained example code that reproduces the issue\r\n\r\n```\r\ntime_widget = pn.widgets.DatetimePicker(disabled=True)\r\ntime_widget.value = now()\r\n\r\n# how to set value back to None?\r\ntime_widget.value = None/pandas.NaT/np.nan => all causes error\r\n```\r\n\n", "before_files": [{"content": "from bokeh.core.enums import CalendarPosition\nfrom bokeh.core.properties import (\n Bool, Date, Datetime, Either, Enum, List, Nullable, String, Tuple,\n)\nfrom bokeh.models.widgets.inputs import InputWidget\n\n\nclass DatetimePicker(InputWidget):\n ''' Calendar-based date picker widget.\n\n '''\n\n value = String(help=\"\"\"\n The initial or picked date.\n \"\"\")\n\n min_date = Nullable(Either(Date, Datetime), help=\"\"\"\n Optional earliest allowable date.\n \"\"\")\n\n max_date = Nullable(Either(Date, Datetime), help=\"\"\"\n Optional latest allowable date.\n \"\"\")\n\n disabled_dates = List(Either(Date, Datetime, Tuple(Date, Date), Tuple(Datetime, Datetime)), default=[], help=\"\"\"\n A list of dates of ``(start, end)`` date ranges to make unavailable for\n selection. All other dates will be avalable.\n\n .. note::\n Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.\n \"\"\")\n\n enabled_dates = List(Either(Date, Datetime, Tuple(Date, Date), Tuple(Datetime, Datetime)), default=[], help=\"\"\"\n A list of dates of ``(start, end)`` date ranges to make available for\n selection. All other dates will be unavailable.\n\n .. note::\n Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.\n \"\"\")\n\n position = Enum(CalendarPosition, default=\"auto\", help=\"\"\"\n Where the calendar is rendered relative to the input when ``inline`` is False.\n \"\"\")\n\n inline = Bool(default=False, help=\"\"\"\n Whether the calendar sholud be displayed inline.\n \"\"\")\n\n enable_time = Bool(default=True)\n\n enable_seconds = Bool(default=True)\n\n military_time = Bool(default=True)\n\n date_format = String(\"Y-m-d H:i:S\")\n\n mode = String(default=\"single\", help=\"\"\"\n Should either be \"single\" or \"range\".\"\"\")\n", "path": "panel/models/datetime_picker.py"}]} | 1,302 | 85 |
gh_patches_debug_8704 | rasdani/github-patches | git_diff | sublimelsp__LSP-1557 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[regression] lsp_execute does nothing due to empty session
Since this [commit](https://github.com/sublimelsp/LSP/commit/7d05794fa3cc4ecd3931d09a90e801addc70d9fa) the `capability` variable got deleted which means that `self.best_session(self.capability)` is unable to find session.
The consequence is that [LSP-metals.sublime-commands](https://github.com/scalameta/metals-sublime/blob/master/LSP-metals.sublime-commands) aren't executed.
</issue>
<code>
[start of plugin/execute_command.py]
1 import sublime
2 from .core.protocol import Error
3 from .core.protocol import ExecuteCommandParams
4 from .core.registry import LspTextCommand
5 from .core.registry import windows
6 from .core.typing import List, Optional, Any
7 from .core.views import uri_from_view, offset_to_point, region_to_range, text_document_identifier
8
9
10 class LspExecuteCommand(LspTextCommand):
11
12 def run(self,
13 edit: sublime.Edit,
14 command_name: Optional[str] = None,
15 command_args: Optional[List[Any]] = None,
16 session_name: Optional[str] = None,
17 event: Optional[dict] = None) -> None:
18 # Handle VSCode-specific command for triggering AC/sighelp
19 if command_name == "editor.action.triggerSuggest":
20 # Triggered from set_timeout as suggestions popup doesn't trigger otherwise.
21 return sublime.set_timeout(lambda: self.view.run_command("auto_complete"))
22 if command_name == "editor.action.triggerParameterHints":
23
24 def run_async() -> None:
25 listener = windows.listener_for_view(self.view)
26 if listener:
27 listener.do_signature_help_async(manual=False)
28
29 return sublime.set_timeout_async(run_async)
30 session = self.session_by_name(session_name) if session_name else self.best_session(self.capability)
31 if session and command_name:
32 if command_args:
33 self._expand_variables(command_args)
34 params = {"command": command_name} # type: ExecuteCommandParams
35 if command_args:
36 params["arguments"] = command_args
37
38 def handle_response(response: Any) -> None:
39 assert command_name
40 if isinstance(response, Error):
41 sublime.message_dialog("command {} failed. Reason: {}".format(command_name, str(response)))
42 return
43 msg = "command {} completed".format(command_name)
44 if response:
45 msg += "with response: {}".format(response)
46 window = self.view.window()
47 if window:
48 window.status_message(msg)
49
50 session.execute_command(params, progress=True).then(handle_response)
51
52 def _expand_variables(self, command_args: List[Any]) -> None:
53 region = self.view.sel()[0]
54 for i, arg in enumerate(command_args):
55 if arg in ["$document_id", "${document_id}"]:
56 command_args[i] = text_document_identifier(self.view)
57 if arg in ["$file_uri", "${file_uri}"]:
58 command_args[i] = uri_from_view(self.view)
59 elif arg in ["$selection", "${selection}"]:
60 command_args[i] = self.view.substr(region)
61 elif arg in ["$offset", "${offset}"]:
62 command_args[i] = region.b
63 elif arg in ["$selection_begin", "${selection_begin}"]:
64 command_args[i] = region.begin()
65 elif arg in ["$selection_end", "${selection_end}"]:
66 command_args[i] = region.end()
67 elif arg in ["$position", "${position}"]:
68 command_args[i] = offset_to_point(self.view, region.b).to_lsp()
69 elif arg in ["$range", "${range}"]:
70 command_args[i] = region_to_range(self.view, region).to_lsp()
71
[end of plugin/execute_command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/execute_command.py b/plugin/execute_command.py
--- a/plugin/execute_command.py
+++ b/plugin/execute_command.py
@@ -27,7 +27,7 @@
listener.do_signature_help_async(manual=False)
return sublime.set_timeout_async(run_async)
- session = self.session_by_name(session_name) if session_name else self.best_session(self.capability)
+ session = self.session_by_name(session_name if session_name else self.session_name)
if session and command_name:
if command_args:
self._expand_variables(command_args)
| {"golden_diff": "diff --git a/plugin/execute_command.py b/plugin/execute_command.py\n--- a/plugin/execute_command.py\n+++ b/plugin/execute_command.py\n@@ -27,7 +27,7 @@\n listener.do_signature_help_async(manual=False)\n \n return sublime.set_timeout_async(run_async)\n- session = self.session_by_name(session_name) if session_name else self.best_session(self.capability)\n+ session = self.session_by_name(session_name if session_name else self.session_name)\n if session and command_name:\n if command_args:\n self._expand_variables(command_args)\n", "issue": "[regression] lsp_execute does nothing due to empty session\nSince this [commit](https://github.com/sublimelsp/LSP/commit/7d05794fa3cc4ecd3931d09a90e801addc70d9fa) the `capability` variable got deleted which means that `self.best_session(self.capability)` is unable to find session.\r\n\r\nThe consequence is that [LSP-metals.sublime-commands](https://github.com/scalameta/metals-sublime/blob/master/LSP-metals.sublime-commands) aren't executed.\r\n\r\n \r\n\n", "before_files": [{"content": "import sublime\nfrom .core.protocol import Error\nfrom .core.protocol import ExecuteCommandParams\nfrom .core.registry import LspTextCommand\nfrom .core.registry import windows\nfrom .core.typing import List, Optional, Any\nfrom .core.views import uri_from_view, offset_to_point, region_to_range, text_document_identifier\n\n\nclass LspExecuteCommand(LspTextCommand):\n\n def run(self,\n edit: sublime.Edit,\n command_name: Optional[str] = None,\n command_args: Optional[List[Any]] = None,\n session_name: Optional[str] = None,\n event: Optional[dict] = None) -> None:\n # Handle VSCode-specific command for triggering AC/sighelp\n if command_name == \"editor.action.triggerSuggest\":\n # Triggered from set_timeout as suggestions popup doesn't trigger otherwise.\n return sublime.set_timeout(lambda: self.view.run_command(\"auto_complete\"))\n if command_name == \"editor.action.triggerParameterHints\":\n\n def run_async() -> None:\n listener = windows.listener_for_view(self.view)\n if listener:\n listener.do_signature_help_async(manual=False)\n\n return sublime.set_timeout_async(run_async)\n session = self.session_by_name(session_name) if session_name else self.best_session(self.capability)\n if session and command_name:\n if command_args:\n self._expand_variables(command_args)\n params = {\"command\": command_name} # type: ExecuteCommandParams\n if command_args:\n params[\"arguments\"] = command_args\n\n def handle_response(response: Any) -> None:\n assert command_name\n if isinstance(response, Error):\n sublime.message_dialog(\"command {} failed. Reason: {}\".format(command_name, str(response)))\n return\n msg = \"command {} completed\".format(command_name)\n if response:\n msg += \"with response: {}\".format(response)\n window = self.view.window()\n if window:\n window.status_message(msg)\n\n session.execute_command(params, progress=True).then(handle_response)\n\n def _expand_variables(self, command_args: List[Any]) -> None:\n region = self.view.sel()[0]\n for i, arg in enumerate(command_args):\n if arg in [\"$document_id\", \"${document_id}\"]:\n command_args[i] = text_document_identifier(self.view)\n if arg in [\"$file_uri\", \"${file_uri}\"]:\n command_args[i] = uri_from_view(self.view)\n elif arg in [\"$selection\", \"${selection}\"]:\n command_args[i] = self.view.substr(region)\n elif arg in [\"$offset\", \"${offset}\"]:\n command_args[i] = region.b\n elif arg in [\"$selection_begin\", \"${selection_begin}\"]:\n command_args[i] = region.begin()\n elif arg in [\"$selection_end\", \"${selection_end}\"]:\n command_args[i] = region.end()\n elif arg in [\"$position\", \"${position}\"]:\n command_args[i] = offset_to_point(self.view, region.b).to_lsp()\n elif arg in [\"$range\", \"${range}\"]:\n command_args[i] = region_to_range(self.view, region).to_lsp()\n", "path": "plugin/execute_command.py"}]} | 1,466 | 125 |
gh_patches_debug_20127 | rasdani/github-patches | git_diff | rotki__rotki-591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sign In Failed - TypeError
Good evening!
I'm here on Linux, I've just tried to log in to my Rotki database created with 1.0.4 (using 1.0.5 now). After I type in my password and log in, I get the message
> **Sign In Failed**
> TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
Now when I attempt to go back to 1.0.4 I get
> **Sign In Failed**
> DBUpgradeError: Your database version is newer than the version expected by the executable. Did you perhaps try to revert to an older rotkehlchen version?Please only use the latest version of the software.
No big worries, I'm still evaluating the software to see if it can do what I need so there's not a ton of data in there. Just thought you should know. I'll be happy to help with debugging if I can. But kids, so... I'll do my best!
</issue>
<code>
[start of rotkehlchen/db/settings.py]
1 from typing import Any, Dict, NamedTuple, Union
2
3 from rotkehlchen.constants.assets import S_USD
4 from rotkehlchen.constants.timing import YEAR_IN_SECONDS
5 from rotkehlchen.db.utils import str_to_bool
6 from rotkehlchen.errors import DeserializationError
7 from rotkehlchen.typing import FiatAsset, Timestamp
8 from rotkehlchen.user_messages import MessagesAggregator
9
10 ROTKEHLCHEN_DB_VERSION = 8
11 DEFAULT_TAXFREE_AFTER_PERIOD = YEAR_IN_SECONDS
12 DEFAULT_INCLUDE_CRYPTO2CRYPTO = True
13 DEFAULT_INCLUDE_GAS_COSTS = True
14 DEFAULT_ANONYMIZED_LOGS = False
15 DEFAULT_PREMIUM_SHOULD_SYNC = False
16 DEFAULT_START_DATE = '01/08/2015'
17 DEFAULT_UI_FLOATING_PRECISION = 2
18 DEFAULT_BALANCE_SAVE_FREQUENCY = 24
19 DEFAULT_MAIN_CURRENCY = S_USD
20 DEFAULT_DATE_DISPLAY_FORMAT = '%d/%m/%Y %H:%M:%S %Z'
21 DEFAULT_SUBMIT_USAGE_ANALYTICS = True
22
23
24 class DBSettings(NamedTuple):
25 version: int = ROTKEHLCHEN_DB_VERSION
26 last_write_ts: Timestamp = Timestamp(0)
27 premium_should_sync: bool = DEFAULT_PREMIUM_SHOULD_SYNC
28 include_crypto2crypto: bool = DEFAULT_INCLUDE_CRYPTO2CRYPTO
29 anonymized_logs: bool = DEFAULT_ANONYMIZED_LOGS
30 last_data_upload_ts: Timestamp = Timestamp(0)
31 ui_floating_precision: int = DEFAULT_UI_FLOATING_PRECISION
32 taxfree_after_period: int = DEFAULT_TAXFREE_AFTER_PERIOD
33 balance_save_frequency: int = DEFAULT_BALANCE_SAVE_FREQUENCY
34 include_gas_costs: bool = DEFAULT_INCLUDE_GAS_COSTS
35 historical_data_start: str = DEFAULT_START_DATE
36 eth_rpc_endpoint: str = 'http://localhost:8545'
37 main_currency: FiatAsset = DEFAULT_MAIN_CURRENCY
38 date_display_format: str = DEFAULT_DATE_DISPLAY_FORMAT
39 last_balance_save: Timestamp = Timestamp(0)
40 submit_usage_analytics: bool = DEFAULT_SUBMIT_USAGE_ANALYTICS
41
42
43 def read_boolean(value: Union[str, bool]) -> bool:
44 if isinstance(value, bool):
45 return value
46 elif isinstance(value, str):
47 return str_to_bool(value)
48
49 raise DeserializationError(
50 f'Failed to read a boolean from {value} which is of type {type(value)}',
51 )
52
53
54 def db_settings_from_dict(
55 settings_dict: Dict[str, Any],
56 msg_aggregator: MessagesAggregator,
57 ) -> DBSettings:
58 specified_args: Dict[str, Any] = {}
59 for key, value in settings_dict.items():
60 if key == 'version':
61 specified_args[key] = int(value)
62 elif key == 'historical_data_start':
63 specified_args[key] = str(value)
64 elif key == 'eth_rpc_endpoint':
65 specified_args[key] = str(value)
66 elif key == 'ui_floating_precision':
67 specified_args[key] = int(value)
68 elif key == 'include_crypto2crypto':
69 specified_args[key] = read_boolean(value)
70 elif key == 'taxfree_after_period':
71 specified_args[key] = int(value)
72 elif key == 'balance_save_frequency':
73 specified_args[key] = int(value)
74 elif key == 'main_currency':
75 specified_args[key] = FiatAsset(str(value))
76 elif key == 'anonymized_logs':
77 specified_args[key] = read_boolean(value)
78 elif key == 'include_gas_costs':
79 specified_args[key] = read_boolean(value)
80 elif key == 'date_display_format':
81 specified_args[key] = str(value)
82 elif key == 'premium_should_sync':
83 specified_args[key] = read_boolean(value)
84 elif key == 'last_write_ts':
85 specified_args[key] = Timestamp(int(value))
86 elif key == 'last_data_upload_ts':
87 specified_args[key] = Timestamp(int(value))
88 elif key == 'last_balance_save':
89 specified_args[key] = Timestamp(int(value))
90 elif key == 'submit_usage_analytics':
91 specified_args[key] = read_boolean(value)
92 else:
93 msg_aggregator.add_warning(
94 f'Unknown DB setting {key} given. Ignoring it. Should not '
95 f'happen so please open an issue in Github.',
96 )
97
98 return DBSettings(**specified_args)
99
[end of rotkehlchen/db/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rotkehlchen/db/settings.py b/rotkehlchen/db/settings.py
--- a/rotkehlchen/db/settings.py
+++ b/rotkehlchen/db/settings.py
@@ -68,7 +68,23 @@
elif key == 'include_crypto2crypto':
specified_args[key] = read_boolean(value)
elif key == 'taxfree_after_period':
- specified_args[key] = int(value)
+ # taxfree_after_period can also be None, to signify disabled setting
+ if value is None:
+ specified_args[key] = value
+ else:
+ int_value = int(value)
+ if int_value <= 0:
+ value = None
+ msg_aggregator.add_warning(
+ f'A negative or zero value ({int_value}) for taxfree_after_period '
+ f'ended up in the DB. Setting it to None. Please open an issue in '
+ f'Github: https://github.com/rotki/rotki/issues/new/choose',
+ )
+
+ else:
+ value = int_value
+
+ specified_args[key] = value
elif key == 'balance_save_frequency':
specified_args[key] = int(value)
elif key == 'main_currency':
| {"golden_diff": "diff --git a/rotkehlchen/db/settings.py b/rotkehlchen/db/settings.py\n--- a/rotkehlchen/db/settings.py\n+++ b/rotkehlchen/db/settings.py\n@@ -68,7 +68,23 @@\n elif key == 'include_crypto2crypto':\n specified_args[key] = read_boolean(value)\n elif key == 'taxfree_after_period':\n- specified_args[key] = int(value)\n+ # taxfree_after_period can also be None, to signify disabled setting\n+ if value is None:\n+ specified_args[key] = value\n+ else:\n+ int_value = int(value)\n+ if int_value <= 0:\n+ value = None\n+ msg_aggregator.add_warning(\n+ f'A negative or zero value ({int_value}) for taxfree_after_period '\n+ f'ended up in the DB. Setting it to None. Please open an issue in '\n+ f'Github: https://github.com/rotki/rotki/issues/new/choose',\n+ )\n+\n+ else:\n+ value = int_value\n+\n+ specified_args[key] = value\n elif key == 'balance_save_frequency':\n specified_args[key] = int(value)\n elif key == 'main_currency':\n", "issue": "Sign In Failed - TypeError\nGood evening!\r\n\r\nI'm here on Linux, I've just tried to log in to my Rotki database created with 1.0.4 (using 1.0.5 now). After I type in my password and log in, I get the message\r\n\r\n> **Sign In Failed**\r\n> TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\r\n\r\nNow when I attempt to go back to 1.0.4 I get\r\n> **Sign In Failed**\r\n> DBUpgradeError: Your database version is newer than the version expected by the executable. Did you perhaps try to revert to an older rotkehlchen version?Please only use the latest version of the software.\r\n\r\nNo big worries, I'm still evaluating the software to see if it can do what I need so there's not a ton of data in there. Just thought you should know. I'll be happy to help with debugging if I can. But kids, so... I'll do my best!\n", "before_files": [{"content": "from typing import Any, Dict, NamedTuple, Union\n\nfrom rotkehlchen.constants.assets import S_USD\nfrom rotkehlchen.constants.timing import YEAR_IN_SECONDS\nfrom rotkehlchen.db.utils import str_to_bool\nfrom rotkehlchen.errors import DeserializationError\nfrom rotkehlchen.typing import FiatAsset, Timestamp\nfrom rotkehlchen.user_messages import MessagesAggregator\n\nROTKEHLCHEN_DB_VERSION = 8\nDEFAULT_TAXFREE_AFTER_PERIOD = YEAR_IN_SECONDS\nDEFAULT_INCLUDE_CRYPTO2CRYPTO = True\nDEFAULT_INCLUDE_GAS_COSTS = True\nDEFAULT_ANONYMIZED_LOGS = False\nDEFAULT_PREMIUM_SHOULD_SYNC = False\nDEFAULT_START_DATE = '01/08/2015'\nDEFAULT_UI_FLOATING_PRECISION = 2\nDEFAULT_BALANCE_SAVE_FREQUENCY = 24\nDEFAULT_MAIN_CURRENCY = S_USD\nDEFAULT_DATE_DISPLAY_FORMAT = '%d/%m/%Y %H:%M:%S %Z'\nDEFAULT_SUBMIT_USAGE_ANALYTICS = True\n\n\nclass DBSettings(NamedTuple):\n version: int = ROTKEHLCHEN_DB_VERSION\n last_write_ts: Timestamp = Timestamp(0)\n premium_should_sync: bool = DEFAULT_PREMIUM_SHOULD_SYNC\n include_crypto2crypto: bool = DEFAULT_INCLUDE_CRYPTO2CRYPTO\n anonymized_logs: bool = DEFAULT_ANONYMIZED_LOGS\n last_data_upload_ts: Timestamp = Timestamp(0)\n ui_floating_precision: int = DEFAULT_UI_FLOATING_PRECISION\n taxfree_after_period: int = DEFAULT_TAXFREE_AFTER_PERIOD\n balance_save_frequency: int = DEFAULT_BALANCE_SAVE_FREQUENCY\n include_gas_costs: bool = DEFAULT_INCLUDE_GAS_COSTS\n historical_data_start: str = DEFAULT_START_DATE\n eth_rpc_endpoint: str = 'http://localhost:8545'\n main_currency: FiatAsset = DEFAULT_MAIN_CURRENCY\n date_display_format: str = DEFAULT_DATE_DISPLAY_FORMAT\n last_balance_save: Timestamp = Timestamp(0)\n submit_usage_analytics: bool = DEFAULT_SUBMIT_USAGE_ANALYTICS\n\n\ndef read_boolean(value: Union[str, bool]) -> bool:\n if isinstance(value, bool):\n return value\n elif isinstance(value, str):\n return str_to_bool(value)\n\n raise DeserializationError(\n f'Failed to read a boolean from {value} which is of type {type(value)}',\n )\n\n\ndef db_settings_from_dict(\n settings_dict: Dict[str, Any],\n msg_aggregator: MessagesAggregator,\n) -> DBSettings:\n specified_args: Dict[str, Any] = {}\n for key, value in settings_dict.items():\n if key == 'version':\n specified_args[key] = int(value)\n elif key == 'historical_data_start':\n specified_args[key] = str(value)\n elif key == 'eth_rpc_endpoint':\n specified_args[key] = str(value)\n elif key == 'ui_floating_precision':\n specified_args[key] = int(value)\n elif key == 'include_crypto2crypto':\n specified_args[key] = read_boolean(value)\n elif key == 'taxfree_after_period':\n specified_args[key] = int(value)\n elif key == 'balance_save_frequency':\n specified_args[key] = int(value)\n elif key == 'main_currency':\n specified_args[key] = FiatAsset(str(value))\n elif key == 'anonymized_logs':\n specified_args[key] = read_boolean(value)\n elif key == 'include_gas_costs':\n specified_args[key] = read_boolean(value)\n elif key == 'date_display_format':\n specified_args[key] = str(value)\n elif key == 'premium_should_sync':\n specified_args[key] = read_boolean(value)\n elif key == 'last_write_ts':\n specified_args[key] = Timestamp(int(value))\n elif key == 'last_data_upload_ts':\n specified_args[key] = Timestamp(int(value))\n elif key == 'last_balance_save':\n specified_args[key] = Timestamp(int(value))\n elif key == 'submit_usage_analytics':\n specified_args[key] = read_boolean(value)\n else:\n msg_aggregator.add_warning(\n f'Unknown DB setting {key} given. Ignoring it. Should not '\n f'happen so please open an issue in Github.',\n )\n\n return DBSettings(**specified_args)\n", "path": "rotkehlchen/db/settings.py"}]} | 1,867 | 277 |
gh_patches_debug_27214 | rasdani/github-patches | git_diff | mdn__kuma-6134 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove contributor notification post account creation
Once a user has successfully signed up, we show a banner similar to the one below either just below the page header, or generally at the top of the page.

Because of the changes to account roles, these no longer makes sense and should be removed.
</issue>
<code>
[start of kuma/users/signal_handlers.py]
1 from allauth.account.signals import email_confirmed, user_signed_up
2 from allauth.socialaccount.signals import social_account_removed
3 from django.contrib import messages
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.db import transaction
6 from django.db.models.signals import post_delete, post_save, pre_delete
7 from django.dispatch import receiver
8 from django.utils.translation import ugettext_lazy as _
9 from waffle import switch_is_active
10
11 from kuma.core.urlresolvers import reverse
12 from kuma.payments.utils import cancel_stripe_customer_subscription
13 from kuma.wiki.jobs import DocumentContributorsJob
14
15 from .models import User, UserBan
16 from .tasks import send_welcome_email
17
18
19 @receiver(user_signed_up, dispatch_uid='users.user_signed_up')
20 def on_user_signed_up(sender, request, user, **kwargs):
21 """
22 Signal handler to be called when a given user has signed up.
23 """
24 url = reverse('wiki.document', args=['MDN/Getting_started'])
25 msg = _('You have completed the first step of '
26 '<a href="%s">getting started with MDN</a>') % url
27 messages.success(request, msg)
28 if switch_is_active('welcome_email'):
29 # only send if the user has already verified
30 # at least one email address
31 if user.emailaddress_set.filter(verified=True).exists():
32 transaction.on_commit(
33 lambda: send_welcome_email.delay(user.pk, request.LANGUAGE_CODE)
34 )
35
36
37 @receiver(email_confirmed, dispatch_uid='users.email_confirmed')
38 def on_email_confirmed(sender, request, email_address, **kwargs):
39 """
40 Signal handler to be called when a given email address was confirmed
41 by a user.
42 """
43 if switch_is_active('welcome_email'):
44 # only send if the user has exactly one verified (the given)
45 # email address, in other words if it was just confirmed
46 user = email_address.user
47 previous_emails = user.emailaddress_set.exclude(pk=email_address.pk)
48 if not previous_emails.exists():
49 transaction.on_commit(
50 lambda: send_welcome_email.delay(user.pk, request.LANGUAGE_CODE)
51 )
52
53
54 @receiver(social_account_removed, dispatch_uid='users.social_account_removed')
55 def on_social_account_removed(sender, request, socialaccount, **kwargs):
56 """
57 Invoked just after a user successfully removed a social account
58
59 We use it to reset the name of the socialaccount provider in
60 the user's session to one that he also has.
61 """
62 user = socialaccount.user
63 try:
64 all_socialaccounts = user.socialaccount_set.all()
65 next_socialaccount = all_socialaccounts[0]
66 request.session['sociallogin_provider'] = next_socialaccount.provider
67 request.session.modified = True
68 except (ObjectDoesNotExist, IndexError):
69 pass
70
71
72 @receiver(post_save, sender=UserBan, dispatch_uid='users.user_ban.save')
73 def on_ban_save(sender, instance, **kwargs):
74 """
75 Signal handler to be called when a given user ban is saved.
76 """
77 user = instance.user
78 user.is_active = not instance.is_active
79 user.save()
80 invalidate_document_contribution(user)
81
82
83 @receiver(post_delete, sender=UserBan, dispatch_uid='users.user_ban.delete')
84 def on_ban_delete(sender, instance, **kwargs):
85 """
86 Signal handler to be called when a user ban is deleted.
87 """
88 user = instance.user
89 user.is_active = True
90 user.save()
91 invalidate_document_contribution(user)
92
93
94 def invalidate_document_contribution(user):
95 """
96 Invalidate the contributor list for Documents the user has edited.
97
98 This will remove them if they have been banned, and add them if they
99 have been unbanned.
100 """
101 revisions = user.created_revisions
102 doc_ids = set(revisions.values_list('document_id', flat=True))
103 job = DocumentContributorsJob()
104 for doc_id in doc_ids:
105 job.invalidate(doc_id)
106
107
108 @receiver(pre_delete, sender=User, dispatch_uid='users.unsubscribe_payments')
109 def unsubscribe_payments_on_user_delete(sender, instance, **kwargs):
110 """Cancel Stripe subscriptions before deleting User."""
111 user = instance
112 if user.stripe_customer_id:
113 # This may raise an exception if the Stripe API call fails.
114 # This will stop User deletion while an admin investigates.
115 cancel_stripe_customer_subscription(user.stripe_customer_id)
116
[end of kuma/users/signal_handlers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kuma/users/signal_handlers.py b/kuma/users/signal_handlers.py
--- a/kuma/users/signal_handlers.py
+++ b/kuma/users/signal_handlers.py
@@ -1,14 +1,11 @@
from allauth.account.signals import email_confirmed, user_signed_up
from allauth.socialaccount.signals import social_account_removed
-from django.contrib import messages
from django.core.exceptions import ObjectDoesNotExist
from django.db import transaction
from django.db.models.signals import post_delete, post_save, pre_delete
from django.dispatch import receiver
-from django.utils.translation import ugettext_lazy as _
from waffle import switch_is_active
-from kuma.core.urlresolvers import reverse
from kuma.payments.utils import cancel_stripe_customer_subscription
from kuma.wiki.jobs import DocumentContributorsJob
@@ -21,10 +18,6 @@
"""
Signal handler to be called when a given user has signed up.
"""
- url = reverse('wiki.document', args=['MDN/Getting_started'])
- msg = _('You have completed the first step of '
- '<a href="%s">getting started with MDN</a>') % url
- messages.success(request, msg)
if switch_is_active('welcome_email'):
# only send if the user has already verified
# at least one email address
| {"golden_diff": "diff --git a/kuma/users/signal_handlers.py b/kuma/users/signal_handlers.py\n--- a/kuma/users/signal_handlers.py\n+++ b/kuma/users/signal_handlers.py\n@@ -1,14 +1,11 @@\n from allauth.account.signals import email_confirmed, user_signed_up\n from allauth.socialaccount.signals import social_account_removed\n-from django.contrib import messages\n from django.core.exceptions import ObjectDoesNotExist\n from django.db import transaction\n from django.db.models.signals import post_delete, post_save, pre_delete\n from django.dispatch import receiver\n-from django.utils.translation import ugettext_lazy as _\n from waffle import switch_is_active\n \n-from kuma.core.urlresolvers import reverse\n from kuma.payments.utils import cancel_stripe_customer_subscription\n from kuma.wiki.jobs import DocumentContributorsJob\n \n@@ -21,10 +18,6 @@\n \"\"\"\n Signal handler to be called when a given user has signed up.\n \"\"\"\n- url = reverse('wiki.document', args=['MDN/Getting_started'])\n- msg = _('You have completed the first step of '\n- '<a href=\"%s\">getting started with MDN</a>') % url\n- messages.success(request, msg)\n if switch_is_active('welcome_email'):\n # only send if the user has already verified\n # at least one email address\n", "issue": "Remove contributor notification post account creation\nOnce a user has successfully signed up, we show a banner similar to the one below either just below the page header, or generally at the top of the page.\r\n\r\n\r\n\r\n\r\nBecause of the changes to account roles, these no longer makes sense and should be removed.\n", "before_files": [{"content": "from allauth.account.signals import email_confirmed, user_signed_up\nfrom allauth.socialaccount.signals import social_account_removed\nfrom django.contrib import messages\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import transaction\nfrom django.db.models.signals import post_delete, post_save, pre_delete\nfrom django.dispatch import receiver\nfrom django.utils.translation import ugettext_lazy as _\nfrom waffle import switch_is_active\n\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.payments.utils import cancel_stripe_customer_subscription\nfrom kuma.wiki.jobs import DocumentContributorsJob\n\nfrom .models import User, UserBan\nfrom .tasks import send_welcome_email\n\n\n@receiver(user_signed_up, dispatch_uid='users.user_signed_up')\ndef on_user_signed_up(sender, request, user, **kwargs):\n \"\"\"\n Signal handler to be called when a given user has signed up.\n \"\"\"\n url = reverse('wiki.document', args=['MDN/Getting_started'])\n msg = _('You have completed the first step of '\n '<a href=\"%s\">getting started with MDN</a>') % url\n messages.success(request, msg)\n if switch_is_active('welcome_email'):\n # only send if the user has already verified\n # at least one email address\n if user.emailaddress_set.filter(verified=True).exists():\n transaction.on_commit(\n lambda: send_welcome_email.delay(user.pk, request.LANGUAGE_CODE)\n )\n\n\n@receiver(email_confirmed, dispatch_uid='users.email_confirmed')\ndef on_email_confirmed(sender, request, email_address, **kwargs):\n \"\"\"\n Signal handler to be called when a given email address was confirmed\n by a user.\n \"\"\"\n if switch_is_active('welcome_email'):\n # only send if the user has exactly one verified (the given)\n # email address, in other words if it was just confirmed\n user = email_address.user\n previous_emails = user.emailaddress_set.exclude(pk=email_address.pk)\n if not previous_emails.exists():\n transaction.on_commit(\n lambda: send_welcome_email.delay(user.pk, request.LANGUAGE_CODE)\n )\n\n\n@receiver(social_account_removed, dispatch_uid='users.social_account_removed')\ndef on_social_account_removed(sender, request, socialaccount, **kwargs):\n \"\"\"\n Invoked just after a user successfully removed a social account\n\n We use it to reset the name of the socialaccount provider in\n the user's session to one that he also has.\n \"\"\"\n user = socialaccount.user\n try:\n all_socialaccounts = user.socialaccount_set.all()\n next_socialaccount = all_socialaccounts[0]\n request.session['sociallogin_provider'] = next_socialaccount.provider\n request.session.modified = True\n except (ObjectDoesNotExist, IndexError):\n pass\n\n\n@receiver(post_save, sender=UserBan, dispatch_uid='users.user_ban.save')\ndef on_ban_save(sender, instance, **kwargs):\n \"\"\"\n Signal handler to be called when a given user ban is saved.\n \"\"\"\n user = instance.user\n user.is_active = not instance.is_active\n user.save()\n invalidate_document_contribution(user)\n\n\n@receiver(post_delete, sender=UserBan, dispatch_uid='users.user_ban.delete')\ndef on_ban_delete(sender, instance, **kwargs):\n \"\"\"\n Signal handler to be called when a user ban is deleted.\n \"\"\"\n user = instance.user\n user.is_active = True\n user.save()\n invalidate_document_contribution(user)\n\n\ndef invalidate_document_contribution(user):\n \"\"\"\n Invalidate the contributor list for Documents the user has edited.\n\n This will remove them if they have been banned, and add them if they\n have been unbanned.\n \"\"\"\n revisions = user.created_revisions\n doc_ids = set(revisions.values_list('document_id', flat=True))\n job = DocumentContributorsJob()\n for doc_id in doc_ids:\n job.invalidate(doc_id)\n\n\n@receiver(pre_delete, sender=User, dispatch_uid='users.unsubscribe_payments')\ndef unsubscribe_payments_on_user_delete(sender, instance, **kwargs):\n \"\"\"Cancel Stripe subscriptions before deleting User.\"\"\"\n user = instance\n if user.stripe_customer_id:\n # This may raise an exception if the Stripe API call fails.\n # This will stop User deletion while an admin investigates.\n cancel_stripe_customer_subscription(user.stripe_customer_id)\n", "path": "kuma/users/signal_handlers.py"}]} | 1,856 | 289 |
gh_patches_debug_11004 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-1841 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Alpha channel and grayscale in image-to-text with -image_channel_size=3
For training image to text, the argument `-image_channel_size=3` imply that the images already have the good number of channel. However, some of my images are black and white and saved with only one channel or saved in RGB but with the alpha channel.
I could fix it with a change in `onmt/inputters/image_dataset.py` [here](https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/inputters/image_dataset.py#L78):
from this:
```
if self.channel_size == 1:
img = transforms.ToTensor()(
Image.fromarray(cv2.imread(img_path, 0)))
else:
img = transforms.ToTensor()(Image.open(img_path))
```
to this:
```
if self.channel_size == 1:
img = transforms.ToTensor()(
Image.fromarray(cv2.imread(img_path, 0)))
else:
img = transforms.ToTensor()(
Image.fromarray(cv2.imread(img_path, 1)))
```
The flag in `cv2.imread` with value of 1 tell cv2 to convert to RGB no matter what the original image is.
Should I do a PR ?
</issue>
<code>
[start of onmt/inputters/image_dataset.py]
1 # -*- coding: utf-8 -*-
2
3 import os
4
5 import torch
6 from torchtext.data import Field
7
8 from onmt.inputters.datareader_base import DataReaderBase
9
10 # domain specific dependencies
11 try:
12 from PIL import Image
13 from torchvision import transforms
14 import cv2
15 except ImportError:
16 Image, transforms, cv2 = None, None, None
17
18
19 class ImageDataReader(DataReaderBase):
20 """Read image data from disk.
21
22 Args:
23 truncate (tuple[int] or NoneType): maximum img size. Use
24 ``(0,0)`` or ``None`` for unlimited.
25 channel_size (int): Number of channels per image.
26
27 Raises:
28 onmt.inputters.datareader_base.MissingDependencyException: If
29 importing any of ``PIL``, ``torchvision``, or ``cv2`` fail.
30 """
31
32 def __init__(self, truncate=None, channel_size=3):
33 self._check_deps()
34 self.truncate = truncate
35 self.channel_size = channel_size
36
37 @classmethod
38 def from_opt(cls, opt):
39 return cls(channel_size=opt.image_channel_size)
40
41 @classmethod
42 def _check_deps(cls):
43 if any([Image is None, transforms is None, cv2 is None]):
44 cls._raise_missing_dep(
45 "PIL", "torchvision", "cv2")
46
47 def read(self, images, side, img_dir=None):
48 """Read data into dicts.
49
50 Args:
51 images (str or Iterable[str]): Sequence of image paths or
52 path to file containing audio paths.
53 In either case, the filenames may be relative to ``src_dir``
54 (default behavior) or absolute.
55 side (str): Prefix used in return dict. Usually
56 ``"src"`` or ``"tgt"``.
57 img_dir (str): Location of source image files. See ``images``.
58
59 Yields:
60 a dictionary containing image data, path and index for each line.
61 """
62 if isinstance(images, str):
63 images = DataReaderBase._read_file(images)
64
65 for i, filename in enumerate(images):
66 filename = filename.decode("utf-8").strip()
67 img_path = os.path.join(img_dir, filename)
68 if not os.path.exists(img_path):
69 img_path = filename
70
71 assert os.path.exists(img_path), \
72 'img path %s not found' % filename
73
74 if self.channel_size == 1:
75 img = transforms.ToTensor()(
76 Image.fromarray(cv2.imread(img_path, 0)))
77 else:
78 img = transforms.ToTensor()(Image.open(img_path))
79 if self.truncate and self.truncate != (0, 0):
80 if not (img.size(1) <= self.truncate[0]
81 and img.size(2) <= self.truncate[1]):
82 continue
83 yield {side: img, side + '_path': filename, 'indices': i}
84
85
86 def img_sort_key(ex):
87 """Sort using the size of the image: (width, height)."""
88 return ex.src.size(2), ex.src.size(1)
89
90
91 def batch_img(data, vocab):
92 """Pad and batch a sequence of images."""
93 c = data[0].size(0)
94 h = max([t.size(1) for t in data])
95 w = max([t.size(2) for t in data])
96 imgs = torch.zeros(len(data), c, h, w).fill_(1)
97 for i, img in enumerate(data):
98 imgs[i, :, 0:img.size(1), 0:img.size(2)] = img
99 return imgs
100
101
102 def image_fields(**kwargs):
103 img = Field(
104 use_vocab=False, dtype=torch.float,
105 postprocessing=batch_img, sequential=False)
106 return img
107
[end of onmt/inputters/image_dataset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/onmt/inputters/image_dataset.py b/onmt/inputters/image_dataset.py
--- a/onmt/inputters/image_dataset.py
+++ b/onmt/inputters/image_dataset.py
@@ -75,7 +75,8 @@
img = transforms.ToTensor()(
Image.fromarray(cv2.imread(img_path, 0)))
else:
- img = transforms.ToTensor()(Image.open(img_path))
+ img = Image.open(img_path).convert('RGB')
+ img = transforms.ToTensor()(img)
if self.truncate and self.truncate != (0, 0):
if not (img.size(1) <= self.truncate[0]
and img.size(2) <= self.truncate[1]):
| {"golden_diff": "diff --git a/onmt/inputters/image_dataset.py b/onmt/inputters/image_dataset.py\n--- a/onmt/inputters/image_dataset.py\n+++ b/onmt/inputters/image_dataset.py\n@@ -75,7 +75,8 @@\n img = transforms.ToTensor()(\n Image.fromarray(cv2.imread(img_path, 0)))\n else:\n- img = transforms.ToTensor()(Image.open(img_path))\n+ img = Image.open(img_path).convert('RGB')\n+ img = transforms.ToTensor()(img)\n if self.truncate and self.truncate != (0, 0):\n if not (img.size(1) <= self.truncate[0]\n and img.size(2) <= self.truncate[1]):\n", "issue": "Alpha channel and grayscale in image-to-text with -image_channel_size=3\nFor training image to text, the argument `-image_channel_size=3` imply that the images already have the good number of channel. However, some of my images are black and white and saved with only one channel or saved in RGB but with the alpha channel.\r\nI could fix it with a change in `onmt/inputters/image_dataset.py` [here](https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/inputters/image_dataset.py#L78):\r\n\r\nfrom this:\r\n```\r\n if self.channel_size == 1:\r\n img = transforms.ToTensor()(\r\n Image.fromarray(cv2.imread(img_path, 0)))\r\n else:\r\n img = transforms.ToTensor()(Image.open(img_path))\r\n```\r\nto this:\r\n```\r\n if self.channel_size == 1:\r\n img = transforms.ToTensor()(\r\n Image.fromarray(cv2.imread(img_path, 0)))\r\n else:\r\n img = transforms.ToTensor()(\r\n Image.fromarray(cv2.imread(img_path, 1)))\r\n```\r\nThe flag in `cv2.imread` with value of 1 tell cv2 to convert to RGB no matter what the original image is.\r\n\r\nShould I do a PR ?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport os\n\nimport torch\nfrom torchtext.data import Field\n\nfrom onmt.inputters.datareader_base import DataReaderBase\n\n# domain specific dependencies\ntry:\n from PIL import Image\n from torchvision import transforms\n import cv2\nexcept ImportError:\n Image, transforms, cv2 = None, None, None\n\n\nclass ImageDataReader(DataReaderBase):\n \"\"\"Read image data from disk.\n\n Args:\n truncate (tuple[int] or NoneType): maximum img size. Use\n ``(0,0)`` or ``None`` for unlimited.\n channel_size (int): Number of channels per image.\n\n Raises:\n onmt.inputters.datareader_base.MissingDependencyException: If\n importing any of ``PIL``, ``torchvision``, or ``cv2`` fail.\n \"\"\"\n\n def __init__(self, truncate=None, channel_size=3):\n self._check_deps()\n self.truncate = truncate\n self.channel_size = channel_size\n\n @classmethod\n def from_opt(cls, opt):\n return cls(channel_size=opt.image_channel_size)\n\n @classmethod\n def _check_deps(cls):\n if any([Image is None, transforms is None, cv2 is None]):\n cls._raise_missing_dep(\n \"PIL\", \"torchvision\", \"cv2\")\n\n def read(self, images, side, img_dir=None):\n \"\"\"Read data into dicts.\n\n Args:\n images (str or Iterable[str]): Sequence of image paths or\n path to file containing audio paths.\n In either case, the filenames may be relative to ``src_dir``\n (default behavior) or absolute.\n side (str): Prefix used in return dict. Usually\n ``\"src\"`` or ``\"tgt\"``.\n img_dir (str): Location of source image files. See ``images``.\n\n Yields:\n a dictionary containing image data, path and index for each line.\n \"\"\"\n if isinstance(images, str):\n images = DataReaderBase._read_file(images)\n\n for i, filename in enumerate(images):\n filename = filename.decode(\"utf-8\").strip()\n img_path = os.path.join(img_dir, filename)\n if not os.path.exists(img_path):\n img_path = filename\n\n assert os.path.exists(img_path), \\\n 'img path %s not found' % filename\n\n if self.channel_size == 1:\n img = transforms.ToTensor()(\n Image.fromarray(cv2.imread(img_path, 0)))\n else:\n img = transforms.ToTensor()(Image.open(img_path))\n if self.truncate and self.truncate != (0, 0):\n if not (img.size(1) <= self.truncate[0]\n and img.size(2) <= self.truncate[1]):\n continue\n yield {side: img, side + '_path': filename, 'indices': i}\n\n\ndef img_sort_key(ex):\n \"\"\"Sort using the size of the image: (width, height).\"\"\"\n return ex.src.size(2), ex.src.size(1)\n\n\ndef batch_img(data, vocab):\n \"\"\"Pad and batch a sequence of images.\"\"\"\n c = data[0].size(0)\n h = max([t.size(1) for t in data])\n w = max([t.size(2) for t in data])\n imgs = torch.zeros(len(data), c, h, w).fill_(1)\n for i, img in enumerate(data):\n imgs[i, :, 0:img.size(1), 0:img.size(2)] = img\n return imgs\n\n\ndef image_fields(**kwargs):\n img = Field(\n use_vocab=False, dtype=torch.float,\n postprocessing=batch_img, sequential=False)\n return img\n", "path": "onmt/inputters/image_dataset.py"}]} | 1,832 | 159 |
gh_patches_debug_39793 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2965 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider tijuanaflats is broken
During the global build at 2021-05-26-14-42-23, spider **tijuanaflats** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tijuanaflats.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tijuanaflats.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tijuanaflats.geojson))
</issue>
<code>
[start of locations/spiders/tijuanaflats.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import re
4
5 from locations.items import GeojsonPointItem
6
7
8 class TijuanaFlatsSpider(scrapy.Spider):
9 name = "tijuanaflats"
10 item_attributes = { 'brand': "Tijuana Flats" }
11 allowed_domains = ['tijuanaflats.com']
12 start_urls = (
13 'https://tijuanaflats.com/wpsl_stores-sitemap.xml',
14 )
15
16 def parse(self, response):
17 response.selector.remove_namespaces()
18 city_urls = response.xpath('//url/loc/text()').extract()
19 for path in city_urls:
20 yield scrapy.Request(
21 path.strip(),
22 callback=self.parse_store,
23 )
24
25 def parse_store(self, response):
26
27 if response.xpath('//table[@class="wpsl-opening-hours"]/tr').extract():
28 storeHours = str(response.xpath('//table[@class="wpsl-opening-hours"]/tr').extract())
29 storeHours = storeHours.replace('[','').replace(']','').replace("'",'').replace(',',' - ')
30 else:
31 storeHours = response.xpath('//table[@class="wpsl-opening-hours"]/tr').extract()
32
33
34 properties = {
35 'name': response.xpath('//h1[@class="entry-title"]/text()').extract_first(),
36 'website': response.request.url,
37 'ref': response.xpath('//h1[@class="entry-title"]/text()').extract_first(),
38 'addr_full': response.xpath('//div[@class="wpsl-location-address"]/span[1]/text()').extract_first() + " " + response.xpath('//div[@class="wpsl-location-address"]/span[2]/text()').extract_first(),
39 'city': response.xpath('//div[@class="wpsl-location-address"]/span[3]/text()').extract_first().rstrip(', '),
40 'state': response.xpath('//div[@class="wpsl-location-address"]/span[4]/text()').extract_first().strip(),
41 'postcode': response.xpath('//div[@class="wpsl-location-address"]/span[5]/text()').extract_first().strip(),
42 'opening_hours': storeHours,
43 'lat': float(response.xpath('//script/text()').extract()[-3].split('"lat":"')[1].split('"')[0]),
44 'lon': float(response.xpath('//script/text()').extract()[-3].split('"lng":"')[1].split('"')[0]),
45 }
46
47 yield GeojsonPointItem(**properties)
[end of locations/spiders/tijuanaflats.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/tijuanaflats.py b/locations/spiders/tijuanaflats.py
--- a/locations/spiders/tijuanaflats.py
+++ b/locations/spiders/tijuanaflats.py
@@ -1,47 +1,45 @@
# -*- coding: utf-8 -*-
+import json
+
import scrapy
-import re
from locations.items import GeojsonPointItem
class TijuanaFlatsSpider(scrapy.Spider):
name = "tijuanaflats"
- item_attributes = { 'brand': "Tijuana Flats" }
- allowed_domains = ['tijuanaflats.com']
- start_urls = (
- 'https://tijuanaflats.com/wpsl_stores-sitemap.xml',
- )
+ item_attributes = {"brand": "Tijuana Flats", "brand_wikidata": "Q7801833"}
+ allowed_domains = ["tijuanaflats.com"]
+ start_urls = ("https://www.tijuanaflats.com/locations",)
def parse(self, response):
- response.selector.remove_namespaces()
- city_urls = response.xpath('//url/loc/text()').extract()
- for path in city_urls:
- yield scrapy.Request(
- path.strip(),
- callback=self.parse_store,
+ data = json.loads(
+ response.xpath(
+ '//tjs-view-locations/attribute::*[name()=":locations"]'
+ ).extract_first()
+ )
+ for row in data:
+ for ent in row["yoast_json_ld"][0]["@graph"]:
+ if ent["@type"] == "WebPage" and row["slug"] in ent["url"]:
+ name = ent["name"]
+
+ # extract text from html snippet
+ hours_of_operation = scrapy.Selector(text=row["acf"]["hours_of_operation"])
+ opening_hours = "; ".join(
+ a.strip() for a in hours_of_operation.xpath("//text()").extract()
)
- def parse_store(self, response):
-
- if response.xpath('//table[@class="wpsl-opening-hours"]/tr').extract():
- storeHours = str(response.xpath('//table[@class="wpsl-opening-hours"]/tr').extract())
- storeHours = storeHours.replace('[','').replace(']','').replace("'",'').replace(',',' - ')
- else:
- storeHours = response.xpath('//table[@class="wpsl-opening-hours"]/tr').extract()
-
-
- properties = {
- 'name': response.xpath('//h1[@class="entry-title"]/text()').extract_first(),
- 'website': response.request.url,
- 'ref': response.xpath('//h1[@class="entry-title"]/text()').extract_first(),
- 'addr_full': response.xpath('//div[@class="wpsl-location-address"]/span[1]/text()').extract_first() + " " + response.xpath('//div[@class="wpsl-location-address"]/span[2]/text()').extract_first(),
- 'city': response.xpath('//div[@class="wpsl-location-address"]/span[3]/text()').extract_first().rstrip(', '),
- 'state': response.xpath('//div[@class="wpsl-location-address"]/span[4]/text()').extract_first().strip(),
- 'postcode': response.xpath('//div[@class="wpsl-location-address"]/span[5]/text()').extract_first().strip(),
- 'opening_hours': storeHours,
- 'lat': float(response.xpath('//script/text()').extract()[-3].split('"lat":"')[1].split('"')[0]),
- 'lon': float(response.xpath('//script/text()').extract()[-3].split('"lng":"')[1].split('"')[0]),
- }
-
- yield GeojsonPointItem(**properties)
\ No newline at end of file
+ properties = {
+ "ref": row["slug"],
+ "name": name,
+ "lat": row["acf"]["physical_location"]["lat"],
+ "lon": row["acf"]["physical_location"]["lng"],
+ "addr_full": row["acf"]["address_1"],
+ "city": row["acf"]["city"],
+ "state": row["acf"]["state"],
+ "postcode": row["acf"]["zip"],
+ "phone": row["acf"]["contact_phone"],
+ "website": f'https://www.tijuanaflats.com/locations/{row["slug"]}',
+ "opening_hours": opening_hours,
+ }
+ yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/tijuanaflats.py b/locations/spiders/tijuanaflats.py\n--- a/locations/spiders/tijuanaflats.py\n+++ b/locations/spiders/tijuanaflats.py\n@@ -1,47 +1,45 @@\n # -*- coding: utf-8 -*-\n+import json\n+\n import scrapy\n-import re\n \n from locations.items import GeojsonPointItem\n \n \n class TijuanaFlatsSpider(scrapy.Spider):\n name = \"tijuanaflats\"\n- item_attributes = { 'brand': \"Tijuana Flats\" }\n- allowed_domains = ['tijuanaflats.com']\n- start_urls = (\n- 'https://tijuanaflats.com/wpsl_stores-sitemap.xml',\n- )\n+ item_attributes = {\"brand\": \"Tijuana Flats\", \"brand_wikidata\": \"Q7801833\"}\n+ allowed_domains = [\"tijuanaflats.com\"]\n+ start_urls = (\"https://www.tijuanaflats.com/locations\",)\n \n def parse(self, response):\n- response.selector.remove_namespaces()\n- city_urls = response.xpath('//url/loc/text()').extract()\n- for path in city_urls:\n- yield scrapy.Request(\n- path.strip(),\n- callback=self.parse_store,\n+ data = json.loads(\n+ response.xpath(\n+ '//tjs-view-locations/attribute::*[name()=\":locations\"]'\n+ ).extract_first()\n+ )\n+ for row in data:\n+ for ent in row[\"yoast_json_ld\"][0][\"@graph\"]:\n+ if ent[\"@type\"] == \"WebPage\" and row[\"slug\"] in ent[\"url\"]:\n+ name = ent[\"name\"]\n+\n+ # extract text from html snippet\n+ hours_of_operation = scrapy.Selector(text=row[\"acf\"][\"hours_of_operation\"])\n+ opening_hours = \"; \".join(\n+ a.strip() for a in hours_of_operation.xpath(\"//text()\").extract()\n )\n \n- def parse_store(self, response):\n-\n- if response.xpath('//table[@class=\"wpsl-opening-hours\"]/tr').extract():\n- storeHours = str(response.xpath('//table[@class=\"wpsl-opening-hours\"]/tr').extract())\n- storeHours = storeHours.replace('[','').replace(']','').replace(\"'\",'').replace(',',' - ')\n- else:\n- storeHours = response.xpath('//table[@class=\"wpsl-opening-hours\"]/tr').extract()\n-\n-\n- properties = {\n- 'name': response.xpath('//h1[@class=\"entry-title\"]/text()').extract_first(),\n- 'website': response.request.url,\n- 'ref': response.xpath('//h1[@class=\"entry-title\"]/text()').extract_first(),\n- 'addr_full': response.xpath('//div[@class=\"wpsl-location-address\"]/span[1]/text()').extract_first() + \" \" + response.xpath('//div[@class=\"wpsl-location-address\"]/span[2]/text()').extract_first(),\n- 'city': response.xpath('//div[@class=\"wpsl-location-address\"]/span[3]/text()').extract_first().rstrip(', '),\n- 'state': response.xpath('//div[@class=\"wpsl-location-address\"]/span[4]/text()').extract_first().strip(),\n- 'postcode': response.xpath('//div[@class=\"wpsl-location-address\"]/span[5]/text()').extract_first().strip(),\n- 'opening_hours': storeHours,\n- 'lat': float(response.xpath('//script/text()').extract()[-3].split('\"lat\":\"')[1].split('\"')[0]),\n- 'lon': float(response.xpath('//script/text()').extract()[-3].split('\"lng\":\"')[1].split('\"')[0]),\n- }\n-\n- yield GeojsonPointItem(**properties)\n\\ No newline at end of file\n+ properties = {\n+ \"ref\": row[\"slug\"],\n+ \"name\": name,\n+ \"lat\": row[\"acf\"][\"physical_location\"][\"lat\"],\n+ \"lon\": row[\"acf\"][\"physical_location\"][\"lng\"],\n+ \"addr_full\": row[\"acf\"][\"address_1\"],\n+ \"city\": row[\"acf\"][\"city\"],\n+ \"state\": row[\"acf\"][\"state\"],\n+ \"postcode\": row[\"acf\"][\"zip\"],\n+ \"phone\": row[\"acf\"][\"contact_phone\"],\n+ \"website\": f'https://www.tijuanaflats.com/locations/{row[\"slug\"]}',\n+ \"opening_hours\": opening_hours,\n+ }\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider tijuanaflats is broken\nDuring the global build at 2021-05-26-14-42-23, spider **tijuanaflats** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tijuanaflats.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tijuanaflats.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tijuanaflats.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass TijuanaFlatsSpider(scrapy.Spider):\n name = \"tijuanaflats\"\n item_attributes = { 'brand': \"Tijuana Flats\" }\n allowed_domains = ['tijuanaflats.com']\n start_urls = (\n 'https://tijuanaflats.com/wpsl_stores-sitemap.xml',\n )\n\n def parse(self, response):\n response.selector.remove_namespaces()\n city_urls = response.xpath('//url/loc/text()').extract()\n for path in city_urls:\n yield scrapy.Request(\n path.strip(),\n callback=self.parse_store,\n )\n\n def parse_store(self, response):\n\n if response.xpath('//table[@class=\"wpsl-opening-hours\"]/tr').extract():\n storeHours = str(response.xpath('//table[@class=\"wpsl-opening-hours\"]/tr').extract())\n storeHours = storeHours.replace('[','').replace(']','').replace(\"'\",'').replace(',',' - ')\n else:\n storeHours = response.xpath('//table[@class=\"wpsl-opening-hours\"]/tr').extract()\n\n\n properties = {\n 'name': response.xpath('//h1[@class=\"entry-title\"]/text()').extract_first(),\n 'website': response.request.url,\n 'ref': response.xpath('//h1[@class=\"entry-title\"]/text()').extract_first(),\n 'addr_full': response.xpath('//div[@class=\"wpsl-location-address\"]/span[1]/text()').extract_first() + \" \" + response.xpath('//div[@class=\"wpsl-location-address\"]/span[2]/text()').extract_first(),\n 'city': response.xpath('//div[@class=\"wpsl-location-address\"]/span[3]/text()').extract_first().rstrip(', '),\n 'state': response.xpath('//div[@class=\"wpsl-location-address\"]/span[4]/text()').extract_first().strip(),\n 'postcode': response.xpath('//div[@class=\"wpsl-location-address\"]/span[5]/text()').extract_first().strip(),\n 'opening_hours': storeHours,\n 'lat': float(response.xpath('//script/text()').extract()[-3].split('\"lat\":\"')[1].split('\"')[0]),\n 'lon': float(response.xpath('//script/text()').extract()[-3].split('\"lng\":\"')[1].split('\"')[0]),\n }\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/tijuanaflats.py"}]} | 1,356 | 1,004 |
gh_patches_debug_7645 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3327 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/context/random/__init__.py]
1 from ._helper import (seed, set_mode, with_seed, add_seed, get_seeds, get_states, get_current_mode, set_seed_states,
2 sync_states, moe_set_seed, reset_seeds)
3
4 __all__ = [
5 'seed', 'set_mode', 'with_seed', 'add_seed', 'get_seeds', 'get_states', 'get_current_mode', 'set_seed_states',
6 'sync_states', 'moe_set_seed', 'reset_seeds'
7 ]
8
[end of colossalai/context/random/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/context/random/__init__.py b/colossalai/context/random/__init__.py
--- a/colossalai/context/random/__init__.py
+++ b/colossalai/context/random/__init__.py
@@ -1,5 +1,16 @@
-from ._helper import (seed, set_mode, with_seed, add_seed, get_seeds, get_states, get_current_mode, set_seed_states,
- sync_states, moe_set_seed, reset_seeds)
+from ._helper import (
+ add_seed,
+ get_current_mode,
+ get_seeds,
+ get_states,
+ moe_set_seed,
+ reset_seeds,
+ seed,
+ set_mode,
+ set_seed_states,
+ sync_states,
+ with_seed,
+)
__all__ = [
'seed', 'set_mode', 'with_seed', 'add_seed', 'get_seeds', 'get_states', 'get_current_mode', 'set_seed_states',
| {"golden_diff": "diff --git a/colossalai/context/random/__init__.py b/colossalai/context/random/__init__.py\n--- a/colossalai/context/random/__init__.py\n+++ b/colossalai/context/random/__init__.py\n@@ -1,5 +1,16 @@\n-from ._helper import (seed, set_mode, with_seed, add_seed, get_seeds, get_states, get_current_mode, set_seed_states,\n- sync_states, moe_set_seed, reset_seeds)\n+from ._helper import (\n+ add_seed,\n+ get_current_mode,\n+ get_seeds,\n+ get_states,\n+ moe_set_seed,\n+ reset_seeds,\n+ seed,\n+ set_mode,\n+ set_seed_states,\n+ sync_states,\n+ with_seed,\n+)\n \n __all__ = [\n 'seed', 'set_mode', 'with_seed', 'add_seed', 'get_seeds', 'get_states', 'get_current_mode', 'set_seed_states',\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from ._helper import (seed, set_mode, with_seed, add_seed, get_seeds, get_states, get_current_mode, set_seed_states,\n sync_states, moe_set_seed, reset_seeds)\n\n__all__ = [\n 'seed', 'set_mode', 'with_seed', 'add_seed', 'get_seeds', 'get_states', 'get_current_mode', 'set_seed_states',\n 'sync_states', 'moe_set_seed', 'reset_seeds'\n]\n", "path": "colossalai/context/random/__init__.py"}]} | 676 | 218 |
gh_patches_debug_1703 | rasdani/github-patches | git_diff | unionai-oss__pandera-1591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error Importing Pandera with Polars extra
**Describe the bug**
I get an error when importing pandera after installing the latest 0.19.0b2 version with the polars extra in a clean environment. I can import it successfully if I install without the polars extra.
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the main branch of pandera.
#### Code Sample, a copy-pastable example
I installed pandera 0.19.0b2 in a clean virtual environment using `pip install pandera[polars]==0.19.0b2` and attempted to import pandera:
```python
import pandera as pa
```
I got the following error message:
```
>>> import pandera as pa
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".venv/lib/python3.11/site-packages/pandera/__init__.py", line 6, in <module>
from pandera import errors, external_config, typing
File ".venv/lib/python3.11/site-packages/pandera/external_config.py", line 23, in <module>
import pyspark.pandas
ModuleNotFoundError: No module named 'pyspark'
```
#### Versions:
- Pandera: 0.19.0b2
- Python: 3.11.7
- Ubuntu: 22.04
</issue>
<code>
[start of pandera/external_config.py]
1 """Configuration for external packages."""
2
3 import os
4
5 is_spark_local_ip_dirty = False
6 is_pyarrow_ignore_timezone_dirty = False
7
8 try:
9 # try importing pyspark to see if it exists. This is important because the
10 # pandera.typing module defines a Series type that inherits from
11 # pandas.Series, and pyspark v1+ injects a __getitem__ method to pandas
12 # Series and DataFrames to support type hinting:
13 # https://spark.apache.org/docs/3.2.0/api/python/user_guide/pandas_on_spark/typehints.html#type-hinting-with-names
14 # pylint: disable=unused-import
15 if os.getenv("SPARK_LOCAL_IP") is None:
16 is_spark_local_ip_dirty = True
17 os.environ["SPARK_LOCAL_IP"] = "127.0.0.1"
18 if os.getenv("PYARROW_IGNORE_TIMEZONE") is None:
19 is_pyarrow_ignore_timezone_dirty = True
20 # This can be overriden by the user
21 os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"
22
23 import pyspark.pandas
24 finally:
25 if is_spark_local_ip_dirty:
26 os.environ.pop("SPARK_LOCAL_IP")
27 if is_pyarrow_ignore_timezone_dirty:
28 os.environ.pop("PYARROW_IGNORE_TIMEZONE")
29
[end of pandera/external_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandera/external_config.py b/pandera/external_config.py
--- a/pandera/external_config.py
+++ b/pandera/external_config.py
@@ -21,6 +21,8 @@
os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"
import pyspark.pandas
+except (ImportError, ModuleNotFoundError):
+ pass
finally:
if is_spark_local_ip_dirty:
os.environ.pop("SPARK_LOCAL_IP")
| {"golden_diff": "diff --git a/pandera/external_config.py b/pandera/external_config.py\n--- a/pandera/external_config.py\n+++ b/pandera/external_config.py\n@@ -21,6 +21,8 @@\n os.environ[\"PYARROW_IGNORE_TIMEZONE\"] = \"1\"\n \n import pyspark.pandas\n+except (ImportError, ModuleNotFoundError):\n+ pass\n finally:\n if is_spark_local_ip_dirty:\n os.environ.pop(\"SPARK_LOCAL_IP\")\n", "issue": "Error Importing Pandera with Polars extra\n**Describe the bug**\r\nI get an error when importing pandera after installing the latest 0.19.0b2 version with the polars extra in a clean environment. I can import it successfully if I install without the polars extra.\r\n\r\n- [x] I have checked that this issue has not already been reported.\r\n- [x] I have confirmed this bug exists on the latest version of pandera.\r\n- [ ] (optional) I have confirmed this bug exists on the main branch of pandera.\r\n\r\n#### Code Sample, a copy-pastable example\r\n\r\nI installed pandera 0.19.0b2 in a clean virtual environment using `pip install pandera[polars]==0.19.0b2` and attempted to import pandera:\r\n\r\n```python\r\nimport pandera as pa\r\n```\r\n\r\nI got the following error message:\r\n```\r\n>>> import pandera as pa\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \".venv/lib/python3.11/site-packages/pandera/__init__.py\", line 6, in <module>\r\n from pandera import errors, external_config, typing\r\n File \".venv/lib/python3.11/site-packages/pandera/external_config.py\", line 23, in <module>\r\n import pyspark.pandas\r\nModuleNotFoundError: No module named 'pyspark'\r\n```\r\n\r\n#### Versions:\r\n\r\n - Pandera: 0.19.0b2\r\n - Python: 3.11.7\r\n - Ubuntu: 22.04\r\n\n", "before_files": [{"content": "\"\"\"Configuration for external packages.\"\"\"\n\nimport os\n\nis_spark_local_ip_dirty = False\nis_pyarrow_ignore_timezone_dirty = False\n\ntry:\n # try importing pyspark to see if it exists. This is important because the\n # pandera.typing module defines a Series type that inherits from\n # pandas.Series, and pyspark v1+ injects a __getitem__ method to pandas\n # Series and DataFrames to support type hinting:\n # https://spark.apache.org/docs/3.2.0/api/python/user_guide/pandas_on_spark/typehints.html#type-hinting-with-names\n # pylint: disable=unused-import\n if os.getenv(\"SPARK_LOCAL_IP\") is None:\n is_spark_local_ip_dirty = True\n os.environ[\"SPARK_LOCAL_IP\"] = \"127.0.0.1\"\n if os.getenv(\"PYARROW_IGNORE_TIMEZONE\") is None:\n is_pyarrow_ignore_timezone_dirty = True\n # This can be overriden by the user\n os.environ[\"PYARROW_IGNORE_TIMEZONE\"] = \"1\"\n\n import pyspark.pandas\nfinally:\n if is_spark_local_ip_dirty:\n os.environ.pop(\"SPARK_LOCAL_IP\")\n if is_pyarrow_ignore_timezone_dirty:\n os.environ.pop(\"PYARROW_IGNORE_TIMEZONE\")\n", "path": "pandera/external_config.py"}]} | 1,222 | 110 |
gh_patches_debug_22260 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1392 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
E0002 bug when using parameters for DynamoDB AttributeDefinitions
*cfn-lint version: 0.28.2*
*Description of issue.*
Rule E3039 (added in 0.28.0) doesn't support Refs and results in a E0002 error for the template.
Repeatable with this template snippet:
```
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
HashKeyName:
Description: Primary Key Name
Type: String
AllowedPattern: '[a-zA-Z0-9]*'
MinLength: '1'
MaxLength: '2048'
ConstraintDescription: must contain only alphanumberic characters
HashKeyType:
Description: Primary Key Type
Type: String
Default: S
AllowedPattern: '[S|N]'
MinLength: '1'
MaxLength: '1'
ConstraintDescription: must be either S or N
RangeKeyName:
Description: Sort Key Name
Type: String
Default: 'NA'
AllowedPattern: '[a-zA-Z0-9]*'
MinLength: '0'
MaxLength: '2048'
ConstraintDescription: must contain only alphanumberic characters
RangeKeyType:
Description: Sort Key Type
Type: String
Default: S
AllowedPattern: '[S|N]'
MinLength: '0'
MaxLength: '1'
ConstraintDescription: must be either S or Ns
Conditions:
isRangeKeyAvailable: !Not [ !Equals [ !Ref RangeKeyName, 'NA' ] ]
Resources:
DynamoDBTable:
DeletionPolicy: Delete
UpdateReplacePolicy: Delete
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions: !If
- isRangeKeyAvailable
- - AttributeName: !Ref HashKeyName
AttributeType: !Ref HashKeyType
- AttributeName: !Ref RangeKeyName
AttributeType: !Ref RangeKeyType
- - AttributeName: !Ref HashKeyName
AttributeType: !Ref HashKeyType
KeySchema: !If
- isRangeKeyAvailable
- - AttributeName: !Ref HashKeyName
KeyType: HASH
- AttributeName: !Ref RangeKeyName
KeyType: RANGE
- - AttributeName: !Ref HashKeyName
KeyType: HASH
```
</issue>
<code>
[start of src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import six
6 from cfnlint.decode.node import list_node
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9
10
11 class AttributeMismatch(CloudFormationLintRule):
12 """Check DynamoDB Attributes"""
13 id = 'E3039'
14 shortdesc = 'AttributeDefinitions / KeySchemas mismatch'
15 description = 'Verify the set of Attributes in AttributeDefinitions and KeySchemas match'
16 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html'
17 tags = ['resources', 'dynamodb']
18
19 def __init__(self):
20 """Init"""
21 super(AttributeMismatch, self).__init__()
22 self.resource_property_types = ['AWS::DynamoDB::Table']
23
24 def _get_key_schema_attributes(self, key_schemas_sets):
25 """ Get Key Schema attributes """
26 keys = set()
27
28 for properties, _ in key_schemas_sets:
29 for key in properties:
30 attribute_name = key.get_safe('AttributeName', type_t=six.string_types)
31 if attribute_name:
32 keys.add(key.get('AttributeName'))
33 return keys
34
35 def _get_attribute_secondary(self, property_sets):
36 """ Get the key schemas from secondary indexes """
37 keys = set()
38
39 for properties, _ in property_sets:
40 for index in properties:
41 keys = keys.union(
42 self._get_key_schema_attributes(
43 index.get_safe('KeySchema', list_node([], None, None), [], list)
44 )
45 )
46
47 return keys
48
49 def check_property_set(self, property_set, path):
50 """ Check a property set """
51 matches = []
52 properties = property_set.get('Object')
53
54 keys = set()
55 attributes = set()
56
57 for attribute in properties.get('AttributeDefinitions', []):
58 attribute_name = attribute.get('AttributeName')
59 if isinstance(attribute_name, six.string_types):
60 attributes.add(attribute.get('AttributeName'))
61 else:
62 self.logger.info('attribute definitions is not using just strings')
63 return matches
64 keys = keys.union(
65 self._get_key_schema_attributes(
66 properties.get_safe('KeySchema', list_node([], None, None), [], list)
67 )
68 )
69 keys = keys.union(self._get_attribute_secondary(
70 properties.get_safe('GlobalSecondaryIndexes', list_node([], None, None), path, list
71 ))) # pylint: disable=bad-continuation
72 keys = keys.union(self._get_attribute_secondary(
73 properties.get_safe('LocalSecondaryIndexes', list_node([], None, None), path, list
74 ))) # pylint: disable=bad-continuation
75
76 if attributes != keys:
77 message = 'The set of Attributes in AttributeDefinitions: {0} and KeySchemas: {1} must match at {2}'
78 matches.append(RuleMatch(
79 path,
80 message.format(sorted(list(attributes)), sorted(list(keys)), '/'.join(map(str, path)))
81 ))
82
83 return matches
84
85 def check(self, properties, path, cfn):
86 """Check itself"""
87 matches = []
88
89 property_sets = cfn.get_object_without_conditions(properties, path)
90 for property_set in property_sets:
91 matches.extend(self.check_property_set(property_set, path))
92 return matches
93
94 def match_resource_properties(self, properties, _, path, cfn):
95 """Match for sub properties"""
96 matches = []
97 matches.extend(self.check(properties, path, cfn))
98 return matches
99
[end of src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py b/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py
--- a/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py
+++ b/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py
@@ -77,7 +77,8 @@
message = 'The set of Attributes in AttributeDefinitions: {0} and KeySchemas: {1} must match at {2}'
matches.append(RuleMatch(
path,
- message.format(sorted(list(attributes)), sorted(list(keys)), '/'.join(map(str, path)))
+ message.format(sorted(list(attributes)), sorted(
+ list(keys)), '/'.join(map(str, path)))
))
return matches
@@ -86,7 +87,8 @@
"""Check itself"""
matches = []
- property_sets = cfn.get_object_without_conditions(properties, path)
+ property_sets = cfn.get_object_without_conditions(
+ properties, ['AttributeDefinitions', 'KeySchema', 'GlobalSecondaryIndexes', 'LocalSecondaryIndexes'])
for property_set in property_sets:
matches.extend(self.check_property_set(property_set, path))
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py b/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py\n--- a/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py\n+++ b/src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py\n@@ -77,7 +77,8 @@\n message = 'The set of Attributes in AttributeDefinitions: {0} and KeySchemas: {1} must match at {2}'\n matches.append(RuleMatch(\n path,\n- message.format(sorted(list(attributes)), sorted(list(keys)), '/'.join(map(str, path)))\n+ message.format(sorted(list(attributes)), sorted(\n+ list(keys)), '/'.join(map(str, path)))\n ))\n \n return matches\n@@ -86,7 +87,8 @@\n \"\"\"Check itself\"\"\"\n matches = []\n \n- property_sets = cfn.get_object_without_conditions(properties, path)\n+ property_sets = cfn.get_object_without_conditions(\n+ properties, ['AttributeDefinitions', 'KeySchema', 'GlobalSecondaryIndexes', 'LocalSecondaryIndexes'])\n for property_set in property_sets:\n matches.extend(self.check_property_set(property_set, path))\n return matches\n", "issue": "E0002 bug when using parameters for DynamoDB AttributeDefinitions\n*cfn-lint version: 0.28.2*\r\n\r\n*Description of issue.*\r\n\r\nRule E3039 (added in 0.28.0) doesn't support Refs and results in a E0002 error for the template. \r\n\r\nRepeatable with this template snippet:\r\n\r\n```\r\nAWSTemplateFormatVersion: '2010-09-09'\r\n\r\nParameters:\r\n HashKeyName:\r\n Description: Primary Key Name\r\n Type: String\r\n AllowedPattern: '[a-zA-Z0-9]*'\r\n MinLength: '1'\r\n MaxLength: '2048'\r\n ConstraintDescription: must contain only alphanumberic characters\r\n\r\n HashKeyType:\r\n Description: Primary Key Type\r\n Type: String\r\n Default: S\r\n AllowedPattern: '[S|N]'\r\n MinLength: '1'\r\n MaxLength: '1'\r\n ConstraintDescription: must be either S or N\r\n\r\n RangeKeyName:\r\n Description: Sort Key Name\r\n Type: String\r\n Default: 'NA'\r\n AllowedPattern: '[a-zA-Z0-9]*'\r\n MinLength: '0'\r\n MaxLength: '2048'\r\n ConstraintDescription: must contain only alphanumberic characters\r\n\r\n RangeKeyType:\r\n Description: Sort Key Type\r\n Type: String\r\n Default: S\r\n AllowedPattern: '[S|N]'\r\n MinLength: '0'\r\n MaxLength: '1'\r\n ConstraintDescription: must be either S or Ns\r\n\r\nConditions:\r\n isRangeKeyAvailable: !Not [ !Equals [ !Ref RangeKeyName, 'NA' ] ]\r\n\r\nResources:\r\n DynamoDBTable:\r\n DeletionPolicy: Delete\r\n UpdateReplacePolicy: Delete\r\n Type: AWS::DynamoDB::Table\r\n Properties:\r\n AttributeDefinitions: !If\r\n - isRangeKeyAvailable\r\n - - AttributeName: !Ref HashKeyName\r\n AttributeType: !Ref HashKeyType\r\n - AttributeName: !Ref RangeKeyName\r\n AttributeType: !Ref RangeKeyType\r\n - - AttributeName: !Ref HashKeyName\r\n AttributeType: !Ref HashKeyType\r\n KeySchema: !If\r\n - isRangeKeyAvailable\r\n - - AttributeName: !Ref HashKeyName\r\n KeyType: HASH\r\n - AttributeName: !Ref RangeKeyName\r\n KeyType: RANGE\r\n - - AttributeName: !Ref HashKeyName\r\n KeyType: HASH\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport six\nfrom cfnlint.decode.node import list_node\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass AttributeMismatch(CloudFormationLintRule):\n \"\"\"Check DynamoDB Attributes\"\"\"\n id = 'E3039'\n shortdesc = 'AttributeDefinitions / KeySchemas mismatch'\n description = 'Verify the set of Attributes in AttributeDefinitions and KeySchemas match'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html'\n tags = ['resources', 'dynamodb']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(AttributeMismatch, self).__init__()\n self.resource_property_types = ['AWS::DynamoDB::Table']\n\n def _get_key_schema_attributes(self, key_schemas_sets):\n \"\"\" Get Key Schema attributes \"\"\"\n keys = set()\n\n for properties, _ in key_schemas_sets:\n for key in properties:\n attribute_name = key.get_safe('AttributeName', type_t=six.string_types)\n if attribute_name:\n keys.add(key.get('AttributeName'))\n return keys\n\n def _get_attribute_secondary(self, property_sets):\n \"\"\" Get the key schemas from secondary indexes \"\"\"\n keys = set()\n\n for properties, _ in property_sets:\n for index in properties:\n keys = keys.union(\n self._get_key_schema_attributes(\n index.get_safe('KeySchema', list_node([], None, None), [], list)\n )\n )\n\n return keys\n\n def check_property_set(self, property_set, path):\n \"\"\" Check a property set \"\"\"\n matches = []\n properties = property_set.get('Object')\n\n keys = set()\n attributes = set()\n\n for attribute in properties.get('AttributeDefinitions', []):\n attribute_name = attribute.get('AttributeName')\n if isinstance(attribute_name, six.string_types):\n attributes.add(attribute.get('AttributeName'))\n else:\n self.logger.info('attribute definitions is not using just strings')\n return matches\n keys = keys.union(\n self._get_key_schema_attributes(\n properties.get_safe('KeySchema', list_node([], None, None), [], list)\n )\n )\n keys = keys.union(self._get_attribute_secondary(\n properties.get_safe('GlobalSecondaryIndexes', list_node([], None, None), path, list\n ))) # pylint: disable=bad-continuation\n keys = keys.union(self._get_attribute_secondary(\n properties.get_safe('LocalSecondaryIndexes', list_node([], None, None), path, list\n ))) # pylint: disable=bad-continuation\n\n if attributes != keys:\n message = 'The set of Attributes in AttributeDefinitions: {0} and KeySchemas: {1} must match at {2}'\n matches.append(RuleMatch(\n path,\n message.format(sorted(list(attributes)), sorted(list(keys)), '/'.join(map(str, path)))\n ))\n\n return matches\n\n def check(self, properties, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n\n property_sets = cfn.get_object_without_conditions(properties, path)\n for property_set in property_sets:\n matches.extend(self.check_property_set(property_set, path))\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n matches.extend(self.check(properties, path, cfn))\n return matches\n", "path": "src/cfnlint/rules/resources/dynamodb/AttributeMismatch.py"}]} | 2,037 | 257 |
gh_patches_debug_5432 | rasdani/github-patches | git_diff | lhotse-speech__lhotse-240 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cut concatenate doesn't consider the first sample in each batch
Found in #234
</issue>
<code>
[start of lhotse/dataset/cut_transforms/concatenate.py]
1 from typing import Optional, Sequence
2
3 from lhotse import CutSet
4 from lhotse.cut import AnyCut
5 from lhotse.utils import Seconds
6
7
8 class CutConcatenate:
9 """
10 A transform on batch of cuts (``CutSet``) that concatenates the cuts to minimize the total amount of padding;
11 e.g. instead of creating a batch with 40 examples, we will merge some of the examples together
12 adding some silence between them to avoid a large number of padding frames that waste the computation.
13 """
14
15 def __init__(
16 self,
17 gap: Seconds = 1.0,
18 duration_factor: float = 1.0
19 ) -> None:
20 """
21 CutConcatenate's constructor.
22
23 :param gap: The duration of silence in seconds that is inserted between the cuts;
24 it's goal is to let the model "know" that there are separate utterances in a single example.
25 :param duration_factor: Determines the maximum duration of the concatenated cuts;
26 by default it's 1, setting the limit at the duration of the longest cut in the batch.
27 """
28 self.gap = gap
29 self.duration_factor = duration_factor
30
31 def __call__(self, cuts: CutSet) -> CutSet:
32 cuts = cuts.sort_by_duration(ascending=False)
33 return concat_cuts(
34 cuts,
35 gap=self.gap,
36 max_duration=cuts[0].duration * self.duration_factor
37 )
38
39
40 def concat_cuts(
41 cuts: Sequence[AnyCut],
42 gap: Seconds = 1.0,
43 max_duration: Optional[Seconds] = None
44 ) -> CutSet:
45 """
46 We're going to concatenate the cuts to minimize the amount of total padding frames used.
47 This means that some samples in the batch will be merged together into one sample,
48 separated by an interval of silence.
49 This is actually solving a knapsack problem.
50 In this initial implementation we're using a greedy approach:
51 going from the back (i.e. the shortest cuts) we'll try to concat them to the longest cut
52 that still has some "space" at the end.
53
54 :param cuts: a list of cuts to pack.
55 :param gap: the duration of silence inserted between concatenated cuts.
56 :param max_duration: the maximum duration for the concatenated cuts
57 (by default set to the duration of the first cut).
58 :return a list of packed cuts.
59 """
60 if len(cuts) <= 1:
61 # Nothing to do.
62 return CutSet.from_cuts(cuts)
63 cuts = sorted(cuts, key=lambda c: c.duration, reverse=True)
64 max_duration = cuts[0].duration if max_duration is None else max_duration
65 current_idx = 1
66 while True:
67 can_fit = False
68 shortest = cuts[-1]
69 for idx in range(current_idx, len(cuts) - 1):
70 cut = cuts[current_idx]
71 can_fit = cut.duration + gap + shortest.duration <= max_duration
72 if can_fit:
73 cuts[current_idx] = cut.pad(cut.duration + gap).append(shortest)
74 cuts = cuts[:-1]
75 break
76 current_idx += 1
77 if not can_fit:
78 break
79 return CutSet.from_cuts(cuts)
80
[end of lhotse/dataset/cut_transforms/concatenate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lhotse/dataset/cut_transforms/concatenate.py b/lhotse/dataset/cut_transforms/concatenate.py
--- a/lhotse/dataset/cut_transforms/concatenate.py
+++ b/lhotse/dataset/cut_transforms/concatenate.py
@@ -62,7 +62,7 @@
return CutSet.from_cuts(cuts)
cuts = sorted(cuts, key=lambda c: c.duration, reverse=True)
max_duration = cuts[0].duration if max_duration is None else max_duration
- current_idx = 1
+ current_idx = 0
while True:
can_fit = False
shortest = cuts[-1]
| {"golden_diff": "diff --git a/lhotse/dataset/cut_transforms/concatenate.py b/lhotse/dataset/cut_transforms/concatenate.py\n--- a/lhotse/dataset/cut_transforms/concatenate.py\n+++ b/lhotse/dataset/cut_transforms/concatenate.py\n@@ -62,7 +62,7 @@\n return CutSet.from_cuts(cuts)\n cuts = sorted(cuts, key=lambda c: c.duration, reverse=True)\n max_duration = cuts[0].duration if max_duration is None else max_duration\n- current_idx = 1\n+ current_idx = 0\n while True:\n can_fit = False\n shortest = cuts[-1]\n", "issue": "Cut concatenate doesn't consider the first sample in each batch\nFound in #234 \n", "before_files": [{"content": "from typing import Optional, Sequence\n\nfrom lhotse import CutSet\nfrom lhotse.cut import AnyCut\nfrom lhotse.utils import Seconds\n\n\nclass CutConcatenate:\n \"\"\"\n A transform on batch of cuts (``CutSet``) that concatenates the cuts to minimize the total amount of padding;\n e.g. instead of creating a batch with 40 examples, we will merge some of the examples together\n adding some silence between them to avoid a large number of padding frames that waste the computation.\n \"\"\"\n\n def __init__(\n self,\n gap: Seconds = 1.0,\n duration_factor: float = 1.0\n ) -> None:\n \"\"\"\n CutConcatenate's constructor.\n\n :param gap: The duration of silence in seconds that is inserted between the cuts;\n it's goal is to let the model \"know\" that there are separate utterances in a single example.\n :param duration_factor: Determines the maximum duration of the concatenated cuts;\n by default it's 1, setting the limit at the duration of the longest cut in the batch.\n \"\"\"\n self.gap = gap\n self.duration_factor = duration_factor\n\n def __call__(self, cuts: CutSet) -> CutSet:\n cuts = cuts.sort_by_duration(ascending=False)\n return concat_cuts(\n cuts,\n gap=self.gap,\n max_duration=cuts[0].duration * self.duration_factor\n )\n\n\ndef concat_cuts(\n cuts: Sequence[AnyCut],\n gap: Seconds = 1.0,\n max_duration: Optional[Seconds] = None\n) -> CutSet:\n \"\"\"\n We're going to concatenate the cuts to minimize the amount of total padding frames used.\n This means that some samples in the batch will be merged together into one sample,\n separated by an interval of silence.\n This is actually solving a knapsack problem.\n In this initial implementation we're using a greedy approach:\n going from the back (i.e. the shortest cuts) we'll try to concat them to the longest cut\n that still has some \"space\" at the end.\n\n :param cuts: a list of cuts to pack.\n :param gap: the duration of silence inserted between concatenated cuts.\n :param max_duration: the maximum duration for the concatenated cuts\n (by default set to the duration of the first cut).\n :return a list of packed cuts.\n \"\"\"\n if len(cuts) <= 1:\n # Nothing to do.\n return CutSet.from_cuts(cuts)\n cuts = sorted(cuts, key=lambda c: c.duration, reverse=True)\n max_duration = cuts[0].duration if max_duration is None else max_duration\n current_idx = 1\n while True:\n can_fit = False\n shortest = cuts[-1]\n for idx in range(current_idx, len(cuts) - 1):\n cut = cuts[current_idx]\n can_fit = cut.duration + gap + shortest.duration <= max_duration\n if can_fit:\n cuts[current_idx] = cut.pad(cut.duration + gap).append(shortest)\n cuts = cuts[:-1]\n break\n current_idx += 1\n if not can_fit:\n break\n return CutSet.from_cuts(cuts)\n", "path": "lhotse/dataset/cut_transforms/concatenate.py"}]} | 1,425 | 155 |
gh_patches_debug_34601 | rasdani/github-patches | git_diff | sunpy__sunpy-7316 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Resampling Maps in the example gallery gives a confusing example for the superpixel method
### Provide a general description of the issue or problem.
That's a minor thing perhaps but checking this [page](https://docs.sunpy.org/en/stable/generated/gallery/map/map_resampling_and_superpixels.html) I got confused by the example for the superpixel method.
It says:
`new_dimensions = u.Quantity(aia_map.dimensions) / 16`
`aia_superpixel_map = aia_map.superpixel([new_dimensions]`
The first line should be instead e.g.:
`new_dimensions=[16,16]*u.pixel `
</issue>
<code>
[start of examples/map/map_resampling_and_superpixels.py]
1 """
2 ===============
3 Resampling Maps
4 ===============
5
6 How to resample a map using the resample method, which implements interpolation, or
7 using superpixels, which combines pixels.
8 """
9 import matplotlib.pyplot as plt
10
11 import astropy.units as u
12
13 import sunpy.data.sample
14 import sunpy.map
15
16 ###############################################################################
17 # We start with the sample data.
18
19 aia_map = sunpy.map.Map(sunpy.data.sample.AIA_171_IMAGE)
20
21 ##############################################################################
22 # To reduce the angular resolution of the map you can use the `~sunpy.map.GenericMap.resample` method,
23 # specifying the new dimensions in pixels. By default, this method uses linear interpolation
24 # but this can be changed with the ``method`` argument ('nearest', 'linear' or 'spline').
25
26 new_dimensions = [40, 40] * u.pixel
27 aia_resampled_map = aia_map.resample(new_dimensions)
28
29 ##############################################################################
30 # Let's plot the result.
31
32 fig = plt.figure()
33 ax = fig.add_subplot(projection=aia_resampled_map)
34 aia_resampled_map.plot(axes=ax)
35 plt.show()
36
37 ##############################################################################
38 # Another way to resample is by using the `~sunpy.map.GenericMap.superpixel` method.
39 # This can be used to increase the signal to noise ratio by reducing the
40 # resolution of the image by combining pixels. This means that the new dimension
41 # must divide the original size exactly.
42 # For example you can reduce the AIA map resolution by a factor of 16.
43
44 new_dimensions = u.Quantity(aia_map.dimensions) / 16
45 aia_superpixel_map = aia_map.superpixel(new_dimensions)
46
47 ##############################################################################
48 # Let's plot the result.
49
50 fig = plt.figure()
51 ax = fig.add_subplot(projection=aia_superpixel_map)
52 aia_superpixel_map.plot(axes=ax)
53 plt.show()
54
[end of examples/map/map_resampling_and_superpixels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/map/map_resampling_and_superpixels.py b/examples/map/map_resampling_and_superpixels.py
--- a/examples/map/map_resampling_and_superpixels.py
+++ b/examples/map/map_resampling_and_superpixels.py
@@ -13,15 +13,16 @@
import sunpy.data.sample
import sunpy.map
-###############################################################################
+##############################################################################
# We start with the sample data.
aia_map = sunpy.map.Map(sunpy.data.sample.AIA_171_IMAGE)
##############################################################################
-# To reduce the angular resolution of the map you can use the `~sunpy.map.GenericMap.resample` method,
-# specifying the new dimensions in pixels. By default, this method uses linear interpolation
-# but this can be changed with the ``method`` argument ('nearest', 'linear' or 'spline').
+# To reduce the angular resolution of the map, you can use the
+# :meth:`~sunpy.map.GenericMap.resample` method, specifying the new dimensions
+# in pixels. By default, this method uses linear interpolation but this can be
+# changed with the ``method`` argument ('nearest', 'linear' or 'spline').
new_dimensions = [40, 40] * u.pixel
aia_resampled_map = aia_map.resample(new_dimensions)
@@ -35,14 +36,15 @@
plt.show()
##############################################################################
-# Another way to resample is by using the `~sunpy.map.GenericMap.superpixel` method.
-# This can be used to increase the signal to noise ratio by reducing the
-# resolution of the image by combining pixels. This means that the new dimension
-# must divide the original size exactly.
-# For example you can reduce the AIA map resolution by a factor of 16.
-
-new_dimensions = u.Quantity(aia_map.dimensions) / 16
-aia_superpixel_map = aia_map.superpixel(new_dimensions)
+# Another way to reduce the angular resolution of the map is by using the
+# :meth:`~sunpy.map.GenericMap.superpixel` method, which combines pixels.
+# The superpixel dimensions do not need to be square, and the intensity of
+# each superpixel defaults to the sum of the constituent pixels. For example,
+# you can reduce the AIA map resolution by a factor of 16 by specifying 16x16
+# superpixels.
+
+superpixel_size = [16, 16] * u.pixel
+aia_superpixel_map = aia_map.superpixel(superpixel_size)
##############################################################################
# Let's plot the result.
| {"golden_diff": "diff --git a/examples/map/map_resampling_and_superpixels.py b/examples/map/map_resampling_and_superpixels.py\n--- a/examples/map/map_resampling_and_superpixels.py\n+++ b/examples/map/map_resampling_and_superpixels.py\n@@ -13,15 +13,16 @@\n import sunpy.data.sample\n import sunpy.map\n \n-###############################################################################\n+##############################################################################\n # We start with the sample data.\n \n aia_map = sunpy.map.Map(sunpy.data.sample.AIA_171_IMAGE)\n \n ##############################################################################\n-# To reduce the angular resolution of the map you can use the `~sunpy.map.GenericMap.resample` method,\n-# specifying the new dimensions in pixels. By default, this method uses linear interpolation\n-# but this can be changed with the ``method`` argument ('nearest', 'linear' or 'spline').\n+# To reduce the angular resolution of the map, you can use the\n+# :meth:`~sunpy.map.GenericMap.resample` method, specifying the new dimensions\n+# in pixels. By default, this method uses linear interpolation but this can be\n+# changed with the ``method`` argument ('nearest', 'linear' or 'spline').\n \n new_dimensions = [40, 40] * u.pixel\n aia_resampled_map = aia_map.resample(new_dimensions)\n@@ -35,14 +36,15 @@\n plt.show()\n \n ##############################################################################\n-# Another way to resample is by using the `~sunpy.map.GenericMap.superpixel` method.\n-# This can be used to increase the signal to noise ratio by reducing the\n-# resolution of the image by combining pixels. This means that the new dimension\n-# must divide the original size exactly.\n-# For example you can reduce the AIA map resolution by a factor of 16.\n-\n-new_dimensions = u.Quantity(aia_map.dimensions) / 16\n-aia_superpixel_map = aia_map.superpixel(new_dimensions)\n+# Another way to reduce the angular resolution of the map is by using the\n+# :meth:`~sunpy.map.GenericMap.superpixel` method, which combines pixels.\n+# The superpixel dimensions do not need to be square, and the intensity of\n+# each superpixel defaults to the sum of the constituent pixels. For example,\n+# you can reduce the AIA map resolution by a factor of 16 by specifying 16x16\n+# superpixels.\n+\n+superpixel_size = [16, 16] * u.pixel\n+aia_superpixel_map = aia_map.superpixel(superpixel_size)\n \n ##############################################################################\n # Let's plot the result.\n", "issue": "Resampling Maps in the example gallery gives a confusing example for the superpixel method\n### Provide a general description of the issue or problem.\n\nThat's a minor thing perhaps but checking this [page](https://docs.sunpy.org/en/stable/generated/gallery/map/map_resampling_and_superpixels.html) I got confused by the example for the superpixel method. \r\nIt says:\r\n`new_dimensions = u.Quantity(aia_map.dimensions) / 16`\r\n`aia_superpixel_map = aia_map.superpixel([new_dimensions]`\r\n\r\nThe first line should be instead e.g.:\r\n`new_dimensions=[16,16]*u.pixel `\n", "before_files": [{"content": "\"\"\"\n===============\nResampling Maps\n===============\n\nHow to resample a map using the resample method, which implements interpolation, or\nusing superpixels, which combines pixels.\n\"\"\"\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\n\nimport sunpy.data.sample\nimport sunpy.map\n\n###############################################################################\n# We start with the sample data.\n\naia_map = sunpy.map.Map(sunpy.data.sample.AIA_171_IMAGE)\n\n##############################################################################\n# To reduce the angular resolution of the map you can use the `~sunpy.map.GenericMap.resample` method,\n# specifying the new dimensions in pixels. By default, this method uses linear interpolation\n# but this can be changed with the ``method`` argument ('nearest', 'linear' or 'spline').\n\nnew_dimensions = [40, 40] * u.pixel\naia_resampled_map = aia_map.resample(new_dimensions)\n\n##############################################################################\n# Let's plot the result.\n\nfig = plt.figure()\nax = fig.add_subplot(projection=aia_resampled_map)\naia_resampled_map.plot(axes=ax)\nplt.show()\n\n##############################################################################\n# Another way to resample is by using the `~sunpy.map.GenericMap.superpixel` method.\n# This can be used to increase the signal to noise ratio by reducing the\n# resolution of the image by combining pixels. This means that the new dimension\n# must divide the original size exactly.\n# For example you can reduce the AIA map resolution by a factor of 16.\n\nnew_dimensions = u.Quantity(aia_map.dimensions) / 16\naia_superpixel_map = aia_map.superpixel(new_dimensions)\n\n##############################################################################\n# Let's plot the result.\n\nfig = plt.figure()\nax = fig.add_subplot(projection=aia_superpixel_map)\naia_superpixel_map.plot(axes=ax)\nplt.show()\n", "path": "examples/map/map_resampling_and_superpixels.py"}]} | 1,169 | 555 |
gh_patches_debug_12235 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2303 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_OTEL_METRICS_EXPORTER env var should be OTEL_METRICS_EXPORTER
The environment variable `_OTEL_METRICS_EXPORTER` is prefixed with an underscore, but there's no need for it as that environment variable is marked as stable in the specification https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#exporter-selection
</issue>
<code>
[start of opentelemetry-api/src/opentelemetry/environment_variables.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 OTEL_PROPAGATORS = "OTEL_PROPAGATORS"
16 """
17 .. envvar:: OTEL_PROPAGATORS
18 """
19
20 OTEL_PYTHON_CONTEXT = "OTEL_PYTHON_CONTEXT"
21 """
22 .. envvar:: OTEL_PYTHON_CONTEXT
23 """
24
25 OTEL_PYTHON_ID_GENERATOR = "OTEL_PYTHON_ID_GENERATOR"
26 """
27 .. envvar:: OTEL_PYTHON_ID_GENERATOR
28 """
29
30 OTEL_TRACES_EXPORTER = "OTEL_TRACES_EXPORTER"
31 """
32 .. envvar:: OTEL_TRACES_EXPORTER
33 """
34
35 OTEL_PYTHON_TRACER_PROVIDER = "OTEL_PYTHON_TRACER_PROVIDER"
36 """
37 .. envvar:: OTEL_PYTHON_TRACER_PROVIDER
38 """
39
40 _OTEL_PYTHON_METER_PROVIDER = "OTEL_PYTHON_METER_PROVIDER"
41 """
42 .. envvar:: OTEL_PYTHON_METER_PROVIDER
43 """
44
45 _OTEL_METRICS_EXPORTER = "OTEL_METRICS_EXPORTER"
46 """
47 .. envvar:: OTEL_METRICS_EXPORTER
48
49 """
50
[end of opentelemetry-api/src/opentelemetry/environment_variables.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-api/src/opentelemetry/environment_variables.py b/opentelemetry-api/src/opentelemetry/environment_variables.py
--- a/opentelemetry-api/src/opentelemetry/environment_variables.py
+++ b/opentelemetry-api/src/opentelemetry/environment_variables.py
@@ -12,6 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+OTEL_METRICS_EXPORTER = "OTEL_METRICS_EXPORTER"
+"""
+.. envvar:: OTEL_METRICS_EXPORTER
+
+"""
+
OTEL_PROPAGATORS = "OTEL_PROPAGATORS"
"""
.. envvar:: OTEL_PROPAGATORS
@@ -41,9 +47,3 @@
"""
.. envvar:: OTEL_PYTHON_METER_PROVIDER
"""
-
-_OTEL_METRICS_EXPORTER = "OTEL_METRICS_EXPORTER"
-"""
-.. envvar:: OTEL_METRICS_EXPORTER
-
-"""
| {"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/environment_variables.py b/opentelemetry-api/src/opentelemetry/environment_variables.py\n--- a/opentelemetry-api/src/opentelemetry/environment_variables.py\n+++ b/opentelemetry-api/src/opentelemetry/environment_variables.py\n@@ -12,6 +12,12 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+OTEL_METRICS_EXPORTER = \"OTEL_METRICS_EXPORTER\"\n+\"\"\"\n+.. envvar:: OTEL_METRICS_EXPORTER\n+\n+\"\"\"\n+\n OTEL_PROPAGATORS = \"OTEL_PROPAGATORS\"\n \"\"\"\n .. envvar:: OTEL_PROPAGATORS\n@@ -41,9 +47,3 @@\n \"\"\"\n .. envvar:: OTEL_PYTHON_METER_PROVIDER\n \"\"\"\n-\n-_OTEL_METRICS_EXPORTER = \"OTEL_METRICS_EXPORTER\"\n-\"\"\"\n-.. envvar:: OTEL_METRICS_EXPORTER\n-\n-\"\"\"\n", "issue": "_OTEL_METRICS_EXPORTER env var should be OTEL_METRICS_EXPORTER\nThe environment variable `_OTEL_METRICS_EXPORTER` is prefixed with an underscore, but there's no need for it as that environment variable is marked as stable in the specification https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#exporter-selection\r\n\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nOTEL_PROPAGATORS = \"OTEL_PROPAGATORS\"\n\"\"\"\n.. envvar:: OTEL_PROPAGATORS\n\"\"\"\n\nOTEL_PYTHON_CONTEXT = \"OTEL_PYTHON_CONTEXT\"\n\"\"\"\n.. envvar:: OTEL_PYTHON_CONTEXT\n\"\"\"\n\nOTEL_PYTHON_ID_GENERATOR = \"OTEL_PYTHON_ID_GENERATOR\"\n\"\"\"\n.. envvar:: OTEL_PYTHON_ID_GENERATOR\n\"\"\"\n\nOTEL_TRACES_EXPORTER = \"OTEL_TRACES_EXPORTER\"\n\"\"\"\n.. envvar:: OTEL_TRACES_EXPORTER\n\"\"\"\n\nOTEL_PYTHON_TRACER_PROVIDER = \"OTEL_PYTHON_TRACER_PROVIDER\"\n\"\"\"\n.. envvar:: OTEL_PYTHON_TRACER_PROVIDER\n\"\"\"\n\n_OTEL_PYTHON_METER_PROVIDER = \"OTEL_PYTHON_METER_PROVIDER\"\n\"\"\"\n.. envvar:: OTEL_PYTHON_METER_PROVIDER\n\"\"\"\n\n_OTEL_METRICS_EXPORTER = \"OTEL_METRICS_EXPORTER\"\n\"\"\"\n.. envvar:: OTEL_METRICS_EXPORTER\n\n\"\"\"\n", "path": "opentelemetry-api/src/opentelemetry/environment_variables.py"}]} | 1,060 | 207 |
gh_patches_debug_42145 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-1140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Walmart Spider Error
Something with the Walmart spider appears to be failing. When importing the geojson file from alltheplaces.xyz to qgis or geojson.io, there are a large number of locations missing in the western US.

</issue>
<code>
[start of locations/spiders/walmart.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4
5 from locations.items import GeojsonPointItem
6
7
8 class WalmartSpider(scrapy.Spider):
9 name = "walmart"
10 allowed_domains = ["walmart.com"]
11 start_urls = (
12 'https://www.walmart.com/sitemap_store_main.xml',
13 )
14
15 def store_hours(self, store_hours):
16 if store_hours == 'Mo-Su':
17 return u'24/7'
18 elif store_hours is None:
19 return None
20 else:
21 return store_hours
22
23 def parse(self, response):
24 response.selector.remove_namespaces()
25 for u in response.xpath('//loc/text()').extract():
26 if u.endswith('/details'):
27 yield scrapy.Request(u.strip(), callback=self.parse_store)
28
29 def parse_store(self, response):
30 addr = response.xpath('//div[@itemprop="address"]')[0]
31 yield GeojsonPointItem(
32 lat=response.xpath('//meta[@itemprop="latitude"]/@content').extract_first(),
33 lon=response.xpath('//meta[@itemprop="longitude"]/@content').extract_first(),
34 ref=response.url.split('/')[4],
35 phone=response.xpath('//meta[@itemprop="telephone"]/@content').extract_first(),
36 name=response.xpath('//meta[@itemprop="name"]/@content').extract_first(),
37 opening_hours=self.store_hours(response.xpath('//meta[@itemprop="openingHours"]/@content').extract_first()),
38 addr_full=addr.xpath('//span[@itemprop="streetAddress"]/text()').extract_first(),
39 city=addr.xpath('//span[@itemprop="locality"]/text()').extract_first(),
40 state=addr.xpath('//span[@itemprop="addressRegion"]/text()').extract_first(),
41 postcode=addr.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
42 )
43
[end of locations/spiders/walmart.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/walmart.py b/locations/spiders/walmart.py
--- a/locations/spiders/walmart.py
+++ b/locations/spiders/walmart.py
@@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-
import scrapy
import json
+import re
+from collections import defaultdict
from locations.items import GeojsonPointItem
@@ -11,14 +13,39 @@
start_urls = (
'https://www.walmart.com/sitemap_store_main.xml',
)
+ retries = defaultdict(int)
def store_hours(self, store_hours):
- if store_hours == 'Mo-Su':
+ if store_hours.get('operationalHours').get('open24Hours') is True:
return u'24/7'
- elif store_hours is None:
+ elif not store_hours.get('operationalHoursCombined'):
return None
else:
- return store_hours
+ op_hours = store_hours.get('operationalHoursCombined')
+ open_hours = []
+ for op_hour in op_hours:
+ if op_hour.get('dailyHours').get('closed') is True:
+ continue
+
+ if op_hour.get('dailyHours').get('openFullDay') is True:
+ start_hr = '00:00'
+ end_hr = '24:00'
+ else:
+ start_hr = op_hour.get('dailyHours').get('startHr')
+ end_hr = op_hour.get('dailyHours').get('endHr')
+
+ start_day = op_hour.get('startDayName')
+ end_day = op_hour.get('endDayName')
+
+ if end_day is None:
+ end_day = ''
+
+ hours = start_day+'-'+end_day+' '+start_hr+'-'+end_hr
+ open_hours.append(hours)
+
+ hours_combined = '; '.join(open_hours)
+
+ return hours_combined
def parse(self, response):
response.selector.remove_namespaces()
@@ -27,16 +54,30 @@
yield scrapy.Request(u.strip(), callback=self.parse_store)
def parse_store(self, response):
- addr = response.xpath('//div[@itemprop="address"]')[0]
+ script = response.xpath("//script[contains(.,'WML_REDUX_INITIAL_STATE')]").extract_first()
+ # In rare cases will hit page before script tag loads with content
+ if script is None:
+ if self.retries.get(response.url, 0) <= 2:
+ self.retries[response.url] += 1
+ yield scrapy.Request(response.url, callback=self.parse_store) # Try again
+ else:
+ raise Exception('Retried too many times')
+
+ script_content = re.search(r'window.__WML_REDUX_INITIAL_STATE__ = (.*);</script>', script,
+ flags=re.IGNORECASE | re.DOTALL).group(1)
+
+ store_data = json.loads(script_content).get('store')
+
yield GeojsonPointItem(
- lat=response.xpath('//meta[@itemprop="latitude"]/@content').extract_first(),
- lon=response.xpath('//meta[@itemprop="longitude"]/@content').extract_first(),
- ref=response.url.split('/')[4],
- phone=response.xpath('//meta[@itemprop="telephone"]/@content').extract_first(),
- name=response.xpath('//meta[@itemprop="name"]/@content').extract_first(),
- opening_hours=self.store_hours(response.xpath('//meta[@itemprop="openingHours"]/@content').extract_first()),
- addr_full=addr.xpath('//span[@itemprop="streetAddress"]/text()').extract_first(),
- city=addr.xpath('//span[@itemprop="locality"]/text()').extract_first(),
- state=addr.xpath('//span[@itemprop="addressRegion"]/text()').extract_first(),
- postcode=addr.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
+ lat=store_data.get('geoPoint').get('latitude'),
+ lon=store_data.get('geoPoint').get('longitude'),
+ ref=store_data.get('id'),
+ phone=store_data.get('phone'),
+ name=store_data.get('displayName'),
+ opening_hours=self.store_hours(store_data),
+ addr_full=store_data.get('address').get('streetAddress'),
+ city=store_data.get('address').get('city'),
+ state=store_data.get('address').get('state'),
+ postcode=store_data.get('address').get('postalCode'),
+ website=store_data.get('detailsPageURL'),
)
| {"golden_diff": "diff --git a/locations/spiders/walmart.py b/locations/spiders/walmart.py\n--- a/locations/spiders/walmart.py\n+++ b/locations/spiders/walmart.py\n@@ -1,7 +1,9 @@\n # -*- coding: utf-8 -*-\n import scrapy\n import json\n+import re\n \n+from collections import defaultdict\n from locations.items import GeojsonPointItem\n \n \n@@ -11,14 +13,39 @@\n start_urls = (\n 'https://www.walmart.com/sitemap_store_main.xml',\n )\n+ retries = defaultdict(int)\n \n def store_hours(self, store_hours):\n- if store_hours == 'Mo-Su':\n+ if store_hours.get('operationalHours').get('open24Hours') is True:\n return u'24/7'\n- elif store_hours is None:\n+ elif not store_hours.get('operationalHoursCombined'):\n return None\n else:\n- return store_hours\n+ op_hours = store_hours.get('operationalHoursCombined')\n+ open_hours = []\n+ for op_hour in op_hours:\n+ if op_hour.get('dailyHours').get('closed') is True:\n+ continue\n+\n+ if op_hour.get('dailyHours').get('openFullDay') is True:\n+ start_hr = '00:00'\n+ end_hr = '24:00'\n+ else:\n+ start_hr = op_hour.get('dailyHours').get('startHr')\n+ end_hr = op_hour.get('dailyHours').get('endHr')\n+\n+ start_day = op_hour.get('startDayName')\n+ end_day = op_hour.get('endDayName')\n+\n+ if end_day is None:\n+ end_day = ''\n+\n+ hours = start_day+'-'+end_day+' '+start_hr+'-'+end_hr\n+ open_hours.append(hours)\n+\n+ hours_combined = '; '.join(open_hours)\n+\n+ return hours_combined\n \n def parse(self, response):\n response.selector.remove_namespaces()\n@@ -27,16 +54,30 @@\n yield scrapy.Request(u.strip(), callback=self.parse_store)\n \n def parse_store(self, response):\n- addr = response.xpath('//div[@itemprop=\"address\"]')[0]\n+ script = response.xpath(\"//script[contains(.,'WML_REDUX_INITIAL_STATE')]\").extract_first()\n+ # In rare cases will hit page before script tag loads with content\n+ if script is None:\n+ if self.retries.get(response.url, 0) <= 2:\n+ self.retries[response.url] += 1\n+ yield scrapy.Request(response.url, callback=self.parse_store) # Try again\n+ else:\n+ raise Exception('Retried too many times')\n+\n+ script_content = re.search(r'window.__WML_REDUX_INITIAL_STATE__ = (.*);</script>', script,\n+ flags=re.IGNORECASE | re.DOTALL).group(1)\n+\n+ store_data = json.loads(script_content).get('store')\n+\n yield GeojsonPointItem(\n- lat=response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first(),\n- lon=response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first(),\n- ref=response.url.split('/')[4],\n- phone=response.xpath('//meta[@itemprop=\"telephone\"]/@content').extract_first(),\n- name=response.xpath('//meta[@itemprop=\"name\"]/@content').extract_first(),\n- opening_hours=self.store_hours(response.xpath('//meta[@itemprop=\"openingHours\"]/@content').extract_first()),\n- addr_full=addr.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first(),\n- city=addr.xpath('//span[@itemprop=\"locality\"]/text()').extract_first(),\n- state=addr.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first(),\n- postcode=addr.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n+ lat=store_data.get('geoPoint').get('latitude'),\n+ lon=store_data.get('geoPoint').get('longitude'),\n+ ref=store_data.get('id'),\n+ phone=store_data.get('phone'),\n+ name=store_data.get('displayName'),\n+ opening_hours=self.store_hours(store_data),\n+ addr_full=store_data.get('address').get('streetAddress'),\n+ city=store_data.get('address').get('city'),\n+ state=store_data.get('address').get('state'),\n+ postcode=store_data.get('address').get('postalCode'),\n+ website=store_data.get('detailsPageURL'),\n )\n", "issue": "Walmart Spider Error\nSomething with the Walmart spider appears to be failing. When importing the geojson file from alltheplaces.xyz to qgis or geojson.io, there are a large number of locations missing in the western US.\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\n\n\nclass WalmartSpider(scrapy.Spider):\n name = \"walmart\"\n allowed_domains = [\"walmart.com\"]\n start_urls = (\n 'https://www.walmart.com/sitemap_store_main.xml',\n )\n\n def store_hours(self, store_hours):\n if store_hours == 'Mo-Su':\n return u'24/7'\n elif store_hours is None:\n return None\n else:\n return store_hours\n\n def parse(self, response):\n response.selector.remove_namespaces()\n for u in response.xpath('//loc/text()').extract():\n if u.endswith('/details'):\n yield scrapy.Request(u.strip(), callback=self.parse_store)\n\n def parse_store(self, response):\n addr = response.xpath('//div[@itemprop=\"address\"]')[0]\n yield GeojsonPointItem(\n lat=response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first(),\n lon=response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first(),\n ref=response.url.split('/')[4],\n phone=response.xpath('//meta[@itemprop=\"telephone\"]/@content').extract_first(),\n name=response.xpath('//meta[@itemprop=\"name\"]/@content').extract_first(),\n opening_hours=self.store_hours(response.xpath('//meta[@itemprop=\"openingHours\"]/@content').extract_first()),\n addr_full=addr.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first(),\n city=addr.xpath('//span[@itemprop=\"locality\"]/text()').extract_first(),\n state=addr.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first(),\n postcode=addr.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n )\n", "path": "locations/spiders/walmart.py"}]} | 1,112 | 1,013 |
gh_patches_debug_18543 | rasdani/github-patches | git_diff | mne-tools__mne-python-9055 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
use bibtex in multi_comp.py
convert references in `mne/stats/multi_comp.py` to use footcite / footbibliography
</issue>
<code>
[start of mne/stats/multi_comp.py]
1 # Authors: Josef Pktd and example from H Raja and rewrite from Vincent Davis
2 # Alexandre Gramfort <[email protected]>
3 #
4 # Code borrowed from statsmodels
5 #
6 # License: BSD (3-clause)
7
8 import numpy as np
9
10
11 def _ecdf(x):
12 """No frills empirical cdf used in fdrcorrection."""
13 nobs = len(x)
14 return np.arange(1, nobs + 1) / float(nobs)
15
16
17 def fdr_correction(pvals, alpha=0.05, method='indep'):
18 """P-value correction with False Discovery Rate (FDR).
19
20 Correction for multiple comparison using FDR [1]_.
21
22 This covers Benjamini/Hochberg for independent or positively correlated and
23 Benjamini/Yekutieli for general or negatively correlated tests.
24
25 Parameters
26 ----------
27 pvals : array_like
28 Set of p-values of the individual tests.
29 alpha : float
30 Error rate.
31 method : 'indep' | 'negcorr'
32 If 'indep' it implements Benjamini/Hochberg for independent or if
33 'negcorr' it corresponds to Benjamini/Yekutieli.
34
35 Returns
36 -------
37 reject : array, bool
38 True if a hypothesis is rejected, False if not.
39 pval_corrected : array
40 P-values adjusted for multiple hypothesis testing to limit FDR.
41
42 References
43 ----------
44 .. [1] Genovese CR, Lazar NA, Nichols T. Thresholding of statistical maps
45 in functional neuroimaging using the false discovery rate.
46 Neuroimage. 2002 Apr;15(4):870-8.
47 """
48 pvals = np.asarray(pvals)
49 shape_init = pvals.shape
50 pvals = pvals.ravel()
51
52 pvals_sortind = np.argsort(pvals)
53 pvals_sorted = pvals[pvals_sortind]
54 sortrevind = pvals_sortind.argsort()
55
56 if method in ['i', 'indep', 'p', 'poscorr']:
57 ecdffactor = _ecdf(pvals_sorted)
58 elif method in ['n', 'negcorr']:
59 cm = np.sum(1. / np.arange(1, len(pvals_sorted) + 1))
60 ecdffactor = _ecdf(pvals_sorted) / cm
61 else:
62 raise ValueError("Method should be 'indep' and 'negcorr'")
63
64 reject = pvals_sorted < (ecdffactor * alpha)
65 if reject.any():
66 rejectmax = max(np.nonzero(reject)[0])
67 else:
68 rejectmax = 0
69 reject[:rejectmax] = True
70
71 pvals_corrected_raw = pvals_sorted / ecdffactor
72 pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]
73 pvals_corrected[pvals_corrected > 1.0] = 1.0
74 pvals_corrected = pvals_corrected[sortrevind].reshape(shape_init)
75 reject = reject[sortrevind].reshape(shape_init)
76 return reject, pvals_corrected
77
78
79 def bonferroni_correction(pval, alpha=0.05):
80 """P-value correction with Bonferroni method.
81
82 Parameters
83 ----------
84 pval : array_like
85 Set of p-values of the individual tests.
86 alpha : float
87 Error rate.
88
89 Returns
90 -------
91 reject : array, bool
92 True if a hypothesis is rejected, False if not.
93 pval_corrected : array
94 P-values adjusted for multiple hypothesis testing to limit FDR.
95 """
96 pval = np.asarray(pval)
97 pval_corrected = pval * float(pval.size)
98 # p-values must not be larger than 1.
99 pval_corrected = pval_corrected.clip(max=1.)
100 reject = pval_corrected < alpha
101 return reject, pval_corrected
102
[end of mne/stats/multi_comp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mne/stats/multi_comp.py b/mne/stats/multi_comp.py
--- a/mne/stats/multi_comp.py
+++ b/mne/stats/multi_comp.py
@@ -17,7 +17,7 @@
def fdr_correction(pvals, alpha=0.05, method='indep'):
"""P-value correction with False Discovery Rate (FDR).
- Correction for multiple comparison using FDR [1]_.
+ Correction for multiple comparison using FDR :footcite:`GenoveseEtAl2002`.
This covers Benjamini/Hochberg for independent or positively correlated and
Benjamini/Yekutieli for general or negatively correlated tests.
@@ -41,9 +41,7 @@
References
----------
- .. [1] Genovese CR, Lazar NA, Nichols T. Thresholding of statistical maps
- in functional neuroimaging using the false discovery rate.
- Neuroimage. 2002 Apr;15(4):870-8.
+ .. footbibliography::
"""
pvals = np.asarray(pvals)
shape_init = pvals.shape
| {"golden_diff": "diff --git a/mne/stats/multi_comp.py b/mne/stats/multi_comp.py\n--- a/mne/stats/multi_comp.py\n+++ b/mne/stats/multi_comp.py\n@@ -17,7 +17,7 @@\n def fdr_correction(pvals, alpha=0.05, method='indep'):\n \"\"\"P-value correction with False Discovery Rate (FDR).\n \n- Correction for multiple comparison using FDR [1]_.\n+ Correction for multiple comparison using FDR :footcite:`GenoveseEtAl2002`.\n \n This covers Benjamini/Hochberg for independent or positively correlated and\n Benjamini/Yekutieli for general or negatively correlated tests.\n@@ -41,9 +41,7 @@\n \n References\n ----------\n- .. [1] Genovese CR, Lazar NA, Nichols T. Thresholding of statistical maps\n- in functional neuroimaging using the false discovery rate.\n- Neuroimage. 2002 Apr;15(4):870-8.\n+ .. footbibliography::\n \"\"\"\n pvals = np.asarray(pvals)\n shape_init = pvals.shape\n", "issue": "use bibtex in multi_comp.py\nconvert references in `mne/stats/multi_comp.py` to use footcite / footbibliography\r\n\n", "before_files": [{"content": "# Authors: Josef Pktd and example from H Raja and rewrite from Vincent Davis\n# Alexandre Gramfort <[email protected]>\n#\n# Code borrowed from statsmodels\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\n\ndef _ecdf(x):\n \"\"\"No frills empirical cdf used in fdrcorrection.\"\"\"\n nobs = len(x)\n return np.arange(1, nobs + 1) / float(nobs)\n\n\ndef fdr_correction(pvals, alpha=0.05, method='indep'):\n \"\"\"P-value correction with False Discovery Rate (FDR).\n\n Correction for multiple comparison using FDR [1]_.\n\n This covers Benjamini/Hochberg for independent or positively correlated and\n Benjamini/Yekutieli for general or negatively correlated tests.\n\n Parameters\n ----------\n pvals : array_like\n Set of p-values of the individual tests.\n alpha : float\n Error rate.\n method : 'indep' | 'negcorr'\n If 'indep' it implements Benjamini/Hochberg for independent or if\n 'negcorr' it corresponds to Benjamini/Yekutieli.\n\n Returns\n -------\n reject : array, bool\n True if a hypothesis is rejected, False if not.\n pval_corrected : array\n P-values adjusted for multiple hypothesis testing to limit FDR.\n\n References\n ----------\n .. [1] Genovese CR, Lazar NA, Nichols T. Thresholding of statistical maps\n in functional neuroimaging using the false discovery rate.\n Neuroimage. 2002 Apr;15(4):870-8.\n \"\"\"\n pvals = np.asarray(pvals)\n shape_init = pvals.shape\n pvals = pvals.ravel()\n\n pvals_sortind = np.argsort(pvals)\n pvals_sorted = pvals[pvals_sortind]\n sortrevind = pvals_sortind.argsort()\n\n if method in ['i', 'indep', 'p', 'poscorr']:\n ecdffactor = _ecdf(pvals_sorted)\n elif method in ['n', 'negcorr']:\n cm = np.sum(1. / np.arange(1, len(pvals_sorted) + 1))\n ecdffactor = _ecdf(pvals_sorted) / cm\n else:\n raise ValueError(\"Method should be 'indep' and 'negcorr'\")\n\n reject = pvals_sorted < (ecdffactor * alpha)\n if reject.any():\n rejectmax = max(np.nonzero(reject)[0])\n else:\n rejectmax = 0\n reject[:rejectmax] = True\n\n pvals_corrected_raw = pvals_sorted / ecdffactor\n pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]\n pvals_corrected[pvals_corrected > 1.0] = 1.0\n pvals_corrected = pvals_corrected[sortrevind].reshape(shape_init)\n reject = reject[sortrevind].reshape(shape_init)\n return reject, pvals_corrected\n\n\ndef bonferroni_correction(pval, alpha=0.05):\n \"\"\"P-value correction with Bonferroni method.\n\n Parameters\n ----------\n pval : array_like\n Set of p-values of the individual tests.\n alpha : float\n Error rate.\n\n Returns\n -------\n reject : array, bool\n True if a hypothesis is rejected, False if not.\n pval_corrected : array\n P-values adjusted for multiple hypothesis testing to limit FDR.\n \"\"\"\n pval = np.asarray(pval)\n pval_corrected = pval * float(pval.size)\n # p-values must not be larger than 1.\n pval_corrected = pval_corrected.clip(max=1.)\n reject = pval_corrected < alpha\n return reject, pval_corrected\n", "path": "mne/stats/multi_comp.py"}]} | 1,638 | 259 |
gh_patches_debug_30754 | rasdani/github-patches | git_diff | cal-itp__benefits-441 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add logging for OAuth flows
We merged the basic implementation in #414, but neglected to include any additional logging around the new flows/logic.
Some ideas of what we should log:
- [x] The `OAUTH_CLIENT_NAME` used
- [x] The `redirect_uri` sent to the authorization server with the `authorize_redirect` request
- [x] If an access token fails to be authorized
</issue>
<code>
[start of benefits/oauth/views.py]
1 from django.shortcuts import redirect
2 from django.urls import reverse
3
4 from authlib.integrations.django_client import OAuth
5
6 from benefits.core import session
7 from benefits.settings import OAUTH_CLIENT_NAME
8
9
10 if OAUTH_CLIENT_NAME:
11 _oauth = OAuth()
12 _oauth.register(OAUTH_CLIENT_NAME)
13 oauth_client = _oauth.create_client(OAUTH_CLIENT_NAME)
14
15
16 ROUTE_AUTH = "oauth:authorize"
17 ROUTE_START = "eligibility:start"
18 ROUTE_CONFIRM = "eligibility:confirm"
19
20
21 def login(request):
22 if not oauth_client:
23 raise Exception("No OAuth client")
24
25 route = reverse(ROUTE_AUTH)
26 redirect_uri = request.build_absolute_uri(route)
27
28 return oauth_client.authorize_redirect(request, redirect_uri)
29
30
31 def authorize(request):
32 if not oauth_client:
33 raise Exception("No OAuth client")
34
35 token = oauth_client.authorize_access_token(request)
36
37 if token is None:
38 return redirect(ROUTE_START)
39 else:
40 # we are intentionally not storing anything about the user, including their token
41 session.update(request, auth=True)
42 return redirect(ROUTE_CONFIRM)
43
[end of benefits/oauth/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/benefits/oauth/views.py b/benefits/oauth/views.py
--- a/benefits/oauth/views.py
+++ b/benefits/oauth/views.py
@@ -1,3 +1,5 @@
+import logging
+
from django.shortcuts import redirect
from django.urls import reverse
@@ -7,7 +9,12 @@
from benefits.settings import OAUTH_CLIENT_NAME
+logger = logging.getLogger(__name__)
+
+
if OAUTH_CLIENT_NAME:
+ logger.debug(f"Using OAuth client configuration: {OAUTH_CLIENT_NAME}")
+
_oauth = OAuth()
_oauth.register(OAUTH_CLIENT_NAME)
oauth_client = _oauth.create_client(OAUTH_CLIENT_NAME)
@@ -25,6 +32,8 @@
route = reverse(ROUTE_AUTH)
redirect_uri = request.build_absolute_uri(route)
+ logger.debug(f"OAuth authorize_redirect with redirect_uri: {redirect_uri}")
+
return oauth_client.authorize_redirect(request, redirect_uri)
@@ -32,11 +41,14 @@
if not oauth_client:
raise Exception("No OAuth client")
+ logger.debug("Attempting to authorize OAuth access token")
token = oauth_client.authorize_access_token(request)
if token is None:
+ logger.warning("Could not authorize OAuth access token")
return redirect(ROUTE_START)
else:
# we are intentionally not storing anything about the user, including their token
+ logger.debug("OAuth access token authorized")
session.update(request, auth=True)
return redirect(ROUTE_CONFIRM)
| {"golden_diff": "diff --git a/benefits/oauth/views.py b/benefits/oauth/views.py\n--- a/benefits/oauth/views.py\n+++ b/benefits/oauth/views.py\n@@ -1,3 +1,5 @@\n+import logging\n+\n from django.shortcuts import redirect\n from django.urls import reverse\n \n@@ -7,7 +9,12 @@\n from benefits.settings import OAUTH_CLIENT_NAME\n \n \n+logger = logging.getLogger(__name__)\n+\n+\n if OAUTH_CLIENT_NAME:\n+ logger.debug(f\"Using OAuth client configuration: {OAUTH_CLIENT_NAME}\")\n+\n _oauth = OAuth()\n _oauth.register(OAUTH_CLIENT_NAME)\n oauth_client = _oauth.create_client(OAUTH_CLIENT_NAME)\n@@ -25,6 +32,8 @@\n route = reverse(ROUTE_AUTH)\n redirect_uri = request.build_absolute_uri(route)\n \n+ logger.debug(f\"OAuth authorize_redirect with redirect_uri: {redirect_uri}\")\n+\n return oauth_client.authorize_redirect(request, redirect_uri)\n \n \n@@ -32,11 +41,14 @@\n if not oauth_client:\n raise Exception(\"No OAuth client\")\n \n+ logger.debug(\"Attempting to authorize OAuth access token\")\n token = oauth_client.authorize_access_token(request)\n \n if token is None:\n+ logger.warning(\"Could not authorize OAuth access token\")\n return redirect(ROUTE_START)\n else:\n # we are intentionally not storing anything about the user, including their token\n+ logger.debug(\"OAuth access token authorized\")\n session.update(request, auth=True)\n return redirect(ROUTE_CONFIRM)\n", "issue": "Add logging for OAuth flows\nWe merged the basic implementation in #414, but neglected to include any additional logging around the new flows/logic.\r\n\r\nSome ideas of what we should log:\r\n\r\n- [x] The `OAUTH_CLIENT_NAME` used\r\n- [x] The `redirect_uri` sent to the authorization server with the `authorize_redirect` request\r\n- [x] If an access token fails to be authorized\n", "before_files": [{"content": "from django.shortcuts import redirect\nfrom django.urls import reverse\n\nfrom authlib.integrations.django_client import OAuth\n\nfrom benefits.core import session\nfrom benefits.settings import OAUTH_CLIENT_NAME\n\n\nif OAUTH_CLIENT_NAME:\n _oauth = OAuth()\n _oauth.register(OAUTH_CLIENT_NAME)\n oauth_client = _oauth.create_client(OAUTH_CLIENT_NAME)\n\n\nROUTE_AUTH = \"oauth:authorize\"\nROUTE_START = \"eligibility:start\"\nROUTE_CONFIRM = \"eligibility:confirm\"\n\n\ndef login(request):\n if not oauth_client:\n raise Exception(\"No OAuth client\")\n\n route = reverse(ROUTE_AUTH)\n redirect_uri = request.build_absolute_uri(route)\n\n return oauth_client.authorize_redirect(request, redirect_uri)\n\n\ndef authorize(request):\n if not oauth_client:\n raise Exception(\"No OAuth client\")\n\n token = oauth_client.authorize_access_token(request)\n\n if token is None:\n return redirect(ROUTE_START)\n else:\n # we are intentionally not storing anything about the user, including their token\n session.update(request, auth=True)\n return redirect(ROUTE_CONFIRM)\n", "path": "benefits/oauth/views.py"}]} | 937 | 331 |
gh_patches_debug_1787 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-9068 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-3377] [Regression] `dbt deps` fails on tarball dependencies
### Is this a regression in a recent version of dbt-core?
- [X] I believe this is a regression in dbt-core functionality
- [X] I have searched the existing issues, and I could not find an existing issue for this regression
### Current Behavior
When `dependencies.yml` includes a tarball dependency, I get an error message from `dbt deps`:
```
11:18:06 Running with dbt=1.7.1
11:18:06 Updating lock file in file path: /workspace/dbt-deps-tarball-failure/asdf/package-lock.yml
11:18:06 Encountered an error:
Runtime Error
The packages.yml file in this project is malformed. Please double check
the contents of this file and fix any errors before retrying.
You can find more information on the syntax for this file here:
https://docs.getdbt.com/docs/package-management
Validator Error:
dbt_utils was not found in the package index. Packages on the index require a namespace, e.g dbt-labs/dbt_utils
```
### Expected/Previous Behavior
Expected output:
```
11:27:03 Running with dbt=1.6.8
11:27:03 Installing dbt_utils
11:27:03 Installed from tarball (url: https://codeload.github.com/dbt-labs/dbt-utils/tar.gz/0.9.6)
```
The validator should
- not check the index for tarball dependencies
- not validate the `namespace/package-name` for tarball dependencies
- mention the correct filename (this is a minor thing)
### Steps To Reproduce
1. In a new dbt project
2. With the following `dependencies.yml`:
```yaml
packages:
- tarball: https://codeload.github.com/dbt-labs/dbt-utils/tar.gz/0.9.6
name: 'dbt_utils'
```
3. Run `dbt deps`
4. See error message above
### Relevant log output
_No response_
### Environment
```markdown
- OS: Ubuntu 22.04.3
- Python: 3.11.1
- dbt-core (latest working version): 1.6.8
- dbt-core (earliest regression version): 1.7.0
- dbt-core (latest version): 1.7.1
```
### Which database adapter are you using with dbt?
_No response_
### Additional Context
_No response_
</issue>
<code>
[start of core/dbt/deps/tarball.py]
1 from typing import Dict
2
3 from dbt.contracts.project import RegistryPackageMetadata, TarballPackage
4 from dbt.deps.base import PinnedPackage, UnpinnedPackage
5
6
7 class TarballPackageMixin:
8 def __init__(self, tarball: str) -> None:
9 super().__init__()
10 self.tarball = tarball
11
12 @property
13 def name(self):
14 return self.tarball
15
16 def source_type(self) -> str:
17 return "tarball"
18
19
20 class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):
21 def __init__(self, tarball: str, package: str) -> None:
22 super().__init__(tarball)
23 # setup to recycle RegistryPinnedPackage fns
24 self.package = package
25 self.version = "tarball"
26
27 @property
28 def name(self):
29 return self.package
30
31 def to_dict(self) -> Dict[str, str]:
32 return {
33 "tarball": self.tarball,
34 "version": self.version,
35 "package": self.package,
36 }
37
38 def get_version(self):
39 return self.version
40
41 def nice_version_name(self):
42 return f"tarball (url: {self.tarball})"
43
44 def _fetch_metadata(self, project, renderer):
45 """
46 recycle RegistryPackageMetadata so that we can use the install and
47 download_and_untar from RegistryPinnedPackage next.
48 build RegistryPackageMetadata from info passed via packages.yml since no
49 'metadata' service exists in this case.
50 """
51
52 dct = {
53 "name": self.package,
54 "packages": [], # note: required by RegistryPackageMetadata
55 "downloads": {"tarball": self.tarball},
56 }
57
58 return RegistryPackageMetadata.from_dict(dct)
59
60 def install(self, project, renderer):
61 self._install(project, renderer)
62
63
64 class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):
65 def __init__(
66 self,
67 tarball: str,
68 package: str,
69 ) -> None:
70 super().__init__(tarball)
71 # setup to recycle RegistryPinnedPackage fns
72 self.package = package
73 self.version = "tarball"
74
75 @classmethod
76 def from_contract(cls, contract: TarballPackage) -> "TarballUnpinnedPackage":
77 return cls(tarball=contract.tarball, package=contract.name)
78
79 def incorporate(self, other: "TarballUnpinnedPackage") -> "TarballUnpinnedPackage":
80 return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)
81
82 def resolved(self) -> TarballPinnedPackage:
83 return TarballPinnedPackage(tarball=self.tarball, package=self.package)
84
[end of core/dbt/deps/tarball.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py
--- a/core/dbt/deps/tarball.py
+++ b/core/dbt/deps/tarball.py
@@ -31,8 +31,7 @@
def to_dict(self) -> Dict[str, str]:
return {
"tarball": self.tarball,
- "version": self.version,
- "package": self.package,
+ "name": self.package,
}
def get_version(self):
| {"golden_diff": "diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py\n--- a/core/dbt/deps/tarball.py\n+++ b/core/dbt/deps/tarball.py\n@@ -31,8 +31,7 @@\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n- \"version\": self.version,\n- \"package\": self.package,\n+ \"name\": self.package,\n }\n \n def get_version(self):\n", "issue": "[CT-3377] [Regression] `dbt deps` fails on tarball dependencies\n### Is this a regression in a recent version of dbt-core?\n\n- [X] I believe this is a regression in dbt-core functionality\n- [X] I have searched the existing issues, and I could not find an existing issue for this regression\n\n### Current Behavior\n\nWhen `dependencies.yml` includes a tarball dependency, I get an error message from `dbt deps`:\r\n\r\n```\r\n11:18:06 Running with dbt=1.7.1\r\n11:18:06 Updating lock file in file path: /workspace/dbt-deps-tarball-failure/asdf/package-lock.yml\r\n11:18:06 Encountered an error:\r\nRuntime Error\r\n The packages.yml file in this project is malformed. Please double check\r\n the contents of this file and fix any errors before retrying.\r\n \r\n You can find more information on the syntax for this file here:\r\n https://docs.getdbt.com/docs/package-management\r\n \r\n Validator Error:\r\n dbt_utils was not found in the package index. Packages on the index require a namespace, e.g dbt-labs/dbt_utils\r\n```\n\n### Expected/Previous Behavior\n\nExpected output:\r\n```\r\n11:27:03 Running with dbt=1.6.8\r\n11:27:03 Installing dbt_utils\r\n11:27:03 Installed from tarball (url: https://codeload.github.com/dbt-labs/dbt-utils/tar.gz/0.9.6)\r\n```\r\n\r\nThe validator should \r\n- not check the index for tarball dependencies\r\n- not validate the `namespace/package-name` for tarball dependencies\r\n- mention the correct filename (this is a minor thing)\n\n### Steps To Reproduce\n\n1. In a new dbt project\r\n2. With the following `dependencies.yml`:\r\n```yaml\r\npackages:\r\n - tarball: https://codeload.github.com/dbt-labs/dbt-utils/tar.gz/0.9.6\r\n name: 'dbt_utils'\r\n```\r\n3. Run `dbt deps`\r\n4. See error message above\n\n### Relevant log output\n\n_No response_\n\n### Environment\n\n```markdown\n- OS: Ubuntu 22.04.3\r\n- Python: 3.11.1\r\n- dbt-core (latest working version): 1.6.8\r\n- dbt-core (earliest regression version): 1.7.0\r\n- dbt-core (latest version): 1.7.1\n```\n\n\n### Which database adapter are you using with dbt?\n\n_No response_\n\n### Additional Context\n\n_No response_\n", "before_files": [{"content": "from typing import Dict\n\nfrom dbt.contracts.project import RegistryPackageMetadata, TarballPackage\nfrom dbt.deps.base import PinnedPackage, UnpinnedPackage\n\n\nclass TarballPackageMixin:\n def __init__(self, tarball: str) -> None:\n super().__init__()\n self.tarball = tarball\n\n @property\n def name(self):\n return self.tarball\n\n def source_type(self) -> str:\n return \"tarball\"\n\n\nclass TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @property\n def name(self):\n return self.package\n\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n \"version\": self.version,\n \"package\": self.package,\n }\n\n def get_version(self):\n return self.version\n\n def nice_version_name(self):\n return f\"tarball (url: {self.tarball})\"\n\n def _fetch_metadata(self, project, renderer):\n \"\"\"\n recycle RegistryPackageMetadata so that we can use the install and\n download_and_untar from RegistryPinnedPackage next.\n build RegistryPackageMetadata from info passed via packages.yml since no\n 'metadata' service exists in this case.\n \"\"\"\n\n dct = {\n \"name\": self.package,\n \"packages\": [], # note: required by RegistryPackageMetadata\n \"downloads\": {\"tarball\": self.tarball},\n }\n\n return RegistryPackageMetadata.from_dict(dct)\n\n def install(self, project, renderer):\n self._install(project, renderer)\n\n\nclass TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n def __init__(\n self,\n tarball: str,\n package: str,\n ) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @classmethod\n def from_contract(cls, contract: TarballPackage) -> \"TarballUnpinnedPackage\":\n return cls(tarball=contract.tarball, package=contract.name)\n\n def incorporate(self, other: \"TarballUnpinnedPackage\") -> \"TarballUnpinnedPackage\":\n return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)\n\n def resolved(self) -> TarballPinnedPackage:\n return TarballPinnedPackage(tarball=self.tarball, package=self.package)\n", "path": "core/dbt/deps/tarball.py"}]} | 1,895 | 118 |
gh_patches_debug_38546 | rasdani/github-patches | git_diff | beetbox__beets-1129 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
zero: Permit stripping album art
It would be nice to have the option of automatically clearing embedded art when an item is imported. Whether or not a media item actually contains embedded art, beets should ensure the resulting media item has no embedded art after being import. There are two plugins which would offer a good place of implementation for this feature: the EmbedArt and the Zero plugins.
The EmbedArt plugin already supports a command called `clearart` which allows for the manual stripping of embedded art from items which match a query. Since the the `clearart` operation is not automatic and there is no option for automation, an extra step is required on the importation of media.
What probably makes more sense is implementing support for the art field in the Zero plugin. It can only be assumed that people who would use such a feature already have the Zero plugin deployed for clearing other fields. That said, it would require less configuration as all a user would need to do is drop the art field in their configuration for the Zero plugin. Moreover, with the EmbedArt plugin, it embeds art into media items by default. This feature would need to be disabled in the configuration as well.
</issue>
<code>
[start of beetsplug/zero.py]
1 # This file is part of beets.
2 # Copyright 2013, Blemjhoo Tezoulbr <[email protected]>.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """ Clears tag fields in media files."""
16
17 import re
18 import logging
19 from beets.plugins import BeetsPlugin
20 from beets.library import Item
21 from beets.importer import action
22 from beets.util import confit
23
24 __author__ = '[email protected]'
25 __version__ = '0.10'
26
27 log = logging.getLogger('beets')
28
29
30 class ZeroPlugin(BeetsPlugin):
31
32 _instance = None
33
34 def __init__(self):
35 super(ZeroPlugin, self).__init__()
36
37 # Listeners.
38 self.register_listener('write', self.write_event)
39 self.register_listener('import_task_choice',
40 self.import_task_choice_event)
41
42 self.config.add({
43 'fields': [],
44 })
45
46 self.patterns = {}
47 self.warned = False
48
49 for field in self.config['fields'].as_str_seq():
50 if field in ('id', 'path', 'album_id'):
51 log.warn(u'[zero] field \'{0}\' ignored, zeroing '
52 u'it would be dangerous'.format(field))
53 continue
54 if field not in Item._fields.keys():
55 log.error(u'[zero] invalid field: {0}'.format(field))
56 continue
57
58 try:
59 self.patterns[field] = self.config[field].as_str_seq()
60 except confit.NotFoundError:
61 # Matches everything
62 self.patterns[field] = [u'']
63
64 def import_task_choice_event(self, session, task):
65 """Listen for import_task_choice event."""
66 if task.choice_flag == action.ASIS and not self.warned:
67 log.warn(u'[zero] cannot zero in \"as-is\" mode')
68 self.warned = True
69 # TODO request write in as-is mode
70
71 @classmethod
72 def match_patterns(cls, field, patterns):
73 """Check if field (as string) is matching any of the patterns in
74 the list.
75 """
76 for p in patterns:
77 if re.search(p, unicode(field), flags=re.IGNORECASE):
78 return True
79 return False
80
81 def write_event(self, item, path, tags):
82 """Listen for write event."""
83 if not self.patterns:
84 log.warn(u'[zero] no fields, nothing to do')
85 return
86
87 for field, patterns in self.patterns.items():
88 if field not in tags:
89 log.error(u'[zero] no such field: {0}'.format(field))
90 continue
91
92 value = tags[field]
93 if self.match_patterns(value, patterns):
94 log.debug(u'[zero] {0}: {1} -> None'.format(field, value))
95 tags[field] = None
96
[end of beetsplug/zero.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/zero.py b/beetsplug/zero.py
--- a/beetsplug/zero.py
+++ b/beetsplug/zero.py
@@ -17,7 +17,7 @@
import re
import logging
from beets.plugins import BeetsPlugin
-from beets.library import Item
+from beets.mediafile import MediaFile
from beets.importer import action
from beets.util import confit
@@ -51,7 +51,7 @@
log.warn(u'[zero] field \'{0}\' ignored, zeroing '
u'it would be dangerous'.format(field))
continue
- if field not in Item._fields.keys():
+ if field not in MediaFile.fields():
log.error(u'[zero] invalid field: {0}'.format(field))
continue
@@ -59,7 +59,7 @@
self.patterns[field] = self.config[field].as_str_seq()
except confit.NotFoundError:
# Matches everything
- self.patterns[field] = [u'']
+ self.patterns[field] = True
def import_task_choice_event(self, session, task):
"""Listen for import_task_choice event."""
@@ -73,23 +73,29 @@
"""Check if field (as string) is matching any of the patterns in
the list.
"""
+ if patterns is True:
+ return True
for p in patterns:
if re.search(p, unicode(field), flags=re.IGNORECASE):
return True
return False
def write_event(self, item, path, tags):
- """Listen for write event."""
+ """Set values in tags to `None` if the key and value are matched
+ by `self.patterns`.
+ """
if not self.patterns:
log.warn(u'[zero] no fields, nothing to do')
return
for field, patterns in self.patterns.items():
- if field not in tags:
- log.error(u'[zero] no such field: {0}'.format(field))
- continue
-
- value = tags[field]
- if self.match_patterns(value, patterns):
+ if field in tags:
+ value = tags[field]
+ match = self.match_patterns(tags[field], patterns)
+ else:
+ value = ''
+ match = patterns is True
+
+ if match:
log.debug(u'[zero] {0}: {1} -> None'.format(field, value))
tags[field] = None
| {"golden_diff": "diff --git a/beetsplug/zero.py b/beetsplug/zero.py\n--- a/beetsplug/zero.py\n+++ b/beetsplug/zero.py\n@@ -17,7 +17,7 @@\n import re\n import logging\n from beets.plugins import BeetsPlugin\n-from beets.library import Item\n+from beets.mediafile import MediaFile\n from beets.importer import action\n from beets.util import confit\n \n@@ -51,7 +51,7 @@\n log.warn(u'[zero] field \\'{0}\\' ignored, zeroing '\n u'it would be dangerous'.format(field))\n continue\n- if field not in Item._fields.keys():\n+ if field not in MediaFile.fields():\n log.error(u'[zero] invalid field: {0}'.format(field))\n continue\n \n@@ -59,7 +59,7 @@\n self.patterns[field] = self.config[field].as_str_seq()\n except confit.NotFoundError:\n # Matches everything\n- self.patterns[field] = [u'']\n+ self.patterns[field] = True\n \n def import_task_choice_event(self, session, task):\n \"\"\"Listen for import_task_choice event.\"\"\"\n@@ -73,23 +73,29 @@\n \"\"\"Check if field (as string) is matching any of the patterns in\n the list.\n \"\"\"\n+ if patterns is True:\n+ return True\n for p in patterns:\n if re.search(p, unicode(field), flags=re.IGNORECASE):\n return True\n return False\n \n def write_event(self, item, path, tags):\n- \"\"\"Listen for write event.\"\"\"\n+ \"\"\"Set values in tags to `None` if the key and value are matched\n+ by `self.patterns`.\n+ \"\"\"\n if not self.patterns:\n log.warn(u'[zero] no fields, nothing to do')\n return\n \n for field, patterns in self.patterns.items():\n- if field not in tags:\n- log.error(u'[zero] no such field: {0}'.format(field))\n- continue\n-\n- value = tags[field]\n- if self.match_patterns(value, patterns):\n+ if field in tags:\n+ value = tags[field]\n+ match = self.match_patterns(tags[field], patterns)\n+ else:\n+ value = ''\n+ match = patterns is True\n+\n+ if match:\n log.debug(u'[zero] {0}: {1} -> None'.format(field, value))\n tags[field] = None\n", "issue": "zero: Permit stripping album art\nIt would be nice to have the option of automatically clearing embedded art when an item is imported. Whether or not a media item actually contains embedded art, beets should ensure the resulting media item has no embedded art after being import. There are two plugins which would offer a good place of implementation for this feature: the EmbedArt and the Zero plugins.\n\nThe EmbedArt plugin already supports a command called `clearart` which allows for the manual stripping of embedded art from items which match a query. Since the the `clearart` operation is not automatic and there is no option for automation, an extra step is required on the importation of media.\n\nWhat probably makes more sense is implementing support for the art field in the Zero plugin. It can only be assumed that people who would use such a feature already have the Zero plugin deployed for clearing other fields. That said, it would require less configuration as all a user would need to do is drop the art field in their configuration for the Zero plugin. Moreover, with the EmbedArt plugin, it embeds art into media items by default. This feature would need to be disabled in the configuration as well.\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2013, Blemjhoo Tezoulbr <[email protected]>.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\" Clears tag fields in media files.\"\"\"\n\nimport re\nimport logging\nfrom beets.plugins import BeetsPlugin\nfrom beets.library import Item\nfrom beets.importer import action\nfrom beets.util import confit\n\n__author__ = '[email protected]'\n__version__ = '0.10'\n\nlog = logging.getLogger('beets')\n\n\nclass ZeroPlugin(BeetsPlugin):\n\n _instance = None\n\n def __init__(self):\n super(ZeroPlugin, self).__init__()\n\n # Listeners.\n self.register_listener('write', self.write_event)\n self.register_listener('import_task_choice',\n self.import_task_choice_event)\n\n self.config.add({\n 'fields': [],\n })\n\n self.patterns = {}\n self.warned = False\n\n for field in self.config['fields'].as_str_seq():\n if field in ('id', 'path', 'album_id'):\n log.warn(u'[zero] field \\'{0}\\' ignored, zeroing '\n u'it would be dangerous'.format(field))\n continue\n if field not in Item._fields.keys():\n log.error(u'[zero] invalid field: {0}'.format(field))\n continue\n\n try:\n self.patterns[field] = self.config[field].as_str_seq()\n except confit.NotFoundError:\n # Matches everything\n self.patterns[field] = [u'']\n\n def import_task_choice_event(self, session, task):\n \"\"\"Listen for import_task_choice event.\"\"\"\n if task.choice_flag == action.ASIS and not self.warned:\n log.warn(u'[zero] cannot zero in \\\"as-is\\\" mode')\n self.warned = True\n # TODO request write in as-is mode\n\n @classmethod\n def match_patterns(cls, field, patterns):\n \"\"\"Check if field (as string) is matching any of the patterns in\n the list.\n \"\"\"\n for p in patterns:\n if re.search(p, unicode(field), flags=re.IGNORECASE):\n return True\n return False\n\n def write_event(self, item, path, tags):\n \"\"\"Listen for write event.\"\"\"\n if not self.patterns:\n log.warn(u'[zero] no fields, nothing to do')\n return\n\n for field, patterns in self.patterns.items():\n if field not in tags:\n log.error(u'[zero] no such field: {0}'.format(field))\n continue\n\n value = tags[field]\n if self.match_patterns(value, patterns):\n log.debug(u'[zero] {0}: {1} -> None'.format(field, value))\n tags[field] = None\n", "path": "beetsplug/zero.py"}]} | 1,693 | 550 |
gh_patches_debug_20500 | rasdani/github-patches | git_diff | AlexsLemonade__refinebio-3299 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cache Docker Images by Branch
### Context
We want to be able to cache docker image layers that are created locally as testing artfacts locally to be used by github actions.
The current prepare_images.sh does this but there was an issue with the definition for branch_name.
We also don't want to remove support non-ccdl members developing locally.

### Solution or next step
- After #3285 is merged, we should set sensible defaults that can be overridden for external contributors.
- Get current branch name or tag to be set when pushing images to ccdl(staging) repo.
Determine:
- If they don't have access to the docker repo should we just build locally and not push?
- How long can docker tags be / are they compatible with our longer branch names.
</issue>
<code>
[start of common/setup.py]
1 import os
2
3 from setuptools import find_packages, setup
4
5 # allow setup.py to be run from any path
6 os.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir)))
7
8 VERSION_FILE = "version"
9 try:
10 with open(VERSION_FILE, "rt") as version_file:
11 version_string = version_file.read().strip().split("-")[0]
12 except OSError:
13 print(
14 "Cannot read version to determine System Version."
15 " Please create a file common/version containing an up to date System Version."
16 )
17 raise
18
19 setup(
20 name="data-refinery-common",
21 version=version_string,
22 packages=find_packages(),
23 include_package_data=True,
24 # These values are based on what is in common/requirements.txt.
25 install_requires=[
26 "boto3>=1.9.16",
27 "coverage>=4.5.1",
28 "daiquiri>=1.5.0",
29 "django>=3.2,<4",
30 "raven>=6.9.0",
31 "requests>=2.10.1",
32 "retrying>=1.3.3",
33 "psycopg2-binary>=2.7.5",
34 ],
35 license="BSD License",
36 description="Common functionality to be shared between Data Refinery sub-projects.",
37 url="https://www.greenelab.com",
38 author="Kurt Wheeler",
39 author_email="[email protected]",
40 classifiers=[
41 "Environment :: Web Environment",
42 "Framework :: Django",
43 "Intended Audience :: Developers",
44 "License :: OSI Approved :: BSD License",
45 "Operating System :: Ubuntu",
46 "Programming Language :: Python",
47 "Programming Language :: Python :: 3.5",
48 "Programming Language :: Python :: 3.6",
49 "Topic :: Internet :: WWW/HTTP",
50 ],
51 )
52
[end of common/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/common/setup.py b/common/setup.py
--- a/common/setup.py
+++ b/common/setup.py
@@ -1,4 +1,6 @@
import os
+import re
+from datetime import datetime
from setuptools import find_packages, setup
@@ -11,11 +13,21 @@
version_string = version_file.read().strip().split("-")[0]
except OSError:
print(
- "Cannot read version to determine System Version."
- " Please create a file common/version containing an up to date System Version."
+ "Cannot read version file to determine system version. "
+ "Please create a file common/version containing an up to date system version."
)
raise
+version_re = re.compile(
+ r"^([1-9][0-9]*!)?(0|[1-9][0-9]*)"
+ "(\.(0|[1-9][0-9]*))*((a|b|rc)(0|[1-9][0-9]*))"
+ "?(\.post(0|[1-9][0-9]*))?(\.dev(0|[1-9][0-9]*))?$"
+)
+if not version_re.match(version_string):
+ # Generate version based on the datetime.now(): e.g., 2023.5.17.dev1684352560.
+ now = datetime.now()
+ version_string = f"{now.strftime('%Y.%-m.%-d.dev')}{int(datetime.timestamp(now))}"
+
setup(
name="data-refinery-common",
version=version_string,
| {"golden_diff": "diff --git a/common/setup.py b/common/setup.py\n--- a/common/setup.py\n+++ b/common/setup.py\n@@ -1,4 +1,6 @@\n import os\n+import re\n+from datetime import datetime\n \n from setuptools import find_packages, setup\n \n@@ -11,11 +13,21 @@\n version_string = version_file.read().strip().split(\"-\")[0]\n except OSError:\n print(\n- \"Cannot read version to determine System Version.\"\n- \" Please create a file common/version containing an up to date System Version.\"\n+ \"Cannot read version file to determine system version. \"\n+ \"Please create a file common/version containing an up to date system version.\"\n )\n raise\n \n+version_re = re.compile(\n+ r\"^([1-9][0-9]*!)?(0|[1-9][0-9]*)\"\n+ \"(\\.(0|[1-9][0-9]*))*((a|b|rc)(0|[1-9][0-9]*))\"\n+ \"?(\\.post(0|[1-9][0-9]*))?(\\.dev(0|[1-9][0-9]*))?$\"\n+)\n+if not version_re.match(version_string):\n+ # Generate version based on the datetime.now(): e.g., 2023.5.17.dev1684352560.\n+ now = datetime.now()\n+ version_string = f\"{now.strftime('%Y.%-m.%-d.dev')}{int(datetime.timestamp(now))}\"\n+\n setup(\n name=\"data-refinery-common\",\n version=version_string,\n", "issue": "Cache Docker Images by Branch\n### Context\r\n\r\nWe want to be able to cache docker image layers that are created locally as testing artfacts locally to be used by github actions.\r\nThe current prepare_images.sh does this but there was an issue with the definition for branch_name.\r\nWe also don't want to remove support non-ccdl members developing locally.\r\n\r\n\r\n\r\n\r\n\r\n### Solution or next step\r\n\r\n- After #3285 is merged, we should set sensible defaults that can be overridden for external contributors.\r\n- Get current branch name or tag to be set when pushing images to ccdl(staging) repo.\r\n\r\nDetermine:\r\n- If they don't have access to the docker repo should we just build locally and not push?\r\n- How long can docker tags be / are they compatible with our longer branch names.\r\n\n", "before_files": [{"content": "import os\n\nfrom setuptools import find_packages, setup\n\n# allow setup.py to be run from any path\nos.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir)))\n\nVERSION_FILE = \"version\"\ntry:\n with open(VERSION_FILE, \"rt\") as version_file:\n version_string = version_file.read().strip().split(\"-\")[0]\nexcept OSError:\n print(\n \"Cannot read version to determine System Version.\"\n \" Please create a file common/version containing an up to date System Version.\"\n )\n raise\n\nsetup(\n name=\"data-refinery-common\",\n version=version_string,\n packages=find_packages(),\n include_package_data=True,\n # These values are based on what is in common/requirements.txt.\n install_requires=[\n \"boto3>=1.9.16\",\n \"coverage>=4.5.1\",\n \"daiquiri>=1.5.0\",\n \"django>=3.2,<4\",\n \"raven>=6.9.0\",\n \"requests>=2.10.1\",\n \"retrying>=1.3.3\",\n \"psycopg2-binary>=2.7.5\",\n ],\n license=\"BSD License\",\n description=\"Common functionality to be shared between Data Refinery sub-projects.\",\n url=\"https://www.greenelab.com\",\n author=\"Kurt Wheeler\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: Ubuntu\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "common/setup.py"}]} | 1,247 | 354 |
gh_patches_debug_25112 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-668 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
capture_backtrace raises AttributeError on PEP-420 namespace packages
The new `capture_backtrace` function in `scout_apm.core.backtrace` raises an AttributeError when the stack includes a [PEP-420] namespace package.
This is caused by the [`module_filepath` function](https://github.com/scoutapp/scout_apm_python/blob/v2.21.0/src/scout_apm/core/backtrace.py#L26-L33), specifically line 32:
```python
module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]
```
If `sys.modules[root_module]` is a [PEP-420] namespace package, this will raise
```
AttributeError: 'NoneType' object has no attribute 'rsplit'
```
### Steps to reproduce
Create a namespace package, with some modules inside, e.g.:
```
namespace/
foo/
__init__.py
bar/
__init__.py
```
Then on an interactive Python shell:
```
>>> from scout_apm.core.backtrace import module_filepath
>>> from namespace import foo
>>> module_filepath("namespace.foo", "namespace")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jack/venvs/tmp-a17ac7185189989/lib/python3.8/site-packages/scout_apm/core/backtrace.py", line 32, in module_filepath
module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]
AttributeError: 'NoneType' object has no attribute 'rsplit'
```
### Details
- Tested with version 2.21.0
- Current workaround is to pin version to 2.20.0
[PEP-420]: https://www.python.org/dev/peps/pep-0420/
</issue>
<code>
[start of src/scout_apm/core/backtrace.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import itertools
5 import os
6 import sys
7 import sysconfig
8 import traceback
9 import warnings
10
11 # Maximum non-Scout frames to target retrieving
12 LIMIT = 50
13 # How many upper frames from inside Scout to ignore
14 IGNORED = 1
15
16
17 def filter_frames(frames):
18 """Filter the stack trace frames down to non-library code."""
19 paths = sysconfig.get_paths()
20 library_paths = {paths["purelib"], paths["platlib"]}
21 for frame in frames:
22 if not any(frame["file"].startswith(exclusion) for exclusion in library_paths):
23 yield frame
24
25
26 def module_filepath(module, filepath):
27 """Get the filepath relative to the base module."""
28 root_module = module.split(".", 1)[0]
29 if root_module == module:
30 return os.path.basename(filepath)
31
32 module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]
33 return filepath.split(module_dir, 1)[-1].lstrip(os.sep)
34
35
36 def filepath(frame):
37 """Get the filepath for frame."""
38 module = frame.f_globals.get("__name__", None)
39 filepath = frame.f_code.co_filename
40
41 if filepath.endswith(".pyc"):
42 filepath = filepath[:-1]
43
44 if not module:
45 return filepath
46 return module_filepath(module, filepath)
47
48
49 if sys.version_info >= (3, 5):
50
51 def stacktrace_walker(tb):
52 """Iterate over each frame of the stack downards for exceptions."""
53 for frame, lineno in traceback.walk_tb(tb):
54 name = frame.f_code.co_name
55 yield {"file": filepath(frame), "line": lineno, "function": name}
56
57 def backtrace_walker():
58 """Iterate over each frame of the stack upwards.
59
60 Taken from python3/traceback.ExtractSummary.extract to support
61 iterating over the entire stack, but without creating a large
62 data structure.
63 """
64 start_frame = sys._getframe().f_back
65 for frame, lineno in traceback.walk_stack(start_frame):
66 name = frame.f_code.co_name
67 yield {"file": filepath(frame), "line": lineno, "function": name}
68
69
70 else:
71
72 def stacktrace_walker(tb):
73 """Iterate over each frame of the stack downards for exceptions."""
74 while tb is not None:
75 lineno = tb.tb_lineno
76 name = tb.tb_frame.f_code.co_name
77 yield {
78 "file": filepath(tb.tb_frame),
79 "line": lineno,
80 "function": name,
81 }
82 tb = tb.tb_next
83
84 def backtrace_walker():
85 """Iterate over each frame of the stack upwards.
86
87 Taken from python2.7/traceback.extract_stack to support iterating
88 over the entire stack, but without creating a large data structure.
89 """
90 try:
91 raise ZeroDivisionError
92 except ZeroDivisionError:
93 # Get the current frame
94 frame = sys.exc_info()[2].tb_frame.f_back
95
96 while frame is not None:
97 lineno = frame.f_lineno
98 name = frame.f_code.co_name
99 yield {"file": filepath(frame), "line": lineno, "function": name}
100 frame = frame.f_back
101
102
103 def capture_backtrace():
104 walker = filter_frames(backtrace_walker())
105 return list(itertools.islice(walker, LIMIT))
106
107
108 def capture_stacktrace(tb):
109 walker = stacktrace_walker(tb)
110 return list(reversed(list(itertools.islice(walker, LIMIT))))
111
112
113 def capture():
114 warnings.warn(
115 "capture is deprecated, instead use capture_backtrace instead.",
116 DeprecationWarning,
117 2,
118 )
119 return capture_backtrace()
120
[end of src/scout_apm/core/backtrace.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/core/backtrace.py b/src/scout_apm/core/backtrace.py
--- a/src/scout_apm/core/backtrace.py
+++ b/src/scout_apm/core/backtrace.py
@@ -7,6 +7,9 @@
import sysconfig
import traceback
import warnings
+from logging import getLogger
+
+logger = getLogger(__name__)
# Maximum non-Scout frames to target retrieving
LIMIT = 50
@@ -25,11 +28,25 @@
def module_filepath(module, filepath):
"""Get the filepath relative to the base module."""
- root_module = module.split(".", 1)[0]
- if root_module == module:
+ root_module_name = module.split(".", 1)[0]
+ if root_module_name == module:
return os.path.basename(filepath)
- module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]
+ root_module = sys.modules[root_module_name]
+ if root_module.__file__:
+ module_dir = root_module.__file__.rsplit(os.sep, 2)[0]
+ elif root_module.__path__:
+ # Default to using the first path specified for the module.
+ module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]
+ if len(root_module.__path__) > 1:
+ logger.debug(
+ "{} has {} paths. Use the first and ignore the rest.".format(
+ root_module, len(root_module.__path__)
+ )
+ )
+ else:
+ # If the file path don't exist, then return the full path.
+ return filepath
return filepath.split(module_dir, 1)[-1].lstrip(os.sep)
| {"golden_diff": "diff --git a/src/scout_apm/core/backtrace.py b/src/scout_apm/core/backtrace.py\n--- a/src/scout_apm/core/backtrace.py\n+++ b/src/scout_apm/core/backtrace.py\n@@ -7,6 +7,9 @@\n import sysconfig\n import traceback\n import warnings\n+from logging import getLogger\n+\n+logger = getLogger(__name__)\n \n # Maximum non-Scout frames to target retrieving\n LIMIT = 50\n@@ -25,11 +28,25 @@\n \n def module_filepath(module, filepath):\n \"\"\"Get the filepath relative to the base module.\"\"\"\n- root_module = module.split(\".\", 1)[0]\n- if root_module == module:\n+ root_module_name = module.split(\".\", 1)[0]\n+ if root_module_name == module:\n return os.path.basename(filepath)\n \n- module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]\n+ root_module = sys.modules[root_module_name]\n+ if root_module.__file__:\n+ module_dir = root_module.__file__.rsplit(os.sep, 2)[0]\n+ elif root_module.__path__:\n+ # Default to using the first path specified for the module.\n+ module_dir = root_module.__path__[0].rsplit(os.sep, 1)[0]\n+ if len(root_module.__path__) > 1:\n+ logger.debug(\n+ \"{} has {} paths. Use the first and ignore the rest.\".format(\n+ root_module, len(root_module.__path__)\n+ )\n+ )\n+ else:\n+ # If the file path don't exist, then return the full path.\n+ return filepath\n return filepath.split(module_dir, 1)[-1].lstrip(os.sep)\n", "issue": "capture_backtrace raises AttributeError on PEP-420 namespace packages\nThe new `capture_backtrace` function in `scout_apm.core.backtrace` raises an AttributeError when the stack includes a [PEP-420] namespace package.\r\n\r\nThis is caused by the [`module_filepath` function](https://github.com/scoutapp/scout_apm_python/blob/v2.21.0/src/scout_apm/core/backtrace.py#L26-L33), specifically line 32:\r\n\r\n```python\r\n module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]\r\n```\r\n\r\nIf `sys.modules[root_module]` is a [PEP-420] namespace package, this will raise\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'rsplit'\r\n```\r\n\r\n### Steps to reproduce\r\n\r\nCreate a namespace package, with some modules inside, e.g.:\r\n```\r\nnamespace/\r\n foo/\r\n __init__.py\r\n bar/\r\n __init__.py\r\n```\r\n\r\nThen on an interactive Python shell:\r\n\r\n```\r\n>>> from scout_apm.core.backtrace import module_filepath\r\n>>> from namespace import foo\r\n>>> module_filepath(\"namespace.foo\", \"namespace\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jack/venvs/tmp-a17ac7185189989/lib/python3.8/site-packages/scout_apm/core/backtrace.py\", line 32, in module_filepath\r\n module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]\r\nAttributeError: 'NoneType' object has no attribute 'rsplit'\r\n```\r\n\r\n### Details\r\n\r\n- Tested with version 2.21.0\r\n- Current workaround is to pin version to 2.20.0\r\n\r\n[PEP-420]: https://www.python.org/dev/peps/pep-0420/\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport itertools\nimport os\nimport sys\nimport sysconfig\nimport traceback\nimport warnings\n\n# Maximum non-Scout frames to target retrieving\nLIMIT = 50\n# How many upper frames from inside Scout to ignore\nIGNORED = 1\n\n\ndef filter_frames(frames):\n \"\"\"Filter the stack trace frames down to non-library code.\"\"\"\n paths = sysconfig.get_paths()\n library_paths = {paths[\"purelib\"], paths[\"platlib\"]}\n for frame in frames:\n if not any(frame[\"file\"].startswith(exclusion) for exclusion in library_paths):\n yield frame\n\n\ndef module_filepath(module, filepath):\n \"\"\"Get the filepath relative to the base module.\"\"\"\n root_module = module.split(\".\", 1)[0]\n if root_module == module:\n return os.path.basename(filepath)\n\n module_dir = sys.modules[root_module].__file__.rsplit(os.sep, 2)[0]\n return filepath.split(module_dir, 1)[-1].lstrip(os.sep)\n\n\ndef filepath(frame):\n \"\"\"Get the filepath for frame.\"\"\"\n module = frame.f_globals.get(\"__name__\", None)\n filepath = frame.f_code.co_filename\n\n if filepath.endswith(\".pyc\"):\n filepath = filepath[:-1]\n\n if not module:\n return filepath\n return module_filepath(module, filepath)\n\n\nif sys.version_info >= (3, 5):\n\n def stacktrace_walker(tb):\n \"\"\"Iterate over each frame of the stack downards for exceptions.\"\"\"\n for frame, lineno in traceback.walk_tb(tb):\n name = frame.f_code.co_name\n yield {\"file\": filepath(frame), \"line\": lineno, \"function\": name}\n\n def backtrace_walker():\n \"\"\"Iterate over each frame of the stack upwards.\n\n Taken from python3/traceback.ExtractSummary.extract to support\n iterating over the entire stack, but without creating a large\n data structure.\n \"\"\"\n start_frame = sys._getframe().f_back\n for frame, lineno in traceback.walk_stack(start_frame):\n name = frame.f_code.co_name\n yield {\"file\": filepath(frame), \"line\": lineno, \"function\": name}\n\n\nelse:\n\n def stacktrace_walker(tb):\n \"\"\"Iterate over each frame of the stack downards for exceptions.\"\"\"\n while tb is not None:\n lineno = tb.tb_lineno\n name = tb.tb_frame.f_code.co_name\n yield {\n \"file\": filepath(tb.tb_frame),\n \"line\": lineno,\n \"function\": name,\n }\n tb = tb.tb_next\n\n def backtrace_walker():\n \"\"\"Iterate over each frame of the stack upwards.\n\n Taken from python2.7/traceback.extract_stack to support iterating\n over the entire stack, but without creating a large data structure.\n \"\"\"\n try:\n raise ZeroDivisionError\n except ZeroDivisionError:\n # Get the current frame\n frame = sys.exc_info()[2].tb_frame.f_back\n\n while frame is not None:\n lineno = frame.f_lineno\n name = frame.f_code.co_name\n yield {\"file\": filepath(frame), \"line\": lineno, \"function\": name}\n frame = frame.f_back\n\n\ndef capture_backtrace():\n walker = filter_frames(backtrace_walker())\n return list(itertools.islice(walker, LIMIT))\n\n\ndef capture_stacktrace(tb):\n walker = stacktrace_walker(tb)\n return list(reversed(list(itertools.islice(walker, LIMIT))))\n\n\ndef capture():\n warnings.warn(\n \"capture is deprecated, instead use capture_backtrace instead.\",\n DeprecationWarning,\n 2,\n )\n return capture_backtrace()\n", "path": "src/scout_apm/core/backtrace.py"}]} | 2,029 | 389 |
gh_patches_debug_14653 | rasdani/github-patches | git_diff | conda__conda-4327 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Channels in centrally installed .condarc file are being ignored in conda 4.3.4
Hi, I am testing a centrally installed Anaconda setup with Anaconda installed under `C:\Program Files\Anaconda3`. I have a condarc file under `C:\Program Files\Anaconda3\.condarc`.
When I run `conda info` it tells me that my config file is under the correct location.
config file : C:\Program Files\Anaconda3\.condarc
I have configured a few custom channels in this `.condarc` file, e.g.:
channels:
- http://some.internal/url
I can also use `conda config --system --add channels http://some.internal/url` to set this value and conda tells me that channels already contains this value.
But when I run `conda config --system --show`, the list of channels is always set to:
channels:
- defaults
It seems that the list of channels in the central `.condarc` file is completely ignored and always replaced by `defaults`. I have also tried to set the list of `default_channels` in the central `.condarc` file but without success.
Using conda 4.3.4 on win-64.
</issue>
<code>
[start of conda/__init__.py]
1 # (c) 2012-2016 Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6 """OS-agnostic, system-level binary package manager."""
7 from __future__ import absolute_import, division, print_function, unicode_literals
8
9 from os.path import dirname
10
11 from ._vendor.auxlib.packaging import get_version
12 from .common.compat import iteritems, text_type
13
14 __all__ = [
15 "__name__", "__version__", "__author__",
16 "__email__", "__license__", "__copyright__",
17 "__summary__", "__url__",
18 ]
19
20 __name__ = "conda"
21 __version__ = get_version(__file__)
22 __author__ = "Continuum Analytics, Inc."
23 __email__ = "[email protected]"
24 __license__ = "BSD"
25 __summary__ = __doc__
26 __url__ = "https://github.com/conda/conda"
27
28 CONDA_PACKAGE_ROOT = dirname(__file__)
29
30
31 class CondaError(Exception):
32 def __init__(self, message, **kwargs):
33 self.message = message
34 self._kwargs = kwargs
35 super(CondaError, self).__init__(message)
36
37 def __repr__(self):
38 return '%s: %s\n' % (self.__class__.__name__, text_type(self))
39
40 def __str__(self):
41 return text_type(self.message % self._kwargs)
42
43 def dump_map(self):
44 result = dict((k, v) for k, v in iteritems(vars(self)) if not k.startswith('_'))
45 result.update(exception_type=text_type(type(self)),
46 exception_name=self.__class__.__name__,
47 message=text_type(self),
48 error=repr(self),
49 **self._kwargs)
50 return result
51
52
53 class CondaMultiError(CondaError):
54
55 def __init__(self, errors):
56 self.errors = errors
57 super(CondaError, self).__init__(None)
58
59 def __repr__(self):
60 return '\n'.join(repr(e) for e in self.errors) + '\n'
61
62 def __str__(self):
63 return '\n'.join(text_type(e) for e in self.errors) + '\n'
64
65 def dump_map(self):
66 return dict(exception_type=text_type(type(self)),
67 exception_name=self.__class__.__name__,
68 errors=tuple(error.dump_map() for error in self.errors),
69 error="Multiple Errors Encountered.",
70 )
71
72
73 class CondaExitZero(CondaError):
74 pass
75
[end of conda/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda/__init__.py b/conda/__init__.py
--- a/conda/__init__.py
+++ b/conda/__init__.py
@@ -6,7 +6,9 @@
"""OS-agnostic, system-level binary package manager."""
from __future__ import absolute_import, division, print_function, unicode_literals
+import os
from os.path import dirname
+import sys
from ._vendor.auxlib.packaging import get_version
from .common.compat import iteritems, text_type
@@ -25,6 +27,10 @@
__summary__ = __doc__
__url__ = "https://github.com/conda/conda"
+
+if os.getenv('CONDA_ROOT') is None:
+ os.environ['CONDA_ROOT'] = sys.prefix
+
CONDA_PACKAGE_ROOT = dirname(__file__)
| {"golden_diff": "diff --git a/conda/__init__.py b/conda/__init__.py\n--- a/conda/__init__.py\n+++ b/conda/__init__.py\n@@ -6,7 +6,9 @@\n \"\"\"OS-agnostic, system-level binary package manager.\"\"\"\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n+import os\n from os.path import dirname\n+import sys\n \n from ._vendor.auxlib.packaging import get_version\n from .common.compat import iteritems, text_type\n@@ -25,6 +27,10 @@\n __summary__ = __doc__\n __url__ = \"https://github.com/conda/conda\"\n \n+\n+if os.getenv('CONDA_ROOT') is None:\n+ os.environ['CONDA_ROOT'] = sys.prefix\n+\n CONDA_PACKAGE_ROOT = dirname(__file__)\n", "issue": "Channels in centrally installed .condarc file are being ignored in conda 4.3.4\nHi, I am testing a centrally installed Anaconda setup with Anaconda installed under `C:\\Program Files\\Anaconda3`. I have a condarc file under `C:\\Program Files\\Anaconda3\\.condarc`.\r\n\r\nWhen I run `conda info` it tells me that my config file is under the correct location.\r\n\r\n config file : C:\\Program Files\\Anaconda3\\.condarc\r\n\r\nI have configured a few custom channels in this `.condarc` file, e.g.:\r\n\r\n channels:\r\n - http://some.internal/url\r\n\r\nI can also use `conda config --system --add channels http://some.internal/url` to set this value and conda tells me that channels already contains this value.\r\n\r\nBut when I run `conda config --system --show`, the list of channels is always set to:\r\n\r\n channels:\r\n - defaults\r\n\r\nIt seems that the list of channels in the central `.condarc` file is completely ignored and always replaced by `defaults`. I have also tried to set the list of `default_channels` in the central `.condarc` file but without success.\r\n\r\nUsing conda 4.3.4 on win-64.\r\n\n", "before_files": [{"content": "# (c) 2012-2016 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\"\"\"OS-agnostic, system-level binary package manager.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom os.path import dirname\n\nfrom ._vendor.auxlib.packaging import get_version\nfrom .common.compat import iteritems, text_type\n\n__all__ = [\n \"__name__\", \"__version__\", \"__author__\",\n \"__email__\", \"__license__\", \"__copyright__\",\n \"__summary__\", \"__url__\",\n]\n\n__name__ = \"conda\"\n__version__ = get_version(__file__)\n__author__ = \"Continuum Analytics, Inc.\"\n__email__ = \"[email protected]\"\n__license__ = \"BSD\"\n__summary__ = __doc__\n__url__ = \"https://github.com/conda/conda\"\n\nCONDA_PACKAGE_ROOT = dirname(__file__)\n\n\nclass CondaError(Exception):\n def __init__(self, message, **kwargs):\n self.message = message\n self._kwargs = kwargs\n super(CondaError, self).__init__(message)\n\n def __repr__(self):\n return '%s: %s\\n' % (self.__class__.__name__, text_type(self))\n\n def __str__(self):\n return text_type(self.message % self._kwargs)\n\n def dump_map(self):\n result = dict((k, v) for k, v in iteritems(vars(self)) if not k.startswith('_'))\n result.update(exception_type=text_type(type(self)),\n exception_name=self.__class__.__name__,\n message=text_type(self),\n error=repr(self),\n **self._kwargs)\n return result\n\n\nclass CondaMultiError(CondaError):\n\n def __init__(self, errors):\n self.errors = errors\n super(CondaError, self).__init__(None)\n\n def __repr__(self):\n return '\\n'.join(repr(e) for e in self.errors) + '\\n'\n\n def __str__(self):\n return '\\n'.join(text_type(e) for e in self.errors) + '\\n'\n\n def dump_map(self):\n return dict(exception_type=text_type(type(self)),\n exception_name=self.__class__.__name__,\n errors=tuple(error.dump_map() for error in self.errors),\n error=\"Multiple Errors Encountered.\",\n )\n\n\nclass CondaExitZero(CondaError):\n pass\n", "path": "conda/__init__.py"}]} | 1,515 | 182 |
gh_patches_debug_19684 | rasdani/github-patches | git_diff | Azure__azure-cli-extensions-2985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The parameter for --administration-members is incorrectly stated as optional
For the function 'az powerbi embedded-capacity create', the parameter for --administration-members is incorrectly stated as optional.
If you leave this parameter out, it will give this error:
**BadRequestError: At least one capacity administrator is required**
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: edf4a4a9-8ff1-c276-3e51-d5e83c180879
* Version Independent ID: de63a28e-4d16-2270-595f-1a67f5e682bd
* Content: [az powerbi embedded-capacity](https://docs.microsoft.com/en-us/cli/azure/ext/powerbidedicated/powerbi/embedded-capacity?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/ext/powerbidedicated/powerbi/embedded-capacity.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/ext/powerbidedicated/powerbi/embedded-capacity.yml)
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**
</issue>
<code>
[start of src/powerbidedicated/azext_powerbidedicated/_params.py]
1 # --------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for license information.
4 # --------------------------------------------------------------------------------------------
5 # pylint: disable=line-too-long
6 # pylint: disable=too-many-lines
7 # pylint: disable=too-many-statements
8
9 from knack.arguments import CLIArgumentType
10
11 from azure.cli.core.commands.parameters import (
12 tags_type,
13 get_enum_type,
14 resource_group_name_type,
15 get_location_type
16 )
17
18
19 def load_arguments(self, _):
20 name_type = CLIArgumentType(
21 options_list=['--name', '-n'],
22 help='The name of the Dedicated capacity. It must be at least 3 characters in length, and no more than 63.')
23 sku_name_type = CLIArgumentType(
24 arg_type=get_enum_type(['A1', 'A2', 'A3', 'A4', 'A5', 'A6']),
25 help='Name of the SKU level. For more information, please refer to '
26 'https://azure.microsoft.com/en-us/pricing/details/power-bi-embedded/.'
27 )
28 sku_tier_type = CLIArgumentType(
29 arg_type=get_enum_type(['PBIE_Azure']),
30 help='The name of the Azure pricing tier to which the SKU applies.'
31 )
32 administration_type = CLIArgumentType(
33 help='An array of administrator user identities.', nargs='+'
34 )
35
36 with self.argument_context('powerbi embedded-capacity') as c:
37 c.argument('resource_group_name', resource_group_name_type)
38 c.argument('name', name_type)
39
40 with self.argument_context('powerbi embedded-capacity create') as c:
41 c.argument('sku_name', sku_name_type)
42 c.argument('sku_tier', sku_tier_type)
43 c.argument('tags', tags_type)
44 c.argument('administration_members', administration_type)
45 c.argument('location', get_location_type(self.cli_ctx))
46
47 with self.argument_context('powerbi embedded-capacity update') as c:
48 c.argument('sku_name', sku_name_type)
49 c.argument('sku_tier', sku_tier_type)
50 c.argument('tags', tags_type)
51 c.argument('administration_members', administration_type)
52
[end of src/powerbidedicated/azext_powerbidedicated/_params.py]
[start of src/powerbidedicated/setup.py]
1 #!/usr/bin/env python
2
3 # --------------------------------------------------------------------------------------------
4 # Copyright (c) Microsoft Corporation. All rights reserved.
5 # Licensed under the MIT License. See License.txt in the project root for license information.
6 # --------------------------------------------------------------------------------------------
7
8
9 from codecs import open
10 from setuptools import setup, find_packages
11 try:
12 from azure_bdist_wheel import cmdclass
13 except ImportError:
14 from distutils import log as logger
15 logger.warn("Wheel is not available, disabling bdist_wheel hook")
16
17 # TODO: Confirm this is the right version number you want and it matches your
18 # HISTORY.rst entry.
19 VERSION = '0.1.1'
20
21 # The full list of classifiers is available at
22 # https://pypi.python.org/pypi?%3Aaction=list_classifiers
23 CLASSIFIERS = [
24 'Development Status :: 4 - Beta',
25 'Intended Audience :: Developers',
26 'Intended Audience :: System Administrators',
27 'Programming Language :: Python',
28 'Programming Language :: Python :: 2',
29 'Programming Language :: Python :: 2.7',
30 'Programming Language :: Python :: 3',
31 'Programming Language :: Python :: 3.4',
32 'Programming Language :: Python :: 3.5',
33 'Programming Language :: Python :: 3.6',
34 'License :: OSI Approved :: MIT License',
35 ]
36
37 # TODO: Add any additional SDK dependencies here
38 DEPENDENCIES = []
39
40 with open('README.md', 'r', encoding='utf-8') as f:
41 README = f.read()
42 with open('HISTORY.rst', 'r', encoding='utf-8') as f:
43 HISTORY = f.read()
44
45 setup(
46 name='powerbidedicated',
47 version=VERSION,
48 description='Microsoft Azure Command-Line Tools PowerBIDedicated Extension',
49 # TODO: Update author and email, if applicable
50 author='Microsoft Corporation',
51 author_email='[email protected]',
52 url='https://github.com/Azure/azure-cli-extensions/tree/master/src/powerbidedicated',
53 long_description=README + '\n\n' + HISTORY,
54 license='MIT',
55 classifiers=CLASSIFIERS,
56 packages=find_packages(),
57 install_requires=DEPENDENCIES,
58 package_data={'azext_powerbidedicated': ['azext_metadata.json']},
59 )
60
[end of src/powerbidedicated/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/powerbidedicated/azext_powerbidedicated/_params.py b/src/powerbidedicated/azext_powerbidedicated/_params.py
--- a/src/powerbidedicated/azext_powerbidedicated/_params.py
+++ b/src/powerbidedicated/azext_powerbidedicated/_params.py
@@ -41,7 +41,7 @@
c.argument('sku_name', sku_name_type)
c.argument('sku_tier', sku_tier_type)
c.argument('tags', tags_type)
- c.argument('administration_members', administration_type)
+ c.argument('administration_members', administration_type, required=True)
c.argument('location', get_location_type(self.cli_ctx))
with self.argument_context('powerbi embedded-capacity update') as c:
diff --git a/src/powerbidedicated/setup.py b/src/powerbidedicated/setup.py
--- a/src/powerbidedicated/setup.py
+++ b/src/powerbidedicated/setup.py
@@ -16,7 +16,7 @@
# TODO: Confirm this is the right version number you want and it matches your
# HISTORY.rst entry.
-VERSION = '0.1.1'
+VERSION = '0.2.0'
# The full list of classifiers is available at
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
| {"golden_diff": "diff --git a/src/powerbidedicated/azext_powerbidedicated/_params.py b/src/powerbidedicated/azext_powerbidedicated/_params.py\n--- a/src/powerbidedicated/azext_powerbidedicated/_params.py\n+++ b/src/powerbidedicated/azext_powerbidedicated/_params.py\n@@ -41,7 +41,7 @@\n c.argument('sku_name', sku_name_type)\n c.argument('sku_tier', sku_tier_type)\n c.argument('tags', tags_type)\n- c.argument('administration_members', administration_type)\n+ c.argument('administration_members', administration_type, required=True)\n c.argument('location', get_location_type(self.cli_ctx))\n \n with self.argument_context('powerbi embedded-capacity update') as c:\ndiff --git a/src/powerbidedicated/setup.py b/src/powerbidedicated/setup.py\n--- a/src/powerbidedicated/setup.py\n+++ b/src/powerbidedicated/setup.py\n@@ -16,7 +16,7 @@\n \n # TODO: Confirm this is the right version number you want and it matches your\n # HISTORY.rst entry.\n-VERSION = '0.1.1'\n+VERSION = '0.2.0'\n \n # The full list of classifiers is available at\n # https://pypi.python.org/pypi?%3Aaction=list_classifiers\n", "issue": "The parameter for --administration-members is incorrectly stated as optional \nFor the function 'az powerbi embedded-capacity create', the parameter for --administration-members is incorrectly stated as optional.\r\nIf you leave this parameter out, it will give this error:\r\n**BadRequestError: At least one capacity administrator is required**\r\n\r\n---\r\n#### Document Details\r\n\r\n\u26a0 *Do not edit this section. It is required for docs.microsoft.com \u279f GitHub issue linking.*\r\n\r\n* ID: edf4a4a9-8ff1-c276-3e51-d5e83c180879\r\n* Version Independent ID: de63a28e-4d16-2270-595f-1a67f5e682bd\r\n* Content: [az powerbi embedded-capacity](https://docs.microsoft.com/en-us/cli/azure/ext/powerbidedicated/powerbi/embedded-capacity?view=azure-cli-latest)\r\n* Content Source: [latest/docs-ref-autogen/ext/powerbidedicated/powerbi/embedded-capacity.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/ext/powerbidedicated/powerbi/embedded-capacity.yml)\r\n* GitHub Login: @rloutlaw\r\n* Microsoft Alias: **routlaw**\n", "before_files": [{"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n# pylint: disable=line-too-long\n# pylint: disable=too-many-lines\n# pylint: disable=too-many-statements\n\nfrom knack.arguments import CLIArgumentType\n\nfrom azure.cli.core.commands.parameters import (\n tags_type,\n get_enum_type,\n resource_group_name_type,\n get_location_type\n)\n\n\ndef load_arguments(self, _):\n name_type = CLIArgumentType(\n options_list=['--name', '-n'],\n help='The name of the Dedicated capacity. It must be at least 3 characters in length, and no more than 63.')\n sku_name_type = CLIArgumentType(\n arg_type=get_enum_type(['A1', 'A2', 'A3', 'A4', 'A5', 'A6']),\n help='Name of the SKU level. For more information, please refer to '\n 'https://azure.microsoft.com/en-us/pricing/details/power-bi-embedded/.'\n )\n sku_tier_type = CLIArgumentType(\n arg_type=get_enum_type(['PBIE_Azure']),\n help='The name of the Azure pricing tier to which the SKU applies.'\n )\n administration_type = CLIArgumentType(\n help='An array of administrator user identities.', nargs='+'\n )\n\n with self.argument_context('powerbi embedded-capacity') as c:\n c.argument('resource_group_name', resource_group_name_type)\n c.argument('name', name_type)\n\n with self.argument_context('powerbi embedded-capacity create') as c:\n c.argument('sku_name', sku_name_type)\n c.argument('sku_tier', sku_tier_type)\n c.argument('tags', tags_type)\n c.argument('administration_members', administration_type)\n c.argument('location', get_location_type(self.cli_ctx))\n\n with self.argument_context('powerbi embedded-capacity update') as c:\n c.argument('sku_name', sku_name_type)\n c.argument('sku_tier', sku_tier_type)\n c.argument('tags', tags_type)\n c.argument('administration_members', administration_type)\n", "path": "src/powerbidedicated/azext_powerbidedicated/_params.py"}, {"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\ntry:\n from azure_bdist_wheel import cmdclass\nexcept ImportError:\n from distutils import log as logger\n logger.warn(\"Wheel is not available, disabling bdist_wheel hook\")\n\n# TODO: Confirm this is the right version number you want and it matches your\n# HISTORY.rst entry.\nVERSION = '0.1.1'\n\n# The full list of classifiers is available at\n# https://pypi.python.org/pypi?%3Aaction=list_classifiers\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\n# TODO: Add any additional SDK dependencies here\nDEPENDENCIES = []\n\nwith open('README.md', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='powerbidedicated',\n version=VERSION,\n description='Microsoft Azure Command-Line Tools PowerBIDedicated Extension',\n # TODO: Update author and email, if applicable\n author='Microsoft Corporation',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/powerbidedicated',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n classifiers=CLASSIFIERS,\n packages=find_packages(),\n install_requires=DEPENDENCIES,\n package_data={'azext_powerbidedicated': ['azext_metadata.json']},\n)\n", "path": "src/powerbidedicated/setup.py"}]} | 2,017 | 298 |
gh_patches_debug_10979 | rasdani/github-patches | git_diff | bokeh__bokeh-10074 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOCS] Page wise display of documentation search
**Is your feature request related to a problem?**
Yes. I searched for a relatively simple query in the documentation search bar of https://docs.bokeh.org, and it took very long to load the results. In my second try, the results weren't even loading, I'm afraid. These are displayed in an unordered list which fills the entire page up. It might get frustrating to read through everything to find the answer to the input query.
**Describe the solution you'd like**
I would suggest displaying the fetched results in a page wise format, the way most search engines do it. Relevance weighted sorted answer, shown page wise. Fill up only the current page of about 20 to 30 odd query results, and depending on whether the user wants to see the other pages, load them.
**Describe alternatives you've considered**
If not a page wise result, a folder wise result would also benefit, which leaves the option to the user to navigate where he/she wants to. A custom google search may also help.
**Additional context**

</issue>
<code>
[start of sphinx/docserver.py]
1 import os
2 import sys
3 import threading
4 import time
5 import webbrowser
6
7 import flask
8 import tornado
9 from tornado.httpserver import HTTPServer
10 from tornado.ioloop import IOLoop
11 from tornado.wsgi import WSGIContainer
12
13 _basedir = os.path.join("..", os.path.dirname(__file__))
14
15 app = flask.Flask(__name__, static_folder="/unused")
16 PORT=5009
17 http_server = HTTPServer(WSGIContainer(app))
18
19 @app.route('/')
20 def welcome():
21 return """
22 <h1>Welcome to the Bokeh documentation server</h1>
23 You probably want to go to <a href="/en/latest/index.html"> Index</a>
24 """
25
26 @app.route('/versions.json')
27 def send_versions():
28 return flask.send_from_directory(
29 os.path.join(_basedir, "sphinx"), "test_versions.json")
30
31 @app.route('/alert.html')
32 def send_alert():
33 return os.environ.get("BOKEH_DOCS_ALERT", "")
34
35 @app.route('/en/latest/<path:filename>')
36 def send_docs(filename):
37 return flask.send_from_directory(
38 os.path.join(_basedir, "sphinx/build/html/"), filename)
39
40 def open_browser():
41 # Child process
42 time.sleep(0.5)
43 webbrowser.open("http://localhost:%d/en/latest/index.html" % PORT, new="tab")
44
45 data = {}
46
47 def serve_http():
48 data['ioloop'] = IOLoop()
49 http_server.listen(PORT)
50 IOLoop.current().start()
51
52 def shutdown_server():
53 ioloop = data['ioloop']
54 ioloop.add_callback(ioloop.stop)
55 print("Asked Server to shut down.")
56
57 def ui():
58 try:
59 time.sleep(0.5)
60 input("Press <ENTER> to exit...\n") # lgtm [py/use-of-input]
61 except KeyboardInterrupt:
62 pass
63
64 if __name__ == "__main__":
65
66 if tornado.version_info[0] == 4:
67 print('docserver.py script requires tornado 5 or higher')
68 sys.exit(1)
69
70 print("\nStarting Bokeh plot server on port %d..." % PORT)
71 print("Visit http://localhost:%d/en/latest/index.html to see plots\n" % PORT)
72
73 t_server = threading.Thread(target=serve_http)
74 t_server.start()
75 t_browser = threading.Thread(target=open_browser)
76 t_browser.start()
77
78 ui()
79
80 shutdown_server()
81 t_server.join()
82 t_browser.join()
83 print("Server shut down.")
84
[end of sphinx/docserver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sphinx/docserver.py b/sphinx/docserver.py
--- a/sphinx/docserver.py
+++ b/sphinx/docserver.py
@@ -1,3 +1,4 @@
+import asyncio
import os
import sys
import threading
@@ -10,6 +11,11 @@
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer
+# Needed for Windows + Python 3.8 config
+if sys.version_info.major==3 and sys.version_info.minor >= 8 and sys.platform.startswith('win'):
+ asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
+
+
_basedir = os.path.join("..", os.path.dirname(__file__))
app = flask.Flask(__name__, static_folder="/unused")
| {"golden_diff": "diff --git a/sphinx/docserver.py b/sphinx/docserver.py\n--- a/sphinx/docserver.py\n+++ b/sphinx/docserver.py\n@@ -1,3 +1,4 @@\n+import asyncio\n import os\n import sys\n import threading\n@@ -10,6 +11,11 @@\n from tornado.ioloop import IOLoop\n from tornado.wsgi import WSGIContainer\n \n+# Needed for Windows + Python 3.8 config\n+if sys.version_info.major==3 and sys.version_info.minor >= 8 and sys.platform.startswith('win'):\n+ asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())\n+\n+\n _basedir = os.path.join(\"..\", os.path.dirname(__file__))\n \n app = flask.Flask(__name__, static_folder=\"/unused\")\n", "issue": "[DOCS] Page wise display of documentation search \n**Is your feature request related to a problem?**\r\nYes. I searched for a relatively simple query in the documentation search bar of https://docs.bokeh.org, and it took very long to load the results. In my second try, the results weren't even loading, I'm afraid. These are displayed in an unordered list which fills the entire page up. It might get frustrating to read through everything to find the answer to the input query. \r\n\r\n**Describe the solution you'd like**\r\nI would suggest displaying the fetched results in a page wise format, the way most search engines do it. Relevance weighted sorted answer, shown page wise. Fill up only the current page of about 20 to 30 odd query results, and depending on whether the user wants to see the other pages, load them.\r\n\r\n**Describe alternatives you've considered**\r\nIf not a page wise result, a folder wise result would also benefit, which leaves the option to the user to navigate where he/she wants to. A custom google search may also help.\r\n\r\n**Additional context**\r\n\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\nimport threading\nimport time\nimport webbrowser\n\nimport flask\nimport tornado\nfrom tornado.httpserver import HTTPServer\nfrom tornado.ioloop import IOLoop\nfrom tornado.wsgi import WSGIContainer\n\n_basedir = os.path.join(\"..\", os.path.dirname(__file__))\n\napp = flask.Flask(__name__, static_folder=\"/unused\")\nPORT=5009\nhttp_server = HTTPServer(WSGIContainer(app))\n\[email protected]('/')\ndef welcome():\n return \"\"\"\n <h1>Welcome to the Bokeh documentation server</h1>\n You probably want to go to <a href=\"/en/latest/index.html\"> Index</a>\n \"\"\"\n\[email protected]('/versions.json')\ndef send_versions():\n return flask.send_from_directory(\n os.path.join(_basedir, \"sphinx\"), \"test_versions.json\")\n\[email protected]('/alert.html')\ndef send_alert():\n return os.environ.get(\"BOKEH_DOCS_ALERT\", \"\")\n\[email protected]('/en/latest/<path:filename>')\ndef send_docs(filename):\n return flask.send_from_directory(\n os.path.join(_basedir, \"sphinx/build/html/\"), filename)\n\ndef open_browser():\n # Child process\n time.sleep(0.5)\n webbrowser.open(\"http://localhost:%d/en/latest/index.html\" % PORT, new=\"tab\")\n\ndata = {}\n\ndef serve_http():\n data['ioloop'] = IOLoop()\n http_server.listen(PORT)\n IOLoop.current().start()\n\ndef shutdown_server():\n ioloop = data['ioloop']\n ioloop.add_callback(ioloop.stop)\n print(\"Asked Server to shut down.\")\n\ndef ui():\n try:\n time.sleep(0.5)\n input(\"Press <ENTER> to exit...\\n\") # lgtm [py/use-of-input]\n except KeyboardInterrupt:\n pass\n\nif __name__ == \"__main__\":\n\n if tornado.version_info[0] == 4:\n print('docserver.py script requires tornado 5 or higher')\n sys.exit(1)\n\n print(\"\\nStarting Bokeh plot server on port %d...\" % PORT)\n print(\"Visit http://localhost:%d/en/latest/index.html to see plots\\n\" % PORT)\n\n t_server = threading.Thread(target=serve_http)\n t_server.start()\n t_browser = threading.Thread(target=open_browser)\n t_browser.start()\n\n ui()\n\n shutdown_server()\n t_server.join()\n t_browser.join()\n print(\"Server shut down.\")\n", "path": "sphinx/docserver.py"}]} | 1,530 | 171 |
gh_patches_debug_31758 | rasdani/github-patches | git_diff | docker__docker-py-384 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Do not support sslv3 (poodle)
In Docker 1.3.1 (coming very soon), only TLS1.0+ will be supported.
Ping @shin-
</issue>
<code>
[start of docker/tls.py]
1 import os
2
3 from . import errors
4 from .ssladapter import ssladapter
5
6
7 class TLSConfig(object):
8 cert = None
9 verify = None
10 ssl_version = None
11
12 def __init__(self, client_cert=None, ca_cert=None, verify=None,
13 ssl_version=None, assert_hostname=None):
14 # Argument compatibility/mapping with
15 # http://docs.docker.com/examples/https/
16 # This diverges from the Docker CLI in that users can specify 'tls'
17 # here, but also disable any public/default CA pool verification by
18 # leaving tls_verify=False
19
20 # urllib3 sets a default ssl_version if ssl_version is None
21 # http://tinyurl.com/kxga8hb
22 self.ssl_version = ssl_version
23 self.assert_hostname = assert_hostname
24
25 # "tls" and "tls_verify" must have both or neither cert/key files
26 # In either case, Alert the user when both are expected, but any are
27 # missing.
28
29 if client_cert:
30 try:
31 tls_cert, tls_key = client_cert
32 except ValueError:
33 raise errors.TLSParameterError(
34 'client_config must be a tuple of'
35 ' (client certificate, key file)'
36 )
37
38 if not (tls_cert and tls_key) or (not os.path.isfile(tls_cert) or
39 not os.path.isfile(tls_key)):
40 raise errors.TLSParameterError(
41 'Path to a certificate and key files must be provided'
42 ' through the client_config param'
43 )
44 self.cert = (tls_cert, tls_key)
45
46 # Either set verify to True (public/default CA checks) or to the
47 # path of a CA Cert file.
48 if verify is not None:
49 if not ca_cert:
50 self.verify = verify
51 elif os.path.isfile(ca_cert):
52 if not verify:
53 raise errors.TLSParameterError(
54 'verify can not be False when a CA cert is'
55 ' provided.'
56 )
57 self.verify = ca_cert
58 else:
59 raise errors.TLSParameterError(
60 'Invalid CA certificate provided for `tls_ca_cert`.'
61 )
62
63 def configure_client(self, client):
64 client.ssl_version = self.ssl_version
65 if self.verify is not None:
66 client.verify = self.verify
67 if self.cert:
68 client.cert = self.cert
69 client.mount('https://', ssladapter.SSLAdapter(
70 ssl_version=self.ssl_version,
71 assert_hostname=self.assert_hostname,
72 ))
73
[end of docker/tls.py]
[start of docker/ssladapter/ssladapter.py]
1 """ Resolves OpenSSL issues in some servers:
2 https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/
3 https://github.com/kennethreitz/requests/pull/799
4 """
5 from distutils.version import StrictVersion
6 from requests.adapters import HTTPAdapter
7 try:
8 import requests.packages.urllib3 as urllib3
9 except ImportError:
10 import urllib3
11
12
13 PoolManager = urllib3.poolmanager.PoolManager
14
15
16 class SSLAdapter(HTTPAdapter):
17 '''An HTTPS Transport Adapter that uses an arbitrary SSL version.'''
18 def __init__(self, ssl_version=None, assert_hostname=None, **kwargs):
19 self.ssl_version = ssl_version
20 self.assert_hostname = assert_hostname
21 super(SSLAdapter, self).__init__(**kwargs)
22
23 def init_poolmanager(self, connections, maxsize, block=False):
24 kwargs = {
25 'num_pools': connections,
26 'maxsize': maxsize,
27 'block': block,
28 'assert_hostname': self.assert_hostname,
29 }
30 if self.can_override_ssl_version():
31 kwargs['ssl_version'] = self.ssl_version
32
33 self.poolmanager = PoolManager(**kwargs)
34
35 def can_override_ssl_version(self):
36 urllib_ver = urllib3.__version__.split('-')[0]
37 if urllib_ver is None:
38 return False
39 if urllib_ver == 'dev':
40 return True
41 return StrictVersion(urllib_ver) > StrictVersion('1.5')
42
[end of docker/ssladapter/ssladapter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/ssladapter/ssladapter.py b/docker/ssladapter/ssladapter.py
--- a/docker/ssladapter/ssladapter.py
+++ b/docker/ssladapter/ssladapter.py
@@ -4,6 +4,7 @@
"""
from distutils.version import StrictVersion
from requests.adapters import HTTPAdapter
+import ssl
try:
import requests.packages.urllib3 as urllib3
except ImportError:
@@ -13,9 +14,19 @@
PoolManager = urllib3.poolmanager.PoolManager
+def get_max_tls_protocol():
+ protocols = ('PROTOCOL_TLSv1_2',
+ 'PROTOCOL_TLSv1_1',
+ 'PROTOCOL_TLSv1')
+ for proto in protocols:
+ if hasattr(ssl, proto):
+ return proto
+
+
class SSLAdapter(HTTPAdapter):
'''An HTTPS Transport Adapter that uses an arbitrary SSL version.'''
def __init__(self, ssl_version=None, assert_hostname=None, **kwargs):
+ ssl_version = ssl_version or get_max_tls_protocol()
self.ssl_version = ssl_version
self.assert_hostname = assert_hostname
super(SSLAdapter, self).__init__(**kwargs)
diff --git a/docker/tls.py b/docker/tls.py
--- a/docker/tls.py
+++ b/docker/tls.py
@@ -17,8 +17,11 @@
# here, but also disable any public/default CA pool verification by
# leaving tls_verify=False
- # urllib3 sets a default ssl_version if ssl_version is None
- # http://tinyurl.com/kxga8hb
+ # urllib3 sets a default ssl_version if ssl_version is None,
+ # but that default is the vulnerable PROTOCOL_SSLv23 selection,
+ # so we override the default with the maximum supported in the running
+ # Python interpeter up to TLS 1.2. (see: http://tinyurl.com/kxga8hb)
+ ssl_version = ssl_version or ssladapter.get_max_tls_protocol()
self.ssl_version = ssl_version
self.assert_hostname = assert_hostname
| {"golden_diff": "diff --git a/docker/ssladapter/ssladapter.py b/docker/ssladapter/ssladapter.py\n--- a/docker/ssladapter/ssladapter.py\n+++ b/docker/ssladapter/ssladapter.py\n@@ -4,6 +4,7 @@\n \"\"\"\n from distutils.version import StrictVersion\n from requests.adapters import HTTPAdapter\n+import ssl\n try:\n import requests.packages.urllib3 as urllib3\n except ImportError:\n@@ -13,9 +14,19 @@\n PoolManager = urllib3.poolmanager.PoolManager\n \n \n+def get_max_tls_protocol():\n+ protocols = ('PROTOCOL_TLSv1_2',\n+ 'PROTOCOL_TLSv1_1',\n+ 'PROTOCOL_TLSv1')\n+ for proto in protocols:\n+ if hasattr(ssl, proto):\n+ return proto\n+\n+\n class SSLAdapter(HTTPAdapter):\n '''An HTTPS Transport Adapter that uses an arbitrary SSL version.'''\n def __init__(self, ssl_version=None, assert_hostname=None, **kwargs):\n+ ssl_version = ssl_version or get_max_tls_protocol()\n self.ssl_version = ssl_version\n self.assert_hostname = assert_hostname\n super(SSLAdapter, self).__init__(**kwargs)\ndiff --git a/docker/tls.py b/docker/tls.py\n--- a/docker/tls.py\n+++ b/docker/tls.py\n@@ -17,8 +17,11 @@\n # here, but also disable any public/default CA pool verification by\n # leaving tls_verify=False\n \n- # urllib3 sets a default ssl_version if ssl_version is None\n- # http://tinyurl.com/kxga8hb\n+ # urllib3 sets a default ssl_version if ssl_version is None,\n+ # but that default is the vulnerable PROTOCOL_SSLv23 selection,\n+ # so we override the default with the maximum supported in the running\n+ # Python interpeter up to TLS 1.2. (see: http://tinyurl.com/kxga8hb)\n+ ssl_version = ssl_version or ssladapter.get_max_tls_protocol()\n self.ssl_version = ssl_version\n self.assert_hostname = assert_hostname\n", "issue": "Do not support sslv3 (poodle)\nIn Docker 1.3.1 (coming very soon), only TLS1.0+ will be supported.\nPing @shin- \n\n", "before_files": [{"content": "import os\n\nfrom . import errors\nfrom .ssladapter import ssladapter\n\n\nclass TLSConfig(object):\n cert = None\n verify = None\n ssl_version = None\n\n def __init__(self, client_cert=None, ca_cert=None, verify=None,\n ssl_version=None, assert_hostname=None):\n # Argument compatibility/mapping with\n # http://docs.docker.com/examples/https/\n # This diverges from the Docker CLI in that users can specify 'tls'\n # here, but also disable any public/default CA pool verification by\n # leaving tls_verify=False\n\n # urllib3 sets a default ssl_version if ssl_version is None\n # http://tinyurl.com/kxga8hb\n self.ssl_version = ssl_version\n self.assert_hostname = assert_hostname\n\n # \"tls\" and \"tls_verify\" must have both or neither cert/key files\n # In either case, Alert the user when both are expected, but any are\n # missing.\n\n if client_cert:\n try:\n tls_cert, tls_key = client_cert\n except ValueError:\n raise errors.TLSParameterError(\n 'client_config must be a tuple of'\n ' (client certificate, key file)'\n )\n\n if not (tls_cert and tls_key) or (not os.path.isfile(tls_cert) or\n not os.path.isfile(tls_key)):\n raise errors.TLSParameterError(\n 'Path to a certificate and key files must be provided'\n ' through the client_config param'\n )\n self.cert = (tls_cert, tls_key)\n\n # Either set verify to True (public/default CA checks) or to the\n # path of a CA Cert file.\n if verify is not None:\n if not ca_cert:\n self.verify = verify\n elif os.path.isfile(ca_cert):\n if not verify:\n raise errors.TLSParameterError(\n 'verify can not be False when a CA cert is'\n ' provided.'\n )\n self.verify = ca_cert\n else:\n raise errors.TLSParameterError(\n 'Invalid CA certificate provided for `tls_ca_cert`.'\n )\n\n def configure_client(self, client):\n client.ssl_version = self.ssl_version\n if self.verify is not None:\n client.verify = self.verify\n if self.cert:\n client.cert = self.cert\n client.mount('https://', ssladapter.SSLAdapter(\n ssl_version=self.ssl_version,\n assert_hostname=self.assert_hostname,\n ))\n", "path": "docker/tls.py"}, {"content": "\"\"\" Resolves OpenSSL issues in some servers:\n https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/\n https://github.com/kennethreitz/requests/pull/799\n\"\"\"\nfrom distutils.version import StrictVersion\nfrom requests.adapters import HTTPAdapter\ntry:\n import requests.packages.urllib3 as urllib3\nexcept ImportError:\n import urllib3\n\n\nPoolManager = urllib3.poolmanager.PoolManager\n\n\nclass SSLAdapter(HTTPAdapter):\n '''An HTTPS Transport Adapter that uses an arbitrary SSL version.'''\n def __init__(self, ssl_version=None, assert_hostname=None, **kwargs):\n self.ssl_version = ssl_version\n self.assert_hostname = assert_hostname\n super(SSLAdapter, self).__init__(**kwargs)\n\n def init_poolmanager(self, connections, maxsize, block=False):\n kwargs = {\n 'num_pools': connections,\n 'maxsize': maxsize,\n 'block': block,\n 'assert_hostname': self.assert_hostname,\n }\n if self.can_override_ssl_version():\n kwargs['ssl_version'] = self.ssl_version\n\n self.poolmanager = PoolManager(**kwargs)\n\n def can_override_ssl_version(self):\n urllib_ver = urllib3.__version__.split('-')[0]\n if urllib_ver is None:\n return False\n if urllib_ver == 'dev':\n return True\n return StrictVersion(urllib_ver) > StrictVersion('1.5')\n", "path": "docker/ssladapter/ssladapter.py"}]} | 1,656 | 459 |
gh_patches_debug_11025 | rasdani/github-patches | git_diff | Qiskit__qiskit-3555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't invert gate created from QuantumCircuit.to_gate
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**:
- **Python version**:
- **Operating system**:
### What is the current behavior?
When inverting a gate created from QuantumCircuit.to_gate the following exception is raised:
`ValueError: not enough values to unpack (expected 3, got 2)`
### Steps to reproduce the problem
```
qc = QuantumCircuit(1)
qc.x(0)
gate = qc.to_gate()
gate.inverse()
```
### What is the expected behavior?
### Suggested solutions
</issue>
<code>
[start of qiskit/converters/circuit_to_gate.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Helper function for converting a circuit to a gate"""
16
17 from qiskit.circuit.gate import Gate
18 from qiskit.circuit.quantumregister import QuantumRegister, Qubit
19 from qiskit.exceptions import QiskitError
20
21
22 def circuit_to_gate(circuit, parameter_map=None):
23 """Build a ``Gate`` object from a ``QuantumCircuit``.
24
25 The gate is anonymous (not tied to a named quantum register),
26 and so can be inserted into another circuit. The gate will
27 have the same string name as the circuit.
28
29 Args:
30 circuit (QuantumCircuit): the input circuit.
31 parameter_map (dict): For parameterized circuits, a mapping from
32 parameters in the circuit to parameters to be used in the gate.
33 If None, existing circuit parameters will also parameterize the
34 Gate.
35
36 Raises:
37 QiskitError: if circuit is non-unitary or if
38 parameter_map is not compatible with circuit
39
40 Return:
41 Gate: a Gate equivalent to the action of the
42 input circuit. Upon decomposition, this gate will
43 yield the components comprising the original circuit.
44 """
45 for inst, _, _ in circuit.data:
46 if not isinstance(inst, Gate):
47 raise QiskitError('One or more instructions in this instruction '
48 'cannot be converted to a gate')
49
50 if parameter_map is None:
51 parameter_dict = {p: p for p in circuit.parameters}
52 else:
53 parameter_dict = circuit._unroll_param_dict(parameter_map)
54
55 if parameter_dict.keys() != circuit.parameters:
56 raise QiskitError(('parameter_map should map all circuit parameters. '
57 'Circuit parameters: {}, parameter_map: {}').format(
58 circuit.parameters, parameter_dict))
59
60 gate = Gate(name=circuit.name,
61 num_qubits=sum([qreg.size for qreg in circuit.qregs]),
62 params=sorted(parameter_dict.values(), key=lambda p: p.name))
63 gate.condition = None
64
65 def find_bit_position(bit):
66 """find the index of a given bit (Register, int) within
67 a flat ordered list of bits of the circuit
68 """
69 if isinstance(bit, Qubit):
70 ordered_regs = circuit.qregs
71 else:
72 ordered_regs = circuit.cregs
73 reg_index = ordered_regs.index(bit.register)
74 return sum([reg.size for reg in ordered_regs[:reg_index]]) + bit.index
75
76 target = circuit.copy()
77 target._substitute_parameters(parameter_dict)
78
79 definition = target.data
80
81 if gate.num_qubits > 0:
82 q = QuantumRegister(gate.num_qubits, 'q')
83
84 definition = list(map(lambda x:
85 (x[0], list(map(lambda y: q[find_bit_position(y)], x[1]))),
86 definition))
87 gate.definition = definition
88
89 return gate
90
[end of qiskit/converters/circuit_to_gate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/converters/circuit_to_gate.py b/qiskit/converters/circuit_to_gate.py
--- a/qiskit/converters/circuit_to_gate.py
+++ b/qiskit/converters/circuit_to_gate.py
@@ -81,9 +81,14 @@
if gate.num_qubits > 0:
q = QuantumRegister(gate.num_qubits, 'q')
- definition = list(map(lambda x:
- (x[0], list(map(lambda y: q[find_bit_position(y)], x[1]))),
- definition))
+ # The 3rd parameter in the output tuple) is hard coded to [] because
+ # Gate objects do not have cregs set and we've verified that all
+ # instructions are gates
+ definition = list(map(
+ lambda x: (x[0],
+ list(map(lambda y: q[find_bit_position(y)], x[1])),
+ []),
+ definition))
gate.definition = definition
return gate
| {"golden_diff": "diff --git a/qiskit/converters/circuit_to_gate.py b/qiskit/converters/circuit_to_gate.py\n--- a/qiskit/converters/circuit_to_gate.py\n+++ b/qiskit/converters/circuit_to_gate.py\n@@ -81,9 +81,14 @@\n if gate.num_qubits > 0:\n q = QuantumRegister(gate.num_qubits, 'q')\n \n- definition = list(map(lambda x:\n- (x[0], list(map(lambda y: q[find_bit_position(y)], x[1]))),\n- definition))\n+ # The 3rd parameter in the output tuple) is hard coded to [] because\n+ # Gate objects do not have cregs set and we've verified that all\n+ # instructions are gates\n+ definition = list(map(\n+ lambda x: (x[0],\n+ list(map(lambda y: q[find_bit_position(y)], x[1])),\n+ []),\n+ definition))\n gate.definition = definition\n \n return gate\n", "issue": "Can't invert gate created from QuantumCircuit.to_gate\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**:\r\n- **Python version**:\r\n- **Operating system**:\r\n\r\n### What is the current behavior?\r\nWhen inverting a gate created from QuantumCircuit.to_gate the following exception is raised:\r\n\r\n`ValueError: not enough values to unpack (expected 3, got 2)`\r\n\r\n\r\n### Steps to reproduce the problem\r\n```\r\nqc = QuantumCircuit(1)\r\nqc.x(0)\r\ngate = qc.to_gate()\r\ngate.inverse()\r\n```\r\n\r\n### What is the expected behavior?\r\n\r\n\r\n\r\n### Suggested solutions\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Helper function for converting a circuit to a gate\"\"\"\n\nfrom qiskit.circuit.gate import Gate\nfrom qiskit.circuit.quantumregister import QuantumRegister, Qubit\nfrom qiskit.exceptions import QiskitError\n\n\ndef circuit_to_gate(circuit, parameter_map=None):\n \"\"\"Build a ``Gate`` object from a ``QuantumCircuit``.\n\n The gate is anonymous (not tied to a named quantum register),\n and so can be inserted into another circuit. The gate will\n have the same string name as the circuit.\n\n Args:\n circuit (QuantumCircuit): the input circuit.\n parameter_map (dict): For parameterized circuits, a mapping from\n parameters in the circuit to parameters to be used in the gate.\n If None, existing circuit parameters will also parameterize the\n Gate.\n\n Raises:\n QiskitError: if circuit is non-unitary or if\n parameter_map is not compatible with circuit\n\n Return:\n Gate: a Gate equivalent to the action of the\n input circuit. Upon decomposition, this gate will\n yield the components comprising the original circuit.\n \"\"\"\n for inst, _, _ in circuit.data:\n if not isinstance(inst, Gate):\n raise QiskitError('One or more instructions in this instruction '\n 'cannot be converted to a gate')\n\n if parameter_map is None:\n parameter_dict = {p: p for p in circuit.parameters}\n else:\n parameter_dict = circuit._unroll_param_dict(parameter_map)\n\n if parameter_dict.keys() != circuit.parameters:\n raise QiskitError(('parameter_map should map all circuit parameters. '\n 'Circuit parameters: {}, parameter_map: {}').format(\n circuit.parameters, parameter_dict))\n\n gate = Gate(name=circuit.name,\n num_qubits=sum([qreg.size for qreg in circuit.qregs]),\n params=sorted(parameter_dict.values(), key=lambda p: p.name))\n gate.condition = None\n\n def find_bit_position(bit):\n \"\"\"find the index of a given bit (Register, int) within\n a flat ordered list of bits of the circuit\n \"\"\"\n if isinstance(bit, Qubit):\n ordered_regs = circuit.qregs\n else:\n ordered_regs = circuit.cregs\n reg_index = ordered_regs.index(bit.register)\n return sum([reg.size for reg in ordered_regs[:reg_index]]) + bit.index\n\n target = circuit.copy()\n target._substitute_parameters(parameter_dict)\n\n definition = target.data\n\n if gate.num_qubits > 0:\n q = QuantumRegister(gate.num_qubits, 'q')\n\n definition = list(map(lambda x:\n (x[0], list(map(lambda y: q[find_bit_position(y)], x[1]))),\n definition))\n gate.definition = definition\n\n return gate\n", "path": "qiskit/converters/circuit_to_gate.py"}]} | 1,623 | 229 |
gh_patches_debug_62393 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-6772 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: New SHA256 hash takes extremely long time up to a point of of model load being unusable
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Newly added sha-256 hash takes extremely long time to calculate on model load up to a point where loading appears to hang (i've restarted server twice before i even let it run until completion)
Previously switching to a new model was sub 10 sec, now switching to a new model (that does not have hash stored already) takes 100-150 sec (and this is a high end system)!
And to make it worse, messages about hash calculation are only printed **after** it has been calculated, there is no progress info or anything to indicate system is actually doing anything for 2 min!
### Steps to reproduce the problem
1. Switch to a new model and wait for completion - it takes forever
### What should have happened?
Model load should **never** take over 2 minutes to complete.
### Commit where the problem happens
f8c512478568293155539f616dce26c5e4495055
### What platforms do you use to access UI ?
Windows, Linux
### What browsers do you use to access the UI ?
Google Chrome, Microsoft Edge
### Command Line Arguments
```Shell
--api --xformers
```
### Additional information, context and logs
Console log showing model load taking 142 seconds!
```text
Calculating sha256 for /home/vlado/dev/automatic/models/Stable-diffusion/mood-beautyreal-v01.ckpt: bcc0afd3b264ea028928187f56f70840f8d87ccf283b020982beba35d9c7e4ef
Loading weights [bcc0afd3b2] from /home/vlado/dev/automatic/models/Stable-diffusion/mood-beautyreal-v01.ckpt
Couldn't find VAE named vae-ft-mse-840000-ema-pruned; using None instead
Applying xformers cross attention optimization.
Weights loaded in 142.6s.
```
</issue>
<code>
[start of modules/hashes.py]
1 import hashlib
2 import json
3 import os.path
4
5 import filelock
6
7
8 cache_filename = "cache.json"
9 cache_data = None
10
11
12 def dump_cache():
13 with filelock.FileLock(cache_filename+".lock"):
14 with open(cache_filename, "w", encoding="utf8") as file:
15 json.dump(cache_data, file, indent=4)
16
17
18 def cache(subsection):
19 global cache_data
20
21 if cache_data is None:
22 with filelock.FileLock(cache_filename+".lock"):
23 if not os.path.isfile(cache_filename):
24 cache_data = {}
25 else:
26 with open(cache_filename, "r", encoding="utf8") as file:
27 cache_data = json.load(file)
28
29 s = cache_data.get(subsection, {})
30 cache_data[subsection] = s
31
32 return s
33
34
35 def calculate_sha256(filename):
36 hash_sha256 = hashlib.sha256()
37
38 with open(filename, "rb") as f:
39 for chunk in iter(lambda: f.read(4096), b""):
40 hash_sha256.update(chunk)
41
42 return hash_sha256.hexdigest()
43
44
45 def sha256_from_cache(filename, title):
46 hashes = cache("hashes")
47 ondisk_mtime = os.path.getmtime(filename)
48
49 if title not in hashes:
50 return None
51
52 cached_sha256 = hashes[title].get("sha256", None)
53 cached_mtime = hashes[title].get("mtime", 0)
54
55 if ondisk_mtime > cached_mtime or cached_sha256 is None:
56 return None
57
58 return cached_sha256
59
60
61 def sha256(filename, title):
62 hashes = cache("hashes")
63
64 sha256_value = sha256_from_cache(filename, title)
65 if sha256_value is not None:
66 return sha256_value
67
68 print(f"Calculating sha256 for {filename}: ", end='')
69 sha256_value = calculate_sha256(filename)
70 print(f"{sha256_value}")
71
72 hashes[title] = {
73 "mtime": os.path.getmtime(filename),
74 "sha256": sha256_value,
75 }
76
77 dump_cache()
78
79 return sha256_value
80
81
82
83
84
85
[end of modules/hashes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modules/hashes.py b/modules/hashes.py
--- a/modules/hashes.py
+++ b/modules/hashes.py
@@ -34,9 +34,10 @@
def calculate_sha256(filename):
hash_sha256 = hashlib.sha256()
+ blksize = 1024 * 1024
with open(filename, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
+ for chunk in iter(lambda: f.read(blksize), b""):
hash_sha256.update(chunk)
return hash_sha256.hexdigest()
| {"golden_diff": "diff --git a/modules/hashes.py b/modules/hashes.py\n--- a/modules/hashes.py\n+++ b/modules/hashes.py\n@@ -34,9 +34,10 @@\n \r\n def calculate_sha256(filename):\r\n hash_sha256 = hashlib.sha256()\r\n+ blksize = 1024 * 1024\r\n \r\n with open(filename, \"rb\") as f:\r\n- for chunk in iter(lambda: f.read(4096), b\"\"):\r\n+ for chunk in iter(lambda: f.read(blksize), b\"\"):\r\n hash_sha256.update(chunk)\r\n \r\n return hash_sha256.hexdigest()\n", "issue": "[Bug]: New SHA256 hash takes extremely long time up to a point of of model load being unusable\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues and checked the recent builds/commits\r\n\r\n### What happened?\r\n\r\nNewly added sha-256 hash takes extremely long time to calculate on model load up to a point where loading appears to hang (i've restarted server twice before i even let it run until completion) \r\n\r\nPreviously switching to a new model was sub 10 sec, now switching to a new model (that does not have hash stored already) takes 100-150 sec (and this is a high end system)!\r\n\r\nAnd to make it worse, messages about hash calculation are only printed **after** it has been calculated, there is no progress info or anything to indicate system is actually doing anything for 2 min!\r\n\r\n\r\n### Steps to reproduce the problem\r\n\r\n1. Switch to a new model and wait for completion - it takes forever\r\n\r\n\r\n### What should have happened?\r\n\r\nModel load should **never** take over 2 minutes to complete.\r\n\r\n### Commit where the problem happens\r\n\r\nf8c512478568293155539f616dce26c5e4495055\r\n\r\n### What platforms do you use to access UI ?\r\n\r\nWindows, Linux\r\n\r\n### What browsers do you use to access the UI ?\r\n\r\nGoogle Chrome, Microsoft Edge\r\n\r\n### Command Line Arguments\r\n\r\n```Shell\r\n--api --xformers\r\n```\r\n\r\n\r\n### Additional information, context and logs\r\n\r\nConsole log showing model load taking 142 seconds!\r\n\r\n```text\r\nCalculating sha256 for /home/vlado/dev/automatic/models/Stable-diffusion/mood-beautyreal-v01.ckpt: bcc0afd3b264ea028928187f56f70840f8d87ccf283b020982beba35d9c7e4ef\r\nLoading weights [bcc0afd3b2] from /home/vlado/dev/automatic/models/Stable-diffusion/mood-beautyreal-v01.ckpt\r\nCouldn't find VAE named vae-ft-mse-840000-ema-pruned; using None instead\r\nApplying xformers cross attention optimization.\r\nWeights loaded in 142.6s.\r\n```\r\n\n", "before_files": [{"content": "import hashlib\r\nimport json\r\nimport os.path\r\n\r\nimport filelock\r\n\r\n\r\ncache_filename = \"cache.json\"\r\ncache_data = None\r\n\r\n\r\ndef dump_cache():\r\n with filelock.FileLock(cache_filename+\".lock\"):\r\n with open(cache_filename, \"w\", encoding=\"utf8\") as file:\r\n json.dump(cache_data, file, indent=4)\r\n\r\n\r\ndef cache(subsection):\r\n global cache_data\r\n\r\n if cache_data is None:\r\n with filelock.FileLock(cache_filename+\".lock\"):\r\n if not os.path.isfile(cache_filename):\r\n cache_data = {}\r\n else:\r\n with open(cache_filename, \"r\", encoding=\"utf8\") as file:\r\n cache_data = json.load(file)\r\n\r\n s = cache_data.get(subsection, {})\r\n cache_data[subsection] = s\r\n\r\n return s\r\n\r\n\r\ndef calculate_sha256(filename):\r\n hash_sha256 = hashlib.sha256()\r\n\r\n with open(filename, \"rb\") as f:\r\n for chunk in iter(lambda: f.read(4096), b\"\"):\r\n hash_sha256.update(chunk)\r\n\r\n return hash_sha256.hexdigest()\r\n\r\n\r\ndef sha256_from_cache(filename, title):\r\n hashes = cache(\"hashes\")\r\n ondisk_mtime = os.path.getmtime(filename)\r\n\r\n if title not in hashes:\r\n return None\r\n\r\n cached_sha256 = hashes[title].get(\"sha256\", None)\r\n cached_mtime = hashes[title].get(\"mtime\", 0)\r\n\r\n if ondisk_mtime > cached_mtime or cached_sha256 is None:\r\n return None\r\n\r\n return cached_sha256\r\n\r\n\r\ndef sha256(filename, title):\r\n hashes = cache(\"hashes\")\r\n\r\n sha256_value = sha256_from_cache(filename, title)\r\n if sha256_value is not None:\r\n return sha256_value\r\n\r\n print(f\"Calculating sha256 for {filename}: \", end='')\r\n sha256_value = calculate_sha256(filename)\r\n print(f\"{sha256_value}\")\r\n\r\n hashes[title] = {\r\n \"mtime\": os.path.getmtime(filename),\r\n \"sha256\": sha256_value,\r\n }\r\n\r\n dump_cache()\r\n\r\n return sha256_value\r\n\r\n\r\n\r\n\r\n\r\n", "path": "modules/hashes.py"}]} | 1,735 | 149 |
gh_patches_debug_605 | rasdani/github-patches | git_diff | pex-tool__pex-1664 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.71
On the docket:
+ [x] Secure Pex against sha1 collision attacks. #1662
+ [x] Problems building venvs from certain distributions. #1656
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.70"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.70"
+__version__ = "2.1.71"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.70\"\n+__version__ = \"2.1.71\"\n", "issue": "Release 2.1.71\nOn the docket:\r\n+ [x] Secure Pex against sha1 collision attacks. #1662 \r\n+ [x] Problems building venvs from certain distributions. #1656\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.70\"\n", "path": "pex/version.py"}]} | 635 | 97 |
gh_patches_debug_881 | rasdani/github-patches | git_diff | python__peps-3263 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Infra: Check Sphinx warnings on CI
This is similar to what we have in the CPython repo, most recently: https://github.com/python/cpython/pull/106460, and will help us gradually remove Sphinx warnings, and avoid new ones being introduces.
It checks three things:
1. If a file previously had no warnings (not listed in `.nitignore`), and new ones are introduced, it fails
* -> To prevent regressions
2. If a file previously had warnings (it's lsited in `.nitignore`), but now has none, it fails and tells us to remove it from `.nitignore`
* To help us incrementally improve over time
3. If a file previously had warnings (it's listed in `.nitignore`), and still has warnings, it doesn't fail, but it will annotate the PR to show the warning
* To make them more visible, and give us the opportunity to fix them
I've intentionally kept the code and layout as close as possible to the CPython version (see https://github.com/python/cpython/tree/main/Doc/tools) for easier future maintenance.
<!-- readthedocs-preview pep-previews start -->
----
:books: Documentation preview :books:: https://pep-previews--3213.org.readthedocs.build/
<!-- readthedocs-preview pep-previews end -->
</issue>
<code>
[start of conf.py]
1 # This file is placed in the public domain or under the
2 # CC0-1.0-Universal license, whichever is more permissive.
3
4 """Configuration for building PEPs using Sphinx."""
5
6 from pathlib import Path
7 import sys
8
9 sys.path.append(str(Path(".").absolute()))
10
11 # -- Project information -----------------------------------------------------
12
13 project = "PEPs"
14 master_doc = "contents"
15
16 # -- General configuration ---------------------------------------------------
17
18 # Add any Sphinx extension module names here, as strings.
19 extensions = [
20 "pep_sphinx_extensions",
21 "sphinx.ext.intersphinx",
22 "sphinx.ext.githubpages",
23 ]
24
25 # The file extensions of source files. Sphinx uses these suffixes as sources.
26 source_suffix = {
27 ".rst": "pep",
28 ".txt": "pep",
29 }
30
31 # List of patterns (relative to source dir) to ignore when looking for source files.
32 include_patterns = [
33 # Required for Sphinx
34 "contents.rst",
35 # PEP files
36 "pep-????.rst",
37 "pep-????.txt",
38 # PEP ancillary files
39 "pep-????/*.rst",
40 # Documentation
41 "docs/*.rst",
42 ]
43 exclude_patterns = [
44 # PEP Template
45 "pep-0012/pep-NNNN.rst",
46 ]
47
48 # Intersphinx configuration
49 intersphinx_mapping = {
50 'python': ('https://docs.python.org/3/', None),
51 'packaging': ('https://packaging.python.org/en/latest/', None),
52 'devguide': ('https://devguide.python.org/', None),
53 'py3.11': ('https://docs.python.org/3.11/', None),
54 'py3.12': ('https://docs.python.org/3.12/', None),
55 }
56 intersphinx_disabled_reftypes = []
57
58 # -- Options for HTML output -------------------------------------------------
59
60 # HTML output settings
61 html_math_renderer = "maths_to_html" # Maths rendering
62
63 # Theme settings
64 html_theme_path = ["pep_sphinx_extensions"]
65 html_theme = "pep_theme" # The actual theme directory (child of html_theme_path)
66 html_use_index = False # Disable index (we use PEP 0)
67 html_style = "" # must be defined here or in theme.conf, but is unused
68 html_permalinks = False # handled in the PEPContents transform
69 html_baseurl = "https://peps.python.org" # to create the CNAME file
70 gettext_auto_build = False # speed-ups
71
72 templates_path = ["pep_sphinx_extensions/pep_theme/templates"] # Theme template relative paths from `confdir`
73
[end of conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conf.py b/conf.py
--- a/conf.py
+++ b/conf.py
@@ -45,6 +45,9 @@
"pep-0012/pep-NNNN.rst",
]
+# Warn on missing references
+nitpicky = True
+
# Intersphinx configuration
intersphinx_mapping = {
'python': ('https://docs.python.org/3/', None),
| {"golden_diff": "diff --git a/conf.py b/conf.py\n--- a/conf.py\n+++ b/conf.py\n@@ -45,6 +45,9 @@\n \"pep-0012/pep-NNNN.rst\",\n ]\n \n+# Warn on missing references\n+nitpicky = True\n+\n # Intersphinx configuration\n intersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n", "issue": "Infra: Check Sphinx warnings on CI\nThis is similar to what we have in the CPython repo, most recently: https://github.com/python/cpython/pull/106460, and will help us gradually remove Sphinx warnings, and avoid new ones being introduces.\r\n\r\nIt checks three things:\r\n\r\n1. If a file previously had no warnings (not listed in `.nitignore`), and new ones are introduced, it fails\r\n * -> To prevent regressions\r\n\r\n2. If a file previously had warnings (it's lsited in `.nitignore`), but now has none, it fails and tells us to remove it from `.nitignore`\r\n * To help us incrementally improve over time\r\n\r\n3. If a file previously had warnings (it's listed in `.nitignore`), and still has warnings, it doesn't fail, but it will annotate the PR to show the warning\r\n * To make them more visible, and give us the opportunity to fix them\r\n\r\nI've intentionally kept the code and layout as close as possible to the CPython version (see https://github.com/python/cpython/tree/main/Doc/tools) for easier future maintenance.\r\n\r\n\r\n\r\n<!-- readthedocs-preview pep-previews start -->\r\n----\n:books: Documentation preview :books:: https://pep-previews--3213.org.readthedocs.build/\n\r\n<!-- readthedocs-preview pep-previews end -->\n", "before_files": [{"content": "# This file is placed in the public domain or under the\n# CC0-1.0-Universal license, whichever is more permissive.\n\n\"\"\"Configuration for building PEPs using Sphinx.\"\"\"\n\nfrom pathlib import Path\nimport sys\n\nsys.path.append(str(Path(\".\").absolute()))\n\n# -- Project information -----------------------------------------------------\n\nproject = \"PEPs\"\nmaster_doc = \"contents\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings.\nextensions = [\n \"pep_sphinx_extensions\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.githubpages\",\n]\n\n# The file extensions of source files. Sphinx uses these suffixes as sources.\nsource_suffix = {\n \".rst\": \"pep\",\n \".txt\": \"pep\",\n}\n\n# List of patterns (relative to source dir) to ignore when looking for source files.\ninclude_patterns = [\n # Required for Sphinx\n \"contents.rst\",\n # PEP files\n \"pep-????.rst\",\n \"pep-????.txt\",\n # PEP ancillary files\n \"pep-????/*.rst\",\n # Documentation\n \"docs/*.rst\",\n]\nexclude_patterns = [\n # PEP Template\n \"pep-0012/pep-NNNN.rst\",\n]\n\n# Intersphinx configuration\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'packaging': ('https://packaging.python.org/en/latest/', None),\n 'devguide': ('https://devguide.python.org/', None),\n 'py3.11': ('https://docs.python.org/3.11/', None),\n 'py3.12': ('https://docs.python.org/3.12/', None),\n}\nintersphinx_disabled_reftypes = []\n\n# -- Options for HTML output -------------------------------------------------\n\n# HTML output settings\nhtml_math_renderer = \"maths_to_html\" # Maths rendering\n\n# Theme settings\nhtml_theme_path = [\"pep_sphinx_extensions\"]\nhtml_theme = \"pep_theme\" # The actual theme directory (child of html_theme_path)\nhtml_use_index = False # Disable index (we use PEP 0)\nhtml_style = \"\" # must be defined here or in theme.conf, but is unused\nhtml_permalinks = False # handled in the PEPContents transform\nhtml_baseurl = \"https://peps.python.org\" # to create the CNAME file\ngettext_auto_build = False # speed-ups\n\ntemplates_path = [\"pep_sphinx_extensions/pep_theme/templates\"] # Theme template relative paths from `confdir`\n", "path": "conf.py"}]} | 1,526 | 93 |
gh_patches_debug_3260 | rasdani/github-patches | git_diff | getredash__redash-5623 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Loading schema for Sqlite DB with "Order" column name fails
### Issue Summary
I added a Sqlite Database which has an column with the name `Order`.
When I try to create a query, the error `Schema refresh failed.` comes up.
### Steps to Reproduce
1. Add an Sqlite Database which has a column with the name `Order`
2. Try to create a query
3. Get the error `Schema refresh failed.`
### Technical details:
* Redash Version: cloned from master
* Browser/OS: Brave Browser & Ubuntu 18.1
* How did you install Redash: built from source
</issue>
<code>
[start of redash/query_runner/sqlite.py]
1 import logging
2 import sqlite3
3
4 from redash.query_runner import BaseSQLQueryRunner, register, JobTimeoutException
5 from redash.utils import json_dumps, json_loads
6
7 logger = logging.getLogger(__name__)
8
9
10 class Sqlite(BaseSQLQueryRunner):
11 noop_query = "pragma quick_check"
12
13 @classmethod
14 def configuration_schema(cls):
15 return {
16 "type": "object",
17 "properties": {"dbpath": {"type": "string", "title": "Database Path"}},
18 "required": ["dbpath"],
19 }
20
21 @classmethod
22 def type(cls):
23 return "sqlite"
24
25 def __init__(self, configuration):
26 super(Sqlite, self).__init__(configuration)
27
28 self._dbpath = self.configuration["dbpath"]
29
30 def _get_tables(self, schema):
31 query_table = "select tbl_name from sqlite_master where type='table'"
32 query_columns = "PRAGMA table_info(%s)"
33
34 results, error = self.run_query(query_table, None)
35
36 if error is not None:
37 raise Exception("Failed getting schema.")
38
39 results = json_loads(results)
40
41 for row in results["rows"]:
42 table_name = row["tbl_name"]
43 schema[table_name] = {"name": table_name, "columns": []}
44 results_table, error = self.run_query(query_columns % (table_name,), None)
45 if error is not None:
46 raise Exception("Failed getting schema.")
47
48 results_table = json_loads(results_table)
49 for row_column in results_table["rows"]:
50 schema[table_name]["columns"].append(row_column["name"])
51
52 return list(schema.values())
53
54 def run_query(self, query, user):
55 connection = sqlite3.connect(self._dbpath)
56
57 cursor = connection.cursor()
58
59 try:
60 cursor.execute(query)
61
62 if cursor.description is not None:
63 columns = self.fetch_columns([(i[0], None) for i in cursor.description])
64 rows = [
65 dict(zip((column["name"] for column in columns), row))
66 for row in cursor
67 ]
68
69 data = {"columns": columns, "rows": rows}
70 error = None
71 json_data = json_dumps(data)
72 else:
73 error = "Query completed but it returned no data."
74 json_data = None
75 except (KeyboardInterrupt, JobTimeoutException):
76 connection.cancel()
77 raise
78 finally:
79 connection.close()
80 return json_data, error
81
82
83 register(Sqlite)
84
[end of redash/query_runner/sqlite.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/query_runner/sqlite.py b/redash/query_runner/sqlite.py
--- a/redash/query_runner/sqlite.py
+++ b/redash/query_runner/sqlite.py
@@ -29,7 +29,7 @@
def _get_tables(self, schema):
query_table = "select tbl_name from sqlite_master where type='table'"
- query_columns = "PRAGMA table_info(%s)"
+ query_columns = "PRAGMA table_info(\"%s\")"
results, error = self.run_query(query_table, None)
| {"golden_diff": "diff --git a/redash/query_runner/sqlite.py b/redash/query_runner/sqlite.py\n--- a/redash/query_runner/sqlite.py\n+++ b/redash/query_runner/sqlite.py\n@@ -29,7 +29,7 @@\n \n def _get_tables(self, schema):\n query_table = \"select tbl_name from sqlite_master where type='table'\"\n- query_columns = \"PRAGMA table_info(%s)\"\n+ query_columns = \"PRAGMA table_info(\\\"%s\\\")\"\n \n results, error = self.run_query(query_table, None)\n", "issue": "Loading schema for Sqlite DB with \"Order\" column name fails\n### Issue Summary\r\n\r\nI added a Sqlite Database which has an column with the name `Order`.\r\nWhen I try to create a query, the error `Schema refresh failed.` comes up.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Add an Sqlite Database which has a column with the name `Order`\r\n2. Try to create a query\r\n3. Get the error `Schema refresh failed.`\r\n\r\n\r\n### Technical details:\r\n\r\n* Redash Version: cloned from master\r\n* Browser/OS: Brave Browser & Ubuntu 18.1\r\n* How did you install Redash: built from source\r\n\n", "before_files": [{"content": "import logging\nimport sqlite3\n\nfrom redash.query_runner import BaseSQLQueryRunner, register, JobTimeoutException\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\n\nclass Sqlite(BaseSQLQueryRunner):\n noop_query = \"pragma quick_check\"\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\"dbpath\": {\"type\": \"string\", \"title\": \"Database Path\"}},\n \"required\": [\"dbpath\"],\n }\n\n @classmethod\n def type(cls):\n return \"sqlite\"\n\n def __init__(self, configuration):\n super(Sqlite, self).__init__(configuration)\n\n self._dbpath = self.configuration[\"dbpath\"]\n\n def _get_tables(self, schema):\n query_table = \"select tbl_name from sqlite_master where type='table'\"\n query_columns = \"PRAGMA table_info(%s)\"\n\n results, error = self.run_query(query_table, None)\n\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results = json_loads(results)\n\n for row in results[\"rows\"]:\n table_name = row[\"tbl_name\"]\n schema[table_name] = {\"name\": table_name, \"columns\": []}\n results_table, error = self.run_query(query_columns % (table_name,), None)\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results_table = json_loads(results_table)\n for row_column in results_table[\"rows\"]:\n schema[table_name][\"columns\"].append(row_column[\"name\"])\n\n return list(schema.values())\n\n def run_query(self, query, user):\n connection = sqlite3.connect(self._dbpath)\n\n cursor = connection.cursor()\n\n try:\n cursor.execute(query)\n\n if cursor.description is not None:\n columns = self.fetch_columns([(i[0], None) for i in cursor.description])\n rows = [\n dict(zip((column[\"name\"] for column in columns), row))\n for row in cursor\n ]\n\n data = {\"columns\": columns, \"rows\": rows}\n error = None\n json_data = json_dumps(data)\n else:\n error = \"Query completed but it returned no data.\"\n json_data = None\n except (KeyboardInterrupt, JobTimeoutException):\n connection.cancel()\n raise\n finally:\n connection.close()\n return json_data, error\n\n\nregister(Sqlite)\n", "path": "redash/query_runner/sqlite.py"}]} | 1,366 | 121 |
gh_patches_debug_11482 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-228 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Capture details of Celery Chains and Chords
Celery has some more advanced features to join multiple jobs into one. The agent needs testing and investigation into how they can be best instrumented.
</issue>
<code>
[start of src/scout_apm/celery.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import datetime as dt
5
6 from celery.signals import before_task_publish, task_postrun, task_prerun
7
8 import scout_apm.core
9 from scout_apm.compat import datetime_to_timestamp
10 from scout_apm.core.tracked_request import TrackedRequest
11
12
13 def before_publish_callback(headers=None, properties=None, **kwargs):
14 if "scout_task_start" not in headers:
15 headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
16
17
18 def prerun_callback(task=None, **kwargs):
19 tracked_request = TrackedRequest.instance()
20 tracked_request.mark_real_request()
21
22 start = getattr(task.request, "scout_task_start", None)
23 if start is not None:
24 now = datetime_to_timestamp(dt.datetime.utcnow())
25 try:
26 queue_time = now - start
27 except TypeError:
28 pass
29 else:
30 tracked_request.tag("queue_time", queue_time)
31
32 delivery_info = task.request.delivery_info
33 tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
34 tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
35 tracked_request.tag("routing_key", delivery_info.get("routing_key", "unknown"))
36 tracked_request.tag("queue", delivery_info.get("queue", "unknown"))
37
38 tracked_request.start_span(operation=("Job/" + task.name))
39
40
41 def postrun_callback(task=None, **kwargs):
42 tracked_request = TrackedRequest.instance()
43 tracked_request.stop_span()
44
45
46 def install():
47 installed = scout_apm.core.install()
48 if not installed:
49 return
50
51 before_task_publish.connect(before_publish_callback)
52 task_prerun.connect(prerun_callback)
53 task_postrun.connect(postrun_callback)
54
55
56 def uninstall():
57 before_task_publish.disconnect(before_publish_callback)
58 task_prerun.disconnect(prerun_callback)
59 task_postrun.disconnect(postrun_callback)
60
[end of src/scout_apm/celery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py
--- a/src/scout_apm/celery.py
+++ b/src/scout_apm/celery.py
@@ -29,6 +29,13 @@
else:
tracked_request.tag("queue_time", queue_time)
+ task_id = getattr(task.request, "id", None)
+ if task_id:
+ tracked_request.tag("task_id", task_id)
+ parent_task_id = getattr(task.request, "parent_id", None)
+ if parent_task_id:
+ tracked_request.tag("parent_task_id", parent_task_id)
+
delivery_info = task.request.delivery_info
tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
| {"golden_diff": "diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py\n--- a/src/scout_apm/celery.py\n+++ b/src/scout_apm/celery.py\n@@ -29,6 +29,13 @@\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n \n+ task_id = getattr(task.request, \"id\", None)\n+ if task_id:\n+ tracked_request.tag(\"task_id\", task_id)\n+ parent_task_id = getattr(task.request, \"parent_id\", None)\n+ if parent_task_id:\n+ tracked_request.tag(\"parent_task_id\", parent_task_id)\n+\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n", "issue": "Capture details of Celery Chains and Chords\nCelery has some more advanced features to join multiple jobs into one. The agent needs testing and investigation into how they can be best instrumented.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nfrom celery.signals import before_task_publish, task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.mark_real_request()\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install():\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_publish_callback)\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_publish_callback)\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "path": "src/scout_apm/celery.py"}]} | 1,118 | 191 |
gh_patches_debug_20353 | rasdani/github-patches | git_diff | WeblateOrg__weblate-10604 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Some languages don't have all strings available for translation
### Describe the issue
My project is here: https://hosted.weblate.org/projects/feeder/android-strings
A few languages Polish, French and Chinese (Simplified), are missing a dozen strings.
One example is the string `other_minutes` which is not available for translation in these languages.
I have tried re-scanning strings and similar with no change.
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
Not sure how to reproduce it but it is happening here :https://hosted.weblate.org/projects/feeder/android-strings
look at string `other_minutes`, it is missing from Polish, French, and Chinese (Simplified)
### Expected behavior
All strings should be available for translation in all languages.
### Screenshots
_No response_
### Exception traceback
_No response_
### How do you run Weblate?
weblate.org service
### Weblate versions
_No response_
### Weblate deploy checks
_No response_
### Additional context
_No response_
</issue>
<code>
[start of weblate/addons/cleanup.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from django.utils.translation import gettext_lazy
6
7 from weblate.addons.base import UpdateBaseAddon
8 from weblate.addons.events import EVENT_POST_COMMIT, EVENT_POST_UPDATE, EVENT_PRE_COMMIT
9 from weblate.trans.exceptions import FileParseError
10
11
12 class BaseCleanupAddon(UpdateBaseAddon):
13 @classmethod
14 def can_install(cls, component, user):
15 if not component.has_template():
16 return False
17 return super().can_install(component, user)
18
19
20 class CleanupAddon(BaseCleanupAddon):
21 name = "weblate.cleanup.generic"
22 verbose = gettext_lazy("Cleanup translation files")
23 description = gettext_lazy(
24 "Update all translation files to match the monolingual base file. "
25 "For most file formats, this means removing stale translation keys "
26 "no longer present in the base file."
27 )
28 icon = "eraser.svg"
29 events = (EVENT_PRE_COMMIT, EVENT_POST_UPDATE)
30
31 def update_translations(self, component, previous_head):
32 for translation in self.iterate_translations(component):
33 filenames = translation.store.cleanup_unused()
34 if filenames is None:
35 continue
36 self.extra_files.extend(filenames)
37 translation.store_hash()
38
39 def pre_commit(self, translation, author):
40 if translation.is_source and not translation.component.intermediate:
41 return
42 try:
43 filenames = translation.store.cleanup_unused()
44 except FileParseError:
45 return
46 if filenames is not None:
47 self.extra_files.extend(filenames)
48 translation.store_hash()
49
50
51 class RemoveBlankAddon(BaseCleanupAddon):
52 name = "weblate.cleanup.blank"
53 verbose = gettext_lazy("Remove blank strings")
54 description = gettext_lazy(
55 "Removes strings without a translation from translation files."
56 )
57 events = (EVENT_POST_COMMIT, EVENT_POST_UPDATE)
58 icon = "eraser.svg"
59
60 def update_translations(self, component, previous_head):
61 for translation in self.iterate_translations(component):
62 filenames = translation.store.cleanup_blank()
63 if filenames is None:
64 continue
65 self.extra_files.extend(filenames)
66 translation.store_hash()
67
68 def post_commit(self, component):
69 self.post_update(component, None, skip_push=True)
70
[end of weblate/addons/cleanup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/weblate/addons/cleanup.py b/weblate/addons/cleanup.py
--- a/weblate/addons/cleanup.py
+++ b/weblate/addons/cleanup.py
@@ -34,7 +34,7 @@
if filenames is None:
continue
self.extra_files.extend(filenames)
- translation.store_hash()
+ # Do not update hash here as this is just before parsing updated files
def pre_commit(self, translation, author):
if translation.is_source and not translation.component.intermediate:
@@ -63,7 +63,9 @@
if filenames is None:
continue
self.extra_files.extend(filenames)
- translation.store_hash()
+ # Do not update hash in post_update, only in post_commit
+ if previous_head == "weblate:post-commit":
+ translation.store_hash()
def post_commit(self, component):
- self.post_update(component, None, skip_push=True)
+ self.post_update(component, "weblate:post-commit", skip_push=True)
| {"golden_diff": "diff --git a/weblate/addons/cleanup.py b/weblate/addons/cleanup.py\n--- a/weblate/addons/cleanup.py\n+++ b/weblate/addons/cleanup.py\n@@ -34,7 +34,7 @@\n if filenames is None:\n continue\n self.extra_files.extend(filenames)\n- translation.store_hash()\n+ # Do not update hash here as this is just before parsing updated files\n \n def pre_commit(self, translation, author):\n if translation.is_source and not translation.component.intermediate:\n@@ -63,7 +63,9 @@\n if filenames is None:\n continue\n self.extra_files.extend(filenames)\n- translation.store_hash()\n+ # Do not update hash in post_update, only in post_commit\n+ if previous_head == \"weblate:post-commit\":\n+ translation.store_hash()\n \n def post_commit(self, component):\n- self.post_update(component, None, skip_push=True)\n+ self.post_update(component, \"weblate:post-commit\", skip_push=True)\n", "issue": "Some languages don't have all strings available for translation\n### Describe the issue\n\nMy project is here: https://hosted.weblate.org/projects/feeder/android-strings\r\n\r\nA few languages Polish, French and Chinese (Simplified), are missing a dozen strings.\r\n\r\nOne example is the string `other_minutes` which is not available for translation in these languages.\r\n\r\nI have tried re-scanning strings and similar with no change.\n\n### I already tried\n\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\n- [X] I've searched for similar issues in this repository.\n\n### Steps to reproduce the behavior\n\nNot sure how to reproduce it but it is happening here :https://hosted.weblate.org/projects/feeder/android-strings\r\n\r\nlook at string `other_minutes`, it is missing from Polish, French, and Chinese (Simplified)\n\n### Expected behavior\n\nAll strings should be available for translation in all languages.\n\n### Screenshots\n\n_No response_\n\n### Exception traceback\n\n_No response_\n\n### How do you run Weblate?\n\nweblate.org service\n\n### Weblate versions\n\n_No response_\n\n### Weblate deploy checks\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom django.utils.translation import gettext_lazy\n\nfrom weblate.addons.base import UpdateBaseAddon\nfrom weblate.addons.events import EVENT_POST_COMMIT, EVENT_POST_UPDATE, EVENT_PRE_COMMIT\nfrom weblate.trans.exceptions import FileParseError\n\n\nclass BaseCleanupAddon(UpdateBaseAddon):\n @classmethod\n def can_install(cls, component, user):\n if not component.has_template():\n return False\n return super().can_install(component, user)\n\n\nclass CleanupAddon(BaseCleanupAddon):\n name = \"weblate.cleanup.generic\"\n verbose = gettext_lazy(\"Cleanup translation files\")\n description = gettext_lazy(\n \"Update all translation files to match the monolingual base file. \"\n \"For most file formats, this means removing stale translation keys \"\n \"no longer present in the base file.\"\n )\n icon = \"eraser.svg\"\n events = (EVENT_PRE_COMMIT, EVENT_POST_UPDATE)\n\n def update_translations(self, component, previous_head):\n for translation in self.iterate_translations(component):\n filenames = translation.store.cleanup_unused()\n if filenames is None:\n continue\n self.extra_files.extend(filenames)\n translation.store_hash()\n\n def pre_commit(self, translation, author):\n if translation.is_source and not translation.component.intermediate:\n return\n try:\n filenames = translation.store.cleanup_unused()\n except FileParseError:\n return\n if filenames is not None:\n self.extra_files.extend(filenames)\n translation.store_hash()\n\n\nclass RemoveBlankAddon(BaseCleanupAddon):\n name = \"weblate.cleanup.blank\"\n verbose = gettext_lazy(\"Remove blank strings\")\n description = gettext_lazy(\n \"Removes strings without a translation from translation files.\"\n )\n events = (EVENT_POST_COMMIT, EVENT_POST_UPDATE)\n icon = \"eraser.svg\"\n\n def update_translations(self, component, previous_head):\n for translation in self.iterate_translations(component):\n filenames = translation.store.cleanup_blank()\n if filenames is None:\n continue\n self.extra_files.extend(filenames)\n translation.store_hash()\n\n def post_commit(self, component):\n self.post_update(component, None, skip_push=True)\n", "path": "weblate/addons/cleanup.py"}]} | 1,423 | 231 |
gh_patches_debug_3202 | rasdani/github-patches | git_diff | hylang__hy-2190 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `project_urls` to `setup.py`
This would allow us to provide links to our GitHub repository etc. in a sidebar on PyPI.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import find_packages, setup
4 import fastentrypoints # Monkey-patches setuptools.
5
6 from get_version import __version__
7
8 os.chdir(os.path.split(os.path.abspath(__file__))[0])
9
10 PKG = "hy"
11
12 long_description = """Hy is a Python <--> Lisp layer. It helps
13 make things work nicer, and lets Python and the Hy lisp variant play
14 nice together. """
15
16 setup(
17 name=PKG,
18 version=__version__,
19 install_requires=[
20 'rply>=0.7.7',
21 'funcparserlib>=1.0.0a0',
22 'colorama',
23 'astor>=0.8 ; python_version < "3.9"',
24 ],
25 python_requires = '>= 3.7, <= 3.10',
26 entry_points={
27 'console_scripts': [
28 'hy = hy.cmdline:hy_main',
29 'hy3 = hy.cmdline:hy_main',
30 'hyc = hy.cmdline:hyc_main',
31 'hyc3 = hy.cmdline:hyc_main',
32 'hy2py = hy.cmdline:hy2py_main',
33 'hy2py3 = hy.cmdline:hy2py_main',
34 ]
35 },
36 packages=find_packages(exclude=['tests*']),
37 package_data={
38 'hy': ['*.hy', '__pycache__/*'],
39 'hy.contrib': ['*.hy', '__pycache__/*'],
40 'hy.core': ['*.hy', '__pycache__/*'],
41 'hy.extra': ['*.hy', '__pycache__/*'],
42 },
43 data_files=[
44 ('get_version', ['get_version.py'])
45 ],
46 author="Paul Tagliamonte",
47 author_email="[email protected]",
48 long_description=long_description,
49 description='Lisp and Python love each other.',
50 license="Expat",
51 url="http://hylang.org/",
52 platforms=['any'],
53 classifiers=[
54 "Development Status :: 4 - Beta",
55 "Intended Audience :: Developers",
56 "License :: DFSG approved",
57 "License :: OSI Approved :: MIT License", # Really "Expat". Ugh.
58 "Operating System :: OS Independent",
59 "Programming Language :: Lisp",
60 "Programming Language :: Python",
61 "Programming Language :: Python :: 3",
62 "Programming Language :: Python :: 3.7",
63 "Programming Language :: Python :: 3.8",
64 "Programming Language :: Python :: 3.9",
65 "Programming Language :: Python :: 3.10",
66 "Topic :: Software Development :: Code Generators",
67 "Topic :: Software Development :: Compilers",
68 "Topic :: Software Development :: Libraries",
69 ]
70 )
71
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -66,5 +66,9 @@
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Compilers",
"Topic :: Software Development :: Libraries",
- ]
+ ],
+ project_urls={
+ "Documentation": "https://docs.hylang.org/",
+ "Source": "https://github.com/hylang/hy",
+ }
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -66,5 +66,9 @@\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n- ]\n+ ],\n+ project_urls={\n+ \"Documentation\": \"https://docs.hylang.org/\",\n+ \"Source\": \"https://github.com/hylang/hy\",\n+ }\n )\n", "issue": "Add `project_urls` to `setup.py`\nThis would allow us to provide links to our GitHub repository etc. in a sidebar on PyPI.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import find_packages, setup\nimport fastentrypoints # Monkey-patches setuptools.\n\nfrom get_version import __version__\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. \"\"\"\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=[\n 'rply>=0.7.7',\n 'funcparserlib>=1.0.0a0',\n 'colorama',\n 'astor>=0.8 ; python_version < \"3.9\"',\n ],\n python_requires = '>= 3.7, <= 3.10',\n entry_points={\n 'console_scripts': [\n 'hy = hy.cmdline:hy_main',\n 'hy3 = hy.cmdline:hy_main',\n 'hyc = hy.cmdline:hyc_main',\n 'hyc3 = hy.cmdline:hyc_main',\n 'hy2py = hy.cmdline:hy2py_main',\n 'hy2py3 = hy.cmdline:hy2py_main',\n ]\n },\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'hy': ['*.hy', '__pycache__/*'],\n 'hy.contrib': ['*.hy', '__pycache__/*'],\n 'hy.core': ['*.hy', '__pycache__/*'],\n 'hy.extra': ['*.hy', '__pycache__/*'],\n },\n data_files=[\n ('get_version', ['get_version.py'])\n ],\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description='Lisp and Python love each other.',\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=['any'],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ]\n)\n", "path": "setup.py"}]} | 1,276 | 108 |
gh_patches_debug_39739 | rasdani/github-patches | git_diff | streamlink__streamlink-1878 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem with live.russia.tv
I have Problem with the Plugin live.russia.tv :
```
#SERVICE 4097:0:1:0:0:0:224:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/76:Москва 24 HD
#DESCRIPTION Москва 24 HD
#SERVICE 4097:0:1:0:0:0:449:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/1:Rossija 1 HD
#DESCRIPTION Rossija 1 HD
#SERVICE 4097:0:1:0:0:0:445:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/82:Rossija RTR HD
#DESCRIPTION Rossija RTR HD
#SERVICE 4097:0:1:0:0:0:447:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/3:Rossija 24 HD
#DESCRIPTION Rossija 24 HD
```
The Channels not working on streamlink - from PC work the channels ok.
</issue>
<code>
[start of src/streamlink/plugins/live_russia_tv.py]
1 import re
2 from streamlink.plugin import Plugin
3 from streamlink.plugin.api import http
4 from streamlink.stream import HLSStream
5
6 class LiveRussia(Plugin):
7 url_re = re.compile(r"https?://(?:www.)?live.russia.tv/index/index/channel_id/")
8 iframe_re = re.compile(r"""<iframe[^>]*src=["']([^'"]+)["'][^>]*>""")
9 stream_re = re.compile(r"""window.pl.data.*m3u8":"(.*)"}.*};""")
10
11 @classmethod
12 def can_handle_url(cls, url):
13 return cls.url_re.match(url) is not None
14
15 def _get_streams(self):
16 res = http.get(self.url)
17 iframe_result = re.search(self.iframe_re, res.text)
18
19 if not iframe_result:
20 self.logger.error("The requested content is unavailable.")
21 return
22
23 res = http.get(iframe_result.group(1))
24 stream_url_result = re.search(self.stream_re, res.text)
25
26 if not stream_url_result:
27 self.logger.error("The requested content is unavailable.")
28 return
29
30 return HLSStream.parse_variant_playlist(self.session, stream_url_result.group(1))
31
32
33 __plugin__ = LiveRussia
[end of src/streamlink/plugins/live_russia_tv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/live_russia_tv.py b/src/streamlink/plugins/live_russia_tv.py
--- a/src/streamlink/plugins/live_russia_tv.py
+++ b/src/streamlink/plugins/live_russia_tv.py
@@ -1,33 +1,81 @@
+import logging
import re
+
from streamlink.plugin import Plugin
-from streamlink.plugin.api import http
-from streamlink.stream import HLSStream
+from streamlink.plugin.api import http, validate
+from streamlink.plugin.api.utils import itertags
+from streamlink.stream import HLSStream, HTTPStream
+
+log = logging.getLogger(__name__)
+
class LiveRussia(Plugin):
- url_re = re.compile(r"https?://(?:www.)?live.russia.tv/index/index/channel_id/")
- iframe_re = re.compile(r"""<iframe[^>]*src=["']([^'"]+)["'][^>]*>""")
- stream_re = re.compile(r"""window.pl.data.*m3u8":"(.*)"}.*};""")
+ url_re = re.compile(r"https?://(?:www\.|live\.)?russia.tv")
+ _data_re = re.compile(r"""window\.pl\.data\.([\w_]+)\s*=\s*['"]?(.*?)['"]?;""")
@classmethod
def can_handle_url(cls, url):
return cls.url_re.match(url) is not None
+ def _get_iframe_url(self, url):
+ res = http.get(url)
+ for iframe in itertags(res.text, 'iframe'):
+ src = iframe.attributes.get("src")
+ if src:
+ return src
+
+ def _get_stream_info_url(self, url):
+ data = {}
+ res = http.get(url)
+ for m in self._data_re.finditer(res.text):
+ data[m.group(1)] = m.group(2)
+
+ log.debug("Got pl_data={0}".format(data))
+
+ if data:
+ if data["isVod"] == '0':
+ return "https:{domain}/iframe/datalive/id/{id}/sid/{sid}".format(**data)
+ else:
+ return "https:{domain}/iframe/datavideo/id/{id}/sid/{sid}".format(**data)
+
def _get_streams(self):
- res = http.get(self.url)
- iframe_result = re.search(self.iframe_re, res.text)
+ iframe_url = self._get_iframe_url(self.url)
+
+ if iframe_url:
+ log.debug("Found iframe URL={0}".format(iframe_url))
+ info_url = self._get_stream_info_url(iframe_url)
+
+ if info_url:
+ log.debug("Getting info from URL: {0}".format(info_url))
+ res = http.get(info_url, headers={"Referer": iframe_url})
+ data = http.json(res)
+
+ if data['status'] == 200:
+ for media in data['data']['playlist']['medialist']:
+ if media['errors']:
+ log.error(media['errors'].replace('\n', '').replace('\r', ''))
+
+ for media_type in media.get('sources', []):
+
+ if media_type == "m3u8":
+ hls_url = media['sources'][media_type]['auto']
+ for s in HLSStream.parse_variant_playlist(self.session, hls_url).items():
+ yield s
+
+ if media_type == "http":
+ for pix, url in media['sources'][media_type].items():
+ yield "{0}p".format(pix), HTTPStream(self.session, url)
+ else:
+ log.error("An error occurred: {0}".format(data['errors'].replace('\n', '').replace('\r', '')))
+ else:
+ log.error("Unable to get stream info URL")
+ else:
+ log.error("Could not find video iframe")
+
- if not iframe_result:
- self.logger.error("The requested content is unavailable.")
- return
- res = http.get(iframe_result.group(1))
- stream_url_result = re.search(self.stream_re, res.text)
- if not stream_url_result:
- self.logger.error("The requested content is unavailable.")
- return
- return HLSStream.parse_variant_playlist(self.session, stream_url_result.group(1))
-__plugin__ = LiveRussia
\ No newline at end of file
+__plugin__ = LiveRussia
| {"golden_diff": "diff --git a/src/streamlink/plugins/live_russia_tv.py b/src/streamlink/plugins/live_russia_tv.py\n--- a/src/streamlink/plugins/live_russia_tv.py\n+++ b/src/streamlink/plugins/live_russia_tv.py\n@@ -1,33 +1,81 @@\n+import logging\n import re\n+\n from streamlink.plugin import Plugin\n-from streamlink.plugin.api import http\n-from streamlink.stream import HLSStream\n+from streamlink.plugin.api import http, validate\n+from streamlink.plugin.api.utils import itertags\n+from streamlink.stream import HLSStream, HTTPStream\n+\n+log = logging.getLogger(__name__)\n+\n \n class LiveRussia(Plugin):\n- url_re = re.compile(r\"https?://(?:www.)?live.russia.tv/index/index/channel_id/\")\n- iframe_re = re.compile(r\"\"\"<iframe[^>]*src=[\"']([^'\"]+)[\"'][^>]*>\"\"\")\n- stream_re = re.compile(r\"\"\"window.pl.data.*m3u8\":\"(.*)\"}.*};\"\"\")\n+ url_re = re.compile(r\"https?://(?:www\\.|live\\.)?russia.tv\")\n+ _data_re = re.compile(r\"\"\"window\\.pl\\.data\\.([\\w_]+)\\s*=\\s*['\"]?(.*?)['\"]?;\"\"\")\n \n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n \n+ def _get_iframe_url(self, url):\n+ res = http.get(url)\n+ for iframe in itertags(res.text, 'iframe'):\n+ src = iframe.attributes.get(\"src\")\n+ if src:\n+ return src\n+\n+ def _get_stream_info_url(self, url):\n+ data = {}\n+ res = http.get(url)\n+ for m in self._data_re.finditer(res.text):\n+ data[m.group(1)] = m.group(2)\n+\n+ log.debug(\"Got pl_data={0}\".format(data))\n+\n+ if data:\n+ if data[\"isVod\"] == '0':\n+ return \"https:{domain}/iframe/datalive/id/{id}/sid/{sid}\".format(**data)\n+ else:\n+ return \"https:{domain}/iframe/datavideo/id/{id}/sid/{sid}\".format(**data)\n+\n def _get_streams(self):\n- res = http.get(self.url)\n- iframe_result = re.search(self.iframe_re, res.text)\n+ iframe_url = self._get_iframe_url(self.url)\n+\n+ if iframe_url:\n+ log.debug(\"Found iframe URL={0}\".format(iframe_url))\n+ info_url = self._get_stream_info_url(iframe_url)\n+\n+ if info_url:\n+ log.debug(\"Getting info from URL: {0}\".format(info_url))\n+ res = http.get(info_url, headers={\"Referer\": iframe_url})\n+ data = http.json(res)\n+\n+ if data['status'] == 200:\n+ for media in data['data']['playlist']['medialist']:\n+ if media['errors']:\n+ log.error(media['errors'].replace('\\n', '').replace('\\r', ''))\n+\n+ for media_type in media.get('sources', []):\n+\n+ if media_type == \"m3u8\":\n+ hls_url = media['sources'][media_type]['auto']\n+ for s in HLSStream.parse_variant_playlist(self.session, hls_url).items():\n+ yield s\n+\n+ if media_type == \"http\":\n+ for pix, url in media['sources'][media_type].items():\n+ yield \"{0}p\".format(pix), HTTPStream(self.session, url)\n+ else:\n+ log.error(\"An error occurred: {0}\".format(data['errors'].replace('\\n', '').replace('\\r', '')))\n+ else:\n+ log.error(\"Unable to get stream info URL\")\n+ else:\n+ log.error(\"Could not find video iframe\")\n+\n \n- if not iframe_result:\n- self.logger.error(\"The requested content is unavailable.\")\n- return\n \n- res = http.get(iframe_result.group(1))\n- stream_url_result = re.search(self.stream_re, res.text)\n \n- if not stream_url_result:\n- self.logger.error(\"The requested content is unavailable.\")\n- return\n \n- return HLSStream.parse_variant_playlist(self.session, stream_url_result.group(1))\n \n \n-__plugin__ = LiveRussia\n\\ No newline at end of file\n+__plugin__ = LiveRussia\n", "issue": "Problem with live.russia.tv\nI have Problem with the Plugin live.russia.tv : \r\n```\r\n#SERVICE 4097:0:1:0:0:0:224:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/76:\u041c\u043e\u0441\u043a\u0432\u0430 24 HD\r\n#DESCRIPTION \u041c\u043e\u0441\u043a\u0432\u0430 24 HD\r\n#SERVICE 4097:0:1:0:0:0:449:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/1:Rossija 1 HD\r\n#DESCRIPTION Rossija 1 HD\r\n#SERVICE 4097:0:1:0:0:0:445:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/82:Rossija RTR HD\r\n#DESCRIPTION Rossija RTR HD\r\n#SERVICE 4097:0:1:0:0:0:447:0:0:0:http%3a//127.0.0.1%3a8088/https%3a//live.russia.tv/index/index/channel_id/3:Rossija 24 HD\r\n#DESCRIPTION Rossija 24 HD\r\n```\r\nThe Channels not working on streamlink - from PC work the channels ok.\n", "before_files": [{"content": "import re\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\nclass LiveRussia(Plugin):\n url_re = re.compile(r\"https?://(?:www.)?live.russia.tv/index/index/channel_id/\")\n iframe_re = re.compile(r\"\"\"<iframe[^>]*src=[\"']([^'\"]+)[\"'][^>]*>\"\"\")\n stream_re = re.compile(r\"\"\"window.pl.data.*m3u8\":\"(.*)\"}.*};\"\"\")\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_streams(self):\n res = http.get(self.url)\n iframe_result = re.search(self.iframe_re, res.text)\n\n if not iframe_result:\n self.logger.error(\"The requested content is unavailable.\")\n return\n\n res = http.get(iframe_result.group(1))\n stream_url_result = re.search(self.stream_re, res.text)\n\n if not stream_url_result:\n self.logger.error(\"The requested content is unavailable.\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, stream_url_result.group(1))\n\n\n__plugin__ = LiveRussia", "path": "src/streamlink/plugins/live_russia_tv.py"}]} | 1,220 | 981 |
gh_patches_debug_11877 | rasdani/github-patches | git_diff | CTFd__CTFd-1048 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
import will crash ctfd
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 2.1.3
- Operating System: ubuntu 18.04
- Web Browser and Version: Opera 60.0.3255.170
**What happened?**
trying to import db (zip file)
**What did you expect to happen?**
it would import db (zip file)
**How to reproduce your issue**
**Any associated stack traces or error logs**
Failed to disable foreign key checks. Continuing.
Error: No support for ALTER of constraints in SQLite dialect
I believe it's Alembic fault
</issue>
<code>
[start of migrations/versions/b5551cd26764_add_captain_column_to_teams.py]
1 """Add captain column to Teams
2
3 Revision ID: b5551cd26764
4 Revises: 4e4d5a9ea000
5 Create Date: 2019-04-12 00:29:08.021141
6
7 """
8 from CTFd.models import db
9 from alembic import op
10 import sqlalchemy as sa
11 from sqlalchemy.sql import text, table, column, and_
12
13 # revision identifiers, used by Alembic.
14 revision = 'b5551cd26764'
15 down_revision = '4e4d5a9ea000'
16 branch_labels = None
17 depends_on = None
18
19 teams_table = table('teams',
20 column('id', db.Integer),
21 column('captain_id', db.Integer),
22 )
23
24 users_table = table('users',
25 column('id', db.Integer),
26 column('team_id', db.Integer),
27 )
28
29
30 def upgrade():
31 # ### commands auto generated by Alembic - please adjust! ###
32 op.add_column('teams', sa.Column('captain_id', sa.Integer(), nullable=True))
33 op.create_foreign_key('team_captain_id', 'teams', 'users', ['captain_id'], ['id'])
34
35 connection = op.get_bind()
36 for team in connection.execute(teams_table.select()):
37 users = connection.execute(
38 users_table.select().where(users_table.c.team_id == team.id).order_by(users_table.c.id).limit(1)
39 )
40 for user in users:
41 connection.execute(
42 teams_table.update().where(
43 teams_table.c.id == team.id
44 ).values(
45 captain_id=user.id
46 )
47 )
48 # ### end Alembic commands ###
49
50
51 def downgrade():
52 # ### commands auto generated by Alembic - please adjust! ###
53 op.drop_constraint('team_captain_id', 'teams', type_='foreignkey')
54 op.drop_column('teams', 'captain_id')
55 # ### end Alembic commands ###
56
[end of migrations/versions/b5551cd26764_add_captain_column_to_teams.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/migrations/versions/b5551cd26764_add_captain_column_to_teams.py b/migrations/versions/b5551cd26764_add_captain_column_to_teams.py
--- a/migrations/versions/b5551cd26764_add_captain_column_to_teams.py
+++ b/migrations/versions/b5551cd26764_add_captain_column_to_teams.py
@@ -30,7 +30,11 @@
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('teams', sa.Column('captain_id', sa.Integer(), nullable=True))
- op.create_foreign_key('team_captain_id', 'teams', 'users', ['captain_id'], ['id'])
+
+ bind = op.get_bind()
+ url = str(bind.engine.url)
+ if url.startswith('sqlite') is False:
+ op.create_foreign_key('team_captain_id', 'teams', 'users', ['captain_id'], ['id'])
connection = op.get_bind()
for team in connection.execute(teams_table.select()):
| {"golden_diff": "diff --git a/migrations/versions/b5551cd26764_add_captain_column_to_teams.py b/migrations/versions/b5551cd26764_add_captain_column_to_teams.py\n--- a/migrations/versions/b5551cd26764_add_captain_column_to_teams.py\n+++ b/migrations/versions/b5551cd26764_add_captain_column_to_teams.py\n@@ -30,7 +30,11 @@\n def upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.add_column('teams', sa.Column('captain_id', sa.Integer(), nullable=True))\n- op.create_foreign_key('team_captain_id', 'teams', 'users', ['captain_id'], ['id'])\n+\n+ bind = op.get_bind()\n+ url = str(bind.engine.url)\n+ if url.startswith('sqlite') is False:\n+ op.create_foreign_key('team_captain_id', 'teams', 'users', ['captain_id'], ['id'])\n \n connection = op.get_bind()\n for team in connection.execute(teams_table.select()):\n", "issue": "import will crash ctfd\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n - CTFd Version/Commit: 2.1.3\r\n - Operating System: ubuntu 18.04\r\n - Web Browser and Version: Opera 60.0.3255.170\r\n\r\n**What happened?**\r\ntrying to import db (zip file)\r\n**What did you expect to happen?**\r\nit would import db (zip file)\r\n**How to reproduce your issue**\r\n\r\n**Any associated stack traces or error logs**\r\nFailed to disable foreign key checks. Continuing.\r\nError: No support for ALTER of constraints in SQLite dialect\r\n\r\nI believe it's Alembic fault \n", "before_files": [{"content": "\"\"\"Add captain column to Teams\n\nRevision ID: b5551cd26764\nRevises: 4e4d5a9ea000\nCreate Date: 2019-04-12 00:29:08.021141\n\n\"\"\"\nfrom CTFd.models import db\nfrom alembic import op\nimport sqlalchemy as sa\nfrom sqlalchemy.sql import text, table, column, and_\n\n# revision identifiers, used by Alembic.\nrevision = 'b5551cd26764'\ndown_revision = '4e4d5a9ea000'\nbranch_labels = None\ndepends_on = None\n\nteams_table = table('teams',\n column('id', db.Integer),\n column('captain_id', db.Integer),\n)\n\nusers_table = table('users',\n column('id', db.Integer),\n column('team_id', db.Integer),\n)\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.add_column('teams', sa.Column('captain_id', sa.Integer(), nullable=True))\n op.create_foreign_key('team_captain_id', 'teams', 'users', ['captain_id'], ['id'])\n\n connection = op.get_bind()\n for team in connection.execute(teams_table.select()):\n users = connection.execute(\n users_table.select().where(users_table.c.team_id == team.id).order_by(users_table.c.id).limit(1)\n )\n for user in users:\n connection.execute(\n teams_table.update().where(\n teams_table.c.id == team.id\n ).values(\n captain_id=user.id\n )\n )\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.drop_constraint('team_captain_id', 'teams', type_='foreignkey')\n op.drop_column('teams', 'captain_id')\n # ### end Alembic commands ###\n", "path": "migrations/versions/b5551cd26764_add_captain_column_to_teams.py"}]} | 1,284 | 258 |
gh_patches_debug_11710 | rasdani/github-patches | git_diff | Textualize__textual-2317 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scrolling containers should be focusable by default
`ScrollHorizontal` and `ScrollVertical` should have `can_focus=True`.
Check this doesn't break any of the example apps.
</issue>
<code>
[start of src/textual/containers.py]
1 """
2 Container widgets for quick styling.
3
4 """
5
6
7 from .widget import Widget
8
9
10 class Container(Widget):
11 """Simple container widget, with vertical layout."""
12
13 DEFAULT_CSS = """
14 Container {
15 height: 1fr;
16 layout: vertical;
17 overflow: auto;
18 }
19 """
20
21
22 class Vertical(Widget):
23 """A container which arranges children vertically."""
24
25 DEFAULT_CSS = """
26 Vertical {
27 width: 1fr;
28 layout: vertical;
29 overflow: hidden hidden;
30 }
31 """
32
33
34 class VerticalScroll(Widget):
35 """A container which arranges children vertically, with an automatic vertical scrollbar."""
36
37 DEFAULT_CSS = """
38 VerticalScroll {
39 width: 1fr;
40 layout: vertical;
41 overflow-y: auto;
42 }
43 """
44
45
46 class Horizontal(Widget):
47 """A container which arranges children horizontally."""
48
49 DEFAULT_CSS = """
50 Horizontal {
51 height: 1fr;
52 layout: horizontal;
53 overflow: hidden hidden;
54 }
55 """
56
57
58 class HorizontalScroll(Widget):
59 """A container which arranges children horizontally, with an automatic horizontal scrollbar."""
60
61 DEFAULT_CSS = """
62 HorizontalScroll {
63 height: 1fr;
64 layout: horizontal;
65 overflow-x: auto;
66 }
67 """
68
69
70 class Center(Widget):
71 """A container which centers children horizontally."""
72
73 DEFAULT_CSS = """
74 Center {
75 align-horizontal: center;
76 height: auto;
77 width: 1fr;
78 }
79 """
80
81
82 class Middle(Widget):
83 """A container which aligns children vertically in the middle."""
84
85 DEFAULT_CSS = """
86 Middle {
87 align-vertical: middle;
88 width: auto;
89 height: 1fr;
90 }
91 """
92
93
94 class Grid(Widget):
95 """A container with grid alignment."""
96
97 DEFAULT_CSS = """
98 Grid {
99 height: 1fr;
100 layout: grid;
101 }
102 """
103
104
105 class Content(Widget, can_focus=True, can_focus_children=False):
106 """A container for content such as text."""
107
108 DEFAULT_CSS = """
109 VerticalScroll {
110 height: 1fr;
111 layout: vertical;
112 overflow-y: auto;
113 }
114 """
115
[end of src/textual/containers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/textual/containers.py b/src/textual/containers.py
--- a/src/textual/containers.py
+++ b/src/textual/containers.py
@@ -31,7 +31,7 @@
"""
-class VerticalScroll(Widget):
+class VerticalScroll(Widget, can_focus=True):
"""A container which arranges children vertically, with an automatic vertical scrollbar."""
DEFAULT_CSS = """
@@ -55,7 +55,7 @@
"""
-class HorizontalScroll(Widget):
+class HorizontalScroll(Widget, can_focus=True):
"""A container which arranges children horizontally, with an automatic horizontal scrollbar."""
DEFAULT_CSS = """
| {"golden_diff": "diff --git a/src/textual/containers.py b/src/textual/containers.py\n--- a/src/textual/containers.py\n+++ b/src/textual/containers.py\n@@ -31,7 +31,7 @@\n \"\"\"\n \n \n-class VerticalScroll(Widget):\n+class VerticalScroll(Widget, can_focus=True):\n \"\"\"A container which arranges children vertically, with an automatic vertical scrollbar.\"\"\"\n \n DEFAULT_CSS = \"\"\"\n@@ -55,7 +55,7 @@\n \"\"\"\n \n \n-class HorizontalScroll(Widget):\n+class HorizontalScroll(Widget, can_focus=True):\n \"\"\"A container which arranges children horizontally, with an automatic horizontal scrollbar.\"\"\"\n \n DEFAULT_CSS = \"\"\"\n", "issue": "Scrolling containers should be focusable by default\n`ScrollHorizontal` and `ScrollVertical` should have `can_focus=True`.\n\nCheck this doesn't break any of the example apps.\n", "before_files": [{"content": "\"\"\"\nContainer widgets for quick styling.\n\n\"\"\"\n\n\nfrom .widget import Widget\n\n\nclass Container(Widget):\n \"\"\"Simple container widget, with vertical layout.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n Container {\n height: 1fr;\n layout: vertical;\n overflow: auto;\n }\n \"\"\"\n\n\nclass Vertical(Widget):\n \"\"\"A container which arranges children vertically.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n Vertical {\n width: 1fr;\n layout: vertical;\n overflow: hidden hidden;\n }\n \"\"\"\n\n\nclass VerticalScroll(Widget):\n \"\"\"A container which arranges children vertically, with an automatic vertical scrollbar.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n VerticalScroll {\n width: 1fr;\n layout: vertical;\n overflow-y: auto;\n }\n \"\"\"\n\n\nclass Horizontal(Widget):\n \"\"\"A container which arranges children horizontally.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n Horizontal {\n height: 1fr;\n layout: horizontal;\n overflow: hidden hidden;\n }\n \"\"\"\n\n\nclass HorizontalScroll(Widget):\n \"\"\"A container which arranges children horizontally, with an automatic horizontal scrollbar.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n HorizontalScroll {\n height: 1fr;\n layout: horizontal;\n overflow-x: auto;\n }\n \"\"\"\n\n\nclass Center(Widget):\n \"\"\"A container which centers children horizontally.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n Center {\n align-horizontal: center;\n height: auto;\n width: 1fr;\n }\n \"\"\"\n\n\nclass Middle(Widget):\n \"\"\"A container which aligns children vertically in the middle.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n Middle {\n align-vertical: middle;\n width: auto;\n height: 1fr;\n }\n \"\"\"\n\n\nclass Grid(Widget):\n \"\"\"A container with grid alignment.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n Grid {\n height: 1fr;\n layout: grid;\n }\n \"\"\"\n\n\nclass Content(Widget, can_focus=True, can_focus_children=False):\n \"\"\"A container for content such as text.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n VerticalScroll {\n height: 1fr;\n layout: vertical;\n overflow-y: auto;\n }\n \"\"\"\n", "path": "src/textual/containers.py"}]} | 1,280 | 145 |
gh_patches_debug_11955 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-1430 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Drop Python 2.6 support
For the most relevant discussion about the topic, see #1294.
Plan:
1. In the upcoming minor release, I'm going to include `warnings.warn(msg, DeprecationWarning)`. with the message saying "Python v2.6 support is deprecated and is going to be dropped entirely in the upcoming v0.11.0 release. Update your Python version if you want to have access to the latest features and bug fixes in MongoEngine."
2. In v0.11.0 (most likely shipped with #1428), I'll update the way we do dict comprehensions and other relics of the past, thus making it truly incompatible with v2.6.
Cc @lafrech @gukoff
</issue>
<code>
[start of mongoengine/python_support.py]
1 """Helper functions and types to aid with Python 2.5 - 3 support."""
2
3 import sys
4 import pymongo
5
6
7 if pymongo.version_tuple[0] < 3:
8 IS_PYMONGO_3 = False
9 else:
10 IS_PYMONGO_3 = True
11
12 PY3 = sys.version_info[0] == 3
13
14 if PY3:
15 import codecs
16 from io import BytesIO as StringIO
17
18 # return s converted to binary. b('test') should be equivalent to b'test'
19 def b(s):
20 return codecs.latin_1_encode(s)[0]
21
22 bin_type = bytes
23 txt_type = str
24 else:
25 try:
26 from cStringIO import StringIO
27 except ImportError:
28 from StringIO import StringIO
29
30 # Conversion to binary only necessary in Python 3
31 def b(s):
32 return s
33
34 bin_type = str
35 txt_type = unicode
36
37 str_types = (bin_type, txt_type)
38
[end of mongoengine/python_support.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mongoengine/python_support.py b/mongoengine/python_support.py
--- a/mongoengine/python_support.py
+++ b/mongoengine/python_support.py
@@ -1,9 +1,22 @@
-"""Helper functions and types to aid with Python 2.5 - 3 support."""
+"""Helper functions and types to aid with Python 2.6 - 3 support."""
import sys
+import warnings
+
import pymongo
+# Show a deprecation warning for people using Python v2.6
+# TODO remove in mongoengine v0.11.0
+if sys.version_info[0] == 2 and sys.version_info[1] == 6:
+ warnings.warn(
+ 'Python v2.6 support is deprecated and is going to be dropped '
+ 'entirely in the upcoming v0.11.0 release. Update your Python '
+ 'version if you want to have access to the latest features and '
+ 'bug fixes in MongoEngine.',
+ DeprecationWarning
+ )
+
if pymongo.version_tuple[0] < 3:
IS_PYMONGO_3 = False
else:
| {"golden_diff": "diff --git a/mongoengine/python_support.py b/mongoengine/python_support.py\n--- a/mongoengine/python_support.py\n+++ b/mongoengine/python_support.py\n@@ -1,9 +1,22 @@\n-\"\"\"Helper functions and types to aid with Python 2.5 - 3 support.\"\"\"\n+\"\"\"Helper functions and types to aid with Python 2.6 - 3 support.\"\"\"\n \n import sys\n+import warnings\n+\n import pymongo\n \n \n+# Show a deprecation warning for people using Python v2.6\n+# TODO remove in mongoengine v0.11.0\n+if sys.version_info[0] == 2 and sys.version_info[1] == 6:\n+ warnings.warn(\n+ 'Python v2.6 support is deprecated and is going to be dropped '\n+ 'entirely in the upcoming v0.11.0 release. Update your Python '\n+ 'version if you want to have access to the latest features and '\n+ 'bug fixes in MongoEngine.',\n+ DeprecationWarning\n+ )\n+\n if pymongo.version_tuple[0] < 3:\n IS_PYMONGO_3 = False\n else:\n", "issue": "Drop Python 2.6 support\nFor the most relevant discussion about the topic, see #1294.\r\n\r\nPlan:\r\n1. In the upcoming minor release, I'm going to include `warnings.warn(msg, DeprecationWarning)`. with the message saying \"Python v2.6 support is deprecated and is going to be dropped entirely in the upcoming v0.11.0 release. Update your Python version if you want to have access to the latest features and bug fixes in MongoEngine.\"\r\n2. In v0.11.0 (most likely shipped with #1428), I'll update the way we do dict comprehensions and other relics of the past, thus making it truly incompatible with v2.6.\r\n\r\nCc @lafrech @gukoff \n", "before_files": [{"content": "\"\"\"Helper functions and types to aid with Python 2.5 - 3 support.\"\"\"\n\nimport sys\nimport pymongo\n\n\nif pymongo.version_tuple[0] < 3:\n IS_PYMONGO_3 = False\nelse:\n IS_PYMONGO_3 = True\n\nPY3 = sys.version_info[0] == 3\n\nif PY3:\n import codecs\n from io import BytesIO as StringIO\n\n # return s converted to binary. b('test') should be equivalent to b'test'\n def b(s):\n return codecs.latin_1_encode(s)[0]\n\n bin_type = bytes\n txt_type = str\nelse:\n try:\n from cStringIO import StringIO\n except ImportError:\n from StringIO import StringIO\n\n # Conversion to binary only necessary in Python 3\n def b(s):\n return s\n\n bin_type = str\n txt_type = unicode\n\nstr_types = (bin_type, txt_type)\n", "path": "mongoengine/python_support.py"}]} | 979 | 254 |
gh_patches_debug_11989 | rasdani/github-patches | git_diff | sagemath__sage-36173 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unoptimal memory complexity of `sage.matrix.berlekamp`
The code here is unoptimal:
https://github.com/sagemath/sage/blob/6695becb762aebab78ef47d0fb12eae52be5d79d/src/sage/matrix/berlekamp_massey.py#L90-L98
For example, the following code uses a lot of memory:
```python
sage: from sage.matrix.berlekamp_massey import berlekamp_massey
sage: p = next_prime(2**64)
sage: ls = [GF(p).random_element() for _ in range(20000)]
sage: berlekamp_massey(ls);
```
To be more specific, the dictionaries are not necessarily and only `f[j - 2]` and `f[j - 1]` are used every time, same for `s`. So they can be stored as temporary variables.
### Additional Information
I am fixing it.
### Checklist
- [X] I have searched the existing issues for a bug report that matches the one I want to file, without success.
- [X] I have read the documentation and troubleshoot guide
</issue>
<code>
[start of src/sage/matrix/berlekamp_massey.py]
1 """
2 Minimal Polynomials of Linear Recurrence Sequences
3
4 AUTHORS:
5
6 - William Stein
7 """
8 # ****************************************************************************
9 # Copyright (C) 2005 William Stein <[email protected]>
10 #
11 # Distributed under the terms of the GNU General Public License (GPL)
12 #
13 # This code is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
16 # General Public License for more details.
17 #
18 # The full text of the GPL is available at:
19 #
20 # https://www.gnu.org/licenses/
21 # ****************************************************************************
22
23 import sage.rings.rational_field
24
25
26 def berlekamp_massey(a):
27 r"""
28 Use the Berlekamp-Massey algorithm to find the minimal polynomial
29 of a linear recurrence sequence `a`.
30
31 The minimal polynomial of a linear recurrence `\{a_r\}` is
32 by definition the unique monic polynomial `g`, such that if
33 `\{a_r\}` satisfies a linear recurrence
34 `a_{j+k} + b_{j-1} a_{j-1+k} + \cdots + b_0 a_k=0`
35 (for all `k\geq 0`), then `g` divides the
36 polynomial `x^j + \sum_{i=0}^{j-1} b_i x^i`.
37
38 INPUT:
39
40 - ``a`` -- a list of even length of elements of a field (or domain)
41
42 OUTPUT:
43
44 the minimal polynomial of the sequence, as a polynomial over the
45 field in which the entries of `a` live
46
47 .. WARNING::
48
49 The result is only guaranteed to be correct on the full
50 sequence if there exists a linear recurrence of length less
51 than half the length of `a`.
52
53 EXAMPLES::
54
55 sage: from sage.matrix.berlekamp_massey import berlekamp_massey
56 sage: berlekamp_massey([1,2,1,2,1,2])
57 x^2 - 1
58 sage: berlekamp_massey([GF(7)(1), 19, 1, 19])
59 x^2 + 6
60 sage: berlekamp_massey([2,2,1,2,1,191,393,132])
61 x^4 - 36727/11711*x^3 + 34213/5019*x^2 + 7024942/35133*x - 335813/1673
62 sage: berlekamp_massey(prime_range(2, 38)) # needs sage.libs.pari
63 x^6 - 14/9*x^5 - 7/9*x^4 + 157/54*x^3 - 25/27*x^2 - 73/18*x + 37/9
64
65 TESTS::
66
67 sage: berlekamp_massey("banana")
68 Traceback (most recent call last):
69 ...
70 TypeError: argument must be a list or tuple
71 sage: berlekamp_massey([1,2,5])
72 Traceback (most recent call last):
73 ...
74 ValueError: argument must have an even number of terms
75 """
76 if not isinstance(a, (list, tuple)):
77 raise TypeError("argument must be a list or tuple")
78 if len(a) % 2:
79 raise ValueError("argument must have an even number of terms")
80
81 M = len(a) // 2
82
83 try:
84 K = a[0].parent().fraction_field()
85 except AttributeError:
86 K = sage.rings.rational_field.RationalField()
87 R = K['x']
88 x = R.gen()
89
90 f = {-1: R(a), 0: x**(2 * M)}
91 s = {-1: 1, 0: 0}
92 j = 0
93 while f[j].degree() >= M:
94 j += 1
95 qj, f[j] = f[j - 2].quo_rem(f[j - 1])
96 s[j] = s[j - 2] - qj * s[j - 1]
97 t = s[j].reverse()
98 return ~(t[t.degree()]) * t # make monic (~ is inverse in python)
99
[end of src/sage/matrix/berlekamp_massey.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sage/matrix/berlekamp_massey.py b/src/sage/matrix/berlekamp_massey.py
--- a/src/sage/matrix/berlekamp_massey.py
+++ b/src/sage/matrix/berlekamp_massey.py
@@ -84,15 +84,11 @@
K = a[0].parent().fraction_field()
except AttributeError:
K = sage.rings.rational_field.RationalField()
- R = K['x']
- x = R.gen()
-
- f = {-1: R(a), 0: x**(2 * M)}
- s = {-1: 1, 0: 0}
- j = 0
- while f[j].degree() >= M:
- j += 1
- qj, f[j] = f[j - 2].quo_rem(f[j - 1])
- s[j] = s[j - 2] - qj * s[j - 1]
- t = s[j].reverse()
- return ~(t[t.degree()]) * t # make monic (~ is inverse in python)
+
+ R, x = K['x'].objgen()
+ f0, f1 = R(a), x**(2 * M)
+ s0, s1 = 1, 0
+ while f1.degree() >= M:
+ f0, (q, f1) = f1, f0.quo_rem(f1)
+ s0, s1 = s1, s0 - q * s1
+ return s1.reverse().monic()
| {"golden_diff": "diff --git a/src/sage/matrix/berlekamp_massey.py b/src/sage/matrix/berlekamp_massey.py\n--- a/src/sage/matrix/berlekamp_massey.py\n+++ b/src/sage/matrix/berlekamp_massey.py\n@@ -84,15 +84,11 @@\n K = a[0].parent().fraction_field()\n except AttributeError:\n K = sage.rings.rational_field.RationalField()\n- R = K['x']\n- x = R.gen()\n-\n- f = {-1: R(a), 0: x**(2 * M)}\n- s = {-1: 1, 0: 0}\n- j = 0\n- while f[j].degree() >= M:\n- j += 1\n- qj, f[j] = f[j - 2].quo_rem(f[j - 1])\n- s[j] = s[j - 2] - qj * s[j - 1]\n- t = s[j].reverse()\n- return ~(t[t.degree()]) * t # make monic (~ is inverse in python)\n+\n+ R, x = K['x'].objgen()\n+ f0, f1 = R(a), x**(2 * M)\n+ s0, s1 = 1, 0\n+ while f1.degree() >= M:\n+ f0, (q, f1) = f1, f0.quo_rem(f1)\n+ s0, s1 = s1, s0 - q * s1\n+ return s1.reverse().monic()\n", "issue": "Unoptimal memory complexity of `sage.matrix.berlekamp`\nThe code here is unoptimal:\r\n\r\nhttps://github.com/sagemath/sage/blob/6695becb762aebab78ef47d0fb12eae52be5d79d/src/sage/matrix/berlekamp_massey.py#L90-L98\r\n\r\nFor example, the following code uses a lot of memory:\r\n\r\n```python\r\nsage: from sage.matrix.berlekamp_massey import berlekamp_massey\r\nsage: p = next_prime(2**64)\r\nsage: ls = [GF(p).random_element() for _ in range(20000)]\r\nsage: berlekamp_massey(ls);\r\n```\r\n\r\nTo be more specific, the dictionaries are not necessarily and only `f[j - 2]` and `f[j - 1]` are used every time, same for `s`. So they can be stored as temporary variables.\r\n\r\n### Additional Information\r\n\r\nI am fixing it.\r\n\r\n### Checklist\r\n\r\n- [X] I have searched the existing issues for a bug report that matches the one I want to file, without success.\r\n- [X] I have read the documentation and troubleshoot guide\n", "before_files": [{"content": "\"\"\"\nMinimal Polynomials of Linear Recurrence Sequences\n\nAUTHORS:\n\n- William Stein\n\"\"\"\n# ****************************************************************************\n# Copyright (C) 2005 William Stein <[email protected]>\n#\n# Distributed under the terms of the GNU General Public License (GPL)\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n# General Public License for more details.\n#\n# The full text of the GPL is available at:\n#\n# https://www.gnu.org/licenses/\n# ****************************************************************************\n\nimport sage.rings.rational_field\n\n\ndef berlekamp_massey(a):\n r\"\"\"\n Use the Berlekamp-Massey algorithm to find the minimal polynomial\n of a linear recurrence sequence `a`.\n\n The minimal polynomial of a linear recurrence `\\{a_r\\}` is\n by definition the unique monic polynomial `g`, such that if\n `\\{a_r\\}` satisfies a linear recurrence\n `a_{j+k} + b_{j-1} a_{j-1+k} + \\cdots + b_0 a_k=0`\n (for all `k\\geq 0`), then `g` divides the\n polynomial `x^j + \\sum_{i=0}^{j-1} b_i x^i`.\n\n INPUT:\n\n - ``a`` -- a list of even length of elements of a field (or domain)\n\n OUTPUT:\n\n the minimal polynomial of the sequence, as a polynomial over the\n field in which the entries of `a` live\n\n .. WARNING::\n\n The result is only guaranteed to be correct on the full\n sequence if there exists a linear recurrence of length less\n than half the length of `a`.\n\n EXAMPLES::\n\n sage: from sage.matrix.berlekamp_massey import berlekamp_massey\n sage: berlekamp_massey([1,2,1,2,1,2])\n x^2 - 1\n sage: berlekamp_massey([GF(7)(1), 19, 1, 19])\n x^2 + 6\n sage: berlekamp_massey([2,2,1,2,1,191,393,132])\n x^4 - 36727/11711*x^3 + 34213/5019*x^2 + 7024942/35133*x - 335813/1673\n sage: berlekamp_massey(prime_range(2, 38)) # needs sage.libs.pari\n x^6 - 14/9*x^5 - 7/9*x^4 + 157/54*x^3 - 25/27*x^2 - 73/18*x + 37/9\n\n TESTS::\n\n sage: berlekamp_massey(\"banana\")\n Traceback (most recent call last):\n ...\n TypeError: argument must be a list or tuple\n sage: berlekamp_massey([1,2,5])\n Traceback (most recent call last):\n ...\n ValueError: argument must have an even number of terms\n \"\"\"\n if not isinstance(a, (list, tuple)):\n raise TypeError(\"argument must be a list or tuple\")\n if len(a) % 2:\n raise ValueError(\"argument must have an even number of terms\")\n\n M = len(a) // 2\n\n try:\n K = a[0].parent().fraction_field()\n except AttributeError:\n K = sage.rings.rational_field.RationalField()\n R = K['x']\n x = R.gen()\n\n f = {-1: R(a), 0: x**(2 * M)}\n s = {-1: 1, 0: 0}\n j = 0\n while f[j].degree() >= M:\n j += 1\n qj, f[j] = f[j - 2].quo_rem(f[j - 1])\n s[j] = s[j - 2] - qj * s[j - 1]\n t = s[j].reverse()\n return ~(t[t.degree()]) * t # make monic (~ is inverse in python)\n", "path": "src/sage/matrix/berlekamp_massey.py"}]} | 1,989 | 363 |
gh_patches_debug_34863 | rasdani/github-patches | git_diff | microsoft__lisa-836 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 20.04 - platform.dist() is deprecated since Python 3.5 and removed in Python 3.8
Affected distro - ubuntu 20.04 (use python 3.8)
Affected case - WALA-VERIFY-VERBOSE-ENABLED-LOGS
Use distro.linux_distribution(full_distribution_name=False) instead
</issue>
<code>
[start of Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py]
1 #!/usr/bin/env python
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the Apache License.
4 from azuremodules import *
5
6 import argparse
7 import os
8 import platform
9 import time
10
11 parser = argparse.ArgumentParser()
12
13 file_path = os.path.dirname(os.path.realpath(__file__))
14 constants_path = os.path.join(file_path, "constants.sh")
15 params = GetParams(constants_path)
16 passwd = params["PASSWORD"]
17
18 distro = platform.dist()
19
20
21 def RunTest():
22 UpdateState("TestRunning")
23 if(distro[0] == "CoreOS"):
24 versionOutPut = Run("waagent --version")
25 else:
26 output = Run("pgrep -fa python3.*waagent")
27 if ("python3" in output) :
28 versionOutPut = Run("/usr/bin/python3 /usr/sbin/waagent --version")
29 else :
30 versionOutPut = Run("/usr/sbin/waagent --version")
31
32 RunLog.info("Checking log waagent.log...")
33 if("2.0." in versionOutPut):
34 output = Run("grep -i 'iptables -I INPUT -p udp --dport' /var/log/waagent* | wc -l | tr -d '\n'")
35 RunLog.info("agent version is 2.0")
36 else:
37 output = Run("grep -i 'VERBOSE' /var/log/waagent* | wc -l | tr -d '\n'")
38 RunLog.info("agent version > 2.0")
39
40 if not (output == "0") :
41 RunLog.info('The log file contains the verbose logs')
42 ResultLog.info('PASS')
43 UpdateState("TestCompleted")
44 else :
45 RunLog.error('Verify waagent.log fail, the log file does not contain the verbose logs')
46 ResultLog.error('FAIL')
47 UpdateState("TestCompleted")
48
49
50 def Restartwaagent():
51 if (distro[0] == "CoreOS"):
52 Run("echo '"+passwd+"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /usr/share/oem/waagent.conf")
53 elif (DetectDistro()[0] == 'clear-linux-os'):
54 Run("echo '"+passwd+"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g \
55 /usr/share/defaults/waagent/waagent.conf")
56 else:
57 Run("echo '"+passwd+"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /etc/waagent.conf")
58 RunLog.info("Restart waagent service...")
59 result = Run("echo '"+passwd+"' | sudo -S find / -name systemctl |wc -l | tr -d '\n'")
60 if (distro[0] == "Ubuntu") or (distro[0] == "debian"):
61 Run("echo '"+passwd+"' | sudo -S service walinuxagent restart")
62 else:
63 if (result == "0") :
64 os.system("echo '"+passwd+"' | sudo -S service waagent restart")
65 else:
66 os.system("echo '"+passwd+"' | sudo -S systemctl restart waagent")
67 time.sleep(60)
68
69 Restartwaagent()
70 RunTest()
71
[end of Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py b/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py
--- a/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py
+++ b/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py
@@ -7,6 +7,7 @@
import os
import platform
import time
+import sys
parser = argparse.ArgumentParser()
@@ -14,13 +15,16 @@
constants_path = os.path.join(file_path, "constants.sh")
params = GetParams(constants_path)
passwd = params["PASSWORD"]
-
-distro = platform.dist()
+if sys.version_info[0] >= 3:
+ import distro
+ distro = distro.linux_distribution(full_distribution_name=False)
+else:
+ distro = platform.dist()
def RunTest():
UpdateState("TestRunning")
- if(distro[0] == "CoreOS"):
+ if(distro[0].upper() == "COREOS"):
versionOutPut = Run("waagent --version")
else:
output = Run("pgrep -fa python3.*waagent")
@@ -48,7 +52,7 @@
def Restartwaagent():
- if (distro[0] == "CoreOS"):
+ if (distro[0].upper() == "COREOS"):
Run("echo '"+passwd+"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /usr/share/oem/waagent.conf")
elif (DetectDistro()[0] == 'clear-linux-os'):
Run("echo '"+passwd+"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g \
@@ -57,7 +61,7 @@
Run("echo '"+passwd+"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /etc/waagent.conf")
RunLog.info("Restart waagent service...")
result = Run("echo '"+passwd+"' | sudo -S find / -name systemctl |wc -l | tr -d '\n'")
- if (distro[0] == "Ubuntu") or (distro[0] == "debian"):
+ if (distro[0].upper() == "UBUNTU") or (distro[0].upper() == "DEBIAN"):
Run("echo '"+passwd+"' | sudo -S service walinuxagent restart")
else:
if (result == "0") :
| {"golden_diff": "diff --git a/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py b/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py\n--- a/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py\n+++ b/Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py\n@@ -7,6 +7,7 @@\n import os\n import platform\n import time\n+import sys\n \n parser = argparse.ArgumentParser()\n \n@@ -14,13 +15,16 @@\n constants_path = os.path.join(file_path, \"constants.sh\")\n params = GetParams(constants_path)\n passwd = params[\"PASSWORD\"]\n-\n-distro = platform.dist()\n+if sys.version_info[0] >= 3:\n+ import distro\n+ distro = distro.linux_distribution(full_distribution_name=False)\n+else:\n+ distro = platform.dist()\n \n \n def RunTest():\n UpdateState(\"TestRunning\")\n- if(distro[0] == \"CoreOS\"):\n+ if(distro[0].upper() == \"COREOS\"):\n versionOutPut = Run(\"waagent --version\")\n else:\n output = Run(\"pgrep -fa python3.*waagent\")\n@@ -48,7 +52,7 @@\n \n \n def Restartwaagent():\n- if (distro[0] == \"CoreOS\"):\n+ if (distro[0].upper() == \"COREOS\"):\n Run(\"echo '\"+passwd+\"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /usr/share/oem/waagent.conf\")\n elif (DetectDistro()[0] == 'clear-linux-os'):\n Run(\"echo '\"+passwd+\"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g \\\n@@ -57,7 +61,7 @@\n Run(\"echo '\"+passwd+\"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /etc/waagent.conf\")\n RunLog.info(\"Restart waagent service...\")\n result = Run(\"echo '\"+passwd+\"' | sudo -S find / -name systemctl |wc -l | tr -d '\\n'\")\n- if (distro[0] == \"Ubuntu\") or (distro[0] == \"debian\"):\n+ if (distro[0].upper() == \"UBUNTU\") or (distro[0].upper() == \"DEBIAN\"):\n Run(\"echo '\"+passwd+\"' | sudo -S service walinuxagent restart\")\n else:\n if (result == \"0\") :\n", "issue": "Ubuntu 20.04 - platform.dist() is deprecated since Python 3.5 and removed in Python 3.8\nAffected distro - ubuntu 20.04 (use python 3.8)\r\nAffected case - WALA-VERIFY-VERBOSE-ENABLED-LOGS\r\nUse distro.linux_distribution(full_distribution_name=False) instead\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the Apache License.\nfrom azuremodules import *\n\nimport argparse\nimport os\nimport platform\nimport time\n\nparser = argparse.ArgumentParser()\n\nfile_path = os.path.dirname(os.path.realpath(__file__))\nconstants_path = os.path.join(file_path, \"constants.sh\")\nparams = GetParams(constants_path)\npasswd = params[\"PASSWORD\"]\n\ndistro = platform.dist()\n\n\ndef RunTest():\n UpdateState(\"TestRunning\")\n if(distro[0] == \"CoreOS\"):\n versionOutPut = Run(\"waagent --version\")\n else:\n output = Run(\"pgrep -fa python3.*waagent\")\n if (\"python3\" in output) :\n versionOutPut = Run(\"/usr/bin/python3 /usr/sbin/waagent --version\")\n else :\n versionOutPut = Run(\"/usr/sbin/waagent --version\")\n\n RunLog.info(\"Checking log waagent.log...\")\n if(\"2.0.\" in versionOutPut):\n output = Run(\"grep -i 'iptables -I INPUT -p udp --dport' /var/log/waagent* | wc -l | tr -d '\\n'\")\n RunLog.info(\"agent version is 2.0\")\n else:\n output = Run(\"grep -i 'VERBOSE' /var/log/waagent* | wc -l | tr -d '\\n'\")\n RunLog.info(\"agent version > 2.0\")\n\n if not (output == \"0\") :\n RunLog.info('The log file contains the verbose logs')\n ResultLog.info('PASS')\n UpdateState(\"TestCompleted\")\n else :\n RunLog.error('Verify waagent.log fail, the log file does not contain the verbose logs')\n ResultLog.error('FAIL')\n UpdateState(\"TestCompleted\")\n\n\ndef Restartwaagent():\n if (distro[0] == \"CoreOS\"):\n Run(\"echo '\"+passwd+\"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /usr/share/oem/waagent.conf\")\n elif (DetectDistro()[0] == 'clear-linux-os'):\n Run(\"echo '\"+passwd+\"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g \\\n /usr/share/defaults/waagent/waagent.conf\")\n else:\n Run(\"echo '\"+passwd+\"' | sudo -S sed -i s/Logs.Verbose=n/Logs.Verbose=y/g /etc/waagent.conf\")\n RunLog.info(\"Restart waagent service...\")\n result = Run(\"echo '\"+passwd+\"' | sudo -S find / -name systemctl |wc -l | tr -d '\\n'\")\n if (distro[0] == \"Ubuntu\") or (distro[0] == \"debian\"):\n Run(\"echo '\"+passwd+\"' | sudo -S service walinuxagent restart\")\n else:\n if (result == \"0\") :\n os.system(\"echo '\"+passwd+\"' | sudo -S service waagent restart\")\n else:\n os.system(\"echo '\"+passwd+\"' | sudo -S systemctl restart waagent\")\n time.sleep(60)\n\nRestartwaagent()\nRunTest()\n", "path": "Testscripts/Linux/WALA-VERIFY-VERBOSE-ENABLED-LOGS.py"}]} | 1,457 | 565 |
gh_patches_debug_10497 | rasdani/github-patches | git_diff | lhotse-speech__lhotse-138 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: broken import from augmentations
Hi I installed the latest pip version of lhotse and I am getting an import error when using the lhotse CLI:
Setup:
```
python3.7.0
lhotse==0.2.0
```
To reproduce, try either from the following lines:
```
lhotse convert-kaldi <data-dir> 16000 <other-data-dir>
python -c "from lhotse.augmentation import available_wav_augmentations"
```
</issue>
<code>
[start of lhotse/augmentation/__init__.py]
1 from .common import AugmentFn
2 from .torchaudio import *
3 from .wavaugment import WavAugmenter, is_wav_augment_available
4
[end of lhotse/augmentation/__init__.py]
[start of setup.py]
1 # coding=utf-8
2 import os
3 from pathlib import Path
4
5 from setuptools import find_packages, setup
6
7 project_root = Path(__file__).parent
8
9 install_requires = (project_root / 'requirements.txt').read_text().splitlines()
10 docs_require = (project_root / 'docs' / 'requirements.txt').read_text().splitlines()
11 tests_require = ['pytest==5.4.3', 'flake8==3.8.3', 'coverage==5.1', 'hypothesis==5.41.2']
12 dev_requires = docs_require + tests_require + ['jupyterlab', 'matplotlib', 'isort']
13
14 if os.environ.get('READTHEDOCS', False):
15 # When building documentation, omit torchaudio installation and mock it instead.
16 # This works around the inability to install libsoundfile1 in read-the-docs env,
17 # which caused the documentation builds to silently crash.
18 install_requires = [req for req in install_requires if not req.startswith('torchaudio')]
19
20 setup(
21 name='lhotse',
22 version='0.2.0',
23 python_requires='>=3.7.0',
24 description='Data preparation for speech processing models training.',
25 author='The Lhotse Development Team',
26 author_email="[email protected]",
27 long_description=(project_root / 'README.md').read_text(),
28 long_description_content_type="text/markdown",
29 license='Apache-2.0 License',
30 packages=find_packages(),
31 # The line below makes every script in the list an executable that's inserted in PATH
32 # as long as the virtualenv/conda env is active; they can be used like any other shell program
33 scripts=['lhotse/bin/lhotse'],
34 install_requires=install_requires,
35 extras_require={
36 'docs': docs_require,
37 'tests': tests_require,
38 'dev': docs_require + tests_require
39 },
40 classifiers=[
41 "Development Status :: 3 - Alpha",
42 "Programming Language :: Python :: 3.7",
43 "Programming Language :: Python :: 3.8",
44 "Intended Audience :: Science/Research",
45 "Operating System :: POSIX :: Linux",
46 "Operating System :: MacOS :: MacOS X",
47 "License :: OSI Approved :: Apache Software License",
48 "Topic :: Multimedia :: Sound/Audio :: Speech",
49 "Topic :: Scientific/Engineering :: Artificial Intelligence",
50 "Topic :: Software Development :: Libraries :: Python Modules",
51 "Typing :: Typed"
52 ],
53 )
54
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lhotse/augmentation/__init__.py b/lhotse/augmentation/__init__.py
--- a/lhotse/augmentation/__init__.py
+++ b/lhotse/augmentation/__init__.py
@@ -1,3 +1,3 @@
from .common import AugmentFn
from .torchaudio import *
-from .wavaugment import WavAugmenter, is_wav_augment_available
+from .wavaugment import WavAugmenter, is_wav_augment_available, available_wav_augmentations
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -19,7 +19,7 @@
setup(
name='lhotse',
- version='0.2.0',
+ version='0.2.1',
python_requires='>=3.7.0',
description='Data preparation for speech processing models training.',
author='The Lhotse Development Team',
| {"golden_diff": "diff --git a/lhotse/augmentation/__init__.py b/lhotse/augmentation/__init__.py\n--- a/lhotse/augmentation/__init__.py\n+++ b/lhotse/augmentation/__init__.py\n@@ -1,3 +1,3 @@\n from .common import AugmentFn\n from .torchaudio import *\n-from .wavaugment import WavAugmenter, is_wav_augment_available\n+from .wavaugment import WavAugmenter, is_wav_augment_available, available_wav_augmentations\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -19,7 +19,7 @@\n \n setup(\n name='lhotse',\n- version='0.2.0',\n+ version='0.2.1',\n python_requires='>=3.7.0',\n description='Data preparation for speech processing models training.',\n author='The Lhotse Development Team',\n", "issue": "Bug: broken import from augmentations\nHi I installed the latest pip version of lhotse and I am getting an import error when using the lhotse CLI:\r\n\r\nSetup:\r\n```\r\npython3.7.0 \r\nlhotse==0.2.0\r\n```\r\n\r\nTo reproduce, try either from the following lines:\r\n```\r\nlhotse convert-kaldi <data-dir> 16000 <other-data-dir>\r\npython -c \"from lhotse.augmentation import available_wav_augmentations\"\r\n```\n", "before_files": [{"content": "from .common import AugmentFn\nfrom .torchaudio import *\nfrom .wavaugment import WavAugmenter, is_wav_augment_available\n", "path": "lhotse/augmentation/__init__.py"}, {"content": "# coding=utf-8\nimport os\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nproject_root = Path(__file__).parent\n\ninstall_requires = (project_root / 'requirements.txt').read_text().splitlines()\ndocs_require = (project_root / 'docs' / 'requirements.txt').read_text().splitlines()\ntests_require = ['pytest==5.4.3', 'flake8==3.8.3', 'coverage==5.1', 'hypothesis==5.41.2']\ndev_requires = docs_require + tests_require + ['jupyterlab', 'matplotlib', 'isort']\n\nif os.environ.get('READTHEDOCS', False):\n # When building documentation, omit torchaudio installation and mock it instead.\n # This works around the inability to install libsoundfile1 in read-the-docs env,\n # which caused the documentation builds to silently crash.\n install_requires = [req for req in install_requires if not req.startswith('torchaudio')]\n\nsetup(\n name='lhotse',\n version='0.2.0',\n python_requires='>=3.7.0',\n description='Data preparation for speech processing models training.',\n author='The Lhotse Development Team',\n author_email=\"[email protected]\",\n long_description=(project_root / 'README.md').read_text(),\n long_description_content_type=\"text/markdown\",\n license='Apache-2.0 License',\n packages=find_packages(),\n # The line below makes every script in the list an executable that's inserted in PATH\n # as long as the virtualenv/conda env is active; they can be used like any other shell program\n scripts=['lhotse/bin/lhotse'],\n install_requires=install_requires,\n extras_require={\n 'docs': docs_require,\n 'tests': tests_require,\n 'dev': docs_require + tests_require\n },\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Intended Audience :: Science/Research\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS :: MacOS X\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Multimedia :: Sound/Audio :: Speech\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Typing :: Typed\"\n ],\n)\n", "path": "setup.py"}]} | 1,337 | 220 |
gh_patches_debug_15590 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-3688 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update package metadata in PyPi
</issue>
<code>
[start of setup.py]
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 version = '6.0.0rc2.dev0'
6
7
8 setup(
9 name='Products.CMFPlone',
10 version=version,
11 description="The Plone Content Management System (core)",
12 long_description=open("README.rst").read() + "\n" +
13 open("CHANGES.rst").read(),
14 classifiers=[
15 "Development Status :: 5 - Production/Stable",
16 "Environment :: Web Environment",
17 "Framework :: Plone",
18 "Framework :: Plone :: 6.0",
19 "Framework :: Plone :: Core",
20 "Framework :: Zope :: 5",
21 "License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
22 "Operating System :: OS Independent",
23 "Programming Language :: Python",
24 "Programming Language :: Python :: 3.8",
25 "Programming Language :: Python :: 3.9",
26 "Programming Language :: Python :: 3.10",
27 "Programming Language :: Python :: 3.11",
28 ],
29 python_requires='>=3.8',
30 keywords='Plone CMF Python Zope CMS Webapplication',
31 author='Plone Foundation',
32 author_email='[email protected]',
33 url='https://plone.org',
34 license='GPL version 2',
35 packages=find_packages(),
36 namespace_packages=['Products'],
37 include_package_data=True,
38 zip_safe=False,
39 install_requires=[
40 'borg.localrole',
41 'five.customerize',
42 'lxml',
43 'plone.api >= 1.4.4',
44 'plone.app.content',
45 'plone.app.contentlisting',
46 'plone.app.contentmenu >= 2.0.1',
47 'plone.app.contentrules',
48 'plone.app.contenttypes',
49 'plone.app.customerize',
50 'plone.app.dexterity',
51 'plone.app.discussion',
52 'plone.app.i18n',
53 'plone.app.layout >= 2.5.15',
54 'plone.app.linkintegrity >=1.0.3',
55 'plone.app.locales',
56 'plone.app.multilingual',
57 'plone.app.portlets',
58 'plone.app.redirector',
59 'plone.app.registry',
60 'plone.app.theming',
61 'plone.app.users',
62 'plone.app.uuid',
63 'plone.app.viewletmanager',
64 'plone.app.vocabularies',
65 'plone.app.workflow',
66 'plone.base',
67 'plone.browserlayer >= 2.1.5',
68 'plone.contentrules',
69 'plone.folder',
70 'plone.i18n >= 4.0.5',
71 'plone.indexer',
72 'plone.intelligenttext',
73 'plone.locking',
74 'plone.memoize',
75 'plone.outputfilters',
76 'plone.portlet.collection',
77 'plone.portlet.static',
78 'plone.portlets',
79 'plone.protect >= 3.0.0',
80 'plone.resource',
81 'plone.schema',
82 'plone.session',
83 'plone.staticresources',
84 'plone.theme',
85 'plonetheme.barceloneta',
86 'Products.CMFEditions',
87 'Products.DCWorkflow',
88 'Products.ExtendedPathIndex',
89 'Products.isurlinportal',
90 'Products.MimetypesRegistry',
91 'Products.PlonePAS',
92 'Products.PortalTransforms',
93 'Products.SiteErrorLog',
94 'Products.statusmessages',
95 'setuptools>=36.2',
96 'plone.autoinclude',
97 'webresource>=1.1',
98 'Zope[wsgi] >= 5.0',
99 'zope.app.locales >= 3.6.0',
100 'zope.cachedescriptors',
101 'zope.deferredimport',
102 'zope.deprecation',
103 'zope.dottedname',
104 'zope.i18n',
105 'zope.i18nmessageid',
106 'zope.structuredtext',
107 ],
108 extras_require={
109 'test': [
110 'lxml',
111 'mock',
112 'plone.app.robotframework>=1.0',
113 'robotframework-debuglibrary',
114 'plone.app.testing',
115 'zope.globalrequest',
116 'zope.testing',
117 'gunicorn',
118 ]
119 },
120 )
121
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -32,6 +32,19 @@
author_email='[email protected]',
url='https://plone.org',
license='GPL version 2',
+ project_urls={
+ "Homepage": "https://plone.org",
+ "Documentation": "https://docs.plone.org",
+ "PyPI": "https://pypi.python.org/pypi/Products.CMFPlone",
+ "Source": "https://github.com/plone/Products.CMFPlone",
+ "Issues": "https://github.com/plone/plone.org/Products.CMFPlone",
+ "Forum": "https://community.plone.org/",
+ "Chat": "https://discord.gg/zFY3EBbjaj",
+ "Mastodon": "https://plone.social/@plone",
+ "Twitter": "https://twitter.com/plone",
+ "Videos": "https://youtube.com/@plonecms",
+ "Sponsor": "https://github.com/sponsors/plone",
+ },
packages=find_packages(),
namespace_packages=['Products'],
include_package_data=True,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -32,6 +32,19 @@\n author_email='[email protected]',\n url='https://plone.org',\n license='GPL version 2',\n+ project_urls={\n+ \"Homepage\": \"https://plone.org\",\n+ \"Documentation\": \"https://docs.plone.org\",\n+ \"PyPI\": \"https://pypi.python.org/pypi/Products.CMFPlone\",\n+ \"Source\": \"https://github.com/plone/Products.CMFPlone\",\n+ \"Issues\": \"https://github.com/plone/plone.org/Products.CMFPlone\",\n+ \"Forum\": \"https://community.plone.org/\",\n+ \"Chat\": \"https://discord.gg/zFY3EBbjaj\",\n+ \"Mastodon\": \"https://plone.social/@plone\",\n+ \"Twitter\": \"https://twitter.com/plone\",\n+ \"Videos\": \"https://youtube.com/@plonecms\",\n+ \"Sponsor\": \"https://github.com/sponsors/plone\",\n+ },\n packages=find_packages(),\n namespace_packages=['Products'],\n include_package_data=True,\n", "issue": "Update package metadata in PyPi\n\n", "before_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nversion = '6.0.0rc2.dev0'\n\n\nsetup(\n name='Products.CMFPlone',\n version=version,\n description=\"The Plone Content Management System (core)\",\n long_description=open(\"README.rst\").read() + \"\\n\" +\n open(\"CHANGES.rst\").read(),\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Plone\",\n \"Framework :: Plone :: 6.0\",\n \"Framework :: Plone :: Core\",\n \"Framework :: Zope :: 5\",\n \"License :: OSI Approved :: GNU General Public License v2 (GPLv2)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires='>=3.8',\n keywords='Plone CMF Python Zope CMS Webapplication',\n author='Plone Foundation',\n author_email='[email protected]',\n url='https://plone.org',\n license='GPL version 2',\n packages=find_packages(),\n namespace_packages=['Products'],\n include_package_data=True,\n zip_safe=False,\n install_requires=[\n 'borg.localrole',\n 'five.customerize',\n 'lxml',\n 'plone.api >= 1.4.4',\n 'plone.app.content',\n 'plone.app.contentlisting',\n 'plone.app.contentmenu >= 2.0.1',\n 'plone.app.contentrules',\n 'plone.app.contenttypes',\n 'plone.app.customerize',\n 'plone.app.dexterity',\n 'plone.app.discussion',\n 'plone.app.i18n',\n 'plone.app.layout >= 2.5.15',\n 'plone.app.linkintegrity >=1.0.3',\n 'plone.app.locales',\n 'plone.app.multilingual',\n 'plone.app.portlets',\n 'plone.app.redirector',\n 'plone.app.registry',\n 'plone.app.theming',\n 'plone.app.users',\n 'plone.app.uuid',\n 'plone.app.viewletmanager',\n 'plone.app.vocabularies',\n 'plone.app.workflow',\n 'plone.base',\n 'plone.browserlayer >= 2.1.5',\n 'plone.contentrules',\n 'plone.folder',\n 'plone.i18n >= 4.0.5',\n 'plone.indexer',\n 'plone.intelligenttext',\n 'plone.locking',\n 'plone.memoize',\n 'plone.outputfilters',\n 'plone.portlet.collection',\n 'plone.portlet.static',\n 'plone.portlets',\n 'plone.protect >= 3.0.0',\n 'plone.resource',\n 'plone.schema',\n 'plone.session',\n 'plone.staticresources',\n 'plone.theme',\n 'plonetheme.barceloneta',\n 'Products.CMFEditions',\n 'Products.DCWorkflow',\n 'Products.ExtendedPathIndex',\n 'Products.isurlinportal',\n 'Products.MimetypesRegistry',\n 'Products.PlonePAS',\n 'Products.PortalTransforms',\n 'Products.SiteErrorLog',\n 'Products.statusmessages',\n 'setuptools>=36.2',\n 'plone.autoinclude',\n 'webresource>=1.1',\n 'Zope[wsgi] >= 5.0',\n 'zope.app.locales >= 3.6.0',\n 'zope.cachedescriptors',\n 'zope.deferredimport',\n 'zope.deprecation',\n 'zope.dottedname',\n 'zope.i18n',\n 'zope.i18nmessageid',\n 'zope.structuredtext',\n ],\n extras_require={\n 'test': [\n 'lxml',\n 'mock',\n 'plone.app.robotframework>=1.0',\n 'robotframework-debuglibrary',\n 'plone.app.testing',\n 'zope.globalrequest',\n 'zope.testing',\n 'gunicorn',\n ]\n },\n)\n", "path": "setup.py"}]} | 1,748 | 269 |
gh_patches_debug_42382 | rasdani/github-patches | git_diff | lutris__lutris-2973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add option to turn columns on/off in List View
When right-clicking to table headbar in List View, you expect to get a menu for turning columns on/off, but you just select first game in the list.
</issue>
<code>
[start of lutris/gui/views/list.py]
1 """TreeView based game list"""
2 from gettext import gettext as _
3
4 # Third Party Libraries
5 # pylint: disable=no-member
6 from gi.repository import Gtk, Pango
7
8 # Lutris Modules
9 from lutris import settings
10 from lutris.gui.views import (
11 COL_ICON, COL_INSTALLED_AT, COL_INSTALLED_AT_TEXT, COL_LASTPLAYED, COL_LASTPLAYED_TEXT, COL_NAME, COL_PLATFORM,
12 COL_PLAYTIME, COL_PLAYTIME_TEXT, COL_RUNNER_HUMAN_NAME, COL_YEAR, COLUMN_NAMES
13 )
14 from lutris.gui.views.base import GameView
15 from lutris.gui.views.store import sort_func
16
17
18 class GameListView(Gtk.TreeView, GameView):
19
20 """Show the main list of games."""
21
22 __gsignals__ = GameView.__gsignals__
23
24 def __init__(self, store):
25 self.game_store = store
26 self.model = self.game_store.modelsort
27 super().__init__(self.model)
28 self.set_rules_hint(True)
29
30 # Icon column
31 image_cell = Gtk.CellRendererPixbuf()
32 column = Gtk.TreeViewColumn("", image_cell, pixbuf=COL_ICON)
33 column.set_reorderable(True)
34 column.set_sort_indicator(False)
35 self.append_column(column)
36
37 # Text columns
38 default_text_cell = self.set_text_cell()
39 name_cell = self.set_text_cell()
40 name_cell.set_padding(5, 0)
41
42 self.set_column(name_cell, _("Name"), COL_NAME, 200)
43 self.set_column(default_text_cell, _("Year"), COL_YEAR, 60)
44 self.set_column(default_text_cell, _("Runner"), COL_RUNNER_HUMAN_NAME, 120)
45 self.set_column(default_text_cell, _("Platform"), COL_PLATFORM, 120)
46 self.set_column(default_text_cell, _("Last Played"), COL_LASTPLAYED_TEXT, 120)
47 self.set_sort_with_column(COL_LASTPLAYED_TEXT, COL_LASTPLAYED)
48 self.set_column(default_text_cell, _("Installed At"), COL_INSTALLED_AT_TEXT, 120)
49 self.set_sort_with_column(COL_INSTALLED_AT_TEXT, COL_INSTALLED_AT)
50 self.set_column(default_text_cell, _("Play Time"), COL_PLAYTIME_TEXT, 100)
51 self.set_sort_with_column(COL_PLAYTIME_TEXT, COL_PLAYTIME)
52
53 self.get_selection().set_mode(Gtk.SelectionMode.SINGLE)
54
55 self.connect_signals()
56 self.connect("row-activated", self.on_row_activated)
57 self.get_selection().connect("changed", self.on_cursor_changed)
58
59 @staticmethod
60 def set_text_cell():
61 text_cell = Gtk.CellRendererText()
62 text_cell.set_padding(10, 0)
63 text_cell.set_property("ellipsize", Pango.EllipsizeMode.END)
64 return text_cell
65
66 def set_column(self, cell, header, column_id, default_width, sort_id=None):
67 column = Gtk.TreeViewColumn(header, cell, markup=column_id)
68 column.set_sort_indicator(True)
69 column.set_sort_column_id(column_id if sort_id is None else sort_id)
70 self.set_column_sort(column_id if sort_id is None else sort_id)
71 column.set_resizable(True)
72 column.set_reorderable(True)
73 width = settings.read_setting("%s_column_width" % COLUMN_NAMES[column_id], "list view")
74 column.set_fixed_width(int(width) if width else default_width)
75 self.append_column(column)
76 column.connect("notify::width", self.on_column_width_changed)
77 return column
78
79 def set_column_sort(self, col):
80 """Sort a column and fallback to sorting by name and runner."""
81 self.model.set_sort_func(col, sort_func, col)
82
83 def set_sort_with_column(self, col, sort_col):
84 """Sort a column by using another column's data"""
85 self.model.set_sort_func(col, sort_func, sort_col)
86
87 def get_selected_item(self):
88 """Return the currently selected game's id."""
89 selection = self.get_selection()
90 if not selection:
91 return None
92 _model, select_iter = selection.get_selected()
93 if select_iter:
94 return select_iter
95
96 def select(self):
97 self.set_cursor(self.current_path[0])
98
99 def set_selected_game(self, game_id):
100 row = self.game_store.get_row_by_id(game_id, filtered=True)
101 if row:
102 self.set_cursor(row.path)
103
104 def on_row_activated(self, widget, line=None, column=None):
105 """Handles double clicks"""
106 selected_item = self.get_selected_item()
107 if selected_item:
108 selected_game = self.get_selected_game(selected_item)
109 else:
110 selected_game = None
111 self.emit("game-activated", selected_game)
112
113 def on_cursor_changed(self, widget, _line=None, _column=None):
114 selected_item = self.get_selected_item()
115 if selected_item:
116 self.selected_game = self.get_selected_game(selected_item)
117 else:
118 self.selected_game = None
119 self.emit("game-selected", self.selected_game)
120
121 @staticmethod
122 def on_column_width_changed(col, *args):
123 col_name = col.get_title()
124 if col_name:
125 settings.write_setting(
126 col_name.replace(" ", "") + "_column_width",
127 col.get_fixed_width(),
128 "list view",
129 )
130
[end of lutris/gui/views/list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lutris/gui/views/list.py b/lutris/gui/views/list.py
--- a/lutris/gui/views/list.py
+++ b/lutris/gui/views/list.py
@@ -39,7 +39,7 @@
name_cell = self.set_text_cell()
name_cell.set_padding(5, 0)
- self.set_column(name_cell, _("Name"), COL_NAME, 200)
+ self.set_column(name_cell, _("Name"), COL_NAME, 200, always_visible=True)
self.set_column(default_text_cell, _("Year"), COL_YEAR, 60)
self.set_column(default_text_cell, _("Runner"), COL_RUNNER_HUMAN_NAME, 120)
self.set_column(default_text_cell, _("Platform"), COL_PLATFORM, 120)
@@ -63,7 +63,7 @@
text_cell.set_property("ellipsize", Pango.EllipsizeMode.END)
return text_cell
- def set_column(self, cell, header, column_id, default_width, sort_id=None):
+ def set_column(self, cell, header, column_id, default_width, always_visible=False, sort_id=None):
column = Gtk.TreeViewColumn(header, cell, markup=column_id)
column.set_sort_indicator(True)
column.set_sort_column_id(column_id if sort_id is None else sort_id)
@@ -71,9 +71,12 @@
column.set_resizable(True)
column.set_reorderable(True)
width = settings.read_setting("%s_column_width" % COLUMN_NAMES[column_id], "list view")
+ is_visible = settings.read_setting("%s_visible" % COLUMN_NAMES[column_id], "list view")
column.set_fixed_width(int(width) if width else default_width)
+ column.set_visible(is_visible == "True" or always_visible if is_visible else True)
self.append_column(column)
column.connect("notify::width", self.on_column_width_changed)
+ column.get_button().connect('button-press-event', self.on_column_header_button_pressed)
return column
def set_column_sort(self, col):
@@ -101,6 +104,13 @@
if row:
self.set_cursor(row.path)
+ def on_column_header_button_pressed(self, button, event):
+ """Handles column header button press events"""
+ if event.button == 3:
+ menu = GameListColumnToggleMenu(self.get_columns())
+ menu.popup_at_pointer(None)
+ return True
+
def on_row_activated(self, widget, line=None, column=None):
"""Handles double clicks"""
selected_item = self.get_selected_item()
@@ -127,3 +137,37 @@
col.get_fixed_width(),
"list view",
)
+
+
+class GameListColumnToggleMenu(Gtk.Menu):
+
+ def __init__(self, columns):
+ super().__init__()
+ self.columns = columns
+ self.column_map = {}
+ self.create_menuitems()
+ self.show_all()
+
+ def create_menuitems(self):
+ for column in self.columns:
+ title = column.get_title()
+ if title == "":
+ continue
+ checkbox = Gtk.CheckMenuItem(title)
+ checkbox.set_active(column.get_visible())
+ if title == _("Name"):
+ checkbox.set_sensitive(False)
+ else:
+ checkbox.connect("toggled", self.on_toggle_column)
+ self.column_map[checkbox] = column
+ self.append(checkbox)
+
+ def on_toggle_column(self, check_menu_item):
+ column = self.column_map[check_menu_item]
+ is_visible = check_menu_item.get_active()
+ column.set_visible(is_visible)
+ settings.write_setting(
+ column.get_title().replace(" ", "") + "_visible",
+ str(is_visible),
+ "list view",
+ )
| {"golden_diff": "diff --git a/lutris/gui/views/list.py b/lutris/gui/views/list.py\n--- a/lutris/gui/views/list.py\n+++ b/lutris/gui/views/list.py\n@@ -39,7 +39,7 @@\n name_cell = self.set_text_cell()\n name_cell.set_padding(5, 0)\n \n- self.set_column(name_cell, _(\"Name\"), COL_NAME, 200)\n+ self.set_column(name_cell, _(\"Name\"), COL_NAME, 200, always_visible=True)\n self.set_column(default_text_cell, _(\"Year\"), COL_YEAR, 60)\n self.set_column(default_text_cell, _(\"Runner\"), COL_RUNNER_HUMAN_NAME, 120)\n self.set_column(default_text_cell, _(\"Platform\"), COL_PLATFORM, 120)\n@@ -63,7 +63,7 @@\n text_cell.set_property(\"ellipsize\", Pango.EllipsizeMode.END)\n return text_cell\n \n- def set_column(self, cell, header, column_id, default_width, sort_id=None):\n+ def set_column(self, cell, header, column_id, default_width, always_visible=False, sort_id=None):\n column = Gtk.TreeViewColumn(header, cell, markup=column_id)\n column.set_sort_indicator(True)\n column.set_sort_column_id(column_id if sort_id is None else sort_id)\n@@ -71,9 +71,12 @@\n column.set_resizable(True)\n column.set_reorderable(True)\n width = settings.read_setting(\"%s_column_width\" % COLUMN_NAMES[column_id], \"list view\")\n+ is_visible = settings.read_setting(\"%s_visible\" % COLUMN_NAMES[column_id], \"list view\")\n column.set_fixed_width(int(width) if width else default_width)\n+ column.set_visible(is_visible == \"True\" or always_visible if is_visible else True)\n self.append_column(column)\n column.connect(\"notify::width\", self.on_column_width_changed)\n+ column.get_button().connect('button-press-event', self.on_column_header_button_pressed)\n return column\n \n def set_column_sort(self, col):\n@@ -101,6 +104,13 @@\n if row:\n self.set_cursor(row.path)\n \n+ def on_column_header_button_pressed(self, button, event):\n+ \"\"\"Handles column header button press events\"\"\"\n+ if event.button == 3:\n+ menu = GameListColumnToggleMenu(self.get_columns())\n+ menu.popup_at_pointer(None)\n+ return True\n+\n def on_row_activated(self, widget, line=None, column=None):\n \"\"\"Handles double clicks\"\"\"\n selected_item = self.get_selected_item()\n@@ -127,3 +137,37 @@\n col.get_fixed_width(),\n \"list view\",\n )\n+\n+\n+class GameListColumnToggleMenu(Gtk.Menu):\n+\n+ def __init__(self, columns):\n+ super().__init__()\n+ self.columns = columns\n+ self.column_map = {}\n+ self.create_menuitems()\n+ self.show_all()\n+\n+ def create_menuitems(self):\n+ for column in self.columns:\n+ title = column.get_title()\n+ if title == \"\":\n+ continue\n+ checkbox = Gtk.CheckMenuItem(title)\n+ checkbox.set_active(column.get_visible())\n+ if title == _(\"Name\"):\n+ checkbox.set_sensitive(False)\n+ else:\n+ checkbox.connect(\"toggled\", self.on_toggle_column)\n+ self.column_map[checkbox] = column\n+ self.append(checkbox)\n+\n+ def on_toggle_column(self, check_menu_item):\n+ column = self.column_map[check_menu_item]\n+ is_visible = check_menu_item.get_active()\n+ column.set_visible(is_visible)\n+ settings.write_setting(\n+ column.get_title().replace(\" \", \"\") + \"_visible\",\n+ str(is_visible),\n+ \"list view\",\n+ )\n", "issue": "Add option to turn columns on/off in List View\nWhen right-clicking to table headbar in List View, you expect to get a menu for turning columns on/off, but you just select first game in the list.\n", "before_files": [{"content": "\"\"\"TreeView based game list\"\"\"\nfrom gettext import gettext as _\n\n# Third Party Libraries\n# pylint: disable=no-member\nfrom gi.repository import Gtk, Pango\n\n# Lutris Modules\nfrom lutris import settings\nfrom lutris.gui.views import (\n COL_ICON, COL_INSTALLED_AT, COL_INSTALLED_AT_TEXT, COL_LASTPLAYED, COL_LASTPLAYED_TEXT, COL_NAME, COL_PLATFORM,\n COL_PLAYTIME, COL_PLAYTIME_TEXT, COL_RUNNER_HUMAN_NAME, COL_YEAR, COLUMN_NAMES\n)\nfrom lutris.gui.views.base import GameView\nfrom lutris.gui.views.store import sort_func\n\n\nclass GameListView(Gtk.TreeView, GameView):\n\n \"\"\"Show the main list of games.\"\"\"\n\n __gsignals__ = GameView.__gsignals__\n\n def __init__(self, store):\n self.game_store = store\n self.model = self.game_store.modelsort\n super().__init__(self.model)\n self.set_rules_hint(True)\n\n # Icon column\n image_cell = Gtk.CellRendererPixbuf()\n column = Gtk.TreeViewColumn(\"\", image_cell, pixbuf=COL_ICON)\n column.set_reorderable(True)\n column.set_sort_indicator(False)\n self.append_column(column)\n\n # Text columns\n default_text_cell = self.set_text_cell()\n name_cell = self.set_text_cell()\n name_cell.set_padding(5, 0)\n\n self.set_column(name_cell, _(\"Name\"), COL_NAME, 200)\n self.set_column(default_text_cell, _(\"Year\"), COL_YEAR, 60)\n self.set_column(default_text_cell, _(\"Runner\"), COL_RUNNER_HUMAN_NAME, 120)\n self.set_column(default_text_cell, _(\"Platform\"), COL_PLATFORM, 120)\n self.set_column(default_text_cell, _(\"Last Played\"), COL_LASTPLAYED_TEXT, 120)\n self.set_sort_with_column(COL_LASTPLAYED_TEXT, COL_LASTPLAYED)\n self.set_column(default_text_cell, _(\"Installed At\"), COL_INSTALLED_AT_TEXT, 120)\n self.set_sort_with_column(COL_INSTALLED_AT_TEXT, COL_INSTALLED_AT)\n self.set_column(default_text_cell, _(\"Play Time\"), COL_PLAYTIME_TEXT, 100)\n self.set_sort_with_column(COL_PLAYTIME_TEXT, COL_PLAYTIME)\n\n self.get_selection().set_mode(Gtk.SelectionMode.SINGLE)\n\n self.connect_signals()\n self.connect(\"row-activated\", self.on_row_activated)\n self.get_selection().connect(\"changed\", self.on_cursor_changed)\n\n @staticmethod\n def set_text_cell():\n text_cell = Gtk.CellRendererText()\n text_cell.set_padding(10, 0)\n text_cell.set_property(\"ellipsize\", Pango.EllipsizeMode.END)\n return text_cell\n\n def set_column(self, cell, header, column_id, default_width, sort_id=None):\n column = Gtk.TreeViewColumn(header, cell, markup=column_id)\n column.set_sort_indicator(True)\n column.set_sort_column_id(column_id if sort_id is None else sort_id)\n self.set_column_sort(column_id if sort_id is None else sort_id)\n column.set_resizable(True)\n column.set_reorderable(True)\n width = settings.read_setting(\"%s_column_width\" % COLUMN_NAMES[column_id], \"list view\")\n column.set_fixed_width(int(width) if width else default_width)\n self.append_column(column)\n column.connect(\"notify::width\", self.on_column_width_changed)\n return column\n\n def set_column_sort(self, col):\n \"\"\"Sort a column and fallback to sorting by name and runner.\"\"\"\n self.model.set_sort_func(col, sort_func, col)\n\n def set_sort_with_column(self, col, sort_col):\n \"\"\"Sort a column by using another column's data\"\"\"\n self.model.set_sort_func(col, sort_func, sort_col)\n\n def get_selected_item(self):\n \"\"\"Return the currently selected game's id.\"\"\"\n selection = self.get_selection()\n if not selection:\n return None\n _model, select_iter = selection.get_selected()\n if select_iter:\n return select_iter\n\n def select(self):\n self.set_cursor(self.current_path[0])\n\n def set_selected_game(self, game_id):\n row = self.game_store.get_row_by_id(game_id, filtered=True)\n if row:\n self.set_cursor(row.path)\n\n def on_row_activated(self, widget, line=None, column=None):\n \"\"\"Handles double clicks\"\"\"\n selected_item = self.get_selected_item()\n if selected_item:\n selected_game = self.get_selected_game(selected_item)\n else:\n selected_game = None\n self.emit(\"game-activated\", selected_game)\n\n def on_cursor_changed(self, widget, _line=None, _column=None):\n selected_item = self.get_selected_item()\n if selected_item:\n self.selected_game = self.get_selected_game(selected_item)\n else:\n self.selected_game = None\n self.emit(\"game-selected\", self.selected_game)\n\n @staticmethod\n def on_column_width_changed(col, *args):\n col_name = col.get_title()\n if col_name:\n settings.write_setting(\n col_name.replace(\" \", \"\") + \"_column_width\",\n col.get_fixed_width(),\n \"list view\",\n )\n", "path": "lutris/gui/views/list.py"}]} | 1,996 | 840 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.