| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses (1 value) | stringclasses (1 value) | stringlengths 13-58 | stringlengths 1.71k-9.01k | stringlengths 151-4.94k | stringlengths 465-11.3k | int64 557-2.05k | int64 48-1.02k |
gh_patches_debug_20622 | rasdani/github-patches | git_diff | ManimCommunity__manim-573 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG-General] Terminal output not formatted and uses ASCII in windows
**Describe the bug**
I am on Windows and command prompt and I `cd` into example_scenes directory and run
```sh
manim basic.py
```
and I get an output like below.
[screenshot: Screenshot (2)]
I should get it in green colour, though.
**To Reproduce**
Just running the one in example_scenes is enough.
**Expected behavior**
The ill-formatted output should be in green colour.
**Logs**
<details><summary>Terminal output (Screenshots acceptable)</summary>

<!-- Paste screenshot here -->
</details>
**System Specifications**
<details><summary>System Details</summary>
- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)): Windows 7
- Python version (`python/py/python3 --version`): 3.8
</issue>
<code>
[start of manim/config/logger.py]
1 """
2 logger.py
3 ---------
4 This is the logging library for manim.
5 This library uses rich for coloured log outputs.
6
7 """
8
9
10 __all__ = ["logger", "console"]
11
12
13 import configparser
14 import logging
15
16 from rich.console import Console
17 from rich.logging import RichHandler
18 from rich.theme import Theme
19 from rich import print as printf
20 from rich import errors, color
21 import json
22 import copy
23
24
25 class JSONFormatter(logging.Formatter):
26 """Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON)."""
27
28 def format(self, record):
29 record_c = copy.deepcopy(record)
30 if record_c.args:
31 for arg in record_c.args:
32 record_c.args[arg] = "<>"
33 return json.dumps(
34 {
35 "levelname": record_c.levelname,
36 "module": record_c.module,
37 "message": super().format(record_c),
38 }
39 )
40
41
42 def _parse_theme(config_logger):
43 theme = dict(
44 zip(
45 [key.replace("_", ".") for key in config_logger.keys()],
46 list(config_logger.values()),
47 )
48 )
49
50 theme["log.width"] = None if theme["log.width"] == "-1" else int(theme["log.width"])
51
52 theme["log.height"] = (
53 None if theme["log.height"] == "-1" else int(theme["log.height"])
54 )
55 theme["log.timestamps"] = False
56 try:
57 customTheme = Theme(
58 {
59 k: v
60 for k, v in theme.items()
61 if k not in ["log.width", "log.height", "log.timestamps"]
62 }
63 )
64 except (color.ColorParseError, errors.StyleSyntaxError):
65 customTheme = None
66 printf(
67 "[logging.level.error]It seems your colour configuration couldn't be parsed. Loading the default color configuration...[/logging.level.error]"
68 )
69 return customTheme
70
71
72 def set_rich_logger(config_logger, verbosity):
73 """Will set the RichHandler of the logger.
74
75 Parameter
76 ----------
77 config_logger :class:
78 Config object of the logger.
79 """
80 theme = _parse_theme(config_logger)
81 global console
82 console = Console(theme=theme)
83 # These keywords Are Highlighted specially.
84 RichHandler.KEYWORDS = [
85 "Played",
86 "animations",
87 "scene",
88 "Reading",
89 "Writing",
90 "script",
91 "arguments",
92 "Invalid",
93 "Aborting",
94 "module",
95 "File",
96 "Rendering",
97 "Rendered",
98 ]
99 rich_handler = RichHandler(
100 console=console, show_time=config_logger.getboolean("log_timestamps")
101 )
102 global logger
103 rich_handler.setLevel(verbosity)
104 logger.addHandler(rich_handler)
105
106
107 def set_file_logger(log_file_path):
108 file_handler = logging.FileHandler(log_file_path, mode="w")
109 file_handler.setFormatter(JSONFormatter())
110 global logger
111 logger.addHandler(file_handler)
112
113
114 logger = logging.getLogger("manim")
115 # The console is set to None as it will be changed by set_rich_logger.
116 console = None
117
118 # TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots
119 logging.getLogger("PIL").setLevel(logging.INFO)
120 logging.getLogger("matplotlib").setLevel(logging.INFO)
121
[end of manim/config/logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/manim/config/logger.py b/manim/config/logger.py
--- a/manim/config/logger.py
+++ b/manim/config/logger.py
@@ -10,12 +10,12 @@
__all__ = ["logger", "console"]
-import configparser
import logging
from rich.console import Console
from rich.logging import RichHandler
from rich.theme import Theme
+from rich.traceback import install
from rich import print as printf
from rich import errors, color
import json
@@ -114,7 +114,7 @@
logger = logging.getLogger("manim")
# The console is set to None as it will be changed by set_rich_logger.
console = None
-
+install()
# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots
logging.getLogger("PIL").setLevel(logging.INFO)
logging.getLogger("matplotlib").setLevel(logging.INFO)
| {"golden_diff": "diff --git a/manim/config/logger.py b/manim/config/logger.py\n--- a/manim/config/logger.py\n+++ b/manim/config/logger.py\n@@ -10,12 +10,12 @@\n __all__ = [\"logger\", \"console\"]\n \n \n-import configparser\n import logging\n \n from rich.console import Console\n from rich.logging import RichHandler\n from rich.theme import Theme\n+from rich.traceback import install\n from rich import print as printf\n from rich import errors, color\n import json\n@@ -114,7 +114,7 @@\n logger = logging.getLogger(\"manim\")\n # The console is set to None as it will be changed by set_rich_logger.\n console = None\n-\n+install()\n # TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\n logging.getLogger(\"PIL\").setLevel(logging.INFO)\n logging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "issue": " [BUG-General] Terminal output not formatted and uses ASCII in windows\n**Describe the bug**\r\nI am on Windows and command prompt and I `cd` into example_scenes directory and run \r\n```sh\r\nmanim basic.py\r\n``` \r\nand I get a output like below.\r\n\r\nI should get in green colour though.\r\n\r\n**To Reproduce**\r\nJust running the one in example_scene in enough.\r\n\r\n**Expected behavior**\r\nThe ill formatted thing should be in green colour.\r\n\r\n**Logs**\r\n<details><summary>Terminal output (Screenshots acceptable)</summary>\r\n\r\n\r\n\r\n<!-- Paste screenshot here -->\r\n\r\n</details>\r\n\r\n**System Specifications**\r\n\r\n<details><summary>System Details</summary>\r\n\r\n- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)): Windows 7\r\n- Python version (`python/py/python3 --version`): 3.8\r\n\n", "before_files": [{"content": "\"\"\"\nlogger.py\n---------\nThis is the logging library for manim.\nThis library uses rich for coloured log outputs.\n\n\"\"\"\n\n\n__all__ = [\"logger\", \"console\"]\n\n\nimport configparser\nimport logging\n\nfrom rich.console import Console\nfrom rich.logging import RichHandler\nfrom rich.theme import Theme\nfrom rich import print as printf\nfrom rich import errors, color\nimport json\nimport copy\n\n\nclass JSONFormatter(logging.Formatter):\n \"\"\"Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON).\"\"\"\n\n def format(self, record):\n record_c = copy.deepcopy(record)\n if record_c.args:\n for arg in record_c.args:\n record_c.args[arg] = \"<>\"\n return json.dumps(\n {\n \"levelname\": record_c.levelname,\n \"module\": record_c.module,\n \"message\": super().format(record_c),\n }\n )\n\n\ndef _parse_theme(config_logger):\n theme = dict(\n zip(\n [key.replace(\"_\", \".\") for key in config_logger.keys()],\n list(config_logger.values()),\n )\n )\n\n theme[\"log.width\"] = None if theme[\"log.width\"] == \"-1\" else int(theme[\"log.width\"])\n\n theme[\"log.height\"] = (\n None if theme[\"log.height\"] == \"-1\" else int(theme[\"log.height\"])\n )\n theme[\"log.timestamps\"] = False\n try:\n customTheme = Theme(\n {\n k: v\n for k, v in theme.items()\n if k not in [\"log.width\", \"log.height\", \"log.timestamps\"]\n }\n )\n except (color.ColorParseError, errors.StyleSyntaxError):\n customTheme = None\n printf(\n \"[logging.level.error]It seems your colour configuration couldn't be parsed. 
Loading the default color configuration...[/logging.level.error]\"\n )\n return customTheme\n\n\ndef set_rich_logger(config_logger, verbosity):\n \"\"\"Will set the RichHandler of the logger.\n\n Parameter\n ----------\n config_logger :class:\n Config object of the logger.\n \"\"\"\n theme = _parse_theme(config_logger)\n global console\n console = Console(theme=theme)\n # These keywords Are Highlighted specially.\n RichHandler.KEYWORDS = [\n \"Played\",\n \"animations\",\n \"scene\",\n \"Reading\",\n \"Writing\",\n \"script\",\n \"arguments\",\n \"Invalid\",\n \"Aborting\",\n \"module\",\n \"File\",\n \"Rendering\",\n \"Rendered\",\n ]\n rich_handler = RichHandler(\n console=console, show_time=config_logger.getboolean(\"log_timestamps\")\n )\n global logger\n rich_handler.setLevel(verbosity)\n logger.addHandler(rich_handler)\n\n\ndef set_file_logger(log_file_path):\n file_handler = logging.FileHandler(log_file_path, mode=\"w\")\n file_handler.setFormatter(JSONFormatter())\n global logger\n logger.addHandler(file_handler)\n\n\nlogger = logging.getLogger(\"manim\")\n# The console is set to None as it will be changed by set_rich_logger.\nconsole = None\n\n# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\nlogging.getLogger(\"PIL\").setLevel(logging.INFO)\nlogging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "path": "manim/config/logger.py"}]} | 1,793 | 200 |
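For context on the row above: the merged patch's whole fix is to call `rich.traceback.install()` at import time, routing tracebacks through rich's console layer, which also takes care of ANSI colour handling on Windows terminals. A minimal standalone sketch of that pattern, assuming only that the `rich` package is installed (not manim's module layout), might look like:

```python
import logging

from rich.logging import RichHandler
from rich.traceback import install

# The one-line change from the golden diff: install rich's traceback hook,
# which prints exceptions through a rich Console with proper colour support.
install()

# Route ordinary log records through rich as well, as manim's logger module does.
logging.basicConfig(level="INFO", format="%(message)s", handlers=[RichHandler()])
logging.getLogger("demo").info("coloured, formatted output via rich")
```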
gh_patches_debug_27175 | rasdani/github-patches | git_diff | xorbitsai__inference-143 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ENH: auto find available port for API
### Describe the bug
```
~ ❯ xinference 6s base 18:24:18
Traceback (most recent call last):
File "/Users/hekaisheng/miniconda3/bin/xinference", line 33, in <module>
sys.exit(load_entry_point('xinference', 'console_scripts', 'xinference')())
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1637, in invoke
super().invoke(ctx)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/cmdline.py", line 51, in cli
main(
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py", line 50, in main
loop.run_until_complete(task)
File "/Users/hekaisheng/miniconda3/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py", line 36, in _start_local_cluster
url = await start_supervisor_components(address=address, host=host, port=port)
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/supervisor.py", line 35, in start_supervisor_components
sock.bind((host, port))
OSError: [Errno 48] Address already in use
```
Use an available port if the user doesn't specify one.
</issue>
<code>
[start of xinference/deploy/supervisor.py]
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import logging
17 import socket
18 from typing import Dict, Optional
19
20 import xoscar as xo
21
22 from ..core.gradio import GradioApp
23 from ..core.restful_api import RESTfulAPIActor
24 from ..core.service import SupervisorActor
25
26 logger = logging.getLogger("xinference")
27
28
29 async def start_supervisor_components(address: str, host: str, port: int):
30 await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())
31 gradio_block = GradioApp(address).build()
32 # create a socket for RESTful API
33 sockets = []
34 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
35 sock.bind((host, port))
36 sockets.append(sock)
37 restful_actor = await xo.create_actor(
38 RESTfulAPIActor,
39 address=address,
40 uid=RESTfulAPIActor.uid(),
41 sockets=sockets,
42 gradio_block=gradio_block,
43 )
44 await restful_actor.serve()
45 url = f"http://{host}:{port}"
46 logger.info(f"Server address: {url}")
47 return url
48
49
50 async def _start_supervisor(
51 address: str, host: str, port: int, logging_conf: Optional[Dict] = None
52 ):
53 pool = None
54 try:
55 pool = await xo.create_actor_pool(
56 address=address, n_process=0, logging_conf=logging_conf
57 )
58 await start_supervisor_components(address=address, host=host, port=port)
59 await pool.join()
60 except asyncio.exceptions.CancelledError:
61 if pool is not None:
62 await pool.stop()
63
64
65 def main(*args, **kwargs):
66 loop = asyncio.get_event_loop()
67 task = loop.create_task(_start_supervisor(*args, **kwargs))
68
69 try:
70 loop.run_until_complete(task)
71 except KeyboardInterrupt:
72 task.cancel()
73 loop.run_until_complete(task)
74 # avoid displaying exception-unhandled warnings
75 task.exception()
76
[end of xinference/deploy/supervisor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py
--- a/xinference/deploy/supervisor.py
+++ b/xinference/deploy/supervisor.py
@@ -18,7 +18,9 @@
from typing import Dict, Optional
import xoscar as xo
+from xoscar.utils import get_next_port
+from ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT
from ..core.gradio import GradioApp
from ..core.restful_api import RESTfulAPIActor
from ..core.service import SupervisorActor
@@ -30,10 +32,26 @@
await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())
gradio_block = GradioApp(address).build()
# create a socket for RESTful API
- sockets = []
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- sock.bind((host, port))
- sockets.append(sock)
+ try:
+ sockets = []
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ sock.bind((host, port))
+ sockets.append(sock)
+ except OSError:
+ if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:
+ while True:
+ try:
+ sockets = []
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ port = get_next_port()
+ sock.bind((host, port))
+ sockets.append(sock)
+ break
+ except OSError:
+ pass
+ else:
+ raise OSError
+
restful_actor = await xo.create_actor(
RESTfulAPIActor,
address=address,
| {"golden_diff": "diff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py\n--- a/xinference/deploy/supervisor.py\n+++ b/xinference/deploy/supervisor.py\n@@ -18,7 +18,9 @@\n from typing import Dict, Optional\n \n import xoscar as xo\n+from xoscar.utils import get_next_port\n \n+from ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT\n from ..core.gradio import GradioApp\n from ..core.restful_api import RESTfulAPIActor\n from ..core.service import SupervisorActor\n@@ -30,10 +32,26 @@\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n- sockets = []\n- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n- sock.bind((host, port))\n- sockets.append(sock)\n+ try:\n+ sockets = []\n+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n+ sock.bind((host, port))\n+ sockets.append(sock)\n+ except OSError:\n+ if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:\n+ while True:\n+ try:\n+ sockets = []\n+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n+ port = get_next_port()\n+ sock.bind((host, port))\n+ sockets.append(sock)\n+ break\n+ except OSError:\n+ pass\n+ else:\n+ raise OSError\n+\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n", "issue": "ENH: auto find available port for API\n### Describe the bug\r\n```\r\n~ \u276f xinference 6s \ue73c base 18:24:18\r\nTraceback (most recent call last):\r\n File \"/Users/hekaisheng/miniconda3/bin/xinference\", line 33, in <module>\r\n sys.exit(load_entry_point('xinference', 'console_scripts', 'xinference')())\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1637, in invoke\r\n super().invoke(ctx)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/cmdline.py\", line 51, in cli\r\n main(\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py\", line 50, in main\r\n loop.run_until_complete(task)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/asyncio/base_events.py\", line 647, in run_until_complete\r\n return future.result()\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py\", line 36, in _start_local_cluster\r\n url = await start_supervisor_components(address=address, host=host, port=port)\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/supervisor.py\", line 35, in start_supervisor_components\r\n sock.bind((host, port))\r\nOSError: [Errno 48] Address already in use\r\n```\r\n\r\nUse available port if users not specify.\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file 
except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport logging\nimport socket\nfrom typing import Dict, Optional\n\nimport xoscar as xo\n\nfrom ..core.gradio import GradioApp\nfrom ..core.restful_api import RESTfulAPIActor\nfrom ..core.service import SupervisorActor\n\nlogger = logging.getLogger(\"xinference\")\n\n\nasync def start_supervisor_components(address: str, host: str, port: int):\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n sockets = []\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.bind((host, port))\n sockets.append(sock)\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n uid=RESTfulAPIActor.uid(),\n sockets=sockets,\n gradio_block=gradio_block,\n )\n await restful_actor.serve()\n url = f\"http://{host}:{port}\"\n logger.info(f\"Server address: {url}\")\n return url\n\n\nasync def _start_supervisor(\n address: str, host: str, port: int, logging_conf: Optional[Dict] = None\n):\n pool = None\n try:\n pool = await xo.create_actor_pool(\n address=address, n_process=0, logging_conf=logging_conf\n )\n await start_supervisor_components(address=address, host=host, port=port)\n await pool.join()\n except asyncio.exceptions.CancelledError:\n if pool is not None:\n await pool.stop()\n\n\ndef main(*args, **kwargs):\n loop = asyncio.get_event_loop()\n task = loop.create_task(_start_supervisor(*args, **kwargs))\n\n try:\n loop.run_until_complete(task)\n except KeyboardInterrupt:\n task.cancel()\n loop.run_until_complete(task)\n # avoid displaying exception-unhandled warnings\n task.exception()\n", "path": "xinference/deploy/supervisor.py"}]} | 1,825 | 369 |
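The patch above falls back to `xoscar.utils.get_next_port()` when binding the default port raises `OSError`. A rough standalone illustration of the same fallback idea, using only the standard library (sequential probing here merely stands in for xoscar's port picker):

```python
import socket


def bind_with_fallback(host: str, preferred: int, max_tries: int = 20) -> socket.socket:
    """Bind to the preferred port if free, otherwise probe the ports after it.

    Illustrative only: the real patch asks xoscar for a candidate port instead
    of probing sequentially, and only falls back for the default port.
    """
    for offset in range(max_tries):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind((host, preferred + offset))
            return sock
        except OSError:
            sock.close()
    raise OSError(f"no free port in {preferred}..{preferred + max_tries - 1}")


server_sock = bind_with_fallback("127.0.0.1", 8000)
print("bound to port", server_sock.getsockname()[1])
```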
gh_patches_debug_52467 | rasdani/github-patches | git_diff | iterative__dvc-1859 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Repro: logger doesn't work correctly on exception
DVC version: 0.35.5+d80137,
Platform: Linux
Method of installation: pip install from git
https://github.com/iterative/dvc/blob/54072d70b542115a78a374fa702129b6959a1d02/dvc/command/repro.py#L50-L51
These lines should be:
```
except DvcException, msg:
logger.exception(msg)
```
</issue>
<code>
[start of dvc/command/repro.py]
1 from __future__ import unicode_literals
2
3 import argparse
4 import os
5 import logging
6
7 from dvc.command.base import CmdBase, append_doc_link
8 from dvc.command.metrics import show_metrics
9 from dvc.command.status import CmdDataStatus
10 from dvc.exceptions import DvcException
11
12
13 logger = logging.getLogger(__name__)
14
15
16 class CmdRepro(CmdBase):
17 def run(self):
18 recursive = not self.args.single_item
19 saved_dir = os.path.realpath(os.curdir)
20 if self.args.cwd:
21 os.chdir(self.args.cwd)
22
23 # Dirty hack so the for loop below can at least enter once
24 if self.args.all_pipelines:
25 self.args.targets = [None]
26 elif not self.args.targets:
27 self.args.targets = self.default_targets
28
29 ret = 0
30 for target in self.args.targets:
31 try:
32 stages = self.repo.reproduce(
33 target,
34 recursive=recursive,
35 force=self.args.force,
36 dry=self.args.dry,
37 interactive=self.args.interactive,
38 pipeline=self.args.pipeline,
39 all_pipelines=self.args.all_pipelines,
40 ignore_build_cache=self.args.ignore_build_cache,
41 no_commit=self.args.no_commit,
42 )
43
44 if len(stages) == 0:
45 logger.info(CmdDataStatus.UP_TO_DATE_MSG)
46
47 if self.args.metrics:
48 metrics = self.repo.metrics.show()
49 show_metrics(metrics)
50 except DvcException:
51 logger.exception()
52 ret = 1
53 break
54
55 os.chdir(saved_dir)
56 return ret
57
58
59 def add_parser(subparsers, parent_parser):
60 REPRO_HELP = "Check for changes and reproduce DVC file and dependencies."
61 repro_parser = subparsers.add_parser(
62 "repro",
63 parents=[parent_parser],
64 description=append_doc_link(REPRO_HELP, "repro"),
65 help=REPRO_HELP,
66 formatter_class=argparse.RawDescriptionHelpFormatter,
67 )
68 repro_parser.add_argument(
69 "targets",
70 nargs="*",
71 help="DVC file to reproduce (default - 'Dvcfile').",
72 )
73 repro_parser.add_argument(
74 "-f",
75 "--force",
76 action="store_true",
77 default=False,
78 help="Reproduce even if dependencies were not changed.",
79 )
80 repro_parser.add_argument(
81 "-s",
82 "--single-item",
83 action="store_true",
84 default=False,
85 help="Reproduce only single data item without recursive dependencies "
86 "check.",
87 )
88 repro_parser.add_argument(
89 "-c",
90 "--cwd",
91 default=os.path.curdir,
92 help="Directory within your repo to reproduce from.",
93 )
94 repro_parser.add_argument(
95 "-m",
96 "--metrics",
97 action="store_true",
98 default=False,
99 help="Show metrics after reproduction.",
100 )
101 repro_parser.add_argument(
102 "--dry",
103 action="store_true",
104 default=False,
105 help="Only print the commands that would be executed without "
106 "actually executing.",
107 )
108 repro_parser.add_argument(
109 "-i",
110 "--interactive",
111 action="store_true",
112 default=False,
113 help="Ask for confirmation before reproducing each stage.",
114 )
115 repro_parser.add_argument(
116 "-p",
117 "--pipeline",
118 action="store_true",
119 default=False,
120 help="Reproduce the whole pipeline that the specified stage file "
121 "belongs to.",
122 )
123 repro_parser.add_argument(
124 "-P",
125 "--all-pipelines",
126 action="store_true",
127 default=False,
128 help="Reproduce all pipelines in the repo.",
129 )
130 repro_parser.add_argument(
131 "--ignore-build-cache",
132 action="store_true",
133 default=False,
134 help="Reproduce all descendants of a changed stage even if their "
135 "direct dependencies didn't change.",
136 )
137 repro_parser.add_argument(
138 "--no-commit",
139 action="store_true",
140 default=False,
141 help="Don't put files/directories into cache.",
142 )
143 repro_parser.set_defaults(func=CmdRepro)
144
[end of dvc/command/repro.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/command/repro.py b/dvc/command/repro.py
--- a/dvc/command/repro.py
+++ b/dvc/command/repro.py
@@ -48,7 +48,7 @@
metrics = self.repo.metrics.show()
show_metrics(metrics)
except DvcException:
- logger.exception()
+ logger.exception("")
ret = 1
break
| {"golden_diff": "diff --git a/dvc/command/repro.py b/dvc/command/repro.py\n--- a/dvc/command/repro.py\n+++ b/dvc/command/repro.py\n@@ -48,7 +48,7 @@\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n- logger.exception()\n+ logger.exception(\"\")\n ret = 1\n break\n", "issue": "Repro: logger doesn't work correctly on exception\nDVC version: 0.35.5+d80137,\r\nPlatform: Linux\r\nMethod of installation: pip install from git\r\n\r\nhttps://github.com/iterative/dvc/blob/54072d70b542115a78a374fa702129b6959a1d02/dvc/command/repro.py#L50-L51 \r\n\r\nThis lines should be:\r\n```\r\nexcept DvcException, msg:\r\n logger.exception(msg)\r\n```\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport os\nimport logging\n\nfrom dvc.command.base import CmdBase, append_doc_link\nfrom dvc.command.metrics import show_metrics\nfrom dvc.command.status import CmdDataStatus\nfrom dvc.exceptions import DvcException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRepro(CmdBase):\n def run(self):\n recursive = not self.args.single_item\n saved_dir = os.path.realpath(os.curdir)\n if self.args.cwd:\n os.chdir(self.args.cwd)\n\n # Dirty hack so the for loop below can at least enter once\n if self.args.all_pipelines:\n self.args.targets = [None]\n elif not self.args.targets:\n self.args.targets = self.default_targets\n\n ret = 0\n for target in self.args.targets:\n try:\n stages = self.repo.reproduce(\n target,\n recursive=recursive,\n force=self.args.force,\n dry=self.args.dry,\n interactive=self.args.interactive,\n pipeline=self.args.pipeline,\n all_pipelines=self.args.all_pipelines,\n ignore_build_cache=self.args.ignore_build_cache,\n no_commit=self.args.no_commit,\n )\n\n if len(stages) == 0:\n logger.info(CmdDataStatus.UP_TO_DATE_MSG)\n\n if self.args.metrics:\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n logger.exception()\n ret = 1\n break\n\n os.chdir(saved_dir)\n return ret\n\n\ndef add_parser(subparsers, parent_parser):\n REPRO_HELP = \"Check for changes and reproduce DVC file and dependencies.\"\n repro_parser = subparsers.add_parser(\n \"repro\",\n parents=[parent_parser],\n description=append_doc_link(REPRO_HELP, \"repro\"),\n help=REPRO_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n repro_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"DVC file to reproduce (default - 'Dvcfile').\",\n )\n repro_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce even if dependencies were not changed.\",\n )\n repro_parser.add_argument(\n \"-s\",\n \"--single-item\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce only single data item without recursive dependencies \"\n \"check.\",\n )\n repro_parser.add_argument(\n \"-c\",\n \"--cwd\",\n default=os.path.curdir,\n help=\"Directory within your repo to reproduce from.\",\n )\n repro_parser.add_argument(\n \"-m\",\n \"--metrics\",\n action=\"store_true\",\n default=False,\n help=\"Show metrics after reproduction.\",\n )\n repro_parser.add_argument(\n \"--dry\",\n action=\"store_true\",\n default=False,\n help=\"Only print the commands that would be executed without \"\n \"actually executing.\",\n )\n repro_parser.add_argument(\n \"-i\",\n \"--interactive\",\n action=\"store_true\",\n default=False,\n help=\"Ask for confirmation before reproducing each stage.\",\n )\n repro_parser.add_argument(\n \"-p\",\n \"--pipeline\",\n 
action=\"store_true\",\n default=False,\n help=\"Reproduce the whole pipeline that the specified stage file \"\n \"belongs to.\",\n )\n repro_parser.add_argument(\n \"-P\",\n \"--all-pipelines\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all pipelines in the repo.\",\n )\n repro_parser.add_argument(\n \"--ignore-build-cache\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all descendants of a changed stage even if their \"\n \"direct dependencies didn't change.\",\n )\n repro_parser.add_argument(\n \"--no-commit\",\n action=\"store_true\",\n default=False,\n help=\"Don't put files/directories into cache.\",\n )\n repro_parser.set_defaults(func=CmdRepro)\n", "path": "dvc/command/repro.py"}]} | 1,841 | 87 |
gh_patches_debug_1606 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-3370 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Frontend V2] Fix the media assets endpoint
### Description
We recently moved to the `https://evalai.s3.amazonaws.com/` endpoint for our media assets. Frontend v2 is still using the `https://staging-evalai.s3.amazonaws.com/` endpoint. We should switch to the new endpoint in frontend v2.
</issue>
<code>
[start of settings/staging.py]
1 from .prod import * # noqa: ignore=F405
2
3 ALLOWED_HOSTS = ["staging.eval.ai"]
4
5 CORS_ORIGIN_ALLOW_ALL = False
6
7 CORS_ORIGIN_WHITELIST = (
8 "https://staging-evalai.s3.amazonaws.com",
9 "https://staging.eval.ai",
10 "https://beta-staging.eval.ai",
11 )
12
[end of settings/staging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/settings/staging.py b/settings/staging.py
--- a/settings/staging.py
+++ b/settings/staging.py
@@ -5,6 +5,7 @@
CORS_ORIGIN_ALLOW_ALL = False
CORS_ORIGIN_WHITELIST = (
+ "https://evalai.s3.amazonaws.com",
"https://staging-evalai.s3.amazonaws.com",
"https://staging.eval.ai",
"https://beta-staging.eval.ai",
| {"golden_diff": "diff --git a/settings/staging.py b/settings/staging.py\n--- a/settings/staging.py\n+++ b/settings/staging.py\n@@ -5,6 +5,7 @@\n CORS_ORIGIN_ALLOW_ALL = False\n \n CORS_ORIGIN_WHITELIST = (\n+ \"https://evalai.s3.amazonaws.com\",\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n", "issue": "[Frontend V2] Fix the media assets endpoint\n### Description\r\n\r\nWe recently moved to `https://evalai.s3.amazonaws.com/` endpoint for our media assets. Frontend v2 is still using `https://staging-evalai.s3.amazonaws.com/` endpoint. We should switch to new enpdoint in frontend v2.\n", "before_files": [{"content": "from .prod import * # noqa: ignore=F405\n\nALLOWED_HOSTS = [\"staging.eval.ai\"]\n\nCORS_ORIGIN_ALLOW_ALL = False\n\nCORS_ORIGIN_WHITELIST = (\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n)\n", "path": "settings/staging.py"}]} | 699 | 99 |
gh_patches_debug_1467 | rasdani/github-patches | git_diff | ckan__ckan-7881 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid session timeout value on CKAN 2.10 (logged out users unexpectedly)
## CKAN version
2.10
## Describe the bug
According to our config declaration for [`beaker.session.timeout`](https://github.com/ckan/ckan/blob/656a39de2e7ed0ce47e15080f0f5d42b66b4929b/ckan/config/config_declaration.yaml#L306):
> Defaults to never expiring.
But the defined default value is 600 :upside_down_face:
Apart from the inconsistency, this is problematic because now that the logged-in user id is stored in the session by Flask-login, this means that users are logged out every 10 minutes.
The fix is to default it to never expire as described in the docs (which is also the [Beaker default](https://beaker.readthedocs.io/en/latest/configuration.html#session-options)), but the problem is that I can't set it to `None`, because then Beaker complains that the value is not an int:
```
File "/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py", line 290, in verify_rules
params[key] = verify_options(params[key], types, message)
File "/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py", line 281, in verify_options
raise Exception(error)
Exception: Session timeout must be an integer.
```
This is because our config parsing does not support "int or None", and leaves the string "None" as the value. I guess the alternative is to put a really big number, but it would be good to handle this better.
</issue>
<code>
[start of ckan/cli/shell.py]
1 # encoding: utf-8
2 import click
3 import logging
4
5 import ckan.model as model
6
7 from typing import Any, Mapping
8
9 from ckan.plugins import toolkit
10
11
12 log = logging.getLogger(__name__)
13
14
15 _banner = """
16 ****** Welcome to the CKAN shell ******
17
18 This session has some variables pre-populated:
19 - app (CKAN Application object)
20 - config (CKAN config dictionary)
21 - model (CKAN model module to access the Database)
22 - toolkit (CKAN toolkit module)
23 """
24
25
26 def ipython(namespace: Mapping[str, Any], banner: str) -> None:
27 import IPython
28 from traitlets.config.loader import Config
29
30 c = Config()
31 c.TerminalInteractiveShell.banner2 = banner # type: ignore
32
33 IPython.start_ipython([], user_ns=namespace, config=c)
34
35
36 def python(namespace: Mapping[str, Any], banner: str) -> None:
37 import code
38 code.interact(banner=banner, local=namespace)
39
40
41 @click.command()
42 @click.help_option("-h", "--help")
43 @click.pass_context
44 def shell(ctx: click.Context):
45 """Run an interactive IPython shell with the context of the
46 CKAN instance.
47
48 It will try to use IPython, if not installed it will callback
49 to the default Python's shell.
50 """
51
52 namespace = {
53 "app": ctx.obj.app._wsgi_app,
54 "model": model,
55 "config": ctx.obj.config,
56 "toolkit": toolkit,
57 }
58
59 try:
60 ipython(namespace, _banner)
61 except ImportError:
62 log.debug("`ipython` library is missing. Using default python shell.")
63 python(namespace, _banner)
64
[end of ckan/cli/shell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckan/cli/shell.py b/ckan/cli/shell.py
--- a/ckan/cli/shell.py
+++ b/ckan/cli/shell.py
@@ -28,7 +28,7 @@
from traitlets.config.loader import Config
c = Config()
- c.TerminalInteractiveShell.banner2 = banner # type: ignore
+ c.TerminalInteractiveShell.banner2 = banner
IPython.start_ipython([], user_ns=namespace, config=c)
| {"golden_diff": "diff --git a/ckan/cli/shell.py b/ckan/cli/shell.py\n--- a/ckan/cli/shell.py\n+++ b/ckan/cli/shell.py\n@@ -28,7 +28,7 @@\n from traitlets.config.loader import Config\n \n c = Config()\n- c.TerminalInteractiveShell.banner2 = banner # type: ignore\n+ c.TerminalInteractiveShell.banner2 = banner\n \n IPython.start_ipython([], user_ns=namespace, config=c)\n", "issue": "Invalid session timeout value on CKAN 2.10 (logged out users unexpectedly)\n## CKAN version\r\n2.10\r\n\r\n## Describe the bug\r\n\r\nAccording to our config declaration for [`beaker.session.timeout`](https://github.com/ckan/ckan/blob/656a39de2e7ed0ce47e15080f0f5d42b66b4929b/ckan/config/config_declaration.yaml#L306):\r\n\r\n> Defaults to never expiring.\r\n\r\nBut the defined default value is 600 :upside_down_face: \r\nApart from the inconsistency, this is problematic because now that the logged-in user id is stored in the session by Flask-login, this means that users are logged out every 10 minutes.\r\n\r\nThe fix is to default it to never expire as described on the docs (which is also the [Beaker default](https://beaker.readthedocs.io/en/latest/configuration.html#session-options)), but the problem is that I can set it to `None` because then Beaker complains that the value is not an int:\r\n\r\n```\r\n File \"/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py\", line 290, in verify_rules\r\n params[key] = verify_options(params[key], types, message)\r\n File \"/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py\", line 281, in verify_options\r\n raise Exception(error)\r\nException: Session timeout must be an integer.\r\n```\r\nThis is because our config parsing does not support \"int or None\", and leaves the string \"None\" as the value. I guess the alternative is to put a really big number but would be good to handle it better.\r\n\n", "before_files": [{"content": "# encoding: utf-8\nimport click\nimport logging\n\nimport ckan.model as model\n\nfrom typing import Any, Mapping\n\nfrom ckan.plugins import toolkit\n\n\nlog = logging.getLogger(__name__)\n\n\n_banner = \"\"\"\n****** Welcome to the CKAN shell ******\n\nThis session has some variables pre-populated:\n - app (CKAN Application object)\n - config (CKAN config dictionary)\n - model (CKAN model module to access the Database)\n - toolkit (CKAN toolkit module)\n \"\"\"\n\n\ndef ipython(namespace: Mapping[str, Any], banner: str) -> None:\n import IPython\n from traitlets.config.loader import Config\n\n c = Config()\n c.TerminalInteractiveShell.banner2 = banner # type: ignore\n\n IPython.start_ipython([], user_ns=namespace, config=c)\n\n\ndef python(namespace: Mapping[str, Any], banner: str) -> None:\n import code\n code.interact(banner=banner, local=namespace)\n\n\[email protected]()\[email protected]_option(\"-h\", \"--help\")\[email protected]_context\ndef shell(ctx: click.Context):\n \"\"\"Run an interactive IPython shell with the context of the\n CKAN instance.\n\n It will try to use IPython, if not installed it will callback\n to the default Python's shell.\n \"\"\"\n\n namespace = {\n \"app\": ctx.obj.app._wsgi_app,\n \"model\": model,\n \"config\": ctx.obj.config,\n \"toolkit\": toolkit,\n }\n\n try:\n ipython(namespace, _banner)\n except ImportError:\n log.debug(\"`ipython` library is missing. Using default python shell.\")\n python(namespace, _banner)\n", "path": "ckan/cli/shell.py"}]} | 1,413 | 112 |
gh_patches_debug_2699 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-1003 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Please create an AppData file for Solaar
Please consider writing and installing an AppData file with the application description and some screenshots, else Solaar looks really bad in the GNOME and KDE Software Centers. We'd love to showcase more applications, but without the extra data file we can't. See http://people.freedesktop.org/~hughsient/appdata/ for details; thanks!
Richard
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4
5 try:
6 from setuptools import setup
7 except ImportError:
8 from distutils.core import setup
9
10 # from solaar import NAME, __version__
11 __version__ = '1.0.4'
12 NAME = 'Solaar'
13
14
15 def _data_files():
16 from os.path import dirname as _dirname
17
18 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
19 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
20 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
21
22 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
23 yield _dirname(mo), [mo]
24
25 yield 'share/applications', ['share/applications/solaar.desktop']
26 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
27
28 del _dirname
29
30
31 setup(
32 name=NAME.lower(),
33 version=__version__,
34 description='Linux devices manager for the Logitech Unifying Receiver.',
35 long_description='''
36 Solaar is a Linux device manager for Logitech's Unifying Receiver peripherals.
37 It is able to pair/unpair devices with the receiver, for many devices show
38 battery status, and show and modify some of the modifiable features of devices.
39 '''.strip(),
40 author='Daniel Pavel',
41 license='GPLv2',
42 url='http://pwr-solaar.github.io/Solaar/',
43 classifiers=[
44 'Development Status :: 4 - Beta',
45 'Environment :: X11 Applications :: GTK',
46 'Environment :: Console',
47 'Intended Audience :: End Users/Desktop',
48 'License :: DFSG approved',
49 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
50 'Natural Language :: English',
51 'Programming Language :: Python :: 3 :: Only',
52 'Operating System :: POSIX :: Linux',
53 'Topic :: Utilities',
54 ],
55 platforms=['linux'],
56
57 # sudo apt install python-gi python3-gi \
58 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
59 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
60 python_requires='>=3.6',
61 install_requires=[
62 'pyudev (>= 0.13)',
63 'PyYAML (>= 5.1)',
64 'python-xlib (>= 0.27)',
65 'pynput (>= 1.7.0)',
66 'psutil (>= 5.7.3)',
67 ],
68 package_dir={'': 'lib'},
69 packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
70 data_files=list(_data_files()),
71 scripts=_glob('bin/*'),
72 )
73
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,6 +24,7 @@
yield 'share/applications', ['share/applications/solaar.desktop']
yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
+ yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']
del _dirname
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,6 +24,7 @@\n \n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n+ yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n \n del _dirname\n", "issue": "Please create an AppData file for Solaar\nPlease consider writing and installing an AppData file with the application description and some screenshots, else Solaar looks really bad in the GNOME and KDE Software Centers. We'd love to showcase more applications, but without the extra data file we can't. See http://people.freedesktop.org/~hughsient/appdata/ for details; thanks!\n\nRichard\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'pynput (>= 1.7.0)',\n 'psutil (>= 5.7.3)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]} | 1,417 | 116 |
gh_patches_debug_1447 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-945 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't get DataLoader to work
Hello! I'm trying examples from this page https://strawberry.rocks/docs/guides/dataloaders.
Running the following code on Python 3.8:
```python
import strawberry
from strawberry.dataloader import DataLoader
from typing import List
@strawberry.type
class User:
id: strawberry.ID
async def load_users(keys) -> List[User]:
return [User(id=key) for key in keys]
loader = DataLoader(load_fn=load_users)
@strawberry.type
class Query:
@strawberry.field
async def get_user(self, id: strawberry.ID) -> User:
return await loader.load(id)
schema = strawberry.Schema(query=Query)
```
I get the following error message:
```
Task <Task pending name='Task-8' coro=<ExecutionContext.resolve_field.<locals>.await_result()
running at /Users/-/Documents/src/dataservice-poc/virtualenv/lib/python3.8/site-packages/graphql/execution/execute.py:625>
cb=[gather.<locals>._done_callback() at /usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py:758]>
got Future <Future pending> attached to a different loop
```
When I try my own code (which is pretty much the same, but the loader is real - it reads data from the db) I get this: "RuntimeError: await wasn't used with future".
I'm stuck and don't really know where to look. I thought Strawberry was supposed to manage async processing, but it looks like it doesn't work that way. Any help would be greatly appreciated.
</issue>
<code>
[start of strawberry/cli/commands/server.py]
1 import importlib
2 import sys
3
4 import click
5 import hupper
6 import uvicorn
7 from starlette.applications import Starlette
8 from starlette.middleware.cors import CORSMiddleware
9
10 from strawberry import Schema
11 from strawberry.asgi import GraphQL
12 from strawberry.utils.importer import import_module_symbol
13
14
15 @click.command("server", short_help="Starts debug server")
16 @click.argument("schema", type=str)
17 @click.option("-h", "--host", default="0.0.0.0", type=str)
18 @click.option("-p", "--port", default=8000, type=int)
19 @click.option(
20 "--app-dir",
21 default=".",
22 type=str,
23 show_default=True,
24 help=(
25 "Look for the module in the specified directory, by adding this to the "
26 "PYTHONPATH. Defaults to the current working directory. "
27 "Works the same as `--app-dir` in uvicorn."
28 ),
29 )
30 def server(schema, host, port, app_dir):
31 sys.path.insert(0, app_dir)
32
33 try:
34 schema_symbol = import_module_symbol(schema, default_symbol_name="schema")
35 except (ImportError, AttributeError) as exc:
36 message = str(exc)
37 raise click.BadArgumentUsage(message)
38
39 if not isinstance(schema_symbol, Schema):
40 message = "The `schema` must be an instance of strawberry.Schema"
41 raise click.BadArgumentUsage(message)
42
43 reloader = hupper.start_reloader("strawberry.cli.run", verbose=False)
44 schema_module = importlib.import_module(schema_symbol.__module__)
45 reloader.watch_files([schema_module.__file__])
46
47 app = Starlette(debug=True)
48 app.add_middleware(
49 CORSMiddleware, allow_headers=["*"], allow_origins=["*"], allow_methods=["*"]
50 )
51
52 graphql_app = GraphQL(schema_symbol, debug=True)
53
54 paths = ["/", "/graphql"]
55 for path in paths:
56 app.add_route(path, graphql_app)
57 app.add_websocket_route(path, graphql_app)
58
59 print(f"Running strawberry on http://{host}:{port}/ 🍓")
60 uvicorn.run(app, host=host, port=port, log_level="error")
61
[end of strawberry/cli/commands/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/cli/commands/server.py b/strawberry/cli/commands/server.py
--- a/strawberry/cli/commands/server.py
+++ b/strawberry/cli/commands/server.py
@@ -57,4 +57,4 @@
app.add_websocket_route(path, graphql_app)
print(f"Running strawberry on http://{host}:{port}/ 🍓")
- uvicorn.run(app, host=host, port=port, log_level="error")
+ uvicorn.run(app, loop="none", host=host, port=port, log_level="error")
| {"golden_diff": "diff --git a/strawberry/cli/commands/server.py b/strawberry/cli/commands/server.py\n--- a/strawberry/cli/commands/server.py\n+++ b/strawberry/cli/commands/server.py\n@@ -57,4 +57,4 @@\n app.add_websocket_route(path, graphql_app)\n \n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n- uvicorn.run(app, host=host, port=port, log_level=\"error\")\n+ uvicorn.run(app, loop=\"none\", host=host, port=port, log_level=\"error\")\n", "issue": "Can't get DataLoader to work\nHello! I'm trying examples from this page https://strawberry.rocks/docs/guides/dataloaders.\r\nRunning the following code on Python 3.8:\r\n```python\r\nimport strawberry\r\nfrom strawberry.dataloader import DataLoader\r\nfrom typing import List\r\n\r\n\r\[email protected]\r\nclass User:\r\n id: strawberry.ID\r\n\r\n\r\nasync def load_users(keys) -> List[User]:\r\n return [User(id=key) for key in keys]\r\n\r\nloader = DataLoader(load_fn=load_users)\r\n\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n async def get_user(self, id: strawberry.ID) -> User:\r\n return await loader.load(id)\r\n\r\n\r\nschema = strawberry.Schema(query=Query)\r\n```\r\nI get the following error message:\r\n```\r\nTask <Task pending name='Task-8' coro=<ExecutionContext.resolve_field.<locals>.await_result() \r\nrunning at /Users/-/Documents/src/dataservice-poc/virtualenv/lib/python3.8/site-packages/graphql/execution/execute.py:625> \r\ncb=[gather.<locals>._done_callback() at /usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py:758]> \r\ngot Future <Future pending> attached to a different loop\r\n```\r\n\r\nWhen I try my own code (which is pretty much the same, but the loader is real - it reads data from the db) I get this: \"RuntimeError: await wasn't used with future\".\r\n\r\nI'm stuck, don't really know where to look. I thought Strawberry is supposed to manage async processing, but looks like it doesn't work that way. Any help would be greatly appreciated.\n", "before_files": [{"content": "import importlib\nimport sys\n\nimport click\nimport hupper\nimport uvicorn\nfrom starlette.applications import Starlette\nfrom starlette.middleware.cors import CORSMiddleware\n\nfrom strawberry import Schema\nfrom strawberry.asgi import GraphQL\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](\"server\", short_help=\"Starts debug server\")\[email protected](\"schema\", type=str)\[email protected](\"-h\", \"--host\", default=\"0.0.0.0\", type=str)\[email protected](\"-p\", \"--port\", default=8000, type=int)\[email protected](\n \"--app-dir\",\n default=\".\",\n type=str,\n show_default=True,\n help=(\n \"Look for the module in the specified directory, by adding this to the \"\n \"PYTHONPATH. Defaults to the current working directory. 
\"\n \"Works the same as `--app-dir` in uvicorn.\"\n ),\n)\ndef server(schema, host, port, app_dir):\n sys.path.insert(0, app_dir)\n\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n\n reloader = hupper.start_reloader(\"strawberry.cli.run\", verbose=False)\n schema_module = importlib.import_module(schema_symbol.__module__)\n reloader.watch_files([schema_module.__file__])\n\n app = Starlette(debug=True)\n app.add_middleware(\n CORSMiddleware, allow_headers=[\"*\"], allow_origins=[\"*\"], allow_methods=[\"*\"]\n )\n\n graphql_app = GraphQL(schema_symbol, debug=True)\n\n paths = [\"/\", \"/graphql\"]\n for path in paths:\n app.add_route(path, graphql_app)\n app.add_websocket_route(path, graphql_app)\n\n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n uvicorn.run(app, host=host, port=port, log_level=\"error\")\n", "path": "strawberry/cli/commands/server.py"}]} | 1,493 | 134 |
gh_patches_debug_24265 | rasdani/github-patches | git_diff | encode__uvicorn-469 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add py3.8 to the test matrix
Adds py3.8 to the test matrix
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6 import sys
7 import platform
8
9 from setuptools import setup
10
11
12 def get_version(package):
13 """
14 Return package version as listed in `__version__` in `init.py`.
15 """
16 path = os.path.join(package, '__init__.py')
17 init_py = open(path, 'r', encoding='utf8').read()
18 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
19
20
21 def get_long_description():
22 """
23 Return the README.
24 """
25 return open('README.md', 'r', encoding='utf8').read()
26
27
28 def get_packages(package):
29 """
30 Return root package and all sub-packages.
31 """
32 return [dirpath
33 for dirpath, dirnames, filenames in os.walk(package)
34 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
35
36
37 env_marker = (
38 "sys_platform != 'win32'"
39 " and sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'pypy'"
41 )
42
43 requirements = [
44 "click==7.*",
45 "h11==0.8.*",
46 "websockets==8.*",
47 "httptools==0.0.13 ;" + env_marker,
48 "uvloop==0.* ;" + env_marker,
49 ]
50
51
52 setup(
53 name='uvicorn',
54 version=get_version('uvicorn'),
55 url='https://github.com/encode/uvicorn',
56 license='BSD',
57 description='The lightning-fast ASGI server.',
58 long_description=get_long_description(),
59 long_description_content_type='text/markdown',
60 author='Tom Christie',
61 author_email='[email protected]',
62 packages=get_packages('uvicorn'),
63 install_requires=requirements,
64 data_files = [("", ["LICENSE.md"])],
65 classifiers=[
66 'Development Status :: 3 - Alpha',
67 'Environment :: Web Environment',
68 'Intended Audience :: Developers',
69 'License :: OSI Approved :: BSD License',
70 'Operating System :: OS Independent',
71 'Topic :: Internet :: WWW/HTTP',
72 'Programming Language :: Python :: 3',
73 'Programming Language :: Python :: 3.6',
74 'Programming Language :: Python :: 3.7',
75 'Programming Language :: Python :: Implementation :: CPython',
76 'Programming Language :: Python :: Implementation :: PyPy',
77 ],
78 entry_points="""
79 [console_scripts]
80 uvicorn=uvicorn.main:main
81 """
82 )
83
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@
"h11==0.8.*",
"websockets==8.*",
"httptools==0.0.13 ;" + env_marker,
- "uvloop==0.* ;" + env_marker,
+ "uvloop==0.14.0rc2 ;" + env_marker,
]
@@ -63,7 +63,7 @@
install_requires=requirements,
data_files = [("", ["LICENSE.md"])],
classifiers=[
- 'Development Status :: 3 - Alpha',
+ 'Development Status :: 4 - Beta',
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
@@ -72,6 +72,7 @@
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,7 +45,7 @@\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n- \"uvloop==0.* ;\" + env_marker,\n+ \"uvloop==0.14.0rc2 ;\" + env_marker,\n ]\n \n \n@@ -63,7 +63,7 @@\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n- 'Development Status :: 3 - Alpha',\n+ 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n@@ -72,6 +72,7 @@\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n", "issue": "Add py3.8 to the test matrix\nAdds py3.8 to the test matrix\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n \"uvloop==0.* ;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}]} | 1,259 | 270 |
gh_patches_debug_47732 | rasdani/github-patches | git_diff | ray-project__ray-8617 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[rllib] PyTorch and SampleAsync validation
### What is the problem?
PyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2
It might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).
Ray Version 0.9.0dev (but this applies to any ray version actually)
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
If we cannot run your script, we cannot fix your issue.
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
</issue>
<code>
[start of rllib/agents/a3c/a3c.py]
1 import logging
2
3 from ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy
4 from ray.rllib.agents.trainer import with_common_config
5 from ray.rllib.agents.trainer_template import build_trainer
6 from ray.rllib.execution.rollout_ops import AsyncGradients
7 from ray.rllib.execution.train_ops import ApplyGradients
8 from ray.rllib.execution.metric_ops import StandardMetricsReporting
9
10 logger = logging.getLogger(__name__)
11
12 # yapf: disable
13 # __sphinx_doc_begin__
14 DEFAULT_CONFIG = with_common_config({
15 # Should use a critic as a baseline (otherwise don't use value baseline;
16 # required for using GAE).
17 "use_critic": True,
18 # If true, use the Generalized Advantage Estimator (GAE)
19 # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.
20 "use_gae": True,
21 # Size of rollout batch
22 "rollout_fragment_length": 10,
23 # GAE(gamma) parameter
24 "lambda": 1.0,
25 # Max global norm for each gradient calculated by worker
26 "grad_clip": 40.0,
27 # Learning rate
28 "lr": 0.0001,
29 # Learning rate schedule
30 "lr_schedule": None,
31 # Value Function Loss coefficient
32 "vf_loss_coeff": 0.5,
33 # Entropy coefficient
34 "entropy_coeff": 0.01,
35 # Min time per iteration
36 "min_iter_time_s": 5,
37 # Workers sample async. Note that this increases the effective
38 # rollout_fragment_length by up to 5x due to async buffering of batches.
39 "sample_async": True,
40 })
41 # __sphinx_doc_end__
42 # yapf: enable
43
44
45 def get_policy_class(config):
46 if config["use_pytorch"]:
47 from ray.rllib.agents.a3c.a3c_torch_policy import \
48 A3CTorchPolicy
49 return A3CTorchPolicy
50 else:
51 return A3CTFPolicy
52
53
54 def validate_config(config):
55 if config["entropy_coeff"] < 0:
56 raise DeprecationWarning("entropy_coeff must be >= 0")
57 if config["sample_async"] and config["use_pytorch"]:
58 config["sample_async"] = False
59 logger.warning(
60 "The sample_async option is not supported with use_pytorch: "
61 "Multithreading can be lead to crashes if used with pytorch.")
62
63
64 def execution_plan(workers, config):
65 # For A3C, compute policy gradients remotely on the rollout workers.
66 grads = AsyncGradients(workers)
67
68 # Apply the gradients as they arrive. We set update_all to False so that
69 # only the worker sending the gradient is updated with new weights.
70 train_op = grads.for_each(ApplyGradients(workers, update_all=False))
71
72 return StandardMetricsReporting(train_op, workers, config)
73
74
75 A3CTrainer = build_trainer(
76 name="A3C",
77 default_config=DEFAULT_CONFIG,
78 default_policy=A3CTFPolicy,
79 get_policy_class=get_policy_class,
80 validate_config=validate_config,
81 execution_plan=execution_plan)
82
[end of rllib/agents/a3c/a3c.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rllib/agents/a3c/a3c.py b/rllib/agents/a3c/a3c.py
--- a/rllib/agents/a3c/a3c.py
+++ b/rllib/agents/a3c/a3c.py
@@ -54,11 +54,6 @@
def validate_config(config):
if config["entropy_coeff"] < 0:
raise DeprecationWarning("entropy_coeff must be >= 0")
- if config["sample_async"] and config["use_pytorch"]:
- config["sample_async"] = False
- logger.warning(
- "The sample_async option is not supported with use_pytorch: "
- "Multithreading can be lead to crashes if used with pytorch.")
def execution_plan(workers, config):
| {"golden_diff": "diff --git a/rllib/agents/a3c/a3c.py b/rllib/agents/a3c/a3c.py\n--- a/rllib/agents/a3c/a3c.py\n+++ b/rllib/agents/a3c/a3c.py\n@@ -54,11 +54,6 @@\n def validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n- if config[\"sample_async\"] and config[\"use_pytorch\"]:\n- config[\"sample_async\"] = False\n- logger.warning(\n- \"The sample_async option is not supported with use_pytorch: \"\n- \"Multithreading can be lead to crashes if used with pytorch.\")\n \n \n def execution_plan(workers, config):\n", "issue": "[rllib] PyTorch and SampleAsync validation\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nPyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2 \r\n\r\nIt might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).\r\n\r\nRay Version 0.9.0dev (but this applies to any ray version actually)\r\n\r\n### Reproduction (REQUIRED)\r\nPlease provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):\r\n\r\nIf we cannot run your script, we cannot fix your issue.\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n[rllib] PyTorch and SampleAsync validation\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nPyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2 \r\n\r\nIt might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).\r\n\r\nRay Version 0.9.0dev (but this applies to any ray version actually)\r\n\r\n### Reproduction (REQUIRED)\r\nPlease provide a script that can be run to reproduce the issue. 
The script should have **no external library dependencies** (i.e., use fake or mock data / environments):\r\n\r\nIf we cannot run your script, we cannot fix your issue.\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "import logging\n\nfrom ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy\nfrom ray.rllib.agents.trainer import with_common_config\nfrom ray.rllib.agents.trainer_template import build_trainer\nfrom ray.rllib.execution.rollout_ops import AsyncGradients\nfrom ray.rllib.execution.train_ops import ApplyGradients\nfrom ray.rllib.execution.metric_ops import StandardMetricsReporting\n\nlogger = logging.getLogger(__name__)\n\n# yapf: disable\n# __sphinx_doc_begin__\nDEFAULT_CONFIG = with_common_config({\n # Should use a critic as a baseline (otherwise don't use value baseline;\n # required for using GAE).\n \"use_critic\": True,\n # If true, use the Generalized Advantage Estimator (GAE)\n # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.\n \"use_gae\": True,\n # Size of rollout batch\n \"rollout_fragment_length\": 10,\n # GAE(gamma) parameter\n \"lambda\": 1.0,\n # Max global norm for each gradient calculated by worker\n \"grad_clip\": 40.0,\n # Learning rate\n \"lr\": 0.0001,\n # Learning rate schedule\n \"lr_schedule\": None,\n # Value Function Loss coefficient\n \"vf_loss_coeff\": 0.5,\n # Entropy coefficient\n \"entropy_coeff\": 0.01,\n # Min time per iteration\n \"min_iter_time_s\": 5,\n # Workers sample async. Note that this increases the effective\n # rollout_fragment_length by up to 5x due to async buffering of batches.\n \"sample_async\": True,\n})\n# __sphinx_doc_end__\n# yapf: enable\n\n\ndef get_policy_class(config):\n if config[\"use_pytorch\"]:\n from ray.rllib.agents.a3c.a3c_torch_policy import \\\n A3CTorchPolicy\n return A3CTorchPolicy\n else:\n return A3CTFPolicy\n\n\ndef validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n if config[\"sample_async\"] and config[\"use_pytorch\"]:\n config[\"sample_async\"] = False\n logger.warning(\n \"The sample_async option is not supported with use_pytorch: \"\n \"Multithreading can be lead to crashes if used with pytorch.\")\n\n\ndef execution_plan(workers, config):\n # For A3C, compute policy gradients remotely on the rollout workers.\n grads = AsyncGradients(workers)\n\n # Apply the gradients as they arrive. We set update_all to False so that\n # only the worker sending the gradient is updated with new weights.\n train_op = grads.for_each(ApplyGradients(workers, update_all=False))\n\n return StandardMetricsReporting(train_op, workers, config)\n\n\nA3CTrainer = build_trainer(\n name=\"A3C\",\n default_config=DEFAULT_CONFIG,\n default_policy=A3CTFPolicy,\n get_policy_class=get_policy_class,\n validate_config=validate_config,\n execution_plan=execution_plan)\n", "path": "rllib/agents/a3c/a3c.py"}]} | 1,890 | 173 |
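With the validation removed, the two options can simply be combined. A smoke test, assuming an rllib of roughly the 0.9.0dev vintage from the report (the `use_pytorch` flag was renamed to a `framework` setting in later releases):

```python
import ray
from ray.rllib.agents.a3c import A3CTrainer

ray.init()
trainer = A3CTrainer(
    env="CartPole-v0",
    config={
        "use_pytorch": True,   # torch policy ...
        "sample_async": True,  # ... no longer forced off by validate_config
        "num_workers": 2,
    },
)
print(trainer.train()["episode_reward_mean"])
```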
gh_patches_debug_9723 | rasdani/github-patches | git_diff | dask__dask-3157 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LZ4_compress and LZ4_uncompress removed
Since commit python-lz4/python-lz4@d62fdc50c0e183d7260961f09d4e0701fbdf0c5c, LZ4_compress and LZ4_uncompress have been removed (they had been deprecated for a while). With the version of python-lz4 released on PyPI, this means we can't use lz4 compression with dask and, worse, importing dask.bytes.compression errors out.
</issue>
<code>
[start of dask/bytes/compression.py]
1 from __future__ import print_function, division, absolute_import
2
3 import bz2
4 import sys
5 import zlib
6
7 from toolz import identity
8
9 from ..compatibility import gzip_compress, gzip_decompress, GzipFile
10 from ..utils import ignoring
11
12
13 def noop_file(file, **kwargs):
14 return file
15
16
17 compress = {'gzip': gzip_compress,
18 'zlib': zlib.compress,
19 'bz2': bz2.compress,
20 None: identity}
21 decompress = {'gzip': gzip_decompress,
22 'zlib': zlib.decompress,
23 'bz2': bz2.decompress,
24 None: identity}
25 files = {'gzip': lambda f, **kwargs: GzipFile(fileobj=f, **kwargs),
26 None: noop_file}
27 seekable_files = {None: noop_file}
28
29
30 with ignoring(ImportError):
31 import snappy
32 compress['snappy'] = snappy.compress
33 decompress['snappy'] = snappy.decompress
34
35
36 with ignoring(ImportError):
37 import lz4
38 compress['lz4'] = lz4.LZ4_compress
39 decompress['lz4'] = lz4.LZ4_uncompress
40
41 with ignoring(ImportError):
42 from ..compatibility import LZMAFile, lzma_compress, lzma_decompress
43 compress['xz'] = lzma_compress
44 decompress['xz'] = lzma_decompress
45 files['xz'] = LZMAFile
46
47 # Seekable xz files actually tend to scan whole file - see `get_xz_blocks`
48 # with ignoring(ImportError):
49 # import lzma
50 # seekable_files['xz'] = lzma.LZMAFile
51 #
52 # with ignoring(ImportError):
53 # import lzmaffi
54 # seekable_files['xz'] = lzmaffi.LZMAFile
55
56
57 if sys.version_info[0] >= 3:
58 import bz2
59 files['bz2'] = bz2.BZ2File
60
61
62 def get_xz_blocks(fp):
63 from lzmaffi import (STREAM_HEADER_SIZE, decode_stream_footer,
64 decode_index, LZMAError)
65 fp.seek(0, 2)
66
67 def _peek(f, size):
68 data = f.read(size)
69 f.seek(-size, 1)
70 return data
71
72 if fp.tell() < 2 * STREAM_HEADER_SIZE:
73 raise LZMAError("file too small")
74
75 # read stream paddings (4 bytes each)
76 fp.seek(-4, 1)
77 padding = 0
78 while _peek(fp, 4) == b'\x00\x00\x00\x00':
79 fp.seek(-4, 1)
80 padding += 4
81
82 fp.seek(-STREAM_HEADER_SIZE + 4, 1)
83
84 stream_flags = decode_stream_footer(_peek(fp, STREAM_HEADER_SIZE))
85 fp.seek(-stream_flags.backward_size, 1)
86
87 index = decode_index(_peek(fp, stream_flags.backward_size), padding)
88 return {'offsets': [b.compressed_file_offset for i, b in index],
89 'lengths': [b.unpadded_size for i, b in index],
90 'check': stream_flags.check}
91
92
93 def xz_decompress(data, check):
94 from lzmaffi import decode_block_header_size, LZMADecompressor, FORMAT_BLOCK
95 hsize = decode_block_header_size(data[:1])
96 header = data[:hsize]
97 dc = LZMADecompressor(format=FORMAT_BLOCK, header=header,
98 unpadded_size=len(data), check=check)
99 return dc.decompress(data[len(header):])
100
[end of dask/bytes/compression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dask/bytes/compression.py b/dask/bytes/compression.py
--- a/dask/bytes/compression.py
+++ b/dask/bytes/compression.py
@@ -33,10 +33,17 @@
decompress['snappy'] = snappy.decompress
-with ignoring(ImportError):
- import lz4
- compress['lz4'] = lz4.LZ4_compress
- decompress['lz4'] = lz4.LZ4_uncompress
+try:
+ import lz4.block
+ compress['lz4'] = lz4.block.compress
+ compress['lz4'] = lz4.block.decompress
+except ImportError:
+ try:
+ import lz4
+ compress['lz4'] = lz4.LZ4_compress
+ compress['lz4'] = lz4.LZ4_uncompress
+ except ImportError:
+ pass
with ignoring(ImportError):
from ..compatibility import LZMAFile, lzma_compress, lzma_decompress
| {"golden_diff": "diff --git a/dask/bytes/compression.py b/dask/bytes/compression.py\n--- a/dask/bytes/compression.py\n+++ b/dask/bytes/compression.py\n@@ -33,10 +33,17 @@\n decompress['snappy'] = snappy.decompress\n \n \n-with ignoring(ImportError):\n- import lz4\n- compress['lz4'] = lz4.LZ4_compress\n- decompress['lz4'] = lz4.LZ4_uncompress\n+try:\n+ import lz4.block\n+ compress['lz4'] = lz4.block.compress\n+ compress['lz4'] = lz4.block.decompress\n+except ImportError:\n+ try:\n+ import lz4\n+ compress['lz4'] = lz4.LZ4_compress\n+ compress['lz4'] = lz4.LZ4_uncompress\n+ except ImportError:\n+ pass\n \n with ignoring(ImportError):\n from ..compatibility import LZMAFile, lzma_compress, lzma_decompress\n", "issue": "LZ4_compress and LZ4_uncompress removed\nSince commit python-lz4/python-lz4@d62fdc50c0e183d7260961f09d4e0701fbdf0c5c LZ4_compress and LZ4_decompress have been removed (they've been deprecated for a while). With the version of python-lz4 released on pypi, it means we can't use lz4 compression with dask, and worse importing dask.bytes.compression errors out.\r\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nimport bz2\nimport sys\nimport zlib\n\nfrom toolz import identity\n\nfrom ..compatibility import gzip_compress, gzip_decompress, GzipFile\nfrom ..utils import ignoring\n\n\ndef noop_file(file, **kwargs):\n return file\n\n\ncompress = {'gzip': gzip_compress,\n 'zlib': zlib.compress,\n 'bz2': bz2.compress,\n None: identity}\ndecompress = {'gzip': gzip_decompress,\n 'zlib': zlib.decompress,\n 'bz2': bz2.decompress,\n None: identity}\nfiles = {'gzip': lambda f, **kwargs: GzipFile(fileobj=f, **kwargs),\n None: noop_file}\nseekable_files = {None: noop_file}\n\n\nwith ignoring(ImportError):\n import snappy\n compress['snappy'] = snappy.compress\n decompress['snappy'] = snappy.decompress\n\n\nwith ignoring(ImportError):\n import lz4\n compress['lz4'] = lz4.LZ4_compress\n decompress['lz4'] = lz4.LZ4_uncompress\n\nwith ignoring(ImportError):\n from ..compatibility import LZMAFile, lzma_compress, lzma_decompress\n compress['xz'] = lzma_compress\n decompress['xz'] = lzma_decompress\n files['xz'] = LZMAFile\n\n# Seekable xz files actually tend to scan whole file - see `get_xz_blocks`\n# with ignoring(ImportError):\n# import lzma\n# seekable_files['xz'] = lzma.LZMAFile\n#\n# with ignoring(ImportError):\n# import lzmaffi\n# seekable_files['xz'] = lzmaffi.LZMAFile\n\n\nif sys.version_info[0] >= 3:\n import bz2\n files['bz2'] = bz2.BZ2File\n\n\ndef get_xz_blocks(fp):\n from lzmaffi import (STREAM_HEADER_SIZE, decode_stream_footer,\n decode_index, LZMAError)\n fp.seek(0, 2)\n\n def _peek(f, size):\n data = f.read(size)\n f.seek(-size, 1)\n return data\n\n if fp.tell() < 2 * STREAM_HEADER_SIZE:\n raise LZMAError(\"file too small\")\n\n # read stream paddings (4 bytes each)\n fp.seek(-4, 1)\n padding = 0\n while _peek(fp, 4) == b'\\x00\\x00\\x00\\x00':\n fp.seek(-4, 1)\n padding += 4\n\n fp.seek(-STREAM_HEADER_SIZE + 4, 1)\n\n stream_flags = decode_stream_footer(_peek(fp, STREAM_HEADER_SIZE))\n fp.seek(-stream_flags.backward_size, 1)\n\n index = decode_index(_peek(fp, stream_flags.backward_size), padding)\n return {'offsets': [b.compressed_file_offset for i, b in index],\n 'lengths': [b.unpadded_size for i, b in index],\n 'check': stream_flags.check}\n\n\ndef xz_decompress(data, check):\n from lzmaffi import decode_block_header_size, LZMADecompressor, FORMAT_BLOCK\n hsize = decode_block_header_size(data[:1])\n header = data[:hsize]\n dc = 
LZMADecompressor(format=FORMAT_BLOCK, header=header,\n unpadded_size=len(data), check=check)\n return dc.decompress(data[len(header):])\n", "path": "dask/bytes/compression.py"}]} | 1,626 | 229 |
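The applied diff prefers the `lz4.block` namespace that replaced the removed top-level functions and falls back to the legacy names for older installs. One slip is worth flagging: in both branches the second assignment targets `compress['lz4']` again, so `decompress['lz4']` is never registered. A corrected sketch of what the mapping presumably intends:

```python
try:
    import lz4.block  # newer python-lz4, where block compression lives
    compress['lz4'] = lz4.block.compress
    decompress['lz4'] = lz4.block.decompress
except ImportError:
    try:
        import lz4  # legacy python-lz4 exposing the deprecated top-level API
        compress['lz4'] = lz4.LZ4_compress
        decompress['lz4'] = lz4.LZ4_uncompress
    except ImportError:
        pass
```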
gh_patches_debug_2325 | rasdani/github-patches | git_diff | chainer__chainer-256 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`concat` with the last axis fails on py3
Same problem in `concat` as #253
@ShigekiKarita reported this problem too. Thanks!
https://gist.github.com/ShigekiKarita/4293f886765a1ed4a144
</issue>
<code>
[start of chainer/functions/concat.py]
1 import numpy
2
3 from chainer import cuda
4 from chainer import function
5 from chainer.utils import type_check
6
7 _args = 'const float* x, float* y, int cdimx, int cdimy, int rdim, int coffset'
8 _preamble = '''
9 #define COPY(statement) \
10 int l = i / (rdim * cdimx); \
11 int c = i / rdim % cdimx + coffset; \
12 int r = i % rdim; \
13 int idx = r + rdim * (c + cdimy * l); \
14 statement;
15 '''
16
17
18 class Concat(function.Function):
19
20 """Concatenate multiple tensors towards specified axis."""
21
22 # concat along the channel dimension by default
23 def __init__(self, axis=1):
24 self.axis = axis
25
26 def check_type_forward(self, in_types):
27 type_check.expect(in_types.size() > 0)
28 type_check.expect(in_types[0].ndim >
29 type_check.Variable(self.axis, 'axis'))
30
31 ndim = in_types[0].ndim.eval()
32 for i in range(1, in_types.size().eval()):
33 type_check.expect(
34 in_types[0].dtype == in_types[i].dtype,
35 in_types[0].ndim == in_types[i].ndim,
36 )
37 for d in range(0, ndim):
38 if d == self.axis:
39 continue
40 type_check.expect(in_types[0].shape[d] == in_types[i].shape[d])
41
42 def check_type_backward(self, in_types, out_types):
43 type_check.expect(
44 in_types.size() > 0,
45 out_types.size() == 1,
46 )
47 y_type, = out_types
48
49 type_check.expect(y_type.dtype == in_types[0].dtype)
50 ndim = in_types[0].ndim.eval()
51 concat_size = sum(typ.shape[self.axis] for typ in in_types)
52 type_check.expect(concat_size == y_type.shape[self.axis])
53
54 for d in range(0, ndim):
55 if d == self.axis:
56 continue
57 type_check.expect(y_type.shape[d] == in_types[0].shape[d])
58
59 def forward_cpu(self, xs):
60 return numpy.concatenate(xs, axis=self.axis),
61
62 def forward_gpu(self, xs):
63 # TODO(beam2d): Unify the process into a single kernel.
64 shape = list(xs[0].shape)
65 for x in xs[1:]:
66 shape[self.axis] += x.shape[self.axis]
67 self.shape = shape
68
69 y = cuda.empty(shape, dtype=xs[0].dtype)
70 self.cdimy = y.shape[self.axis]
71 self.rdim = numpy.prod(shape[self.axis + 1:])
72
73 coffset = 0
74 kernel = cuda.elementwise(
75 _args, 'COPY(y[idx] = x[i])', 'concat_fwd', preamble=_preamble)
76 for x in xs:
77 cdimx = x.shape[self.axis]
78 kernel(x, y, cdimx, self.cdimy, self.rdim, coffset)
79 coffset += cdimx
80
81 return y,
82
83 def backward_cpu(self, xs, gy):
84 sizes = numpy.array([x.shape[self.axis] for x in xs[:-1]]).cumsum()
85 return numpy.split(gy[0], sizes, axis=self.axis)
86
87 def backward_gpu(self, xs, gy):
88 gxs = tuple(cuda.empty_like(x) for x in xs)
89
90 coffset = 0
91 kernel = cuda.elementwise(
92 _args, 'COPY(x[i] = y[idx])', 'concat_bwd', preamble=_preamble)
93 for gx in gxs:
94 cdimx = gx.shape[self.axis]
95 kernel(gx, gy[0], cdimx, self.cdimy, self.rdim, coffset)
96 coffset += cdimx
97
98 return gxs
99
100
101 def concat(xs, axis=1):
102 """Concatenates given variables along an axis.
103
104 Args:
105 xs (tuple of Variables): Variables to be concatenated.
106 axis (int): Axis that the input arrays are concatenated along.
107
108 Returns:
109 ~chainer.Variable: Output variable.
110
111 """
112 return Concat(axis=axis)(*xs)
113
[end of chainer/functions/concat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/concat.py b/chainer/functions/concat.py
--- a/chainer/functions/concat.py
+++ b/chainer/functions/concat.py
@@ -68,7 +68,7 @@
y = cuda.empty(shape, dtype=xs[0].dtype)
self.cdimy = y.shape[self.axis]
- self.rdim = numpy.prod(shape[self.axis + 1:])
+ self.rdim = numpy.prod(shape[self.axis + 1:], dtype=int)
coffset = 0
kernel = cuda.elementwise(
| {"golden_diff": "diff --git a/chainer/functions/concat.py b/chainer/functions/concat.py\n--- a/chainer/functions/concat.py\n+++ b/chainer/functions/concat.py\n@@ -68,7 +68,7 @@\n \n y = cuda.empty(shape, dtype=xs[0].dtype)\n self.cdimy = y.shape[self.axis]\n- self.rdim = numpy.prod(shape[self.axis + 1:])\n+ self.rdim = numpy.prod(shape[self.axis + 1:], dtype=int)\n \n coffset = 0\n kernel = cuda.elementwise(\n", "issue": "`concat` with the last axis fails on py3\nSame problem in `concat` as #253 \n\n@ShigekiKarita reported this problem too. Thanks!\nhttps://gist.github.com/ShigekiKarita/4293f886765a1ed4a144\n\n", "before_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n_args = 'const float* x, float* y, int cdimx, int cdimy, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimx); \\\n int c = i / rdim % cdimx + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimy * l); \\\n statement;\n'''\n\n\nclass Concat(function.Function):\n\n \"\"\"Concatenate multiple tensors towards specified axis.\"\"\"\n\n # concat along the channel dimension by default\n def __init__(self, axis=1):\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() > 0)\n type_check.expect(in_types[0].ndim >\n type_check.Variable(self.axis, 'axis'))\n\n ndim = in_types[0].ndim.eval()\n for i in range(1, in_types.size().eval()):\n type_check.expect(\n in_types[0].dtype == in_types[i].dtype,\n in_types[0].ndim == in_types[i].ndim,\n )\n for d in range(0, ndim):\n if d == self.axis:\n continue\n type_check.expect(in_types[0].shape[d] == in_types[i].shape[d])\n\n def check_type_backward(self, in_types, out_types):\n type_check.expect(\n in_types.size() > 0,\n out_types.size() == 1,\n )\n y_type, = out_types\n\n type_check.expect(y_type.dtype == in_types[0].dtype)\n ndim = in_types[0].ndim.eval()\n concat_size = sum(typ.shape[self.axis] for typ in in_types)\n type_check.expect(concat_size == y_type.shape[self.axis])\n\n for d in range(0, ndim):\n if d == self.axis:\n continue\n type_check.expect(y_type.shape[d] == in_types[0].shape[d])\n\n def forward_cpu(self, xs):\n return numpy.concatenate(xs, axis=self.axis),\n\n def forward_gpu(self, xs):\n # TODO(beam2d): Unify the process into a single kernel.\n shape = list(xs[0].shape)\n for x in xs[1:]:\n shape[self.axis] += x.shape[self.axis]\n self.shape = shape\n\n y = cuda.empty(shape, dtype=xs[0].dtype)\n self.cdimy = y.shape[self.axis]\n self.rdim = numpy.prod(shape[self.axis + 1:])\n\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(y[idx] = x[i])', 'concat_fwd', preamble=_preamble)\n for x in xs:\n cdimx = x.shape[self.axis]\n kernel(x, y, cdimx, self.cdimy, self.rdim, coffset)\n coffset += cdimx\n\n return y,\n\n def backward_cpu(self, xs, gy):\n sizes = numpy.array([x.shape[self.axis] for x in xs[:-1]]).cumsum()\n return numpy.split(gy[0], sizes, axis=self.axis)\n\n def backward_gpu(self, xs, gy):\n gxs = tuple(cuda.empty_like(x) for x in xs)\n\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[i] = y[idx])', 'concat_bwd', preamble=_preamble)\n for gx in gxs:\n cdimx = gx.shape[self.axis]\n kernel(gx, gy[0], cdimx, self.cdimy, self.rdim, coffset)\n coffset += cdimx\n\n return gxs\n\n\ndef concat(xs, axis=1):\n \"\"\"Concatenates given variables along an axis.\n\n Args:\n xs (tuple of Variables): Variables to be concatenated.\n axis (int): Axis that the input arrays are 
concatenated along.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n \"\"\"\n return Concat(axis=axis)(*xs)\n", "path": "chainer/functions/concat.py"}]} | 1,773 | 124 |
gh_patches_debug_12504 | rasdani/github-patches | git_diff | WeblateOrg__weblate-9990 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docker: Enable WEBLATE_GITLAB_CREDENTIALS environment variable
### Describe the problem
Right now it seems I can use the gitlab_username and gitlab_token variables, but when I try to use gitlab_credentials:
> WEBLATE_GITLAB_CREDENTIALS: "git.duniter.org": {username: weblate,token: XXXXXXXXXXXXXXX}
I get this error:
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
> in "./docker-compose.override.yml", line 17, column 52
### Describe the solution you'd like
Add weblate_gitlab_credentials support
### Describe alternatives you've considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
</issue>
<code>
[start of weblate/utils/environment.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from __future__ import annotations
6
7 import os
8
9
10 def get_env_str(
11 name: str,
12 default: str | None = None,
13 required: bool = False,
14 fallback_name: str | None = None,
15 ) -> str:
16 file_env = f"{name}_FILE"
17 if filename := os.environ.get(file_env):
18 try:
19 with open(filename) as handle:
20 result = handle.read()
21 except OSError as error:
22 raise ValueError(
23 f"Failed to open {filename} as specified by {file_env}: {error}"
24 ) from error
25 else:
26 if fallback_name and name not in os.environ:
27 name = fallback_name
28 result = os.environ.get(name, default)
29 if required and not result:
30 raise ValueError(f"{name} has to be configured!")
31 return result
32
33
34 def get_env_list(name: str, default: list[str] | None = None) -> list[str]:
35 """Helper to get list from environment."""
36 if name not in os.environ:
37 return default or []
38 return os.environ[name].split(",")
39
40
41 def get_env_map(name: str, default: dict[str, str] | None = None) -> dict[str, str]:
42 """
43 Helper to get mapping from environment.
44
45 parses 'full_name:name,email:mail' into {'email': 'mail', 'full_name': 'name'}
46 """
47 if os.environ.get(name):
48 return dict(e.split(":") for e in os.environ[name].split(","))
49 return default or {}
50
51
52 def get_env_int(name: str, default: int = 0) -> int:
53 """Helper to get integer value from environment."""
54 if name not in os.environ:
55 return default
56 try:
57 return int(os.environ[name])
58 except ValueError as error:
59 raise ValueError(f"{name} is not an integer: {error}") from error
60
61
62 def get_env_float(name: str, default: float = 0.0) -> float:
63 """Helper to get float value from environment."""
64 if name not in os.environ:
65 return default
66 try:
67 return float(os.environ[name])
68 except ValueError as error:
69 raise ValueError(f"{name} is not an float: {error}") from error
70
71
72 def get_env_bool(name: str, default: bool = False) -> bool:
73 """Helper to get boolean value from environment."""
74 if name not in os.environ:
75 return default
76 true_values = {"true", "yes", "1"}
77 return os.environ[name].lower() in true_values
78
79
80 def modify_env_list(current: list[str], name: str) -> list[str]:
81 """Helper to modify list (for example checks)."""
82 for item in reversed(get_env_list(f"WEBLATE_ADD_{name}")):
83 current.insert(0, item)
84 for item in get_env_list(f"WEBLATE_REMOVE_{name}"):
85 current.remove(item)
86 return current
87
88
89 def get_env_credentials(
90 name: str,
91 ) -> dict[str, dict[str, str]]:
92 """Parses VCS integration credentials."""
93 username = os.environ.get(f"WEBLATE_{name}_USERNAME")
94 token = os.environ.get(f"WEBLATE_{name}_TOKEN")
95 host = os.environ.get(f"WEBLATE_{name}_HOST")
96
97 if not host and (username or token):
98 raise ValueError(
99 f"Incomplete {name}_CREDENTIALS configuration: missing WEBLATE_{name}_HOST"
100 )
101 return {host: {"username": username, "token": token}}
102
103
104 def get_env_ratelimit(name: str, default: str) -> str:
105 value = os.environ.get(name, default)
106
107 # Taken from rest_framework.throttling.SimpleRateThrottle.parse_rate
108 # it can not be imported here as that breaks config loading for
109 # rest_framework
110
111 try:
112 num, period = value.split("/")
113 except ValueError as error:
114 raise ValueError(f"Could not parse {name}: {error}") from error
115 if not num.isdigit():
116 raise ValueError(f"Could not parse {name}: rate is not numeric: {num}")
117 if period[0] not in ("s", "m", "h", "d"):
118 raise ValueError(f"Could not parse {name}: unknown period: {period}")
119
120 return value
121
[end of weblate/utils/environment.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/weblate/utils/environment.py b/weblate/utils/environment.py
--- a/weblate/utils/environment.py
+++ b/weblate/utils/environment.py
@@ -4,6 +4,7 @@
from __future__ import annotations
+import ast
import os
@@ -90,6 +91,8 @@
name: str,
) -> dict[str, dict[str, str]]:
"""Parses VCS integration credentials."""
+ if credentials := get_env_str(f"WEBLATE_{name}_CREDENTIALS"):
+ return ast.literal_eval(credentials)
username = os.environ.get(f"WEBLATE_{name}_USERNAME")
token = os.environ.get(f"WEBLATE_{name}_TOKEN")
host = os.environ.get(f"WEBLATE_{name}_HOST")
| {"golden_diff": "diff --git a/weblate/utils/environment.py b/weblate/utils/environment.py\n--- a/weblate/utils/environment.py\n+++ b/weblate/utils/environment.py\n@@ -4,6 +4,7 @@\n \n from __future__ import annotations\n \n+import ast\n import os\n \n \n@@ -90,6 +91,8 @@\n name: str,\n ) -> dict[str, dict[str, str]]:\n \"\"\"Parses VCS integration credentials.\"\"\"\n+ if credentials := get_env_str(f\"WEBLATE_{name}_CREDENTIALS\"):\n+ return ast.literal_eval(credentials)\n username = os.environ.get(f\"WEBLATE_{name}_USERNAME\")\n token = os.environ.get(f\"WEBLATE_{name}_TOKEN\")\n host = os.environ.get(f\"WEBLATE_{name}_HOST\")\n", "issue": "docker: Enable WEBLATE_GITLAB_CREDENTIALS environment variable\n### Describe the problem\n\nRight now it seems I can use gitlab_username and gitlab_token variables. But when I try to use gitlab_credentials:\r\n\r\n> WEBLATE_GITLAB_CREDENTIALS: \"git.duniter.org\": {username: weblate,token: XXXXXXXXXXXXXXX}\r\n\r\nI get this error:\r\n\r\n> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here\r\n> in \"./docker-compose.override.yml\", line 17, column 52\r\n\n\n### Describe the solution you'd like\n\nAdd weblate_gitlab_credentials support\n\n### Describe alternatives you've considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import annotations\n\nimport os\n\n\ndef get_env_str(\n name: str,\n default: str | None = None,\n required: bool = False,\n fallback_name: str | None = None,\n) -> str:\n file_env = f\"{name}_FILE\"\n if filename := os.environ.get(file_env):\n try:\n with open(filename) as handle:\n result = handle.read()\n except OSError as error:\n raise ValueError(\n f\"Failed to open {filename} as specified by {file_env}: {error}\"\n ) from error\n else:\n if fallback_name and name not in os.environ:\n name = fallback_name\n result = os.environ.get(name, default)\n if required and not result:\n raise ValueError(f\"{name} has to be configured!\")\n return result\n\n\ndef get_env_list(name: str, default: list[str] | None = None) -> list[str]:\n \"\"\"Helper to get list from environment.\"\"\"\n if name not in os.environ:\n return default or []\n return os.environ[name].split(\",\")\n\n\ndef get_env_map(name: str, default: dict[str, str] | None = None) -> dict[str, str]:\n \"\"\"\n Helper to get mapping from environment.\n\n parses 'full_name:name,email:mail' into {'email': 'mail', 'full_name': 'name'}\n \"\"\"\n if os.environ.get(name):\n return dict(e.split(\":\") for e in os.environ[name].split(\",\"))\n return default or {}\n\n\ndef get_env_int(name: str, default: int = 0) -> int:\n \"\"\"Helper to get integer value from environment.\"\"\"\n if name not in os.environ:\n return default\n try:\n return int(os.environ[name])\n except ValueError as error:\n raise ValueError(f\"{name} is not an integer: {error}\") from error\n\n\ndef get_env_float(name: str, default: float = 0.0) -> float:\n \"\"\"Helper to get float value from environment.\"\"\"\n if name not in os.environ:\n return default\n try:\n return float(os.environ[name])\n except ValueError as error:\n raise ValueError(f\"{name} is not an float: {error}\") from error\n\n\ndef get_env_bool(name: str, default: bool = False) -> bool:\n \"\"\"Helper to get boolean value from environment.\"\"\"\n if name not in os.environ:\n return default\n true_values = 
{\"true\", \"yes\", \"1\"}\n return os.environ[name].lower() in true_values\n\n\ndef modify_env_list(current: list[str], name: str) -> list[str]:\n \"\"\"Helper to modify list (for example checks).\"\"\"\n for item in reversed(get_env_list(f\"WEBLATE_ADD_{name}\")):\n current.insert(0, item)\n for item in get_env_list(f\"WEBLATE_REMOVE_{name}\"):\n current.remove(item)\n return current\n\n\ndef get_env_credentials(\n name: str,\n) -> dict[str, dict[str, str]]:\n \"\"\"Parses VCS integration credentials.\"\"\"\n username = os.environ.get(f\"WEBLATE_{name}_USERNAME\")\n token = os.environ.get(f\"WEBLATE_{name}_TOKEN\")\n host = os.environ.get(f\"WEBLATE_{name}_HOST\")\n\n if not host and (username or token):\n raise ValueError(\n f\"Incomplete {name}_CREDENTIALS configuration: missing WEBLATE_{name}_HOST\"\n )\n return {host: {\"username\": username, \"token\": token}}\n\n\ndef get_env_ratelimit(name: str, default: str) -> str:\n value = os.environ.get(name, default)\n\n # Taken from rest_framework.throttling.SimpleRateThrottle.parse_rate\n # it can not be imported here as that breaks config loading for\n # rest_framework\n\n try:\n num, period = value.split(\"/\")\n except ValueError as error:\n raise ValueError(f\"Could not parse {name}: {error}\") from error\n if not num.isdigit():\n raise ValueError(f\"Could not parse {name}: rate is not numeric: {num}\")\n if period[0] not in (\"s\", \"m\", \"h\", \"d\"):\n raise ValueError(f\"Could not parse {name}: unknown period: {period}\")\n\n return value\n", "path": "weblate/utils/environment.py"}]} | 1,919 | 176 |
gh_patches_debug_11791 | rasdani/github-patches | git_diff | streamlink__streamlink-1027 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ine.py for source in data["playlist"][0]["sources"]: TypeError: 'NoneType' object is not subscriptable
Hi, the INE plugin has recently started failing:
```
$ streamlink -o ./streamlink.mp4 https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction 720p --http-cookie laravel_session=removed
[cli][info] Found matching plugin ine for URL https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/Current/bin/streamlink", line 11, in <module>
load_entry_point('streamlink==0.6.0', 'console_scripts', 'streamlink')()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py", line 1027, in main
handle_url()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py", line 482, in handle_url
streams = fetch_streams(plugin)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py", line 394, in fetch_streams
sorting_excludes=args.stream_sorting_excludes)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py", line 328, in get_streams
return self.streams(*args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py", line 236, in streams
ostreams = self._get_streams()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugins/ine.py", line 50, in _get_streams
for source in data["playlist"][0]["sources"]:
TypeError: 'NoneType' object is not subscriptable
$
$ python --version
Python 3.5.3
$ streamlink --version
streamlink 0.6.0
$ streamlink --version-check
[cli][info] Your Streamlink version (0.6) is up to date!
$
```
Same error on macOS and Windows.
This particular URL was 'downloadable' with no problem about a month ago or so.
</issue>
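The `TypeError` means `setup_schema` fell through to the `None` branch of `validate.any`: the downloaded player JavaScript no longer contains a `jwplayer(...).setup({...})` call, so `data` is `None` before the subscript; JW Player embeds of this period instead assign a plain `jwConfig = {...}` object. A quick check against the player JS, with a hypothetical player id standing in for the one scraped from the watch page:

```python
import re

import requests

# Placeholder id; the real URL is scraped from the /watch page by js_re.
js = requests.get("https://content.jwplatform.com/players/XXXXXXXX.js").text

old = re.search(r'''jwplayer\(".*?"\).setup\((\{.*\})\);''', js, re.DOTALL)
new = re.search(r'''jwConfig\s*=\s*(\{.*\});''', js, re.DOTALL)
print(old is None, new is not None)  # True True: the setup() call is gone
```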
<code>
[start of src/streamlink/plugins/ine.py]
1 from __future__ import print_function
2
3 import json
4 import re
5
6 from streamlink.plugin import Plugin
7 from streamlink.plugin.api import http
8 from streamlink.plugin.api import validate
9 from streamlink.stream import HLSStream
10
11
12 class INE(Plugin):
13 url_re = re.compile(r"""https://streaming.ine.com/play\#?/
14 ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?
15 (.*?)""", re.VERBOSE)
16 play_url = "https://streaming.ine.com/play/{vid}/watch"
17 js_re = re.compile(r'''script type="text/javascript" src="(https://content.jwplatform.com/players/.*?)"''')
18 jwplayer_re = re.compile(r'''jwplayer\(".*?"\).setup\((\{.*\})\);''', re.DOTALL)
19 setup_schema = validate.Schema(
20 validate.transform(jwplayer_re.search),
21 validate.any(
22 None,
23 validate.all(
24 validate.get(1),
25 validate.transform(json.loads),
26 {"playlist": [
27 {"sources": [{"file": validate.text,
28 "type": validate.text}]}
29 ]}
30 )
31 )
32 )
33
34 @classmethod
35 def can_handle_url(cls, url):
36 return cls.url_re.match(url) is not None
37
38 def _get_streams(self):
39 vid = self.url_re.match(self.url).group(1)
40 self.logger.debug("Found video ID: {0}", vid)
41
42 page = http.get(self.play_url.format(vid=vid))
43 js_url_m = self.js_re.search(page.text)
44 if js_url_m:
45 js_url = js_url_m.group(1)
46 self.logger.debug("Loading player JS: {0}", js_url)
47
48 res = http.get(js_url)
49 data = self.setup_schema.validate(res.text)
50 for source in data["playlist"][0]["sources"]:
51 if source["type"] == "hls":
52 return HLSStream.parse_variant_playlist(self.session, "https:" + source["file"])
53
54
55 __plugin__ = INE
56
[end of src/streamlink/plugins/ine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/ine.py b/src/streamlink/plugins/ine.py
--- a/src/streamlink/plugins/ine.py
+++ b/src/streamlink/plugins/ine.py
@@ -15,7 +15,7 @@
(.*?)""", re.VERBOSE)
play_url = "https://streaming.ine.com/play/{vid}/watch"
js_re = re.compile(r'''script type="text/javascript" src="(https://content.jwplatform.com/players/.*?)"''')
- jwplayer_re = re.compile(r'''jwplayer\(".*?"\).setup\((\{.*\})\);''', re.DOTALL)
+ jwplayer_re = re.compile(r'''jwConfig\s*=\s*(\{.*\});''', re.DOTALL)
setup_schema = validate.Schema(
validate.transform(jwplayer_re.search),
validate.any(
| {"golden_diff": "diff --git a/src/streamlink/plugins/ine.py b/src/streamlink/plugins/ine.py\n--- a/src/streamlink/plugins/ine.py\n+++ b/src/streamlink/plugins/ine.py\n@@ -15,7 +15,7 @@\n (.*?)\"\"\", re.VERBOSE)\n play_url = \"https://streaming.ine.com/play/{vid}/watch\"\n js_re = re.compile(r'''script type=\"text/javascript\" src=\"(https://content.jwplatform.com/players/.*?)\"''')\n- jwplayer_re = re.compile(r'''jwplayer\\(\".*?\"\\).setup\\((\\{.*\\})\\);''', re.DOTALL)\n+ jwplayer_re = re.compile(r'''jwConfig\\s*=\\s*(\\{.*\\});''', re.DOTALL)\n setup_schema = validate.Schema(\n validate.transform(jwplayer_re.search),\n validate.any(\n", "issue": "ine.py for source in data[\"playlist\"][0][\"sources\"]: TypeError: 'NoneType' object is not subscriptable\nHi, INE plugin is failing since recently:\r\n\r\n```\r\n$ streamlink -o ./streamlink.mp4 https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction 720p --http-cookie laravel_session=removed\r\n[cli][info] Found matching plugin ine for URL https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction\r\nTraceback (most recent call last):\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/Current/bin/streamlink\", line 11, in <module>\r\n load_entry_point('streamlink==0.6.0', 'console_scripts', 'streamlink')()\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py\", line 1027, in main\r\n handle_url()\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py\", line 482, in handle_url\r\n streams = fetch_streams(plugin)\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py\", line 394, in fetch_streams\r\n sorting_excludes=args.stream_sorting_excludes)\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py\", line 328, in get_streams\r\n return self.streams(*args, **kwargs)\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py\", line 236, in streams\r\n ostreams = self._get_streams()\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugins/ine.py\", line 50, in _get_streams\r\n for source in data[\"playlist\"][0][\"sources\"]:\r\nTypeError: 'NoneType' object is not subscriptable\r\n$ \r\n$ python --version\r\nPython 3.5.3\r\n$ streamlink --version\r\nstreamlink 0.6.0\r\n$ streamlink --version-check\r\n[cli][info] Your Streamlink version (0.6) is up to date!\r\n$\r\n```\r\nSame error on mac OS and Windows.\r\nThis particular URL was 'downloadable' with no problem about a month ago or so.\n", "before_files": [{"content": "from __future__ import print_function\n\nimport json\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\n\n\nclass INE(Plugin):\n url_re = re.compile(r\"\"\"https://streaming.ine.com/play\\#?/\n ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?\n (.*?)\"\"\", re.VERBOSE)\n play_url = \"https://streaming.ine.com/play/{vid}/watch\"\n js_re = re.compile(r'''script type=\"text/javascript\" src=\"(https://content.jwplatform.com/players/.*?)\"''')\n jwplayer_re = re.compile(r'''jwplayer\\(\".*?\"\\).setup\\((\\{.*\\})\\);''', 
re.DOTALL)\n setup_schema = validate.Schema(\n validate.transform(jwplayer_re.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.transform(json.loads),\n {\"playlist\": [\n {\"sources\": [{\"file\": validate.text,\n \"type\": validate.text}]}\n ]}\n )\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_streams(self):\n vid = self.url_re.match(self.url).group(1)\n self.logger.debug(\"Found video ID: {0}\", vid)\n\n page = http.get(self.play_url.format(vid=vid))\n js_url_m = self.js_re.search(page.text)\n if js_url_m:\n js_url = js_url_m.group(1)\n self.logger.debug(\"Loading player JS: {0}\", js_url)\n\n res = http.get(js_url)\n data = self.setup_schema.validate(res.text)\n for source in data[\"playlist\"][0][\"sources\"]:\n if source[\"type\"] == \"hls\":\n return HLSStream.parse_variant_playlist(self.session, \"https:\" + source[\"file\"])\n\n\n__plugin__ = INE\n", "path": "src/streamlink/plugins/ine.py"}]} | 1,729 | 199 |
gh_patches_debug_29769 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-5130 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Relationship between vetco and petco
I was looking at fixing the `vetco` spider, but after a quick look on the website everything I've seen is titled "At Petco".
To the Americans: is Vetco a real brand?
</issue>
<code>
[start of locations/spiders/vetco_clinics.py]
1 import re
2
3 import scrapy
4 from scrapy.selector import Selector
5
6 from locations.geo import postal_regions
7 from locations.items import Feature
8
9
10 class VetcoClinicsSpider(scrapy.Spider):
11 name = "vetco"
12 item_attributes = {"brand": "Vetco Clinics"}
13 allowed_domains = ["vetcoclinics.com"]
14
15 def start_requests(self):
16 for record in postal_regions("US"):
17 url_template = "https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}"
18 yield scrapy.http.Request(url_template.format(record["postal_region"]))
19
20 def parse(self, response):
21 jsonresponse = response.json()
22 if jsonresponse is not None:
23 clinics = jsonresponse.get("clinics")
24 if clinics:
25 for stores in clinics:
26 body = stores["label"]
27 address = Selector(text=body).xpath("//address/text()").extract()
28 if len(address) == 3:
29 addr_full, city_state_postal, phone = (item.split(",") for item in address)
30 city, state_postal = (item.split(",") for item in city_state_postal)
31 state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
32
33 else:
34 addr_full, city_state_postal = (item.split(",") for item in address)
35 city, state_postal = (item.split(",") for item in city_state_postal)
36 state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
37
38 properties = {
39 "ref": addr_full[0].strip(),
40 "addr_full": addr_full[0].strip(),
41 "city": city[0].strip(),
42 "state": state,
43 "postcode": postal,
44 "lat": float(stores["point"]["lat"]),
45 "lon": float(stores["point"]["long"]),
46 "website": response.url,
47 }
48
49 yield Feature(**properties)
50
[end of locations/spiders/vetco_clinics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/vetco_clinics.py b/locations/spiders/vetco_clinics.py
deleted file mode 100644
--- a/locations/spiders/vetco_clinics.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import re
-
-import scrapy
-from scrapy.selector import Selector
-
-from locations.geo import postal_regions
-from locations.items import Feature
-
-
-class VetcoClinicsSpider(scrapy.Spider):
- name = "vetco"
- item_attributes = {"brand": "Vetco Clinics"}
- allowed_domains = ["vetcoclinics.com"]
-
- def start_requests(self):
- for record in postal_regions("US"):
- url_template = "https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}"
- yield scrapy.http.Request(url_template.format(record["postal_region"]))
-
- def parse(self, response):
- jsonresponse = response.json()
- if jsonresponse is not None:
- clinics = jsonresponse.get("clinics")
- if clinics:
- for stores in clinics:
- body = stores["label"]
- address = Selector(text=body).xpath("//address/text()").extract()
- if len(address) == 3:
- addr_full, city_state_postal, phone = (item.split(",") for item in address)
- city, state_postal = (item.split(",") for item in city_state_postal)
- state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
-
- else:
- addr_full, city_state_postal = (item.split(",") for item in address)
- city, state_postal = (item.split(",") for item in city_state_postal)
- state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
-
- properties = {
- "ref": addr_full[0].strip(),
- "addr_full": addr_full[0].strip(),
- "city": city[0].strip(),
- "state": state,
- "postcode": postal,
- "lat": float(stores["point"]["lat"]),
- "lon": float(stores["point"]["long"]),
- "website": response.url,
- }
-
- yield Feature(**properties)
| {"golden_diff": "diff --git a/locations/spiders/vetco_clinics.py b/locations/spiders/vetco_clinics.py\ndeleted file mode 100644\n--- a/locations/spiders/vetco_clinics.py\n+++ /dev/null\n@@ -1,49 +0,0 @@\n-import re\n-\n-import scrapy\n-from scrapy.selector import Selector\n-\n-from locations.geo import postal_regions\n-from locations.items import Feature\n-\n-\n-class VetcoClinicsSpider(scrapy.Spider):\n- name = \"vetco\"\n- item_attributes = {\"brand\": \"Vetco Clinics\"}\n- allowed_domains = [\"vetcoclinics.com\"]\n-\n- def start_requests(self):\n- for record in postal_regions(\"US\"):\n- url_template = \"https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}\"\n- yield scrapy.http.Request(url_template.format(record[\"postal_region\"]))\n-\n- def parse(self, response):\n- jsonresponse = response.json()\n- if jsonresponse is not None:\n- clinics = jsonresponse.get(\"clinics\")\n- if clinics:\n- for stores in clinics:\n- body = stores[\"label\"]\n- address = Selector(text=body).xpath(\"//address/text()\").extract()\n- if len(address) == 3:\n- addr_full, city_state_postal, phone = (item.split(\",\") for item in address)\n- city, state_postal = (item.split(\",\") for item in city_state_postal)\n- state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n-\n- else:\n- addr_full, city_state_postal = (item.split(\",\") for item in address)\n- city, state_postal = (item.split(\",\") for item in city_state_postal)\n- state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n-\n- properties = {\n- \"ref\": addr_full[0].strip(),\n- \"addr_full\": addr_full[0].strip(),\n- \"city\": city[0].strip(),\n- \"state\": state,\n- \"postcode\": postal,\n- \"lat\": float(stores[\"point\"][\"lat\"]),\n- \"lon\": float(stores[\"point\"][\"long\"]),\n- \"website\": response.url,\n- }\n-\n- yield Feature(**properties)\n", "issue": "Relationship between vetco and petco\nI was looking at fixing the `vetco` spider, but after a quick look on the website everything I've seen is titled \"At Petco\".\r\n\r\nTo the Americans: is Vetco a real brand?\n", "before_files": [{"content": "import re\n\nimport scrapy\nfrom scrapy.selector import Selector\n\nfrom locations.geo import postal_regions\nfrom locations.items import Feature\n\n\nclass VetcoClinicsSpider(scrapy.Spider):\n name = \"vetco\"\n item_attributes = {\"brand\": \"Vetco Clinics\"}\n allowed_domains = [\"vetcoclinics.com\"]\n\n def start_requests(self):\n for record in postal_regions(\"US\"):\n url_template = \"https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}\"\n yield scrapy.http.Request(url_template.format(record[\"postal_region\"]))\n\n def parse(self, response):\n jsonresponse = response.json()\n if jsonresponse is not None:\n clinics = jsonresponse.get(\"clinics\")\n if clinics:\n for stores in clinics:\n body = stores[\"label\"]\n address = Selector(text=body).xpath(\"//address/text()\").extract()\n if len(address) == 3:\n addr_full, city_state_postal, phone = (item.split(\",\") for item in address)\n city, state_postal = (item.split(\",\") for item in city_state_postal)\n state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n\n else:\n addr_full, city_state_postal = (item.split(\",\") for item in address)\n city, state_postal = (item.split(\",\") for item in city_state_postal)\n state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n\n properties = {\n \"ref\": addr_full[0].strip(),\n \"addr_full\": addr_full[0].strip(),\n \"city\": 
city[0].strip(),\n \"state\": state,\n \"postcode\": postal,\n \"lat\": float(stores[\"point\"][\"lat\"]),\n \"lon\": float(stores[\"point\"][\"long\"]),\n \"website\": response.url,\n }\n\n yield Feature(**properties)\n", "path": "locations/spiders/vetco_clinics.py"}]} | 1,121 | 537 |
gh_patches_debug_299 | rasdani/github-patches | git_diff | PyGithub__PyGithub-557 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GitHub Integration raises "NotImplementedError Algorithm not supported"
We have working github integration code using PyGithub v1.32 that does essentially:
```python
integration = github.GithubIntegration(settings.GITHUB_INTEGRATION_ID, settings.GITHUB_INTEGRATION_PRIVATE_PEM)
inst_token = integration.get_access_token(installation_id).token
```
After upgrading to v1.34 this code raises "NotImplementedError Algorithm not supported"
I suspect it has to do with the [switch to pyjwt from python-jose](https://github.com/PyGithub/PyGithub/commit/d447eb13b9f4688a4c981ca03b1b3111fb299142)
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # ########################## Copyrights and license ############################
5 # #
6 # Copyright 2012 Vincent Jacques <[email protected]> #
7 # Copyright 2012 Zearin <[email protected]> #
8 # Copyright 2013 Vincent Jacques <[email protected]> #
9 # #
10 # This file is part of PyGithub. #
11 # http://pygithub.github.io/PyGithub/v1/index.html #
12 # #
13 # PyGithub is free software: you can redistribute it and/or modify it under #
14 # the terms of the GNU Lesser General Public License as published by the Free #
15 # Software Foundation, either version 3 of the License, or (at your option) #
16 # any later version. #
17 # #
18 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
19 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
20 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
21 # details. #
22 # #
23 # You should have received a copy of the GNU Lesser General Public License #
24 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
25 # #
26 # ##############################################################################
27
28 import setuptools
29 import textwrap
30
31 version = "1.34"
32
33
34 if __name__ == "__main__":
35 setuptools.setup(
36 name="PyGithub",
37 version=version,
38 description="Use the full Github API v3",
39 author="Vincent Jacques",
40 author_email="[email protected]",
41 url="http://pygithub.github.io/PyGithub/v1/index.html",
42 long_description=textwrap.dedent("""\
43 (Very short) Tutorial
44 =====================
45
46 First create a Github instance::
47
48 from github import Github
49
50 g = Github("user", "password")
51
52 Then play with your Github objects::
53
54 for repo in g.get_user().get_repos():
55 print repo.name
56 repo.edit(has_wiki=False)
57
58 You can also create a Github instance with an OAuth token::
59
60 g = Github(token)
61
62 Or without authentication::
63
64 g = Github()
65
66 Reference documentation
67 =======================
68
69 See http://pygithub.github.io/PyGithub/v1/index.html"""),
70 packages=[
71 "github",
72 "github.tests",
73 ],
74 package_data={
75 "github": ["tests/ReplayData/*.txt"]
76 },
77 classifiers=[
78 "Development Status :: 5 - Production/Stable",
79 "Environment :: Web Environment",
80 "Intended Audience :: Developers",
81 "License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)",
82 "Operating System :: OS Independent",
83 "Programming Language :: Python",
84 "Programming Language :: Python :: 2",
85 "Programming Language :: Python :: 2.5",
86 "Programming Language :: Python :: 2.6",
87 "Programming Language :: Python :: 2.7",
88 "Programming Language :: Python :: 3",
89 "Programming Language :: Python :: 3.2",
90 "Programming Language :: Python :: 3.3",
91 "Programming Language :: Python :: 3.4",
92 "Programming Language :: Python :: 3.5",
93 "Topic :: Software Development",
94 ],
95 test_suite="github.tests.AllTests",
96 use_2to3=True,
97 install_requires=[
98 "pyjwt"
99 ]
100 )
101
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -96,5 +96,8 @@
use_2to3=True,
install_requires=[
"pyjwt"
- ]
+ ],
+ extras_require = {
+ "integrations": ["cryptography"]
+ }
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -96,5 +96,8 @@\n use_2to3=True,\n install_requires=[\n \"pyjwt\"\n- ]\n+ ],\n+ extras_require = {\n+ \"integrations\": [\"cryptography\"]\n+ }\n )\n", "issue": "GitHub Integration raises \"NotImplementedError Algorithm not supported\"\nWe have working github integration code using PyGithub v1.32 that does essentially:\r\n\r\n```python\r\nintegration = github.GithubIntegration(settings.GITHUB_INTEGRATION_ID, settings.GITHUB_INTEGRATION_PRIVATE_PEM)\r\ninst_token = integration.get_access_token(installation_id).token\r\n```\r\nAfter upgrading to v1.34 this code raises \"NotImplementedError Algorithm not supported\"\r\n\r\nI suspect it has to do with the [switch to pyjwt from python-jose](https://github.com/PyGithub/PyGithub/commit/d447eb13b9f4688a4c981ca03b1b3111fb299142)\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n# ##############################################################################\n\nimport setuptools\nimport textwrap\n\nversion = \"1.34\"\n\n\nif __name__ == \"__main__\":\n setuptools.setup(\n name=\"PyGithub\",\n version=version,\n description=\"Use the full Github API v3\",\n author=\"Vincent Jacques\",\n author_email=\"[email protected]\",\n url=\"http://pygithub.github.io/PyGithub/v1/index.html\",\n long_description=textwrap.dedent(\"\"\"\\\n (Very short) Tutorial\n =====================\n\n First create a Github instance::\n\n from github import Github\n\n g = Github(\"user\", \"password\")\n\n Then play with your Github objects::\n\n for repo in g.get_user().get_repos():\n print repo.name\n repo.edit(has_wiki=False)\n\n You can also create a Github instance with an OAuth token::\n\n g = Github(token)\n\n Or without authentication::\n\n g = Github()\n\n Reference documentation\n =======================\n\n See http://pygithub.github.io/PyGithub/v1/index.html\"\"\"),\n packages=[\n \"github\",\n \"github.tests\",\n ],\n package_data={\n \"github\": [\"tests/ReplayData/*.txt\"]\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.5\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Topic :: Software Development\",\n ],\n test_suite=\"github.tests.AllTests\",\n use_2to3=True,\n install_requires=[\n \"pyjwt\"\n ]\n )\n", "path": "setup.py"}]} | 1,644 | 77 |
gh_patches_debug_55170 | rasdani/github-patches | git_diff | spack__spack-10720 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
git-lfs aborts (sometimes), fix in progress upstream
This is mostly an FYI.
Starting with `[email protected]` we frequently had `git-lfs` aborting. In some situations it ran successfully, in others it didn't. It seemed to depend on what other modules were loaded, but...
Between `[email protected]` and `[email protected]` the Makefile started unconditionally adding a `-extldflags` bit to the `go` command line, setting it to the value of `LDFLAGS`. If `LDFLAGS` isn't set to anything (our case) then it wasn't given an argument, even though it needs one. I'm not sure why this doesn't provide an error from the compiler, it seems to be grabbing something out of whatever comes next in memory.
I've changed the Makefile only set `-extldflags` if `LDFLAGS` is defined and made a Pull Request upstream: https://github.com/git-lfs/git-lfs/pull/3545
Depending what Upstream has to say, perhaps we'll want to patch `[email protected]`, or forbid it, or ...
I'll keep this updated as the `git-lfs` PR progresses.
</issue>
<code>
[start of var/spack/repos/builtin/packages/git-lfs/package.py]
1 # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class GitLfs(MakefilePackage):
10 """Git LFS is a system for managing and versioning large files in
11 association with a Git repository. Instead of storing the large files
12 within the Git repository as blobs, Git LFS stores special "pointer
13 files" in the repository, while storing the actual file contents on a
14 Git LFS server."""
15
16 homepage = "https://git-lfs.github.com"
17 url = "https://github.com/git-lfs/git-lfs/archive/v2.6.1.tar.gz"
18
19 version('2.7.0', sha256='1c829ddd163be2206a44edb366bd7f6d84c5afae3496687405ca9d2a5f3af07b')
20 version('2.6.1', sha256='e17cd9d4e66d1116be32f7ddc7e660c7f8fabbf510bc01b01ec15a22dd934ead')
21
22 depends_on('[email protected]:', type='build')
23 depends_on('[email protected]:', type='run')
24
25 parallel = False
26
27 # Git-lfs does not provide an 'install' target in the Makefile
28 def install(self, spec, prefix):
29 mkdirp(prefix.bin)
30 install(join_path('bin', 'git-lfs'), prefix.bin)
31
[end of var/spack/repos/builtin/packages/git-lfs/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/git-lfs/package.py b/var/spack/repos/builtin/packages/git-lfs/package.py
--- a/var/spack/repos/builtin/packages/git-lfs/package.py
+++ b/var/spack/repos/builtin/packages/git-lfs/package.py
@@ -22,6 +22,8 @@
depends_on('[email protected]:', type='build')
depends_on('[email protected]:', type='run')
+ patch('patches/issue-10702.patch', when='@2.7.0')
+
parallel = False
# Git-lfs does not provide an 'install' target in the Makefile
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/git-lfs/package.py b/var/spack/repos/builtin/packages/git-lfs/package.py\n--- a/var/spack/repos/builtin/packages/git-lfs/package.py\n+++ b/var/spack/repos/builtin/packages/git-lfs/package.py\n@@ -22,6 +22,8 @@\n depends_on('[email protected]:', type='build')\n depends_on('[email protected]:', type='run')\n \n+ patch('patches/issue-10702.patch', when='@2.7.0')\n+\n parallel = False\n \n # Git-lfs does not provide an 'install' target in the Makefile\n", "issue": "git-lfs aborts (sometimes), fix in progress upstream\nThis is mostly an FYI.\r\n\r\nStarting with `[email protected]` we frequently had `git-lfs` aborting. In some situations it ran successfully, in others it didn't. It seemed to depend on what other modules were loaded, but...\r\n\r\nBetween `[email protected]` and `[email protected]` the Makefile started unconditionally adding a `-extldflags` bit to the `go` command line, setting it to the value of `LDFLAGS`. If `LDFLAGS` isn't set to anything (our case) then it wasn't given an argument, even though it needs one. I'm not sure why this doesn't provide an error from the compiler, it seems to be grabbing something out of whatever comes next in memory.\r\n\r\nI've changed the Makefile only set `-extldflags` if `LDFLAGS` is defined and made a Pull Request upstream: https://github.com/git-lfs/git-lfs/pull/3545\r\n\r\nDepending what Upstream has to say, perhaps we'll want to patch `[email protected]`, or forbid it, or ...\r\n\r\nI'll keep this updated as the `git-lfs` PR progresses.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass GitLfs(MakefilePackage):\n \"\"\"Git LFS is a system for managing and versioning large files in\n association with a Git repository. Instead of storing the large files\n within the Git repository as blobs, Git LFS stores special \"pointer\n files\" in the repository, while storing the actual file contents on a\n Git LFS server.\"\"\"\n\n homepage = \"https://git-lfs.github.com\"\n url = \"https://github.com/git-lfs/git-lfs/archive/v2.6.1.tar.gz\"\n\n version('2.7.0', sha256='1c829ddd163be2206a44edb366bd7f6d84c5afae3496687405ca9d2a5f3af07b')\n version('2.6.1', sha256='e17cd9d4e66d1116be32f7ddc7e660c7f8fabbf510bc01b01ec15a22dd934ead')\n\n depends_on('[email protected]:', type='build')\n depends_on('[email protected]:', type='run')\n\n parallel = False\n\n # Git-lfs does not provide an 'install' target in the Makefile\n def install(self, spec, prefix):\n mkdirp(prefix.bin)\n install(join_path('bin', 'git-lfs'), prefix.bin)\n", "path": "var/spack/repos/builtin/packages/git-lfs/package.py"}]} | 1,278 | 147 |
gh_patches_debug_15244 | rasdani/github-patches | git_diff | google__fuzzbench-242 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Should local docker run restrict cpu to 1 to match FuzzBench prod environment ?
See also
https://github.com/google/fuzzbench/issues/173#issuecomment-605283610
</issue>
<code>
[start of common/fuzzer_utils.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Fuzzer helpers."""
15
16 import importlib
17 import os
18 import re
19 from typing import Optional
20
21 from common import logs
22 from common import utils
23 from common import yaml_utils
24
25 DEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'
26 FUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'
27 VALID_FUZZER_REGEX = re.compile(r'^[A-Za-z0-9_]+$')
28
29
30 def get_fuzz_target_binary(search_directory: str,
31 fuzz_target_name: str) -> Optional[str]:
32 """Return target binary path."""
33 if fuzz_target_name:
34 fuzz_target_binary = os.path.join(search_directory, fuzz_target_name)
35 if os.path.exists(fuzz_target_binary):
36 return fuzz_target_binary
37 return None
38
39 default_fuzz_target_binary = os.path.join(search_directory,
40 DEFAULT_FUZZ_TARGET_NAME)
41 if os.path.exists(default_fuzz_target_binary):
42 return default_fuzz_target_binary
43
44 for root, _, files in os.walk(search_directory):
45 if root == 'uninstrumented':
46 continue
47 for filename in files:
48 if filename.endswith('-uninstrumented'):
49 # Skip uninstrumented binaries (e.g. with QSYM).
50 continue
51
52 file_path = os.path.join(root, filename)
53 with open(file_path, 'rb') as file_handle:
54 if FUZZ_TARGET_SEARCH_STRING in file_handle.read():
55 return file_path
56
57 return None
58
59
60 def validate(fuzzer):
61 """Return True if |fuzzer| is a valid fuzzbench fuzzer."""
62 # Although importing probably allows a subset of what the regex allows, use
63 # the regex anyway to be safe. The regex is enforcing that the fuzzer is a
64 # valid path for GCS or a linux system.
65 if VALID_FUZZER_REGEX.match(fuzzer) is None:
66 logs.error('%s does not conform to %s pattern.', fuzzer,
67 VALID_FUZZER_REGEX.pattern)
68 return False
69
70 # Try importing the fuzzer module.
71 module_name = 'fuzzers.{}.fuzzer'.format(fuzzer)
72 try:
73 importlib.import_module(module_name)
74 return True
75 except Exception as error: # pylint: disable=broad-except
76 logs.error('Encountered "%s" while trying to import %s.', error,
77 module_name)
78 return False
79
80
81 def get_fuzzer_configs(fuzzers=None):
82 """Returns the list of all fuzzers."""
83 fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')
84 fuzzer_configs = []
85 for fuzzer in os.listdir(fuzzers_dir):
86 if not os.path.isfile(os.path.join(fuzzers_dir, fuzzer, 'fuzzer.py')):
87 continue
88 if fuzzer == 'coverage':
89 continue
90
91 if not fuzzers or fuzzer in fuzzers:
92 # Auto-generate the default configuration for each base fuzzer.
93 fuzzer_configs.append({'fuzzer': fuzzer})
94
95 variant_config_path = os.path.join(fuzzers_dir, fuzzer, 'variants.yaml')
96 if not os.path.isfile(variant_config_path):
97 continue
98
99 variant_config = yaml_utils.read(variant_config_path)
100 assert 'variants' in variant_config, (
101 'Missing "variants" section of {}'.format(variant_config_path))
102 for variant in variant_config['variants']:
103 if not fuzzers or variant['name'] in fuzzers:
104 # Modify the config from the variants.yaml format to the
105 # format expected by a fuzzer config.
106 assert 'name' in variant, (
107 'Missing name attribute for fuzzer variant in {}'.format(
108 variant_config_path))
109 variant['variant_name'] = variant['name']
110 del variant['name']
111 variant['fuzzer'] = fuzzer
112 fuzzer_configs.append(variant)
113
114 return fuzzer_configs
115
[end of common/fuzzer_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/common/fuzzer_utils.py b/common/fuzzer_utils.py
--- a/common/fuzzer_utils.py
+++ b/common/fuzzer_utils.py
@@ -20,7 +20,6 @@
from common import logs
from common import utils
-from common import yaml_utils
DEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'
FUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'
@@ -80,6 +79,10 @@
def get_fuzzer_configs(fuzzers=None):
"""Returns the list of all fuzzers."""
+ # Import it here to avoid yaml dependency in runner.
+ # pylint: disable=import-outside-toplevel
+ from common import yaml_utils
+
fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')
fuzzer_configs = []
for fuzzer in os.listdir(fuzzers_dir):
| {"golden_diff": "diff --git a/common/fuzzer_utils.py b/common/fuzzer_utils.py\n--- a/common/fuzzer_utils.py\n+++ b/common/fuzzer_utils.py\n@@ -20,7 +20,6 @@\n \n from common import logs\n from common import utils\n-from common import yaml_utils\n \n DEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'\n FUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'\n@@ -80,6 +79,10 @@\n \n def get_fuzzer_configs(fuzzers=None):\n \"\"\"Returns the list of all fuzzers.\"\"\"\n+ # Import it here to avoid yaml dependency in runner.\n+ # pylint: disable=import-outside-toplevel\n+ from common import yaml_utils\n+\n fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')\n fuzzer_configs = []\n for fuzzer in os.listdir(fuzzers_dir):\n", "issue": "Should local docker run restrict cpu to 1 to match FuzzBench prod environment ?\nSee also\r\nhttps://github.com/google/fuzzbench/issues/173#issuecomment-605283610\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Fuzzer helpers.\"\"\"\n\nimport importlib\nimport os\nimport re\nfrom typing import Optional\n\nfrom common import logs\nfrom common import utils\nfrom common import yaml_utils\n\nDEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'\nFUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'\nVALID_FUZZER_REGEX = re.compile(r'^[A-Za-z0-9_]+$')\n\n\ndef get_fuzz_target_binary(search_directory: str,\n fuzz_target_name: str) -> Optional[str]:\n \"\"\"Return target binary path.\"\"\"\n if fuzz_target_name:\n fuzz_target_binary = os.path.join(search_directory, fuzz_target_name)\n if os.path.exists(fuzz_target_binary):\n return fuzz_target_binary\n return None\n\n default_fuzz_target_binary = os.path.join(search_directory,\n DEFAULT_FUZZ_TARGET_NAME)\n if os.path.exists(default_fuzz_target_binary):\n return default_fuzz_target_binary\n\n for root, _, files in os.walk(search_directory):\n if root == 'uninstrumented':\n continue\n for filename in files:\n if filename.endswith('-uninstrumented'):\n # Skip uninstrumented binaries (e.g. with QSYM).\n continue\n\n file_path = os.path.join(root, filename)\n with open(file_path, 'rb') as file_handle:\n if FUZZ_TARGET_SEARCH_STRING in file_handle.read():\n return file_path\n\n return None\n\n\ndef validate(fuzzer):\n \"\"\"Return True if |fuzzer| is a valid fuzzbench fuzzer.\"\"\"\n # Although importing probably allows a subset of what the regex allows, use\n # the regex anyway to be safe. 
The regex is enforcing that the fuzzer is a\n # valid path for GCS or a linux system.\n if VALID_FUZZER_REGEX.match(fuzzer) is None:\n logs.error('%s does not conform to %s pattern.', fuzzer,\n VALID_FUZZER_REGEX.pattern)\n return False\n\n # Try importing the fuzzer module.\n module_name = 'fuzzers.{}.fuzzer'.format(fuzzer)\n try:\n importlib.import_module(module_name)\n return True\n except Exception as error: # pylint: disable=broad-except\n logs.error('Encountered \"%s\" while trying to import %s.', error,\n module_name)\n return False\n\n\ndef get_fuzzer_configs(fuzzers=None):\n \"\"\"Returns the list of all fuzzers.\"\"\"\n fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')\n fuzzer_configs = []\n for fuzzer in os.listdir(fuzzers_dir):\n if not os.path.isfile(os.path.join(fuzzers_dir, fuzzer, 'fuzzer.py')):\n continue\n if fuzzer == 'coverage':\n continue\n\n if not fuzzers or fuzzer in fuzzers:\n # Auto-generate the default configuration for each base fuzzer.\n fuzzer_configs.append({'fuzzer': fuzzer})\n\n variant_config_path = os.path.join(fuzzers_dir, fuzzer, 'variants.yaml')\n if not os.path.isfile(variant_config_path):\n continue\n\n variant_config = yaml_utils.read(variant_config_path)\n assert 'variants' in variant_config, (\n 'Missing \"variants\" section of {}'.format(variant_config_path))\n for variant in variant_config['variants']:\n if not fuzzers or variant['name'] in fuzzers:\n # Modify the config from the variants.yaml format to the\n # format expected by a fuzzer config.\n assert 'name' in variant, (\n 'Missing name attribute for fuzzer variant in {}'.format(\n variant_config_path))\n variant['variant_name'] = variant['name']\n del variant['name']\n variant['fuzzer'] = fuzzer\n fuzzer_configs.append(variant)\n\n return fuzzer_configs\n", "path": "common/fuzzer_utils.py"}]} | 1,779 | 195 |
gh_patches_debug_3667 | rasdani/github-patches | git_diff | vega__altair-784 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix vega-embed version for Altair 1
For example in https://github.com/altair-viz/altair/blob/d4d29ca06e920f71073766c6456d387e682cee17/altair/vegalite/v1/html.py#L7
</issue>
<code>
[start of altair/vegalite/v1/html.py]
1 HTML_TEMPLATE = """
2 <!DOCTYPE html>
3 <html>
4 <head>
5 <script src="https://cdn.jsdelivr.net/npm/vega@2"></script>
6 <script src="https://cdn.jsdelivr.net/npm/vega-lite@1"></script>
7 <script src="https://cdn.jsdelivr.net/npm/vega-embed@3"></script>
8 </head>
9 <body>
10 <div id="vis"></div>
11 <script type="text/javascript">
12 var spec = {spec};
13 var opt = {opt};
14 vegaEmbed("#vis", spec, opt);
15 </script>
16 </body>
17 </html>
18 """
19
[end of altair/vegalite/v1/html.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/vegalite/v1/html.py b/altair/vegalite/v1/html.py
--- a/altair/vegalite/v1/html.py
+++ b/altair/vegalite/v1/html.py
@@ -4,7 +4,7 @@
<head>
<script src="https://cdn.jsdelivr.net/npm/vega@2"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-lite@1"></script>
- <script src="https://cdn.jsdelivr.net/npm/vega-embed@3"></script>
+ <script src="https://cdn.jsdelivr.net/npm/vega-embed@2"></script>
</head>
<body>
<div id="vis"></div>
| {"golden_diff": "diff --git a/altair/vegalite/v1/html.py b/altair/vegalite/v1/html.py\n--- a/altair/vegalite/v1/html.py\n+++ b/altair/vegalite/v1/html.py\n@@ -4,7 +4,7 @@\n <head>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@2\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@1\"></script>\n- <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@3\"></script>\n+ <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@2\"></script>\n </head>\n <body>\n <div id=\"vis\"></div>\n", "issue": "Fix vega-embed version for Altair 1\nFor example in https://github.com/altair-viz/altair/blob/d4d29ca06e920f71073766c6456d387e682cee17/altair/vegalite/v1/html.py#L7\n", "before_files": [{"content": "HTML_TEMPLATE = \"\"\"\n<!DOCTYPE html>\n<html>\n<head>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@2\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@1\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@3\"></script>\n</head>\n<body>\n <div id=\"vis\"></div>\n <script type=\"text/javascript\">\n var spec = {spec};\n var opt = {opt};\n vegaEmbed(\"#vis\", spec, opt);\n </script>\n</body>\n</html>\n\"\"\"\n", "path": "altair/vegalite/v1/html.py"}]} | 788 | 168 |
gh_patches_debug_2104 | rasdani/github-patches | git_diff | shuup__shuup-1574 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Admin: Main menu won't stay hidden
Two issues (at least):
Desktop: If I close (minimize, desktop) main-menu and click any link, the menu appears again.
Desktop to mobile: If I minimize the menu on a bigger desktop and then drag window smaller the menu appears again.
</issue>
<code>
[start of shuup/admin/views/menu.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of Shuup.
3 #
4 # Copyright (c) 2012-2018, Shuup Inc. All rights reserved.
5 #
6 # This source code is licensed under the OSL-3.0 license found in the
7 # LICENSE file in the root directory of this source tree.
8 from django.http import JsonResponse
9 from django.views.generic import TemplateView, View
10
11
12 class MenuView(TemplateView):
13 template_name = "shuup/admin/base/_main_menu.jinja"
14
15
16 class MenuToggleView(View):
17 def post(self, request, *args, **kwargs):
18 request.session["menu_open"] = int(request.POST.get("menu_open", 0))
19 return JsonResponse({"success": True})
20
[end of shuup/admin/views/menu.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shuup/admin/views/menu.py b/shuup/admin/views/menu.py
--- a/shuup/admin/views/menu.py
+++ b/shuup/admin/views/menu.py
@@ -15,5 +15,5 @@
class MenuToggleView(View):
def post(self, request, *args, **kwargs):
- request.session["menu_open"] = int(request.POST.get("menu_open", 0))
+ request.session["menu_open"] = not bool(request.session.get("menu_open", True))
return JsonResponse({"success": True})
| {"golden_diff": "diff --git a/shuup/admin/views/menu.py b/shuup/admin/views/menu.py\n--- a/shuup/admin/views/menu.py\n+++ b/shuup/admin/views/menu.py\n@@ -15,5 +15,5 @@\n \n class MenuToggleView(View):\n def post(self, request, *args, **kwargs):\n- request.session[\"menu_open\"] = int(request.POST.get(\"menu_open\", 0))\n+ request.session[\"menu_open\"] = not bool(request.session.get(\"menu_open\", True))\n return JsonResponse({\"success\": True})\n", "issue": "Admin: Main menu won't stay hidden\nTwo issues (at least):\r\nDesktop: If I close (minimize, desktop) main-menu and click any link, the menu appears again.\r\nDesktop to mobile: If I minimize the menu on a bigger desktop and then drag window smaller the menu appears again. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2018, Shuup Inc. All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.http import JsonResponse\nfrom django.views.generic import TemplateView, View\n\n\nclass MenuView(TemplateView):\n template_name = \"shuup/admin/base/_main_menu.jinja\"\n\n\nclass MenuToggleView(View):\n def post(self, request, *args, **kwargs):\n request.session[\"menu_open\"] = int(request.POST.get(\"menu_open\", 0))\n return JsonResponse({\"success\": True})\n", "path": "shuup/admin/views/menu.py"}]} | 799 | 121 |
gh_patches_debug_34226 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-268 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
importing djstripe within setup.py causes race condition when installing from repo
Trying to install dj-stripe from a repo runs into a race condition at setup.py:
``` bash
pip install -e git://github.com/pydanny/dj-stripe.git#egg=djstripe
Obtaining djstripe from git+git://github.com/pydanny/dj-stripe.git#egg=djstripe
Cloning git://github.com/pydanny/dj-stripe.git to ./v/test_djstripe/src/djstripe
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/home/dave/v/test_djstripe/src/djstripe/setup.py", line 6, in <module>
import djstripe
File "/home/dave/v/test_djstripe/src/djstripe/djstripe/__init__.py", line 4, in <module>
from django import get_version as get_django_version
ImportError: No module named 'django'
----------------------------------------
```
There are a few ways to fix this. I would suggest the, for example, get_version(package) methods used in https://github.com/pydanny/django-admin2/blob/master/setup.py
This is a trivial fix, I'll get a patch together soon.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4 import sys
5
6 import djstripe
7
8 version = djstripe.__version__
9
10 try:
11 from setuptools import setup
12 except ImportError:
13 from distutils.core import setup
14
15 if sys.argv[-1] == 'publish':
16 os.system('python setup.py sdist upload')
17 os.system('python setup.py bdist_wheel upload')
18 sys.exit()
19
20 if sys.argv[-1] == 'tag':
21 print("Tagging the version on github:")
22 os.system("git tag -a %s -m 'version %s'" % (version, version))
23 os.system("git push --tags")
24 sys.exit()
25
26 readme = open('README.rst').read()
27 history = open('HISTORY.rst').read().replace('.. :changelog:', '')
28
29 INSTALL_REQUIRES = [
30 'django>=1.7',
31 'stripe>=1.22.2',
32 'django-model-utils>=2.2',
33 'django-braces>=1.8.0',
34 'jsonfield>=1.0.3',
35 'pytz>=2015.4'
36 ]
37
38 setup(
39 name='dj-stripe',
40 version=version,
41 description=djstripe.__summary__,
42 long_description=readme + '\n\n' + history,
43 author=djstripe.__author__,
44 author_email=djstripe.__email__,
45 url=djstripe.__uri__,
46 packages=[
47 'djstripe',
48 ],
49 package_dir={'djstripe': 'djstripe'},
50 include_package_data=True,
51 install_requires=INSTALL_REQUIRES,
52 license=djstripe.__license__,
53 zip_safe=False,
54 keywords='stripe django',
55 classifiers=[
56 'Development Status :: 4 - Beta',
57 'Environment :: Web Environment',
58 'Framework :: Django',
59 'Framework :: Django :: 1.7',
60 'Framework :: Django :: 1.8',
61 'Intended Audience :: Developers',
62 'License :: OSI Approved :: BSD License',
63 'Natural Language :: English',
64 "Programming Language :: Python :: 2",
65 'Programming Language :: Python :: 2.7',
66 'Programming Language :: Python :: 3',
67 'Programming Language :: Python :: 3.3',
68 'Programming Language :: Python :: 3.4',
69 'Programming Language :: Python :: 3.5'
70 ],
71 )
72
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,11 +1,37 @@
#!/usr/bin/env python
+import ast
import os
import sys
-import djstripe
-version = djstripe.__version__
+class MetadataFinder(ast.NodeVisitor):
+ def __init__(self):
+ self.version = None
+ self.summary = None
+ self.author = None
+ self.email = None
+ self.uri = None
+ self.licence = None
+
+ def visit_Assign(self, node):
+ if node.targets[0].id == '__version__':
+ self.version = node.value.s
+ elif node.targets[0].id == '__summary__':
+ self.summary = node.value.s
+ elif node.targets[0].id == '__author__':
+ self.author = node.value.s
+ elif node.targets[0].id == '__email__':
+ self.email = node.value.s
+ elif node.targets[0].id == '__uri__':
+ self.uri = node.value.s
+ elif node.targets[0].id == '__license__':
+ self.license = node.value.s
+
+
+with open(os.path.join('djstripe', '__init__.py')) as open_file:
+ finder = MetadataFinder()
+ finder.visit(ast.parse(open_file.read()))
try:
from setuptools import setup
@@ -19,7 +45,8 @@
if sys.argv[-1] == 'tag':
print("Tagging the version on github:")
- os.system("git tag -a %s -m 'version %s'" % (version, version))
+ os.system("git tag -a %s -m 'version %s'" % (finder.version,
+ finder.version))
os.system("git push --tags")
sys.exit()
@@ -37,19 +64,19 @@
setup(
name='dj-stripe',
- version=version,
- description=djstripe.__summary__,
+ version=finder.version,
+ description=finder.summary,
long_description=readme + '\n\n' + history,
- author=djstripe.__author__,
- author_email=djstripe.__email__,
- url=djstripe.__uri__,
+ author=finder.author,
+ author_email=finder.email,
+ url=finder.uri,
packages=[
'djstripe',
],
package_dir={'djstripe': 'djstripe'},
include_package_data=True,
install_requires=INSTALL_REQUIRES,
- license=djstripe.__license__,
+ license=finder.license,
zip_safe=False,
keywords='stripe django',
classifiers=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,11 +1,37 @@\n #!/usr/bin/env python\n \n+import ast\n import os\n import sys\n \n-import djstripe\n \n-version = djstripe.__version__\n+class MetadataFinder(ast.NodeVisitor):\n+ def __init__(self):\n+ self.version = None\n+ self.summary = None\n+ self.author = None\n+ self.email = None\n+ self.uri = None\n+ self.licence = None\n+\n+ def visit_Assign(self, node):\n+ if node.targets[0].id == '__version__':\n+ self.version = node.value.s\n+ elif node.targets[0].id == '__summary__':\n+ self.summary = node.value.s\n+ elif node.targets[0].id == '__author__':\n+ self.author = node.value.s\n+ elif node.targets[0].id == '__email__':\n+ self.email = node.value.s\n+ elif node.targets[0].id == '__uri__':\n+ self.uri = node.value.s\n+ elif node.targets[0].id == '__license__':\n+ self.license = node.value.s\n+\n+\n+with open(os.path.join('djstripe', '__init__.py')) as open_file:\n+ finder = MetadataFinder()\n+ finder.visit(ast.parse(open_file.read()))\n \n try:\n from setuptools import setup\n@@ -19,7 +45,8 @@\n \n if sys.argv[-1] == 'tag':\n print(\"Tagging the version on github:\")\n- os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n+ os.system(\"git tag -a %s -m 'version %s'\" % (finder.version,\n+ finder.version))\n os.system(\"git push --tags\")\n sys.exit()\n \n@@ -37,19 +64,19 @@\n \n setup(\n name='dj-stripe',\n- version=version,\n- description=djstripe.__summary__,\n+ version=finder.version,\n+ description=finder.summary,\n long_description=readme + '\\n\\n' + history,\n- author=djstripe.__author__,\n- author_email=djstripe.__email__,\n- url=djstripe.__uri__,\n+ author=finder.author,\n+ author_email=finder.email,\n+ url=finder.uri,\n packages=[\n 'djstripe',\n ],\n package_dir={'djstripe': 'djstripe'},\n include_package_data=True,\n install_requires=INSTALL_REQUIRES,\n- license=djstripe.__license__,\n+ license=finder.license,\n zip_safe=False,\n keywords='stripe django',\n classifiers=[\n", "issue": "importing djstripe within setup.py causes race condition when installing from repo\nTrying to install dj-stripe from a repo runs into a race condition at setup.py:\n\n``` bash\npip install -e git://github.com/pydanny/dj-stripe.git#egg=djstripe \nObtaining djstripe from git+git://github.com/pydanny/dj-stripe.git#egg=djstripe\n Cloning git://github.com/pydanny/dj-stripe.git to ./v/test_djstripe/src/djstripe\n Complete output from command python setup.py egg_info:\n Traceback (most recent call last):\n File \"<string>\", line 20, in <module>\n File \"/home/dave/v/test_djstripe/src/djstripe/setup.py\", line 6, in <module>\n import djstripe\n File \"/home/dave/v/test_djstripe/src/djstripe/djstripe/__init__.py\", line 4, in <module>\n from django import get_version as get_django_version\n ImportError: No module named 'django'\n\n ----------------------------------------\n```\n\nThere are a few ways to fix this. I would suggest the, for example, get_version(package) methods used in https://github.com/pydanny/django-admin2/blob/master/setup.py\n\nThis is a trivial fix, I'll get a patch together soon. 
\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport sys\n\nimport djstripe\n\nversion = djstripe.__version__\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n print(\"Tagging the version on github:\")\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nreadme = open('README.rst').read()\nhistory = open('HISTORY.rst').read().replace('.. :changelog:', '')\n\nINSTALL_REQUIRES = [\n 'django>=1.7',\n 'stripe>=1.22.2',\n 'django-model-utils>=2.2',\n 'django-braces>=1.8.0',\n 'jsonfield>=1.0.3',\n 'pytz>=2015.4'\n]\n\nsetup(\n name='dj-stripe',\n version=version,\n description=djstripe.__summary__,\n long_description=readme + '\\n\\n' + history,\n author=djstripe.__author__,\n author_email=djstripe.__email__,\n url=djstripe.__uri__,\n packages=[\n 'djstripe',\n ],\n package_dir={'djstripe': 'djstripe'},\n include_package_data=True,\n install_requires=INSTALL_REQUIRES,\n license=djstripe.__license__,\n zip_safe=False,\n keywords='stripe django',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Framework :: Django :: 1.7',\n 'Framework :: Django :: 1.8',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n \"Programming Language :: Python :: 2\",\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5'\n ],\n)\n", "path": "setup.py"}]} | 1,456 | 592 |
gh_patches_debug_9495 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-231 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
psycopg2 cursor's __enter__ method is not patched to be traced
See behavior here:
```python
>>> import ddtrace
>>> ddtrace.patch_all()
>>> import psycopg2
>>> conn = psycopg2.connect('postgresql://localhost')
>>> print(type(conn.cursor()))
<class 'ddtrace.contrib.dbapi.TracedCursor'>
>>> with conn.cursor() as cur:
... print(type(cur))
<type 'psycopg2.extensions.cursor'>
```
</issue>
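For context, a `wrapt.ObjectProxy` resolves `__enter__` through the wrapped object, so the `with` statement hands back the raw psycopg2 cursor instead of the proxy. A minimal sketch (not the library's exact code) of the kind of override that closes the gap, returning the proxy itself:

```python
# Minimal sketch: override __enter__ on the proxy so
# "with conn.cursor() as cur" keeps yielding the traced wrapper.
import wrapt

class TracedCursorSketch(wrapt.ObjectProxy):
    def __enter__(self):
        # Reference the wrapped attribute first so cursors without
        # context-manager support raise the same AttributeError.
        self.__wrapped__.__enter__
        return self
```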
<code>
[start of ddtrace/contrib/dbapi/__init__.py]
1 """
2 Generic dbapi tracing code.
3 """
4
5 # stdlib
6 import logging
7
8 # 3p
9 import wrapt
10
11 # project
12 from ddtrace import Pin
13 from ddtrace.ext import sql
14
15
16 log = logging.getLogger(__name__)
17
18
19 class TracedCursor(wrapt.ObjectProxy):
20 """ TracedCursor wraps a psql cursor and traces it's queries. """
21
22 _datadog_pin = None
23 _datadog_name = None
24
25 def __init__(self, cursor, pin):
26 super(TracedCursor, self).__init__(cursor)
27 self._datadog_pin = pin
28 name = pin.app or 'sql'
29 self._datadog_name = '%s.query' % name
30
31 def executemany(self, query, *args, **kwargs):
32 pin = self._datadog_pin
33 if not pin or not pin.enabled():
34 return self.__wrapped__.executemany(query, *args, **kwargs)
35 service = pin.service
36
37 # FIXME[matt] properly handle kwargs here. arg names can be different
38 # with different libs.
39 with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:
40 s.span_type = sql.TYPE
41 s.set_tag(sql.QUERY, query)
42 s.set_tags(pin.tags)
43 s.set_tag("sql.executemany", "true")
44 try:
45 return self.__wrapped__.executemany(query, *args, **kwargs)
46 finally:
47 s.set_metric("db.rowcount", self.rowcount)
48
49 def execute(self, query, *args, **kwargs):
50 pin = self._datadog_pin
51 if not pin or not pin.enabled():
52 return self.__wrapped__.execute(query, *args, **kwargs)
53
54 service = pin.service
55 with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:
56 s.span_type = sql.TYPE
57 s.set_tag(sql.QUERY, query)
58 s.set_tags(pin.tags)
59 try:
60 return self.__wrapped__.execute(query, *args, **kwargs)
61 finally:
62 s.set_metric("db.rowcount", self.rowcount)
63
64 def callproc(self, proc, args):
65 pin = self._datadog_pin
66 if not pin or not pin.enabled():
67 return self.__wrapped__.callproc(proc, args)
68
69 with pin.tracer.trace(self._datadog_name, service=pin.service, resource=proc) as s:
70 s.span_type = sql.TYPE
71 s.set_tag(sql.QUERY, proc)
72 s.set_tags(pin.tags)
73 try:
74 return self.__wrapped__.callproc(proc, args)
75 finally:
76 s.set_metric("db.rowcount", self.rowcount)
77
78
79 class TracedConnection(wrapt.ObjectProxy):
80 """ TracedConnection wraps a Connection with tracing code. """
81
82 _datadog_pin = None
83
84 def __init__(self, conn):
85 super(TracedConnection, self).__init__(conn)
86 name = _get_vendor(conn)
87 Pin(service=name, app=name).onto(self)
88
89 def cursor(self, *args, **kwargs):
90 cursor = self.__wrapped__.cursor(*args, **kwargs)
91 pin = self._datadog_pin
92 if not pin:
93 return cursor
94 return TracedCursor(cursor, pin)
95
96
97 def _get_vendor(conn):
98 """ Return the vendor (e.g postgres, mysql) of the given
99 database.
100 """
101 try:
102 name = _get_module_name(conn)
103 except Exception:
104 log.debug("couldnt parse module name", exc_info=True)
105 name = "sql"
106 return sql.normalize_vendor(name)
107
108 def _get_module_name(conn):
109 return conn.__class__.__module__.split('.')[0]
110
[end of ddtrace/contrib/dbapi/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/dbapi/__init__.py b/ddtrace/contrib/dbapi/__init__.py
--- a/ddtrace/contrib/dbapi/__init__.py
+++ b/ddtrace/contrib/dbapi/__init__.py
@@ -75,6 +75,15 @@
finally:
s.set_metric("db.rowcount", self.rowcount)
+ def __enter__(self):
+ # previous versions of the dbapi didn't support context managers. let's
+ # reference the func that would be called to ensure that errors
+ # messages will be the same.
+ self.__wrapped__.__enter__
+
+ # and finally, yield the traced cursor.
+ return self
+
class TracedConnection(wrapt.ObjectProxy):
""" TracedConnection wraps a Connection with tracing code. """
| {"golden_diff": "diff --git a/ddtrace/contrib/dbapi/__init__.py b/ddtrace/contrib/dbapi/__init__.py\n--- a/ddtrace/contrib/dbapi/__init__.py\n+++ b/ddtrace/contrib/dbapi/__init__.py\n@@ -75,6 +75,15 @@\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n \n+ def __enter__(self):\n+ # previous versions of the dbapi didn't support context managers. let's\n+ # reference the func that would be called to ensure that errors\n+ # messages will be the same.\n+ self.__wrapped__.__enter__\n+\n+ # and finally, yield the traced cursor.\n+ return self\n+\n \n class TracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. \"\"\"\n", "issue": "psycopg2 cursor's __enter__ method is not patched to be traced\nSee behavior here:\r\n\r\n```python\r\n>>> import ddtrace\r\n>>> ddtrace.patch_all()\r\n>>> import psycopg2\r\n>>> conn = psycopg2.connect('postgresql://localhost')\r\n>>> print(type(conn.cursor()))\r\n<class 'ddtrace.contrib.dbapi.TracedCursor'>\r\n>>> with conn.cursor() as cur:\r\n... print(type(cur))\r\n<type 'psycopg2.extensions.cursor'>\r\n```\n", "before_files": [{"content": "\"\"\"\nGeneric dbapi tracing code.\n\"\"\"\n\n# stdlib\nimport logging\n\n# 3p\nimport wrapt\n\n# project\nfrom ddtrace import Pin\nfrom ddtrace.ext import sql\n\n\nlog = logging.getLogger(__name__)\n\n\nclass TracedCursor(wrapt.ObjectProxy):\n \"\"\" TracedCursor wraps a psql cursor and traces it's queries. \"\"\"\n\n _datadog_pin = None\n _datadog_name = None\n\n def __init__(self, cursor, pin):\n super(TracedCursor, self).__init__(cursor)\n self._datadog_pin = pin\n name = pin.app or 'sql'\n self._datadog_name = '%s.query' % name\n\n def executemany(self, query, *args, **kwargs):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.executemany(query, *args, **kwargs)\n service = pin.service\n\n # FIXME[matt] properly handle kwargs here. arg names can be different\n # with different libs.\n with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, query)\n s.set_tags(pin.tags)\n s.set_tag(\"sql.executemany\", \"true\")\n try:\n return self.__wrapped__.executemany(query, *args, **kwargs)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n def execute(self, query, *args, **kwargs):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.execute(query, *args, **kwargs)\n\n service = pin.service\n with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, query)\n s.set_tags(pin.tags)\n try:\n return self.__wrapped__.execute(query, *args, **kwargs)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n def callproc(self, proc, args):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.callproc(proc, args)\n\n with pin.tracer.trace(self._datadog_name, service=pin.service, resource=proc) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, proc)\n s.set_tags(pin.tags)\n try:\n return self.__wrapped__.callproc(proc, args)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n\nclass TracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. 
\"\"\"\n\n _datadog_pin = None\n\n def __init__(self, conn):\n super(TracedConnection, self).__init__(conn)\n name = _get_vendor(conn)\n Pin(service=name, app=name).onto(self)\n\n def cursor(self, *args, **kwargs):\n cursor = self.__wrapped__.cursor(*args, **kwargs)\n pin = self._datadog_pin\n if not pin:\n return cursor\n return TracedCursor(cursor, pin)\n\n\ndef _get_vendor(conn):\n \"\"\" Return the vendor (e.g postgres, mysql) of the given\n database.\n \"\"\"\n try:\n name = _get_module_name(conn)\n except Exception:\n log.debug(\"couldnt parse module name\", exc_info=True)\n name = \"sql\"\n return sql.normalize_vendor(name)\n\ndef _get_module_name(conn):\n return conn.__class__.__module__.split('.')[0]\n", "path": "ddtrace/contrib/dbapi/__init__.py"}]} | 1,685 | 183 |
gh_patches_debug_4362 | rasdani/github-patches | git_diff | psf__black-2836 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ignore __pypackages__ directory contents
**Describe the bug**
When using [PDM](https://pdm.fming.dev/), `black` does not ignore `__pypackages__` directory contents.
**To Reproduce**
Run `pdm run black .`
**Expected behavior**
`black` should reformat only project files.
**Environment**
- Black's version: 22.1.0
- PDM version: 1.12.6
- OS and Python version: Ubuntu 21.10 with Python 3.10.1
</issue>
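As a quick illustration of why the report is valid: the exclude pattern is a plain regular expression matched against POSIX-style paths, so `__pypackages__` is only skipped once it appears between slashes in the alternation. A hypothetical check (the second constant is the proposed value, not Black's released one):

```python
import re

CURRENT = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|_build|buck-out|build|dist)/"
PROPOSED = CURRENT.replace("|dist)/", "|dist|__pypackages__)/")

path = "/project/__pypackages__/3.10/lib/requests/api.py"
print(bool(re.search(CURRENT, path)))   # False: contents get reformatted today
print(bool(re.search(PROPOSED, path)))  # True: the directory would be excluded
```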
<code>
[start of src/black/const.py]
1 DEFAULT_LINE_LENGTH = 88
2 DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|_build|buck-out|build|dist)/" # noqa: B950
3 DEFAULT_INCLUDES = r"(\.pyi?|\.ipynb)$"
4 STDIN_PLACEHOLDER = "__BLACK_STDIN_FILENAME__"
5
[end of src/black/const.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/black/const.py b/src/black/const.py
--- a/src/black/const.py
+++ b/src/black/const.py
@@ -1,4 +1,4 @@
DEFAULT_LINE_LENGTH = 88
-DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|_build|buck-out|build|dist)/" # noqa: B950
+DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|_build|buck-out|build|dist|__pypackages__)/" # noqa: B950
DEFAULT_INCLUDES = r"(\.pyi?|\.ipynb)$"
STDIN_PLACEHOLDER = "__BLACK_STDIN_FILENAME__"
| {"golden_diff": "diff --git a/src/black/const.py b/src/black/const.py\n--- a/src/black/const.py\n+++ b/src/black/const.py\n@@ -1,4 +1,4 @@\n DEFAULT_LINE_LENGTH = 88\n-DEFAULT_EXCLUDES = r\"/(\\.direnv|\\.eggs|\\.git|\\.hg|\\.mypy_cache|\\.nox|\\.tox|\\.venv|venv|\\.svn|_build|buck-out|build|dist)/\" # noqa: B950\n+DEFAULT_EXCLUDES = r\"/(\\.direnv|\\.eggs|\\.git|\\.hg|\\.mypy_cache|\\.nox|\\.tox|\\.venv|venv|\\.svn|_build|buck-out|build|dist|__pypackages__)/\" # noqa: B950\n DEFAULT_INCLUDES = r\"(\\.pyi?|\\.ipynb)$\"\n STDIN_PLACEHOLDER = \"__BLACK_STDIN_FILENAME__\"\n", "issue": "Ignore __pypackages__ directory contents\n**Describe the bug**\r\n\r\nWhen using [PDM](https://pdm.fming.dev/), `black` does not ignore `__pypackages__` directory contents.\r\n\r\n**To Reproduce**\r\n\r\nRun `pdm run black .`\r\n\r\n**Expected behavior**\r\n\r\n`black` should reformat only project files.\r\n\r\n**Environment**\r\n\r\n- Black's version: 22.1.0\r\n- PDM version: 1.12.6\r\n- OS and Python version: Ubuntu 21.10 with Python 3.10.1\r\n\n", "before_files": [{"content": "DEFAULT_LINE_LENGTH = 88\nDEFAULT_EXCLUDES = r\"/(\\.direnv|\\.eggs|\\.git|\\.hg|\\.mypy_cache|\\.nox|\\.tox|\\.venv|venv|\\.svn|_build|buck-out|build|dist)/\" # noqa: B950\nDEFAULT_INCLUDES = r\"(\\.pyi?|\\.ipynb)$\"\nSTDIN_PLACEHOLDER = \"__BLACK_STDIN_FILENAME__\"\n", "path": "src/black/const.py"}]} | 774 | 219 |
gh_patches_debug_23756 | rasdani/github-patches | git_diff | chainer__chainer-1398 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
log_name of LogReport with keys causes AttributeError
In the MNIST example, change the code
https://github.com/pfnet/chainer/blob/master/examples/mnist/train_mnist.py#L84
```
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
```
to
```
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport(log_name='log_{.iteration}'))
```
Run `train_mnist.py` and you'll get `AttributeError: 'dict' object has no attribute 'iteration'`.
</issue>
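The failure is plain `str.format` semantics, reproducible without Chainer: `'{.iteration}'` performs attribute lookup on its positional argument, but `LogReport` passes the stats dictionary positionally. A sketch:

```python
stats_cpu = {'epoch': 1, 'iteration': 600}

try:
    'log_{.iteration}'.format(stats_cpu)       # attribute lookup on a dict
except AttributeError as err:
    print(err)                                 # 'dict' object has no attribute 'iteration'

print('log_{iteration}'.format(**stats_cpu))   # log_600: keyword expansion works
```

So either the documented placeholder or the call site has to change; switching to `'{iteration}'` plus `format(**stats_cpu)` keeps the feature usable.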
<code>
[start of chainer/training/extensions/log_report.py]
1 import json
2 import os
3 import tempfile
4
5 import six
6
7 from chainer import reporter
8 import chainer.serializer as serializer_module
9 from chainer.training import extension
10 import chainer.training.trigger as trigger_module
11
12
13 class LogReport(extension.Extension):
14
15 """Trainer extension to output the accumulated results to a log file.
16
17 This extension accumulates the observations of the trainer to
18 :class:`~chainer.DictSummary` at a regular interval specified by a supplied
19 trigger, and writes them into a log file in JSON format.
20
21 There are two triggers to handle this extension. One is the trigger to
22 invoke this extension, which is used to handle the timing of accumulating
23 the results. It is set to ``1, 'iteration'`` by default. The other is the
24 trigger to determine when to emit the result. When this trigger returns
25 True, this extension appends the summary of accumulated values to the list
26 of past summaries, and writes the list to the log file. Then, this
27 extension makes a new fresh summary object which is used until the next
28 time that the trigger fires.
29
30 It also adds ``'epoch'`` and ``'iteration'`` entries to each result
31 dictionary, which are the epoch and iteration counts at the output.
32
33 Args:
34 keys (iterable of strs): Keys of values to accumulate. If this is None,
35 all the values are accumulated and output to the log file.
36 trigger: Trigger that decides when to aggregate the result and output
37 the values. This is distinct from the trigger of this extension
38 itself. If it is a tuple in the form ``<int>, 'epoch'`` or
39 ``<int>, 'iteration'``, it is passed to :class:`IntervalTrigger`.
40 postprocess: Callback to postprocess the result dictionaries. Each
41 result dictionary is passed to this callback on the output. This
42 callback can modify the result dictionaries, which are used to
43 output to the log file.
44 log_name (str): Name of the log file under the output directory. It can
45 be a format string: the last result dictionary is passed for the
46 formatting. For example, users can use '{.iteration}' to separate
47 the log files for different iterations. If the log name is None, it
48 does not output the log to any file.
49
50 """
51 def __init__(self, keys=None, trigger=(1, 'epoch'), postprocess=None,
52 log_name='log'):
53 self._keys = keys
54 self._trigger = trigger_module.get_trigger(trigger)
55 self._postprocess = postprocess
56 self._log_name = log_name
57 self._log = []
58
59 self._init_summary()
60
61 def __call__(self, trainer):
62 # accumulate the observations
63 keys = self._keys
64 observation = trainer.observation
65 summary = self._summary
66
67 if keys is None:
68 summary.add(observation)
69 else:
70 summary.add({k: observation[k] for k in keys if k in observation})
71
72 if self._trigger(trainer):
73 # output the result
74 stats = self._summary.compute_mean()
75 stats_cpu = {}
76 for name, value in six.iteritems(stats):
77 stats_cpu[name] = float(value) # copy to CPU
78
79 updater = trainer.updater
80 stats_cpu['epoch'] = updater.epoch
81 stats_cpu['iteration'] = updater.iteration
82
83 if self._postprocess is not None:
84 self._postprocess(stats_cpu)
85
86 self._log.append(stats_cpu)
87
88 # write to the log file
89 if self._log_name is not None:
90 log_name = self._log_name.format(stats_cpu)
91 fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)
92 with os.fdopen(fd, 'w') as f:
93 json.dump(self._log, f, indent=4)
94 os.rename(path, os.path.join(trainer.out, log_name))
95
96 # reset the summary for the next output
97 self._init_summary()
98
99 @property
100 def log(self):
101 """The current list of observation dictionaries."""
102 return self._log
103
104 def serialize(self, serializer):
105 # Note that this serialization may lose some information of small
106 # numerical differences.
107 if isinstance(serializer, serializer_module.Serializer):
108 log = json.dumps(self._log)
109 serializer('_log', log)
110 else:
111 log = serializer('_log', '')
112 self._log = json.loads(log)
113
114 def _init_summary(self):
115 self._summary = reporter.DictSummary()
116
[end of chainer/training/extensions/log_report.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/training/extensions/log_report.py b/chainer/training/extensions/log_report.py
--- a/chainer/training/extensions/log_report.py
+++ b/chainer/training/extensions/log_report.py
@@ -43,7 +43,7 @@
output to the log file.
log_name (str): Name of the log file under the output directory. It can
be a format string: the last result dictionary is passed for the
- formatting. For example, users can use '{.iteration}' to separate
+ formatting. For example, users can use '{iteration}' to separate
the log files for different iterations. If the log name is None, it
does not output the log to any file.
@@ -87,7 +87,7 @@
# write to the log file
if self._log_name is not None:
- log_name = self._log_name.format(stats_cpu)
+ log_name = self._log_name.format(**stats_cpu)
fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)
with os.fdopen(fd, 'w') as f:
json.dump(self._log, f, indent=4)
| {"golden_diff": "diff --git a/chainer/training/extensions/log_report.py b/chainer/training/extensions/log_report.py\n--- a/chainer/training/extensions/log_report.py\n+++ b/chainer/training/extensions/log_report.py\n@@ -43,7 +43,7 @@\n output to the log file.\n log_name (str): Name of the log file under the output directory. It can\n be a format string: the last result dictionary is passed for the\n- formatting. For example, users can use '{.iteration}' to separate\n+ formatting. For example, users can use '{iteration}' to separate\n the log files for different iterations. If the log name is None, it\n does not output the log to any file.\n \n@@ -87,7 +87,7 @@\n \n # write to the log file\n if self._log_name is not None:\n- log_name = self._log_name.format(stats_cpu)\n+ log_name = self._log_name.format(**stats_cpu)\n fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)\n with os.fdopen(fd, 'w') as f:\n json.dump(self._log, f, indent=4)\n", "issue": "log_name of LogReport with keys causes AttributeError\nIn MNIST example you change the code\n\nhttps://github.com/pfnet/chainer/blob/master/examples/mnist/train_mnist.py#L84\n\n```\n # Write a log of evaluation statistics for each epoch\n trainer.extend(extensions.LogReport())\n```\n\nto \n\n```\n # Write a log of evaluation statistics for each epoch\n trainer.extend(extensions.LogReport(log_name='log_{.iteration}'))\n```\n\nrun train_mnist.py and you'll get `AttributeError: 'dict' object has no attribute 'iteration'`\n\n", "before_files": [{"content": "import json\nimport os\nimport tempfile\n\nimport six\n\nfrom chainer import reporter\nimport chainer.serializer as serializer_module\nfrom chainer.training import extension\nimport chainer.training.trigger as trigger_module\n\n\nclass LogReport(extension.Extension):\n\n \"\"\"Trainer extension to output the accumulated results to a log file.\n\n This extension accumulates the observations of the trainer to\n :class:`~chainer.DictSummary` at a regular interval specified by a supplied\n trigger, and writes them into a log file in JSON format.\n\n There are two triggers to handle this extension. One is the trigger to\n invoke this extension, which is used to handle the timing of accumulating\n the results. It is set to ``1, 'iteration'`` by default. The other is the\n trigger to determine when to emit the result. When this trigger returns\n True, this extension appends the summary of accumulated values to the list\n of past summaries, and writes the list to the log file. Then, this\n extension makes a new fresh summary object which is used until the next\n time that the trigger fires.\n\n It also adds ``'epoch'`` and ``'iteration'`` entries to each result\n dictionary, which are the epoch and iteration counts at the output.\n\n Args:\n keys (iterable of strs): Keys of values to accumulate. If this is None,\n all the values are accumulated and output to the log file.\n trigger: Trigger that decides when to aggregate the result and output\n the values. This is distinct from the trigger of this extension\n itself. If it is a tuple in the form ``<int>, 'epoch'`` or\n ``<int>, 'iteration'``, it is passed to :class:`IntervalTrigger`.\n postprocess: Callback to postprocess the result dictionaries. Each\n result dictionary is passed to this callback on the output. This\n callback can modify the result dictionaries, which are used to\n output to the log file.\n log_name (str): Name of the log file under the output directory. 
It can\n be a format string: the last result dictionary is passed for the\n formatting. For example, users can use '{.iteration}' to separate\n the log files for different iterations. If the log name is None, it\n does not output the log to any file.\n\n \"\"\"\n def __init__(self, keys=None, trigger=(1, 'epoch'), postprocess=None,\n log_name='log'):\n self._keys = keys\n self._trigger = trigger_module.get_trigger(trigger)\n self._postprocess = postprocess\n self._log_name = log_name\n self._log = []\n\n self._init_summary()\n\n def __call__(self, trainer):\n # accumulate the observations\n keys = self._keys\n observation = trainer.observation\n summary = self._summary\n\n if keys is None:\n summary.add(observation)\n else:\n summary.add({k: observation[k] for k in keys if k in observation})\n\n if self._trigger(trainer):\n # output the result\n stats = self._summary.compute_mean()\n stats_cpu = {}\n for name, value in six.iteritems(stats):\n stats_cpu[name] = float(value) # copy to CPU\n\n updater = trainer.updater\n stats_cpu['epoch'] = updater.epoch\n stats_cpu['iteration'] = updater.iteration\n\n if self._postprocess is not None:\n self._postprocess(stats_cpu)\n\n self._log.append(stats_cpu)\n\n # write to the log file\n if self._log_name is not None:\n log_name = self._log_name.format(stats_cpu)\n fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)\n with os.fdopen(fd, 'w') as f:\n json.dump(self._log, f, indent=4)\n os.rename(path, os.path.join(trainer.out, log_name))\n\n # reset the summary for the next output\n self._init_summary()\n\n @property\n def log(self):\n \"\"\"The current list of observation dictionaries.\"\"\"\n return self._log\n\n def serialize(self, serializer):\n # Note that this serialization may lose some information of small\n # numerical differences.\n if isinstance(serializer, serializer_module.Serializer):\n log = json.dumps(self._log)\n serializer('_log', log)\n else:\n log = serializer('_log', '')\n self._log = json.loads(log)\n\n def _init_summary(self):\n self._summary = reporter.DictSummary()\n", "path": "chainer/training/extensions/log_report.py"}]} | 1,891 | 261 |
gh_patches_debug_42503 | rasdani/github-patches | git_diff | nvaccess__nvda-12122 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NVDA does not automatically read messages received on Skype for Business
### Steps to reproduce:
I'm sure I've seen this working for some time.
I have not used skype for business for a long time but in my current work I'm making use of it again, and when I received a message on skype for business, nvda did not automatically read the received message.
Open a conversation on skype for business type something and wait inside the conversation for your partner to respond.
### Actual behavior:
The nvda is mute.
### Expected behavior:
Nvda should automatically announce the response received.
### System configuration
#### NVDA installed/portable/running from source:
install
#### NVDA version:
2018.4.1
#### Windows version:
10 17134.556
#### Name and version of other software in use when reproducing the issue:
office 16.0.4266.1001
#### Other information about your system:
### Other questions
#### Does the issue still occur after restarting your PC?
yes
#### Have you tried any other versions of NVDA?
no
</issue>
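For readers without a Skype for Business setup: in affected builds the label's UIA `value` appears to be `None`, which breaks the `self.value.replace(...)` call in `event_liveRegionChange` before anything is spoken. One fallback is to parse the label's `name` directly; a sketch against a hypothetical message string (the sample string is my assumption, loosely following the format documented in `lync.py`'s comments):

```python
import re

# Named groups: speaker, optional priority, message body, timestamp.
pattern = r'^(?P<name>.+?): (?P<priority>.*?), , (?P<content>.+),(?!, , ) , (?P<timestamp>.+)'
sample = 'Michael Curran: high importance, , Hello there, , 10:45 am.'

match = re.match(pattern, sample, flags=re.DOTALL)
if match:
    print(match['name'])       # Michael Curran
    print(match['priority'])   # high importance
    print(match['content'])    # Hello there
```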
<code>
[start of source/appModules/lync.py]
1 #A part of NonVisual Desktop Access (NVDA)
2 #This file is covered by the GNU General Public License.
3 #See the file COPYING for more details.
4 #Copyright (C) 2017 NV Access Limited
5
6 """appModule for Microsoft Skype for business. """
7
8 import ui
9 from NVDAObjects.UIA import UIA
10 import appModuleHandler
11
12 class NetUIRicherLabel(UIA):
13 """A label sometimes found within list items that can fire live region changes, such as for chat messages."""
14
15 def event_liveRegionChange(self):
16 # The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute
17 # Therefore, specifically strip out the chat content and only report the most recent part added.
18 # The object's name contains the full message (I.e. person: content, timestamp) loosely separated by commas.
19 # Example string: "Michael Curran : , , Hello\r\n\r\nThis is a test , 10:45 am."
20 # Where person is "Michael Curran", content is "Hello\nThis is a test" and timestamp is "10:45 am"
21 # The object's value just contains the content.
22 # Example: "Hello\rThis is a test"
23 # We are only interested in person and content
24 # Therefore use value (content) to locate and split off the person from the name (fullText)
25 # Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)
26 content=self.value.replace('\r','\n').strip()
27 fullText=self.name.replace('\r\n\r\n','\n')
28 contentLines=content.split('\n')
29 contentStartIndex=fullText.find(content)
30 pretext=fullText[:contentStartIndex]
31 # There are some annoying comma characters after the person's name
32 pretext=pretext.replace(' ,','')
33 # If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content
34 # Otherwise, report the person and the initial content
35 runtimeID=self.UIAElement.getRuntimeId()
36 lastRuntimeID,lastPretext,lastContentLines=self.appModule._lastLiveChatMessageData
37 contentLinesLen=len(contentLines)
38 lastContentLinesLen=len(lastContentLines)
39 if runtimeID==lastRuntimeID and pretext==lastPretext and contentLinesLen>lastContentLinesLen and contentLines[:lastContentLinesLen]==lastContentLines:
40 message="\n".join(contentLines[lastContentLinesLen:])
41 else:
42 message=pretext+content
43 ui.message(message)
44 # Cache the message data for later possible comparisons
45 self.appModule._lastLiveChatMessageData=runtimeID,pretext,contentLines
46
47 class AppModule(appModuleHandler.AppModule):
48
49 # data to store the last chat message (runtime ID,person,content lines)
50 _lastLiveChatMessageData=[],"",[]
51
52 def chooseNVDAObjectOverlayClasses(self,obj,clsList):
53 if isinstance(obj,UIA) and obj.UIAElement.cachedClassName=='NetUIRicherLabel':
54 clsList.insert(0,NetUIRicherLabel)
55 return clsList
56
57
[end of source/appModules/lync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/source/appModules/lync.py b/source/appModules/lync.py
--- a/source/appModules/lync.py
+++ b/source/appModules/lync.py
@@ -4,10 +4,14 @@
#Copyright (C) 2017 NV Access Limited
"""appModule for Microsoft Skype for business. """
-
+
import ui
from NVDAObjects.UIA import UIA
import appModuleHandler
+from logHandler import log
+
+import re
+
class NetUIRicherLabel(UIA):
"""A label sometimes found within list items that can fire live region changes, such as for chat messages."""
@@ -15,19 +19,45 @@
def event_liveRegionChange(self):
# The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute
# Therefore, specifically strip out the chat content and only report the most recent part added.
- # The object's name contains the full message (I.e. person: content, timestamp) loosely separated by commas.
+ # When not empty, the object's name contains the full message (I.e. person: content, timestamp)
+ # loosely separated by commas.
# Example string: "Michael Curran : , , Hello\r\n\r\nThis is a test , 10:45 am."
# Where person is "Michael Curran", content is "Hello\nThis is a test" and timestamp is "10:45 am"
- # The object's value just contains the content.
- # Example: "Hello\rThis is a test"
- # We are only interested in person and content
- # Therefore use value (content) to locate and split off the person from the name (fullText)
+
# Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)
- content=self.value.replace('\r','\n').strip()
fullText=self.name.replace('\r\n\r\n','\n')
+
+ # At the object's creation, an unuseful liveRegionChange event is triggered with an empty name,
+ # so we discard it.
+ if not self.name.strip():
+ return
+
+ if self.value is not None:
+ # For some versions of Lync / Skype for Business, the object's value contains just the content.
+ # Example: "Hello\rThis is a test"
+ # We are only interested in person and content
+ # Therefore use value (content) to locate and split off the person from the name (fullText)
+ content = self.value.replace('\r', '\n').strip()
+ contentStartIndex = fullText.find(content)
+ pretext = fullText[:contentStartIndex]
+ else:
+ # For other versions of Lync / Skype for Business, self.value is just None.
+ # So we just look at self.name formatting to split content from person and timestamp (less robust).
+ pattern = r'^(?P<name>.+?): (?P<priority>.*?), , (?P<content>.+),(?!, , ) , (?P<timestamp>.+)'
+ match = re.match(pattern, self.name, flags=re.DOTALL)
+ if match:
+ pretext = match['name']
+ priority = match['priority']
+ content = match['content']
+ if priority:
+ content = priority + ', ' + content
+ else:
+ # In case no match is found, log the unexpected message and return the whole message.
+ log.error(f'Unrecognized pattern in the following message: {self.name}')
+ pretext = ''
+ content = self.name
+ content = content.replace('\r', '\n').strip()
contentLines=content.split('\n')
- contentStartIndex=fullText.find(content)
- pretext=fullText[:contentStartIndex]
# There are some annoying comma characters after the person's name
pretext=pretext.replace(' ,','')
# If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content
| {"golden_diff": "diff --git a/source/appModules/lync.py b/source/appModules/lync.py\n--- a/source/appModules/lync.py\n+++ b/source/appModules/lync.py\n@@ -4,10 +4,14 @@\n #Copyright (C) 2017 NV Access Limited\r\n \r\n \"\"\"appModule for Microsoft Skype for business. \"\"\"\r\n- \r\n+\r\n import ui\r\n from NVDAObjects.UIA import UIA\r\n import appModuleHandler\r\n+from logHandler import log\r\n+\r\n+import re\r\n+\r\n \r\n class NetUIRicherLabel(UIA):\r\n \t\"\"\"A label sometimes found within list items that can fire live region changes, such as for chat messages.\"\"\"\r\n@@ -15,19 +19,45 @@\n \tdef event_liveRegionChange(self):\r\n \t\t# The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute\r\n \t\t# Therefore, specifically strip out the chat content and only report the most recent part added.\r\n-\t\t# The object's name contains the full message (I.e. person: content, timestamp) loosely separated by commas.\r\n+\t\t# When not empty, the object's name contains the full message (I.e. person: content, timestamp)\r\n+\t\t# loosely separated by commas.\r\n \t\t# Example string: \"Michael Curran : , , Hello\\r\\n\\r\\nThis is a test , 10:45 am.\"\r\n \t\t# Where person is \"Michael Curran\", content is \"Hello\\nThis is a test\" and timestamp is \"10:45 am\" \r\n-\t\t# The object's value just contains the content.\r\n-\t\t# Example: \"Hello\\rThis is a test\"\r\n-\t\t# We are only interested in person and content\r\n-\t\t# Therefore use value (content) to locate and split off the person from the name (fullText)\r\n+\t\t\r\n \t\t# Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)\r\n-\t\tcontent=self.value.replace('\\r','\\n').strip()\r\n \t\tfullText=self.name.replace('\\r\\n\\r\\n','\\n')\r\n+\t\t\r\n+\t\t# At the object's creation, an unuseful liveRegionChange event is triggered with an empty name,\r\n+\t\t# so we discard it.\r\n+\t\tif not self.name.strip():\r\n+\t\t\treturn\r\n+\t\t\r\n+\t\tif self.value is not None:\r\n+\t\t\t# For some versions of Lync / Skype for Business, the object's value contains just the content.\r\n+\t\t\t# Example: \"Hello\\rThis is a test\"\r\n+\t\t\t# We are only interested in person and content\r\n+\t\t\t# Therefore use value (content) to locate and split off the person from the name (fullText)\r\n+\t\t\tcontent = self.value.replace('\\r', '\\n').strip()\r\n+\t\t\tcontentStartIndex = fullText.find(content)\r\n+\t\t\tpretext = fullText[:contentStartIndex]\r\n+\t\telse:\r\n+\t\t\t# For other versions of Lync / Skype for Business, self.value is just None.\r\n+\t\t\t# So we just look at self.name formatting to split content from person and timestamp (less robust).\r\n+\t\t\tpattern = r'^(?P<name>.+?): (?P<priority>.*?), , (?P<content>.+),(?!, , ) , (?P<timestamp>.+)'\r\n+\t\t\tmatch = re.match(pattern, self.name, flags=re.DOTALL)\r\n+\t\t\tif match:\r\n+\t\t\t\tpretext = match['name']\r\n+\t\t\t\tpriority = match['priority']\r\n+\t\t\t\tcontent = match['content']\r\n+\t\t\t\tif priority:\r\n+\t\t\t\t\tcontent = priority + ', ' + content\r\n+\t\t\telse:\r\n+\t\t\t\t# In case no match is found, log the unexpected message and return the whole message.\r\n+\t\t\t\tlog.error(f'Unrecognized pattern in the following message: {self.name}')\r\n+\t\t\t\tpretext = ''\r\n+\t\t\t\tcontent = self.name\r\n+\t\t\tcontent = content.replace('\\r', '\\n').strip()\r\n 
\t\tcontentLines=content.split('\\n')\r\n-\t\tcontentStartIndex=fullText.find(content)\r\n-\t\tpretext=fullText[:contentStartIndex]\r\n \t\t# There are some annoying comma characters after the person's name \r\n \t\tpretext=pretext.replace(' ,','')\r\n \t\t# If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content\n", "issue": "nvda does not automatically read messages received on skype for business\n\r\n### Steps to reproduce:\r\nI'm sure I've seen this working for some time.\r\nI have not used skype for business for a long time but in my current work I'm making use of it again, and when I received a message on skype for business, nvda did not automatically read the received message.\r\nOpen a conversation on skype for business type something and wait inside the conversation for your partner to respond.\r\n### Actual behavior:\r\nThe nvda is mute.\r\n### Expected behavior:\r\nNvda should automatically announce the response received.\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\ninstall\r\n#### NVDA version:\r\n2018.4.1\r\n#### Windows version:\r\n10 17134.556\r\n#### Name and version of other software in use when reproducing the issue:\r\noffice 16.0.4266.1001\r\n\r\n#### Other information about your system:\r\n\r\n### Other questions\r\n#### Does the issue still occur after restarting your PC?\r\nyes\r\n#### Have you tried any other versions of NVDA?\r\nno\n", "before_files": [{"content": "#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n#Copyright (C) 2017 NV Access Limited\r\n\r\n\"\"\"appModule for Microsoft Skype for business. \"\"\"\r\n \r\nimport ui\r\nfrom NVDAObjects.UIA import UIA\r\nimport appModuleHandler\r\n\r\nclass NetUIRicherLabel(UIA):\r\n\t\"\"\"A label sometimes found within list items that can fire live region changes, such as for chat messages.\"\"\"\r\n\r\n\tdef event_liveRegionChange(self):\r\n\t\t# The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute\r\n\t\t# Therefore, specifically strip out the chat content and only report the most recent part added.\r\n\t\t# The object's name contains the full message (I.e. 
person: content, timestamp) loosely separated by commas.\r\n\t\t# Example string: \"Michael Curran : , , Hello\\r\\n\\r\\nThis is a test , 10:45 am.\"\r\n\t\t# Where person is \"Michael Curran\", content is \"Hello\\nThis is a test\" and timestamp is \"10:45 am\" \r\n\t\t# The object's value just contains the content.\r\n\t\t# Example: \"Hello\\rThis is a test\"\r\n\t\t# We are only interested in person and content\r\n\t\t# Therefore use value (content) to locate and split off the person from the name (fullText)\r\n\t\t# Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)\r\n\t\tcontent=self.value.replace('\\r','\\n').strip()\r\n\t\tfullText=self.name.replace('\\r\\n\\r\\n','\\n')\r\n\t\tcontentLines=content.split('\\n')\r\n\t\tcontentStartIndex=fullText.find(content)\r\n\t\tpretext=fullText[:contentStartIndex]\r\n\t\t# There are some annoying comma characters after the person's name \r\n\t\tpretext=pretext.replace(' ,','')\r\n\t\t# If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content\r\n\t\t# Otherwise, report the person and the initial content\r\n\t\truntimeID=self.UIAElement.getRuntimeId()\r\n\t\tlastRuntimeID,lastPretext,lastContentLines=self.appModule._lastLiveChatMessageData\r\n\t\tcontentLinesLen=len(contentLines)\r\n\t\tlastContentLinesLen=len(lastContentLines)\r\n\t\tif runtimeID==lastRuntimeID and pretext==lastPretext and contentLinesLen>lastContentLinesLen and contentLines[:lastContentLinesLen]==lastContentLines:\r\n\t\t\tmessage=\"\\n\".join(contentLines[lastContentLinesLen:])\r\n\t\telse:\r\n\t\t\tmessage=pretext+content\r\n\t\tui.message(message)\r\n\t\t# Cache the message data for later possible comparisons \r\n\t\tself.appModule._lastLiveChatMessageData=runtimeID,pretext,contentLines\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\t# data to store the last chat message (runtime ID,person,content lines)\r\n\t_lastLiveChatMessageData=[],\"\",[]\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self,obj,clsList):\r\n\t\tif isinstance(obj,UIA) and obj.UIAElement.cachedClassName=='NetUIRicherLabel':\r\n\t\t\tclsList.insert(0,NetUIRicherLabel)\r\n\t\treturn clsList\r\n\r\n", "path": "source/appModules/lync.py"}]} | 1,605 | 961 |
gh_patches_debug_21060 | rasdani/github-patches | git_diff | Pylons__pyramid-1296 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC: XSS in quicktour/views/views.py
http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#views
https://github.com/Pylons/pyramid/blob/master/docs/quick_tour/views/views.py#L17
As there is no templating layer to autoescape the user-supplied `name` parameter and the response is by default `text/html`, `hello_view` contains an XSS vulnerability.
Templating is not the focus of (this part of) the quick tour.
I can think of two approaches:
1. Use `cgi.escape` before doing string interpolation (`body % cgi.escape(name)`).
2. Add a note about XSS and the value of utilizing a good templating engine with autoescape.
"CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')" http://cwe.mitre.org/data/definitions/79.html
</issue>
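Option 1 in concrete form (a sketch; `cgi.escape` is the Python 2-era spelling and was removed in Python 3.8, where `html.escape` is the near drop-in replacement, escaping quotes as well by default):

```python
try:
    from cgi import escape   # Python 2 / early Python 3
except ImportError:
    from html import escape  # Python 3.8+

name = '<script>alert(1)</script>'
body = '<p>Hi %s, this <a href="/goto">redirects</a></p>'
print(body % escape(name))
# <p>Hi <script>alert(1)</script>, this <a href="/goto">redirects</a></p>
```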
<code>
[start of docs/quick_tour/views/views.py]
1 from pyramid.httpexceptions import HTTPFound
2 from pyramid.response import Response
3 from pyramid.view import view_config
4
5
6 # First view, available at http://localhost:6543/
7 @view_config(route_name='home')
8 def home_view(request):
9 return Response('<p>Visit <a href="/howdy?name=lisa">hello</a></p>')
10
11
12 # /howdy?name=alice which links to the next view
13 @view_config(route_name='hello')
14 def hello_view(request):
15 name = request.params.get('name', 'No Name')
16 body = '<p>Hi %s, this <a href="/goto">redirects</a></p>'
17 return Response(body % name)
18
19
20 # /goto which issues HTTP redirect to the last view
21 @view_config(route_name='redirect')
22 def redirect_view(request):
23 return HTTPFound(location="/problem")
24
25
26 # /problem which causes an site error
27 @view_config(route_name='exception')
28 def exception_view(request):
29 raise Exception()
30
[end of docs/quick_tour/views/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/quick_tour/views/views.py b/docs/quick_tour/views/views.py
--- a/docs/quick_tour/views/views.py
+++ b/docs/quick_tour/views/views.py
@@ -1,3 +1,5 @@
+import cgi
+
from pyramid.httpexceptions import HTTPFound
from pyramid.response import Response
from pyramid.view import view_config
@@ -14,7 +16,8 @@
def hello_view(request):
name = request.params.get('name', 'No Name')
body = '<p>Hi %s, this <a href="/goto">redirects</a></p>'
- return Response(body % name)
+ # cgi.escape to prevent Cross-Site Scripting (XSS) [CWE 79]
+ return Response(body % cgi.escape(name))
# /goto which issues HTTP redirect to the last view
@@ -23,7 +26,7 @@
return HTTPFound(location="/problem")
-# /problem which causes an site error
+# /problem which causes a site error
@view_config(route_name='exception')
def exception_view(request):
raise Exception()
| {"golden_diff": "diff --git a/docs/quick_tour/views/views.py b/docs/quick_tour/views/views.py\n--- a/docs/quick_tour/views/views.py\n+++ b/docs/quick_tour/views/views.py\n@@ -1,3 +1,5 @@\n+import cgi\n+\n from pyramid.httpexceptions import HTTPFound\n from pyramid.response import Response\n from pyramid.view import view_config\n@@ -14,7 +16,8 @@\n def hello_view(request):\n name = request.params.get('name', 'No Name')\n body = '<p>Hi %s, this <a href=\"/goto\">redirects</a></p>'\n- return Response(body % name)\n+ # cgi.escape to prevent Cross-Site Scripting (XSS) [CWE 79]\n+ return Response(body % cgi.escape(name))\n \n \n # /goto which issues HTTP redirect to the last view\n@@ -23,7 +26,7 @@\n return HTTPFound(location=\"/problem\")\n \n \n-# /problem which causes an site error\n+# /problem which causes a site error\n @view_config(route_name='exception')\n def exception_view(request):\n raise Exception()\n", "issue": "DOC: XSS in quicktour/views/views.py\nhttp://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#views\n\nhttps://github.com/Pylons/pyramid/blob/master/docs/quick_tour/views/views.py#L17\n\nAs there is no templating layer to autoescape the user-supplied `name` parameter and the response is by default `text/html`, `hello_view` contains an XSS vulnerability.\n\nTemplating is not the focus of (this part of) the quick tour.\n\nI can think of two approaches:\n1. Use `cgi.escape` before doing string interpolation (`body % cgi.escape(name)').\n2. Add a note about XSS and the value of utilizing a good templating engine with autoescape.\n\n\"CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')\" http://cwe.mitre.org/data/definitions/79.html\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPFound\nfrom pyramid.response import Response\nfrom pyramid.view import view_config\n\n\n# First view, available at http://localhost:6543/\n@view_config(route_name='home')\ndef home_view(request):\n return Response('<p>Visit <a href=\"/howdy?name=lisa\">hello</a></p>')\n\n\n# /howdy?name=alice which links to the next view\n@view_config(route_name='hello')\ndef hello_view(request):\n name = request.params.get('name', 'No Name')\n body = '<p>Hi %s, this <a href=\"/goto\">redirects</a></p>'\n return Response(body % name)\n\n\n# /goto which issues HTTP redirect to the last view\n@view_config(route_name='redirect')\ndef redirect_view(request):\n return HTTPFound(location=\"/problem\")\n\n\n# /problem which causes an site error\n@view_config(route_name='exception')\ndef exception_view(request):\n raise Exception()\n", "path": "docs/quick_tour/views/views.py"}]} | 998 | 246 |
gh_patches_debug_5685 | rasdani/github-patches | git_diff | pulp__pulpcore-3996 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RESTAPI document fix for Upstream Pulp Replication API
**Version**
Pulp installed through the Python modules.
"core:3.28.0"
"certguard:3.28.0"
"file:3.28.0"
"python:3.28.0"
"rpm:3.28.0"
**Describe the bug**
Why are the attributes of **upstream_pulps_create**/**update** mentioned again in the **upstream_pulps_replicate** document? Are those attributes (base_url, api_root, domain, ...) used when making an API request to "https://PULP-SERVER/pulp/api/v3/upstream_pulps/{object_id}/replicate/"?
**To Reproduce**
None.
**Expected behavior**
A fix is required in the REST API document.
**Additional context**
Create Upstream Pulp API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_create
Upstream Replication API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_replicate
</issue>
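The report boils down to the OpenAPI schema inheriting the viewset's serializer as the request body for the custom action. In drf-spectacular, passing `request=None` to `extend_schema` declares a body-less operation; a sketch of the decorated action (imports and ordering assumed from the listing below):

```python
from drf_spectacular.utils import extend_schema
from rest_framework.decorators import action

@extend_schema(summary="Replicate", request=None)  # no request body in the schema
@action(detail=True, methods=["post"])
def replicate(self, request, pk):
    ...
```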
<code>
[start of pulpcore/app/viewsets/replica.py]
1 """
2 ViewSet for replicating repositories and distributions from an upstream Pulp
3 """
4 from django.conf import settings
5 from drf_spectacular.utils import extend_schema
6 from rest_framework import mixins
7 from rest_framework.decorators import action
8
9 from pulpcore.app.models import TaskGroup, UpstreamPulp
10 from pulpcore.app.serializers import AsyncOperationResponseSerializer, UpstreamPulpSerializer
11 from pulpcore.app.viewsets import NamedModelViewSet
12 from pulpcore.app.response import TaskGroupOperationResponse
13 from pulpcore.app.tasks import replicate_distributions
14 from pulpcore.tasking.tasks import dispatch
15
16
17 class UpstreamPulpViewSet(
18 NamedModelViewSet,
19 mixins.CreateModelMixin,
20 mixins.RetrieveModelMixin,
21 mixins.ListModelMixin,
22 mixins.DestroyModelMixin,
23 mixins.UpdateModelMixin,
24 ):
25 """API for configuring an upstream Pulp to replicate. This API is provided as a tech preview."""
26
27 queryset = UpstreamPulp.objects.all()
28 endpoint_name = "upstream-pulps"
29 serializer_class = UpstreamPulpSerializer
30 ordering = "-pulp_created"
31
32 @extend_schema(
33 summary="Replicate",
34 description="Trigger an asynchronous repository replication task group. This API is "
35 "provided as a tech preview.",
36 responses={202: AsyncOperationResponseSerializer},
37 )
38 @action(detail=True, methods=["post"])
39 def replicate(self, request, pk):
40 """
41 Triggers an asynchronous repository replication operation.
42 """
43 server = UpstreamPulp.objects.get(pk=pk)
44 task_group = TaskGroup.objects.create(description=f"Replication of {server.name}")
45
46 uri = "/api/v3/servers/"
47 if settings.DOMAIN_ENABLED:
48 uri = f"/{request.domain.name}{uri}"
49
50 dispatch(
51 replicate_distributions,
52 exclusive_resources=[uri],
53 kwargs={"server_pk": pk},
54 task_group=task_group,
55 )
56
57 return TaskGroupOperationResponse(task_group, request)
58
[end of pulpcore/app/viewsets/replica.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/viewsets/replica.py b/pulpcore/app/viewsets/replica.py
--- a/pulpcore/app/viewsets/replica.py
+++ b/pulpcore/app/viewsets/replica.py
@@ -33,6 +33,7 @@
summary="Replicate",
description="Trigger an asynchronous repository replication task group. This API is "
"provided as a tech preview.",
+ request=None,
responses={202: AsyncOperationResponseSerializer},
)
@action(detail=True, methods=["post"])
| {"golden_diff": "diff --git a/pulpcore/app/viewsets/replica.py b/pulpcore/app/viewsets/replica.py\n--- a/pulpcore/app/viewsets/replica.py\n+++ b/pulpcore/app/viewsets/replica.py\n@@ -33,6 +33,7 @@\n summary=\"Replicate\",\n description=\"Trigger an asynchronous repository replication task group. This API is \"\n \"provided as a tech preview.\",\n+ request=None,\n responses={202: AsyncOperationResponseSerializer},\n )\n @action(detail=True, methods=[\"post\"])\n", "issue": "RESTAPI document fix for Upstream Pulp Replication API\n**Version**\r\nPulp installed through the Python modules.\r\n\"core:3.28.0\"\r\n\"certguard:3.28.0\"\r\n\"file:3.28.0\"\r\n\"python:3.28.0\"\r\n\"rpm:3.28.0\"\r\n\r\n**Describe the bug**\r\nWhy the attributes of **upstream_pulps_create**/**update** is mentioned again in the **upstream_pulps_replicate\" document? Are those attributes (base_url, api_root, domain,...) used at time making an API request \"https://PULP-SERVER/pulp/api/v3/upstream_pulps/{object_id}/replicate/\"?\r\n\r\n**To Reproduce**\r\nNone.\r\n\r\n**Expected behavior**\r\nA fix is required in the REST API document.\r\n\r\n**Additional context**\r\nCreate Upstream Pulp API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_create\r\nUpstream Replication API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_replicate\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nViewSet for replicating repositories and distributions from an upstream Pulp\n\"\"\"\nfrom django.conf import settings\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework import mixins\nfrom rest_framework.decorators import action\n\nfrom pulpcore.app.models import TaskGroup, UpstreamPulp\nfrom pulpcore.app.serializers import AsyncOperationResponseSerializer, UpstreamPulpSerializer\nfrom pulpcore.app.viewsets import NamedModelViewSet\nfrom pulpcore.app.response import TaskGroupOperationResponse\nfrom pulpcore.app.tasks import replicate_distributions\nfrom pulpcore.tasking.tasks import dispatch\n\n\nclass UpstreamPulpViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n mixins.DestroyModelMixin,\n mixins.UpdateModelMixin,\n):\n \"\"\"API for configuring an upstream Pulp to replicate. This API is provided as a tech preview.\"\"\"\n\n queryset = UpstreamPulp.objects.all()\n endpoint_name = \"upstream-pulps\"\n serializer_class = UpstreamPulpSerializer\n ordering = \"-pulp_created\"\n\n @extend_schema(\n summary=\"Replicate\",\n description=\"Trigger an asynchronous repository replication task group. This API is \"\n \"provided as a tech preview.\",\n responses={202: AsyncOperationResponseSerializer},\n )\n @action(detail=True, methods=[\"post\"])\n def replicate(self, request, pk):\n \"\"\"\n Triggers an asynchronous repository replication operation.\n \"\"\"\n server = UpstreamPulp.objects.get(pk=pk)\n task_group = TaskGroup.objects.create(description=f\"Replication of {server.name}\")\n\n uri = \"/api/v3/servers/\"\n if settings.DOMAIN_ENABLED:\n uri = f\"/{request.domain.name}{uri}\"\n\n dispatch(\n replicate_distributions,\n exclusive_resources=[uri],\n kwargs={\"server_pk\": pk},\n task_group=task_group,\n )\n\n return TaskGroupOperationResponse(task_group, request)\n", "path": "pulpcore/app/viewsets/replica.py"}]} | 1,329 | 123 |
gh_patches_debug_9381 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-5128 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Hook file for sqlalchemy misses hidden import "sqlalchemy.ext.baked"
The provided hook file for sqlalchemy doesn't seem to pick up the hidden import of "sqlalchemy.ext.baked".
</issue>
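For context: SQLAlchemy pulls `sqlalchemy.ext.baked` in through an indirect import, so PyInstaller's static analysis cannot see it, and a frozen application only fails at run time. A minimal way to surface the gap (a hypothetical check run inside the frozen app, not part of the hook):

```python
# A module missed by the hook's hiddenimports is simply absent from the bundle.
try:
    import sqlalchemy.ext.baked  # noqa: F401
except ImportError:
    print("sqlalchemy.ext.baked was not bundled; add it to hiddenimports")
```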
<code>
[start of PyInstaller/hooks/hook-sqlalchemy.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2020, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 import re
13 from PyInstaller.utils.hooks import (
14 exec_statement, is_module_satisfies, logger)
15 from PyInstaller.compat import open_file, text_read_mode
16 from PyInstaller.lib.modulegraph.modulegraph import SourceModule
17 from PyInstaller.lib.modulegraph.util import guess_encoding
18
19 # 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.
20 excludedimports = ['sqlalchemy.testing']
21
22 # include most common database bindings
23 # some database bindings are detected and include some
24 # are not. We should explicitly include database backends.
25 hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']
26
27 # In SQLAlchemy >= 0.6, the "sqlalchemy.dialects" package provides dialects.
28 if is_module_satisfies('sqlalchemy >= 0.6'):
29 dialects = exec_statement("import sqlalchemy.dialects;print(sqlalchemy.dialects.__all__)")
30 dialects = eval(dialects.strip())
31
32 for n in dialects:
33 hiddenimports.append("sqlalchemy.dialects." + n)
34 # In SQLAlchemy <= 0.5, the "sqlalchemy.databases" package provides dialects.
35 else:
36 databases = exec_statement("import sqlalchemy.databases; print(sqlalchemy.databases.__all__)")
37 databases = eval(databases.strip())
38
39 for n in databases:
40 hiddenimports.append("sqlalchemy.databases." + n)
41
42
43 def hook(hook_api):
44 """
45 SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This
46 decorator does imports. eg:
47
48 @util.dependencies("sqlalchemy.sql.schema")
49
50 This hook scans for included SQLAlchemy modules and then scans those modules
51 for any util.dependencies and marks those modules as hidden imports.
52 """
53
54 if not is_module_satisfies('sqlalchemy >= 0.9'):
55 return
56
57 # this parser is very simplistic but seems to catch all cases as of V1.1
58 depend_regex = re.compile(r'@util.dependencies\([\'"](.*?)[\'"]\)')
59
60 hidden_imports_set = set()
61 known_imports = set()
62 for node in hook_api.module_graph.flatten(start=hook_api.module):
63 if isinstance(node, SourceModule) and \
64 node.identifier.startswith('sqlalchemy.'):
65 known_imports.add(node.identifier)
66 # Determine the encoding of the source file.
67 with open_file(node.filename, 'rb') as f:
68 encoding = guess_encoding(f)
69 # Use that to open the file.
70 with open_file(node.filename, text_read_mode,
71 encoding=encoding) as f:
72 for match in depend_regex.findall(f.read()):
73 hidden_imports_set.add(match)
74
75 hidden_imports_set -= known_imports
76 if len(hidden_imports_set):
77 logger.info(" Found %d sqlalchemy hidden imports",
78 len(hidden_imports_set))
79 hook_api.add_imports(*list(hidden_imports_set))
80
[end of PyInstaller/hooks/hook-sqlalchemy.py]
</code>
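The `hook()` function above works by scanning bundled SQLAlchemy sources for the `@util.dependencies` decorator. A quick, runnable illustration of what its regex extracts (pattern copied verbatim from the hook):

```python
import re

depend_regex = re.compile(r'@util.dependencies\([\'"](.*?)[\'"]\)')
sample = '@util.dependencies("sqlalchemy.sql.schema")'
print(depend_regex.findall(sample))  # -> ['sqlalchemy.sql.schema']
```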
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PyInstaller/hooks/hook-sqlalchemy.py b/PyInstaller/hooks/hook-sqlalchemy.py
--- a/PyInstaller/hooks/hook-sqlalchemy.py
+++ b/PyInstaller/hooks/hook-sqlalchemy.py
@@ -22,7 +22,7 @@
# include most common database bindings
# some database bindings are detected and include some
# are not. We should explicitly include database backends.
-hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']
+hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']
# In SQLAlchemy >= 0.6, the "sqlalchemy.dialects" package provides dialects.
if is_module_satisfies('sqlalchemy >= 0.6'):
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-sqlalchemy.py b/PyInstaller/hooks/hook-sqlalchemy.py\n--- a/PyInstaller/hooks/hook-sqlalchemy.py\n+++ b/PyInstaller/hooks/hook-sqlalchemy.py\n@@ -22,7 +22,7 @@\n # include most common database bindings\n # some database bindings are detected and include some\n # are not. We should explicitly include database backends.\n-hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']\n+hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']\n \n # In SQLAlchemy >= 0.6, the \"sqlalchemy.dialects\" package provides dialects.\n if is_module_satisfies('sqlalchemy >= 0.6'):\n", "issue": "Hook file for sqlalchemy misses hidden import \"sqlalchemy.ext.baked\"\nThe provided hook file for sqlalchemy doesn't seem to pick up the hidden import of \"sqlalchemy.ext.baked\".\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport re\nfrom PyInstaller.utils.hooks import (\n exec_statement, is_module_satisfies, logger)\nfrom PyInstaller.compat import open_file, text_read_mode\nfrom PyInstaller.lib.modulegraph.modulegraph import SourceModule\nfrom PyInstaller.lib.modulegraph.util import guess_encoding\n\n# 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.\nexcludedimports = ['sqlalchemy.testing']\n\n# include most common database bindings\n# some database bindings are detected and include some\n# are not. We should explicitly include database backends.\nhiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']\n\n# In SQLAlchemy >= 0.6, the \"sqlalchemy.dialects\" package provides dialects.\nif is_module_satisfies('sqlalchemy >= 0.6'):\n dialects = exec_statement(\"import sqlalchemy.dialects;print(sqlalchemy.dialects.__all__)\")\n dialects = eval(dialects.strip())\n\n for n in dialects:\n hiddenimports.append(\"sqlalchemy.dialects.\" + n)\n# In SQLAlchemy <= 0.5, the \"sqlalchemy.databases\" package provides dialects.\nelse:\n databases = exec_statement(\"import sqlalchemy.databases; print(sqlalchemy.databases.__all__)\")\n databases = eval(databases.strip())\n\n for n in databases:\n hiddenimports.append(\"sqlalchemy.databases.\" + n)\n\n\ndef hook(hook_api):\n \"\"\"\n SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This\n decorator does imports. 
eg:\n\n @util.dependencies(\"sqlalchemy.sql.schema\")\n\n This hook scans for included SQLAlchemy modules and then scans those modules\n for any util.dependencies and marks those modules as hidden imports.\n \"\"\"\n\n if not is_module_satisfies('sqlalchemy >= 0.9'):\n return\n\n # this parser is very simplistic but seems to catch all cases as of V1.1\n depend_regex = re.compile(r'@util.dependencies\\([\\'\"](.*?)[\\'\"]\\)')\n\n hidden_imports_set = set()\n known_imports = set()\n for node in hook_api.module_graph.flatten(start=hook_api.module):\n if isinstance(node, SourceModule) and \\\n node.identifier.startswith('sqlalchemy.'):\n known_imports.add(node.identifier)\n # Determine the encoding of the source file.\n with open_file(node.filename, 'rb') as f:\n encoding = guess_encoding(f)\n # Use that to open the file.\n with open_file(node.filename, text_read_mode,\n encoding=encoding) as f:\n for match in depend_regex.findall(f.read()):\n hidden_imports_set.add(match)\n\n hidden_imports_set -= known_imports\n if len(hidden_imports_set):\n logger.info(\" Found %d sqlalchemy hidden imports\",\n len(hidden_imports_set))\n hook_api.add_imports(*list(hidden_imports_set))\n", "path": "PyInstaller/hooks/hook-sqlalchemy.py"}]} | 1,444 | 176 |
gh_patches_debug_11454 | rasdani/github-patches | git_diff | beeware__toga-1617 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WHEN TYPING "-" IN THE NUMBERINPUT, THE WIDGET FAILS.
"""
TESTE
"""
import toga
from toga.style import Pack
from toga.style.pack import COLUMN, ROW
class TESTE(toga.App):
def startup(self):
"""
Construct and show the Toga application.
Usually, you would add your application to a main content box.
We then create a main window (with a name matching the app), and
show the main window.
"""
# WIDGETS ###############################
self.number = toga.NumberInput()
self.pushButton = toga.Button('AHHHH')
########################################
# BOX ####################################################
main_box = toga.Box(style=Pack(direction=COLUMN))
main_box.add(self.number, self.pushButton)
#########################################################
# EVENT #####################################################
self.pushButton.on_press = self.printar
##############################################################
# WINDOW #####################################################
self.main_window = toga.MainWindow(title=self.formal_name)
self.main_window.content = main_box
self.main_window.show()
##############################################################
def printar(self, widget):
brasil = float(self.number.value)
print(brasil)
def main():
return TESTE()
https://user-images.githubusercontent.com/75274707/195914116-84981cc4-62d4-423c-a51d-0b77b4f6948a.mp4
</issue>
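For context: the Android backend (shown below) re-parses the `EditText` contents after every keystroke, so a lone minus sign reaches `Decimal("-")`, which raises `decimal.InvalidOperation`. A plain-Python reproduction of the failure:

```python
from decimal import Decimal, InvalidOperation

for text in ["3.14", "-2", "-", ""]:
    try:
        print(repr(text), "->", Decimal(text))
    except InvalidOperation:
        print(repr(text), "-> raises InvalidOperation")  # "-" lands here
```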
<code>
[start of src/android/toga_android/widgets/numberinput.py]
1 from decimal import Decimal
2
3 from travertino.size import at_least
4
5 from ..libs.android.text import InputType, TextWatcher
6 from ..libs.android.util import TypedValue
7 from ..libs.android.view import Gravity, View__MeasureSpec
8 from ..libs.android.widget import EditText
9 from .base import Widget, align
10
11
12 def decimal_from_string(s):
13 """If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,
14 allowing any exceptions to bubble up."""
15 if not s:
16 return None
17 return Decimal(s)
18
19
20 def string_from_decimal(d):
21 '''Implement the inverse of `decimal_from_string()`. This way, Toga's
22 `NumericInput` can pass us a `None` or `Decimal`, and we can always place
23 a String in the Android `EditText`.'''
24 if d is None:
25 return ""
26 return str(d)
27
28
29 class TogaNumberInputWatcher(TextWatcher):
30 def __init__(self, impl):
31 super().__init__()
32 self.interface = impl.interface
33
34 def beforeTextChanged(self, _charSequence, _start, _count, _after):
35 pass
36
37 def afterTextChanged(self, editable):
38 # Toga `NumberInput` stores the value as a property on the `interface`.
39 self.interface._value = decimal_from_string(editable.toString())
40 # Call the user on_change callback, if it exists.
41 if self.interface.on_change:
42 self.interface.on_change(widget=self.interface)
43
44 def onTextChanged(self, _charSequence, _start, _before, _count):
45 pass
46
47
48 class NumberInput(Widget):
49 def create(self):
50 self.native = EditText(self._native_activity)
51 self.native.addTextChangedListener(TogaNumberInputWatcher(self))
52
53 # A `NumberInput` in Toga supports signed decimal numbers.
54 self.native.setInputType(
55 InputType.TYPE_CLASS_NUMBER
56 | InputType.TYPE_NUMBER_FLAG_DECIMAL
57 | InputType.TYPE_NUMBER_FLAG_SIGNED
58 )
59
60 def set_readonly(self, value):
61 self.native.setFocusable(not value)
62
63 def set_placeholder(self, value):
64 # Android EditText's setHint() requires a Python string.
65 self.native.setHint(value if value is not None else "")
66
67 def set_alignment(self, value):
68 self.native.setGravity(Gravity.CENTER_VERTICAL | align(value))
69
70 def set_font(self, font):
71 if font:
72 font_impl = font.bind(self.interface.factory)
73 self.native.setTextSize(TypedValue.COMPLEX_UNIT_SP, font_impl.get_size())
74 self.native.setTypeface(font_impl.get_typeface(), font_impl.get_style())
75
76 def set_value(self, value):
77 # Store a string in the Android widget. The `afterTextChanged` method
78 # will call the user on_change handler.
79 self.native.setText(string_from_decimal(value))
80
81 def set_step(self, step):
82 self.interface.factory.not_implemented("NumberInput.set_step()")
83
84 def set_max_value(self, value):
85 self.interface.factory.not_implemented("NumberInput.set_max_value()")
86
87 def set_min_value(self, value):
88 self.interface.factory.not_implemented("NumberInput.set_min_value()")
89
90 def set_on_change(self, handler):
91 # No special handling required.
92 pass
93
94 def rehint(self):
95 # On Android, EditText's measure() throws NullPointerException if the widget has no
96 # LayoutParams.
97 if not self.native.getLayoutParams():
98 return
99 self.native.measure(
100 View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED
101 )
102 self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())
103 self.interface.intrinsic.height = self.native.getMeasuredHeight()
104
[end of src/android/toga_android/widgets/numberinput.py]
</code>
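On the application side, the issue's own handler would also break, since `NumberInput.value` can be `None` while the field is mid-edit. A hedged sketch of a defensive version, with names taken from the issue's app code:

```python
def printar(self, widget):
    if self.number.value is None:  # nothing parseable typed yet
        return
    brasil = float(self.number.value)
    print(brasil)
```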
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/android/toga_android/widgets/numberinput.py b/src/android/toga_android/widgets/numberinput.py
--- a/src/android/toga_android/widgets/numberinput.py
+++ b/src/android/toga_android/widgets/numberinput.py
@@ -1,4 +1,4 @@
-from decimal import Decimal
+from decimal import Decimal, InvalidOperation
from travertino.size import at_least
@@ -10,11 +10,11 @@
def decimal_from_string(s):
- """If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,
- allowing any exceptions to bubble up."""
- if not s:
+ """Convert s to a `Decimal`, returning `None` if it's not a valid number."""
+ try:
+ return Decimal(s)
+ except InvalidOperation:
return None
- return Decimal(s)
def string_from_decimal(d):
| {"golden_diff": "diff --git a/src/android/toga_android/widgets/numberinput.py b/src/android/toga_android/widgets/numberinput.py\n--- a/src/android/toga_android/widgets/numberinput.py\n+++ b/src/android/toga_android/widgets/numberinput.py\n@@ -1,4 +1,4 @@\n-from decimal import Decimal\n+from decimal import Decimal, InvalidOperation\n \n from travertino.size import at_least\n \n@@ -10,11 +10,11 @@\n \n \n def decimal_from_string(s):\n- \"\"\"If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,\n- allowing any exceptions to bubble up.\"\"\"\n- if not s:\n+ \"\"\"Convert s to a `Decimal`, returning `None` if it's not a valid number.\"\"\"\n+ try:\n+ return Decimal(s)\n+ except InvalidOperation:\n return None\n- return Decimal(s)\n \n \n def string_from_decimal(d):\n", "issue": "WHEN TYPING \"-\" IN THE NUMBERINPUT, WIDGET FAILS.\n\"\"\"\r\nTESTE\r\n\"\"\"\r\nimport toga\r\nfrom toga.style import Pack\r\nfrom toga.style.pack import COLUMN, ROW\r\n\r\n\r\nclass TESTE(toga.App):\r\n\r\n def startup(self):\r\n \"\"\"\r\n Construct and show the Toga application.\r\n\r\n Usually, you would add your application to a main content box.\r\n We then create a main window (with a name matching the app), and\r\n show the main window.\r\n \"\"\"\r\n\r\n # WIDGETS ###############################\r\n self.number = toga.NumberInput()\r\n self.pushButton = toga.Button('AHHHH')\r\n ########################################\r\n\r\n # BOX ####################################################\r\n main_box = toga.Box(style=Pack(direction=COLUMN))\r\n main_box.add(self.number, self.pushButton)\r\n #########################################################\r\n\r\n # EVENT #####################################################\r\n self.pushButton.on_press = self.printar\r\n ##############################################################\r\n\r\n # WINDOW #####################################################\r\n self.main_window = toga.MainWindow(title=self.formal_name)\r\n self.main_window.content = main_box\r\n self.main_window.show()\r\n ##############################################################\r\n\r\n def printar(self, widget):\r\n brasil = float(self.number.value)\r\n print(brasil)\r\n\r\ndef main():\r\n return TESTE()\r\n\r\nhttps://user-images.githubusercontent.com/75274707/195914116-84981cc4-62d4-423c-a51d-0b77b4f6948a.mp4\r\n\r\n\n", "before_files": [{"content": "from decimal import Decimal\n\nfrom travertino.size import at_least\n\nfrom ..libs.android.text import InputType, TextWatcher\nfrom ..libs.android.util import TypedValue\nfrom ..libs.android.view import Gravity, View__MeasureSpec\nfrom ..libs.android.widget import EditText\nfrom .base import Widget, align\n\n\ndef decimal_from_string(s):\n \"\"\"If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,\n allowing any exceptions to bubble up.\"\"\"\n if not s:\n return None\n return Decimal(s)\n\n\ndef string_from_decimal(d):\n '''Implement the inverse of `decimal_from_string()`. 
This way, Toga's\n `NumericInput` can pass us a `None` or `Decimal`, and we can always place\n a String in the Android `EditText`.'''\n if d is None:\n return \"\"\n return str(d)\n\n\nclass TogaNumberInputWatcher(TextWatcher):\n def __init__(self, impl):\n super().__init__()\n self.interface = impl.interface\n\n def beforeTextChanged(self, _charSequence, _start, _count, _after):\n pass\n\n def afterTextChanged(self, editable):\n # Toga `NumberInput` stores the value as a property on the `interface`.\n self.interface._value = decimal_from_string(editable.toString())\n # Call the user on_change callback, if it exists.\n if self.interface.on_change:\n self.interface.on_change(widget=self.interface)\n\n def onTextChanged(self, _charSequence, _start, _before, _count):\n pass\n\n\nclass NumberInput(Widget):\n def create(self):\n self.native = EditText(self._native_activity)\n self.native.addTextChangedListener(TogaNumberInputWatcher(self))\n\n # A `NumberInput` in Toga supports signed decimal numbers.\n self.native.setInputType(\n InputType.TYPE_CLASS_NUMBER\n | InputType.TYPE_NUMBER_FLAG_DECIMAL\n | InputType.TYPE_NUMBER_FLAG_SIGNED\n )\n\n def set_readonly(self, value):\n self.native.setFocusable(not value)\n\n def set_placeholder(self, value):\n # Android EditText's setHint() requires a Python string.\n self.native.setHint(value if value is not None else \"\")\n\n def set_alignment(self, value):\n self.native.setGravity(Gravity.CENTER_VERTICAL | align(value))\n\n def set_font(self, font):\n if font:\n font_impl = font.bind(self.interface.factory)\n self.native.setTextSize(TypedValue.COMPLEX_UNIT_SP, font_impl.get_size())\n self.native.setTypeface(font_impl.get_typeface(), font_impl.get_style())\n\n def set_value(self, value):\n # Store a string in the Android widget. The `afterTextChanged` method\n # will call the user on_change handler.\n self.native.setText(string_from_decimal(value))\n\n def set_step(self, step):\n self.interface.factory.not_implemented(\"NumberInput.set_step()\")\n\n def set_max_value(self, value):\n self.interface.factory.not_implemented(\"NumberInput.set_max_value()\")\n\n def set_min_value(self, value):\n self.interface.factory.not_implemented(\"NumberInput.set_min_value()\")\n\n def set_on_change(self, handler):\n # No special handling required.\n pass\n\n def rehint(self):\n # On Android, EditText's measure() throws NullPointerException if the widget has no\n # LayoutParams.\n if not self.native.getLayoutParams():\n return\n self.native.measure(\n View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED\n )\n self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())\n self.interface.intrinsic.height = self.native.getMeasuredHeight()\n", "path": "src/android/toga_android/widgets/numberinput.py"}]} | 1,858 | 198 |
gh_patches_debug_11123 | rasdani/github-patches | git_diff | kivy__python-for-android-662 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
.jam files not installed
The user-config.jam in https://github.com/kivy/python-for-android/tree/master/pythonforandroid/recipes/boost does not show up in the installed p4a recipes folder /home/paul/.local/lib/python2.7/site-packages/pythonforandroid/recipes/boost/
Perhaps .jam files have to be added to this array as well: https://github.com/kived/python-for-android/commit/93fcf656e2aafc6a75ee06dab3e471e1eb509d87
</issue>
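For context, `setup.py` (below) only packages recipe files whose names match an explicit glob list, so any extension missing from that list is silently dropped from the installed package. A runnable check with the original pattern list:

```python
import glob  # setup.py itself uses glob.fnmatch.fnmatch

patterns = ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h', '*.mk']
print(any(glob.fnmatch.fnmatch('user-config.jam', p) for p in patterns))
# -> False: user-config.jam is excluded until '*.jam' joins the list
```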
<code>
[start of setup.py]
1
2 from setuptools import setup, find_packages
3 from os import walk
4 from os.path import join, dirname, sep
5 import os
6 import glob
7
8 # NOTE: All package data should also be set in MANIFEST.in
9
10 packages = find_packages()
11
12 package_data = {'': ['*.tmpl',
13 '*.patch', ], }
14
15 data_files = []
16
17 # By specifying every file manually, package_data will be able to
18 # include them in binary distributions. Note that we have to add
19 # everything as a 'pythonforandroid' rule, using '' apparently doesn't
20 # work.
21 def recursively_include(results, directory, patterns):
22 for root, subfolders, files in walk(directory):
23 for fn in files:
24 if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):
25 continue
26 filename = join(root, fn)
27 directory = 'pythonforandroid'
28 if directory not in results:
29 results[directory] = []
30 results[directory].append(join(*filename.split(sep)[1:]))
31
32 recursively_include(package_data, 'pythonforandroid/recipes',
33 ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',
34 '*.mk', ])
35 recursively_include(package_data, 'pythonforandroid/bootstraps',
36 ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',
37 '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])
38 recursively_include(package_data, 'pythonforandroid/bootstraps',
39 ['sdl-config', ])
40 recursively_include(package_data, 'pythonforandroid',
41 ['liblink', 'biglink', 'liblink.sh'])
42
43 setup(name='python-for-android',
44 version='0.3',
45 description='Android APK packager for Python scripts and apps',
46 author='The Kivy team',
47 author_email='[email protected]',
48 url='https://github.com/kivy/python-for-android',
49 license='MIT',
50 install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',
51 'six'],
52 entry_points={
53 'console_scripts': [
54 'python-for-android = pythonforandroid.toolchain:main',
55 'p4a = pythonforandroid.toolchain:main',
56 ],
57 'distutils.commands': [
58 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',
59 ],
60 },
61 classifiers = [
62 'Development Status :: 3 - Alpha',
63 'Intended Audience :: Developers',
64 'License :: OSI Approved :: MIT License',
65 'Operating System :: Microsoft :: Windows',
66 'Operating System :: OS Independent',
67 'Operating System :: POSIX :: Linux',
68 'Operating System :: MacOS :: MacOS X',
69 'Programming Language :: C',
70 'Programming Language :: Python :: 2',
71 'Programming Language :: Python :: 3',
72 'Topic :: Software Development',
73 'Topic :: Utilities',
74 ],
75 packages=packages,
76 package_data=package_data,
77 )
78
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
recursively_include(package_data, 'pythonforandroid/recipes',
['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',
- '*.mk', ])
+ '*.mk', '*.jam', ])
recursively_include(package_data, 'pythonforandroid/bootstraps',
['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',
'*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,7 @@\n \n recursively_include(package_data, 'pythonforandroid/recipes',\n ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',\n- '*.mk', ])\n+ '*.mk', '*.jam', ])\n recursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\n", "issue": ".jam files not installed\nThe user-config.jam in https://github.com/kivy/python-for-android/tree/master/pythonforandroid/recipes/boost does not show up in the installed p4a recipes folder /home/paul/.local/lib/python2.7/site-packages/pythonforandroid/recipes/boost/\n\nPerhaps .jam files have to be added to this array as well: https://github.com/kived/python-for-android/commit/93fcf656e2aafc6a75ee06dab3e471e1eb509d87\n\n", "before_files": [{"content": "\nfrom setuptools import setup, find_packages\nfrom os import walk\nfrom os.path import join, dirname, sep\nimport os\nimport glob\n\n# NOTE: All package data should also be set in MANIFEST.in\n\npackages = find_packages()\n\npackage_data = {'': ['*.tmpl',\n '*.patch', ], }\n\ndata_files = []\n\n# By specifying every file manually, package_data will be able to\n# include them in binary distributions. Note that we have to add\n# everything as a 'pythonforandroid' rule, using '' apparently doesn't\n# work.\ndef recursively_include(results, directory, patterns):\n for root, subfolders, files in walk(directory):\n for fn in files:\n if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):\n continue\n filename = join(root, fn)\n directory = 'pythonforandroid'\n if directory not in results:\n results[directory] = []\n results[directory].append(join(*filename.split(sep)[1:]))\n\nrecursively_include(package_data, 'pythonforandroid/recipes',\n ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',\n '*.mk', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['sdl-config', ])\nrecursively_include(package_data, 'pythonforandroid',\n ['liblink', 'biglink', 'liblink.sh'])\n\nsetup(name='python-for-android',\n version='0.3',\n description='Android APK packager for Python scripts and apps',\n author='The Kivy team',\n author_email='[email protected]',\n url='https://github.com/kivy/python-for-android', \n license='MIT', \n install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',\n 'six'],\n entry_points={\n 'console_scripts': [\n 'python-for-android = pythonforandroid.toolchain:main',\n 'p4a = pythonforandroid.toolchain:main',\n ],\n 'distutils.commands': [\n 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',\n ],\n },\n classifiers = [\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: OS Independent',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: C',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n ],\n packages=packages,\n package_data=package_data,\n )\n", "path": "setup.py"}]} | 1,451 | 138 |
gh_patches_debug_36804 | rasdani/github-patches | git_diff | OpenMined__PySyft-124 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement Default Absolute Value Functions in Base Tensor Type
**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing the elementwise absolute value of a Tensor of arbitrary type. abs() should return a new tensor and abs_() should perform the operation inline. For a great reference on how this operation should behave, check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.
**Acceptance Criteria:**
- If the Base Tensor type's attribute "encrypted" is set to True, it should return a NotImplemented error.
- a unit test demonstrating the correct operation of abs() and abs_() on the Base Tensor type implemented over int and float Tensors.
- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.
Implement Default addmm Functionality in Base Tensor Type
**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing each operation on a Tensor of arbitrary type. addmm() should return a new tensor and addmm_() should perform the operation inline. For a reference on the operation this performs, check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.
**Acceptance Criteria:**
- If the Base Tensor type's attribute "encrypted" is set to True, it should return a NotImplemented error.
- a unit test demonstrating the correct operation of addmm() and addmm_() on the Base Tensor type implemented over int and float Tensors.
- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.
</issue>
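A hedged usage sketch of the requested API (module path `syft.tensor` assumed from the listing below): `abs()` returns a fresh result, `abs_()` mutates in place, and encrypted tensors refuse both, per the acceptance criteria.

```python
import numpy as np

from syft.tensor import TensorBase

t = TensorBase(np.array([-1, 2, -3]))
print(t.abs())    # [1 2 3]; t.data is untouched
t.abs_()
print(t.data)     # [1 2 3]

enc = TensorBase(np.array([-1.5, 0.5]), encrypted=True)
print(enc.abs())  # NotImplemented
```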
<code>
[start of syft/tensor.py]
1 import numpy as np
2
3 def _ensure_ndarray(arr):
4 if not isinstance(arr, np.ndarray):
5 arr = np.array(arr)
6
7 return arr
8
9 class TensorBase(object):
10 """
11     A base tensor class that performs basic element-wise operations such as
12 addition, subtraction, multiplication and division
13 """
14
15 def __init__(self, arr_like, encrypted=False):
16 self.data = _ensure_ndarray(arr_like)
17 self.encrypted = encrypted
18
19 def __add__(self, arr_like):
20 """Performs element-wise addition between two array like objects"""
21 if self.encrypted:
22 return NotImplemented
23
24 arr_like = _ensure_ndarray(arr_like)
25 return self.data + arr_like
26
27 def __iadd__(self, arr_like):
28 """Performs in place element-wise addition between two array like objects"""
29 if self.encrypted:
30 return NotImplemented
31
32 arr_like = _ensure_ndarray(arr_like)
33 self.data = self.data + arr_like
34 return self.data
35
36 def __sub__(self, arr_like):
37 """Performs element-wise subtraction between two array like objects"""
38 if self.encrypted:
39 return NotImplemented
40
41 arr_like = _ensure_ndarray(arr_like)
42 return self.data - arr_like
43
44 def __isub__(self, arr_like):
45 """Performs in place element-wise subtraction between two array like objects"""
46 if self.encrypted:
47 return NotImplemented
48
49 arr_like = _ensure_ndarray(arr_like)
50 self.data = self.data - arr_like
51 return self.data
52
53 def __mul__(self, arr_like):
54 """Performs element-wise multiplication between two array like objects"""
55 if self.encrypted:
56 return NotImplemented
57
58 arr_like = _ensure_ndarray(arr_like)
59 return self.data * arr_like
60
61 def __imul__(self, arr_like):
62 """Performs in place element-wise multiplication between two array like objects"""
63 if self.encrypted:
64 return NotImplemented
65
66 arr_like = _ensure_ndarray(arr_like)
67 self.data = self.data * arr_like
68 return self.data
69
70 def __truediv__(self, arr_like):
71 """Performs element-wise division between two array like objects"""
72 if self.encrypted:
73 return NotImplemented
74
75 arr_like = _ensure_ndarray(arr_like)
76 return self.data / arr_like
77
78 def __itruediv__(self, arr_like):
79         """Performs in place element-wise division between two array like objects"""
80 if self.encrypted:
81 return NotImplemented
82
83 arr_like = _ensure_ndarray(arr_like)
84 self.data = self.data / arr_like
85 return self.data
86
87 def shape(self):
88 """Returns a tuple of input array dimensions."""
89 if self.encrypted:
90 return NotImplemented
91
92 return self.data.shape
93
94 def sum(self, dim=None):
95 """Returns the sum of all elements in the input array."""
96 if self.encrypted:
97 return NotImplemented
98
99 if dim is None:
100 return self.data.sum()
101 else:
102 return self.data.sum(axis=dim)
103
[end of syft/tensor.py]
</code>
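For the addmm user story, the requested arithmetic is `beta * mat + alpha * (tensor1 @ tensor2)`. A plain-NumPy rendering of the formula, mirroring the expression used in the eventual implementation:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
mat = np.ones((2, 2))
beta, alpha = 1, 1

print((mat * beta) + (np.matmul(a, b) * alpha))
# [[20. 23.]
#  [44. 51.]]
```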
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/syft/tensor.py b/syft/tensor.py
--- a/syft/tensor.py
+++ b/syft/tensor.py
@@ -84,6 +84,19 @@
self.data = self.data / arr_like
return self.data
+ def abs(self):
+ """Returns absolute value of tensor as a new tensor"""
+ if self.encrypted:
+ return NotImplemented
+ return np.absolute(self.data)
+
+ def abs_(self):
+ """Replaces tensor values with its absolute value"""
+ if self.encrypted:
+ return NotImplemented
+ self.data=np.absolute(self.data)
+ return self.data
+
def shape(self):
"""Returns a tuple of input array dimensions."""
if self.encrypted:
@@ -100,3 +113,33 @@
return self.data.sum()
else:
return self.data.sum(axis=dim)
+
+ def addmm(self,tensor2,mat,beta=1,alpha=1):
+ """Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and returns the result as a Tensor
+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.
+ *If both tensors are 1-dimensional, their dot product is returned.
+ *If both arguments are 2-D they are multiplied like conventional matrices.
+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
+ """
+ if self.encrypted or tensor2.encrypted or mat.encrypted:
+ return NotImplemented
+ else:
+ return TensorBase(np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha)))
+
+ def addmm_(self,tensor2,mat,beta=1,alpha=1):
+ """Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and updates Tensor1 with result and reurns it
+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.
+ *If both tensors are 1-dimensional, their dot product is returned.
+ *If both arguments are 2-D they are multiplied like conventional matrices.
+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
+ """
+ if self.encrypted is True or tensor2.encrypted is True or mat.encrypted is True:
+ return NotImplemented
+ else:
+ self.data=np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha))
+ return self
+
| {"golden_diff": "diff --git a/syft/tensor.py b/syft/tensor.py\n--- a/syft/tensor.py\n+++ b/syft/tensor.py\n@@ -84,6 +84,19 @@\n self.data = self.data / arr_like\n return self.data\n \n+ def abs(self):\n+ \"\"\"Returns absolute value of tensor as a new tensor\"\"\"\n+ if self.encrypted:\n+ return NotImplemented\n+ return np.absolute(self.data)\n+ \n+ def abs_(self):\n+ \"\"\"Replaces tensor values with its absolute value\"\"\"\n+ if self.encrypted:\n+ return NotImplemented\n+ self.data=np.absolute(self.data)\n+ return self.data\n+\n def shape(self):\n \"\"\"Returns a tuple of input array dimensions.\"\"\"\n if self.encrypted:\n@@ -100,3 +113,33 @@\n return self.data.sum()\n else:\n return self.data.sum(axis=dim)\n+ \n+ def addmm(self,tensor2,mat,beta=1,alpha=1):\n+ \"\"\"Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and returns the result as a Tensor\n+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.\n+ *If both tensors are 1-dimensional, their dot product is returned.\n+ *If both arguments are 2-D they are multiplied like conventional matrices.\n+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.\n+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.\n+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.\n+ \"\"\"\n+ if self.encrypted or tensor2.encrypted or mat.encrypted:\n+ return NotImplemented\n+ else:\n+ return TensorBase(np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha)))\n+\n+ def addmm_(self,tensor2,mat,beta=1,alpha=1):\n+ \"\"\"Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and updates Tensor1 with result and reurns it\n+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.\n+ *If both tensors are 1-dimensional, their dot product is returned.\n+ *If both arguments are 2-D they are multiplied like conventional matrices.\n+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.\n+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.\n+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.\n+ \"\"\"\n+ if self.encrypted is True or tensor2.encrypted is True or mat.encrypted is True:\n+ return NotImplemented\n+ else:\n+ self.data=np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha))\n+ return self\n+\n", "issue": "Implement Default Absolute Value Functions in Base Tensor Type\n**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing the elementwise absolute value of a Tensor of arbitrary type. abs() should return a new tensor and abs_ should perform the operation inline. 
For a great reference on how \r\n\r\n**Acceptance Criteria:**\r\n- If the Base Tensor type's attribute \"encrypted\" is set to True, it should return a NotImplemented error.\r\n- a unit test demonstrating the correct operation of abs() and abs_() on the Base Tensor type implemented over int and float Tensors.\r\n- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.\nImplement Default addmm Functionality in Base Tensor Type\n**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing each operation on a Tensor of arbitrary type. addmm_() should return a new tensor and addmm_() should perform the operation inline. For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.\r\n\r\n**Acceptance Criteria:**\r\n- If the Base Tensor type's attribute \"encrypted\" is set to True, it should return a NotImplemented error.\r\n- a unit test demonstrating the correct operation of addmm() and addmm_() on the Base Tensor type implemented over int and float Tensors.\r\n- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.\n", "before_files": [{"content": "import numpy as np\n\ndef _ensure_ndarray(arr):\n if not isinstance(arr, np.ndarray):\n arr = np.array(arr)\n\n return arr\n\nclass TensorBase(object):\n \"\"\"\n A base tensor class that perform basic element-wise operation such as\n addition, subtraction, multiplication and division\n \"\"\"\n\n def __init__(self, arr_like, encrypted=False):\n self.data = _ensure_ndarray(arr_like)\n self.encrypted = encrypted\n\n def __add__(self, arr_like):\n \"\"\"Performs element-wise addition between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data + arr_like\n\n def __iadd__(self, arr_like):\n \"\"\"Performs in place element-wise addition between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data + arr_like\n return self.data\n\n def __sub__(self, arr_like):\n \"\"\"Performs element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data - arr_like\n\n def __isub__(self, arr_like):\n \"\"\"Performs in place element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data - arr_like\n return self.data\n\n def __mul__(self, arr_like):\n \"\"\"Performs element-wise multiplication between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data * arr_like\n\n def __imul__(self, arr_like):\n \"\"\"Performs in place element-wise multiplication between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data * arr_like\n return self.data\n\n def __truediv__(self, arr_like):\n \"\"\"Performs element-wise division between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return 
self.data / arr_like\n\n def __itruediv__(self, arr_like):\n \"\"\"Performs in place element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data / arr_like\n return self.data\n\n def shape(self):\n \"\"\"Returns a tuple of input array dimensions.\"\"\"\n if self.encrypted:\n return NotImplemented\n\n return self.data.shape\n\n def sum(self, dim=None):\n \"\"\"Returns the sum of all elements in the input array.\"\"\"\n if self.encrypted:\n return NotImplemented\n\n if dim is None:\n return self.data.sum()\n else:\n return self.data.sum(axis=dim)\n", "path": "syft/tensor.py"}]} | 1,762 | 757 |
gh_patches_debug_12911 | rasdani/github-patches | git_diff | svthalia__concrexit-1090 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Newsletters show unpublished events
### Describe the bug
Newsletters show unpublished events
### How to reproduce
Steps to reproduce the behaviour:
1. Check one of the newsletters of the last few weeks
### Expected behaviour
Only published events should show.
### Additional context
This is probably because of the low number of events during these days.
</issue>
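For context: in `get_agenda()` (line 62 of the listing below), the top-up query that runs when fewer than ten events fit the two-week window drops the `published=True` filter, which is how unpublished events leak in. A hedged sketch of the corrected shape (Django ORM, same models as the file; it needs a configured Django project to run):

```python
from django.utils import timezone

from events.models import Event


def get_agenda(start_date):
    end_date = start_date + timezone.timedelta(weeks=2)
    published = Event.objects.filter(published=True)
    base_events = published.filter(
        start__gte=start_date, end__lt=end_date).order_by("start")
    if base_events.count() < 10:
        more_events = published.filter(end__gte=end_date).order_by("start")
        return [*base_events, *more_events][:10]
    return base_events
```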
<code>
[start of website/newsletters/services.py]
1 import os
2
3 from django.conf import settings
4 from django.template.loader import get_template
5 from django.utils import translation, timezone
6
7 from events.models import Event
8 from members.models import Member
9 from newsletters import emails
10 from partners.models import Partner
11 from pushnotifications.models import Message, Category
12
13
14 def write_to_file(pk, lang, html_message):
15 """
16 Write newsletter to a file
17 """
18 cache_dir = os.path.join(settings.MEDIA_ROOT, "newsletters")
19 if not os.path.isdir(cache_dir):
20 os.makedirs(cache_dir)
21
22 with open(os.path.join(cache_dir, f"{pk}_{lang}.html"), "w+") as cache_file:
23 cache_file.write(html_message)
24
25
26 def save_to_disk(newsletter, request):
27 """
28 Writes the newsletter as HTML to file (in all languages)
29 """
30 main_partner = Partner.objects.filter(is_main_partner=True).first()
31 local_partner = Partner.objects.filter(is_local_partner=True).first()
32
33 html_template = get_template("newsletters/email.html")
34
35 for language in settings.LANGUAGES:
36 translation.activate(language[0])
37
38 context = {
39 "newsletter": newsletter,
40 "agenda_events": (
41 newsletter.newslettercontent_set.filter(newsletteritem=None).order_by(
42 "newsletterevent__start_datetime"
43 )
44 ),
45 "main_partner": main_partner,
46 "local_partner": local_partner,
47 "lang_code": language[0],
48 "request": request,
49 }
50
51 html_message = html_template.render(context)
52
53 write_to_file(newsletter.pk, language[0], html_message)
54
55
56 def get_agenda(start_date):
57 end_date = start_date + timezone.timedelta(weeks=2)
58 base_events = Event.objects.filter(
59 start__gte=start_date, end__lt=end_date, published=True
60 ).order_by("start")
61 if base_events.count() < 10:
62 more_events = Event.objects.filter(end__gte=end_date).order_by("start")
63 return [*base_events, *more_events][:10]
64 return base_events
65
66
67 def send_newsletter(newsletter):
68 emails.send_newsletter(newsletter)
69 newsletter.sent = True
70 newsletter.save()
71 message = Message.objects.create(
72 title_nl=newsletter.title_nl,
73 title_en=newsletter.title_en,
74 body_nl="Tik om te bekijken",
75 body_en="Tap to view",
76 url=settings.BASE_URL + newsletter.get_absolute_url(),
77 category=Category.objects.get(key=Category.NEWSLETTER),
78 )
79 message.users.set(Member.current_members.all())
80 message.send()
81
[end of website/newsletters/services.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/newsletters/services.py b/website/newsletters/services.py
--- a/website/newsletters/services.py
+++ b/website/newsletters/services.py
@@ -55,11 +55,12 @@
def get_agenda(start_date):
end_date = start_date + timezone.timedelta(weeks=2)
- base_events = Event.objects.filter(
- start__gte=start_date, end__lt=end_date, published=True
+ published_events = Event.objects.filter(published=True)
+ base_events = published_events.filter(
+ start__gte=start_date, end__lt=end_date
).order_by("start")
if base_events.count() < 10:
- more_events = Event.objects.filter(end__gte=end_date).order_by("start")
+ more_events = published_events.filter(end__gte=end_date).order_by("start")
return [*base_events, *more_events][:10]
return base_events
| {"golden_diff": "diff --git a/website/newsletters/services.py b/website/newsletters/services.py\n--- a/website/newsletters/services.py\n+++ b/website/newsletters/services.py\n@@ -55,11 +55,12 @@\n \n def get_agenda(start_date):\n end_date = start_date + timezone.timedelta(weeks=2)\n- base_events = Event.objects.filter(\n- start__gte=start_date, end__lt=end_date, published=True\n+ published_events = Event.objects.filter(published=True)\n+ base_events = published_events.filter(\n+ start__gte=start_date, end__lt=end_date\n ).order_by(\"start\")\n if base_events.count() < 10:\n- more_events = Event.objects.filter(end__gte=end_date).order_by(\"start\")\n+ more_events = published_events.filter(end__gte=end_date).order_by(\"start\")\n return [*base_events, *more_events][:10]\n return base_events\n", "issue": "Newsletters show unpublished events\n### Describe the bug\r\nNewsletters show unpublished events\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Check one of the newsletters of the last weeks\r\n\r\n### Expected behaviour\r\nOnly published events should show.\r\n\r\n### Additional context\r\nThis is probably because of the low number of events during these days.\r\n\n", "before_files": [{"content": "import os\n\nfrom django.conf import settings\nfrom django.template.loader import get_template\nfrom django.utils import translation, timezone\n\nfrom events.models import Event\nfrom members.models import Member\nfrom newsletters import emails\nfrom partners.models import Partner\nfrom pushnotifications.models import Message, Category\n\n\ndef write_to_file(pk, lang, html_message):\n \"\"\"\n Write newsletter to a file\n \"\"\"\n cache_dir = os.path.join(settings.MEDIA_ROOT, \"newsletters\")\n if not os.path.isdir(cache_dir):\n os.makedirs(cache_dir)\n\n with open(os.path.join(cache_dir, f\"{pk}_{lang}.html\"), \"w+\") as cache_file:\n cache_file.write(html_message)\n\n\ndef save_to_disk(newsletter, request):\n \"\"\"\n Writes the newsletter as HTML to file (in all languages)\n \"\"\"\n main_partner = Partner.objects.filter(is_main_partner=True).first()\n local_partner = Partner.objects.filter(is_local_partner=True).first()\n\n html_template = get_template(\"newsletters/email.html\")\n\n for language in settings.LANGUAGES:\n translation.activate(language[0])\n\n context = {\n \"newsletter\": newsletter,\n \"agenda_events\": (\n newsletter.newslettercontent_set.filter(newsletteritem=None).order_by(\n \"newsletterevent__start_datetime\"\n )\n ),\n \"main_partner\": main_partner,\n \"local_partner\": local_partner,\n \"lang_code\": language[0],\n \"request\": request,\n }\n\n html_message = html_template.render(context)\n\n write_to_file(newsletter.pk, language[0], html_message)\n\n\ndef get_agenda(start_date):\n end_date = start_date + timezone.timedelta(weeks=2)\n base_events = Event.objects.filter(\n start__gte=start_date, end__lt=end_date, published=True\n ).order_by(\"start\")\n if base_events.count() < 10:\n more_events = Event.objects.filter(end__gte=end_date).order_by(\"start\")\n return [*base_events, *more_events][:10]\n return base_events\n\n\ndef send_newsletter(newsletter):\n emails.send_newsletter(newsletter)\n newsletter.sent = True\n newsletter.save()\n message = Message.objects.create(\n title_nl=newsletter.title_nl,\n title_en=newsletter.title_en,\n body_nl=\"Tik om te bekijken\",\n body_en=\"Tap to view\",\n url=settings.BASE_URL + newsletter.get_absolute_url(),\n category=Category.objects.get(key=Category.NEWSLETTER),\n )\n 
message.users.set(Member.current_members.all())\n message.send()\n", "path": "website/newsletters/services.py"}]} | 1,307 | 208 |
gh_patches_debug_2222 | rasdani/github-patches | git_diff | scrapy__scrapy-2337 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a sample middleware to startproject's template
It would be nice to have a sample middleware inside the project template to serve as an example for people who want to use one.
</issue>
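For context, `startproject` (below) renders each `*.tmpl` file with `string.Template`, substituting `$project_name` and the camel-cased `$ProjectName`. A hypothetical `middlewares.py.tmpl` in that style; this is template text, not runnable Python until rendered, and the hook names are the standard Scrapy spider-middleware interface:

```python
class ${ProjectName}SpiderMiddleware(object):
    # Enable via the SPIDER_MIDDLEWARES setting.

    @classmethod
    def from_crawler(cls, crawler):
        return cls()

    def process_spider_output(self, response, result, spider):
        for item_or_request in result:
            yield item_or_request
```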
<code>
[start of scrapy/commands/startproject.py]
1 from __future__ import print_function
2 import re
3 import os
4 import string
5 from importlib import import_module
6 from os.path import join, exists, abspath
7 from shutil import ignore_patterns, move, copy2, copystat
8
9 import scrapy
10 from scrapy.commands import ScrapyCommand
11 from scrapy.utils.template import render_templatefile, string_camelcase
12 from scrapy.exceptions import UsageError
13
14
15 TEMPLATES_TO_RENDER = (
16 ('scrapy.cfg',),
17 ('${project_name}', 'settings.py.tmpl'),
18 ('${project_name}', 'items.py.tmpl'),
19 ('${project_name}', 'pipelines.py.tmpl'),
20 )
21
22 IGNORE = ignore_patterns('*.pyc', '.svn')
23
24
25 class Command(ScrapyCommand):
26
27 requires_project = False
28 default_settings = {'LOG_ENABLED': False}
29
30 def syntax(self):
31 return "<project_name> [project_dir]"
32
33 def short_desc(self):
34 return "Create new project"
35
36 def _is_valid_name(self, project_name):
37 def _module_exists(module_name):
38 try:
39 import_module(module_name)
40 return True
41 except ImportError:
42 return False
43
44 if not re.search(r'^[_a-zA-Z]\w*$', project_name):
45 print('Error: Project names must begin with a letter and contain'\
46 ' only\nletters, numbers and underscores')
47 elif _module_exists(project_name):
48 print('Error: Module %r already exists' % project_name)
49 else:
50 return True
51 return False
52
53 def _copytree(self, src, dst):
54 """
55 Since the original function always creates the directory, to resolve
56 the issue a new function had to be created. It's a simple copy and
57 was reduced for this case.
58
59 More info at:
60 https://github.com/scrapy/scrapy/pull/2005
61 """
62 ignore = IGNORE
63 names = os.listdir(src)
64 ignored_names = ignore(src, names)
65
66 if not os.path.exists(dst):
67 os.makedirs(dst)
68
69 for name in names:
70 if name in ignored_names:
71 continue
72
73 srcname = os.path.join(src, name)
74 dstname = os.path.join(dst, name)
75 if os.path.isdir(srcname):
76 self._copytree(srcname, dstname)
77 else:
78 copy2(srcname, dstname)
79 copystat(src, dst)
80
81 def run(self, args, opts):
82 if len(args) not in (1, 2):
83 raise UsageError()
84
85 project_name = args[0]
86 project_dir = args[0]
87
88 if len(args) == 2:
89 project_dir = args[1]
90
91 if exists(join(project_dir, 'scrapy.cfg')):
92 self.exitcode = 1
93 print('Error: scrapy.cfg already exists in %s' % abspath(project_dir))
94 return
95
96 if not self._is_valid_name(project_name):
97 self.exitcode = 1
98 return
99
100 self._copytree(self.templates_dir, abspath(project_dir))
101 move(join(project_dir, 'module'), join(project_dir, project_name))
102 for paths in TEMPLATES_TO_RENDER:
103 path = join(*paths)
104 tplfile = join(project_dir,
105 string.Template(path).substitute(project_name=project_name))
106 render_templatefile(tplfile, project_name=project_name,
107 ProjectName=string_camelcase(project_name))
108 print("New Scrapy project %r, using template directory %r, created in:" % \
109 (project_name, self.templates_dir))
110 print(" %s\n" % abspath(project_dir))
111 print("You can start your first spider with:")
112 print(" cd %s" % project_dir)
113 print(" scrapy genspider example example.com")
114
115 @property
116 def templates_dir(self):
117 _templates_base_dir = self.settings['TEMPLATES_DIR'] or \
118 join(scrapy.__path__[0], 'templates')
119 return join(_templates_base_dir, 'project')
120
121
[end of scrapy/commands/startproject.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/commands/startproject.py b/scrapy/commands/startproject.py
--- a/scrapy/commands/startproject.py
+++ b/scrapy/commands/startproject.py
@@ -17,6 +17,7 @@
('${project_name}', 'settings.py.tmpl'),
('${project_name}', 'items.py.tmpl'),
('${project_name}', 'pipelines.py.tmpl'),
+ ('${project_name}', 'middlewares.py.tmpl'),
)
IGNORE = ignore_patterns('*.pyc', '.svn')
| {"golden_diff": "diff --git a/scrapy/commands/startproject.py b/scrapy/commands/startproject.py\n--- a/scrapy/commands/startproject.py\n+++ b/scrapy/commands/startproject.py\n@@ -17,6 +17,7 @@\n ('${project_name}', 'settings.py.tmpl'),\n ('${project_name}', 'items.py.tmpl'),\n ('${project_name}', 'pipelines.py.tmpl'),\n+ ('${project_name}', 'middlewares.py.tmpl'),\n )\n \n IGNORE = ignore_patterns('*.pyc', '.svn')\n", "issue": "Add a sample middleware to startproject's template\nIt will be nice to have a middleware template inside the template project to serve as an example for people that want to use it.\n\n", "before_files": [{"content": "from __future__ import print_function\nimport re\nimport os\nimport string\nfrom importlib import import_module\nfrom os.path import join, exists, abspath\nfrom shutil import ignore_patterns, move, copy2, copystat\n\nimport scrapy\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.template import render_templatefile, string_camelcase\nfrom scrapy.exceptions import UsageError\n\n\nTEMPLATES_TO_RENDER = (\n ('scrapy.cfg',),\n ('${project_name}', 'settings.py.tmpl'),\n ('${project_name}', 'items.py.tmpl'),\n ('${project_name}', 'pipelines.py.tmpl'),\n)\n\nIGNORE = ignore_patterns('*.pyc', '.svn')\n\n\nclass Command(ScrapyCommand):\n\n requires_project = False\n default_settings = {'LOG_ENABLED': False}\n\n def syntax(self):\n return \"<project_name> [project_dir]\"\n\n def short_desc(self):\n return \"Create new project\"\n\n def _is_valid_name(self, project_name):\n def _module_exists(module_name):\n try:\n import_module(module_name)\n return True\n except ImportError:\n return False\n\n if not re.search(r'^[_a-zA-Z]\\w*$', project_name):\n print('Error: Project names must begin with a letter and contain'\\\n ' only\\nletters, numbers and underscores')\n elif _module_exists(project_name):\n print('Error: Module %r already exists' % project_name)\n else:\n return True\n return False\n\n def _copytree(self, src, dst):\n \"\"\"\n Since the original function always creates the directory, to resolve\n the issue a new function had to be created. 
It's a simple copy and\n was reduced for this case.\n\n More info at:\n https://github.com/scrapy/scrapy/pull/2005\n \"\"\"\n ignore = IGNORE\n names = os.listdir(src)\n ignored_names = ignore(src, names)\n\n if not os.path.exists(dst):\n os.makedirs(dst)\n\n for name in names:\n if name in ignored_names:\n continue\n\n srcname = os.path.join(src, name)\n dstname = os.path.join(dst, name)\n if os.path.isdir(srcname):\n self._copytree(srcname, dstname)\n else:\n copy2(srcname, dstname)\n copystat(src, dst)\n\n def run(self, args, opts):\n if len(args) not in (1, 2):\n raise UsageError()\n\n project_name = args[0]\n project_dir = args[0]\n\n if len(args) == 2:\n project_dir = args[1]\n\n if exists(join(project_dir, 'scrapy.cfg')):\n self.exitcode = 1\n print('Error: scrapy.cfg already exists in %s' % abspath(project_dir))\n return\n\n if not self._is_valid_name(project_name):\n self.exitcode = 1\n return\n\n self._copytree(self.templates_dir, abspath(project_dir))\n move(join(project_dir, 'module'), join(project_dir, project_name))\n for paths in TEMPLATES_TO_RENDER:\n path = join(*paths)\n tplfile = join(project_dir,\n string.Template(path).substitute(project_name=project_name))\n render_templatefile(tplfile, project_name=project_name,\n ProjectName=string_camelcase(project_name))\n print(\"New Scrapy project %r, using template directory %r, created in:\" % \\\n (project_name, self.templates_dir))\n print(\" %s\\n\" % abspath(project_dir))\n print(\"You can start your first spider with:\")\n print(\" cd %s\" % project_dir)\n print(\" scrapy genspider example example.com\")\n\n @property\n def templates_dir(self):\n _templates_base_dir = self.settings['TEMPLATES_DIR'] or \\\n join(scrapy.__path__[0], 'templates')\n return join(_templates_base_dir, 'project')\n \n", "path": "scrapy/commands/startproject.py"}]} | 1,698 | 117 |
gh_patches_debug_40251 | rasdani/github-patches | git_diff | pre-commit__pre-commit-685 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Windows: Node Support
This involves solving this ticket: https://github.com/ekalinin/nodeenv/issues/53
I've already started some work on this
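For context, the core platform difference is where an environment puts its executables. A hedged sketch of that distinction follows; the `Scripts` layout is an assumption carried over from the virtualenv convention that nodeenv follows.
```python
import os
import sys


def guess_bin_dir(envdir):
    # POSIX environments put executables in <env>/bin; Windows
    # environments conventionally use <env>\Scripts instead.
    if sys.platform == 'win32':
        return os.path.join(envdir, 'Scripts')
    return os.path.join(envdir, 'bin')
```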
</issue>
<code>
[start of pre_commit/languages/node.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import os
5 import sys
6
7 from pre_commit.envcontext import envcontext
8 from pre_commit.envcontext import Var
9 from pre_commit.languages import helpers
10 from pre_commit.util import clean_path_on_failure
11 from pre_commit.util import cmd_output
12 from pre_commit.xargs import xargs
13
14
15 ENVIRONMENT_DIR = 'node_env'
16 get_default_version = helpers.basic_get_default_version
17 healthy = helpers.basic_healthy
18
19
20 def get_env_patch(venv): # pragma: windows no cover
21 if sys.platform == 'cygwin': # pragma: no cover
22 _, win_venv, _ = cmd_output('cygpath', '-w', venv)
23 install_prefix = r'{}\bin'.format(win_venv.strip())
24 else:
25 install_prefix = venv
26 return (
27 ('NODE_VIRTUAL_ENV', venv),
28 ('NPM_CONFIG_PREFIX', install_prefix),
29 ('npm_config_prefix', install_prefix),
30 ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),
31 ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),
32 )
33
34
35 @contextlib.contextmanager
36 def in_env(prefix, language_version): # pragma: windows no cover
37 envdir = prefix.path(
38 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
39 )
40 with envcontext(get_env_patch(envdir)):
41 yield
42
43
44 def install_environment(
45 prefix, version, additional_dependencies,
46 ): # pragma: windows no cover
47 additional_dependencies = tuple(additional_dependencies)
48 assert prefix.exists('package.json')
49 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
50
51 env_dir = prefix.path(directory)
52 with clean_path_on_failure(env_dir):
53 cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]
54 if version != 'default':
55 cmd.extend(['-n', version])
56 cmd_output(*cmd)
57
58 with in_env(prefix, version):
59 helpers.run_setup_cmd(
60 prefix,
61 ('npm', 'install', '-g', '.') + additional_dependencies,
62 )
63
64
65 def run_hook(prefix, hook, file_args): # pragma: windows no cover
66 with in_env(prefix, hook['language_version']):
67 return xargs(helpers.to_cmd(hook), file_args)
68
[end of pre_commit/languages/node.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -7,6 +7,7 @@
from pre_commit.envcontext import envcontext
from pre_commit.envcontext import Var
from pre_commit.languages import helpers
+from pre_commit.languages.python import bin_dir
from pre_commit.util import clean_path_on_failure
from pre_commit.util import cmd_output
from pre_commit.xargs import xargs
@@ -17,10 +18,17 @@
healthy = helpers.basic_healthy
-def get_env_patch(venv): # pragma: windows no cover
+def _envdir(prefix, version):
+ directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
+ return prefix.path(directory)
+
+
+def get_env_patch(venv):
if sys.platform == 'cygwin': # pragma: no cover
_, win_venv, _ = cmd_output('cygpath', '-w', venv)
install_prefix = r'{}\bin'.format(win_venv.strip())
+ elif sys.platform == 'win32': # pragma: no cover
+ install_prefix = bin_dir(venv)
else:
install_prefix = venv
return (
@@ -28,29 +36,26 @@
('NPM_CONFIG_PREFIX', install_prefix),
('npm_config_prefix', install_prefix),
('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),
- ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),
+ ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
)
@contextlib.contextmanager
-def in_env(prefix, language_version): # pragma: windows no cover
- envdir = prefix.path(
- helpers.environment_dir(ENVIRONMENT_DIR, language_version),
- )
- with envcontext(get_env_patch(envdir)):
+def in_env(prefix, language_version):
+ with envcontext(get_env_patch(_envdir(prefix, language_version))):
yield
-def install_environment(
- prefix, version, additional_dependencies,
-): # pragma: windows no cover
+def install_environment(prefix, version, additional_dependencies):
additional_dependencies = tuple(additional_dependencies)
assert prefix.exists('package.json')
- directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
+ envdir = _envdir(prefix, version)
- env_dir = prefix.path(directory)
- with clean_path_on_failure(env_dir):
- cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]
+ # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath
+ if sys.platform == 'win32': # pragma: no cover
+ envdir = '\\\\?\\' + os.path.normpath(envdir)
+ with clean_path_on_failure(envdir):
+ cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', envdir]
if version != 'default':
cmd.extend(['-n', version])
cmd_output(*cmd)
@@ -62,6 +67,6 @@
)
-def run_hook(prefix, hook, file_args): # pragma: windows no cover
+def run_hook(prefix, hook, file_args):
with in_env(prefix, hook['language_version']):
return xargs(helpers.to_cmd(hook), file_args)
| {"golden_diff": "diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py\n--- a/pre_commit/languages/node.py\n+++ b/pre_commit/languages/node.py\n@@ -7,6 +7,7 @@\n from pre_commit.envcontext import envcontext\n from pre_commit.envcontext import Var\n from pre_commit.languages import helpers\n+from pre_commit.languages.python import bin_dir\n from pre_commit.util import clean_path_on_failure\n from pre_commit.util import cmd_output\n from pre_commit.xargs import xargs\n@@ -17,10 +18,17 @@\n healthy = helpers.basic_healthy\n \n \n-def get_env_patch(venv): # pragma: windows no cover\n+def _envdir(prefix, version):\n+ directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n+ return prefix.path(directory)\n+\n+\n+def get_env_patch(venv):\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = r'{}\\bin'.format(win_venv.strip())\n+ elif sys.platform == 'win32': # pragma: no cover\n+ install_prefix = bin_dir(venv)\n else:\n install_prefix = venv\n return (\n@@ -28,29 +36,26 @@\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),\n- ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),\n+ ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n \n \n @contextlib.contextmanager\n-def in_env(prefix, language_version): # pragma: windows no cover\n- envdir = prefix.path(\n- helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n- )\n- with envcontext(get_env_patch(envdir)):\n+def in_env(prefix, language_version):\n+ with envcontext(get_env_patch(_envdir(prefix, language_version))):\n yield\n \n \n-def install_environment(\n- prefix, version, additional_dependencies,\n-): # pragma: windows no cover\n+def install_environment(prefix, version, additional_dependencies):\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n- directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n+ envdir = _envdir(prefix, version)\n \n- env_dir = prefix.path(directory)\n- with clean_path_on_failure(env_dir):\n- cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]\n+ # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n+ if sys.platform == 'win32': # pragma: no cover\n+ envdir = '\\\\\\\\?\\\\' + os.path.normpath(envdir)\n+ with clean_path_on_failure(envdir):\n+ cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', envdir]\n if version != 'default':\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n@@ -62,6 +67,6 @@\n )\n \n \n-def run_hook(prefix, hook, file_args): # pragma: windows no cover\n+def run_hook(prefix, hook, file_args):\n with in_env(prefix, hook['language_version']):\n return xargs(helpers.to_cmd(hook), file_args)\n", "issue": "Windows: Node Support\nThis involves solving this ticket: https://github.com/ekalinin/nodeenv/issues/53\n\nI've already started some work on this\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport os\nimport sys\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'node_env'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef 
get_env_patch(venv): # pragma: windows no cover\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = r'{}\\bin'.format(win_venv.strip())\n else:\n install_prefix = venv\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),\n ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix, language_version): # pragma: windows no cover\n envdir = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef install_environment(\n prefix, version, additional_dependencies,\n): # pragma: windows no cover\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n env_dir = prefix.path(directory)\n with clean_path_on_failure(env_dir):\n cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]\n if version != 'default':\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix,\n ('npm', 'install', '-g', '.') + additional_dependencies,\n )\n\n\ndef run_hook(prefix, hook, file_args): # pragma: windows no cover\n with in_env(prefix, hook['language_version']):\n return xargs(helpers.to_cmd(hook), file_args)\n", "path": "pre_commit/languages/node.py"}]} | 1,210 | 806 |
gh_patches_debug_11395 | rasdani/github-patches | git_diff | sopel-irc__sopel-958 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'NoneType' object has no attribute 'strip' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py", line 23, in codepoint)
[01:04am] <Ant> .u
01:04AM <Sopel> AttributeError: 'NoneType' object has no attribute 'strip' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py", line 23, in codepoint)
01:04AM <Sopel> Ant: Sopel v. 6.1.1
This is in my Debian oldstable with Python v2.7.3. :(
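The failure is easy to reproduce in isolation: `trigger.group(2)` is `None` when `.u` is given no argument, so the very first line of `codepoint` raises (sketch):
```python
arg = None    # what trigger.group(2) returns for a bare ".u"
arg.strip()   # AttributeError: 'NoneType' object has no attribute 'strip'
```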
</issue>
<code>
[start of sopel/modules/unicode_info.py]
1 # coding=utf-8
2 """Codepoints Module"""
3 # Copyright 2013, Elsie Powell, embolalia.com
4 # Copyright 2008, Sean B. Palmer, inamidst.com
5 # Licensed under the Eiffel Forum License 2.
6 from __future__ import unicode_literals, absolute_import, print_function, division
7 import unicodedata
8 import sys
9 from sopel.module import commands, example, NOLIMIT
10
11 if sys.version_info.major >= 3:
12 unichr = chr
13
14
15 @commands('u')
16 @example('.u ‽', 'U+203D INTERROBANG (‽)')
17 @example('.u 203D', 'U+203D INTERROBANG (‽)')
18 def codepoint(bot, trigger):
19 arg = trigger.group(2).strip()
20 if len(arg) == 0:
21 bot.reply('What code point do you want me to look up?')
22 return NOLIMIT
23 elif len(arg) > 1:
24 if arg.startswith('U+'):
25 arg = arg[2:]
26 try:
27 arg = unichr(int(arg, 16))
28 except:
29 bot.reply("That's not a valid code point.")
30 return NOLIMIT
31
32 # Get the hex value for the code point, and drop the 0x from the front
33 point = str(hex(ord(u'' + arg)))[2:]
34 # Make the hex 4 characters long with preceding 0s, and all upper case
35 point = point.rjust(4, str('0')).upper()
36 try:
37 name = unicodedata.name(arg)
38 except ValueError:
39 return 'U+%s (No name found)' % point
40
41 if not unicodedata.combining(arg):
42 template = 'U+%s %s (%s)'
43 else:
44 template = 'U+%s %s (\xe2\x97\x8c%s)'
45 bot.say(template % (point, name, arg))
46
47 if __name__ == "__main__":
48 from sopel.test_tools import run_example_tests
49 run_example_tests(__file__)
50
[end of sopel/modules/unicode_info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sopel/modules/unicode_info.py b/sopel/modules/unicode_info.py
--- a/sopel/modules/unicode_info.py
+++ b/sopel/modules/unicode_info.py
@@ -16,11 +16,14 @@
@example('.u ‽', 'U+203D INTERROBANG (‽)')
@example('.u 203D', 'U+203D INTERROBANG (‽)')
def codepoint(bot, trigger):
- arg = trigger.group(2).strip()
- if len(arg) == 0:
+ arg = trigger.group(2)
+ if not arg:
bot.reply('What code point do you want me to look up?')
return NOLIMIT
- elif len(arg) > 1:
+ stripped = arg.strip()
+ if len(stripped) > 0:
+ arg = stripped
+ if len(arg) > 1:
if arg.startswith('U+'):
arg = arg[2:]
try:
| {"golden_diff": "diff --git a/sopel/modules/unicode_info.py b/sopel/modules/unicode_info.py\n--- a/sopel/modules/unicode_info.py\n+++ b/sopel/modules/unicode_info.py\n@@ -16,11 +16,14 @@\n @example('.u \u203d', 'U+203D INTERROBANG (\u203d)')\n @example('.u 203D', 'U+203D INTERROBANG (\u203d)')\n def codepoint(bot, trigger):\n- arg = trigger.group(2).strip()\n- if len(arg) == 0:\n+ arg = trigger.group(2)\n+ if not arg:\n bot.reply('What code point do you want me to look up?')\n return NOLIMIT\n- elif len(arg) > 1:\n+ stripped = arg.strip()\n+ if len(stripped) > 0:\n+ arg = stripped\n+ if len(arg) > 1:\n if arg.startswith('U+'):\n arg = arg[2:]\n try:\n", "issue": "AttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n[01:04am] <Ant> .u\n01:04AM <Sopel> AttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n01:04AM <Sopel> Ant: Sopel v. 6.1.1\n\nThis is in my Debian oldstable with Python v2.7.3. :(\n\nAttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n[01:04am] <Ant> .u\n01:04AM <Sopel> AttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n01:04AM <Sopel> Ant: Sopel v. 6.1.1\n\nThis is in my Debian oldstable with Python v2.7.3. :(\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Codepoints Module\"\"\"\n# Copyright 2013, Elsie Powell, embolalia.com\n# Copyright 2008, Sean B. Palmer, inamidst.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nimport unicodedata\nimport sys\nfrom sopel.module import commands, example, NOLIMIT\n\nif sys.version_info.major >= 3:\n unichr = chr\n\n\n@commands('u')\n@example('.u \u203d', 'U+203D INTERROBANG (\u203d)')\n@example('.u 203D', 'U+203D INTERROBANG (\u203d)')\ndef codepoint(bot, trigger):\n arg = trigger.group(2).strip()\n if len(arg) == 0:\n bot.reply('What code point do you want me to look up?')\n return NOLIMIT\n elif len(arg) > 1:\n if arg.startswith('U+'):\n arg = arg[2:]\n try:\n arg = unichr(int(arg, 16))\n except:\n bot.reply(\"That's not a valid code point.\")\n return NOLIMIT\n\n # Get the hex value for the code point, and drop the 0x from the front\n point = str(hex(ord(u'' + arg)))[2:]\n # Make the hex 4 characters long with preceding 0s, and all upper case\n point = point.rjust(4, str('0')).upper()\n try:\n name = unicodedata.name(arg)\n except ValueError:\n return 'U+%s (No name found)' % point\n\n if not unicodedata.combining(arg):\n template = 'U+%s %s (%s)'\n else:\n template = 'U+%s %s (\\xe2\\x97\\x8c%s)'\n bot.say(template % (point, name, arg))\n\nif __name__ == \"__main__\":\n from sopel.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "sopel/modules/unicode_info.py"}]} | 1,402 | 232 |
gh_patches_debug_38450 | rasdani/github-patches | git_diff | searx__searx-1452 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Findx is shutting down
https://privacore.github.io/
</issue>
<code>
[start of searx/engines/findx.py]
1 """
2 FindX (General, Images, Videos)
3
4 @website https://www.findx.com
5 @provide-api no
6 @using-api no
7 @results HTML
8 @stable no
9 @parse url, title, content, embedded, img_src, thumbnail_src
10 """
11
12 from dateutil import parser
13 from json import loads
14 import re
15
16 from lxml import html
17
18 from searx import logger
19 from searx.engines.xpath import extract_text
20 from searx.engines.youtube_noapi import base_youtube_url, embedded_url
21 from searx.url_utils import urlencode
22
23
24 paging = True
25 results_xpath = '//script[@id="initial-state"]'
26 search_url = 'https://www.findx.com/{category}?{q}'
27 type_map = {
28 'none': 'web',
29 'general': 'web',
30 'images': 'images',
31 'videos': 'videos',
32 }
33
34
35 def request(query, params):
36 params['url'] = search_url.format(
37 category=type_map[params['category']],
38 q=urlencode({
39 'q': query,
40 'page': params['pageno']
41 })
42 )
43 return params
44
45
46 def response(resp):
47 dom = html.fromstring(resp.text)
48 results_raw_json = dom.xpath(results_xpath)
49 results_json = loads(extract_text(results_raw_json))
50
51 if len(results_json['web']['results']) > 0:
52 return _general_results(results_json['web']['results']['webSearch']['results'])
53
54 if len(results_json['images']['results']) > 0:
55 return _images_results(results_json['images']['results'])
56
57 if len(results_json['video']['results']) > 0:
58 return _videos_results(results_json['video']['results'])
59
60 return []
61
62
63 def _general_results(general_results):
64 results = []
65 for result in general_results:
66 results.append({
67 'url': result['url'],
68 'title': result['title'],
69 'content': result['sum'],
70 })
71 return results
72
73
74 def _images_results(image_results):
75 results = []
76 for result in image_results:
77 results.append({
78 'url': result['sourceURL'],
79 'title': result['title'],
80 'content': result['source'],
81 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),
82 'img_src': _extract_url(result['assets']['file']['url']),
83 'template': 'images.html',
84 })
85 return results
86
87
88 def _videos_results(video_results):
89 results = []
90 for result in video_results:
91 if not result['kind'].startswith('youtube'):
92 logger.warn('Unknown video kind in findx: {}'.format(result['kind']))
93 continue
94
95 description = result['snippet']['description']
96 if len(description) > 300:
97 description = description[:300] + '...'
98
99 results.append({
100 'url': base_youtube_url + result['id'],
101 'title': result['snippet']['title'],
102 'content': description,
103 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),
104 'publishedDate': parser.parse(result['snippet']['publishedAt']),
105 'embedded': embedded_url.format(videoid=result['id']),
106 'template': 'videos.html',
107 })
108 return results
109
110
111 def _extract_url(url):
112 matching = re.search('(/https?://[^)]+)', url)
113 if matching:
114 return matching.group(0)[1:]
115 return ''
116
[end of searx/engines/findx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/findx.py b/searx/engines/findx.py
deleted file mode 100644
--- a/searx/engines/findx.py
+++ /dev/null
@@ -1,115 +0,0 @@
-"""
-FindX (General, Images, Videos)
-
-@website https://www.findx.com
-@provide-api no
-@using-api no
-@results HTML
-@stable no
-@parse url, title, content, embedded, img_src, thumbnail_src
-"""
-
-from dateutil import parser
-from json import loads
-import re
-
-from lxml import html
-
-from searx import logger
-from searx.engines.xpath import extract_text
-from searx.engines.youtube_noapi import base_youtube_url, embedded_url
-from searx.url_utils import urlencode
-
-
-paging = True
-results_xpath = '//script[@id="initial-state"]'
-search_url = 'https://www.findx.com/{category}?{q}'
-type_map = {
- 'none': 'web',
- 'general': 'web',
- 'images': 'images',
- 'videos': 'videos',
-}
-
-
-def request(query, params):
- params['url'] = search_url.format(
- category=type_map[params['category']],
- q=urlencode({
- 'q': query,
- 'page': params['pageno']
- })
- )
- return params
-
-
-def response(resp):
- dom = html.fromstring(resp.text)
- results_raw_json = dom.xpath(results_xpath)
- results_json = loads(extract_text(results_raw_json))
-
- if len(results_json['web']['results']) > 0:
- return _general_results(results_json['web']['results']['webSearch']['results'])
-
- if len(results_json['images']['results']) > 0:
- return _images_results(results_json['images']['results'])
-
- if len(results_json['video']['results']) > 0:
- return _videos_results(results_json['video']['results'])
-
- return []
-
-
-def _general_results(general_results):
- results = []
- for result in general_results:
- results.append({
- 'url': result['url'],
- 'title': result['title'],
- 'content': result['sum'],
- })
- return results
-
-
-def _images_results(image_results):
- results = []
- for result in image_results:
- results.append({
- 'url': result['sourceURL'],
- 'title': result['title'],
- 'content': result['source'],
- 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),
- 'img_src': _extract_url(result['assets']['file']['url']),
- 'template': 'images.html',
- })
- return results
-
-
-def _videos_results(video_results):
- results = []
- for result in video_results:
- if not result['kind'].startswith('youtube'):
- logger.warn('Unknown video kind in findx: {}'.format(result['kind']))
- continue
-
- description = result['snippet']['description']
- if len(description) > 300:
- description = description[:300] + '...'
-
- results.append({
- 'url': base_youtube_url + result['id'],
- 'title': result['snippet']['title'],
- 'content': description,
- 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),
- 'publishedDate': parser.parse(result['snippet']['publishedAt']),
- 'embedded': embedded_url.format(videoid=result['id']),
- 'template': 'videos.html',
- })
- return results
-
-
-def _extract_url(url):
- matching = re.search('(/https?://[^)]+)', url)
- if matching:
- return matching.group(0)[1:]
- return ''
| {"golden_diff": "diff --git a/searx/engines/findx.py b/searx/engines/findx.py\ndeleted file mode 100644\n--- a/searx/engines/findx.py\n+++ /dev/null\n@@ -1,115 +0,0 @@\n-\"\"\"\n-FindX (General, Images, Videos)\n-\n-@website https://www.findx.com\n-@provide-api no\n-@using-api no\n-@results HTML\n-@stable no\n-@parse url, title, content, embedded, img_src, thumbnail_src\n-\"\"\"\n-\n-from dateutil import parser\n-from json import loads\n-import re\n-\n-from lxml import html\n-\n-from searx import logger\n-from searx.engines.xpath import extract_text\n-from searx.engines.youtube_noapi import base_youtube_url, embedded_url\n-from searx.url_utils import urlencode\n-\n-\n-paging = True\n-results_xpath = '//script[@id=\"initial-state\"]'\n-search_url = 'https://www.findx.com/{category}?{q}'\n-type_map = {\n- 'none': 'web',\n- 'general': 'web',\n- 'images': 'images',\n- 'videos': 'videos',\n-}\n-\n-\n-def request(query, params):\n- params['url'] = search_url.format(\n- category=type_map[params['category']],\n- q=urlencode({\n- 'q': query,\n- 'page': params['pageno']\n- })\n- )\n- return params\n-\n-\n-def response(resp):\n- dom = html.fromstring(resp.text)\n- results_raw_json = dom.xpath(results_xpath)\n- results_json = loads(extract_text(results_raw_json))\n-\n- if len(results_json['web']['results']) > 0:\n- return _general_results(results_json['web']['results']['webSearch']['results'])\n-\n- if len(results_json['images']['results']) > 0:\n- return _images_results(results_json['images']['results'])\n-\n- if len(results_json['video']['results']) > 0:\n- return _videos_results(results_json['video']['results'])\n-\n- return []\n-\n-\n-def _general_results(general_results):\n- results = []\n- for result in general_results:\n- results.append({\n- 'url': result['url'],\n- 'title': result['title'],\n- 'content': result['sum'],\n- })\n- return results\n-\n-\n-def _images_results(image_results):\n- results = []\n- for result in image_results:\n- results.append({\n- 'url': result['sourceURL'],\n- 'title': result['title'],\n- 'content': result['source'],\n- 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),\n- 'img_src': _extract_url(result['assets']['file']['url']),\n- 'template': 'images.html',\n- })\n- return results\n-\n-\n-def _videos_results(video_results):\n- results = []\n- for result in video_results:\n- if not result['kind'].startswith('youtube'):\n- logger.warn('Unknown video kind in findx: {}'.format(result['kind']))\n- continue\n-\n- description = result['snippet']['description']\n- if len(description) > 300:\n- description = description[:300] + '...'\n-\n- results.append({\n- 'url': base_youtube_url + result['id'],\n- 'title': result['snippet']['title'],\n- 'content': description,\n- 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),\n- 'publishedDate': parser.parse(result['snippet']['publishedAt']),\n- 'embedded': embedded_url.format(videoid=result['id']),\n- 'template': 'videos.html',\n- })\n- return results\n-\n-\n-def _extract_url(url):\n- matching = re.search('(/https?://[^)]+)', url)\n- if matching:\n- return matching.group(0)[1:]\n- return ''\n", "issue": "Findx is shutting down\nhttps://privacore.github.io/\n", "before_files": [{"content": "\"\"\"\nFindX (General, Images, Videos)\n\n@website https://www.findx.com\n@provide-api no\n@using-api no\n@results HTML\n@stable no\n@parse url, title, content, embedded, img_src, thumbnail_src\n\"\"\"\n\nfrom dateutil import parser\nfrom json import loads\nimport re\n\nfrom lxml import html\n\nfrom 
searx import logger\nfrom searx.engines.xpath import extract_text\nfrom searx.engines.youtube_noapi import base_youtube_url, embedded_url\nfrom searx.url_utils import urlencode\n\n\npaging = True\nresults_xpath = '//script[@id=\"initial-state\"]'\nsearch_url = 'https://www.findx.com/{category}?{q}'\ntype_map = {\n 'none': 'web',\n 'general': 'web',\n 'images': 'images',\n 'videos': 'videos',\n}\n\n\ndef request(query, params):\n params['url'] = search_url.format(\n category=type_map[params['category']],\n q=urlencode({\n 'q': query,\n 'page': params['pageno']\n })\n )\n return params\n\n\ndef response(resp):\n dom = html.fromstring(resp.text)\n results_raw_json = dom.xpath(results_xpath)\n results_json = loads(extract_text(results_raw_json))\n\n if len(results_json['web']['results']) > 0:\n return _general_results(results_json['web']['results']['webSearch']['results'])\n\n if len(results_json['images']['results']) > 0:\n return _images_results(results_json['images']['results'])\n\n if len(results_json['video']['results']) > 0:\n return _videos_results(results_json['video']['results'])\n\n return []\n\n\ndef _general_results(general_results):\n results = []\n for result in general_results:\n results.append({\n 'url': result['url'],\n 'title': result['title'],\n 'content': result['sum'],\n })\n return results\n\n\ndef _images_results(image_results):\n results = []\n for result in image_results:\n results.append({\n 'url': result['sourceURL'],\n 'title': result['title'],\n 'content': result['source'],\n 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),\n 'img_src': _extract_url(result['assets']['file']['url']),\n 'template': 'images.html',\n })\n return results\n\n\ndef _videos_results(video_results):\n results = []\n for result in video_results:\n if not result['kind'].startswith('youtube'):\n logger.warn('Unknown video kind in findx: {}'.format(result['kind']))\n continue\n\n description = result['snippet']['description']\n if len(description) > 300:\n description = description[:300] + '...'\n\n results.append({\n 'url': base_youtube_url + result['id'],\n 'title': result['snippet']['title'],\n 'content': description,\n 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),\n 'publishedDate': parser.parse(result['snippet']['publishedAt']),\n 'embedded': embedded_url.format(videoid=result['id']),\n 'template': 'videos.html',\n })\n return results\n\n\ndef _extract_url(url):\n matching = re.search('(/https?://[^)]+)', url)\n if matching:\n return matching.group(0)[1:]\n return ''\n", "path": "searx/engines/findx.py"}]} | 1,542 | 883 |
gh_patches_debug_16627 | rasdani/github-patches | git_diff | kserve__kserve-3551 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support .json and .ubj model format for XGBoost server image
/kind feature
**Description**
In the XGBoost image, the only supported model format is .bst: https://github.com/kserve/kserve/blob/56b8fe0d189fc0d557e9a8af07eab0c12852d5fd/python/xgbserver/xgbserver/model.py#L28
This format has been deprecated for a while and is not backwards compatible between xgboost framework versions. The recommended model format is .json or .ubj: https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html
Users who want to use the recommended model format for XGBoost models are currently unable to do so.
**Proposed solution**
Support the recommended file formats, while also keeping support for the old .bst format.
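For reference, a sketch of how the recommended formats are produced with the plain xgboost API; file names here are illustrative, and the output format is selected by the file extension:
```python
import xgboost as xgb

booster = xgb.Booster()
booster.load_model("model.bst")   # legacy binary format (deprecated)

# The extension selects the serialization format:
booster.save_model("model.json")  # recommended; stable across versions
booster.save_model("model.ubj")   # recommended compact binary (UBJSON)
```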
</issue>
<code>
[start of python/xgbserver/xgbserver/model.py]
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import os
17 from typing import Dict, Union
18
19 import xgboost as xgb
20 from kserve.errors import InferenceError, ModelMissingError
21 from kserve.protocol.infer_type import InferRequest, InferResponse
22 from kserve.utils.utils import get_predict_input, get_predict_response
23 from xgboost import XGBModel
24
25 from kserve import Model
26 from kserve.storage import Storage
27
28 BOOSTER_FILE_EXTENSION = ".bst"
29
30
31 class XGBoostModel(Model):
32 def __init__(
33 self, name: str, model_dir: str, nthread: int, booster: XGBModel = None
34 ):
35 super().__init__(name)
36 self.name = name
37 self.model_dir = model_dir
38 self.nthread = nthread
39 if booster is not None:
40 self._booster = booster
41 self.ready = True
42
43 def load(self) -> bool:
44 model_path = Storage.download(self.model_dir)
45 model_files = []
46 for file in os.listdir(model_path):
47 file_path = os.path.join(model_path, file)
48 if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):
49 model_files.append(file_path)
50 if len(model_files) == 0:
51 raise ModelMissingError(model_path)
52 elif len(model_files) > 1:
53 raise RuntimeError(
54 "More than one model file is detected, "
55 f"Only one is allowed within model_dir: {model_files}"
56 )
57
58 self._booster = xgb.Booster(
59 params={"nthread": self.nthread}, model_file=model_files[0]
60 )
61 self.ready = True
62 return self.ready
63
64 def predict(
65 self, payload: Union[Dict, InferRequest], headers: Dict[str, str] = None
66 ) -> Union[Dict, InferResponse]:
67 try:
68 # Use of list as input is deprecated see https://github.com/dmlc/xgboost/pull/3970
69 instances = get_predict_input(payload)
70 dmatrix = xgb.DMatrix(instances, nthread=self.nthread)
71 result = self._booster.predict(dmatrix)
72 return get_predict_response(payload, result, self.name)
73 except Exception as e:
74 raise InferenceError(str(e))
75
[end of python/xgbserver/xgbserver/model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/xgbserver/xgbserver/model.py b/python/xgbserver/xgbserver/model.py
--- a/python/xgbserver/xgbserver/model.py
+++ b/python/xgbserver/xgbserver/model.py
@@ -25,7 +25,7 @@
from kserve import Model
from kserve.storage import Storage
-BOOSTER_FILE_EXTENSION = ".bst"
+BOOSTER_FILE_EXTENSIONS = (".bst", ".json", ".ubj")
class XGBoostModel(Model):
@@ -45,7 +45,7 @@
model_files = []
for file in os.listdir(model_path):
file_path = os.path.join(model_path, file)
- if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):
+ if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSIONS):
model_files.append(file_path)
if len(model_files) == 0:
raise ModelMissingError(model_path)
| {"golden_diff": "diff --git a/python/xgbserver/xgbserver/model.py b/python/xgbserver/xgbserver/model.py\n--- a/python/xgbserver/xgbserver/model.py\n+++ b/python/xgbserver/xgbserver/model.py\n@@ -25,7 +25,7 @@\n from kserve import Model\n from kserve.storage import Storage\n \n-BOOSTER_FILE_EXTENSION = \".bst\"\n+BOOSTER_FILE_EXTENSIONS = (\".bst\", \".json\", \".ubj\")\n \n \n class XGBoostModel(Model):\n@@ -45,7 +45,7 @@\n model_files = []\n for file in os.listdir(model_path):\n file_path = os.path.join(model_path, file)\n- if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):\n+ if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSIONS):\n model_files.append(file_path)\n if len(model_files) == 0:\n raise ModelMissingError(model_path)\n", "issue": "Support .json and .ubj model format for XGBoost server image\n/kind feature\r\n\r\n\r\n**Description**\r\nIn the XGBoost image, the only supported model format is .bst: https://github.com/kserve/kserve/blob/56b8fe0d189fc0d557e9a8af07eab0c12852d5fd/python/xgbserver/xgbserver/model.py#L28\r\n\r\nThis format has been deprecated for a while and is not backwards compatible between xgboost framework versions. The recommended model format is .json or .ubj: https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html\r\n\r\nUsers that want to use the recommended model format for XGBoost models, are currently not able to do so.\r\n\r\n\r\n**Proposed solution**\r\nSupport the recommended file formats, while also keeping support for the old .bst format. \r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport os\nfrom typing import Dict, Union\n\nimport xgboost as xgb\nfrom kserve.errors import InferenceError, ModelMissingError\nfrom kserve.protocol.infer_type import InferRequest, InferResponse\nfrom kserve.utils.utils import get_predict_input, get_predict_response\nfrom xgboost import XGBModel\n\nfrom kserve import Model\nfrom kserve.storage import Storage\n\nBOOSTER_FILE_EXTENSION = \".bst\"\n\n\nclass XGBoostModel(Model):\n def __init__(\n self, name: str, model_dir: str, nthread: int, booster: XGBModel = None\n ):\n super().__init__(name)\n self.name = name\n self.model_dir = model_dir\n self.nthread = nthread\n if booster is not None:\n self._booster = booster\n self.ready = True\n\n def load(self) -> bool:\n model_path = Storage.download(self.model_dir)\n model_files = []\n for file in os.listdir(model_path):\n file_path = os.path.join(model_path, file)\n if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):\n model_files.append(file_path)\n if len(model_files) == 0:\n raise ModelMissingError(model_path)\n elif len(model_files) > 1:\n raise RuntimeError(\n \"More than one model file is detected, \"\n f\"Only one is allowed within model_dir: {model_files}\"\n )\n\n self._booster = xgb.Booster(\n params={\"nthread\": self.nthread}, model_file=model_files[0]\n )\n self.ready = True\n return 
self.ready\n\n def predict(\n self, payload: Union[Dict, InferRequest], headers: Dict[str, str] = None\n ) -> Union[Dict, InferResponse]:\n try:\n # Use of list as input is deprecated see https://github.com/dmlc/xgboost/pull/3970\n instances = get_predict_input(payload)\n dmatrix = xgb.DMatrix(instances, nthread=self.nthread)\n result = self._booster.predict(dmatrix)\n return get_predict_response(payload, result, self.name)\n except Exception as e:\n raise InferenceError(str(e))\n", "path": "python/xgbserver/xgbserver/model.py"}]} | 1,503 | 207 |
gh_patches_debug_47979 | rasdani/github-patches | git_diff | TheAlgorithms__Python-10664 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve our test coverage
### Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because it can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.
### How to find low-coverage files
Go to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:
```
---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------------------------------
quantum/q_fourier_transform.py 30 30 0% 14-93
scripts/validate_solutions.py 54 54 0% 2-94
strings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129
...
```
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.
Some files will naturally be hard to write tests for. For example, a file may be poorly written because it lacks any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.
_**When you open your PR, put "Contributes to #9943" in the PR description.**_ Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.
### How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:
```py
def add(a: int, b: int) -> int:
"""
Adds two non-negative numbers.
>>> add(1, 1)
2
>>> add(2, 5)
7
>>> add(1, 0)
1
>>> add(-1, -1)
    Traceback (most recent call last):
...
ValueError: Numbers must be non-negative
"""
```
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.
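One way to check a file's doctests locally uses only the standard library; the file path below is just an example from this repository:
```py
import doctest
import importlib.util

# Load a module from its file path, then run every doctest it contains.
spec = importlib.util.spec_from_file_location(
    "power_using_recursion", "maths/power_using_recursion.py"
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
print(doctest.testmod(module, verbose=True))
```
Running `python -m doctest maths/power_using_recursion.py -v` from the repository root is equivalent.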
_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_
</issue>
<code>
[start of maths/power_using_recursion.py]
1 """
2 == Raise base to the power of exponent using recursion ==
3 Input -->
4 Enter the base: 3
5 Enter the exponent: 4
6 Output -->
7 3 to the power of 4 is 81
8 Input -->
9 Enter the base: 2
10 Enter the exponent: 0
11 Output -->
12 2 to the power of 0 is 1
13 """
14
15
16 def power(base: int, exponent: int) -> float:
17 """
18 >>> power(3, 4)
19 81
20 >>> power(2, 0)
21 1
22 >>> all(power(base, exponent) == pow(base, exponent)
23 ... for base in range(-10, 10) for exponent in range(10))
24 True
25 >>> power('a', 1)
26 'a'
27 >>> power('a', 2)
28 Traceback (most recent call last):
29 ...
30 TypeError: can't multiply sequence by non-int of type 'str'
31 >>> power('a', 'b')
32 Traceback (most recent call last):
33 ...
34 TypeError: unsupported operand type(s) for -: 'str' and 'int'
35 >>> power(2, -1)
36 Traceback (most recent call last):
37 ...
38 RecursionError: maximum recursion depth exceeded
39 """
40 return base * power(base, (exponent - 1)) if exponent else 1
41
42
43 if __name__ == "__main__":
44     from doctest import testmod
45
46 testmod()
47 print("Raise base to the power of exponent using recursion...")
48 base = int(input("Enter the base: ").strip())
49 exponent = int(input("Enter the exponent: ").strip())
50 result = power(base, abs(exponent))
51 if exponent < 0: # power() does not properly deal w/ negative exponents
52 result = 1 / result
53 print(f"{base} to the power of {exponent} is {result}")
54
[end of maths/power_using_recursion.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/maths/power_using_recursion.py b/maths/power_using_recursion.py
--- a/maths/power_using_recursion.py
+++ b/maths/power_using_recursion.py
@@ -15,6 +15,8 @@
def power(base: int, exponent: int) -> float:
"""
+ Calculate the power of a base raised to an exponent.
+
>>> power(3, 4)
81
>>> power(2, 0)
| {"golden_diff": "diff --git a/maths/power_using_recursion.py b/maths/power_using_recursion.py\n--- a/maths/power_using_recursion.py\n+++ b/maths/power_using_recursion.py\n@@ -15,6 +15,8 @@\n \n def power(base: int, exponent: int) -> float:\n \"\"\"\n+ Calculate the power of a base raised to an exponent.\n+\n >>> power(3, 4)\n 81\n >>> power(2, 0)\n", "issue": "Improve our test coverage\n### Feature description\r\n\r\nMany of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.\r\n\r\n### How to find low-coverage files\r\n\r\nGo to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under \"Run Tests\" and scroll down until you find the section on code coverage:\r\n```\r\n---------- coverage: platform linux, python 3.12.0-final-0 -----------\r\nName Stmts Miss Cover Missing\r\n-----------------------------------------------------------------------------------------------------------\r\nquantum/q_fourier_transform.py 30 30 0% 14-93\r\nscripts/validate_solutions.py 54 54 0% 2-94\r\nstrings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129\r\n...\r\n```\r\nThe \"Cover\" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.\r\n\r\nSome files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.\r\n\r\n_**When you open your PR, put \"Contributes to #9943\" in the PR description.**_ Do not use the word \"fixes\", \"resolves\", or \"closes\". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.\r\n\r\n### How to add doctests\r\n\r\nA doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:\r\n```py\r\ndef add(a: int, b: int) -> int:\r\n \"\"\"\r\n Adds two non-negative numbers.\r\n >>> add(1, 1)\r\n 2\r\n >>> add(2, 5)\r\n 7\r\n >>> add(1, 0)\r\n 1\r\n >>> add(-1, -1)\r\n Traceback (most recent last):\r\n ...\r\n ValueError: Numbers must be non-negative\r\n \"\"\"\r\n```\r\nFor every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).\r\n\r\nDo not simply run a function on some example inputs and put its output as the expected output for a doctest. 
This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.\r\n\r\n_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_\n", "before_files": [{"content": "\"\"\"\n== Raise base to the power of exponent using recursion ==\n Input -->\n Enter the base: 3\n Enter the exponent: 4\n Output -->\n 3 to the power of 4 is 81\n Input -->\n Enter the base: 2\n Enter the exponent: 0\n Output -->\n 2 to the power of 0 is 1\n\"\"\"\n\n\ndef power(base: int, exponent: int) -> float:\n \"\"\"\n >>> power(3, 4)\n 81\n >>> power(2, 0)\n 1\n >>> all(power(base, exponent) == pow(base, exponent)\n ... for base in range(-10, 10) for exponent in range(10))\n True\n >>> power('a', 1)\n 'a'\n >>> power('a', 2)\n Traceback (most recent call last):\n ...\n TypeError: can't multiply sequence by non-int of type 'str'\n >>> power('a', 'b')\n Traceback (most recent call last):\n ...\n TypeError: unsupported operand type(s) for -: 'str' and 'int'\n >>> power(2, -1)\n Traceback (most recent call last):\n ...\n RecursionError: maximum recursion depth exceeded\n \"\"\"\n return base * power(base, (exponent - 1)) if exponent else 1\n\n\nif __name__ == \"__main__\":\n from doctests import testmod\n\n testmod()\n print(\"Raise base to the power of exponent using recursion...\")\n base = int(input(\"Enter the base: \").strip())\n exponent = int(input(\"Enter the exponent: \").strip())\n result = power(base, abs(exponent))\n if exponent < 0: # power() does not properly deal w/ negative exponents\n result = 1 / result\n print(f\"{base} to the power of {exponent} is {result}\")\n", "path": "maths/power_using_recursion.py"}]} | 1,904 | 106 |
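A note on the workflow this row illustrates: the standard-library module is `doctest` (singular), so the `from doctests import testmod` in the file above would raise an ImportError. A minimal, self-contained sketch of the requested doctest pattern, using a `power` function that mirrors the one in this row:

```python
def power(base: int, exponent: int) -> int:
    """Raise base to a non-negative integer exponent.

    >>> power(3, 4)
    81
    >>> power(2, 0)
    1
    """
    return base * power(base, exponent - 1) if exponent else 1


if __name__ == "__main__":
    import doctest

    doctest.testmod()  # silent when every embedded example passes
```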
gh_patches_debug_28309 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1142 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
include version information in error log
It would be useful to include things like:
- pre-commit version
- sys.version
- sys.executable
</issue>
<code>
[start of pre_commit/error_handler.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import contextlib
6 import os.path
7 import traceback
8
9 import six
10
11 from pre_commit import five
12 from pre_commit import output
13 from pre_commit.store import Store
14
15
16 class FatalError(RuntimeError):
17 pass
18
19
20 def _to_bytes(exc):
21 try:
22 return bytes(exc)
23 except Exception:
24 return six.text_type(exc).encode('UTF-8')
25
26
27 def _log_and_exit(msg, exc, formatted):
28 error_msg = b''.join((
29 five.to_bytes(msg), b': ',
30 five.to_bytes(type(exc).__name__), b': ',
31 _to_bytes(exc), b'\n',
32 ))
33 output.write(error_msg)
34 store = Store()
35 log_path = os.path.join(store.directory, 'pre-commit.log')
36 output.write_line('Check the log at {}'.format(log_path))
37 with open(log_path, 'wb') as log:
38 output.write(error_msg, stream=log)
39 output.write_line(formatted, stream=log)
40 raise SystemExit(1)
41
42
43 @contextlib.contextmanager
44 def error_handler():
45 try:
46 yield
47 except (Exception, KeyboardInterrupt) as e:
48 if isinstance(e, FatalError):
49 msg = 'An error has occurred'
50 elif isinstance(e, KeyboardInterrupt):
51 msg = 'Interrupted (^C)'
52 else:
53 msg = 'An unexpected error has occurred'
54 _log_and_exit(msg, e, traceback.format_exc())
55
[end of pre_commit/error_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py
--- a/pre_commit/error_handler.py
+++ b/pre_commit/error_handler.py
@@ -4,10 +4,12 @@
import contextlib
import os.path
+import sys
import traceback
import six
+import pre_commit.constants as C
from pre_commit import five
from pre_commit import output
from pre_commit.store import Store
@@ -34,9 +36,36 @@
store = Store()
log_path = os.path.join(store.directory, 'pre-commit.log')
output.write_line('Check the log at {}'.format(log_path))
+
with open(log_path, 'wb') as log:
+ output.write_line(
+ '### version information\n```', stream=log,
+ )
+ output.write_line(
+ 'pre-commit.version: {}'.format(C.VERSION), stream=log,
+ )
+ output.write_line(
+ 'sys.version:\n{}'.format(
+ '\n'.join(
+ [
+ ' {}'.format(line)
+ for line in sys.version.splitlines()
+ ],
+ ),
+ ),
+ stream=log,
+ )
+ output.write_line(
+ 'sys.executable: {}'.format(sys.executable), stream=log,
+ )
+ output.write_line('os.name: {}'.format(os.name), stream=log)
+ output.write_line(
+ 'sys.platform: {}\n```'.format(sys.platform), stream=log,
+ )
+ output.write_line('### error information\n```', stream=log)
output.write(error_msg, stream=log)
output.write_line(formatted, stream=log)
+ output.write('\n```\n', stream=log)
raise SystemExit(1)
| {"golden_diff": "diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py\n--- a/pre_commit/error_handler.py\n+++ b/pre_commit/error_handler.py\n@@ -4,10 +4,12 @@\n \n import contextlib\n import os.path\n+import sys\n import traceback\n \n import six\n \n+import pre_commit.constants as C\n from pre_commit import five\n from pre_commit import output\n from pre_commit.store import Store\n@@ -34,9 +36,36 @@\n store = Store()\n log_path = os.path.join(store.directory, 'pre-commit.log')\n output.write_line('Check the log at {}'.format(log_path))\n+\n with open(log_path, 'wb') as log:\n+ output.write_line(\n+ '### version information\\n```', stream=log,\n+ )\n+ output.write_line(\n+ 'pre-commit.version: {}'.format(C.VERSION), stream=log,\n+ )\n+ output.write_line(\n+ 'sys.version:\\n{}'.format(\n+ '\\n'.join(\n+ [\n+ ' {}'.format(line)\n+ for line in sys.version.splitlines()\n+ ],\n+ ),\n+ ),\n+ stream=log,\n+ )\n+ output.write_line(\n+ 'sys.executable: {}'.format(sys.executable), stream=log,\n+ )\n+ output.write_line('os.name: {}'.format(os.name), stream=log)\n+ output.write_line(\n+ 'sys.platform: {}\\n```'.format(sys.platform), stream=log,\n+ )\n+ output.write_line('### error information\\n```', stream=log)\n output.write(error_msg, stream=log)\n output.write_line(formatted, stream=log)\n+ output.write('\\n```\\n', stream=log)\n raise SystemExit(1)\n", "issue": "include version information in error log\nwould be useful to include things like:\r\n\r\n- pre-commit version\r\n- sys.version\r\n- sys.executable\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport contextlib\nimport os.path\nimport traceback\n\nimport six\n\nfrom pre_commit import five\nfrom pre_commit import output\nfrom pre_commit.store import Store\n\n\nclass FatalError(RuntimeError):\n pass\n\n\ndef _to_bytes(exc):\n try:\n return bytes(exc)\n except Exception:\n return six.text_type(exc).encode('UTF-8')\n\n\ndef _log_and_exit(msg, exc, formatted):\n error_msg = b''.join((\n five.to_bytes(msg), b': ',\n five.to_bytes(type(exc).__name__), b': ',\n _to_bytes(exc), b'\\n',\n ))\n output.write(error_msg)\n store = Store()\n log_path = os.path.join(store.directory, 'pre-commit.log')\n output.write_line('Check the log at {}'.format(log_path))\n with open(log_path, 'wb') as log:\n output.write(error_msg, stream=log)\n output.write_line(formatted, stream=log)\n raise SystemExit(1)\n\n\[email protected]\ndef error_handler():\n try:\n yield\n except (Exception, KeyboardInterrupt) as e:\n if isinstance(e, FatalError):\n msg = 'An error has occurred'\n elif isinstance(e, KeyboardInterrupt):\n msg = 'Interrupted (^C)'\n else:\n msg = 'An unexpected error has occurred'\n _log_and_exit(msg, e, traceback.format_exc())\n", "path": "pre_commit/error_handler.py"}]} | 989 | 379 |
gh_patches_debug_32024 | rasdani/github-patches | git_diff | medtagger__MedTagger-391 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove error about not picking category properly
## Current Behavior
When a user accesses the labeling page without choosing the category via the category page, he/she receives an error about not choosing the category properly. While this is necessary to prevent users from accessing this page directly, it makes development more difficult: every time the front-end loads, the developer has to go back to the category page.
## Expected Behavior
There shouldn't be an error about not picking the category properly.
## Steps to Reproduce the Problem
 1. Go to the labeling page `/labeling` without going through the category page.
## Additional comment (optional)
We should probably get the category using `queryParams`, as before, and load the current category on the marker page.
</issue>
<code>
[start of backend/medtagger/api/tasks/service_rest.py]
1 """Module responsible for definition of Tasks service available via HTTP REST API."""
2 from typing import Any
3
4 from flask import request
5 from flask_restplus import Resource
6
7 from medtagger.api import api
8 from medtagger.api.tasks import business, serializers
9 from medtagger.api.security import login_required, role_required
10 from medtagger.database.models import LabelTag
11
12 tasks_ns = api.namespace('tasks', 'Methods related with tasks')
13
14
15 @tasks_ns.route('')
16 class Tasks(Resource):
17 """Endpoint that manages tasks."""
18
19 @staticmethod
20 @login_required
21 @tasks_ns.marshal_with(serializers.out__task)
22 @tasks_ns.doc(security='token')
23 @tasks_ns.doc(description='Return all available tasks.')
24 @tasks_ns.doc(responses={200: 'Success'})
25 def get() -> Any:
26 """Return all available tasks."""
27 return business.get_tasks()
28
29 @staticmethod
30 @login_required
31 @role_required('admin')
32 @tasks_ns.expect(serializers.in__task)
33 @tasks_ns.marshal_with(serializers.out__task)
34 @tasks_ns.doc(security='token')
35 @tasks_ns.doc(description='Create new Task.')
36 @tasks_ns.doc(responses={201: 'Success'})
37 def post() -> Any:
38 """Create new Task."""
39 payload = request.json
40
41 key = payload['key']
42 name = payload['name']
43 image_path = payload['image_path']
44 datasets_keys = payload['datasets_keys']
45 tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]
46
47 return business.create_task(key, name, image_path, datasets_keys, tags), 201
48
[end of backend/medtagger/api/tasks/service_rest.py]
[start of backend/medtagger/api/tasks/business.py]
1 """Module responsible for business logic in all Tasks endpoints."""
2 from typing import List
3
4 from medtagger.database.models import Task, LabelTag
5 from medtagger.repositories import (
6 tasks as TasksRepository,
7 )
8
9
10 def get_tasks() -> List[Task]:
11 """Fetch all tasks.
12
13 :return: list of tasks
14 """
15 return TasksRepository.get_all_tasks()
16
17
18 def create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:
19 """Create new Task.
20
21 :param key: unique key representing Task
22 :param name: name which describes this Task
23 :param image_path: path to the image which is located on the frontend
24 :param datasets_keys: Keys of Datasets that Task takes Scans from
25 :param tags: Label Tags that will be created and assigned to Task
26 :return: Task object
27 """
28 return TasksRepository.add_task(key, name, image_path, datasets_keys, tags)
29
[end of backend/medtagger/api/tasks/business.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/medtagger/api/tasks/business.py b/backend/medtagger/api/tasks/business.py
--- a/backend/medtagger/api/tasks/business.py
+++ b/backend/medtagger/api/tasks/business.py
@@ -1,6 +1,9 @@
"""Module responsible for business logic in all Tasks endpoints."""
from typing import List
+from sqlalchemy.orm.exc import NoResultFound
+
+from medtagger.api.exceptions import NotFoundException
from medtagger.database.models import Task, LabelTag
from medtagger.repositories import (
tasks as TasksRepository,
@@ -15,6 +18,17 @@
return TasksRepository.get_all_tasks()
+def get_task_for_key(task_key: str) -> Task:
+ """Fetch Task for given key.
+
+ :return: Task
+ """
+ try:
+ return TasksRepository.get_task_by_key(task_key)
+ except NoResultFound:
+ raise NotFoundException('Did not found task for {} key!'.format(task_key))
+
+
def create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:
"""Create new Task.
diff --git a/backend/medtagger/api/tasks/service_rest.py b/backend/medtagger/api/tasks/service_rest.py
--- a/backend/medtagger/api/tasks/service_rest.py
+++ b/backend/medtagger/api/tasks/service_rest.py
@@ -43,5 +43,19 @@
image_path = payload['image_path']
datasets_keys = payload['datasets_keys']
tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]
-
return business.create_task(key, name, image_path, datasets_keys, tags), 201
+
+
+@tasks_ns.route('/<string:task_key>')
+class Task(Resource):
+ """Endpoint that manages single task."""
+
+ @staticmethod
+ @login_required
+ @tasks_ns.marshal_with(serializers.out__task)
+ @tasks_ns.doc(security='token')
+ @tasks_ns.doc(description='Get task for given key.')
+ @tasks_ns.doc(responses={200: 'Success', 404: 'Could not find task'})
+ def get(task_key: str) -> Any:
+ """Return task for given key."""
+ return business.get_task_for_key(task_key)
| {"golden_diff": "diff --git a/backend/medtagger/api/tasks/business.py b/backend/medtagger/api/tasks/business.py\n--- a/backend/medtagger/api/tasks/business.py\n+++ b/backend/medtagger/api/tasks/business.py\n@@ -1,6 +1,9 @@\n \"\"\"Module responsible for business logic in all Tasks endpoints.\"\"\"\n from typing import List\n \n+from sqlalchemy.orm.exc import NoResultFound\n+\n+from medtagger.api.exceptions import NotFoundException\n from medtagger.database.models import Task, LabelTag\n from medtagger.repositories import (\n tasks as TasksRepository,\n@@ -15,6 +18,17 @@\n return TasksRepository.get_all_tasks()\n \n \n+def get_task_for_key(task_key: str) -> Task:\n+ \"\"\"Fetch Task for given key.\n+\n+ :return: Task\n+ \"\"\"\n+ try:\n+ return TasksRepository.get_task_by_key(task_key)\n+ except NoResultFound:\n+ raise NotFoundException('Did not found task for {} key!'.format(task_key))\n+\n+\n def create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:\n \"\"\"Create new Task.\n \ndiff --git a/backend/medtagger/api/tasks/service_rest.py b/backend/medtagger/api/tasks/service_rest.py\n--- a/backend/medtagger/api/tasks/service_rest.py\n+++ b/backend/medtagger/api/tasks/service_rest.py\n@@ -43,5 +43,19 @@\n image_path = payload['image_path']\n datasets_keys = payload['datasets_keys']\n tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]\n-\n return business.create_task(key, name, image_path, datasets_keys, tags), 201\n+\n+\n+@tasks_ns.route('/<string:task_key>')\n+class Task(Resource):\n+ \"\"\"Endpoint that manages single task.\"\"\"\n+\n+ @staticmethod\n+ @login_required\n+ @tasks_ns.marshal_with(serializers.out__task)\n+ @tasks_ns.doc(security='token')\n+ @tasks_ns.doc(description='Get task for given key.')\n+ @tasks_ns.doc(responses={200: 'Success', 404: 'Could not find task'})\n+ def get(task_key: str) -> Any:\n+ \"\"\"Return task for given key.\"\"\"\n+ return business.get_task_for_key(task_key)\n", "issue": "Remove error about not picking category properly\n## Current Behavior\r\n\r\nWhen user access labeling page without choosing the category via the category page he/she receives an error about not choosing the category properly. While this is necessary for preventing users accessing this page, it makes development more difficult. Every time when front-end loads, developer has to go back to category page.\r\n\r\n## Expected Behavior\r\n\r\nThere shouldn't be an error about not picking category properly. \r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. 
Go to labeling page `/labeling` without going through category page.\r\n\r\n## Additional comment (optional)\r\n\r\nWe should probably get category using `queryParams` like before and load current category on marker page.\r\n\n", "before_files": [{"content": "\"\"\"Module responsible for definition of Tasks service available via HTTP REST API.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.tasks import business, serializers\nfrom medtagger.api.security import login_required, role_required\nfrom medtagger.database.models import LabelTag\n\ntasks_ns = api.namespace('tasks', 'Methods related with tasks')\n\n\n@tasks_ns.route('')\nclass Tasks(Resource):\n \"\"\"Endpoint that manages tasks.\"\"\"\n\n @staticmethod\n @login_required\n @tasks_ns.marshal_with(serializers.out__task)\n @tasks_ns.doc(security='token')\n @tasks_ns.doc(description='Return all available tasks.')\n @tasks_ns.doc(responses={200: 'Success'})\n def get() -> Any:\n \"\"\"Return all available tasks.\"\"\"\n return business.get_tasks()\n\n @staticmethod\n @login_required\n @role_required('admin')\n @tasks_ns.expect(serializers.in__task)\n @tasks_ns.marshal_with(serializers.out__task)\n @tasks_ns.doc(security='token')\n @tasks_ns.doc(description='Create new Task.')\n @tasks_ns.doc(responses={201: 'Success'})\n def post() -> Any:\n \"\"\"Create new Task.\"\"\"\n payload = request.json\n\n key = payload['key']\n name = payload['name']\n image_path = payload['image_path']\n datasets_keys = payload['datasets_keys']\n tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]\n\n return business.create_task(key, name, image_path, datasets_keys, tags), 201\n", "path": "backend/medtagger/api/tasks/service_rest.py"}, {"content": "\"\"\"Module responsible for business logic in all Tasks endpoints.\"\"\"\nfrom typing import List\n\nfrom medtagger.database.models import Task, LabelTag\nfrom medtagger.repositories import (\n tasks as TasksRepository,\n)\n\n\ndef get_tasks() -> List[Task]:\n \"\"\"Fetch all tasks.\n\n :return: list of tasks\n \"\"\"\n return TasksRepository.get_all_tasks()\n\n\ndef create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:\n \"\"\"Create new Task.\n\n :param key: unique key representing Task\n :param name: name which describes this Task\n :param image_path: path to the image which is located on the frontend\n :param datasets_keys: Keys of Datasets that Task takes Scans from\n :param tags: Label Tags that will be created and assigned to Task\n :return: Task object\n \"\"\"\n return TasksRepository.add_task(key, name, image_path, datasets_keys, tags)\n", "path": "backend/medtagger/api/tasks/business.py"}]} | 1,427 | 527 |
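The server-side half of the fix is a lookup-by-key that translates the ORM's empty result into the API's 404; a standalone sketch (the two exception classes and the dict below are placeholders for SQLAlchemy's `NoResultFound`, MedTagger's `NotFoundException`, and the tasks repository):

```python
class NoResultFound(Exception):
    """Placeholder for sqlalchemy.orm.exc.NoResultFound."""


class NotFoundException(Exception):
    """Placeholder for medtagger.api.exceptions.NotFoundException."""


_TASKS = {"MARK_KIDNEYS": {"key": "MARK_KIDNEYS", "name": "Mark kidneys"}}


def get_task_by_key(task_key):
    try:
        return _TASKS[task_key]
    except KeyError:
        raise NoResultFound()


def get_task_for_key(task_key):
    try:
        return get_task_by_key(task_key)
    except NoResultFound:
        raise NotFoundException("Did not find task for {} key!".format(task_key))


print(get_task_for_key("MARK_KIDNEYS")["name"])  # -> Mark kidneys
```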
gh_patches_debug_25052 | rasdani/github-patches | git_diff | Pylons__pyramid-2759 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Drop Python 3.3 support
This is a placeholder for Pyramid 1.8 to drop Python 3.3 support.
Creating a new issue, splitting it off from https://github.com/Pylons/pyramid/issues/2368.
</issue>
<code>
[start of setup.py]
1 ##############################################################################
2 #
3 # Copyright (c) 2008-2013 Agendaless Consulting and Contributors.
4 # All Rights Reserved.
5 #
6 # This software is subject to the provisions of the BSD-like license at
7 # http://www.repoze.org/LICENSE.txt. A copy of the license should accompany
8 # this distribution. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL
9 # EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,
10 # THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND
11 # FITNESS FOR A PARTICULAR PURPOSE
12 #
13 ##############################################################################
14
15 import os
16 import sys
17
18 from setuptools import setup, find_packages
19
20 py_version = sys.version_info[:2]
21 is_pypy = '__pypy__' in sys.builtin_module_names
22
23 PY3 = py_version[0] == 3
24
25 if PY3:
26 if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...
27 raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')
28 else:
29 if py_version < (2, 6):
30 raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')
31
32 here = os.path.abspath(os.path.dirname(__file__))
33 try:
34 with open(os.path.join(here, 'README.rst')) as f:
35 README = f.read()
36 with open(os.path.join(here, 'CHANGES.txt')) as f:
37 CHANGES = f.read()
38 except IOError:
39 README = CHANGES = ''
40
41 install_requires = [
42 'setuptools',
43 'WebOb >= 1.3.1', # request.domain and CookieProfile
44 'repoze.lru >= 0.4', # py3 compat
45 'zope.interface >= 3.8.0', # has zope.interface.registry
46 'zope.deprecation >= 3.5.0', # py3 compat
47 'venusian >= 1.0a3', # ``ignore``
48 'translationstring >= 0.4', # py3 compat
49 'PasteDeploy >= 1.5.0', # py3 compat
50 ]
51
52 tests_require = [
53 'WebTest >= 1.3.1', # py3 compat
54 ]
55
56 if not PY3:
57 tests_require.append('zope.component>=3.11.0')
58
59 docs_extras = [
60 'Sphinx >= 1.3.5',
61 'docutils',
62 'repoze.sphinx.autointerface',
63 'pylons_sphinx_latesturl',
64 'pylons-sphinx-themes',
65 'sphinxcontrib-programoutput',
66 ]
67
68 testing_extras = tests_require + [
69 'nose',
70 'coverage',
71 'virtualenv', # for scaffolding tests
72 ]
73
74 setup(name='pyramid',
75 version='1.8.dev0',
76 description='The Pyramid Web Framework, a Pylons project',
77 long_description=README + '\n\n' + CHANGES,
78 classifiers=[
79 "Development Status :: 6 - Mature",
80 "Intended Audience :: Developers",
81 "Programming Language :: Python",
82 "Programming Language :: Python :: 2.7",
83 "Programming Language :: Python :: 3",
84 "Programming Language :: Python :: 3.3",
85 "Programming Language :: Python :: 3.4",
86 "Programming Language :: Python :: 3.5",
87 "Programming Language :: Python :: Implementation :: CPython",
88 "Programming Language :: Python :: Implementation :: PyPy",
89 "Framework :: Pyramid",
90 "Topic :: Internet :: WWW/HTTP",
91 "Topic :: Internet :: WWW/HTTP :: WSGI",
92 "License :: Repoze Public License",
93 ],
94 keywords='web wsgi pylons pyramid',
95 author="Chris McDonough, Agendaless Consulting",
96 author_email="[email protected]",
97 url="https://trypyramid.com",
98 license="BSD-derived (http://www.repoze.org/LICENSE.txt)",
99 packages=find_packages(),
100 include_package_data=True,
101 zip_safe=False,
102 install_requires=install_requires,
103 extras_require={
104 'testing': testing_extras,
105 'docs': docs_extras,
106 },
107 tests_require=tests_require,
108 test_suite="pyramid.tests",
109 entry_points="""\
110 [pyramid.scaffold]
111 starter=pyramid.scaffolds:StarterProjectTemplate
112 zodb=pyramid.scaffolds:ZODBProjectTemplate
113 alchemy=pyramid.scaffolds:AlchemyProjectTemplate
114 [pyramid.pshell_runner]
115 python=pyramid.scripts.pshell:python_shell_runner
116 [console_scripts]
117 pcreate = pyramid.scripts.pcreate:main
118 pserve = pyramid.scripts.pserve:main
119 pshell = pyramid.scripts.pshell:main
120 proutes = pyramid.scripts.proutes:main
121 pviews = pyramid.scripts.pviews:main
122 ptweens = pyramid.scripts.ptweens:main
123 prequest = pyramid.scripts.prequest:main
124 pdistreport = pyramid.scripts.pdistreport:main
125 [paste.server_runner]
126 wsgiref = pyramid.scripts.pserve:wsgiref_server_runner
127 cherrypy = pyramid.scripts.pserve:cherrypy_server_runner
128 """
129 )
130
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,16 +18,15 @@
from setuptools import setup, find_packages
py_version = sys.version_info[:2]
-is_pypy = '__pypy__' in sys.builtin_module_names
PY3 = py_version[0] == 3
if PY3:
- if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...
- raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')
+ if py_version < (3, 4):
+ raise RuntimeError('On Python 3, Pyramid requires Python 3.4 or better')
else:
- if py_version < (2, 6):
- raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')
+ if py_version < (2, 7):
+ raise RuntimeError('On Python 2, Pyramid requires Python 2.7 or better')
here = os.path.abspath(os.path.dirname(__file__))
try:
@@ -81,7 +80,6 @@
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: Implementation :: CPython",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,16 +18,15 @@\n from setuptools import setup, find_packages\n \n py_version = sys.version_info[:2]\n-is_pypy = '__pypy__' in sys.builtin_module_names\n \n PY3 = py_version[0] == 3\n \n if PY3:\n- if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...\n- raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')\n+ if py_version < (3, 4):\n+ raise RuntimeError('On Python 3, Pyramid requires Python 3.4 or better')\n else:\n- if py_version < (2, 6):\n- raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')\n+ if py_version < (2, 7):\n+ raise RuntimeError('On Python 2, Pyramid requires Python 2.7 or better')\n \n here = os.path.abspath(os.path.dirname(__file__))\n try:\n@@ -81,7 +80,6 @@\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n", "issue": "Drop Python 3.3 support\nThis is a placeholder for Pyramid 1.8 to drop Python 3.3 support.\n\nCreating a new issue, splitting it off from https://github.com/Pylons/pyramid/issues/2368.\n\n", "before_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\npy_version = sys.version_info[:2]\nis_pypy = '__pypy__' in sys.builtin_module_names\n\nPY3 = py_version[0] == 3\n\nif PY3:\n if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...\n raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')\nelse:\n if py_version < (2, 6):\n raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires = [\n 'setuptools',\n 'WebOb >= 1.3.1', # request.domain and CookieProfile\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n ]\n\nif not PY3:\n tests_require.append('zope.component>=3.11.0')\n\ndocs_extras = [\n 'Sphinx >= 1.3.5',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-programoutput',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n 
]\n\nsetup(name='pyramid',\n version='1.8.dev0',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Development Status :: 6 - Mature\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"https://trypyramid.com\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': docs_extras,\n },\n tests_require=tests_require,\n test_suite=\"pyramid.tests\",\n entry_points=\"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [pyramid.pshell_runner]\n python=pyramid.scripts.pshell:python_shell_runner\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n", "path": "setup.py"}]} | 2,021 | 338 |
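The behavioural core of the diff, runnable on its own — the floors simply move from 2.6/3.3 to 2.7/3.4, and the PyPy special case disappears:

```python
import sys

py_version = sys.version_info[:2]

if py_version[0] == 3:
    if py_version < (3, 4):
        raise RuntimeError("On Python 3, Pyramid requires Python 3.4 or better")
elif py_version < (2, 7):
    raise RuntimeError("On Python 2, Pyramid requires Python 2.7 or better")

print("interpreter accepted:", ".".join(map(str, py_version)))
```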
gh_patches_debug_19166 | rasdani/github-patches | git_diff | airctic__icevision-870 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add EfficientDet AdvProp-AA
## 🚀 Feature
Add EfficientDet AdvProp-AA pretrained backbones for D0-D5
See https://github.com/google/automl/blob/master/efficientdet/Det-AdvProp.md
</issue>
<code>
[start of icevision/models/ross/efficientdet/backbones.py]
1 __all__ = [
2 "tf_lite0",
3 "tf_lite1",
4 "tf_lite2",
5 "tf_lite3",
6 "tf_d0",
7 "tf_d1",
8 "tf_d2",
9 "tf_d3",
10 "tf_d4",
11 "tf_d5",
12 "tf_d6",
13 "tf_d7",
14 "tf_d7x",
15 "d0",
16 "d1",
17 "d2",
18 "d3",
19 "d4",
20 "d5",
21 "d6",
22 "d7",
23 "d7x",
24 ]
25
26 from icevision.models.ross.efficientdet.utils import *
27
28
29 tf_lite0 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite0")
30 tf_lite1 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite1")
31 tf_lite2 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite2")
32 tf_lite3 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite3")
33
34 tf_d0 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d0")
35 tf_d1 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d1")
36 tf_d2 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d2")
37 tf_d3 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d3")
38 tf_d4 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d4")
39 tf_d5 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d5")
40 tf_d6 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d6")
41 tf_d7 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d7")
42 tf_d7x = EfficientDetBackboneConfig(model_name="tf_efficientdet_d7x")
43
44 d0 = EfficientDetBackboneConfig(model_name="efficientdet_d0")
45 d1 = EfficientDetBackboneConfig(model_name="efficientdet_d1")
46 d2 = EfficientDetBackboneConfig(model_name="efficientdet_d2")
47 d3 = EfficientDetBackboneConfig(model_name="efficientdet_d3")
48 d4 = EfficientDetBackboneConfig(model_name="efficientdet_d4")
49 d5 = EfficientDetBackboneConfig(model_name="efficientdet_d5")
50 d6 = EfficientDetBackboneConfig(model_name="efficientdet_d6")
51 d7 = EfficientDetBackboneConfig(model_name="efficientdet_d7")
52 d7x = EfficientDetBackboneConfig(model_name="efficientdet_d7x")
53
[end of icevision/models/ross/efficientdet/backbones.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/icevision/models/ross/efficientdet/backbones.py b/icevision/models/ross/efficientdet/backbones.py
--- a/icevision/models/ross/efficientdet/backbones.py
+++ b/icevision/models/ross/efficientdet/backbones.py
@@ -21,6 +21,12 @@
"d6",
"d7",
"d7x",
+ "tf_d0_ap",
+ "tf_d1_ap",
+ "tf_d2_ap",
+ "tf_d3_ap",
+ "tf_d4_ap",
+ "tf_d5_ap",
]
from icevision.models.ross.efficientdet.utils import *
@@ -50,3 +56,10 @@
d6 = EfficientDetBackboneConfig(model_name="efficientdet_d6")
d7 = EfficientDetBackboneConfig(model_name="efficientdet_d7")
d7x = EfficientDetBackboneConfig(model_name="efficientdet_d7x")
+
+tf_d0_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d0_ap")
+tf_d1_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d1_ap")
+tf_d2_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d2_ap")
+tf_d3_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d3_ap")
+tf_d4_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d4_ap")
+tf_d5_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d5_ap")
| {"golden_diff": "diff --git a/icevision/models/ross/efficientdet/backbones.py b/icevision/models/ross/efficientdet/backbones.py\n--- a/icevision/models/ross/efficientdet/backbones.py\n+++ b/icevision/models/ross/efficientdet/backbones.py\n@@ -21,6 +21,12 @@\n \"d6\",\n \"d7\",\n \"d7x\",\n+ \"tf_d0_ap\",\n+ \"tf_d1_ap\",\n+ \"tf_d2_ap\",\n+ \"tf_d3_ap\",\n+ \"tf_d4_ap\",\n+ \"tf_d5_ap\",\n ]\n \n from icevision.models.ross.efficientdet.utils import *\n@@ -50,3 +56,10 @@\n d6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\n d7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\n d7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n+\n+tf_d0_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d0_ap\")\n+tf_d1_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d1_ap\")\n+tf_d2_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d2_ap\")\n+tf_d3_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d3_ap\")\n+tf_d4_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d4_ap\")\n+tf_d5_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d5_ap\")\n", "issue": "Add EfficientDet AdvProp-AA\n## \ud83d\ude80 Feature\r\nAdd EfficientDet AdvProp-AA pretrained backbones for D0-D5\r\n\r\nSee https://github.com/google/automl/blob/master/efficientdet/Det-AdvProp.md\n", "before_files": [{"content": "__all__ = [\n \"tf_lite0\",\n \"tf_lite1\",\n \"tf_lite2\",\n \"tf_lite3\",\n \"tf_d0\",\n \"tf_d1\",\n \"tf_d2\",\n \"tf_d3\",\n \"tf_d4\",\n \"tf_d5\",\n \"tf_d6\",\n \"tf_d7\",\n \"tf_d7x\",\n \"d0\",\n \"d1\",\n \"d2\",\n \"d3\",\n \"d4\",\n \"d5\",\n \"d6\",\n \"d7\",\n \"d7x\",\n]\n\nfrom icevision.models.ross.efficientdet.utils import *\n\n\ntf_lite0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite0\")\ntf_lite1 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite1\")\ntf_lite2 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite2\")\ntf_lite3 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite3\")\n\ntf_d0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d0\")\ntf_d1 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d1\")\ntf_d2 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d2\")\ntf_d3 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d3\")\ntf_d4 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d4\")\ntf_d5 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d5\")\ntf_d6 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d6\")\ntf_d7 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d7\")\ntf_d7x = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d7x\")\n\nd0 = EfficientDetBackboneConfig(model_name=\"efficientdet_d0\")\nd1 = EfficientDetBackboneConfig(model_name=\"efficientdet_d1\")\nd2 = EfficientDetBackboneConfig(model_name=\"efficientdet_d2\")\nd3 = EfficientDetBackboneConfig(model_name=\"efficientdet_d3\")\nd4 = EfficientDetBackboneConfig(model_name=\"efficientdet_d4\")\nd5 = EfficientDetBackboneConfig(model_name=\"efficientdet_d5\")\nd6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\nd7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\nd7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n", "path": "icevision/models/ross/efficientdet/backbones.py"}]} | 1,239 | 349 |
gh_patches_debug_25654 | rasdani/github-patches | git_diff | bokeh__bokeh-5348 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecated example
https://github.com/bokeh/bokeh/blob/0.12.3/examples/embed/simple/simple.py
```
Because the ``resources`` argument is no longer needed, it is deprecated and no longer has any effect.
```
The link is also broken:
http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components
</issue>
<code>
[start of examples/embed/simple/simple.py]
1 '''This example demonstrates embedding a standalone Bokeh document
2 into a simple Flask application, with a basic HTML web form.
3
4 To view the example, run:
5
6 python simple.py
7
8 in this directory, and navigate to:
9
10 http://localhost:5000
11
12 '''
13 from __future__ import print_function
14
15 import flask
16
17 from bokeh.embed import components
18 from bokeh.plotting import figure
19 from bokeh.resources import INLINE
20 from bokeh.util.string import encode_utf8
21
22 app = flask.Flask(__name__)
23
24 colors = {
25 'Black': '#000000',
26 'Red': '#FF0000',
27 'Green': '#00FF00',
28 'Blue': '#0000FF',
29 }
30
31 def getitem(obj, item, default):
32 if item not in obj:
33 return default
34 else:
35 return obj[item]
36
37 @app.route("/")
38 def polynomial():
39 """ Very simple embedding of a polynomial chart
40
41 """
42
43 # Grab the inputs arguments from the URL
44 # This is automated by the button
45 args = flask.request.args
46
47 # Get all the form arguments in the url with defaults
48 color = colors[getitem(args, 'color', 'Black')]
49 _from = int(getitem(args, '_from', 0))
50 to = int(getitem(args, 'to', 10))
51
52 # Create a polynomial line graph
53 x = list(range(_from, to + 1))
54 fig = figure(title="Polynomial")
55 fig.line(x, [i ** 2 for i in x], color=color, line_width=2)
56
57 # Configure resources to include BokehJS inline in the document.
58 # For more details see:
59 # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed
60 js_resources = INLINE.render_js()
61 css_resources = INLINE.render_css()
62
63 # For more details see:
64 # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components
65 script, div = components(fig, INLINE)
66 html = flask.render_template(
67 'embed.html',
68 plot_script=script,
69 plot_div=div,
70 js_resources=js_resources,
71 css_resources=css_resources,
72 color=color,
73 _from=_from,
74 to=to
75 )
76 return encode_utf8(html)
77
78 if __name__ == "__main__":
79 print(__doc__)
80 app.run()
81
[end of examples/embed/simple/simple.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/embed/simple/simple.py b/examples/embed/simple/simple.py
--- a/examples/embed/simple/simple.py
+++ b/examples/embed/simple/simple.py
@@ -41,7 +41,6 @@
"""
# Grab the inputs arguments from the URL
- # This is automated by the button
args = flask.request.args
# Get all the form arguments in the url with defaults
@@ -49,20 +48,15 @@
_from = int(getitem(args, '_from', 0))
to = int(getitem(args, 'to', 10))
- # Create a polynomial line graph
+ # Create a polynomial line graph with those arguments
x = list(range(_from, to + 1))
fig = figure(title="Polynomial")
fig.line(x, [i ** 2 for i in x], color=color, line_width=2)
- # Configure resources to include BokehJS inline in the document.
- # For more details see:
- # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed
js_resources = INLINE.render_js()
css_resources = INLINE.render_css()
- # For more details see:
- # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components
- script, div = components(fig, INLINE)
+ script, div = components(fig)
html = flask.render_template(
'embed.html',
plot_script=script,
| {"golden_diff": "diff --git a/examples/embed/simple/simple.py b/examples/embed/simple/simple.py\n--- a/examples/embed/simple/simple.py\n+++ b/examples/embed/simple/simple.py\n@@ -41,7 +41,6 @@\n \"\"\"\n \n # Grab the inputs arguments from the URL\n- # This is automated by the button\n args = flask.request.args\n \n # Get all the form arguments in the url with defaults\n@@ -49,20 +48,15 @@\n _from = int(getitem(args, '_from', 0))\n to = int(getitem(args, 'to', 10))\n \n- # Create a polynomial line graph\n+ # Create a polynomial line graph with those arguments\n x = list(range(_from, to + 1))\n fig = figure(title=\"Polynomial\")\n fig.line(x, [i ** 2 for i in x], color=color, line_width=2)\n \n- # Configure resources to include BokehJS inline in the document.\n- # For more details see:\n- # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed\n js_resources = INLINE.render_js()\n css_resources = INLINE.render_css()\n \n- # For more details see:\n- # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components\n- script, div = components(fig, INLINE)\n+ script, div = components(fig)\n html = flask.render_template(\n 'embed.html',\n plot_script=script,\n", "issue": "Depreciated example\nhttps://github.com/bokeh/bokeh/blob/0.12.3/examples/embed/simple/simple.py\n\n```\nBecause the ``resources`` argument is no longer needed, it is deprecated and no longer has any effect.\n```\n\nThe link is also broken:\nhttp://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components\n\n", "before_files": [{"content": "'''This example demonstrates embedding a standalone Bokeh document\ninto a simple Flask application, with a basic HTML web form.\n\nTo view the example, run:\n\n python simple.py\n\nin this directory, and navigate to:\n\n http://localhost:5000\n\n'''\nfrom __future__ import print_function\n\nimport flask\n\nfrom bokeh.embed import components\nfrom bokeh.plotting import figure\nfrom bokeh.resources import INLINE\nfrom bokeh.util.string import encode_utf8\n\napp = flask.Flask(__name__)\n\ncolors = {\n 'Black': '#000000',\n 'Red': '#FF0000',\n 'Green': '#00FF00',\n 'Blue': '#0000FF',\n}\n\ndef getitem(obj, item, default):\n if item not in obj:\n return default\n else:\n return obj[item]\n\[email protected](\"/\")\ndef polynomial():\n \"\"\" Very simple embedding of a polynomial chart\n\n \"\"\"\n\n # Grab the inputs arguments from the URL\n # This is automated by the button\n args = flask.request.args\n\n # Get all the form arguments in the url with defaults\n color = colors[getitem(args, 'color', 'Black')]\n _from = int(getitem(args, '_from', 0))\n to = int(getitem(args, 'to', 10))\n\n # Create a polynomial line graph\n x = list(range(_from, to + 1))\n fig = figure(title=\"Polynomial\")\n fig.line(x, [i ** 2 for i in x], color=color, line_width=2)\n\n # Configure resources to include BokehJS inline in the document.\n # For more details see:\n # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed\n js_resources = INLINE.render_js()\n css_resources = INLINE.render_css()\n\n # For more details see:\n # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components\n script, div = components(fig, INLINE)\n html = flask.render_template(\n 'embed.html',\n plot_script=script,\n plot_div=div,\n js_resources=js_resources,\n css_resources=css_resources,\n color=color,\n _from=_from,\n to=to\n )\n return encode_utf8(html)\n\nif __name__ == \"__main__\":\n print(__doc__)\n app.run()\n", "path": 
"examples/embed/simple/simple.py"}]} | 1,302 | 332 |
gh_patches_debug_3980 | rasdani/github-patches | git_diff | data-for-change__anyway-291 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve cluster accuracy
The cluster aggregation of markers in `in_cluster` uses a bounding box instead of a circle (radius) check, which I think may cause duplications and inaccuracy.
</issue>
<code>
[start of static/pymapcluster.py]
1 ##
2 import globalmaptiles as globaltiles
3 from math import cos, sin, atan2, sqrt
4 import time
5 ##
6
7 def center_geolocation(geolocations):
8 """
9 Provide a relatively accurate center lat, lon returned as a list pair, given
10 a list of list pairs.
11 ex: in: geolocations = ((lat1,lon1), (lat2,lon2),)
12 out: (center_lat, center_lon)
13 """
14 x = 0
15 y = 0
16 z = 0
17
18 for lat, lon in geolocations:
19 lat = float(lat)
20 lon = float(lon)
21 x += cos(lat) * cos(lon)
22 y += cos(lat) * sin(lon)
23 z += sin(lat)
24
25 x = float(x / len(geolocations))
26 y = float(y / len(geolocations))
27 z = float(z / len(geolocations))
28
29 return (atan2(y, x), atan2(z, sqrt(x * x + y * y)))
30
31 def latlng_to_zoompixels(mercator, lat, lng, zoom):
32 mx, my = mercator.LatLonToMeters(lat, lng)
33 pix = mercator.MetersToPixels(mx, my, zoom)
34 return pix
35
36 def in_cluster(center, radius, point):
37 return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \
38 and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)
39
40 def cluster_markers(mercator, latlngs, zoom, gridsize=50):
41 """
42 Args:
43 mercator: instance of GlobalMercator()
44 latlngs: list of (lat,lng) tuple
45 zoom: current zoom level
46 gridsize: cluster radius (in pixels in current zoom level)
47 Returns:
48 centers: list of indices in latlngs of points used as centers
49 clusters: list of same length as latlngs giving assigning each point to
50 a cluster
51 """
52 start_time = time.time()
53 centers = []
54 clusters = []
55 sizes = []
56 latlngs = map(lambda latlng: latlng.serialize(), latlngs)
57 for i, latlng in enumerate(latlngs):
58 lat = latlng['latitude']
59 lng = latlng['longitude']
60 point_pix = latlng_to_zoompixels(mercator, lat, lng, zoom)
61 assigned = False
62 for cidx, c in enumerate(centers):
63 center = latlngs[c]
64 center = latlng_to_zoompixels(mercator, center['latitude'], center['longitude'], zoom)
65 if in_cluster(center, gridsize, point_pix):
66 # Assign point to cluster
67 clusters.append(cidx)
68 sizes[cidx] += 1
69 assigned = True
70 break
71 if not assigned:
72 # Create new cluster for point
73 #TODO center_geolocation the center!
74 centers.append(i)
75 sizes.append(1)
76 clusters.append(len(centers) - 1)
77
78 print('time for cluster_markers: ' + str(time.time() - start_time))
79 return centers, clusters, sizes
80
81 def create_clusters_centers(markers, zoom, radius):
82 mercator = globaltiles.GlobalMercator()
83 centers, clusters, sizes = cluster_markers(mercator, markers, zoom, radius)
84 centers_markers = [markers[i] for i in centers]
85 return centers_markers, clusters, sizes
86
87 def get_cluster_json(clust_marker, clust_size):
88 return {
89 'longitude': clust_marker.longitude,
90 'latitude': clust_marker.latitude,
91 'size': clust_size
92 }
93
94 def get_cluster_size(index, clusters):
95 from collections import Counter
96 #TODO: don't call Counter for every cluster in the array
97 return Counter(clusters)[index]
98
99 def generate_clusters_json(markers, zoom, radius=50):
100 centers, clusters, sizes = create_clusters_centers(markers, zoom, radius)
101 json_clusts=[]
102
103 for i, point in enumerate(centers):
104 json_clusts.append(get_cluster_json(point, sizes[i]))
105
106 return {
107 'clusters': json_clusts
108 }
109
110 ##
111 if __name__ == '__main__':
112 ##
113 mercator = globaltiles.GlobalMercator()
114 latlngs = [(28.43, 8), (28.43, 8), (28.44, 8), (35, 8)]
115 centers, clusters = cluster_markers(mercator, latlngs, 21)
116 ##
[end of static/pymapcluster.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/static/pymapcluster.py b/static/pymapcluster.py
--- a/static/pymapcluster.py
+++ b/static/pymapcluster.py
@@ -34,8 +34,7 @@
return pix
def in_cluster(center, radius, point):
- return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \
- and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)
+ return sqrt((point[0] - center[0])**2 + (point[1] - center[1])**2) <= radius
def cluster_markers(mercator, latlngs, zoom, gridsize=50):
"""
| {"golden_diff": "diff --git a/static/pymapcluster.py b/static/pymapcluster.py\n--- a/static/pymapcluster.py\n+++ b/static/pymapcluster.py\n@@ -34,8 +34,7 @@\n return pix\n \n def in_cluster(center, radius, point):\n- return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \\\n- and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)\n+ return sqrt((point[0] - center[0])**2 + (point[1] - center[1])**2) <= radius\n \n def cluster_markers(mercator, latlngs, zoom, gridsize=50):\n \"\"\"\n", "issue": "Improve cluster accuracy\nCluster aggregates markers in `in_cluster` is using box instead of a circle parameter calculation which I think may cause duplications and inaccuracy\n\n", "before_files": [{"content": "##\nimport globalmaptiles as globaltiles\nfrom math import cos, sin, atan2, sqrt\nimport time\n##\n \ndef center_geolocation(geolocations):\n \"\"\"\n Provide a relatively accurate center lat, lon returned as a list pair, given\n a list of list pairs.\n ex: in: geolocations = ((lat1,lon1), (lat2,lon2),)\n out: (center_lat, center_lon)\n \"\"\"\n x = 0\n y = 0\n z = 0\n \n for lat, lon in geolocations:\n lat = float(lat)\n lon = float(lon)\n x += cos(lat) * cos(lon)\n y += cos(lat) * sin(lon)\n z += sin(lat)\n \n x = float(x / len(geolocations))\n y = float(y / len(geolocations))\n z = float(z / len(geolocations))\n \n return (atan2(y, x), atan2(z, sqrt(x * x + y * y)))\n\ndef latlng_to_zoompixels(mercator, lat, lng, zoom):\n mx, my = mercator.LatLonToMeters(lat, lng)\n pix = mercator.MetersToPixels(mx, my, zoom)\n return pix\n\ndef in_cluster(center, radius, point):\n return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \\\n and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)\n\ndef cluster_markers(mercator, latlngs, zoom, gridsize=50):\n \"\"\"\n Args:\n mercator: instance of GlobalMercator()\n latlngs: list of (lat,lng) tuple\n zoom: current zoom level\n gridsize: cluster radius (in pixels in current zoom level)\n Returns:\n centers: list of indices in latlngs of points used as centers\n clusters: list of same length as latlngs giving assigning each point to\n a cluster\n \"\"\"\n start_time = time.time()\n centers = []\n clusters = []\n sizes = []\n latlngs = map(lambda latlng: latlng.serialize(), latlngs)\n for i, latlng in enumerate(latlngs):\n lat = latlng['latitude']\n lng = latlng['longitude']\n point_pix = latlng_to_zoompixels(mercator, lat, lng, zoom)\n assigned = False\n for cidx, c in enumerate(centers):\n center = latlngs[c]\n center = latlng_to_zoompixels(mercator, center['latitude'], center['longitude'], zoom)\n if in_cluster(center, gridsize, point_pix):\n # Assign point to cluster\n clusters.append(cidx)\n sizes[cidx] += 1\n assigned = True\n break\n if not assigned:\n # Create new cluster for point\n #TODO center_geolocation the center!\n centers.append(i)\n sizes.append(1)\n clusters.append(len(centers) - 1)\n\n print('time for cluster_markers: ' + str(time.time() - start_time))\n return centers, clusters, sizes\n\ndef create_clusters_centers(markers, zoom, radius):\n mercator = globaltiles.GlobalMercator()\n centers, clusters, sizes = cluster_markers(mercator, markers, zoom, radius)\n centers_markers = [markers[i] for i in centers]\n return centers_markers, clusters, sizes\n\ndef get_cluster_json(clust_marker, clust_size):\n return {\n 'longitude': clust_marker.longitude,\n 'latitude': clust_marker.latitude,\n 'size': clust_size\n }\n\ndef get_cluster_size(index, clusters):\n from 
collections import Counter\n #TODO: don't call Counter for every cluster in the array\n return Counter(clusters)[index]\n\ndef generate_clusters_json(markers, zoom, radius=50):\n centers, clusters, sizes = create_clusters_centers(markers, zoom, radius)\n json_clusts=[]\n\n for i, point in enumerate(centers):\n json_clusts.append(get_cluster_json(point, sizes[i]))\n\n return {\n 'clusters': json_clusts\n }\n\n##\nif __name__ == '__main__':\n ##\n mercator = globaltiles.GlobalMercator()\n latlngs = [(28.43, 8), (28.43, 8), (28.44, 8), (35, 8)]\n centers, clusters = cluster_markers(mercator, latlngs, 21)\n ##", "path": "static/pymapcluster.py"}]} | 1,819 | 175 |
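
A minimal sketch of the fix in the row above: cluster membership becomes a Euclidean-distance test in pixel space instead of a bounding-box check, so a marker is only absorbed by a cluster whose circle actually contains it. This is a standalone illustration, not the repository code:

```python
from math import sqrt

def in_cluster(center, radius, point):
    # Inside the circle of `radius` pixels around `center`,
    # not merely inside its circumscribing square.
    return sqrt((point[0] - center[0]) ** 2 + (point[1] - center[1]) ** 2) <= radius

# The old box test accepted corner points up to radius * sqrt(2) away,
# which is the duplication/inaccuracy the issue describes.
assert not in_cluster((0, 0), 50, (45, 45))  # ~63.6 px away: rejected now
assert in_cluster((0, 0), 50, (30, 30))      # ~42.4 px away: still clustered
```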
gh_patches_debug_6729 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1829 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Organization view pages result in 500 error
Only on stag. I tested several different orgs.

</issue>
<code>
[start of ckanext-hdx_search/ckanext/hdx_search/plugin.py]
1 import logging, re
2 import ckan.plugins as plugins
3 import ckan.plugins.toolkit as tk
4 import ckan.lib.plugins as lib_plugins
5
6 def convert_country(q):
7 for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):
8 if re.findall(c['display_name'].lower(),q.lower()):
9 q += ' '+c['name']
10 return q
11
12 class HDXSearchPlugin(plugins.SingletonPlugin):
13 plugins.implements(plugins.IConfigurer, inherit=False)
14 plugins.implements(plugins.IRoutes, inherit=True)
15 plugins.implements(plugins.ITemplateHelpers, inherit=False)
16 plugins.implements(plugins.IPackageController, inherit=True)
17
18 def update_config(self, config):
19 tk.add_template_directory(config, 'templates')
20
21 def get_helpers(self):
22 return {}
23
24 def before_map(self, map):
25 map.connect('search', '/search',
26 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
27 map.connect('simple_search',
28 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
29 return map
30
31 def after_map(self, map):
32 map.connect('search', '/search',
33 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
34 map.connect('simple_search',
35 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
36 return map
37
38 def before_search(self, search_params):
39 search_params['q'] = convert_country(search_params['q'])
40 if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
41 search_params['facet.field'].append('vocab_Topics')
42
43 # If indicator flag is set, search only that type
44 if 'ext_indicator' in search_params['extras']:
45 if int(search_params['extras']['ext_indicator']) == 1:
46 search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'
47 elif int(search_params['extras']['ext_indicator']) == 0:
48 search_params['fq'] = search_params[
49 'fq'] + ' -extras_indicator:1'
50 return search_params
51
52 def after_search(self, search_results, search_params):
53 return search_results
54
55 def before_view(self, pkg_dict):
56 return pkg_dict
57
[end of ckanext-hdx_search/ckanext/hdx_search/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py
+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
@@ -36,7 +36,7 @@
return map
def before_search(self, search_params):
- search_params['q'] = convert_country(search_params['q'])
+ #search_params['q'] = convert_country(search_params['q'])
if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
search_params['facet.field'].append('vocab_Topics')
| {"golden_diff": "diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n@@ -36,7 +36,7 @@\n return map\n \n def before_search(self, search_params):\n- search_params['q'] = convert_country(search_params['q'])\n+ #search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n", "issue": "Organization view pages result in 500 error\nOnly on stag. I tested several different orgs. \n\n\n\n", "before_files": [{"content": "import logging, re\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\ndef convert_country(q):\n for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n if re.findall(c['display_name'].lower(),q.lower()):\n q += ' '+c['name']\n return q\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}]} | 1,281 | 169 |
gh_patches_debug_12807 | rasdani/github-patches | git_diff | bridgecrewio__checkov-1086 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AWS_119 - DynamoDB table encryption
**Describe the bug**
In general, DynamoDB tables are encrypted by default and this can't be turned off; you can only change the encryption to use a KMS key of your choice. Therefore the check description is incorrect.
Further information can be found in the API documentation: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_SSESpecification.html
</issue>
<code>
[start of checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py]
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class DynamoDBTablesEncrypted(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure DynamoDB Tables are encrypted"
8 id = "CKV_AWS_119"
9 supported_resources = ['aws_dynamodb_table']
10 categories = [CheckCategories.NETWORKING]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "server_side_encryption/[0]/enabled"
15
16
17 check = DynamoDBTablesEncrypted()
18
[end of checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py
--- a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py
+++ b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py
@@ -4,10 +4,10 @@
class DynamoDBTablesEncrypted(BaseResourceValueCheck):
def __init__(self):
- name = "Ensure DynamoDB Tables are encrypted"
+ name = "Ensure DynamoDB Tables are encrypted using KMS"
id = "CKV_AWS_119"
- supported_resources = ['aws_dynamodb_table']
- categories = [CheckCategories.NETWORKING]
+ supported_resources = ["aws_dynamodb_table"]
+ categories = [CheckCategories.ENCRYPTION]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def get_inspected_key(self):
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py\n--- a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py\n+++ b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py\n@@ -4,10 +4,10 @@\n \n class DynamoDBTablesEncrypted(BaseResourceValueCheck):\n def __init__(self):\n- name = \"Ensure DynamoDB Tables are encrypted\"\n+ name = \"Ensure DynamoDB Tables are encrypted using KMS\"\n id = \"CKV_AWS_119\"\n- supported_resources = ['aws_dynamodb_table']\n- categories = [CheckCategories.NETWORKING]\n+ supported_resources = [\"aws_dynamodb_table\"]\n+ categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def get_inspected_key(self):\n", "issue": "CKV_AWS_119 - DynamoDB table encryption\n**Describe the bug**\r\nIn general DynamoDB tables are encrypted by default and this can't be turned off, you can change it to use a KMS key of your choice. Therefore the check description is incorrect.\r\n\r\nFurther infos can be found in the API documentation https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_SSESpecification.html\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass DynamoDBTablesEncrypted(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure DynamoDB Tables are encrypted\"\n id = \"CKV_AWS_119\"\n supported_resources = ['aws_dynamodb_table']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"server_side_encryption/[0]/enabled\"\n\n\ncheck = DynamoDBTablesEncrypted()\n", "path": "checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py"}]} | 814 | 218 |
gh_patches_debug_6434 | rasdani/github-patches | git_diff | python-pillow__Pillow-6973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot identify .fits file
### What did you do?
Tried using pillow for opening/handling a .fits file for training a machine learning model. According to the documentation opening/reading fits files should be enabled? Or am I misunderstanding how a fits file should be opened?
From Issue [4054](https://github.com/python-pillow/Pillow/issues/4054)/ PR 6056
> I've created PR https://github.com/python-pillow/Pillow/pull/6056 to resolve this. If that is merged, you should no longer have to worry about register_handler(), but can instead just Image.open("sample.fits").
### What did you expect to happen?
Not receiving a "cannot identify" error while using Image.open. Expected the function to work as with other supported file formats. The .fits files in question are not corrupted, and can be opened as normal with other software.
### What happened?
```python
from PIL import Image
with Image.open('example.fits') as im:
im.verify()
```
```
---------------------------------------------------------------------------
UnidentifiedImageError Traceback (most recent call last)
Cell In [38], line 2
1 from PIL import FitsImagePlugin, ImageFile
----> 2 with Image.open('example.fits') as im:
3 im.verify()
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\PIL\Image.py:3186, in open(fp, mode, formats)
3184 for message in accept_warnings:
3185 warnings.warn(message)
-> 3186 raise UnidentifiedImageError(
3187 "cannot identify image file %r" % (filename if filename else fp)
3188 )
UnidentifiedImageError: cannot identify image file 'example.fits'
```
### What are your OS, Python and Pillow versions?
* OS: windows 10
* Python: 3.10
* Pillow: 9.3.0
<!--
Please include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.
The best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.
-->
</issue>
<code>
[start of src/PIL/FitsImagePlugin.py]
1 #
2 # The Python Imaging Library
3 # $Id$
4 #
5 # FITS file handling
6 #
7 # Copyright (c) 1998-2003 by Fredrik Lundh
8 #
9 # See the README file for information on usage and redistribution.
10 #
11
12 import math
13
14 from . import Image, ImageFile
15
16
17 def _accept(prefix):
18 return prefix[:6] == b"SIMPLE"
19
20
21 class FitsImageFile(ImageFile.ImageFile):
22 format = "FITS"
23 format_description = "FITS"
24
25 def _open(self):
26 headers = {}
27 while True:
28 header = self.fp.read(80)
29 if not header:
30 msg = "Truncated FITS file"
31 raise OSError(msg)
32 keyword = header[:8].strip()
33 if keyword == b"END":
34 break
35 value = header[8:].strip()
36 if value.startswith(b"="):
37 value = value[1:].strip()
38 if not headers and (not _accept(keyword) or value != b"T"):
39 msg = "Not a FITS file"
40 raise SyntaxError(msg)
41 headers[keyword] = value
42
43 naxis = int(headers[b"NAXIS"])
44 if naxis == 0:
45 msg = "No image data"
46 raise ValueError(msg)
47 elif naxis == 1:
48 self._size = 1, int(headers[b"NAXIS1"])
49 else:
50 self._size = int(headers[b"NAXIS1"]), int(headers[b"NAXIS2"])
51
52 number_of_bits = int(headers[b"BITPIX"])
53 if number_of_bits == 8:
54 self.mode = "L"
55 elif number_of_bits == 16:
56 self.mode = "I"
57 # rawmode = "I;16S"
58 elif number_of_bits == 32:
59 self.mode = "I"
60 elif number_of_bits in (-32, -64):
61 self.mode = "F"
62 # rawmode = "F" if number_of_bits == -32 else "F;64F"
63
64 offset = math.ceil(self.fp.tell() / 2880) * 2880
65 self.tile = [("raw", (0, 0) + self.size, offset, (self.mode, 0, -1))]
66
67
68 # --------------------------------------------------------------------
69 # Registry
70
71 Image.register_open(FitsImageFile.format, FitsImageFile, _accept)
72
73 Image.register_extensions(FitsImageFile.format, [".fit", ".fits"])
74
[end of src/PIL/FitsImagePlugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/src/PIL/FitsImagePlugin.py b/src/PIL/FitsImagePlugin.py
--- a/src/PIL/FitsImagePlugin.py
+++ b/src/PIL/FitsImagePlugin.py
@@ -32,7 +32,7 @@
keyword = header[:8].strip()
if keyword == b"END":
break
- value = header[8:].strip()
+ value = header[8:].split(b"/")[0].strip()
if value.startswith(b"="):
value = value[1:].strip()
if not headers and (not _accept(keyword) or value != b"T"):
| {"golden_diff": "diff --git a/src/PIL/FitsImagePlugin.py b/src/PIL/FitsImagePlugin.py\n--- a/src/PIL/FitsImagePlugin.py\n+++ b/src/PIL/FitsImagePlugin.py\n@@ -32,7 +32,7 @@\n keyword = header[:8].strip()\n if keyword == b\"END\":\n break\n- value = header[8:].strip()\n+ value = header[8:].split(b\"/\")[0].strip()\n if value.startswith(b\"=\"):\n value = value[1:].strip()\n if not headers and (not _accept(keyword) or value != b\"T\"):\n", "issue": "Cannot identify .fits file\n### What did you do?\r\nTried using pillow for opening/handling a .fits file for training a machine learning model. According to the documentation opening/reading fits files should be enabled? Or am I misunderstanding how a fits file should be opened? \r\n\r\n\r\nFrom Issue [4054](https://github.com/python-pillow/Pillow/issues/4054)/ PR 6056\r\n\r\n> I've created PR https://github.com/python-pillow/Pillow/pull/6056 to resolve this. If that is merged, you should no longer have to worry about register_handler(), but can instead just Image.open(\"sample.fits\").\r\n\r\n\r\n### What did you expect to happen?\r\nNot recieving a \"cannot identify error\" while using Image.open. Expected the function to work as with other supported file formats. The .fits files in question are not corrupted, and can be opened as normal with other software. \r\n\r\n### What happened?\r\n```python\r\nfrom PIL import Image\r\nwith Image.open('example.fits') as im:\r\n im.verify()\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnidentifiedImageError Traceback (most recent call last)\r\nCell In [38], line 2\r\n 1 from PIL import FitsImagePlugin, ImageFile\r\n----> 2 with Image.open('example.fits') as im:\r\n 3 im.verify()\r\n\r\nFile ~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python310\\site-packages\\PIL\\Image.py:3186, in open(fp, mode, formats)\r\n 3184 for message in accept_warnings:\r\n 3185 warnings.warn(message)\r\n-> 3186 raise UnidentifiedImageError(\r\n 3187 \"cannot identify image file %r\" % (filename if filename else fp)\r\n 3188 )\r\n\r\nUnidentifiedImageError: cannot identify image file 'example.fits'\r\n```\r\n### What are your OS, Python and Pillow versions?\r\n\r\n* OS: windows 10\r\n* Python: 3.10\r\n* Pillow: 9.3.0\r\n\r\n<!--\r\nPlease include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.\r\n\r\nThe best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.\r\n-->\r\n\r\n\n", "before_files": [{"content": "#\n# The Python Imaging Library\n# $Id$\n#\n# FITS file handling\n#\n# Copyright (c) 1998-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport math\n\nfrom . 
import Image, ImageFile\n\n\ndef _accept(prefix):\n return prefix[:6] == b\"SIMPLE\"\n\n\nclass FitsImageFile(ImageFile.ImageFile):\n format = \"FITS\"\n format_description = \"FITS\"\n\n def _open(self):\n headers = {}\n while True:\n header = self.fp.read(80)\n if not header:\n msg = \"Truncated FITS file\"\n raise OSError(msg)\n keyword = header[:8].strip()\n if keyword == b\"END\":\n break\n value = header[8:].strip()\n if value.startswith(b\"=\"):\n value = value[1:].strip()\n if not headers and (not _accept(keyword) or value != b\"T\"):\n msg = \"Not a FITS file\"\n raise SyntaxError(msg)\n headers[keyword] = value\n\n naxis = int(headers[b\"NAXIS\"])\n if naxis == 0:\n msg = \"No image data\"\n raise ValueError(msg)\n elif naxis == 1:\n self._size = 1, int(headers[b\"NAXIS1\"])\n else:\n self._size = int(headers[b\"NAXIS1\"]), int(headers[b\"NAXIS2\"])\n\n number_of_bits = int(headers[b\"BITPIX\"])\n if number_of_bits == 8:\n self.mode = \"L\"\n elif number_of_bits == 16:\n self.mode = \"I\"\n # rawmode = \"I;16S\"\n elif number_of_bits == 32:\n self.mode = \"I\"\n elif number_of_bits in (-32, -64):\n self.mode = \"F\"\n # rawmode = \"F\" if number_of_bits == -32 else \"F;64F\"\n\n offset = math.ceil(self.fp.tell() / 2880) * 2880\n self.tile = [(\"raw\", (0, 0) + self.size, offset, (self.mode, 0, -1))]\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(FitsImageFile.format, FitsImageFile, _accept)\n\nImage.register_extensions(FitsImageFile.format, [\".fit\", \".fits\"])\n", "path": "src/PIL/FitsImagePlugin.py"}]} | 1,781 | 137 |
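
For context on the one-line fix above: a FITS header card is a fixed 80-byte record whose value field may carry an inline comment introduced by "/". A rough sketch of the parsing rule on a hypothetical card (not Pillow's actual reader):

```python
card = b"SIMPLE  =                    T / file does conform to FITS standard"

keyword = card[:8].strip()                # b"SIMPLE"
value = card[8:].split(b"/")[0].strip()   # drop the inline comment first
if value.startswith(b"="):
    value = value[1:].strip()             # b"T"

assert keyword == b"SIMPLE" and value == b"T"
```

Without the `split(b"/")`, the comment stays attached to the value, the `value != b"T"` guard misfires on the very first card, and otherwise valid files are rejected before `Image.open` can identify them.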
gh_patches_debug_15674 | rasdani/github-patches | git_diff | mesonbuild__meson-10230 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unhandled python exception with 0.62 on windows (0.61 ok)
**Describe the bug**
When running meson 0.62 on win32 and a project using `dependency()` (ex glib):
Unhandled python exception
ModuleNotFoundError: No module named 'mesonbuild.dependencies.data'
```
Traceback (most recent call last):
File "mesonbuild\mesonmain.py", line 151, in run
File "mesonbuild\msetup.py", line 301, in run
File "mesonbuild\msetup.py", line 185, in generate
File "mesonbuild\msetup.py", line 229, in _generate
File "mesonbuild\interpreter\interpreter.py", line 2698, in run
File "mesonbuild\interpreterbase\interpreterbase.py", line 149, in run
File "mesonbuild\interpreterbase\interpreterbase.py", line 174, in evaluate_codeblock
File "mesonbuild\interpreterbase\interpreterbase.py", line 167, in evaluate_codeblock
File "mesonbuild\interpreterbase\interpreterbase.py", line 182, in evaluate_statement
File "mesonbuild\interpreterbase\interpreterbase.py", line 567, in assignment
File "mesonbuild\interpreterbase\interpreterbase.py", line 180, in evaluate_statement
File "mesonbuild\interpreterbase\interpreterbase.py", line 455, in function_call
File "mesonbuild\interpreterbase\decorators.py", line 768, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 768, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 768, in wrapped
[Previous line repeated 5 more times]
File "mesonbuild\interpreterbase\decorators.py", line 109, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 127, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 277, in wrapper
File "mesonbuild\interpreter\interpreter.py", line 1620, in func_dependency
File "mesonbuild\interpreter\dependencyfallbacks.py", line 352, in lookup
File "mesonbuild\interpreter\dependencyfallbacks.py", line 93, in _do_dependency
File "mesonbuild\dependencies\detect.py", line 112, in find_external_dependency
File "mesonbuild\dependencies\cmake.py", line 135, in __init__
File "mesonbuild\dependencies\cmake.py", line 183, in _get_cmake_info
File "mesonbuild\dependencies\cmake.py", line 614, in _call_cmake
File "mesonbuild\dependencies\cmake.py", line 585, in _setup_cmake_dir
File "importlib\resources.py", line 103, in read_text
File "importlib\resources.py", line 82, in open_text
File "importlib\resources.py", line 43, in open_binary
File "importlib\_common.py", line 66, in get_package
File "importlib\_common.py", line 57, in resolve
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mesonbuild.dependencies.data'
```
**To Reproduce**
project('foo')
pcre = dependency('libpcre')
**system parameters**
meson 0.62 (MSI) on windev VM (https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/)
works as expected on 0.61
</issue>
<code>
[start of packaging/hook-mesonbuild.py]
1 #!hint/python3
2
3 """
4 PyInstaller hook to make mesonbuild include everything it needs to.
5 """
6
7 import os
8 from glob import glob
9
10 hiddenimports = []
11
12 def get_all_modules_from_dir(dirname):
13 '''
14 Get all modules required for Meson itself from directories.
15 '''
16 modname = os.path.basename(dirname)
17 modules = [os.path.splitext(os.path.split(x)[1])[0] for x in glob(os.path.join(dirname, '*'))]
18 modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]
19 return modules
20
21 hiddenimports += get_all_modules_from_dir('mesonbuild/modules')
22 hiddenimports += get_all_modules_from_dir('mesonbuild/scripts')
23
24 # Python packagers want to be minimal and only copy the things
25 # that they can see being used. They are blind to many things.
26 hiddenimports += [
27 # we run distutils as a subprocess via INTROSPECT_COMMAND.
28 'distutils.archive_util',
29 'distutils.cmd',
30 'distutils.config',
31 'distutils.core',
32 'distutils.debug',
33 'distutils.dep_util',
34 'distutils.dir_util',
35 'distutils.dist',
36 'distutils.errors',
37 'distutils.extension',
38 'distutils.fancy_getopt',
39 'distutils.file_util',
40 'distutils.spawn',
41 'distutils.util',
42 'distutils.version',
43 'distutils.command.build_ext',
44 'distutils.command.build',
45 'distutils.command.install',
46
47 # needed for gtk's find_program() scripts
48 'filecmp',
49 ]
50
[end of packaging/hook-mesonbuild.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/packaging/hook-mesonbuild.py b/packaging/hook-mesonbuild.py
--- a/packaging/hook-mesonbuild.py
+++ b/packaging/hook-mesonbuild.py
@@ -7,6 +7,9 @@
import os
from glob import glob
+from PyInstaller.utils.hooks import collect_data_files
+
+datas = []
hiddenimports = []
def get_all_modules_from_dir(dirname):
@@ -18,6 +21,10 @@
modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]
return modules
+datas += collect_data_files('mesonbuild.scripts')
+datas += collect_data_files('mesonbuild.cmake.data')
+datas += collect_data_files('mesonbuild.dependencies.data')
+
hiddenimports += get_all_modules_from_dir('mesonbuild/modules')
hiddenimports += get_all_modules_from_dir('mesonbuild/scripts')
| {"golden_diff": "diff --git a/packaging/hook-mesonbuild.py b/packaging/hook-mesonbuild.py\n--- a/packaging/hook-mesonbuild.py\n+++ b/packaging/hook-mesonbuild.py\n@@ -7,6 +7,9 @@\n import os\n from glob import glob\n \n+from PyInstaller.utils.hooks import collect_data_files\n+\n+datas = []\n hiddenimports = []\n \n def get_all_modules_from_dir(dirname):\n@@ -18,6 +21,10 @@\n modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]\n return modules\n \n+datas += collect_data_files('mesonbuild.scripts')\n+datas += collect_data_files('mesonbuild.cmake.data')\n+datas += collect_data_files('mesonbuild.dependencies.data')\n+\n hiddenimports += get_all_modules_from_dir('mesonbuild/modules')\n hiddenimports += get_all_modules_from_dir('mesonbuild/scripts')\n", "issue": "Unhandled python exception with 0.62 on windows (0.61 ok)\n**Describe the bug**\r\nWhen running meson 0.62 on win32 and a project using `dependency()` (ex glib):\r\n\r\nUnhandled python exception\r\nModuleNotFoundError: No module named 'mesonbuild.dependencies.data'\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"mesonbuild\\mesonmain.py\", line 151, in run\r\n File \"mesonbuild\\msetup.py\", line 301, in run\r\n File \"mesonbuild\\msetup.py\", line 185, in generate\r\n File \"mesonbuild\\msetup.py\", line 229, in _generate\r\n File \"mesonbuild\\interpreter\\interpreter.py\", line 2698, in run\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 149, in run\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 174, in evaluate_codeblock\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 167, in evaluate_codeblock\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 182, in evaluate_statement\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 567, in assignment\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 180, in evaluate_statement\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 455, in function_call\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 768, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 768, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 768, in wrapped\r\n [Previous line repeated 5 more times]\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 109, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 127, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 277, in wrapper\r\n File \"mesonbuild\\interpreter\\interpreter.py\", line 1620, in func_dependency\r\n File \"mesonbuild\\interpreter\\dependencyfallbacks.py\", line 352, in lookup\r\n File \"mesonbuild\\interpreter\\dependencyfallbacks.py\", line 93, in _do_dependency\r\n File \"mesonbuild\\dependencies\\detect.py\", line 112, in find_external_dependency\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 135, in __init__\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 183, in _get_cmake_info\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 614, in _call_cmake\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 585, in _setup_cmake_dir\r\n File \"importlib\\resources.py\", line 103, in read_text\r\n File \"importlib\\resources.py\", line 82, in open_text\r\n File \"importlib\\resources.py\", line 43, in open_binary\r\n File \"importlib\\_common.py\", line 66, in get_package\r\n File \"importlib\\_common.py\", line 57, in resolve\r\n File \"importlib\\__init__.py\", line 126, in 
import_module\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1004, in _find_and_load_unlocked\r\nModuleNotFoundError: No module named 'mesonbuild.dependencies.data'\r\n```\r\n\r\n**To Reproduce**\r\nproject('foo')\r\npcre = dependency('libpcre')\r\n\r\n**system parameters**\r\nmeson 0.62 (MSI) on windev VM (https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/)\r\nworks as expected on 0.61\n", "before_files": [{"content": "#!hint/python3\n\n\"\"\"\nPyInstaller hook to make mesonbuild include everything it needs to.\n\"\"\"\n\nimport os\nfrom glob import glob\n\nhiddenimports = []\n\ndef get_all_modules_from_dir(dirname):\n '''\n Get all modules required for Meson itself from directories.\n '''\n modname = os.path.basename(dirname)\n modules = [os.path.splitext(os.path.split(x)[1])[0] for x in glob(os.path.join(dirname, '*'))]\n modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]\n return modules\n\nhiddenimports += get_all_modules_from_dir('mesonbuild/modules')\nhiddenimports += get_all_modules_from_dir('mesonbuild/scripts')\n\n# Python packagers want to be minimal and only copy the things\n# that they can see being used. They are blind to many things.\nhiddenimports += [\n # we run distutils as a subprocess via INTROSPECT_COMMAND.\n 'distutils.archive_util',\n 'distutils.cmd',\n 'distutils.config',\n 'distutils.core',\n 'distutils.debug',\n 'distutils.dep_util',\n 'distutils.dir_util',\n 'distutils.dist',\n 'distutils.errors',\n 'distutils.extension',\n 'distutils.fancy_getopt',\n 'distutils.file_util',\n 'distutils.spawn',\n 'distutils.util',\n 'distutils.version',\n 'distutils.command.build_ext',\n 'distutils.command.build',\n 'distutils.command.install',\n\n # needed for gtk's find_program() scripts\n 'filecmp',\n]\n", "path": "packaging/hook-mesonbuild.py"}]} | 1,915 | 208 |
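
A hedged sketch of the hook pattern the fix relies on: PyInstaller's `collect_data_files` walks an importable package and returns (source, destination) pairs for its non-Python files, which is what the frozen `importlib.resources.read_text(...)` lookup in `mesonbuild.dependencies.data` needs at run time:

```python
# hook-mesonbuild.py (sketch; package names taken from the diff above)
from PyInstaller.utils.hooks import collect_data_files

datas = []
datas += collect_data_files('mesonbuild.scripts')
datas += collect_data_files('mesonbuild.cmake.data')
datas += collect_data_files('mesonbuild.dependencies.data')
```

`hiddenimports` alone cannot cover this case: the CMake support files are data shipped inside the package, not modules the dependency scanner can discover.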
gh_patches_debug_19687 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-1625 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No module named 'parlai_internal'
https://parl.ai/projects/wizard_of_wikipedia/
When running ```python projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py``` I get the following error:
```
Traceback (most recent call last):
File "projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py", line 48, in <module>
eval_model(parser)
File "/home/ml/jwang301/Development/ParlAI/parlai/scripts/eval_model.py", line 68, in eval_model
agent = create_agent(opt, requireModelExists=True)
File "/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py", line 554, in create_agent
model = load_agent_module(opt)
File "/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py", line 407, in load_agent_module
model_class = get_agent_module(new_opt['model'])
File "/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py", line 516, in get_agent_module
my_module = importlib.import_module(module_name)
File "/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'parlai_internal'
```
I'm assuming this is accidental since the wiki is public.
</issue>
<code>
[start of projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 from parlai.core.params import ParlaiParser
7 from parlai.scripts.eval_model import eval_model
8 from parlai.zoo.wizard_of_wikipedia\
9 .full_dialogue_retrieval_model import download
10 from projects.wizard_of_wikipedia.wizard_transformer_ranker\
11 .wizard_transformer_ranker import WizardTransformerRankerAgent
12
13 """Evaluate pre-trained retrieval model on the full Wizard Dialogue task.
14
15 NOTE: Metrics here differ slightly to those reported in the paper as a result
16 of code changes.
17
18 Results on seen test set:
19 Hits@1/100: 86.7
20
21 Results on unseen test set (run with flag
22 `-t wizard_of_wikipedia:WizardDialogKnowledge:topic_split`):
23 Hits@1/100: 68.96
24 """
25
26 if __name__ == '__main__':
27 parser = ParlaiParser(add_model_args=True)
28 parser.add_argument('-n', '--num-examples', default=100000000)
29 parser.add_argument('-d', '--display-examples', type='bool', default=False)
30 parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)
31 WizardTransformerRankerAgent.add_cmdline_args(parser)
32 parser.set_defaults(
33 task='wizard_of_wikipedia',
34 model='projects:wizard_of_wikipedia:wizard_transformer_ranker',
35 model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',
36 datatype='test',
37 n_heads=6,
38 ffn_size=1200,
39 embeddings_scale=False,
40 delimiter=' __SOC__ ',
41 n_positions=1000,
42 legacy=True
43 )
44
45 opt = parser.parse_args()
46 download(opt['datapath']) # download pretrained retrieval model
47
48 eval_model(parser)
49
[end of projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py
--- a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py
+++ b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py
@@ -29,7 +29,7 @@
parser.add_argument('-d', '--display-examples', type='bool', default=False)
parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)
WizardTransformerRankerAgent.add_cmdline_args(parser)
- parser.set_defaults(
+ parser.set_params(
task='wizard_of_wikipedia',
model='projects:wizard_of_wikipedia:wizard_transformer_ranker',
model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',
@@ -45,4 +45,4 @@
opt = parser.parse_args()
download(opt['datapath']) # download pretrained retrieval model
- eval_model(parser)
+ eval_model(opt)
| {"golden_diff": "diff --git a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\n--- a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\n+++ b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\n@@ -29,7 +29,7 @@\n parser.add_argument('-d', '--display-examples', type='bool', default=False)\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n WizardTransformerRankerAgent.add_cmdline_args(parser)\n- parser.set_defaults(\n+ parser.set_params(\n task='wizard_of_wikipedia',\n model='projects:wizard_of_wikipedia:wizard_transformer_ranker',\n model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',\n@@ -45,4 +45,4 @@\n opt = parser.parse_args()\n download(opt['datapath']) # download pretrained retrieval model\n \n- eval_model(parser)\n+ eval_model(opt)\n", "issue": "No module named 'parlai_internal'\nhttps://parl.ai/projects/wizard_of_wikipedia/\r\n\r\nWhen running ```python projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py``` I get the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\", line 48, in <module>\r\n eval_model(parser)\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/scripts/eval_model.py\", line 68, in eval_model\r\n agent = create_agent(opt, requireModelExists=True)\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py\", line 554, in create_agent\r\n model = load_agent_module(opt)\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py\", line 407, in load_agent_module\r\n model_class = get_agent_module(new_opt['model'])\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py\", line 516, in get_agent_module\r\n my_module = importlib.import_module(module_name)\r\n File \"/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 941, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 941, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 941, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 953, in _find_and_load_unlocked\r\nModuleNotFoundError: No module named 'parlai_internal'\r\n```\r\n\r\nI'm assuming this is accidental since the wiki is public. \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom parlai.core.params import ParlaiParser\nfrom parlai.scripts.eval_model import eval_model\nfrom parlai.zoo.wizard_of_wikipedia\\\n .full_dialogue_retrieval_model import download\nfrom projects.wizard_of_wikipedia.wizard_transformer_ranker\\\n .wizard_transformer_ranker import WizardTransformerRankerAgent\n\n\"\"\"Evaluate pre-trained retrieval model on the full Wizard Dialogue task.\n\nNOTE: Metrics here differ slightly to those reported in the paper as a result\nof code changes.\n\nResults on seen test set:\nHits@1/100: 86.7\n\nResults on unseen test set (run with flag\n`-t wizard_of_wikipedia:WizardDialogKnowledge:topic_split`):\nHits@1/100: 68.96\n\"\"\"\n\nif __name__ == '__main__':\n parser = ParlaiParser(add_model_args=True)\n parser.add_argument('-n', '--num-examples', default=100000000)\n parser.add_argument('-d', '--display-examples', type='bool', default=False)\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n WizardTransformerRankerAgent.add_cmdline_args(parser)\n parser.set_defaults(\n task='wizard_of_wikipedia',\n model='projects:wizard_of_wikipedia:wizard_transformer_ranker',\n model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',\n datatype='test',\n n_heads=6,\n ffn_size=1200,\n embeddings_scale=False,\n delimiter=' __SOC__ ',\n n_positions=1000,\n legacy=True\n )\n\n opt = parser.parse_args()\n download(opt['datapath']) # download pretrained retrieval model\n\n eval_model(parser)\n", "path": "projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py"}]} | 1,776 | 235 |
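
A compressed sketch of why both hunks in the diff above matter, using the names from the script itself (the explanation of `set_params` is my reading of ParlAI's parser, not a quote from its docs):

```python
# set_params() records the values as explicit overrides, so they survive
# options loaded back from the pretrained model file; plain set_defaults()
# on this parser did not, which is how 'parlai_internal' leaked in.
parser.set_params(task='wizard_of_wikipedia', datatype='test')

opt = parser.parse_args()
download(opt['datapath'])
eval_model(opt)  # eval_model() expects the parsed opt dict, not the parser
```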
gh_patches_debug_34331 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-494 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Require Celery app reference to read configuration
We had a customer whose Celery tasks weren't reporting whilst their Django views were. It turns out they had configured Scout in the Django settings file, which isn't applied when Celery runs. This is because Celery doesn't run "under" Django through `manage.py`, but separately through `celery worker`.
The Django pattern is to use [Celery's `app.config_from_object`](https://docs.celeryproject.org/en/latest/reference/celery.html#celery.Celery.config_from_object) to read the Django settings. If we then read the Scout settings out of there as well, we would again allow shared configuration between the two.
This would need changing the Celery install process to take an `app` argument:
```python
app = celery.Celery(..)
...
scout_apm.celery.install(app)
```
We should still work without this for backwards-compatibility reasons, but throw a warning when it's not passed, as I predict this issue will appear repeatedly if we don't encourage users this way.
</issue>
<code>
[start of src/scout_apm/celery.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import datetime as dt
5
6 from celery.signals import before_task_publish, task_postrun, task_prerun
7
8 import scout_apm.core
9 from scout_apm.compat import datetime_to_timestamp
10 from scout_apm.core.tracked_request import TrackedRequest
11
12
13 def before_publish_callback(headers=None, properties=None, **kwargs):
14 if "scout_task_start" not in headers:
15 headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
16
17
18 def prerun_callback(task=None, **kwargs):
19 tracked_request = TrackedRequest.instance()
20 tracked_request.is_real_request = True
21
22 start = getattr(task.request, "scout_task_start", None)
23 if start is not None:
24 now = datetime_to_timestamp(dt.datetime.utcnow())
25 try:
26 queue_time = now - start
27 except TypeError:
28 pass
29 else:
30 tracked_request.tag("queue_time", queue_time)
31
32 task_id = getattr(task.request, "id", None)
33 if task_id:
34 tracked_request.tag("task_id", task_id)
35 parent_task_id = getattr(task.request, "parent_id", None)
36 if parent_task_id:
37 tracked_request.tag("parent_task_id", parent_task_id)
38
39 delivery_info = task.request.delivery_info
40 tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
41 tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
42 tracked_request.tag("routing_key", delivery_info.get("routing_key", "unknown"))
43 tracked_request.tag("queue", delivery_info.get("queue", "unknown"))
44
45 tracked_request.start_span(operation=("Job/" + task.name))
46
47
48 def postrun_callback(task=None, **kwargs):
49 tracked_request = TrackedRequest.instance()
50 tracked_request.stop_span()
51
52
53 def install():
54 installed = scout_apm.core.install()
55 if not installed:
56 return
57
58 before_task_publish.connect(before_publish_callback)
59 task_prerun.connect(prerun_callback)
60 task_postrun.connect(postrun_callback)
61
62
63 def uninstall():
64 before_task_publish.disconnect(before_publish_callback)
65 task_prerun.disconnect(prerun_callback)
66 task_postrun.disconnect(postrun_callback)
67
[end of src/scout_apm/celery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py
--- a/src/scout_apm/celery.py
+++ b/src/scout_apm/celery.py
@@ -7,15 +7,16 @@
import scout_apm.core
from scout_apm.compat import datetime_to_timestamp
+from scout_apm.core.config import scout_config
from scout_apm.core.tracked_request import TrackedRequest
-def before_publish_callback(headers=None, properties=None, **kwargs):
+def before_task_publish_callback(headers=None, properties=None, **kwargs):
if "scout_task_start" not in headers:
headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
-def prerun_callback(task=None, **kwargs):
+def task_prerun_callback(task=None, **kwargs):
tracked_request = TrackedRequest.instance()
tracked_request.is_real_request = True
@@ -45,22 +46,39 @@
tracked_request.start_span(operation=("Job/" + task.name))
-def postrun_callback(task=None, **kwargs):
+def task_postrun_callback(task=None, **kwargs):
tracked_request = TrackedRequest.instance()
tracked_request.stop_span()
-def install():
+def install(app=None):
+ if app is not None:
+ copy_configuration(app)
+
installed = scout_apm.core.install()
if not installed:
return
- before_task_publish.connect(before_publish_callback)
- task_prerun.connect(prerun_callback)
- task_postrun.connect(postrun_callback)
+ before_task_publish.connect(before_task_publish_callback)
+ task_prerun.connect(task_prerun_callback)
+ task_postrun.connect(task_postrun_callback)
+
+
+def copy_configuration(app):
+ prefix = "scout_"
+ prefix_len = len(prefix)
+
+ to_set = {}
+ for key, value in app.conf.items():
+ key_lower = key.lower()
+ if key_lower.startswith(prefix) and len(key_lower) > prefix_len:
+ scout_key = key_lower[prefix_len:]
+ to_set[scout_key] = value
+
+ scout_config.set(**to_set)
def uninstall():
- before_task_publish.disconnect(before_publish_callback)
- task_prerun.disconnect(prerun_callback)
- task_postrun.disconnect(postrun_callback)
+ before_task_publish.disconnect(before_task_publish_callback)
+ task_prerun.disconnect(task_prerun_callback)
+ task_postrun.disconnect(task_postrun_callback)
| {"golden_diff": "diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py\n--- a/src/scout_apm/celery.py\n+++ b/src/scout_apm/celery.py\n@@ -7,15 +7,16 @@\n \n import scout_apm.core\n from scout_apm.compat import datetime_to_timestamp\n+from scout_apm.core.config import scout_config\n from scout_apm.core.tracked_request import TrackedRequest\n \n \n-def before_publish_callback(headers=None, properties=None, **kwargs):\n+def before_task_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n \n \n-def prerun_callback(task=None, **kwargs):\n+def task_prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n \n@@ -45,22 +46,39 @@\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n \n \n-def postrun_callback(task=None, **kwargs):\n+def task_postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n \n \n-def install():\n+def install(app=None):\n+ if app is not None:\n+ copy_configuration(app)\n+\n installed = scout_apm.core.install()\n if not installed:\n return\n \n- before_task_publish.connect(before_publish_callback)\n- task_prerun.connect(prerun_callback)\n- task_postrun.connect(postrun_callback)\n+ before_task_publish.connect(before_task_publish_callback)\n+ task_prerun.connect(task_prerun_callback)\n+ task_postrun.connect(task_postrun_callback)\n+\n+\n+def copy_configuration(app):\n+ prefix = \"scout_\"\n+ prefix_len = len(prefix)\n+\n+ to_set = {}\n+ for key, value in app.conf.items():\n+ key_lower = key.lower()\n+ if key_lower.startswith(prefix) and len(key_lower) > prefix_len:\n+ scout_key = key_lower[prefix_len:]\n+ to_set[scout_key] = value\n+\n+ scout_config.set(**to_set)\n \n \n def uninstall():\n- before_task_publish.disconnect(before_publish_callback)\n- task_prerun.disconnect(prerun_callback)\n- task_postrun.disconnect(postrun_callback)\n+ before_task_publish.disconnect(before_task_publish_callback)\n+ task_prerun.disconnect(task_prerun_callback)\n+ task_postrun.disconnect(task_postrun_callback)\n", "issue": "Require Celery app reference to read configuration\nWe had a customer whose Celery tasks weren't reporting whilst their Django views were. It turns out they had configured in the Django settings file, which isn't applied when Celery runs. This is because it doesn't run \"under\" Django through `manage.py`, but separately through `celery worker`.\r\n\r\nThe django pattern is to use [Celery's `app.config_from_object`](https://docs.celeryproject.org/en/latest/reference/celery.html#celery.Celery.config_from_object) to read the Django settings. 
If we then read out of there for the scout settings, we would again allow shared configuration between the two.\r\n\r\nThis would need changing the Celery install process to take an `app` argument:\r\n\r\n```python\r\napp = celery.Celery(..)\r\n...\r\nscout_apm.celery.install(app)\r\n```\r\n\r\nWe should work without this for backwards compatibility reasons, but throw a warninng when it's not passed as I predict this issue will appear repeatedly if we don't encourage users this way.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nfrom celery.signals import before_task_publish, task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n task_id = getattr(task.request, \"id\", None)\n if task_id:\n tracked_request.tag(\"task_id\", task_id)\n parent_task_id = getattr(task.request, \"parent_id\", None)\n if parent_task_id:\n tracked_request.tag(\"parent_task_id\", parent_task_id)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install():\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_publish_callback)\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_publish_callback)\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "path": "src/scout_apm/celery.py"}]} | 1,379 | 557 |
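
Seen from the consumer's side, the fix above enables the Django-friendly wiring the issue asks for: a sketch, with the broker URL and the SCOUT_* key names as illustrative assumptions:

```python
# celery_app.py (sketch)
import celery
import scout_apm.celery

app = celery.Celery('tasks', broker='redis://localhost:6379/0')
app.config_from_object('django.conf:settings', namespace='CELERY')

# install(app) copies any app.conf keys whose lowercased name starts with
# "scout_" (e.g. SCOUT_NAME, SCOUT_KEY, SCOUT_MONITOR) into Scout's own
# config; install() with no argument keeps working for existing callers.
scout_apm.celery.install(app)
```

The copy step in the diff is deliberately simple: lowercase each `app.conf` key, keep those starting with `scout_`, strip the prefix, and pass the remainder to `scout_config.set(**to_set)`.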
gh_patches_debug_271 | rasdani/github-patches | git_diff | codespell-project__codespell-3218 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Codespell don't handle KeyboardInterrupt exception
This should be catched and the program should stop gracefully but instead show default stack trace:
```
^CTraceback (most recent call last):
File "/home/kuba/.local/bin/codespell", line 8, in <module>
sys.exit(_script_main())
^^^^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 1017, in _script_main
return main(*sys.argv[1:])
^^^^^^^^^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 1185, in main
bad_count += parse_file(
^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 903, in parse_file
check_matches = extract_words_iter(line, word_regex, ignore_word_regex)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 793, in extract_words_iter
return list(word_regex.finditer(_ignore_word_sub(text, ignore_word_regex)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
There is no need to show `KeyboardInterrupt` exception stack trace.
</issue>
<code>
[start of codespell_lib/__main__.py]
1 import sys
2
3 from ._codespell import _script_main
4
5 if __name__ == "__main__":
6 sys.exit(_script_main())
7
[end of codespell_lib/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/codespell_lib/__main__.py b/codespell_lib/__main__.py
--- a/codespell_lib/__main__.py
+++ b/codespell_lib/__main__.py
@@ -3,4 +3,7 @@
from ._codespell import _script_main
if __name__ == "__main__":
- sys.exit(_script_main())
+ try:
+ sys.exit(_script_main())
+ except KeyboardInterrupt:
+ pass
| {"golden_diff": "diff --git a/codespell_lib/__main__.py b/codespell_lib/__main__.py\n--- a/codespell_lib/__main__.py\n+++ b/codespell_lib/__main__.py\n@@ -3,4 +3,7 @@\n from ._codespell import _script_main\n \n if __name__ == \"__main__\":\n- sys.exit(_script_main())\n+ try:\n+ sys.exit(_script_main())\n+ except KeyboardInterrupt:\n+ pass\n", "issue": "Codespell don't handle KeyboardInterrupt exception\nThis should be catched and the program should stop gracefully but instead show default stack trace:\r\n\r\n```\r\n^CTraceback (most recent call last):\r\n File \"/home/kuba/.local/bin/codespell\", line 8, in <module>\r\n sys.exit(_script_main())\r\n ^^^^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 1017, in _script_main\r\n return main(*sys.argv[1:])\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 1185, in main\r\n bad_count += parse_file(\r\n ^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 903, in parse_file\r\n check_matches = extract_words_iter(line, word_regex, ignore_word_regex)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 793, in extract_words_iter\r\n return list(word_regex.finditer(_ignore_word_sub(text, ignore_word_regex)))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nKeyboardInterrupt\r\n```\r\n\r\nThere is no need to show `KeyboardInterrupt` exception stack trace.\n", "before_files": [{"content": "import sys\n\nfrom ._codespell import _script_main\n\nif __name__ == \"__main__\":\n sys.exit(_script_main())\n", "path": "codespell_lib/__main__.py"}]} | 913 | 101 |
gh_patches_debug_9054 | rasdani/github-patches | git_diff | python__peps-632 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pep2rss disregards PEPs written in reStructuredText format
This can be seen at https://www.python.org/dev/peps/peps.rss/ where the last (most recent) RSS entry is the last PEP written in plaintext.
</issue>
<code>
[start of pep2rss.py]
1 #!/usr/bin/env python
2
3 # usage: pep-hook.py $REPOS $REV
4 # (standard post-commit args)
5
6 import os, glob, time, datetime, stat, re, sys
7 import codecs
8 import PyRSS2Gen as rssgen
9
10 RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')
11
12 def firstline_startingwith(full_path, text):
13 for line in codecs.open(full_path, encoding="utf-8"):
14 if line.startswith(text):
15 return line[len(text):].strip()
16 return None
17
18 # get list of peps with creation time (from "Created:" string in pep .txt)
19 peps = glob.glob('pep-*.txt')
20 def pep_creation_dt(full_path):
21 created_str = firstline_startingwith(full_path, 'Created:')
22 # bleh, I was hoping to avoid re but some PEPs editorialize
23 # on the Created line
24 m = re.search(r'''(\d+-\w+-\d{4})''', created_str)
25 if not m:
26 # some older ones have an empty line, that's okay, if it's old
27 # we ipso facto don't care about it.
28 # "return None" would make the most sense but datetime objects
29 # refuse to compare with that. :-|
30 return datetime.datetime(*time.localtime(0)[:6])
31 created_str = m.group(1)
32 try:
33 t = time.strptime(created_str, '%d-%b-%Y')
34 except ValueError:
35 t = time.strptime(created_str, '%d-%B-%Y')
36 return datetime.datetime(*t[:6])
37 peps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]
38 # sort peps by date, newest first
39 peps_with_dt.sort(reverse=True)
40
41 # generate rss items for 10 most recent peps
42 items = []
43 for dt, full_path in peps_with_dt[:10]:
44 try:
45 n = int(full_path.split('-')[-1].split('.')[0])
46 except ValueError:
47 pass
48 title = firstline_startingwith(full_path, 'Title:')
49 author = firstline_startingwith(full_path, 'Author:')
50 url = 'http://www.python.org/dev/peps/pep-%0.4d' % n
51 item = rssgen.RSSItem(
52 title = 'PEP %d: %s' % (n, title),
53 link = url,
54 description = 'Author: %s' % author,
55 guid = rssgen.Guid(url),
56 pubDate = dt)
57 items.append(item)
58
59 # the rss envelope
60 desc = """
61 Newest Python Enhancement Proposals (PEPs) - Information on new
62 language features, and some meta-information like release
63 procedure and schedules
64 """.strip()
65 rss = rssgen.RSS2(
66 title = 'Newest Python PEPs',
67 link = 'http://www.python.org/dev/peps',
68 description = desc,
69 lastBuildDate = datetime.datetime.now(),
70 items = items)
71
72 with open(RSS_PATH, 'w') as fp:
73 fp.write(rss.to_xml())
74
[end of pep2rss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pep2rss.py b/pep2rss.py
--- a/pep2rss.py
+++ b/pep2rss.py
@@ -15,8 +15,10 @@
return line[len(text):].strip()
return None
-# get list of peps with creation time (from "Created:" string in pep .txt)
+# get list of peps with creation time
+# (from "Created:" string in pep .rst or .txt)
peps = glob.glob('pep-*.txt')
+peps.extend(glob.glob('pep-*.rst'))
def pep_creation_dt(full_path):
created_str = firstline_startingwith(full_path, 'Created:')
# bleh, I was hoping to avoid re but some PEPs editorialize
| {"golden_diff": "diff --git a/pep2rss.py b/pep2rss.py\n--- a/pep2rss.py\n+++ b/pep2rss.py\n@@ -15,8 +15,10 @@\n return line[len(text):].strip()\n return None\n \n-# get list of peps with creation time (from \"Created:\" string in pep .txt)\n+# get list of peps with creation time\n+# (from \"Created:\" string in pep .rst or .txt)\n peps = glob.glob('pep-*.txt')\n+peps.extend(glob.glob('pep-*.rst'))\n def pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n", "issue": "pep2rss disregards PEPs written in reStructuredText format\nThis can be seen at https://www.python.org/dev/peps/peps.rss/ where the last (most recent) RSS entry is the last PEP written in plaintext.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# usage: pep-hook.py $REPOS $REV\n# (standard post-commit args)\n\nimport os, glob, time, datetime, stat, re, sys\nimport codecs\nimport PyRSS2Gen as rssgen\n\nRSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n\ndef firstline_startingwith(full_path, text):\n for line in codecs.open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n\n# get list of peps with creation time (from \"Created:\" string in pep .txt)\npeps = glob.glob('pep-*.txt')\ndef pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n # on the Created line\n m = re.search(r'''(\\d+-\\w+-\\d{4})''', created_str)\n if not m:\n # some older ones have an empty line, that's okay, if it's old\n # we ipso facto don't care about it.\n # \"return None\" would make the most sense but datetime objects\n # refuse to compare with that. :-|\n return datetime.datetime(*time.localtime(0)[:6])\n created_str = m.group(1)\n try:\n t = time.strptime(created_str, '%d-%b-%Y')\n except ValueError:\n t = time.strptime(created_str, '%d-%B-%Y')\n return datetime.datetime(*t[:6])\npeps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]\n# sort peps by date, newest first\npeps_with_dt.sort(reverse=True)\n\n# generate rss items for 10 most recent peps\nitems = []\nfor dt, full_path in peps_with_dt[:10]:\n try:\n n = int(full_path.split('-')[-1].split('.')[0])\n except ValueError:\n pass\n title = firstline_startingwith(full_path, 'Title:')\n author = firstline_startingwith(full_path, 'Author:')\n url = 'http://www.python.org/dev/peps/pep-%0.4d' % n\n item = rssgen.RSSItem(\n title = 'PEP %d: %s' % (n, title),\n link = url,\n description = 'Author: %s' % author,\n guid = rssgen.Guid(url),\n pubDate = dt)\n items.append(item)\n\n# the rss envelope\ndesc = \"\"\"\nNewest Python Enhancement Proposals (PEPs) - Information on new\nlanguage features, and some meta-information like release\nprocedure and schedules\n\"\"\".strip()\nrss = rssgen.RSS2(\n title = 'Newest Python PEPs',\n link = 'http://www.python.org/dev/peps',\n description = desc,\n lastBuildDate = datetime.datetime.now(),\n items = items)\n\nwith open(RSS_PATH, 'w') as fp:\n fp.write(rss.to_xml())\n", "path": "pep2rss.py"}]} | 1,405 | 176 |
gh_patches_debug_14253 | rasdani/github-patches | git_diff | oppia__oppia-7996 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exploration Cards Show "Invalid date" as date
**Describe the bug**
In the library, exploration cards have `Invalid date` in the lower right-hand corner.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://oppiatestserver.appspot.com/library
**Observed behavior**
The exploration cards show `Invalid date`
**Expected behavior**
The cards should show the creation date.
**Screenshots**

**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**
- OS: macOS
- Browser: Firefox
- Version: 2.8.7
Publish change button has overflowing text
**Describe the bug**
Publish change text while publishing a collection moves out of the button box.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a collection and check the publish button. The text moves out of the button box.
**Screenshots**
<img width="1440" alt="Screenshot 2019-11-14 at 12 35 14 AM" src="https://user-images.githubusercontent.com/15226041/68795290-a9a08b80-0676-11ea-8b46-57b6b68c3077.png">
**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**
- OS: Mac
- Browser: Chrome
</issue>
<code>
[start of scripts/typescript_checks.py]
1 # Copyright 2019 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """File for compiling and checking typescript."""
16 from __future__ import absolute_import # pylint: disable=import-only-modules
17 from __future__ import unicode_literals # pylint: disable=import-only-modules
18
19 import json
20 import os
21 import shutil
22 import subprocess
23 import sys
24
25 import python_utils
26
27 COMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')
28 TSCONFIG_FILEPATH = 'tsconfig-for-compile-check.json'
29
30
31 def validate_compiled_js_dir():
32 """Validates that compiled js dir matches out dir in tsconfig."""
33 with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:
34 config_data = json.load(f)
35 out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')
36 if out_dir != COMPILED_JS_DIR:
37 raise Exception(
38 'COMPILED_JS_DIR: %s does not match the output directory '
39 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))
40
41
42 def compile_and_check_typescript():
43 """Compiles typescript files and checks the compilation errors."""
44 node_path = os.path.join(os.pardir, 'oppia_tools/node-10.15.3')
45 os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']
46
47 validate_compiled_js_dir()
48
49 if os.path.exists(COMPILED_JS_DIR):
50 shutil.rmtree(COMPILED_JS_DIR)
51
52 python_utils.PRINT('Compiling and testing typescript...')
53 cmd = [
54 './node_modules/typescript/bin/tsc', '--project',
55 TSCONFIG_FILEPATH]
56 process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
57 if os.path.exists(COMPILED_JS_DIR):
58 shutil.rmtree(COMPILED_JS_DIR)
59 error_messages = []
60 for line in iter(process.stdout.readline, ''):
61 error_messages.append(line)
62 if error_messages:
63 python_utils.PRINT('Errors found during compilation\n')
64 for message in error_messages:
65 python_utils.PRINT(message)
66 sys.exit(1)
67 else:
68 python_utils.PRINT('Compilation successful!')
69
70
71 # The 'no coverage' pragma is used as this line is un-testable. This is because
72 # it will only be called when typescript_checks.py is used as a script.
73 if __name__ == '__main__': # pragma: no cover
74 compile_and_check_typescript()
75
[end of scripts/typescript_checks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/typescript_checks.py b/scripts/typescript_checks.py
--- a/scripts/typescript_checks.py
+++ b/scripts/typescript_checks.py
@@ -54,11 +54,11 @@
'./node_modules/typescript/bin/tsc', '--project',
TSCONFIG_FILEPATH]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
- if os.path.exists(COMPILED_JS_DIR):
- shutil.rmtree(COMPILED_JS_DIR)
error_messages = []
for line in iter(process.stdout.readline, ''):
error_messages.append(line)
+ if os.path.exists(COMPILED_JS_DIR):
+ shutil.rmtree(COMPILED_JS_DIR)
if error_messages:
python_utils.PRINT('Errors found during compilation\n')
for message in error_messages:
| {"golden_diff": "diff --git a/scripts/typescript_checks.py b/scripts/typescript_checks.py\n--- a/scripts/typescript_checks.py\n+++ b/scripts/typescript_checks.py\n@@ -54,11 +54,11 @@\n './node_modules/typescript/bin/tsc', '--project',\n TSCONFIG_FILEPATH]\n process = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n- if os.path.exists(COMPILED_JS_DIR):\n- shutil.rmtree(COMPILED_JS_DIR)\n error_messages = []\n for line in iter(process.stdout.readline, ''):\n error_messages.append(line)\n+ if os.path.exists(COMPILED_JS_DIR):\n+ shutil.rmtree(COMPILED_JS_DIR)\n if error_messages:\n python_utils.PRINT('Errors found during compilation\\n')\n for message in error_messages:\n", "issue": "Exploration Cards Show \"Invalid date\" as date\n**Describe the bug**\r\nIn the library, exploration cards have `Invalid date` in the lower right-hand corner.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n 1. Go to https://oppiatestserver.appspot.com/library\r\n\r\n**Observed behavior**\r\nThe exploration cards show `Invalid date`\r\n\r\n**Expected behavior**\r\nThe cards should show the creation date.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**\r\n - OS: macOS\r\n - Browser: Firefox\r\n - Version: 2.8.7\nPublish change button has overflowing text\n**Describe the bug**\r\nPublish change text while publishing a collection moves out of the button box.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n 1. Create a collection and check the publish button. The text moves out of the button box.\r\n\r\n**Screenshots**\r\n<img width=\"1440\" alt=\"Screenshot 2019-11-14 at 12 35 14 AM\" src=\"https://user-images.githubusercontent.com/15226041/68795290-a9a08b80-0676-11ea-8b46-57b6b68c3077.png\">\r\n\r\n\r\n**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**\r\n - OS: Mac\r\n - Browser: Chrome\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"File for compiling and checking typescript.\"\"\"\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport json\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nimport python_utils\n\nCOMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')\nTSCONFIG_FILEPATH = 'tsconfig-for-compile-check.json'\n\n\ndef validate_compiled_js_dir():\n \"\"\"Validates that compiled js dir matches out dir in tsconfig.\"\"\"\n with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:\n config_data = json.load(f)\n out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')\n if out_dir != COMPILED_JS_DIR:\n raise Exception(\n 'COMPILED_JS_DIR: %s does not match the output directory '\n 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))\n\n\ndef compile_and_check_typescript():\n \"\"\"Compiles typescript files and checks the compilation errors.\"\"\"\n node_path = os.path.join(os.pardir, 'oppia_tools/node-10.15.3')\n os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']\n\n validate_compiled_js_dir()\n\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n\n python_utils.PRINT('Compiling and testing typescript...')\n cmd = [\n './node_modules/typescript/bin/tsc', '--project',\n TSCONFIG_FILEPATH]\n process = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n error_messages = []\n for line in iter(process.stdout.readline, ''):\n error_messages.append(line)\n if error_messages:\n python_utils.PRINT('Errors found during compilation\\n')\n for message in error_messages:\n python_utils.PRINT(message)\n sys.exit(1)\n else:\n python_utils.PRINT('Compilation successful!')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when typescript_checks.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n compile_and_check_typescript()\n", "path": "scripts/typescript_checks.py"}]} | 1,752 | 169 |
gh_patches_debug_33380 | rasdani/github-patches | git_diff | apache__airflow-26343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API Endpoints - /xcomEntries/{xcom_key} cannot deserialize customized xcom backend
### Description
We use S3 as our xcom backend database and write serialize/deserialize method for xcoms.
However, when we want to access xcom through REST API, it returns the s3 file url instead of the deserialized value. Could you please add the feature to support customized xcom backend for REST API access?
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of airflow/api_connexion/endpoints/xcom_endpoint.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 from typing import Optional
18
19 from flask import g
20 from sqlalchemy import and_
21 from sqlalchemy.orm import Session
22
23 from airflow.api_connexion import security
24 from airflow.api_connexion.exceptions import NotFound
25 from airflow.api_connexion.parameters import check_limit, format_parameters
26 from airflow.api_connexion.schemas.xcom_schema import XComCollection, xcom_collection_schema, xcom_schema
27 from airflow.api_connexion.types import APIResponse
28 from airflow.models import DagRun as DR, XCom
29 from airflow.security import permissions
30 from airflow.utils.airflow_flask_app import get_airflow_app
31 from airflow.utils.session import NEW_SESSION, provide_session
32
33
34 @security.requires_access(
35 [
36 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),
37 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),
38 (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),
39 (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),
40 ],
41 )
42 @format_parameters({"limit": check_limit})
43 @provide_session
44 def get_xcom_entries(
45 *,
46 dag_id: str,
47 dag_run_id: str,
48 task_id: str,
49 limit: Optional[int],
50 offset: Optional[int] = None,
51 session: Session = NEW_SESSION,
52 ) -> APIResponse:
53 """Get all XCom values"""
54 query = session.query(XCom)
55 if dag_id == '~':
56 appbuilder = get_airflow_app().appbuilder
57 readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)
58 query = query.filter(XCom.dag_id.in_(readable_dag_ids))
59 query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
60 else:
61 query = query.filter(XCom.dag_id == dag_id)
62 query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
63
64 if task_id != '~':
65 query = query.filter(XCom.task_id == task_id)
66 if dag_run_id != '~':
67 query = query.filter(DR.run_id == dag_run_id)
68 query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)
69 total_entries = query.count()
70 query = query.offset(offset).limit(limit)
71 return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))
72
73
74 @security.requires_access(
75 [
76 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),
77 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),
78 (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),
79 (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),
80 ],
81 )
82 @provide_session
83 def get_xcom_entry(
84 *,
85 dag_id: str,
86 task_id: str,
87 dag_run_id: str,
88 xcom_key: str,
89 session: Session = NEW_SESSION,
90 ) -> APIResponse:
91 """Get an XCom entry"""
92 query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)
93 query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
94 query = query.filter(DR.run_id == dag_run_id)
95
96 query_object = query.one_or_none()
97 if not query_object:
98 raise NotFound("XCom entry not found")
99 return xcom_schema.dump(query_object)
100
[end of airflow/api_connexion/endpoints/xcom_endpoint.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/airflow/api_connexion/endpoints/xcom_endpoint.py b/airflow/api_connexion/endpoints/xcom_endpoint.py
--- a/airflow/api_connexion/endpoints/xcom_endpoint.py
+++ b/airflow/api_connexion/endpoints/xcom_endpoint.py
@@ -14,6 +14,7 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
+import copy
from typing import Optional
from flask import g
@@ -68,7 +69,7 @@
query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)
total_entries = query.count()
query = query.offset(offset).limit(limit)
- return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))
+ return xcom_collection_schema.dump(XComCollection(xcom_entries=query, total_entries=total_entries))
@security.requires_access(
@@ -86,14 +87,28 @@
task_id: str,
dag_run_id: str,
xcom_key: str,
+ deserialize: bool = False,
session: Session = NEW_SESSION,
) -> APIResponse:
"""Get an XCom entry"""
- query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)
+ if deserialize:
+ query = session.query(XCom, XCom.value)
+ else:
+ query = session.query(XCom)
+
+ query = query.filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)
query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
query = query.filter(DR.run_id == dag_run_id)
- query_object = query.one_or_none()
- if not query_object:
+ item = query.one_or_none()
+ if item is None:
raise NotFound("XCom entry not found")
- return xcom_schema.dump(query_object)
+
+ if deserialize:
+ xcom, value = item
+ stub = copy.copy(xcom)
+ stub.value = value
+ stub.value = XCom.deserialize_value(stub)
+ item = stub
+
+ return xcom_schema.dump(item)
| {"golden_diff": "diff --git a/airflow/api_connexion/endpoints/xcom_endpoint.py b/airflow/api_connexion/endpoints/xcom_endpoint.py\n--- a/airflow/api_connexion/endpoints/xcom_endpoint.py\n+++ b/airflow/api_connexion/endpoints/xcom_endpoint.py\n@@ -14,6 +14,7 @@\n # KIND, either express or implied. See the License for the\n # specific language governing permissions and limitations\n # under the License.\n+import copy\n from typing import Optional\n \n from flask import g\n@@ -68,7 +69,7 @@\n query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)\n total_entries = query.count()\n query = query.offset(offset).limit(limit)\n- return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))\n+ return xcom_collection_schema.dump(XComCollection(xcom_entries=query, total_entries=total_entries))\n \n \n @security.requires_access(\n@@ -86,14 +87,28 @@\n task_id: str,\n dag_run_id: str,\n xcom_key: str,\n+ deserialize: bool = False,\n session: Session = NEW_SESSION,\n ) -> APIResponse:\n \"\"\"Get an XCom entry\"\"\"\n- query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)\n+ if deserialize:\n+ query = session.query(XCom, XCom.value)\n+ else:\n+ query = session.query(XCom)\n+\n+ query = query.filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n query = query.filter(DR.run_id == dag_run_id)\n \n- query_object = query.one_or_none()\n- if not query_object:\n+ item = query.one_or_none()\n+ if item is None:\n raise NotFound(\"XCom entry not found\")\n- return xcom_schema.dump(query_object)\n+\n+ if deserialize:\n+ xcom, value = item\n+ stub = copy.copy(xcom)\n+ stub.value = value\n+ stub.value = XCom.deserialize_value(stub)\n+ item = stub\n+\n+ return xcom_schema.dump(item)\n", "issue": "API Endpoints - /xcomEntries/{xcom_key} cannot deserialize customized xcom backend\n### Description\n\nWe use S3 as our xcom backend database and write serialize/deserialize method for xcoms.\r\nHowever, when we want to access xcom through REST API, it returns the s3 file url instead of the deserialized value. Could you please add the feature to support customized xcom backend for REST API access?\n\n### Use case/motivation\n\n_No response_\n\n### Related issues\n\n_No response_\n\n### Are you willing to submit a PR?\n\n- [ ] Yes I am willing to submit a PR!\n\n### Code of Conduct\n\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\nfrom typing import Optional\n\nfrom flask import g\nfrom sqlalchemy import and_\nfrom sqlalchemy.orm import Session\n\nfrom airflow.api_connexion import security\nfrom airflow.api_connexion.exceptions import NotFound\nfrom airflow.api_connexion.parameters import check_limit, format_parameters\nfrom airflow.api_connexion.schemas.xcom_schema import XComCollection, xcom_collection_schema, xcom_schema\nfrom airflow.api_connexion.types import APIResponse\nfrom airflow.models import DagRun as DR, XCom\nfrom airflow.security import permissions\nfrom airflow.utils.airflow_flask_app import get_airflow_app\nfrom airflow.utils.session import NEW_SESSION, provide_session\n\n\[email protected]_access(\n [\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),\n ],\n)\n@format_parameters({\"limit\": check_limit})\n@provide_session\ndef get_xcom_entries(\n *,\n dag_id: str,\n dag_run_id: str,\n task_id: str,\n limit: Optional[int],\n offset: Optional[int] = None,\n session: Session = NEW_SESSION,\n) -> APIResponse:\n \"\"\"Get all XCom values\"\"\"\n query = session.query(XCom)\n if dag_id == '~':\n appbuilder = get_airflow_app().appbuilder\n readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)\n query = query.filter(XCom.dag_id.in_(readable_dag_ids))\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n else:\n query = query.filter(XCom.dag_id == dag_id)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n\n if task_id != '~':\n query = query.filter(XCom.task_id == task_id)\n if dag_run_id != '~':\n query = query.filter(DR.run_id == dag_run_id)\n query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)\n total_entries = query.count()\n query = query.offset(offset).limit(limit)\n return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))\n\n\[email protected]_access(\n [\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),\n ],\n)\n@provide_session\ndef get_xcom_entry(\n *,\n dag_id: str,\n task_id: str,\n dag_run_id: str,\n xcom_key: str,\n session: Session = NEW_SESSION,\n) -> APIResponse:\n \"\"\"Get an XCom entry\"\"\"\n query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n query = query.filter(DR.run_id == dag_run_id)\n\n query_object = query.one_or_none()\n if not query_object:\n raise NotFound(\"XCom entry not found\")\n return xcom_schema.dump(query_object)\n", "path": "airflow/api_connexion/endpoints/xcom_endpoint.py"}]} | 1,832 | 539 |
gh_patches_debug_23835 | rasdani/github-patches | git_diff | ResonantGeoData__ResonantGeoData-70 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add an endpoint to get status of workers
It would be useful to know if we have any workers associated with the system, and, if so, if they are busy.
Specifically, this could probably be something like is done in girder_worker (see https://github.com/girder/girder_worker/blob/master/girder_worker/girder_plugin/api/worker.py#L40-L55). For this purpose, the celery app can be reached via `from rgd import celery_app`.
Ideally, this let's us determine the following conditions:
- The broker is unavailable
- There are no workers
- The number of idle workers
- The number of busy workers (and, ideally, what they are busy doing)
In the future, we may have multiple worker pools (for instance, for GPU and non-GPU tasks), so this will probably change exactly what gets reported in the future.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup
2
3 setup(
4 name='resonantgeodata',
5 version='0.1',
6 python_requires='>=3.8.0',
7 install_requires=[
8 'boto3',
9 'celery!=4.4.4',
10 'django',
11 'django-admin-display',
12 'django-allauth',
13 'django-cleanup',
14 'django-configurations[database]',
15 'django-cors-headers',
16 'django-crispy-forms',
17 'django-extensions',
18 'django-storages',
19 'djangorestframework',
20 'docker',
21 'drf-yasg',
22 'gputil',
23 'psycopg2',
24 'python-magic',
25 'rules',
26 'uritemplate',
27 'whitenoise[brotli]',
28 # Production-only
29 'django-storages',
30 'gunicorn',
31 # Development-only
32 'django-debug-toolbar',
33 'django-minio-storage',
34 ],
35 )
36
[end of setup.py]
[start of core/urls.py]
1 from django.urls import path
2
3 from . import views
4
5 urlpatterns = [
6 path('', views.index, name='index'),
7 path('algorithms/', views.algorithms, name='algorithms'),
8 path(
9 'algorithms/<str:creator>/<int:pk>/',
10 views.AlgorithmDetailView.as_view(),
11 name='algorithm-detail',
12 ),
13 path(
14 'algorithms/<str:creator>/<int:pk>/delete/',
15 views.AlgorithmDeleteView.as_view(),
16 name='delete-algorithm',
17 ),
18 path('algorithms/new/', views.AlgorithmCreateView.as_view(), name='new-algorithm'),
19 path('jobs/', views.jobs, name='jobs'),
20 path('jobs/new/', views.JobCreateView.as_view(), name='new-job'),
21 path('jobs/<str:creator>/<int:pk>/', views.JobDetailView.as_view(), name='job-detail'),
22 path('tasks/', views.tasks, name='tasks'),
23 path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),
24 path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),
25 ]
26
27 handler500 = views.handler500
28
[end of core/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/urls.py b/core/urls.py
--- a/core/urls.py
+++ b/core/urls.py
@@ -1,7 +1,11 @@
+from django.contrib import admin
from django.urls import path
+from djproxy.urls import generate_routes
from . import views
+
+admin.site.index_template = 'admin/add_flower.html'
urlpatterns = [
path('', views.index, name='index'),
path('algorithms/', views.algorithms, name='algorithms'),
@@ -22,6 +26,6 @@
path('tasks/', views.tasks, name='tasks'),
path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),
path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),
-]
+] + generate_routes({'flower-proxy': {'base_url': 'http://flower:5555/', 'prefix': '/flower/'}})
handler500 = views.handler500
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,6 +17,7 @@
'django-extensions',
'django-storages',
'djangorestframework',
+ 'djproxy',
'docker',
'drf-yasg',
'gputil',
| {"golden_diff": "diff --git a/core/urls.py b/core/urls.py\n--- a/core/urls.py\n+++ b/core/urls.py\n@@ -1,7 +1,11 @@\n+from django.contrib import admin\n from django.urls import path\n+from djproxy.urls import generate_routes\n \n from . import views\n \n+\n+admin.site.index_template = 'admin/add_flower.html'\n urlpatterns = [\n path('', views.index, name='index'),\n path('algorithms/', views.algorithms, name='algorithms'),\n@@ -22,6 +26,6 @@\n path('tasks/', views.tasks, name='tasks'),\n path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),\n path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),\n-]\n+] + generate_routes({'flower-proxy': {'base_url': 'http://flower:5555/', 'prefix': '/flower/'}})\n \n handler500 = views.handler500\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,6 +17,7 @@\n 'django-extensions',\n 'django-storages',\n 'djangorestframework',\n+ 'djproxy',\n 'docker',\n 'drf-yasg',\n 'gputil',\n", "issue": "Add an endpoint to get status of workers\nIt would be useful to know if we have any workers associated with the system, and, if so, if they are busy.\r\n\r\nSpecifically, this could probably be something like is done in girder_worker (see https://github.com/girder/girder_worker/blob/master/girder_worker/girder_plugin/api/worker.py#L40-L55). For this purpose, the celery app can be reached via `from rgd import celery_app`.\r\n\r\nIdeally, this let's us determine the following conditions:\r\n- The broker is unavailable \r\n- There are no workers\r\n- The number of idle workers\r\n- The number of busy workers (and, ideally, what they are busy doing)\r\n\r\nIn the future, we may have multiple worker pools (for instance, for GPU and non-GPU tasks), so this will probably change exactly what gets reported in the future.\n", "before_files": [{"content": "from setuptools import setup\n\nsetup(\n name='resonantgeodata',\n version='0.1',\n python_requires='>=3.8.0',\n install_requires=[\n 'boto3',\n 'celery!=4.4.4',\n 'django',\n 'django-admin-display',\n 'django-allauth',\n 'django-cleanup',\n 'django-configurations[database]',\n 'django-cors-headers',\n 'django-crispy-forms',\n 'django-extensions',\n 'django-storages',\n 'djangorestframework',\n 'docker',\n 'drf-yasg',\n 'gputil',\n 'psycopg2',\n 'python-magic',\n 'rules',\n 'uritemplate',\n 'whitenoise[brotli]',\n # Production-only\n 'django-storages',\n 'gunicorn',\n # Development-only\n 'django-debug-toolbar',\n 'django-minio-storage',\n ],\n)\n", "path": "setup.py"}, {"content": "from django.urls import path\n\nfrom . 
import views\n\nurlpatterns = [\n path('', views.index, name='index'),\n path('algorithms/', views.algorithms, name='algorithms'),\n path(\n 'algorithms/<str:creator>/<int:pk>/',\n views.AlgorithmDetailView.as_view(),\n name='algorithm-detail',\n ),\n path(\n 'algorithms/<str:creator>/<int:pk>/delete/',\n views.AlgorithmDeleteView.as_view(),\n name='delete-algorithm',\n ),\n path('algorithms/new/', views.AlgorithmCreateView.as_view(), name='new-algorithm'),\n path('jobs/', views.jobs, name='jobs'),\n path('jobs/new/', views.JobCreateView.as_view(), name='new-job'),\n path('jobs/<str:creator>/<int:pk>/', views.JobDetailView.as_view(), name='job-detail'),\n path('tasks/', views.tasks, name='tasks'),\n path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),\n path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),\n]\n\nhandler500 = views.handler500\n", "path": "core/urls.py"}]} | 1,307 | 297 |
gh_patches_debug_66361 | rasdani/github-patches | git_diff | opsdroid__opsdroid-737 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot disconnect from SQLite
<!-- Before you post an issue or if you are unsure about something join our gitter channel https://gitter.im/opsdroid/ and ask away! We are more than happy to help you. -->
# Description
SQLite database connector can’t disconnect because of wrong method signature.
## Steps to Reproduce
Enable the SQLite database module, then try to shut down the bot.
## Expected Functionality
The bot should shut down.
## Experienced Functionality
This error message on the console, and the bot remains running (but with the connectors already disconnected).
```
ERROR opsdroid.core: {'message': 'Task exception was never retrieved', 'exception': TypeError('disconnect() takes 1 positional argument but 2 were given',), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/polesz/.local/lib/python3.6/site-packages/opsdroid/core.py:121> exception=TypeError('disconnect() takes 1 positional argument but 2 were given',)>}
```
## Versions
- **Opsdroid version:** 0.13.0
- **Python version:** 3.6.6 (bundled with Fedora 28)
- **OS/Docker version:** Fedora 28, no Docker involved
## Additional information
It seems the method signature of `Database.disconnect()` is wrong (should be `async def disconnect(self, opsdroid)`) or the caller (`OpsDroid.unload()`) should not pass the `opsdroid` instance to `database.disconnect()` (personally i’d vote for the former).
</issue>
<code>
[start of opsdroid/database/__init__.py]
1 """A base class for databases to inherit from."""
2
3
4 class Database():
5 """A base database.
6
7 Database classes are used to persist key/value pairs in a database.
8
9 """
10
11 def __init__(self, config):
12 """Create the database.
13
14 Set some basic properties from the database config such as the name
15 of this database. It could also be a good place to setup properties
16 to hold things like the database connection object and the database
17 name.
18
19 Args:
20 config (dict): The config for this database specified in the
21 `configuration.yaml` file.
22
23 """
24 self.name = ""
25 self.config = config
26 self.client = None
27 self.database = None
28
29 async def connect(self, opsdroid):
30 """Connect to database service and store the connection object.
31
32 This method should connect to the given database using a native
33 python library for that database. The library will most likely involve
34 a connection object which will be used by the put and get methods.
35 This object should be stored in self.
36
37 Args:
38 opsdroid (OpsDroid): An instance of the opsdroid core.
39
40 """
41 raise NotImplementedError
42
43 async def disconnect(self):
44 """Disconnect from the database.
45
46 This method should disconnect from the given database using a native
47 python library for that database.
48
49 """
50 pass
51
52 async def put(self, key, data):
53 """Store the data object in a database against the key.
54
55 The data object will need to be serialised in a sensible way which
56 suits the database being used and allows for reconstruction of the
57 object.
58
59 Args:
60 key (string): The key to store the data object under.
61 data (object): The data object to store.
62
63 Returns:
64 bool: True for data successfully stored, False otherwise.
65
66 """
67 raise NotImplementedError
68
69 async def get(self, key):
70 """Return a data object for a given key.
71
72 Args:
73 key (string): The key to lookup in the database.
74
75 Returns:
76 object or None: The data object stored for that key, or None if no
77 object found for that key.
78
79 """
80 raise NotImplementedError
81
[end of opsdroid/database/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/database/__init__.py b/opsdroid/database/__init__.py
--- a/opsdroid/database/__init__.py
+++ b/opsdroid/database/__init__.py
@@ -40,7 +40,7 @@
"""
raise NotImplementedError
- async def disconnect(self):
+ async def disconnect(self, opsdroid):
"""Disconnect from the database.
This method should disconnect from the given database using a native
| {"golden_diff": "diff --git a/opsdroid/database/__init__.py b/opsdroid/database/__init__.py\n--- a/opsdroid/database/__init__.py\n+++ b/opsdroid/database/__init__.py\n@@ -40,7 +40,7 @@\n \"\"\"\n raise NotImplementedError\n \n- async def disconnect(self):\n+ async def disconnect(self, opsdroid):\n \"\"\"Disconnect from the database.\n \n This method should disconnect from the given database using a native\n", "issue": "Cannot disconnect from SQLite\n<!-- Before you post an issue or if you are unsure about something join our gitter channel https://gitter.im/opsdroid/ and ask away! We are more than happy to help you. -->\r\n# Description\r\nSQLite database connector can\u2019t disconnect because of wrong method signature.\r\n\r\n## Steps to Reproduce\r\nEnable the SQLite database module, then try to shut down the bot.\r\n\r\n\r\n## Expected Functionality\r\nThe bot should shut down.\r\n\r\n## Experienced Functionality\r\nThis error message on the console, and the bot remains running (but with the connectors already disconnected).\r\n\r\n```\r\nERROR opsdroid.core: {'message': 'Task exception was never retrieved', 'exception': TypeError('disconnect() takes 1 positional argument but 2 were given',), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/polesz/.local/lib/python3.6/site-packages/opsdroid/core.py:121> exception=TypeError('disconnect() takes 1 positional argument but 2 were given',)>}\r\n```\r\n\r\n## Versions\r\n- **Opsdroid version:** 0.13.0\r\n- **Python version:** 3.6.6 (bundled with Fedora 28)\r\n- **OS/Docker version:** Fedora 28, no Docker involved\r\n\r\n## Additional information\r\nIt seems the method signature of `Database.disconnect()` is wrong (should be `async def disconnect(self, opsdroid)`) or the caller (`OpsDroid.unload()`) should not pass the `opsdroid` instance to `database.disconnect()` (personally i\u2019d vote for the former).\n", "before_files": [{"content": "\"\"\"A base class for databases to inherit from.\"\"\"\n\n\nclass Database():\n \"\"\"A base database.\n\n Database classes are used to persist key/value pairs in a database.\n\n \"\"\"\n\n def __init__(self, config):\n \"\"\"Create the database.\n\n Set some basic properties from the database config such as the name\n of this database. It could also be a good place to setup properties\n to hold things like the database connection object and the database\n name.\n\n Args:\n config (dict): The config for this database specified in the\n `configuration.yaml` file.\n\n \"\"\"\n self.name = \"\"\n self.config = config\n self.client = None\n self.database = None\n\n async def connect(self, opsdroid):\n \"\"\"Connect to database service and store the connection object.\n\n This method should connect to the given database using a native\n python library for that database. 
The library will most likely involve\n a connection object which will be used by the put and get methods.\n This object should be stored in self.\n\n Args:\n opsdroid (OpsDroid): An instance of the opsdroid core.\n\n \"\"\"\n raise NotImplementedError\n\n async def disconnect(self):\n \"\"\"Disconnect from the database.\n\n This method should disconnect from the given database using a native\n python library for that database.\n\n \"\"\"\n pass\n\n async def put(self, key, data):\n \"\"\"Store the data object in a database against the key.\n\n The data object will need to be serialised in a sensible way which\n suits the database being used and allows for reconstruction of the\n object.\n\n Args:\n key (string): The key to store the data object under.\n data (object): The data object to store.\n\n Returns:\n bool: True for data successfully stored, False otherwise.\n\n \"\"\"\n raise NotImplementedError\n\n async def get(self, key):\n \"\"\"Return a data object for a given key.\n\n Args:\n key (string): The key to lookup in the database.\n\n Returns:\n object or None: The data object stored for that key, or None if no\n object found for that key.\n\n \"\"\"\n raise NotImplementedError\n", "path": "opsdroid/database/__init__.py"}]} | 1,518 | 106 |
gh_patches_debug_37224 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5254 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Checkov Managed Disk Encryption check in Bicep IaC failing
**Describe the issue**
The Checkov Managed Disk Encryption check fails despite the required encryption block being present in the Bicep code. It only passes when both encryption blocks are in the code, with one of them commented out, as in the example below.
**Examples**
```
resource Disks 'Microsoft.Compute/disks@2022-07-02' = [for (disk, i) in dataDisks: {
name: disk.diskName
location: location
tags: tags
sku: {
name: disk.storageAccountType
}
zones: [
avZone
]
properties: {
creationData: {
createOption: 'Empty'
}
diskSizeGB: disk.diskSizeGB
// encryption: {
// type: 'EncryptionAtRestWithCustomerKey'
// diskEncryptionSetId: diskEncryptionSetId
// }
encryption: {
type: 'EncryptionAtRestWithCustomerKey'
diskEncryptionSetId: diskEncryptionSetId
}
// encryptionSettingsCollection: {
// enabled: true
// encryptionSettings: [
// {
// diskEncryptionKey: {
// secretUrl: keyURL
// sourceVault: {
// id: keyVaultId
// }
// }
// }
// ]
// }
}
}]
```
**Version:**
- Latest
**Additional context**
Even if I remove the commented-out sections, the check still fails. If I include the `encryptionSettingsCollection` block, the check also fails. It only passes when the resource is formatted exactly as above.
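
For reference, a minimal sketch of how the check appears to branch (a hypothetical simplification of `scan_resource_conf` in `checkov/arm/checks/resource/AzureManagedDiscEncryption.py`; the `properties` dicts here are assumed parsed forms of the Bicep block above, not the parser's exact output):

```python
# Illustrative only: mirrors the check's if/elif structure. Note that the
# newer `encryption` property is never inspected, so a disk encrypted with
# EncryptionAtRestWithCustomerKey falls through to FAILED.
def scan(properties: dict) -> str:
    if "encryptionSettingsCollection" in properties:
        if str(properties["encryptionSettingsCollection"].get("enabled")).lower() == "true":
            return "PASSED"
    elif "encryptionSettings" in properties:
        if str(properties["encryptionSettings"].get("enabled")).lower() == "true":
            return "PASSED"
    return "FAILED"

print(scan({"encryption": {"type": "EncryptionAtRestWithCustomerKey"}}))  # FAILED
print(scan({"encryptionSettingsCollection": {"enabled": True}}))          # PASSED
```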
</issue>
<code>
[start of checkov/arm/checks/resource/AzureManagedDiscEncryption.py]
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.common.models.enums import CheckResult, CheckCategories
6 from checkov.arm.base_resource_check import BaseResourceCheck
7
8
9 class AzureManagedDiscEncryption(BaseResourceCheck):
10 def __init__(self) -> None:
11 name = "Ensure Azure managed disk have encryption enabled"
12 id = "CKV_AZURE_2"
13 supported_resources = ("Microsoft.Compute/disks",)
14 categories = (CheckCategories.ENCRYPTION,)
15 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
16
17 def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:
18 if "properties" in conf:
19 if "encryptionSettingsCollection" in conf["properties"]:
20 if "enabled" in conf["properties"]["encryptionSettingsCollection"]:
21 if str(conf["properties"]["encryptionSettingsCollection"]["enabled"]).lower() == "true":
22 return CheckResult.PASSED
23 elif "encryptionSettings" in conf["properties"]:
24 if "enabled" in conf["properties"]["encryptionSettings"]:
25 if str(conf["properties"]["encryptionSettings"]["enabled"]).lower() == "true":
26 return CheckResult.PASSED
27 return CheckResult.FAILED
28
29
30 check = AzureManagedDiscEncryption()
31
[end of checkov/arm/checks/resource/AzureManagedDiscEncryption.py]
[start of checkov/arm/base_resource_check.py]
1 from __future__ import annotations
2
3 from abc import abstractmethod
4 from collections.abc import Iterable
5 from typing import Any, Callable
6
7 from checkov.arm.registry import arm_resource_registry
8 from checkov.bicep.checks.resource.registry import registry as bicep_registry
9 from checkov.common.checks.base_check import BaseCheck
10 from checkov.common.models.enums import CheckCategories, CheckResult
11 from checkov.common.multi_signature import multi_signature
12
13
14 class BaseResourceCheck(BaseCheck):
15 def __init__(
16 self,
17 name: str,
18 id: str,
19 categories: "Iterable[CheckCategories]",
20 supported_resources: "Iterable[str]",
21 guideline: str | None = None,
22 ) -> None:
23 super().__init__(
24 name=name,
25 id=id,
26 categories=categories,
27 supported_entities=supported_resources,
28 block_type="resource",
29 guideline=guideline,
30 )
31 self.supported_resources = supported_resources
32 arm_resource_registry.register(self)
33 # leverage ARM checks to use with bicep runner
34 bicep_registry.register(self)
35
36 def scan_entity_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult: # type:ignore[override] # it's ok
37 self.entity_type = entity_type
38
39 # the "existing" key indicates a Bicep resource
40 if "existing" in conf:
41 if conf["existing"] is True:
42 # the existing keyword is used to retrieve information about an already deployed resource
43 return CheckResult.UNKNOWN
44
45 self.api_version = conf["api_version"]
46 conf["config"]["apiVersion"] = conf["api_version"] # set for better reusability of existing ARM checks
47
48 return self.scan_resource_conf(conf["config"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
49
50 self.api_version = None
51
52 return self.scan_resource_conf(conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
53
54 @multi_signature()
55 @abstractmethod
56 def scan_resource_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult:
57 raise NotImplementedError()
58
59 @classmethod
60 @scan_resource_conf.add_signature(args=["self", "conf"])
61 def _scan_resource_conf_self_conf(cls, wrapped: Callable[..., CheckResult]) -> Callable[..., CheckResult]:
62 def wrapper(self: BaseCheck, conf: dict[str, Any], entity_type: str | None = None) -> CheckResult:
63 # keep default argument for entity_type so old code, that doesn't set it, will work.
64 return wrapped(self, conf)
65
66 return wrapper
67
[end of checkov/arm/base_resource_check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/arm/base_resource_check.py b/checkov/arm/base_resource_check.py
--- a/checkov/arm/base_resource_check.py
+++ b/checkov/arm/base_resource_check.py
@@ -45,7 +45,12 @@
self.api_version = conf["api_version"]
conf["config"]["apiVersion"] = conf["api_version"] # set for better reusability of existing ARM checks
- return self.scan_resource_conf(conf["config"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
+ resource_conf = conf["config"]
+ if "loop_type" in resource_conf:
+ # this means the whole resource block is surrounded by a for loop
+ resource_conf = resource_conf["config"]
+
+ return self.scan_resource_conf(resource_conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
self.api_version = None
diff --git a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py
--- a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py
+++ b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py
@@ -4,6 +4,7 @@
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.arm.base_resource_check import BaseResourceCheck
+from checkov.common.util.data_structures_utils import find_in_dict
class AzureManagedDiscEncryption(BaseResourceCheck):
@@ -15,15 +16,21 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:
- if "properties" in conf:
- if "encryptionSettingsCollection" in conf["properties"]:
- if "enabled" in conf["properties"]["encryptionSettingsCollection"]:
- if str(conf["properties"]["encryptionSettingsCollection"]["enabled"]).lower() == "true":
- return CheckResult.PASSED
- elif "encryptionSettings" in conf["properties"]:
- if "enabled" in conf["properties"]["encryptionSettings"]:
- if str(conf["properties"]["encryptionSettings"]["enabled"]).lower() == "true":
- return CheckResult.PASSED
+ properties = conf.get("properties")
+ if properties:
+ encryption = properties.get("encryption")
+ if encryption:
+ # if the block exists, then it is enabled
+ return CheckResult.PASSED
+
+ encryption_enabled = find_in_dict(input_dict=properties, key_path="encryptionSettingsCollection/enabled")
+ if str(encryption_enabled).lower() == "true":
+ return CheckResult.PASSED
+
+ encryption_enabled = find_in_dict(input_dict=properties, key_path="encryptionSettings/enabled")
+ if str(encryption_enabled).lower() == "true":
+ return CheckResult.PASSED
+
return CheckResult.FAILED
| {"golden_diff": "diff --git a/checkov/arm/base_resource_check.py b/checkov/arm/base_resource_check.py\n--- a/checkov/arm/base_resource_check.py\n+++ b/checkov/arm/base_resource_check.py\n@@ -45,7 +45,12 @@\n self.api_version = conf[\"api_version\"]\n conf[\"config\"][\"apiVersion\"] = conf[\"api_version\"] # set for better reusability of existing ARM checks\n \n- return self.scan_resource_conf(conf[\"config\"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n+ resource_conf = conf[\"config\"]\n+ if \"loop_type\" in resource_conf:\n+ # this means the whole resource block is surrounded by a for loop\n+ resource_conf = resource_conf[\"config\"]\n+\n+ return self.scan_resource_conf(resource_conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n \n self.api_version = None\n \ndiff --git a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py\n--- a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py\n+++ b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py\n@@ -4,6 +4,7 @@\n \n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.arm.base_resource_check import BaseResourceCheck\n+from checkov.common.util.data_structures_utils import find_in_dict\n \n \n class AzureManagedDiscEncryption(BaseResourceCheck):\n@@ -15,15 +16,21 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:\n- if \"properties\" in conf:\n- if \"encryptionSettingsCollection\" in conf[\"properties\"]:\n- if \"enabled\" in conf[\"properties\"][\"encryptionSettingsCollection\"]:\n- if str(conf[\"properties\"][\"encryptionSettingsCollection\"][\"enabled\"]).lower() == \"true\":\n- return CheckResult.PASSED\n- elif \"encryptionSettings\" in conf[\"properties\"]:\n- if \"enabled\" in conf[\"properties\"][\"encryptionSettings\"]:\n- if str(conf[\"properties\"][\"encryptionSettings\"][\"enabled\"]).lower() == \"true\":\n- return CheckResult.PASSED\n+ properties = conf.get(\"properties\")\n+ if properties:\n+ encryption = properties.get(\"encryption\")\n+ if encryption:\n+ # if the block exists, then it is enabled\n+ return CheckResult.PASSED\n+\n+ encryption_enabled = find_in_dict(input_dict=properties, key_path=\"encryptionSettingsCollection/enabled\")\n+ if str(encryption_enabled).lower() == \"true\":\n+ return CheckResult.PASSED\n+\n+ encryption_enabled = find_in_dict(input_dict=properties, key_path=\"encryptionSettings/enabled\")\n+ if str(encryption_enabled).lower() == \"true\":\n+ return CheckResult.PASSED\n+\n return CheckResult.FAILED\n", "issue": "Checkov Managed Disk Encryption check in Bicep IaC failing\n**Describe the issue**\r\nCheckov Managed Disk Encryption check will fail despite having the required check in Bicep code. 
It will only be successful if both checks are in the code, but need to be hashed out.\r\n\r\n**Examples**\r\n```\r\nresource Disks 'Microsoft.Compute/disks@2022-07-02' = [for (disk, i) in dataDisks: {\r\n name: disk.diskName\r\n location: location\r\n tags: tags\r\n sku: {\r\n name: disk.storageAccountType\r\n }\r\n zones: [\r\n avZone\r\n ]\r\n properties: {\r\n creationData: {\r\n createOption: 'Empty'\r\n }\r\n diskSizeGB: disk.diskSizeGB\r\n // encryption: {\r\n // type: 'EncryptionAtRestWithCustomerKey'\r\n // diskEncryptionSetId: diskEncryptionSetId\r\n // }\r\n encryption: {\r\n type: 'EncryptionAtRestWithCustomerKey'\r\n diskEncryptionSetId: diskEncryptionSetId\r\n }\r\n // encryptionSettingsCollection: {\r\n // enabled: true\r\n // encryptionSettings: [\r\n // {\r\n // diskEncryptionKey: {\r\n // secretUrl: keyURL\r\n // sourceVault: {\r\n // id: keyVaultId\r\n // }\r\n // }\r\n // }\r\n // ]\r\n // }\r\n }\r\n}]\r\n```\r\n\r\n**Version :**\r\n - Latest\r\n\r\n**Additional context**\r\nEven if I remove the commented out sections, the check will fail. If I have the \"encryptionSettingsCollection\" block, the check will fail. It will only work if it is formatted like the above.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.arm.base_resource_check import BaseResourceCheck\n\n\nclass AzureManagedDiscEncryption(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure Azure managed disk have encryption enabled\"\n id = \"CKV_AZURE_2\"\n supported_resources = (\"Microsoft.Compute/disks\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:\n if \"properties\" in conf:\n if \"encryptionSettingsCollection\" in conf[\"properties\"]:\n if \"enabled\" in conf[\"properties\"][\"encryptionSettingsCollection\"]:\n if str(conf[\"properties\"][\"encryptionSettingsCollection\"][\"enabled\"]).lower() == \"true\":\n return CheckResult.PASSED\n elif \"encryptionSettings\" in conf[\"properties\"]:\n if \"enabled\" in conf[\"properties\"][\"encryptionSettings\"]:\n if str(conf[\"properties\"][\"encryptionSettings\"][\"enabled\"]).lower() == \"true\":\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = AzureManagedDiscEncryption()\n", "path": "checkov/arm/checks/resource/AzureManagedDiscEncryption.py"}, {"content": "from __future__ import annotations\n\nfrom abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import Any, Callable\n\nfrom checkov.arm.registry import arm_resource_registry\nfrom checkov.bicep.checks.resource.registry import registry as bicep_registry\nfrom checkov.common.checks.base_check import BaseCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.common.multi_signature import multi_signature\n\n\nclass BaseResourceCheck(BaseCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: \"Iterable[CheckCategories]\",\n supported_resources: \"Iterable[str]\",\n guideline: str | None = None,\n ) -> None:\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_entities=supported_resources,\n block_type=\"resource\",\n guideline=guideline,\n )\n self.supported_resources = supported_resources\n arm_resource_registry.register(self)\n # leverage ARM checks to use with bicep 
runner\n bicep_registry.register(self)\n\n def scan_entity_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult: # type:ignore[override] # it's ok\n self.entity_type = entity_type\n\n # the \"existing\" key indicates a Bicep resource\n if \"existing\" in conf:\n if conf[\"existing\"] is True:\n # the existing keyword is used to retrieve information about an already deployed resource\n return CheckResult.UNKNOWN\n\n self.api_version = conf[\"api_version\"]\n conf[\"config\"][\"apiVersion\"] = conf[\"api_version\"] # set for better reusability of existing ARM checks\n\n return self.scan_resource_conf(conf[\"config\"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n\n self.api_version = None\n\n return self.scan_resource_conf(conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n\n @multi_signature()\n @abstractmethod\n def scan_resource_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult:\n raise NotImplementedError()\n\n @classmethod\n @scan_resource_conf.add_signature(args=[\"self\", \"conf\"])\n def _scan_resource_conf_self_conf(cls, wrapped: Callable[..., CheckResult]) -> Callable[..., CheckResult]:\n def wrapper(self: BaseCheck, conf: dict[str, Any], entity_type: str | None = None) -> CheckResult:\n # keep default argument for entity_type so old code, that doesn't set it, will work.\n return wrapped(self, conf)\n\n return wrapper\n", "path": "checkov/arm/base_resource_check.py"}]} | 1,963 | 653 |
gh_patches_debug_23225 | rasdani/github-patches | git_diff | replicate__cog-843 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set python package version explicitly and expose in package
The cog python package sets version metadata but this has never been updated:
```python
In [1]: from importlib.metadata import version
In [2]: version('cog')
Out[2]: '0.0.1'
```
In addition, there's no `__version__` property on the package. This isn't essential but it would be nice to have this too:
```python
In [3]: import cog
In [4]: cog.__version__
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In [4], line 1
----> 1 cog.__version__
AttributeError: module 'cog' has no attribute '__version__'
```
It would be really nice to do this in a way that:
- returns the same version from both of the above
- returns the tagged version in tagged builds (e.g. `0.3.4`)
- appends git metadata when not on a tagged build (e.g. `0.3.4-dev+630e696`)
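A minimal sketch of one way to satisfy all three points, assuming a build backend such as `setuptools_scm` (which derives versions from git tags) is configured to write a `_version.py` module into the package at build time:

```python
# python/cog/__init__.py (sketch, assuming setuptools_scm writes _version.py)
try:
    from ._version import __version__  # generated from the git tag at build time
except ImportError:
    # Fallback for plain source checkouts where no version file was written.
    __version__ = "0.0.0+unknown"
```

With that in place, `importlib.metadata.version("cog")` and `cog.__version__` would report the same tag-derived value.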
</issue>
<code>
[start of python/cog/__init__.py]
1 from pydantic import BaseModel
2
3 from .predictor import BasePredictor
4 from .types import File, Input, Path
5
6 __all__ = [
7 "BaseModel",
8 "BasePredictor",
9 "File",
10 "Input",
11 "Path",
12 ]
13
[end of python/cog/__init__.py]
[start of python/setup.py]
1 import setuptools
2
3 with open("../README.md", "r", encoding="utf-8") as fh:
4 long_description = fh.read()
5
6
7 setuptools.setup(
8 name="cog",
9 version="0.0.1",
10 author_email="[email protected]",
11 description="Containers for machine learning",
12 long_description=long_description,
13 long_description_content_type="text/markdown",
14 url="https://github.com/replicate/cog",
15 license="Apache License 2.0",
16 python_requires=">=3.6.0",
17 install_requires=[
18 # intentionally loose. perhaps these should be vendored to not collide with user code?
19 "attrs>=20.1,<23",
20 "fastapi>=0.75.2,<1",
21 "opentelemetry-exporter-otlp>=1.11.1,<2",
22 "opentelemetry-sdk>=1.11.1,<2",
23 "protobuf<=3.20.3",
24 "pydantic>=1,<2",
25 "PyYAML",
26 "redis>=4,<5",
27 "requests>=2,<3",
28 "typing_extensions>=4.1.0",
29 "uvicorn[standard]>=0.12,<1",
30 ],
31 packages=setuptools.find_packages(),
32 )
33
[end of python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/cog/__init__.py b/python/cog/__init__.py
--- a/python/cog/__init__.py
+++ b/python/cog/__init__.py
@@ -3,7 +3,14 @@
from .predictor import BasePredictor
from .types import File, Input, Path
+try:
+ from ._version import __version__
+except ImportError:
+ __version__ = "0.0.0+unknown"
+
+
__all__ = [
+ "__version__",
"BaseModel",
"BasePredictor",
"File",
diff --git a/python/setup.py b/python/setup.py
deleted file mode 100644
--- a/python/setup.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import setuptools
-
-with open("../README.md", "r", encoding="utf-8") as fh:
- long_description = fh.read()
-
-
-setuptools.setup(
- name="cog",
- version="0.0.1",
- author_email="[email protected]",
- description="Containers for machine learning",
- long_description=long_description,
- long_description_content_type="text/markdown",
- url="https://github.com/replicate/cog",
- license="Apache License 2.0",
- python_requires=">=3.6.0",
- install_requires=[
- # intentionally loose. perhaps these should be vendored to not collide with user code?
- "attrs>=20.1,<23",
- "fastapi>=0.75.2,<1",
- "opentelemetry-exporter-otlp>=1.11.1,<2",
- "opentelemetry-sdk>=1.11.1,<2",
- "protobuf<=3.20.3",
- "pydantic>=1,<2",
- "PyYAML",
- "redis>=4,<5",
- "requests>=2,<3",
- "typing_extensions>=4.1.0",
- "uvicorn[standard]>=0.12,<1",
- ],
- packages=setuptools.find_packages(),
-)
| {"golden_diff": "diff --git a/python/cog/__init__.py b/python/cog/__init__.py\n--- a/python/cog/__init__.py\n+++ b/python/cog/__init__.py\n@@ -3,7 +3,14 @@\n from .predictor import BasePredictor\n from .types import File, Input, Path\n \n+try:\n+ from ._version import __version__\n+except ImportError:\n+ __version__ = \"0.0.0+unknown\"\n+\n+\n __all__ = [\n+ \"__version__\",\n \"BaseModel\",\n \"BasePredictor\",\n \"File\",\ndiff --git a/python/setup.py b/python/setup.py\ndeleted file mode 100644\n--- a/python/setup.py\n+++ /dev/null\n@@ -1,32 +0,0 @@\n-import setuptools\n-\n-with open(\"../README.md\", \"r\", encoding=\"utf-8\") as fh:\n- long_description = fh.read()\n-\n-\n-setuptools.setup(\n- name=\"cog\",\n- version=\"0.0.1\",\n- author_email=\"[email protected]\",\n- description=\"Containers for machine learning\",\n- long_description=long_description,\n- long_description_content_type=\"text/markdown\",\n- url=\"https://github.com/replicate/cog\",\n- license=\"Apache License 2.0\",\n- python_requires=\">=3.6.0\",\n- install_requires=[\n- # intentionally loose. perhaps these should be vendored to not collide with user code?\n- \"attrs>=20.1,<23\",\n- \"fastapi>=0.75.2,<1\",\n- \"opentelemetry-exporter-otlp>=1.11.1,<2\",\n- \"opentelemetry-sdk>=1.11.1,<2\",\n- \"protobuf<=3.20.3\",\n- \"pydantic>=1,<2\",\n- \"PyYAML\",\n- \"redis>=4,<5\",\n- \"requests>=2,<3\",\n- \"typing_extensions>=4.1.0\",\n- \"uvicorn[standard]>=0.12,<1\",\n- ],\n- packages=setuptools.find_packages(),\n-)\n", "issue": "Set python package version explicitly and expose in package\nThe cog python package sets version metadata but this has never been updated:\r\n\r\n```python\r\nIn [1]: from importlib.metadata import version\r\n\r\nIn [2]: version('cog')\r\nOut[2]: '0.0.1'\r\n```\r\n\r\nIn addition, there's no `__version__` property on the package. This isn't essential but it would be nice to have this too:\r\n\r\n```python\r\nIn [3]: import cog\r\n\r\nIn [4]: cog.__version__\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In [4], line 1\r\n----> 1 cog.__version__\r\n\r\nAttributeError: module 'cog' has no attribute '__version__'\r\n```\r\n\r\nIt would be really nice to do this in a way that:\r\n\r\n- returns the same version from both of the above\r\n- returns the tagged version in tagged builds (e.g. `0.3.4`)\r\n- appends git metadata when not on a tagged build (e.g. `0.3.4-dev+630e696`)\r\n\r\n\n", "before_files": [{"content": "from pydantic import BaseModel\n\nfrom .predictor import BasePredictor\nfrom .types import File, Input, Path\n\n__all__ = [\n \"BaseModel\",\n \"BasePredictor\",\n \"File\",\n \"Input\",\n \"Path\",\n]\n", "path": "python/cog/__init__.py"}, {"content": "import setuptools\n\nwith open(\"../README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\n\nsetuptools.setup(\n name=\"cog\",\n version=\"0.0.1\",\n author_email=\"[email protected]\",\n description=\"Containers for machine learning\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/replicate/cog\",\n license=\"Apache License 2.0\",\n python_requires=\">=3.6.0\",\n install_requires=[\n # intentionally loose. 
perhaps these should be vendored to not collide with user code?\n \"attrs>=20.1,<23\",\n \"fastapi>=0.75.2,<1\",\n \"opentelemetry-exporter-otlp>=1.11.1,<2\",\n \"opentelemetry-sdk>=1.11.1,<2\",\n \"protobuf<=3.20.3\",\n \"pydantic>=1,<2\",\n \"PyYAML\",\n \"redis>=4,<5\",\n \"requests>=2,<3\",\n \"typing_extensions>=4.1.0\",\n \"uvicorn[standard]>=0.12,<1\",\n ],\n packages=setuptools.find_packages(),\n)\n", "path": "python/setup.py"}]} | 1,198 | 482 |
gh_patches_debug_656 | rasdani/github-patches | git_diff | pex-tool__pex-2081 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.126
On the docket:
+ [x] Resolve sdist builds can race and fail. #2078
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.125"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.125"
+__version__ = "2.1.126"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.125\"\n+__version__ = \"2.1.126\"\n", "issue": "Release 2.1.126\nOn the docket:\r\n+ [x] Resolve sdist builds can race and fail. #2078 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.125\"\n", "path": "pex/version.py"}]} | 618 | 99 |
gh_patches_debug_22628 | rasdani/github-patches | git_diff | beetbox__beets-5160 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`unimported`'s `ignore_subdirectories` doesn't work
### Problem
```sh
beet unimported
```
Leads to directories specified in `ignore_subdirectories` still being listed
### Setup
* OS: Arch Linux
* Python version: 3.11.7
* beets version: 1.6.1
* Turning off plugins made problem go away (yes/no): n/a
My configuration (output of `beet config`) is:
```yaml
unimported:
ignore_extensions: jpg png txt md org mod
ignore_subdirectories: Unsorted import
```
`ignore_extensions` works as expected though
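For reference, a rough sketch of the behaviour being described as expected — filtering out everything *underneath* an ignored directory rather than only exact directory matches (this helper is illustrative, not beets code):

```python
import os


def iter_unimported(library_dir, ignore_dirs, ignore_exts):
    for root, _dirs, files in os.walk(library_dir):
        # Drop anything inside an ignored subdirectory, not just files
        # whose parent directory matches an ignored path exactly.
        if any(root.startswith(ignored) for ignored in ignore_dirs):
            continue
        for name in files:
            if any(name.endswith(ext) for ext in ignore_exts):
                continue
            yield os.path.join(root, name)
```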
</issue>
<code>
[start of beetsplug/unimported.py]
1 # This file is part of beets.
2 # Copyright 2019, Joris Jensen
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """
16 List all files in the library folder which are not listed in the
17 beets library database, including art files
18 """
19
20 import os
21
22 from beets import util
23 from beets.plugins import BeetsPlugin
24 from beets.ui import Subcommand, print_
25
26 __author__ = "https://github.com/MrNuggelz"
27
28
29 class Unimported(BeetsPlugin):
30 def __init__(self):
31 super().__init__()
32 self.config.add({"ignore_extensions": [], "ignore_subdirectories": []})
33
34 def commands(self):
35 def print_unimported(lib, opts, args):
36 ignore_exts = [
37 ("." + x).encode()
38 for x in self.config["ignore_extensions"].as_str_seq()
39 ]
40 ignore_dirs = [
41 os.path.join(lib.directory, x.encode())
42 for x in self.config["ignore_subdirectories"].as_str_seq()
43 ]
44 in_folder = {
45 os.path.join(r, file)
46 for r, d, f in os.walk(lib.directory)
47 for file in f
48 if not any(
49 [file.endswith(ext) for ext in ignore_exts]
50 + [r in ignore_dirs]
51 )
52 }
53 in_library = {x.path for x in lib.items()}
54 art_files = {x.artpath for x in lib.albums()}
55 for f in in_folder - in_library - art_files:
56 print_(util.displayable_path(f))
57
58 unimported = Subcommand(
59 "unimported",
60 help="list all files in the library folder which are not listed"
61 " in the beets library database",
62 )
63 unimported.func = print_unimported
64 return [unimported]
65
[end of beetsplug/unimported.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/unimported.py b/beetsplug/unimported.py
--- a/beetsplug/unimported.py
+++ b/beetsplug/unimported.py
@@ -41,15 +41,17 @@
os.path.join(lib.directory, x.encode())
for x in self.config["ignore_subdirectories"].as_str_seq()
]
- in_folder = {
- os.path.join(r, file)
- for r, d, f in os.walk(lib.directory)
- for file in f
- if not any(
- [file.endswith(ext) for ext in ignore_exts]
- + [r in ignore_dirs]
- )
- }
+ in_folder = set()
+ for root, _, files in os.walk(lib.directory):
+ # do not traverse if root is a child of an ignored directory
+ if any(root.startswith(ignored) for ignored in ignore_dirs):
+ continue
+ for file in files:
+ # ignore files with ignored extensions
+ if any(file.endswith(ext) for ext in ignore_exts):
+ continue
+ in_folder.add(os.path.join(root, file))
+
in_library = {x.path for x in lib.items()}
art_files = {x.artpath for x in lib.albums()}
for f in in_folder - in_library - art_files:
| {"golden_diff": "diff --git a/beetsplug/unimported.py b/beetsplug/unimported.py\n--- a/beetsplug/unimported.py\n+++ b/beetsplug/unimported.py\n@@ -41,15 +41,17 @@\n os.path.join(lib.directory, x.encode())\n for x in self.config[\"ignore_subdirectories\"].as_str_seq()\n ]\n- in_folder = {\n- os.path.join(r, file)\n- for r, d, f in os.walk(lib.directory)\n- for file in f\n- if not any(\n- [file.endswith(ext) for ext in ignore_exts]\n- + [r in ignore_dirs]\n- )\n- }\n+ in_folder = set()\n+ for root, _, files in os.walk(lib.directory):\n+ # do not traverse if root is a child of an ignored directory\n+ if any(root.startswith(ignored) for ignored in ignore_dirs):\n+ continue\n+ for file in files:\n+ # ignore files with ignored extensions\n+ if any(file.endswith(ext) for ext in ignore_exts):\n+ continue\n+ in_folder.add(os.path.join(root, file))\n+\n in_library = {x.path for x in lib.items()}\n art_files = {x.artpath for x in lib.albums()}\n for f in in_folder - in_library - art_files:\n", "issue": "`unimported`'s `ignore_subdirectories` doesn't work\n### Problem\r\n\r\n\r\n```sh\r\nbeet unimported\r\n```\r\n\r\nLeads to directories specified in `ignore_subdirectories` still being listed\r\n\r\n### Setup\r\n\r\n* OS: Arch Linux\r\n* Python version: 3.11.7 \r\n* beets version: 1.6.1\r\n* Turning off plugins made problem go away (yes/no): n/a\r\n\r\nMy configuration (output of `beet config`) is:\r\n\r\n```yaml\r\nunimported:\r\n ignore_extensions: jpg png txt md org mod\r\n ignore_subdirectories: Unsorted import\r\n```\r\n`ignore_extensions` works as expected though\r\n\r\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2019, Joris Jensen\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"\nList all files in the library folder which are not listed in the\n beets library database, including art files\n\"\"\"\n\nimport os\n\nfrom beets import util\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import Subcommand, print_\n\n__author__ = \"https://github.com/MrNuggelz\"\n\n\nclass Unimported(BeetsPlugin):\n def __init__(self):\n super().__init__()\n self.config.add({\"ignore_extensions\": [], \"ignore_subdirectories\": []})\n\n def commands(self):\n def print_unimported(lib, opts, args):\n ignore_exts = [\n (\".\" + x).encode()\n for x in self.config[\"ignore_extensions\"].as_str_seq()\n ]\n ignore_dirs = [\n os.path.join(lib.directory, x.encode())\n for x in self.config[\"ignore_subdirectories\"].as_str_seq()\n ]\n in_folder = {\n os.path.join(r, file)\n for r, d, f in os.walk(lib.directory)\n for file in f\n if not any(\n [file.endswith(ext) for ext in ignore_exts]\n + [r in ignore_dirs]\n )\n }\n in_library = {x.path for x in lib.items()}\n art_files = {x.artpath for x in lib.albums()}\n for f in in_folder - in_library - art_files:\n print_(util.displayable_path(f))\n\n unimported = Subcommand(\n \"unimported\",\n help=\"list all files in the library folder which are not listed\"\n \" in the beets library database\",\n 
)\n unimported.func = print_unimported\n return [unimported]\n", "path": "beetsplug/unimported.py"}]} | 1,311 | 297 |
gh_patches_debug_31966 | rasdani/github-patches | git_diff | nilearn__nilearn-3739 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC] improve instructions in "Default Mode Network extraction of ADHD dataset" example
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe your proposed suggestion in detail.
It seems the instructions in [this example](https://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset) need some improvement. There was some confusion about it raised on [NeuroStars](https://neurostars.org/t/why-is-there-glm-for-resting-state-data/25841). After discussing with @Remi-Gau, we concluded that we could add one or two lines explaining that, in this example, we extract the activity of a seed region and then use the extracted signal as a regressor in a GLM, which yields the correlation of each region with the seed region.
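As a rough sketch of the relationship those added lines could spell out (the variable names are taken from the example script further below, so this snippet is illustrative rather than standalone):

```python
# The seed's extracted signal becomes one column of the GLM design matrix,
# so the z-map computed for that regressor measures how strongly each voxel
# covaries with the seed region.
design_matrix = make_first_level_design_matrix(
    frametimes,
    hrf_model="spm",
    add_regs=seed_time_series,      # seed signal enters as a regressor
    add_reg_names=["pcc_seed"],
)
```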
### List any pages that would be impacted.
https://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset
</issue>
<code>
[start of examples/04_glm_first_level/plot_adhd_dmn.py]
1 """
2 Default Mode Network extraction of ADHD dataset
3 ===============================================
4
5 This example shows a full step-by-step workflow of fitting a GLM to data
6 extracted from a seed on the Posterior Cingulate Cortex and saving the results.
7
8 More specifically:
9
10 1. A sequence of fMRI volumes are loaded.
11 2. A design matrix with the Posterior Cingulate Cortex seed is defined.
12 3. A GLM is applied to the dataset (effect/covariance,
13 then contrast estimation).
14 4. The Default Mode Network is displayed.
15
16 .. include:: ../../../examples/masker_note.rst
17
18 """
19 import numpy as np
20 from nilearn import datasets, plotting
21 from nilearn.glm.first_level import (
22 FirstLevelModel,
23 make_first_level_design_matrix,
24 )
25 from nilearn.maskers import NiftiSpheresMasker
26
27 #########################################################################
28 # Prepare data and analysis parameters
29 # ------------------------------------
30 # Prepare the data.
31 adhd_dataset = datasets.fetch_adhd(n_subjects=1)
32
33 # Prepare timing
34 t_r = 2.0
35 slice_time_ref = 0.0
36 n_scans = 176
37
38 # Prepare seed
39 pcc_coords = (0, -53, 26)
40
41 #########################################################################
42 # Estimate contrasts
43 # ------------------
44 # Specify the contrasts.
45 seed_masker = NiftiSpheresMasker(
46 [pcc_coords],
47 radius=10,
48 detrend=True,
49 standardize="zscore_sample",
50 low_pass=0.1,
51 high_pass=0.01,
52 t_r=2.0,
53 memory="nilearn_cache",
54 memory_level=1,
55 verbose=0,
56 )
57 seed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])
58 frametimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)
59 design_matrix = make_first_level_design_matrix(
60 frametimes,
61 hrf_model="spm",
62 add_regs=seed_time_series,
63 add_reg_names=["pcc_seed"],
64 )
65 dmn_contrast = np.array([1] + [0] * (design_matrix.shape[1] - 1))
66 contrasts = {"seed_based_glm": dmn_contrast}
67
68 #########################################################################
69 # Perform first level analysis
70 # ----------------------------
71 # Setup and fit GLM.
72 first_level_model = FirstLevelModel(t_r=t_r, slice_time_ref=slice_time_ref)
73 first_level_model = first_level_model.fit(
74 run_imgs=adhd_dataset.func[0], design_matrices=design_matrix
75 )
76
77 #########################################################################
78 # Estimate the contrast.
79 print("Contrast seed_based_glm computed.")
80 z_map = first_level_model.compute_contrast(
81 contrasts["seed_based_glm"], output_type="z_score"
82 )
83
84 # Saving snapshots of the contrasts
85 filename = "dmn_z_map.png"
86 display = plotting.plot_stat_map(
87 z_map, threshold=3.0, title="Seed based GLM", cut_coords=pcc_coords
88 )
89 display.add_markers(
90 marker_coords=[pcc_coords], marker_color="g", marker_size=300
91 )
92 display.savefig(filename)
93 print(f"Save z-map in '{filename}'.")
94
95 ###########################################################################
96 # Generating a report
97 # -------------------
98 # It can be useful to quickly generate a
99 # portable, ready-to-view report with most of the pertinent information.
100 # This is easy to do if you have a fitted model and the list of contrasts,
101 # which we do here.
102 from nilearn.reporting import make_glm_report
103
104 report = make_glm_report(
105 first_level_model,
106 contrasts=contrasts,
107 title="ADHD DMN Report",
108 cluster_threshold=15,
109 min_distance=8.0,
110 plot_type="glass",
111 )
112
113 #########################################################################
114 # We have several ways to access the report:
115
116 # report # This report can be viewed in a notebook
117 # report.save_as_html('report.html')
118 # report.open_in_browser()
119
[end of examples/04_glm_first_level/plot_adhd_dmn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/04_glm_first_level/plot_adhd_dmn.py b/examples/04_glm_first_level/plot_adhd_dmn.py
--- a/examples/04_glm_first_level/plot_adhd_dmn.py
+++ b/examples/04_glm_first_level/plot_adhd_dmn.py
@@ -2,8 +2,11 @@
Default Mode Network extraction of ADHD dataset
===============================================
-This example shows a full step-by-step workflow of fitting a GLM to data
+This example shows a full step-by-step workflow of fitting a GLM to signal
extracted from a seed on the Posterior Cingulate Cortex and saving the results.
+More precisely, this example shows how to use a signal extracted from a
+seed region as the regressor in a GLM to determine the correlation
+of each region in the dataset with the seed region.
More specifically:
@@ -39,9 +42,9 @@
pcc_coords = (0, -53, 26)
#########################################################################
-# Estimate contrasts
-# ------------------
-# Specify the contrasts.
+# Extract the seed region's time course
+# -------------------------------------
+# Extract the time course of the seed region.
seed_masker = NiftiSpheresMasker(
[pcc_coords],
radius=10,
@@ -56,6 +59,22 @@
)
seed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])
frametimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)
+
+#########################################################################
+# Plot the time course of the seed region.
+import matplotlib.pyplot as plt
+
+fig = plt.figure(figsize=(9, 3))
+ax = fig.add_subplot(111)
+ax.plot(frametimes, seed_time_series, linewidth=2, label="seed region")
+ax.legend(loc=2)
+ax.set_title("Time course of the seed region")
+plt.show()
+
+#########################################################################
+# Estimate contrasts
+# ------------------
+# Specify the contrasts.
design_matrix = make_first_level_design_matrix(
frametimes,
hrf_model="spm",
| {"golden_diff": "diff --git a/examples/04_glm_first_level/plot_adhd_dmn.py b/examples/04_glm_first_level/plot_adhd_dmn.py\n--- a/examples/04_glm_first_level/plot_adhd_dmn.py\n+++ b/examples/04_glm_first_level/plot_adhd_dmn.py\n@@ -2,8 +2,11 @@\n Default Mode Network extraction of ADHD dataset\n ===============================================\n \n-This example shows a full step-by-step workflow of fitting a GLM to data\n+This example shows a full step-by-step workflow of fitting a GLM to signal\n extracted from a seed on the Posterior Cingulate Cortex and saving the results.\n+More precisely, this example shows how to use a signal extracted from a\n+seed region as the regressor in a GLM to determine the correlation\n+of each region in the dataset with the seed region.\n \n More specifically:\n \n@@ -39,9 +42,9 @@\n pcc_coords = (0, -53, 26)\n \n #########################################################################\n-# Estimate contrasts\n-# ------------------\n-# Specify the contrasts.\n+# Extract the seed region's time course\n+# -------------------------------------\n+# Extract the time course of the seed region.\n seed_masker = NiftiSpheresMasker(\n [pcc_coords],\n radius=10,\n@@ -56,6 +59,22 @@\n )\n seed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])\n frametimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)\n+\n+#########################################################################\n+# Plot the time course of the seed region.\n+import matplotlib.pyplot as plt\n+\n+fig = plt.figure(figsize=(9, 3))\n+ax = fig.add_subplot(111)\n+ax.plot(frametimes, seed_time_series, linewidth=2, label=\"seed region\")\n+ax.legend(loc=2)\n+ax.set_title(\"Time course of the seed region\")\n+plt.show()\n+\n+#########################################################################\n+# Estimate contrasts\n+# ------------------\n+# Specify the contrasts.\n design_matrix = make_first_level_design_matrix(\n frametimes,\n hrf_model=\"spm\",\n", "issue": "[DOC] improve instructions in \"Default Mode Network extraction of ADHD dataset\" example\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Describe your proposed suggestion in detail.\r\n\r\nIt seems the instructions in [this example](https://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset) need some improvement. There was a confusion mentioned on [NeuroStar](https://neurostars.org/t/why-is-there-glm-for-resting-state-data/25841). After discussing with @Remi-Gau, we concluded that maybe we can add one or two lines saying that in this example we extract the activity of a seed region and then use the extracted signal as regressor in a GLM and this will yield the correlation of each region with the seed region.\r\n\r\n### List any pages that would be impacted.\r\n\r\nhttps://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset\n", "before_files": [{"content": "\"\"\"\nDefault Mode Network extraction of ADHD dataset\n===============================================\n\nThis example shows a full step-by-step workflow of fitting a GLM to data\nextracted from a seed on the Posterior Cingulate Cortex and saving the results.\n\nMore specifically:\n\n1. A sequence of fMRI volumes are loaded.\n2. A design matrix with the Posterior Cingulate Cortex seed is defined.\n3. A GLM is applied to the dataset (effect/covariance,\n then contrast estimation).\n4. 
The Default Mode Network is displayed.\n\n.. include:: ../../../examples/masker_note.rst\n\n\"\"\"\nimport numpy as np\nfrom nilearn import datasets, plotting\nfrom nilearn.glm.first_level import (\n FirstLevelModel,\n make_first_level_design_matrix,\n)\nfrom nilearn.maskers import NiftiSpheresMasker\n\n#########################################################################\n# Prepare data and analysis parameters\n# ------------------------------------\n# Prepare the data.\nadhd_dataset = datasets.fetch_adhd(n_subjects=1)\n\n# Prepare timing\nt_r = 2.0\nslice_time_ref = 0.0\nn_scans = 176\n\n# Prepare seed\npcc_coords = (0, -53, 26)\n\n#########################################################################\n# Estimate contrasts\n# ------------------\n# Specify the contrasts.\nseed_masker = NiftiSpheresMasker(\n [pcc_coords],\n radius=10,\n detrend=True,\n standardize=\"zscore_sample\",\n low_pass=0.1,\n high_pass=0.01,\n t_r=2.0,\n memory=\"nilearn_cache\",\n memory_level=1,\n verbose=0,\n)\nseed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])\nframetimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)\ndesign_matrix = make_first_level_design_matrix(\n frametimes,\n hrf_model=\"spm\",\n add_regs=seed_time_series,\n add_reg_names=[\"pcc_seed\"],\n)\ndmn_contrast = np.array([1] + [0] * (design_matrix.shape[1] - 1))\ncontrasts = {\"seed_based_glm\": dmn_contrast}\n\n#########################################################################\n# Perform first level analysis\n# ----------------------------\n# Setup and fit GLM.\nfirst_level_model = FirstLevelModel(t_r=t_r, slice_time_ref=slice_time_ref)\nfirst_level_model = first_level_model.fit(\n run_imgs=adhd_dataset.func[0], design_matrices=design_matrix\n)\n\n#########################################################################\n# Estimate the contrast.\nprint(\"Contrast seed_based_glm computed.\")\nz_map = first_level_model.compute_contrast(\n contrasts[\"seed_based_glm\"], output_type=\"z_score\"\n)\n\n# Saving snapshots of the contrasts\nfilename = \"dmn_z_map.png\"\ndisplay = plotting.plot_stat_map(\n z_map, threshold=3.0, title=\"Seed based GLM\", cut_coords=pcc_coords\n)\ndisplay.add_markers(\n marker_coords=[pcc_coords], marker_color=\"g\", marker_size=300\n)\ndisplay.savefig(filename)\nprint(f\"Save z-map in '{filename}'.\")\n\n###########################################################################\n# Generating a report\n# -------------------\n# It can be useful to quickly generate a\n# portable, ready-to-view report with most of the pertinent information.\n# This is easy to do if you have a fitted model and the list of contrasts,\n# which we do here.\nfrom nilearn.reporting import make_glm_report\n\nreport = make_glm_report(\n first_level_model,\n contrasts=contrasts,\n title=\"ADHD DMN Report\",\n cluster_threshold=15,\n min_distance=8.0,\n plot_type=\"glass\",\n)\n\n#########################################################################\n# We have several ways to access the report:\n\n# report # This report can be viewed in a notebook\n# report.save_as_html('report.html')\n# report.open_in_browser()\n", "path": "examples/04_glm_first_level/plot_adhd_dmn.py"}]} | 1,878 | 466 |
gh_patches_debug_3953 | rasdani/github-patches | git_diff | chainer__chainer-7178 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Emit warning if compiled in Debug mode
In debug mode, ChainerX runs significantly slower.
However, it is sometimes difficult to notice that.
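A minimal sketch of the requested behaviour, placed at the bottom of `chainerx/__init__.py` where `_available` and `_core` are already defined (the `_core._is_debug()` probe name is an assumption about what the C++ core could expose):

```python
import warnings

if _available and _core._is_debug():  # assumed probe exported by the core binary
    warnings.warn(
        'ChainerX core binary is built in debug mode and will run '
        'significantly slower.', stacklevel=2)
```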
</issue>
<code>
[start of chainerx/__init__.py]
1 import os
2 import sys
3
4
5 if sys.version_info[0] < 3:
6 _available = False
7 else:
8 try:
9 from chainerx import _core
10 _available = True
11 except Exception:
12 _available = False
13
14
15 if _available:
16 from numpy import dtype # NOQA
17 from numpy import (
18 bool_, int8, int16, int32, int64, uint8, float16, float32, float64) # NOQA
19 all_dtypes = (
20 bool_, int8, int16, int32, int64, uint8, float16, float32, float64)
21
22 from chainerx._core import * # NOQA
23 from chainerx._core import _to_cupy # NOQA
24
25 from builtins import bool, int, float # NOQA
26
27 from chainerx import _device # NOQA
28
29 from chainerx.creation.from_data import asanyarray # NOQA
30 from chainerx.creation.from_data import fromfile # NOQA
31 from chainerx.creation.from_data import fromfunction # NOQA
32 from chainerx.creation.from_data import fromiter # NOQA
33 from chainerx.creation.from_data import fromstring # NOQA
34 from chainerx.creation.from_data import loadtxt # NOQA
35
36 from chainerx.manipulation.shape import ravel # NOQA
37
38 from chainerx.math.misc import clip # NOQA
39
40 from chainerx import random # NOQA
41
42 _global_context = _core.Context()
43 _core.set_global_default_context(_global_context)
44
45 # Implements ndarray methods in Python
46 from chainerx import _ndarray
47 _ndarray.populate()
48
49 # Temporary workaround implementations that fall back to NumPy/CuPy's
50 # respective functions.
51 from chainerx import _fallback_workarounds
52 _fallback_workarounds.populate()
53
54 # Dynamically inject docstrings
55 from chainerx import _docs
56 _docs.set_docs()
57
58 from chainerx import _cuda
59 # Share memory pool with CuPy.
60 if bool(int(os.getenv('CHAINERX_CUDA_CUPY_SHARE_ALLOCATOR', '0'))):
61 _cuda.cupy_share_allocator()
62 else:
63 class ndarray(object):
64
65 """Dummy class for type testing."""
66
67 def __init__(self, *args, **kwargs):
68 raise RuntimeError('chainerx is not available.')
69
70
71 def is_available():
72 return _available
73
[end of chainerx/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainerx/__init__.py b/chainerx/__init__.py
--- a/chainerx/__init__.py
+++ b/chainerx/__init__.py
@@ -1,5 +1,6 @@
import os
import sys
+import warnings
if sys.version_info[0] < 3:
@@ -70,3 +71,9 @@
def is_available():
return _available
+
+
+if _available and _core._is_debug():
+ # Warn if the ChainerX core binary is built in debug mode
+ warnings.warn(
+ 'ChainerX core binary is built in debug mode.', stacklevel=2)
| {"golden_diff": "diff --git a/chainerx/__init__.py b/chainerx/__init__.py\n--- a/chainerx/__init__.py\n+++ b/chainerx/__init__.py\n@@ -1,5 +1,6 @@\n import os\n import sys\n+import warnings\n \n \n if sys.version_info[0] < 3:\n@@ -70,3 +71,9 @@\n \n def is_available():\n return _available\n+\n+\n+if _available and _core._is_debug():\n+ # Warn if the ChainerX core binary is built in debug mode\n+ warnings.warn(\n+ 'ChainerX core binary is built in debug mode.', stacklevel=2)\n", "issue": "Emit warning if compiled in Debug mode\nIn debug mode ChainerX runs significantly slower.\r\nHowever sometimes it's difficult notice that.\n", "before_files": [{"content": "import os\nimport sys\n\n\nif sys.version_info[0] < 3:\n _available = False\nelse:\n try:\n from chainerx import _core\n _available = True\n except Exception:\n _available = False\n\n\nif _available:\n from numpy import dtype # NOQA\n from numpy import (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64) # NOQA\n all_dtypes = (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64)\n\n from chainerx._core import * # NOQA\n from chainerx._core import _to_cupy # NOQA\n\n from builtins import bool, int, float # NOQA\n\n from chainerx import _device # NOQA\n\n from chainerx.creation.from_data import asanyarray # NOQA\n from chainerx.creation.from_data import fromfile # NOQA\n from chainerx.creation.from_data import fromfunction # NOQA\n from chainerx.creation.from_data import fromiter # NOQA\n from chainerx.creation.from_data import fromstring # NOQA\n from chainerx.creation.from_data import loadtxt # NOQA\n\n from chainerx.manipulation.shape import ravel # NOQA\n\n from chainerx.math.misc import clip # NOQA\n\n from chainerx import random # NOQA\n\n _global_context = _core.Context()\n _core.set_global_default_context(_global_context)\n\n # Implements ndarray methods in Python\n from chainerx import _ndarray\n _ndarray.populate()\n\n # Temporary workaround implementations that fall back to NumPy/CuPy's\n # respective functions.\n from chainerx import _fallback_workarounds\n _fallback_workarounds.populate()\n\n # Dynamically inject docstrings\n from chainerx import _docs\n _docs.set_docs()\n\n from chainerx import _cuda\n # Share memory pool with CuPy.\n if bool(int(os.getenv('CHAINERX_CUDA_CUPY_SHARE_ALLOCATOR', '0'))):\n _cuda.cupy_share_allocator()\nelse:\n class ndarray(object):\n\n \"\"\"Dummy class for type testing.\"\"\"\n\n def __init__(self, *args, **kwargs):\n raise RuntimeError('chainerx is not available.')\n\n\ndef is_available():\n return _available\n", "path": "chainerx/__init__.py"}]} | 1,276 | 149 |
gh_patches_debug_41016 | rasdani/github-patches | git_diff | kubeflow__pipelines-3486 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Component] AutoML Tables component should show link as an artifact
/cc @jessiezcc
/cc @jingzhang36
/assign @Ark-kun
It would be helpful if the components in
https://github.com/kubeflow/pipelines/tree/b89aabbce5d48fca10817c3ed3ecc2acf6c0066a/components/gcp/automl could show the related AutoML Tables URL as a markdown artifact.
For example:
> We would like to be able to click on a link that would take us from the component’s page to an AutoML Tables models page
</issue>
<code>
[start of components/gcp/automl/create_model_for_tables/component.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import NamedTuple
16
17
18 def automl_create_model_for_tables(
19 gcp_project_id: str,
20 gcp_region: str,
21 display_name: str,
22 dataset_id: str,
23 target_column_path: str = None,
24 input_feature_column_paths: list = None,
25 optimization_objective: str = 'MAXIMIZE_AU_PRC',
26 train_budget_milli_node_hours: int = 1000,
27 ) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):
28 import sys
29 import subprocess
30 subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
31
32 from google.cloud import automl
33 client = automl.AutoMlClient()
34
35 location_path = client.location_path(gcp_project_id, gcp_region)
36 model_dict = {
37 'display_name': display_name,
38 'dataset_id': dataset_id,
39 'tables_model_metadata': {
40 'target_column_spec': automl.types.ColumnSpec(name=target_column_path),
41 'input_feature_column_specs': [automl.types.ColumnSpec(name=path) for path in input_feature_column_paths] if input_feature_column_paths else None,
42 'optimization_objective': optimization_objective,
43 'train_budget_milli_node_hours': train_budget_milli_node_hours,
44 },
45 }
46
47 create_model_response = client.create_model(location_path, model_dict)
48 print('Create model operation: {}'.format(create_model_response.operation))
49 result = create_model_response.result()
50 print(result)
51 model_name = result.name
52 model_id = model_name.rsplit('/', 1)[-1]
53 return (model_name, model_id)
54
55
56 if __name__ == '__main__':
57 import kfp
58 kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')
59
[end of components/gcp/automl/create_model_for_tables/component.py]
[start of components/gcp/automl/create_dataset_for_tables/component.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import NamedTuple
16
17
18 def automl_create_dataset_for_tables(
19 gcp_project_id: str,
20 gcp_region: str,
21 display_name: str,
22 description: str = None,
23 tables_dataset_metadata: dict = {},
24 retry=None, #=google.api_core.gapic_v1.method.DEFAULT,
25 timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,
26 metadata: dict = None,
27 ) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):
28 '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables
29 '''
30 import sys
31 import subprocess
32 subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
33
34 import google
35 from google.cloud import automl
36 client = automl.AutoMlClient()
37
38 location_path = client.location_path(gcp_project_id, gcp_region)
39 dataset_dict = {
40 'display_name': display_name,
41 'description': description,
42 'tables_dataset_metadata': tables_dataset_metadata,
43 }
44 dataset = client.create_dataset(
45 location_path,
46 dataset_dict,
47 retry or google.api_core.gapic_v1.method.DEFAULT,
48 timeout or google.api_core.gapic_v1.method.DEFAULT,
49 metadata,
50 )
51 print(dataset)
52 dataset_id = dataset.name.rsplit('/', 1)[-1]
53 return (dataset.name, dataset.create_time, dataset_id)
54
55
56 if __name__ == '__main__':
57 import kfp
58 kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')
59
[end of components/gcp/automl/create_dataset_for_tables/component.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py
--- a/components/gcp/automl/create_dataset_for_tables/component.py
+++ b/components/gcp/automl/create_dataset_for_tables/component.py
@@ -24,13 +24,9 @@
retry=None, #=google.api_core.gapic_v1.method.DEFAULT,
timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,
metadata: dict = None,
-) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):
+) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):
'''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables
'''
- import sys
- import subprocess
- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
-
import google
from google.cloud import automl
client = automl.AutoMlClient()
@@ -50,9 +46,19 @@
)
print(dataset)
dataset_id = dataset.name.rsplit('/', 1)[-1]
- return (dataset.name, dataset.create_time, dataset_id)
+ dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(
+ project_id=gcp_project_id,
+ region=gcp_region,
+ dataset_id=dataset_id,
+ )
+ return (dataset.name, dataset.create_time, dataset_id, dataset_url)
if __name__ == '__main__':
import kfp
- kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')
+ kfp.components.func_to_container_op(
+ automl_create_dataset_for_tables,
+ output_component_file='component.yaml',
+ base_image='python:3.7',
+ packages_to_install=['google-cloud-automl==0.4.0']
+ )
diff --git a/components/gcp/automl/create_model_for_tables/component.py b/components/gcp/automl/create_model_for_tables/component.py
--- a/components/gcp/automl/create_model_for_tables/component.py
+++ b/components/gcp/automl/create_model_for_tables/component.py
@@ -24,11 +24,7 @@
input_feature_column_paths: list = None,
optimization_objective: str = 'MAXIMIZE_AU_PRC',
train_budget_milli_node_hours: int = 1000,
-) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):
- import sys
- import subprocess
- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
-
+) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str), ('model_page_url', 'URI'),]):
from google.cloud import automl
client = automl.AutoMlClient()
@@ -50,9 +46,21 @@
print(result)
model_name = result.name
model_id = model_name.rsplit('/', 1)[-1]
- return (model_name, model_id)
+ model_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id};modelId={model_id};task=basic/train?project={project_id}'.format(
+ project_id=gcp_project_id,
+ region=gcp_region,
+ dataset_id=dataset_id,
+ model_id=model_id,
+ )
+
+ return (model_name, model_id, model_url)
if __name__ == '__main__':
import kfp
- kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')
+ kfp.components.func_to_container_op(
+ automl_create_model_for_tables,
+ output_component_file='component.yaml',
+ base_image='python:3.7',
+ packages_to_install=['google-cloud-automl==0.4.0']
+ )
| {"golden_diff": "diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py\n--- a/components/gcp/automl/create_dataset_for_tables/component.py\n+++ b/components/gcp/automl/create_dataset_for_tables/component.py\n@@ -24,13 +24,9 @@\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n-) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):\n+) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n- import sys\n- import subprocess\n- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n-\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n@@ -50,9 +46,19 @@\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n- return (dataset.name, dataset.create_time, dataset_id)\n+ dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(\n+ project_id=gcp_project_id,\n+ region=gcp_region,\n+ dataset_id=dataset_id,\n+ )\n+ return (dataset.name, dataset.create_time, dataset_id, dataset_url)\n \n \n if __name__ == '__main__':\n import kfp\n- kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n+ kfp.components.func_to_container_op(\n+ automl_create_dataset_for_tables,\n+ output_component_file='component.yaml',\n+ base_image='python:3.7',\n+ packages_to_install=['google-cloud-automl==0.4.0']\n+ )\ndiff --git a/components/gcp/automl/create_model_for_tables/component.py b/components/gcp/automl/create_model_for_tables/component.py\n--- a/components/gcp/automl/create_model_for_tables/component.py\n+++ b/components/gcp/automl/create_model_for_tables/component.py\n@@ -24,11 +24,7 @@\n input_feature_column_paths: list = None,\n optimization_objective: str = 'MAXIMIZE_AU_PRC',\n train_budget_milli_node_hours: int = 1000,\n-) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):\n- import sys\n- import subprocess\n- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n-\n+) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str), ('model_page_url', 'URI'),]):\n from google.cloud import automl\n client = automl.AutoMlClient()\n \n@@ -50,9 +46,21 @@\n print(result)\n model_name = result.name\n model_id = model_name.rsplit('/', 1)[-1]\n- return (model_name, model_id)\n+ model_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id};modelId={model_id};task=basic/train?project={project_id}'.format(\n+ project_id=gcp_project_id,\n+ region=gcp_region,\n+ dataset_id=dataset_id,\n+ model_id=model_id,\n+ )\n+\n+ return (model_name, model_id, model_url)\n \n \n if __name__ == '__main__':\n import kfp\n- kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n+ kfp.components.func_to_container_op(\n+ automl_create_model_for_tables,\n+ 
output_component_file='component.yaml',\n+ base_image='python:3.7',\n+ packages_to_install=['google-cloud-automl==0.4.0']\n+ )\n", "issue": "[Component] AutoML Tables component should show link as an artifact\n/cc @jessiezcc \r\n/cc @jingzhang36 \r\n/assign @Ark-kun \r\n\r\nIt will be helpful if components in \r\nhttps://github.com/kubeflow/pipelines/tree/b89aabbce5d48fca10817c3ed3ecc2acf6c0066a/components/gcp/automl can show related AutoML tables url as markdown artifacts.\r\n\r\ne.g.\r\n> We would like to be able to click on a link that would take us from the component\u2019s page to an AutoML Tables models page\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_model_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n dataset_id: str,\n target_column_path: str = None,\n input_feature_column_paths: list = None,\n optimization_objective: str = 'MAXIMIZE_AU_PRC',\n train_budget_milli_node_hours: int = 1000,\n) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):\n import sys\n import subprocess\n subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n model_dict = {\n 'display_name': display_name,\n 'dataset_id': dataset_id,\n 'tables_model_metadata': {\n 'target_column_spec': automl.types.ColumnSpec(name=target_column_path),\n 'input_feature_column_specs': [automl.types.ColumnSpec(name=path) for path in input_feature_column_paths] if input_feature_column_paths else None,\n 'optimization_objective': optimization_objective,\n 'train_budget_milli_node_hours': train_budget_milli_node_hours,\n },\n }\n\n create_model_response = client.create_model(location_path, model_dict)\n print('Create model operation: {}'.format(create_model_response.operation))\n result = create_model_response.result()\n print(result)\n model_name = result.name\n model_id = model_name.rsplit('/', 1)[-1]\n return (model_name, model_id)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n", "path": "components/gcp/automl/create_model_for_tables/component.py"}, {"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific 
language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_dataset_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n description: str = None,\n tables_dataset_metadata: dict = {},\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n import sys\n import subprocess\n subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n dataset_dict = {\n 'display_name': display_name,\n 'description': description,\n 'tables_dataset_metadata': tables_dataset_metadata,\n }\n dataset = client.create_dataset(\n location_path,\n dataset_dict,\n retry or google.api_core.gapic_v1.method.DEFAULT,\n timeout or google.api_core.gapic_v1.method.DEFAULT,\n metadata,\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n return (dataset.name, dataset.create_time, dataset_id)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n", "path": "components/gcp/automl/create_dataset_for_tables/component.py"}]} | 2,034 | 1,018 |
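The accepted kubeflow patch resolves the issue by formatting a Cloud Console deep link from the project, region, and resource ids and returning it as an extra `NamedTuple` field typed `'URI'`; whether the KFP frontend renders that output as a clickable artifact depends on the UI version, so treat that half as an assumption. A sketch of the URL-building half, with the template copied from the diff:

```python
def automl_dataset_url(project_id: str, region: str, dataset_id: str) -> str:
    # URL template copied from the accepted diff above; illustrative only.
    return (
        "https://console.cloud.google.com/automl-tables/locations/"
        f"{region}/datasets/{dataset_id}/schemav2?project={project_id}"
    )


print(automl_dataset_url("my-project", "us-central1", "TBL123"))
```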
gh_patches_debug_12879 | rasdani/github-patches | git_diff | apluslms__a-plus-771 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HTML Plugin admin interface does not show relevant information
Sometimes we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.
**Current view**

**Proposed view**

HTML Plugin admin interface does not show relevant information
Sometimes we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.
**Current view**

**Proposed view**

</issue>
<code>
[start of apps/admin.py]
1 from django.contrib import admin
2
3 from .models import (
4 BaseTab,
5 HTMLTab,
6 ExternalEmbeddedTab,
7 ExternalIFrameTab,
8 BasePlugin,
9 RSSPlugin,
10 HTMLPlugin,
11 ExternalIFramePlugin,
12 )
13
14
15 admin.site.register(BaseTab)
16 admin.site.register(HTMLTab)
17 admin.site.register(ExternalEmbeddedTab)
18 admin.site.register(ExternalIFrameTab)
19 admin.site.register(BasePlugin)
20 admin.site.register(RSSPlugin)
21 admin.site.register(HTMLPlugin)
22 admin.site.register(ExternalIFramePlugin)
23
[end of apps/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/admin.py b/apps/admin.py
--- a/apps/admin.py
+++ b/apps/admin.py
@@ -11,6 +11,12 @@
ExternalIFramePlugin,
)
+class HTMLPluginAdmin(admin.ModelAdmin):
+ list_display_links = ["title"]
+ list_display = ["title", "course_instance_id", "container_type", "views"]
+
+ def course_instance_id(self, obj):
+ return obj.container_pk
admin.site.register(BaseTab)
admin.site.register(HTMLTab)
@@ -18,5 +24,5 @@
admin.site.register(ExternalIFrameTab)
admin.site.register(BasePlugin)
admin.site.register(RSSPlugin)
-admin.site.register(HTMLPlugin)
+admin.site.register(HTMLPlugin, HTMLPluginAdmin)
admin.site.register(ExternalIFramePlugin)
| {"golden_diff": "diff --git a/apps/admin.py b/apps/admin.py\n--- a/apps/admin.py\n+++ b/apps/admin.py\n@@ -11,6 +11,12 @@\n ExternalIFramePlugin,\n )\n \n+class HTMLPluginAdmin(admin.ModelAdmin):\n+ list_display_links = [\"title\"]\n+ list_display = [\"title\", \"course_instance_id\", \"container_type\", \"views\"]\n+\n+ def course_instance_id(self, obj):\n+ return obj.container_pk\n \n admin.site.register(BaseTab)\n admin.site.register(HTMLTab)\n@@ -18,5 +24,5 @@\n admin.site.register(ExternalIFrameTab)\n admin.site.register(BasePlugin)\n admin.site.register(RSSPlugin)\n-admin.site.register(HTMLPlugin)\n+admin.site.register(HTMLPlugin, HTMLPluginAdmin)\n admin.site.register(ExternalIFramePlugin)\n", "issue": "HTML Plugin admin interface does not show relevant information\nSome times we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.\r\n\r\n**Current view**\r\n\r\n\r\n**Proposed view**\r\n\r\n\nHTML Plugin admin interface does not show relevant information\nSome times we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.\r\n\r\n**Current view**\r\n\r\n\r\n**Proposed view**\r\n\r\n\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom .models import (\n BaseTab,\n HTMLTab,\n ExternalEmbeddedTab,\n ExternalIFrameTab,\n BasePlugin,\n RSSPlugin,\n HTMLPlugin,\n ExternalIFramePlugin,\n)\n\n\nadmin.site.register(BaseTab)\nadmin.site.register(HTMLTab)\nadmin.site.register(ExternalEmbeddedTab)\nadmin.site.register(ExternalIFrameTab)\nadmin.site.register(BasePlugin)\nadmin.site.register(RSSPlugin)\nadmin.site.register(HTMLPlugin)\nadmin.site.register(ExternalIFramePlugin)\n", "path": "apps/admin.py"}]} | 1,037 | 176 |
gh_patches_debug_21253 | rasdani/github-patches | git_diff | streamlit__streamlit-7061 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Some labels in Altair charts are hard to see in dark mode
### Summary
Streamlit has an awesome feature where it changes the label colors of Altair charts when you switch to dark mode. Sweet!
However, it seems that some labels were omitted and thus remain almost illegibly dark in dark mode.
### Steps to reproduce
Run this code snippet [taken from the Altair documentation](https://altair-viz.github.io/gallery/grouped_bar_chart.html):
```python
import altair as alt
import streamlit as st
from vega_datasets import data
st.subheader("barley example")
source = data.barley()
st.write(source)
st.write(
alt.Chart(source)
.mark_bar()
.encode(x="year:O", y="sum(yield):Q", color="year:N", column="site:N")
)
```
### Expected vs actual behavior
In light mode it displays properly:

but in dark mode some of the labels have remained black and are almost impossible to read:

**Note:** I have marked the errors in red.
### Is this a regression?
Not sure.
### Debug info
- Streamlit version: `Streamlit, version 0.82.0`
- Python version: `Python 3.8.5`
- PipEnv: `pipenv, version 2020.11.15`
- OS version: `Ubuntu 20.04.2 LTS`
- Browser version: `Version 91.0.4472.77 (Official Build) (x86_64)`
</issue>
<code>
[start of e2e/scripts/st_arrow_altair_chart.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import altair as alt
16 import numpy as np
17 import pandas as pd
18
19 import streamlit as st
20
21 np.random.seed(0)
22
23 data = np.random.randn(200, 3)
24 df = pd.DataFrame(data, columns=["a", "b", "c"])
25 chart = alt.Chart(df).mark_circle().encode(x="a", y="b", size="c", color="c")
26 st._arrow_altair_chart(chart, theme=None)
27
28 st.write("Show default vega lite theme:")
29 st._arrow_altair_chart(chart, theme=None)
30
31 st.write("Show streamlit theme:")
32 st._arrow_altair_chart(chart, theme="streamlit")
33
34 st.write("Overwrite theme config:")
35 chart = (
36 alt.Chart(df, usermeta={"embedOptions": {"theme": None}})
37 .mark_circle()
38 .encode(x="a", y="b", size="c", color="c")
39 )
40 st._arrow_altair_chart(chart, theme="streamlit")
41
42 data = pd.DataFrame(
43 {
44 "a": ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
45 "b": [28, 55, 43, 91, 81, 53, 19, 87, 52],
46 }
47 )
48
49 chart = alt.Chart(data).mark_bar().encode(x="a", y="b")
50
51 st.write("Bar chart with default theme:")
52 st._arrow_altair_chart(chart)
53
54 st.write("Bar chart with streamlit theme:")
55 st._arrow_altair_chart(chart, theme="streamlit")
56
57 st.write("Bar chart with overwritten theme props:")
58 st._arrow_altair_chart(chart.configure_mark(color="black"), theme="streamlit")
59
60 # mark_arc was added in 4.2, but we have to support altair 4.0-4.1, so we
61 # have to skip this part of the test when testing min versions.
62 major, minor, patch = alt.__version__.split(".")
63 if not (major == "4" and minor < "2"):
64
65 source = pd.DataFrame(
66 {"category": [1, 2, 3, 4, 5, 6], "value": [4, 6, 10, 3, 7, 8]}
67 )
68
69 chart = (
70 alt.Chart(source)
71 .mark_arc(innerRadius=50)
72 .encode(
73 theta=alt.Theta(field="value", type="quantitative"),
74 color=alt.Color(field="category", type="nominal"),
75 )
76 )
77
78 st.write("Pie Chart with more than 4 Legend items")
79 st._arrow_altair_chart(chart, theme="streamlit")
80
[end of e2e/scripts/st_arrow_altair_chart.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/e2e/scripts/st_arrow_altair_chart.py b/e2e/scripts/st_arrow_altair_chart.py
--- a/e2e/scripts/st_arrow_altair_chart.py
+++ b/e2e/scripts/st_arrow_altair_chart.py
@@ -48,12 +48,6 @@
chart = alt.Chart(data).mark_bar().encode(x="a", y="b")
-st.write("Bar chart with default theme:")
-st._arrow_altair_chart(chart)
-
-st.write("Bar chart with streamlit theme:")
-st._arrow_altair_chart(chart, theme="streamlit")
-
st.write("Bar chart with overwritten theme props:")
st._arrow_altair_chart(chart.configure_mark(color="black"), theme="streamlit")
@@ -77,3 +71,20 @@
st.write("Pie Chart with more than 4 Legend items")
st._arrow_altair_chart(chart, theme="streamlit")
+
+# taken from vega_datasets barley example
+barley = alt.UrlData(
+ "https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json"
+)
+
+barley_chart = (
+ alt.Chart(barley)
+ .mark_bar()
+ .encode(x="year:O", y="sum(yield):Q", color="year:N", column="site:N")
+)
+
+st.write("Grouped Bar Chart with default theme:")
+st.altair_chart(barley_chart, theme=None)
+
+st.write("Grouped Bar Chart with streamlit theme:")
+st.altair_chart(barley_chart, theme="streamlit")
| {"golden_diff": "diff --git a/e2e/scripts/st_arrow_altair_chart.py b/e2e/scripts/st_arrow_altair_chart.py\n--- a/e2e/scripts/st_arrow_altair_chart.py\n+++ b/e2e/scripts/st_arrow_altair_chart.py\n@@ -48,12 +48,6 @@\n \n chart = alt.Chart(data).mark_bar().encode(x=\"a\", y=\"b\")\n \n-st.write(\"Bar chart with default theme:\")\n-st._arrow_altair_chart(chart)\n-\n-st.write(\"Bar chart with streamlit theme:\")\n-st._arrow_altair_chart(chart, theme=\"streamlit\")\n-\n st.write(\"Bar chart with overwritten theme props:\")\n st._arrow_altair_chart(chart.configure_mark(color=\"black\"), theme=\"streamlit\")\n \n@@ -77,3 +71,20 @@\n \n st.write(\"Pie Chart with more than 4 Legend items\")\n st._arrow_altair_chart(chart, theme=\"streamlit\")\n+\n+# taken from vega_datasets barley example\n+barley = alt.UrlData(\n+ \"https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json\"\n+)\n+\n+barley_chart = (\n+ alt.Chart(barley)\n+ .mark_bar()\n+ .encode(x=\"year:O\", y=\"sum(yield):Q\", color=\"year:N\", column=\"site:N\")\n+)\n+\n+st.write(\"Grouped Bar Chart with default theme:\")\n+st.altair_chart(barley_chart, theme=None)\n+\n+st.write(\"Grouped Bar Chart with streamlit theme:\")\n+st.altair_chart(barley_chart, theme=\"streamlit\")\n", "issue": "Some labels in Altair charts are hard to see in dark mode\n### Summary\r\n\r\nStreamlit has an awesome feature where it changes the label colors of Altair charts when you switch to dark mode. Sweet!\r\n\r\nHowever, it seems that some labels were omitted and thus remain almost illegibly dark in dark mode.\r\n\r\n### Steps to reproduce\r\n\r\nRun this code snippet [taken from the Altair documentation](https://altair-viz.github.io/gallery/grouped_bar_chart.html):\r\n\r\n```python\r\nfrom vega_datasets import data\r\n\r\nst.subheader(\"barley example\")\r\nsource = data.barley()\r\nst.write(source)\r\nst.write(\r\n alt.Chart(source)\r\n .mark_bar()\r\n .encode(x=\"year:O\", y=\"sum(yield):Q\", color=\"year:N\", column=\"site:N\")\r\n)\r\n```\r\n\r\n### Expected vs actual behavior\r\n\r\nIn light mode it displays properly:\r\n\r\n\r\n\r\nbut in dark mode some of the labels have remained black and are almost impossible to read:\r\n\r\n\r\n\r\n**Note:** I have marked the errors in red.\r\n\r\n### Is this a regression?\r\n\r\nNot sure.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: `Streamlit, version 0.82.0`\r\n- Python version: `Python 3.8.5`\r\n- PipEnv: `pipenv, version 2020.11.15`\r\n- OS version: `Ubuntu 20.04.2 LTS`\r\n- Browser version: `Version 91.0.4472.77 (Official Build) (x86_64)`\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport altair as alt\nimport numpy as np\nimport pandas as pd\n\nimport streamlit as st\n\nnp.random.seed(0)\n\ndata = np.random.randn(200, 3)\ndf = pd.DataFrame(data, columns=[\"a\", \"b\", \"c\"])\nchart = alt.Chart(df).mark_circle().encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\nst._arrow_altair_chart(chart, theme=None)\n\nst.write(\"Show default vega lite theme:\")\nst._arrow_altair_chart(chart, theme=None)\n\nst.write(\"Show streamlit theme:\")\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\nst.write(\"Overwrite theme config:\")\nchart = (\n alt.Chart(df, usermeta={\"embedOptions\": {\"theme\": None}})\n .mark_circle()\n .encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\n)\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\ndata = pd.DataFrame(\n {\n \"a\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\", \"I\"],\n \"b\": [28, 55, 43, 91, 81, 53, 19, 87, 52],\n }\n)\n\nchart = alt.Chart(data).mark_bar().encode(x=\"a\", y=\"b\")\n\nst.write(\"Bar chart with default theme:\")\nst._arrow_altair_chart(chart)\n\nst.write(\"Bar chart with streamlit theme:\")\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\nst.write(\"Bar chart with overwritten theme props:\")\nst._arrow_altair_chart(chart.configure_mark(color=\"black\"), theme=\"streamlit\")\n\n# mark_arc was added in 4.2, but we have to support altair 4.0-4.1, so we\n# have to skip this part of the test when testing min versions.\nmajor, minor, patch = alt.__version__.split(\".\")\nif not (major == \"4\" and minor < \"2\"):\n\n source = pd.DataFrame(\n {\"category\": [1, 2, 3, 4, 5, 6], \"value\": [4, 6, 10, 3, 7, 8]}\n )\n\n chart = (\n alt.Chart(source)\n .mark_arc(innerRadius=50)\n .encode(\n theta=alt.Theta(field=\"value\", type=\"quantitative\"),\n color=alt.Color(field=\"category\", type=\"nominal\"),\n )\n )\n\n st.write(\"Pie Chart with more than 4 Legend items\")\n st._arrow_altair_chart(chart, theme=\"streamlit\")\n", "path": "e2e/scripts/st_arrow_altair_chart.py"}]} | 1,888 | 348 |
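The streamlit regression test added above sidesteps a hard dependency on `vega_datasets` by pointing Altair at the hosted JSON: `alt.UrlData` defers data loading to the Vega-Lite runtime in the browser, so the Python process never downloads the file. A standalone sketch outside Streamlit, with the dataset URL copied from the diff:

```python
import altair as alt

barley = alt.UrlData(
    "https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json"
)

chart = (
    alt.Chart(barley)
    .mark_bar()
    .encode(x="year:O", y="sum(yield):Q", color="year:N", column="site:N")
)

chart.save("barley.html")  # inside an app: st.altair_chart(chart, theme="streamlit")
```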
gh_patches_debug_16987 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1233 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The export-schema command fails when trying to import local modules
</issue>
<code>
[start of strawberry/cli/commands/export_schema.py]
1 import click
2
3 from strawberry import Schema
4 from strawberry.printer import print_schema
5 from strawberry.utils.importer import import_module_symbol
6
7
8 @click.command(short_help="Exports the schema")
9 @click.argument("schema", type=str)
10 def export_schema(schema: str):
11 try:
12 schema_symbol = import_module_symbol(schema, default_symbol_name="schema")
13 except (ImportError, AttributeError) as exc:
14 message = str(exc)
15 raise click.BadArgumentUsage(message)
16 if not isinstance(schema_symbol, Schema):
17 message = "The `schema` must be an instance of strawberry.Schema"
18 raise click.BadArgumentUsage(message)
19 print(print_schema(schema_symbol))
20
[end of strawberry/cli/commands/export_schema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/cli/commands/export_schema.py b/strawberry/cli/commands/export_schema.py
--- a/strawberry/cli/commands/export_schema.py
+++ b/strawberry/cli/commands/export_schema.py
@@ -1,3 +1,5 @@
+import sys
+
import click
from strawberry import Schema
@@ -7,7 +9,20 @@
@click.command(short_help="Exports the schema")
@click.argument("schema", type=str)
-def export_schema(schema: str):
[email protected](
+ "--app-dir",
+ default=".",
+ type=str,
+ show_default=True,
+ help=(
+ "Look for the module in the specified directory, by adding this to the "
+ "PYTHONPATH. Defaults to the current working directory. "
+ "Works the same as `--app-dir` in uvicorn."
+ ),
+)
+def export_schema(schema: str, app_dir):
+ sys.path.insert(0, app_dir)
+
try:
schema_symbol = import_module_symbol(schema, default_symbol_name="schema")
except (ImportError, AttributeError) as exc:
| {"golden_diff": "diff --git a/strawberry/cli/commands/export_schema.py b/strawberry/cli/commands/export_schema.py\n--- a/strawberry/cli/commands/export_schema.py\n+++ b/strawberry/cli/commands/export_schema.py\n@@ -1,3 +1,5 @@\n+import sys\n+\n import click\n \n from strawberry import Schema\n@@ -7,7 +9,20 @@\n \n @click.command(short_help=\"Exports the schema\")\n @click.argument(\"schema\", type=str)\n-def export_schema(schema: str):\[email protected](\n+ \"--app-dir\",\n+ default=\".\",\n+ type=str,\n+ show_default=True,\n+ help=(\n+ \"Look for the module in the specified directory, by adding this to the \"\n+ \"PYTHONPATH. Defaults to the current working directory. \"\n+ \"Works the same as `--app-dir` in uvicorn.\"\n+ ),\n+)\n+def export_schema(schema: str, app_dir):\n+ sys.path.insert(0, app_dir)\n+\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n", "issue": "The export-schema command fails when trying to import local modules\n\n", "before_files": [{"content": "import click\n\nfrom strawberry import Schema\nfrom strawberry.printer import print_schema\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](short_help=\"Exports the schema\")\[email protected](\"schema\", type=str)\ndef export_schema(schema: str):\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n print(print_schema(schema_symbol))\n", "path": "strawberry/cli/commands/export_schema.py"}]} | 721 | 250 |
gh_patches_debug_12871 | rasdani/github-patches | git_diff | archlinux__archinstall-2069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
locale language only en_US
archlinux-2023.09.01-x86_64.iso
</issue>
<code>
[start of archinstall/lib/locale/locale.py]
1 from typing import Iterator, List
2
3 from ..exceptions import ServiceException, SysCallError
4 from ..general import SysCommand
5 from ..output import error
6
7
8 def list_keyboard_languages() -> Iterator[str]:
9 for line in SysCommand("localectl --no-pager list-keymaps", environment_vars={'SYSTEMD_COLORS': '0'}):
10 yield line.decode('UTF-8').strip()
11
12
13 def list_locales() -> List[str]:
14 with open('/etc/locale.gen', 'r') as fp:
15 locales = []
16 # before the list of locales begins there's an empty line with a '#' in front
17 # so we'll collect the localels from bottom up and halt when we're donw
18 entries = fp.readlines()
19 entries.reverse()
20
21 for entry in entries:
22 text = entry.replace('#', '').strip()
23 if text == '':
24 break
25 locales.append(text)
26
27 locales.reverse()
28 return locales
29
30
31 def list_x11_keyboard_languages() -> Iterator[str]:
32 for line in SysCommand("localectl --no-pager list-x11-keymap-layouts", environment_vars={'SYSTEMD_COLORS': '0'}):
33 yield line.decode('UTF-8').strip()
34
35
36 def verify_keyboard_layout(layout :str) -> bool:
37 for language in list_keyboard_languages():
38 if layout.lower() == language.lower():
39 return True
40 return False
41
42
43 def verify_x11_keyboard_layout(layout :str) -> bool:
44 for language in list_x11_keyboard_languages():
45 if layout.lower() == language.lower():
46 return True
47 return False
48
49
50 def set_kb_layout(locale :str) -> bool:
51 if len(locale.strip()):
52 if not verify_keyboard_layout(locale):
53 error(f"Invalid keyboard locale specified: {locale}")
54 return False
55
56 try:
57 SysCommand(f'localectl set-keymap {locale}')
58 except SysCallError as err:
59 raise ServiceException(f"Unable to set locale '{locale}' for console: {err}")
60
61 return True
62
63 return False
64
65
66 def list_timezones() -> Iterator[str]:
67 for line in SysCommand("timedatectl --no-pager list-timezones", environment_vars={'SYSTEMD_COLORS': '0'}):
68 yield line.decode('UTF-8').strip()
69
[end of archinstall/lib/locale/locale.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/archinstall/lib/locale/locale.py b/archinstall/lib/locale/locale.py
--- a/archinstall/lib/locale/locale.py
+++ b/archinstall/lib/locale/locale.py
@@ -11,21 +11,14 @@
def list_locales() -> List[str]:
- with open('/etc/locale.gen', 'r') as fp:
- locales = []
- # before the list of locales begins there's an empty line with a '#' in front
- # so we'll collect the localels from bottom up and halt when we're donw
- entries = fp.readlines()
- entries.reverse()
-
- for entry in entries:
- text = entry.replace('#', '').strip()
- if text == '':
- break
- locales.append(text)
-
- locales.reverse()
- return locales
+ locales = []
+
+ with open('/usr/share/i18n/SUPPORTED') as file:
+ for line in file:
+ if line != 'C.UTF-8 UTF-8\n':
+ locales.append(line.rstrip())
+
+ return locales
def list_x11_keyboard_languages() -> Iterator[str]:
| {"golden_diff": "diff --git a/archinstall/lib/locale/locale.py b/archinstall/lib/locale/locale.py\n--- a/archinstall/lib/locale/locale.py\n+++ b/archinstall/lib/locale/locale.py\n@@ -11,21 +11,14 @@\n \n \n def list_locales() -> List[str]:\n-\twith open('/etc/locale.gen', 'r') as fp:\n-\t\tlocales = []\n-\t\t# before the list of locales begins there's an empty line with a '#' in front\n-\t\t# so we'll collect the localels from bottom up and halt when we're donw\n-\t\tentries = fp.readlines()\n-\t\tentries.reverse()\n-\n-\t\tfor entry in entries:\n-\t\t\ttext = entry.replace('#', '').strip()\n-\t\t\tif text == '':\n-\t\t\t\tbreak\n-\t\t\tlocales.append(text)\n-\n-\t\tlocales.reverse()\n-\t\treturn locales\n+\tlocales = []\n+\n+\twith open('/usr/share/i18n/SUPPORTED') as file:\n+\t\tfor line in file:\n+\t\t\tif line != 'C.UTF-8 UTF-8\\n':\n+\t\t\t\tlocales.append(line.rstrip())\n+\n+\treturn locales\n \n \n def list_x11_keyboard_languages() -> Iterator[str]:\n", "issue": "locale language only en_US\narchlinux-2023.09.01-x86_64.iso\n", "before_files": [{"content": "from typing import Iterator, List\n\nfrom ..exceptions import ServiceException, SysCallError\nfrom ..general import SysCommand\nfrom ..output import error\n\n\ndef list_keyboard_languages() -> Iterator[str]:\n\tfor line in SysCommand(\"localectl --no-pager list-keymaps\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n\n\ndef list_locales() -> List[str]:\n\twith open('/etc/locale.gen', 'r') as fp:\n\t\tlocales = []\n\t\t# before the list of locales begins there's an empty line with a '#' in front\n\t\t# so we'll collect the localels from bottom up and halt when we're donw\n\t\tentries = fp.readlines()\n\t\tentries.reverse()\n\n\t\tfor entry in entries:\n\t\t\ttext = entry.replace('#', '').strip()\n\t\t\tif text == '':\n\t\t\t\tbreak\n\t\t\tlocales.append(text)\n\n\t\tlocales.reverse()\n\t\treturn locales\n\n\ndef list_x11_keyboard_languages() -> Iterator[str]:\n\tfor line in SysCommand(\"localectl --no-pager list-x11-keymap-layouts\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n\n\ndef verify_keyboard_layout(layout :str) -> bool:\n\tfor language in list_keyboard_languages():\n\t\tif layout.lower() == language.lower():\n\t\t\treturn True\n\treturn False\n\n\ndef verify_x11_keyboard_layout(layout :str) -> bool:\n\tfor language in list_x11_keyboard_languages():\n\t\tif layout.lower() == language.lower():\n\t\t\treturn True\n\treturn False\n\n\ndef set_kb_layout(locale :str) -> bool:\n\tif len(locale.strip()):\n\t\tif not verify_keyboard_layout(locale):\n\t\t\terror(f\"Invalid keyboard locale specified: {locale}\")\n\t\t\treturn False\n\n\t\ttry:\n\t\t\tSysCommand(f'localectl set-keymap {locale}')\n\t\texcept SysCallError as err:\n\t\t\traise ServiceException(f\"Unable to set locale '{locale}' for console: {err}\")\n\n\t\treturn True\n\n\treturn False\n\n\ndef list_timezones() -> Iterator[str]:\n\tfor line in SysCommand(\"timedatectl --no-pager list-timezones\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n", "path": "archinstall/lib/locale/locale.py"}]} | 1,194 | 256 |
gh_patches_debug_37742 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-196 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
if/elif statement without else clause in `FullModelShareableGenerator`
It would be helpful to add an else statement with a warning message that this DataKind is not supported. I ran into this issue when sending a DataKind.COLLECTION with the shareable by mistake.
See https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L61
In the same class, when sending a DXO instead of Shareable type, I got this error
```
Traceback (most recent call last):
File "/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/workflows/scatter_and_gather.py", line 202, in control_flow
self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)
File "/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py", line 54, in shareable_to_learnable
dxo = from_shareable(shareable)
File "/home/hroth/Code/nvflare/hroth-agglib/nvflare/apis/dxo.py", line 120, in from_shareable
content_type = s.get_header(ReservedHeaderKey.CONTENT_TYPE)
AttributeError: 'DXO' object has no attribute 'get_header'
```
There should be an instance check here https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L54
</issue>
<code>
[start of nvflare/app_common/shareablegenerators/full_model_shareable_generator.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from nvflare.apis.dxo import DataKind, from_shareable
16 from nvflare.apis.fl_context import FLContext
17 from nvflare.apis.shareable import Shareable
18 from nvflare.app_common.abstract.model import ModelLearnable, ModelLearnableKey, model_learnable_to_dxo
19 from nvflare.app_common.abstract.shareable_generator import ShareableGenerator
20 from nvflare.app_common.app_constant import AppConstants
21
22
23 class FullModelShareableGenerator(ShareableGenerator):
24 def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:
25 """Convert Learnable to Shareable.
26
27 Args:
28 model (Learnable): model to be converted
29 fl_ctx (FLContext): FL context
30
31 Returns:
32 Shareable: a shareable containing a DXO object,
33 """
34 dxo = model_learnable_to_dxo(ml)
35 return dxo.to_shareable()
36
37 def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:
38 """Convert Shareable to Learnable.
39
40 Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS
41
42 Args:
43 shareable (Shareable): Shareable that contains a DXO object
44 fl_ctx (FLContext): FL context
45
46 Returns: a ModelLearnable object
47 """
48 base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)
49 if not base_model:
50 self.system_panic(reason="No global base model!", fl_ctx=fl_ctx)
51 return base_model
52
53 weights = base_model[ModelLearnableKey.WEIGHTS]
54 dxo = from_shareable(shareable)
55
56 if dxo.data_kind == DataKind.WEIGHT_DIFF:
57 if dxo.data is not None:
58 model_diff = dxo.data
59 for v_name, v_value in model_diff.items():
60 weights[v_name] = weights[v_name] + v_value
61 elif dxo.data_kind == DataKind.WEIGHTS:
62 weights = dxo.data
63 if not weights:
64 self.log_info(fl_ctx, "No model weights found. Model will not be updated.")
65 else:
66 base_model[ModelLearnableKey.WEIGHTS] = weights
67
68 base_model[ModelLearnableKey.META] = dxo.get_meta_props()
69 return base_model
70
[end of nvflare/app_common/shareablegenerators/full_model_shareable_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
--- a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
+++ b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
@@ -21,21 +21,21 @@
class FullModelShareableGenerator(ShareableGenerator):
- def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:
- """Convert Learnable to Shareable.
+ def learnable_to_shareable(self, model_learnable: ModelLearnable, fl_ctx: FLContext) -> Shareable:
+ """Convert ModelLearnable to Shareable.
Args:
- model (Learnable): model to be converted
+ model_learnable (ModelLearnable): model to be converted
fl_ctx (FLContext): FL context
Returns:
- Shareable: a shareable containing a DXO object,
+ Shareable: a shareable containing a DXO object.
"""
- dxo = model_learnable_to_dxo(ml)
+ dxo = model_learnable_to_dxo(model_learnable)
return dxo.to_shareable()
def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:
- """Convert Shareable to Learnable.
+ """Convert Shareable to ModelLearnable.
Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS
@@ -43,8 +43,16 @@
shareable (Shareable): Shareable that contains a DXO object
fl_ctx (FLContext): FL context
- Returns: a ModelLearnable object
+ Returns:
+ A ModelLearnable object
+
+ Raises:
+ TypeError: if shareable is not of type shareable
+ ValueError: if data_kind is not `DataKind.WEIGHTS` and is not `DataKind.WEIGHT_DIFF`
"""
+ if not isinstance(shareable, Shareable):
+ raise TypeError("shareable must be Shareable, but got {}.".format(type(shareable)))
+
base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)
if not base_model:
self.system_panic(reason="No global base model!", fl_ctx=fl_ctx)
@@ -64,6 +72,10 @@
self.log_info(fl_ctx, "No model weights found. Model will not be updated.")
else:
base_model[ModelLearnableKey.WEIGHTS] = weights
+ else:
+ raise ValueError(
+ "data_kind should be either DataKind.WEIGHTS or DataKind.WEIGHT_DIFF, but got {}".format(dxo.data_kind)
+ )
base_model[ModelLearnableKey.META] = dxo.get_meta_props()
return base_model
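Both complaints in the NVFlare issue reduce to failing loudly at the boundary, which is what the patch above does: an `isinstance` check before `from_shareable`, and a raising `else` closing the `if/elif` ladder. The shape, distilled with plain dicts standing in for Shareable/DXO:

```python
def apply_update(kind: str, base: dict, update: dict) -> dict:
    if not isinstance(update, dict):  # mirrors the Shareable type check
        raise TypeError(f"update must be dict, but got {type(update)}.")
    if kind == "WEIGHT_DIFF":
        for name, delta in update.items():
            base[name] = base[name] + delta
    elif kind == "WEIGHTS":
        base = dict(update)
    else:  # the previously missing branch: unsupported kinds now fail loudly
        raise ValueError(f"data_kind should be WEIGHTS or WEIGHT_DIFF, got {kind}")
    return base
```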
| {"golden_diff": "diff --git a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n--- a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n+++ b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n@@ -21,21 +21,21 @@\n \n \n class FullModelShareableGenerator(ShareableGenerator):\n- def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n- \"\"\"Convert Learnable to Shareable.\n+ def learnable_to_shareable(self, model_learnable: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n+ \"\"\"Convert ModelLearnable to Shareable.\n \n Args:\n- model (Learnable): model to be converted\n+ model_learnable (ModelLearnable): model to be converted\n fl_ctx (FLContext): FL context\n \n Returns:\n- Shareable: a shareable containing a DXO object,\n+ Shareable: a shareable containing a DXO object.\n \"\"\"\n- dxo = model_learnable_to_dxo(ml)\n+ dxo = model_learnable_to_dxo(model_learnable)\n return dxo.to_shareable()\n \n def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:\n- \"\"\"Convert Shareable to Learnable.\n+ \"\"\"Convert Shareable to ModelLearnable.\n \n Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS\n \n@@ -43,8 +43,16 @@\n shareable (Shareable): Shareable that contains a DXO object\n fl_ctx (FLContext): FL context\n \n- Returns: a ModelLearnable object\n+ Returns:\n+ A ModelLearnable object\n+\n+ Raises:\n+ TypeError: if shareable is not of type shareable\n+ ValueError: if data_kind is not `DataKind.WEIGHTS` and is not `DataKind.WEIGHT_DIFF`\n \"\"\"\n+ if not isinstance(shareable, Shareable):\n+ raise TypeError(\"shareable must be Shareable, but got {}.\".format(type(shareable)))\n+\n base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)\n if not base_model:\n self.system_panic(reason=\"No global base model!\", fl_ctx=fl_ctx)\n@@ -64,6 +72,10 @@\n self.log_info(fl_ctx, \"No model weights found. Model will not be updated.\")\n else:\n base_model[ModelLearnableKey.WEIGHTS] = weights\n+ else:\n+ raise ValueError(\n+ \"data_kind should be either DataKind.WEIGHTS or DataKind.WEIGHT_DIFF, but got {}\".format(dxo.data_kind)\n+ )\n \n base_model[ModelLearnableKey.META] = dxo.get_meta_props()\n return base_model\n", "issue": "if/elif statement without else clause in `FullModelShareableGenerator`\nIt would be helpful to add an else statement with a warning message that this DataKind is not supported. 
I ran into this issue when sending a DataKind.COLLECTION with the shareable by mistake.\r\n\r\nSee https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L61\r\n\r\nIn the same class, when sending a DXO instead of Shareable type, I got this error\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/workflows/scatter_and_gather.py\", line 202, in control_flow\r\n self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)\r\n File \"/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\", line 54, in shareable_to_learnable\r\n dxo = from_shareable(shareable)\r\n File \"/home/hroth/Code/nvflare/hroth-agglib/nvflare/apis/dxo.py\", line 120, in from_shareable\r\n content_type = s.get_header(ReservedHeaderKey.CONTENT_TYPE)\r\nAttributeError: 'DXO' object has no attribute 'get_header'\r\n```\r\nThere should be an instance check here https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L54\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom nvflare.apis.dxo import DataKind, from_shareable\nfrom nvflare.apis.fl_context import FLContext\nfrom nvflare.apis.shareable import Shareable\nfrom nvflare.app_common.abstract.model import ModelLearnable, ModelLearnableKey, model_learnable_to_dxo\nfrom nvflare.app_common.abstract.shareable_generator import ShareableGenerator\nfrom nvflare.app_common.app_constant import AppConstants\n\n\nclass FullModelShareableGenerator(ShareableGenerator):\n def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n \"\"\"Convert Learnable to Shareable.\n\n Args:\n model (Learnable): model to be converted\n fl_ctx (FLContext): FL context\n\n Returns:\n Shareable: a shareable containing a DXO object,\n \"\"\"\n dxo = model_learnable_to_dxo(ml)\n return dxo.to_shareable()\n\n def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:\n \"\"\"Convert Shareable to Learnable.\n\n Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS\n\n Args:\n shareable (Shareable): Shareable that contains a DXO object\n fl_ctx (FLContext): FL context\n\n Returns: a ModelLearnable object\n \"\"\"\n base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)\n if not base_model:\n self.system_panic(reason=\"No global base model!\", fl_ctx=fl_ctx)\n return base_model\n\n weights = base_model[ModelLearnableKey.WEIGHTS]\n dxo = from_shareable(shareable)\n\n if dxo.data_kind == DataKind.WEIGHT_DIFF:\n if dxo.data is not None:\n model_diff = dxo.data\n for v_name, v_value in model_diff.items():\n weights[v_name] = weights[v_name] + v_value\n elif dxo.data_kind == DataKind.WEIGHTS:\n weights = 
dxo.data\n if not weights:\n self.log_info(fl_ctx, \"No model weights found. Model will not be updated.\")\n else:\n base_model[ModelLearnableKey.WEIGHTS] = weights\n\n base_model[ModelLearnableKey.META] = dxo.get_meta_props()\n return base_model\n", "path": "nvflare/app_common/shareablegenerators/full_model_shareable_generator.py"}]} | 1,745 | 646 |
gh_patches_debug_4572 | rasdani/github-patches | git_diff | cltk__cltk-533 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
External punctuation stopped working on Latin sent tokenizer
Recently reviewing the tokenizer, and it is not capturing exclamation points. I'll look to see the NLTK has changed anything.
``` python
In [12]: text = """quam penitus maestas exedit cura medullas! ut tibi tunc toto
...: pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram
...: a parva virgine magnanimam. Mam. Aemilius ad castra venit."""
In [13]: tokenizer.tokenize_sentences(text)
Out[13]:
['quam penitus maestas exedit cura medullas! ut tibi tunc toto pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram a parva virgine magnanimam.',
'Mam. Aemilius ad castra venit.']
```
</issue>
<code>
[start of cltk/tokenize/sentence.py]
1 """Tokenize sentences."""
2
3 __author__ = 'Kyle P. Johnson <[email protected]>'
4 __license__ = 'MIT License. See LICENSE.'
5
6
7 from cltk.utils.file_operations import open_pickle
8 from nltk.tokenize.punkt import PunktLanguageVars
9 from nltk.tokenize.punkt import PunktSentenceTokenizer
10 import os
11
12
13 PUNCTUATION = {'greek':
14 {'external': ('.', ';'),
15 'internal': (',', '·'),
16 'file': 'greek.pickle', },
17 'latin':
18 {'external': ('.', '?', ':'),
19 'internal': (',', ';'),
20 'file': 'latin.pickle', }}
21
22
23 class TokenizeSentence(): # pylint: disable=R0903
24 """Tokenize sentences for the language given as argument, e.g.,
25 ``TokenizeSentence('greek')``.
26 """
27
28 def __init__(self: object, language: str):
29 """Lower incoming language name and assemble variables.
30 :type language: str
31 :param language : Language for sentence tokenization.
32 """
33 self.language = language.lower()
34 self.internal_punctuation, self.external_punctuation, self.tokenizer_path = \
35 self._setup_language_variables(self.language)
36
37 def _setup_language_variables(self, lang: str):
38 """Check for language availability and presence of tokenizer file,
39 then read punctuation characters for language and build tokenizer file
40 path.
41 :param lang: The language argument given to the class.
42 :type lang: str
43 :rtype (str, str, str)
44 """
45 assert lang in PUNCTUATION.keys(), \
46 'Sentence tokenizer not available for {0} language.'.format(lang)
47 internal_punctuation = PUNCTUATION[lang]['internal']
48 external_punctuation = PUNCTUATION[lang]['external']
49 file = PUNCTUATION[lang]['file']
50 rel_path = os.path.join('~/cltk_data',
51 lang,
52 'model/' + lang + '_models_cltk/tokenizers/sentence') # pylint: disable=C0301
53 path = os.path.expanduser(rel_path)
54 tokenizer_path = os.path.join(path, file)
55 assert os.path.isfile(tokenizer_path), \
56 'CLTK linguistics data not found for language {0}'.format(lang)
57 return internal_punctuation, external_punctuation, tokenizer_path
58
59 def _setup_tokenizer(self, tokenizer: object):
60 """Add tokenizer and punctuation variables.
61 :type tokenizer: object
62 :param tokenizer : Unpickled tokenizer object.
63 :rtype : object
64 """
65 language_punkt_vars = PunktLanguageVars
66 language_punkt_vars.sent_end_chars = self.external_punctuation
67 language_punkt_vars.internal_punctuation = self.internal_punctuation
68 tokenizer.INCLUDE_ALL_COLLOCS = True
69 tokenizer.INCLUDE_ABBREV_COLLOCS = True
70 params = tokenizer.get_params()
71 return PunktSentenceTokenizer(params)
72
73 def tokenize_sentences(self: object, untokenized_string: str):
74 """Tokenize sentences by reading trained tokenizer and invoking
75 ``PunktSentenceTokenizer()``.
76 :type untokenized_string: str
77 :param untokenized_string: A string containing one of more sentences.
78 :rtype : list of strings
79 """
80 # load tokenizer
81 assert isinstance(untokenized_string, str), \
82 'Incoming argument must be a string.'
83 tokenizer = open_pickle(self.tokenizer_path)
84 tokenizer = self._setup_tokenizer(tokenizer)
85
86 # mk list of tokenized sentences
87 tokenized_sentences = []
88 for sentence in tokenizer.sentences_from_text(untokenized_string, realign_boundaries=True): # pylint: disable=C0301
89 tokenized_sentences.append(sentence)
90 return tokenized_sentences
91
92 def tokenize(self: object, untokenized_string: str):
93 # NLTK's PlaintextCorpusReader needs a function called tokenize
94 # in functions used as a parameter for sentence tokenization.
95 # So this is an alias for tokenize_sentences().
96 return self.tokenize_sentences(untokenized_string)
97
[end of cltk/tokenize/sentence.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cltk/tokenize/sentence.py b/cltk/tokenize/sentence.py
--- a/cltk/tokenize/sentence.py
+++ b/cltk/tokenize/sentence.py
@@ -15,7 +15,7 @@
'internal': (',', '·'),
'file': 'greek.pickle', },
'latin':
- {'external': ('.', '?', ':'),
+ {'external': ('.', '?', '!', ':'),
'internal': (',', ';'),
'file': 'latin.pickle', }}
| {"golden_diff": "diff --git a/cltk/tokenize/sentence.py b/cltk/tokenize/sentence.py\n--- a/cltk/tokenize/sentence.py\n+++ b/cltk/tokenize/sentence.py\n@@ -15,7 +15,7 @@\n 'internal': (',', '\u00b7'),\n 'file': 'greek.pickle', },\n 'latin':\n- {'external': ('.', '?', ':'),\n+ {'external': ('.', '?', '!', ':'),\n 'internal': (',', ';'),\n 'file': 'latin.pickle', }}\n", "issue": "External punctuation stopped working on Latin sent tokenizer\nRecently reviewing the tokenizer, and it is not capturing exclamation points. I'll look to see the NLTK has changed anything.\r\n``` python\r\nIn [12]: text = \"\"\"quam penitus maestas exedit cura medullas! ut tibi tunc toto \r\n ...: pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram\r\n ...: a parva virgine magnanimam. Mam. Aemilius ad castra venit.\"\"\"\r\n\r\nIn [13]: tokenizer.tokenize_sentences(text)\r\nOut[13]: \r\n['quam penitus maestas exedit cura medullas! ut tibi tunc toto pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram a parva virgine magnanimam.',\r\n 'Mam. Aemilius ad castra venit.']\r\n```\n", "before_files": [{"content": "\"\"\"Tokenize sentences.\"\"\"\n\n__author__ = 'Kyle P. Johnson <[email protected]>'\n__license__ = 'MIT License. See LICENSE.'\n\n\nfrom cltk.utils.file_operations import open_pickle\nfrom nltk.tokenize.punkt import PunktLanguageVars\nfrom nltk.tokenize.punkt import PunktSentenceTokenizer\nimport os\n\n\nPUNCTUATION = {'greek':\n {'external': ('.', ';'),\n 'internal': (',', '\u00b7'),\n 'file': 'greek.pickle', },\n 'latin':\n {'external': ('.', '?', ':'),\n 'internal': (',', ';'),\n 'file': 'latin.pickle', }}\n\n\nclass TokenizeSentence(): # pylint: disable=R0903\n \"\"\"Tokenize sentences for the language given as argument, e.g.,\n ``TokenizeSentence('greek')``.\n \"\"\"\n\n def __init__(self: object, language: str):\n \"\"\"Lower incoming language name and assemble variables.\n :type language: str\n :param language : Language for sentence tokenization.\n \"\"\"\n self.language = language.lower()\n self.internal_punctuation, self.external_punctuation, self.tokenizer_path = \\\n self._setup_language_variables(self.language)\n\n def _setup_language_variables(self, lang: str):\n \"\"\"Check for language availability and presence of tokenizer file,\n then read punctuation characters for language and build tokenizer file\n path.\n :param lang: The language argument given to the class.\n :type lang: str\n :rtype (str, str, str)\n \"\"\"\n assert lang in PUNCTUATION.keys(), \\\n 'Sentence tokenizer not available for {0} language.'.format(lang)\n internal_punctuation = PUNCTUATION[lang]['internal']\n external_punctuation = PUNCTUATION[lang]['external']\n file = PUNCTUATION[lang]['file']\n rel_path = os.path.join('~/cltk_data',\n lang,\n 'model/' + lang + '_models_cltk/tokenizers/sentence') # pylint: disable=C0301\n path = os.path.expanduser(rel_path)\n tokenizer_path = os.path.join(path, file)\n assert os.path.isfile(tokenizer_path), \\\n 'CLTK linguistics data not found for language {0}'.format(lang)\n return internal_punctuation, external_punctuation, tokenizer_path\n\n def _setup_tokenizer(self, tokenizer: object):\n \"\"\"Add tokenizer and punctuation variables.\n :type tokenizer: object\n :param tokenizer : Unpickled tokenizer object.\n :rtype : object\n \"\"\"\n language_punkt_vars = PunktLanguageVars\n language_punkt_vars.sent_end_chars = self.external_punctuation\n language_punkt_vars.internal_punctuation = self.internal_punctuation\n tokenizer.INCLUDE_ALL_COLLOCS = 
True\n tokenizer.INCLUDE_ABBREV_COLLOCS = True\n params = tokenizer.get_params()\n return PunktSentenceTokenizer(params)\n\n def tokenize_sentences(self: object, untokenized_string: str):\n \"\"\"Tokenize sentences by reading trained tokenizer and invoking\n ``PunktSentenceTokenizer()``.\n :type untokenized_string: str\n :param untokenized_string: A string containing one of more sentences.\n :rtype : list of strings\n \"\"\"\n # load tokenizer\n assert isinstance(untokenized_string, str), \\\n 'Incoming argument must be a string.'\n tokenizer = open_pickle(self.tokenizer_path)\n tokenizer = self._setup_tokenizer(tokenizer)\n\n # mk list of tokenized sentences\n tokenized_sentences = []\n for sentence in tokenizer.sentences_from_text(untokenized_string, realign_boundaries=True): # pylint: disable=C0301\n tokenized_sentences.append(sentence)\n return tokenized_sentences\n \n def tokenize(self: object, untokenized_string: str):\n # NLTK's PlaintextCorpusReader needs a function called tokenize\n # in functions used as a parameter for sentence tokenization.\n # So this is an alias for tokenize_sentences().\n return self.tokenize_sentences(untokenized_string)\n", "path": "cltk/tokenize/sentence.py"}]} | 1,817 | 119 |
gh_patches_debug_7773 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider jbhifi is broken
During the global build at 2021-06-16-14-42-20, spider **jbhifi** failed with **78 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/jbhifi.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson))
</issue>
<code>
[start of locations/spiders/jbhifi.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 DAYS = ['Su', 'Mo', 'Tu', "We", 'Th', 'Fr', 'Sa']
8
9 class JbHifiSpider(scrapy.Spider):
10 name = "jbhifi"
11 allowed_domains = ["algolia.net"]
12
13 def start_requests(self):
14 headers = {"Content-Type": "application/json",
15 "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0",
16 "Origin": "https://www.jbhifi.com.au",
17 "Referer": "https://www.jbhifi.com.au/pages/store-finder",
18 "Accept": "*/*",
19 'Accept-Encoding': 'gzip, deflate'
20
21 }
22 yield scrapy.http.Request(
23 url="https://vtvkm5urpx-dsn.algolia.net/1/indexes/shopify_store_locations/query?x-algolia-agent=Algolia for JavaScript (3.35.1); Browser (lite)&x-algolia-application-id=VTVKM5URPX&x-algolia-api-key=a0c0108d737ad5ab54a0e2da900bf040",
24 method="POST",
25 headers=headers,
26 body='{"params":"query=&hitsPerPage=1000&filters=displayOnWeb%3Ap"}')
27
28 def process_trading_hours(self, store_hours):
29 opening_hours = OpeningHours()
30 for day in store_hours:
31 opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])
32
33 return opening_hours.as_opening_hours()
34
35 def parse(self, response):
36 stores = json.loads(response.body)
37
38 for store in stores['hits']:
39 properties = {
40 'ref': store['shopId'],
41 'name': store['storeName'],
42 'addr_full': f"{store['storeAddress']['Line1']} {store['storeAddress'].get('Line2','')} {store['storeAddress'].get('Line3','')}".strip(),
43 'city': store['storeAddress']['Suburb'],
44 'state': store['storeAddress']['State'],
45 'postcode': store['storeAddress']['Postcode'],
46 'country': 'AU',
47 'lat': store['_geoloc']['lat'],
48 'lon': store['_geoloc']['lng'],
49 'phone': store['phone'],
50 'opening_hours': self.process_trading_hours(store['normalTradingHours'])
51 }
52
53 yield GeojsonPointItem(**properties)
[end of locations/spiders/jbhifi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/jbhifi.py b/locations/spiders/jbhifi.py
--- a/locations/spiders/jbhifi.py
+++ b/locations/spiders/jbhifi.py
@@ -28,7 +28,8 @@
def process_trading_hours(self, store_hours):
opening_hours = OpeningHours()
for day in store_hours:
- opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])
+ if 'NULL' not in day['OpeningTime'] and 'NULL' not in day['ClosingTime']:
+ opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])
return opening_hours.as_opening_hours()
| {"golden_diff": "diff --git a/locations/spiders/jbhifi.py b/locations/spiders/jbhifi.py\n--- a/locations/spiders/jbhifi.py\n+++ b/locations/spiders/jbhifi.py\n@@ -28,7 +28,8 @@\n def process_trading_hours(self, store_hours):\n opening_hours = OpeningHours()\n for day in store_hours:\n- opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])\n+ if 'NULL' not in day['OpeningTime'] and 'NULL' not in day['ClosingTime']:\n+ opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])\n \n return opening_hours.as_opening_hours()\n", "issue": "Spider jbhifi is broken\nDuring the global build at 2021-06-16-14-42-20, spider **jbhifi** failed with **78 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/jbhifi.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = ['Su', 'Mo', 'Tu', \"We\", 'Th', 'Fr', 'Sa']\n\nclass JbHifiSpider(scrapy.Spider):\n name = \"jbhifi\"\n allowed_domains = [\"algolia.net\"]\n \n def start_requests(self):\n headers = {\"Content-Type\": \"application/json\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0\",\n \"Origin\": \"https://www.jbhifi.com.au\",\n \"Referer\": \"https://www.jbhifi.com.au/pages/store-finder\",\n \"Accept\": \"*/*\",\n 'Accept-Encoding': 'gzip, deflate'\n\n }\n yield scrapy.http.Request(\n url=\"https://vtvkm5urpx-dsn.algolia.net/1/indexes/shopify_store_locations/query?x-algolia-agent=Algolia for JavaScript (3.35.1); Browser (lite)&x-algolia-application-id=VTVKM5URPX&x-algolia-api-key=a0c0108d737ad5ab54a0e2da900bf040\",\n method=\"POST\",\n headers=headers,\n body='{\"params\":\"query=&hitsPerPage=1000&filters=displayOnWeb%3Ap\"}')\n\n def process_trading_hours(self, store_hours):\n opening_hours = OpeningHours()\n for day in store_hours:\n opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])\n \n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n stores = json.loads(response.body)\n\n for store in stores['hits']:\n properties = {\n 'ref': store['shopId'],\n 'name': store['storeName'],\n 'addr_full': f\"{store['storeAddress']['Line1']} {store['storeAddress'].get('Line2','')} {store['storeAddress'].get('Line3','')}\".strip(),\n 'city': store['storeAddress']['Suburb'],\n 'state': store['storeAddress']['State'],\n 'postcode': store['storeAddress']['Postcode'],\n 'country': 'AU',\n 'lat': store['_geoloc']['lat'],\n 'lon': store['_geoloc']['lng'],\n 'phone': store['phone'],\n 'opening_hours': self.process_trading_hours(store['normalTradingHours'])\n }\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/jbhifi.py"}]} | 1,417 | 165 |
gh_patches_debug_1878 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-5856 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Request to release GCS Python library
Hi,
Is it possible to release the Storage client library for Python?
I'd like the new method `get_service_account_email` to be available. Unless there exist concerns.
</issue>
<code>
[start of storage/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = 'google-cloud-storage'
24 description = 'Google Cloud Storage API client library'
25 version = '1.10.0'
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = 'Development Status :: 5 - Production/Stable'
31 dependencies = [
32 'google-cloud-core<0.29dev,>=0.28.0',
33 'google-api-core<2.0.0dev,>=0.1.1',
34 'google-resumable-media>=0.3.1',
35 ]
36 extras = {
37 }
38
39
40 # Setup boilerplate below this line.
41
42 package_root = os.path.abspath(os.path.dirname(__file__))
43
44 readme_filename = os.path.join(package_root, 'README.rst')
45 with io.open(readme_filename, encoding='utf-8') as readme_file:
46 readme = readme_file.read()
47
48 # Only include packages under the 'google' namespace. Do not include tests,
49 # benchmarks, etc.
50 packages = [
51 package for package in setuptools.find_packages()
52 if package.startswith('google')]
53
54 # Determine which namespaces are needed.
55 namespaces = ['google']
56 if 'google.cloud' in packages:
57 namespaces.append('google.cloud')
58
59
60 setuptools.setup(
61 name=name,
62 version=version,
63 description=description,
64 long_description=readme,
65 author='Google LLC',
66 author_email='[email protected]',
67 license='Apache 2.0',
68 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
69 classifiers=[
70 release_status,
71 'Intended Audience :: Developers',
72 'License :: OSI Approved :: Apache Software License',
73 'Programming Language :: Python',
74 'Programming Language :: Python :: 2',
75 'Programming Language :: Python :: 2.7',
76 'Programming Language :: Python :: 3',
77 'Programming Language :: Python :: 3.4',
78 'Programming Language :: Python :: 3.5',
79 'Programming Language :: Python :: 3.6',
80 'Operating System :: OS Independent',
81 'Topic :: Internet',
82 ],
83 platforms='Posix; MacOS X; Windows',
84 packages=packages,
85 namespace_packages=namespaces,
86 install_requires=dependencies,
87 extras_require=extras,
88 include_package_data=True,
89 zip_safe=False,
90 )
91
[end of storage/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/storage/setup.py b/storage/setup.py
--- a/storage/setup.py
+++ b/storage/setup.py
@@ -22,7 +22,7 @@
name = 'google-cloud-storage'
description = 'Google Cloud Storage API client library'
-version = '1.10.0'
+version = '1.11.0'
# Should be one of:
# 'Development Status :: 3 - Alpha'
# 'Development Status :: 4 - Beta'
| {"golden_diff": "diff --git a/storage/setup.py b/storage/setup.py\n--- a/storage/setup.py\n+++ b/storage/setup.py\n@@ -22,7 +22,7 @@\n \n name = 'google-cloud-storage'\n description = 'Google Cloud Storage API client library'\n-version = '1.10.0'\n+version = '1.11.0'\n # Should be one of:\n # 'Development Status :: 3 - Alpha'\n # 'Development Status :: 4 - Beta'\n", "issue": "Request to release GCS Python library\nHi,\r\n\r\nIs it possible to release the Storage client library for Python?\r\n\r\nI'd like the new method `get_service_account_email` to be available. Unless there exist concerns.\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-storage'\ndescription = 'Google Cloud Storage API client library'\nversion = '1.10.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 5 - Production/Stable'\ndependencies = [\n 'google-cloud-core<0.29dev,>=0.28.0',\n 'google-api-core<2.0.0dev,>=0.1.1',\n 'google-resumable-media>=0.3.1',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "storage/setup.py"}]} | 1,399 | 102 |
gh_patches_debug_1456 | rasdani/github-patches | git_diff | arviz-devs__arviz-596 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installing arviz breaks pymc3 installation
**Describe the bug**
Installing Arviz breaks a pymc3 installation, which is unfortunate because they're built to be compatible. After installation, importing pymc3 throws the following error.
> WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
The reason is because arviz installation requires numpy==1.15 rather than numpy>=1.15. If you have 1.16, it uninstalls it and re-installs 1.15. It's annoying to fix. I ended up having to scrap the whole virtual environment and start over.
**To Reproduce**
Install arviz if you have any version of numpy other than 1.15, then import pymc3.
**Expected behavior**
Do not force downgrade of numpy.
</issue>
<code>
[start of arviz/__init__.py]
1 # pylint: disable=wildcard-import,invalid-name,wrong-import-position
2 """ArviZ is a library for exploratory analysis of Bayesian models."""
3 __version__ = "0.3.2"
4
5 import os
6 import logging
7 from matplotlib.pyplot import style
8
9 # add ArviZ's styles to matplotlib's styles
10 arviz_style_path = os.path.join(os.path.dirname(__file__), "plots", "styles")
11 style.core.USER_LIBRARY_PATHS.append(arviz_style_path)
12 style.core.reload_library()
13
14 # Configure logging before importing arviz internals
15 _log = logging.getLogger("arviz")
16
17 if not logging.root.handlers:
18 handler = logging.StreamHandler()
19 _log.setLevel(logging.INFO)
20 _log.addHandler(handler)
21
22 from .data import *
23 from .plots import *
24 from .stats import *
25
[end of arviz/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/arviz/__init__.py b/arviz/__init__.py
--- a/arviz/__init__.py
+++ b/arviz/__init__.py
@@ -1,6 +1,6 @@
# pylint: disable=wildcard-import,invalid-name,wrong-import-position
"""ArviZ is a library for exploratory analysis of Bayesian models."""
-__version__ = "0.3.2"
+__version__ = "0.3.3"
import os
import logging
| {"golden_diff": "diff --git a/arviz/__init__.py b/arviz/__init__.py\n--- a/arviz/__init__.py\n+++ b/arviz/__init__.py\n@@ -1,6 +1,6 @@\n # pylint: disable=wildcard-import,invalid-name,wrong-import-position\n \"\"\"ArviZ is a library for exploratory analysis of Bayesian models.\"\"\"\n-__version__ = \"0.3.2\"\n+__version__ = \"0.3.3\"\n \n import os\n import logging\n", "issue": "Installing arviz breaks pymc3 installation\n**Describe the bug**\r\nInstalling Arviz breaks a pymc3 installation, which is unfortunate because they're built to be compatible. After installation, importing pymc3 throws the following error. \r\n\r\n> WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.\r\n\r\nThe reason is because arviz installation requires numpy==1.15 rather than numpy>=1.15. If you have 1.16, it uninstalls it and re-installs 1.15. It's annoying to fix. I ended up having to scrap the whole virtual environment and start over.\r\n\r\n**To Reproduce**\r\nInstall arviz if you have any version of numpy other than 1.15, then import pymc3. \r\n\r\n**Expected behavior**\r\nDo not force downgrade of numpy. \n", "before_files": [{"content": "# pylint: disable=wildcard-import,invalid-name,wrong-import-position\n\"\"\"ArviZ is a library for exploratory analysis of Bayesian models.\"\"\"\n__version__ = \"0.3.2\"\n\nimport os\nimport logging\nfrom matplotlib.pyplot import style\n\n# add ArviZ's styles to matplotlib's styles\narviz_style_path = os.path.join(os.path.dirname(__file__), \"plots\", \"styles\")\nstyle.core.USER_LIBRARY_PATHS.append(arviz_style_path)\nstyle.core.reload_library()\n\n# Configure logging before importing arviz internals\n_log = logging.getLogger(\"arviz\")\n\nif not logging.root.handlers:\n handler = logging.StreamHandler()\n _log.setLevel(logging.INFO)\n _log.addHandler(handler)\n\nfrom .data import *\nfrom .plots import *\nfrom .stats import *\n", "path": "arviz/__init__.py"}]} | 924 | 109 |
gh_patches_debug_14545 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-332 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failed generating cifar10 dataset when building dev image
</issue>
<code>
[start of elasticdl/recordio_ds_gen/cifar10/show_data.py]
1 from recordio import File
2 from elasticdl.recordio_ds_gen.mnist import record
3 import sys
4 import argparse
5
6 # TODO: share code with MNIST dataset.
7 def main(argv):
8 print(argv)
9 parser = argparse.ArgumentParser(
10 description="Show same data from CIFAR10 recordio"
11 )
12 parser.add_argument("file", help="RecordIo file to read")
13 parser.add_argument(
14 "--start", default=0, type=int, help="Start record number"
15 )
16 parser.add_argument("--step", default=1, type=int, help="Step")
17 parser.add_argument(
18 "--n", default=20, type=int, help="How many record to show"
19 )
20 args = parser.parse_args(argv)
21
22 with File(args.file, "r") as f:
23 for i in range(
24 args.start, args.start + (args.n * args.step), args.step
25 ):
26 print("-" * 10)
27 print("record:", i)
28 record.show(*record.decode(f.get(i)))
29
30
31 if __name__ == "__main__":
32 main(sys.argv[1:])
33
[end of elasticdl/recordio_ds_gen/cifar10/show_data.py]
[start of elasticdl/recordio_ds_gen/cifar10/gen_data.py]
1 #!/usr/bin/env python
2
3 """
4 Download and transform CIFAR10 data to RecordIO format.
5 """
6
7 import itertools
8 import argparse
9 import os
10 import sys
11 from recordio import File
12 from tensorflow.python.keras import backend
13 from tensorflow.python.keras.datasets import cifar10
14 from elasticdl.recordio_ds_gen.mnist import record
15
16 # TODO: This function can be shared with MNIST dataset
17 def gen(file_dir, data, label, *, chunk_size, record_per_file):
18 assert len(data) == len(label) and len(data) > 0
19 os.makedirs(file_dir)
20 it = zip(data, label)
21 try:
22 for i in itertools.count():
23 file_name = file_dir + "/data-%04d" % i
24 print("writing:", file_name)
25 with File(file_name, "w", max_chunk_size=chunk_size) as f:
26 for _ in range(record_per_file):
27 row = next(it)
28 f.write(record.encode(row[0], row[1]))
29 except StopIteration:
30 pass
31
32
33 def main(argv):
34 parser = argparse.ArgumentParser(
35 description="Generate CIFAR10 datasets in RecordIO format."
36 )
37 parser.add_argument("dir", help="Output directory")
38 parser.add_argument(
39 "--num_record_per_chunk",
40 default=1024,
41 type=int,
42 help="Approximate number of records in a chunk.",
43 )
44 parser.add_argument(
45 "--num_chunk",
46 default=16,
47 type=int,
48 help="Number of chunks in a RecordIO file",
49 )
50 args = parser.parse_args(argv)
51 # one uncompressed record has size 3 * 32 * 32 + 1 bytes.
52 # Also add some slack for safety.
53 chunk_size = args.num_record_per_chunk * (3 * 32 * 32 + 1) + 100
54 record_per_file = args.num_record_per_chunk * args.num_chunk
55 backend.set_image_data_format("channels_first")
56
57 (x_train, y_train), (x_test, y_test) = cifar10.load_data()
58 gen(
59 args.dir + "/cifar10/train",
60 x_train,
61 y_train,
62 chunk_size=chunk_size,
63 record_per_file=record_per_file,
64 )
65
66 # Work around a bug in cifar10.load_data() where y_test is not converted
67 # to uint8
68 y_test = y_test.astype("uint8")
69 gen(
70 args.dir + "/cifar10/test",
71 x_test,
72 y_test,
73 chunk_size=chunk_size,
74 record_per_file=record_per_file,
75 )
76
77
78 if __name__ == "__main__":
79 main(sys.argv[1:])
80
[end of elasticdl/recordio_ds_gen/cifar10/gen_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/recordio_ds_gen/cifar10/gen_data.py b/elasticdl/recordio_ds_gen/cifar10/gen_data.py
--- a/elasticdl/recordio_ds_gen/cifar10/gen_data.py
+++ b/elasticdl/recordio_ds_gen/cifar10/gen_data.py
@@ -11,7 +11,7 @@
from recordio import File
from tensorflow.python.keras import backend
from tensorflow.python.keras.datasets import cifar10
-from elasticdl.recordio_ds_gen.mnist import record
+from elasticdl.recordio_ds_gen.cifar10 import record
# TODO: This function can be shared with MNIST dataset
def gen(file_dir, data, label, *, chunk_size, record_per_file):
diff --git a/elasticdl/recordio_ds_gen/cifar10/show_data.py b/elasticdl/recordio_ds_gen/cifar10/show_data.py
--- a/elasticdl/recordio_ds_gen/cifar10/show_data.py
+++ b/elasticdl/recordio_ds_gen/cifar10/show_data.py
@@ -1,5 +1,5 @@
from recordio import File
-from elasticdl.recordio_ds_gen.mnist import record
+from elasticdl.recordio_ds_gen.cifar10 import record
import sys
import argparse
| {"golden_diff": "diff --git a/elasticdl/recordio_ds_gen/cifar10/gen_data.py b/elasticdl/recordio_ds_gen/cifar10/gen_data.py\n--- a/elasticdl/recordio_ds_gen/cifar10/gen_data.py\n+++ b/elasticdl/recordio_ds_gen/cifar10/gen_data.py\n@@ -11,7 +11,7 @@\n from recordio import File\n from tensorflow.python.keras import backend\n from tensorflow.python.keras.datasets import cifar10\n-from elasticdl.recordio_ds_gen.mnist import record\n+from elasticdl.recordio_ds_gen.cifar10 import record\n \n # TODO: This function can be shared with MNIST dataset\n def gen(file_dir, data, label, *, chunk_size, record_per_file):\ndiff --git a/elasticdl/recordio_ds_gen/cifar10/show_data.py b/elasticdl/recordio_ds_gen/cifar10/show_data.py\n--- a/elasticdl/recordio_ds_gen/cifar10/show_data.py\n+++ b/elasticdl/recordio_ds_gen/cifar10/show_data.py\n@@ -1,5 +1,5 @@\n from recordio import File\n-from elasticdl.recordio_ds_gen.mnist import record\n+from elasticdl.recordio_ds_gen.cifar10 import record\n import sys\n import argparse\n", "issue": "Failed generating cifar10 dataset when building dev image\n\n", "before_files": [{"content": "from recordio import File\nfrom elasticdl.recordio_ds_gen.mnist import record\nimport sys\nimport argparse\n\n# TODO: share code with MNIST dataset.\ndef main(argv):\n print(argv)\n parser = argparse.ArgumentParser(\n description=\"Show same data from CIFAR10 recordio\"\n )\n parser.add_argument(\"file\", help=\"RecordIo file to read\")\n parser.add_argument(\n \"--start\", default=0, type=int, help=\"Start record number\"\n )\n parser.add_argument(\"--step\", default=1, type=int, help=\"Step\")\n parser.add_argument(\n \"--n\", default=20, type=int, help=\"How many record to show\"\n )\n args = parser.parse_args(argv)\n\n with File(args.file, \"r\") as f:\n for i in range(\n args.start, args.start + (args.n * args.step), args.step\n ):\n print(\"-\" * 10)\n print(\"record:\", i)\n record.show(*record.decode(f.get(i)))\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "elasticdl/recordio_ds_gen/cifar10/show_data.py"}, {"content": "#!/usr/bin/env python\n\n\"\"\"\nDownload and transform CIFAR10 data to RecordIO format.\n\"\"\"\n\nimport itertools\nimport argparse\nimport os\nimport sys\nfrom recordio import File\nfrom tensorflow.python.keras import backend\nfrom tensorflow.python.keras.datasets import cifar10\nfrom elasticdl.recordio_ds_gen.mnist import record\n\n# TODO: This function can be shared with MNIST dataset\ndef gen(file_dir, data, label, *, chunk_size, record_per_file):\n assert len(data) == len(label) and len(data) > 0\n os.makedirs(file_dir)\n it = zip(data, label)\n try:\n for i in itertools.count():\n file_name = file_dir + \"/data-%04d\" % i\n print(\"writing:\", file_name)\n with File(file_name, \"w\", max_chunk_size=chunk_size) as f:\n for _ in range(record_per_file):\n row = next(it)\n f.write(record.encode(row[0], row[1]))\n except StopIteration:\n pass\n\n\ndef main(argv):\n parser = argparse.ArgumentParser(\n description=\"Generate CIFAR10 datasets in RecordIO format.\"\n )\n parser.add_argument(\"dir\", help=\"Output directory\")\n parser.add_argument(\n \"--num_record_per_chunk\",\n default=1024,\n type=int,\n help=\"Approximate number of records in a chunk.\",\n )\n parser.add_argument(\n \"--num_chunk\",\n default=16,\n type=int,\n help=\"Number of chunks in a RecordIO file\",\n )\n args = parser.parse_args(argv)\n # one uncompressed record has size 3 * 32 * 32 + 1 bytes.\n # Also add some slack for safety.\n chunk_size = 
args.num_record_per_chunk * (3 * 32 * 32 + 1) + 100\n record_per_file = args.num_record_per_chunk * args.num_chunk\n backend.set_image_data_format(\"channels_first\")\n\n (x_train, y_train), (x_test, y_test) = cifar10.load_data()\n gen(\n args.dir + \"/cifar10/train\",\n x_train,\n y_train,\n chunk_size=chunk_size,\n record_per_file=record_per_file,\n )\n\n # Work around a bug in cifar10.load_data() where y_test is not converted\n # to uint8\n y_test = y_test.astype(\"uint8\")\n gen(\n args.dir + \"/cifar10/test\",\n x_test,\n y_test,\n chunk_size=chunk_size,\n record_per_file=record_per_file,\n )\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "elasticdl/recordio_ds_gen/cifar10/gen_data.py"}]} | 1,637 | 287 |
gh_patches_debug_29462 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2566 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Search in EUTF akvo site
Partner team had a training and workshop with EUTF last week and discovered that search terms in EUTF akvo site returned unrelated results.
Search for tombouctou shows up a project of SNV in EUTF akvo page, which is confusing for the partner as they expect to see their own projects only on their akvo site.
<img width="1070" alt="screen shot 2017-02-06 at 15 56 41" src="https://cloud.githubusercontent.com/assets/21127166/22652066/45bdf606-ec85-11e6-9c05-25d421b329c1.png">
What the partner expects is to see just projects where they are one of the participating partners.
If the search does not match any of their projects, it should then not return anything.
</issue>
<code>
[start of akvo/rest/views/typeahead.py]
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 from akvo.rest.serializers import (TypeaheadCountrySerializer,
10 TypeaheadOrganisationSerializer,
11 TypeaheadProjectSerializer,
12 TypeaheadProjectUpdateSerializer)
13
14 from akvo.codelists.models import Country, Version
15 from akvo.rsr.models import Organisation, Project, ProjectUpdate
16 from akvo.rsr.views.project import _project_directory_coll
17
18 from django.conf import settings
19
20 from rest_framework.decorators import api_view
21 from rest_framework.response import Response
22
23
24 def rejig(queryset, serializer):
25 """Rearrange & add queryset count to the response data."""
26 return {
27 'count': queryset.count(),
28 'results': serializer.data
29 }
30
31
32 @api_view(['GET'])
33 def typeahead_country(request):
34 iati_version = Version.objects.get(code=settings.IATI_VERSION)
35 countries = Country.objects.filter(version=iati_version)
36 return Response(
37 rejig(countries, TypeaheadCountrySerializer(countries, many=True))
38 )
39
40
41 @api_view(['GET'])
42 def typeahead_organisation(request):
43 organisations = Organisation.objects.all()
44 return Response(
45 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
46 many=True))
47 )
48
49
50 @api_view(['GET'])
51 def typeahead_user_organisations(request):
52 user = request.user
53 is_admin = user.is_active and (user.is_superuser or user.is_admin)
54 organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()
55 return Response(
56 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
57 many=True))
58 )
59
60
61 @api_view(['GET'])
62 def typeahead_project(request):
63 """Return the typeaheads for projects.
64
65 Without any query parameters, it returns the info for all the projects in
66 the current context -- changes depending on whether we are on a partner
67 site, or the RSR site.
68
69 If a project query parameter with a project id is passed, the info for all
70 projects associated with partners for the specified project is returned.
71
72 NOTE: The unauthenticated user gets information about all the projects when
73 using this API endpoint. More permission checking will need to be added,
74 if the amount of data being returned is changed.
75
76 """
77 project_id = request.GET.get('project', None)
78 if project_id is None:
79 project = None
80
81 else:
82 try:
83 project = Project.objects.get(id=project_id)
84 except Project.DoesNotExist:
85 project = None
86
87 if project is None:
88 # Search bar - organization projects, published
89 projects = _project_directory_coll(request)
90
91 else:
92 # Project editor - all projects of partners for this project
93 projects = Project.objects.of_partners(project.partners.distinct()).distinct()
94
95 projects = projects.exclude(title='')
96 return Response(
97 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
98 )
99
100
101 @api_view(['GET'])
102 def typeahead_user_projects(request):
103 user = request.user
104 is_admin = user.is_active and (user.is_superuser or user.is_admin)
105 if is_admin:
106 projects = Project.objects.all()
107 else:
108 projects = user.approved_organisations().all_projects()
109 projects = projects.exclude(title='')
110 return Response(
111 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
112 )
113
114
115 @api_view(['GET'])
116 def typeahead_impact_projects(request):
117 user = request.user
118 projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()
119 projects = projects.published().filter(is_impact_project=True).order_by('title')
120
121 return Response(
122 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
123 )
124
125
126 @api_view(['GET'])
127 def typeahead_projectupdate(request):
128 updates = ProjectUpdate.objects.all()
129 return Response(
130 rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))
131 )
132
[end of akvo/rest/views/typeahead.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py
--- a/akvo/rest/views/typeahead.py
+++ b/akvo/rest/views/typeahead.py
@@ -66,32 +66,22 @@
the current context -- changes depending on whether we are on a partner
site, or the RSR site.
- If a project query parameter with a project id is passed, the info for all
- projects associated with partners for the specified project is returned.
+ If a published query parameter is passed, only projects that have been
+ published are returned.
NOTE: The unauthenticated user gets information about all the projects when
using this API endpoint. More permission checking will need to be added,
if the amount of data being returned is changed.
"""
- project_id = request.GET.get('project', None)
- if project_id is None:
- project = None
-
+ if request.GET.get('published', '0') == '0':
+ # Project editor - organization projects, all
+ page = request.rsr_page
+ projects = page.organisation.all_projects() if page else Project.objects.all()
else:
- try:
- project = Project.objects.get(id=project_id)
- except Project.DoesNotExist:
- project = None
-
- if project is None:
# Search bar - organization projects, published
projects = _project_directory_coll(request)
- else:
- # Project editor - all projects of partners for this project
- projects = Project.objects.of_partners(project.partners.distinct()).distinct()
-
projects = projects.exclude(title='')
return Response(
rejig(projects, TypeaheadProjectSerializer(projects, many=True))
| {"golden_diff": "diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py\n--- a/akvo/rest/views/typeahead.py\n+++ b/akvo/rest/views/typeahead.py\n@@ -66,32 +66,22 @@\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n \n- If a project query parameter with a project id is passed, the info for all\n- projects associated with partners for the specified project is returned.\n+ If a published query parameter is passed, only projects that have been\n+ published are returned.\n \n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. More permission checking will need to be added,\n if the amount of data being returned is changed.\n \n \"\"\"\n- project_id = request.GET.get('project', None)\n- if project_id is None:\n- project = None\n-\n+ if request.GET.get('published', '0') == '0':\n+ # Project editor - organization projects, all\n+ page = request.rsr_page\n+ projects = page.organisation.all_projects() if page else Project.objects.all()\n else:\n- try:\n- project = Project.objects.get(id=project_id)\n- except Project.DoesNotExist:\n- project = None\n-\n- if project is None:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n \n- else:\n- # Project editor - all projects of partners for this project\n- projects = Project.objects.of_partners(project.partners.distinct()).distinct()\n-\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n", "issue": "Search in EUTF akvo site\nPartner team had a training and workshop with EUTF last week and discovered that search terms in EUTF akvo site returned unrelated results.\r\n\r\nSearch for tombouctou shows up a project of SNV in EUTF akvo page, which is confusing for the partner as they expect to see their own projects only on their akvo site. \r\n\r\n<img width=\"1070\" alt=\"screen shot 2017-02-06 at 15 56 41\" src=\"https://cloud.githubusercontent.com/assets/21127166/22652066/45bdf606-ec85-11e6-9c05-25d421b329c1.png\">\r\n\r\nWhat the partner expects is to see just projects where they are one of the participating partners. \r\nIf the search does not match any of their projects, it should then not return anything. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. 
For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\n\nfrom akvo.codelists.models import Country, Version\nfrom akvo.rsr.models import Organisation, Project, ProjectUpdate\nfrom akvo.rsr.views.project import _project_directory_coll\n\nfrom django.conf import settings\n\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n iati_version = Version.objects.get(code=settings.IATI_VERSION)\n countries = Country.objects.filter(version=iati_version)\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n organisations = Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n \"\"\"Return the typeaheads for projects.\n\n Without any query parameters, it returns the info for all the projects in\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n\n If a project query parameter with a project id is passed, the info for all\n projects associated with partners for the specified project is returned.\n\n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. 
More permission checking will need to be added,\n if the amount of data being returned is changed.\n\n \"\"\"\n project_id = request.GET.get('project', None)\n if project_id is None:\n project = None\n\n else:\n try:\n project = Project.objects.get(id=project_id)\n except Project.DoesNotExist:\n project = None\n\n if project is None:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n\n else:\n # Project editor - all projects of partners for this project\n projects = Project.objects.of_partners(project.partners.distinct()).distinct()\n\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects = projects.published().filter(is_impact_project=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}]} | 1,953 | 387 |
gh_patches_debug_24894 | rasdani/github-patches | git_diff | privacyidea__privacyidea-2130 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Better separation of audit log from privacyidea
Hi,
Python logging output can easily be separated by qualname. However, privacyidea uses the module/class names as qualnames, and since they all start with "privacyidea.", it is not possible to send the audit log to one place and everything else to another (Python logging cannot *exclude* qualnames).
To solve this, one could use a custom qualname for the privacyidea audit. I think here:
https://github.com/privacyidea/privacyidea/blob/ea7d9e53d42504288ba3909f7057924fe8d250b0/privacyidea/lib/auditmodules/loggeraudit.py#L62
Best regards,
Henning
</issue>
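For illustration, a minimal, self-contained sketch of the separation being requested, assuming the audit logger used a qualname outside the `privacyidea.` hierarchy (the qualnames and file paths below are made up for the example):

```python
import logging

# Illustrative qualnames only: "audit" sits outside the "privacyidea." prefix,
# so handlers attached to the "privacyidea" hierarchy never see audit records.
audit_log = logging.getLogger("audit")
app_log = logging.getLogger("privacyidea.lib.token")
audit_log.setLevel(logging.INFO)
app_log.setLevel(logging.INFO)

audit_log.addHandler(logging.FileHandler("/tmp/audit.log"))
app_log.addHandler(logging.FileHandler("/tmp/privacyidea.log"))
audit_log.propagate = False  # keep audit entries out of any root handlers

audit_log.info("written only to audit.log")
app_log.info("written only to privacyidea.log")
```

A configurable qualname along these lines would let the `[logger_audit]` section of a `logging.cfg` target the custom name instead of the module path.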
<code>
[start of privacyidea/lib/auditmodules/loggeraudit.py]
1 # -*- coding: utf-8 -*-
2 #
3 # 2019-11-06 Cornelius Kölbel <[email protected]>
4 # initial code for writing audit information to a file
5 #
6 # This code is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
8 # License as published by the Free Software Foundation; either
9 # version 3 of the License, or any later version.
10 #
11 # This code is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public
17 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 #
20 __doc__ = """The Logger Audit Module is used to write audit entries to the Python logging module.
21
22 The Logger Audit Module is configured like this:
23
24 PI_AUDIT_MODULE = "privacyidea.lib.auditmodules.loggeraudit"
25 PI_AUDIT_SERVERNAME = "your choice"
26
27 PI_LOGCONFIG = "/etc/privacyidea/logging.cfg"
28
29 The LoggerAudit Class uses the same PI logging config as you could use anyways.
30 To explicitly write audit logs, you need to add something like the following to
31 the logging.cfg
32
33 Example:
34
35 [handlers]
36 keys=file,audit
37
38 [loggers]
39 keys=root,privacyidea,audit
40
41 ...
42
43 [logger_audit]
44 handlers=audit
45 qualname=privacyidea.lib.auditmodules.loggeraudit
46 level=INFO
47
48 [handler_audit]
49 class=logging.handlers.RotatingFileHandler
50 backupCount=14
51 maxBytes=10000000
52 formatter=detail
53 level=INFO
54 args=('/var/log/privacyidea/audit.log',)
55
56 """
57
58 import logging
59 from privacyidea.lib.auditmodules.base import (Audit as AuditBase)
60 import datetime
61
62 log = logging.getLogger(__name__)
63
64
65 class Audit(AuditBase):
66 """
67 This is the LoggerAudit module, which writes the audit entries
68 to the Python logging
69
70 .. note:: This audit module does not provide a *Read* capability.
71 """
72
73 def __init__(self, config=None):
74 super(Audit, self).__init__(config)
75 self.name = "loggeraudit"
76
77 def finalize_log(self):
78 """
79 This method is used to log the data
80 e.g. write the data to a file.
81 """
82 self.audit_data["policies"] = ",".join(self.audit_data.get("policies", []))
83 self.audit_data["timestamp"] = datetime.datetime.utcnow()
84 log.info(u"{0!s}".format(self.audit_data))
85 self.audit_data = {}
86
87
88
[end of privacyidea/lib/auditmodules/loggeraudit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/privacyidea/lib/auditmodules/loggeraudit.py b/privacyidea/lib/auditmodules/loggeraudit.py
--- a/privacyidea/lib/auditmodules/loggeraudit.py
+++ b/privacyidea/lib/auditmodules/loggeraudit.py
@@ -56,10 +56,9 @@
"""
import logging
+import json
from privacyidea.lib.auditmodules.base import (Audit as AuditBase)
-import datetime
-
-log = logging.getLogger(__name__)
+from datetime import datetime
class Audit(AuditBase):
@@ -73,6 +72,8 @@
def __init__(self, config=None):
super(Audit, self).__init__(config)
self.name = "loggeraudit"
+ self.qualname = self.config.get('PI_AUDIT_LOGGER_QUALNAME', __name__)
+ self.logger = logging.getLogger(self.qualname)
def finalize_log(self):
"""
@@ -80,8 +81,6 @@
e.g. write the data to a file.
"""
self.audit_data["policies"] = ",".join(self.audit_data.get("policies", []))
- self.audit_data["timestamp"] = datetime.datetime.utcnow()
- log.info(u"{0!s}".format(self.audit_data))
+ self.audit_data["timestamp"] = datetime.utcnow().isoformat()
+ self.logger.info("{0!s}".format(json.dumps(self.audit_data, sort_keys=True)))
self.audit_data = {}
-
-
| {"golden_diff": "diff --git a/privacyidea/lib/auditmodules/loggeraudit.py b/privacyidea/lib/auditmodules/loggeraudit.py\n--- a/privacyidea/lib/auditmodules/loggeraudit.py\n+++ b/privacyidea/lib/auditmodules/loggeraudit.py\n@@ -56,10 +56,9 @@\n \"\"\"\n \n import logging\n+import json\n from privacyidea.lib.auditmodules.base import (Audit as AuditBase)\n-import datetime\n-\n-log = logging.getLogger(__name__)\n+from datetime import datetime\n \n \n class Audit(AuditBase):\n@@ -73,6 +72,8 @@\n def __init__(self, config=None):\n super(Audit, self).__init__(config)\n self.name = \"loggeraudit\"\n+ self.qualname = self.config.get('PI_AUDIT_LOGGER_QUALNAME', __name__)\n+ self.logger = logging.getLogger(self.qualname)\n \n def finalize_log(self):\n \"\"\"\n@@ -80,8 +81,6 @@\n e.g. write the data to a file.\n \"\"\"\n self.audit_data[\"policies\"] = \",\".join(self.audit_data.get(\"policies\", []))\n- self.audit_data[\"timestamp\"] = datetime.datetime.utcnow()\n- log.info(u\"{0!s}\".format(self.audit_data))\n+ self.audit_data[\"timestamp\"] = datetime.utcnow().isoformat()\n+ self.logger.info(\"{0!s}\".format(json.dumps(self.audit_data, sort_keys=True)))\n self.audit_data = {}\n-\n-\n", "issue": "Better separation of audit log from privacyidea*\nHi,\r\n\r\npython logging may be easily separated using the qualname. However, privacyidea uses the module/class names. Since they all start with \"privacyidea.\", it is not possible to log the audit to one place and all the rest to a different place (python logging cannot *exclude* qualnames).\r\n\r\nTo solve this, one could use a custom qualname for the privacyidea audit. I think here:\r\nhttps://github.com/privacyidea/privacyidea/blob/ea7d9e53d42504288ba3909f7057924fe8d250b0/privacyidea/lib/auditmodules/loggeraudit.py#L62\r\n\r\nBest regards,\r\n\r\nHenning\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2019-11-06 Cornelius K\u00f6lbel <[email protected]>\n# initial code for writing audit information to a file\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n#\n__doc__ = \"\"\"The Logger Audit Module is used to write audit entries to the Python logging module.\n\nThe Logger Audit Module is configured like this:\n\n PI_AUDIT_MODULE = \"privacyidea.lib.auditmodules.loggeraudit\"\n PI_AUDIT_SERVERNAME = \"your choice\"\n\n PI_LOGCONFIG = \"/etc/privacyidea/logging.cfg\"\n\nThe LoggerAudit Class uses the same PI logging config as you could use anyways.\nTo explicitly write audit logs, you need to add something like the following to\nthe logging.cfg\n\nExample:\n\n[handlers]\nkeys=file,audit\n\n[loggers]\nkeys=root,privacyidea,audit\n\n...\n\n[logger_audit]\nhandlers=audit\nqualname=privacyidea.lib.auditmodules.loggeraudit\nlevel=INFO\n\n[handler_audit]\nclass=logging.handlers.RotatingFileHandler\nbackupCount=14\nmaxBytes=10000000\nformatter=detail\nlevel=INFO\nargs=('/var/log/privacyidea/audit.log',)\n\n\"\"\"\n\nimport logging\nfrom privacyidea.lib.auditmodules.base import (Audit as AuditBase)\nimport datetime\n\nlog = logging.getLogger(__name__)\n\n\nclass Audit(AuditBase):\n \"\"\"\n This is the LoggerAudit module, which writes the audit entries\n to the Python logging\n\n .. note:: This audit module does not provide a *Read* capability.\n \"\"\"\n\n def __init__(self, config=None):\n super(Audit, self).__init__(config)\n self.name = \"loggeraudit\"\n\n def finalize_log(self):\n \"\"\"\n This method is used to log the data\n e.g. write the data to a file.\n \"\"\"\n self.audit_data[\"policies\"] = \",\".join(self.audit_data.get(\"policies\", []))\n self.audit_data[\"timestamp\"] = datetime.datetime.utcnow()\n log.info(u\"{0!s}\".format(self.audit_data))\n self.audit_data = {}\n\n\n", "path": "privacyidea/lib/auditmodules/loggeraudit.py"}]} | 1,490 | 321 |
gh_patches_debug_1290 | rasdani/github-patches | git_diff | weecology__retriever-950 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check MySQL and Postgres credential files
In addition to allowing users to directly provide their MySQL and PostgreSQL credentials, it should also be possible for them to store these credentials in the usual places.
We should check information given by the user to the retriever first, and then fall back on the configuration files for usernames and passwords if they are not provided.
For PostgreSQL this is `~/.pgpass` with the format:
```
hostname:port:database:username:password
```
See: https://wiki.postgresql.org/wiki/Pgpass. `*`s can be used in place of any of the `:` separated values.
For MySQL this is `~/.my.cnf` with the format:
```
[client]
user = root
password = yourpassword
```
See: https://dev.mysql.com/doc/refman/5.5/en/option-files.html. `.my.cnf` can contain a lot of additional configuration information so we'll need to look explicitly for `user =` and `password =`.
</issue>
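As a sketch of that lookup order for MySQL, assuming a conventional `~/.my.cnf` with a `[client]` section (the function name and structure are illustrative, not the retriever's actual API):

```python
import configparser
import os

def mysql_credentials(user=None, password=None):
    """Prefer explicitly supplied credentials; fall back to ~/.my.cnf."""
    path = os.path.expanduser("~/.my.cnf")
    if (user is None or password is None) and os.path.exists(path):
        cnf = configparser.ConfigParser(allow_no_value=True)
        cnf.read(path)
        if cnf.has_section("client"):
            user = user or cnf.get("client", "user", fallback=None)
            password = password or cnf.get("client", "password", fallback=None)
    return user, password
```

In practice most MySQL drivers already parse this file themselves; pymysql, for instance, accepts a `read_default_file` argument, which avoids hand-rolling any parsing.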
<code>
[start of retriever/engines/mysql.py]
1 from __future__ import print_function
2 from builtins import str
3 import os
4 from retriever.lib.models import Engine, no_cleanup
5 from retriever import ENCODING
6
7
8 class engine(Engine):
9 """Engine instance for MySQL."""
10 name = "MySQL"
11 abbreviation = "mysql"
12 datatypes = {
13 "auto": "INT(5) NOT NULL AUTO_INCREMENT",
14 "int": "INT",
15 "bigint": "BIGINT",
16 "double": "DOUBLE",
17 "decimal": "DECIMAL",
18 "char": ("TEXT", "VARCHAR"),
19 "bool": "BOOL",
20 }
21 max_int = 4294967295
22 placeholder = "%s"
23 required_opts = [("user",
24 "Enter your MySQL username",
25 "root"),
26 ("password",
27 "Enter your password",
28 ""),
29 ("host",
30 "Enter your MySQL host",
31 "localhost"),
32 ("port",
33 "Enter your MySQL port",
34 3306),
35 ("database_name",
36 "Format of database name",
37 "{db}"),
38 ("table_name",
39 "Format of table name",
40 "{db}.{table}"),
41 ]
42
43 def create_db_statement(self):
44 """Returns a SQL statement to create a database."""
45 createstatement = "CREATE DATABASE IF NOT EXISTS " + self.database_name()
46 return createstatement
47
48 def insert_data_from_file(self, filename):
49 """Calls MySQL "LOAD DATA LOCAL INFILE" statement to perform a bulk
50 insert."""
51
52 mysql_set_autocommit_off = """SET autocommit=0; SET UNIQUE_CHECKS=0; SET FOREIGN_KEY_CHECKS=0; SET sql_log_bin=0;"""
53 mysql_set_autocommit_on = """SET GLOBAL innodb_flush_log_at_trx_commit=1; COMMIT; SET autocommit=1; SET unique_checks=1; SET foreign_key_checks=1;"""
54
55 self.get_cursor()
56 ct = len([True for c in self.table.columns if c[1][0][:3] == "ct-"]) != 0
57 if (self.table.cleanup.function == no_cleanup and
58 not self.table.fixed_width and
59 not ct and
60 (not hasattr(self.table, "do_not_bulk_insert") or not self.table.do_not_bulk_insert)):
61
62 print ("Inserting data from " + os.path.basename(filename) + "...")
63
64 columns = self.table.get_insert_columns()
65 statement = """
66 LOAD DATA LOCAL INFILE '""" + filename.replace("\\", "\\\\") + """'
67 INTO TABLE """ + self.table_name() + """
68 FIELDS TERMINATED BY '""" + self.table.delimiter + """'
69 OPTIONALLY ENCLOSED BY '"'
70 LINES TERMINATED BY '\\n'
71 IGNORE """ + str(self.table.header_rows) + """ LINES
72 (""" + columns + ")"
73 try:
74 self.cursor.execute(mysql_set_autocommit_off)
75 self.cursor.execute(statement)
76
77 self.cursor.execute(mysql_set_autocommit_on)
78 except Exception as e:
79 self.disconnect() # If the execute fails the database connection can get hung up
80 self.cursor.execute(mysql_set_autocommit_on)
81 return Engine.insert_data_from_file(self, filename)
82 else:
83 return Engine.insert_data_from_file(self, filename)
84
85 def table_exists(self, dbname, tablename):
86 """Checks to see if the given table exists"""
87 if not hasattr(self, 'existing_table_names'):
88 self.cursor.execute(
89 "SELECT table_schema, table_name "
90 "FROM information_schema.tables WHERE table_schema NOT IN "
91 "('mysql', 'information_schema', 'performance_schema');")
92 self.existing_table_names = set()
93 for schema, table in self.cursor:
94 self.existing_table_names.add((schema.lower(), table.lower()))
95 return (dbname.lower(), tablename.lower()) in self.existing_table_names
96
97 def set_engine_encoding(self):
98 """Set MySQL database encoding to match data encoding
99
100 Please update the encoding lookup table if the required encoding is not present.
101 """
102 encoding = ENCODING.lower()
103 if self.script.encoding:
104 encoding = self.script.encoding.lower()
105 encoding_lookup = {'iso-8859-1': 'latin1', 'latin-1': 'latin1', 'utf-8': 'utf8'}
106 db_encoding = encoding_lookup.get(encoding)
107 self.execute("SET NAMES '{0}';".format(db_encoding))
108
109 def get_connection(self):
110 """Gets the db connection."""
111 args = {'host': self.opts['host'],
112 'port': int(self.opts['port']),
113 'user': self.opts['user'],
114 'passwd': self.opts['password']}
115 import pymysql as dbapi
116 import pymysql.constants.CLIENT as client
117 args['client_flag'] = client.LOCAL_FILES
118 self.get_input()
119 return dbapi.connect(**args)
120
[end of retriever/engines/mysql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/retriever/engines/mysql.py b/retriever/engines/mysql.py
--- a/retriever/engines/mysql.py
+++ b/retriever/engines/mysql.py
@@ -116,4 +116,4 @@
import pymysql.constants.CLIENT as client
args['client_flag'] = client.LOCAL_FILES
self.get_input()
- return dbapi.connect(**args)
+ return dbapi.connect(read_default_file='~/.my.cnf', **args)
| {"golden_diff": "diff --git a/retriever/engines/mysql.py b/retriever/engines/mysql.py\n--- a/retriever/engines/mysql.py\n+++ b/retriever/engines/mysql.py\n@@ -116,4 +116,4 @@\n import pymysql.constants.CLIENT as client\n args['client_flag'] = client.LOCAL_FILES\n self.get_input()\n- return dbapi.connect(**args)\n+ return dbapi.connect(read_default_file='~/.my.cnf', **args)\n", "issue": "Check MySQL and Postgres credential files\nIn addition to allowing users to directly provide their MySQL and PostgreSQL credentials, it should also be possible for them to store these credentials in the usual places.\n\nWe should check information given by the user to the retriever first, and then fall back on the configuration files for usernames and passwords if they are not provided.\n\nFor PostgreSQL this is `~/.pgpass` with the format:\n\n```\nhostname:port:database:username:password \n```\n\nSee: https://wiki.postgresql.org/wiki/Pgpass. `*`s can be used in place of any of the `:` separated values.\n\nFor MySQL this is `~/.my.cnf` with the format:\n\n```\n[client]\nuser = root\npassword = yourpassword\n```\n\nSee: https://dev.mysql.com/doc/refman/5.5/en/option-files.html. `.my.cnf` can contain a lot of additional configuration information so we'll need to look explicitly for `user =` and `password =`.\n\n", "before_files": [{"content": "from __future__ import print_function\nfrom builtins import str\nimport os\nfrom retriever.lib.models import Engine, no_cleanup\nfrom retriever import ENCODING\n\n\nclass engine(Engine):\n \"\"\"Engine instance for MySQL.\"\"\"\n name = \"MySQL\"\n abbreviation = \"mysql\"\n datatypes = {\n \"auto\": \"INT(5) NOT NULL AUTO_INCREMENT\",\n \"int\": \"INT\",\n \"bigint\": \"BIGINT\",\n \"double\": \"DOUBLE\",\n \"decimal\": \"DECIMAL\",\n \"char\": (\"TEXT\", \"VARCHAR\"),\n \"bool\": \"BOOL\",\n }\n max_int = 4294967295\n placeholder = \"%s\"\n required_opts = [(\"user\",\n \"Enter your MySQL username\",\n \"root\"),\n (\"password\",\n \"Enter your password\",\n \"\"),\n (\"host\",\n \"Enter your MySQL host\",\n \"localhost\"),\n (\"port\",\n \"Enter your MySQL port\",\n 3306),\n (\"database_name\",\n \"Format of database name\",\n \"{db}\"),\n (\"table_name\",\n \"Format of table name\",\n \"{db}.{table}\"),\n ]\n\n def create_db_statement(self):\n \"\"\"Returns a SQL statement to create a database.\"\"\"\n createstatement = \"CREATE DATABASE IF NOT EXISTS \" + self.database_name()\n return createstatement\n\n def insert_data_from_file(self, filename):\n \"\"\"Calls MySQL \"LOAD DATA LOCAL INFILE\" statement to perform a bulk\n insert.\"\"\"\n\n mysql_set_autocommit_off = \"\"\"SET autocommit=0; SET UNIQUE_CHECKS=0; SET FOREIGN_KEY_CHECKS=0; SET sql_log_bin=0;\"\"\"\n mysql_set_autocommit_on = \"\"\"SET GLOBAL innodb_flush_log_at_trx_commit=1; COMMIT; SET autocommit=1; SET unique_checks=1; SET foreign_key_checks=1;\"\"\"\n \n self.get_cursor()\n ct = len([True for c in self.table.columns if c[1][0][:3] == \"ct-\"]) != 0\n if (self.table.cleanup.function == no_cleanup and\n not self.table.fixed_width and\n not ct and\n (not hasattr(self.table, \"do_not_bulk_insert\") or not self.table.do_not_bulk_insert)):\n\n print (\"Inserting data from \" + os.path.basename(filename) + \"...\")\n\n columns = self.table.get_insert_columns()\n statement = \"\"\"\nLOAD DATA LOCAL INFILE '\"\"\" + filename.replace(\"\\\\\", \"\\\\\\\\\") + \"\"\"'\nINTO TABLE \"\"\" + self.table_name() + \"\"\"\nFIELDS TERMINATED BY '\"\"\" + self.table.delimiter + \"\"\"'\nOPTIONALLY ENCLOSED BY 
'\"'\nLINES TERMINATED BY '\\\\n'\nIGNORE \"\"\" + str(self.table.header_rows) + \"\"\" LINES\n(\"\"\" + columns + \")\"\n try:\n self.cursor.execute(mysql_set_autocommit_off)\n self.cursor.execute(statement)\n\n self.cursor.execute(mysql_set_autocommit_on)\n except Exception as e:\n self.disconnect() # If the execute fails the database connection can get hung up\n self.cursor.execute(mysql_set_autocommit_on)\n return Engine.insert_data_from_file(self, filename)\n else:\n return Engine.insert_data_from_file(self, filename)\n\n def table_exists(self, dbname, tablename):\n \"\"\"Checks to see if the given table exists\"\"\"\n if not hasattr(self, 'existing_table_names'):\n self.cursor.execute(\n \"SELECT table_schema, table_name \"\n \"FROM information_schema.tables WHERE table_schema NOT IN \"\n \"('mysql', 'information_schema', 'performance_schema');\")\n self.existing_table_names = set()\n for schema, table in self.cursor:\n self.existing_table_names.add((schema.lower(), table.lower()))\n return (dbname.lower(), tablename.lower()) in self.existing_table_names\n\n def set_engine_encoding(self):\n \"\"\"Set MySQL database encoding to match data encoding\n\n Please update the encoding lookup table if the required encoding is not present.\n \"\"\"\n encoding = ENCODING.lower()\n if self.script.encoding:\n encoding = self.script.encoding.lower()\n encoding_lookup = {'iso-8859-1': 'latin1', 'latin-1': 'latin1', 'utf-8': 'utf8'}\n db_encoding = encoding_lookup.get(encoding)\n self.execute(\"SET NAMES '{0}';\".format(db_encoding))\n\n def get_connection(self):\n \"\"\"Gets the db connection.\"\"\"\n args = {'host': self.opts['host'],\n 'port': int(self.opts['port']),\n 'user': self.opts['user'],\n 'passwd': self.opts['password']}\n import pymysql as dbapi\n import pymysql.constants.CLIENT as client\n args['client_flag'] = client.LOCAL_FILES\n self.get_input()\n return dbapi.connect(**args)\n", "path": "retriever/engines/mysql.py"}]} | 2,030 | 112 |
gh_patches_debug_11468 | rasdani/github-patches | git_diff | getredash__redash-4582 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TreasureData getSchema fails when setting non-default region
### Issue Summary
Treasure Data offers multiple regions, but getSchema always fails when a non-default region is set.
### Steps to Reproduce
1. Set up a data source using a non-default region (e.g. the Tokyo region)
2. Push a schema refresh; a "Schema refresh failed" error occurs
### Technical details:
* Redash Version: confirmed v5.0.2
* Browser/OS: any Browsers/OSs
* How did you install Redash: from Amazon AMI
### Details
When accessing Treasure Data to fetch the schema, the default region is always used, because the endpoint parameter is never passed through:
https://github.com/getredash/redash/blob/6c364369bb0eb98e2191c2e502fed72abe5a74c7/redash/query_runner/treasuredata.py#L82
</issue>
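As a sketch of what a fix has to do, namely hand the configured endpoint to the schema client as well (the Tokyo endpoint URL below is only illustrative):

```python
import tdclient

apikey = "YOUR_TD_APIKEY"                    # placeholder
endpoint = "https://api.treasuredata.co.jp"  # illustrative non-default region

# Without endpoint=..., tdclient falls back to its default endpoint and the
# schema lookup fails for accounts that live in another region.
with tdclient.Client(apikey, endpoint=endpoint) as client:
    for table in client.tables("sample_datasets"):
        print(table.name, [column[0] for column in table.schema])
```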
<code>
[start of redash/query_runner/treasuredata.py]
1 import logging
2
3 from redash.query_runner import *
4 from redash.utils import json_dumps
5
6 logger = logging.getLogger(__name__)
7
8 try:
9 import tdclient
10 from tdclient import errors
11
12 enabled = True
13
14 except ImportError:
15 enabled = False
16
17 TD_TYPES_MAPPING = {
18 "bigint": TYPE_INTEGER,
19 "tinyint": TYPE_INTEGER,
20 "smallint": TYPE_INTEGER,
21 "int": TYPE_INTEGER,
22 "integer": TYPE_INTEGER,
23 "long": TYPE_INTEGER,
24 "double": TYPE_FLOAT,
25 "decimal": TYPE_FLOAT,
26 "float": TYPE_FLOAT,
27 "real": TYPE_FLOAT,
28 "boolean": TYPE_BOOLEAN,
29 "timestamp": TYPE_DATETIME,
30 "date": TYPE_DATETIME,
31 "char": TYPE_STRING,
32 "string": TYPE_STRING,
33 "varchar": TYPE_STRING,
34 }
35
36
37 class TreasureData(BaseQueryRunner):
38 should_annotate_query = False
39 noop_query = "SELECT 1"
40
41 @classmethod
42 def configuration_schema(cls):
43 return {
44 "type": "object",
45 "properties": {
46 "endpoint": {"type": "string"},
47 "apikey": {"type": "string"},
48 "type": {"type": "string"},
49 "db": {"type": "string", "title": "Database Name"},
50 "get_schema": {
51 "type": "boolean",
52 "title": "Auto Schema Retrieval",
53 "default": False,
54 },
55 },
56 "required": ["apikey", "db"],
57 }
58
59 @classmethod
60 def enabled(cls):
61 return enabled
62
63 @classmethod
64 def type(cls):
65 return "treasuredata"
66
67 def get_schema(self, get_stats=False):
68 schema = {}
69 if self.configuration.get("get_schema", False):
70 try:
71 with tdclient.Client(self.configuration.get("apikey")) as client:
72 for table in client.tables(self.configuration.get("db")):
73 table_name = "{}.{}".format(
74 self.configuration.get("db"), table.name
75 )
76 for table_schema in table.schema:
77 schema[table_name] = {
78 "name": table_name,
79 "columns": [column[0] for column in table.schema],
80 }
81 except Exception as ex:
82 raise Exception("Failed getting schema")
83 return list(schema.values())
84
85 def run_query(self, query, user):
86 connection = tdclient.connect(
87 endpoint=self.configuration.get("endpoint", "https://api.treasuredata.com"),
88 apikey=self.configuration.get("apikey"),
89 type=self.configuration.get("type", "hive").lower(),
90 db=self.configuration.get("db"),
91 )
92
93 cursor = connection.cursor()
94 try:
95 cursor.execute(query)
96 columns_tuples = [
97 (i[0], TD_TYPES_MAPPING.get(i[1], None))
98 for i in cursor.show_job()["hive_result_schema"]
99 ]
100 columns = self.fetch_columns(columns_tuples)
101
102 if cursor.rowcount == 0:
103 rows = []
104 else:
105 rows = [
106 dict(zip(([column["name"] for column in columns]), r))
107 for r in cursor.fetchall()
108 ]
109 data = {"columns": columns, "rows": rows}
110 json_data = json_dumps(data)
111 error = None
112 except errors.InternalError as e:
113 json_data = None
114 error = "%s: %s" % (
115 str(e),
116 cursor.show_job()
117 .get("debug", {})
118 .get("stderr", "No stderr message in the response"),
119 )
120 return json_data, error
121
122
123 register(TreasureData)
124
[end of redash/query_runner/treasuredata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/query_runner/treasuredata.py b/redash/query_runner/treasuredata.py
--- a/redash/query_runner/treasuredata.py
+++ b/redash/query_runner/treasuredata.py
@@ -68,7 +68,7 @@
schema = {}
if self.configuration.get("get_schema", False):
try:
- with tdclient.Client(self.configuration.get("apikey")) as client:
+ with tdclient.Client(self.configuration.get("apikey"),endpoint=self.configuration.get("endpoint")) as client:
for table in client.tables(self.configuration.get("db")):
table_name = "{}.{}".format(
self.configuration.get("db"), table.name
| {"golden_diff": "diff --git a/redash/query_runner/treasuredata.py b/redash/query_runner/treasuredata.py\n--- a/redash/query_runner/treasuredata.py\n+++ b/redash/query_runner/treasuredata.py\n@@ -68,7 +68,7 @@\n schema = {}\n if self.configuration.get(\"get_schema\", False):\n try:\n- with tdclient.Client(self.configuration.get(\"apikey\")) as client:\n+ with tdclient.Client(self.configuration.get(\"apikey\"),endpoint=self.configuration.get(\"endpoint\")) as client:\n for table in client.tables(self.configuration.get(\"db\")):\n table_name = \"{}.{}\".format(\n self.configuration.get(\"db\"), table.name\n", "issue": "TreasureData getSchema fails when setting non-default region\n<!--\r\n#####################################################################\r\n#\r\n# Need support? USE THE FORUM! https://discuss.redash.io/c/support.\r\n#\r\n# Don't have steps to reproduce and actually not sure it's a bug?\r\n# Use the forum! https://discuss.redash.io/c/support.\r\n#\r\n#####################################################################\r\n\r\n**Got an idea for a new feature?** Check if it isn't on the roadmap already: https://bit.ly/redash-roadmap and start a new discussion in the features category: https://discuss.redash.io/c/feature-requests \ud83c\udf1f.\r\n\r\nFound a bug? Please fill out the sections below... thank you \ud83d\udc4d\r\n\r\nFound a security vulnerability? Please email [email protected] to report any security vulnerabilities. We will acknowledge receipt of your vulnerability and strive to send you regular updates about our progress. If you're curious about the status of your disclosure please feel free to email us again. If you want to encrypt your disclosure email, you can use this PGP key.\r\n\r\n-->\r\n\r\n### Issue Summary\r\n\r\nThere are some regions in Treasure Data, but getSchema alsways fails when setting non-default region.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Set datasource using non-default region (e.g. Tokyo region)\r\n2. 
Push schema refresh then \"Schema refresh failed\" error occurs\r\n\r\n### Technical details:\r\n\r\n* Redash Version: confirmed v5.0.2\r\n* Browser/OS: any Browsers/OSs\r\n* How did you install Redash: from Amazon AMI\r\n\r\n### Details\r\n\r\nWhen accessing Treasure Data to get schema, always default region will be set because the parameter is not prepared.\r\nhttps://github.com/getredash/redash/blob/6c364369bb0eb98e2191c2e502fed72abe5a74c7/redash/query_runner/treasuredata.py#L82\n", "before_files": [{"content": "import logging\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import tdclient\n from tdclient import errors\n\n enabled = True\n\nexcept ImportError:\n enabled = False\n\nTD_TYPES_MAPPING = {\n \"bigint\": TYPE_INTEGER,\n \"tinyint\": TYPE_INTEGER,\n \"smallint\": TYPE_INTEGER,\n \"int\": TYPE_INTEGER,\n \"integer\": TYPE_INTEGER,\n \"long\": TYPE_INTEGER,\n \"double\": TYPE_FLOAT,\n \"decimal\": TYPE_FLOAT,\n \"float\": TYPE_FLOAT,\n \"real\": TYPE_FLOAT,\n \"boolean\": TYPE_BOOLEAN,\n \"timestamp\": TYPE_DATETIME,\n \"date\": TYPE_DATETIME,\n \"char\": TYPE_STRING,\n \"string\": TYPE_STRING,\n \"varchar\": TYPE_STRING,\n}\n\n\nclass TreasureData(BaseQueryRunner):\n should_annotate_query = False\n noop_query = \"SELECT 1\"\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\n \"endpoint\": {\"type\": \"string\"},\n \"apikey\": {\"type\": \"string\"},\n \"type\": {\"type\": \"string\"},\n \"db\": {\"type\": \"string\", \"title\": \"Database Name\"},\n \"get_schema\": {\n \"type\": \"boolean\",\n \"title\": \"Auto Schema Retrieval\",\n \"default\": False,\n },\n },\n \"required\": [\"apikey\", \"db\"],\n }\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def type(cls):\n return \"treasuredata\"\n\n def get_schema(self, get_stats=False):\n schema = {}\n if self.configuration.get(\"get_schema\", False):\n try:\n with tdclient.Client(self.configuration.get(\"apikey\")) as client:\n for table in client.tables(self.configuration.get(\"db\")):\n table_name = \"{}.{}\".format(\n self.configuration.get(\"db\"), table.name\n )\n for table_schema in table.schema:\n schema[table_name] = {\n \"name\": table_name,\n \"columns\": [column[0] for column in table.schema],\n }\n except Exception as ex:\n raise Exception(\"Failed getting schema\")\n return list(schema.values())\n\n def run_query(self, query, user):\n connection = tdclient.connect(\n endpoint=self.configuration.get(\"endpoint\", \"https://api.treasuredata.com\"),\n apikey=self.configuration.get(\"apikey\"),\n type=self.configuration.get(\"type\", \"hive\").lower(),\n db=self.configuration.get(\"db\"),\n )\n\n cursor = connection.cursor()\n try:\n cursor.execute(query)\n columns_tuples = [\n (i[0], TD_TYPES_MAPPING.get(i[1], None))\n for i in cursor.show_job()[\"hive_result_schema\"]\n ]\n columns = self.fetch_columns(columns_tuples)\n\n if cursor.rowcount == 0:\n rows = []\n else:\n rows = [\n dict(zip(([column[\"name\"] for column in columns]), r))\n for r in cursor.fetchall()\n ]\n data = {\"columns\": columns, \"rows\": rows}\n json_data = json_dumps(data)\n error = None\n except errors.InternalError as e:\n json_data = None\n error = \"%s: %s\" % (\n str(e),\n cursor.show_job()\n .get(\"debug\", {})\n .get(\"stderr\", \"No stderr message in the response\"),\n )\n return json_data, error\n\n\nregister(TreasureData)\n", "path": "redash/query_runner/treasuredata.py"}]} | 1,972 | 
148 |
gh_patches_debug_34692 | rasdani/github-patches | git_diff | bridgecrewio__checkov-3747 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Checkov failing to perform a check against a Terraform Plan when skipping a specific AWS check
**Describe the issue**
Checkov crashes while performing a check against a Terraform plan when a specific AWS check is skipped with an inline comment.
**Examples**
```
resource "aws_iam_role" "backend" {
#checkov:skip=CKV_AWS_274:TODO Generate a policy from CloudTrail later
name = "${var.repo}-foo"
assume_role_policy = data.aws_iam_policy_document.backend-assume.json
managed_policy_arns = ["arn:aws:iam::aws:policy/AdministratorAccess"]
inline_policy {
name = "foo"
policy = data.aws_iam_policy_document.backend-permissions.json
}
tags = {
component = var.foo
}
}
```
**Exception Trace**
```
Traceback (most recent call last):
File "/usr/local/bin/checkov", line 9, in <module>
sys.exit(run())
File "/usr/local/lib/python3.9/site-packages/checkov/main.py", line 355, in run
scan_reports = runner_registry.run(external_checks_dir=external_checks_dir, files=config.file,
File "/usr/local/lib/python3.9/site-packages/checkov/common/runners/runner_registry.py", line 79, in run
self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/plan_runner.py", line 81, in run
self.check_tf_definition(report, root_folder, runner_filter)
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/plan_runner.py", line 99, in check_tf_definition
self.run_block(definition[block_type], None, full_file_path, root_folder, report, scanned_file,
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/plan_runner.py", line 119, in run_block
results = registry.scan(scanned_file, entity, [], runner_filter, report_type=CheckType.TERRAFORM_PLAN)
File "/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py", line 127, in scan
result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)
File "/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py", line 141, in run_check
result = check.run(
File "/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check.py", line 70, in run
check_result["result"] = self.scan_entity_conf(entity_configuration, entity_type)
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 43, in scan_entity_conf
return self.scan_resource_conf(conf)
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py", line 42, in scan_resource_conf
if conf.get("policy_arn")[0] == ADMIN_POLICY_ARN:
TypeError: 'NoneType' object is not subscriptable
```
**Desktop (please complete the following information):**
- MacOS 11.7
- Checkov Version 2.2.0
**Additional context**
This fails as of whatever version added CKV_AWS_274. The last version where a build didn't crash was 2.1.294, where it worked.
Also, if I skip the check with a command-line switch this crash does not happen (which is going to be my temporary workaround).
</issue>
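Reduced to its essence, the crash is an unguarded `dict.get(...)[0]`; a minimal reproduction together with the defensive pattern looks like this (the configuration dict is illustrative):

```python
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

# A configuration that reaches the attachment branch without a policy_arn key:
conf = {"managed_policy_arns": [[ADMIN_POLICY_ARN]]}

# conf.get("policy_arn") returns None, so indexing it raises:
# conf.get("policy_arn")[0]  # TypeError: 'NoneType' object is not subscriptable

# Guarded access avoids the crash:
policy_arn = conf.get("policy_arn")
if policy_arn and policy_arn[0] == ADMIN_POLICY_ARN:
    print("FAILED")
else:
    print("PASSED")
```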
<code>
[start of checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py]
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3
4
5 ADMIN_POLICY_NAME = "AdministratorAccess"
6 ADMIN_POLICY_ARN = f"arn:aws:iam::aws:policy/{ADMIN_POLICY_NAME}"
7
8
9 class IAMManagedAdminPolicy(BaseResourceCheck):
10 def __init__(self):
11 # This is the full description of your check
12 description = "Disallow IAM roles, users, and groups from using the AWS AdministratorAccess policy"
13
14 # This is the Unique ID for your check
15 id = "CKV_AWS_274"
16
17 # These are the terraform objects supported by this check (ex: aws_iam_policy_document)
18 supported_resources = (
19 "aws_iam_role",
20 "aws_iam_policy_attachment",
21 "aws_iam_role_policy_attachment",
22 "aws_iam_user_policy_attachment",
23 "aws_iam_group_policy_attachment",
24 )
25
26 # Valid CheckCategories are defined in checkov/common/models/enums.py
27 categories = (CheckCategories.IAM,)
28 super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)
29
30 def scan_resource_conf(self, conf):
31 if self.entity_type == "aws_iam_role":
32 if "managed_policy_arns" in conf.keys():
33 if ADMIN_POLICY_ARN in conf.get("managed_policy_arns")[0]:
34 return CheckResult.FAILED
35
36 elif self.entity_type in (
37 "aws_iam_policy_attachment",
38 "aws_iam_role_policy_attachment",
39 "aws_iam_user_policy_attachment",
40 "aws_iam_group_policy_attachment",
41 ):
42 if conf.get("policy_arn")[0] == ADMIN_POLICY_ARN:
43 return CheckResult.FAILED
44
45 return CheckResult.PASSED
46
47
48 check = IAMManagedAdminPolicy()
[end of checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py b/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py
--- a/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py
+++ b/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py
@@ -1,3 +1,7 @@
+from __future__ import annotations
+
+from typing import Any
+
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
@@ -7,7 +11,7 @@
class IAMManagedAdminPolicy(BaseResourceCheck):
- def __init__(self):
+ def __init__(self) -> None:
# This is the full description of your check
description = "Disallow IAM roles, users, and groups from using the AWS AdministratorAccess policy"
@@ -27,10 +31,10 @@
categories = (CheckCategories.IAM,)
super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)
- def scan_resource_conf(self, conf):
+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
if self.entity_type == "aws_iam_role":
if "managed_policy_arns" in conf.keys():
- if ADMIN_POLICY_ARN in conf.get("managed_policy_arns")[0]:
+ if ADMIN_POLICY_ARN in conf["managed_policy_arns"][0]:
return CheckResult.FAILED
elif self.entity_type in (
@@ -39,10 +43,11 @@
"aws_iam_user_policy_attachment",
"aws_iam_group_policy_attachment",
):
- if conf.get("policy_arn")[0] == ADMIN_POLICY_ARN:
+ policy_arn = conf.get("policy_arn")
+ if policy_arn and policy_arn[0] == ADMIN_POLICY_ARN:
return CheckResult.FAILED
return CheckResult.PASSED
-check = IAMManagedAdminPolicy()
\ No newline at end of file
+check = IAMManagedAdminPolicy()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py b/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py\n--- a/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py\n+++ b/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py\n@@ -1,3 +1,7 @@\n+from __future__ import annotations\n+\n+from typing import Any\n+\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n@@ -7,7 +11,7 @@\n \n \n class IAMManagedAdminPolicy(BaseResourceCheck):\n- def __init__(self):\n+ def __init__(self) -> None:\n # This is the full description of your check\n description = \"Disallow IAM roles, users, and groups from using the AWS AdministratorAccess policy\"\n \n@@ -27,10 +31,10 @@\n categories = (CheckCategories.IAM,)\n super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)\n \n- def scan_resource_conf(self, conf):\n+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n if self.entity_type == \"aws_iam_role\":\n if \"managed_policy_arns\" in conf.keys():\n- if ADMIN_POLICY_ARN in conf.get(\"managed_policy_arns\")[0]:\n+ if ADMIN_POLICY_ARN in conf[\"managed_policy_arns\"][0]:\n return CheckResult.FAILED\n \n elif self.entity_type in (\n@@ -39,10 +43,11 @@\n \"aws_iam_user_policy_attachment\",\n \"aws_iam_group_policy_attachment\",\n ):\n- if conf.get(\"policy_arn\")[0] == ADMIN_POLICY_ARN:\n+ policy_arn = conf.get(\"policy_arn\")\n+ if policy_arn and policy_arn[0] == ADMIN_POLICY_ARN:\n return CheckResult.FAILED\n \n return CheckResult.PASSED\n \n \n-check = IAMManagedAdminPolicy()\n\\ No newline at end of file\n+check = IAMManagedAdminPolicy()\n", "issue": "Checkov failing to perform a check against a Terraform Plan when skipping a specific AWS check\n**Describe the issue**\r\nCheckov failing to perform a check against a Terraform Plan when skipping a specific AWS check\r\n\r\n**Examples**\r\n```\r\nresource \"aws_iam_role\" \"backend\" {\r\n #checkov:skip=CKV_AWS_274:TODO Generate a policy from CloudTrail later\r\n name = \"${var.repo}-foo\"\r\n assume_role_policy = data.aws_iam_policy_document.backend-assume.json\r\n managed_policy_arns = [\"arn:aws:iam::aws:policy/AdministratorAccess\"]\r\n inline_policy {\r\n name = \"foo\"\r\n policy = data.aws_iam_policy_document.backend-permissions.json\r\n }\r\n tags = {\r\n component = var.foo\r\n }\r\n}\r\n```\r\n\r\n**Exception Trace**\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/checkov\", line 9, in <module>\r\n sys.exit(run())\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/main.py\", line 355, in run\r\n scan_reports = runner_registry.run(external_checks_dir=external_checks_dir, files=config.file,\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/runners/runner_registry.py\", line 79, in run\r\n self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/plan_runner.py\", line 81, in run\r\n self.check_tf_definition(report, root_folder, runner_filter)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/plan_runner.py\", line 99, in check_tf_definition\r\n self.run_block(definition[block_type], None, full_file_path, root_folder, report, scanned_file,\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/plan_runner.py\", line 119, in 
run_block\r\n results = registry.scan(scanned_file, entity, [], runner_filter, report_type=CheckType.TERRAFORM_PLAN)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py\", line 127, in scan\r\n result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py\", line 141, in run_check\r\n result = check.run(\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check.py\", line 70, in run\r\n check_result[\"result\"] = self.scan_entity_conf(entity_configuration, entity_type)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 43, in scan_entity_conf\r\n return self.scan_resource_conf(conf)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py\", line 42, in scan_resource_conf\r\n if conf.get(\"policy_arn\")[0] == ADMIN_POLICY_ARN:\r\nTypeError: 'NoneType' object is not subscriptable\r\n```\r\n\r\n**Desktop (please complete the following information):**\r\n - MacOS 11.7\r\n - Checkov Version 2.2.0\r\n\r\n**Additional context**\r\nThis fails as of whateverv version CKV_AWS_274 was added. Last time a build didn't crash I was using 2.1.294 and it worked.\r\nAlso if I skip it with a command-line switch then this crash does not happen (which is going to be my temp workaround)\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nADMIN_POLICY_NAME = \"AdministratorAccess\"\nADMIN_POLICY_ARN = f\"arn:aws:iam::aws:policy/{ADMIN_POLICY_NAME}\"\n\n\nclass IAMManagedAdminPolicy(BaseResourceCheck):\n def __init__(self):\n # This is the full description of your check\n description = \"Disallow IAM roles, users, and groups from using the AWS AdministratorAccess policy\"\n\n # This is the Unique ID for your check\n id = \"CKV_AWS_274\"\n\n # These are the terraform objects supported by this check (ex: aws_iam_policy_document)\n supported_resources = (\n \"aws_iam_role\",\n \"aws_iam_policy_attachment\",\n \"aws_iam_role_policy_attachment\",\n \"aws_iam_user_policy_attachment\",\n \"aws_iam_group_policy_attachment\",\n )\n\n # Valid CheckCategories are defined in checkov/common/models/enums.py\n categories = (CheckCategories.IAM,)\n super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if self.entity_type == \"aws_iam_role\":\n if \"managed_policy_arns\" in conf.keys():\n if ADMIN_POLICY_ARN in conf.get(\"managed_policy_arns\")[0]:\n return CheckResult.FAILED\n\n elif self.entity_type in (\n \"aws_iam_policy_attachment\",\n \"aws_iam_role_policy_attachment\",\n \"aws_iam_user_policy_attachment\",\n \"aws_iam_group_policy_attachment\",\n ):\n if conf.get(\"policy_arn\")[0] == ADMIN_POLICY_ARN:\n return CheckResult.FAILED\n\n return CheckResult.PASSED\n\n\ncheck = IAMManagedAdminPolicy()", "path": "checkov/terraform/checks/resource/aws/IAMManagedAdminPolicy.py"}]} | 1,852 | 473 |
gh_patches_debug_40754 | rasdani/github-patches | git_diff | qtile__qtile-4716 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clock and tzupdate
### The issue:
qtile version:
0.21.0
These days I travel with my system and need to update my timezone to keep working with my calendar. I'm using `tzupdate`, which updates the timezone with a single command.

I'm using the Clock widget in the qtile bar like so:
``` python
widget.Clock(format="%A %d %b %Y %H:%M:%S %z"),
```
Updating the timezone with `tzupdate`, however, does not change the timezone in the Clock widget; it takes a restart of qtile to pick up the change. I would expect qtile to poll the current system timezone at each tick.
### Required:
- [X] I have searched past issues to see if this bug has already been reported.
</issue>
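A minimal sketch of the underlying behaviour on a POSIX system: the C library caches timezone rules, so a long-running process has to call `time.tzset()` before it notices a zone changed by a tool such as `tzupdate`:

```python
import time
from datetime import datetime

print("before:", datetime.now().astimezone().tzinfo)

# ... the system timezone is changed underneath the process (e.g. tzupdate) ...

time.tzset()  # POSIX-only: re-read the timezone conversion rules
print("after:", datetime.now().astimezone().tzinfo)
```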
<code>
[start of libqtile/widget/clock.py]
1 # Copyright (c) 2010 Aldo Cortesi
2 # Copyright (c) 2012 Andrew Grigorev
3 # Copyright (c) 2014 Sean Vig
4 # Copyright (c) 2014 Tycho Andersen
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
22 # SOFTWARE.
23
24 import sys
25 import time
26 from datetime import datetime, timedelta, timezone
27
28 from libqtile.log_utils import logger
29 from libqtile.widget import base
30
31 try:
32 import pytz
33 except ImportError:
34 pass
35
36 try:
37 import dateutil.tz
38 except ImportError:
39 pass
40
41
42 class Clock(base.InLoopPollText):
43 """A simple but flexible text-based clock"""
44
45 defaults = [
46 ("format", "%H:%M", "A Python datetime format string"),
47 ("update_interval", 1.0, "Update interval for the clock"),
48 (
49 "timezone",
50 None,
51 "The timezone to use for this clock, either as"
52 ' string if pytz or dateutil is installed (e.g. "US/Central" or'
53 " anything in /usr/share/zoneinfo), or as tzinfo (e.g."
54 " datetime.timezone.utc). None means the system local timezone and is"
55 " the default.",
56 ),
57 ]
58 DELTA = timedelta(seconds=0.5)
59
60 def __init__(self, **config):
61 base.InLoopPollText.__init__(self, **config)
62 self.add_defaults(Clock.defaults)
63 if isinstance(self.timezone, str):
64 if "pytz" in sys.modules:
65 self.timezone = pytz.timezone(self.timezone)
66 elif "dateutil" in sys.modules:
67 self.timezone = dateutil.tz.gettz(self.timezone)
68 else:
69 logger.warning(
70 "Clock widget can not infer its timezone from a"
71 " string without pytz or dateutil. Install one"
72 " of these libraries, or give it a"
73 " datetime.tzinfo instance."
74 )
75 if self.timezone is None:
76 logger.debug("Defaulting to the system local timezone.")
77
78 def tick(self):
79 self.update(self.poll())
80 return self.update_interval - time.time() % self.update_interval
81
82 # adding .5 to get a proper seconds value because glib could
83 # theoreticaly call our method too early and we could get something
84 # like (x-1).999 instead of x.000
85 def poll(self):
86 if self.timezone:
87 now = datetime.now(timezone.utc).astimezone(self.timezone)
88 else:
89 now = datetime.now(timezone.utc).astimezone()
90 return (now + self.DELTA).strftime(self.format)
91
[end of libqtile/widget/clock.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libqtile/widget/clock.py b/libqtile/widget/clock.py
--- a/libqtile/widget/clock.py
+++ b/libqtile/widget/clock.py
@@ -20,11 +20,13 @@
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
+from __future__ import annotations
import sys
import time
-from datetime import datetime, timedelta, timezone
+from datetime import datetime, timedelta, timezone, tzinfo
+from libqtile.command.base import expose_command
from libqtile.log_utils import logger
from libqtile.widget import base
@@ -60,11 +62,24 @@
def __init__(self, **config):
base.InLoopPollText.__init__(self, **config)
self.add_defaults(Clock.defaults)
- if isinstance(self.timezone, str):
+ self.timezone = self._lift_timezone(self.timezone)
+
+ if self.timezone is None:
+ logger.debug("Defaulting to the system local timezone.")
+
+ def _lift_timezone(self, timezone):
+ if isinstance(timezone, tzinfo):
+ return timezone
+ elif isinstance(timezone, str):
+ # Empty string can be used to force use of system time
+ if not timezone:
+ return None
+
+ # A string timezone needs to be converted to a tzinfo object
if "pytz" in sys.modules:
- self.timezone = pytz.timezone(self.timezone)
+ return pytz.timezone(timezone)
elif "dateutil" in sys.modules:
- self.timezone = dateutil.tz.gettz(self.timezone)
+ return dateutil.tz.gettz(timezone)
else:
logger.warning(
"Clock widget can not infer its timezone from a"
@@ -72,8 +87,12 @@
" of these libraries, or give it a"
" datetime.tzinfo instance."
)
- if self.timezone is None:
- logger.debug("Defaulting to the system local timezone.")
+ elif timezone is None:
+ pass
+ else:
+ logger.warning("Invalid timezone value %s.", timezone)
+
+ return None
def tick(self):
self.update(self.poll())
@@ -88,3 +107,27 @@
else:
now = datetime.now(timezone.utc).astimezone()
return (now + self.DELTA).strftime(self.format)
+
+ @expose_command
+ def update_timezone(self, timezone: str | tzinfo | None = None):
+ """
+ Force the clock to update timezone information.
+
+ If the method is called with no arguments then the widget will reload
+ the timzeone set on the computer (e.g. via ``timedatectl set-timezone ..``).
+ This will have no effect if you have previously set a ``timezone`` value.
+
+ Alternatively, you can pass a timezone string (e.g. ``"Europe/Lisbon"``) to change
+ the specified timezone. Setting this to an empty string will cause the clock
+ to rely on the system timezone.
+ """
+ self.timezone = self._lift_timezone(timezone)
+
+ # Force python to update timezone info (e.g. if system time has changed)
+ time.tzset()
+ self.update(self.poll())
+
+ @expose_command
+ def use_system_timezone(self):
+ """Force clock to use system timezone."""
+ self.update_timezone("")
| {"golden_diff": "diff --git a/libqtile/widget/clock.py b/libqtile/widget/clock.py\n--- a/libqtile/widget/clock.py\n+++ b/libqtile/widget/clock.py\n@@ -20,11 +20,13 @@\n # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n # SOFTWARE.\n+from __future__ import annotations\n \n import sys\n import time\n-from datetime import datetime, timedelta, timezone\n+from datetime import datetime, timedelta, timezone, tzinfo\n \n+from libqtile.command.base import expose_command\n from libqtile.log_utils import logger\n from libqtile.widget import base\n \n@@ -60,11 +62,24 @@\n def __init__(self, **config):\n base.InLoopPollText.__init__(self, **config)\n self.add_defaults(Clock.defaults)\n- if isinstance(self.timezone, str):\n+ self.timezone = self._lift_timezone(self.timezone)\n+\n+ if self.timezone is None:\n+ logger.debug(\"Defaulting to the system local timezone.\")\n+\n+ def _lift_timezone(self, timezone):\n+ if isinstance(timezone, tzinfo):\n+ return timezone\n+ elif isinstance(timezone, str):\n+ # Empty string can be used to force use of system time\n+ if not timezone:\n+ return None\n+\n+ # A string timezone needs to be converted to a tzinfo object\n if \"pytz\" in sys.modules:\n- self.timezone = pytz.timezone(self.timezone)\n+ return pytz.timezone(timezone)\n elif \"dateutil\" in sys.modules:\n- self.timezone = dateutil.tz.gettz(self.timezone)\n+ return dateutil.tz.gettz(timezone)\n else:\n logger.warning(\n \"Clock widget can not infer its timezone from a\"\n@@ -72,8 +87,12 @@\n \" of these libraries, or give it a\"\n \" datetime.tzinfo instance.\"\n )\n- if self.timezone is None:\n- logger.debug(\"Defaulting to the system local timezone.\")\n+ elif timezone is None:\n+ pass\n+ else:\n+ logger.warning(\"Invalid timezone value %s.\", timezone)\n+\n+ return None\n \n def tick(self):\n self.update(self.poll())\n@@ -88,3 +107,27 @@\n else:\n now = datetime.now(timezone.utc).astimezone()\n return (now + self.DELTA).strftime(self.format)\n+\n+ @expose_command\n+ def update_timezone(self, timezone: str | tzinfo | None = None):\n+ \"\"\"\n+ Force the clock to update timezone information.\n+\n+ If the method is called with no arguments then the widget will reload\n+ the timzeone set on the computer (e.g. via ``timedatectl set-timezone ..``).\n+ This will have no effect if you have previously set a ``timezone`` value.\n+\n+ Alternatively, you can pass a timezone string (e.g. ``\"Europe/Lisbon\"``) to change\n+ the specified timezone. Setting this to an empty string will cause the clock\n+ to rely on the system timezone.\n+ \"\"\"\n+ self.timezone = self._lift_timezone(timezone)\n+\n+ # Force python to update timezone info (e.g. if system time has changed)\n+ time.tzset()\n+ self.update(self.poll())\n+\n+ @expose_command\n+ def use_system_timezone(self):\n+ \"\"\"Force clock to use system timezone.\"\"\"\n+ self.update_timezone(\"\")\n", "issue": "Clock and tzupdate \n### The issue:\n\nqtile version:\r\n0.21.0\r\n\r\nThese days I need to travel with my system and have the need to update my timezone in order to work with my calendar. I'm using `tzupdate` which easily updates my timezone with one command. \r\n\r\nI'm using the Clock widget in the qtile bar as so:\r\n``` python \r\n widget.Clock(format=\"%A %d %b %Y %H:%M:%S %z\"),\r\n```\r\n\r\nUpdating the timezone with `tzupdate` however does not change the timezone on the Clock widget. It requires restarting qtile in order to get this done. 
I would expect qtile to poll the current system timezone at each tik.\n\n### Required:\n\n- [X] I have searched past issues to see if this bug has already been reported.\n", "before_files": [{"content": "# Copyright (c) 2010 Aldo Cortesi\n# Copyright (c) 2012 Andrew Grigorev\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport sys\nimport time\nfrom datetime import datetime, timedelta, timezone\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\ntry:\n import pytz\nexcept ImportError:\n pass\n\ntry:\n import dateutil.tz\nexcept ImportError:\n pass\n\n\nclass Clock(base.InLoopPollText):\n \"\"\"A simple but flexible text-based clock\"\"\"\n\n defaults = [\n (\"format\", \"%H:%M\", \"A Python datetime format string\"),\n (\"update_interval\", 1.0, \"Update interval for the clock\"),\n (\n \"timezone\",\n None,\n \"The timezone to use for this clock, either as\"\n ' string if pytz or dateutil is installed (e.g. \"US/Central\" or'\n \" anything in /usr/share/zoneinfo), or as tzinfo (e.g.\"\n \" datetime.timezone.utc). None means the system local timezone and is\"\n \" the default.\",\n ),\n ]\n DELTA = timedelta(seconds=0.5)\n\n def __init__(self, **config):\n base.InLoopPollText.__init__(self, **config)\n self.add_defaults(Clock.defaults)\n if isinstance(self.timezone, str):\n if \"pytz\" in sys.modules:\n self.timezone = pytz.timezone(self.timezone)\n elif \"dateutil\" in sys.modules:\n self.timezone = dateutil.tz.gettz(self.timezone)\n else:\n logger.warning(\n \"Clock widget can not infer its timezone from a\"\n \" string without pytz or dateutil. Install one\"\n \" of these libraries, or give it a\"\n \" datetime.tzinfo instance.\"\n )\n if self.timezone is None:\n logger.debug(\"Defaulting to the system local timezone.\")\n\n def tick(self):\n self.update(self.poll())\n return self.update_interval - time.time() % self.update_interval\n\n # adding .5 to get a proper seconds value because glib could\n # theoreticaly call our method too early and we could get something\n # like (x-1).999 instead of x.000\n def poll(self):\n if self.timezone:\n now = datetime.now(timezone.utc).astimezone(self.timezone)\n else:\n now = datetime.now(timezone.utc).astimezone()\n return (now + self.DELTA).strftime(self.format)\n", "path": "libqtile/widget/clock.py"}]} | 1,669 | 787 |
gh_patches_debug_11504 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3347 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider subway is broken
During the global build at 2021-09-15-14-42-44, spider **subway** failed with **31396 features** and **22 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-09-15-14-42-44/logs/subway.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-15-14-42-44/output/subway.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-15-14-42-44/output/subway.geojson))
</issue>
<code>
[start of locations/spiders/subway.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 from locations.items import GeojsonPointItem
4 from locations.hours import OpeningHours
5
6 from urllib.parse import urlparse
7 import json
8
9
10 DAY_MAPPING = {
11 "MONDAY": "Mo",
12 "TUESDAY": "Tu",
13 "WEDNESDAY": "We",
14 "THURSDAY": "Th",
15 "FRIDAY": "Fr",
16 "SATURDAY": "Sa",
17 "SUNDAY": "Su",
18 }
19
20
21 class SubwaySpider(scrapy.Spider):
22 name = "subway"
23 item_attributes = {"name": "Subway", "brand": "Subway", "brand_wikidata": "Q244457"}
24 allowed_domains = ["restaurants.subway.com"]
25 start_urls = ["https://restaurants.subway.com/"]
26
27 link_extractor = scrapy.linkextractors.LinkExtractor(
28 restrict_css=".Directory-listLinks, .Directory-listTeasers"
29 )
30
31 def parse(self, response):
32 for link in self.link_extractor.extract_links(response):
33 yield scrapy.Request(link.url)
34
35 js = response.xpath('//script[@class="js-hours-config"]/text()').get()
36 if js:
37 yield from self.parse_restaurant(json.loads(js))
38
39 def parse_restaurant(self, js):
40 # Note: Of the five different coordinate fields, this is the one that always exists
41 lat_long = js["profile"]["yextDisplayCoordinate"]
42 website = urlparse(js["profile"]["websiteUrl"])._replace(query="").geturl()
43 properties = {
44 "lat": lat_long["lat"],
45 "lon": lat_long["long"],
46 "ref": js["profile"]["meta"]["id"],
47 "addr_full": js["profile"]["address"]["line1"],
48 "extras": {
49 "addr:unit": js["profile"]["address"]["line2"],
50 # Note: line3 is always null
51 "loc_name": js["profile"]["address"]["extraDescription"],
52 },
53 "city": js["profile"]["address"]["city"],
54 "state": js["profile"]["address"]["region"],
55 "postcode": js["profile"]["address"]["postalCode"],
56 "country": js["profile"]["address"]["countryCode"],
57 "phone": js["profile"].get("mainPhone", {}).get("number"),
58 "opening_hours": self.parse_hours(js["profile"]["hours"]["normalHours"]),
59 "website": website,
60 }
61 yield GeojsonPointItem(**properties)
62
63 def parse_hours(self, hours_json):
64 opening_hours = OpeningHours()
65 for date in hours_json:
66 day = DAY_MAPPING[date["day"]]
67 for interval in date["intervals"]:
68 start_hr, start_min = divmod(interval["start"], 100)
69 end_hr, end_min = divmod(interval["end"], 100)
70 opening_hours.add_range(
71 day, f"{start_hr}:{start_min}", f"{end_hr}:{end_min}"
72 )
73 return opening_hours.as_opening_hours()
74
[end of locations/spiders/subway.py]
</code>
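`parse_hours` above stores opening times as packed HHMM integers and splits them with `divmod(t, 100)`. A tiny standalone illustration (note, as an aside, that the listing's f-strings do not zero-pad, so 9:05 would come out as `9:5`; whether `OpeningHours` tolerates that is not shown here):

```python
for packed in (900, 1730, 2359):  # 9:00, 17:30, 23:59 as HHMM integers
    hr, minute = divmod(packed, 100)
    print(f"{packed} -> {hr:02d}:{minute:02d}")
```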
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/subway.py b/locations/spiders/subway.py
--- a/locations/spiders/subway.py
+++ b/locations/spiders/subway.py
@@ -39,7 +39,9 @@
def parse_restaurant(self, js):
# Note: Of the five different coordinate fields, this is the one that always exists
lat_long = js["profile"]["yextDisplayCoordinate"]
- website = urlparse(js["profile"]["websiteUrl"])._replace(query="").geturl()
+ website = ""
+ if 'websiteUrl' in js["profile"]:
+ website = urlparse(js["profile"]["websiteUrl"])._replace(query="").geturl()
properties = {
"lat": lat_long["lat"],
"lon": lat_long["long"],
| {"golden_diff": "diff --git a/locations/spiders/subway.py b/locations/spiders/subway.py\n--- a/locations/spiders/subway.py\n+++ b/locations/spiders/subway.py\n@@ -39,7 +39,9 @@\n def parse_restaurant(self, js):\n # Note: Of the five different coordinate fields, this is the one that always exists\n lat_long = js[\"profile\"][\"yextDisplayCoordinate\"]\n- website = urlparse(js[\"profile\"][\"websiteUrl\"])._replace(query=\"\").geturl()\n+ website = \"\"\n+ if 'websiteUrl' in js[\"profile\"]:\n+ website = urlparse(js[\"profile\"][\"websiteUrl\"])._replace(query=\"\").geturl()\n properties = {\n \"lat\": lat_long[\"lat\"],\n \"lon\": lat_long[\"long\"],\n", "issue": "Spider subway is broken\nDuring the global build at 2021-09-15-14-42-44, spider **subway** failed with **31396 features** and **22 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-09-15-14-42-44/logs/subway.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-15-14-42-44/output/subway.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-15-14-42-44/output/subway.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nfrom urllib.parse import urlparse\nimport json\n\n\nDAY_MAPPING = {\n \"MONDAY\": \"Mo\",\n \"TUESDAY\": \"Tu\",\n \"WEDNESDAY\": \"We\",\n \"THURSDAY\": \"Th\",\n \"FRIDAY\": \"Fr\",\n \"SATURDAY\": \"Sa\",\n \"SUNDAY\": \"Su\",\n}\n\n\nclass SubwaySpider(scrapy.Spider):\n name = \"subway\"\n item_attributes = {\"name\": \"Subway\", \"brand\": \"Subway\", \"brand_wikidata\": \"Q244457\"}\n allowed_domains = [\"restaurants.subway.com\"]\n start_urls = [\"https://restaurants.subway.com/\"]\n\n link_extractor = scrapy.linkextractors.LinkExtractor(\n restrict_css=\".Directory-listLinks, .Directory-listTeasers\"\n )\n\n def parse(self, response):\n for link in self.link_extractor.extract_links(response):\n yield scrapy.Request(link.url)\n\n js = response.xpath('//script[@class=\"js-hours-config\"]/text()').get()\n if js:\n yield from self.parse_restaurant(json.loads(js))\n\n def parse_restaurant(self, js):\n # Note: Of the five different coordinate fields, this is the one that always exists\n lat_long = js[\"profile\"][\"yextDisplayCoordinate\"]\n website = urlparse(js[\"profile\"][\"websiteUrl\"])._replace(query=\"\").geturl()\n properties = {\n \"lat\": lat_long[\"lat\"],\n \"lon\": lat_long[\"long\"],\n \"ref\": js[\"profile\"][\"meta\"][\"id\"],\n \"addr_full\": js[\"profile\"][\"address\"][\"line1\"],\n \"extras\": {\n \"addr:unit\": js[\"profile\"][\"address\"][\"line2\"],\n # Note: line3 is always null\n \"loc_name\": js[\"profile\"][\"address\"][\"extraDescription\"],\n },\n \"city\": js[\"profile\"][\"address\"][\"city\"],\n \"state\": js[\"profile\"][\"address\"][\"region\"],\n \"postcode\": js[\"profile\"][\"address\"][\"postalCode\"],\n \"country\": js[\"profile\"][\"address\"][\"countryCode\"],\n \"phone\": js[\"profile\"].get(\"mainPhone\", {}).get(\"number\"),\n \"opening_hours\": self.parse_hours(js[\"profile\"][\"hours\"][\"normalHours\"]),\n \"website\": website,\n }\n yield GeojsonPointItem(**properties)\n\n def parse_hours(self, hours_json):\n opening_hours = OpeningHours()\n for date in hours_json:\n day = DAY_MAPPING[date[\"day\"]]\n for interval in date[\"intervals\"]:\n start_hr, start_min = divmod(interval[\"start\"], 100)\n end_hr, end_min = divmod(interval[\"end\"], 100)\n 
opening_hours.add_range(\n day, f\"{start_hr}:{start_min}\", f\"{end_hr}:{end_min}\"\n )\n return opening_hours.as_opening_hours()\n", "path": "locations/spiders/subway.py"}]} | 1,502 | 173 |
gh_patches_debug_1207 | rasdani/github-patches | git_diff | pytorch__vision-2933 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change default value of eps in FrozenBatchNorm to match BatchNorm
## ❓ Questions and Help
Hello
A "Loss is nan" error occurs when I train Faster R-CNN with a resnext101 backbone.
My code is as follows
```python
backbone = resnet_fpn_backbone('resnext101_32x8d', pretrained=True)
model = FasterRCNN(backbone, num_classes)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
```
error message
```
Epoch: [0] [ 0/7208] eta: 1:27:42 lr: 0.000040 loss: 40613806080.0000 (40613806080.0000) loss_box_reg: 7979147264.0000 (7979147264.0000) loss_classifier: 11993160704.0000 (11993160704.0000) loss_objectness: 9486380032.0000 (9486380032.0000) loss_rpn_box_reg: 11155118080.0000 (11155118080.0000) time: 0.7301 data: 0.4106 max mem: 1241
Loss is nan, stopping training
```
When I change the backbone to resnet50 or resnet152, no error occurs.
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
</issue>
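The report never mentions `eps`, but the title's connection is visible in `FrozenBatchNorm2d.forward` below: `scale = w * (rv + self.eps).rsqrt()`. With the old default `eps = 0.`, any channel whose frozen `running_var` is zero yields an infinite scale, and the loss goes NaN on the first step. Whether the resnext101 checkpoints actually contain such channels isn't shown here, but this is the standard failure mode the epsilon guards against; a minimal sketch, assuming only PyTorch:

```python
import torch

running_var = torch.tensor([1.0, 0.25, 0.0])  # last channel: zero variance

for eps in (0.0, 1e-5):  # old FrozenBatchNorm2d default vs. nn.BatchNorm2d's
    scale = (running_var + eps).rsqrt()
    print(f"eps={eps}: scale={scale.tolist()}")
# eps=0.0  -> [1.0, 2.0, inf]        (inf propagates into NaN losses)
# eps=1e-5 -> [~1.0, ~2.0, ~316.23]  (large but finite)
```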
<code>
[start of torchvision/ops/misc.py]
1 """
2 helper class that supports empty tensors on some nn functions.
3
4 Ideally, add support directly in PyTorch to empty tensors in
5 those functions.
6
7 This can be removed once https://github.com/pytorch/pytorch/issues/12013
8 is implemented
9 """
10
11 import warnings
12 import torch
13 from torch import Tensor, Size
14 from torch.jit.annotations import List, Optional, Tuple
15
16
17 class Conv2d(torch.nn.Conv2d):
18 def __init__(self, *args, **kwargs):
19 super().__init__(*args, **kwargs)
20 warnings.warn(
21 "torchvision.ops.misc.Conv2d is deprecated and will be "
22 "removed in future versions, use torch.nn.Conv2d instead.", FutureWarning)
23
24
25 class ConvTranspose2d(torch.nn.ConvTranspose2d):
26 def __init__(self, *args, **kwargs):
27 super().__init__(*args, **kwargs)
28 warnings.warn(
29 "torchvision.ops.misc.ConvTranspose2d is deprecated and will be "
30 "removed in future versions, use torch.nn.ConvTranspose2d instead.", FutureWarning)
31
32
33 class BatchNorm2d(torch.nn.BatchNorm2d):
34 def __init__(self, *args, **kwargs):
35 super().__init__(*args, **kwargs)
36 warnings.warn(
37 "torchvision.ops.misc.BatchNorm2d is deprecated and will be "
38 "removed in future versions, use torch.nn.BatchNorm2d instead.", FutureWarning)
39
40
41 interpolate = torch.nn.functional.interpolate
42
43
44 # This is not in nn
45 class FrozenBatchNorm2d(torch.nn.Module):
46 """
47 BatchNorm2d where the batch statistics and the affine parameters
48 are fixed
49 """
50
51 def __init__(
52 self,
53 num_features: int,
54 eps: float = 0.,
55 n: Optional[int] = None,
56 ):
57 # n=None for backward-compatibility
58 if n is not None:
59 warnings.warn("`n` argument is deprecated and has been renamed `num_features`",
60 DeprecationWarning)
61 num_features = n
62 super(FrozenBatchNorm2d, self).__init__()
63 self.eps = eps
64 self.register_buffer("weight", torch.ones(num_features))
65 self.register_buffer("bias", torch.zeros(num_features))
66 self.register_buffer("running_mean", torch.zeros(num_features))
67 self.register_buffer("running_var", torch.ones(num_features))
68
69 def _load_from_state_dict(
70 self,
71 state_dict: dict,
72 prefix: str,
73 local_metadata: dict,
74 strict: bool,
75 missing_keys: List[str],
76 unexpected_keys: List[str],
77 error_msgs: List[str],
78 ):
79 num_batches_tracked_key = prefix + 'num_batches_tracked'
80 if num_batches_tracked_key in state_dict:
81 del state_dict[num_batches_tracked_key]
82
83 super(FrozenBatchNorm2d, self)._load_from_state_dict(
84 state_dict, prefix, local_metadata, strict,
85 missing_keys, unexpected_keys, error_msgs)
86
87 def forward(self, x: Tensor) -> Tensor:
88 # move reshapes to the beginning
89 # to make it fuser-friendly
90 w = self.weight.reshape(1, -1, 1, 1)
91 b = self.bias.reshape(1, -1, 1, 1)
92 rv = self.running_var.reshape(1, -1, 1, 1)
93 rm = self.running_mean.reshape(1, -1, 1, 1)
94 scale = w * (rv + self.eps).rsqrt()
95 bias = b - rm * scale
96 return x * scale + bias
97
98 def __repr__(self) -> str:
99 return f"{self.__class__.__name__}({self.weight.shape[0]}, eps={self.eps})"
100
[end of torchvision/ops/misc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/ops/misc.py b/torchvision/ops/misc.py
--- a/torchvision/ops/misc.py
+++ b/torchvision/ops/misc.py
@@ -51,7 +51,7 @@
def __init__(
self,
num_features: int,
- eps: float = 0.,
+ eps: float = 1e-5,
n: Optional[int] = None,
):
# n=None for backward-compatibility
| {"golden_diff": "diff --git a/torchvision/ops/misc.py b/torchvision/ops/misc.py\n--- a/torchvision/ops/misc.py\n+++ b/torchvision/ops/misc.py\n@@ -51,7 +51,7 @@\n def __init__(\n self,\n num_features: int,\n- eps: float = 0.,\n+ eps: float = 1e-5,\n n: Optional[int] = None,\n ):\n # n=None for backward-compatibility\n", "issue": "Change default value of eps in FrozenBatchNorm to match BatchNorm\n## \u2753 Questions and Help\r\nHello\r\nLoss is nan error occurs when I learn fast rcnn with resnext101 backbone\r\nMy code is as follows\r\n```python\r\nbackbone = resnet_fpn_backbone('resnext101_32x8d', pretrained=True)\r\nmodel = FasterRCNN(backbone, num_classes)\r\nin_features = model.roi_heads.box_predictor.cls_score.in_features\r\nmodel.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)\r\n```\r\n\r\nerror message\r\n```\r\nEpoch: [0] [ 0/7208] eta: 1:27:42 lr: 0.000040 loss: 40613806080.0000 (40613806080.0000) loss_box_reg: 7979147264.0000 (7979147264.0000) loss_classifier: 11993160704.0000 (11993160704.0000) loss_objectness: 9486380032.0000 (9486380032.0000) loss_rpn_box_reg: 11155118080.0000 (11155118080.0000) time: 0.7301 data: 0.4106 max mem: 1241\r\nLoss is nan, stopping training\r\n```\r\n\r\nWhen i change the backbone to resnet50 and resnet152, no error occrus.\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n", "before_files": [{"content": "\"\"\"\nhelper class that supports empty tensors on some nn functions.\n\nIdeally, add support directly in PyTorch to empty tensors in\nthose functions.\n\nThis can be removed once https://github.com/pytorch/pytorch/issues/12013\nis implemented\n\"\"\"\n\nimport warnings\nimport torch\nfrom torch import Tensor, Size\nfrom torch.jit.annotations import List, Optional, Tuple\n\n\nclass Conv2d(torch.nn.Conv2d):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n warnings.warn(\n \"torchvision.ops.misc.Conv2d is deprecated and will be \"\n \"removed in future versions, use torch.nn.Conv2d instead.\", FutureWarning)\n\n\nclass ConvTranspose2d(torch.nn.ConvTranspose2d):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n warnings.warn(\n \"torchvision.ops.misc.ConvTranspose2d is deprecated and will be \"\n \"removed in future versions, use torch.nn.ConvTranspose2d instead.\", FutureWarning)\n\n\nclass BatchNorm2d(torch.nn.BatchNorm2d):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n warnings.warn(\n \"torchvision.ops.misc.BatchNorm2d is deprecated and will be \"\n \"removed in future versions, use torch.nn.BatchNorm2d instead.\", FutureWarning)\n\n\ninterpolate = torch.nn.functional.interpolate\n\n\n# This is not in nn\nclass FrozenBatchNorm2d(torch.nn.Module):\n \"\"\"\n BatchNorm2d where the batch statistics and the affine parameters\n are fixed\n \"\"\"\n\n def __init__(\n self,\n num_features: int,\n eps: float = 0.,\n n: Optional[int] = None,\n ):\n # n=None for backward-compatibility\n if n is not None:\n warnings.warn(\"`n` argument is deprecated and has been renamed `num_features`\",\n DeprecationWarning)\n num_features = n\n super(FrozenBatchNorm2d, self).__init__()\n self.eps = eps\n self.register_buffer(\"weight\", torch.ones(num_features))\n self.register_buffer(\"bias\", 
torch.zeros(num_features))\n self.register_buffer(\"running_mean\", torch.zeros(num_features))\n self.register_buffer(\"running_var\", torch.ones(num_features))\n\n def _load_from_state_dict(\n self,\n state_dict: dict,\n prefix: str,\n local_metadata: dict,\n strict: bool,\n missing_keys: List[str],\n unexpected_keys: List[str],\n error_msgs: List[str],\n ):\n num_batches_tracked_key = prefix + 'num_batches_tracked'\n if num_batches_tracked_key in state_dict:\n del state_dict[num_batches_tracked_key]\n\n super(FrozenBatchNorm2d, self)._load_from_state_dict(\n state_dict, prefix, local_metadata, strict,\n missing_keys, unexpected_keys, error_msgs)\n\n def forward(self, x: Tensor) -> Tensor:\n # move reshapes to the beginning\n # to make it fuser-friendly\n w = self.weight.reshape(1, -1, 1, 1)\n b = self.bias.reshape(1, -1, 1, 1)\n rv = self.running_var.reshape(1, -1, 1, 1)\n rm = self.running_mean.reshape(1, -1, 1, 1)\n scale = w * (rv + self.eps).rsqrt()\n bias = b - rm * scale\n return x * scale + bias\n\n def __repr__(self) -> str:\n return f\"{self.__class__.__name__}({self.weight.shape[0]}, eps={self.eps})\"\n", "path": "torchvision/ops/misc.py"}]} | 2,018 | 108 |
gh_patches_debug_17358 | rasdani/github-patches | git_diff | weecology__retriever-548 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
download_only w/path fails to use path argument when checking for file
When `download_only` checks to see if the file already exists before copying it, it ignores the path argument. This means that:
```
retriever download MoM2003 -p testdata
```
will keep overwriting the file in `testdata` if it exists, and it will not copy the file to `testdata` if the file exists in `.`.
Fixing this probably just requires a small logic improvement in the `final_cleanup` function of `download_only`.
</issue>
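In other words, the existence test has to happen at the destination directory, not at the default data directory. A minimal sketch of the corrected check (the paths are hypothetical, and the golden diff further below does the same thing inline in `final_cleanup`):

```python
import os

def should_copy(file_name: str, dest_dir: str) -> bool:
    # Check for the file where the user asked it to go; only after that
    # does the "already in the working directory" shortcut make sense.
    _, name_nopath = os.path.split(file_name)
    return not os.path.isfile(os.path.join(dest_dir, name_nopath))

print(should_copy("raw_data/MoM2003/MoM2003.txt", "testdata"))
```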
<code>
[start of engines/download_only.py]
1 from __future__ import print_function
2 from builtins import object
3 import os
4 import platform
5 import shutil
6 import inspect
7
8 from retriever.lib.engine import filename_from_url
9 from retriever.lib.models import Engine, no_cleanup
10 from retriever import DATA_DIR, HOME_DIR
11
12
13 class DummyConnection(object):
14
15 def cursor(self):
16 pass
17
18 def commit(self):
19 pass
20
21 def rollback(self):
22 pass
23
24 def close(self):
25 pass
26
27
28 class DummyCursor(DummyConnection):
29 pass
30
31
32 class engine(Engine):
33 """Engine instance for writing data to a CSV file."""
34 name = "Download Only"
35 abbreviation = "download"
36 required_opts = [("path",
37 "File path to copy data files",
38 "./"),
39 ("subdir",
40 "Keep the subdirectories for archived files",
41 False)
42 ]
43
44 def table_exists(self, dbname, tablename):
45 """Checks if the file to be downloaded already exists"""
46 try:
47 tablename = self.table_name(name=tablename, dbname=dbname)
48 return os.path.exists(tablename)
49 except:
50 return False
51
52 def get_connection(self):
53 """Gets the db connection."""
54 self.get_input()
55 return DummyConnection()
56
57 def final_cleanup(self):
58 """Copies downloaded files to desired directory
59
60 Copies the downloaded files into the chosen directory unless files with the same
61 name already exist in the directory.
62
63 """
64 if hasattr(self, "all_files"):
65 for file_name in self.all_files:
66 file_path, file_name_nopath = os.path.split(file_name)
67 subdir = os.path.split(file_path)[1] if self.opts['subdir'] else ''
68 dest_path = os.path.join(self.opts['path'], subdir)
69 if os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):
70 print ("%s is already in the working directory" %
71 file_name_nopath)
72 print("Keeping existing copy.")
73 else:
74 print("Copying %s from %s" % (file_name_nopath, file_path))
75 if os.path.isdir(dest_path):
76 try:
77 shutil.copy(file_name, dest_path)
78 except:
79 print("Couldn't copy file to %s" % dest_path)
80 else:
81 try:
82 print("Creating directory %s" % dest_path)
83 os.makedirs(dest_path)
84 shutil.copy(file_name, dest_path)
85 except:
86 print("Couldn't create directory %s" % dest_path)
87 self.all_files = set()
88
89 def auto_create_table(self, table, url=None, filename=None, pk=None):
90 """Download the file if it doesn't exist"""
91 if url and not filename:
92 filename = filename_from_url(url)
93
94 if url and not self.find_file(filename):
95 # If the file doesn't exist, download it
96 self.download_file(url, filename)
97
98 def insert_data_from_url(self, url):
99 """Insert data from a web resource"""
100 filename = filename_from_url(url)
101 find = self.find_file(filename)
102 if not find:
103 self.create_raw_data_dir()
104 self.download_file(url, filename)
105
106 def find_file(self, filename):
107 """Checks for the given file and adds it to the list of all files"""
108 result = Engine.find_file(self, filename)
109 if not hasattr(self, "all_files"):
110 self.all_files = set()
111 if result:
112 self.all_files.add(result)
113 return result
114
115 def register_files(self, filenames):
116 """Identify a list of files to be moved by the download
117
118 When downloading archives with multiple files the engine needs to be
119 informed of all of the file names so that it can move them.
120
121 """
122 full_filenames = {self.find_file(filename) for filename in filenames
123 if self.find_file(filename)}
124 self.all_files = self.all_files.union(full_filenames)
125
126
127 # replace all other methods with a function that does nothing
128 def dummy_method(self, *args, **kwargs):
129 pass
130
131
132 methods = inspect.getmembers(engine, predicate=inspect.ismethod)
133 keep_methods = {'table_exists',
134 'get_connection',
135 'final_cleanup',
136 'auto_create_table',
137 'insert_data_from_url',
138 }
139 remove_methods = ['insert_data_from_file']
140 for name, method in methods:
141 if (name not in keep_methods and
142 'download' not in name and
143 'file' not in name and
144 'dir' not in name):
145 setattr(engine, name, dummy_method)
146 for name in remove_methods:
147 setattr(engine, name, dummy_method)
148
[end of engines/download_only.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/engines/download_only.py b/engines/download_only.py
--- a/engines/download_only.py
+++ b/engines/download_only.py
@@ -66,7 +66,9 @@
file_path, file_name_nopath = os.path.split(file_name)
subdir = os.path.split(file_path)[1] if self.opts['subdir'] else ''
dest_path = os.path.join(self.opts['path'], subdir)
- if os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):
+ if os.path.isfile(os.path.join(dest_path, file_name_nopath)):
+ print ("File already exists at specified location")
+ elif os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):
print ("%s is already in the working directory" %
file_name_nopath)
print("Keeping existing copy.")
| {"golden_diff": "diff --git a/engines/download_only.py b/engines/download_only.py\n--- a/engines/download_only.py\n+++ b/engines/download_only.py\n@@ -66,7 +66,9 @@\n file_path, file_name_nopath = os.path.split(file_name)\n subdir = os.path.split(file_path)[1] if self.opts['subdir'] else ''\n dest_path = os.path.join(self.opts['path'], subdir)\n- if os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):\n+ if os.path.isfile(os.path.join(dest_path, file_name_nopath)):\n+ print (\"File already exists at specified location\")\n+ elif os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):\n print (\"%s is already in the working directory\" %\n file_name_nopath)\n print(\"Keeping existing copy.\")\n", "issue": "download_only w/path fails to use path argument when checking for file\nWhen `download_only` checks to see if the file already exists before copying it, it ignores the path argument. This means that:\n\n```\nretriever download MoM2003 -p testdata\n```\n\nwill keep overwriting the file in `testdata` if it exists, and it will not copy the file to `testdata` if the file exists in `.`.\n\nFixes this is probably just a little logic improvement in the `final_cleanup` function of `download_only`.\n\n", "before_files": [{"content": "from __future__ import print_function\nfrom builtins import object\nimport os\nimport platform\nimport shutil\nimport inspect\n\nfrom retriever.lib.engine import filename_from_url\nfrom retriever.lib.models import Engine, no_cleanup\nfrom retriever import DATA_DIR, HOME_DIR\n\n\nclass DummyConnection(object):\n\n def cursor(self):\n pass\n\n def commit(self):\n pass\n\n def rollback(self):\n pass\n\n def close(self):\n pass\n\n\nclass DummyCursor(DummyConnection):\n pass\n\n\nclass engine(Engine):\n \"\"\"Engine instance for writing data to a CSV file.\"\"\"\n name = \"Download Only\"\n abbreviation = \"download\"\n required_opts = [(\"path\",\n \"File path to copy data files\",\n \"./\"),\n (\"subdir\",\n \"Keep the subdirectories for archived files\",\n False)\n ]\n\n def table_exists(self, dbname, tablename):\n \"\"\"Checks if the file to be downloaded already exists\"\"\"\n try:\n tablename = self.table_name(name=tablename, dbname=dbname)\n return os.path.exists(tablename)\n except:\n return False\n\n def get_connection(self):\n \"\"\"Gets the db connection.\"\"\"\n self.get_input()\n return DummyConnection()\n\n def final_cleanup(self):\n \"\"\"Copies downloaded files to desired directory\n\n Copies the downloaded files into the chosen directory unless files with the same\n name already exist in the directory.\n\n \"\"\"\n if hasattr(self, \"all_files\"):\n for file_name in self.all_files:\n file_path, file_name_nopath = os.path.split(file_name)\n subdir = os.path.split(file_path)[1] if self.opts['subdir'] else ''\n dest_path = os.path.join(self.opts['path'], subdir)\n if os.path.abspath(file_path) == os.path.abspath(os.path.join(DATA_DIR, subdir)):\n print (\"%s is already in the working directory\" %\n file_name_nopath)\n print(\"Keeping existing copy.\")\n else:\n print(\"Copying %s from %s\" % (file_name_nopath, file_path))\n if os.path.isdir(dest_path):\n try:\n shutil.copy(file_name, dest_path)\n except:\n print(\"Couldn't copy file to %s\" % dest_path)\n else:\n try:\n print(\"Creating directory %s\" % dest_path)\n os.makedirs(dest_path)\n shutil.copy(file_name, dest_path)\n except:\n print(\"Couldn't create directory %s\" % dest_path)\n self.all_files = set()\n\n def auto_create_table(self, table, 
url=None, filename=None, pk=None):\n \"\"\"Download the file if it doesn't exist\"\"\"\n if url and not filename:\n filename = filename_from_url(url)\n\n if url and not self.find_file(filename):\n # If the file doesn't exist, download it\n self.download_file(url, filename)\n\n def insert_data_from_url(self, url):\n \"\"\"Insert data from a web resource\"\"\"\n filename = filename_from_url(url)\n find = self.find_file(filename)\n if not find:\n self.create_raw_data_dir()\n self.download_file(url, filename)\n\n def find_file(self, filename):\n \"\"\"Checks for the given file and adds it to the list of all files\"\"\"\n result = Engine.find_file(self, filename)\n if not hasattr(self, \"all_files\"):\n self.all_files = set()\n if result:\n self.all_files.add(result)\n return result\n\n def register_files(self, filenames):\n \"\"\"Identify a list of files to be moved by the download\n\n When downloading archives with multiple files the engine needs to be\n informed of all of the file names so that it can move them.\n\n \"\"\"\n full_filenames = {self.find_file(filename) for filename in filenames\n if self.find_file(filename)}\n self.all_files = self.all_files.union(full_filenames)\n\n\n# replace all other methods with a function that does nothing\ndef dummy_method(self, *args, **kwargs):\n pass\n\n\nmethods = inspect.getmembers(engine, predicate=inspect.ismethod)\nkeep_methods = {'table_exists',\n 'get_connection',\n 'final_cleanup',\n 'auto_create_table',\n 'insert_data_from_url',\n }\nremove_methods = ['insert_data_from_file']\nfor name, method in methods:\n if (name not in keep_methods and\n 'download' not in name and\n 'file' not in name and\n 'dir' not in name):\n setattr(engine, name, dummy_method)\nfor name in remove_methods:\n setattr(engine, name, dummy_method)\n", "path": "engines/download_only.py"}]} | 1,964 | 193 |
gh_patches_debug_7495 | rasdani/github-patches | git_diff | kymatio__kymatio-890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make NumPy the default frontend
Since we promised earlier:
```
/home/jenkins/workspace/kymatio_dev/kymatio/frontend/entry.py:20: DeprecationWarning: Torch frontend is currently the default, but NumPy will become the default in the next version.
warnings.warn("Torch frontend is currently the default, but NumPy will become the default in the next"
```
</issue>
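The change lands in `ScatteringEntry.__init__` below, where the missing-`frontend` branch currently warns and falls back to `'torch'`. A terser sketch of the intended behaviour (the golden diff further below keeps the existing `if`/`else` shape and only swaps the default; the `pop` with a default shown here is an equivalent one-liner, not the actual patch):

```python
def resolve_frontend(kwargs: dict) -> str:
    # NumPy becomes the silent default; no deprecation warning needed anymore.
    return kwargs.pop('frontend', 'numpy').lower()

print(resolve_frontend({}))                     # 'numpy'
print(resolve_frontend({'frontend': 'Torch'}))  # 'torch'
```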
<code>
[start of kymatio/frontend/entry.py]
1 import logging
2 import warnings
3 import importlib
4
5
6 class ScatteringEntry(object):
7 def __init__(self, *args, **kwargs):
8 self.name = kwargs['name']
9 self.class_name = kwargs['class_name']
10 kwargs.pop('name')
11 kwargs.pop('class_name')
12
13 frontend_suffixes = {'torch' : 'Torch',
14 'numpy' : 'NumPy',
15 'tensorflow' : 'TensorFlow',
16 'keras': 'Keras',
17 'sklearn': 'Transformer'}
18
19 if 'frontend' not in kwargs:
20 warnings.warn("Torch frontend is currently the default, but NumPy will become the default in the next"
21 " version.", DeprecationWarning)
22 frontend = 'torch'
23 else:
24 frontend = kwargs['frontend'].lower()
25 kwargs.pop('frontend')
26
27 frontends = list(frontend_suffixes.keys())
28
29 if frontend not in frontends:
30 raise RuntimeError('The frontend \'%s\" is not valid. Must be '
31 'one of \'%s\', or \'%s\'.' %
32 (frontend, '\', \''.join(frontends[:-1]),
33 frontends[-1]))
34
35 try:
36 module = importlib.import_module('kymatio.' + self.class_name + '.frontend.' + frontend + '_frontend')
37
38 # Create frontend-specific class name by inserting frontend name
39 # after `Scattering`.
40 frontend = frontend_suffixes[frontend]
41
42 class_name = self.__class__.__name__
43
44 base_name = class_name[:-len('Entry*D')]
45 dim_suffix = class_name[-len('*D'):]
46
47 class_name = base_name + frontend + dim_suffix
48
49 self.__class__ = getattr(module, class_name)
50 self.__init__(*args, **kwargs)
51 except Exception as e:
52 raise e from RuntimeError('\nThe frontend \'' + frontend + '\' could not be correctly imported.')
53
54 logging.info('The ' + self.name + ' frontend ' + frontend + ' was imported.')
55
56
57 __all__ = ['ScatteringEntry']
58
[end of kymatio/frontend/entry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kymatio/frontend/entry.py b/kymatio/frontend/entry.py
--- a/kymatio/frontend/entry.py
+++ b/kymatio/frontend/entry.py
@@ -17,9 +17,7 @@
'sklearn': 'Transformer'}
if 'frontend' not in kwargs:
- warnings.warn("Torch frontend is currently the default, but NumPy will become the default in the next"
- " version.", DeprecationWarning)
- frontend = 'torch'
+ frontend = 'numpy'
else:
frontend = kwargs['frontend'].lower()
kwargs.pop('frontend')
| {"golden_diff": "diff --git a/kymatio/frontend/entry.py b/kymatio/frontend/entry.py\n--- a/kymatio/frontend/entry.py\n+++ b/kymatio/frontend/entry.py\n@@ -17,9 +17,7 @@\n 'sklearn': 'Transformer'}\n \n if 'frontend' not in kwargs:\n- warnings.warn(\"Torch frontend is currently the default, but NumPy will become the default in the next\"\n- \" version.\", DeprecationWarning)\n- frontend = 'torch'\n+ frontend = 'numpy'\n else:\n frontend = kwargs['frontend'].lower()\n kwargs.pop('frontend')\n", "issue": "Make NumPy the default frontend\nSince we promised earlier:\r\n\r\n```\r\n /home/jenkins/workspace/kymatio_dev/kymatio/frontend/entry.py:20: DeprecationWarning: Torch frontend is currently the default, but NumPy will become the default in the next version.\r\n warnings.warn(\"Torch frontend is currently the default, but NumPy will become the default in the next\"\r\n```\n", "before_files": [{"content": "import logging\nimport warnings\nimport importlib\n\n\nclass ScatteringEntry(object):\n def __init__(self, *args, **kwargs):\n self.name = kwargs['name']\n self.class_name = kwargs['class_name']\n kwargs.pop('name')\n kwargs.pop('class_name')\n\n frontend_suffixes = {'torch' : 'Torch',\n 'numpy' : 'NumPy',\n 'tensorflow' : 'TensorFlow',\n 'keras': 'Keras',\n 'sklearn': 'Transformer'}\n\n if 'frontend' not in kwargs:\n warnings.warn(\"Torch frontend is currently the default, but NumPy will become the default in the next\"\n \" version.\", DeprecationWarning)\n frontend = 'torch'\n else:\n frontend = kwargs['frontend'].lower()\n kwargs.pop('frontend')\n\n frontends = list(frontend_suffixes.keys())\n\n if frontend not in frontends:\n raise RuntimeError('The frontend \\'%s\\\" is not valid. Must be '\n 'one of \\'%s\\', or \\'%s\\'.' %\n (frontend, '\\', \\''.join(frontends[:-1]),\n frontends[-1]))\n\n try:\n module = importlib.import_module('kymatio.' + self.class_name + '.frontend.' + frontend + '_frontend')\n\n # Create frontend-specific class name by inserting frontend name\n # after `Scattering`.\n frontend = frontend_suffixes[frontend]\n\n class_name = self.__class__.__name__\n\n base_name = class_name[:-len('Entry*D')]\n dim_suffix = class_name[-len('*D'):]\n\n class_name = base_name + frontend + dim_suffix\n\n self.__class__ = getattr(module, class_name)\n self.__init__(*args, **kwargs)\n except Exception as e:\n raise e from RuntimeError('\\nThe frontend \\'' + frontend + '\\' could not be correctly imported.')\n\n logging.info('The ' + self.name + ' frontend ' + frontend + ' was imported.')\n\n\n__all__ = ['ScatteringEntry']\n", "path": "kymatio/frontend/entry.py"}]} | 1,169 | 137 |
gh_patches_debug_30712 | rasdani/github-patches | git_diff | nvaccess__nvda-11841 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Visual Studio: IntelliSense tooltips reported twice
### Steps to reproduce:
1. Open Visual Studio 2019
2. Open a C# project
3. Enable reporting of Tooltips
4. Trigger an IntelliSense autocomplete suggestion by typing something.
5. Arrow through the suggestions
### Actual behavior:
The selected item is announced, followed by twice the tooltip.
### Expected behavior:
The selected item is announced, followed by once the tooltip.
### System configuration
#### NVDA installed/portable/running from source:
Installed
#### NVDA version:
alpha-20957
#### Windows version:
Windows 10 2004
#### Name and version of other software in use when reproducing the issue:
Visual Studio 2019 16.7.3 Enterprise
### Other questions
#### Does the issue still occur after restarting your computer?
Yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
No
#### If addons are disabled, is your problem still occuring?
Yes
#### Did you try to run the COM registry fixing tool in NVDA menu / tools?
Yes
</issue>
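The golden diff further below suppresses the duplicate by remembering the last tooltip as a `(text, time)` pair and dropping a repeat inside a 0.2 s window. The core of that pattern, stripped of NVDA specifics into plain Python (the class and constants here are illustrative; the real patch lives in a `ToolTip` subclass and uses `time.time()` where this sketch prefers `time.monotonic()`):

```python
import time

class DedupWindow:
    """Drop an event when the same payload repeats within `window` seconds."""

    def __init__(self, window: float = 0.2):
        self.window = window
        self._last = (None, None)  # (payload, monotonic timestamp)

    def accept(self, payload: str) -> bool:
        prev_payload, prev_time = self._last
        now = time.monotonic()
        self._last = (payload, now)
        if (payload == prev_payload and prev_time is not None
                and now - prev_time < self.window):
            return False  # suspected duplicate within the window: swallow it
        return True

d = DedupWindow()
print(d.accept("completion tooltip text"))  # True  (first announcement)
print(d.accept("completion tooltip text"))  # False (duplicate, dropped)
```

The trade-off, acknowledged in the diff's own comments, is that an intentional rapid re-announcement inside the window gets swallowed too.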
<code>
[start of source/NVDAObjects/UIA/VisualStudio.py]
1 # This file is covered by the GNU General Public License.
2 # See the file COPYING for more details.
3 # Copyright (C) 2020 NV Access Limited, Leonard de Ruijter
4
5 """
6 Object overlay classes for Visual Studio components
7 available in Visual Studio and SQL Server Management Studio.
8 """
9
10 from . import UIA
11 import speech
12 import braille
13 import api
14
15
16 class IntelliSenseItem(UIA):
17
18 def _get_name(self):
19 return self.UIAElement.cachedAutomationID
20
21 def event_UIA_elementSelected(self):
22 # Cancel speech to have speech announce the selection as soon as possible.
23 # This is needed because L{reportFocus} does not cancel speech.
24 # Therefore, if speech wouldn't be cancelled,
25 # selection announcements would queue up when changing selection rapidly.
26 speech.cancelSpeech()
27 api.setNavigatorObject(self, isFocus=True)
28 self.reportFocus()
29 # Display results as flash messages.
30 braille.handler.message(braille.getPropertiesBraille(
31 name=self.name, role=self.role, positionInfo=self.positionInfo, description=self.description
32 ))
33
34
35 class IntelliSenseList(UIA):
36 ...
37
38
39 class IntelliSenseLiveRegion(UIA):
40 """
41 Visual Studio uses both Intellisense menu item objects and a live region
42 to communicate Intellisense selections.
43 NVDA uses the menu item approach and therefore the live region provides doubled information
44 and is disabled.
45 """
46
47 _shouldAllowUIALiveRegionChangeEvent = False
48
49
50 _INTELLISENSE_LIST_AUTOMATION_IDS = {
51 "listBoxCompletions",
52 "CompletionList"
53 }
54
55
56 def findExtraOverlayClasses(obj, clsList):
57 if obj.UIAAutomationId in _INTELLISENSE_LIST_AUTOMATION_IDS:
58 clsList.insert(0, IntelliSenseList)
59 elif isinstance(obj.parent, IntelliSenseList) and obj.UIAElement.cachedClassName == "IntellisenseMenuItem":
60 clsList.insert(0, IntelliSenseItem)
61 elif (
62 obj.UIAElement.cachedClassName == "LiveTextBlock"
63 and obj.previous
64 and isinstance(obj.previous.previous, IntelliSenseList)
65 ):
66 clsList.insert(0, IntelliSenseLiveRegion)
67
[end of source/NVDAObjects/UIA/VisualStudio.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/source/NVDAObjects/UIA/VisualStudio.py b/source/NVDAObjects/UIA/VisualStudio.py
--- a/source/NVDAObjects/UIA/VisualStudio.py
+++ b/source/NVDAObjects/UIA/VisualStudio.py
@@ -7,10 +7,11 @@
available in Visual Studio and SQL Server Management Studio.
"""
-from . import UIA
+from . import UIA, ToolTip
import speech
import braille
import api
+import time
class IntelliSenseItem(UIA):
@@ -53,6 +54,34 @@
}
+class CompletionToolTip(ToolTip):
+ """ A tool tip for which duplicate open events can be fired.
+ """
+
+ #: Keeps track of the last ToolTipOpened event (text, time)
+ _lastToolTipOpenedInfo = (None, None)
+ #: The duplicate tooltip events will be dropped within this time window
+ _preventDuplicateToolTipSeconds = 0.2
+
+ def event_UIA_toolTipOpened(self):
+ oldText, oldTime = self._lastToolTipOpenedInfo
+ newText = self.name
+ newTime = time.time()
+ self.__class__._lastToolTipOpenedInfo = (newText, newTime)
+ withinPossibleDupToolTipTimeWindow = (
+ oldTime is not None
+ and (newTime - oldTime) < self._preventDuplicateToolTipSeconds
+ )
+ if newText == oldText and withinPossibleDupToolTipTimeWindow:
+ # Tool-tip event suspected to be a duplicate, drop the event.
+ # - Users attempting to rapidly re-announce tool-tips may
+ # have the announcement erroneously suppressed
+ # - Users on slower systems (or systems under load) may still
+ # receive duplicate announcements
+ return
+ super().event_UIA_toolTipOpened()
+
+
def findExtraOverlayClasses(obj, clsList):
if obj.UIAAutomationId in _INTELLISENSE_LIST_AUTOMATION_IDS:
clsList.insert(0, IntelliSenseList)
@@ -64,3 +93,5 @@
and isinstance(obj.previous.previous, IntelliSenseList)
):
clsList.insert(0, IntelliSenseLiveRegion)
+ elif obj.UIAAutomationId == "completion tooltip":
+ clsList.insert(0, CompletionToolTip)
| {"golden_diff": "diff --git a/source/NVDAObjects/UIA/VisualStudio.py b/source/NVDAObjects/UIA/VisualStudio.py\n--- a/source/NVDAObjects/UIA/VisualStudio.py\n+++ b/source/NVDAObjects/UIA/VisualStudio.py\n@@ -7,10 +7,11 @@\n available in Visual Studio and SQL Server Management Studio.\n \"\"\"\n \n-from . import UIA\n+from . import UIA, ToolTip\n import speech\n import braille\n import api\n+import time\n \n \n class IntelliSenseItem(UIA):\n@@ -53,6 +54,34 @@\n }\n \n \n+class CompletionToolTip(ToolTip):\n+\t\"\"\" A tool tip for which duplicate open events can be fired.\n+\t\"\"\"\n+\n+\t#: Keeps track of the last ToolTipOpened event (text, time)\n+\t_lastToolTipOpenedInfo = (None, None)\n+\t#: The duplicate tooltip events will be dropped within this time window\n+\t_preventDuplicateToolTipSeconds = 0.2\n+\n+\tdef event_UIA_toolTipOpened(self):\n+\t\toldText, oldTime = self._lastToolTipOpenedInfo\n+\t\tnewText = self.name\n+\t\tnewTime = time.time()\n+\t\tself.__class__._lastToolTipOpenedInfo = (newText, newTime)\n+\t\twithinPossibleDupToolTipTimeWindow = (\n+\t\t\toldTime is not None\n+\t\t\tand (newTime - oldTime) < self._preventDuplicateToolTipSeconds\n+\t\t)\n+\t\tif newText == oldText and withinPossibleDupToolTipTimeWindow:\n+\t\t\t# Tool-tip event suspected to be a duplicate, drop the event.\n+\t\t\t# - Users attempting to rapidly re-announce tool-tips may\n+\t\t\t# have the announcement erroneously suppressed\n+\t\t\t# - Users on slower systems (or systems under load) may still\n+\t\t\t# receive duplicate announcements\n+\t\t\treturn\n+\t\tsuper().event_UIA_toolTipOpened()\n+\n+\n def findExtraOverlayClasses(obj, clsList):\n \tif obj.UIAAutomationId in _INTELLISENSE_LIST_AUTOMATION_IDS:\n \t\tclsList.insert(0, IntelliSenseList)\n@@ -64,3 +93,5 @@\n \t\tand isinstance(obj.previous.previous, IntelliSenseList)\n \t):\n \t\tclsList.insert(0, IntelliSenseLiveRegion)\n+\telif obj.UIAAutomationId == \"completion tooltip\":\n+\t\tclsList.insert(0, CompletionToolTip)\n", "issue": "Visual Studio: IntelliSense tooltips reported twice\n### Steps to reproduce:\r\n1. Open Visual Studio 2019\r\n2. Open a C# project\r\n3. Enable reporting of Tooltips\r\n4. Trigger an IntelliSense autocomplete suggestion by typing something.\r\n5. Arrow through the suggestions\r\n\r\n### Actual behavior:\r\nThe selected item is announced, followed by twice the tooltip.\r\n\r\n### Expected behavior:\r\nThe selected item is announced, followed by once the tooltip.\r\n\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\nInstalled\r\n\r\n#### NVDA version:\r\nalpha-20957\r\n\r\n#### Windows version:\r\nWindows 10 2004\r\n\r\n#### Name and version of other software in use when reproducing the issue:\r\nVisual Studio 2019 16.7.3 Enterprise\r\n\r\n### Other questions\r\n#### Does the issue still occur after restarting your computer?\r\nYes\r\n\r\n#### Have you tried any other versions of NVDA? If so, please report their behaviors.\r\nNo\r\n\r\n#### If addons are disabled, is your problem still occuring?\r\nYes\r\n\r\n#### Did you try to run the COM registry fixing tool in NVDA menu / tools?\r\nYes\n", "before_files": [{"content": "# This file is covered by the GNU General Public License.\n# See the file COPYING for more details.\n# Copyright (C) 2020 NV Access Limited, Leonard de Ruijter\n\n\"\"\"\nObject overlay classes for Visual Studio components\navailable in Visual Studio and SQL Server Management Studio.\n\"\"\"\n\nfrom . 
import UIA\nimport speech\nimport braille\nimport api\n\n\nclass IntelliSenseItem(UIA):\n\n\tdef _get_name(self):\n\t\treturn self.UIAElement.cachedAutomationID\n\n\tdef event_UIA_elementSelected(self):\n\t\t# Cancel speech to have speech announce the selection as soon as possible.\n\t\t# This is needed because L{reportFocus} does not cancel speech.\n\t\t# Therefore, if speech wouldn't be cancelled,\n\t\t# selection announcements would queue up when changing selection rapidly.\n\t\tspeech.cancelSpeech()\n\t\tapi.setNavigatorObject(self, isFocus=True)\n\t\tself.reportFocus()\n\t\t# Display results as flash messages.\n\t\tbraille.handler.message(braille.getPropertiesBraille(\n\t\t\tname=self.name, role=self.role, positionInfo=self.positionInfo, description=self.description\n\t\t))\n\n\nclass IntelliSenseList(UIA):\n\t...\n\n\nclass IntelliSenseLiveRegion(UIA):\n\t\"\"\"\n\tVisual Studio uses both Intellisense menu item objects and a live region\n\tto communicate Intellisense selections.\n\tNVDA uses the menu item approach and therefore the live region provides doubled information\n\tand is disabled.\n\t\"\"\"\n\n\t_shouldAllowUIALiveRegionChangeEvent = False\n\n\n_INTELLISENSE_LIST_AUTOMATION_IDS = {\n\t\"listBoxCompletions\",\n\t\"CompletionList\"\n}\n\n\ndef findExtraOverlayClasses(obj, clsList):\n\tif obj.UIAAutomationId in _INTELLISENSE_LIST_AUTOMATION_IDS:\n\t\tclsList.insert(0, IntelliSenseList)\n\telif isinstance(obj.parent, IntelliSenseList) and obj.UIAElement.cachedClassName == \"IntellisenseMenuItem\":\n\t\tclsList.insert(0, IntelliSenseItem)\n\telif (\n\t\tobj.UIAElement.cachedClassName == \"LiveTextBlock\"\n\t\tand obj.previous\n\t\tand isinstance(obj.previous.previous, IntelliSenseList)\n\t):\n\t\tclsList.insert(0, IntelliSenseLiveRegion)\n", "path": "source/NVDAObjects/UIA/VisualStudio.py"}]} | 1,406 | 530 |
gh_patches_debug_1740 | rasdani/github-patches | git_diff | flairNLP__flair-239 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in tokenizer?
Here's a minimal code example to reproduce the problem:
```
from flair.data import Sentence
from flair.models import SequenceTagger
model = SequenceTagger.load("ner-ontonotes-fast")
full_text = "\"In the 1960s and 1970s...\" Then came Thierry Mugler and Gianni Versace."
sentence = Sentence(full_text, use_tokenizer=True)
model.predict(sentence)
print(f"full text : {full_text}")
print(f"text length: {len(full_text)}")
print("tag\tstart\tend\tto_original_text()")
for entity in sentence.get_spans('ner'):
print(f"{entity.tag}\t{entity.start_pos}\t{entity.end_pos}\t{entity.to_original_text()}")
```
Output:
```
$ python predict.py
full text : "In the 1960s and 1970s..." Then came Thierry Mugler and Gianni Versace.
text length: 72
tag start end to_original_text()
DATE 8 13 1960s
DATE 18 23 1970s
PERSON 81 94 ThierryMugler
PERSON 97 110 GianniVersace
```
It seems the resulting spans have start_pos and end_pos indexes larger than the real text length: the input is only 72 characters, yet the PERSON spans start at 81 and 97. Note also that the method to_original_text() is eating the spaces between tokens, so I suppose the two problems are related.
Any ideas about what is causing the trouble?
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2
3 setup(
4 name='flair',
5 version='0.3.2',
6 description='A very simple framework for state-of-the-art NLP',
7 long_description=open("README.md", encoding='utf-8').read(),
8 long_description_content_type="text/markdown",
9 author='Alan Akbik',
10 author_email='[email protected]',
11 url='https://github.com/zalandoresearch/flair',
12 packages=find_packages(exclude='test'), # same as name
13 license='MIT',
14 install_requires=[
15 'torch==0.4.1',
16 'gensim==3.4.0',
17 'typing==3.6.4',
18 'tqdm==4.23.4',
19 'segtok==1.5.6',
20 'matplotlib==3.0.0',
21 'mpld3==0.3',
22 'sklearn',
23 'sqlitedict==1.6.0',
24 'deprecated==1.2.4',
25 ],
26 include_package_data=True,
27 python_requires='>=3.6',
28 )
29
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,8 +15,8 @@
'torch==0.4.1',
'gensim==3.4.0',
'typing==3.6.4',
- 'tqdm==4.23.4',
- 'segtok==1.5.6',
+ 'tqdm==4.26.0',
+ 'segtok==1.5.7',
'matplotlib==3.0.0',
'mpld3==0.3',
'sklearn',
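
The accepted fix only bumps the `tqdm` and `segtok` pins, which suggests the bad offsets came from the pinned segtok tokenizer rather than from flair itself (presumably segtok 1.5.7 computes character offsets correctly). As a sketch of what consistent offsets look like, the helper below recomputes token spans against the original string; `align_tokens` is an illustrative function, not part of flair's API, and it assumes each token occurs verbatim and in order in the text.

```python
def align_tokens(text, tokens):
    # Scan the original string so every span stays inside len(text)
    # and the whitespace between tokens is never lost.
    offsets, cursor = [], 0
    for token in tokens:
        start = text.index(token, cursor)  # assumes in-order, verbatim tokens
        end = start + len(token)
        offsets.append((start, end))
        cursor = end
    return offsets

text = "Then came Thierry Mugler."
print(align_tokens(text, ["Then", "came", "Thierry", "Mugler", "."]))
# [(0, 4), (5, 9), (10, 17), (18, 24), (24, 25)]
```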
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,8 +15,8 @@\n 'torch==0.4.1',\n 'gensim==3.4.0',\n 'typing==3.6.4',\n- 'tqdm==4.23.4',\n- 'segtok==1.5.6',\n+ 'tqdm==4.26.0',\n+ 'segtok==1.5.7',\n 'matplotlib==3.0.0',\n 'mpld3==0.3',\n 'sklearn',\n", "issue": "Bug in tokenizer?\nHere's a minimum viable code to reproduce:\r\n\r\n```\r\nfrom flair.data import Sentence\r\nfrom flair.models import SequenceTagger\r\n\r\nmodel = SequenceTagger.load(\"ner-ontonotes-fast\")\r\nfull_text = \"\\\"In the 1960s and 1970s...\\\" Then came Thierry Mugler and Gianni Versace.\"\r\nsentence = Sentence(full_text, use_tokenizer=True)\r\nmodel.predict(sentence)\r\nprint(f\"full text : {full_text}\")\r\nprint(f\"text length: {len(full_text)}\")\r\nprint(\"tag\\tstart\\tend\\tto_original_text()\")\r\nfor entity in sentence.get_spans('ner'):\r\n print(f\"{entity.tag}\\t{entity.start_pos}\\t{entity.end_pos}\\t{entity.to_original_text()}\")\r\n```\r\n\r\nOutput:\r\n\r\n``` $ python predict.py \r\nfull text : \"In the 1960s and 1970s...\" Then came Thierry Mugler and Gianni Versace.\r\ntext length: 72\r\ntag\tstart\tend\tto_original_text()\r\nDATE\t8\t13\t1960s\r\nDATE\t18\t23\t1970s\r\nPERSON\t81\t94\tThierryMugler\r\nPERSON\t97\t110\tGianniVersace\r\n```\r\nSeems the resulting tokens have start_pos and end_pos indexes larger than the real text length. Note also that the method to_original_text() is eating the spaces, so I suppose it is related.\r\n\r\nAny ideas about what is causing the trouble?\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='flair',\n version='0.3.2',\n description='A very simple framework for state-of-the-art NLP',\n long_description=open(\"README.md\", encoding='utf-8').read(),\n long_description_content_type=\"text/markdown\",\n author='Alan Akbik',\n author_email='[email protected]',\n url='https://github.com/zalandoresearch/flair',\n packages=find_packages(exclude='test'), # same as name\n license='MIT',\n install_requires=[\n 'torch==0.4.1',\n 'gensim==3.4.0',\n 'typing==3.6.4',\n 'tqdm==4.23.4',\n 'segtok==1.5.6',\n 'matplotlib==3.0.0',\n 'mpld3==0.3',\n 'sklearn',\n 'sqlitedict==1.6.0',\n 'deprecated==1.2.4',\n ],\n include_package_data=True,\n python_requires='>=3.6',\n)\n", "path": "setup.py"}]} | 1,155 | 143 |
gh_patches_debug_18422 | rasdani/github-patches | git_diff | ansible__awx-7280 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mattermost Notification fails on latest release
##### ISSUE TYPE
- Bug Report
##### SUMMARY
Trying to send a (test) notification to a Mattermost Channel fails with
```
mattermostinfo: Notification failed.
Error sending notification mattermost: {"id":"api.webhook.incoming.error","message":"Could not decode the multipart payload of incoming webhook.","detailed_error":"","request_id":"<request ID>","status_code":400}
```
##### ENVIRONMENT
* AWX version: 11.2.0
* AWX install method: docker on linux
* Ansible version: 2.9.7
* Operating System: CentOS 7.8
* Web Browser: Chrome,Chromium,Firefox
* Mattermost Server Version: 5.22.1
##### STEPS TO REPRODUCE
- Create an incoming webhook
- Create a mattermost notification
- Send a test notification
##### EXPECTED RESULTS
A notification is posted in the channel.
##### ACTUAL RESULTS
Sending failed with above error message
##### ADDITIONAL INFORMATION
The error message in the mattermost log shows
```
{"level":"error","ts":1591342011.6592789,"caller":"mlog/log.go:175","msg":"Could not decode the multipart payload of incoming webhook.","path":"/
hooks/<hook ID>","request_id":"<request ID>","ip_addr":"<IP Address>","user_id":"","method":"POST","err_where":"
incomingWebhook","http_code":400,"err_details":"mime: no media type"}
```
---
edit: some IDs removed from the log samples; Mattermost server version added
</issue>
<code>
[start of awx/main/notifications/mattermost_backend.py]
1 # Copyright (c) 2016 Ansible, Inc.
2 # All Rights Reserved.
3
4 import logging
5 import requests
6 import json
7
8 from django.utils.encoding import smart_text
9 from django.utils.translation import ugettext_lazy as _
10
11 from awx.main.notifications.base import AWXBaseEmailBackend
12 from awx.main.notifications.custom_notification_base import CustomNotificationBase
13
14 logger = logging.getLogger('awx.main.notifications.mattermost_backend')
15
16
17 class MattermostBackend(AWXBaseEmailBackend, CustomNotificationBase):
18
19 init_parameters = {"mattermost_url": {"label": "Target URL", "type": "string"},
20 "mattermost_no_verify_ssl": {"label": "Verify SSL", "type": "bool"}}
21 recipient_parameter = "mattermost_url"
22 sender_parameter = None
23
24 def __init__(self, mattermost_no_verify_ssl=False, mattermost_channel=None, mattermost_username=None,
25 mattermost_icon_url=None, fail_silently=False, **kwargs):
26 super(MattermostBackend, self).__init__(fail_silently=fail_silently)
27 self.mattermost_channel = mattermost_channel
28 self.mattermost_username = mattermost_username
29 self.mattermost_icon_url = mattermost_icon_url
30 self.mattermost_no_verify_ssl = mattermost_no_verify_ssl
31
32 def format_body(self, body):
33 return body
34
35 def send_messages(self, messages):
36 sent_messages = 0
37 for m in messages:
38 payload = {}
39 for opt, optval in {'mattermost_icon_url':'icon_url',
40 'mattermost_channel': 'channel', 'mattermost_username': 'username'}.items():
41 optvalue = getattr(self, opt)
42 if optvalue is not None:
43 payload[optval] = optvalue.strip()
44
45 payload['text'] = m.subject
46
47 r = requests.post("{}".format(m.recipients()[0]),
48 data=json.dumps(payload), verify=(not self.mattermost_no_verify_ssl))
49 if r.status_code >= 400:
50 logger.error(smart_text(_("Error sending notification mattermost: {}").format(r.text)))
51 if not self.fail_silently:
52 raise Exception(smart_text(_("Error sending notification mattermost: {}").format(r.text)))
53 sent_messages += 1
54 return sent_messages
55
[end of awx/main/notifications/mattermost_backend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/awx/main/notifications/mattermost_backend.py b/awx/main/notifications/mattermost_backend.py
--- a/awx/main/notifications/mattermost_backend.py
+++ b/awx/main/notifications/mattermost_backend.py
@@ -3,7 +3,6 @@
import logging
import requests
-import json
from django.utils.encoding import smart_text
from django.utils.translation import ugettext_lazy as _
@@ -45,7 +44,7 @@
payload['text'] = m.subject
r = requests.post("{}".format(m.recipients()[0]),
- data=json.dumps(payload), verify=(not self.mattermost_no_verify_ssl))
+ json=payload, verify=(not self.mattermost_no_verify_ssl))
if r.status_code >= 400:
logger.error(smart_text(_("Error sending notification mattermost: {}").format(r.text)))
if not self.fail_silently:
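
Why the one-line change resolves the 400: `requests.post(url, data=json.dumps(payload))` sends the pre-serialized body with no `Content-Type` header at all, and recent Mattermost releases reject webhook posts without a media type (hence "mime: no media type" in the server log), while `requests.post(url, json=payload)` both serializes the dict and sets `Content-Type: application/json`. A minimal sketch, with a placeholder URL:

```python
import json
import requests

url = "https://mattermost.example.com/hooks/<hook ID>"  # placeholder
payload = {"text": "Notification test", "username": "awx"}

# Rejected by Mattermost >= 5.x: the body arrives without a Content-Type.
requests.post(url, data=json.dumps(payload))

# Accepted: requests serializes the dict and sends Content-Type: application/json.
requests.post(url, json=payload)
```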
| {"golden_diff": "diff --git a/awx/main/notifications/mattermost_backend.py b/awx/main/notifications/mattermost_backend.py\n--- a/awx/main/notifications/mattermost_backend.py\n+++ b/awx/main/notifications/mattermost_backend.py\n@@ -3,7 +3,6 @@\n \n import logging\n import requests\n-import json\n \n from django.utils.encoding import smart_text\n from django.utils.translation import ugettext_lazy as _\n@@ -45,7 +44,7 @@\n payload['text'] = m.subject\n \n r = requests.post(\"{}\".format(m.recipients()[0]),\n- data=json.dumps(payload), verify=(not self.mattermost_no_verify_ssl))\n+ json=payload, verify=(not self.mattermost_no_verify_ssl))\n if r.status_code >= 400:\n logger.error(smart_text(_(\"Error sending notification mattermost: {}\").format(r.text)))\n if not self.fail_silently:\n", "issue": "Mattermost Notification fails on latest release\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### SUMMARY\r\nTrying to send a (test) notification to a Mattermost Channel fails with\r\n```\r\n mattermostinfo: Notification failed.\r\nError sending notification mattermost: {\"id\":\"api.webhook.incoming.error\",\"message\":\"Could not decode the multipart payload of incoming webhook.\",\"detailed_error\":\"\",\"request_id\":\"<request ID>\",\"status_code\":400}\r\n```\r\n##### ENVIRONMENT\r\n* AWX version: 11.2.0\r\n* AWX install method: docker on linux\r\n* Ansible version: 2.9.7\r\n* Operating System: CentOS 7.8\r\n* Web Browser: Chrome,Chromium,Firefox\r\n* Mattermost Server Version: 5.22.1\r\n\r\n##### STEPS TO REPRODUCE\r\n- Create an incomming webhook\r\n- Create a mattermost notification\r\n- Send a test notification\r\n\r\n\r\n##### EXPECTED RESULTS\r\nHaving a notification in the Channel\r\n\r\n\r\n##### ACTUAL RESULTS\r\n\r\nSending failed with above error message\r\n\r\n##### ADDITIONAL INFORMATION\r\n\r\nThe error message in the mattermost log shows\r\n```\r\n{\"level\":\"error\",\"ts\":1591342011.6592789,\"caller\":\"mlog/log.go:175\",\"msg\":\"Could not decode the multipart payload of incoming webhook.\",\"path\":\"/\r\nhooks/<hook ID>\",\"request_id\":\"<request ID>\",\"ip_addr\":\"<IP Address>\",\"user_id\":\"\",\"method\":\"POST\",\"err_where\":\"\r\nincomingWebhook\",\"http_code\":400,\"err_details\":\"mime: no media type\"}\r\n```\r\n---\r\nedit: some ID removed in the log sample, mattermost server version added\n", "before_files": [{"content": "# Copyright (c) 2016 Ansible, Inc.\n# All Rights Reserved.\n\nimport logging\nimport requests\nimport json\n\nfrom django.utils.encoding import smart_text\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom awx.main.notifications.base import AWXBaseEmailBackend\nfrom awx.main.notifications.custom_notification_base import CustomNotificationBase\n\nlogger = logging.getLogger('awx.main.notifications.mattermost_backend')\n\n\nclass MattermostBackend(AWXBaseEmailBackend, CustomNotificationBase):\n\n init_parameters = {\"mattermost_url\": {\"label\": \"Target URL\", \"type\": \"string\"},\n \"mattermost_no_verify_ssl\": {\"label\": \"Verify SSL\", \"type\": \"bool\"}}\n recipient_parameter = \"mattermost_url\"\n sender_parameter = None\n\n def __init__(self, mattermost_no_verify_ssl=False, mattermost_channel=None, mattermost_username=None,\n mattermost_icon_url=None, fail_silently=False, **kwargs):\n super(MattermostBackend, self).__init__(fail_silently=fail_silently)\n self.mattermost_channel = mattermost_channel\n self.mattermost_username = mattermost_username\n self.mattermost_icon_url = mattermost_icon_url\n 
self.mattermost_no_verify_ssl = mattermost_no_verify_ssl\n\n def format_body(self, body):\n return body\n\n def send_messages(self, messages):\n sent_messages = 0\n for m in messages:\n payload = {}\n for opt, optval in {'mattermost_icon_url':'icon_url',\n 'mattermost_channel': 'channel', 'mattermost_username': 'username'}.items():\n optvalue = getattr(self, opt)\n if optvalue is not None:\n payload[optval] = optvalue.strip()\n\n payload['text'] = m.subject\n\n r = requests.post(\"{}\".format(m.recipients()[0]),\n data=json.dumps(payload), verify=(not self.mattermost_no_verify_ssl))\n if r.status_code >= 400:\n logger.error(smart_text(_(\"Error sending notification mattermost: {}\").format(r.text)))\n if not self.fail_silently:\n raise Exception(smart_text(_(\"Error sending notification mattermost: {}\").format(r.text)))\n sent_messages += 1\n return sent_messages\n", "path": "awx/main/notifications/mattermost_backend.py"}]} | 1,504 | 204 |
gh_patches_debug_10731 | rasdani/github-patches | git_diff | litestar-org__litestar-2982 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: openapi schema generation fails for Union of/in msgspec.Struct models
### Description
Hello!
In the latest version(s) (I think this originates from the changes regarding nested models in OpenAPI generation) we cannot use unions of `msgspec.Struct`s anymore, neither as direct return types for routes nor nested within return types.
The result is a 500 Error. The MCVE below raises `'types.UnionType' object has no attribute '__qualname__'` internally. In our production app I get `typing.Union is not a module, class, method, or function.` instead.
Cheers
### URL to code causing the issue
_No response_
### MCVE
```python
import msgspec
import uvicorn
from litestar import Litestar, get
class SubStructA(msgspec.Struct):
a: int
class SubStructB(msgspec.Struct):
a: int
class StructyStruct(msgspec.Struct):
sub: SubStructA | SubStructB
@get("/subunion")
async def testSubUnion() -> StructyStruct:
return StructyStruct(SubStructA(0))
@get("/union")
async def testUnion() -> SubStructA | SubStructB:
return SubStructA(0)
app = Litestar(route_handlers=[testUnion])  # or testSubUnion
uvicorn.run(app)
```
### Steps to reproduce
```bash
Run the example and browse to `localhost:8000/schema`
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.5.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/2971">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/2971/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/2971/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of litestar/_openapi/schema_generation/plugins/struct.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from msgspec import Struct
6 from msgspec.structs import fields
7
8 from litestar.plugins import OpenAPISchemaPlugin
9 from litestar.types.empty import Empty
10 from litestar.typing import FieldDefinition
11 from litestar.utils.predicates import is_optional_union
12
13 if TYPE_CHECKING:
14 from msgspec.structs import FieldInfo
15
16 from litestar._openapi.schema_generation import SchemaCreator
17 from litestar.openapi.spec import Schema
18
19
20 class StructSchemaPlugin(OpenAPISchemaPlugin):
21 def is_plugin_supported_field(self, field_definition: FieldDefinition) -> bool:
22 return field_definition.is_subclass_of(Struct)
23
24 def to_openapi_schema(self, field_definition: FieldDefinition, schema_creator: SchemaCreator) -> Schema:
25 def is_field_required(field: FieldInfo) -> bool:
26 return field.required or field.default_factory is Empty
27
28 type_hints = field_definition.get_type_hints(include_extras=True, resolve_generics=True)
29 struct_fields = fields(field_definition.type_)
30
31 return schema_creator.create_component_schema(
32 field_definition,
33 required=sorted(
34 [
35 field.encode_name
36 for field in struct_fields
37 if is_field_required(field=field) and not is_optional_union(type_hints[field.name])
38 ]
39 ),
40 property_fields={
41 field.encode_name: FieldDefinition.from_kwarg(type_hints[field.name], field.encode_name)
42 for field in struct_fields
43 },
44 )
45
[end of litestar/_openapi/schema_generation/plugins/struct.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/litestar/_openapi/schema_generation/plugins/struct.py b/litestar/_openapi/schema_generation/plugins/struct.py
--- a/litestar/_openapi/schema_generation/plugins/struct.py
+++ b/litestar/_openapi/schema_generation/plugins/struct.py
@@ -19,7 +19,7 @@
class StructSchemaPlugin(OpenAPISchemaPlugin):
def is_plugin_supported_field(self, field_definition: FieldDefinition) -> bool:
- return field_definition.is_subclass_of(Struct)
+ return not field_definition.is_union and field_definition.is_subclass_of(Struct)
def to_openapi_schema(self, field_definition: FieldDefinition, schema_creator: SchemaCreator) -> Schema:
def is_field_required(field: FieldInfo) -> bool:
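
The guard makes sense because a union hint such as `SubStructA | SubStructB` is not a class: it has no `__qualname__` (the exact attribute the traceback names), so treating it as a single `Struct` subclass fails as soon as the generator needs a schema name. With the plugin declining unions, the hint is presumably split into its members first, each of which is a plain `Struct` again. A stand-alone illustration, independent of litestar:

```python
import types
import typing

import msgspec


class SubStructA(msgspec.Struct):
    a: int


class SubStructB(msgspec.Struct):
    a: int


hint = SubStructA | SubStructB
print(isinstance(hint, types.UnionType))  # True: a union object, not a class
print(hasattr(hint, "__qualname__"))      # False: the attribute from the error
for member in typing.get_args(hint):      # (SubStructA, SubStructB)
    print(issubclass(member, msgspec.Struct))  # True for each member
```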
| {"golden_diff": "diff --git a/litestar/_openapi/schema_generation/plugins/struct.py b/litestar/_openapi/schema_generation/plugins/struct.py\n--- a/litestar/_openapi/schema_generation/plugins/struct.py\n+++ b/litestar/_openapi/schema_generation/plugins/struct.py\n@@ -19,7 +19,7 @@\n \n class StructSchemaPlugin(OpenAPISchemaPlugin):\n def is_plugin_supported_field(self, field_definition: FieldDefinition) -> bool:\n- return field_definition.is_subclass_of(Struct)\n+ return not field_definition.is_union and field_definition.is_subclass_of(Struct)\n \n def to_openapi_schema(self, field_definition: FieldDefinition, schema_creator: SchemaCreator) -> Schema:\n def is_field_required(field: FieldInfo) -> bool:\n", "issue": "Bug: openapi schema generation fails for Union of/in msgspec.Struct models\n### Description\r\n\r\nHello!\r\n\r\nIn the latest versions(s) (I think this originates from the changes regarding nested models in openapi generation) we cannot use Unions of `msgspec.Struct`s anymore. Neither as direct return types for routes nor nested within return types. \r\n\r\nThe result is a 500 Error. The MCVE below raises `'types.UnionType' object has no attribute '__qualname__'` internally. In our production app I get `typing.Union is not a module, class, method, or function.` instead.\r\n\r\nCheers\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\nimport msgspec\r\nimport uvicorn\r\nfrom litestar import Litestar, get\r\n\r\n\r\nclass SubStructA(msgspec.Struct):\r\n a: int\r\n\r\n\r\nclass SubStructB(msgspec.Struct):\r\n a: int\r\n\r\n\r\nclass StructyStruct(msgspec.Struct):\r\n sub: SubStructA | SubStructB\r\n\r\n\r\n@get(\"/subunion\")\r\nasync def testSubUnion() -> StructyStruct:\r\n return StructyStruct(SubStructA(0))\r\n\r\n\r\n@get(\"/union\")\r\nasync def testUnion() -> SubStructA | SubStructB:\r\n return SubStructA(0)\r\n\r\n\r\napp = Litestar(route_handlers=[test2]) # or test\r\nuvicorn.run(app)\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n```bash\r\nRun the example and browse to `localhost:8000/schema`\r\n```\r\n\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Litestar Version\r\n\r\n2.5.0\r\n\r\n### Platform\r\n\r\n- [X] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [ ] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n---\r\n> [!NOTE] \r\n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \r\n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\r\n>\r\n> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2971\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2971/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2971/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import 
TYPE_CHECKING\n\nfrom msgspec import Struct\nfrom msgspec.structs import fields\n\nfrom litestar.plugins import OpenAPISchemaPlugin\nfrom litestar.types.empty import Empty\nfrom litestar.typing import FieldDefinition\nfrom litestar.utils.predicates import is_optional_union\n\nif TYPE_CHECKING:\n from msgspec.structs import FieldInfo\n\n from litestar._openapi.schema_generation import SchemaCreator\n from litestar.openapi.spec import Schema\n\n\nclass StructSchemaPlugin(OpenAPISchemaPlugin):\n def is_plugin_supported_field(self, field_definition: FieldDefinition) -> bool:\n return field_definition.is_subclass_of(Struct)\n\n def to_openapi_schema(self, field_definition: FieldDefinition, schema_creator: SchemaCreator) -> Schema:\n def is_field_required(field: FieldInfo) -> bool:\n return field.required or field.default_factory is Empty\n\n type_hints = field_definition.get_type_hints(include_extras=True, resolve_generics=True)\n struct_fields = fields(field_definition.type_)\n\n return schema_creator.create_component_schema(\n field_definition,\n required=sorted(\n [\n field.encode_name\n for field in struct_fields\n if is_field_required(field=field) and not is_optional_union(type_hints[field.name])\n ]\n ),\n property_fields={\n field.encode_name: FieldDefinition.from_kwarg(type_hints[field.name], field.encode_name)\n for field in struct_fields\n },\n )\n", "path": "litestar/_openapi/schema_generation/plugins/struct.py"}]} | 1,606 | 169 |
gh_patches_debug_6714 | rasdani/github-patches | git_diff | open-mmlab__mmocr-570 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Write image name to pickle file
Hi MMOCR team,
Thank you for this awesome framework. I have a task to get the coordinates of bounding boxes from the TextSnake model, so I use the --out argument in test.py to export them to a pickle file. But when I load this pickle, I only get 'boundary_result' and don't know which image each 'boundary_result' belongs to. How can I get the image name written to the pickle file? Thank you.
</issue>
<code>
[start of mmocr/models/textdet/dense_heads/head_mixin.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import numpy as np
3
4 from mmocr.models.builder import HEADS
5 from mmocr.models.textdet.postprocess import decode
6 from mmocr.utils import check_argument
7
8
9 @HEADS.register_module()
10 class HeadMixin:
11 """The head minxin for dbnet and pannet heads."""
12
13 def resize_boundary(self, boundaries, scale_factor):
14 """Rescale boundaries via scale_factor.
15
16 Args:
17 boundaries (list[list[float]]): The boundary list. Each boundary
18 with size 2k+1 with k>=4.
19 scale_factor(ndarray): The scale factor of size (4,).
20
21 Returns:
22 boundaries (list[list[float]]): The scaled boundaries.
23 """
24 assert check_argument.is_2dlist(boundaries)
25 assert isinstance(scale_factor, np.ndarray)
26 assert scale_factor.shape[0] == 4
27
28 for b in boundaries:
29 sz = len(b)
30 check_argument.valid_boundary(b, True)
31 b[:sz -
32 1] = (np.array(b[:sz - 1]) *
33 (np.tile(scale_factor[:2], int(
34 (sz - 1) / 2)).reshape(1, sz - 1))).flatten().tolist()
35 return boundaries
36
37 def get_boundary(self, score_maps, img_metas, rescale):
38 """Compute text boundaries via post processing.
39
40 Args:
41 score_maps (Tensor): The text score map.
42 img_metas (dict): The image meta info.
43 rescale (bool): Rescale boundaries to the original image resolution
44 if true, and keep the score_maps resolution if false.
45
46 Returns:
47 results (dict): The result dict.
48 """
49
50 assert check_argument.is_type_list(img_metas, dict)
51 assert isinstance(rescale, bool)
52
53 score_maps = score_maps.squeeze()
54 boundaries = decode(
55 decoding_type=self.decoding_type,
56 preds=score_maps,
57 text_repr_type=self.text_repr_type)
58 if rescale:
59 boundaries = self.resize_boundary(
60 boundaries,
61 1.0 / self.downsample_ratio / img_metas[0]['scale_factor'])
62 results = dict(boundary_result=boundaries)
63 return results
64
65 def loss(self, pred_maps, **kwargs):
66 """Compute the loss for text detection.
67
68 Args:
69 pred_maps (tensor): The input score maps of NxCxHxW.
70
71 Returns:
72 losses (dict): The dict for losses.
73 """
74 losses = self.loss_module(pred_maps, self.downsample_ratio, **kwargs)
75 return losses
76
[end of mmocr/models/textdet/dense_heads/head_mixin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/mmocr/models/textdet/dense_heads/head_mixin.py b/mmocr/models/textdet/dense_heads/head_mixin.py
--- a/mmocr/models/textdet/dense_heads/head_mixin.py
+++ b/mmocr/models/textdet/dense_heads/head_mixin.py
@@ -59,7 +59,9 @@
boundaries = self.resize_boundary(
boundaries,
1.0 / self.downsample_ratio / img_metas[0]['scale_factor'])
- results = dict(boundary_result=boundaries)
+ results = dict(
+ boundary_result=boundaries, filename=img_metas[0]['filename'])
+
return results
def loss(self, pred_maps, **kwargs):
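
With this patch each per-image result dict carries a `filename` entry next to `boundary_result`, so a pickle written via `test.py --out` can be mapped back to its images. A rough sketch of consuming such a file; the path is a placeholder and the layout assumes the patched `get_boundary` (one dict per image, each boundary of length 2k+1 with the trailing value being the score):

```python
import pickle

with open("results.pkl", "rb") as f:  # placeholder path
    results = pickle.load(f)

for res in results:  # assumed: one dict per test image
    boundaries = res["boundary_result"]
    print(res["filename"], len(boundaries), "text instances")
    for b in boundaries:
        coords, score = b[:-1], b[-1]  # 2k polygon coordinates + confidence
```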
| {"golden_diff": "diff --git a/mmocr/models/textdet/dense_heads/head_mixin.py b/mmocr/models/textdet/dense_heads/head_mixin.py\n--- a/mmocr/models/textdet/dense_heads/head_mixin.py\n+++ b/mmocr/models/textdet/dense_heads/head_mixin.py\n@@ -59,7 +59,9 @@\n boundaries = self.resize_boundary(\n boundaries,\n 1.0 / self.downsample_ratio / img_metas[0]['scale_factor'])\n- results = dict(boundary_result=boundaries)\n+ results = dict(\n+ boundary_result=boundaries, filename=img_metas[0]['filename'])\n+\n return results\n \n def loss(self, pred_maps, **kwargs):\n", "issue": "Write image name to pickle file\nHi MMOCR team,\nThank you for this awesome framework. I have a task to get coordinate of bounding box from Textsnake model, so I use --out argument in test.py to export to a pickle file. But when I load this pickle, I just got \u2018boundary_result\u2019 and don't know this \u2018boundary_result\u2019 belongs to which image. How can I get the image to write to the pickle file? Thank you.\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport numpy as np\n\nfrom mmocr.models.builder import HEADS\nfrom mmocr.models.textdet.postprocess import decode\nfrom mmocr.utils import check_argument\n\n\[email protected]_module()\nclass HeadMixin:\n \"\"\"The head minxin for dbnet and pannet heads.\"\"\"\n\n def resize_boundary(self, boundaries, scale_factor):\n \"\"\"Rescale boundaries via scale_factor.\n\n Args:\n boundaries (list[list[float]]): The boundary list. Each boundary\n with size 2k+1 with k>=4.\n scale_factor(ndarray): The scale factor of size (4,).\n\n Returns:\n boundaries (list[list[float]]): The scaled boundaries.\n \"\"\"\n assert check_argument.is_2dlist(boundaries)\n assert isinstance(scale_factor, np.ndarray)\n assert scale_factor.shape[0] == 4\n\n for b in boundaries:\n sz = len(b)\n check_argument.valid_boundary(b, True)\n b[:sz -\n 1] = (np.array(b[:sz - 1]) *\n (np.tile(scale_factor[:2], int(\n (sz - 1) / 2)).reshape(1, sz - 1))).flatten().tolist()\n return boundaries\n\n def get_boundary(self, score_maps, img_metas, rescale):\n \"\"\"Compute text boundaries via post processing.\n\n Args:\n score_maps (Tensor): The text score map.\n img_metas (dict): The image meta info.\n rescale (bool): Rescale boundaries to the original image resolution\n if true, and keep the score_maps resolution if false.\n\n Returns:\n results (dict): The result dict.\n \"\"\"\n\n assert check_argument.is_type_list(img_metas, dict)\n assert isinstance(rescale, bool)\n\n score_maps = score_maps.squeeze()\n boundaries = decode(\n decoding_type=self.decoding_type,\n preds=score_maps,\n text_repr_type=self.text_repr_type)\n if rescale:\n boundaries = self.resize_boundary(\n boundaries,\n 1.0 / self.downsample_ratio / img_metas[0]['scale_factor'])\n results = dict(boundary_result=boundaries)\n return results\n\n def loss(self, pred_maps, **kwargs):\n \"\"\"Compute the loss for text detection.\n\n Args:\n pred_maps (tensor): The input score maps of NxCxHxW.\n\n Returns:\n losses (dict): The dict for losses.\n \"\"\"\n losses = self.loss_module(pred_maps, self.downsample_ratio, **kwargs)\n return losses\n", "path": "mmocr/models/textdet/dense_heads/head_mixin.py"}]} | 1,349 | 154 |
gh_patches_debug_29188 | rasdani/github-patches | git_diff | nilearn__nilearn-2670 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
check_paradigm should check for invalid keys in passed dict
Using the old `nipy` user logic, I passed `amplitude=somethx` instead of `modulation=somethx` in `make_design_matrix`. It didn't crash, but the values were silently ignored (no "unknown parameter" error or anything similar was raised) and a default value of 1 was forced...
</issue>
<code>
[start of nilearn/glm/first_level/experimental_paradigm.py]
1 """
2 An experimental protocol is handled as a pandas DataFrame
3 that includes an 'onset' field.
4
5 This yields the onset time of the events in the experimental paradigm.
6 It can also contain:
7
8 * a 'trial_type' field that yields the condition identifier.
9 * a 'duration' field that yields event duration (for so-called block
10 paradigms).
11 * a 'modulation' field that associated a scalar value to each event.
12
13 Author: Bertrand Thirion, 2015
14
15 """
16 import warnings
17
18 import numpy as np
19
20
21 def check_events(events):
22 """Test that the events data describes a valid experimental paradigm
23
24 It is valid if the events data has an 'onset' key.
25
26 Parameters
27 ----------
28 events : pandas DataFrame
29 Events data that describes a functional experimental paradigm.
30
31 Returns
32 -------
33 trial_type : array of shape (n_events,), dtype='s'
34 Per-event experimental conditions identifier.
35 Defaults to np.repeat('dummy', len(onsets)).
36
37 onset : array of shape (n_events,), dtype='f'
38 Per-event onset time (in seconds)
39
40 duration : array of shape (n_events,), dtype='f'
41         Per-event duration, (in seconds)
42 defaults to zeros(n_events) when no duration is provided
43
44 modulation : array of shape (n_events,), dtype='f'
45 Per-event modulation, (in seconds)
46 defaults to ones(n_events) when no duration is provided.
47
48 """
49 if 'onset' not in events.keys():
50 raise ValueError('The provided events data has no onset column.')
51 if 'duration' not in events.keys():
52 raise ValueError('The provided events data has no duration column.')
53
54 onset = np.array(events['onset'])
55 duration = np.array(events['duration']).astype(np.float)
56 n_events = len(onset)
57 trial_type = np.array(events['trial_type'])
58 modulation = np.ones(n_events)
59 if 'trial_type' not in events.keys():
60 warnings.warn("'trial_type' column not found "
61 "in the given events data.")
62 trial_type = np.repeat('dummy', n_events)
63 if 'modulation' in events.keys():
64 warnings.warn("'modulation' column found in the given events data.")
65 modulation = np.array(events['modulation']).astype(np.float)
66 return trial_type, onset, duration, modulation
67
[end of nilearn/glm/first_level/experimental_paradigm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/nilearn/glm/first_level/experimental_paradigm.py b/nilearn/glm/first_level/experimental_paradigm.py
--- a/nilearn/glm/first_level/experimental_paradigm.py
+++ b/nilearn/glm/first_level/experimental_paradigm.py
@@ -17,6 +17,11 @@
import numpy as np
+VALID_FIELDS = set(["onset",
+ "duration",
+ "trial_type",
+ "modulation",
+ ])
def check_events(events):
"""Test that the events data describes a valid experimental paradigm
@@ -54,13 +59,19 @@
onset = np.array(events['onset'])
duration = np.array(events['duration']).astype(np.float)
n_events = len(onset)
- trial_type = np.array(events['trial_type'])
modulation = np.ones(n_events)
if 'trial_type' not in events.keys():
warnings.warn("'trial_type' column not found "
"in the given events data.")
trial_type = np.repeat('dummy', n_events)
+ else:
+ trial_type = np.array(events['trial_type'])
if 'modulation' in events.keys():
warnings.warn("'modulation' column found in the given events data.")
modulation = np.array(events['modulation']).astype(np.float)
+ for event,_ in events.items():
+ if event not in VALID_FIELDS:
+ warnings.warn("Unexpected key `{}` in events "
+ "will be ignored.".format(
+ event))
return trial_type, onset, duration, modulation
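
The patch addresses the report by whitelisting the recognized column names and warning about anything else, so a typo such as `amplitude` is no longer dropped silently; it also fixes a latent crash by reading `trial_type` only after confirming the column exists. A small sketch of the warning behaviour, using the same column whitelist (the DataFrame itself is illustrative):

```python
import warnings

import pandas as pd

VALID_FIELDS = {"onset", "duration", "trial_type", "modulation"}

events = pd.DataFrame({
    "onset": [0.0, 10.0],
    "duration": [1.0, 1.0],
    "amplitude": [1.0, 2.0],  # typo for "modulation"
})

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    for column, _values in events.items():
        if column not in VALID_FIELDS:
            warnings.warn("Unexpected key `{}` in events "
                          "will be ignored.".format(column))

print([str(w.message) for w in caught])
# ['Unexpected key `amplitude` in events will be ignored.']
```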
| {"golden_diff": "diff --git a/nilearn/glm/first_level/experimental_paradigm.py b/nilearn/glm/first_level/experimental_paradigm.py\n--- a/nilearn/glm/first_level/experimental_paradigm.py\n+++ b/nilearn/glm/first_level/experimental_paradigm.py\n@@ -17,6 +17,11 @@\n \n import numpy as np\n \n+VALID_FIELDS = set([\"onset\",\n+ \"duration\",\n+ \"trial_type\",\n+ \"modulation\",\n+ ])\n \n def check_events(events):\n \"\"\"Test that the events data describes a valid experimental paradigm\n@@ -54,13 +59,19 @@\n onset = np.array(events['onset'])\n duration = np.array(events['duration']).astype(np.float)\n n_events = len(onset)\n- trial_type = np.array(events['trial_type'])\n modulation = np.ones(n_events)\n if 'trial_type' not in events.keys():\n warnings.warn(\"'trial_type' column not found \"\n \"in the given events data.\")\n trial_type = np.repeat('dummy', n_events)\n+ else:\n+ trial_type = np.array(events['trial_type'])\n if 'modulation' in events.keys():\n warnings.warn(\"'modulation' column found in the given events data.\")\n modulation = np.array(events['modulation']).astype(np.float)\n+ for event,_ in events.items():\n+ if event not in VALID_FIELDS:\n+ warnings.warn(\"Unexpected key `{}` in events \"\n+ \"will be ignored.\".format(\n+ event))\n return trial_type, onset, duration, modulation\n", "issue": "check_paradigm should check for invalid keys in passed dict\nUsing the old `nipy` user logic, I passed `amplitude=somethx` instead of `modulation=somethx` in the `make_design_matrix`. I didn't crash but the values where ignored (e.g Error: unknown param, etc.). A default value of 1 was forced...\n\n", "before_files": [{"content": "\"\"\"\nAn experimental protocol is handled as a pandas DataFrame\nthat includes an 'onset' field.\n\nThis yields the onset time of the events in the experimental paradigm.\nIt can also contain:\n\n * a 'trial_type' field that yields the condition identifier.\n * a 'duration' field that yields event duration (for so-called block\n paradigms).\n * a 'modulation' field that associated a scalar value to each event.\n\nAuthor: Bertrand Thirion, 2015\n\n\"\"\"\nimport warnings\n\nimport numpy as np\n\n\ndef check_events(events):\n \"\"\"Test that the events data describes a valid experimental paradigm\n\n It is valid if the events data has an 'onset' key.\n\n Parameters\n ----------\n events : pandas DataFrame\n Events data that describes a functional experimental paradigm.\n\n Returns\n -------\n trial_type : array of shape (n_events,), dtype='s'\n Per-event experimental conditions identifier.\n Defaults to np.repeat('dummy', len(onsets)).\n\n onset : array of shape (n_events,), dtype='f'\n Per-event onset time (in seconds)\n\n duration : array of shape (n_events,), dtype='f'\n Per-event durantion, (in seconds)\n defaults to zeros(n_events) when no duration is provided\n\n modulation : array of shape (n_events,), dtype='f'\n Per-event modulation, (in seconds)\n defaults to ones(n_events) when no duration is provided.\n\n \"\"\"\n if 'onset' not in events.keys():\n raise ValueError('The provided events data has no onset column.')\n if 'duration' not in events.keys():\n raise ValueError('The provided events data has no duration column.')\n\n onset = np.array(events['onset'])\n duration = np.array(events['duration']).astype(np.float)\n n_events = len(onset)\n trial_type = np.array(events['trial_type'])\n modulation = np.ones(n_events)\n if 'trial_type' not in events.keys():\n warnings.warn(\"'trial_type' column not found \"\n \"in the given events data.\")\n 
trial_type = np.repeat('dummy', n_events)\n if 'modulation' in events.keys():\n warnings.warn(\"'modulation' column found in the given events data.\")\n modulation = np.array(events['modulation']).astype(np.float)\n return trial_type, onset, duration, modulation\n", "path": "nilearn/glm/first_level/experimental_paradigm.py"}]} | 1,269 | 352 |
gh_patches_debug_25318 | rasdani/github-patches | git_diff | getsentry__sentry-python-484 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Celery - Queue object has no attribute 'all_tasks_done'
Hi all,
I'm integrating Sentry into a Python project that uses Celery, and I'm getting this error when shutting down the worker:
```
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/jibanez/API/.conda/envs/cimrender/lib/python3.6/site-packages/sentry_sdk/worker.py", line 84, in flush
self._wait_flush(timeout, callback)
File "/Users/jibanez/API/.conda/envs/cimrender/lib/python3.6/site-packages/sentry_sdk/worker.py", line 90, in _wait_flush
if not self._timed_queue_join(initial_timeout):
File "/Users/jibanez/API/.conda/envs/cimrender/lib/python3.6/site-packages/sentry_sdk/worker.py", line 48, in _timed_queue_join
queue.all_tasks_done.acquire() # type: ignore
AttributeError: 'Queue' object has no attribute 'all_tasks_done'
```
I'm using:
- Python 3.6
- Celery 4.3.0
- OSX Mojave
Any thoughts?
</issue>
<code>
[start of sentry_sdk/worker.py]
1 import os
2
3 from threading import Thread, Lock
4 from time import sleep, time
5 from sentry_sdk._compat import queue, check_thread_support
6 from sentry_sdk.utils import logger
7
8
9 from sentry_sdk._types import MYPY
10
11 if MYPY:
12 from queue import Queue
13 from typing import Any
14 from typing import Optional
15 from typing import Callable
16
17
18 _TERMINATOR = object()
19
20
21 class BackgroundWorker(object):
22 def __init__(self):
23 # type: () -> None
24 check_thread_support()
25 self._queue = queue.Queue(-1) # type: Queue[Any]
26 self._lock = Lock()
27 self._thread = None # type: Optional[Thread]
28 self._thread_for_pid = None # type: Optional[int]
29
30 @property
31 def is_alive(self):
32 # type: () -> bool
33 if self._thread_for_pid != os.getpid():
34 return False
35 if not self._thread:
36 return False
37 return self._thread.is_alive()
38
39 def _ensure_thread(self):
40 # type: () -> None
41 if not self.is_alive:
42 self.start()
43
44 def _timed_queue_join(self, timeout):
45 # type: (float) -> bool
46 deadline = time() + timeout
47 queue = self._queue
48 queue.all_tasks_done.acquire() # type: ignore
49 try:
50 while queue.unfinished_tasks: # type: ignore
51 delay = deadline - time()
52 if delay <= 0:
53 return False
54 queue.all_tasks_done.wait(timeout=delay) # type: ignore
55 return True
56 finally:
57 queue.all_tasks_done.release() # type: ignore
58
59 def start(self):
60 # type: () -> None
61 with self._lock:
62 if not self.is_alive:
63 self._thread = Thread(
64 target=self._target, name="raven-sentry.BackgroundWorker"
65 )
66 self._thread.setDaemon(True)
67 self._thread.start()
68 self._thread_for_pid = os.getpid()
69
70 def kill(self):
71 # type: () -> None
72 logger.debug("background worker got kill request")
73 with self._lock:
74 if self._thread:
75 self._queue.put_nowait(_TERMINATOR)
76 self._thread = None
77 self._thread_for_pid = None
78
79 def flush(self, timeout, callback=None):
80 # type: (float, Optional[Any]) -> None
81 logger.debug("background worker got flush request")
82 with self._lock:
83 if self.is_alive and timeout > 0.0:
84 self._wait_flush(timeout, callback)
85 logger.debug("background worker flushed")
86
87 def _wait_flush(self, timeout, callback):
88 # type: (float, Optional[Any]) -> None
89 initial_timeout = min(0.1, timeout)
90 if not self._timed_queue_join(initial_timeout):
91 pending = self._queue.qsize()
92 logger.debug("%d event(s) pending on flush", pending)
93 if callback is not None:
94 callback(pending, timeout)
95 self._timed_queue_join(timeout - initial_timeout)
96
97 def submit(self, callback):
98 # type: (Callable[[], None]) -> None
99 self._ensure_thread()
100 self._queue.put_nowait(callback)
101
102 def _target(self):
103 # type: () -> None
104 while True:
105 callback = self._queue.get()
106 try:
107 if callback is _TERMINATOR:
108 break
109 try:
110 callback()
111 except Exception:
112 logger.error("Failed processing job", exc_info=True)
113 finally:
114 self._queue.task_done()
115 sleep(0)
116
[end of sentry_sdk/worker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/sentry_sdk/worker.py b/sentry_sdk/worker.py
--- a/sentry_sdk/worker.py
+++ b/sentry_sdk/worker.py
@@ -45,16 +45,33 @@
# type: (float) -> bool
deadline = time() + timeout
queue = self._queue
- queue.all_tasks_done.acquire() # type: ignore
+
+ real_all_tasks_done = getattr(
+ queue, "all_tasks_done", None
+ ) # type: Optional[Any]
+ if real_all_tasks_done is not None:
+ real_all_tasks_done.acquire()
+ all_tasks_done = real_all_tasks_done # type: Optional[Any]
+ elif queue.__module__.startswith("eventlet."):
+ all_tasks_done = getattr(queue, "_cond", None)
+ else:
+ all_tasks_done = None
+
try:
while queue.unfinished_tasks: # type: ignore
delay = deadline - time()
if delay <= 0:
return False
- queue.all_tasks_done.wait(timeout=delay) # type: ignore
+ if all_tasks_done is not None:
+ all_tasks_done.wait(timeout=delay)
+ else:
+ # worst case, we just poll the number of remaining tasks
+ sleep(0.1)
+
return True
finally:
- queue.all_tasks_done.release() # type: ignore
+ if real_all_tasks_done is not None:
+ real_all_tasks_done.release() # type: ignore
def start(self):
# type: () -> None
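
The defensive rewrite is needed because an eventlet (or similar green-thread) worker pool can monkey-patch `queue.Queue` with an implementation that lacks CPython's `all_tasks_done` condition variable, which is exactly what the traceback shows; probing with `getattr` and falling back to eventlet's private `_cond`, or to plain polling, keeps `flush()` working either way. A minimal illustration of the probe, with a stand-in class of mine instead of a real green queue:

```python
from queue import Queue


class GreenishQueue:
    unfinished_tasks = 0  # quacks like a Queue, but has no all_tasks_done


for q in (Queue(), GreenishQueue()):
    cond = getattr(q, "all_tasks_done", None)
    if cond is not None:
        with cond:  # normal CPython path: wait on the condition variable
            print(type(q).__name__, "-> condition variable available")
    else:
        print(type(q).__name__, "-> fall back to polling unfinished_tasks")
```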
| {"golden_diff": "diff --git a/sentry_sdk/worker.py b/sentry_sdk/worker.py\n--- a/sentry_sdk/worker.py\n+++ b/sentry_sdk/worker.py\n@@ -45,16 +45,33 @@\n # type: (float) -> bool\n deadline = time() + timeout\n queue = self._queue\n- queue.all_tasks_done.acquire() # type: ignore\n+\n+ real_all_tasks_done = getattr(\n+ queue, \"all_tasks_done\", None\n+ ) # type: Optional[Any]\n+ if real_all_tasks_done is not None:\n+ real_all_tasks_done.acquire()\n+ all_tasks_done = real_all_tasks_done # type: Optional[Any]\n+ elif queue.__module__.startswith(\"eventlet.\"):\n+ all_tasks_done = getattr(queue, \"_cond\", None)\n+ else:\n+ all_tasks_done = None\n+\n try:\n while queue.unfinished_tasks: # type: ignore\n delay = deadline - time()\n if delay <= 0:\n return False\n- queue.all_tasks_done.wait(timeout=delay) # type: ignore\n+ if all_tasks_done is not None:\n+ all_tasks_done.wait(timeout=delay)\n+ else:\n+ # worst case, we just poll the number of remaining tasks\n+ sleep(0.1)\n+\n return True\n finally:\n- queue.all_tasks_done.release() # type: ignore\n+ if real_all_tasks_done is not None:\n+ real_all_tasks_done.release() # type: ignore\n \n def start(self):\n # type: () -> None\n", "issue": "Celery - Queue object has no attribute 'all_tasks_done'\nHi all, \r\n\r\nI'm integrating Sentry on a project in python that uses Celery. I'm getting this error when shutting down the worker: \r\n\r\n```\r\nError in atexit._run_exitfuncs:\r\nTraceback (most recent call last):\r\n File \"/Users/jibanez/API/.conda/envs/cimrender/lib/python3.6/site-packages/sentry_sdk/worker.py\", line 84, in flush\r\n self._wait_flush(timeout, callback)\r\n File \"/Users/jibanez/API/.conda/envs/cimrender/lib/python3.6/site-packages/sentry_sdk/worker.py\", line 90, in _wait_flush\r\n if not self._timed_queue_join(initial_timeout):\r\n File \"/Users/jibanez/API/.conda/envs/cimrender/lib/python3.6/site-packages/sentry_sdk/worker.py\", line 48, in _timed_queue_join\r\n queue.all_tasks_done.acquire() # type: ignore\r\nAttributeError: 'Queue' object has no attribute 'all_tasks_done'\r\n```\r\n\r\nI'm using: \r\n- Python 3.6\r\n- Celery 4.3.0\r\n- OSX Mojave\r\n\r\nAny thoughts? 
\n", "before_files": [{"content": "import os\n\nfrom threading import Thread, Lock\nfrom time import sleep, time\nfrom sentry_sdk._compat import queue, check_thread_support\nfrom sentry_sdk.utils import logger\n\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from queue import Queue\n from typing import Any\n from typing import Optional\n from typing import Callable\n\n\n_TERMINATOR = object()\n\n\nclass BackgroundWorker(object):\n def __init__(self):\n # type: () -> None\n check_thread_support()\n self._queue = queue.Queue(-1) # type: Queue[Any]\n self._lock = Lock()\n self._thread = None # type: Optional[Thread]\n self._thread_for_pid = None # type: Optional[int]\n\n @property\n def is_alive(self):\n # type: () -> bool\n if self._thread_for_pid != os.getpid():\n return False\n if not self._thread:\n return False\n return self._thread.is_alive()\n\n def _ensure_thread(self):\n # type: () -> None\n if not self.is_alive:\n self.start()\n\n def _timed_queue_join(self, timeout):\n # type: (float) -> bool\n deadline = time() + timeout\n queue = self._queue\n queue.all_tasks_done.acquire() # type: ignore\n try:\n while queue.unfinished_tasks: # type: ignore\n delay = deadline - time()\n if delay <= 0:\n return False\n queue.all_tasks_done.wait(timeout=delay) # type: ignore\n return True\n finally:\n queue.all_tasks_done.release() # type: ignore\n\n def start(self):\n # type: () -> None\n with self._lock:\n if not self.is_alive:\n self._thread = Thread(\n target=self._target, name=\"raven-sentry.BackgroundWorker\"\n )\n self._thread.setDaemon(True)\n self._thread.start()\n self._thread_for_pid = os.getpid()\n\n def kill(self):\n # type: () -> None\n logger.debug(\"background worker got kill request\")\n with self._lock:\n if self._thread:\n self._queue.put_nowait(_TERMINATOR)\n self._thread = None\n self._thread_for_pid = None\n\n def flush(self, timeout, callback=None):\n # type: (float, Optional[Any]) -> None\n logger.debug(\"background worker got flush request\")\n with self._lock:\n if self.is_alive and timeout > 0.0:\n self._wait_flush(timeout, callback)\n logger.debug(\"background worker flushed\")\n\n def _wait_flush(self, timeout, callback):\n # type: (float, Optional[Any]) -> None\n initial_timeout = min(0.1, timeout)\n if not self._timed_queue_join(initial_timeout):\n pending = self._queue.qsize()\n logger.debug(\"%d event(s) pending on flush\", pending)\n if callback is not None:\n callback(pending, timeout)\n self._timed_queue_join(timeout - initial_timeout)\n\n def submit(self, callback):\n # type: (Callable[[], None]) -> None\n self._ensure_thread()\n self._queue.put_nowait(callback)\n\n def _target(self):\n # type: () -> None\n while True:\n callback = self._queue.get()\n try:\n if callback is _TERMINATOR:\n break\n try:\n callback()\n except Exception:\n logger.error(\"Failed processing job\", exc_info=True)\n finally:\n self._queue.task_done()\n sleep(0)\n", "path": "sentry_sdk/worker.py"}]} | 1,838 | 355 |
gh_patches_debug_12533 | rasdani/github-patches | git_diff | getnikola__nikola-2108 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
output/assets/css/code.css is orphaned?
```
~/blog$ nikola build
Scanning posts................done!
copy_assets:output/assets/css/base.css
Scanning posts................done!
~/blog$
~/blog$ nikola build
Scanning posts................done!
~/blog$ nikola check -f
Scanning posts................done!
WARNING: check: Files from unknown origins (orphans):
WARNING: check: output/assets/css/code.css
~/blog$ nikola build
Scanning posts................done!
copy_assets:output/assets/css/base.css
~/blog$ nikola check -f
Scanning posts................done!
WARNING: check: Files from unknown origins (orphans):
WARNING: check: output/assets/css/code.css
```
</issue>
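For context (the accepted patch for this record is not included in this excerpt): `nikola check -f` compares everything under `output/` against the union of targets declared by the build tasks and reports the rest as orphans, and `assets/css/code.css` is special because `copy_assets` generates it on the fly whenever the theme chain ships no `code.css` of its own, so the generated file has to be registered as a target too. A rough sketch of the orphan-check idea; the function name and signature are illustrative, not Nikola's internals:

```python
import os


def find_orphans(output_folder, known_targets):
    # Anything under output/ that no task claims as a target is an orphan.
    orphans = []
    for root, _dirs, files in os.walk(output_folder):
        for name in files:
            path = os.path.join(root, name)
            if path not in known_targets:
                orphans.append(path)
    return orphans
```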
<code>
[start of nikola/plugins/task/copy_assets.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Copy theme assets into output."""
28
29 from __future__ import unicode_literals
30
31 import io
32 import os
33
34 from nikola.plugin_categories import Task
35 from nikola import utils
36
37
38 class CopyAssets(Task):
39
40 """Copy theme assets into output."""
41
42 name = "copy_assets"
43
44 def gen_tasks(self):
45 """Create tasks to copy the assets of the whole theme chain.
46
47 If a file is present on two themes, use the version
48 from the "youngest" theme.
49 """
50 kw = {
51 "themes": self.site.THEMES,
52 "files_folders": self.site.config['FILES_FOLDERS'],
53 "output_folder": self.site.config['OUTPUT_FOLDER'],
54 "filters": self.site.config['FILTERS'],
55 "code_color_scheme": self.site.config['CODE_COLOR_SCHEME'],
56 "code.css_selectors": 'pre.code',
57 "code.css_head": '/* code.css file generated by Nikola */\n',
58 "code.css_close": "\ntable.codetable { width: 100%;} td.linenos {text-align: right; width: 4em;}\n",
59 }
60 tasks = {}
61 code_css_path = os.path.join(kw['output_folder'], 'assets', 'css', 'code.css')
62 code_css_input = utils.get_asset_path('assets/css/code.css',
63 themes=kw['themes'],
64 files_folders=kw['files_folders'])
65
66 kw["code.css_input"] = code_css_input
67
68 yield self.group_task()
69
70 for theme_name in kw['themes']:
71 src = os.path.join(utils.get_theme_path(theme_name), 'assets')
72 dst = os.path.join(kw['output_folder'], 'assets')
73 for task in utils.copy_tree(src, dst):
74 if task['name'] in tasks:
75 continue
76 tasks[task['name']] = task
77 task['uptodate'] = [utils.config_changed(kw, 'nikola.plugins.task.copy_assets')]
78 task['basename'] = self.name
79 if code_css_input:
80 if 'file_dep' not in task:
81 task['file_dep'] = []
82 task['file_dep'].append(code_css_input)
83 yield utils.apply_filters(task, kw['filters'])
84
85 # Check whether or not there is a code.css file around.
86 if not code_css_input:
87 def create_code_css():
88 from pygments.formatters import get_formatter_by_name
89 formatter = get_formatter_by_name('html', style=kw["code_color_scheme"])
90 utils.makedirs(os.path.dirname(code_css_path))
91 with io.open(code_css_path, 'w+', encoding='utf8') as outf:
92 outf.write(kw["code.css_head"])
93 outf.write(formatter.get_style_defs(kw["code.css_selectors"]))
94 outf.write(kw["code.css_close"])
95
96 if os.path.exists(code_css_path):
97 with io.open(code_css_path, 'r', encoding='utf-8') as fh:
98 testcontents = fh.read(len(kw["code.css_head"])) == kw["code.css_head"]
99 else:
100 testcontents = False
101
102 task = {
103 'basename': self.name,
104 'name': code_css_path,
105 'targets': [code_css_path],
106 'uptodate': [utils.config_changed(kw, 'nikola.plugins.task.copy_assets'), testcontents],
107 'actions': [(create_code_css, [])],
108 'clean': True,
109 }
110 yield utils.apply_filters(task, kw['filters'])
111
[end of nikola/plugins/task/copy_assets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/task/copy_assets.py b/nikola/plugins/task/copy_assets.py
--- a/nikola/plugins/task/copy_assets.py
+++ b/nikola/plugins/task/copy_assets.py
@@ -61,10 +61,7 @@
code_css_path = os.path.join(kw['output_folder'], 'assets', 'css', 'code.css')
code_css_input = utils.get_asset_path('assets/css/code.css',
themes=kw['themes'],
- files_folders=kw['files_folders'])
-
- kw["code.css_input"] = code_css_input
-
+ files_folders=kw['files_folders'], output_dir=None)
yield self.group_task()
for theme_name in kw['themes']:
| {"golden_diff": "diff --git a/nikola/plugins/task/copy_assets.py b/nikola/plugins/task/copy_assets.py\n--- a/nikola/plugins/task/copy_assets.py\n+++ b/nikola/plugins/task/copy_assets.py\n@@ -61,10 +61,7 @@\n code_css_path = os.path.join(kw['output_folder'], 'assets', 'css', 'code.css')\n code_css_input = utils.get_asset_path('assets/css/code.css',\n themes=kw['themes'],\n- files_folders=kw['files_folders'])\n-\n- kw[\"code.css_input\"] = code_css_input\n-\n+ files_folders=kw['files_folders'], output_dir=None)\n yield self.group_task()\n \n for theme_name in kw['themes']:\n", "issue": "output/assets/css/code.css is orphaned?\n```\n~/blog$ nikola build\nScanning posts................done!\ncopy_assets:output/assets/css/base.css\nScanning posts................done!\n~/blog$ \n~/blog$ nikola build\nScanning posts................done!\n~/blog$ nikola check -f\nScanning posts................done!\nWARNING: check: Files from unknown origins (orphans):\nWARNING: check: output/assets/css/code.css\n~/blog$ nikola build\nScanning posts................done!\ncopy_assets:output/assets/css/base.css\n~/blog$ nikola check -f\nScanning posts................done!\nWARNING: check: Files from unknown origins (orphans):\nWARNING: check: output/assets/css/code.css\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Copy theme assets into output.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport io\nimport os\n\nfrom nikola.plugin_categories import Task\nfrom nikola import utils\n\n\nclass CopyAssets(Task):\n\n \"\"\"Copy theme assets into output.\"\"\"\n\n name = \"copy_assets\"\n\n def gen_tasks(self):\n \"\"\"Create tasks to copy the assets of the whole theme chain.\n\n If a file is present on two themes, use the version\n from the \"youngest\" theme.\n \"\"\"\n kw = {\n \"themes\": self.site.THEMES,\n \"files_folders\": self.site.config['FILES_FOLDERS'],\n \"output_folder\": self.site.config['OUTPUT_FOLDER'],\n \"filters\": self.site.config['FILTERS'],\n \"code_color_scheme\": self.site.config['CODE_COLOR_SCHEME'],\n \"code.css_selectors\": 'pre.code',\n \"code.css_head\": '/* code.css file generated by Nikola */\\n',\n \"code.css_close\": \"\\ntable.codetable { width: 100%;} td.linenos {text-align: right; width: 4em;}\\n\",\n }\n tasks = {}\n code_css_path = os.path.join(kw['output_folder'], 'assets', 'css', 'code.css')\n code_css_input = utils.get_asset_path('assets/css/code.css',\n themes=kw['themes'],\n files_folders=kw['files_folders'])\n\n kw[\"code.css_input\"] = code_css_input\n\n yield self.group_task()\n\n for theme_name in kw['themes']:\n src = os.path.join(utils.get_theme_path(theme_name), 'assets')\n dst = os.path.join(kw['output_folder'], 'assets')\n for task in utils.copy_tree(src, dst):\n if task['name'] in tasks:\n continue\n tasks[task['name']] = task\n task['uptodate'] = [utils.config_changed(kw, 'nikola.plugins.task.copy_assets')]\n task['basename'] = self.name\n if code_css_input:\n if 'file_dep' not in task:\n task['file_dep'] = []\n task['file_dep'].append(code_css_input)\n yield utils.apply_filters(task, kw['filters'])\n\n # Check whether or not there is a code.css file around.\n if not code_css_input:\n def create_code_css():\n from pygments.formatters import get_formatter_by_name\n formatter = get_formatter_by_name('html', style=kw[\"code_color_scheme\"])\n utils.makedirs(os.path.dirname(code_css_path))\n with io.open(code_css_path, 'w+', encoding='utf8') as outf:\n outf.write(kw[\"code.css_head\"])\n outf.write(formatter.get_style_defs(kw[\"code.css_selectors\"]))\n outf.write(kw[\"code.css_close\"])\n\n if os.path.exists(code_css_path):\n with io.open(code_css_path, 'r', encoding='utf-8') as fh:\n testcontents = fh.read(len(kw[\"code.css_head\"])) == kw[\"code.css_head\"]\n else:\n testcontents = False\n\n task = {\n 'basename': self.name,\n 'name': code_css_path,\n 'targets': [code_css_path],\n 'uptodate': [utils.config_changed(kw, 'nikola.plugins.task.copy_assets'), testcontents],\n 'actions': [(create_code_css, [])],\n 'clean': True,\n }\n yield utils.apply_filters(task, kw['filters'])\n", "path": "nikola/plugins/task/copy_assets.py"}]} | 1,918 | 163 |
gh_patches_debug_37168 | rasdani/github-patches | git_diff | conan-io__conan-center-index-2696 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[request] sentry-native/0.4.1
### Package Details
* Package Name/Version: **sentry-native/0.4.1**
* Changelog: **https://github.com/getsentry/sentry-native/blob/0.4.1/CHANGELOG.md**
https://github.com/getsentry/sentry-native/tree/0.4.1
The above-mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.
Also, **please add Windows support.**
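For reference, a rough sketch of the recipe change Windows support would likely require (illustrative only; the `0.4` version gate is an assumption based on the upstream changelog):
```python
def configure(self):
    # sentry-native 0.4.x is assumed to support the in-process backend on
    # Windows, so only reject the combination for older recipe versions.
    if (self.options.backend == "inproc" and self.settings.os == "Windows"
            and tools.Version(self.version) < "0.4"):
        raise ConanInvalidConfiguration(
            "The in-process backend is not supported on Windows")
```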
</issue>
<code>
[start of recipes/sentry-native/all/conanfile.py]
1 import os
2 from conans import ConanFile, CMake, tools
3 from conans.errors import ConanInvalidConfiguration
4
5
6 class SentryNativeConan(ConanFile):
7 name = "sentry-native"
8 description = "The Sentry Native SDK is an error and crash reporting client for native applications,\n" \
9 "optimized for C and C++. Sentry allows to add tags,\n" \
10 "breadcrumbs and arbitrary custom context to enrich error reports."
11 url = "https://github.com/conan-io/conan-center-index"
12 homepage = "https://github.com/getsentry/sentry-native"
13 license = "MIT"
14 topics = ("conan", "breakpad", "crashpad",
15 "error-reporting", "crash-reporting")
16 exports_sources = ["CMakeLists.txt"]
17 generators = "cmake", "cmake_find_package"
18 settings = "os", "arch", "compiler", "build_type"
19 options = {
20 "shared": [True, False],
21 "fPIC": [True, False],
22 "backend": ["none", "inproc", "crashpad", "breakpad"],
23 "transport": ["none", "curl", "winhttp"],
24 }
25 default_options = {
26 "shared": False,
27 "fPIC": True,
28 "backend": "inproc",
29 "transport": "curl"
30 }
31
32 @property
33 def _source_subfolder(self):
34 return "source_subfolder"
35
36 _cmake = None
37
38 def requirements(self):
39 if self.options.transport == "curl":
40 self.requires("libcurl/7.68.0")
41
42 if self.options.backend == "crashpad":
43 raise ConanInvalidConfiguration("crashpad not available yet in CCI")
44 if self.options.backend == "breakpad":
45 raise ConanInvalidConfiguration("breakpad not available yet in CCI")
46
47 def config_options(self):
48 if self.settings.os == "Windows":
49 del self.options.fPIC
50
51 def source(self):
52 tools.get(**self.conan_data["sources"][self.version])
53 extracted_dir = self.name + "-" + self.version
54 os.rename(extracted_dir, self._source_subfolder)
55
56 def configure(self):
57 if self.options.backend == "inproc" and self.settings.os == "Windows":
58 raise ConanInvalidConfiguration("The in-process backend is not supported on Windows")
59
60 def _configure_cmake(self):
61 if self._cmake:
62 return self._cmake
63 self._cmake = CMake(self)
64 self._cmake.definitions["SENTRY_BACKEND"] = self.options.backend
65 self._cmake.definitions["SENTRY_ENABLE_INSTALL"] = True
66 self._cmake.definitions["SENTRY_TRANSPORT"] = self.options.transport
67 self._cmake.definitions["SENTRY_PIC"] = self.options.get_safe("fPIC", False)
68 self._cmake.configure()
69 return self._cmake
70
71 def build(self):
72 cmake = self._configure_cmake()
73 cmake.build()
74
75 def package(self):
76 self.copy("LICENSE", dst="licenses", src=self._source_subfolder)
77 cmake = self._configure_cmake()
78 cmake.install()
79 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
80
81 def package_info(self):
82 self.cpp_info.libs = ["sentry"]
83 if self.settings.os in ("Android", "Windows"):
84 self.cpp_info.exelinkflags= ["--build-id=sha1"]
85 self.cpp_info.sharedlinkflags = ["--build-id=sha1"]
86 if self.settings.os == "Linux":
87 self.cpp_info.system_libs = ["pthread", "dl"]
88 elif self.settings.os == "Windows":
89 self.cpp_info.system_libs = ["winhttp", "dbghelp", "pathcch"]
90
91 if not self.options.shared:
92 self.cpp_info.defines = ["SENTRY_BUILD_STATIC"]
93
[end of recipes/sentry-native/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/sentry-native/all/conanfile.py b/recipes/sentry-native/all/conanfile.py
--- a/recipes/sentry-native/all/conanfile.py
+++ b/recipes/sentry-native/all/conanfile.py
@@ -1,4 +1,5 @@
import os
+import glob
from conans import ConanFile, CMake, tools
from conans.errors import ConanInvalidConfiguration
@@ -37,8 +38,8 @@
def requirements(self):
if self.options.transport == "curl":
- self.requires("libcurl/7.68.0")
-
+ self.requires("libcurl/7.71.0")
+
if self.options.backend == "crashpad":
raise ConanInvalidConfiguration("crashpad not available yet in CCI")
if self.options.backend == "breakpad":
@@ -54,7 +55,7 @@
os.rename(extracted_dir, self._source_subfolder)
def configure(self):
- if self.options.backend == "inproc" and self.settings.os == "Windows":
+ if self.options.backend == "inproc" and self.settings.os == "Windows" and tools.Version(self.version) < "0.4":
raise ConanInvalidConfiguration("The in-process backend is not supported on Windows")
def _configure_cmake(self):
@@ -77,16 +78,18 @@
cmake = self._configure_cmake()
cmake.install()
tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
+ for pdb in glob.glob(os.path.join(self.package_folder, "bin", "*.pdb")):
+ os.unlink(pdb)
def package_info(self):
self.cpp_info.libs = ["sentry"]
- if self.settings.os in ("Android", "Windows"):
- self.cpp_info.exelinkflags= ["--build-id=sha1"]
- self.cpp_info.sharedlinkflags = ["--build-id=sha1"]
+ if self.settings.os in ("Android", "Linux"):
+ self.cpp_info.exelinkflags = ["-Wl,-E,--build-id=sha1"]
+ self.cpp_info.sharedlinkflags = ["-Wl,-E,--build-id=sha1"]
if self.settings.os == "Linux":
self.cpp_info.system_libs = ["pthread", "dl"]
elif self.settings.os == "Windows":
- self.cpp_info.system_libs = ["winhttp", "dbghelp", "pathcch"]
+ self.cpp_info.system_libs = ["winhttp", "dbghelp", "pathcch", "shlwapi"]
if not self.options.shared:
self.cpp_info.defines = ["SENTRY_BUILD_STATIC"]
| {"golden_diff": "diff --git a/recipes/sentry-native/all/conanfile.py b/recipes/sentry-native/all/conanfile.py\n--- a/recipes/sentry-native/all/conanfile.py\n+++ b/recipes/sentry-native/all/conanfile.py\n@@ -1,4 +1,5 @@\n import os\n+import glob\n from conans import ConanFile, CMake, tools\n from conans.errors import ConanInvalidConfiguration\n \n@@ -37,8 +38,8 @@\n \n def requirements(self):\n if self.options.transport == \"curl\":\n- self.requires(\"libcurl/7.68.0\")\n- \n+ self.requires(\"libcurl/7.71.0\")\n+\n if self.options.backend == \"crashpad\":\n raise ConanInvalidConfiguration(\"crashpad not available yet in CCI\")\n if self.options.backend == \"breakpad\":\n@@ -54,7 +55,7 @@\n os.rename(extracted_dir, self._source_subfolder)\n \n def configure(self):\n- if self.options.backend == \"inproc\" and self.settings.os == \"Windows\":\n+ if self.options.backend == \"inproc\" and self.settings.os == \"Windows\" and tools.Version(self.version) < \"0.4\":\n raise ConanInvalidConfiguration(\"The in-process backend is not supported on Windows\")\n \n def _configure_cmake(self):\n@@ -77,16 +78,18 @@\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n+ for pdb in glob.glob(os.path.join(self.package_folder, \"bin\", \"*.pdb\")):\n+ os.unlink(pdb)\n \n def package_info(self):\n self.cpp_info.libs = [\"sentry\"]\n- if self.settings.os in (\"Android\", \"Windows\"):\n- self.cpp_info.exelinkflags= [\"--build-id=sha1\"]\n- self.cpp_info.sharedlinkflags = [\"--build-id=sha1\"]\n+ if self.settings.os in (\"Android\", \"Linux\"):\n+ self.cpp_info.exelinkflags = [\"-Wl,-E,--build-id=sha1\"]\n+ self.cpp_info.sharedlinkflags = [\"-Wl,-E,--build-id=sha1\"]\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs = [\"pthread\", \"dl\"]\n elif self.settings.os == \"Windows\":\n- self.cpp_info.system_libs = [\"winhttp\", \"dbghelp\", \"pathcch\"]\n+ self.cpp_info.system_libs = [\"winhttp\", \"dbghelp\", \"pathcch\", \"shlwapi\"]\n \n if not self.options.shared:\n self.cpp_info.defines = [\"SENTRY_BUILD_STATIC\"]\n", "issue": "[request] sentry-native/0.4.1\n### Package Details\r\n * Package Name/Version: **sentry-native/0.4.1**\r\n * Changelog: **https://github.com/getsentry/sentry-native/blob/0.4.1/CHANGELOG.md**\r\n\r\nhttps://github.com/getsentry/sentry-native/tree/0.4.1\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.\r\nAlso, **please add windows support.**\n", "before_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\n\n\nclass SentryNativeConan(ConanFile):\n name = \"sentry-native\"\n description = \"The Sentry Native SDK is an error and crash reporting client for native applications,\\n\" \\\n \"optimized for C and C++. 
Sentry allows to add tags,\\n\" \\\n \"breadcrumbs and arbitrary custom context to enrich error reports.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/getsentry/sentry-native\"\n license = \"MIT\"\n topics = (\"conan\", \"breakpad\", \"crashpad\",\n \"error-reporting\", \"crash-reporting\")\n exports_sources = [\"CMakeLists.txt\"]\n generators = \"cmake\", \"cmake_find_package\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"backend\": [\"none\", \"inproc\", \"crashpad\", \"breakpad\"],\n \"transport\": [\"none\", \"curl\", \"winhttp\"],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"backend\": \"inproc\",\n \"transport\": \"curl\"\n }\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n _cmake = None\n\n def requirements(self):\n if self.options.transport == \"curl\":\n self.requires(\"libcurl/7.68.0\")\n \n if self.options.backend == \"crashpad\":\n raise ConanInvalidConfiguration(\"crashpad not available yet in CCI\")\n if self.options.backend == \"breakpad\":\n raise ConanInvalidConfiguration(\"breakpad not available yet in CCI\")\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def configure(self):\n if self.options.backend == \"inproc\" and self.settings.os == \"Windows\":\n raise ConanInvalidConfiguration(\"The in-process backend is not supported on Windows\")\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"SENTRY_BACKEND\"] = self.options.backend\n self._cmake.definitions[\"SENTRY_ENABLE_INSTALL\"] = True\n self._cmake.definitions[\"SENTRY_TRANSPORT\"] = self.options.transport\n self._cmake.definitions[\"SENTRY_PIC\"] = self.options.get_safe(\"fPIC\", False)\n self._cmake.configure()\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n\n def package_info(self):\n self.cpp_info.libs = [\"sentry\"]\n if self.settings.os in (\"Android\", \"Windows\"):\n self.cpp_info.exelinkflags= [\"--build-id=sha1\"]\n self.cpp_info.sharedlinkflags = [\"--build-id=sha1\"]\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs = [\"pthread\", \"dl\"]\n elif self.settings.os == \"Windows\":\n self.cpp_info.system_libs = [\"winhttp\", \"dbghelp\", \"pathcch\"]\n\n if not self.options.shared:\n self.cpp_info.defines = [\"SENTRY_BUILD_STATIC\"]\n", "path": "recipes/sentry-native/all/conanfile.py"}]} | 1,679 | 596 |
gh_patches_debug_4349 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-4890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-381] [HOTFIX] Homebrew Incident Resolution
### What Happened
dbt-core depends on the dbt-extractor package, and the dbt-extractor package depends on tree-sitter-jinja2. dbt-extractor specifies tree-sitter-jinja2 via a GitHub link using the git protocol. GitHub security rules changed to require this link to use HTTPS, which caused cargo to fail to build the dbt-extractor.
### Who Is Affected
Everyone attempting to build dbt-core from source after the GitHub security rules took effect. This primarily affects Homebrew users, since Homebrew builds dbt from source locally.
### Solution:
- release new dbt-extractor (0.4.1). The fix is already in main
- dbt-labs/dbt-extractor#51
- release new dbt-core patch from branch [1.0.4-hotfix](https://github.com/dbt-labs/dbt-core/tree/1.0.4-hotfix) which depends on this new version and accepts all future patch releases so we can skip this step in the future. This branch is only the 3 necessary commits ahead of 1.0.3 to fix this incident.
- main: #4890
- backport is directly on branch [1.0.4-hotfix](https://github.com/dbt-labs/dbt-core/tree/1.0.4-hotfix) because of complications with running the bump-version workflow for a hotfix branch.
Getting the release out has been delayed by complications with GitHub Actions during an [ongoing GitHub incident](https://www.githubstatus.com/incidents/dcnvr6zym66r).
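For context, the "accepts all future patch releases" requirement corresponds to a compatible-release pin in `core/setup.py`; a sketch of the relevant excerpt (the exact line appears in the patch below):
```python
install_requires=[
    # ...
    "dbt-extractor~=0.4.1",  # PEP 440 compatible release: allows 0.4.x, excludes 0.5
    # ...
]
```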
</issue>
<code>
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 7, 2):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.7.2 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.0.1"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": [
47 "dbt = dbt.main:main",
48 ],
49 },
50 scripts=[
51 "scripts/dbt",
52 ],
53 install_requires=[
54 "Jinja2==2.11.3",
55 "MarkupSafe==2.0.1",
56 "agate>=1.6,<1.6.4",
57 "click>=7.0,<9",
58 "colorama>=0.3.9,<0.4.5",
59 "hologram==0.0.14",
60 "isodate>=0.6,<0.7",
61 "logbook>=1.5,<1.6",
62 "mashumaro==2.9",
63 "minimal-snowplow-tracker==0.0.2",
64 "networkx>=2.3,<3",
65 "packaging>=20.9,<22.0",
66 "sqlparse>=0.2.3,<0.5",
67 "dbt-extractor==0.4.0",
68 "typing-extensions>=3.7.4,<4.2",
69 "werkzeug>=1,<3",
70 # the following are all to match snowflake-connector-python
71 "requests<3.0.0",
72 "idna>=2.5,<4",
73 "cffi>=1.9,<2.0.0",
74 ],
75 zip_safe=False,
76 classifiers=[
77 "Development Status :: 5 - Production/Stable",
78 "License :: OSI Approved :: Apache Software License",
79 "Operating System :: Microsoft :: Windows",
80 "Operating System :: MacOS :: MacOS X",
81 "Operating System :: POSIX :: Linux",
82 "Programming Language :: Python :: 3.7",
83 "Programming Language :: Python :: 3.8",
84 "Programming Language :: Python :: 3.9",
85 ],
86 python_requires=">=3.7.2",
87 )
88
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -64,7 +64,7 @@
"networkx>=2.3,<3",
"packaging>=20.9,<22.0",
"sqlparse>=0.2.3,<0.5",
- "dbt-extractor==0.4.0",
+ "dbt-extractor~=0.4.1",
"typing-extensions>=3.7.4,<4.2",
"werkzeug>=1,<3",
# the following are all to match snowflake-connector-python
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -64,7 +64,7 @@\n \"networkx>=2.3,<3\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n- \"dbt-extractor==0.4.0\",\n+ \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4,<4.2\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n", "issue": "[CT-381] [HOTFIX] Homebrew Incident Resolution\n### What Happened\r\n\r\ndbt-core depends on the dbt-extractor package, and the dbt-extractor package depends on tree-sitter-jinja2. dbt-extractor specifies tree-sitter-jinja2 via a github link using the git protocol. Github security rules changed to require this link to use https which caused cargo to fail to build the dbt-extractor.\r\n\r\n### Who Is Affected\r\n\r\nEveryone attempting to build dbt-core from source after the github security rules took affect. This primarily affects homebrew users since homebrew builds dbt from source locally.\r\n\r\n### Solution:\r\n- release new dbt-extractor (0.4.1). The fix is already in main\r\n - dbt-labs/dbt-extractor#51\r\n- release new dbt-core patch from branch [1.0.4-hotfix](https://github.com/dbt-labs/dbt-core/tree/1.0.4-hotfix) which depends on this new version and accepts all future patch releases so we can skip this step in the future. This branch is only the 3 necessary commits ahead of 1.0.3 to fix this incident.\r\n - main: #4890\r\n - backport is directly on branch [1.0.4-hotfix](https://github.com/dbt-labs/dbt-core/tree/1.0.4-hotfix) because of complications with running the bump-version workflow for a hotfix branch.\r\n \r\nGetting the release out has been delayed due to complications with github actions due to an [ongoing GitHub incident](https://www.githubstatus.com/incidents/dcnvr6zym66r).\r\n \n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.0.1\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n scripts=[\n \"scripts/dbt\",\n ],\n install_requires=[\n \"Jinja2==2.11.3\",\n \"MarkupSafe==2.0.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.5\",\n \"hologram==0.0.14\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro==2.9\",\n \"minimal-snowplow-tracker==0.0.2\",\n 
\"networkx>=2.3,<3\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor==0.4.0\",\n \"typing-extensions>=3.7.4,<4.2\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}]} | 1,786 | 145 |
gh_patches_debug_36232 | rasdani/github-patches | git_diff | pymeasure__pymeasure-350 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in examples/Notebook Experiments/script2.ipynb
script.ipynb runs fine but in script2.ipynb I hit the following error at `experiment = Experiment('test', procedure, analyse)`:
```python
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\__init__.py in __setitem__(self, key, val)
927 raise KeyError(
928 '%s is not a valid rc parameter. See rcParams.keys() for a '
--> 929 'list of valid parameters.' % (key,))
930
931 def __getitem__(self, key):
KeyError: 'axes.color_cycle is not a valid rc parameter. See rcParams.keys() for a list of valid parameters.'
```
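For context: recent matplotlib releases removed the `axes.color_cycle` rcParam in favour of `axes.prop_cycle`, which is built with `cycler`. A minimal illustration of the replacement (example colors are assumed):
```python
import matplotlib
from cycler import cycler

# Removed key: matplotlib.rcParams['axes.color_cycle'] = ['b', 'g', 'r']
# Current equivalent:
matplotlib.rcParams['axes.prop_cycle'] = cycler(color=['b', 'g', 'r'])
```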
</issue>
<code>
[start of pymeasure/experiment/config.py]
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2020 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 import configparser
26 import logging
27 import os
28
29 log = logging.getLogger(__name__)
30 log.addHandler(logging.NullHandler())
31
32
33 def set_file(filename):
34 os.environ['CONFIG'] = filename
35
36
37 def get_config(filename='default_config.ini'):
38 if 'CONFIG' in os.environ.keys():
39 filename = os.environ['CONFIG']
40 config = configparser.ConfigParser()
41 config.read(filename)
42 return config
43
44
45 # noinspection PyProtectedMember
46 def set_mpl_rcparams(config):
47 if 'matplotlib.rcParams' in config._sections.keys():
48 import matplotlib
49 for key in config._sections['matplotlib.rcParams']:
50 matplotlib.rcParams[key] = eval(config._sections['matplotlib.rcParams'][key])
51
[end of pymeasure/experiment/config.py]
[start of examples/Notebook Experiments/procedures.py]
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2016 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 import random
26 from time import sleep
27 from pymeasure.experiment import Procedure, IntegerParameter, Parameter, FloatParameter
28 import logging
29 log = logging.getLogger(__name__)
30 log.addHandler(logging.NullHandler())
31
32 class TestProcedure(Procedure):
33
34 iterations = IntegerParameter('Loop Iterations', default=100)
35 delay = FloatParameter('Delay Time', units='s', default=0.2)
36 seed = Parameter('Random Seed', default='12345')
37
38 DATA_COLUMNS = ['Iteration', 'Random Number']
39
40 def startup(self):
41 log.info("Setting up random number generator")
42 random.seed(self.seed)
43
44 def execute(self):
45 log.info("Starting to generate numbers")
46 for i in range(self.iterations):
47 data = {
48 'Iteration': i,
49 'Random Number': random.random()
50 }
51 log.debug("Produced numbers: %s" % data)
52 self.emit('results', data)
53 self.emit('progress', 100.*i/self.iterations)
54 sleep(self.delay)
55 if self.should_stop():
56 log.warning("Catch stop command in procedure")
57 break
58
59 def shutdown(self):
60 log.info("Finished")
[end of examples/Notebook Experiments/procedures.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/Notebook Experiments/procedures.py b/examples/Notebook Experiments/procedures.py
deleted file mode 100644
--- a/examples/Notebook Experiments/procedures.py
+++ /dev/null
@@ -1,60 +0,0 @@
-#
-# This file is part of the PyMeasure package.
-#
-# Copyright (c) 2013-2016 PyMeasure Developers
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-# THE SOFTWARE.
-#
-
-import random
-from time import sleep
-from pymeasure.experiment import Procedure, IntegerParameter, Parameter, FloatParameter
-import logging
-log = logging.getLogger(__name__)
-log.addHandler(logging.NullHandler())
-
-class TestProcedure(Procedure):
-
- iterations = IntegerParameter('Loop Iterations', default=100)
- delay = FloatParameter('Delay Time', units='s', default=0.2)
- seed = Parameter('Random Seed', default='12345')
-
- DATA_COLUMNS = ['Iteration', 'Random Number']
-
- def startup(self):
- log.info("Setting up random number generator")
- random.seed(self.seed)
-
- def execute(self):
- log.info("Starting to generate numbers")
- for i in range(self.iterations):
- data = {
- 'Iteration': i,
- 'Random Number': random.random()
- }
- log.debug("Produced numbers: %s" % data)
- self.emit('results', data)
- self.emit('progress', 100.*i/self.iterations)
- sleep(self.delay)
- if self.should_stop():
- log.warning("Catch stop command in procedure")
- break
-
- def shutdown(self):
- log.info("Finished")
\ No newline at end of file
diff --git a/pymeasure/experiment/config.py b/pymeasure/experiment/config.py
--- a/pymeasure/experiment/config.py
+++ b/pymeasure/experiment/config.py
@@ -46,5 +46,6 @@
def set_mpl_rcparams(config):
if 'matplotlib.rcParams' in config._sections.keys():
import matplotlib
+ from cycler import cycler
for key in config._sections['matplotlib.rcParams']:
matplotlib.rcParams[key] = eval(config._sections['matplotlib.rcParams'][key])
| {"golden_diff": "diff --git a/examples/Notebook Experiments/procedures.py b/examples/Notebook Experiments/procedures.py\ndeleted file mode 100644\n--- a/examples/Notebook Experiments/procedures.py\t\n+++ /dev/null\n@@ -1,60 +0,0 @@\n-#\n-# This file is part of the PyMeasure package.\n-#\n-# Copyright (c) 2013-2016 PyMeasure Developers\n-#\n-# Permission is hereby granted, free of charge, to any person obtaining a copy\n-# of this software and associated documentation files (the \"Software\"), to deal\n-# in the Software without restriction, including without limitation the rights\n-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n-# copies of the Software, and to permit persons to whom the Software is\n-# furnished to do so, subject to the following conditions:\n-#\n-# The above copyright notice and this permission notice shall be included in\n-# all copies or substantial portions of the Software.\n-#\n-# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n-# THE SOFTWARE.\n-#\n-\n-import random\n-from time import sleep\n-from pymeasure.experiment import Procedure, IntegerParameter, Parameter, FloatParameter\n-import logging\n-log = logging.getLogger(__name__)\n-log.addHandler(logging.NullHandler())\n-\n-class TestProcedure(Procedure):\n-\n- iterations = IntegerParameter('Loop Iterations', default=100)\n- delay = FloatParameter('Delay Time', units='s', default=0.2)\n- seed = Parameter('Random Seed', default='12345')\n- \n- DATA_COLUMNS = ['Iteration', 'Random Number']\n-\n- def startup(self):\n- log.info(\"Setting up random number generator\")\n- random.seed(self.seed)\n-\n- def execute(self):\n- log.info(\"Starting to generate numbers\")\n- for i in range(self.iterations):\n- data = {\n- 'Iteration': i,\n- 'Random Number': random.random()\n- }\n- log.debug(\"Produced numbers: %s\" % data)\n- self.emit('results', data)\n- self.emit('progress', 100.*i/self.iterations)\n- sleep(self.delay)\n- if self.should_stop():\n- log.warning(\"Catch stop command in procedure\")\n- break\n-\n- def shutdown(self):\n- log.info(\"Finished\")\n\\ No newline at end of file\ndiff --git a/pymeasure/experiment/config.py b/pymeasure/experiment/config.py\n--- a/pymeasure/experiment/config.py\n+++ b/pymeasure/experiment/config.py\n@@ -46,5 +46,6 @@\n def set_mpl_rcparams(config):\n if 'matplotlib.rcParams' in config._sections.keys():\n import matplotlib\n+ from cycler import cycler\n for key in config._sections['matplotlib.rcParams']:\n matplotlib.rcParams[key] = eval(config._sections['matplotlib.rcParams'][key])\n", "issue": "Error in examples/Notebook Experiments/script2.ipynb\nscript.ipynb runs fine but in script2.ipynb I hit the following error at `experiment = Experiment('test', procedure, analyse)`:\r\n\r\n```python\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\__init__.py in __setitem__(self, key, val)\r\n 927 raise KeyError(\r\n 928 '%s is not a valid rc parameter. See rcParams.keys() for a '\r\n--> 929 'list of valid parameters.' % (key,))\r\n 930 \r\n 931 def __getitem__(self, key):\r\n\r\nKeyError: 'axes.color_cycle is not a valid rc parameter. 
See rcParams.keys() for a list of valid parameters.'\r\n```\r\n\nError in examples/Notebook Experiments/script2.ipynb\nscript.ipynb runs fine but in script2.ipynb I hit the following error at `experiment = Experiment('test', procedure, analyse)`:\r\n\r\n```python\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\__init__.py in __setitem__(self, key, val)\r\n 927 raise KeyError(\r\n 928 '%s is not a valid rc parameter. See rcParams.keys() for a '\r\n--> 929 'list of valid parameters.' % (key,))\r\n 930 \r\n 931 def __getitem__(self, key):\r\n\r\nKeyError: 'axes.color_cycle is not a valid rc parameter. See rcParams.keys() for a list of valid parameters.'\r\n```\r\n\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2020 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport configparser\nimport logging\nimport os\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\ndef set_file(filename):\n os.environ['CONFIG'] = filename\n\n\ndef get_config(filename='default_config.ini'):\n if 'CONFIG' in os.environ.keys():\n filename = os.environ['CONFIG']\n config = configparser.ConfigParser()\n config.read(filename)\n return config\n\n\n# noinspection PyProtectedMember\ndef set_mpl_rcparams(config):\n if 'matplotlib.rcParams' in config._sections.keys():\n import matplotlib\n for key in config._sections['matplotlib.rcParams']:\n matplotlib.rcParams[key] = eval(config._sections['matplotlib.rcParams'][key])\n", "path": "pymeasure/experiment/config.py"}, {"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2016 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport random\nfrom time import sleep\nfrom pymeasure.experiment import Procedure, IntegerParameter, Parameter, FloatParameter\nimport logging\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\nclass TestProcedure(Procedure):\n\n iterations = IntegerParameter('Loop Iterations', default=100)\n delay = FloatParameter('Delay Time', units='s', default=0.2)\n seed = Parameter('Random Seed', default='12345')\n \n DATA_COLUMNS = ['Iteration', 'Random Number']\n\n def startup(self):\n log.info(\"Setting up random number generator\")\n random.seed(self.seed)\n\n def execute(self):\n log.info(\"Starting to generate numbers\")\n for i in range(self.iterations):\n data = {\n 'Iteration': i,\n 'Random Number': random.random()\n }\n log.debug(\"Produced numbers: %s\" % data)\n self.emit('results', data)\n self.emit('progress', 100.*i/self.iterations)\n sleep(self.delay)\n if self.should_stop():\n log.warning(\"Catch stop command in procedure\")\n break\n\n def shutdown(self):\n log.info(\"Finished\")", "path": "examples/Notebook Experiments/procedures.py"}]} | 2,027 | 737 |
gh_patches_debug_15645 | rasdani/github-patches | git_diff | netbox-community__netbox-7928 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't fetch LDAP user and groups on all API request when FIND_GROUP_PERMS is disabled
### NetBox version
V3.0.9
### Feature type
Change to existing functionality
### Proposed functionality
Currently, when using the LDAP backend for authentication, the AD is queried on every API request, regardless of other settings and regardless of whether the user is local or was created by the LDAP backend. Additionally, the LDAP cache built into django-auth-ldap does not function when using populate_user.
As the user is not actually authenticated against the AD when using the API (the token is used), I propose that the local user and its group assignments be used when FIND_GROUP_PERMS is disabled.
I have a change ready for pull request if the issue is accepted.
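A minimal sketch of the proposed check in the token authentication path (it mirrors the patch below; treat it as illustrative):
```python
# netbox/netbox/api/authentication.py (inside authenticate_credentials)
if settings.REMOTE_AUTH_BACKEND == 'netbox.authentication.LDAPBackend':
    from netbox.authentication import LDAPBackend
    ldap_backend = LDAPBackend()
    # Query the LDAP directory only when group permissions must be
    # resolved remotely; otherwise fall back to the local user.
    if ldap_backend.settings.FIND_GROUP_PERMS:
        user = ldap_backend.populate_user(token.user.username)
        if user:
            return user, token
return token.user, token
```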
For more info, please see the discussion I created: https://github.com/netbox-community/netbox/discussions/7708
This issue would also partly fix #6926 - it will not fix the caching, but the user who reported the issue is not using FIND_GROUP_PERMS.
### Use case
The end goal is vastly improved API performance when using the LDAP backend in most cases.
The above changes will result in the following changes for users:
**Not using the LDAP backend:**
No changes
**FIND_GROUP_PERMS = True:**
No changes
**MIRROR_GROUPS = True and FIND_GROUP_PERMS = True:**
No changes
**MIRROR_GROUPS = True and FIND_GROUP_PERMS = False:**
Local user and group assignments will be used when calling the API, and the user and groups are never reloaded from the LDAP server during API calls. This means that LDAP users utilizing the API will have to log in to the web UI to update group memberships. The change also allows one to use locally created users to call the API without querying the LDAP server.
**MIRROR_GROUPS = False and FIND_GROUP_PERMS = False:**
The user performing the API request has to have locally assigned groups or local user object permissions.
### Database changes
No database changes
### External dependencies
_No response_
</issue>
<code>
[start of netbox/netbox/api/authentication.py]
1 from django.conf import settings
2 from rest_framework import authentication, exceptions
3 from rest_framework.permissions import BasePermission, DjangoObjectPermissions, SAFE_METHODS
4
5 from users.models import Token
6
7
8 class TokenAuthentication(authentication.TokenAuthentication):
9 """
10 A custom authentication scheme which enforces Token expiration times.
11 """
12 model = Token
13
14 def authenticate_credentials(self, key):
15 model = self.get_model()
16 try:
17 token = model.objects.prefetch_related('user').get(key=key)
18 except model.DoesNotExist:
19 raise exceptions.AuthenticationFailed("Invalid token")
20
21 # Enforce the Token's expiration time, if one has been set.
22 if token.is_expired:
23 raise exceptions.AuthenticationFailed("Token expired")
24
25 if not token.user.is_active:
26 raise exceptions.AuthenticationFailed("User inactive")
27
28 # When LDAP authentication is active try to load user data from LDAP directory
29 if settings.REMOTE_AUTH_BACKEND == 'netbox.authentication.LDAPBackend':
30 from netbox.authentication import LDAPBackend
31 ldap_backend = LDAPBackend()
32 user = ldap_backend.populate_user(token.user.username)
33 # If the user is found in the LDAP directory use it, if not fallback to the local user
34 if user:
35 return user, token
36
37 return token.user, token
38
39
40 class TokenPermissions(DjangoObjectPermissions):
41 """
42 Custom permissions handler which extends the built-in DjangoModelPermissions to validate a Token's write ability
43 for unsafe requests (POST/PUT/PATCH/DELETE).
44 """
45 # Override the stock perm_map to enforce view permissions
46 perms_map = {
47 'GET': ['%(app_label)s.view_%(model_name)s'],
48 'OPTIONS': [],
49 'HEAD': ['%(app_label)s.view_%(model_name)s'],
50 'POST': ['%(app_label)s.add_%(model_name)s'],
51 'PUT': ['%(app_label)s.change_%(model_name)s'],
52 'PATCH': ['%(app_label)s.change_%(model_name)s'],
53 'DELETE': ['%(app_label)s.delete_%(model_name)s'],
54 }
55
56 def __init__(self):
57
58 # LOGIN_REQUIRED determines whether read-only access is provided to anonymous users.
59 self.authenticated_users_only = settings.LOGIN_REQUIRED
60
61 super().__init__()
62
63 def _verify_write_permission(self, request):
64
65 # If token authentication is in use, verify that the token allows write operations (for unsafe methods).
66 if request.method in SAFE_METHODS or request.auth.write_enabled:
67 return True
68
69 def has_permission(self, request, view):
70
71 # Enforce Token write ability
72 if isinstance(request.auth, Token) and not self._verify_write_permission(request):
73 return False
74
75 return super().has_permission(request, view)
76
77 def has_object_permission(self, request, view, obj):
78
79 # Enforce Token write ability
80 if isinstance(request.auth, Token) and not self._verify_write_permission(request):
81 return False
82
83 return super().has_object_permission(request, view, obj)
84
85
86 class IsAuthenticatedOrLoginNotRequired(BasePermission):
87 """
88 Returns True if the user is authenticated or LOGIN_REQUIRED is False.
89 """
90 def has_permission(self, request, view):
91 if not settings.LOGIN_REQUIRED:
92 return True
93 return request.user.is_authenticated
94
[end of netbox/netbox/api/authentication.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/netbox/netbox/api/authentication.py b/netbox/netbox/api/authentication.py
--- a/netbox/netbox/api/authentication.py
+++ b/netbox/netbox/api/authentication.py
@@ -29,10 +29,13 @@
if settings.REMOTE_AUTH_BACKEND == 'netbox.authentication.LDAPBackend':
from netbox.authentication import LDAPBackend
ldap_backend = LDAPBackend()
- user = ldap_backend.populate_user(token.user.username)
- # If the user is found in the LDAP directory use it, if not fallback to the local user
- if user:
- return user, token
+
+ # Load from LDAP if FIND_GROUP_PERMS is active
+ if ldap_backend.settings.FIND_GROUP_PERMS:
+ user = ldap_backend.populate_user(token.user.username)
+ # If the user is found in the LDAP directory use it, if not fallback to the local user
+ if user:
+ return user, token
return token.user, token
| {"golden_diff": "diff --git a/netbox/netbox/api/authentication.py b/netbox/netbox/api/authentication.py\n--- a/netbox/netbox/api/authentication.py\n+++ b/netbox/netbox/api/authentication.py\n@@ -29,10 +29,13 @@\n if settings.REMOTE_AUTH_BACKEND == 'netbox.authentication.LDAPBackend':\n from netbox.authentication import LDAPBackend\n ldap_backend = LDAPBackend()\n- user = ldap_backend.populate_user(token.user.username)\n- # If the user is found in the LDAP directory use it, if not fallback to the local user\n- if user:\n- return user, token\n+\n+ # Load from LDAP if FIND_GROUP_PERMS is active\n+ if ldap_backend.settings.FIND_GROUP_PERMS:\n+ user = ldap_backend.populate_user(token.user.username)\n+ # If the user is found in the LDAP directory use it, if not fallback to the local user\n+ if user:\n+ return user, token\n \n return token.user, token\n", "issue": "Don't fetch LDAP user and groups on all API request when FIND_GROUP_PERMS is disabled\n### NetBox version\n\nV3.0.9\n\n### Feature type\n\nChange to existing functionality\n\n### Proposed functionality\n\nCurrently when using the LDAP backend for authentication, the AD is queried on every API request, regardless of other settings and regardless if the user is local or has been created by the LDAP backend. Additionally the LDAP cache built into django-auth-ldap does not function when using populate_user.\r\n\r\nAs the user is not actually authenticated against the AD when using the API (the token is used), I propose that the local user and it's group assignments are used when FIND_GROUP_PERMISSIONS is disabled.\r\n\r\nI have a change ready for pull request if the issue is accepted.\r\n\r\nFor more info, please see the discussion I created: https://github.com/netbox-community/netbox/discussions/7708\r\n\r\nThis issue would also partly fix #6926 - it will not fix the caching, but the user who reported the issue is not using FIND_GROUP_PERMISSIONS.\n\n### Use case\n\nThe end goal is vastly improved API performance when using the LDAP backend in most cases.\r\n\r\nThe above changes will result in the following changes for users:\r\n\r\n**Not using the LDAP backend:**\r\n\r\nNo changes\r\n\r\n**FIND_GROUP_PERMS = True:**\r\n\r\nNo changes\r\n\r\n**MIRROR_GROUPS = True and FIND_GROUP_PERMS = True:**\r\n\r\nNo changes\r\n\r\n**MIRROR_GROUPS = True and FIND_GROUP_PERMS = False:**\r\n\r\nLocal user and group assignments will be used when calling the API and the user and groups are never reloaded from the LDAP server during API calls. This means that LDAP users utilizing the API will have to login to the web ui to update group memberships. 
The change also allows one to use locally created users to call the API without querying the LDAP server. (Rest of the verification_info JSON omitted; it repeats the before_files source and golden diff shown above.)
num_tokens_prompt: 1,840 | num_tokens_diff: 216
problem_id: gh_patches_debug_31857 | source: rasdani/github-patches | task_type: git_diff | in_source_id: projectmesa__mesa-301
<issue>
Documentation is not reflecting latest changes wrt width-height argument order in Grid()
As many people start by reading Mesa's documentation on Read the Docs, the documentation should be in line with the code changes regarding the width-height argument order in the Grid functions. This is not yet reflected.
</issue>
<code>
[start of mesa/visualization/modules/CanvasGridVisualization.py]
1 # -*- coding: utf-8 -*-
2 """
3 Modular Canvas Rendering
4 ========================
5
6 Module for visualizing model objects in grid cells.
7
8 """
9 from collections import defaultdict
10 from mesa.visualization.ModularVisualization import VisualizationElement
11
12
13 class CanvasGrid(VisualizationElement):
14 """ A CanvasGrid object uses a user-provided portrayal method to generate a
15 portrayal for each object. A portrayal is a JSON-ready dictionary which
16 tells the relevant JavaScript code (GridDraw.js) where to draw what shape.
17
18 The render method returns a dictionary, keyed on layers, with values as
19 lists of portrayals to draw. Portrayals themselves are generated by the
20 user-provided portrayal_method, which accepts an object as an input and
21 produces a portrayal of it.
22
23 A portrayal as a dictionary with the following structure:
24 "x", "y": Coordinates for the cell in which the object is placed.
25 "Shape": Can be either "circle" or "rect"
26 For Circles:
27 "r": The radius, defined as a fraction of cell size. r=1 will
28 fill the entire cell.
29 For rectangles:
30 "w", "h": The width and height of the rectangle, which are in
31 fractions of cell width and height.
32 "Color": The color to draw the shape in; needs to be a valid HTML
33 color, e.g."Red" or "#AA08F8"
34 "Filled": either "true" or "false", and determines whether the shape is
35 filled or not.
36 "Layer": Layer number of 0 or above; higher-numbered layers are drawn
37 above lower-numbered layers.
38 "text": The text to be inscribed inside the Shape. Normally useful for
39 showing the unique_id of the agent.
40 "text_color": The color to draw the inscribed text. Should be given in
41 conjunction of "text" property.
42
43
44 Attributes:
45 portrayal_method: Function which generates portrayals from objects, as
46 described above.
47 grid_height, grid_width: Size of the grid to visualize, in cells.
48 canvas_height, canvas_width: Size, in pixels, of the grid visualization
49 to draw on the client.
50 template: "canvas_module.html" stores the module's HTML template.
51
52 """
53 package_includes = ["GridDraw.js", "CanvasModule.js"]
54 portrayal_method = None # Portrayal function
55 canvas_width = 500
56 canvas_height = 500
57
58 def __init__(self, portrayal_method, grid_width, grid_height,
59 canvas_width=500, canvas_height=500):
60 """ Instantiate a new CanvasGrid.
61
62 Args:
63 portrayal_method: function to convert each object on the grid to
64 a portrayal, as described above.
65 grid_width, grid_height: Size of the grid, in cells.
66 canvas_height, canvas_width: Size of the canvas to draw in the
67 client, in pixels. (default: 500x500)
68
69 """
70 self.portrayal_method = portrayal_method
71 self.grid_width = grid_width
72 self.grid_height = grid_height
73 self.canvas_width = canvas_width
74 self.canvas_height = canvas_height
75
76 new_element = ("new CanvasModule({}, {}, {}, {})"
77 .format(self.canvas_width, self.canvas_height,
78 self.grid_width, self.grid_height))
79
80 self.js_code = "elements.push(" + new_element + ");"
81
82 def render(self, model):
83 grid_state = defaultdict(list)
84 for x in range(model.grid.width):
85 for y in range(model.grid.height):
86 cell_objects = model.grid.get_cell_list_contents([(x, y)])
87 for obj in cell_objects:
88 portrayal = self.portrayal_method(obj)
89 if portrayal:
90 portrayal["x"] = x
91 portrayal["y"] = y
92 grid_state[portrayal["Layer"]].append(portrayal)
93
94 return grid_state
95
[end of mesa/visualization/modules/CanvasGridVisualization.py]
[start of examples/Basic/basic/server.py]
1 import random
2
3 from mesa.visualization.modules import CanvasGrid
4 from mesa.visualization.ModularVisualization import ModularServer
5
6 from basic.model import Walker, ShapesModel
7
8
9 def agent_draw(agent):
10 portrayal = None
11 if agent is None:
12 # Actually this if part is unnecessary, but still keeping it for
13 # aesthetics
14 pass
15 elif isinstance(agent, Walker):
16 print("Uid: {0}, Heading: {1}".format(agent.unique_id, agent.heading))
17 portrayal = {"Shape": "arrowHead",
18 "Filled": "true",
19 "Layer": 2,
20 "Color": "green",
21 "Filled": "true",
22 "heading0": agent.heading[0],
23 "heading1": agent.heading[1],
24 "text": agent.unique_id,
25 "text_color": "white",
26 "scale": 0.8,
27 }
28 return portrayal
29
30
31 def launch_basic():
32 width = 15
33 height = 10
34 num_agents = 2
35 pixel_ratio = 50
36 grid = CanvasGrid(agent_draw, width, height,
37 width * pixel_ratio, height * pixel_ratio)
38 server = ModularServer(ShapesModel, [grid], "Basic Example",
39 num_agents, width, height)
40 server.max_steps = 0
41 server.port = 8888
42 server.launch()
43
44 if __name__ == "__main__":
45 random.seed(3)
46 launch_basic()
47
[end of examples/Basic/basic/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the same format as the example patch shown for the first problem above.
diff --git a/examples/Basic/basic/server.py b/examples/Basic/basic/server.py
--- a/examples/Basic/basic/server.py
+++ b/examples/Basic/basic/server.py
@@ -19,8 +19,8 @@
"Layer": 2,
"Color": "green",
"Filled": "true",
- "heading0": agent.heading[0],
- "heading1": agent.heading[1],
+ "heading_x": agent.heading[0],
+ "heading_y": agent.heading[1],
"text": agent.unique_id,
"text_color": "white",
"scale": 0.8,
diff --git a/mesa/visualization/modules/CanvasGridVisualization.py b/mesa/visualization/modules/CanvasGridVisualization.py
--- a/mesa/visualization/modules/CanvasGridVisualization.py
+++ b/mesa/visualization/modules/CanvasGridVisualization.py
@@ -22,13 +22,17 @@
A portrayal as a dictionary with the following structure:
"x", "y": Coordinates for the cell in which the object is placed.
- "Shape": Can be either "circle" or "rect"
+ "Shape": Can be either "circle", "rect" or "arrowHead"
For Circles:
"r": The radius, defined as a fraction of cell size. r=1 will
fill the entire cell.
- For rectangles:
+ For Rectangles:
"w", "h": The width and height of the rectangle, which are in
fractions of cell width and height.
+ For arrowHead:
+ "scale": Proportion scaling as a fraction of cell size.
+ "heading_x": represents x direction unit vector.
+ "heading_y": represents y direction unit vector.
"Color": The color to draw the shape in; needs to be a valid HTML
color, e.g."Red" or "#AA08F8"
"Filled": either "true" or "false", and determines whether the shape is
| {"golden_diff": "diff --git a/examples/Basic/basic/server.py b/examples/Basic/basic/server.py\n--- a/examples/Basic/basic/server.py\n+++ b/examples/Basic/basic/server.py\n@@ -19,8 +19,8 @@\n \"Layer\": 2,\n \"Color\": \"green\",\n \"Filled\": \"true\",\n- \"heading0\": agent.heading[0],\n- \"heading1\": agent.heading[1],\n+ \"heading_x\": agent.heading[0],\n+ \"heading_y\": agent.heading[1],\n \"text\": agent.unique_id,\n \"text_color\": \"white\",\n \"scale\": 0.8,\ndiff --git a/mesa/visualization/modules/CanvasGridVisualization.py b/mesa/visualization/modules/CanvasGridVisualization.py\n--- a/mesa/visualization/modules/CanvasGridVisualization.py\n+++ b/mesa/visualization/modules/CanvasGridVisualization.py\n@@ -22,13 +22,17 @@\n \n A portrayal as a dictionary with the following structure:\n \"x\", \"y\": Coordinates for the cell in which the object is placed.\n- \"Shape\": Can be either \"circle\" or \"rect\"\n+ \"Shape\": Can be either \"circle\", \"rect\" or \"arrowHead\"\n For Circles:\n \"r\": The radius, defined as a fraction of cell size. r=1 will\n fill the entire cell.\n- For rectangles:\n+ For Rectangles:\n \"w\", \"h\": The width and height of the rectangle, which are in\n fractions of cell width and height.\n+ For arrowHead:\n+ \"scale\": Proportion scaling as a fraction of cell size.\n+ \"heading_x\": represents x direction unit vector.\n+ \"heading_y\": represents y direction unit vector.\n \"Color\": The color to draw the shape in; needs to be a valid HTML\n color, e.g.\"Red\" or \"#AA08F8\"\n \"Filled\": either \"true\" or \"false\", and determines whether the shape is\n", "issue": "Documentation is not reflecting latest changes wrt width-height argument order in Grid()\nAs many people start with reading mesa on readthedocs, the documentation should be inline with the code changes wrt width-height argument order in Grid functions.This is not yet reflected.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nModular Canvas Rendering\n========================\n\nModule for visualizing model objects in grid cells.\n\n\"\"\"\nfrom collections import defaultdict\nfrom mesa.visualization.ModularVisualization import VisualizationElement\n\n\nclass CanvasGrid(VisualizationElement):\n \"\"\" A CanvasGrid object uses a user-provided portrayal method to generate a\n portrayal for each object. A portrayal is a JSON-ready dictionary which\n tells the relevant JavaScript code (GridDraw.js) where to draw what shape.\n\n The render method returns a dictionary, keyed on layers, with values as\n lists of portrayals to draw. Portrayals themselves are generated by the\n user-provided portrayal_method, which accepts an object as an input and\n produces a portrayal of it.\n\n A portrayal as a dictionary with the following structure:\n \"x\", \"y\": Coordinates for the cell in which the object is placed.\n \"Shape\": Can be either \"circle\" or \"rect\"\n For Circles:\n \"r\": The radius, defined as a fraction of cell size. r=1 will\n fill the entire cell.\n For rectangles:\n \"w\", \"h\": The width and height of the rectangle, which are in\n fractions of cell width and height.\n \"Color\": The color to draw the shape in; needs to be a valid HTML\n color, e.g.\"Red\" or \"#AA08F8\"\n \"Filled\": either \"true\" or \"false\", and determines whether the shape is\n filled or not.\n \"Layer\": Layer number of 0 or above; higher-numbered layers are drawn\n above lower-numbered layers.\n \"text\": The text to be inscribed inside the Shape. 
Normally useful for\n showing the unique_id of the agent.\n \"text_color\": The color to draw the inscribed text. Should be given in\n conjunction of \"text\" property.\n\n\n Attributes:\n portrayal_method: Function which generates portrayals from objects, as\n described above.\n grid_height, grid_width: Size of the grid to visualize, in cells.\n canvas_height, canvas_width: Size, in pixels, of the grid visualization\n to draw on the client.\n template: \"canvas_module.html\" stores the module's HTML template.\n\n \"\"\"\n package_includes = [\"GridDraw.js\", \"CanvasModule.js\"]\n portrayal_method = None # Portrayal function\n canvas_width = 500\n canvas_height = 500\n\n def __init__(self, portrayal_method, grid_width, grid_height,\n canvas_width=500, canvas_height=500):\n \"\"\" Instantiate a new CanvasGrid.\n\n Args:\n portrayal_method: function to convert each object on the grid to\n a portrayal, as described above.\n grid_width, grid_height: Size of the grid, in cells.\n canvas_height, canvas_width: Size of the canvas to draw in the\n client, in pixels. (default: 500x500)\n\n \"\"\"\n self.portrayal_method = portrayal_method\n self.grid_width = grid_width\n self.grid_height = grid_height\n self.canvas_width = canvas_width\n self.canvas_height = canvas_height\n\n new_element = (\"new CanvasModule({}, {}, {}, {})\"\n .format(self.canvas_width, self.canvas_height,\n self.grid_width, self.grid_height))\n\n self.js_code = \"elements.push(\" + new_element + \");\"\n\n def render(self, model):\n grid_state = defaultdict(list)\n for x in range(model.grid.width):\n for y in range(model.grid.height):\n cell_objects = model.grid.get_cell_list_contents([(x, y)])\n for obj in cell_objects:\n portrayal = self.portrayal_method(obj)\n if portrayal:\n portrayal[\"x\"] = x\n portrayal[\"y\"] = y\n grid_state[portrayal[\"Layer\"]].append(portrayal)\n\n return grid_state\n", "path": "mesa/visualization/modules/CanvasGridVisualization.py"}, {"content": "import random\n\nfrom mesa.visualization.modules import CanvasGrid\nfrom mesa.visualization.ModularVisualization import ModularServer\n\nfrom basic.model import Walker, ShapesModel\n\n\ndef agent_draw(agent):\n portrayal = None\n if agent is None:\n # Actually this if part is unnecessary, but still keeping it for\n # aesthetics\n pass\n elif isinstance(agent, Walker):\n print(\"Uid: {0}, Heading: {1}\".format(agent.unique_id, agent.heading))\n portrayal = {\"Shape\": \"arrowHead\",\n \"Filled\": \"true\",\n \"Layer\": 2,\n \"Color\": \"green\",\n \"Filled\": \"true\",\n \"heading0\": agent.heading[0],\n \"heading1\": agent.heading[1],\n \"text\": agent.unique_id,\n \"text_color\": \"white\",\n \"scale\": 0.8,\n }\n return portrayal\n\n\ndef launch_basic():\n width = 15\n height = 10\n num_agents = 2\n pixel_ratio = 50\n grid = CanvasGrid(agent_draw, width, height,\n width * pixel_ratio, height * pixel_ratio)\n server = ModularServer(ShapesModel, [grid], \"Basic Example\",\n num_agents, width, height)\n server.max_steps = 0\n server.port = 8888\n server.launch()\n\nif __name__ == \"__main__\":\n random.seed(3)\n launch_basic()\n", "path": "examples/Basic/basic/server.py"}]} | 2,039 | 437 |
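The golden diff above renames the arrowHead heading keys. A small sketch of a portrayal dict using the new `heading_x`/`heading_y` keys; the `Walker` stub is a made-up stand-in for `basic.model.Walker`:

```python
# Hypothetical agent stand-in; real code receives a basic.model.Walker instance.
class Walker:
    unique_id = 1
    heading = (1, 0)  # unit vector pointing along +x

def agent_draw(agent):
    if isinstance(agent, Walker):
        return {
            "Shape": "arrowHead",
            "Filled": "true",
            "Layer": 2,
            "Color": "green",
            "heading_x": agent.heading[0],  # renamed from "heading0"
            "heading_y": agent.heading[1],  # renamed from "heading1"
            "text": agent.unique_id,
            "text_color": "white",
            "scale": 0.8,
        }

print(agent_draw(Walker()))
```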
problem_id: gh_patches_debug_15 | source: rasdani/github-patches | task_type: git_diff | in_source_id: OCHA-DAP__hdx-ckan-1748
<issue>
Shrink the spacing on the top line numbers
Proposed spacings shown here:

Modified CSS:

```css
.item-info {
    border-top: 1px solid #cccccc;
    border-bottom: 1px solid #cccccc;
    padding: 20px 0;
    margin-top: -1px;
    color: #333333;
}

.item-info .item-info-title {
    font-family: 'Gotham-Bold', sans-serif;
    font-weight: 400;
    font-size: 16px;
    letter-spacing: 0.01em;
    margin-bottom: 20px;
}

.item-info .item-info-number {
    font-family: 'Gotham-Light', sans-serif;
    font-size: 74px;
    line-height: 1;
    letter-spacing: 0.01em;
    margin-bottom: 20px;
}
```
</issue>
<code>
[start of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
1 hdx_version = 'v0.4.9'
2
[end of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the same format as the example patch shown for the first problem above.
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.4.9'
+hdx_version = 'v0.4.10'
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.4.9'\n+hdx_version = 'v0.4.10'\n", "issue": "Shrink the spacing on the top line numbers\nProposed spacings shown here:\n\n\n\nmodified css:\n\n.item-info {\nborder-top: 1px solid #cccccc;\nborder-bottom: 1px solid #cccccc;\npadding: 20px 0;\nmargin-top: -1px;\ncolor: #333333;\n}\n\n.item-info .item-info-title {\nfont-family: 'Gotham-Bold', sans-serif;\nfont-weight: 400;\nfont-size: 16px;\nletter-spacing: 0.01em;\nmargin-bottom: 20px;\n}\n\n.item-info .item-info-number {\nfont-family: 'Gotham-Light', sans-serif;\nfont-size: 74px;\nline-height: 1;\nletter-spacing: 0.01em;\nmargin-bottom: 20px;\n}\n\n", "before_files": [{"content": "hdx_version = 'v0.4.9'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 813 | 108 |
problem_id: gh_patches_debug_24875 | source: rasdani/github-patches | task_type: git_diff | in_source_id: coreproject-moe__CoreProject-Monorepo-3167
<issue>
[`Frontend`] : Move code to specific `web-component`
https://github.com/baseplate-admin/CoreProject/blob/cd436b876f4936b61397a0cc838aa88125527a78/backend/django_core/templates/anime/index.html#L123-L205
</issue>
<code>
[start of backend/django_core/apps/pages/views/anime.py]
1 from typing import TYPE_CHECKING
2
3 from django.http import HttpResponse
4 from django.shortcuts import render
5
6 from ..data.anime import (
7 anime,
8 anime_episode,
9 icons,
10 latest_animes,
11 latest_episodes,
12 my_list,
13 )
14
15 if TYPE_CHECKING:
16 from ..request import HtmxHttpRequest
17
18
19 async def anime_home_view_partial_slider_view(
20 request: "HtmxHttpRequest",
21 pk: int,
22 ) -> HttpResponse:
23 anime = latest_animes[pk]
24 next_index = (pk + 1) % len(latest_animes)
25 previous_index = (pk - 1) % len(latest_animes)
26
27 return render(
28 request,
29 "anime/_slider.html",
30 context={
31 "anime": anime,
32 "next_index": next_index,
33 "previous_index": previous_index,
34 "current_index": pk,
35 },
36 )
37
38
39 async def anime_home_view(request: "HtmxHttpRequest") -> HttpResponse:
40 if request.htmx:
41 return render(
42 request,
43 "anime/index.html",
44 context={
45 "latest_animes": latest_animes,
46 "my_list": my_list,
47 "latest_episodes": latest_episodes,
48 },
49 )
50
51 return render(
52 request,
53 "anime/_layout.html",
54 context={
55 "icons": icons,
56 "latest_animes": latest_animes,
57 "my_list": my_list,
58 "latest_episodes": latest_episodes,
59 },
60 )
61
62
63 async def anime_explore_view(request: "HtmxHttpRequest") -> HttpResponse:
64 if request.htmx:
65 return render(request, "anime/explore/index.html")
66
67 return render(request, "anime/_layout.html", context={"icons": icons})
68
69
70 async def anime_info_view(
71 request: "HtmxHttpRequest",
72 platform: str,
73 pk: int,
74 ) -> HttpResponse:
75 if request.htmx:
76 return render(
77 request,
78 "anime/info/index.html",
79 context={"anime": anime, "episode": anime_episode},
80 )
81
82 return render(request, "anime/_layout.html", context={"icons": icons})
83
84
85 async def anime_episode_view(
86 request: "HtmxHttpRequest", platform: str, mal_id: int, pk: int
87 ) -> HttpResponse:
88 if request.htmx:
89 return render(
90 request,
91 "anime/episode/index.html",
92 context={},
93 )
94
95 return render(request, "anime/_layout.html", context={"icons": icons})
96
[end of backend/django_core/apps/pages/views/anime.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the same format as the example patch shown for the first problem above.
diff --git a/backend/django_core/apps/pages/views/anime.py b/backend/django_core/apps/pages/views/anime.py
--- a/backend/django_core/apps/pages/views/anime.py
+++ b/backend/django_core/apps/pages/views/anime.py
@@ -1,3 +1,4 @@
+import json
from typing import TYPE_CHECKING
from django.http import HttpResponse
@@ -37,6 +38,9 @@
async def anime_home_view(request: "HtmxHttpRequest") -> HttpResponse:
+ # cant parse single quoted string
+ latest_episodes_json = json.dumps(latest_episodes)
+
if request.htmx:
return render(
request,
@@ -44,7 +48,7 @@
context={
"latest_animes": latest_animes,
"my_list": my_list,
- "latest_episodes": latest_episodes,
+ "latest_episodes": latest_episodes_json,
},
)
@@ -55,7 +59,7 @@
"icons": icons,
"latest_animes": latest_animes,
"my_list": my_list,
- "latest_episodes": latest_episodes,
+ "latest_episodes": latest_episodes_json,
},
)
| {"golden_diff": "diff --git a/backend/django_core/apps/pages/views/anime.py b/backend/django_core/apps/pages/views/anime.py\n--- a/backend/django_core/apps/pages/views/anime.py\n+++ b/backend/django_core/apps/pages/views/anime.py\n@@ -1,3 +1,4 @@\n+import json\n from typing import TYPE_CHECKING\n \n from django.http import HttpResponse\n@@ -37,6 +38,9 @@\n \n \n async def anime_home_view(request: \"HtmxHttpRequest\") -> HttpResponse:\n+ # cant parse single quoted string\n+ latest_episodes_json = json.dumps(latest_episodes)\n+\n if request.htmx:\n return render(\n request,\n@@ -44,7 +48,7 @@\n context={\n \"latest_animes\": latest_animes,\n \"my_list\": my_list,\n- \"latest_episodes\": latest_episodes,\n+ \"latest_episodes\": latest_episodes_json,\n },\n )\n \n@@ -55,7 +59,7 @@\n \"icons\": icons,\n \"latest_animes\": latest_animes,\n \"my_list\": my_list,\n- \"latest_episodes\": latest_episodes,\n+ \"latest_episodes\": latest_episodes_json,\n },\n )\n", "issue": "[`Frontend`] : Move code to specific `web-component`\nhttps://github.com/baseplate-admin/CoreProject/blob/cd436b876f4936b61397a0cc838aa88125527a78/backend/django_core/templates/anime/index.html#L123-L205\n", "before_files": [{"content": "from typing import TYPE_CHECKING\n\nfrom django.http import HttpResponse\nfrom django.shortcuts import render\n\nfrom ..data.anime import (\n anime,\n anime_episode,\n icons,\n latest_animes,\n latest_episodes,\n my_list,\n)\n\nif TYPE_CHECKING:\n from ..request import HtmxHttpRequest\n\n\nasync def anime_home_view_partial_slider_view(\n request: \"HtmxHttpRequest\",\n pk: int,\n) -> HttpResponse:\n anime = latest_animes[pk]\n next_index = (pk + 1) % len(latest_animes)\n previous_index = (pk - 1) % len(latest_animes)\n\n return render(\n request,\n \"anime/_slider.html\",\n context={\n \"anime\": anime,\n \"next_index\": next_index,\n \"previous_index\": previous_index,\n \"current_index\": pk,\n },\n )\n\n\nasync def anime_home_view(request: \"HtmxHttpRequest\") -> HttpResponse:\n if request.htmx:\n return render(\n request,\n \"anime/index.html\",\n context={\n \"latest_animes\": latest_animes,\n \"my_list\": my_list,\n \"latest_episodes\": latest_episodes,\n },\n )\n\n return render(\n request,\n \"anime/_layout.html\",\n context={\n \"icons\": icons,\n \"latest_animes\": latest_animes,\n \"my_list\": my_list,\n \"latest_episodes\": latest_episodes,\n },\n )\n\n\nasync def anime_explore_view(request: \"HtmxHttpRequest\") -> HttpResponse:\n if request.htmx:\n return render(request, \"anime/explore/index.html\")\n\n return render(request, \"anime/_layout.html\", context={\"icons\": icons})\n\n\nasync def anime_info_view(\n request: \"HtmxHttpRequest\",\n platform: str,\n pk: int,\n) -> HttpResponse:\n if request.htmx:\n return render(\n request,\n \"anime/info/index.html\",\n context={\"anime\": anime, \"episode\": anime_episode},\n )\n\n return render(request, \"anime/_layout.html\", context={\"icons\": icons})\n\n\nasync def anime_episode_view(\n request: \"HtmxHttpRequest\", platform: str, mal_id: int, pk: int\n) -> HttpResponse:\n if request.htmx:\n return render(\n request,\n \"anime/episode/index.html\",\n context={},\n )\n\n return render(request, \"anime/_layout.html\", context={\"icons\": icons})\n", "path": "backend/django_core/apps/pages/views/anime.py"}]} | 1,343 | 260 |
problem_id: gh_patches_debug_21401 | source: rasdani/github-patches | task_type: git_diff | in_source_id: ultrabug__py3status-551
<issue>
Runtime error (BrokenPipeError) helpers.py line 11
When restarting i3 with `i3 restart`, an error bar pops up with the message `py3status: Runtime error (BrokenPipeError) helpers.py line 11. Please try to fix this and reload i3wm (Mod+Shift+R)`.
Everything appears to be functioning and the bar still shows.
Running Ubuntu 16.04
py3status 3.1
python 3.5.2
</issue>
<code>
[start of py3status/__init__.py]
1 import locale
2 import sys
3
4 from py3status.core import Py3statusWrapper
5
6 try:
7 from setproctitle import setproctitle
8 setproctitle('py3status')
9 except ImportError:
10 pass
11
12
13 def main():
14 try:
15 locale.setlocale(locale.LC_ALL, '')
16 except locale.Error:
17 print('No locale available')
18 sys.exit(2)
19
20 py3 = None
21 try:
22 py3 = Py3statusWrapper()
23 py3.setup()
24 except KeyboardInterrupt:
25 if py3:
26 py3.notify_user('Setup interrupted (KeyboardInterrupt).')
27 sys.exit(0)
28 except Exception as e:
29 if py3:
30 py3.report_exception('Setup error')
31 else:
32 # we cannot report this Exception
33 raise e
34 sys.exit(2)
35
36 try:
37 py3.run()
38 except Exception:
39 py3.report_exception('Runtime error')
40 sys.exit(3)
41 except KeyboardInterrupt:
42 pass
43 finally:
44 py3.stop()
45 sys.exit(0)
46
47
48 if __name__ == '__main__':
49 main()
50
[end of py3status/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the same format as the example patch shown for the first problem above.
diff --git a/py3status/__init__.py b/py3status/__init__.py
--- a/py3status/__init__.py
+++ b/py3status/__init__.py
@@ -9,6 +9,13 @@
except ImportError:
pass
+try:
+ # python3
+ IOPipeError = BrokenPipeError
+except NameError:
+ # python2
+ IOPipeError = IOError
+
def main():
try:
@@ -21,9 +28,9 @@
try:
py3 = Py3statusWrapper()
py3.setup()
- except KeyboardInterrupt:
+ except (IOPipeError, KeyboardInterrupt):
if py3:
- py3.notify_user('Setup interrupted (KeyboardInterrupt).')
+ py3.notify_user('Setup interrupted')
sys.exit(0)
except Exception as e:
if py3:
@@ -35,11 +42,11 @@
try:
py3.run()
+ except (IOPipeError, KeyboardInterrupt):
+ pass
except Exception:
py3.report_exception('Runtime error')
sys.exit(3)
- except KeyboardInterrupt:
- pass
finally:
py3.stop()
sys.exit(0)
| {"golden_diff": "diff --git a/py3status/__init__.py b/py3status/__init__.py\n--- a/py3status/__init__.py\n+++ b/py3status/__init__.py\n@@ -9,6 +9,13 @@\n except ImportError:\n pass\n \n+try:\n+ # python3\n+ IOPipeError = BrokenPipeError\n+except NameError:\n+ # python2\n+ IOPipeError = IOError\n+\n \n def main():\n try:\n@@ -21,9 +28,9 @@\n try:\n py3 = Py3statusWrapper()\n py3.setup()\n- except KeyboardInterrupt:\n+ except (IOPipeError, KeyboardInterrupt):\n if py3:\n- py3.notify_user('Setup interrupted (KeyboardInterrupt).')\n+ py3.notify_user('Setup interrupted')\n sys.exit(0)\n except Exception as e:\n if py3:\n@@ -35,11 +42,11 @@\n \n try:\n py3.run()\n+ except (IOPipeError, KeyboardInterrupt):\n+ pass\n except Exception:\n py3.report_exception('Runtime error')\n sys.exit(3)\n- except KeyboardInterrupt:\n- pass\n finally:\n py3.stop()\n sys.exit(0)\n", "issue": "Runtime error (BrokenPipeError) helpers.py line 11\nWhen restarting i3 using `i3 restart`, error bar pops up with message `py3status: Runtime error (BrokenPipeError) helpers.py line 11. Please try to fix this and reload i3wm (Mod+Shift+R)`\n\nEverything appears to be functioning and the bar still shows.\n\nRunning Ubuntu 16.04\npy3status 3.1\npython 3.5.2\n\n", "before_files": [{"content": "import locale\nimport sys\n\nfrom py3status.core import Py3statusWrapper\n\ntry:\n from setproctitle import setproctitle\n setproctitle('py3status')\nexcept ImportError:\n pass\n\n\ndef main():\n try:\n locale.setlocale(locale.LC_ALL, '')\n except locale.Error:\n print('No locale available')\n sys.exit(2)\n\n py3 = None\n try:\n py3 = Py3statusWrapper()\n py3.setup()\n except KeyboardInterrupt:\n if py3:\n py3.notify_user('Setup interrupted (KeyboardInterrupt).')\n sys.exit(0)\n except Exception as e:\n if py3:\n py3.report_exception('Setup error')\n else:\n # we cannot report this Exception\n raise e\n sys.exit(2)\n\n try:\n py3.run()\n except Exception:\n py3.report_exception('Runtime error')\n sys.exit(3)\n except KeyboardInterrupt:\n pass\n finally:\n py3.stop()\n sys.exit(0)\n\n\nif __name__ == '__main__':\n main()\n", "path": "py3status/__init__.py"}]} | 974 | 277 |
problem_id: gh_patches_debug_4747 | source: rasdani/github-patches | task_type: git_diff | in_source_id: scrapy__scrapy-5006
<issue>
Remove the use of parsel.Selector._default_type
Used at https://github.com/scrapy/scrapy/blob/58ca8bbf6d1589bd0c8cc1ebda52299346f55e8a/scrapy/selector/unified.py#L72
We should stop relying on this private class variable unless there’s a good reason for it.
[Noticed](https://github.com/scrapy/parsel/pull/181/files#r562118000) while trying out [JMESPath support for Parsel](https://github.com/scrapy/parsel/pull/181) in a real life project.
</issue>
<code>
[start of scrapy/selector/unified.py]
1 """
2 XPath selectors based on lxml
3 """
4
5 from parsel import Selector as _ParselSelector
6 from scrapy.utils.trackref import object_ref
7 from scrapy.utils.python import to_bytes
8 from scrapy.http import HtmlResponse, XmlResponse
9
10
11 __all__ = ['Selector', 'SelectorList']
12
13
14 def _st(response, st):
15 if st is None:
16 return 'xml' if isinstance(response, XmlResponse) else 'html'
17 return st
18
19
20 def _response_from_text(text, st):
21 rt = XmlResponse if st == 'xml' else HtmlResponse
22 return rt(url='about:blank', encoding='utf-8',
23 body=to_bytes(text, 'utf-8'))
24
25
26 class SelectorList(_ParselSelector.selectorlist_cls, object_ref):
27 """
28 The :class:`SelectorList` class is a subclass of the builtin ``list``
29 class, which provides a few additional methods.
30 """
31
32
33 class Selector(_ParselSelector, object_ref):
34 """
35 An instance of :class:`Selector` is a wrapper over response to select
36 certain parts of its content.
37
38 ``response`` is an :class:`~scrapy.http.HtmlResponse` or an
39 :class:`~scrapy.http.XmlResponse` object that will be used for selecting
40 and extracting data.
41
42 ``text`` is a unicode string or utf-8 encoded text for cases when a
43 ``response`` isn't available. Using ``text`` and ``response`` together is
44 undefined behavior.
45
46 ``type`` defines the selector type, it can be ``"html"``, ``"xml"``
47 or ``None`` (default).
48
49 If ``type`` is ``None``, the selector automatically chooses the best type
50 based on ``response`` type (see below), or defaults to ``"html"`` in case it
51 is used together with ``text``.
52
53 If ``type`` is ``None`` and a ``response`` is passed, the selector type is
54 inferred from the response type as follows:
55
56 * ``"html"`` for :class:`~scrapy.http.HtmlResponse` type
57 * ``"xml"`` for :class:`~scrapy.http.XmlResponse` type
58 * ``"html"`` for anything else
59
60 Otherwise, if ``type`` is set, the selector type will be forced and no
61 detection will occur.
62 """
63
64 __slots__ = ['response']
65 selectorlist_cls = SelectorList
66
67 def __init__(self, response=None, text=None, type=None, root=None, **kwargs):
68 if response is not None and text is not None:
69 raise ValueError(f'{self.__class__.__name__}.__init__() received '
70 'both response and text')
71
72 st = _st(response, type or self._default_type)
73
74 if text is not None:
75 response = _response_from_text(text, st)
76
77 if response is not None:
78 text = response.text
79 kwargs.setdefault('base_url', response.url)
80
81 self.response = response
82 super().__init__(text=text, type=st, root=root, **kwargs)
83
[end of scrapy/selector/unified.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the same format as the example patch shown for the first problem above.
diff --git a/scrapy/selector/unified.py b/scrapy/selector/unified.py
--- a/scrapy/selector/unified.py
+++ b/scrapy/selector/unified.py
@@ -69,7 +69,7 @@
raise ValueError(f'{self.__class__.__name__}.__init__() received '
'both response and text')
- st = _st(response, type or self._default_type)
+ st = _st(response, type)
if text is not None:
response = _response_from_text(text, st)
| {"golden_diff": "diff --git a/scrapy/selector/unified.py b/scrapy/selector/unified.py\n--- a/scrapy/selector/unified.py\n+++ b/scrapy/selector/unified.py\n@@ -69,7 +69,7 @@\n raise ValueError(f'{self.__class__.__name__}.__init__() received '\n 'both response and text')\n \n- st = _st(response, type or self._default_type)\n+ st = _st(response, type)\n \n if text is not None:\n response = _response_from_text(text, st)\n", "issue": "Remove the use of parsel.Selector._default_type\nUsed at https://github.com/scrapy/scrapy/blob/58ca8bbf6d1589bd0c8cc1ebda52299346f55e8a/scrapy/selector/unified.py#L72\r\n\r\nWe should stop relying on this private class variable unless there\u2019s a good reason for it.\r\n\r\n[Noticed](https://github.com/scrapy/parsel/pull/181/files#r562118000) while trying out [JMESPath support for Parsel](https://github.com/scrapy/parsel/pull/181) in a real life project.\n", "before_files": [{"content": "\"\"\"\nXPath selectors based on lxml\n\"\"\"\n\nfrom parsel import Selector as _ParselSelector\nfrom scrapy.utils.trackref import object_ref\nfrom scrapy.utils.python import to_bytes\nfrom scrapy.http import HtmlResponse, XmlResponse\n\n\n__all__ = ['Selector', 'SelectorList']\n\n\ndef _st(response, st):\n if st is None:\n return 'xml' if isinstance(response, XmlResponse) else 'html'\n return st\n\n\ndef _response_from_text(text, st):\n rt = XmlResponse if st == 'xml' else HtmlResponse\n return rt(url='about:blank', encoding='utf-8',\n body=to_bytes(text, 'utf-8'))\n\n\nclass SelectorList(_ParselSelector.selectorlist_cls, object_ref):\n \"\"\"\n The :class:`SelectorList` class is a subclass of the builtin ``list``\n class, which provides a few additional methods.\n \"\"\"\n\n\nclass Selector(_ParselSelector, object_ref):\n \"\"\"\n An instance of :class:`Selector` is a wrapper over response to select\n certain parts of its content.\n\n ``response`` is an :class:`~scrapy.http.HtmlResponse` or an\n :class:`~scrapy.http.XmlResponse` object that will be used for selecting\n and extracting data.\n\n ``text`` is a unicode string or utf-8 encoded text for cases when a\n ``response`` isn't available. Using ``text`` and ``response`` together is\n undefined behavior.\n\n ``type`` defines the selector type, it can be ``\"html\"``, ``\"xml\"``\n or ``None`` (default).\n\n If ``type`` is ``None``, the selector automatically chooses the best type\n based on ``response`` type (see below), or defaults to ``\"html\"`` in case it\n is used together with ``text``.\n\n If ``type`` is ``None`` and a ``response`` is passed, the selector type is\n inferred from the response type as follows:\n\n * ``\"html\"`` for :class:`~scrapy.http.HtmlResponse` type\n * ``\"xml\"`` for :class:`~scrapy.http.XmlResponse` type\n * ``\"html\"`` for anything else\n\n Otherwise, if ``type`` is set, the selector type will be forced and no\n detection will occur.\n \"\"\"\n\n __slots__ = ['response']\n selectorlist_cls = SelectorList\n\n def __init__(self, response=None, text=None, type=None, root=None, **kwargs):\n if response is not None and text is not None:\n raise ValueError(f'{self.__class__.__name__}.__init__() received '\n 'both response and text')\n\n st = _st(response, type or self._default_type)\n\n if text is not None:\n response = _response_from_text(text, st)\n\n if response is not None:\n text = response.text\n kwargs.setdefault('base_url', response.url)\n\n self.response = response\n super().__init__(text=text, type=st, root=root, **kwargs)\n", "path": "scrapy/selector/unified.py"}]} | 1,520 | 122 |
problem_id: gh_patches_debug_19885 | source: rasdani/github-patches | task_type: git_diff | in_source_id: mlcommons__GaNDLF-498
<issue>
Per-label accuracy does not work for multiple batches
**Describe the bug**
When `batch_size > 1`, `per_label_accuracy` computation fails.
**To Reproduce**
Steps to reproduce the behavior:
1. Set `batch_size = 4` in any classification unit test
2. See error
**Expected behavior**
The function should compute multiple batches of accuracies.
**Screenshots**
N.A.
**GaNDLF Version**
<!-- Put the output of the following command:
python -c 'import GANDLF as g;print(g.__version__)'
-->
0.0.15-dev
**Desktop (please complete the following information):**
N.A.
**Additional context**
Reported by @brandon-edwards
</issue>
<code>
[start of GANDLF/metrics/regression.py]
1 """
2 All the metrics are to be called from here
3 """
4 import torch
5 from sklearn.metrics import balanced_accuracy_score
6 import numpy as np
7
8
9 def classification_accuracy(output, label, params):
10 """
11 This function computes the classification accuracy.
12
13 Args:
14 output (torch.Tensor): The output of the model.
15 label (torch.Tensor): The ground truth labels.
16 params (dict): The parameter dictionary containing training and data information.
17
18 Returns:
19 torch.Tensor: The classification accuracy.
20 """
21 if params["problem_type"] == "classification":
22 predicted_classes = torch.argmax(output, 1)
23 else:
24 predicted_classes = output
25
26 acc = torch.sum(predicted_classes == label.squeeze()) / len(label)
27 return acc
28
29
30 def balanced_acc_score(output, label, params):
31 """
32 This function computes the balanced accuracy.
33
34 Args:
35 output (torch.Tensor): The output of the model.
36 label (torch.Tensor): The ground truth labels.
37 params (dict): The parameter dictionary containing training and data information.
38
39 Returns:
40 torch.Tensor: The balanced accuracy.
41 """
42 if params["problem_type"] == "classification":
43 predicted_classes = torch.argmax(output, 1)
44 else:
45 predicted_classes = output
46
47 return torch.from_numpy(
48 np.array(balanced_accuracy_score(predicted_classes.cpu(), label.cpu()))
49 )
50
51
52 def per_label_accuracy(output, label, params):
53 """
54 This function computes the per class accuracy.
55
56 Args:
57 output (torch.Tensor): The output of the model.
58 label (torch.Tensor): The ground truth labels.
59 params (dict): The parameter dictionary containing training and data information.
60
61 Returns:
62 torch.Tensor: The per class accuracy.
63 """
64 if params["problem_type"] == "classification":
65 predicted_classes = np.array([0] * len(params["model"]["class_list"]))
66 label_cpu = np.array([0] * len(params["model"]["class_list"]))
67 predicted_classes[torch.argmax(output, 1).cpu().item()] = 1
68 label_cpu[label.cpu().item()] = 1
69 return torch.from_numpy((predicted_classes == label_cpu).astype(float))
70 else:
71 return balanced_acc_score(output, label, params)
72
[end of GANDLF/metrics/regression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the same format as the example patch shown for the first problem above.
diff --git a/GANDLF/metrics/regression.py b/GANDLF/metrics/regression.py
--- a/GANDLF/metrics/regression.py
+++ b/GANDLF/metrics/regression.py
@@ -62,10 +62,14 @@
torch.Tensor: The per class accuracy.
"""
if params["problem_type"] == "classification":
- predicted_classes = np.array([0] * len(params["model"]["class_list"]))
- label_cpu = np.array([0] * len(params["model"]["class_list"]))
- predicted_classes[torch.argmax(output, 1).cpu().item()] = 1
- label_cpu[label.cpu().item()] = 1
- return torch.from_numpy((predicted_classes == label_cpu).astype(float))
+ # ensure this works for multiple batches
+ output_accuracy = torch.zeros(len(params["model"]["class_list"]))
+ for output_batch, label_batch in zip(output, label):
+ predicted_classes = torch.Tensor([0] * len(params["model"]["class_list"]))
+ label_cpu = torch.Tensor([0] * len(params["model"]["class_list"]))
+ predicted_classes[torch.argmax(output_batch, 0).cpu().item()] = 1
+ label_cpu[label_batch.cpu().item()] = 1
+ output_accuracy += (predicted_classes == label_cpu).type(torch.float)
+ return output_accuracy / len(output)
else:
return balanced_acc_score(output, label, params)
| {"golden_diff": "diff --git a/GANDLF/metrics/regression.py b/GANDLF/metrics/regression.py\n--- a/GANDLF/metrics/regression.py\n+++ b/GANDLF/metrics/regression.py\n@@ -62,10 +62,14 @@\n torch.Tensor: The per class accuracy.\n \"\"\"\n if params[\"problem_type\"] == \"classification\":\n- predicted_classes = np.array([0] * len(params[\"model\"][\"class_list\"]))\n- label_cpu = np.array([0] * len(params[\"model\"][\"class_list\"]))\n- predicted_classes[torch.argmax(output, 1).cpu().item()] = 1\n- label_cpu[label.cpu().item()] = 1\n- return torch.from_numpy((predicted_classes == label_cpu).astype(float))\n+ # ensure this works for multiple batches\n+ output_accuracy = torch.zeros(len(params[\"model\"][\"class_list\"]))\n+ for output_batch, label_batch in zip(output, label):\n+ predicted_classes = torch.Tensor([0] * len(params[\"model\"][\"class_list\"]))\n+ label_cpu = torch.Tensor([0] * len(params[\"model\"][\"class_list\"]))\n+ predicted_classes[torch.argmax(output_batch, 0).cpu().item()] = 1\n+ label_cpu[label_batch.cpu().item()] = 1\n+ output_accuracy += (predicted_classes == label_cpu).type(torch.float)\n+ return output_accuracy / len(output)\n else:\n return balanced_acc_score(output, label, params)\n", "issue": "Per-label accuracy does not work for multiple batches\n**Describe the bug**\r\nWhen `batch_size > 1`, `per_label_accuracy` computation fails.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Set `batch_size = 4` in any classification unit test\r\n2. See error\r\n\r\n**Expected behavior**\r\nThe function should compute multiple batches of accuracies.\r\n\r\n**Screenshots**\r\nN.A.\r\n\r\n**GaNDLF Version**\r\n<!-- Put the output of the following command:\r\npython -c 'import GANDLF as g;print(g.__version__)'\r\n-->\r\n0.0.15-dev\r\n\r\n**Desktop (please complete the following information):**\r\nN.A.\r\n\r\n**Additional context**\r\nReported by @brandon-edwards\n", "before_files": [{"content": "\"\"\"\nAll the metrics are to be called from here\n\"\"\"\nimport torch\nfrom sklearn.metrics import balanced_accuracy_score\nimport numpy as np\n\n\ndef classification_accuracy(output, label, params):\n \"\"\"\n This function computes the classification accuracy.\n\n Args:\n output (torch.Tensor): The output of the model.\n label (torch.Tensor): The ground truth labels.\n params (dict): The parameter dictionary containing training and data information.\n\n Returns:\n torch.Tensor: The classification accuracy.\n \"\"\"\n if params[\"problem_type\"] == \"classification\":\n predicted_classes = torch.argmax(output, 1)\n else:\n predicted_classes = output\n\n acc = torch.sum(predicted_classes == label.squeeze()) / len(label)\n return acc\n\n\ndef balanced_acc_score(output, label, params):\n \"\"\"\n This function computes the balanced accuracy.\n\n Args:\n output (torch.Tensor): The output of the model.\n label (torch.Tensor): The ground truth labels.\n params (dict): The parameter dictionary containing training and data information.\n\n Returns:\n torch.Tensor: The balanced accuracy.\n \"\"\"\n if params[\"problem_type\"] == \"classification\":\n predicted_classes = torch.argmax(output, 1)\n else:\n predicted_classes = output\n\n return torch.from_numpy(\n np.array(balanced_accuracy_score(predicted_classes.cpu(), label.cpu()))\n )\n\n\ndef per_label_accuracy(output, label, params):\n \"\"\"\n This function computes the per class accuracy.\n\n Args:\n output (torch.Tensor): The output of the model.\n label (torch.Tensor): The ground truth labels.\n params (dict): The 
parameter dictionary containing training and data information.\n\n Returns:\n torch.Tensor: The per class accuracy.\n \"\"\"\n if params[\"problem_type\"] == \"classification\":\n predicted_classes = np.array([0] * len(params[\"model\"][\"class_list\"]))\n label_cpu = np.array([0] * len(params[\"model\"][\"class_list\"]))\n predicted_classes[torch.argmax(output, 1).cpu().item()] = 1\n label_cpu[label.cpu().item()] = 1\n return torch.from_numpy((predicted_classes == label_cpu).astype(float))\n else:\n return balanced_acc_score(output, label, params)\n", "path": "GANDLF/metrics/regression.py"}]} | 1,303 | 319 |
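The patched `per_label_accuracy` above loops over the batch dimension, accumulates one-hot hit vectors, and averages. A self-contained sketch with synthetic tensors (assuming three classes; the sample logits are made up):

```python
import torch

def per_label_accuracy(output, label, n_classes):
    # Accumulate a per-class hit vector over the batch, then average.
    acc = torch.zeros(n_classes)
    for out_b, lab_b in zip(output, label):
        pred = torch.zeros(n_classes)
        truth = torch.zeros(n_classes)
        pred[torch.argmax(out_b, 0).item()] = 1
        truth[lab_b.item()] = 1
        acc += (pred == truth).float()
    return acc / len(output)

logits = torch.tensor([[2.0, 0.1, 0.3],   # predicts class 0
                       [0.2, 0.1, 3.0]])  # predicts class 2
labels = torch.tensor([0, 1])
print(per_label_accuracy(logits, labels, 3))  # -> tensor([1., 0.5, 0.5])
```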