Dataset schema:

| column | type |
|---|---|
| problem_id | string (length 18-22) |
| source | string (1 unique value: `rasdani/github-patches`) |
| task_type | string (1 unique value: `git_diff`) |
| in_source_id | string (length 13-58) |
| prompt | string (length 1.71k-9.01k) |
| golden_diff | string (length 151-4.94k) |
| verification_info | string (length 465-11.3k) |
| num_tokens_prompt | int64 (557-2.05k) |
| num_tokens_diff | int64 (48-1.02k) |
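For illustration, rows like the ones below can be pulled with the `datasets` library. The repo id is taken from the `source` column; the split name is an assumption.

```python
from datasets import load_dataset

# Repo id from the "source" column; the "train" split name is assumed.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])  # e.g. gh_patches_debug_11125 Kinto__kinto-2108
print(row["golden_diff"].splitlines()[0])      # first line of the reference patch
```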
gh_patches_debug_11125 | rasdani/github-patches | git_diff | Kinto__kinto-2108 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
memcached cache backend does not really test memcached
@autrilla reported that when he moved Kinto to GCP, the heartbeat reported success for all backends, including cache, even though there is no `memcached`. I think the problem is here:
https://github.com/Kinto/kinto/blob/61494caae2bb8fe342f32f6a5d89f40ed2b62dff/kinto/core/cache/__init__.py#L87-L91
The heartbeat for the cache backend just sends two messages which don't expect responses. For something like memcached which communicates over UDP, these messages can "succeed" by silently going nowhere.
</issue>
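For illustration, the essence of this row's golden diff is to make the heartbeat read back what it wrote, so a fire-and-forget write that silently goes nowhere can no longer pass. A minimal sketch of that logic (not part of the original row):

```python
import random

_HEARTBEAT_DELETE_RATE = 0.5
_HEARTBEAT_KEY = "__heartbeat__"
_HEARTBEAT_TTL_SECONDS = 3600

def ping(backend):
    try:
        if random.SystemRandom().random() < _HEARTBEAT_DELETE_RATE:
            backend.delete(_HEARTBEAT_KEY)
            return backend.get(_HEARTBEAT_KEY) is None   # verify the delete round-trips
        backend.set(_HEARTBEAT_KEY, "alive", _HEARTBEAT_TTL_SECONDS)
        return backend.get(_HEARTBEAT_KEY) == "alive"    # verify the write round-trips
    except Exception:
        return False
```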
<code>
[start of kinto/core/cache/__init__.py]
1 import logging
2 import random
3
4
5 logger = logging.getLogger(__name__)
6
7
8 _HEARTBEAT_DELETE_RATE = 0.5
9 _HEARTBEAT_KEY = "__heartbeat__"
10 _HEARTBEAT_TTL_SECONDS = 3600
11
12
13 class CacheBase:
14 def __init__(self, *args, **kwargs):
15 self.prefix = kwargs["cache_prefix"]
16 self.max_size_bytes = kwargs.get("cache_max_size_bytes")
17
18 def initialize_schema(self, dry_run=False):
19 """Create every necessary objects (like tables or indices) in the
20 backend.
21
22 This is executed when the ``kinto migrate`` command is run.
23
24 :param bool dry_run: simulate instead of executing the operations.
25 """
26 raise NotImplementedError
27
28 def flush(self):
29 """Delete every values."""
30 raise NotImplementedError
31
32 def ttl(self, key):
33 """Obtain the expiration value of the specified `key`.
34
35 :param str key: key
36 :returns: number of seconds or negative if no TTL.
37 :rtype: float
38 """
39 raise NotImplementedError
40
41 def expire(self, key, ttl):
42 """Set the expiration value `ttl` for the specified `key`.
43
44 :param str key: key
45 :param float ttl: number of seconds
46 """
47 raise NotImplementedError
48
49 def set(self, key, value, ttl):
50 """Store a value with the specified `key`.
51
52 :param str key: key
53 :param str value: value to store
54 :param float ttl: expire after number of seconds
55 """
56 raise NotImplementedError
57
58 def get(self, key):
59 """Obtain the value of the specified `key`.
60
61 :param str key: key
62 :returns: the stored value or None if missing.
63 :rtype: str
64 """
65 raise NotImplementedError
66
67 def delete(self, key):
68 """Delete the value of the specified `key`.
69
70 :param str key: key
71 """
72 raise NotImplementedError
73
74
75 def heartbeat(backend):
76 def ping(request):
77 """Test that cache backend is operational.
78
79 :param request: current request object
80 :type request: :class:`~pyramid:pyramid.request.Request`
81 :returns: ``True`` is everything is ok, ``False`` otherwise.
82 :rtype: bool
83 """
84 # No specific case for readonly mode because the cache should
85 # continue to work in that mode.
86 try:
87 if random.SystemRandom().random() < _HEARTBEAT_DELETE_RATE:
88 backend.delete(_HEARTBEAT_KEY)
89 else:
90 backend.set(_HEARTBEAT_KEY, "alive", _HEARTBEAT_TTL_SECONDS)
91 return True
92 except Exception:
93 logger.exception("Heartbeat Failure")
94 return False
95
96 return ping
97
[end of kinto/core/cache/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/cache/__init__.py b/kinto/core/cache/__init__.py
--- a/kinto/core/cache/__init__.py
+++ b/kinto/core/cache/__init__.py
@@ -86,9 +86,9 @@
try:
if random.SystemRandom().random() < _HEARTBEAT_DELETE_RATE:
backend.delete(_HEARTBEAT_KEY)
- else:
- backend.set(_HEARTBEAT_KEY, "alive", _HEARTBEAT_TTL_SECONDS)
- return True
+ return backend.get(_HEARTBEAT_KEY) is None
+ backend.set(_HEARTBEAT_KEY, "alive", _HEARTBEAT_TTL_SECONDS)
+ return backend.get(_HEARTBEAT_KEY) == "alive"
except Exception:
logger.exception("Heartbeat Failure")
return False
| {"golden_diff": "diff --git a/kinto/core/cache/__init__.py b/kinto/core/cache/__init__.py\n--- a/kinto/core/cache/__init__.py\n+++ b/kinto/core/cache/__init__.py\n@@ -86,9 +86,9 @@\n try:\n if random.SystemRandom().random() < _HEARTBEAT_DELETE_RATE:\n backend.delete(_HEARTBEAT_KEY)\n- else:\n- backend.set(_HEARTBEAT_KEY, \"alive\", _HEARTBEAT_TTL_SECONDS)\n- return True\n+ return backend.get(_HEARTBEAT_KEY) is None\n+ backend.set(_HEARTBEAT_KEY, \"alive\", _HEARTBEAT_TTL_SECONDS)\n+ return backend.get(_HEARTBEAT_KEY) == \"alive\"\n except Exception:\n logger.exception(\"Heartbeat Failure\")\n return False\n", "issue": "memcached cache backend does not really test memcached\n@autrilla reported that when he moved Kinto to GCP, the heartbeat reported success for all backends, including cache, even though there is no `memcached`. I think the problem is here:\r\n\r\nhttps://github.com/Kinto/kinto/blob/61494caae2bb8fe342f32f6a5d89f40ed2b62dff/kinto/core/cache/__init__.py#L87-L91\r\n\r\nThe heartbeat for the cache backend just sends two messages which don't expect responses. For something like memcached which communicates over UDP, these messages can \"succeed\" by silently going nowhere.\nmemcached cache backend does not really test memcached\n@autrilla reported that when he moved Kinto to GCP, the heartbeat reported success for all backends, including cache, even though there is no `memcached`. I think the problem is here:\r\n\r\nhttps://github.com/Kinto/kinto/blob/61494caae2bb8fe342f32f6a5d89f40ed2b62dff/kinto/core/cache/__init__.py#L87-L91\r\n\r\nThe heartbeat for the cache backend just sends two messages which don't expect responses. For something like memcached which communicates over UDP, these messages can \"succeed\" by silently going nowhere.\n", "before_files": [{"content": "import logging\nimport random\n\n\nlogger = logging.getLogger(__name__)\n\n\n_HEARTBEAT_DELETE_RATE = 0.5\n_HEARTBEAT_KEY = \"__heartbeat__\"\n_HEARTBEAT_TTL_SECONDS = 3600\n\n\nclass CacheBase:\n def __init__(self, *args, **kwargs):\n self.prefix = kwargs[\"cache_prefix\"]\n self.max_size_bytes = kwargs.get(\"cache_max_size_bytes\")\n\n def initialize_schema(self, dry_run=False):\n \"\"\"Create every necessary objects (like tables or indices) in the\n backend.\n\n This is executed when the ``kinto migrate`` command is run.\n\n :param bool dry_run: simulate instead of executing the operations.\n \"\"\"\n raise NotImplementedError\n\n def flush(self):\n \"\"\"Delete every values.\"\"\"\n raise NotImplementedError\n\n def ttl(self, key):\n \"\"\"Obtain the expiration value of the specified `key`.\n\n :param str key: key\n :returns: number of seconds or negative if no TTL.\n :rtype: float\n \"\"\"\n raise NotImplementedError\n\n def expire(self, key, ttl):\n \"\"\"Set the expiration value `ttl` for the specified `key`.\n\n :param str key: key\n :param float ttl: number of seconds\n \"\"\"\n raise NotImplementedError\n\n def set(self, key, value, ttl):\n \"\"\"Store a value with the specified `key`.\n\n :param str key: key\n :param str value: value to store\n :param float ttl: expire after number of seconds\n \"\"\"\n raise NotImplementedError\n\n def get(self, key):\n \"\"\"Obtain the value of the specified `key`.\n\n :param str key: key\n :returns: the stored value or None if missing.\n :rtype: str\n \"\"\"\n raise NotImplementedError\n\n def delete(self, key):\n \"\"\"Delete the value of the specified `key`.\n\n :param str key: key\n \"\"\"\n raise NotImplementedError\n\n\ndef heartbeat(backend):\n def 
ping(request):\n \"\"\"Test that cache backend is operational.\n\n :param request: current request object\n :type request: :class:`~pyramid:pyramid.request.Request`\n :returns: ``True`` is everything is ok, ``False`` otherwise.\n :rtype: bool\n \"\"\"\n # No specific case for readonly mode because the cache should\n # continue to work in that mode.\n try:\n if random.SystemRandom().random() < _HEARTBEAT_DELETE_RATE:\n backend.delete(_HEARTBEAT_KEY)\n else:\n backend.set(_HEARTBEAT_KEY, \"alive\", _HEARTBEAT_TTL_SECONDS)\n return True\n except Exception:\n logger.exception(\"Heartbeat Failure\")\n return False\n\n return ping\n", "path": "kinto/core/cache/__init__.py"}]} | 1,635 | 189 |
gh_patches_debug_27587 | rasdani/github-patches | git_diff | ocadotechnology__aimmo-101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
in portal/rr, when you are redirected to home, you are redirected to AI:MMO home
The website now loads the aimmo URLs.
But now, each time the website is supposed to redirect you to the portal home, it redirects you to the AI:MMO login page instead.
Probably because both URL patterns are named the same in their respective urls.py files and the website imports both, ending up with the aimmo URLs?
</issue>
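For illustration, the name collision described above: two apps register routes under the same name, and Django's `reverse()` resolves to whichever app was loaded last. A sketch with hypothetical view names; the rename shown is the one applied in this row's golden diff.

```python
from django.conf.urls import url

# portal/urls.py (hypothetical view names)
url(r'^$', portal_home, name='home')

# players/urls.py (aimmo) - same name, so it shadows the portal route:
url(r'^$', aimmo_home, name='home')

# After the golden diff, the aimmo route gets a distinct name:
url(r'^$', aimmo_home, name='aimmo/home')
# reverse('home') now reliably points at the portal home page again.
```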
<code>
[start of players/autoconfig.py]
1 # -*- coding: utf-8 -*-
2 # Code for Life
3 #
4 # Copyright (C) 2015, Ocado Innovation Limited
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU Affero General Public License as
8 # published by the Free Software Foundation, either version 3 of the
9 # License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU Affero General Public License for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 # ADDITIONAL TERMS – Section 7 GNU General Public Licence
20 #
21 # This licence does not grant any right, title or interest in any “Ocado” logos,
22 # trade names or the trademark “Ocado” or any other trademarks or domain names
23 # owned by Ocado Innovation Limited or the Ocado group of companies or any other
24 # distinctive brand features of “Ocado” as may be secured from time to time. You
25 # must not distribute any modification of this program using the trademark
26 # “Ocado” or claim any affiliation or association with Ocado or its employees.
27 #
28 # You are not authorised to use the name Ocado (or any of its trade names) or
29 # the names of any author or contributor in advertising or for publicity purposes
30 # pertaining to the distribution of this program, without the prior written
31 # authorisation of Ocado.
32 #
33 # Any propagation, distribution or conveyance of this program must include this
34 # copyright notice and these terms. You must not misrepresent the origins of this
35 # program; modified versions of the program must be marked as such and not
36 # identified as the original program.
37 '''Players autoconfig'''
38
39 DEFAULT_SETTINGS = {
40 'AUTOCONFIG_INDEX_VIEW': 'home',
41 'STATIC_URL': '/static/',
42 }
43
44 SETTINGS = {
45 'INSTALLED_APPS': [
46 'django.contrib.auth',
47 'django.contrib.messages',
48 'django.contrib.staticfiles',
49 'django_js_reverse',
50 ],
51 'TEMPLATES': [
52 {
53 'BACKEND': 'django.template.backends.django.DjangoTemplates',
54 'APP_DIRS': True,
55 'OPTIONS': {
56 'context_processors': [
57 'django.template.context_processors.debug',
58 'django.template.context_processors.request',
59 'django.contrib.auth.context_processors.auth',
60 'django.contrib.messages.context_processors.messages',
61 ]
62 }
63 }
64 ],
65 'USE_TZ': True,
66 }
67
[end of players/autoconfig.py]
[start of players/urls.py]
1 from django.conf.urls import url
2 from django.views.generic import TemplateView
3 from django.contrib.auth.decorators import login_required
4 from django.contrib.admin.views.decorators import staff_member_required
5
6 from players import views
7
8 urlpatterns = [
9 url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='home'),
10
11 url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='program'),
12 url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='watch'),
13 url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='statistics'),
14
15 url(r'^api/code/$', staff_member_required(views.code), name='code'),
16 url(r'^api/games/$', views.games, name='games'),
17
18 url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='js_reverse'), # TODO: Pull request to make django_js_reverse.urls
19 ]
20
[end of players/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/players/autoconfig.py b/players/autoconfig.py
--- a/players/autoconfig.py
+++ b/players/autoconfig.py
@@ -37,7 +37,7 @@
'''Players autoconfig'''
DEFAULT_SETTINGS = {
- 'AUTOCONFIG_INDEX_VIEW': 'home',
+ 'AUTOCONFIG_INDEX_VIEW': 'aimmo/home',
'STATIC_URL': '/static/',
}
diff --git a/players/urls.py b/players/urls.py
--- a/players/urls.py
+++ b/players/urls.py
@@ -6,14 +6,14 @@
from players import views
urlpatterns = [
- url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='home'),
+ url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='aimmo/home'),
- url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='program'),
- url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='watch'),
- url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='statistics'),
+ url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='aimmo/program'),
+ url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='aimmo/watch'),
+ url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='aimmo/statistics'),
- url(r'^api/code/$', staff_member_required(views.code), name='code'),
- url(r'^api/games/$', views.games, name='games'),
+ url(r'^api/code/$', staff_member_required(views.code), name='aimmo/code'),
+ url(r'^api/games/$', views.games, name='aimmo/games'),
- url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='js_reverse'), # TODO: Pull request to make django_js_reverse.urls
+ url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='aimmo/js_reverse'), # TODO: Pull request to make django_js_reverse.urls
]
| {"golden_diff": "diff --git a/players/autoconfig.py b/players/autoconfig.py\n--- a/players/autoconfig.py\n+++ b/players/autoconfig.py\n@@ -37,7 +37,7 @@\n '''Players autoconfig'''\n \n DEFAULT_SETTINGS = {\n- 'AUTOCONFIG_INDEX_VIEW': 'home',\n+ 'AUTOCONFIG_INDEX_VIEW': 'aimmo/home',\n 'STATIC_URL': '/static/',\n }\n \ndiff --git a/players/urls.py b/players/urls.py\n--- a/players/urls.py\n+++ b/players/urls.py\n@@ -6,14 +6,14 @@\n from players import views\n \n urlpatterns = [\n- url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='home'),\n+ url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='aimmo/home'),\n \n- url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='program'),\n- url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='watch'),\n- url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='statistics'),\n+ url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='aimmo/program'),\n+ url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='aimmo/watch'),\n+ url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='aimmo/statistics'),\n \n- url(r'^api/code/$', staff_member_required(views.code), name='code'),\n- url(r'^api/games/$', views.games, name='games'),\n+ url(r'^api/code/$', staff_member_required(views.code), name='aimmo/code'),\n+ url(r'^api/games/$', views.games, name='aimmo/games'),\n \n- url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='js_reverse'), # TODO: Pull request to make django_js_reverse.urls\n+ url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='aimmo/js_reverse'), # TODO: Pull request to make django_js_reverse.urls\n ]\n", "issue": "in portal/rr, when you are redirected to home, you are redirected to AI:MMO home\nThe website now loads the aimmo urls.\nBut now each time the website is supposed to redirect you to the portal home, it redirects to the AI:MMO login page.\nProbably because both urls are named the same in their respective urls.py and the website imports both, finishing with aimmo urls?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Code for Life\n#\n# Copyright (C) 2015, Ocado Innovation Limited\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n# ADDITIONAL TERMS \u2013 Section 7 GNU General Public Licence\n#\n# This licence does not grant any right, title or interest in any \u201cOcado\u201d logos,\n# trade names or the trademark \u201cOcado\u201d or any other trademarks or domain names\n# owned by Ocado Innovation Limited or the Ocado group of companies or any other\n# distinctive brand features of \u201cOcado\u201d as may be secured from time to time. You\n# must not distribute any modification of this program using the trademark\n# \u201cOcado\u201d or claim any affiliation or association with Ocado or its employees.\n#\n# You are not authorised to use the name Ocado (or any of its trade names) or\n# the names of any author or contributor in advertising or for publicity purposes\n# pertaining to the distribution of this program, without the prior written\n# authorisation of Ocado.\n#\n# Any propagation, distribution or conveyance of this program must include this\n# copyright notice and these terms. You must not misrepresent the origins of this\n# program; modified versions of the program must be marked as such and not\n# identified as the original program.\n'''Players autoconfig'''\n\nDEFAULT_SETTINGS = {\n 'AUTOCONFIG_INDEX_VIEW': 'home',\n 'STATIC_URL': '/static/',\n}\n\nSETTINGS = {\n 'INSTALLED_APPS': [\n 'django.contrib.auth',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django_js_reverse',\n ],\n 'TEMPLATES': [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ]\n }\n }\n ],\n 'USE_TZ': True,\n}\n", "path": "players/autoconfig.py"}, {"content": "from django.conf.urls import url\nfrom django.views.generic import TemplateView\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.admin.views.decorators import staff_member_required\n\nfrom players import views\n\nurlpatterns = [\n url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='home'),\n\n url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='program'),\n url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='watch'),\n url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='statistics'),\n\n url(r'^api/code/$', staff_member_required(views.code), name='code'),\n url(r'^api/games/$', views.games, name='games'),\n\n url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='js_reverse'), # TODO: Pull request to make django_js_reverse.urls\n]\n", "path": "players/urls.py"}]} | 1,574 | 503 |
gh_patches_debug_33169 | rasdani/github-patches | git_diff | streamlink__streamlink-2388 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Playtv uk
playtv.fr works fine; is there any possibility of supporting http://uk.play.tv/ as well? It looks like both the French and UK sites are owned by the same company. Since Streamlink already supports playtv.fr, I hope support for the UK site can be added.
http://uk.play.tv/live-tv/363/pick-tv/
http://uk.play.tv/live-tv/752/itv4-1/
http://uk.play.tv/live-tv/1106/itv3-1/
http://uk.play.tv/live-tv/1105/itv-1/
</issue>
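For illustration, why the UK URLs fail against the plugin's current pattern and how the widened pattern from this row's golden diff accepts them. Both regexes are copied from the code and diff below.

```python
import re

old = re.compile(r'http://(?:playtv\.fr/television|play\.tv/live-tv/\d+)/(?P<channel>[^/]+)/?')
new = re.compile(r'https?://(?:playtv\.fr/television|(:?\w+\.)?play\.tv/live-tv/\d+)/(?P<channel>[^/]+)/?')

url = 'http://uk.play.tv/live-tv/363/pick-tv/'
assert old.match(url) is None                        # the "uk." subdomain breaks the old pattern
assert new.match(url).group('channel') == 'pick-tv'  # the optional subdomain group fixes it
```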
<code>
[start of src/streamlink/plugins/playtv.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import validate
5 from streamlink.stream import HDSStream, HLSStream
6
7
8 class PlayTV(Plugin):
9 FORMATS_URL = 'http://playtv.fr/player/initialize/{0}/'
10 API_URL = 'http://playtv.fr/player/play/{0}/?format={1}&language={2}&bitrate={3}'
11
12 _url_re = re.compile(r'http://(?:playtv\.fr/television|play\.tv/live-tv/\d+)/(?P<channel>[^/]+)/?')
13
14 _formats_schema = validate.Schema({
15 'streams': validate.any(
16 [],
17 {
18 validate.text: validate.Schema({
19 validate.text: {
20 'bitrates': validate.all([
21 validate.Schema({
22 'value': int
23 })
24 ])
25 }
26 })
27 }
28 )
29 })
30 _api_schema = validate.Schema({
31 'url': validate.url()
32 })
33
34 @classmethod
35 def can_handle_url(cls, url):
36 return PlayTV._url_re.match(url)
37
38 def _get_streams(self):
39 match = self._url_re.match(self.url)
40 channel = match.group('channel')
41
42 res = self.session.http.get(self.FORMATS_URL.format(channel))
43 streams = self.session.http.json(res, schema=self._formats_schema)['streams']
44 if streams == []:
45 self.logger.error('Channel may be geo-restricted, not directly provided by PlayTV or not freely available')
46 return
47
48 for language in streams:
49 for protocol, bitrates in list(streams[language].items()):
50 # - Ignore non-supported protocols (RTSP, DASH)
51 # - Ignore deprecated Flash (RTMPE/HDS) streams (PlayTV doesn't provide anymore a Flash player)
52 if protocol in ['rtsp', 'flash', 'dash', 'hds']:
53 continue
54
55 for bitrate in bitrates['bitrates']:
56 if bitrate['value'] == 0:
57 continue
58 api_url = self.API_URL.format(channel, protocol, language, bitrate['value'])
59 res = self.session.http.get(api_url)
60 video_url = self.session.http.json(res, schema=self._api_schema)['url']
61 bs = '{0}k'.format(bitrate['value'])
62
63 if protocol == 'hls':
64 for _, stream in HLSStream.parse_variant_playlist(self.session, video_url).items():
65 yield bs, stream
66 elif protocol == 'hds':
67 for _, stream in HDSStream.parse_manifest(self.session, video_url).items():
68 yield bs, stream
69
70
71 __plugin__ = PlayTV
72
[end of src/streamlink/plugins/playtv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/playtv.py b/src/streamlink/plugins/playtv.py
--- a/src/streamlink/plugins/playtv.py
+++ b/src/streamlink/plugins/playtv.py
@@ -1,3 +1,5 @@
+import base64
+import json
import re
from streamlink.plugin import Plugin
@@ -5,11 +7,17 @@
from streamlink.stream import HDSStream, HLSStream
+def jwt_decode(token):
+ info, payload, sig = token.split(".")
+ data = base64.urlsafe_b64decode(payload + '=' * (-len(payload) % 4))
+ return json.loads(data)
+
+
class PlayTV(Plugin):
- FORMATS_URL = 'http://playtv.fr/player/initialize/{0}/'
- API_URL = 'http://playtv.fr/player/play/{0}/?format={1}&language={2}&bitrate={3}'
+ FORMATS_URL = 'https://playtv.fr/player/initialize/{0}/'
+ API_URL = 'https://playtv.fr/player/play/{0}/?format={1}&language={2}&bitrate={3}'
- _url_re = re.compile(r'http://(?:playtv\.fr/television|play\.tv/live-tv/\d+)/(?P<channel>[^/]+)/?')
+ _url_re = re.compile(r'https?://(?:playtv\.fr/television|(:?\w+\.)?play\.tv/live-tv/\d+)/(?P<channel>[^/]+)/?')
_formats_schema = validate.Schema({
'streams': validate.any(
@@ -27,9 +35,13 @@
}
)
})
- _api_schema = validate.Schema({
- 'url': validate.url()
- })
+
+ _api_schema = validate.Schema(
+ validate.transform(lambda x: jwt_decode(x)),
+ {
+ 'url': validate.url()
+ }
+ )
@classmethod
def can_handle_url(cls, url):
@@ -57,7 +69,7 @@
continue
api_url = self.API_URL.format(channel, protocol, language, bitrate['value'])
res = self.session.http.get(api_url)
- video_url = self.session.http.json(res, schema=self._api_schema)['url']
+ video_url = self._api_schema.validate(res.text)['url']
bs = '{0}k'.format(bitrate['value'])
if protocol == 'hls':
| {"golden_diff": "diff --git a/src/streamlink/plugins/playtv.py b/src/streamlink/plugins/playtv.py\n--- a/src/streamlink/plugins/playtv.py\n+++ b/src/streamlink/plugins/playtv.py\n@@ -1,3 +1,5 @@\n+import base64\n+import json\n import re\n \n from streamlink.plugin import Plugin\n@@ -5,11 +7,17 @@\n from streamlink.stream import HDSStream, HLSStream\n \n \n+def jwt_decode(token):\n+ info, payload, sig = token.split(\".\")\n+ data = base64.urlsafe_b64decode(payload + '=' * (-len(payload) % 4))\n+ return json.loads(data)\n+\n+\n class PlayTV(Plugin):\n- FORMATS_URL = 'http://playtv.fr/player/initialize/{0}/'\n- API_URL = 'http://playtv.fr/player/play/{0}/?format={1}&language={2}&bitrate={3}'\n+ FORMATS_URL = 'https://playtv.fr/player/initialize/{0}/'\n+ API_URL = 'https://playtv.fr/player/play/{0}/?format={1}&language={2}&bitrate={3}'\n \n- _url_re = re.compile(r'http://(?:playtv\\.fr/television|play\\.tv/live-tv/\\d+)/(?P<channel>[^/]+)/?')\n+ _url_re = re.compile(r'https?://(?:playtv\\.fr/television|(:?\\w+\\.)?play\\.tv/live-tv/\\d+)/(?P<channel>[^/]+)/?')\n \n _formats_schema = validate.Schema({\n 'streams': validate.any(\n@@ -27,9 +35,13 @@\n }\n )\n })\n- _api_schema = validate.Schema({\n- 'url': validate.url()\n- })\n+\n+ _api_schema = validate.Schema(\n+ validate.transform(lambda x: jwt_decode(x)),\n+ {\n+ 'url': validate.url()\n+ }\n+ )\n \n @classmethod\n def can_handle_url(cls, url):\n@@ -57,7 +69,7 @@\n continue\n api_url = self.API_URL.format(channel, protocol, language, bitrate['value'])\n res = self.session.http.get(api_url)\n- video_url = self.session.http.json(res, schema=self._api_schema)['url']\n+ video_url = self._api_schema.validate(res.text)['url']\n bs = '{0}k'.format(bitrate['value'])\n \n if protocol == 'hls':\n", "issue": "Playtv uk\nplaytv.fr working fine is there any possibility to run http://uk.play.tv/ looks like both sites france and uk owned by same company. As streamlink support tvplay.fr. hope u get support for uk playtv.\r\nhttp://uk.play.tv/live-tv/363/pick-tv/\r\nhttp://uk.play.tv/live-tv/752/itv4-1/\r\nhttp://uk.play.tv/live-tv/1106/itv3-1/\r\nhttp://uk.play.tv/live-tv/1105/itv-1/\nPlaytv uk\nplaytv.fr working fine is there any possibility to run http://uk.play.tv/ looks like both sites france and uk owned by same company. As streamlink support tvplay.fr. 
hope u get support for uk playtv.\r\nhttp://uk.play.tv/live-tv/363/pick-tv/\r\nhttp://uk.play.tv/live-tv/752/itv4-1/\r\nhttp://uk.play.tv/live-tv/1106/itv3-1/\r\nhttp://uk.play.tv/live-tv/1105/itv-1/\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HDSStream, HLSStream\n\n\nclass PlayTV(Plugin):\n FORMATS_URL = 'http://playtv.fr/player/initialize/{0}/'\n API_URL = 'http://playtv.fr/player/play/{0}/?format={1}&language={2}&bitrate={3}'\n\n _url_re = re.compile(r'http://(?:playtv\\.fr/television|play\\.tv/live-tv/\\d+)/(?P<channel>[^/]+)/?')\n\n _formats_schema = validate.Schema({\n 'streams': validate.any(\n [],\n {\n validate.text: validate.Schema({\n validate.text: {\n 'bitrates': validate.all([\n validate.Schema({\n 'value': int\n })\n ])\n }\n })\n }\n )\n })\n _api_schema = validate.Schema({\n 'url': validate.url()\n })\n\n @classmethod\n def can_handle_url(cls, url):\n return PlayTV._url_re.match(url)\n\n def _get_streams(self):\n match = self._url_re.match(self.url)\n channel = match.group('channel')\n\n res = self.session.http.get(self.FORMATS_URL.format(channel))\n streams = self.session.http.json(res, schema=self._formats_schema)['streams']\n if streams == []:\n self.logger.error('Channel may be geo-restricted, not directly provided by PlayTV or not freely available')\n return\n\n for language in streams:\n for protocol, bitrates in list(streams[language].items()):\n # - Ignore non-supported protocols (RTSP, DASH)\n # - Ignore deprecated Flash (RTMPE/HDS) streams (PlayTV doesn't provide anymore a Flash player)\n if protocol in ['rtsp', 'flash', 'dash', 'hds']:\n continue\n\n for bitrate in bitrates['bitrates']:\n if bitrate['value'] == 0:\n continue\n api_url = self.API_URL.format(channel, protocol, language, bitrate['value'])\n res = self.session.http.get(api_url)\n video_url = self.session.http.json(res, schema=self._api_schema)['url']\n bs = '{0}k'.format(bitrate['value'])\n\n if protocol == 'hls':\n for _, stream in HLSStream.parse_variant_playlist(self.session, video_url).items():\n yield bs, stream\n elif protocol == 'hds':\n for _, stream in HDSStream.parse_manifest(self.session, video_url).items():\n yield bs, stream\n\n\n__plugin__ = PlayTV\n", "path": "src/streamlink/plugins/playtv.py"}]} | 1,491 | 547 |
gh_patches_debug_3132 | rasdani/github-patches | git_diff | pantsbuild__pants-6037 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
contrib go plugin not able to recognize meta tag if meta ends with />
The regex only recognizes `<meta xxxxxxxxxxxxx >` but not `<meta xxxxxxxxxx />`.
</issue>
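For illustration, the one-character fix from this row's golden diff: allowing an optional `/` before the closing `>` lets both tag styles match. The pattern here is a simplified sketch; the real one (see the code below) also captures the root, vcs and url fields.

```python
import re

META = re.compile(r"<meta\s+name=['\"]go-import['\"][^>]*/?>")

assert META.search('<meta name="go-import" content="example.org git https://example.org/repo">')
assert META.search('<meta name="go-import" content="example.org git https://example.org/repo" />')
```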
<code>
[start of contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py]
1 # coding=utf-8
2 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
6 unicode_literals, with_statement)
7
8 import re
9
10 import requests
11 from pants.subsystem.subsystem import Subsystem
12 from pants.util.memo import memoized_method
13
14 from pants.contrib.go.subsystems.imported_repo import ImportedRepo
15
16
17 class GoImportMetaTagReader(Subsystem):
18 """Implements a reader for the <meta name="go-import"> protocol.
19
20 See https://golang.org/cmd/go/#hdr-Remote_import_paths .
21 """
22 options_scope = 'go-import-metatag-reader'
23
24 @classmethod
25 def register_options(cls, register):
26 super(GoImportMetaTagReader, cls).register_options(register)
27 register('--retries', type=int, default=1, advanced=True,
28 help='How many times to retry when fetching meta tags.')
29
30 _META_IMPORT_REGEX = re.compile(r"""
31 <meta
32 \s+
33 name=['"]go-import['"]
34 \s+
35 content=['"](?P<root>[^\s]+)\s+(?P<vcs>[^\s]+)\s+(?P<url>[^\s]+)['"]
36 \s*
37 >""", flags=re.VERBOSE)
38
39 @classmethod
40 def find_meta_tags(cls, page_html):
41 """Returns the content of the meta tag if found inside of the provided HTML."""
42
43 return cls._META_IMPORT_REGEX.findall(page_html)
44
45 @memoized_method
46 def get_imported_repo(self, import_path):
47 """Looks for a go-import meta tag for the provided import_path.
48
49 Returns an ImportedRepo instance with the information in the meta tag,
50 or None if no go-import meta tag is found.
51 """
52 try:
53 session = requests.session()
54 # TODO: Support https with (optional) fallback to http, as Go does.
55 # See https://github.com/pantsbuild/pants/issues/3503.
56 session.mount("http://",
57 requests.adapters.HTTPAdapter(max_retries=self.get_options().retries))
58 page_data = session.get('http://{import_path}?go-get=1'.format(import_path=import_path))
59 except requests.ConnectionError:
60 return None
61
62 if not page_data:
63 return None
64
65 # Return the first match, rather than doing some kind of longest prefix search.
66 # Hopefully no one returns multiple valid go-import meta tags.
67 for (root, vcs, url) in self.find_meta_tags(page_data.text):
68 if root and vcs and url:
69 # Check to make sure returned root is an exact match to the provided import path. If it is
70 # not then run a recursive check on the returned and return the values provided by that call.
71 if root == import_path:
72 return ImportedRepo(root, vcs, url)
73 elif import_path.startswith(root):
74 return self.get_imported_repo(root)
75
76 return None
77
[end of contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py b/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py
--- a/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py
+++ b/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py
@@ -34,7 +34,7 @@
\s+
content=['"](?P<root>[^\s]+)\s+(?P<vcs>[^\s]+)\s+(?P<url>[^\s]+)['"]
\s*
- >""", flags=re.VERBOSE)
+ /?>""", flags=re.VERBOSE)
@classmethod
def find_meta_tags(cls, page_html):
| {"golden_diff": "diff --git a/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py b/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py\n--- a/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py\n+++ b/contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py\n@@ -34,7 +34,7 @@\n \\s+\n content=['\"](?P<root>[^\\s]+)\\s+(?P<vcs>[^\\s]+)\\s+(?P<url>[^\\s]+)['\"]\n \\s*\n- >\"\"\", flags=re.VERBOSE)\n+ /?>\"\"\", flags=re.VERBOSE)\n \n @classmethod\n def find_meta_tags(cls, page_html):\n", "issue": "contrib go plugin not able to recognize meta tag if meta ends with />\nThe regex only recognize `<meta xxxxxxxxxxxxx >` but not `<meta xxxxxxxxxx />`.\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_statement)\n\nimport re\n\nimport requests\nfrom pants.subsystem.subsystem import Subsystem\nfrom pants.util.memo import memoized_method\n\nfrom pants.contrib.go.subsystems.imported_repo import ImportedRepo\n\n\nclass GoImportMetaTagReader(Subsystem):\n \"\"\"Implements a reader for the <meta name=\"go-import\"> protocol.\n\n See https://golang.org/cmd/go/#hdr-Remote_import_paths .\n \"\"\"\n options_scope = 'go-import-metatag-reader'\n\n @classmethod\n def register_options(cls, register):\n super(GoImportMetaTagReader, cls).register_options(register)\n register('--retries', type=int, default=1, advanced=True,\n help='How many times to retry when fetching meta tags.')\n\n _META_IMPORT_REGEX = re.compile(r\"\"\"\n <meta\n \\s+\n name=['\"]go-import['\"]\n \\s+\n content=['\"](?P<root>[^\\s]+)\\s+(?P<vcs>[^\\s]+)\\s+(?P<url>[^\\s]+)['\"]\n \\s*\n >\"\"\", flags=re.VERBOSE)\n\n @classmethod\n def find_meta_tags(cls, page_html):\n \"\"\"Returns the content of the meta tag if found inside of the provided HTML.\"\"\"\n\n return cls._META_IMPORT_REGEX.findall(page_html)\n\n @memoized_method\n def get_imported_repo(self, import_path):\n \"\"\"Looks for a go-import meta tag for the provided import_path.\n\n Returns an ImportedRepo instance with the information in the meta tag,\n or None if no go-import meta tag is found.\n \"\"\"\n try:\n session = requests.session()\n # TODO: Support https with (optional) fallback to http, as Go does.\n # See https://github.com/pantsbuild/pants/issues/3503.\n session.mount(\"http://\",\n requests.adapters.HTTPAdapter(max_retries=self.get_options().retries))\n page_data = session.get('http://{import_path}?go-get=1'.format(import_path=import_path))\n except requests.ConnectionError:\n return None\n\n if not page_data:\n return None\n\n # Return the first match, rather than doing some kind of longest prefix search.\n # Hopefully no one returns multiple valid go-import meta tags.\n for (root, vcs, url) in self.find_meta_tags(page_data.text):\n if root and vcs and url:\n # Check to make sure returned root is an exact match to the provided import path. If it is\n # not then run a recursive check on the returned and return the values provided by that call.\n if root == import_path:\n return ImportedRepo(root, vcs, url)\n elif import_path.startswith(root):\n return self.get_imported_repo(root)\n\n return None\n", "path": "contrib/go/src/python/pants/contrib/go/subsystems/go_import_meta_tag_reader.py"}]} | 1,400 | 177 |
gh_patches_debug_18605 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-287 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for running bwctest.sh
</issue>
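The files in this row only show generic helpers; a bwctest run would presumably be driven through the `execute()` function defined below in `src/system/execute.py`. A hypothetical call using only the signature shown there (the import path and script location are assumptions):

```python
from system.execute import execute  # assumed import path

# Hypothetical invocation of a component's bwctest.sh via the helper.
returncode, stdout, stderr = execute("./bwctest.sh", dir="/path/to/component")
if returncode != 0:
    print(stderr)
```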
<code>
[start of bundle-workflow/src/paths/tree_walker.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import os
8
9
10 def walk(root):
11 print(f'Walking tree from {root}')
12 for dir, dirs, files in os.walk(root):
13 for file_name in files:
14 absolute_path = os.path.join(dir, file_name)
15 relative_path = os.path.relpath(absolute_path, root)
16 yield (os.path.realpath(absolute_path), relative_path)
17
[end of bundle-workflow/src/paths/tree_walker.py]
[start of bundle-workflow/src/system/execute.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import subprocess
8
9
10 def execute(command, dir, capture=True, raise_on_failure=True):
11 """
12 Execute a shell command inside a directory.
13 :param command: The shell command to execute.
14 :param dir: The full path to the directory that the command should be executed in.
15 :returns a tuple containing the exit code, stdout, and stderr.
16 """
17 print(f'Executing "{command}" in {dir}')
18 result = subprocess.run(command, cwd=dir, shell=True, capture_output=capture, text=True)
19 if raise_on_failure:
20 result.check_returncode()
21 return (result.returncode, result.stdout, result.stderr)
22
[end of bundle-workflow/src/system/execute.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bundle-workflow/src/paths/tree_walker.py b/bundle-workflow/src/paths/tree_walker.py
--- a/bundle-workflow/src/paths/tree_walker.py
+++ b/bundle-workflow/src/paths/tree_walker.py
@@ -8,7 +8,7 @@
def walk(root):
- print(f'Walking tree from {root}')
+ print(f"Walking tree from {root}")
for dir, dirs, files in os.walk(root):
for file_name in files:
absolute_path = os.path.join(dir, file_name)
diff --git a/bundle-workflow/src/system/execute.py b/bundle-workflow/src/system/execute.py
--- a/bundle-workflow/src/system/execute.py
+++ b/bundle-workflow/src/system/execute.py
@@ -15,7 +15,9 @@
:returns a tuple containing the exit code, stdout, and stderr.
"""
print(f'Executing "{command}" in {dir}')
- result = subprocess.run(command, cwd=dir, shell=True, capture_output=capture, text=True)
+ result = subprocess.run(
+ command, cwd=dir, shell=True, capture_output=capture, text=True
+ )
if raise_on_failure:
result.check_returncode()
return (result.returncode, result.stdout, result.stderr)
| {"golden_diff": "diff --git a/bundle-workflow/src/paths/tree_walker.py b/bundle-workflow/src/paths/tree_walker.py\n--- a/bundle-workflow/src/paths/tree_walker.py\n+++ b/bundle-workflow/src/paths/tree_walker.py\n@@ -8,7 +8,7 @@\n \n \n def walk(root):\n- print(f'Walking tree from {root}')\n+ print(f\"Walking tree from {root}\")\n for dir, dirs, files in os.walk(root):\n for file_name in files:\n absolute_path = os.path.join(dir, file_name)\ndiff --git a/bundle-workflow/src/system/execute.py b/bundle-workflow/src/system/execute.py\n--- a/bundle-workflow/src/system/execute.py\n+++ b/bundle-workflow/src/system/execute.py\n@@ -15,7 +15,9 @@\n :returns a tuple containing the exit code, stdout, and stderr.\n \"\"\"\n print(f'Executing \"{command}\" in {dir}')\n- result = subprocess.run(command, cwd=dir, shell=True, capture_output=capture, text=True)\n+ result = subprocess.run(\n+ command, cwd=dir, shell=True, capture_output=capture, text=True\n+ )\n if raise_on_failure:\n result.check_returncode()\n return (result.returncode, result.stdout, result.stderr)\n", "issue": "Add support for running bwctest.sh\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\ndef walk(root):\n print(f'Walking tree from {root}')\n for dir, dirs, files in os.walk(root):\n for file_name in files:\n absolute_path = os.path.join(dir, file_name)\n relative_path = os.path.relpath(absolute_path, root)\n yield (os.path.realpath(absolute_path), relative_path)\n", "path": "bundle-workflow/src/paths/tree_walker.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport subprocess\n\n\ndef execute(command, dir, capture=True, raise_on_failure=True):\n \"\"\"\n Execute a shell command inside a directory.\n :param command: The shell command to execute.\n :param dir: The full path to the directory that the command should be executed in.\n :returns a tuple containing the exit code, stdout, and stderr.\n \"\"\"\n print(f'Executing \"{command}\" in {dir}')\n result = subprocess.run(command, cwd=dir, shell=True, capture_output=capture, text=True)\n if raise_on_failure:\n result.check_returncode()\n return (result.returncode, result.stdout, result.stderr)\n", "path": "bundle-workflow/src/system/execute.py"}]} | 940 | 291 |
gh_patches_debug_28976 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-2415 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace `DatasetModuleNotInstalledError` errors with `DatasetWithScriptNotSupportedError`
We should never get a `DatasetModuleNotInstalledError`, because a `DatasetWithScriptNotSupportedError` should already have been raised before reaching that point.
See https://github.com/huggingface/datasets-server/issues/1067#issuecomment-1924305954
</issue>
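For illustration, the handler ordering that this row's golden diff establishes: the `trust_remote_code` `ValueError` is inspected before the generic handlers, so script-based datasets surface `DatasetWithScriptNotSupportedError`, and `DatasetModuleNotInstalledError` is only reachable for allow-listed datasets. Condensed sketch of the diff (names come from the file below):

```python
try:
    configs = get_dataset_config_names(path=dataset, token=hf_token)
except ValueError as err:
    if "trust_remote_code" in str(err):  # checked first
        raise DatasetWithScriptNotSupportedError(
            "The dataset viewer doesn't support this dataset because it runs arbitrary python code."
        ) from err
    raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
except ImportError as err:
    # should now only happen for datasets on the script allow list
    raise DatasetModuleNotInstalledError(
        "The dataset tries to import a module that is not installed.", cause=err
    ) from err
except Exception as err:
    raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
```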
<code>
[start of services/worker/src/worker/job_runners/dataset/config_names.py]
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 import logging
5 from typing import Optional
6
7 from datasets import get_dataset_config_names
8 from datasets.data_files import EmptyDatasetError as _EmptyDatasetError
9 from libcommon.exceptions import (
10 ConfigNamesError,
11 DatasetModuleNotInstalledError,
12 DatasetWithScriptNotSupportedError,
13 DatasetWithTooManyConfigsError,
14 EmptyDatasetError,
15 )
16
17 from worker.dtos import CompleteJobResult, ConfigNameItem, DatasetConfigNamesResponse
18 from worker.job_runners.dataset.dataset_job_runner import (
19 DatasetJobRunnerWithDatasetsCache,
20 )
21 from worker.utils import resolve_trust_remote_code
22
23
24 def compute_config_names_response(
25 dataset: str,
26 max_number: int,
27 dataset_scripts_allow_list: list[str],
28 hf_token: Optional[str] = None,
29 ) -> DatasetConfigNamesResponse:
30 """
31 Get the response of 'dataset-config-names' for one specific dataset on huggingface.co.
32 Dataset can be gated if you pass an acceptable token.
33 It is assumed that the dataset exists and can be accessed using the token.
34
35 Args:
36 dataset (`str`):
37 A namespace (user or an organization) and a repo name separated by a `/`.
38 max_number (`int`):
39 The maximum number of configs for a dataset.
40 dataset_scripts_allow_list (`list[str]`):
41 List of datasets for which we support dataset scripts.
42 Unix shell-style wildcards also work in the dataset name for namespaced datasets,
43 for example `some_namespace/*` to refer to all the datasets in the `some_namespace` namespace.
44 The keyword `{{ALL_DATASETS_WITH_NO_NAMESPACE}}` refers to all the datasets without namespace.
45 hf_token (`str`, *optional*):
46 An authentication token (See https://huggingface.co/settings/token)
47
48 Raises:
49 [~`libcommon.exceptions.EmptyDatasetError`]:
50 The dataset is empty.
51 [~`libcommon.exceptions.DatasetModuleNotInstalledError`]:
52 The dataset tries to import a module that is not installed.
53 [~`libcommon.exceptions.ConfigNamesError`]:
54 If the list of configs could not be obtained using the datasets library.
55 [~`libcommon.exceptions.DatasetWithScriptNotSupportedError`]:
56 If the dataset has a dataset script and is not in the allow list.
57
58 Returns:
59 `DatasetConfigNamesResponse`: An object with the list of config names.
60 """
61 logging.info(f"get 'dateset-config-names' for {dataset=}")
62 # get the list of splits in streaming mode
63 try:
64 config_name_items: list[ConfigNameItem] = [
65 {"dataset": dataset, "config": str(config)}
66 for config in sorted(
67 get_dataset_config_names(
68 path=dataset,
69 token=hf_token,
70 trust_remote_code=resolve_trust_remote_code(
71 dataset=dataset, allow_list=dataset_scripts_allow_list
72 ),
73 )
74 )
75 ]
76 except _EmptyDatasetError as err:
77 raise EmptyDatasetError("The dataset is empty.", cause=err) from err
78 except ImportError as err:
79 raise DatasetModuleNotInstalledError(
80 "The dataset tries to import a module that is not installed.", cause=err
81 ) from err
82 except Exception as err:
83 if isinstance(err, ValueError) and "trust_remote_code" in str(err):
84 raise DatasetWithScriptNotSupportedError(
85 "The dataset viewer doesn't support this dataset because it runs "
86 "arbitrary python code. Please open a discussion in the discussion tab "
87 "if you think this is an error and tag @lhoestq and @severo."
88 ) from err
89 raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
90
91 number_of_configs = len(config_name_items)
92 if number_of_configs > max_number:
93 raise DatasetWithTooManyConfigsError(
94 f"The maximum number of configs allowed is {max_number}, dataset has {number_of_configs} configs."
95 )
96
97 return DatasetConfigNamesResponse(config_names=config_name_items)
98
99
100 class DatasetConfigNamesJobRunner(DatasetJobRunnerWithDatasetsCache):
101 @staticmethod
102 def get_job_type() -> str:
103 return "dataset-config-names"
104
105 def compute(self) -> CompleteJobResult:
106 return CompleteJobResult(
107 compute_config_names_response(
108 dataset=self.dataset,
109 hf_token=self.app_config.common.hf_token,
110 max_number=self.app_config.config_names.max_number,
111 dataset_scripts_allow_list=self.app_config.common.dataset_scripts_allow_list,
112 )
113 )
114
[end of services/worker/src/worker/job_runners/dataset/config_names.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/services/worker/src/worker/job_runners/dataset/config_names.py b/services/worker/src/worker/job_runners/dataset/config_names.py
--- a/services/worker/src/worker/job_runners/dataset/config_names.py
+++ b/services/worker/src/worker/job_runners/dataset/config_names.py
@@ -75,18 +75,21 @@
]
except _EmptyDatasetError as err:
raise EmptyDatasetError("The dataset is empty.", cause=err) from err
- except ImportError as err:
- raise DatasetModuleNotInstalledError(
- "The dataset tries to import a module that is not installed.", cause=err
- ) from err
- except Exception as err:
- if isinstance(err, ValueError) and "trust_remote_code" in str(err):
+ except ValueError as err:
+ if "trust_remote_code" in str(err):
raise DatasetWithScriptNotSupportedError(
"The dataset viewer doesn't support this dataset because it runs "
"arbitrary python code. Please open a discussion in the discussion tab "
"if you think this is an error and tag @lhoestq and @severo."
) from err
raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
+ except ImportError as err:
+ # this should only happen if the dataset is in the allow list, which should soon disappear
+ raise DatasetModuleNotInstalledError(
+ "The dataset tries to import a module that is not installed.", cause=err
+ ) from err
+ except Exception as err:
+ raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
number_of_configs = len(config_name_items)
if number_of_configs > max_number:
| {"golden_diff": "diff --git a/services/worker/src/worker/job_runners/dataset/config_names.py b/services/worker/src/worker/job_runners/dataset/config_names.py\n--- a/services/worker/src/worker/job_runners/dataset/config_names.py\n+++ b/services/worker/src/worker/job_runners/dataset/config_names.py\n@@ -75,18 +75,21 @@\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n- except ImportError as err:\n- raise DatasetModuleNotInstalledError(\n- \"The dataset tries to import a module that is not installed.\", cause=err\n- ) from err\n- except Exception as err:\n- if isinstance(err, ValueError) and \"trust_remote_code\" in str(err):\n+ except ValueError as err:\n+ if \"trust_remote_code\" in str(err):\n raise DatasetWithScriptNotSupportedError(\n \"The dataset viewer doesn't support this dataset because it runs \"\n \"arbitrary python code. Please open a discussion in the discussion tab \"\n \"if you think this is an error and tag @lhoestq and @severo.\"\n ) from err\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n+ except ImportError as err:\n+ # this should only happen if the dataset is in the allow list, which should soon disappear\n+ raise DatasetModuleNotInstalledError(\n+ \"The dataset tries to import a module that is not installed.\", cause=err\n+ ) from err\n+ except Exception as err:\n+ raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n \n number_of_configs = len(config_name_items)\n if number_of_configs > max_number:\n", "issue": "Replace `DatasetModuleNotInstalledError` errors with `DatasetWithScriptNotSupportedError`\nWe should never have a `DatasetModuleNotInstalledError` error, because we should return a `DatasetWithScriptNotSupportedError` error before\r\n\r\nSee https://github.com/huggingface/datasets-server/issues/1067#issuecomment-1924305954\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\nfrom typing import Optional\n\nfrom datasets import get_dataset_config_names\nfrom datasets.data_files import EmptyDatasetError as _EmptyDatasetError\nfrom libcommon.exceptions import (\n ConfigNamesError,\n DatasetModuleNotInstalledError,\n DatasetWithScriptNotSupportedError,\n DatasetWithTooManyConfigsError,\n EmptyDatasetError,\n)\n\nfrom worker.dtos import CompleteJobResult, ConfigNameItem, DatasetConfigNamesResponse\nfrom worker.job_runners.dataset.dataset_job_runner import (\n DatasetJobRunnerWithDatasetsCache,\n)\nfrom worker.utils import resolve_trust_remote_code\n\n\ndef compute_config_names_response(\n dataset: str,\n max_number: int,\n dataset_scripts_allow_list: list[str],\n hf_token: Optional[str] = None,\n) -> DatasetConfigNamesResponse:\n \"\"\"\n Get the response of 'dataset-config-names' for one specific dataset on huggingface.co.\n Dataset can be gated if you pass an acceptable token.\n It is assumed that the dataset exists and can be accessed using the token.\n\n Args:\n dataset (`str`):\n A namespace (user or an organization) and a repo name separated by a `/`.\n max_number (`int`):\n The maximum number of configs for a dataset.\n dataset_scripts_allow_list (`list[str]`):\n List of datasets for which we support dataset scripts.\n Unix shell-style wildcards also work in the dataset name for namespaced datasets,\n for example `some_namespace/*` to refer to all the datasets in the `some_namespace` namespace.\n The keyword 
`{{ALL_DATASETS_WITH_NO_NAMESPACE}}` refers to all the datasets without namespace.\n hf_token (`str`, *optional*):\n An authentication token (See https://huggingface.co/settings/token)\n\n Raises:\n [~`libcommon.exceptions.EmptyDatasetError`]:\n The dataset is empty.\n [~`libcommon.exceptions.DatasetModuleNotInstalledError`]:\n The dataset tries to import a module that is not installed.\n [~`libcommon.exceptions.ConfigNamesError`]:\n If the list of configs could not be obtained using the datasets library.\n [~`libcommon.exceptions.DatasetWithScriptNotSupportedError`]:\n If the dataset has a dataset script and is not in the allow list.\n\n Returns:\n `DatasetConfigNamesResponse`: An object with the list of config names.\n \"\"\"\n logging.info(f\"get 'dateset-config-names' for {dataset=}\")\n # get the list of splits in streaming mode\n try:\n config_name_items: list[ConfigNameItem] = [\n {\"dataset\": dataset, \"config\": str(config)}\n for config in sorted(\n get_dataset_config_names(\n path=dataset,\n token=hf_token,\n trust_remote_code=resolve_trust_remote_code(\n dataset=dataset, allow_list=dataset_scripts_allow_list\n ),\n )\n )\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n except ImportError as err:\n raise DatasetModuleNotInstalledError(\n \"The dataset tries to import a module that is not installed.\", cause=err\n ) from err\n except Exception as err:\n if isinstance(err, ValueError) and \"trust_remote_code\" in str(err):\n raise DatasetWithScriptNotSupportedError(\n \"The dataset viewer doesn't support this dataset because it runs \"\n \"arbitrary python code. Please open a discussion in the discussion tab \"\n \"if you think this is an error and tag @lhoestq and @severo.\"\n ) from err\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n\n number_of_configs = len(config_name_items)\n if number_of_configs > max_number:\n raise DatasetWithTooManyConfigsError(\n f\"The maximum number of configs allowed is {max_number}, dataset has {number_of_configs} configs.\"\n )\n\n return DatasetConfigNamesResponse(config_names=config_name_items)\n\n\nclass DatasetConfigNamesJobRunner(DatasetJobRunnerWithDatasetsCache):\n @staticmethod\n def get_job_type() -> str:\n return \"dataset-config-names\"\n\n def compute(self) -> CompleteJobResult:\n return CompleteJobResult(\n compute_config_names_response(\n dataset=self.dataset,\n hf_token=self.app_config.common.hf_token,\n max_number=self.app_config.config_names.max_number,\n dataset_scripts_allow_list=self.app_config.common.dataset_scripts_allow_list,\n )\n )\n", "path": "services/worker/src/worker/job_runners/dataset/config_names.py"}]} | 1,847 | 395 |
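A quick way to see why the reordering in the diff above matters: Python tries `except` clauses top to bottom, so the `ValueError` carrying "trust_remote_code" has to be intercepted before the catch-all `Exception` branch swallows it. A minimal, self-contained sketch of that control flow (the classifier function and its labels are hypothetical, not project code):

```python
def classify(err: Exception) -> str:
    # except clauses are tested in order; the specific ValueError check
    # must come before the generic Exception handler.
    try:
        raise err
    except ValueError as e:
        if "trust_remote_code" in str(e):
            return "dataset-with-script-not-supported"
        return "config-names-error"
    except ImportError:
        return "dataset-module-not-installed"
    except Exception:
        return "config-names-error"

assert classify(ValueError("please pass trust_remote_code=True")) == "dataset-with-script-not-supported"
assert classify(ImportError("No module named 'foo'")) == "dataset-module-not-installed"
assert classify(RuntimeError("boom")) == "config-names-error"
```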
gh_patches_debug_185 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-868 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash on closing Matplotlib export
E.g. when opening the Matplotlib exporter multiple times, and closing the windows again, Python crashes with a segmentation fault.
This is caused by the Matplotlib QMainWindow listening to the closeEvent and deleting the only reference to the window before it is closed properly.
</issue>
<code>
[start of pyqtgraph/exporters/Matplotlib.py]
1 from ..Qt import QtGui, QtCore
2 from .Exporter import Exporter
3 from .. import PlotItem
4 from .. import functions as fn
5
6 __all__ = ['MatplotlibExporter']
7
8 """
9 It is helpful when using the matplotlib Exporter if your
10 .matplotlib/matplotlibrc file is configured appropriately.
11 The following are suggested for getting usable PDF output that
12 can be edited in Illustrator, etc.
13
14 backend : Qt4Agg
15 text.usetex : True # Assumes you have a findable LaTeX installation
16 interactive : False
17 font.family : sans-serif
18 font.sans-serif : 'Arial' # (make first in list)
19 mathtext.default : sf
20 figure.facecolor : white # personal preference
21 # next setting allows pdf font to be readable in Adobe Illustrator
22 pdf.fonttype : 42 # set fonts to TrueType (otherwise it will be 3
23 # and the text will be vectorized.
24 text.dvipnghack : True # primarily to clean up font appearance on Mac
25
26 The advantage is that there is less to do to get an exported file cleaned and ready for
27 publication. Fonts are not vectorized (outlined), and window colors are white.
28
29 """
30
31 class MatplotlibExporter(Exporter):
32 Name = "Matplotlib Window"
33 windows = []
34 def __init__(self, item):
35 Exporter.__init__(self, item)
36
37 def parameters(self):
38 return None
39
40 def cleanAxes(self, axl):
41 if type(axl) is not list:
42 axl = [axl]
43 for ax in axl:
44 if ax is None:
45 continue
46 for loc, spine in ax.spines.items():
47 if loc in ['left', 'bottom']:
48 pass
49 elif loc in ['right', 'top']:
50 spine.set_color('none')
51 # do not draw the spine
52 else:
53 raise ValueError('Unknown spine location: %s' % loc)
54 # turn off ticks when there is no spine
55 ax.xaxis.set_ticks_position('bottom')
56
57 def export(self, fileName=None):
58
59 if isinstance(self.item, PlotItem):
60 mpw = MatplotlibWindow()
61 MatplotlibExporter.windows.append(mpw)
62
63 stdFont = 'Arial'
64
65 fig = mpw.getFigure()
66
67 # get labels from the graphic item
68 xlabel = self.item.axes['bottom']['item'].label.toPlainText()
69 ylabel = self.item.axes['left']['item'].label.toPlainText()
70 title = self.item.titleLabel.text
71
72 ax = fig.add_subplot(111, title=title)
73 ax.clear()
74 self.cleanAxes(ax)
75 #ax.grid(True)
76 for item in self.item.curves:
77 x, y = item.getData()
78 opts = item.opts
79 pen = fn.mkPen(opts['pen'])
80 if pen.style() == QtCore.Qt.NoPen:
81 linestyle = ''
82 else:
83 linestyle = '-'
84 color = tuple([c/255. for c in fn.colorTuple(pen.color())])
85 symbol = opts['symbol']
86 if symbol == 't':
87 symbol = '^'
88 symbolPen = fn.mkPen(opts['symbolPen'])
89 symbolBrush = fn.mkBrush(opts['symbolBrush'])
90 markeredgecolor = tuple([c/255. for c in fn.colorTuple(symbolPen.color())])
91 markerfacecolor = tuple([c/255. for c in fn.colorTuple(symbolBrush.color())])
92 markersize = opts['symbolSize']
93
94 if opts['fillLevel'] is not None and opts['fillBrush'] is not None:
95 fillBrush = fn.mkBrush(opts['fillBrush'])
96 fillcolor = tuple([c/255. for c in fn.colorTuple(fillBrush.color())])
97 ax.fill_between(x=x, y1=y, y2=opts['fillLevel'], facecolor=fillcolor)
98
99 pl = ax.plot(x, y, marker=symbol, color=color, linewidth=pen.width(),
100 linestyle=linestyle, markeredgecolor=markeredgecolor, markerfacecolor=markerfacecolor,
101 markersize=markersize)
102 xr, yr = self.item.viewRange()
103 ax.set_xbound(*xr)
104 ax.set_ybound(*yr)
105 ax.set_xlabel(xlabel) # place the labels.
106 ax.set_ylabel(ylabel)
107 mpw.draw()
108 else:
109 raise Exception("Matplotlib export currently only works with plot items")
110
111 MatplotlibExporter.register()
112
113
114 class MatplotlibWindow(QtGui.QMainWindow):
115 def __init__(self):
116 from ..widgets import MatplotlibWidget
117 QtGui.QMainWindow.__init__(self)
118 self.mpl = MatplotlibWidget.MatplotlibWidget()
119 self.setCentralWidget(self.mpl)
120 self.show()
121
122 def __getattr__(self, attr):
123 return getattr(self.mpl, attr)
124
125 def closeEvent(self, ev):
126 MatplotlibExporter.windows.remove(self)
127
128
129
[end of pyqtgraph/exporters/Matplotlib.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyqtgraph/exporters/Matplotlib.py b/pyqtgraph/exporters/Matplotlib.py
--- a/pyqtgraph/exporters/Matplotlib.py
+++ b/pyqtgraph/exporters/Matplotlib.py
@@ -124,5 +124,4 @@
def closeEvent(self, ev):
MatplotlibExporter.windows.remove(self)
-
-
+ self.deleteLater()
| {"golden_diff": "diff --git a/pyqtgraph/exporters/Matplotlib.py b/pyqtgraph/exporters/Matplotlib.py\n--- a/pyqtgraph/exporters/Matplotlib.py\n+++ b/pyqtgraph/exporters/Matplotlib.py\n@@ -124,5 +124,4 @@\n \n def closeEvent(self, ev):\n MatplotlibExporter.windows.remove(self)\n-\n-\n+ self.deleteLater()\n", "issue": "Crash on closing Matplotlib export\nE.g. when opening the Matplotlib exporter multiple times, and closing the windows again, Python crashes with a segmentation fault.\r\n\r\nThis is caused by the Matplotlib QMainWindow listening to the closeEvent and deleting the only reference of the window before it is closed properly.\n", "before_files": [{"content": "from ..Qt import QtGui, QtCore\nfrom .Exporter import Exporter\nfrom .. import PlotItem\nfrom .. import functions as fn\n\n__all__ = ['MatplotlibExporter']\n\n\"\"\"\nIt is helpful when using the matplotlib Exporter if your\n.matplotlib/matplotlibrc file is configured appropriately.\nThe following are suggested for getting usable PDF output that\ncan be edited in Illustrator, etc.\n\nbackend : Qt4Agg\ntext.usetex : True # Assumes you have a findable LaTeX installation\ninteractive : False\nfont.family : sans-serif\nfont.sans-serif : 'Arial' # (make first in list)\nmathtext.default : sf\nfigure.facecolor : white # personal preference\n# next setting allows pdf font to be readable in Adobe Illustrator\npdf.fonttype : 42 # set fonts to TrueType (otherwise it will be 3\n # and the text will be vectorized.\ntext.dvipnghack : True # primarily to clean up font appearance on Mac\n\nThe advantage is that there is less to do to get an exported file cleaned and ready for\npublication. Fonts are not vectorized (outlined), and window colors are white.\n\n\"\"\"\n \nclass MatplotlibExporter(Exporter):\n Name = \"Matplotlib Window\"\n windows = []\n def __init__(self, item):\n Exporter.__init__(self, item)\n \n def parameters(self):\n return None\n\n def cleanAxes(self, axl):\n if type(axl) is not list:\n axl = [axl]\n for ax in axl:\n if ax is None:\n continue\n for loc, spine in ax.spines.items():\n if loc in ['left', 'bottom']:\n pass\n elif loc in ['right', 'top']:\n spine.set_color('none')\n # do not draw the spine\n else:\n raise ValueError('Unknown spine location: %s' % loc)\n # turn off ticks when there is no spine\n ax.xaxis.set_ticks_position('bottom')\n \n def export(self, fileName=None):\n \n if isinstance(self.item, PlotItem):\n mpw = MatplotlibWindow()\n MatplotlibExporter.windows.append(mpw)\n\n stdFont = 'Arial'\n \n fig = mpw.getFigure()\n \n # get labels from the graphic item\n xlabel = self.item.axes['bottom']['item'].label.toPlainText()\n ylabel = self.item.axes['left']['item'].label.toPlainText()\n title = self.item.titleLabel.text\n\n ax = fig.add_subplot(111, title=title)\n ax.clear()\n self.cleanAxes(ax)\n #ax.grid(True)\n for item in self.item.curves:\n x, y = item.getData()\n opts = item.opts\n pen = fn.mkPen(opts['pen'])\n if pen.style() == QtCore.Qt.NoPen:\n linestyle = ''\n else:\n linestyle = '-'\n color = tuple([c/255. for c in fn.colorTuple(pen.color())])\n symbol = opts['symbol']\n if symbol == 't':\n symbol = '^'\n symbolPen = fn.mkPen(opts['symbolPen'])\n symbolBrush = fn.mkBrush(opts['symbolBrush'])\n markeredgecolor = tuple([c/255. for c in fn.colorTuple(symbolPen.color())])\n markerfacecolor = tuple([c/255. 
for c in fn.colorTuple(symbolBrush.color())])\n markersize = opts['symbolSize']\n \n if opts['fillLevel'] is not None and opts['fillBrush'] is not None:\n fillBrush = fn.mkBrush(opts['fillBrush'])\n fillcolor = tuple([c/255. for c in fn.colorTuple(fillBrush.color())])\n ax.fill_between(x=x, y1=y, y2=opts['fillLevel'], facecolor=fillcolor)\n \n pl = ax.plot(x, y, marker=symbol, color=color, linewidth=pen.width(), \n linestyle=linestyle, markeredgecolor=markeredgecolor, markerfacecolor=markerfacecolor,\n markersize=markersize)\n xr, yr = self.item.viewRange()\n ax.set_xbound(*xr)\n ax.set_ybound(*yr)\n ax.set_xlabel(xlabel) # place the labels.\n ax.set_ylabel(ylabel)\n mpw.draw()\n else:\n raise Exception(\"Matplotlib export currently only works with plot items\")\n \nMatplotlibExporter.register() \n \n\nclass MatplotlibWindow(QtGui.QMainWindow):\n def __init__(self):\n from ..widgets import MatplotlibWidget\n QtGui.QMainWindow.__init__(self)\n self.mpl = MatplotlibWidget.MatplotlibWidget()\n self.setCentralWidget(self.mpl)\n self.show()\n \n def __getattr__(self, attr):\n return getattr(self.mpl, attr)\n \n def closeEvent(self, ev):\n MatplotlibExporter.windows.remove(self)\n\n\n", "path": "pyqtgraph/exporters/Matplotlib.py"}]} | 1,931 | 88 |
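The one-line fix above follows a general Qt rule: inside `closeEvent`, dropping the last Python reference to a window can free the underlying C++ object while Qt is still unwinding the close, so destruction should be deferred with `deleteLater()`. A minimal sketch of the pattern (assuming pyqtgraph's Qt shim is available; this is not the project's exact class):

```python
from pyqtgraph.Qt import QtGui

class TrackedWindow(QtGui.QMainWindow):
    windows = []  # module-level list keeps open windows alive

    def closeEvent(self, ev):
        TrackedWindow.windows.remove(self)  # may drop the last reference
        self.deleteLater()                  # queue deletion for the event loop instead
```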
gh_patches_debug_20683 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-133 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RBF Kernel Change Breaks Testing Code
The change to RBFKernel in 84fccd898c45c08279fb5c109e6e234f3a47588a may break something about our prediction code.
I am not totally sure what the problem is yet, but I isolated this commit as the culprit with `git bisect`, and I have a reasonable test case where results are significantly worse with the commit applied than after a revert commit.
It seems like the stability issues we encountered when making this change in the past don't come up in the unit tests, but do on some real datasets.
I can try to push my test case to a branch as well, although it relies on a UCI dataset.
@Balandat @gpleiss
</issue>
<code>
[start of gpytorch/kernels/rbf_kernel.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4 from __future__ import unicode_literals
5
6 from .kernel import Kernel
7
8
9 class RBFKernel(Kernel):
10
11 def __init__(
12 self,
13 ard_num_dims=None,
14 log_lengthscale_bounds=(-10000, 10000),
15 eps=1e-5,
16 active_dims=None,
17 ):
18 super(RBFKernel, self).__init__(
19 has_lengthscale=True,
20 ard_num_dims=ard_num_dims,
21 log_lengthscale_bounds=log_lengthscale_bounds,
22 active_dims=active_dims,
23 )
24 self.eps = eps
25
26 def forward(self, x1, x2):
27 lengthscales = self.log_lengthscale.exp() + self.eps
28 diff = (x1.unsqueeze(2) - x2.unsqueeze(1)).div_(lengthscales)
29 return diff.pow_(2).sum(-1).mul_(-0.5).exp_()
30
[end of gpytorch/kernels/rbf_kernel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gpytorch/kernels/rbf_kernel.py b/gpytorch/kernels/rbf_kernel.py
--- a/gpytorch/kernels/rbf_kernel.py
+++ b/gpytorch/kernels/rbf_kernel.py
@@ -3,6 +3,7 @@
from __future__ import print_function
from __future__ import unicode_literals
+import math
from .kernel import Kernel
@@ -12,7 +13,7 @@
self,
ard_num_dims=None,
log_lengthscale_bounds=(-10000, 10000),
- eps=1e-5,
+ eps=1e-6,
active_dims=None,
):
super(RBFKernel, self).__init__(
@@ -24,6 +25,6 @@
self.eps = eps
def forward(self, x1, x2):
- lengthscales = self.log_lengthscale.exp() + self.eps
+ lengthscales = self.log_lengthscale.exp().mul(math.sqrt(2)).clamp(self.eps, 1e5)
diff = (x1.unsqueeze(2) - x2.unsqueeze(1)).div_(lengthscales)
- return diff.pow_(2).sum(-1).mul_(-0.5).exp_()
+ return diff.pow_(2).sum(-1).mul_(-1).exp_()
| {"golden_diff": "diff --git a/gpytorch/kernels/rbf_kernel.py b/gpytorch/kernels/rbf_kernel.py\n--- a/gpytorch/kernels/rbf_kernel.py\n+++ b/gpytorch/kernels/rbf_kernel.py\n@@ -3,6 +3,7 @@\n from __future__ import print_function\n from __future__ import unicode_literals\n \n+import math\n from .kernel import Kernel\n \n \n@@ -12,7 +13,7 @@\n self,\n ard_num_dims=None,\n log_lengthscale_bounds=(-10000, 10000),\n- eps=1e-5,\n+ eps=1e-6,\n active_dims=None,\n ):\n super(RBFKernel, self).__init__(\n@@ -24,6 +25,6 @@\n self.eps = eps\n \n def forward(self, x1, x2):\n- lengthscales = self.log_lengthscale.exp() + self.eps\n+ lengthscales = self.log_lengthscale.exp().mul(math.sqrt(2)).clamp(self.eps, 1e5)\n diff = (x1.unsqueeze(2) - x2.unsqueeze(1)).div_(lengthscales)\n- return diff.pow_(2).sum(-1).mul_(-0.5).exp_()\n+ return diff.pow_(2).sum(-1).mul_(-1).exp_()\n", "issue": "RBF Kernel Change Breaks Testing Code\nThe change to RBFKernel in 84fccd898c45c08279fb5c109e6e234f3a47588a may break something about our prediction code. \r\n\r\nI am not totally sure what the problem is yet, but I isolated this as the problem with `git bisect` and have a reasonable test case where results are significantly worse with the commit in compared to after a revert commit. \r\n\r\nIt seems like the stability issues we encountered when making this change in the past don't come up in the unit tests, but do on some real datasets.\r\n\r\nI can try to push my test case to a branch as well, although it relies on a UCI dataset.\r\n\r\n@Balandat @gpleiss \n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom .kernel import Kernel\n\n\nclass RBFKernel(Kernel):\n\n def __init__(\n self,\n ard_num_dims=None,\n log_lengthscale_bounds=(-10000, 10000),\n eps=1e-5,\n active_dims=None,\n ):\n super(RBFKernel, self).__init__(\n has_lengthscale=True,\n ard_num_dims=ard_num_dims,\n log_lengthscale_bounds=log_lengthscale_bounds,\n active_dims=active_dims,\n )\n self.eps = eps\n\n def forward(self, x1, x2):\n lengthscales = self.log_lengthscale.exp() + self.eps\n diff = (x1.unsqueeze(2) - x2.unsqueeze(1)).div_(lengthscales)\n return diff.pow_(2).sum(-1).mul_(-0.5).exp_()\n", "path": "gpytorch/kernels/rbf_kernel.py"}]} | 983 | 294 |
gh_patches_debug_706 | rasdani/github-patches | git_diff | deepset-ai__haystack-3705 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bad Semaphore initialization in RequestLimiter
**Describe the bug**
RequestLimiter takes a number as a parameter and uses it to set up a Semaphore. The issue is that the environment variable indicates the number of concurrent requests allowed per worker. When the semaphore is created (https://github.com/deepset-ai/haystack/blob/6790eaf7d8be05c5674d97a75cc5783e00a66875/rest_api/rest_api/controller/utils.py#L13), this value is reduced by 1, which is clearly not what the project intended (at least going by the naming).
**Error message**
The REST API will always answer with error 503 ("The server is busy") once CONCURRENT_REQUEST_PER_WORKER - 1 requests are in flight, i.e. the effective limit is one lower than configured. When the user sets the concurrency to 1, the API can never be called at all, since the Semaphore declaration will be Semaphore(0).
**Expected behavior**
Being able to set the request limits using the env variable CONCURRENT_REQUEST_PER_WORKER
**Additional context**
**To Reproduce**
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://haystack.deepset.ai/overview/faq)?
**System:**
- OS: Ubuntu
- GPU/CPU: i7/ Nvidia
- Haystack version (commit or version number): 1.9
- DocumentStore:
- Reader:
- Retriever:
</issue>
<code>
[start of rest_api/rest_api/controller/utils.py]
1 from typing import Type, NewType
2
3 import inspect
4 from contextlib import contextmanager
5 from threading import Semaphore
6
7 from fastapi import Form, HTTPException
8 from pydantic import BaseModel
9
10
11 class RequestLimiter:
12 def __init__(self, limit):
13 self.semaphore = Semaphore(limit - 1)
14
15 @contextmanager
16 def run(self):
17 acquired = self.semaphore.acquire(blocking=False)
18 if not acquired:
19 raise HTTPException(status_code=503, detail="The server is busy processing requests.")
20 try:
21 yield acquired
22 finally:
23 self.semaphore.release()
24
25
26 StringId = NewType("StringId", str)
27
28
29 def as_form(cls: Type[BaseModel]):
30 """
31 Adds an as_form class method to decorated models. The as_form class method
32 can be used with FastAPI endpoints
33 """
34 new_params = [
35 inspect.Parameter(
36 field.alias,
37 inspect.Parameter.POSITIONAL_ONLY,
38 default=(Form(field.default) if not field.required else Form(...)),
39 )
40 for field in cls.__fields__.values()
41 ]
42
43 async def _as_form(**data):
44 return cls(**data)
45
46 sig = inspect.signature(_as_form)
47 sig = sig.replace(parameters=new_params)
48 _as_form.__signature__ = sig # type: ignore
49 setattr(cls, "as_form", _as_form)
50 return cls
51
[end of rest_api/rest_api/controller/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rest_api/rest_api/controller/utils.py b/rest_api/rest_api/controller/utils.py
--- a/rest_api/rest_api/controller/utils.py
+++ b/rest_api/rest_api/controller/utils.py
@@ -10,7 +10,7 @@
class RequestLimiter:
def __init__(self, limit):
- self.semaphore = Semaphore(limit - 1)
+ self.semaphore = Semaphore(limit)
@contextmanager
def run(self):
| {"golden_diff": "diff --git a/rest_api/rest_api/controller/utils.py b/rest_api/rest_api/controller/utils.py\n--- a/rest_api/rest_api/controller/utils.py\n+++ b/rest_api/rest_api/controller/utils.py\n@@ -10,7 +10,7 @@\n \n class RequestLimiter:\n def __init__(self, limit):\n- self.semaphore = Semaphore(limit - 1)\n+ self.semaphore = Semaphore(limit)\n \n @contextmanager\n def run(self):\n", "issue": "Bad Semaphore initialization in RequestLimiter\n**Describe the bug**\r\nRequestLimiter takes a number as parameter and use it to set up a Semaphore. The issue is that the environment variable indicates the concurrent allowed requests per worker. When the semaphore is created (https://github.com/deepset-ai/haystack/blob/6790eaf7d8be05c5674d97a75cc5783e00a66875/rest_api/rest_api/controller/utils.py#L13), this value is set down by 1. This is clearly not what the project tried to achieve (at least per naming). \r\n\r\n**Error message**\r\nREST API will always return it's busy, error 503 when CONCURRENT_REQUEST_PER_WORKER is equal to CONCURRENT_REQUEST_PER_WORKER -1. When user set the concurrency to 1, it will never be able to call the API, since the Semaphore declaration will be Semaphore(0)\r\n\r\n**Expected behavior**\r\nBeing able to set the request limits using the env variable CONCURRENT_REQUEST_PER_WORKER\r\n\r\n**Additional context**\r\n\r\n\r\n**To Reproduce**\r\n\r\n\r\n**FAQ Check**\r\n- [x] Have you had a look at [our new FAQ page](https://haystack.deepset.ai/overview/faq)?\r\n\r\n**System:**\r\n - OS: Ubuntu\r\n - GPU/CPU: i7/ Nvidia\r\n - Haystack version (commit or version number): 1.9\r\n - DocumentStore:\r\n - Reader:\r\n - Retriever:\r\n\n", "before_files": [{"content": "from typing import Type, NewType\n\nimport inspect\nfrom contextlib import contextmanager\nfrom threading import Semaphore\n\nfrom fastapi import Form, HTTPException\nfrom pydantic import BaseModel\n\n\nclass RequestLimiter:\n def __init__(self, limit):\n self.semaphore = Semaphore(limit - 1)\n\n @contextmanager\n def run(self):\n acquired = self.semaphore.acquire(blocking=False)\n if not acquired:\n raise HTTPException(status_code=503, detail=\"The server is busy processing requests.\")\n try:\n yield acquired\n finally:\n self.semaphore.release()\n\n\nStringId = NewType(\"StringId\", str)\n\n\ndef as_form(cls: Type[BaseModel]):\n \"\"\"\n Adds an as_form class method to decorated models. The as_form class method\n can be used with FastAPI endpoints\n \"\"\"\n new_params = [\n inspect.Parameter(\n field.alias,\n inspect.Parameter.POSITIONAL_ONLY,\n default=(Form(field.default) if not field.required else Form(...)),\n )\n for field in cls.__fields__.values()\n ]\n\n async def _as_form(**data):\n return cls(**data)\n\n sig = inspect.signature(_as_form)\n sig = sig.replace(parameters=new_params)\n _as_form.__signature__ = sig # type: ignore\n setattr(cls, \"as_form\", _as_form)\n return cls\n", "path": "rest_api/rest_api/controller/utils.py"}]} | 1,254 | 100 |
gh_patches_debug_30321 | rasdani/github-patches | git_diff | pwndbg__pwndbg-433 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug of function get_load_segment_info in readelf.py
When I get readelf output like the one below:
```
readelf --program-headers /bin/ls | grep "LOAD" -A 10
LOAD 0x0000000000000000 0x0000000000400000 0x0000000000400000
0x000000000001da64 0x000000000001da64 R E 200000
```
the function crashed at line 65
```
65 fsize, msize, read, write, execute, align = re_secnd.match(line).groups()
```
the reason is the regex pattern:
```python
re_secnd = re.compile(r"\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\s+(**0x**[0-9A-Fa-f]+)"
```
I mean, "0x" should not a absolute prefix of align number
</issue>
<code>
[start of pwndbg/wrappers/readelf.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6 from __future__ import unicode_literals
7
8 import re
9
10 import pwndbg.wrappers
11
12 cmd_name = "readelf"
13
14 @pwndbg.wrappers.OnlyWithCommand(cmd_name)
15 def get_jmpslots():
16 local_path = pwndbg.file.get_file(pwndbg.proc.exe)
17 cmd = [get_jmpslots.cmd_path, "--relocs", local_path]
18 readelf_out = pwndbg.wrappers.call_cmd(cmd)
19
20 return filter(_extract_jumps, readelf_out.splitlines())
21
22 def _extract_jumps(line):
23 '''
24 Checks for records in `readelf --relocs <binary>` which has type e.g. `R_X86_64_JUMP_SLO`
25 NOTE: Because of that we DO NOT display entries that are not writeable (due to FULL RELRO)
26 as they have `R_X86_64_GLOB_DAT` type.
27
28 It might be good to display them seperately in the future.
29 '''
30 try:
31 if "JUMP" in line.split()[2]:
32 return line
33 else:
34 return False
35 except IndexError:
36 return False
37
38 @pwndbg.wrappers.OnlyWithCommand(cmd_name)
39 def get_load_segment_info():
40 '''
41 Looks for LOAD sections by parsing the output of `readelf --program-headers <binary>`
42 '''
43 local_path = pwndbg.file.get_file(pwndbg.proc.exe)
44 cmd = [get_jmpslots.cmd_path, "--program-headers", local_path]
45 readelf_out = pwndbg.wrappers.call_cmd(cmd)
46
47 segments = []
48 load_found = False
49
50 # Output from readelf is
51 # Type Offset VirtAddr PhysAddr
52 # FileSiz MemSiz Flags Align
53 # LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000
54 # 0x0000000000000830 0x0000000000000830 R E 0x200000
55 # Account for this using two regular expressions
56 re_first = re.compile(r"\s+LOAD\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+)")
57 re_secnd = re.compile(r"\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\s+(0x[0-9A-Fa-f]+)")
58 hex2int = lambda x: int(x, 16)
59
60 for line in readelf_out.splitlines():
61 if "LOAD" in line:
62 load_found = True
63 offset, vaddr, paddr = map(hex2int, re_first.match(line).groups())
64 elif load_found:
65 fsize, msize, read, write, execute, align = re_secnd.match(line).groups()
66 fsize, msize, align = map(hex2int, (fsize, msize, align))
67 read = read == "R"
68 write = write == "W"
69 execute = execute == "E"
70
71 segments.append({"Offset": offset,
72 "VirtAddr": vaddr,
73 "PhysAddr": paddr,
74 "FileSiz": fsize,
75 "MemSiz": msize,
76 "FlagsRead": read,
77 "FlagsWrite": write,
78 "FlagsExecute": execute})
79
80 load_found = False
81
82 return segments
83
[end of pwndbg/wrappers/readelf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/wrappers/readelf.py b/pwndbg/wrappers/readelf.py
--- a/pwndbg/wrappers/readelf.py
+++ b/pwndbg/wrappers/readelf.py
@@ -52,9 +52,15 @@
# FileSiz MemSiz Flags Align
# LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000
# 0x0000000000000830 0x0000000000000830 R E 0x200000
+ #
+ ############################################################################
+ #
+ # NOTE: On some readelf versions the Align column might not be prefixed with 0x
+ # See https://github.com/pwndbg/pwndbg/issues/427
+ #
# Account for this using two regular expressions
re_first = re.compile(r"\s+LOAD\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+)")
- re_secnd = re.compile(r"\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\s+(0x[0-9A-Fa-f]+)")
+ re_secnd = re.compile(r"\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\s+(0x)?([0-9A-Fa-f]+)")
hex2int = lambda x: int(x, 16)
for line in readelf_out.splitlines():
@@ -62,8 +68,8 @@
load_found = True
offset, vaddr, paddr = map(hex2int, re_first.match(line).groups())
elif load_found:
- fsize, msize, read, write, execute, align = re_secnd.match(line).groups()
- fsize, msize, align = map(hex2int, (fsize, msize, align))
+ fsize, msize, read, write, execute, _optional_prefix, align = re_secnd.match(line).groups()
+ fsize, msize, align = map(hex2int, (fsize, msize, '0x' + align))
read = read == "R"
write = write == "W"
execute = execute == "E"
| {"golden_diff": "diff --git a/pwndbg/wrappers/readelf.py b/pwndbg/wrappers/readelf.py\n--- a/pwndbg/wrappers/readelf.py\n+++ b/pwndbg/wrappers/readelf.py\n@@ -52,9 +52,15 @@\n # FileSiz MemSiz Flags Align\n # LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000\n # 0x0000000000000830 0x0000000000000830 R E 0x200000\n+ #\n+ ############################################################################\n+ #\n+ # NOTE: On some readelf versions the Align column might not be prefixed with 0x\n+ # See https://github.com/pwndbg/pwndbg/issues/427\n+ #\n # Account for this using two regular expressions\n re_first = re.compile(r\"\\s+LOAD\\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+)\")\n- re_secnd = re.compile(r\"\\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\\s+(0x[0-9A-Fa-f]+)\")\n+ re_secnd = re.compile(r\"\\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\\s+(0x)?([0-9A-Fa-f]+)\")\n hex2int = lambda x: int(x, 16)\n \n for line in readelf_out.splitlines():\n@@ -62,8 +68,8 @@\n load_found = True\n offset, vaddr, paddr = map(hex2int, re_first.match(line).groups())\n elif load_found:\n- fsize, msize, read, write, execute, align = re_secnd.match(line).groups()\n- fsize, msize, align = map(hex2int, (fsize, msize, align))\n+ fsize, msize, read, write, execute, _optional_prefix, align = re_secnd.match(line).groups()\n+ fsize, msize, align = map(hex2int, (fsize, msize, '0x' + align))\n read = read == \"R\"\n write = write == \"W\"\n execute = execute == \"E\"\n", "issue": "bug of function get_load_segment_info in readelf.py\nwhen I got readelf result like below\r\n```\r\nreadelf --program-headers /bin/ls | grep \"LOAD\" -A 10\r\n LOAD 0x0000000000000000 0x0000000000400000 0x0000000000400000\r\n 0x000000000001da64 0x000000000001da64 R E 200000\r\n```\r\nthe function crashed at line 65\r\n```\r\n65 fsize, msize, read, write, execute, align = re_secnd.match(line).groups()\r\n```\r\nthe reason is in the regex format\r\n```python\r\nre_secnd = re.compile(r\"\\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\\s+(**0x**[0-9A-Fa-f]+)\"\r\n```\r\nI mean, \"0x\" should not a absolute prefix of align number\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport re\n\nimport pwndbg.wrappers\n\ncmd_name = \"readelf\"\n\[email protected](cmd_name)\ndef get_jmpslots():\n local_path = pwndbg.file.get_file(pwndbg.proc.exe)\n cmd = [get_jmpslots.cmd_path, \"--relocs\", local_path]\n readelf_out = pwndbg.wrappers.call_cmd(cmd)\n\n return filter(_extract_jumps, readelf_out.splitlines())\n\ndef _extract_jumps(line):\n '''\n Checks for records in `readelf --relocs <binary>` which has type e.g. 
`R_X86_64_JUMP_SLO`\n NOTE: Because of that we DO NOT display entries that are not writeable (due to FULL RELRO)\n as they have `R_X86_64_GLOB_DAT` type.\n\n It might be good to display them seperately in the future.\n '''\n try:\n if \"JUMP\" in line.split()[2]:\n return line\n else:\n return False\n except IndexError:\n return False\n\[email protected](cmd_name)\ndef get_load_segment_info():\n '''\n Looks for LOAD sections by parsing the output of `readelf --program-headers <binary>`\n '''\n local_path = pwndbg.file.get_file(pwndbg.proc.exe)\n cmd = [get_jmpslots.cmd_path, \"--program-headers\", local_path]\n readelf_out = pwndbg.wrappers.call_cmd(cmd)\n\n segments = []\n load_found = False\n\n # Output from readelf is \n # Type Offset VirtAddr PhysAddr\n # FileSiz MemSiz Flags Align\n # LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000\n # 0x0000000000000830 0x0000000000000830 R E 0x200000\n # Account for this using two regular expressions\n re_first = re.compile(r\"\\s+LOAD\\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+)\")\n re_secnd = re.compile(r\"\\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\\s+(0x[0-9A-Fa-f]+)\")\n hex2int = lambda x: int(x, 16)\n\n for line in readelf_out.splitlines():\n if \"LOAD\" in line:\n load_found = True\n offset, vaddr, paddr = map(hex2int, re_first.match(line).groups())\n elif load_found:\n fsize, msize, read, write, execute, align = re_secnd.match(line).groups()\n fsize, msize, align = map(hex2int, (fsize, msize, align))\n read = read == \"R\"\n write = write == \"W\"\n execute = execute == \"E\"\n\n segments.append({\"Offset\": offset,\n \"VirtAddr\": vaddr,\n \"PhysAddr\": paddr,\n \"FileSiz\": fsize,\n \"MemSiz\": msize,\n \"FlagsRead\": read,\n \"FlagsWrite\": write,\n \"FlagsExecute\": execute})\n\n load_found = False\n\n return segments\n", "path": "pwndbg/wrappers/readelf.py"}]} | 1,855 | 626 |
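The patched pattern can be checked against both readelf output variants from the issue; `(0x)?` makes the prefix optional and the digits are re-prefixed before parsing. A standalone check (the whitespace in the sample lines is illustrative):

```python
import re

re_secnd = re.compile(
    r"\s+(0x[0-9A-Fa-f]+) (0x[0-9A-Fa-f]+) (.)(.)(.)\s+(0x)?([0-9A-Fa-f]+)")

with_prefix = "                 0x0000000000000830 0x0000000000000830 R E    0x200000"
without_prefix = "                 0x000000000001da64 0x000000000001da64 R E    200000"

for line in (with_prefix, without_prefix):
    *_, align = re_secnd.match(line).groups()
    assert int("0x" + align, 16) == 0x200000
```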
gh_patches_debug_17862 | rasdani/github-patches | git_diff | kivy__kivy-2700 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SDL2 - crash when AsyncImage loads a gif?
relevant log:
Traceback (most recent call last):
File "/home/chozabu/git/KivEntEd/main.py", line 1289, in <module>
KivEntEd().run()
File "/usr/local/lib/python2.7/dist-packages/kivy/app.py", line 825, in run
runTouchApp()
File "/usr/local/lib/python2.7/dist-packages/kivy/base.py", line 484, in runTouchApp
EventLoop.window.mainloop()
File "/usr/local/lib/python2.7/dist-packages/kivy/core/window/window_pygame.py", line 364, in mainloop
self._mainloop()
File "/usr/local/lib/python2.7/dist-packages/kivy/core/window/window_pygame.py", line 268, in _mainloop
EventLoop.idle()
File "/usr/local/lib/python2.7/dist-packages/kivy/base.py", line 324, in idle
Clock.tick()
File "/usr/local/lib/python2.7/dist-packages/kivy/clock.py", line 482, in tick
self._process_events()
File "/usr/local/lib/python2.7/dist-packages/kivy/clock.py", line 614, in _process_events
event.tick(self._last_tick, remove)
File "/usr/local/lib/python2.7/dist-packages/kivy/clock.py", line 373, in tick
ret = callback(self._dt)
File "/home/chozabu/git/KivEntEd/ui_elements.py", line 121, in initUI
self.screenShot.source = serverURL+"/downloadSS?fullname="+self.info['filename']+".png"
File "kivy/properties.pyx", line 377, in kivy.properties.Property.__set__ (kivy/properties.c:4346)
File "kivy/properties.pyx", line 409, in kivy.properties.Property.set (kivy/properties.c:4861)
File "kivy/properties.pyx", line 460, in kivy.properties.Property.dispatch (kivy/properties.c:5437)
File "kivy/_event.pyx", line 1046, in kivy._event.EventObservers.dispatch (kivy/_event.c:10980)
File "/usr/local/lib/python2.7/dist-packages/kivy/uix/image.py", line 327, in _load_source
anim_delay=self.anim_delay)
File "/usr/local/lib/python2.7/dist-packages/kivy/loader.py", line 432, in image
client = ProxyImage(self.loading_image,
File "/usr/local/lib/python2.7/dist-packages/kivy/loader.py", line 163, in _get_loading_image
self._loading_image = ImageLoader.load(filename=loading_png_fn)
File "/usr/local/lib/python2.7/dist-packages/kivy/core/image/__init__.py", line 385, in load
im = loader(filename, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/kivy/core/image/**init**.py", line 164, in **init**
self._data = self.load(filename)
File "/usr/local/lib/python2.7/dist-packages/kivy/core/image/img_sdl2.py", line 34, in load
raise Exception('SDL2: Unable to load image')
Exception: SDL2: Unable to load image
</issue>
<code>
[start of kivy/core/image/img_sdl2.py]
1 '''
2 SDL2 image loader
3 =================
4 '''
5
6 __all__ = ('ImageLoaderSDL2', )
7
8 from kivy.compat import PY2
9 from kivy.logger import Logger
10 from kivy.core.image import ImageLoaderBase, ImageData, ImageLoader
11 from kivy.core.image import _img_sdl2
12
13
14 class ImageLoaderSDL2(ImageLoaderBase):
15 '''Image loader based on the PIL library'''
16
17 def _ensure_ext(self):
18 _img_sdl2.init()
19
20 @staticmethod
21 def extensions():
22 '''Return accepted extensions for this loader'''
23 return ('bmp', 'gif', 'jpg', 'jpeg', 'lbm', 'pcx', 'png', 'pnm', 'tga', 'tiff',
24 'webp', 'xcf', 'xpm', 'xv')
25
26 @staticmethod
27 def can_save():
28 return True
29
30 def load(self, filename):
31 info = _img_sdl2.load(filename)
32 if not info:
33 Logger.warning('Image: Unable to load image <%s>' % filename)
34 raise Exception('SDL2: Unable to load image')
35
36 w, h, fmt, pixels, rowlength = info
37
38 # update internals
39 self.filename = filename
40 return [ImageData(
41 w, h, fmt, pixels, source=filename,
42 rowlength=rowlength)]
43
44 @staticmethod
45 def save(filename, width, height, fmt, pixels, flipped):
46 # TODO implement the save for sdl2
47 #surface = SDL2.image.fromstring(
48 # pixels, (width, height), fmt.upper(), False)
49 #SDL2.image.save(surface, filename)
50 _img_sdl2.save(filename, width, height, fmt, pixels, flipped)
51 return True
52
53
54 # register
55 ImageLoader.register(ImageLoaderSDL2)
56
[end of kivy/core/image/img_sdl2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kivy/core/image/img_sdl2.py b/kivy/core/image/img_sdl2.py
--- a/kivy/core/image/img_sdl2.py
+++ b/kivy/core/image/img_sdl2.py
@@ -20,7 +20,7 @@
@staticmethod
def extensions():
'''Return accepted extensions for this loader'''
- return ('bmp', 'gif', 'jpg', 'jpeg', 'lbm', 'pcx', 'png', 'pnm', 'tga', 'tiff',
+ return ('bmp', 'jpg', 'jpeg', 'lbm', 'pcx', 'png', 'pnm', 'tga', 'tiff',
'webp', 'xcf', 'xpm', 'xv')
@staticmethod
@@ -43,10 +43,6 @@
@staticmethod
def save(filename, width, height, fmt, pixels, flipped):
- # TODO implement the save for sdl2
- #surface = SDL2.image.fromstring(
- # pixels, (width, height), fmt.upper(), False)
- #SDL2.image.save(surface, filename)
_img_sdl2.save(filename, width, height, fmt, pixels, flipped)
return True
| {"golden_diff": "diff --git a/kivy/core/image/img_sdl2.py b/kivy/core/image/img_sdl2.py\n--- a/kivy/core/image/img_sdl2.py\n+++ b/kivy/core/image/img_sdl2.py\n@@ -20,7 +20,7 @@\n @staticmethod\n def extensions():\n '''Return accepted extensions for this loader'''\n- return ('bmp', 'gif', 'jpg', 'jpeg', 'lbm', 'pcx', 'png', 'pnm', 'tga', 'tiff',\n+ return ('bmp', 'jpg', 'jpeg', 'lbm', 'pcx', 'png', 'pnm', 'tga', 'tiff',\n 'webp', 'xcf', 'xpm', 'xv')\n \n @staticmethod\n@@ -43,10 +43,6 @@\n \n @staticmethod\n def save(filename, width, height, fmt, pixels, flipped):\n- # TODO implement the save for sdl2\n- #surface = SDL2.image.fromstring(\n- # pixels, (width, height), fmt.upper(), False)\n- #SDL2.image.save(surface, filename)\n _img_sdl2.save(filename, width, height, fmt, pixels, flipped)\n return True\n", "issue": "SDL2 - crash on loading asyncimage loading gif?\nrelevant log:\n\nTraceback (most recent call last):\n File \"/home/chozabu/git/KivEntEd/main.py\", line 1289, in <module>\n KivEntEd().run()\n File \"/usr/local/lib/python2.7/dist-packages/kivy/app.py\", line 825, in run\n runTouchApp()\n File \"/usr/local/lib/python2.7/dist-packages/kivy/base.py\", line 484, in runTouchApp\n EventLoop.window.mainloop()\n File \"/usr/local/lib/python2.7/dist-packages/kivy/core/window/window_pygame.py\", line 364, in mainloop\n self._mainloop()\n File \"/usr/local/lib/python2.7/dist-packages/kivy/core/window/window_pygame.py\", line 268, in _mainloop\n EventLoop.idle()\n File \"/usr/local/lib/python2.7/dist-packages/kivy/base.py\", line 324, in idle\n Clock.tick()\n File \"/usr/local/lib/python2.7/dist-packages/kivy/clock.py\", line 482, in tick\n self._process_events()\n File \"/usr/local/lib/python2.7/dist-packages/kivy/clock.py\", line 614, in _process_events\n event.tick(self._last_tick, remove)\n File \"/usr/local/lib/python2.7/dist-packages/kivy/clock.py\", line 373, in tick\n ret = callback(self._dt)\n File \"/home/chozabu/git/KivEntEd/ui_elements.py\", line 121, in initUI\n self.screenShot.source = serverURL+\"/downloadSS?fullname=\"+self.info['filename']+\".png\"\n File \"kivy/properties.pyx\", line 377, in kivy.properties.Property.__set__ (kivy/properties.c:4346)\n File \"kivy/properties.pyx\", line 409, in kivy.properties.Property.set (kivy/properties.c:4861)\n File \"kivy/properties.pyx\", line 460, in kivy.properties.Property.dispatch (kivy/properties.c:5437)\n File \"kivy/_event.pyx\", line 1046, in kivy._event.EventObservers.dispatch (kivy/_event.c:10980)\n File \"/usr/local/lib/python2.7/dist-packages/kivy/uix/image.py\", line 327, in _load_source\n anim_delay=self.anim_delay)\n File \"/usr/local/lib/python2.7/dist-packages/kivy/loader.py\", line 432, in image\n client = ProxyImage(self.loading_image,\n File \"/usr/local/lib/python2.7/dist-packages/kivy/loader.py\", line 163, in _get_loading_image\n self._loading_image = ImageLoader.load(filename=loading_png_fn)\n File \"/usr/local/lib/python2.7/dist-packages/kivy/core/image/__init__.py\", line 385, in load\n im = loader(filename, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/kivy/core/image/**init**.py\", line 164, in **init**\n self._data = self.load(filename)\n File \"/usr/local/lib/python2.7/dist-packages/kivy/core/image/img_sdl2.py\", line 34, in load\n raise Exception('SDL2: Unable to load image')\nException: SDL2: Unable to load image\n\n", "before_files": [{"content": "'''\nSDL2 image loader\n=================\n'''\n\n__all__ = ('ImageLoaderSDL2', )\n\nfrom kivy.compat import PY2\nfrom kivy.logger import 
Logger\nfrom kivy.core.image import ImageLoaderBase, ImageData, ImageLoader\nfrom kivy.core.image import _img_sdl2\n\n\nclass ImageLoaderSDL2(ImageLoaderBase):\n '''Image loader based on the PIL library'''\n\n def _ensure_ext(self):\n _img_sdl2.init()\n\n @staticmethod\n def extensions():\n '''Return accepted extensions for this loader'''\n return ('bmp', 'gif', 'jpg', 'jpeg', 'lbm', 'pcx', 'png', 'pnm', 'tga', 'tiff',\n 'webp', 'xcf', 'xpm', 'xv')\n\n @staticmethod\n def can_save():\n return True\n\n def load(self, filename):\n info = _img_sdl2.load(filename)\n if not info:\n Logger.warning('Image: Unable to load image <%s>' % filename)\n raise Exception('SDL2: Unable to load image')\n\n w, h, fmt, pixels, rowlength = info\n\n # update internals\n self.filename = filename\n return [ImageData(\n w, h, fmt, pixels, source=filename,\n rowlength=rowlength)]\n\n @staticmethod\n def save(filename, width, height, fmt, pixels, flipped):\n # TODO implement the save for sdl2\n #surface = SDL2.image.fromstring(\n # pixels, (width, height), fmt.upper(), False)\n #SDL2.image.save(surface, filename)\n _img_sdl2.save(filename, width, height, fmt, pixels, flipped)\n return True\n\n\n# register\nImageLoader.register(ImageLoaderSDL2)\n", "path": "kivy/core/image/img_sdl2.py"}]} | 1,813 | 277 |
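The fix works because kivy's ImageLoader dispatches on file extension, so removing 'gif' from the SDL2 loader's list lets gifs fall through to a loader that can actually decode them. A simplified sketch of that dispatch idea (stub classes, not kivy's real implementation):

```python
def pick_loader(filename, loaders):
    ext = filename.rsplit('.', 1)[-1].lower()
    for loader in loaders:
        if ext in loader.extensions():
            return loader
    raise ValueError("no loader registered for %r" % ext)

class SDL2Stub:
    @staticmethod
    def extensions():
        return ('bmp', 'jpg', 'jpeg', 'png')  # 'gif' removed, as in the fix

class PILStub:
    @staticmethod
    def extensions():
        return ('gif', 'png')

assert pick_loader("anim.gif", [SDL2Stub, PILStub]) is PILStub
```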
gh_patches_debug_54184 | rasdani/github-patches | git_diff | pyro-ppl__pyro-2846 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] Runtime error during SVI inference when using poutine.do()
### Issue Description
Setting: a simple model with 2 latent Gaussians z1 and z2, giving rise to x ~ N( z1+z2, I).
In this setting p(z2 | x, z1) should be the same as p(z2 | x, do(z1)).
I wanted to check whether the current Pyro interface reflects this and it seems it does not.
My initial thought is that there is a difference in how .do() and .condition() broadcast the constants across the plate context.
### Environment
- OS and python version: MacOS 10.14.6, Python: 3.8.6
- PyTorch version: 1.9.0.dev20210502 (nightly version)
- Pyro version: 1.6.0.
### Code Snippet
Replication code:
https://pastebin.com/Ki2PYX7z
</issue>
<code>
[start of pyro/poutine/do_messenger.py]
1 # Copyright (c) 2017-2019 Uber Technologies, Inc.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import numbers
5 import warnings
6
7 import torch
8
9 from .messenger import Messenger
10 from .runtime import apply_stack
11
12
13 class DoMessenger(Messenger):
14 """
15 Given a stochastic function with some sample statements
16 and a dictionary of values at names,
17 set the return values of those sites equal to the values
18 as if they were hard-coded to those values
19 and introduce fresh sample sites with the same names
20 whose values do not propagate.
21
22 Composes freely with :func:`~pyro.poutine.handlers.condition`
23 to represent counterfactual distributions over potential outcomes.
24 See Single World Intervention Graphs [1] for additional details and theory.
25
26 Consider the following Pyro program:
27
28 >>> def model(x):
29 ... s = pyro.param("s", torch.tensor(0.5))
30 ... z = pyro.sample("z", dist.Normal(x, s))
31 ... return z ** 2
32
33 To intervene with a value for site `z`, we can write
34
35 >>> intervened_model = pyro.poutine.do(model, data={"z": torch.tensor(1.)})
36
37 This is equivalent to replacing `z = pyro.sample("z", ...)` with
38 `z = torch.tensor(1.)`
39 and introducing a fresh sample site pyro.sample("z", ...) whose value is not used elsewhere.
40
41 References
42
43 [1] `Single World Intervention Graphs: A Primer`,
44 Thomas Richardson, James Robins
45
46 :param fn: a stochastic function (callable containing Pyro primitive calls)
47 :param data: a ``dict`` mapping sample site names to interventions
48 :returns: stochastic function decorated with a :class:`~pyro.poutine.do_messenger.DoMessenger`
49 """
50 def __init__(self, data):
51 super().__init__()
52 self.data = data
53 self._intervener_id = str(id(self))
54
55 def _pyro_sample(self, msg):
56 if msg.get('_intervener_id', None) != self._intervener_id and \
57 self.data.get(msg['name']) is not None:
58
59 if msg.get('_intervener_id', None) is not None:
60 warnings.warn(
61 "Attempting to intervene on variable {} multiple times,"
62 "this is almost certainly incorrect behavior".format(msg['name']),
63 RuntimeWarning)
64
65 msg['_intervener_id'] = self._intervener_id
66
67 # split node, avoid reapplying self recursively to new node
68 new_msg = msg.copy()
69 apply_stack(new_msg)
70
71 # apply intervention
72 intervention = self.data[msg['name']]
73 msg['name'] = msg['name'] + "__CF" # mangle old name
74
75 if isinstance(intervention, (numbers.Number, torch.Tensor)):
76 msg['value'] = intervention
77 msg['is_observed'] = True
78 msg['stop'] = True
79 else:
80 raise NotImplementedError(
81 "Interventions of type {} not implemented (yet)".format(type(intervention)))
82
83 return None
84
[end of pyro/poutine/do_messenger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyro/poutine/do_messenger.py b/pyro/poutine/do_messenger.py
--- a/pyro/poutine/do_messenger.py
+++ b/pyro/poutine/do_messenger.py
@@ -66,6 +66,7 @@
# split node, avoid reapplying self recursively to new node
new_msg = msg.copy()
+ new_msg["cond_indep_stack"] = () # avoid entering plates twice
apply_stack(new_msg)
# apply intervention
| {"golden_diff": "diff --git a/pyro/poutine/do_messenger.py b/pyro/poutine/do_messenger.py\n--- a/pyro/poutine/do_messenger.py\n+++ b/pyro/poutine/do_messenger.py\n@@ -66,6 +66,7 @@\n \n # split node, avoid reapplying self recursively to new node\n new_msg = msg.copy()\n+ new_msg[\"cond_indep_stack\"] = () # avoid entering plates twice\n apply_stack(new_msg)\n \n # apply intervention\n", "issue": "[bug] Runtime error during SVI inference when using poutine.do()\n### Issue Description\r\n\r\nSetting: a simple model with 2 latent Gaussians z1 and z2, giving rise to x ~ N( z1+z2, I).\r\n\r\nIn this setting p(z2 | x, z1) should be the same as p(z2 | x, do(z1)). \r\n\r\nI wanted to check whether the current Pyro interface reflects this and it seems it does not.\r\n\r\nMy initial thought is that there is a difference in how .do() and .condition() broadcast the constants across the plate context.\r\n\r\n### Environment\r\n\r\n - OS and python version: MacOS 10.14.6, Python: 3.8.6\r\n - PyTorch version: 1.9.0.dev20210502 (nightly version)\r\n - Pyro version: 1.6.0.\r\n\r\n### Code Snippet\r\n\r\nReplication code:\r\nhttps://pastebin.com/Ki2PYX7z\r\n\n", "before_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport numbers\nimport warnings\n\nimport torch\n\nfrom .messenger import Messenger\nfrom .runtime import apply_stack\n\n\nclass DoMessenger(Messenger):\n \"\"\"\n Given a stochastic function with some sample statements\n and a dictionary of values at names,\n set the return values of those sites equal to the values\n as if they were hard-coded to those values\n and introduce fresh sample sites with the same names\n whose values do not propagate.\n\n Composes freely with :func:`~pyro.poutine.handlers.condition`\n to represent counterfactual distributions over potential outcomes.\n See Single World Intervention Graphs [1] for additional details and theory.\n\n Consider the following Pyro program:\n\n >>> def model(x):\n ... s = pyro.param(\"s\", torch.tensor(0.5))\n ... z = pyro.sample(\"z\", dist.Normal(x, s))\n ... return z ** 2\n\n To intervene with a value for site `z`, we can write\n\n >>> intervened_model = pyro.poutine.do(model, data={\"z\": torch.tensor(1.)})\n\n This is equivalent to replacing `z = pyro.sample(\"z\", ...)` with\n `z = torch.tensor(1.)`\n and introducing a fresh sample site pyro.sample(\"z\", ...) 
whose value is not used elsewhere.\n\n References\n\n [1] `Single World Intervention Graphs: A Primer`,\n Thomas Richardson, James Robins\n\n :param fn: a stochastic function (callable containing Pyro primitive calls)\n :param data: a ``dict`` mapping sample site names to interventions\n :returns: stochastic function decorated with a :class:`~pyro.poutine.do_messenger.DoMessenger`\n \"\"\"\n def __init__(self, data):\n super().__init__()\n self.data = data\n self._intervener_id = str(id(self))\n\n def _pyro_sample(self, msg):\n if msg.get('_intervener_id', None) != self._intervener_id and \\\n self.data.get(msg['name']) is not None:\n\n if msg.get('_intervener_id', None) is not None:\n warnings.warn(\n \"Attempting to intervene on variable {} multiple times,\"\n \"this is almost certainly incorrect behavior\".format(msg['name']),\n RuntimeWarning)\n\n msg['_intervener_id'] = self._intervener_id\n\n # split node, avoid reapplying self recursively to new node\n new_msg = msg.copy()\n apply_stack(new_msg)\n\n # apply intervention\n intervention = self.data[msg['name']]\n msg['name'] = msg['name'] + \"__CF\" # mangle old name\n\n if isinstance(intervention, (numbers.Number, torch.Tensor)):\n msg['value'] = intervention\n msg['is_observed'] = True\n msg['stop'] = True\n else:\n raise NotImplementedError(\n \"Interventions of type {} not implemented (yet)\".format(type(intervention)))\n\n return None\n", "path": "pyro/poutine/do_messenger.py"}]} | 1,597 | 111 |
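A minimal sketch of the equivalence the issue expects, runnable against a Pyro build that includes the fix (before the fix, the intervened run could mis-broadcast or error inside the plate):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro import poutine

def model():
    with pyro.plate("data", 3):
        z1 = pyro.sample("z1", dist.Normal(0.0, 1.0))
        z2 = pyro.sample("z2", dist.Normal(0.0, 1.0))
        return pyro.sample("x", dist.Normal(z1 + z2, 1.0))

z1_val = torch.zeros(3)
tr_do = poutine.trace(poutine.do(model, data={"z1": z1_val})).get_trace()
tr_cond = poutine.trace(poutine.condition(model, data={"z1": z1_val})).get_trace()
assert tr_do.nodes["x"]["value"].shape == (3,)
assert tr_cond.nodes["x"]["value"].shape == (3,)
```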
gh_patches_debug_58564 | rasdani/github-patches | git_diff | codespell-project__codespell-2626 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`python setup.py check` → `twine check`
Because `setup.py ...` is deprecated, we need an alternative to `setup.py check` such as `twine`.
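For reference, the modern replacement workflow is to build the distribution and then validate its metadata with twine, e.g. `python -m build` followed by `twine check dist/*` (assuming the standard `build` and `twine` packages are installed).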
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2
3 from setuptools import setup
4
5 if __name__ == "__main__":
6 setup()
7
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
deleted file mode 100755
--- a/setup.py
+++ /dev/null
@@ -1,6 +0,0 @@
-#! /usr/bin/env python
-
-from setuptools import setup
-
-if __name__ == "__main__":
- setup()
| {"golden_diff": "diff --git a/setup.py b/setup.py\ndeleted file mode 100755\n--- a/setup.py\n+++ /dev/null\n@@ -1,6 +0,0 @@\n-#! /usr/bin/env python\n-\n-from setuptools import setup\n-\n-if __name__ == \"__main__\":\n- setup()\n", "issue": "`python setup.py check` \u2192 `twine check`\nBecause `setup.py ...` is deprecated, we need an alternative to `setup.py check` such as `twine`.\n", "before_files": [{"content": "#! /usr/bin/env python\n\nfrom setuptools import setup\n\nif __name__ == \"__main__\":\n setup()\n", "path": "setup.py"}]} | 600 | 69 |
gh_patches_debug_5766 | rasdani/github-patches | git_diff | napari__napari-4259 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Previously selected point deleted when deleting layer
## 🐛 Bug
Recently selected points are erroneously removed when deleting new layers with the delete key (reproduced with both points and labels layers).
## To Reproduce
Steps to reproduce the behaviour:
1. Create a point on a points layer
2. Create a new points layer
3. Select the newly created points layer from the layer list (visually deselecting the point)
4. Delete the newly created layer using the delete key; the last selected point will also be deleted
Please note that this issue does not occur when the layer is deleted using the bin icon, leading me to believe it is a keybinding issue (and the point must still be 'selected' in some capacity).
https://user-images.githubusercontent.com/95660545/156966137-b2a645a6-25ae-42b4-baf7-137e7506e20a.mp4
## Expected behaviour
It is expected that only the newly created points layer (with no points assigned to it) should be deleted, not the point as well.
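One plausible direction, sketched below as illustration only (the class name is hypothetical and this is not the verified napari implementation), is to stop forwarding destructive key events from the layer list up to the viewer:

```python
from qtpy.QtCore import Qt
from qtpy.QtWidgets import QListView


class LayerListSketch(QListView):
    """Hypothetical layer-list view that keeps delete keys to itself."""

    def keyPressEvent(self, e):
        if e.key() != Qt.Key.Key_Space:
            super().keyPressEvent(e)
        # Only non-destructive keys bubble up to the viewer, so deleting a
        # layer no longer also deletes the last selected point.
        if e.key() not in (Qt.Key.Key_Backspace, Qt.Key.Key_Delete):
            e.ignore()
```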
## Environment
napari: 0.4.15.dev68+gdd3a2afd
Platform: Windows-10-10.0.19044-SP0
Python: 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)]
Qt: 5.15.2
PyQt5: 5.15.6
NumPy: 1.21.5
SciPy: 1.7.3
Dask: 2022.01.0
VisPy: 0.9.6
OpenGL:
- GL version: 4.6.0 - Build 26.20.100.7372
- MAX_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 1920x1080, scale 1.0
Plugins:
- console: 0.0.4
- scikit-image: 0.4.15.dev68+gdd3a2afd
- svg: 0.1.6
napari contributors (2019). napari: a multi-dimensional image viewer for python. doi:10.5281/zenodo.3555620
</issue>
<code>
[start of napari/_qt/containers/qt_layer_list.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from qtpy.QtCore import QSortFilterProxyModel, Qt
6
7 from ...layers import Layer
8 from ...utils.translations import trans
9 from ._base_item_model import SortRole, _BaseEventedItemModel
10 from ._layer_delegate import LayerDelegate
11 from .qt_list_view import QtListView
12
13 if TYPE_CHECKING:
14 from qtpy.QtGui import QKeyEvent
15 from qtpy.QtWidgets import QWidget
16
17 from ...components.layerlist import LayerList
18
19
20 class ReverseProxyModel(QSortFilterProxyModel):
21 """Proxy Model that reverses the view order of a _BaseEventedItemModel."""
22
23 def __init__(self, model: _BaseEventedItemModel) -> None:
24 super().__init__()
25 self.setSourceModel(model)
26 self.setSortRole(SortRole)
27 self.sort(0, Qt.DescendingOrder)
28
29 def dropMimeData(self, data, action, destRow, col, parent):
30 """Handle destination row for dropping with reversed indices."""
31 row = 0 if destRow == -1 else self.sourceModel().rowCount() - destRow
32 return self.sourceModel().dropMimeData(data, action, row, col, parent)
33
34
35 class QtLayerList(QtListView[Layer]):
36 """QItemView subclass specialized for the LayerList.
37
38 This is as mostly for targetting with QSS, applying the delegate and
39 reversing the view with ReverseProxyModel.
40 """
41
42 def __init__(self, root: LayerList, parent: QWidget = None):
43 super().__init__(root, parent)
44 self.setItemDelegate(LayerDelegate())
45 self.setToolTip(trans._('Layer list'))
46 font = self.font()
47 font.setPointSize(12)
48 self.setFont(font)
49
50 # This reverses the order of the items in the view,
51 # so items at the end of the list are at the top.
52 self.setModel(ReverseProxyModel(self.model()))
53
54 def keyPressEvent(self, e: QKeyEvent) -> None:
55 """Override Qt event to pass events to the viewer."""
56 if e.key() != Qt.Key_Space:
57 super().keyPressEvent(e)
58
59 e.ignore() # pass key events up to viewer
60
[end of napari/_qt/containers/qt_layer_list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/napari/_qt/containers/qt_layer_list.py b/napari/_qt/containers/qt_layer_list.py
--- a/napari/_qt/containers/qt_layer_list.py
+++ b/napari/_qt/containers/qt_layer_list.py
@@ -53,7 +53,7 @@
def keyPressEvent(self, e: QKeyEvent) -> None:
"""Override Qt event to pass events to the viewer."""
- if e.key() != Qt.Key_Space:
+ if e.key() != Qt.Key.Key_Space:
super().keyPressEvent(e)
-
- e.ignore() # pass key events up to viewer
+ if e.key() not in (Qt.Key.Key_Backspace, Qt.Key.Key_Delete):
+ e.ignore() # pass key events up to viewer
| {"golden_diff": "diff --git a/napari/_qt/containers/qt_layer_list.py b/napari/_qt/containers/qt_layer_list.py\n--- a/napari/_qt/containers/qt_layer_list.py\n+++ b/napari/_qt/containers/qt_layer_list.py\n@@ -53,7 +53,7 @@\n \n def keyPressEvent(self, e: QKeyEvent) -> None:\n \"\"\"Override Qt event to pass events to the viewer.\"\"\"\n- if e.key() != Qt.Key_Space:\n+ if e.key() != Qt.Key.Key_Space:\n super().keyPressEvent(e)\n-\n- e.ignore() # pass key events up to viewer\n+ if e.key() not in (Qt.Key.Key_Backspace, Qt.Key.Key_Delete):\n+ e.ignore() # pass key events up to viewer\n", "issue": "Previously selected point deleted when deleting layer\n## \ud83d\udc1b Bug\r\n\r\nRecently selected points are erroneously removed when deleting new layers with the delete key. (reproduced with points and labels layer)\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behaviour:\r\n\r\n1. Create a point on a points layer\r\n2. Create a new points layer\r\n3. Select the newly created points layer from the layer list (visually deselecting the point)\r\n4. Delete newly created layer using the delete key, the last selected point will also be deleted\r\n\r\nPlease note that this issue does not occur when the layer is deleted using the bin icon, leading me to believe it is a keybinding issue (and the point must still be 'selected' in come capacity)\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n\r\nhttps://user-images.githubusercontent.com/95660545/156966137-b2a645a6-25ae-42b4-baf7-137e7506e20a.mp4\r\n\r\n\r\n## Expected behaviour\r\nIt is expected that only the newly created points layer (with no points assigned to it) should be deleted, not the point as well.\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\nnapari: 0.4.15.dev68+gdd3a2afd\r\nPlatform: Windows-10-10.0.19044-SP0\r\nPython: 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)]\r\nQt: 5.15.2\r\nPyQt5: 5.15.6\r\nNumPy: 1.21.5\r\nSciPy: 1.7.3\r\nDask: 2022.01.0\r\nVisPy: 0.9.6\r\n\r\nOpenGL:\r\n- GL version: 4.6.0 - Build 26.20.100.7372\r\n- MAX_TEXTURE_SIZE: 16384\r\n\r\nScreens:\r\n- screen 1: resolution 1920x1080, scale 1.0\r\n\r\nPlugins:\r\n- console: 0.0.4\r\n- scikit-image: 0.4.15.dev68+gdd3a2afd\r\n- svg: 0.1.6\r\n\r\nnapari contributors (2019). napari: a multi-dimensional image viewer for python. 
doi:10.5281/zenodo.3555620\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom qtpy.QtCore import QSortFilterProxyModel, Qt\n\nfrom ...layers import Layer\nfrom ...utils.translations import trans\nfrom ._base_item_model import SortRole, _BaseEventedItemModel\nfrom ._layer_delegate import LayerDelegate\nfrom .qt_list_view import QtListView\n\nif TYPE_CHECKING:\n from qtpy.QtGui import QKeyEvent\n from qtpy.QtWidgets import QWidget\n\n from ...components.layerlist import LayerList\n\n\nclass ReverseProxyModel(QSortFilterProxyModel):\n \"\"\"Proxy Model that reverses the view order of a _BaseEventedItemModel.\"\"\"\n\n def __init__(self, model: _BaseEventedItemModel) -> None:\n super().__init__()\n self.setSourceModel(model)\n self.setSortRole(SortRole)\n self.sort(0, Qt.DescendingOrder)\n\n def dropMimeData(self, data, action, destRow, col, parent):\n \"\"\"Handle destination row for dropping with reversed indices.\"\"\"\n row = 0 if destRow == -1 else self.sourceModel().rowCount() - destRow\n return self.sourceModel().dropMimeData(data, action, row, col, parent)\n\n\nclass QtLayerList(QtListView[Layer]):\n \"\"\"QItemView subclass specialized for the LayerList.\n\n This is as mostly for targetting with QSS, applying the delegate and\n reversing the view with ReverseProxyModel.\n \"\"\"\n\n def __init__(self, root: LayerList, parent: QWidget = None):\n super().__init__(root, parent)\n self.setItemDelegate(LayerDelegate())\n self.setToolTip(trans._('Layer list'))\n font = self.font()\n font.setPointSize(12)\n self.setFont(font)\n\n # This reverses the order of the items in the view,\n # so items at the end of the list are at the top.\n self.setModel(ReverseProxyModel(self.model()))\n\n def keyPressEvent(self, e: QKeyEvent) -> None:\n \"\"\"Override Qt event to pass events to the viewer.\"\"\"\n if e.key() != Qt.Key_Space:\n super().keyPressEvent(e)\n\n e.ignore() # pass key events up to viewer\n", "path": "napari/_qt/containers/qt_layer_list.py"}]} | 1,738 | 176 |
gh_patches_debug_16311 | rasdani/github-patches | git_diff | spotify__luigi-368 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LuigiConfigParser::add_config_path() raises if instance() hasn't been accessed
To add a path to the list of config paths, one currently has to do:
``` python
LuigiConfigParser.instance() # remove this and get an exception
LuigiConfigParser.add_config_path(my_path)
```
because `add_config_path` tries to reload `cls._instance` which is initialized with `None`. Wouldn't it be cleaner to do a check there and only reload a non-null instance?
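A minimal, self-contained sketch of the guarded variant (simplified names, not the real luigi class) that routes the reload through `instance()` so the singleton is created on demand:

```python
class ConfigSketch:
    """Toy stand-in for LuigiConfigParser, illustrating the lazy reload."""

    _instance = None
    _config_paths = []

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @classmethod
    def add_config_path(cls, path):
        cls._config_paths.append(path)
        cls.instance().reload()  # never dereferences a None _instance

    def reload(self):
        # stand-in for ConfigParser.read(self._config_paths)
        return list(self._config_paths)
```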
</issue>
<code>
[start of luigi/configuration.py]
1
2 import os
3 import logging
4 from ConfigParser import ConfigParser, NoOptionError, NoSectionError
5
6
7 class LuigiConfigParser(ConfigParser):
8 NO_DEFAULT = object()
9 _instance = None
10 _config_paths = ['/etc/luigi/client.cfg', 'client.cfg']
11 if 'LUIGI_CONFIG_PATH' in os.environ:
12 _config_paths.append(os.environ['LUIGI_CONFIG_PATH'])
13
14 @classmethod
15 def add_config_path(cls, path):
16 cls._config_paths.append(path)
17 cls._instance.reload()
18
19 @classmethod
20 def instance(cls, *args, **kwargs):
21 """ Singleton getter """
22 if cls._instance is None:
23 cls._instance = cls(*args, **kwargs)
24 loaded = cls._instance.reload()
25 logging.getLogger('luigi-interface').info('Loaded %r', loaded)
26
27 return cls._instance
28
29 def reload(self):
30 return self._instance.read(self._config_paths)
31
32 def _get_with_default(self, method, section, option, default, expected_type=None):
33 """ Gets the value of the section/option using method. Returns default if value
34 is not found. Raises an exception if the default value is not None and doesn't match
35 the expected_type.
36 """
37 try:
38 return method(self, section, option)
39 except (NoOptionError, NoSectionError):
40 if default is LuigiConfigParser.NO_DEFAULT:
41 raise
42 if expected_type is not None and default is not None and \
43 not isinstance(default, expected_type):
44 raise
45 return default
46
47 def get(self, section, option, default=NO_DEFAULT):
48 return self._get_with_default(ConfigParser.get, section, option, default)
49
50 def getboolean(self, section, option, default=NO_DEFAULT):
51 return self._get_with_default(ConfigParser.getboolean, section, option, default, bool)
52
53 def getint(self, section, option, default=NO_DEFAULT):
54 return self._get_with_default(ConfigParser.getint, section, option, default, int)
55
56 def getfloat(self, section, option, default=NO_DEFAULT):
57 return self._get_with_default(ConfigParser.getfloat, section, option, default, float)
58
59 def set(self, section, option, value):
60 if not ConfigParser.has_section(self, section):
61 ConfigParser.add_section(self, section)
62
63 return ConfigParser.set(self, section, option, value)
64
65 def get_config():
66 """ Convenience method (for backwards compatibility) for accessing config singleton """
67 return LuigiConfigParser.instance()
68
[end of luigi/configuration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/luigi/configuration.py b/luigi/configuration.py
--- a/luigi/configuration.py
+++ b/luigi/configuration.py
@@ -14,7 +14,7 @@
@classmethod
def add_config_path(cls, path):
cls._config_paths.append(path)
- cls._instance.reload()
+ cls.reload()
@classmethod
def instance(cls, *args, **kwargs):
@@ -26,8 +26,9 @@
return cls._instance
- def reload(self):
- return self._instance.read(self._config_paths)
+ @classmethod
+ def reload(cls):
+ return cls.instance().read(cls._config_paths)
def _get_with_default(self, method, section, option, default, expected_type=None):
""" Gets the value of the section/option using method. Returns default if value
| {"golden_diff": "diff --git a/luigi/configuration.py b/luigi/configuration.py\n--- a/luigi/configuration.py\n+++ b/luigi/configuration.py\n@@ -14,7 +14,7 @@\n @classmethod\n def add_config_path(cls, path):\n cls._config_paths.append(path)\n- cls._instance.reload()\n+ cls.reload()\n \n @classmethod\n def instance(cls, *args, **kwargs):\n@@ -26,8 +26,9 @@\n \n return cls._instance\n \n- def reload(self):\n- return self._instance.read(self._config_paths)\n+ @classmethod\n+ def reload(cls):\n+ return cls.instance().read(cls._config_paths)\n \n def _get_with_default(self, method, section, option, default, expected_type=None):\n \"\"\" Gets the value of the section/option using method. Returns default if value\n", "issue": "LuigiConfigParser::add_config_path() raises if instance() hasn't been accessed\nTo add a path to the list of config paths, one currently has to do:\n\n``` python\nLuigiConfigParser.instance() # remove this and get an exception\nLuigiConfigParser.add_config_path(my_path)\n```\n\nbecause `add_config_path` tries to reload `cls._instance` which is initialized with `None`. Wouldn't it be cleaner to do a check there and only reload a non-null instance?\n\n", "before_files": [{"content": "\nimport os\nimport logging\nfrom ConfigParser import ConfigParser, NoOptionError, NoSectionError\n\n\nclass LuigiConfigParser(ConfigParser):\n NO_DEFAULT = object()\n _instance = None\n _config_paths = ['/etc/luigi/client.cfg', 'client.cfg']\n if 'LUIGI_CONFIG_PATH' in os.environ:\n _config_paths.append(os.environ['LUIGI_CONFIG_PATH'])\n\n @classmethod\n def add_config_path(cls, path):\n cls._config_paths.append(path)\n cls._instance.reload()\n\n @classmethod\n def instance(cls, *args, **kwargs):\n \"\"\" Singleton getter \"\"\"\n if cls._instance is None:\n cls._instance = cls(*args, **kwargs)\n loaded = cls._instance.reload()\n logging.getLogger('luigi-interface').info('Loaded %r', loaded)\n\n return cls._instance\n\n def reload(self):\n return self._instance.read(self._config_paths)\n\n def _get_with_default(self, method, section, option, default, expected_type=None):\n \"\"\" Gets the value of the section/option using method. Returns default if value\n is not found. Raises an exception if the default value is not None and doesn't match\n the expected_type.\n \"\"\"\n try:\n return method(self, section, option)\n except (NoOptionError, NoSectionError):\n if default is LuigiConfigParser.NO_DEFAULT:\n raise\n if expected_type is not None and default is not None and \\\n not isinstance(default, expected_type):\n raise\n return default\n\n def get(self, section, option, default=NO_DEFAULT):\n return self._get_with_default(ConfigParser.get, section, option, default)\n\n def getboolean(self, section, option, default=NO_DEFAULT):\n return self._get_with_default(ConfigParser.getboolean, section, option, default, bool)\n\n def getint(self, section, option, default=NO_DEFAULT):\n return self._get_with_default(ConfigParser.getint, section, option, default, int)\n\n def getfloat(self, section, option, default=NO_DEFAULT):\n return self._get_with_default(ConfigParser.getfloat, section, option, default, float)\n\n def set(self, section, option, value):\n if not ConfigParser.has_section(self, section):\n ConfigParser.add_section(self, section)\n\n return ConfigParser.set(self, section, option, value)\n\ndef get_config():\n \"\"\" Convenience method (for backwards compatibility) for accessing config singleton \"\"\"\n return LuigiConfigParser.instance()\n", "path": "luigi/configuration.py"}]} | 1,312 | 193 |
gh_patches_debug_8797 | rasdani/github-patches | git_diff | Kinto__kinto-1340 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`kinto create-user` doesn't overwrite the password if the user already exists.
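One contributing factor could be that the script's storage writes are never committed; a hedged sketch of an explicit commit at the end of `create_user` (assuming the zope-style `transaction` package used by kinto's transactional storage):

```python
import transaction


def finish_create_user(registry, username, record):
    """Hypothetical tail of create_user: write, grant, then commit."""
    registry.storage.create(collection_id='account',
                            parent_id=username,
                            record=record,
                            ignore_conflict=True)
    registry.permission.add_principal_to_ace('/accounts/{}'.format(username),
                                             'write',
                                             'account:{}'.format(username))
    transaction.commit()  # without this, the script's changes may be rolled back
```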
</issue>
<code>
[start of kinto/plugins/accounts/scripts.py]
1 import logging
2 import getpass
3 from pyramid.settings import asbool
4
5 from .utils import hash_password
6 from .views import AccountIdGenerator
7
8
9 logger = logging.getLogger(__name__)
10
11
12 def create_user(env, username=None, password=None):
13 """Administrative command to create a new user."""
14 registry = env['registry']
15 settings = registry.settings
16 readonly_mode = asbool(settings.get('readonly', False))
17 if readonly_mode:
18 message = 'Cannot create a user with a readonly server.'
19 logger.error(message)
20 return 51
21
22 if 'kinto.plugins.accounts' not in settings['includes']:
23 message = 'Cannot create a user when the accounts plugin is not installed.'
24 logger.error(message)
25 return 52
26
27 try:
28 validator = AccountIdGenerator()
29 if username is None:
30 username = input('Username: ')
31 while not validator.match(username):
32 print('{} is not a valid username.')
33 print('Username should match {0!r}, please try again.'.format(validator.regexp))
34 username = input('Username: ')
35
36 if password is None:
37             while True:  # loop until the user enters the same password twice
38 password = getpass.getpass('Please enter a password for {}: '.format(username))
39 confirm = getpass.getpass('Please confirm the password: '.format(username))
40
41 if password != confirm:
42 print('Sorry, passwords do not match, please try again.')
43 else:
44 break
45 except EOFError:
46 print('User creation aborted')
47 return 53
48
49 print("Creating user '{}'".format(username))
50 record = {'id': username, 'password': hash_password(password)}
51 registry.storage.create(collection_id='account',
52 parent_id=username,
53 record=record,
54 ignore_conflict=True)
55 registry.permission.add_principal_to_ace('/accounts/{}'.format(username),
56 'write',
57 'account:{}'.format(username))
58
59 return 0
60
[end of kinto/plugins/accounts/scripts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/plugins/accounts/scripts.py b/kinto/plugins/accounts/scripts.py
--- a/kinto/plugins/accounts/scripts.py
+++ b/kinto/plugins/accounts/scripts.py
@@ -1,5 +1,7 @@
import logging
import getpass
+
+import transaction as current_transaction
from pyramid.settings import asbool
from .utils import hash_password
@@ -56,4 +58,6 @@
'write',
'account:{}'.format(username))
+ current_transaction.commit()
+
return 0
| {"golden_diff": "diff --git a/kinto/plugins/accounts/scripts.py b/kinto/plugins/accounts/scripts.py\n--- a/kinto/plugins/accounts/scripts.py\n+++ b/kinto/plugins/accounts/scripts.py\n@@ -1,5 +1,7 @@\n import logging\n import getpass\n+\n+import transaction as current_transaction\n from pyramid.settings import asbool\n \n from .utils import hash_password\n@@ -56,4 +58,6 @@\n 'write',\n 'account:{}'.format(username))\n \n+ current_transaction.commit()\n+\n return 0\n", "issue": "`kinto create-user` doesn't override the password if the user already exists.\n\n", "before_files": [{"content": "import logging\nimport getpass\nfrom pyramid.settings import asbool\n\nfrom .utils import hash_password\nfrom .views import AccountIdGenerator\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_user(env, username=None, password=None):\n \"\"\"Administrative command to create a new user.\"\"\"\n registry = env['registry']\n settings = registry.settings\n readonly_mode = asbool(settings.get('readonly', False))\n if readonly_mode:\n message = 'Cannot create a user with a readonly server.'\n logger.error(message)\n return 51\n\n if 'kinto.plugins.accounts' not in settings['includes']:\n message = 'Cannot create a user when the accounts plugin is not installed.'\n logger.error(message)\n return 52\n\n try:\n validator = AccountIdGenerator()\n if username is None:\n username = input('Username: ')\n while not validator.match(username):\n print('{} is not a valid username.')\n print('Username should match {0!r}, please try again.'.format(validator.regexp))\n username = input('Username: ')\n\n if password is None:\n while True: # The user didn't entered twice the same password\n password = getpass.getpass('Please enter a password for {}: '.format(username))\n confirm = getpass.getpass('Please confirm the password: '.format(username))\n\n if password != confirm:\n print('Sorry, passwords do not match, please try again.')\n else:\n break\n except EOFError:\n print('User creation aborted')\n return 53\n\n print(\"Creating user '{}'\".format(username))\n record = {'id': username, 'password': hash_password(password)}\n registry.storage.create(collection_id='account',\n parent_id=username,\n record=record,\n ignore_conflict=True)\n registry.permission.add_principal_to_ace('/accounts/{}'.format(username),\n 'write',\n 'account:{}'.format(username))\n\n return 0\n", "path": "kinto/plugins/accounts/scripts.py"}]} | 1,083 | 113 |
gh_patches_debug_31523 | rasdani/github-patches | git_diff | freedomofpress__securedrop-3884 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove unnecessary Ansible callback for profile_tasks
# Feature request
## Description
The file at `install_files/ansible-base/callback_plugins/profile_tasks.py` was added via #1196, to provide additional information on task performance, with the goal of aiding developers in improving the server config workflow. Since we moved to Ansible v2 in #1146, the hardcoded plugin is no longer necessary.
Instead, we can add a line to `ansible.cfg` under `[defaults]`:
```
callback_whitelist = profile_tasks
```
The simplification is possible because task profiling was [added to Ansible core as of v2](https://docs.ansible.com/ansible/devel/plugins/callback/profile_tasks.html).
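(Equivalently, setting the `ANSIBLE_CALLBACK_WHITELIST=profile_tasks` environment variable enables the same built-in callback for a single run, if editing `ansible.cfg` is undesirable.)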
## User Stories
As a maintainer, I want to delete redundant code wherever possible, and lean on upstream to handle core functionality when appropriate.
</issue>
<code>
[start of install_files/ansible-base/callback_plugins/profile_tasks.py]
1 # Source: https://github.com/jlafon/ansible-profile
2 # License: MIT
3 # More info: http://jlafon.io/ansible-profiling.html
4 # The profiling functionality will be provided by Ansible v2,
5 # since this callback_plugin has been merged into core,
6 # but we're including here to support older versions of Ansible.
7 import datetime
8 import os
9 import time
10
11
12 class CallbackModule(object):
13 """
14 A plugin for timing tasks
15 """
16 def __init__(self):
17 self.stats = {}
18 self.current = None
19
20 def playbook_on_task_start(self, name, is_conditional):
21 """
22 Logs the start of each task
23 """
24
25 if os.getenv("ANSIBLE_PROFILE_DISABLE") is not None:
26 return
27
28 if self.current is not None:
29 # Record the running time of the last executed task
30 self.stats[self.current] = time.time() - self.stats[self.current]
31
32 # Record the start time of the current task
33 self.current = name
34 self.stats[self.current] = time.time()
35
36 def playbook_on_stats(self, stats):
37 """
38 Prints the timings
39 """
40
41 if os.getenv("ANSIBLE_PROFILE_DISABLE") is not None:
42 return
43
44 # Record the timing of the very last task
45 if self.current is not None:
46 self.stats[self.current] = time.time() - self.stats[self.current]
47
48 # Sort the tasks by their running time
49 results = sorted(
50 self.stats.items(),
51 key=lambda value: value[1],
52 reverse=True,
53 )
54
55 # Just keep the top 10
56 results = results[:10]
57
58 # Print the timings
59 for name, elapsed in results:
60 print(
61 "{0:-<70}{1:->9}".format(
62 '{0} '.format(name),
63 ' {0:.02f}s'.format(elapsed),
64 )
65 )
66
67 total_seconds = sum([x[1] for x in self.stats.items()])
68 print("\nPlaybook finished: {0}, {1} total tasks."
69 " {2} elapsed. \n".format(
70 time.asctime(),
71 len(self.stats.items()),
72 datetime.timedelta(seconds=(int(total_seconds)))
73 )
74 )
75
[end of install_files/ansible-base/callback_plugins/profile_tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/install_files/ansible-base/callback_plugins/profile_tasks.py b/install_files/ansible-base/callback_plugins/profile_tasks.py
deleted file mode 100644
--- a/install_files/ansible-base/callback_plugins/profile_tasks.py
+++ /dev/null
@@ -1,74 +0,0 @@
-# Source: https://github.com/jlafon/ansible-profile
-# License: MIT
-# More info: http://jlafon.io/ansible-profiling.html
-# The profiling functionality will be provided by Ansible v2,
-# since this callback_plugin has been merged into core,
-# but we're including here to support older versions of Ansible.
-import datetime
-import os
-import time
-
-
-class CallbackModule(object):
- """
- A plugin for timing tasks
- """
- def __init__(self):
- self.stats = {}
- self.current = None
-
- def playbook_on_task_start(self, name, is_conditional):
- """
- Logs the start of each task
- """
-
- if os.getenv("ANSIBLE_PROFILE_DISABLE") is not None:
- return
-
- if self.current is not None:
- # Record the running time of the last executed task
- self.stats[self.current] = time.time() - self.stats[self.current]
-
- # Record the start time of the current task
- self.current = name
- self.stats[self.current] = time.time()
-
- def playbook_on_stats(self, stats):
- """
- Prints the timings
- """
-
- if os.getenv("ANSIBLE_PROFILE_DISABLE") is not None:
- return
-
- # Record the timing of the very last task
- if self.current is not None:
- self.stats[self.current] = time.time() - self.stats[self.current]
-
- # Sort the tasks by their running time
- results = sorted(
- self.stats.items(),
- key=lambda value: value[1],
- reverse=True,
- )
-
- # Just keep the top 10
- results = results[:10]
-
- # Print the timings
- for name, elapsed in results:
- print(
- "{0:-<70}{1:->9}".format(
- '{0} '.format(name),
- ' {0:.02f}s'.format(elapsed),
- )
- )
-
- total_seconds = sum([x[1] for x in self.stats.items()])
- print("\nPlaybook finished: {0}, {1} total tasks."
- " {2} elapsed. \n".format(
- time.asctime(),
- len(self.stats.items()),
- datetime.timedelta(seconds=(int(total_seconds)))
- )
- )
| {"golden_diff": "diff --git a/install_files/ansible-base/callback_plugins/profile_tasks.py b/install_files/ansible-base/callback_plugins/profile_tasks.py\ndeleted file mode 100644\n--- a/install_files/ansible-base/callback_plugins/profile_tasks.py\n+++ /dev/null\n@@ -1,74 +0,0 @@\n-# Source: https://github.com/jlafon/ansible-profile\n-# License: MIT\n-# More info: http://jlafon.io/ansible-profiling.html\n-# The profiling functionality will be provided by Ansible v2,\n-# since this callback_plugin has been merged into core,\n-# but we're including here to support older versions of Ansible.\n-import datetime\n-import os\n-import time\n-\n-\n-class CallbackModule(object):\n- \"\"\"\n- A plugin for timing tasks\n- \"\"\"\n- def __init__(self):\n- self.stats = {}\n- self.current = None\n-\n- def playbook_on_task_start(self, name, is_conditional):\n- \"\"\"\n- Logs the start of each task\n- \"\"\"\n-\n- if os.getenv(\"ANSIBLE_PROFILE_DISABLE\") is not None:\n- return\n-\n- if self.current is not None:\n- # Record the running time of the last executed task\n- self.stats[self.current] = time.time() - self.stats[self.current]\n-\n- # Record the start time of the current task\n- self.current = name\n- self.stats[self.current] = time.time()\n-\n- def playbook_on_stats(self, stats):\n- \"\"\"\n- Prints the timings\n- \"\"\"\n-\n- if os.getenv(\"ANSIBLE_PROFILE_DISABLE\") is not None:\n- return\n-\n- # Record the timing of the very last task\n- if self.current is not None:\n- self.stats[self.current] = time.time() - self.stats[self.current]\n-\n- # Sort the tasks by their running time\n- results = sorted(\n- self.stats.items(),\n- key=lambda value: value[1],\n- reverse=True,\n- )\n-\n- # Just keep the top 10\n- results = results[:10]\n-\n- # Print the timings\n- for name, elapsed in results:\n- print(\n- \"{0:-<70}{1:->9}\".format(\n- '{0} '.format(name),\n- ' {0:.02f}s'.format(elapsed),\n- )\n- )\n-\n- total_seconds = sum([x[1] for x in self.stats.items()])\n- print(\"\\nPlaybook finished: {0}, {1} total tasks.\"\n- \" {2} elapsed. \\n\".format(\n- time.asctime(),\n- len(self.stats.items()),\n- datetime.timedelta(seconds=(int(total_seconds)))\n- )\n- )\n", "issue": "Remove unnecessary Ansible callback for profile_tasks\n# Feature request\r\n\r\n## Description\r\n\r\nThe file at `install_files/ansible-base/callback_plugins/profile_tasks.py` was added via #1196, to provide additional information on task performance, with the goal of aiding developers in improving the server config workflow. 
Since we moved to Ansible v2 in #1146, the hardcoded plugin is no longer necessary.\r\n\r\nInstead, we can ansible add a lint to `ansible.cfg` under `[defaults]`:\r\n\r\n```\r\ncallback_whitelist = profile_tasks\r\n```\r\n\r\nThe simplification is possible because task profiling was [added to Ansible core as of v2](https://docs.ansible.com/ansible/devel/plugins/callback/profile_tasks.html).\r\n\r\n## User Stories\r\nAs a maintainer, I want to delete redundant code wherever possible, and lean on upstream to handle core functionality when appropriate.\r\n\n", "before_files": [{"content": "# Source: https://github.com/jlafon/ansible-profile\n# License: MIT\n# More info: http://jlafon.io/ansible-profiling.html\n# The profiling functionality will be provided by Ansible v2,\n# since this callback_plugin has been merged into core,\n# but we're including here to support older versions of Ansible.\nimport datetime\nimport os\nimport time\n\n\nclass CallbackModule(object):\n \"\"\"\n A plugin for timing tasks\n \"\"\"\n def __init__(self):\n self.stats = {}\n self.current = None\n\n def playbook_on_task_start(self, name, is_conditional):\n \"\"\"\n Logs the start of each task\n \"\"\"\n\n if os.getenv(\"ANSIBLE_PROFILE_DISABLE\") is not None:\n return\n\n if self.current is not None:\n # Record the running time of the last executed task\n self.stats[self.current] = time.time() - self.stats[self.current]\n\n # Record the start time of the current task\n self.current = name\n self.stats[self.current] = time.time()\n\n def playbook_on_stats(self, stats):\n \"\"\"\n Prints the timings\n \"\"\"\n\n if os.getenv(\"ANSIBLE_PROFILE_DISABLE\") is not None:\n return\n\n # Record the timing of the very last task\n if self.current is not None:\n self.stats[self.current] = time.time() - self.stats[self.current]\n\n # Sort the tasks by their running time\n results = sorted(\n self.stats.items(),\n key=lambda value: value[1],\n reverse=True,\n )\n\n # Just keep the top 10\n results = results[:10]\n\n # Print the timings\n for name, elapsed in results:\n print(\n \"{0:-<70}{1:->9}\".format(\n '{0} '.format(name),\n ' {0:.02f}s'.format(elapsed),\n )\n )\n\n total_seconds = sum([x[1] for x in self.stats.items()])\n print(\"\\nPlaybook finished: {0}, {1} total tasks.\"\n \" {2} elapsed. \\n\".format(\n time.asctime(),\n len(self.stats.items()),\n datetime.timedelta(seconds=(int(total_seconds)))\n )\n )\n", "path": "install_files/ansible-base/callback_plugins/profile_tasks.py"}]} | 1,356 | 611 |
gh_patches_debug_3328 | rasdani/github-patches | git_diff | Mailu__Mailu-1944 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Letsencrypt Force Renewal
Is there a limit on the Subject Alt Name entries?
I have updated my /mailu/mailu.env "HOSTNAMES" variable, but when I restart Mailu it doesn't update the Subject Alt Names on the mailu cert.
Previously it has worked, so I am guessing that I need to force Letsencrypt to refresh, as it isn't within the renewal window. But there is no guidance for the new letsencrypt certbot.
I am using the latest Mailu version (1.7), and this is the command I am using to restart Mailu: `/mailu/docker-compose -p mailu up -d`.
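For anyone else hitting this: certbot has a `--renew-with-new-domains` flag that forces re-issuance whenever the requested domain set changes. A hedged, abbreviated sketch of the adjusted argument list in `letsencrypt.py` (untested against Mailu itself):

```python
import os

command = [
    "certbot",
    "-n", "--agree-tos",            # non-interactive
    "-d", os.environ["HOSTNAMES"],
    "certonly", "--standalone",
    "--cert-name", "mailu",
    "--keep-until-expiring",
    "--renew-with-new-domains",     # re-issue when HOSTNAMES gains/loses names
    "--config-dir", "/certs/letsencrypt",
]
```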
</issue>
<code>
[start of core/nginx/letsencrypt.py]
1 #!/usr/bin/python3
2
3 import os
4 import time
5 import subprocess
6
7
8 command = [
9 "certbot",
10 "-n", "--agree-tos", # non-interactive
11 "-d", os.environ["HOSTNAMES"],
12 "-m", "{}@{}".format(os.environ["POSTMASTER"], os.environ["DOMAIN"]),
13 "certonly", "--standalone",
14 "--cert-name", "mailu",
15 "--preferred-challenges", "http", "--http-01-port", "8008",
16 "--keep-until-expiring",
17 "--rsa-key-size", "4096",
18 "--config-dir", "/certs/letsencrypt",
19 "--post-hook", "/config.py"
20 ]
21
22 # Wait for nginx to start
23 time.sleep(5)
24
25 # Run certbot every hour
26 while True:
27 subprocess.call(command)
28 time.sleep(3600)
29
30
[end of core/nginx/letsencrypt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/nginx/letsencrypt.py b/core/nginx/letsencrypt.py
--- a/core/nginx/letsencrypt.py
+++ b/core/nginx/letsencrypt.py
@@ -14,8 +14,8 @@
"--cert-name", "mailu",
"--preferred-challenges", "http", "--http-01-port", "8008",
"--keep-until-expiring",
- "--rsa-key-size", "4096",
"--config-dir", "/certs/letsencrypt",
+ "--renew-with-new-domains",
"--post-hook", "/config.py"
]
| {"golden_diff": "diff --git a/core/nginx/letsencrypt.py b/core/nginx/letsencrypt.py\n--- a/core/nginx/letsencrypt.py\n+++ b/core/nginx/letsencrypt.py\n@@ -14,8 +14,8 @@\n \"--cert-name\", \"mailu\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n- \"--rsa-key-size\", \"4096\",\n \"--config-dir\", \"/certs/letsencrypt\",\n+ \"--renew-with-new-domains\",\n \"--post-hook\", \"/config.py\"\n ]\n", "issue": "Letsencrypt Force Renewal\nIs there a limit on the Subject Alt Name entries?\r\n\r\nI have updated my /mailu/mailu.env \"HOSTNAMES\" variable, but when I restart Mailu it doesn't update the Subject Alt Names on the mailu cert.\r\n\r\nPreviously it has worked, so I am guessing that I need to force Letsencrypt to refresh as it isnt within the renewal window. But there is no guidance for the new letsencrypt certbot.\r\n\r\nI am using the latest Mailu version (1.7) and this is the command I am using to restart mailu '/mailu/docker-compose -p mailu up -d'\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport time\nimport subprocess\n\n\ncommand = [\n \"certbot\",\n \"-n\", \"--agree-tos\", # non-interactive\n \"-d\", os.environ[\"HOSTNAMES\"],\n \"-m\", \"{}@{}\".format(os.environ[\"POSTMASTER\"], os.environ[\"DOMAIN\"]),\n \"certonly\", \"--standalone\",\n \"--cert-name\", \"mailu\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n \"--rsa-key-size\", \"4096\",\n \"--config-dir\", \"/certs/letsencrypt\",\n \"--post-hook\", \"/config.py\"\n]\n\n# Wait for nginx to start\ntime.sleep(5)\n\n# Run certbot every hour\nwhile True:\n subprocess.call(command)\n time.sleep(3600)\n\n", "path": "core/nginx/letsencrypt.py"}]} | 913 | 133 |
gh_patches_debug_594 | rasdani/github-patches | git_diff | pex-tool__pex-1057 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.17
On the docket:
+ [x] TypeError when resolving local platforms. #1043
+ [x] No such file for interpreter's binary name #1009
+ [x] Pex resources leak while bootstrapping pants #1050
+ [x] Pex PEX perf regression #1054
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.16"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.16"
+__version__ = "2.1.17"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.16\"\n+__version__ = \"2.1.17\"\n", "issue": "Release 2.1.17\nOn the docket:\r\n+ [x] TypeError when resolving local platforms. #1043\r\n+ [x] No such file for interpreter's binary name #1009\r\n+ [x] Pex resources leak while bootstrapping pants #1050\r\n+ [x] Pex PEX perf regression #1054\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.16\"\n", "path": "pex/version.py"}]} | 667 | 97 |
gh_patches_debug_3456 | rasdani/github-patches | git_diff | CTFd__CTFd-1827 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set plugin migration version in between each migration
https://github.com/CTFd/CTFd/blob/e1991e16963b10302baa7cc50d52071a5053bf2f/CTFd/plugins/migrations.py#L72-L77
This code should probably set the plugin version in between each migration, so that if a migration fails the upgrade doesn't need to restart from the beginning.
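A minimal sketch of that idea, reusing the variable names from the linked snippet (alembic `Script` objects expose the applied revision as `.revision`):

```python
for r in revs:
    with context.begin_transaction():
        r.module.upgrade(op=op)
    # Persist progress after every step so a failed migration resumes
    # from the last applied revision instead of from the original base.
    set_config(plugin_name + "_alembic_version", r.revision)
```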
</issue>
<code>
[start of CTFd/plugins/migrations.py]
1 import inspect
2 import os
3
4 from alembic.config import Config
5 from alembic.migration import MigrationContext
6 from alembic.operations import Operations
7 from alembic.script import ScriptDirectory
8 from flask import current_app
9 from sqlalchemy import create_engine, pool
10
11 from CTFd.utils import get_config, set_config
12
13
14 def current(plugin_name=None):
15 if plugin_name is None:
16 # Get the directory name of the plugin if unspecified
17 # Doing it this way doesn't waste the rest of the inspect.stack call
18 frame = inspect.currentframe()
19 caller_info = inspect.getframeinfo(frame.f_back)
20 caller_path = caller_info[0]
21 plugin_name = os.path.basename(os.path.dirname(caller_path))
22
23 return get_config(plugin_name + "_alembic_version")
24
25
26 def upgrade(plugin_name=None, revision=None, lower="current"):
27 database_url = current_app.config.get("SQLALCHEMY_DATABASE_URI")
28 if database_url.startswith("sqlite"):
29 current_app.db.create_all()
30 return
31
32 if plugin_name is None:
33 # Get the directory name of the plugin if unspecified
34 # Doing it this way doesn't waste the rest of the inspect.stack call
35 frame = inspect.currentframe()
36 caller_info = inspect.getframeinfo(frame.f_back)
37 caller_path = caller_info[0]
38 plugin_name = os.path.basename(os.path.dirname(caller_path))
39
40     # Check if the plugin has migrations
41 migrations_path = os.path.join(current_app.plugins_dir, plugin_name, "migrations")
42 if os.path.isdir(migrations_path) is False:
43 return
44
45 engine = create_engine(database_url, poolclass=pool.NullPool)
46 conn = engine.connect()
47 context = MigrationContext.configure(conn)
48 op = Operations(context)
49
50 # Find the list of migrations to run
51 config = Config()
52 config.set_main_option("script_location", migrations_path)
53 config.set_main_option("version_locations", migrations_path)
54 script = ScriptDirectory.from_config(config)
55
56 # Choose base revision for plugin upgrade
57 # "current" points to the current plugin version stored in config
58 # None represents the absolute base layer (e.g. first installation)
59 if lower == "current":
60 lower = get_config(plugin_name + "_alembic_version")
61
62 # Do we upgrade to head or to a specific revision
63 if revision is None:
64 upper = script.get_current_head()
65 else:
66 upper = revision
67
68 # Apply from lower to upper
69 revs = list(script.iterate_revisions(lower=lower, upper=upper))
70 revs.reverse()
71
72 try:
73 for r in revs:
74 with context.begin_transaction():
75 r.module.upgrade(op=op)
76 finally:
77 conn.close()
78
79 # Set the new latest revision
80 set_config(plugin_name + "_alembic_version", upper)
81
[end of CTFd/plugins/migrations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/plugins/migrations.py b/CTFd/plugins/migrations.py
--- a/CTFd/plugins/migrations.py
+++ b/CTFd/plugins/migrations.py
@@ -73,6 +73,9 @@
for r in revs:
with context.begin_transaction():
r.module.upgrade(op=op)
+ # Set revision that succeeded so we don't need
+ # to start from the beginning on failure
+ set_config(plugin_name + "_alembic_version", r.revision)
finally:
conn.close()
| {"golden_diff": "diff --git a/CTFd/plugins/migrations.py b/CTFd/plugins/migrations.py\n--- a/CTFd/plugins/migrations.py\n+++ b/CTFd/plugins/migrations.py\n@@ -73,6 +73,9 @@\n for r in revs:\n with context.begin_transaction():\n r.module.upgrade(op=op)\n+ # Set revision that succeeded so we don't need\n+ # to start from the beginning on failure\n+ set_config(plugin_name + \"_alembic_version\", r.revision)\n finally:\n conn.close()\n", "issue": "Set plugin migration version in between each migration\nhttps://github.com/CTFd/CTFd/blob/e1991e16963b10302baa7cc50d52071a5053bf2f/CTFd/plugins/migrations.py#L72-L77\r\n\r\nThis code here probably should be setting the plugin version in between each migration so that if a migration fails it doesn't need to be started from the beginning again. \n", "before_files": [{"content": "import inspect\nimport os\n\nfrom alembic.config import Config\nfrom alembic.migration import MigrationContext\nfrom alembic.operations import Operations\nfrom alembic.script import ScriptDirectory\nfrom flask import current_app\nfrom sqlalchemy import create_engine, pool\n\nfrom CTFd.utils import get_config, set_config\n\n\ndef current(plugin_name=None):\n if plugin_name is None:\n # Get the directory name of the plugin if unspecified\n # Doing it this way doesn't waste the rest of the inspect.stack call\n frame = inspect.currentframe()\n caller_info = inspect.getframeinfo(frame.f_back)\n caller_path = caller_info[0]\n plugin_name = os.path.basename(os.path.dirname(caller_path))\n\n return get_config(plugin_name + \"_alembic_version\")\n\n\ndef upgrade(plugin_name=None, revision=None, lower=\"current\"):\n database_url = current_app.config.get(\"SQLALCHEMY_DATABASE_URI\")\n if database_url.startswith(\"sqlite\"):\n current_app.db.create_all()\n return\n\n if plugin_name is None:\n # Get the directory name of the plugin if unspecified\n # Doing it this way doesn't waste the rest of the inspect.stack call\n frame = inspect.currentframe()\n caller_info = inspect.getframeinfo(frame.f_back)\n caller_path = caller_info[0]\n plugin_name = os.path.basename(os.path.dirname(caller_path))\n\n # Check if the plugin has migraitons\n migrations_path = os.path.join(current_app.plugins_dir, plugin_name, \"migrations\")\n if os.path.isdir(migrations_path) is False:\n return\n\n engine = create_engine(database_url, poolclass=pool.NullPool)\n conn = engine.connect()\n context = MigrationContext.configure(conn)\n op = Operations(context)\n\n # Find the list of migrations to run\n config = Config()\n config.set_main_option(\"script_location\", migrations_path)\n config.set_main_option(\"version_locations\", migrations_path)\n script = ScriptDirectory.from_config(config)\n\n # Choose base revision for plugin upgrade\n # \"current\" points to the current plugin version stored in config\n # None represents the absolute base layer (e.g. first installation)\n if lower == \"current\":\n lower = get_config(plugin_name + \"_alembic_version\")\n\n # Do we upgrade to head or to a specific revision\n if revision is None:\n upper = script.get_current_head()\n else:\n upper = revision\n\n # Apply from lower to upper\n revs = list(script.iterate_revisions(lower=lower, upper=upper))\n revs.reverse()\n\n try:\n for r in revs:\n with context.begin_transaction():\n r.module.upgrade(op=op)\n finally:\n conn.close()\n\n # Set the new latest revision\n set_config(plugin_name + \"_alembic_version\", upper)\n", "path": "CTFd/plugins/migrations.py"}]} | 1,409 | 123 |
gh_patches_debug_23213 | rasdani/github-patches | git_diff | microsoft__lisa-1567 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Command not found (PATH does not contain /usr/sbin)
Getting errors when using LISAv3 to deploy and test CentOS 7_9 on Azure
`[ERROR] lisa.env[generated_0].node[0].cmd[7289] not found command: Command not found: modinfo. Check that modinfo is installed and on $PATH`
`[ERROR] lisa.env[generated_0].node[0].cmd[1038] not found command: Command not found: waagent. Check that waagent is installed and on $PATH`
`[ERROR] lisa.env[generated_0].node[0].cmd[8629] not found command: Command not found: lsmod. Check that lsmod is installed and on $PATH`
SSHing into the node confirms that all three of these commands are present and runnable on the node.
The error about modinfo missing appears to occur before any tests start running. These errors do not occur when deploying and testing Ubuntu 18.04-LTS.
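A hedged guess at a fix: resolve these tools through root's environment, whose PATH typically includes `/sbin` and `/usr/sbin` on CentOS, by running them with sudo. A sketch of the call in `Modinfo.get_info` (fragment, assuming LISA's `Tool.run` accepts `sudo=`):

```python
result = self.run(
    mod_name,
    sudo=True,  # root's PATH resolves /usr/sbin/modinfo on CentOS
    force_run=force_run,
    no_info_log=no_info_log,
    no_error_log=no_error_log,
)
```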
</issue>
<code>
[start of lisa/tools/modinfo.py]
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 import re
5 from typing import Any
6
7 from lisa.executable import Tool
8 from lisa.util import find_patterns_in_lines
9
10
11 class Modinfo(Tool):
12 __version_pattern = re.compile(r"^version:[ \t]*([^ \n]*)")
13
14 @property
15 def command(self) -> str:
16 return self._command
17
18 def _check_exists(self) -> bool:
19 return True
20
21 def _initialize(self, *args: Any, **kwargs: Any) -> None:
22 self._command = "modinfo"
23
24 def get_info(
25 self,
26 mod_name: str,
27 force_run: bool = False,
28 no_info_log: bool = True,
29 no_error_log: bool = True,
30 ) -> str:
31 result = self.run(
32 mod_name,
33 force_run=force_run,
34 no_info_log=no_info_log,
35 no_error_log=no_error_log,
36 )
37 if result.exit_code != 0:
38 # CentOS may not include the path when started,
39 # specify path and try again.
40 self._command = "/usr/sbin/modinfo"
41 result = self.run(
42 mod_name,
43 force_run=force_run,
44 no_info_log=no_info_log,
45 no_error_log=no_error_log,
46 )
47 return result.stdout
48
49 def get_version(
50 self,
51 mod_name: str,
52 force_run: bool = False,
53 no_info_log: bool = True,
54 no_error_log: bool = True,
55 ) -> str:
56 output = self.get_info(
57 mod_name=mod_name,
58 force_run=force_run,
59 no_info_log=no_info_log,
60 no_error_log=no_error_log,
61 )
62 found_version = find_patterns_in_lines(output, [self.__version_pattern])
63 return found_version[0][0] if found_version[0] else ""
64
[end of lisa/tools/modinfo.py]
[start of lisa/tools/lsmod.py]
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 import re
5 from typing import Any
6
7 from lisa.executable import Tool
8 from lisa.util import LisaException, find_patterns_in_lines
9
10
11 class Lsmod(Tool):
12 # The output of lsmod command is in below format -
13 # Module Size Used by
14 # fuse 52176 3
15 # cryptd 14125 0
16 # aes_generic 32970 1 aes_i586
17 __output_pattern = re.compile(
18 r"^(?P<name>[^\s]+)\s+(?P<size>[^\s]+)\s+(?P<usedby>.*)?$", re.MULTILINE
19 )
20
21 @property
22 def command(self) -> str:
23 return self._command
24
25 def _check_exists(self) -> bool:
26 return True
27
28 def _initialize(self, *args: Any, **kwargs: Any) -> None:
29 self._command = "lsmod"
30
31 def module_exists(
32 self,
33 mod_name: str = "",
34 force_run: bool = False,
35 no_info_log: bool = True,
36 no_error_log: bool = True,
37 ) -> bool:
38 result = self.run(
39 force_run=force_run,
40 no_info_log=no_info_log,
41 no_error_log=no_error_log,
42 )
43 if result.exit_code != 0:
44 raise LisaException(
45 f"{self._command} command got non-zero exit code: {result.exit_code}"
46 )
47
48 module_info = find_patterns_in_lines(result.stdout, [self.__output_pattern])
49 if any(mod_name in info for sublist in module_info for info in sublist):
50 return True
51
52 return False
53
[end of lisa/tools/lsmod.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lisa/tools/lsmod.py b/lisa/tools/lsmod.py
--- a/lisa/tools/lsmod.py
+++ b/lisa/tools/lsmod.py
@@ -36,6 +36,7 @@
no_error_log: bool = True,
) -> bool:
result = self.run(
+ sudo=True,
force_run=force_run,
no_info_log=no_info_log,
no_error_log=no_error_log,
diff --git a/lisa/tools/modinfo.py b/lisa/tools/modinfo.py
--- a/lisa/tools/modinfo.py
+++ b/lisa/tools/modinfo.py
@@ -30,20 +30,13 @@
) -> str:
result = self.run(
mod_name,
+ sudo=True,
force_run=force_run,
no_info_log=no_info_log,
no_error_log=no_error_log,
+ expected_exit_code=0,
+ expected_exit_code_failure_message=f"Modinfo failed for module {mod_name}",
)
- if result.exit_code != 0:
- # CentOS may not include the path when started,
- # specify path and try again.
- self._command = "/usr/sbin/modinfo"
- result = self.run(
- mod_name,
- force_run=force_run,
- no_info_log=no_info_log,
- no_error_log=no_error_log,
- )
return result.stdout
def get_version(
| {"golden_diff": "diff --git a/lisa/tools/lsmod.py b/lisa/tools/lsmod.py\n--- a/lisa/tools/lsmod.py\n+++ b/lisa/tools/lsmod.py\n@@ -36,6 +36,7 @@\n no_error_log: bool = True,\n ) -> bool:\n result = self.run(\n+ sudo=True,\n force_run=force_run,\n no_info_log=no_info_log,\n no_error_log=no_error_log,\ndiff --git a/lisa/tools/modinfo.py b/lisa/tools/modinfo.py\n--- a/lisa/tools/modinfo.py\n+++ b/lisa/tools/modinfo.py\n@@ -30,20 +30,13 @@\n ) -> str:\n result = self.run(\n mod_name,\n+ sudo=True,\n force_run=force_run,\n no_info_log=no_info_log,\n no_error_log=no_error_log,\n+ expected_exit_code=0,\n+ expected_exit_code_failure_message=f\"Modinfo failed for module {mod_name}\",\n )\n- if result.exit_code != 0:\n- # CentOS may not include the path when started,\n- # specify path and try again.\n- self._command = \"/usr/sbin/modinfo\"\n- result = self.run(\n- mod_name,\n- force_run=force_run,\n- no_info_log=no_info_log,\n- no_error_log=no_error_log,\n- )\n return result.stdout\n \n def get_version(\n", "issue": "Command not found (PATH does not contain /usr/sbin)\nGetting errors when using LISAv3 to deploy and test CentOS 7_9 on Azure\r\n\r\n`[ERROR] lisa.env[generated_0].node[0].cmd[7289] not found command: Command not found: modinfo. Check that modinfo is installed and on $PATH`\r\n\r\n`[ERROR] lisa.env[generated_0].node[0].cmd[1038] not found command: Command not found: waagent. Check that waagent is installed and on $PATH`\r\n\r\n`[ERROR] lisa.env[generated_0].node[0].cmd[8629] not found command: Command not found: lsmod. Check that lsmod is installed and on $PATH`\r\n\r\nSSHing into the node confirms that all three of these commands are present and runnable on the node.\r\n\r\nThe error about modinfo missing appears to occur before any tests start running. 
These errors do not occur when deploying and testing Ubuntu 18.04-LTS.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nimport re\nfrom typing import Any\n\nfrom lisa.executable import Tool\nfrom lisa.util import find_patterns_in_lines\n\n\nclass Modinfo(Tool):\n __version_pattern = re.compile(r\"^version:[ \\t]*([^ \\n]*)\")\n\n @property\n def command(self) -> str:\n return self._command\n\n def _check_exists(self) -> bool:\n return True\n\n def _initialize(self, *args: Any, **kwargs: Any) -> None:\n self._command = \"modinfo\"\n\n def get_info(\n self,\n mod_name: str,\n force_run: bool = False,\n no_info_log: bool = True,\n no_error_log: bool = True,\n ) -> str:\n result = self.run(\n mod_name,\n force_run=force_run,\n no_info_log=no_info_log,\n no_error_log=no_error_log,\n )\n if result.exit_code != 0:\n # CentOS may not include the path when started,\n # specify path and try again.\n self._command = \"/usr/sbin/modinfo\"\n result = self.run(\n mod_name,\n force_run=force_run,\n no_info_log=no_info_log,\n no_error_log=no_error_log,\n )\n return result.stdout\n\n def get_version(\n self,\n mod_name: str,\n force_run: bool = False,\n no_info_log: bool = True,\n no_error_log: bool = True,\n ) -> str:\n output = self.get_info(\n mod_name=mod_name,\n force_run=force_run,\n no_info_log=no_info_log,\n no_error_log=no_error_log,\n )\n found_version = find_patterns_in_lines(output, [self.__version_pattern])\n return found_version[0][0] if found_version[0] else \"\"\n", "path": "lisa/tools/modinfo.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nimport re\nfrom typing import Any\n\nfrom lisa.executable import Tool\nfrom lisa.util import LisaException, find_patterns_in_lines\n\n\nclass Lsmod(Tool):\n # The output of lsmod command is in below format -\n # Module Size Used by\n # fuse 52176 3\n # cryptd 14125 0\n # aes_generic 32970 1 aes_i586\n __output_pattern = re.compile(\n r\"^(?P<name>[^\\s]+)\\s+(?P<size>[^\\s]+)\\s+(?P<usedby>.*)?$\", re.MULTILINE\n )\n\n @property\n def command(self) -> str:\n return self._command\n\n def _check_exists(self) -> bool:\n return True\n\n def _initialize(self, *args: Any, **kwargs: Any) -> None:\n self._command = \"lsmod\"\n\n def module_exists(\n self,\n mod_name: str = \"\",\n force_run: bool = False,\n no_info_log: bool = True,\n no_error_log: bool = True,\n ) -> bool:\n result = self.run(\n force_run=force_run,\n no_info_log=no_info_log,\n no_error_log=no_error_log,\n )\n if result.exit_code != 0:\n raise LisaException(\n f\"{self._command} command got non-zero exit code: {result.exit_code}\"\n )\n\n module_info = find_patterns_in_lines(result.stdout, [self.__output_pattern])\n if any(mod_name in info for sublist in module_info for info in sublist):\n return True\n\n return False\n", "path": "lisa/tools/lsmod.py"}]} | 1,807 | 316 |
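The essential change in the golden diff above is to run the probe with `sudo` (so `/usr/sbin` binaries such as `modinfo` and `lsmod` resolve on CentOS-like distros) and to assert the exit code instead of retrying a hard-coded path. A condensed sketch of `Modinfo.get_info` after the fix, assuming LISA's `Tool.run` API as shown in the record:

```python
def get_info(self, mod_name, force_run=False, no_info_log=True, no_error_log=True):
    # sudo puts /usr/sbin on PATH, and expected_exit_code turns a silent
    # fallback into a clear failure message.
    result = self.run(
        mod_name,
        sudo=True,
        force_run=force_run,
        no_info_log=no_info_log,
        no_error_log=no_error_log,
        expected_exit_code=0,
        expected_exit_code_failure_message=f"Modinfo failed for module {mod_name}",
    )
    return result.stdout
```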
gh_patches_debug_26655 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6989 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Existing "inpainting" gallery example could use a better (more specific) title.
Creating this issue so we don't lose track of what's been discussed in the conversation. _Originally posted by @lagru in https://github.com/scikit-image/scikit-image/pull/6853#discussion_r1149741067_
> @mkcor, just wondering how this relates to [our existing inpainting example](https://scikit-image.org/docs/dev/auto_examples/filters/plot_inpaint.html#sphx-glr-auto-examples-filters-plot-inpaint-py). I am assuming that the main benefit here is that it's a real-world use case?
[...]
> Which prompts the idea that we should update the title of the existing example, so it's less generic than just "inpainting."
</issue>
<code>
[start of doc/examples/filters/plot_inpaint.py]
1 """
2 ===========
3 Inpainting
4 ===========
5 Inpainting [1]_ is the process of reconstructing lost or deteriorated
6 parts of images and videos.
7
8 The reconstruction is supposed to be performed in fully automatic way by
9 exploiting the information presented in non-damaged regions.
10
11 In this example, we show how the masked pixels get inpainted by
12 inpainting algorithm based on 'biharmonic equation'-assumption [2]_ [3]_ [4]_.
13
14 .. [1] Wikipedia. Inpainting
15 https://en.wikipedia.org/wiki/Inpainting
16 .. [2] Wikipedia. Biharmonic equation
17 https://en.wikipedia.org/wiki/Biharmonic_equation
18 .. [3] S.B.Damelin and N.S.Hoang. "On Surface Completion and Image
19 Inpainting by Biharmonic Functions: Numerical Aspects",
20 International Journal of Mathematics and Mathematical Sciences,
21 Vol. 2018, Article ID 3950312
22 :DOI:`10.1155/2018/3950312`
23 .. [4] C. K. Chui and H. N. Mhaskar, MRA Contextual-Recovery Extension of
24 Smooth Functions on Manifolds, Appl. and Comp. Harmonic Anal.,
25 28 (2010), 104-113,
26 :DOI:`10.1016/j.acha.2009.04.004`
27 """
28
29 import numpy as np
30 import matplotlib.pyplot as plt
31
32 from skimage import data
33 from skimage.morphology import disk, binary_dilation
34 from skimage.restoration import inpaint
35
36 image_orig = data.astronaut()
37
38 # Create mask with six block defect regions
39 mask = np.zeros(image_orig.shape[:-1], dtype=bool)
40 mask[20:60, 0:20] = 1
41 mask[160:180, 70:155] = 1
42 mask[30:60, 170:195] = 1
43 mask[-60:-30, 170:195] = 1
44 mask[-180:-160, 70:155] = 1
45 mask[-60:-20, 0:20] = 1
46
47 # add a few long, narrow defects
48 mask[200:205, -200:] = 1
49 mask[150:255, 20:23] = 1
50 mask[365:368, 60:130] = 1
51
52 # add randomly positioned small point-like defects
53 rstate = np.random.default_rng(0)
54 for radius in [0, 2, 4]:
55 # larger defects are less common
56 thresh = 3 + 0.25 * radius # make larger defects less common
57 tmp_mask = rstate.standard_normal(image_orig.shape[:-1]) > thresh
58 if radius > 0:
59 tmp_mask = binary_dilation(tmp_mask, disk(radius, dtype=bool))
60 mask[tmp_mask] = 1
61
62 # Apply defect mask to the image over the same region in each color channel
63 image_defect = image_orig * ~mask[..., np.newaxis]
64
65 image_result = inpaint.inpaint_biharmonic(image_defect, mask, channel_axis=-1)
66
67 fig, axes = plt.subplots(ncols=2, nrows=2)
68 ax = axes.ravel()
69
70 ax[0].set_title('Original image')
71 ax[0].imshow(image_orig)
72
73 ax[1].set_title('Mask')
74 ax[1].imshow(mask, cmap=plt.cm.gray)
75
76 ax[2].set_title('Defected image')
77 ax[2].imshow(image_defect)
78
79 ax[3].set_title('Inpainted image')
80 ax[3].imshow(image_result)
81
82 for a in ax:
83 a.axis('off')
84
85 fig.tight_layout()
86 plt.show()
87
[end of doc/examples/filters/plot_inpaint.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/examples/filters/plot_inpaint.py b/doc/examples/filters/plot_inpaint.py
--- a/doc/examples/filters/plot_inpaint.py
+++ b/doc/examples/filters/plot_inpaint.py
@@ -1,15 +1,16 @@
"""
-===========
-Inpainting
-===========
+===============================
+Fill in defects with inpainting
+===============================
+
Inpainting [1]_ is the process of reconstructing lost or deteriorated
parts of images and videos.
-The reconstruction is supposed to be performed in fully automatic way by
-exploiting the information presented in non-damaged regions.
+The reconstruction (restoration) is performed in an automatic way by
+exploiting the information present in non-damaged regions.
-In this example, we show how the masked pixels get inpainted by
-inpainting algorithm based on 'biharmonic equation'-assumption [2]_ [3]_ [4]_.
+In this example, we show how the masked pixels get inpainted using an
+inpainting algorithm based on the biharmonic equation [2]_ [3]_ [4]_.
.. [1] Wikipedia. Inpainting
https://en.wikipedia.org/wiki/Inpainting
@@ -44,12 +45,12 @@
mask[-180:-160, 70:155] = 1
mask[-60:-20, 0:20] = 1
-# add a few long, narrow defects
+# Add a few long, narrow defects
mask[200:205, -200:] = 1
mask[150:255, 20:23] = 1
mask[365:368, 60:130] = 1
-# add randomly positioned small point-like defects
+# Add randomly positioned small point-like defects
rstate = np.random.default_rng(0)
for radius in [0, 2, 4]:
# larger defects are less common
| {"golden_diff": "diff --git a/doc/examples/filters/plot_inpaint.py b/doc/examples/filters/plot_inpaint.py\n--- a/doc/examples/filters/plot_inpaint.py\n+++ b/doc/examples/filters/plot_inpaint.py\n@@ -1,15 +1,16 @@\n \"\"\"\n-===========\n-Inpainting\n-===========\n+===============================\n+Fill in defects with inpainting\n+===============================\n+\n Inpainting [1]_ is the process of reconstructing lost or deteriorated\n parts of images and videos.\n \n-The reconstruction is supposed to be performed in fully automatic way by\n-exploiting the information presented in non-damaged regions.\n+The reconstruction (restoration) is performed in an automatic way by\n+exploiting the information present in non-damaged regions.\n \n-In this example, we show how the masked pixels get inpainted by\n-inpainting algorithm based on 'biharmonic equation'-assumption [2]_ [3]_ [4]_.\n+In this example, we show how the masked pixels get inpainted using an\n+inpainting algorithm based on the biharmonic equation [2]_ [3]_ [4]_.\n \n .. [1] Wikipedia. Inpainting\n https://en.wikipedia.org/wiki/Inpainting\n@@ -44,12 +45,12 @@\n mask[-180:-160, 70:155] = 1\n mask[-60:-20, 0:20] = 1\n \n-# add a few long, narrow defects\n+# Add a few long, narrow defects\n mask[200:205, -200:] = 1\n mask[150:255, 20:23] = 1\n mask[365:368, 60:130] = 1\n \n-# add randomly positioned small point-like defects\n+# Add randomly positioned small point-like defects\n rstate = np.random.default_rng(0)\n for radius in [0, 2, 4]:\n # larger defects are less common\n", "issue": "Existing \"inpainting\" gallery example could use a better (more specific) title.\nCreating this issue so we don't lose track of what's been discussed in the conversation _Originally posted by @lagru in https://github.com/scikit-image/scikit-image/pull/6853#discussion_r1149741067_\r\n\r\n> @mkcor, just wondering how this relates to [our existing inpainting example ](https://scikit-image.org/docs/dev/auto_examples/filters/plot_inpaint.html#sphx-glr-auto-examples-filters-plot-inpaint-py). I am assuming that the main benefit here is that it's a real world use case?\r\n\r\n[...]\r\n\r\n> Which prompts the idea that we should update the title of the existing example, so it's less generic than just \"inpainting.\"\n", "before_files": [{"content": "\"\"\"\n===========\nInpainting\n===========\nInpainting [1]_ is the process of reconstructing lost or deteriorated\nparts of images and videos.\n\nThe reconstruction is supposed to be performed in fully automatic way by\nexploiting the information presented in non-damaged regions.\n\nIn this example, we show how the masked pixels get inpainted by\ninpainting algorithm based on 'biharmonic equation'-assumption [2]_ [3]_ [4]_.\n\n.. [1] Wikipedia. Inpainting\n https://en.wikipedia.org/wiki/Inpainting\n.. [2] Wikipedia. Biharmonic equation\n https://en.wikipedia.org/wiki/Biharmonic_equation\n.. [3] S.B.Damelin and N.S.Hoang. \"On Surface Completion and Image\n Inpainting by Biharmonic Functions: Numerical Aspects\",\n International Journal of Mathematics and Mathematical Sciences,\n Vol. 2018, Article ID 3950312\n :DOI:`10.1155/2018/3950312`\n.. [4] C. K. Chui and H. N. Mhaskar, MRA Contextual-Recovery Extension of\n Smooth Functions on Manifolds, Appl. and Comp. 
Harmonic Anal.,\n 28 (2010), 104-113,\n :DOI:`10.1016/j.acha.2009.04.004`\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.morphology import disk, binary_dilation\nfrom skimage.restoration import inpaint\n\nimage_orig = data.astronaut()\n\n# Create mask with six block defect regions\nmask = np.zeros(image_orig.shape[:-1], dtype=bool)\nmask[20:60, 0:20] = 1\nmask[160:180, 70:155] = 1\nmask[30:60, 170:195] = 1\nmask[-60:-30, 170:195] = 1\nmask[-180:-160, 70:155] = 1\nmask[-60:-20, 0:20] = 1\n\n# add a few long, narrow defects\nmask[200:205, -200:] = 1\nmask[150:255, 20:23] = 1\nmask[365:368, 60:130] = 1\n\n# add randomly positioned small point-like defects\nrstate = np.random.default_rng(0)\nfor radius in [0, 2, 4]:\n # larger defects are less common\n thresh = 3 + 0.25 * radius # make larger defects less common\n tmp_mask = rstate.standard_normal(image_orig.shape[:-1]) > thresh\n if radius > 0:\n tmp_mask = binary_dilation(tmp_mask, disk(radius, dtype=bool))\n mask[tmp_mask] = 1\n\n# Apply defect mask to the image over the same region in each color channel\nimage_defect = image_orig * ~mask[..., np.newaxis]\n\nimage_result = inpaint.inpaint_biharmonic(image_defect, mask, channel_axis=-1)\n\nfig, axes = plt.subplots(ncols=2, nrows=2)\nax = axes.ravel()\n\nax[0].set_title('Original image')\nax[0].imshow(image_orig)\n\nax[1].set_title('Mask')\nax[1].imshow(mask, cmap=plt.cm.gray)\n\nax[2].set_title('Defected image')\nax[2].imshow(image_defect)\n\nax[3].set_title('Inpainted image')\nax[3].imshow(image_result)\n\nfor a in ax:\n a.axis('off')\n\nfig.tight_layout()\nplt.show()\n", "path": "doc/examples/filters/plot_inpaint.py"}]} | 1,770 | 455 |
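Because sphinx-gallery takes an example's page title from the first reST heading of its module docstring, the rename in the golden diff is purely a docstring edit. The first lines of `plot_inpaint.py` after the change:

```python
"""
===============================
Fill in defects with inpainting
===============================

Inpainting [1]_ is the process of reconstructing lost or deteriorated
parts of images and videos.
"""
```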
gh_patches_debug_21673 | rasdani/github-patches | git_diff | ivy-llc__ivy-13280 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
unwrap
</issue>
<code>
[start of ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py]
[end of ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py b/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py
--- a/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py
+++ b/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py
@@ -0,0 +1,48 @@
+# global
+import ivy
+
+# local
+from ivy.functional.frontends.numpy.func_wrapper import (
+ to_ivy_arrays_and_back,
+ handle_numpy_dtype,
+ from_zero_dim_arrays_to_scalar,
+ handle_numpy_out,
+)
+
+
+
+@handle_numpy_out
+@handle_numpy_dtype
+@to_ivy_arrays_and_back
+@from_zero_dim_arrays_to_scalar
+def unwrap(p, discont=None, axis=-1, *, period=2*pi):
+ p = ivy.Array.asarray(p)
+ nd = p.ndim
+ dd = ivy.diff(p, axis=axis)
+ if discont is None:
+ discont = period/2
+ slice1 = [ivy.slice(None, None)]*nd # full slices
+ slice1[axis] = ivy.slice(1, None)
+ slice1 = ivy.tuple(slice1)
+ dtype = ivy.result_type(dd, period)
+ if ivy.issubdtype(dtype, ivy.integer):
+ interval_high, rem = ivy.divmod(period, 2)
+ boundary_ambiguous = rem == 0
+ else:
+ interval_high = period / 2
+ boundary_ambiguous = True
+ interval_low = -interval_high
+ ddmod = ivy.mod(dd - interval_low, period) + interval_low
+ if boundary_ambiguous:
+ ivy.copyto(ddmod, interval_high,
+ where=(ddmod == interval_low) & (dd > 0))
+ ph_correct = ddmod - dd
+ ivy.copyto(ph_correct, 0, where=ivy.abs(dd) < discont)
+ up = ivy.array(p, copy=True, dtype=dtype)
+ up[slice1] = p[slice1] + ph_correct.cumsum(axis)
+ return up
+
+my_list = [24,8,3,4,34,8]
+ans = unwrap(my_list)
+print("After the np.unwrap()")
+print(ans)
\ No newline at end of file
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py b/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py\n--- a/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py\n+++ b/ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py\n@@ -0,0 +1,48 @@\n+# global\n+import ivy\n+\n+# local\n+from ivy.functional.frontends.numpy.func_wrapper import (\n+ to_ivy_arrays_and_back,\n+ handle_numpy_dtype,\n+ from_zero_dim_arrays_to_scalar,\n+ handle_numpy_out,\n+)\n+\n+\n+\n+@handle_numpy_out\n+@handle_numpy_dtype\n+@to_ivy_arrays_and_back\n+@from_zero_dim_arrays_to_scalar\n+def unwrap(p, discont=None, axis=-1, *, period=2*pi):\n+ p = ivy.Array.asarray(p)\n+ nd = p.ndim\n+ dd = ivy.diff(p, axis=axis)\n+ if discont is None:\n+ discont = period/2\n+ slice1 = [ivy.slice(None, None)]*nd # full slices\n+ slice1[axis] = ivy.slice(1, None)\n+ slice1 = ivy.tuple(slice1)\n+ dtype = ivy.result_type(dd, period)\n+ if ivy.issubdtype(dtype, ivy.integer):\n+ interval_high, rem = ivy.divmod(period, 2)\n+ boundary_ambiguous = rem == 0\n+ else:\n+ interval_high = period / 2\n+ boundary_ambiguous = True\n+ interval_low = -interval_high\n+ ddmod = ivy.mod(dd - interval_low, period) + interval_low\n+ if boundary_ambiguous:\n+ ivy.copyto(ddmod, interval_high,\n+ where=(ddmod == interval_low) & (dd > 0))\n+ ph_correct = ddmod - dd\n+ ivy.copyto(ph_correct, 0, where=ivy.abs(dd) < discont)\n+ up = ivy.array(p, copy=True, dtype=dtype)\n+ up[slice1] = p[slice1] + ph_correct.cumsum(axis)\n+ return up\n+\n+my_list = [24,8,3,4,34,8]\n+ans = unwrap(my_list)\n+print(\"After the np.unwrap()\")\n+print(ans)\n\\ No newline at end of file\n", "issue": "unwrap\n\n", "before_files": [{"content": "", "path": "ivy/functional/frontends/numpy/mathematical_functions/other_special_functions.py"}]} | 561 | 553 |
gh_patches_debug_25191 | rasdani/github-patches | git_diff | scipy__scipy-6119 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DeprecationWarnings in stats on python 3.5
```
/home/br/repos/scipy/build/testenv/lib/python3.5/site-packages/scipy/stats/tests/test_stats.py:101: DeprecationWarning: Please use assertRaisesRegex instead.
```
Apparently, `assertRaisesRegexp` was renamed to `assertRaisesRegex`: https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaisesRegexp
</issue>
<code>
[start of scipy/_lib/_numpy_compat.py]
1 """Functions copypasted from newer versions of numpy.
2
3 """
4 from __future__ import division, print_function, absolute_import
5
6 import warnings
7
8 import numpy as np
9
10 from scipy._lib._version import NumpyVersion
11
12 if NumpyVersion(np.__version__) > '1.7.0.dev':
13 _assert_warns = np.testing.assert_warns
14 else:
15 def _assert_warns(warning_class, func, *args, **kw):
16 r"""
17 Fail unless the given callable throws the specified warning.
18
19 This definition is copypasted from numpy 1.9.0.dev.
20 The version in earlier numpy returns None.
21
22 Parameters
23 ----------
24 warning_class : class
25 The class defining the warning that `func` is expected to throw.
26 func : callable
27 The callable to test.
28 *args : Arguments
29 Arguments passed to `func`.
30 **kwargs : Kwargs
31 Keyword arguments passed to `func`.
32
33 Returns
34 -------
35 The value returned by `func`.
36
37 """
38 with warnings.catch_warnings(record=True) as l:
39 warnings.simplefilter('always')
40 result = func(*args, **kw)
41 if not len(l) > 0:
42 raise AssertionError("No warning raised when calling %s"
43 % func.__name__)
44 if not l[0].category is warning_class:
45 raise AssertionError("First warning for %s is not a "
46 "%s( is %s)" % (func.__name__, warning_class, l[0]))
47 return result
48
[end of scipy/_lib/_numpy_compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scipy/_lib/_numpy_compat.py b/scipy/_lib/_numpy_compat.py
--- a/scipy/_lib/_numpy_compat.py
+++ b/scipy/_lib/_numpy_compat.py
@@ -4,8 +4,10 @@
from __future__ import division, print_function, absolute_import
import warnings
+import sys
import numpy as np
+from numpy.testing.nosetester import import_nose
from scipy._lib._version import NumpyVersion
@@ -45,3 +47,28 @@
raise AssertionError("First warning for %s is not a "
"%s( is %s)" % (func.__name__, warning_class, l[0]))
return result
+
+
+def assert_raises_regex(exception_class, expected_regexp,
+ callable_obj=None, *args, **kwargs):
+ """
+ Fail unless an exception of class exception_class and with message that
+ matches expected_regexp is thrown by callable when invoked with arguments
+ args and keyword arguments kwargs.
+ Name of this function adheres to Python 3.2+ reference, but should work in
+ all versions down to 2.6.
+ Notes
+ -----
+ .. versionadded:: 1.8.0
+ """
+ __tracebackhide__ = True # Hide traceback for py.test
+ nose = import_nose()
+
+ if sys.version_info.major >= 3:
+ funcname = nose.tools.assert_raises_regex
+ else:
+ # Only present in Python 2.7, missing from unittest in 2.6
+ funcname = nose.tools.assert_raises_regexp
+
+ return funcname(exception_class, expected_regexp, callable_obj,
+ *args, **kwargs)
| {"golden_diff": "diff --git a/scipy/_lib/_numpy_compat.py b/scipy/_lib/_numpy_compat.py\n--- a/scipy/_lib/_numpy_compat.py\n+++ b/scipy/_lib/_numpy_compat.py\n@@ -4,8 +4,10 @@\n from __future__ import division, print_function, absolute_import\n \n import warnings\n+import sys\n \n import numpy as np\n+from numpy.testing.nosetester import import_nose\n \n from scipy._lib._version import NumpyVersion\n \n@@ -45,3 +47,28 @@\n raise AssertionError(\"First warning for %s is not a \"\n \"%s( is %s)\" % (func.__name__, warning_class, l[0]))\n return result\n+\n+\n+def assert_raises_regex(exception_class, expected_regexp,\n+ callable_obj=None, *args, **kwargs):\n+ \"\"\"\n+ Fail unless an exception of class exception_class and with message that\n+ matches expected_regexp is thrown by callable when invoked with arguments\n+ args and keyword arguments kwargs.\n+ Name of this function adheres to Python 3.2+ reference, but should work in\n+ all versions down to 2.6.\n+ Notes\n+ -----\n+ .. versionadded:: 1.8.0\n+ \"\"\"\n+ __tracebackhide__ = True # Hide traceback for py.test\n+ nose = import_nose()\n+\n+ if sys.version_info.major >= 3:\n+ funcname = nose.tools.assert_raises_regex\n+ else:\n+ # Only present in Python 2.7, missing from unittest in 2.6\n+ funcname = nose.tools.assert_raises_regexp\n+\n+ return funcname(exception_class, expected_regexp, callable_obj,\n+ *args, **kwargs)\n", "issue": "DeprecationWarnings in stats on python 3.5\n```\n/home/br/repos/scipy/build/testenv/lib/python3.5/site-packages/scipy/stats/tests/test_stats.py:101: DeprecationWarning: Please use assertRaisesRegex instead.\n```\n\nApparently, `assertRaisesRegexp` was renamed to `assertRaisesRegex`: https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaisesRegexp\n\n", "before_files": [{"content": "\"\"\"Functions copypasted from newer versions of numpy.\n\n\"\"\"\nfrom __future__ import division, print_function, absolute_import\n\nimport warnings\n\nimport numpy as np\n\nfrom scipy._lib._version import NumpyVersion\n\nif NumpyVersion(np.__version__) > '1.7.0.dev':\n _assert_warns = np.testing.assert_warns\nelse:\n def _assert_warns(warning_class, func, *args, **kw):\n r\"\"\"\n Fail unless the given callable throws the specified warning.\n\n This definition is copypasted from numpy 1.9.0.dev.\n The version in earlier numpy returns None.\n\n Parameters\n ----------\n warning_class : class\n The class defining the warning that `func` is expected to throw.\n func : callable\n The callable to test.\n *args : Arguments\n Arguments passed to `func`.\n **kwargs : Kwargs\n Keyword arguments passed to `func`.\n\n Returns\n -------\n The value returned by `func`.\n\n \"\"\"\n with warnings.catch_warnings(record=True) as l:\n warnings.simplefilter('always')\n result = func(*args, **kw)\n if not len(l) > 0:\n raise AssertionError(\"No warning raised when calling %s\"\n % func.__name__)\n if not l[0].category is warning_class:\n raise AssertionError(\"First warning for %s is not a \"\n \"%s( is %s)\" % (func.__name__, warning_class, l[0]))\n return result\n", "path": "scipy/_lib/_numpy_compat.py"}]} | 1,047 | 388 |
gh_patches_debug_34963 | rasdani/github-patches | git_diff | adfinis__timed-backend-925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug(auth): requests to the api with an invalid token receive a response status 500 instead of 401
</issue>
<code>
[start of timed/authentication.py]
1 import base64
2 import functools
3 import hashlib
4
5 import requests
6 from django.conf import settings
7 from django.core.cache import cache
8 from django.core.exceptions import SuspiciousOperation
9 from django.utils.encoding import force_bytes
10 from mozilla_django_oidc.auth import LOGGER, OIDCAuthenticationBackend
11
12
13 class TimedOIDCAuthenticationBackend(OIDCAuthenticationBackend):
14 def get_introspection(self, access_token, id_token, payload):
15 """Return user details dictionary."""
16
17 basic = base64.b64encode(
18 f"{settings.OIDC_RP_INTROSPECT_CLIENT_ID}:{settings.OIDC_RP_INTROSPECT_CLIENT_SECRET}".encode(
19 "utf-8"
20 )
21 ).decode()
22 headers = {
23 "Authorization": f"Basic {basic}",
24 "Content-Type": "application/x-www-form-urlencoded",
25 }
26 response = requests.post(
27 settings.OIDC_OP_INTROSPECT_ENDPOINT,
28 verify=settings.OIDC_VERIFY_SSL,
29 headers=headers,
30 data={"token": access_token},
31 )
32 response.raise_for_status()
33 return response.json()
34
35 def get_userinfo_or_introspection(self, access_token):
36 try:
37 claims = self.cached_request(
38 self.get_userinfo, access_token, "auth.userinfo"
39 )
40 except requests.HTTPError as e:
41 if not (
42 e.response.status_code in [401, 403] and settings.OIDC_CHECK_INTROSPECT
43 ):
44 raise e
45
46 # check introspection if userinfo fails (confidental client)
47 claims = self.cached_request(
48 self.get_introspection, access_token, "auth.introspection"
49 )
50 if "client_id" not in claims:
51 raise SuspiciousOperation("client_id not present in introspection")
52
53 return claims
54
55 def get_or_create_user(self, access_token, id_token, payload):
56 """Verify claims and return user, otherwise raise an Exception."""
57
58 claims = self.get_userinfo_or_introspection(access_token)
59
60 users = self.filter_users_by_claims(claims)
61
62 if len(users) == 1:
63 user = users.get()
64 self.update_user_from_claims(user, claims)
65 return user
66 elif settings.OIDC_CREATE_USER:
67 return self.create_user(claims)
68 else:
69 LOGGER.debug(
70 "Login failed: No user with username %s found, and "
71 "OIDC_CREATE_USER is False",
72 self.get_username(claims),
73 )
74 return None
75
76 def update_user_from_claims(self, user, claims):
77 user.email = claims.get(settings.OIDC_EMAIL_CLAIM, "")
78 user.first_name = claims.get(settings.OIDC_FIRSTNAME_CLAIM, "")
79 user.last_name = claims.get(settings.OIDC_LASTNAME_CLAIM, "")
80 user.save()
81
82 def filter_users_by_claims(self, claims):
83 username = self.get_username(claims)
84 return self.UserModel.objects.filter(username__iexact=username)
85
86 def cached_request(self, method, token, cache_prefix):
87 token_hash = hashlib.sha256(force_bytes(token)).hexdigest()
88
89 func = functools.partial(method, token, None, None)
90
91 return cache.get_or_set(
92 f"{cache_prefix}.{token_hash}",
93 func,
94 timeout=settings.OIDC_BEARER_TOKEN_REVALIDATION_TIME,
95 )
96
97 def create_user(self, claims):
98 """Return object for a newly created user account."""
99
100 username = self.get_username(claims)
101 email = claims.get(settings.OIDC_EMAIL_CLAIM, "")
102 first_name = claims.get(settings.OIDC_FIRSTNAME_CLAIM, "")
103 last_name = claims.get(settings.OIDC_LASTNAME_CLAIM, "")
104
105 return self.UserModel.objects.create(
106 username=username, email=email, first_name=first_name, last_name=last_name
107 )
108
109 def get_username(self, claims):
110 try:
111 return claims[settings.OIDC_USERNAME_CLAIM]
112 except KeyError:
113 raise SuspiciousOperation("Couldn't find username claim")
114
[end of timed/authentication.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/timed/authentication.py b/timed/authentication.py
--- a/timed/authentication.py
+++ b/timed/authentication.py
@@ -8,6 +8,7 @@
from django.core.exceptions import SuspiciousOperation
from django.utils.encoding import force_bytes
from mozilla_django_oidc.auth import LOGGER, OIDCAuthenticationBackend
+from rest_framework.exceptions import AuthenticationFailed
class TimedOIDCAuthenticationBackend(OIDCAuthenticationBackend):
@@ -37,20 +38,29 @@
claims = self.cached_request(
self.get_userinfo, access_token, "auth.userinfo"
)
+ return claims
except requests.HTTPError as e:
- if not (
- e.response.status_code in [401, 403] and settings.OIDC_CHECK_INTROSPECT
- ):
+ if e.response.status_code not in [401, 403]:
raise e
-
- # check introspection if userinfo fails (confidental client)
- claims = self.cached_request(
- self.get_introspection, access_token, "auth.introspection"
- )
- if "client_id" not in claims:
- raise SuspiciousOperation("client_id not present in introspection")
-
- return claims
+ if settings.OIDC_CHECK_INTROSPECT:
+ try:
+ # check introspection if userinfo fails (confidential client)
+ claims = self.cached_request(
+ self.get_introspection, access_token, "auth.introspection"
+ )
+ if "client_id" not in claims:
+ raise SuspiciousOperation(
+ "client_id not present in introspection"
+ )
+ return claims
+ except requests.HTTPError as e:
+ # if the authorization fails it's not a valid client or
+ # the token is expired and permission is denied.
+ # Handing on the 401 Client Error would be transformed into
+ # a 500 by Django's exception handling. But that's not what we want.
+ if e.response.status_code not in [401, 403]: # pragma: no cover
+ raise e
+ raise AuthenticationFailed()
def get_or_create_user(self, access_token, id_token, payload):
"""Verify claims and return user, otherwise raise an Exception."""
| {"golden_diff": "diff --git a/timed/authentication.py b/timed/authentication.py\n--- a/timed/authentication.py\n+++ b/timed/authentication.py\n@@ -8,6 +8,7 @@\n from django.core.exceptions import SuspiciousOperation\n from django.utils.encoding import force_bytes\n from mozilla_django_oidc.auth import LOGGER, OIDCAuthenticationBackend\n+from rest_framework.exceptions import AuthenticationFailed\n \n \n class TimedOIDCAuthenticationBackend(OIDCAuthenticationBackend):\n@@ -37,20 +38,29 @@\n claims = self.cached_request(\n self.get_userinfo, access_token, \"auth.userinfo\"\n )\n+ return claims\n except requests.HTTPError as e:\n- if not (\n- e.response.status_code in [401, 403] and settings.OIDC_CHECK_INTROSPECT\n- ):\n+ if e.response.status_code not in [401, 403]:\n raise e\n-\n- # check introspection if userinfo fails (confidental client)\n- claims = self.cached_request(\n- self.get_introspection, access_token, \"auth.introspection\"\n- )\n- if \"client_id\" not in claims:\n- raise SuspiciousOperation(\"client_id not present in introspection\")\n-\n- return claims\n+ if settings.OIDC_CHECK_INTROSPECT:\n+ try:\n+ # check introspection if userinfo fails (confidential client)\n+ claims = self.cached_request(\n+ self.get_introspection, access_token, \"auth.introspection\"\n+ )\n+ if \"client_id\" not in claims:\n+ raise SuspiciousOperation(\n+ \"client_id not present in introspection\"\n+ )\n+ return claims\n+ except requests.HTTPError as e:\n+ # if the authorization fails it's not a valid client or\n+ # the token is expired and permission is denied.\n+ # Handing on the 401 Client Error would be transformed into\n+ # a 500 by Django's exception handling. But that's not what we want.\n+ if e.response.status_code not in [401, 403]: # pragma: no cover\n+ raise e\n+ raise AuthenticationFailed()\n \n def get_or_create_user(self, access_token, id_token, payload):\n \"\"\"Verify claims and return user, otherwise raise an Exception.\"\"\"\n", "issue": "bug(auth): requests to the api with an invalid token receive a response status 500 instead of 401\n\n", "before_files": [{"content": "import base64\nimport functools\nimport hashlib\n\nimport requests\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.core.exceptions import SuspiciousOperation\nfrom django.utils.encoding import force_bytes\nfrom mozilla_django_oidc.auth import LOGGER, OIDCAuthenticationBackend\n\n\nclass TimedOIDCAuthenticationBackend(OIDCAuthenticationBackend):\n def get_introspection(self, access_token, id_token, payload):\n \"\"\"Return user details dictionary.\"\"\"\n\n basic = base64.b64encode(\n f\"{settings.OIDC_RP_INTROSPECT_CLIENT_ID}:{settings.OIDC_RP_INTROSPECT_CLIENT_SECRET}\".encode(\n \"utf-8\"\n )\n ).decode()\n headers = {\n \"Authorization\": f\"Basic {basic}\",\n \"Content-Type\": \"application/x-www-form-urlencoded\",\n }\n response = requests.post(\n settings.OIDC_OP_INTROSPECT_ENDPOINT,\n verify=settings.OIDC_VERIFY_SSL,\n headers=headers,\n data={\"token\": access_token},\n )\n response.raise_for_status()\n return response.json()\n\n def get_userinfo_or_introspection(self, access_token):\n try:\n claims = self.cached_request(\n self.get_userinfo, access_token, \"auth.userinfo\"\n )\n except requests.HTTPError as e:\n if not (\n e.response.status_code in [401, 403] and settings.OIDC_CHECK_INTROSPECT\n ):\n raise e\n\n # check introspection if userinfo fails (confidental client)\n claims = self.cached_request(\n self.get_introspection, access_token, \"auth.introspection\"\n )\n if 
\"client_id\" not in claims:\n raise SuspiciousOperation(\"client_id not present in introspection\")\n\n return claims\n\n def get_or_create_user(self, access_token, id_token, payload):\n \"\"\"Verify claims and return user, otherwise raise an Exception.\"\"\"\n\n claims = self.get_userinfo_or_introspection(access_token)\n\n users = self.filter_users_by_claims(claims)\n\n if len(users) == 1:\n user = users.get()\n self.update_user_from_claims(user, claims)\n return user\n elif settings.OIDC_CREATE_USER:\n return self.create_user(claims)\n else:\n LOGGER.debug(\n \"Login failed: No user with username %s found, and \"\n \"OIDC_CREATE_USER is False\",\n self.get_username(claims),\n )\n return None\n\n def update_user_from_claims(self, user, claims):\n user.email = claims.get(settings.OIDC_EMAIL_CLAIM, \"\")\n user.first_name = claims.get(settings.OIDC_FIRSTNAME_CLAIM, \"\")\n user.last_name = claims.get(settings.OIDC_LASTNAME_CLAIM, \"\")\n user.save()\n\n def filter_users_by_claims(self, claims):\n username = self.get_username(claims)\n return self.UserModel.objects.filter(username__iexact=username)\n\n def cached_request(self, method, token, cache_prefix):\n token_hash = hashlib.sha256(force_bytes(token)).hexdigest()\n\n func = functools.partial(method, token, None, None)\n\n return cache.get_or_set(\n f\"{cache_prefix}.{token_hash}\",\n func,\n timeout=settings.OIDC_BEARER_TOKEN_REVALIDATION_TIME,\n )\n\n def create_user(self, claims):\n \"\"\"Return object for a newly created user account.\"\"\"\n\n username = self.get_username(claims)\n email = claims.get(settings.OIDC_EMAIL_CLAIM, \"\")\n first_name = claims.get(settings.OIDC_FIRSTNAME_CLAIM, \"\")\n last_name = claims.get(settings.OIDC_LASTNAME_CLAIM, \"\")\n\n return self.UserModel.objects.create(\n username=username, email=email, first_name=first_name, last_name=last_name\n )\n\n def get_username(self, claims):\n try:\n return claims[settings.OIDC_USERNAME_CLAIM]\n except KeyError:\n raise SuspiciousOperation(\"Couldn't find username claim\")\n", "path": "timed/authentication.py"}]} | 1,657 | 521 |
gh_patches_debug_20812 | rasdani/github-patches | git_diff | ipython__ipython-5202 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
node != nodejs within Debian packages
As part of resolving https://github.com/ipython/nbviewer/issues/196, (and https://github.com/ipython/nbviewer/pull/194), @ahmadia and I ended up finding out that Debian based Linux Distributions build the `node` binary as `nodejs`.
IPython nbconvert defaults to using `node`, which is actually `ax25-node` on Debian based systems. [See relevant posting on the Debian mailing list for more](https://lists.debian.org/debian-devel-announce/2012/07/msg00002.html).
This won't affect users of nvm (who provide `node`) or those who build from source. This will affect certain strains of Ubuntu (Saucy Salamander was what I used to test).
</issue>
<code>
[start of IPython/nbconvert/filters/markdown.py]
1 """Markdown filters
2 This file contains a collection of utility filters for dealing with
3 markdown within Jinja templates.
4 """
5 #-----------------------------------------------------------------------------
6 # Copyright (c) 2013, the IPython Development Team.
7 #
8 # Distributed under the terms of the Modified BSD License.
9 #
10 # The full license is in the file COPYING.txt, distributed with this software.
11 #-----------------------------------------------------------------------------
12
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16 from __future__ import print_function
17
18 # Stdlib imports
19 import os
20 import subprocess
21 from io import TextIOWrapper, BytesIO
22
23 # IPython imports
24 from IPython.nbconvert.utils.pandoc import pandoc
25 from IPython.nbconvert.utils.exceptions import ConversionException
26 from IPython.utils.process import find_cmd, FindCmdError
27 from IPython.utils.py3compat import cast_bytes
28
29 #-----------------------------------------------------------------------------
30 # Functions
31 #-----------------------------------------------------------------------------
32 marked = os.path.join(os.path.dirname(__file__), "marked.js")
33
34 __all__ = [
35 'markdown2html',
36 'markdown2html_pandoc',
37 'markdown2html_marked',
38 'markdown2latex',
39 'markdown2rst',
40 ]
41
42 class NodeJSMissing(ConversionException):
43 """Exception raised when node.js is missing."""
44 pass
45
46 def markdown2latex(source):
47 """Convert a markdown string to LaTeX via pandoc.
48
49 This function will raise an error if pandoc is not installed.
50 Any error messages generated by pandoc are printed to stderr.
51
52 Parameters
53 ----------
54 source : string
55 Input string, assumed to be valid markdown.
56
57 Returns
58 -------
59 out : string
60 Output as returned by pandoc.
61 """
62 return pandoc(source, 'markdown', 'latex')
63
64 def markdown2html_pandoc(source):
65 """Convert a markdown string to HTML via pandoc"""
66 return pandoc(source, 'markdown', 'html', extra_args=['--mathjax'])
67
68 def markdown2html_marked(source, encoding='utf-8'):
69 """Convert a markdown string to HTML via marked"""
70 command = ['node', marked]
71 try:
72 p = subprocess.Popen(command,
73 stdin=subprocess.PIPE, stdout=subprocess.PIPE
74 )
75 except OSError as e:
76 raise NodeJSMissing(
77 "The command '%s' returned an error: %s.\n" % (" ".join(command), e) +
78 "Please check that Node.js is installed."
79 )
80 out, _ = p.communicate(cast_bytes(source, encoding))
81 out = TextIOWrapper(BytesIO(out), encoding, 'replace').read()
82 return out.rstrip('\n')
83
84 def markdown2rst(source):
85 """Convert a markdown string to LaTeX via pandoc.
86
87 This function will raise an error if pandoc is not installed.
88 Any error messages generated by pandoc are printed to stderr.
89
90 Parameters
91 ----------
92 source : string
93 Input string, assumed to be valid markdown.
94
95 Returns
96 -------
97 out : string
98 Output as returned by pandoc.
99 """
100 return pandoc(source, 'markdown', 'rst')
101
102 try:
103 find_cmd('node')
104 except FindCmdError:
105 markdown2html = markdown2html_pandoc
106 else:
107 markdown2html = markdown2html_marked
108
[end of IPython/nbconvert/filters/markdown.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/nbconvert/filters/markdown.py b/IPython/nbconvert/filters/markdown.py
--- a/IPython/nbconvert/filters/markdown.py
+++ b/IPython/nbconvert/filters/markdown.py
@@ -67,7 +67,7 @@
def markdown2html_marked(source, encoding='utf-8'):
"""Convert a markdown string to HTML via marked"""
- command = ['node', marked]
+ command = [node_cmd, marked]
try:
p = subprocess.Popen(command,
stdin=subprocess.PIPE, stdout=subprocess.PIPE
@@ -99,9 +99,18 @@
"""
return pandoc(source, 'markdown', 'rst')
+# prefer md2html via marked if node.js is available
+# node is called nodejs on debian, so try that first
+node_cmd = 'nodejs'
try:
- find_cmd('node')
+ find_cmd(node_cmd)
except FindCmdError:
- markdown2html = markdown2html_pandoc
+ node_cmd = 'node'
+ try:
+ find_cmd(node_cmd)
+ except FindCmdError:
+ markdown2html = markdown2html_pandoc
+ else:
+ markdown2html = markdown2html_marked
else:
markdown2html = markdown2html_marked
| {"golden_diff": "diff --git a/IPython/nbconvert/filters/markdown.py b/IPython/nbconvert/filters/markdown.py\n--- a/IPython/nbconvert/filters/markdown.py\n+++ b/IPython/nbconvert/filters/markdown.py\n@@ -67,7 +67,7 @@\n \n def markdown2html_marked(source, encoding='utf-8'):\n \"\"\"Convert a markdown string to HTML via marked\"\"\"\n- command = ['node', marked]\n+ command = [node_cmd, marked]\n try:\n p = subprocess.Popen(command,\n stdin=subprocess.PIPE, stdout=subprocess.PIPE\n@@ -99,9 +99,18 @@\n \"\"\"\n return pandoc(source, 'markdown', 'rst')\n \n+# prefer md2html via marked if node.js is available\n+# node is called nodejs on debian, so try that first\n+node_cmd = 'nodejs'\n try:\n- find_cmd('node')\n+ find_cmd(node_cmd)\n except FindCmdError:\n- markdown2html = markdown2html_pandoc\n+ node_cmd = 'node'\n+ try:\n+ find_cmd(node_cmd)\n+ except FindCmdError:\n+ markdown2html = markdown2html_pandoc\n+ else:\n+ markdown2html = markdown2html_marked\n else:\n markdown2html = markdown2html_marked\n", "issue": "node != nodejs within Debian packages\nAs part of resolving https://github.com/ipython/nbviewer/issues/196, (and https://github.com/ipython/nbviewer/pull/194), @ahmadia and I ended up finding out that Debian based Linux Distributions build the `node` binary as `nodejs`.\n\nIPython nbconvert defaults to using `node`, which is actually `ax25-node` on Debian based systems. [See relevant posting on the Debian mailing list for more](https://lists.debian.org/debian-devel-announce/2012/07/msg00002.html).\n\nThis won't affect users of nvm (who provide `node`) or those who build from source. This will affect certain strains of Ubuntu (Saucy Salamander was what I used to test).\n\n", "before_files": [{"content": "\"\"\"Markdown filters\nThis file contains a collection of utility filters for dealing with \nmarkdown within Jinja templates.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, the IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\nfrom __future__ import print_function\n\n# Stdlib imports\nimport os\nimport subprocess\nfrom io import TextIOWrapper, BytesIO\n\n# IPython imports\nfrom IPython.nbconvert.utils.pandoc import pandoc\nfrom IPython.nbconvert.utils.exceptions import ConversionException\nfrom IPython.utils.process import find_cmd, FindCmdError\nfrom IPython.utils.py3compat import cast_bytes\n\n#-----------------------------------------------------------------------------\n# Functions\n#-----------------------------------------------------------------------------\nmarked = os.path.join(os.path.dirname(__file__), \"marked.js\")\n\n__all__ = [\n 'markdown2html',\n 'markdown2html_pandoc',\n 'markdown2html_marked',\n 'markdown2latex',\n 'markdown2rst',\n]\n\nclass NodeJSMissing(ConversionException):\n \"\"\"Exception raised when node.js is missing.\"\"\"\n pass\n\ndef markdown2latex(source):\n \"\"\"Convert a markdown string to LaTeX via pandoc.\n\n This function will raise an error if pandoc is not installed.\n Any error messages generated by pandoc are printed to stderr.\n\n Parameters\n ----------\n source : string\n Input string, assumed to be valid 
markdown.\n\n Returns\n -------\n out : string\n Output as returned by pandoc.\n \"\"\"\n return pandoc(source, 'markdown', 'latex')\n\ndef markdown2html_pandoc(source):\n \"\"\"Convert a markdown string to HTML via pandoc\"\"\"\n return pandoc(source, 'markdown', 'html', extra_args=['--mathjax'])\n\ndef markdown2html_marked(source, encoding='utf-8'):\n \"\"\"Convert a markdown string to HTML via marked\"\"\"\n command = ['node', marked]\n try:\n p = subprocess.Popen(command,\n stdin=subprocess.PIPE, stdout=subprocess.PIPE\n )\n except OSError as e:\n raise NodeJSMissing(\n \"The command '%s' returned an error: %s.\\n\" % (\" \".join(command), e) +\n \"Please check that Node.js is installed.\"\n )\n out, _ = p.communicate(cast_bytes(source, encoding))\n out = TextIOWrapper(BytesIO(out), encoding, 'replace').read()\n return out.rstrip('\\n')\n\ndef markdown2rst(source):\n \"\"\"Convert a markdown string to LaTeX via pandoc.\n\n This function will raise an error if pandoc is not installed.\n Any error messages generated by pandoc are printed to stderr.\n\n Parameters\n ----------\n source : string\n Input string, assumed to be valid markdown.\n\n Returns\n -------\n out : string\n Output as returned by pandoc.\n \"\"\"\n return pandoc(source, 'markdown', 'rst')\n\ntry:\n find_cmd('node')\nexcept FindCmdError:\n markdown2html = markdown2html_pandoc\nelse:\n markdown2html = markdown2html_marked\n", "path": "IPython/nbconvert/filters/markdown.py"}]} | 1,627 | 294 |
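The module-import-time detection logic after the fix, exactly as in the golden diff: probe `nodejs` first because Debian renames the binary, fall back to `node`, and only use pandoc when neither exists:

```python
# node is called nodejs on Debian, so try that name first.
node_cmd = 'nodejs'
try:
    find_cmd(node_cmd)
except FindCmdError:
    node_cmd = 'node'
    try:
        find_cmd(node_cmd)
    except FindCmdError:
        markdown2html = markdown2html_pandoc
    else:
        markdown2html = markdown2html_marked
else:
    markdown2html = markdown2html_marked
```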
gh_patches_debug_8504 | rasdani/github-patches | git_diff | Gallopsled__pwntools-218 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SyntaxWarning in pwnlib.util.web
This line generates a `SyntaxWarning`: https://github.com/Gallopsled/pwntools/blob/master/pwnlib/util/web.py#L27
Either we should use qualified names or only import the names that we need. My votes goes toward the former.
</issue>
<code>
[start of pwnlib/util/web.py]
1 # -*- coding: utf-8 -*-
2 import os, tempfile, logging
3 from .misc import size
4 log = logging.getLogger(__name__)
5
6 def wget(url, save=None, timeout=5, **kwargs):
7 """wget(url, save=None, timeout=5) -> str
8
9 Downloads a file via HTTP/HTTPS.
10
11 Args:
12 url (str): URL to download
13 save (str or bool): Name to save as. Any truthy value
14 will auto-generate a name based on the URL.
15 timeout (int): Timeout, in seconds
16
17 Example:
18
19 >>> url = 'http://httpbin.org/robots.txt'
20 >>> with context.local(log_level='ERROR'): result = wget(url)
21 >>> result
22 'User-agent: *\nDisallow: /deny\n'
23 >>> with context.local(log_level='ERROR'): wget(url, True)
24 >>> result == file('robots.txt').read()
25 True
26 """
27 from requests import *
28
29 with log.progress("Downloading '%s'" % url) as w:
30 w.status("Making request...")
31
32 response = get(url, stream=True, **kwargs)
33
34 if not response.ok:
35 w.failure("Got code %s" % response.status_code)
36 return
37
38 total_size = int(response.headers.get('content-length',0))
39
40 w.status('0 / %s' % size(total_size))
41
42 # Find out the next largest size we can represent as
43 chunk_size = 1
44 while chunk_size < (total_size/10):
45 chunk_size *= 1000
46
47 # Count chunks as they're received
48 total_data = ''
49
50 # Loop until we have all of the data
51 for chunk in response.iter_content(chunk_size = 2**10):
52 total_data += chunk
53 if total_size:
54 w.status('%s / %s' % (size(total_data), size(total_size)))
55 else:
56 w.status('%s' % size(total_data))
57
58 # Save to the target file if provided
59 if save:
60 if not isinstance(save, (str, unicode)):
61 save = os.path.basename(url)
62 save = save or tempfile.NamedTemporaryFile(dir='.', delete=False).name
63 with file(save,'wb+') as f:
64 f.write(total_data)
65 w.success('Saved %r (%s)' % (f.name, size(total_data)))
66 else:
67 w.success('%s' % size(total_data))
68
69 return total_data
70
71
[end of pwnlib/util/web.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwnlib/util/web.py b/pwnlib/util/web.py
--- a/pwnlib/util/web.py
+++ b/pwnlib/util/web.py
@@ -24,12 +24,12 @@
>>> result == file('robots.txt').read()
True
"""
- from requests import *
+ import requests
with log.progress("Downloading '%s'" % url) as w:
w.status("Making request...")
- response = get(url, stream=True, **kwargs)
+ response = requests.get(url, stream=True, **kwargs)
if not response.ok:
w.failure("Got code %s" % response.status_code)
| {"golden_diff": "diff --git a/pwnlib/util/web.py b/pwnlib/util/web.py\n--- a/pwnlib/util/web.py\n+++ b/pwnlib/util/web.py\n@@ -24,12 +24,12 @@\n >>> result == file('robots.txt').read()\n True\n \"\"\"\n- from requests import *\n+ import requests\n \n with log.progress(\"Downloading '%s'\" % url) as w:\n w.status(\"Making request...\")\n \n- response = get(url, stream=True, **kwargs)\n+ response = requests.get(url, stream=True, **kwargs)\n \n if not response.ok:\n w.failure(\"Got code %s\" % response.status_code)\n", "issue": "SyntaxWarning in pwnlib.util.web\nThis line generates a `SyntaxWarning`: https://github.com/Gallopsled/pwntools/blob/master/pwnlib/util/web.py#L27\n\nEither we should use qualified names or only import the names that we need. My votes goes toward the former.\n\nSyntaxWarning in pwnlib.util.web\nThis line generates a `SyntaxWarning`: https://github.com/Gallopsled/pwntools/blob/master/pwnlib/util/web.py#L27\n\nEither we should use qualified names or only import the names that we need. My votes goes toward the former.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os, tempfile, logging\nfrom .misc import size\nlog = logging.getLogger(__name__)\n\ndef wget(url, save=None, timeout=5, **kwargs):\n \"\"\"wget(url, save=None, timeout=5) -> str\n\n Downloads a file via HTTP/HTTPS.\n\n Args:\n url (str): URL to download\n save (str or bool): Name to save as. Any truthy value\n will auto-generate a name based on the URL.\n timeout (int): Timeout, in seconds\n\n Example:\n\n >>> url = 'http://httpbin.org/robots.txt'\n >>> with context.local(log_level='ERROR'): result = wget(url)\n >>> result\n 'User-agent: *\\nDisallow: /deny\\n'\n >>> with context.local(log_level='ERROR'): wget(url, True)\n >>> result == file('robots.txt').read()\n True\n \"\"\"\n from requests import *\n\n with log.progress(\"Downloading '%s'\" % url) as w:\n w.status(\"Making request...\")\n\n response = get(url, stream=True, **kwargs)\n\n if not response.ok:\n w.failure(\"Got code %s\" % response.status_code)\n return\n\n total_size = int(response.headers.get('content-length',0))\n\n w.status('0 / %s' % size(total_size))\n\n # Find out the next largest size we can represent as\n chunk_size = 1\n while chunk_size < (total_size/10):\n chunk_size *= 1000\n\n # Count chunks as they're received\n total_data = ''\n\n # Loop until we have all of the data\n for chunk in response.iter_content(chunk_size = 2**10):\n total_data += chunk\n if total_size:\n w.status('%s / %s' % (size(total_data), size(total_size)))\n else:\n w.status('%s' % size(total_data))\n\n # Save to the target file if provided\n if save:\n if not isinstance(save, (str, unicode)):\n save = os.path.basename(url)\n save = save or tempfile.NamedTemporaryFile(dir='.', delete=False).name\n with file(save,'wb+') as f:\n f.write(total_data)\n w.success('Saved %r (%s)' % (f.name, size(total_data)))\n else:\n w.success('%s' % size(total_data))\n\n return total_data\n\n", "path": "pwnlib/util/web.py"}]} | 1,339 | 149 |
gh_patches_debug_14960 | rasdani/github-patches | git_diff | flairNLP__flair-422 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't pin package dependencies in setup.py
To be removed, once it is done: Please add the appropriate label to this ticket, e.g. feature or enhancement.
**Is your feature/enhancement request related to a problem? Please describe.**
It is not considered good practice to pin package dependencies in setup.py (see additional context).
For instance, I'm forced to downgrade certain packages by installing flair.
**Describe the solution you'd like**
Just list the abstract requirements in setup.py with less restrictive version bounds.
**Additional context**
See https://packaging.python.org/discussions/install-requires-vs-requirements/
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2
3 setup(
4 name='flair',
5 version='0.4.0',
6 description='A very simple framework for state-of-the-art NLP',
7 long_description=open("README.md", encoding='utf-8').read(),
8 long_description_content_type="text/markdown",
9 author='Alan Akbik',
10 author_email='[email protected]',
11 url='https://github.com/zalandoresearch/flair',
12 packages=find_packages(exclude='test'), # same as name
13 license='MIT',
14 install_requires=[
15 'torch==1.0.0',
16 'gensim==3.4.0',
17 'typing==3.6.4',
18 'tqdm==4.26.0',
19 'segtok==1.5.7',
20 'matplotlib==3.0.0',
21 'mpld3==0.3',
22 'sklearn',
23 'sqlitedict==1.6.0',
24 'deprecated==1.2.4',
25 'hyperopt==0.1.1',
26 'pytorch-pretrained-bert==0.3.0'
27 ],
28 include_package_data=True,
29 python_requires='>=3.6',
30 )
31
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,18 +12,17 @@
packages=find_packages(exclude='test'), # same as name
license='MIT',
install_requires=[
- 'torch==1.0.0',
- 'gensim==3.4.0',
- 'typing==3.6.4',
- 'tqdm==4.26.0',
- 'segtok==1.5.7',
- 'matplotlib==3.0.0',
- 'mpld3==0.3',
+ 'torch>=1.0.0',
+ 'gensim>=3.4.0',
+ 'tqdm>=4.26.0',
+ 'segtok>=1.5.7',
+ 'matplotlib>=3.0.0',
+ 'mpld3>=0.3',
'sklearn',
- 'sqlitedict==1.6.0',
- 'deprecated==1.2.4',
- 'hyperopt==0.1.1',
- 'pytorch-pretrained-bert==0.3.0'
+ 'sqlitedict>=1.6.0',
+ 'deprecated>=1.2.4',
+ 'hyperopt>=0.1.1',
+ 'pytorch-pretrained-bert>=0.3.0'
],
include_package_data=True,
python_requires='>=3.6',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,18 +12,17 @@\n packages=find_packages(exclude='test'), # same as name\n license='MIT',\n install_requires=[\n- 'torch==1.0.0',\n- 'gensim==3.4.0',\n- 'typing==3.6.4',\n- 'tqdm==4.26.0',\n- 'segtok==1.5.7',\n- 'matplotlib==3.0.0',\n- 'mpld3==0.3',\n+ 'torch>=1.0.0',\n+ 'gensim>=3.4.0',\n+ 'tqdm>=4.26.0',\n+ 'segtok>=1.5.7',\n+ 'matplotlib>=3.0.0',\n+ 'mpld3>=0.3',\n 'sklearn',\n- 'sqlitedict==1.6.0',\n- 'deprecated==1.2.4',\n- 'hyperopt==0.1.1',\n- 'pytorch-pretrained-bert==0.3.0'\n+ 'sqlitedict>=1.6.0',\n+ 'deprecated>=1.2.4',\n+ 'hyperopt>=0.1.1',\n+ 'pytorch-pretrained-bert>=0.3.0'\n ],\n include_package_data=True,\n python_requires='>=3.6',\n", "issue": "Don't pin package dependencies in setup.py\nTo be removed, once it is done: Please add the appropriate label to this ticket, e.g. feature or enhancement.\r\n\r\n**Is your feature/enhancement request related to a problem? Please describe.**\r\n\r\nIt is not considered good practice to pin package dependencies in setup.py (see additional context).\r\n\r\nFor instance, I'm forced to downgrade certain packages by installing flair.\r\n\r\n**Describe the solution you'd like**\r\n\r\nJust list the abstract requirements in setup.py with less restrictive version bounds.\r\n\r\n**Additional context**\r\n\r\nSee https://packaging.python.org/discussions/install-requires-vs-requirements/\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='flair',\n version='0.4.0',\n description='A very simple framework for state-of-the-art NLP',\n long_description=open(\"README.md\", encoding='utf-8').read(),\n long_description_content_type=\"text/markdown\",\n author='Alan Akbik',\n author_email='[email protected]',\n url='https://github.com/zalandoresearch/flair',\n packages=find_packages(exclude='test'), # same as name\n license='MIT',\n install_requires=[\n 'torch==1.0.0',\n 'gensim==3.4.0',\n 'typing==3.6.4',\n 'tqdm==4.26.0',\n 'segtok==1.5.7',\n 'matplotlib==3.0.0',\n 'mpld3==0.3',\n 'sklearn',\n 'sqlitedict==1.6.0',\n 'deprecated==1.2.4',\n 'hyperopt==0.1.1',\n 'pytorch-pretrained-bert==0.3.0'\n ],\n include_package_data=True,\n python_requires='>=3.6',\n)\n", "path": "setup.py"}]} | 986 | 340 |
gh_patches_debug_5563 | rasdani/github-patches | git_diff | mlflow__mlflow-9536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] basic-auth init on remote database
### Describe the problem
Same issue #9399 happened when trying to initialize database which invokes this function [migrate_if_needed](https://github.com/mlflow/mlflow/blob/master/mlflow/server/auth/db/utils.py#L30)
Suggestion: Apply the same fix #9410 to force SqlAlchemy to render unobfuscated url
### Suggestion
```
alembic_cfg = _get_alembic_config(engine.url.render_as_string(hide_password=False))
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/gateway`: AI Gateway service, Gateway client APIs, third-party Gateway integrations
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [X] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [X] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
</issue>
<code>
[start of mlflow/server/auth/db/utils.py]
1 from pathlib import Path
2
3 from alembic.command import upgrade
4 from alembic.config import Config
5 from alembic.migration import MigrationContext
6 from alembic.script import ScriptDirectory
7 from sqlalchemy.engine.base import Engine
8
9
10 def _get_alembic_dir() -> str:
11 return Path(__file__).parent / "migrations"
12
13
14 def _get_alembic_config(url: str) -> Config:
15 alembic_dir = _get_alembic_dir()
16 alembic_ini_path = alembic_dir / "alembic.ini"
17 alembic_cfg = Config(alembic_ini_path)
18 alembic_cfg.set_main_option("script_location", str(alembic_dir))
19 alembic_cfg.set_main_option("sqlalchemy.url", url)
20 return alembic_cfg
21
22
23 def migrate(engine: Engine, revision: str) -> None:
24 alembic_cfg = _get_alembic_config(engine.url.render_as_string(hide_password=False))
25 with engine.begin() as conn:
26 alembic_cfg.attributes["connection"] = conn
27 upgrade(alembic_cfg, revision)
28
29
30 def migrate_if_needed(engine: Engine, revision: str) -> None:
31 alembic_cfg = _get_alembic_config(str(engine.url))
32 script_dir = ScriptDirectory.from_config(alembic_cfg)
33 with engine.begin() as conn:
34 context = MigrationContext.configure(conn)
35 if context.get_current_revision() != script_dir.get_current_head():
36 upgrade(alembic_cfg, revision)
37
[end of mlflow/server/auth/db/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/server/auth/db/utils.py b/mlflow/server/auth/db/utils.py
--- a/mlflow/server/auth/db/utils.py
+++ b/mlflow/server/auth/db/utils.py
@@ -28,7 +28,7 @@
def migrate_if_needed(engine: Engine, revision: str) -> None:
- alembic_cfg = _get_alembic_config(str(engine.url))
+ alembic_cfg = _get_alembic_config(engine.url.render_as_string(hide_password=False))
script_dir = ScriptDirectory.from_config(alembic_cfg)
with engine.begin() as conn:
context = MigrationContext.configure(conn)
| {"golden_diff": "diff --git a/mlflow/server/auth/db/utils.py b/mlflow/server/auth/db/utils.py\n--- a/mlflow/server/auth/db/utils.py\n+++ b/mlflow/server/auth/db/utils.py\n@@ -28,7 +28,7 @@\n \n \n def migrate_if_needed(engine: Engine, revision: str) -> None:\n- alembic_cfg = _get_alembic_config(str(engine.url))\n+ alembic_cfg = _get_alembic_config(engine.url.render_as_string(hide_password=False))\n script_dir = ScriptDirectory.from_config(alembic_cfg)\n with engine.begin() as conn:\n context = MigrationContext.configure(conn)\n", "issue": "[BUG] basic-auth init on remote database\n### Describe the problem\r\n\r\nSame issue #9399 happened when trying to initialize database which invokes this function [migrate_if_needed](https://github.com/mlflow/mlflow/blob/master/mlflow/server/auth/db/utils.py#L30)\r\n\r\nSuggestion: Apply the same fix #9410 to force SqlAlchemy to render unobfuscated url\r\n\r\n### Suggestion\r\n```\r\nalembic_cfg = _get_alembic_config(engine.url.render_as_string(hide_password=False))\r\n```\r\n\r\n### What component(s) does this bug affect?\r\n\r\n- [ ] `area/artifacts`: Artifact stores and artifact logging\r\n- [ ] `area/build`: Build and test infrastructure for MLflow\r\n- [ ] `area/docs`: MLflow documentation pages\r\n- [ ] `area/examples`: Example code\r\n- [ ] `area/gateway`: AI Gateway service, Gateway client APIs, third-party Gateway integrations\r\n- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\r\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\r\n- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates\r\n- [ ] `area/projects`: MLproject format, project running backends\r\n- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs\r\n- [X] `area/server-infra`: MLflow Tracking server backend\r\n- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging\r\n\r\n### What interface(s) does this bug affect?\r\n\r\n- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server\r\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\r\n- [X] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\r\n- [ ] `area/windows`: Windows support\r\n\r\n### What language(s) does this bug affect?\r\n\r\n- [ ] `language/r`: R APIs and clients\r\n- [ ] `language/java`: Java APIs and clients\r\n- [ ] `language/new`: Proposals for new client languages\r\n\r\n### What integration(s) does this bug affect?\r\n\r\n- [ ] `integrations/azure`: Azure and Azure ML integrations\r\n- [ ] `integrations/sagemaker`: SageMaker integrations\r\n- [ ] `integrations/databricks`: Databricks integrations\n", "before_files": [{"content": "from pathlib import Path\n\nfrom alembic.command import upgrade\nfrom alembic.config import Config\nfrom alembic.migration import MigrationContext\nfrom alembic.script import ScriptDirectory\nfrom sqlalchemy.engine.base import Engine\n\n\ndef _get_alembic_dir() -> str:\n return Path(__file__).parent / \"migrations\"\n\n\ndef _get_alembic_config(url: str) -> Config:\n alembic_dir = _get_alembic_dir()\n alembic_ini_path = alembic_dir / \"alembic.ini\"\n alembic_cfg = Config(alembic_ini_path)\n alembic_cfg.set_main_option(\"script_location\", str(alembic_dir))\n alembic_cfg.set_main_option(\"sqlalchemy.url\", url)\n return alembic_cfg\n\n\ndef migrate(engine: Engine, revision: str) -> None:\n alembic_cfg = 
_get_alembic_config(engine.url.render_as_string(hide_password=False))\n with engine.begin() as conn:\n alembic_cfg.attributes[\"connection\"] = conn\n upgrade(alembic_cfg, revision)\n\n\ndef migrate_if_needed(engine: Engine, revision: str) -> None:\n alembic_cfg = _get_alembic_config(str(engine.url))\n script_dir = ScriptDirectory.from_config(alembic_cfg)\n with engine.begin() as conn:\n context = MigrationContext.configure(conn)\n if context.get_current_revision() != script_dir.get_current_head():\n upgrade(alembic_cfg, revision)\n", "path": "mlflow/server/auth/db/utils.py"}]} | 1,466 | 141 |
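Why `str(engine.url)` breaks remote databases can be demonstrated directly. This assumes SQLAlchemy 1.4+ semantics; the credentials and host are invented for the example:

```python
from sqlalchemy.engine import make_url

url = make_url("postgresql://user:s3cret@db.example.com:5432/auth")
# str() masks the password, so alembic would receive an unusable URL:
print(str(url))                                   # postgresql://user:***@db.example.com:5432/auth
# The fix renders the URL with the real password intact:
print(url.render_as_string(hide_password=False))  # postgresql://user:s3cret@db.example.com:5432/auth
```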
gh_patches_debug_61226 | rasdani/github-patches | git_diff | searxng__searxng-2862 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: bilibili engine is broken
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
Something has changed, and some fixes are now needed to use the API successfully.
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/searxng/searxng
Branch: master
Version: 2023.9.27+1a66d7467+dirty
<!-- If you are running on master branch using git execute this command
in order to fetch the latest commit ID:
```
git log -1
```
If you are using searxng-docker then look at the bottom of the SearXNG page
and check for the version after "Powered by SearXNG"
Please also stipulate if you are using a forked version of SearXNG and
include a link to the fork source code.
-->
**How did you install SearXNG?**
make run
<!-- Did you install SearXNG using the official wiki or using searxng-docker
or manually by executing the searx/webapp.py file? -->
**What happened?**
<!-- A clear and concise description of what the bug is. -->
**How To Reproduce**
<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->
</issue>
<code>
[start of searx/engines/bilibili.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Bilibili is a Chinese video sharing website.
4
5 .. _Bilibili: https://www.bilibili.com
6 """
7
8 import random
9 import string
10 from urllib.parse import urlencode
11 from datetime import datetime, timedelta
12
13 # Engine metadata
14 about = {
15 "website": "https://www.bilibili.com",
16 "wikidata_id": "Q3077586",
17 "official_api_documentation": None,
18 "use_official_api": False,
19 "require_api_key": False,
20 "results": "JSON",
21 }
22
23 # Engine configuration
24 paging = True
25 results_per_page = 20
26 categories = ["videos"]
27
28 # Search URL
29 base_url = "https://api.bilibili.com/x/web-interface/wbi/search/type"
30
31 cookie = {
32 "innersign": "0",
33 "buvid3": "".join(random.choice(string.hexdigits) for _ in range(16)) + "infoc",
34 "i-wanna-go-back": "-1",
35 "b_ut": "7",
36 "FEED_LIVE_VERSION": "V8",
37 "header_theme_version": "undefined",
38 "home_feed_column": "4",
39 }
40
41
42 def request(query, params):
43 query_params = {
44 "__refresh__": "true",
45 "page": params["pageno"],
46 "page_size": results_per_page,
47 "single_column": "0",
48 "keyword": query,
49 "search_type": "video",
50 }
51
52 params["url"] = f"{base_url}?{urlencode(query_params)}"
53 params["cookies"] = cookie
54
55 return params
56
57
58 # Format the video duration
59 def format_duration(duration):
60 minutes, seconds = map(int, duration.split(":"))
61 total_seconds = minutes * 60 + seconds
62
63 formatted_duration = str(timedelta(seconds=total_seconds))[2:] if 0 <= total_seconds < 3600 else ""
64
65 return formatted_duration
66
67
68 def response(resp):
69 search_res = resp.json()
70
71 results = []
72
73 for item in search_res.get("data", {}).get("result", []):
74 title = item["title"]
75 url = item["arcurl"]
76 thumbnail = item["pic"]
77 description = item["description"]
78 author = item["author"]
79 video_id = item["aid"]
80 unix_date = item["pubdate"]
81
82 formatted_date = datetime.utcfromtimestamp(unix_date)
83 formatted_duration = format_duration(item["duration"])
84 iframe_url = f"https://player.bilibili.com/player.html?aid={video_id}&high_quality=1&autoplay=false&danmaku=0"
85
86 results.append(
87 {
88 "title": title,
89 "url": url,
90 "content": description,
91 "author": author,
92 "publishedDate": formatted_date,
93 "length": formatted_duration,
94 "thumbnail": thumbnail,
95 "iframe_src": iframe_url,
96 "template": "videos.html",
97 }
98 )
99
100 return results
101
[end of searx/engines/bilibili.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/bilibili.py b/searx/engines/bilibili.py
--- a/searx/engines/bilibili.py
+++ b/searx/engines/bilibili.py
@@ -26,7 +26,7 @@
categories = ["videos"]
# Search URL
-base_url = "https://api.bilibili.com/x/web-interface/wbi/search/type"
+base_url = "https://api.bilibili.com/x/web-interface/search/type"
cookie = {
"innersign": "0",
| {"golden_diff": "diff --git a/searx/engines/bilibili.py b/searx/engines/bilibili.py\n--- a/searx/engines/bilibili.py\n+++ b/searx/engines/bilibili.py\n@@ -26,7 +26,7 @@\n categories = [\"videos\"]\n \n # Search URL\n-base_url = \"https://api.bilibili.com/x/web-interface/wbi/search/type\"\n+base_url = \"https://api.bilibili.com/x/web-interface/search/type\"\n \n cookie = {\n \"innersign\": \"0\",\n", "issue": "Bug: bilibili engine is broken\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\nSomething has changed, and now some fixes are needed to use the api successfully.\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nRepository: https://github.com/searxng/searxng\r\nBranch: master\r\nVersion: 2023.9.27+1a66d7467+dirty\r\n<!-- If you are running on master branch using git execute this command\r\nin order to fetch the latest commit ID:\r\n```\r\ngit log -1\r\n``` \r\nIf you are using searxng-docker then look at the bottom of the SearXNG page\r\nand check for the version after \"Powered by SearXNG\"\r\n\r\nPlease also stipulate if you are using a forked version of SearXNG and\r\ninclude a link to the fork source code.\r\n-->\r\n**How did you install SearXNG?**\r\nmake run\r\n<!-- Did you install SearXNG using the official wiki or using searxng-docker\r\nor manually by executing the searx/webapp.py file? -->\r\n**What happened?**\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n**How To Reproduce**\r\n<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Bilibili is a Chinese video sharing website.\n\n.. 
_Bilibili: https://www.bilibili.com\n\"\"\"\n\nimport random\nimport string\nfrom urllib.parse import urlencode\nfrom datetime import datetime, timedelta\n\n# Engine metadata\nabout = {\n \"website\": \"https://www.bilibili.com\",\n \"wikidata_id\": \"Q3077586\",\n \"official_api_documentation\": None,\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": \"JSON\",\n}\n\n# Engine configuration\npaging = True\nresults_per_page = 20\ncategories = [\"videos\"]\n\n# Search URL\nbase_url = \"https://api.bilibili.com/x/web-interface/wbi/search/type\"\n\ncookie = {\n \"innersign\": \"0\",\n \"buvid3\": \"\".join(random.choice(string.hexdigits) for _ in range(16)) + \"infoc\",\n \"i-wanna-go-back\": \"-1\",\n \"b_ut\": \"7\",\n \"FEED_LIVE_VERSION\": \"V8\",\n \"header_theme_version\": \"undefined\",\n \"home_feed_column\": \"4\",\n}\n\n\ndef request(query, params):\n query_params = {\n \"__refresh__\": \"true\",\n \"page\": params[\"pageno\"],\n \"page_size\": results_per_page,\n \"single_column\": \"0\",\n \"keyword\": query,\n \"search_type\": \"video\",\n }\n\n params[\"url\"] = f\"{base_url}?{urlencode(query_params)}\"\n params[\"cookies\"] = cookie\n\n return params\n\n\n# Format the video duration\ndef format_duration(duration):\n minutes, seconds = map(int, duration.split(\":\"))\n total_seconds = minutes * 60 + seconds\n\n formatted_duration = str(timedelta(seconds=total_seconds))[2:] if 0 <= total_seconds < 3600 else \"\"\n\n return formatted_duration\n\n\ndef response(resp):\n search_res = resp.json()\n\n results = []\n\n for item in search_res.get(\"data\", {}).get(\"result\", []):\n title = item[\"title\"]\n url = item[\"arcurl\"]\n thumbnail = item[\"pic\"]\n description = item[\"description\"]\n author = item[\"author\"]\n video_id = item[\"aid\"]\n unix_date = item[\"pubdate\"]\n\n formatted_date = datetime.utcfromtimestamp(unix_date)\n formatted_duration = format_duration(item[\"duration\"])\n iframe_url = f\"https://player.bilibili.com/player.html?aid={video_id}&high_quality=1&autoplay=false&danmaku=0\"\n\n results.append(\n {\n \"title\": title,\n \"url\": url,\n \"content\": description,\n \"author\": author,\n \"publishedDate\": formatted_date,\n \"length\": formatted_duration,\n \"thumbnail\": thumbnail,\n \"iframe_src\": iframe_url,\n \"template\": \"videos.html\",\n }\n )\n\n return results\n", "path": "searx/engines/bilibili.py"}]} | 1,770 | 125 |
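A standalone probe can separate engine bugs from API-side changes. The request below mirrors the engine's query parameters and the non-`wbi` endpoint from the patch; whether Bilibili's API accepts it is an external assumption that may change again:

```python
import requests

params = {
    "__refresh__": "true",
    "page": 1,
    "page_size": 20,
    "single_column": "0",
    "keyword": "test",
    "search_type": "video",
}
resp = requests.get(
    "https://api.bilibili.com/x/web-interface/search/type",
    params=params,
    timeout=10,
)
data = resp.json()
# A non-zero "code" means the API rejected the call (e.g. new signature or
# cookie requirements), which matches the breakage reported above.
print(data.get("code"), data.get("message"))
```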
gh_patches_debug_57128 | rasdani/github-patches | git_diff | liqd__adhocracy4-58 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extend linting to javascript and jsx files
</issue>
<code>
[start of adhocracy4/reports/emails.py]
1 from django.contrib.auth import get_user_model
2 from django.core import urlresolvers
3
4 from adhocracy4 import emails
5
6 User = get_user_model()
7
8
9 class ReportModeratorEmail(emails.ModeratorNotification):
10 template_name = 'a4reports/emails/report_moderators'
11
12
13 class ReportCreatorEmail(emails.Email):
14 template_name = 'a4reports/emails/report_creator'
15
16 def get_receivers(self):
17 return [self.object.content_object.creator]
18
[end of adhocracy4/reports/emails.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/adhocracy4/reports/emails.py b/adhocracy4/reports/emails.py
--- a/adhocracy4/reports/emails.py
+++ b/adhocracy4/reports/emails.py
@@ -1,5 +1,4 @@
from django.contrib.auth import get_user_model
-from django.core import urlresolvers
from adhocracy4 import emails
| {"golden_diff": "diff --git a/adhocracy4/reports/emails.py b/adhocracy4/reports/emails.py\n--- a/adhocracy4/reports/emails.py\n+++ b/adhocracy4/reports/emails.py\n@@ -1,5 +1,4 @@\n from django.contrib.auth import get_user_model\n-from django.core import urlresolvers\n \n from adhocracy4 import emails\n", "issue": "Extend linting to javascript and jsx files\n\n", "before_files": [{"content": "from django.contrib.auth import get_user_model\nfrom django.core import urlresolvers\n\nfrom adhocracy4 import emails\n\nUser = get_user_model()\n\n\nclass ReportModeratorEmail(emails.ModeratorNotification):\n template_name = 'a4reports/emails/report_moderators'\n\n\nclass ReportCreatorEmail(emails.Email):\n template_name = 'a4reports/emails/report_creator'\n\n def get_receivers(self):\n return [self.object.content_object.creator]\n", "path": "adhocracy4/reports/emails.py"}]} | 683 | 84 |
gh_patches_debug_24071 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-5654 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error get params DETR/ Deformable DETR
Despite my attempts to modify the config, the error also occurs when simply testing with the basic DETR config file.
Maybe this issue has already been raised?
mmdet==2.13.0
mmcv==1.3.3
```python
python tools/analysis_tools/get_flops.py configs/detr/detr_r50_8x2_150e_coco.py
```
```python
/home/bluav/mmdetection/mmdet/models/backbones/resnet.py:400: UserWarning: DeprecationWarning: pretrained is a deprecated, please use "init_cfg" instead
warnings.warn('DeprecationWarning: pretrained is a deprecated, '
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Traceback (most recent call last):
File "tools/analysis_tools/get_flops.py", line 81, in <module>
main()
File "tools/analysis_tools/get_flops.py", line 71, in main
flops, params = get_model_complexity_info(model, input_shape)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/cnn/utils/flops_counter.py", line 104, in get_model_complexity_info
_ = flops_model(batch)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/bluav/mmdetection/mmdet/models/detectors/single_stage.py", line 48, in forward_dummy
outs = self.bbox_head(x)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'img_metas'
```
</issue>
<code>
[start of mmdet/models/detectors/detr.py]
1 import torch
2
3 from ..builder import DETECTORS
4 from .single_stage import SingleStageDetector
5
6
7 @DETECTORS.register_module()
8 class DETR(SingleStageDetector):
9 r"""Implementation of `DETR: End-to-End Object Detection with
10 Transformers <https://arxiv.org/pdf/2005.12872>`_"""
11
12 def __init__(self,
13 backbone,
14 bbox_head,
15 train_cfg=None,
16 test_cfg=None,
17 pretrained=None,
18 init_cfg=None):
19 super(DETR, self).__init__(backbone, None, bbox_head, train_cfg,
20 test_cfg, pretrained, init_cfg)
21
22 # over-write `onnx_export` because:
23 # (1) the forward of bbox_head requires img_metas
24 # (2) the different behavior (e.g. construction of `masks`) between
25 # torch and ONNX model, during the forward of bbox_head
26 def onnx_export(self, img, img_metas):
27 """Test function for exporting to ONNX, without test time augmentation.
28
29 Args:
30 img (torch.Tensor): input images.
31 img_metas (list[dict]): List of image information.
32
33 Returns:
34 tuple[Tensor, Tensor]: dets of shape [N, num_det, 5]
35 and class labels of shape [N, num_det].
36 """
37 x = self.extract_feat(img)
38 # forward of this head requires img_metas
39 outs = self.bbox_head.forward_onnx(x, img_metas)
40 # get shape as tensor
41 img_shape = torch._shape_as_tensor(img)[2:]
42 img_metas[0]['img_shape_for_onnx'] = img_shape
43
44 det_bboxes, det_labels = self.bbox_head.onnx_export(*outs, img_metas)
45
46 return det_bboxes, det_labels
47
[end of mmdet/models/detectors/detr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmdet/models/detectors/detr.py b/mmdet/models/detectors/detr.py
--- a/mmdet/models/detectors/detr.py
+++ b/mmdet/models/detectors/detr.py
@@ -1,3 +1,5 @@
+import warnings
+
import torch
from ..builder import DETECTORS
@@ -19,6 +21,27 @@
super(DETR, self).__init__(backbone, None, bbox_head, train_cfg,
test_cfg, pretrained, init_cfg)
+ # over-write `forward_dummy` because:
+ # the forward of bbox_head requires img_metas
+ def forward_dummy(self, img):
+ """Used for computing network flops.
+
+ See `mmdetection/tools/analysis_tools/get_flops.py`
+ """
+ warnings.warn('Warning! MultiheadAttention in DETR does not '
+ 'support flops computation! Do not use the '
+ 'results in your papers!')
+
+ batch_size, _, height, width = img.shape
+ dummy_img_metas = [
+ dict(
+ batch_input_shape=(height, width),
+ img_shape=(height, width, 3)) for _ in range(batch_size)
+ ]
+ x = self.extract_feat(img)
+ outs = self.bbox_head(x, dummy_img_metas)
+ return outs
+
# over-write `onnx_export` because:
# (1) the forward of bbox_head requires img_metas
# (2) the different behavior (e.g. construction of `masks`) between
| {"golden_diff": "diff --git a/mmdet/models/detectors/detr.py b/mmdet/models/detectors/detr.py\n--- a/mmdet/models/detectors/detr.py\n+++ b/mmdet/models/detectors/detr.py\n@@ -1,3 +1,5 @@\n+import warnings\n+\n import torch\n \n from ..builder import DETECTORS\n@@ -19,6 +21,27 @@\n super(DETR, self).__init__(backbone, None, bbox_head, train_cfg,\n test_cfg, pretrained, init_cfg)\n \n+ # over-write `forward_dummy` because:\n+ # the forward of bbox_head requires img_metas\n+ def forward_dummy(self, img):\n+ \"\"\"Used for computing network flops.\n+\n+ See `mmdetection/tools/analysis_tools/get_flops.py`\n+ \"\"\"\n+ warnings.warn('Warning! MultiheadAttention in DETR does not '\n+ 'support flops computation! Do not use the '\n+ 'results in your papers!')\n+\n+ batch_size, _, height, width = img.shape\n+ dummy_img_metas = [\n+ dict(\n+ batch_input_shape=(height, width),\n+ img_shape=(height, width, 3)) for _ in range(batch_size)\n+ ]\n+ x = self.extract_feat(img)\n+ outs = self.bbox_head(x, dummy_img_metas)\n+ return outs\n+\n # over-write `onnx_export` because:\n # (1) the forward of bbox_head requires img_metas\n # (2) the different behavior (e.g. construction of `masks`) between\n", "issue": "Error get params DETR/ Deformable DETR\nDespite my attempts to modify, also just testing with the basic config detr file. \r\nMaybe this issue has already been raised?\r\nmmdet==2.13.0\r\nmmcv=1.3.3\r\n\r\n```python\r\npython tools/analysis_tools/get_flops.py configs/detr/detr_r50_8x2_150e_coco.py\r\n```\r\n\r\n```python\r\n/home/bluav/mmdetection/mmdet/models/backbones/resnet.py:400: UserWarning: DeprecationWarning: pretrained is a deprecated, please use \"init_cfg\" instead\r\n warnings.warn('DeprecationWarning: pretrained is a deprecated, '\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nWarning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!\r\nTraceback (most recent call last):\r\n File \"tools/analysis_tools/get_flops.py\", line 81, in <module>\r\n main()\r\n File \"tools/analysis_tools/get_flops.py\", line 71, in main\r\n flops, params = get_model_complexity_info(model, input_shape)\r\n File 
\"/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/cnn/utils/flops_counter.py\", line 104, in get_model_complexity_info\r\n _ = flops_model(batch)\r\n File \"/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/bluav/mmdetection/mmdet/models/detectors/single_stage.py\", line 48, in forward_dummy\r\n outs = self.bbox_head(x)\r\n File \"/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\nTypeError: forward() missing 1 required positional argument: 'img_metas'\r\n```\r\n\n", "before_files": [{"content": "import torch\n\nfrom ..builder import DETECTORS\nfrom .single_stage import SingleStageDetector\n\n\[email protected]_module()\nclass DETR(SingleStageDetector):\n r\"\"\"Implementation of `DETR: End-to-End Object Detection with\n Transformers <https://arxiv.org/pdf/2005.12872>`_\"\"\"\n\n def __init__(self,\n backbone,\n bbox_head,\n train_cfg=None,\n test_cfg=None,\n pretrained=None,\n init_cfg=None):\n super(DETR, self).__init__(backbone, None, bbox_head, train_cfg,\n test_cfg, pretrained, init_cfg)\n\n # over-write `onnx_export` because:\n # (1) the forward of bbox_head requires img_metas\n # (2) the different behavior (e.g. construction of `masks`) between\n # torch and ONNX model, during the forward of bbox_head\n def onnx_export(self, img, img_metas):\n \"\"\"Test function for exporting to ONNX, without test time augmentation.\n\n Args:\n img (torch.Tensor): input images.\n img_metas (list[dict]): List of image information.\n\n Returns:\n tuple[Tensor, Tensor]: dets of shape [N, num_det, 5]\n and class labels of shape [N, num_det].\n \"\"\"\n x = self.extract_feat(img)\n # forward of this head requires img_metas\n outs = self.bbox_head.forward_onnx(x, img_metas)\n # get shape as tensor\n img_shape = torch._shape_as_tensor(img)[2:]\n img_metas[0]['img_shape_for_onnx'] = img_shape\n\n det_bboxes, det_labels = self.bbox_head.onnx_export(*outs, img_metas)\n\n return det_bboxes, det_labels\n", "path": "mmdet/models/detectors/detr.py"}]} | 1,806 | 356 |
gh_patches_debug_12016 | rasdani/github-patches | git_diff | celery__celery-450 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
os.kill is not available on Windows before Python 2.7
As per the topic, the current Celery implementation (>=2.3.0) crashes on Windows under Python 2.5 and 2.6, because it uses `os.kill`, which is not available on Windows before Python 2.7.
</issue>
<code>
[start of celery/concurrency/processes/__init__.py]
1 """
2
3 Process Pools.
4
5 """
6 import platform
7 import signal as _signal
8
9 from os import kill as _kill
10
11 from celery.concurrency.base import BasePool
12 from celery.concurrency.processes.pool import Pool, RUN
13
14 if platform.system() == "Windows": # pragma: no cover
15 # On Windows os.kill calls TerminateProcess which cannot be
16 # handled by # any process, so this is needed to terminate the task
17 # *and its children* (if any).
18 from celery.concurrency.processes import _win
19 _kill = _win.kill_processtree # noqa
20
21
22 class TaskPool(BasePool):
23 """Process Pool for processing tasks in parallel.
24
25 :param processes: see :attr:`processes`.
26 :param logger: see :attr:`logger`.
27
28
29 .. attribute:: limit
30
31 The number of processes that can run simultaneously.
32
33 .. attribute:: logger
34
35 The logger used for debugging.
36
37 """
38 Pool = Pool
39
40 def on_start(self):
41 """Run the task pool.
42
43 Will pre-fork all workers so they're ready to accept tasks.
44
45 """
46 self._pool = self.Pool(processes=self.limit, **self.options)
47 self.on_apply = self._pool.apply_async
48
49 def on_stop(self):
50 """Gracefully stop the pool."""
51 if self._pool is not None and self._pool._state == RUN:
52 self._pool.close()
53 self._pool.join()
54 self._pool = None
55
56 def on_terminate(self):
57 """Force terminate the pool."""
58 if self._pool is not None:
59 self._pool.terminate()
60 self._pool = None
61
62 def terminate_job(self, pid, signal=None):
63 _kill(pid, signal or _signal.SIGTERM)
64
65 def grow(self, n=1):
66 return self._pool.grow(n)
67
68 def shrink(self, n=1):
69 return self._pool.shrink(n)
70
71 def _get_info(self):
72 return {"max-concurrency": self.limit,
73 "processes": [p.pid for p in self._pool._pool],
74 "max-tasks-per-child": self._pool._maxtasksperchild,
75 "put-guarded-by-semaphore": self.putlocks,
76 "timeouts": (self._pool.soft_timeout, self._pool.timeout)}
77
78 @property
79 def num_processes(self):
80 return self._pool._processes
81
[end of celery/concurrency/processes/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/celery/concurrency/processes/__init__.py b/celery/concurrency/processes/__init__.py
--- a/celery/concurrency/processes/__init__.py
+++ b/celery/concurrency/processes/__init__.py
@@ -6,8 +6,6 @@
import platform
import signal as _signal
-from os import kill as _kill
-
from celery.concurrency.base import BasePool
from celery.concurrency.processes.pool import Pool, RUN
@@ -17,6 +15,8 @@
# *and its children* (if any).
from celery.concurrency.processes import _win
_kill = _win.kill_processtree # noqa
+else:
+ from os import kill as _kill
class TaskPool(BasePool):
| {"golden_diff": "diff --git a/celery/concurrency/processes/__init__.py b/celery/concurrency/processes/__init__.py\n--- a/celery/concurrency/processes/__init__.py\n+++ b/celery/concurrency/processes/__init__.py\n@@ -6,8 +6,6 @@\n import platform\n import signal as _signal\n \n-from os import kill as _kill\n-\n from celery.concurrency.base import BasePool\n from celery.concurrency.processes.pool import Pool, RUN\n \n@@ -17,6 +15,8 @@\n # *and its children* (if any).\n from celery.concurrency.processes import _win\n _kill = _win.kill_processtree # noqa\n+else:\n+ from os import kill as _kill\n \n \n class TaskPool(BasePool):\n", "issue": "os.kill is not available in windows before python 2.7\nAs per the topic, the current celery implementation (>=2.3.0) crashes on windows using python 2.5 and 2.6, because it uses os.kill which is not available in windows before python 2.7\n\n", "before_files": [{"content": "\"\"\"\n\nProcess Pools.\n\n\"\"\"\nimport platform\nimport signal as _signal\n\nfrom os import kill as _kill\n\nfrom celery.concurrency.base import BasePool\nfrom celery.concurrency.processes.pool import Pool, RUN\n\nif platform.system() == \"Windows\": # pragma: no cover\n # On Windows os.kill calls TerminateProcess which cannot be\n # handled by # any process, so this is needed to terminate the task\n # *and its children* (if any).\n from celery.concurrency.processes import _win\n _kill = _win.kill_processtree # noqa\n\n\nclass TaskPool(BasePool):\n \"\"\"Process Pool for processing tasks in parallel.\n\n :param processes: see :attr:`processes`.\n :param logger: see :attr:`logger`.\n\n\n .. attribute:: limit\n\n The number of processes that can run simultaneously.\n\n .. attribute:: logger\n\n The logger used for debugging.\n\n \"\"\"\n Pool = Pool\n\n def on_start(self):\n \"\"\"Run the task pool.\n\n Will pre-fork all workers so they're ready to accept tasks.\n\n \"\"\"\n self._pool = self.Pool(processes=self.limit, **self.options)\n self.on_apply = self._pool.apply_async\n\n def on_stop(self):\n \"\"\"Gracefully stop the pool.\"\"\"\n if self._pool is not None and self._pool._state == RUN:\n self._pool.close()\n self._pool.join()\n self._pool = None\n\n def on_terminate(self):\n \"\"\"Force terminate the pool.\"\"\"\n if self._pool is not None:\n self._pool.terminate()\n self._pool = None\n\n def terminate_job(self, pid, signal=None):\n _kill(pid, signal or _signal.SIGTERM)\n\n def grow(self, n=1):\n return self._pool.grow(n)\n\n def shrink(self, n=1):\n return self._pool.shrink(n)\n\n def _get_info(self):\n return {\"max-concurrency\": self.limit,\n \"processes\": [p.pid for p in self._pool._pool],\n \"max-tasks-per-child\": self._pool._maxtasksperchild,\n \"put-guarded-by-semaphore\": self.putlocks,\n \"timeouts\": (self._pool.soft_timeout, self._pool.timeout)}\n\n @property\n def num_processes(self):\n return self._pool._processes\n", "path": "celery/concurrency/processes/__init__.py"}]} | 1,289 | 175 |
gh_patches_debug_16894 | rasdani/github-patches | git_diff | svthalia__concrexit-2820 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Admin site doesn't show organizers
### Describe the bug
Organizers are not shown in the site admin
### How to reproduce
Steps to reproduce the behaviour:
1. Go to any event
2. See that the organizers field is empty
### Expected behaviour
There should be at least one organizer shown for each event
### Additional context
multiple organizers broke things again
</issue>
<code>
[start of website/events/emails.py]
1 """The emails defined by the events package."""
2 from django.conf import settings
3 from django.core.mail import EmailMessage
4 from django.template.loader import get_template
5 from django.utils.translation import gettext_lazy as _
6
7
8 def notify_first_waiting(event):
9 """Send an email to the first person on the waiting list when someone cancels their registration.
10
11 :param event: the event
12 """
13 if (
14 event.max_participants is not None
15 and event.eventregistration_set.filter(date_cancelled=None).count()
16 > event.max_participants
17 ):
18 # Prepare email to send to the first person on the waiting list
19 first_waiting = event.eventregistration_set.filter(
20 date_cancelled=None
21 ).order_by("date")[event.max_participants]
22
23 text_template = get_template("events/member_email.txt")
24
25 subject = _("[THALIA] Notification about your registration for '{}'").format(
26 event.title
27 )
28
29 organiser_emails = [
30 organiser.contact_address
31 for organiser in event.organisers.all()
32 if organiser.contact_address is not None
33 ]
34 text_message = text_template.render(
35 {
36 "event": event,
37 "registration": first_waiting,
38 "name": first_waiting.name or first_waiting.member.first_name,
39 "base_url": settings.BASE_URL,
40 "organisers": organiser_emails,
41 }
42 )
43
44 EmailMessage(subject, text_message, to=[first_waiting.email]).send()
45
46
47 def notify_organiser(event, registration):
48 """Send an email to the organiser of the event if someone cancels their registration.
49
50 :param event: the event
51 :param registration: the registration that was cancelled
52 """
53 if not event.organisers.exists():
54 return
55
56 text_template = get_template("events/organiser_email.txt")
57 subject = f"Registration for {event.title} cancelled by member"
58 text_message = text_template.render({"event": event, "registration": registration})
59
60 EmailMessage(
61 subject,
62 text_message,
63 to=[
64 organiser.contact_mailinglist.name + "@" + settings.SITE_DOMAIN
65 for organiser in event.organisers.all()
66 ],
67 ).send()
68
69
70 def notify_waiting(event, registration):
71 text_template = get_template("events/more_places_email.txt")
72 subject = _("[THALIA] Notification about your registration for '{}'").format(
73 event.title
74 )
75 text_message = text_template.render(
76 {
77 "event": event,
78 "registration": registration,
79 "name": registration.name or registration.member.first_name,
80 "base_url": settings.BASE_URL,
81 }
82 )
83 EmailMessage(subject, text_message, to=[registration.email]).send()
84
[end of website/events/emails.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/events/emails.py b/website/events/emails.py
--- a/website/events/emails.py
+++ b/website/events/emails.py
@@ -72,12 +72,21 @@
subject = _("[THALIA] Notification about your registration for '{}'").format(
event.title
)
+
+ organiser_emails = [
+ organiser.contact_address
+ for organiser in event.organisers.all()
+ if organiser.contact_address is not None
+ ]
+
text_message = text_template.render(
{
"event": event,
"registration": registration,
"name": registration.name or registration.member.first_name,
"base_url": settings.BASE_URL,
+ "organisers": organiser_emails,
}
)
+
EmailMessage(subject, text_message, to=[registration.email]).send()
| {"golden_diff": "diff --git a/website/events/emails.py b/website/events/emails.py\n--- a/website/events/emails.py\n+++ b/website/events/emails.py\n@@ -72,12 +72,21 @@\n subject = _(\"[THALIA] Notification about your registration for '{}'\").format(\n event.title\n )\n+\n+ organiser_emails = [\n+ organiser.contact_address\n+ for organiser in event.organisers.all()\n+ if organiser.contact_address is not None\n+ ]\n+\n text_message = text_template.render(\n {\n \"event\": event,\n \"registration\": registration,\n \"name\": registration.name or registration.member.first_name,\n \"base_url\": settings.BASE_URL,\n+ \"organisers\": organiser_emails,\n }\n )\n+\n EmailMessage(subject, text_message, to=[registration.email]).send()\n", "issue": "Admin site doesnt show organizers\n### Describe the bug\r\nOrganizers are not shown in the site admin\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to any event\r\n2. See that the organizers field is empty\r\n\r\n### Expected behaviour\r\nthere should be at least one organizer\r\n\r\n### Additional context\r\nmultiple organizers broke things again\r\n\nAdmin site doesnt show organizers\n### Describe the bug\r\nOrganizers are not shown in the site admin\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to any event\r\n2. See that the organizers field is empty\r\n\r\n### Expected behaviour\r\nthere should be at least one organizer\r\n\r\n### Additional context\r\nmultiple organizers broke things again\r\n\n", "before_files": [{"content": "\"\"\"The emails defined by the events package.\"\"\"\nfrom django.conf import settings\nfrom django.core.mail import EmailMessage\nfrom django.template.loader import get_template\nfrom django.utils.translation import gettext_lazy as _\n\n\ndef notify_first_waiting(event):\n \"\"\"Send an email to the first person on the waiting list when someone cancels their registration.\n\n :param event: the event\n \"\"\"\n if (\n event.max_participants is not None\n and event.eventregistration_set.filter(date_cancelled=None).count()\n > event.max_participants\n ):\n # Prepare email to send to the first person on the waiting list\n first_waiting = event.eventregistration_set.filter(\n date_cancelled=None\n ).order_by(\"date\")[event.max_participants]\n\n text_template = get_template(\"events/member_email.txt\")\n\n subject = _(\"[THALIA] Notification about your registration for '{}'\").format(\n event.title\n )\n\n organiser_emails = [\n organiser.contact_address\n for organiser in event.organisers.all()\n if organiser.contact_address is not None\n ]\n text_message = text_template.render(\n {\n \"event\": event,\n \"registration\": first_waiting,\n \"name\": first_waiting.name or first_waiting.member.first_name,\n \"base_url\": settings.BASE_URL,\n \"organisers\": organiser_emails,\n }\n )\n\n EmailMessage(subject, text_message, to=[first_waiting.email]).send()\n\n\ndef notify_organiser(event, registration):\n \"\"\"Send an email to the organiser of the event if someone cancels their registration.\n\n :param event: the event\n :param registration: the registration that was cancelled\n \"\"\"\n if not event.organisers.exists():\n return\n\n text_template = get_template(\"events/organiser_email.txt\")\n subject = f\"Registration for {event.title} cancelled by member\"\n text_message = text_template.render({\"event\": event, \"registration\": registration})\n\n EmailMessage(\n subject,\n text_message,\n to=[\n organiser.contact_mailinglist.name + \"@\" + settings.SITE_DOMAIN\n for 
organiser in event.organisers.all()\n ],\n ).send()\n\n\ndef notify_waiting(event, registration):\n text_template = get_template(\"events/more_places_email.txt\")\n subject = _(\"[THALIA] Notification about your registration for '{}'\").format(\n event.title\n )\n text_message = text_template.render(\n {\n \"event\": event,\n \"registration\": registration,\n \"name\": registration.name or registration.member.first_name,\n \"base_url\": settings.BASE_URL,\n }\n )\n EmailMessage(subject, text_message, to=[registration.email]).send()\n", "path": "website/events/emails.py"}]} | 1,410 | 189 |
gh_patches_debug_28771 | rasdani/github-patches | git_diff | opsdroid__opsdroid-182 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add ssl to the web server
It should be possible to enable ssl on the web server and pass in paths to the ssl keys in the config.
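
For illustration, a minimal sketch of the server side (the `ssl` config key and its `cert`/`key` sub-keys are assumptions for this example, not an agreed schema; aiohttp's `web.run_app` does accept an `ssl_context` argument):

```python
# Sketch: build an SSL context from hypothetical config values and
# hand it to aiohttp's runner.
import ssl

from aiohttp import web

config = {"ssl": {"cert": "/path/to/cert.pem", "key": "/path/to/key.pem"}}

sslcontext = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
sslcontext.load_cert_chain(config["ssl"]["cert"], config["ssl"]["key"])

app = web.Application()
web.run_app(app, host="0.0.0.0", port=8443, ssl_context=sslcontext)
```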
</issue>
<code>
[start of opsdroid/web.py]
1 """Submodule to handle web requests in opsdroid."""
2
3 import json
4 import logging
5
6 from aiohttp import web
7
8 from opsdroid.const import __version__
9
10
11 _LOGGER = logging.getLogger(__name__)
12
13
14 class Web:
15 """Web server for opsdroid."""
16
17 def __init__(self, opsdroid):
18 """Create web object."""
19 self.opsdroid = opsdroid
20 try:
21 self.config = self.opsdroid.config["web"]
22 except KeyError:
23 self.config = {}
24 self.web_app = web.Application(loop=self.opsdroid.eventloop)
25 self.web_app.router.add_get('/', self.web_index_handler)
26 self.web_app.router.add_get('', self.web_index_handler)
27 self.web_app.router.add_get('/stats', self.web_stats_handler)
28 self.web_app.router.add_get('/stats/', self.web_stats_handler)
29
30 @property
31 def get_port(self):
32 """Return port from config or the default."""
33 try:
34 port = self.config["port"]
35 except KeyError:
36 port = 8080
37 return port
38
39 @property
40 def get_host(self):
41 """Return host from config or the default."""
42 try:
43 host = self.config["host"]
44 except KeyError:
45 host = '127.0.0.1'
46 return host
47
48 def start(self):
49 """Start web servers."""
50 _LOGGER.debug(
51 "Starting web server with host %s and port %s",
52 self.get_host, self.get_port)
53 web.run_app(self.web_app, host=self.get_host,
54 port=self.get_port, print=_LOGGER.info)
55
56 @staticmethod
57 def build_response(status, result):
58 """Build a json response object."""
59 return web.Response(text=json.dumps(result), status=status)
60
61 def web_index_handler(self, request):
62 """Handle root web request."""
63 return self.build_response(200, {
64 "message": "Welcome to the opsdroid API"})
65
66 def web_stats_handler(self, request):
67 """Handle stats request."""
68 stats = self.opsdroid.stats
69 try:
70 stats["average_response_time"] = \
71 stats["total_response_time"] / stats["total_responses"]
72 except ZeroDivisionError:
73 stats["average_response_time"] = 0
74
75 return self.build_response(200, {
76 "version": __version__,
77 "messages": {
78 "total_parsed": stats["messages_parsed"],
79 "webhooks_called": stats["webhooks_called"],
80 "total_response_time": stats["total_response_time"],
81 "total_responses": stats["total_responses"],
82 "average_response_time": stats["average_response_time"]
83 },
84 "modules": {
85 "skills": len(self.opsdroid.skills),
86 "connectors": len(self.opsdroid.connectors),
87 "databases": len(self.opsdroid.memory.databases)
88 }
89 })
90
[end of opsdroid/web.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/web.py b/opsdroid/web.py
--- a/opsdroid/web.py
+++ b/opsdroid/web.py
@@ -2,6 +2,7 @@
import json
import logging
+import ssl
from aiohttp import web
@@ -33,7 +34,10 @@
try:
port = self.config["port"]
except KeyError:
- port = 8080
+ if self.get_ssl_context is not None:
+ port = 8443
+ else:
+ port = 8080
return port
@property
@@ -45,13 +49,28 @@
host = '127.0.0.1'
return host
+ @property
+ def get_ssl_context(self):
+ """Return the ssl context or None."""
+ try:
+ ssl_config = self.config["ssl"]
+ sslcontext = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
+ sslcontext.load_cert_chain(ssl_config["cert"], ssl_config["key"])
+ return sslcontext
+ except FileNotFoundError:
+ _LOGGER.error("Cannot find ssl cert or key.")
+ return None
+ except KeyError:
+ return None
+
def start(self):
"""Start web servers."""
_LOGGER.debug(
"Starting web server with host %s and port %s",
self.get_host, self.get_port)
web.run_app(self.web_app, host=self.get_host,
- port=self.get_port, print=_LOGGER.info)
+ port=self.get_port, print=_LOGGER.info,
+ ssl_context=self.get_ssl_context)
@staticmethod
def build_response(status, result):
| {"golden_diff": "diff --git a/opsdroid/web.py b/opsdroid/web.py\n--- a/opsdroid/web.py\n+++ b/opsdroid/web.py\n@@ -2,6 +2,7 @@\n \n import json\n import logging\n+import ssl\n \n from aiohttp import web\n \n@@ -33,7 +34,10 @@\n try:\n port = self.config[\"port\"]\n except KeyError:\n- port = 8080\n+ if self.get_ssl_context is not None:\n+ port = 8443\n+ else:\n+ port = 8080\n return port\n \n @property\n@@ -45,13 +49,28 @@\n host = '127.0.0.1'\n return host\n \n+ @property\n+ def get_ssl_context(self):\n+ \"\"\"Return the ssl context or None.\"\"\"\n+ try:\n+ ssl_config = self.config[\"ssl\"]\n+ sslcontext = ssl.SSLContext(ssl.PROTOCOL_SSLv23)\n+ sslcontext.load_cert_chain(ssl_config[\"cert\"], ssl_config[\"key\"])\n+ return sslcontext\n+ except FileNotFoundError:\n+ _LOGGER.error(\"Cannot find ssl cert or key.\")\n+ return None\n+ except KeyError:\n+ return None\n+\n def start(self):\n \"\"\"Start web servers.\"\"\"\n _LOGGER.debug(\n \"Starting web server with host %s and port %s\",\n self.get_host, self.get_port)\n web.run_app(self.web_app, host=self.get_host,\n- port=self.get_port, print=_LOGGER.info)\n+ port=self.get_port, print=_LOGGER.info,\n+ ssl_context=self.get_ssl_context)\n \n @staticmethod\n def build_response(status, result):\n", "issue": "Add ssl to the web server\nIt should be possible to enable ssl on the web server and pass in paths to the ssl keys in the config.\n", "before_files": [{"content": "\"\"\"Submodule to handle web requests in opsdroid.\"\"\"\n\nimport json\nimport logging\n\nfrom aiohttp import web\n\nfrom opsdroid.const import __version__\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Web:\n \"\"\"Web server for opsdroid.\"\"\"\n\n def __init__(self, opsdroid):\n \"\"\"Create web object.\"\"\"\n self.opsdroid = opsdroid\n try:\n self.config = self.opsdroid.config[\"web\"]\n except KeyError:\n self.config = {}\n self.web_app = web.Application(loop=self.opsdroid.eventloop)\n self.web_app.router.add_get('/', self.web_index_handler)\n self.web_app.router.add_get('', self.web_index_handler)\n self.web_app.router.add_get('/stats', self.web_stats_handler)\n self.web_app.router.add_get('/stats/', self.web_stats_handler)\n\n @property\n def get_port(self):\n \"\"\"Return port from config or the default.\"\"\"\n try:\n port = self.config[\"port\"]\n except KeyError:\n port = 8080\n return port\n\n @property\n def get_host(self):\n \"\"\"Return host from config or the default.\"\"\"\n try:\n host = self.config[\"host\"]\n except KeyError:\n host = '127.0.0.1'\n return host\n\n def start(self):\n \"\"\"Start web servers.\"\"\"\n _LOGGER.debug(\n \"Starting web server with host %s and port %s\",\n self.get_host, self.get_port)\n web.run_app(self.web_app, host=self.get_host,\n port=self.get_port, print=_LOGGER.info)\n\n @staticmethod\n def build_response(status, result):\n \"\"\"Build a json response object.\"\"\"\n return web.Response(text=json.dumps(result), status=status)\n\n def web_index_handler(self, request):\n \"\"\"Handle root web request.\"\"\"\n return self.build_response(200, {\n \"message\": \"Welcome to the opsdroid API\"})\n\n def web_stats_handler(self, request):\n \"\"\"Handle stats request.\"\"\"\n stats = self.opsdroid.stats\n try:\n stats[\"average_response_time\"] = \\\n stats[\"total_response_time\"] / stats[\"total_responses\"]\n except ZeroDivisionError:\n stats[\"average_response_time\"] = 0\n\n return self.build_response(200, {\n \"version\": __version__,\n \"messages\": {\n \"total_parsed\": stats[\"messages_parsed\"],\n 
\"webhooks_called\": stats[\"webhooks_called\"],\n \"total_response_time\": stats[\"total_response_time\"],\n \"total_responses\": stats[\"total_responses\"],\n \"average_response_time\": stats[\"average_response_time\"]\n },\n \"modules\": {\n \"skills\": len(self.opsdroid.skills),\n \"connectors\": len(self.opsdroid.connectors),\n \"databases\": len(self.opsdroid.memory.databases)\n }\n })\n", "path": "opsdroid/web.py"}]} | 1,357 | 388 |
gh_patches_debug_11252 | rasdani/github-patches | git_diff | iterative__dvc-4462 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
experiments: show table includes all staged/stashed experiments instead of only the currently applicable ones
```
example-get-started git:executor-tree py:dvc ❯ dvc exp show --no-pager --include-params=featurize
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ Experiment ┃ auc ┃ featurize.max_features ┃ featurize.ngrams ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ workspace │ 0.54175 │ 500 │ 5 │
│ bbdfa81 (2020-08-21 11:27:38) │ 0.54175 │ 500 │ 5 │
│ ├── ebbf40d (2020-08-21 11:28:42) │ 0.50822 │ 1500 │ 4 │
│ └── *32c3875 (2020-08-21 12:05:16) │ - │ 1500 │ 7 │
│ ├── *8cb834d (2020-08-21 12:04:59) │ - │ 1500 │ 2 │
│ ├── *32d107b (2020-08-21 12:05:01) │ - │ 1500 │ 5 │
│ └── *4f2c53c (2020-08-21 12:05:04) │ - │ 1500 │ 6 │
└────────────────────────────────────┴─────────┴────────────────────────┴──────────────────┘
```
The last three stashed experiments are derived from a different baseline commit and should be excluded by default (unless `--all-commit`/etc are used).
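
A sketch of the filtering I would expect, using the names from `show.py` below (there `stash_revs` maps a stash commit to a tuple whose second element is the baseline rev; treat the exact shape as an assumption):

```python
def queued_for_selected(stash_revs, revs):
    """Keep only queued experiments whose baseline commit was selected."""
    return {
        stash_rev: baseline_rev
        for stash_rev, (_, baseline_rev) in stash_revs.items()
        if baseline_rev in revs
    }
```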
</issue>
<code>
[start of dvc/repo/experiments/show.py]
1 import logging
2 import re
3 from collections import OrderedDict, defaultdict
4 from datetime import datetime
5
6 from dvc.repo import locked
7 from dvc.repo.metrics.show import _collect_metrics, _read_metrics
8 from dvc.repo.params.show import _collect_configs, _read_params
9
10 logger = logging.getLogger(__name__)
11
12
13 EXP_RE = re.compile(r"(?P<rev_sha>[a-f0-9]{7})-(?P<exp_sha>[a-f0-9]+)")
14
15
16 def _collect_experiment(repo, branch, stash=False):
17 res = defaultdict(dict)
18 for rev in repo.brancher(revs=[branch]):
19 if rev == "workspace":
20 res["timestamp"] = None
21 else:
22 commit = repo.scm.repo.rev_parse(rev)
23 res["timestamp"] = datetime.fromtimestamp(commit.committed_date)
24
25 configs = _collect_configs(repo)
26 params = _read_params(repo, configs, rev)
27 if params:
28 res["params"] = params
29
30 res["queued"] = stash
31 if not stash:
32 metrics = _collect_metrics(repo, None, False)
33 vals = _read_metrics(repo, metrics, rev)
34 res["metrics"] = vals
35
36 return res
37
38
39 @locked
40 def show(
41 repo, all_branches=False, all_tags=False, revs=None, all_commits=False
42 ):
43 res = defaultdict(OrderedDict)
44
45 if revs is None:
46 revs = [repo.scm.get_rev()]
47
48 revs = OrderedDict(
49 (rev, None)
50 for rev in repo.brancher(
51 revs=revs,
52 all_branches=all_branches,
53 all_tags=all_tags,
54 all_commits=all_commits,
55 )
56 )
57
58 for rev in revs:
59 res[rev]["baseline"] = _collect_experiment(repo, rev)
60
61 # collect reproduced experiments
62 for exp_branch in repo.experiments.scm.list_branches():
63 m = re.match(EXP_RE, exp_branch)
64 if m:
65 rev = repo.scm.resolve_rev(m.group("rev_sha"))
66 if rev in revs:
67 exp_rev = repo.experiments.scm.resolve_rev(exp_branch)
68 with repo.experiments.chdir():
69 experiment = _collect_experiment(
70 repo.experiments.exp_dvc, exp_branch
71 )
72 res[rev][exp_rev] = experiment
73
74 # collect queued (not yet reproduced) experiments
75 for stash_rev, (_, baseline_rev) in repo.experiments.stash_revs.items():
76 with repo.experiments.chdir():
77 experiment = _collect_experiment(
78 repo.experiments.exp_dvc, stash_rev, stash=True
79 )
80 res[baseline_rev][stash_rev] = experiment
81
82 return res
83
[end of dvc/repo/experiments/show.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/repo/experiments/show.py b/dvc/repo/experiments/show.py
--- a/dvc/repo/experiments/show.py
+++ b/dvc/repo/experiments/show.py
@@ -73,10 +73,11 @@
# collect queued (not yet reproduced) experiments
for stash_rev, (_, baseline_rev) in repo.experiments.stash_revs.items():
- with repo.experiments.chdir():
- experiment = _collect_experiment(
- repo.experiments.exp_dvc, stash_rev, stash=True
- )
- res[baseline_rev][stash_rev] = experiment
+ if baseline_rev in revs:
+ with repo.experiments.chdir():
+ experiment = _collect_experiment(
+ repo.experiments.exp_dvc, stash_rev, stash=True
+ )
+ res[baseline_rev][stash_rev] = experiment
return res
| {"golden_diff": "diff --git a/dvc/repo/experiments/show.py b/dvc/repo/experiments/show.py\n--- a/dvc/repo/experiments/show.py\n+++ b/dvc/repo/experiments/show.py\n@@ -73,10 +73,11 @@\n \n # collect queued (not yet reproduced) experiments\n for stash_rev, (_, baseline_rev) in repo.experiments.stash_revs.items():\n- with repo.experiments.chdir():\n- experiment = _collect_experiment(\n- repo.experiments.exp_dvc, stash_rev, stash=True\n- )\n- res[baseline_rev][stash_rev] = experiment\n+ if baseline_rev in revs:\n+ with repo.experiments.chdir():\n+ experiment = _collect_experiment(\n+ repo.experiments.exp_dvc, stash_rev, stash=True\n+ )\n+ res[baseline_rev][stash_rev] = experiment\n \n return res\n", "issue": "experiments: show table includes all staged/stashed experiments instead of only the currently applicable ones\n```\r\nexample-get-started git:executor-tree py:dvc \u276f dvc exp show --no-pager --include-params=featurize\r\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\r\n\u2503 Experiment \u2503 auc \u2503 featurize.max_features \u2503 featurize.ngrams \u2503\r\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\r\n\u2502 workspace \u2502 0.54175 \u2502 500 \u2502 5 \u2502\r\n\u2502 bbdfa81 (2020-08-21 11:27:38) \u2502 0.54175 \u2502 500 \u2502 5 \u2502\r\n\u2502 \u251c\u2500\u2500 ebbf40d (2020-08-21 11:28:42) \u2502 0.50822 \u2502 1500 \u2502 4 \u2502\r\n\u2502 \u2514\u2500\u2500 *32c3875 (2020-08-21 12:05:16) \u2502 - \u2502 1500 \u2502 7 \u2502\r\n\u2502 \u251c\u2500\u2500 *8cb834d (2020-08-21 12:04:59) \u2502 - \u2502 1500 \u2502 2 \u2502\r\n\u2502 \u251c\u2500\u2500 *32d107b (2020-08-21 12:05:01) \u2502 - \u2502 1500 \u2502 5 \u2502\r\n\u2502 \u2514\u2500\u2500 *4f2c53c (2020-08-21 12:05:04) \u2502 - \u2502 1500 \u2502 6 \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n```\r\n\r\nthe last 3 stashed experiments are derived from a different baseline commit and should be excluded by default (unless `--all-commit`/etc are used)\n", "before_files": [{"content": "import logging\nimport 
re\nfrom collections import OrderedDict, defaultdict\nfrom datetime import datetime\n\nfrom dvc.repo import locked\nfrom dvc.repo.metrics.show import _collect_metrics, _read_metrics\nfrom dvc.repo.params.show import _collect_configs, _read_params\n\nlogger = logging.getLogger(__name__)\n\n\nEXP_RE = re.compile(r\"(?P<rev_sha>[a-f0-9]{7})-(?P<exp_sha>[a-f0-9]+)\")\n\n\ndef _collect_experiment(repo, branch, stash=False):\n res = defaultdict(dict)\n for rev in repo.brancher(revs=[branch]):\n if rev == \"workspace\":\n res[\"timestamp\"] = None\n else:\n commit = repo.scm.repo.rev_parse(rev)\n res[\"timestamp\"] = datetime.fromtimestamp(commit.committed_date)\n\n configs = _collect_configs(repo)\n params = _read_params(repo, configs, rev)\n if params:\n res[\"params\"] = params\n\n res[\"queued\"] = stash\n if not stash:\n metrics = _collect_metrics(repo, None, False)\n vals = _read_metrics(repo, metrics, rev)\n res[\"metrics\"] = vals\n\n return res\n\n\n@locked\ndef show(\n repo, all_branches=False, all_tags=False, revs=None, all_commits=False\n):\n res = defaultdict(OrderedDict)\n\n if revs is None:\n revs = [repo.scm.get_rev()]\n\n revs = OrderedDict(\n (rev, None)\n for rev in repo.brancher(\n revs=revs,\n all_branches=all_branches,\n all_tags=all_tags,\n all_commits=all_commits,\n )\n )\n\n for rev in revs:\n res[rev][\"baseline\"] = _collect_experiment(repo, rev)\n\n # collect reproduced experiments\n for exp_branch in repo.experiments.scm.list_branches():\n m = re.match(EXP_RE, exp_branch)\n if m:\n rev = repo.scm.resolve_rev(m.group(\"rev_sha\"))\n if rev in revs:\n exp_rev = repo.experiments.scm.resolve_rev(exp_branch)\n with repo.experiments.chdir():\n experiment = _collect_experiment(\n repo.experiments.exp_dvc, exp_branch\n )\n res[rev][exp_rev] = experiment\n\n # collect queued (not yet reproduced) experiments\n for stash_rev, (_, baseline_rev) in repo.experiments.stash_revs.items():\n with repo.experiments.chdir():\n experiment = _collect_experiment(\n repo.experiments.exp_dvc, stash_rev, stash=True\n )\n res[baseline_rev][stash_rev] = experiment\n\n return res\n", "path": "dvc/repo/experiments/show.py"}]} | 1,826 | 197 |
gh_patches_debug_8445 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-2951 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The display of usernames in the tutorials / articles list is broken
The username "Bat'" is not displayed correctly.

The behaviour can be seen on this page: https://zestedesavoir.com/tutoriels/?tag=dot-net
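
To make the symptom concrete, here is a minimal sketch of the double escaping (it assumes the code runs inside the project, so the `captureas` template library is registered; "Bat'" is the username from the screenshot):

```python
# The block body is escaped once while being captured, then the captured
# plain string is escaped a second time when it is output.
from django.template import Context, Template

tpl = Template(
    "{% load captureas %}"
    "{% captureas pseudo %}{{ username }}{% endcaptureas %}"
    "{{ pseudo }}"
)
print(tpl.render(Context({"username": "Bat'"})))
# Before a fix this prints "Bat&amp;#39;", so the page shows the literal
# entity "Bat&#39;" instead of Bat'.
```

Presumably the captured value needs to be marked safe before it is re-inserted into the context.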
</issue>
<code>
[start of zds/utils/templatetags/captureas.py]
1 # -*- coding: utf-8 -*-
2
3 from django import template
4
5 register = template.Library()
6
7 """
8 Define a tag allowing to capture template content as a variable.
9 """
10
11
12 @register.tag(name='captureas')
13 def do_captureas(parser, token):
14 """
15 Define a tag allowing to capture template content as a variable.
16
17 :param parser: The django template parser
18 :param token: tag token (tag_name + variable_name)
19 :return: Template node.
20 """
21
22 try:
23 _, variable_name = token.split_contents()
24 except ValueError:
25 raise template.TemplateSyntaxError("'captureas' node requires a variable name.")
26
27 nodelist = parser.parse(('endcaptureas',))
28 parser.delete_first_token()
29
30 return CaptureasNode(nodelist, variable_name)
31
32
33 class CaptureasNode(template.Node):
34 """
35 Capture end render node content to a variable name.
36 """
37
38 def __init__(self, nodelist, variable_name):
39 """
40 Create a template node which render `nodelist` to `variable_name`.
41
42 :param nodelist: The node list to capture.
43 :param variable_name: The variable name which will gain the rendered content.
44 """
45 self.__node_list = nodelist
46 self.__variable_name = variable_name
47
48 def render(self, context):
49 """
50 Render the node list to the variable name.
51
52 :param context: Current context.
53 :return: Empty string
54 :rtype: str
55 """
56 output = self.__node_list.render(context)
57 context[self.__variable_name] = output.strip()
58 return ''
59
[end of zds/utils/templatetags/captureas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zds/utils/templatetags/captureas.py b/zds/utils/templatetags/captureas.py
--- a/zds/utils/templatetags/captureas.py
+++ b/zds/utils/templatetags/captureas.py
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
from django import template
+from django.utils.safestring import mark_safe
register = template.Library()
@@ -54,5 +55,5 @@
:rtype: str
"""
output = self.__node_list.render(context)
- context[self.__variable_name] = output.strip()
+ context[self.__variable_name] = mark_safe(output.strip())
return ''
| {"golden_diff": "diff --git a/zds/utils/templatetags/captureas.py b/zds/utils/templatetags/captureas.py\n--- a/zds/utils/templatetags/captureas.py\n+++ b/zds/utils/templatetags/captureas.py\n@@ -1,6 +1,7 @@\n # -*- coding: utf-8 -*-\n \n from django import template\n+from django.utils.safestring import mark_safe\n \n register = template.Library()\n \n@@ -54,5 +55,5 @@\n :rtype: str\n \"\"\"\n output = self.__node_list.render(context)\n- context[self.__variable_name] = output.strip()\n+ context[self.__variable_name] = mark_safe(output.strip())\n return ''\n", "issue": "L'affichage des pseudos dans la liste des tutoriels / article d\u00e9conne\nL'affichage du pseudo \"Bat'\" n'est pas correct.\n\n\n\nPossible de voir le comportement sur la page: https://zestedesavoir.com/tutoriels/?tag=dot-net\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django import template\n\nregister = template.Library()\n\n\"\"\"\nDefine a tag allowing to capture template content as a variable.\n\"\"\"\n\n\[email protected](name='captureas')\ndef do_captureas(parser, token):\n \"\"\"\n Define a tag allowing to capture template content as a variable.\n\n :param parser: The django template parser\n :param token: tag token (tag_name + variable_name)\n :return: Template node.\n \"\"\"\n\n try:\n _, variable_name = token.split_contents()\n except ValueError:\n raise template.TemplateSyntaxError(\"'captureas' node requires a variable name.\")\n\n nodelist = parser.parse(('endcaptureas',))\n parser.delete_first_token()\n\n return CaptureasNode(nodelist, variable_name)\n\n\nclass CaptureasNode(template.Node):\n \"\"\"\n Capture end render node content to a variable name.\n \"\"\"\n\n def __init__(self, nodelist, variable_name):\n \"\"\"\n Create a template node which render `nodelist` to `variable_name`.\n\n :param nodelist: The node list to capture.\n :param variable_name: The variable name which will gain the rendered content.\n \"\"\"\n self.__node_list = nodelist\n self.__variable_name = variable_name\n\n def render(self, context):\n \"\"\"\n Render the node list to the variable name.\n\n :param context: Current context.\n :return: Empty string\n :rtype: str\n \"\"\"\n output = self.__node_list.render(context)\n context[self.__variable_name] = output.strip()\n return ''\n", "path": "zds/utils/templatetags/captureas.py"}]} | 1,118 | 161 |
gh_patches_debug_5381 | rasdani/github-patches | git_diff | ManimCommunity__manim-1053 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken source links on the stable version of the documentation
## Description of bug / unexpected behavior
Source links on the stable version of the documentation do not work. They link to something like this: https://github.com/ManimCommunity/manim/blob/stable/manim/mobject/changing.py, which is a 404 error.
## Expected behavior
Source links should link to a file containing source code for the stable version.
## How to reproduce the issue
On the documentation website, switch the version to stable. Navigate to and click the source link of any class.
## Additional comments
Perhaps this is an access-rights issue, which would explain why it evaded detection by community devs for so long?
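
If it helps triage: `linkcode_resolve` in `conf.py` (below) only remaps the `latest` slug to `master`, so a `stable` build emits `.../blob/stable/...`, which is exactly the 404 URL above. Assuming Read the Docs sets `READTHEDOCS_VERSION` to `stable` for that build, the mapping would need something like this (a sketch, not the confirmed fix):

```python
# Sketch: treat both Read the Docs slugs as the master branch.
version = os.getenv("READTHEDOCS_VERSION", "master")
if version in ("latest", "stable"):
    version = "master"
```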
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12
13 import os
14 import sys
15 from distutils.sysconfig import get_python_lib
16 from pathlib import Path
17
18 sys.path.insert(0, os.path.abspath("."))
19
20
21 if os.environ.get("READTHEDOCS") == "True":
22 site_path = get_python_lib()
23 # we need to add ffmpeg to the path
24 ffmpeg_path = os.path.join(site_path, "imageio_ffmpeg", "binaries")
25 # the included binary is named ffmpeg-linux..., create a symlink
26 [ffmpeg_bin] = [
27 file for file in os.listdir(ffmpeg_path) if file.startswith("ffmpeg-")
28 ]
29 os.symlink(
30 os.path.join(ffmpeg_path, ffmpeg_bin), os.path.join(ffmpeg_path, "ffmpeg")
31 )
32 os.environ["PATH"] += os.pathsep + ffmpeg_path
33
34
35 # -- Project information -----------------------------------------------------
36
37 project = "Manim"
38 copyright = "2020, The Manim Community Dev Team"
39 author = "The Manim Community Dev Team"
40
41
42 # -- General configuration ---------------------------------------------------
43
44 # Add any Sphinx extension module names here, as strings. They can be
45 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
46 # ones.
47 extensions = [
48 "sphinx.ext.autodoc",
49 "recommonmark",
50 "sphinx_copybutton",
51 "sphinx.ext.napoleon",
52 "sphinx.ext.autosummary",
53 "sphinx.ext.doctest",
54 "sphinx.ext.extlinks",
55 "sphinx.ext.linkcode",
56 "sphinxext.opengraph",
57 "manim_directive",
58 ]
59
60 # Automatically generate stub pages when using the .. autosummary directive
61 autosummary_generate = True
62
63 # generate documentation from type hints
64 autodoc_typehints = "description"
65 autoclass_content = "both"
66
67 # controls whether functions documented by the autofunction directive
68 # appear with their full module names
69 add_module_names = False
70
71 # Add any paths that contain templates here, relative to this directory.
72 templates_path = ["_templates"]
73
74 # Custom section headings in our documentation
75 napoleon_custom_sections = ["Tests", ("Test", "Tests")]
76
77 # List of patterns, relative to source directory, that match files and
78 # directories to ignore when looking for source files.
79 # This pattern also affects html_static_path and html_extra_path.
80 exclude_patterns = []
81
82
83 # -- Options for HTML output -------------------------------------------------
84
85 # The theme to use for HTML and HTML Help pages. See the documentation for
86 # a list of builtin themes.
87 #
88 import guzzle_sphinx_theme
89
90 html_theme_path = guzzle_sphinx_theme.html_theme_path()
91 html_theme = "guzzle_sphinx_theme"
92 html_favicon = str(Path("_static/favicon.ico"))
93
94 # There's a standing issue with Sphinx's new-style sidebars. This is a
95 # workaround. Taken from
96 # https://github.com/guzzle/guzzle_sphinx_theme/issues/33#issuecomment-637081826
97 html_sidebars = {"**": ["logo-text.html", "globaltoc.html", "searchbox.html"]}
98
99 # Register the theme as an extension to generate a sitemap.xml
100 extensions.append("guzzle_sphinx_theme")
101
102 # Add any paths that contain custom static files (such as style sheets) here,
103 # relative to this directory. They are copied after the builtin static files,
104 # so a file named "default.css" will overwrite the builtin "default.css".
105 html_static_path = ["_static"]
106
107 # This specifies any additional css files that will override the theme's
108 html_css_files = ["custom.css"]
109
110 # source links to github
111 def linkcode_resolve(domain, info):
112 if domain != "py":
113 return None
114 if not info["module"]:
115 return None
116 filename = info["module"].replace(".", "/")
117 version = os.getenv("READTHEDOCS_VERSION", "master")
118 if version == "latest":
119 version = "master"
120 return f"https://github.com/ManimCommunity/manim/blob/{version}/{filename}.py"
121
122
123 # external links
124 extlinks = {
125 "issue": ("https://github.com/ManimCommunity/manim/issues/%s", "issue "),
126 "pr": ("https://github.com/ManimCommunity/manim/pull/%s", "pull request "),
127 }
128
129 # opengraph settings
130 ogp_image = "https://www.manim.community/logo.png"
131 ogp_site_name = "Manim Community | Documentation"
132 ogp_site_url = "https://docs.manim.community/"
133
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -115,7 +115,7 @@
return None
filename = info["module"].replace(".", "/")
version = os.getenv("READTHEDOCS_VERSION", "master")
- if version == "latest":
+ if version == "latest" or version == "stable":
version = "master"
return f"https://github.com/ManimCommunity/manim/blob/{version}/{filename}.py"
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -115,7 +115,7 @@\n return None\n filename = info[\"module\"].replace(\".\", \"/\")\n version = os.getenv(\"READTHEDOCS_VERSION\", \"master\")\n- if version == \"latest\":\n+ if version == \"latest\" or version == \"stable\":\n version = \"master\"\n return f\"https://github.com/ManimCommunity/manim/blob/{version}/{filename}.py\"\n", "issue": "Broken source links on the stable version of the documentation\n## Description of bug / unexpected behavior\nSource links on the stable version of documentation does not work. It links to something like this: https://github.com/ManimCommunity/manim/blob/stable/manim/mobject/changing.py which is a 404 error. \n\n## Expected behavior\nSource links should link to a file containing source code for the stable version. \n\n## How to reproduce the issue\nOn the documentation website, switch the version to stable. Navigate to and click the source link of any class. \n\n## Additional comments\nPerhaps this is an access rights issue, which explains why it evaded detection from community devs for so long?\n\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\nfrom distutils.sysconfig import get_python_lib\nfrom pathlib import Path\n\nsys.path.insert(0, os.path.abspath(\".\"))\n\n\nif os.environ.get(\"READTHEDOCS\") == \"True\":\n site_path = get_python_lib()\n # we need to add ffmpeg to the path\n ffmpeg_path = os.path.join(site_path, \"imageio_ffmpeg\", \"binaries\")\n # the included binary is named ffmpeg-linux..., create a symlink\n [ffmpeg_bin] = [\n file for file in os.listdir(ffmpeg_path) if file.startswith(\"ffmpeg-\")\n ]\n os.symlink(\n os.path.join(ffmpeg_path, ffmpeg_bin), os.path.join(ffmpeg_path, \"ffmpeg\")\n )\n os.environ[\"PATH\"] += os.pathsep + ffmpeg_path\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Manim\"\ncopyright = \"2020, The Manim Community Dev Team\"\nauthor = \"The Manim Community Dev Team\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"recommonmark\",\n \"sphinx_copybutton\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.extlinks\",\n \"sphinx.ext.linkcode\",\n \"sphinxext.opengraph\",\n \"manim_directive\",\n]\n\n# Automatically generate stub pages when using the .. 
autosummary directive\nautosummary_generate = True\n\n# generate documentation from type hints\nautodoc_typehints = \"description\"\nautoclass_content = \"both\"\n\n# controls whether functions documented by the autofunction directive\n# appear with their full module names\nadd_module_names = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# Custom section headings in our documentation\nnapoleon_custom_sections = [\"Tests\", (\"Test\", \"Tests\")]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nimport guzzle_sphinx_theme\n\nhtml_theme_path = guzzle_sphinx_theme.html_theme_path()\nhtml_theme = \"guzzle_sphinx_theme\"\nhtml_favicon = str(Path(\"_static/favicon.ico\"))\n\n# There's a standing issue with Sphinx's new-style sidebars. This is a\n# workaround. Taken from\n# https://github.com/guzzle/guzzle_sphinx_theme/issues/33#issuecomment-637081826\nhtml_sidebars = {\"**\": [\"logo-text.html\", \"globaltoc.html\", \"searchbox.html\"]}\n\n# Register the theme as an extension to generate a sitemap.xml\nextensions.append(\"guzzle_sphinx_theme\")\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# This specifies any additional css files that will override the theme's\nhtml_css_files = [\"custom.css\"]\n\n# source links to github\ndef linkcode_resolve(domain, info):\n if domain != \"py\":\n return None\n if not info[\"module\"]:\n return None\n filename = info[\"module\"].replace(\".\", \"/\")\n version = os.getenv(\"READTHEDOCS_VERSION\", \"master\")\n if version == \"latest\":\n version = \"master\"\n return f\"https://github.com/ManimCommunity/manim/blob/{version}/{filename}.py\"\n\n\n# external links\nextlinks = {\n \"issue\": (\"https://github.com/ManimCommunity/manim/issues/%s\", \"issue \"),\n \"pr\": (\"https://github.com/ManimCommunity/manim/pull/%s\", \"pull request \"),\n}\n\n# opengraph settings\nogp_image = \"https://www.manim.community/logo.png\"\nogp_site_name = \"Manim Community | Documentation\"\nogp_site_url = \"https://docs.manim.community/\"\n", "path": "docs/source/conf.py"}]} | 2,043 | 123 |
gh_patches_debug_18454 | rasdani/github-patches | git_diff | napalm-automation__napalm-514 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[2.0] "pip3 install napalm" doesn't install requirements
Debian 9.2 (Stretch) with python3 v3.5.3, pip3 v9.0.1
With v1.2.0, `pip3 install napalm==1.2.0` also installs the required modules (MarkupSafe, jinja2, netaddr, pyYAML, pyeapi, future, pynacl, bcrypt, paramiko, pyFG, scp, netmiko, lxml, pyIOSXR, ncclient, pyserial, junos-eznc, urllib3, idna, certifi, chardet, requests, pynxos, pan-python, requests-toolbelt, xmltodict, pyPluribus, chainmap, librouteros, vyattaconfparser).
With napalm v2.0.0, no required modules are installed by `pip3 install napalm`, so napalm won't work.
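
One observation from `setup.py` (below), offered as a hypothesis rather than a confirmed diagnosis: the package declares `install_requires=[]` and instead installs its dependencies from a custom `install` cmdclass via `pip.main`. That custom command can be bypassed entirely (for example, when pip installs from a wheel, `setup.py` never runs), and the empty `install_requires` means no dependencies get recorded in the package metadata either way. A quick check of that metadata:

```python
# Hypothetical check: with install_requires=[], the installed
# distribution advertises no dependencies at all.
import pkg_resources

print(pkg_resources.get_distribution("napalm").requires())  # -> []
```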
</issue>
<code>
[start of setup.py]
1 """setup.py file."""
2 import uuid
3 import os
4
5 from distutils.core import Command
6 from setuptools import setup, find_packages
7 from setuptools.command import install
8
9
10 from pip.req import parse_requirements
11
12 import pip
13 import sys
14
15 __author__ = 'David Barroso <[email protected]>'
16
17 # Read SUPPORTED_DRIVERS from file (without importing)
18 _locals = {}
19 filename = os.path.join('napalm', '_SUPPORTED_DRIVERS.py')
20 with open(filename) as supported:
21 exec(supported.read(), None, _locals)
22 SUPPORTED_DRIVERS = _locals['SUPPORTED_DRIVERS']
23
24
25 def process_requirements(dep):
26 print("PROCESSING DEPENDENCIES FOR {}".format(dep))
27 u = uuid.uuid1()
28 iter_reqs = parse_requirements("requirements/{}".format(dep), session=u)
29 [pip.main(['install', (str(ir.req))]) for ir in iter_reqs]
30
31
32 def custom_command_driver(driver):
33 class CustomCommand(Command):
34 """A custom command to run Pylint on all Python source files."""
35 user_options = []
36
37 def initialize_options(self):
38 pass
39
40 def finalize_options(self):
41 pass
42
43 def run(self):
44 """Run command."""
45 process_requirements(driver)
46
47 return CustomCommand
48
49
50 class CustomInstall(install.install):
51 """A custom command to run Pylint on all Python source files."""
52
53 def run(self):
54 """Run command."""
55 if any([d in sys.argv for d in SUPPORTED_DRIVERS]):
56 process_requirements('base')
57 else:
58 process_requirements('all')
59 install.install.run(self)
60
61
62 custom_commands = {d: custom_command_driver(d) for d in SUPPORTED_DRIVERS}
63 custom_commands['install'] = CustomInstall
64
65 setup(
66 cmdclass=custom_commands,
67 name="napalm",
68 version='2.0.0',
69 packages=find_packages(exclude=("test*", )),
70 test_suite='test_base',
71 author="David Barroso, Kirk Byers, Mircea Ulinic",
72 author_email="[email protected], [email protected], [email protected]",
73 description="Network Automation and Programmability Abstraction Layer with Multivendor support",
74 classifiers=[
75 'Topic :: Utilities',
76 'Programming Language :: Python',
77 'Programming Language :: Python :: 2',
78 'Programming Language :: Python :: 2.7',
79 'Programming Language :: Python :: 3',
80 'Programming Language :: Python :: 3.4',
81 'Programming Language :: Python :: 3.5',
82 'Programming Language :: Python :: 3.6',
83 'Operating System :: POSIX :: Linux',
84 'Operating System :: MacOS',
85 ],
86 url="https://github.com/napalm-automation/napalm",
87 include_package_data=True,
88 install_requires=[],
89 entry_points={
90 'console_scripts': [
91 'cl_napalm_configure=napalm.base.clitools.cl_napalm_configure:main',
92 'cl_napalm_test=napalm.base.clitools.cl_napalm_test:main',
93 'cl_napalm_validate=napalm.base.clitools.cl_napalm_validate:main',
94 'napalm=napalm.base.clitools.cl_napalm:main',
95 ],
96 }
97 )
98
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,11 +5,12 @@
from distutils.core import Command
from setuptools import setup, find_packages
from setuptools.command import install
+from subprocess import check_call
from pip.req import parse_requirements
-import pip
+import pip # noqa: test pip is installed
import sys
__author__ = 'David Barroso <[email protected]>'
@@ -26,7 +27,9 @@
print("PROCESSING DEPENDENCIES FOR {}".format(dep))
u = uuid.uuid1()
iter_reqs = parse_requirements("requirements/{}".format(dep), session=u)
- [pip.main(['install', (str(ir.req))]) for ir in iter_reqs]
+
+ for ir in iter_reqs:
+ check_call([sys.executable, '-m', 'pip', 'install', str(ir.req)])
def custom_command_driver(driver):
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,11 +5,12 @@\n from distutils.core import Command\n from setuptools import setup, find_packages\n from setuptools.command import install\n+from subprocess import check_call\n \n \n from pip.req import parse_requirements\n \n-import pip\n+import pip # noqa: test pip is installed\n import sys\n \n __author__ = 'David Barroso <[email protected]>'\n@@ -26,7 +27,9 @@\n print(\"PROCESSING DEPENDENCIES FOR {}\".format(dep))\n u = uuid.uuid1()\n iter_reqs = parse_requirements(\"requirements/{}\".format(dep), session=u)\n- [pip.main(['install', (str(ir.req))]) for ir in iter_reqs]\n+\n+ for ir in iter_reqs:\n+ check_call([sys.executable, '-m', 'pip', 'install', str(ir.req)])\n \n \n def custom_command_driver(driver):\n", "issue": "[2.0] \"pip3 install napalm\" doesn't install requirements\nDebian 9.2 (Stretch) with python3 v3.5.3, pip3 v9.0.1\r\n\r\nWith v1.2.0 a `pip3 install napalm==1.2.0` installs also the required modules (MarkupSafe, jinja2, netaddr, pyYAML, pyeapi, future, pynacl, bcrypt, paramiko, pyFG, scp, netmiko, lxml, pyIOSXR, ncclient, pyserial, junos-eznc, urllib3, idna, certifi, chardet, requests, pynxos, pan-python, requests-toolbelt, xmltodict, pyPluribus, chainmap, librouteros, vyattaconfparser).\r\n\r\nWith Napalm v2.0.0 no required module is installed with `pip3 install napalm`, so napalm won't work.\n", "before_files": [{"content": "\"\"\"setup.py file.\"\"\"\nimport uuid\nimport os\n\nfrom distutils.core import Command\nfrom setuptools import setup, find_packages\nfrom setuptools.command import install\n\n\nfrom pip.req import parse_requirements\n\nimport pip\nimport sys\n\n__author__ = 'David Barroso <[email protected]>'\n\n# Read SUPPORTED_DRIVERS from file (without importing)\n_locals = {}\nfilename = os.path.join('napalm', '_SUPPORTED_DRIVERS.py')\nwith open(filename) as supported:\n exec(supported.read(), None, _locals)\n SUPPORTED_DRIVERS = _locals['SUPPORTED_DRIVERS']\n\n\ndef process_requirements(dep):\n print(\"PROCESSING DEPENDENCIES FOR {}\".format(dep))\n u = uuid.uuid1()\n iter_reqs = parse_requirements(\"requirements/{}\".format(dep), session=u)\n [pip.main(['install', (str(ir.req))]) for ir in iter_reqs]\n\n\ndef custom_command_driver(driver):\n class CustomCommand(Command):\n \"\"\"A custom command to run Pylint on all Python source files.\"\"\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n \"\"\"Run command.\"\"\"\n process_requirements(driver)\n\n return CustomCommand\n\n\nclass CustomInstall(install.install):\n \"\"\"A custom command to run Pylint on all Python source files.\"\"\"\n\n def run(self):\n \"\"\"Run command.\"\"\"\n if any([d in sys.argv for d in SUPPORTED_DRIVERS]):\n process_requirements('base')\n else:\n process_requirements('all')\n install.install.run(self)\n\n\ncustom_commands = {d: custom_command_driver(d) for d in SUPPORTED_DRIVERS}\ncustom_commands['install'] = CustomInstall\n\nsetup(\n cmdclass=custom_commands,\n name=\"napalm\",\n version='2.0.0',\n packages=find_packages(exclude=(\"test*\", )),\n test_suite='test_base',\n author=\"David Barroso, Kirk Byers, Mircea Ulinic\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Network Automation and Programmability Abstraction Layer with Multivendor support\",\n classifiers=[\n 'Topic :: Utilities',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 
'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS',\n ],\n url=\"https://github.com/napalm-automation/napalm\",\n include_package_data=True,\n install_requires=[],\n entry_points={\n 'console_scripts': [\n 'cl_napalm_configure=napalm.base.clitools.cl_napalm_configure:main',\n 'cl_napalm_test=napalm.base.clitools.cl_napalm_test:main',\n 'cl_napalm_validate=napalm.base.clitools.cl_napalm_validate:main',\n 'napalm=napalm.base.clitools.cl_napalm:main',\n ],\n }\n)\n", "path": "setup.py"}]} | 1,640 | 217 |
gh_patches_debug_31145 | rasdani/github-patches | git_diff | archlinux__archinstall-408 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_gfx_driver_packages not defined when choosing sway option (current master build)

</issue>
<code>
[start of profiles/sway.py]
1 # A desktop environment using "Sway"
2
3 import archinstall
4
5 is_top_level_profile = False
6
7 __packages__ = ["sway", "swaylock", "swayidle", "waybar", "dmenu", "light", "grim", "slurp", "pavucontrol", "alacritty"]
8
9 def _prep_function(*args, **kwargs):
10 """
11 Magic function called by the importing installer
12 before continuing any further. It also avoids executing any
13 other code in this stage. So it's a safe way to ask the user
14 for more input before any other installer steps start.
15 """
16 if "nvidia" in _gfx_driver_packages:
17 choice = input("The proprietary Nvidia driver is not supported by Sway. It is likely that you will run into issues. Continue anyways? [y/N] ")
18 if choice.lower() in ("n", ""):
19 raise archinstall.lib.exceptions.HardwareIncompatibilityError("Sway does not support the proprietary nvidia drivers.")
20
21 __builtins__['_gfx_driver_packages'] = archinstall.select_driver()
22
23 return True
24
25 # Ensures that this code only gets executed if executed
26 # through importlib.util.spec_from_file_location("sway", "/somewhere/sway.py")
27 # or through conventional import sway
28 if __name__ == 'sway':
29 # Install the Sway packages
30 installation.add_additional_packages(__packages__)
31
[end of profiles/sway.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/profiles/sway.py b/profiles/sway.py
--- a/profiles/sway.py
+++ b/profiles/sway.py
@@ -4,7 +4,19 @@
is_top_level_profile = False
-__packages__ = ["sway", "swaylock", "swayidle", "waybar", "dmenu", "light", "grim", "slurp", "pavucontrol", "alacritty"]
+__packages__ = [
+ "sway",
+ "swaylock",
+ "swayidle",
+ "waybar",
+ "dmenu",
+ "light",
+ "grim",
+ "slurp",
+ "pavucontrol",
+ "alacritty",
+]
+
def _prep_function(*args, **kwargs):
"""
@@ -13,18 +25,26 @@
other code in this stage. So it's a safe way to ask the user
for more input before any other installer steps start.
"""
- if "nvidia" in _gfx_driver_packages:
- choice = input("The proprietary Nvidia driver is not supported by Sway. It is likely that you will run into issues. Continue anyways? [y/N] ")
- if choice.lower() in ("n", ""):
- raise archinstall.lib.exceptions.HardwareIncompatibilityError("Sway does not support the proprietary nvidia drivers.")
-
- __builtins__['_gfx_driver_packages'] = archinstall.select_driver()
+ __builtins__["_gfx_driver_packages"] = archinstall.select_driver()
return True
+
# Ensures that this code only gets executed if executed
# through importlib.util.spec_from_file_location("sway", "/somewhere/sway.py")
# or through conventional import sway
-if __name__ == 'sway':
+if __name__ == "sway":
+ if "nvidia" in _gfx_driver_packages:
+ choice = input(
+ "The proprietary Nvidia driver is not supported by Sway. It is likely that you will run into issues. Continue anyways? [y/N] "
+ )
+ if choice.lower() in ("n", ""):
+ raise archinstall.lib.exceptions.HardwareIncompatibilityError(
+ "Sway does not support the proprietary nvidia drivers."
+ )
+
# Install the Sway packages
installation.add_additional_packages(__packages__)
+
+ # Install the graphics driver packages
+ installation.add_additional_packages(_gfx_driver_packages)
| {"golden_diff": "diff --git a/profiles/sway.py b/profiles/sway.py\n--- a/profiles/sway.py\n+++ b/profiles/sway.py\n@@ -4,7 +4,19 @@\n \n is_top_level_profile = False\n \n-__packages__ = [\"sway\", \"swaylock\", \"swayidle\", \"waybar\", \"dmenu\", \"light\", \"grim\", \"slurp\", \"pavucontrol\", \"alacritty\"]\n+__packages__ = [\n+\t\"sway\",\n+\t\"swaylock\",\n+\t\"swayidle\",\n+\t\"waybar\",\n+\t\"dmenu\",\n+\t\"light\",\n+\t\"grim\",\n+\t\"slurp\",\n+\t\"pavucontrol\",\n+\t\"alacritty\",\n+]\n+\n \n def _prep_function(*args, **kwargs):\n \t\"\"\"\n@@ -13,18 +25,26 @@\n \tother code in this stage. So it's a safe way to ask the user\n \tfor more input before any other installer steps start.\n \t\"\"\"\n-\tif \"nvidia\" in _gfx_driver_packages:\n-\t\tchoice = input(\"The proprietary Nvidia driver is not supported by Sway. It is likely that you will run into issues. Continue anyways? [y/N] \")\n-\t\tif choice.lower() in (\"n\", \"\"):\n-\t\t\traise archinstall.lib.exceptions.HardwareIncompatibilityError(\"Sway does not support the proprietary nvidia drivers.\")\n-\n-\t__builtins__['_gfx_driver_packages'] = archinstall.select_driver()\n+\t__builtins__[\"_gfx_driver_packages\"] = archinstall.select_driver()\n \n \treturn True\n \n+\n # Ensures that this code only gets executed if executed\n # through importlib.util.spec_from_file_location(\"sway\", \"/somewhere/sway.py\")\n # or through conventional import sway\n-if __name__ == 'sway':\n+if __name__ == \"sway\":\n+\tif \"nvidia\" in _gfx_driver_packages:\n+\t\tchoice = input(\n+\t\t\t\"The proprietary Nvidia driver is not supported by Sway. It is likely that you will run into issues. Continue anyways? [y/N] \"\n+\t\t)\n+\t\tif choice.lower() in (\"n\", \"\"):\n+\t\t\traise archinstall.lib.exceptions.HardwareIncompatibilityError(\n+\t\t\t\t\"Sway does not support the proprietary nvidia drivers.\"\n+\t\t\t)\n+\n \t# Install the Sway packages\n \tinstallation.add_additional_packages(__packages__)\n+\n+\t# Install the graphics driver packages\n+\tinstallation.add_additional_packages(_gfx_driver_packages)\n", "issue": "_gfx_driver_packages not defined when choosing sway option (current master build)\n\r\n\n_gfx_driver_packages not defined when choosing sway option (current master build)\n\r\n\n_gfx_driver_packages not defined when choosing sway option (current master build)\n\r\n\n", "before_files": [{"content": "# A desktop environment using \"Sway\"\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\"sway\", \"swaylock\", \"swayidle\", \"waybar\", \"dmenu\", \"light\", \"grim\", \"slurp\", \"pavucontrol\", \"alacritty\"]\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\tif \"nvidia\" in _gfx_driver_packages:\n\t\tchoice = input(\"The proprietary Nvidia driver is not supported by Sway. It is likely that you will run into issues. Continue anyways? 
[y/N] \")\n\t\tif choice.lower() in (\"n\", \"\"):\n\t\t\traise archinstall.lib.exceptions.HardwareIncompatibilityError(\"Sway does not support the proprietary nvidia drivers.\")\n\n\t__builtins__['_gfx_driver_packages'] = archinstall.select_driver()\n\n\treturn True\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"sway\", \"/somewhere/sway.py\")\n# or through conventional import sway\nif __name__ == 'sway':\n\t# Install the Sway packages\n\tinstallation.add_additional_packages(__packages__)\n", "path": "profiles/sway.py"}]} | 1,204 | 559 |
gh_patches_debug_1625 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-799 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Optional dependencies are required for deprecated logging module
🐛 Bug
There is a backwards-compatibility issue coming from PR #767. Notably, if a user doesn't have any of the extra logging dependencies, then there will be an import error.
### To Reproduce
1. Remove all logging dependencies from your environment (E.g. comet)
2. Depend on the deprecated pytorch_lightning.logging package and run
### Expected behavior
We expect to maintain backwards compatibility here so optional dependencies shouldn't be required.
</issue>
<code>
[start of pytorch_lightning/logging/__init__.py]
1 """
2 .. warning:: `logging` package has been renamed to `loggers` since v0.6.1 and will be removed in v0.8.0
3 """
4
5 import warnings
6
7 warnings.warn("`logging` package has been renamed to `loggers` since v0.6.1"
8 " and will be removed in v0.8.0", DeprecationWarning)
9
10 from pytorch_lightning.loggers import * # noqa: F403
11 from pytorch_lightning.loggers import ( # noqa: E402
12 base, comet, mlflow, neptune, tensorboard, test_tube, wandb
13 )
14
[end of pytorch_lightning/logging/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/pytorch_lightning/logging/__init__.py b/pytorch_lightning/logging/__init__.py
--- a/pytorch_lightning/logging/__init__.py
+++ b/pytorch_lightning/logging/__init__.py
@@ -8,6 +8,3 @@
" and will be removed in v0.8.0", DeprecationWarning)
from pytorch_lightning.loggers import * # noqa: F403
-from pytorch_lightning.loggers import ( # noqa: E402
- base, comet, mlflow, neptune, tensorboard, test_tube, wandb
-)
| {"golden_diff": "diff --git a/pytorch_lightning/logging/__init__.py b/pytorch_lightning/logging/__init__.py\n--- a/pytorch_lightning/logging/__init__.py\n+++ b/pytorch_lightning/logging/__init__.py\n@@ -8,6 +8,3 @@\n \" and will be removed in v0.8.0\", DeprecationWarning)\n \n from pytorch_lightning.loggers import * # noqa: F403\n-from pytorch_lightning.loggers import ( # noqa: E402\n- base, comet, mlflow, neptune, tensorboard, test_tube, wandb\n-)\n", "issue": "Optional dependencies are required for deprecated logging module\n\ud83d\udc1b Bug\r\n\r\nThere is a backwards compatibility issues coming from PR #767. Notably, if a user doesn't have any of the extra logging dependencies then they'll be an import error.\r\n\r\n### To Reproduce\r\n\r\n1. Remove all logging dependencies from your environment (E.g. comet)\r\n2. Depend on the deprecated pytorch_lightning.logging package and run\r\n\r\n### Expected behavior\r\n\r\nWe expect to maintain backwards compatibility here so optional dependencies shouldn't be required.\n", "before_files": [{"content": "\"\"\"\n.. warning:: `logging` package has been renamed to `loggers` since v0.6.1 and will be removed in v0.8.0\n\"\"\"\n\nimport warnings\n\nwarnings.warn(\"`logging` package has been renamed to `loggers` since v0.6.1\"\n \" and will be removed in v0.8.0\", DeprecationWarning)\n\nfrom pytorch_lightning.loggers import * # noqa: F403\nfrom pytorch_lightning.loggers import ( # noqa: E402\n base, comet, mlflow, neptune, tensorboard, test_tube, wandb\n)\n", "path": "pytorch_lightning/logging/__init__.py"}]} | 807 | 139 |
gh_patches_debug_12348 | rasdani/github-patches | git_diff | pyodide__pyodide-74 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve parsing of result line
The parsing of the input Python to find the last line which will be evaluated (rather than executed) to provide the result is probably a little brittle in certain corner cases. We should look at what IPython does here and copy that.
</issue>
<code>
[start of src/pyodide.py]
1 """
2 A library of helper utilities for connecting Python to the browser environment.
3 """
4
5 from js import XMLHttpRequest
6
7 import io
8
9
10 def open_url(url):
11 """
12 Fetches a given *url* and returns a io.StringIO to access its contents.
13 """
14 req = XMLHttpRequest.new()
15 req.open('GET', url, False)
16 req.send(None)
17 return io.StringIO(req.response)
18
19
20 __all__ = ['open_url']
21
[end of src/pyodide.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/src/pyodide.py b/src/pyodide.py
--- a/src/pyodide.py
+++ b/src/pyodide.py
@@ -4,6 +4,7 @@
from js import XMLHttpRequest
+import ast
import io
@@ -17,4 +18,23 @@
return io.StringIO(req.response)
-__all__ = ['open_url']
+def eval_code(code, ns):
+ """
+ Runs a string of code, the last part of which may be an expression.
+ """
+ mod = ast.parse(code)
+ if isinstance(mod.body[-1], ast.Expr):
+ expr = ast.Expression(mod.body[-1].value)
+ del mod.body[-1]
+ else:
+ expr = None
+
+ if len(mod.body):
+ exec(compile(mod, '<exec>', mode='exec'), ns, ns)
+ if expr is not None:
+ return eval(compile(expr, '<eval>', mode='eval'), ns, ns)
+ else:
+ return None
+
+
+__all__ = ['open_url', 'eval_code']
| {"golden_diff": "diff --git a/src/pyodide.py b/src/pyodide.py\n--- a/src/pyodide.py\n+++ b/src/pyodide.py\n@@ -4,6 +4,7 @@\n \n from js import XMLHttpRequest\n \n+import ast\n import io\n \n \n@@ -17,4 +18,23 @@\n return io.StringIO(req.response)\n \n \n-__all__ = ['open_url']\n+def eval_code(code, ns):\n+ \"\"\"\n+ Runs a string of code, the last part of which may be an expression.\n+ \"\"\"\n+ mod = ast.parse(code)\n+ if isinstance(mod.body[-1], ast.Expr):\n+ expr = ast.Expression(mod.body[-1].value)\n+ del mod.body[-1]\n+ else:\n+ expr = None\n+\n+ if len(mod.body):\n+ exec(compile(mod, '<exec>', mode='exec'), ns, ns)\n+ if expr is not None:\n+ return eval(compile(expr, '<eval>', mode='eval'), ns, ns)\n+ else:\n+ return None\n+\n+\n+__all__ = ['open_url', 'eval_code']\n", "issue": "Improve parsing of result line\nThe parsing of the input Python to find the last line which will be evaluated (rather than executed) to provide the result is probably a little brittle in certain corner cases. We should look at what IPython does here and copy that.\n", "before_files": [{"content": "\"\"\"\nA library of helper utilities for connecting Python to the browser environment.\n\"\"\"\n\nfrom js import XMLHttpRequest\n\nimport io\n\n\ndef open_url(url):\n \"\"\"\n Fetches a given *url* and returns a io.StringIO to access its contents.\n \"\"\"\n req = XMLHttpRequest.new()\n req.open('GET', url, False)\n req.send(None)\n return io.StringIO(req.response)\n\n\n__all__ = ['open_url']\n", "path": "src/pyodide.py"}]} | 718 | 248 |
gh_patches_debug_30993 | rasdani/github-patches | git_diff | PaddlePaddle__PaddleSpeech-1609 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[vec][search] update to paddlespeech model
</issue>
<code>
[start of demos/audio_searching/src/config.py]
1 # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15
16 ############### Milvus Configuration ###############
17 MILVUS_HOST = os.getenv("MILVUS_HOST", "127.0.0.1")
18 MILVUS_PORT = int(os.getenv("MILVUS_PORT", "19530"))
19 VECTOR_DIMENSION = int(os.getenv("VECTOR_DIMENSION", "2048"))
20 INDEX_FILE_SIZE = int(os.getenv("INDEX_FILE_SIZE", "1024"))
21 METRIC_TYPE = os.getenv("METRIC_TYPE", "L2")
22 DEFAULT_TABLE = os.getenv("DEFAULT_TABLE", "audio_table")
23 TOP_K = int(os.getenv("TOP_K", "10"))
24
25 ############### MySQL Configuration ###############
26 MYSQL_HOST = os.getenv("MYSQL_HOST", "127.0.0.1")
27 MYSQL_PORT = int(os.getenv("MYSQL_PORT", "3306"))
28 MYSQL_USER = os.getenv("MYSQL_USER", "root")
29 MYSQL_PWD = os.getenv("MYSQL_PWD", "123456")
30 MYSQL_DB = os.getenv("MYSQL_DB", "mysql")
31
32 ############### Data Path ###############
33 UPLOAD_PATH = os.getenv("UPLOAD_PATH", "tmp/audio-data")
34
35 ############### Number of Log Files ###############
36 LOGS_NUM = int(os.getenv("logs_num", "0"))
37
[end of demos/audio_searching/src/config.py]
[start of demos/audio_searching/src/encode.py]
1 # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15
16 import librosa
17 import numpy as np
18 from logs import LOGGER
19
20
21 def get_audio_embedding(path):
22 """
23 Use vpr_inference to generate embedding of audio
24 """
25 try:
26 RESAMPLE_RATE = 16000
27 audio, _ = librosa.load(path, sr=RESAMPLE_RATE, mono=True)
28
29 # TODO add infer/python interface to get embedding, now fake it by rand
30 # vpr = ECAPATDNN(checkpoint_path=None, device='cuda')
31 # embedding = vpr.inference(audio)
32 np.random.seed(hash(os.path.basename(path)) % 1000000)
33 embedding = np.random.rand(1, 2048)
34 embedding = embedding / np.linalg.norm(embedding)
35 embedding = embedding.tolist()[0]
36 return embedding
37 except Exception as e:
38 LOGGER.error(f"Error with embedding:{e}")
39 return None
40
[end of demos/audio_searching/src/encode.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/demos/audio_searching/src/config.py b/demos/audio_searching/src/config.py
--- a/demos/audio_searching/src/config.py
+++ b/demos/audio_searching/src/config.py
@@ -16,7 +16,7 @@
############### Milvus Configuration ###############
MILVUS_HOST = os.getenv("MILVUS_HOST", "127.0.0.1")
MILVUS_PORT = int(os.getenv("MILVUS_PORT", "19530"))
-VECTOR_DIMENSION = int(os.getenv("VECTOR_DIMENSION", "2048"))
+VECTOR_DIMENSION = int(os.getenv("VECTOR_DIMENSION", "192"))
INDEX_FILE_SIZE = int(os.getenv("INDEX_FILE_SIZE", "1024"))
METRIC_TYPE = os.getenv("METRIC_TYPE", "L2")
DEFAULT_TABLE = os.getenv("DEFAULT_TABLE", "audio_table")
diff --git a/demos/audio_searching/src/encode.py b/demos/audio_searching/src/encode.py
--- a/demos/audio_searching/src/encode.py
+++ b/demos/audio_searching/src/encode.py
@@ -15,7 +15,12 @@
import librosa
import numpy as np
+from config import DEFAULT_TABLE
+
from logs import LOGGER
+from paddlespeech.cli import VectorExecutor
+
+vector_executor = VectorExecutor()
def get_audio_embedding(path):
@@ -23,16 +28,9 @@
Use vpr_inference to generate embedding of audio
"""
try:
- RESAMPLE_RATE = 16000
- audio, _ = librosa.load(path, sr=RESAMPLE_RATE, mono=True)
-
- # TODO add infer/python interface to get embedding, now fake it by rand
- # vpr = ECAPATDNN(checkpoint_path=None, device='cuda')
- # embedding = vpr.inference(audio)
- np.random.seed(hash(os.path.basename(path)) % 1000000)
- embedding = np.random.rand(1, 2048)
+ embedding = vector_executor(audio_file=path)
embedding = embedding / np.linalg.norm(embedding)
- embedding = embedding.tolist()[0]
+ embedding = embedding.tolist()
return embedding
except Exception as e:
LOGGER.error(f"Error with embedding:{e}")
| {"golden_diff": "diff --git a/demos/audio_searching/src/config.py b/demos/audio_searching/src/config.py\n--- a/demos/audio_searching/src/config.py\n+++ b/demos/audio_searching/src/config.py\n@@ -16,7 +16,7 @@\n ############### Milvus Configuration ###############\n MILVUS_HOST = os.getenv(\"MILVUS_HOST\", \"127.0.0.1\")\n MILVUS_PORT = int(os.getenv(\"MILVUS_PORT\", \"19530\"))\n-VECTOR_DIMENSION = int(os.getenv(\"VECTOR_DIMENSION\", \"2048\"))\n+VECTOR_DIMENSION = int(os.getenv(\"VECTOR_DIMENSION\", \"192\"))\n INDEX_FILE_SIZE = int(os.getenv(\"INDEX_FILE_SIZE\", \"1024\"))\n METRIC_TYPE = os.getenv(\"METRIC_TYPE\", \"L2\")\n DEFAULT_TABLE = os.getenv(\"DEFAULT_TABLE\", \"audio_table\")\ndiff --git a/demos/audio_searching/src/encode.py b/demos/audio_searching/src/encode.py\n--- a/demos/audio_searching/src/encode.py\n+++ b/demos/audio_searching/src/encode.py\n@@ -15,7 +15,12 @@\n \n import librosa\n import numpy as np\n+from config import DEFAULT_TABLE\n+\n from logs import LOGGER\n+from paddlespeech.cli import VectorExecutor\n+\n+vector_executor = VectorExecutor()\n \n \n def get_audio_embedding(path):\n@@ -23,16 +28,9 @@\n Use vpr_inference to generate embedding of audio\n \"\"\"\n try:\n- RESAMPLE_RATE = 16000\n- audio, _ = librosa.load(path, sr=RESAMPLE_RATE, mono=True)\n-\n- # TODO add infer/python interface to get embedding, now fake it by rand\n- # vpr = ECAPATDNN(checkpoint_path=None, device='cuda')\n- # embedding = vpr.inference(audio)\n- np.random.seed(hash(os.path.basename(path)) % 1000000)\n- embedding = np.random.rand(1, 2048)\n+ embedding = vector_executor(audio_file=path)\n embedding = embedding / np.linalg.norm(embedding)\n- embedding = embedding.tolist()[0]\n+ embedding = embedding.tolist()\n return embedding\n except Exception as e:\n LOGGER.error(f\"Error with embedding:{e}\")\n", "issue": "[vec][search] update to paddlespeech model\n\n", "before_files": [{"content": "# Copyright (c) 2022 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\n\n############### Milvus Configuration ###############\nMILVUS_HOST = os.getenv(\"MILVUS_HOST\", \"127.0.0.1\")\nMILVUS_PORT = int(os.getenv(\"MILVUS_PORT\", \"19530\"))\nVECTOR_DIMENSION = int(os.getenv(\"VECTOR_DIMENSION\", \"2048\"))\nINDEX_FILE_SIZE = int(os.getenv(\"INDEX_FILE_SIZE\", \"1024\"))\nMETRIC_TYPE = os.getenv(\"METRIC_TYPE\", \"L2\")\nDEFAULT_TABLE = os.getenv(\"DEFAULT_TABLE\", \"audio_table\")\nTOP_K = int(os.getenv(\"TOP_K\", \"10\"))\n\n############### MySQL Configuration ###############\nMYSQL_HOST = os.getenv(\"MYSQL_HOST\", \"127.0.0.1\")\nMYSQL_PORT = int(os.getenv(\"MYSQL_PORT\", \"3306\"))\nMYSQL_USER = os.getenv(\"MYSQL_USER\", \"root\")\nMYSQL_PWD = os.getenv(\"MYSQL_PWD\", \"123456\")\nMYSQL_DB = os.getenv(\"MYSQL_DB\", \"mysql\")\n\n############### Data Path ###############\nUPLOAD_PATH = os.getenv(\"UPLOAD_PATH\", \"tmp/audio-data\")\n\n############### Number of Log Files ###############\nLOGS_NUM = int(os.getenv(\"logs_num\", \"0\"))\n", "path": "demos/audio_searching/src/config.py"}, {"content": "# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\n\nimport librosa\nimport numpy as np\nfrom logs import LOGGER\n\n\ndef get_audio_embedding(path):\n \"\"\"\n Use vpr_inference to generate embedding of audio\n \"\"\"\n try:\n RESAMPLE_RATE = 16000\n audio, _ = librosa.load(path, sr=RESAMPLE_RATE, mono=True)\n\n # TODO add infer/python interface to get embedding, now fake it by rand\n # vpr = ECAPATDNN(checkpoint_path=None, device='cuda')\n # embedding = vpr.inference(audio)\n np.random.seed(hash(os.path.basename(path)) % 1000000)\n embedding = np.random.rand(1, 2048)\n embedding = embedding / np.linalg.norm(embedding)\n embedding = embedding.tolist()[0]\n return embedding\n except Exception as e:\n LOGGER.error(f\"Error with embedding:{e}\")\n return None\n", "path": "demos/audio_searching/src/encode.py"}]} | 1,460 | 512 |
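Two details of this diff worth spelling out: `VECTOR_DIMENSION` drops to 192 because the speaker model behind `VectorExecutor` (an ECAPA-TDNN variant, as far as the PaddleSpeech docs indicate) emits 192-dimensional embeddings, and the retained L2-normalisation makes every stored vector unit-length, so Milvus's `L2` metric ranks results the way cosine distance would. A toy numeric check of that normalisation step:

```python
import numpy as np

emb = np.array([3.0, 4.0])            # 2-d stand-in for a 192-d embedding
unit = emb / np.linalg.norm(emb)      # -> array([0.6, 0.8])
assert np.isclose(np.linalg.norm(unit), 1.0)
```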
gh_patches_debug_14 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-2135 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Browse Page Map: opening a country link has different behaviors
From the map: open in new tab
From the list: open in same tab
We should make it the same: open in same tab (unless there was some specification that it should be a new tab that I'm not remembering).
Graphic in Colombia page: instead of line (time-series) make it a bar graph.
CJ added current action for this issue:
- Change "Number of IDPs" graph **from** bar graph **to** line graph.
-----------------Original issue text follows---------------------
I think the graph **Number of people with access constrains** would look better if it was a bar graph instead of a line, time-series:

The reason I think that is that the lines give the impression the indicator changes significantly every month, but in a continuum of time. Bar graphs will help the user compare months as nearly independent measurements, which influences better consumption of the data in my opinion.
I chatted with the Data Team about this (including @JavierTeran) and they've approved this suggestion.
</issue>
<code>
[start of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
1 hdx_version = 'v0.6.1'
2
[end of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.6.1'
+hdx_version = 'v0.6.2'
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.6.1'\n+hdx_version = 'v0.6.2'\n", "issue": "Browse Page Map: opening a country link has different behaviors\nFrom the map: open in new tab\nFrom the list: open in same tab\n\nWe should make it the same: open in same tab (unless there was some specification that it should be a new tab that I'm not remembering. \n\nGraphic in Colombia page: instead of line (time-series) make it a bar graph.\nCJ added current action for this issue:\n- Change \"Number of IDPs\" graph **from** bar graph **to** line graph. \n\n-----------------Original issue text follows---------------------\nI think the graph **Number of people with access constrains** would look better if it was a bar graph instead of a line, time-series: \n\n\n\nThe reason I think that is that the lines give the impression the indicator changes significantly every month, but in a continuum of time. Bar graphs will help the user compare months as nearly independent measurements, which is influences better consumption of the data in my opinion. \n\nI chatted with the Data Team about this (including @JavierTeran) and they've approved this suggestion.\n\n", "before_files": [{"content": "hdx_version = 'v0.6.1'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 878 | 107 |
gh_patches_debug_2059 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1062 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The schema is empty for unauthorised users.
Another problem with this - the schema is empty for unauthorised users. You need to add `public=True` to `get_schema_view`.
_Originally posted by @jmsmkn in https://github.com/comic/grand-challenge.org/issues/1017#issuecomment-567254400_
</issue>
<code>
[start of app/grandchallenge/api/urls.py]
1 from django.conf import settings
2 from django.conf.urls import include, url
3 from django.urls import path
4 from drf_yasg import openapi
5 from drf_yasg.views import get_schema_view
6 from rest_framework import permissions, routers
7
8 from grandchallenge.algorithms.views import (
9 AlgorithmImageViewSet,
10 AlgorithmViewSet,
11 JobViewSet,
12 ResultViewSet,
13 )
14 from grandchallenge.cases.views import (
15 ImageViewSet,
16 RawImageUploadSessionViewSet,
17 )
18 from grandchallenge.jqfileupload.views import StagedFileViewSet
19 from grandchallenge.reader_studies.views import (
20 AnswerViewSet,
21 QuestionViewSet,
22 ReaderStudyViewSet,
23 )
24 from grandchallenge.retina_api.views import LandmarkAnnotationSetViewSet
25 from grandchallenge.subdomains.utils import reverse_lazy
26 from grandchallenge.workstation_configs.views import WorkstationConfigViewSet
27 from grandchallenge.workstations.views import SessionViewSet
28
29 app_name = "api"
30
31 router = routers.DefaultRouter()
32 router.register(
33 r"cases/upload-sessions",
34 RawImageUploadSessionViewSet,
35 basename="upload-session",
36 )
37 router.register(r"cases/images", ImageViewSet, basename="image")
38 router.register(r"workstations/sessions", SessionViewSet)
39 router.register(
40 r"workstations/configs",
41 WorkstationConfigViewSet,
42 basename="workstations-config",
43 )
44 router.register(r"algorithms/jobs", JobViewSet, basename="algorithms-job")
45 router.register(
46 r"algorithms/results", ResultViewSet, basename="algorithms-result"
47 )
48 router.register(
49 r"algorithms/images", AlgorithmImageViewSet, basename="algorithms-image"
50 )
51 router.register(r"algorithms", AlgorithmViewSet, basename="algorithm")
52
53 router.register(
54 r"reader-studies/answers", AnswerViewSet, basename="reader-studies-answer"
55 )
56 router.register(
57 r"reader-studies/questions",
58 QuestionViewSet,
59 basename="reader-studies-question",
60 )
61 router.register(r"reader-studies", ReaderStudyViewSet, basename="reader-study")
62 router.register(r"chunked-uploads", StagedFileViewSet, basename="staged-file")
63
64 router.register(
65 r"retina/landmark-annotation",
66 LandmarkAnnotationSetViewSet,
67 basename="landmark-annotation",
68 )
69
70 # TODO: add terms_of_service and contact
71 schema_view = get_schema_view(
72 openapi.Info(
73 title=f"{settings.SESSION_COOKIE_DOMAIN.lstrip('.')} API",
74 default_version="v1",
75 description=f"The API for {settings.SESSION_COOKIE_DOMAIN.lstrip('.')}.",
76 license=openapi.License(name="Apache License 2.0"),
77 terms_of_service=reverse_lazy(
78 "policies:detail", kwargs={"slug": "terms-of-service"}
79 ),
80 ),
81 permission_classes=(permissions.AllowAny,),
82 patterns=[path("api/v1/", include(router.urls))],
83 )
84
85 urlpatterns = [
86 url(
87 r"^swagger(?P<format>\.json|\.yaml)$",
88 schema_view.without_ui(),
89 name="schema-json",
90 ),
91 # Do not namespace the router.urls without updating the view names in
92 # the serializers
93 path("v1/", include(router.urls)),
94 path("auth/", include("rest_framework.urls", namespace="rest_framework")),
95 path("", schema_view.with_ui("swagger"), name="schema-docs"),
96 ]
97
[end of app/grandchallenge/api/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/app/grandchallenge/api/urls.py b/app/grandchallenge/api/urls.py
--- a/app/grandchallenge/api/urls.py
+++ b/app/grandchallenge/api/urls.py
@@ -78,6 +78,7 @@
"policies:detail", kwargs={"slug": "terms-of-service"}
),
),
+ public=True,
permission_classes=(permissions.AllowAny,),
patterns=[path("api/v1/", include(router.urls))],
)
| {"golden_diff": "diff --git a/app/grandchallenge/api/urls.py b/app/grandchallenge/api/urls.py\n--- a/app/grandchallenge/api/urls.py\n+++ b/app/grandchallenge/api/urls.py\n@@ -78,6 +78,7 @@\n \"policies:detail\", kwargs={\"slug\": \"terms-of-service\"}\n ),\n ),\n+ public=True,\n permission_classes=(permissions.AllowAny,),\n patterns=[path(\"api/v1/\", include(router.urls))],\n )\n", "issue": "The schema is empty for unauthorised users.\nAnother problem with this - the schema is empty for unauthorised users. You need to add `public=True` to `get_schema_view`.\r\n\r\n_Originally posted by @jmsmkn in https://github.com/comic/grand-challenge.org/issues/1017#issuecomment-567254400_\n", "before_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import include, url\nfrom django.urls import path\nfrom drf_yasg import openapi\nfrom drf_yasg.views import get_schema_view\nfrom rest_framework import permissions, routers\n\nfrom grandchallenge.algorithms.views import (\n AlgorithmImageViewSet,\n AlgorithmViewSet,\n JobViewSet,\n ResultViewSet,\n)\nfrom grandchallenge.cases.views import (\n ImageViewSet,\n RawImageUploadSessionViewSet,\n)\nfrom grandchallenge.jqfileupload.views import StagedFileViewSet\nfrom grandchallenge.reader_studies.views import (\n AnswerViewSet,\n QuestionViewSet,\n ReaderStudyViewSet,\n)\nfrom grandchallenge.retina_api.views import LandmarkAnnotationSetViewSet\nfrom grandchallenge.subdomains.utils import reverse_lazy\nfrom grandchallenge.workstation_configs.views import WorkstationConfigViewSet\nfrom grandchallenge.workstations.views import SessionViewSet\n\napp_name = \"api\"\n\nrouter = routers.DefaultRouter()\nrouter.register(\n r\"cases/upload-sessions\",\n RawImageUploadSessionViewSet,\n basename=\"upload-session\",\n)\nrouter.register(r\"cases/images\", ImageViewSet, basename=\"image\")\nrouter.register(r\"workstations/sessions\", SessionViewSet)\nrouter.register(\n r\"workstations/configs\",\n WorkstationConfigViewSet,\n basename=\"workstations-config\",\n)\nrouter.register(r\"algorithms/jobs\", JobViewSet, basename=\"algorithms-job\")\nrouter.register(\n r\"algorithms/results\", ResultViewSet, basename=\"algorithms-result\"\n)\nrouter.register(\n r\"algorithms/images\", AlgorithmImageViewSet, basename=\"algorithms-image\"\n)\nrouter.register(r\"algorithms\", AlgorithmViewSet, basename=\"algorithm\")\n\nrouter.register(\n r\"reader-studies/answers\", AnswerViewSet, basename=\"reader-studies-answer\"\n)\nrouter.register(\n r\"reader-studies/questions\",\n QuestionViewSet,\n basename=\"reader-studies-question\",\n)\nrouter.register(r\"reader-studies\", ReaderStudyViewSet, basename=\"reader-study\")\nrouter.register(r\"chunked-uploads\", StagedFileViewSet, basename=\"staged-file\")\n\nrouter.register(\n r\"retina/landmark-annotation\",\n LandmarkAnnotationSetViewSet,\n basename=\"landmark-annotation\",\n)\n\n# TODO: add terms_of_service and contact\nschema_view = get_schema_view(\n openapi.Info(\n title=f\"{settings.SESSION_COOKIE_DOMAIN.lstrip('.')} API\",\n default_version=\"v1\",\n description=f\"The API for {settings.SESSION_COOKIE_DOMAIN.lstrip('.')}.\",\n license=openapi.License(name=\"Apache License 2.0\"),\n terms_of_service=reverse_lazy(\n \"policies:detail\", kwargs={\"slug\": \"terms-of-service\"}\n ),\n ),\n permission_classes=(permissions.AllowAny,),\n patterns=[path(\"api/v1/\", include(router.urls))],\n)\n\nurlpatterns = [\n url(\n r\"^swagger(?P<format>\\.json|\\.yaml)$\",\n schema_view.without_ui(),\n name=\"schema-json\",\n ),\n 
# Do not namespace the router.urls without updating the view names in\n # the serializers\n path(\"v1/\", include(router.urls)),\n path(\"auth/\", include(\"rest_framework.urls\", namespace=\"rest_framework\")),\n path(\"\", schema_view.with_ui(\"swagger\"), name=\"schema-docs\"),\n]\n", "path": "app/grandchallenge/api/urls.py"}]} | 1,499 | 105 |
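The distinction the one-line fix turns on: in drf-yasg, `permission_classes` decides who may *fetch* the schema document, while `public` decides what goes *into* it — with the default `public=False` the generator filters endpoints by the requesting user's permissions, so an anonymous request yields an empty schema. A minimal sketch of the corrected call:

```python
from drf_yasg import openapi
from drf_yasg.views import get_schema_view
from rest_framework import permissions

schema_view = get_schema_view(
    openapi.Info(title="Example API", default_version="v1"),
    public=True,                                 # include every endpoint for everyone
    permission_classes=(permissions.AllowAny,),  # let anyone download the document
)
```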
gh_patches_debug_12858 | rasdani/github-patches | git_diff | streamlink__streamlink-5616 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.hls: recognize URLs with uppercase ".M3U8"
### Checklist
- [X] This is a bug report and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.2.1+29.gc82a8535
### Description
Currently, a URL with an uppercase `.M3U8` suffix, e.g. `https://example.com/live.M3U8`, would not be recognized as an HLS URL.
### Debug log
```text
>streamlink https://example.com/live.M3U8 best --loglevel=debug
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.11.1
[cli][debug] OpenSSL: OpenSSL 1.1.1q 5 Jul 2022
[cli][debug] Streamlink: 6.2.1+29.gc82a8535
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.5.7
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.2
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.31.0
[cli][debug] trio: 0.22.0
[cli][debug] trio-websocket: 0.9.2
[cli][debug] typing-extensions: 4.4.0
[cli][debug] urllib3: 1.26.15
[cli][debug] websocket-client: 1.5.1
[cli][debug] Arguments:
[cli][debug] url=https://example.com/live.M3U8
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --player=mpv.exe
[cli][debug] --stream-segment-threads=10
[cli][debug] --hls-segment-queue-threshold=0.0
error: No plugin can handle URL: https://example.com/live.M3U8
```
</issue>
<code>
[start of src/streamlink/plugins/dash.py]
1 import logging
2 import re
3
4 from streamlink.plugin import Plugin, pluginmatcher
5 from streamlink.plugin.plugin import LOW_PRIORITY, parse_params, stream_weight
6 from streamlink.stream.dash import DASHStream
7 from streamlink.utils.url import update_scheme
8
9
10 log = logging.getLogger(__name__)
11
12
13 @pluginmatcher(re.compile(
14 r"dash://(?P<url>\S+)(?:\s(?P<params>.+))?$",
15 ))
16 @pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(
17 r"(?P<url>\S+\.mpd(?:\?\S*)?)(?:\s(?P<params>.+))?$",
18 ))
19 class MPEGDASH(Plugin):
20 @classmethod
21 def stream_weight(cls, stream):
22 match = re.match(r"^(?:(.*)\+)?(?:a(\d+)k)$", stream)
23 if match and match.group(1) and match.group(2):
24 weight, group = stream_weight(match.group(1))
25 weight += int(match.group(2))
26 return weight, group
27 elif match and match.group(2):
28 return stream_weight(f"{match.group(2)}k")
29 else:
30 return stream_weight(stream)
31
32 def _get_streams(self):
33 data = self.match.groupdict()
34 url = update_scheme("https://", data.get("url"), force=False)
35 params = parse_params(data.get("params"))
36 log.debug(f"URL={url}; params={params}")
37
38 return DASHStream.parse_manifest(self.session, url, **params)
39
40
41 __plugin__ = MPEGDASH
42
[end of src/streamlink/plugins/dash.py]
[start of src/streamlink/plugins/hls.py]
1 import logging
2 import re
3
4 from streamlink.plugin import Plugin, pluginmatcher
5 from streamlink.plugin.plugin import LOW_PRIORITY, parse_params
6 from streamlink.stream.hls import HLSStream
7 from streamlink.utils.url import update_scheme
8
9
10 log = logging.getLogger(__name__)
11
12
13 @pluginmatcher(re.compile(
14 r"hls(?:variant)?://(?P<url>\S+)(?:\s(?P<params>.+))?$",
15 ))
16 @pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(
17 r"(?P<url>\S+\.m3u8(?:\?\S*)?)(?:\s(?P<params>.+))?$",
18 ))
19 class HLSPlugin(Plugin):
20 def _get_streams(self):
21 data = self.match.groupdict()
22 url = update_scheme("https://", data.get("url"), force=False)
23 params = parse_params(data.get("params"))
24 log.debug(f"URL={url}; params={params}")
25
26 streams = HLSStream.parse_variant_playlist(self.session, url, **params)
27
28 return streams or {"live": HLSStream(self.session, url, **params)}
29
30
31 __plugin__ = HLSPlugin
32
[end of src/streamlink/plugins/hls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/src/streamlink/plugins/dash.py b/src/streamlink/plugins/dash.py
--- a/src/streamlink/plugins/dash.py
+++ b/src/streamlink/plugins/dash.py
@@ -15,6 +15,7 @@
))
@pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(
r"(?P<url>\S+\.mpd(?:\?\S*)?)(?:\s(?P<params>.+))?$",
+ re.IGNORECASE,
))
class MPEGDASH(Plugin):
@classmethod
diff --git a/src/streamlink/plugins/hls.py b/src/streamlink/plugins/hls.py
--- a/src/streamlink/plugins/hls.py
+++ b/src/streamlink/plugins/hls.py
@@ -15,6 +15,7 @@
))
@pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(
r"(?P<url>\S+\.m3u8(?:\?\S*)?)(?:\s(?P<params>.+))?$",
+ re.IGNORECASE,
))
class HLSPlugin(Plugin):
def _get_streams(self):
| {"golden_diff": "diff --git a/src/streamlink/plugins/dash.py b/src/streamlink/plugins/dash.py\n--- a/src/streamlink/plugins/dash.py\n+++ b/src/streamlink/plugins/dash.py\n@@ -15,6 +15,7 @@\n ))\n @pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(\n r\"(?P<url>\\S+\\.mpd(?:\\?\\S*)?)(?:\\s(?P<params>.+))?$\",\n+ re.IGNORECASE,\n ))\n class MPEGDASH(Plugin):\n @classmethod\ndiff --git a/src/streamlink/plugins/hls.py b/src/streamlink/plugins/hls.py\n--- a/src/streamlink/plugins/hls.py\n+++ b/src/streamlink/plugins/hls.py\n@@ -15,6 +15,7 @@\n ))\n @pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(\n r\"(?P<url>\\S+\\.m3u8(?:\\?\\S*)?)(?:\\s(?P<params>.+))?$\",\n+ re.IGNORECASE,\n ))\n class HLSPlugin(Plugin):\n def _get_streams(self):\n", "issue": "plugins.hls: recognize URLs with uppercase \".M3U8\"\n### Checklist\n\n- [X] This is a bug report and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nstreamlink 6.2.1+29.gc82a8535\n\n### Description\n\nCurrently a URL with upper case M3U8, e.g. `https://example.com/live.M3U8` would not be recognized as an HLS URL.\r\n\r\n\n\n### Debug log\n\n```text\n>streamlink https://example.com/live.M3U8 best --loglevel=debug \r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.11.1\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1q 5 Jul 2022\r\n[cli][debug] Streamlink: 6.2.1+29.gc82a8535\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.5.7\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] trio: 0.22.0\r\n[cli][debug] trio-websocket: 0.9.2\r\n[cli][debug] typing-extensions: 4.4.0\r\n[cli][debug] urllib3: 1.26.15\r\n[cli][debug] websocket-client: 1.5.1\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://example.com/live.M3U8\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --player=mpv.exe\r\n[cli][debug] --stream-segment-threads=10\r\n[cli][debug] --hls-segment-queue-threshold=0.0\r\nerror: No plugin can handle URL: https://example.com/live.M3U8\n```\n\n", "before_files": [{"content": "import logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.plugin import LOW_PRIORITY, parse_params, stream_weight\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.utils.url import update_scheme\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"dash://(?P<url>\\S+)(?:\\s(?P<params>.+))?$\",\n))\n@pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(\n r\"(?P<url>\\S+\\.mpd(?:\\?\\S*)?)(?:\\s(?P<params>.+))?$\",\n))\nclass MPEGDASH(Plugin):\n @classmethod\n def stream_weight(cls, stream):\n match = re.match(r\"^(?:(.*)\\+)?(?:a(\\d+)k)$\", stream)\n if match and match.group(1) and match.group(2):\n weight, group = stream_weight(match.group(1))\n weight += int(match.group(2))\n return weight, group\n elif match and match.group(2):\n return stream_weight(f\"{match.group(2)}k\")\n else:\n return 
stream_weight(stream)\n\n def _get_streams(self):\n data = self.match.groupdict()\n url = update_scheme(\"https://\", data.get(\"url\"), force=False)\n params = parse_params(data.get(\"params\"))\n log.debug(f\"URL={url}; params={params}\")\n\n return DASHStream.parse_manifest(self.session, url, **params)\n\n\n__plugin__ = MPEGDASH\n", "path": "src/streamlink/plugins/dash.py"}, {"content": "import logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.plugin import LOW_PRIORITY, parse_params\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.url import update_scheme\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"hls(?:variant)?://(?P<url>\\S+)(?:\\s(?P<params>.+))?$\",\n))\n@pluginmatcher(priority=LOW_PRIORITY, pattern=re.compile(\n r\"(?P<url>\\S+\\.m3u8(?:\\?\\S*)?)(?:\\s(?P<params>.+))?$\",\n))\nclass HLSPlugin(Plugin):\n def _get_streams(self):\n data = self.match.groupdict()\n url = update_scheme(\"https://\", data.get(\"url\"), force=False)\n params = parse_params(data.get(\"params\"))\n log.debug(f\"URL={url}; params={params}\")\n\n streams = HLSStream.parse_variant_playlist(self.session, url, **params)\n\n return streams or {\"live\": HLSStream(self.session, url, **params)}\n\n\n__plugin__ = HLSPlugin\n", "path": "src/streamlink/plugins/hls.py"}]} | 1,931 | 235 |
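The behavioural change reduces to one flag on the existing patterns; a standalone check against the same LOW_PRIORITY expression (outside Streamlink's plugin machinery):

```python
import re

pattern = re.compile(
    r"(?P<url>\S+\.m3u8(?:\?\S*)?)(?:\s(?P<params>.+))?$",
    re.IGNORECASE,
)
assert pattern.match("https://example.com/live.M3U8")
assert pattern.match("https://example.com/live.m3u8?token=abc")
```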
gh_patches_debug_4119 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
First time password reset should redirect to login page instead of confirmation page
## Description
* After the user resets their password at the page: `/auth/password_reset_confirm`, they are redirected to `/auth/reset/done/`.
* They should be redirected to `/auth/login` instead.
</issue>
<code>
[start of mathesar/users/password_reset.py]
1 from django.contrib.auth.forms import SetPasswordForm
2 from django.contrib.auth.views import PasswordResetConfirmView
3 from django.utils.decorators import method_decorator
4 from django.views.decorators.cache import never_cache
5 from django.views.decorators.debug import sensitive_post_parameters
6 from django.utils.translation import gettext_lazy as _
7
8
9 class MathesarSetPasswordForm(SetPasswordForm):
10 def save(self, commit=True):
11 password = self.cleaned_data["new_password1"]
12 self.user.set_password(password)
13 # Default password is replaced with a password is set by the user, so change the status
14 self.user.password_change_needed = False
15 if commit:
16 self.user.save()
17 return self.user
18
19
20 class MathesarPasswordResetConfirmView(PasswordResetConfirmView):
21 # Override default form as we need custom save behaviour
22 form_class = MathesarSetPasswordForm
23 template_name = 'users/password_reset_confirmation.html'
24 title = _('Change Default Password')
25
26 @method_decorator(sensitive_post_parameters())
27 @method_decorator(never_cache)
28 def dispatch(self, *args, **kwargs):
29 self.user = self.request.user
30 self.validlink = True
31 # Avoid calling the PasswordResetConfirmView `dispatch` method
32 # as it contains behaviours not suited for our user flow
33 return super(PasswordResetConfirmView, self).dispatch(*args, **kwargs)
34
35 def form_valid(self, form):
36 form.save()
37 return super(PasswordResetConfirmView, self).form_valid(form)
38
[end of mathesar/users/password_reset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/mathesar/users/password_reset.py b/mathesar/users/password_reset.py
--- a/mathesar/users/password_reset.py
+++ b/mathesar/users/password_reset.py
@@ -22,6 +22,7 @@
form_class = MathesarSetPasswordForm
template_name = 'users/password_reset_confirmation.html'
title = _('Change Default Password')
+ success_url = "/auth/login"
@method_decorator(sensitive_post_parameters())
@method_decorator(never_cache)
| {"golden_diff": "diff --git a/mathesar/users/password_reset.py b/mathesar/users/password_reset.py\n--- a/mathesar/users/password_reset.py\n+++ b/mathesar/users/password_reset.py\n@@ -22,6 +22,7 @@\n form_class = MathesarSetPasswordForm\n template_name = 'users/password_reset_confirmation.html'\n title = _('Change Default Password')\n+ success_url = \"/auth/login\"\n \n @method_decorator(sensitive_post_parameters())\n @method_decorator(never_cache)\n", "issue": "First time password reset should redirect to login page instead of confirmation page\n## Description\r\n* After the user resets their password at the page: `/auth/password_reset_confirm`, they are redirected to `/auth/reset/done/`.\r\n* They should be redirected to `/auth/login` instead.\n", "before_files": [{"content": "from django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.views import PasswordResetConfirmView\nfrom django.utils.decorators import method_decorator\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass MathesarSetPasswordForm(SetPasswordForm):\n def save(self, commit=True):\n password = self.cleaned_data[\"new_password1\"]\n self.user.set_password(password)\n # Default password is replaced with a password is set by the user, so change the status\n self.user.password_change_needed = False\n if commit:\n self.user.save()\n return self.user\n\n\nclass MathesarPasswordResetConfirmView(PasswordResetConfirmView):\n # Override default form as we need custom save behaviour\n form_class = MathesarSetPasswordForm\n template_name = 'users/password_reset_confirmation.html'\n title = _('Change Default Password')\n\n @method_decorator(sensitive_post_parameters())\n @method_decorator(never_cache)\n def dispatch(self, *args, **kwargs):\n self.user = self.request.user\n self.validlink = True\n # Avoid calling the PasswordResetConfirmView `dispatch` method\n # as it contains behaviours not suited for our user flow\n return super(PasswordResetConfirmView, self).dispatch(*args, **kwargs)\n\n def form_valid(self, form):\n form.save()\n return super(PasswordResetConfirmView, self).form_valid(form)\n", "path": "mathesar/users/password_reset.py"}]} | 974 | 105 |
gh_patches_debug_9831 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-1226 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Annotation complexity should not fail on expressions
# Bug report
This code:
```python
def some() -> 'test expression':
...
```
This makes `flake8-annotation-complexity` fail. We need to ignore this case silently.
Related: #1170
Demo: https://asciinema.org/a/IIjIfkVKytmZ1F5c2YufMORdi
</issue>
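To see why the checker crashes, note that `ast.parse` raises `SyntaxError` for free-form strings such as `'test expression'`. A minimal sketch of a tolerant parse, using only the standard library (the helper name is illustrative, not part of the codebase):

```python
import ast

def safe_parse_annotation(source):
    """Parse a string annotation, returning None when it is not valid Python."""
    try:
        return ast.parse(source).body[0].value
    except SyntaxError:
        return None

print(type(safe_parse_annotation("List[int]")).__name__)  # Subscript
print(safe_parse_annotation("test expression"))           # None -- not parseable
```

Treating the unparseable case as minimal complexity, rather than crashing, is the behaviour the issue asks for.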
<code>
[start of wemake_python_styleguide/logic/complexity/annotations.py]
1 """
2 Counts annotation complexity by getting the nesting level of nodes.
3
4 So ``List[int]`` complexity is 2
5 and ``Tuple[List[Optional[str]], int]`` is 4.
6
7 Adapted from: https://github.com/best-doctor/flake8-annotations-complexity
8 """
9
10 import ast
11 from typing import Union
12
13 _Annotation = Union[
14 ast.expr,
15 ast.Str,
16 ]
17
18
19 def get_annotation_compexity(annotation_node: _Annotation) -> int:
20 """
21     Recursively counts complexity of annotation nodes.
22
23 When annotations are written as strings,
24 we additionally parse them to ``ast`` nodes.
25 """
26 if isinstance(annotation_node, ast.Str):
27 annotation_node = ast.parse( # type: ignore
28 annotation_node.s,
29 ).body[0].value
30
31 if isinstance(annotation_node, ast.Subscript):
32 return 1 + get_annotation_compexity(
33 annotation_node.slice.value, # type: ignore
34 )
35 elif isinstance(annotation_node, (ast.Tuple, ast.List)):
36 return max(
37 (get_annotation_compexity(node) for node in annotation_node.elts),
38 default=1,
39 )
40 return 1
41
[end of wemake_python_styleguide/logic/complexity/annotations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wemake_python_styleguide/logic/complexity/annotations.py b/wemake_python_styleguide/logic/complexity/annotations.py
--- a/wemake_python_styleguide/logic/complexity/annotations.py
+++ b/wemake_python_styleguide/logic/complexity/annotations.py
@@ -24,9 +24,12 @@
we additionally parse them to ``ast`` nodes.
"""
if isinstance(annotation_node, ast.Str):
- annotation_node = ast.parse( # type: ignore
- annotation_node.s,
- ).body[0].value
+ try:
+ annotation_node = ast.parse( # type: ignore
+ annotation_node.s,
+ ).body[0].value
+ except SyntaxError:
+ return 1
if isinstance(annotation_node, ast.Subscript):
return 1 + get_annotation_compexity(
| {"golden_diff": "diff --git a/wemake_python_styleguide/logic/complexity/annotations.py b/wemake_python_styleguide/logic/complexity/annotations.py\n--- a/wemake_python_styleguide/logic/complexity/annotations.py\n+++ b/wemake_python_styleguide/logic/complexity/annotations.py\n@@ -24,9 +24,12 @@\n we additionally parse them to ``ast`` nodes.\n \"\"\"\n if isinstance(annotation_node, ast.Str):\n- annotation_node = ast.parse( # type: ignore\n- annotation_node.s,\n- ).body[0].value\n+ try:\n+ annotation_node = ast.parse( # type: ignore\n+ annotation_node.s,\n+ ).body[0].value\n+ except SyntaxError:\n+ return 1\n \n if isinstance(annotation_node, ast.Subscript):\n return 1 + get_annotation_compexity(\n", "issue": "Annotation complexity should not fail on expressions\n# Bug report\r\n\r\nThis code:\r\n\r\n```python\r\ndef some() -> 'test expression':\r\n ...\r\n```\r\n\r\nMakes `flake8-annotation-complexity` to fail. We need to ignore this case silently.\r\n\r\nRelated: #1170 \r\n\r\nDemo: https://asciinema.org/a/IIjIfkVKytmZ1F5c2YufMORdi\nAnnotation complexity should not fail on expressions\n# Bug report\r\n\r\nThis code:\r\n\r\n```python\r\ndef some() -> 'test expression':\r\n ...\r\n```\r\n\r\nMakes `flake8-annotation-complexity` to fail. We need to ignore this case silently.\r\n\r\nRelated: #1170 \r\n\r\nDemo: https://asciinema.org/a/IIjIfkVKytmZ1F5c2YufMORdi\n", "before_files": [{"content": "\"\"\"\nCounts annotation complexity by getting the nesting level of nodes.\n\nSo ``List[int]`` complexity is 2\nand ``Tuple[List[Optional[str]], int]`` is 4.\n\nAdapted from: https://github.com/best-doctor/flake8-annotations-complexity\n\"\"\"\n\nimport ast\nfrom typing import Union\n\n_Annotation = Union[\n ast.expr,\n ast.Str,\n]\n\n\ndef get_annotation_compexity(annotation_node: _Annotation) -> int:\n \"\"\"\n Recursevly counts complexity of annotation nodes.\n\n When annotations are written as strings,\n we additionally parse them to ``ast`` nodes.\n \"\"\"\n if isinstance(annotation_node, ast.Str):\n annotation_node = ast.parse( # type: ignore\n annotation_node.s,\n ).body[0].value\n\n if isinstance(annotation_node, ast.Subscript):\n return 1 + get_annotation_compexity(\n annotation_node.slice.value, # type: ignore\n )\n elif isinstance(annotation_node, (ast.Tuple, ast.List)):\n return max(\n (get_annotation_compexity(node) for node in annotation_node.elts),\n default=1,\n )\n return 1\n", "path": "wemake_python_styleguide/logic/complexity/annotations.py"}]} | 1,059 | 199 |
gh_patches_debug_10410 | rasdani/github-patches | git_diff | pypa__pip-6731 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add docs for new pip debug command
This is a follow-up issue to PR #6638 to add docs for the new `pip debug` command. As @xavfernandez said in [this comment](https://github.com/pypa/pip/pull/6638#pullrequestreview-256090004):
> It would also need basic documentation (at least a `docs/html/reference/pip_debug.rst`) and most importantly (IMHO), strongly emphasize that the output and the options of this command are provisional and might change without notice.
</issue>
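The concrete ask is twofold: a reference page under `docs/html/reference/`, and a runtime notice that the command's output is provisional. A minimal sketch of such a notice (the wording is illustrative; it mirrors the idea, not pip's actual code):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("pip._internal.commands.debug")

def warn_provisional():
    # Emitted before any debug details are printed.
    logger.warning(
        "This command is only meant for debugging. Its output and options "
        "may change without notice."
    )

warn_provisional()
```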
<code>
[start of src/pip/_internal/commands/debug.py]
1 from __future__ import absolute_import
2
3 import logging
4 import sys
5
6 from pip._internal.cli import cmdoptions
7 from pip._internal.cli.base_command import Command
8 from pip._internal.cli.cmdoptions import make_target_python
9 from pip._internal.cli.status_codes import SUCCESS
10 from pip._internal.utils.logging import indent_log
11 from pip._internal.utils.misc import get_pip_version
12 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
13 from pip._internal.wheel import format_tag
14
15 if MYPY_CHECK_RUNNING:
16 from typing import Any, List
17 from optparse import Values
18
19 logger = logging.getLogger(__name__)
20
21
22 def show_value(name, value):
23 # type: (str, str) -> None
24 logger.info('{}: {}'.format(name, value))
25
26
27 def show_sys_implementation():
28 # type: () -> None
29 logger.info('sys.implementation:')
30 if hasattr(sys, 'implementation'):
31 implementation = sys.implementation # type: ignore
32 implementation_name = implementation.name
33 else:
34 implementation_name = ''
35
36 with indent_log():
37 show_value('name', implementation_name)
38
39
40 def show_tags(options):
41 # type: (Values) -> None
42 tag_limit = 10
43
44 target_python = make_target_python(options)
45 tags = target_python.get_tags()
46
47 # Display the target options that were explicitly provided.
48 formatted_target = target_python.format_given()
49 suffix = ''
50 if formatted_target:
51 suffix = ' (target: {})'.format(formatted_target)
52
53 msg = 'Compatible tags: {}{}'.format(len(tags), suffix)
54 logger.info(msg)
55
56 if options.verbose < 1 and len(tags) > tag_limit:
57 tags_limited = True
58 tags = tags[:tag_limit]
59 else:
60 tags_limited = False
61
62 with indent_log():
63 for tag in tags:
64 logger.info(format_tag(tag))
65
66 if tags_limited:
67 msg = (
68 '...\n'
69 '[First {tag_limit} tags shown. Pass --verbose to show all.]'
70 ).format(tag_limit=tag_limit)
71 logger.info(msg)
72
73
74 class DebugCommand(Command):
75 """
76 Display debug information.
77 """
78
79 name = 'debug'
80 usage = """
81 %prog <options>"""
82 summary = 'Show information useful for debugging.'
83 ignore_require_venv = True
84
85 def __init__(self, *args, **kw):
86 super(DebugCommand, self).__init__(*args, **kw)
87
88 cmd_opts = self.cmd_opts
89 cmdoptions.add_target_python_options(cmd_opts)
90 self.parser.insert_option_group(0, cmd_opts)
91
92 def run(self, options, args):
93 # type: (Values, List[Any]) -> int
94 show_value('pip version', get_pip_version())
95 show_value('sys.version', sys.version)
96 show_value('sys.executable', sys.executable)
97 show_value('sys.platform', sys.platform)
98 show_sys_implementation()
99
100 show_tags(options)
101
102 return SUCCESS
103
[end of src/pip/_internal/commands/debug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pip/_internal/commands/debug.py b/src/pip/_internal/commands/debug.py
--- a/src/pip/_internal/commands/debug.py
+++ b/src/pip/_internal/commands/debug.py
@@ -91,6 +91,12 @@
def run(self, options, args):
# type: (Values, List[Any]) -> int
+ logger.warning(
+ "This command is only meant for debugging. "
+ "Do not use this with automation for parsing and getting these "
+ "details, since the output and options of this command may "
+ "change without notice."
+ )
show_value('pip version', get_pip_version())
show_value('sys.version', sys.version)
show_value('sys.executable', sys.executable)
| {"golden_diff": "diff --git a/src/pip/_internal/commands/debug.py b/src/pip/_internal/commands/debug.py\n--- a/src/pip/_internal/commands/debug.py\n+++ b/src/pip/_internal/commands/debug.py\n@@ -91,6 +91,12 @@\n \n def run(self, options, args):\n # type: (Values, List[Any]) -> int\n+ logger.warning(\n+ \"This command is only meant for debugging. \"\n+ \"Do not use this with automation for parsing and getting these \"\n+ \"details, since the output and options of this command may \"\n+ \"change without notice.\"\n+ )\n show_value('pip version', get_pip_version())\n show_value('sys.version', sys.version)\n show_value('sys.executable', sys.executable)\n", "issue": "Add docs for new pip debug command\nThis is a follow-up issue to PR #6638 to add docs for the new `pip debug` command. As @xavfernandez said in [this comment](https://github.com/pypa/pip/pull/6638#pullrequestreview-256090004):\r\n\r\n> It would also need basic documentation (at least a `docs/html/reference/pip_debug.rst`) and most importantly (IMHO), strongly emphasize that the output and the options of this command are provisional and might change without notice.\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport sys\n\nfrom pip._internal.cli import cmdoptions\nfrom pip._internal.cli.base_command import Command\nfrom pip._internal.cli.cmdoptions import make_target_python\nfrom pip._internal.cli.status_codes import SUCCESS\nfrom pip._internal.utils.logging import indent_log\nfrom pip._internal.utils.misc import get_pip_version\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\nfrom pip._internal.wheel import format_tag\n\nif MYPY_CHECK_RUNNING:\n from typing import Any, List\n from optparse import Values\n\nlogger = logging.getLogger(__name__)\n\n\ndef show_value(name, value):\n # type: (str, str) -> None\n logger.info('{}: {}'.format(name, value))\n\n\ndef show_sys_implementation():\n # type: () -> None\n logger.info('sys.implementation:')\n if hasattr(sys, 'implementation'):\n implementation = sys.implementation # type: ignore\n implementation_name = implementation.name\n else:\n implementation_name = ''\n\n with indent_log():\n show_value('name', implementation_name)\n\n\ndef show_tags(options):\n # type: (Values) -> None\n tag_limit = 10\n\n target_python = make_target_python(options)\n tags = target_python.get_tags()\n\n # Display the target options that were explicitly provided.\n formatted_target = target_python.format_given()\n suffix = ''\n if formatted_target:\n suffix = ' (target: {})'.format(formatted_target)\n\n msg = 'Compatible tags: {}{}'.format(len(tags), suffix)\n logger.info(msg)\n\n if options.verbose < 1 and len(tags) > tag_limit:\n tags_limited = True\n tags = tags[:tag_limit]\n else:\n tags_limited = False\n\n with indent_log():\n for tag in tags:\n logger.info(format_tag(tag))\n\n if tags_limited:\n msg = (\n '...\\n'\n '[First {tag_limit} tags shown. 
Pass --verbose to show all.]'\n ).format(tag_limit=tag_limit)\n logger.info(msg)\n\n\nclass DebugCommand(Command):\n \"\"\"\n Display debug information.\n \"\"\"\n\n name = 'debug'\n usage = \"\"\"\n %prog <options>\"\"\"\n summary = 'Show information useful for debugging.'\n ignore_require_venv = True\n\n def __init__(self, *args, **kw):\n super(DebugCommand, self).__init__(*args, **kw)\n\n cmd_opts = self.cmd_opts\n cmdoptions.add_target_python_options(cmd_opts)\n self.parser.insert_option_group(0, cmd_opts)\n\n def run(self, options, args):\n # type: (Values, List[Any]) -> int\n show_value('pip version', get_pip_version())\n show_value('sys.version', sys.version)\n show_value('sys.executable', sys.executable)\n show_value('sys.platform', sys.platform)\n show_sys_implementation()\n\n show_tags(options)\n\n return SUCCESS\n", "path": "src/pip/_internal/commands/debug.py"}]} | 1,522 | 176 |
gh_patches_debug_3521 | rasdani/github-patches | git_diff | wagtail__wagtail-2465 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove redundant template debug lines from project template
Ref: https://github.com/torchbox/wagtail/blob/9ff7961a3c8f508ad17735cd815335bad12fd67f/wagtail/project_template/project_name/settings/dev.py#L7-L8
#1688
According to https://docs.djangoproject.com/en/1.9/topics/templates/#django.template.backends.django.DjangoTemplates, the 'debug' option on the DjangoTemplates engine defaults to the global DEBUG setting, so setting this here is apparently redundant. (Also, there's no corresponding option for the Jinja2 backend, so setting this for all engines is not strictly correct.)
So, we just need someone to double-check that with these lines removed, template debug info still displays in development mode but not in production.
</issue>
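For reference, this is the settings shape in question. On Django 1.9+ the DjangoTemplates backend reads `OPTIONS['debug']` and falls back to the global `DEBUG` setting when the key is absent, which is why the loop adds nothing (sketch; the project's real `TEMPLATES` list has more entries):

```python
DEBUG = True

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "APP_DIRS": True,
        # No explicit 'debug' key: the backend defaults it to settings.DEBUG.
        "OPTIONS": {},
    },
]
```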
<code>
[start of wagtail/project_template/project_name/settings/dev.py]
1 from __future__ import absolute_import, unicode_literals
2
3 from .base import *
4
5 # SECURITY WARNING: don't run with debug turned on in production!
6 DEBUG = True
7
8 for template_engine in TEMPLATES:
9 template_engine['OPTIONS']['debug'] = True
10
11 # SECURITY WARNING: keep the secret key used in production secret!
12 SECRET_KEY = '{{ secret_key }}'
13
14
15 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
16
17
18 try:
19 from .local import *
20 except ImportError:
21 pass
22
[end of wagtail/project_template/project_name/settings/dev.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/project_template/project_name/settings/dev.py b/wagtail/project_template/project_name/settings/dev.py
--- a/wagtail/project_template/project_name/settings/dev.py
+++ b/wagtail/project_template/project_name/settings/dev.py
@@ -5,9 +5,6 @@
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
-for template_engine in TEMPLATES:
- template_engine['OPTIONS']['debug'] = True
-
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '{{ secret_key }}'
| {"golden_diff": "diff --git a/wagtail/project_template/project_name/settings/dev.py b/wagtail/project_template/project_name/settings/dev.py\n--- a/wagtail/project_template/project_name/settings/dev.py\n+++ b/wagtail/project_template/project_name/settings/dev.py\n@@ -5,9 +5,6 @@\n # SECURITY WARNING: don't run with debug turned on in production!\n DEBUG = True\n \n-for template_engine in TEMPLATES:\n- template_engine['OPTIONS']['debug'] = True\n-\n # SECURITY WARNING: keep the secret key used in production secret!\n SECRET_KEY = '{{ secret_key }}'\n", "issue": "Remove redundant template debug lines from project template\nRef: https://github.com/torchbox/wagtail/blob/9ff7961a3c8f508ad17735cd815335bad12fd67f/wagtail/project_template/project_name/settings/dev.py#L7-L8\n#1688\n\nAccording to https://docs.djangoproject.com/en/1.9/topics/templates/#django.template.backends.django.DjangoTemplates, the 'debug' option on the DjangoTemplates engine defaults to the global DEBUG setting, so setting this here is apparently redundant. (Also, there's no corresponding option for the Jinja2 backend, so setting this for all engines is not strictly correct.)\n\nSo, we just need someone to double-check that with these lines removed, template debug info still displays in development mode but not in production.\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom .base import *\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nfor template_engine in TEMPLATES:\n template_engine['OPTIONS']['debug'] = True\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = '{{ secret_key }}'\n\n\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n\ntry:\n from .local import *\nexcept ImportError:\n pass\n", "path": "wagtail/project_template/project_name/settings/dev.py"}]} | 869 | 123 |
gh_patches_debug_3489 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1526 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enforce a minimum coverage of 100%.
</issue>
<code>
[start of cookiecutter/utils.py]
1 """Helper functions used throughout Cookiecutter."""
2 import contextlib
3 import errno
4 import logging
5 import os
6 import shutil
7 import stat
8 import sys
9
10 from cookiecutter.prompt import read_user_yes_no
11
12 logger = logging.getLogger(__name__)
13
14
15 def force_delete(func, path, exc_info):
16 """Error handler for `shutil.rmtree()` equivalent to `rm -rf`.
17
18 Usage: `shutil.rmtree(path, onerror=force_delete)`
19 From stackoverflow.com/questions/1889597
20 """
21 os.chmod(path, stat.S_IWRITE)
22 func(path)
23
24
25 def rmtree(path):
26 """Remove a directory and all its contents. Like rm -rf on Unix.
27
28 :param path: A directory path.
29 """
30 shutil.rmtree(path, onerror=force_delete)
31
32
33 def make_sure_path_exists(path):
34 """Ensure that a directory exists.
35
36 :param path: A directory path.
37 """
38 logger.debug('Making sure path exists: %s', path)
39 try:
40 os.makedirs(path)
41 logger.debug('Created directory at: %s', path)
42 except OSError as exception:
43 if exception.errno != errno.EEXIST:
44 return False
45 return True
46
47
48 @contextlib.contextmanager
49 def work_in(dirname=None):
50 """Context manager version of os.chdir.
51
52 When exited, returns to the working directory prior to entering.
53 """
54 curdir = os.getcwd()
55 try:
56 if dirname is not None:
57 os.chdir(dirname)
58 yield
59 finally:
60 os.chdir(curdir)
61
62
63 def make_executable(script_path):
64 """Make `script_path` executable.
65
66 :param script_path: The file to change
67 """
68 status = os.stat(script_path)
69 os.chmod(script_path, status.st_mode | stat.S_IEXEC)
70
71
72 def prompt_and_delete(path, no_input=False):
73 """
74 Ask user if it's okay to delete the previously-downloaded file/directory.
75
76 If yes, delete it. If no, checks to see if the old version should be
77 reused. If yes, it's reused; otherwise, Cookiecutter exits.
78
79 :param path: Previously downloaded zipfile.
80 :param no_input: Suppress prompt to delete repo and just delete it.
81 :return: True if the content was deleted
82 """
83 # Suppress prompt if called via API
84 if no_input:
85 ok_to_delete = True
86 else:
87 question = (
88 "You've downloaded {} before. Is it okay to delete and re-download it?"
89 ).format(path)
90
91 ok_to_delete = read_user_yes_no(question, 'yes')
92
93 if ok_to_delete:
94 if os.path.isdir(path):
95 rmtree(path)
96 else:
97 os.remove(path)
98 return True
99 else:
100 ok_to_reuse = read_user_yes_no(
101 "Do you want to re-use the existing version?", 'yes'
102 )
103
104 if ok_to_reuse:
105 return False
106
107 sys.exit()
108
[end of cookiecutter/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/utils.py b/cookiecutter/utils.py
--- a/cookiecutter/utils.py
+++ b/cookiecutter/utils.py
@@ -16,7 +16,7 @@
"""Error handler for `shutil.rmtree()` equivalent to `rm -rf`.
Usage: `shutil.rmtree(path, onerror=force_delete)`
- From stackoverflow.com/questions/1889597
+ From https://docs.python.org/3/library/shutil.html#rmtree-example
"""
os.chmod(path, stat.S_IWRITE)
func(path)
| {"golden_diff": "diff --git a/cookiecutter/utils.py b/cookiecutter/utils.py\n--- a/cookiecutter/utils.py\n+++ b/cookiecutter/utils.py\n@@ -16,7 +16,7 @@\n \"\"\"Error handler for `shutil.rmtree()` equivalent to `rm -rf`.\n \n Usage: `shutil.rmtree(path, onerror=force_delete)`\n- From stackoverflow.com/questions/1889597\n+ From https://docs.python.org/3/library/shutil.html#rmtree-example\n \"\"\"\n os.chmod(path, stat.S_IWRITE)\n func(path)\n", "issue": "Enforce the minimal coverage to 100%\n\n", "before_files": [{"content": "\"\"\"Helper functions used throughout Cookiecutter.\"\"\"\nimport contextlib\nimport errno\nimport logging\nimport os\nimport shutil\nimport stat\nimport sys\n\nfrom cookiecutter.prompt import read_user_yes_no\n\nlogger = logging.getLogger(__name__)\n\n\ndef force_delete(func, path, exc_info):\n \"\"\"Error handler for `shutil.rmtree()` equivalent to `rm -rf`.\n\n Usage: `shutil.rmtree(path, onerror=force_delete)`\n From stackoverflow.com/questions/1889597\n \"\"\"\n os.chmod(path, stat.S_IWRITE)\n func(path)\n\n\ndef rmtree(path):\n \"\"\"Remove a directory and all its contents. Like rm -rf on Unix.\n\n :param path: A directory path.\n \"\"\"\n shutil.rmtree(path, onerror=force_delete)\n\n\ndef make_sure_path_exists(path):\n \"\"\"Ensure that a directory exists.\n\n :param path: A directory path.\n \"\"\"\n logger.debug('Making sure path exists: %s', path)\n try:\n os.makedirs(path)\n logger.debug('Created directory at: %s', path)\n except OSError as exception:\n if exception.errno != errno.EEXIST:\n return False\n return True\n\n\[email protected]\ndef work_in(dirname=None):\n \"\"\"Context manager version of os.chdir.\n\n When exited, returns to the working directory prior to entering.\n \"\"\"\n curdir = os.getcwd()\n try:\n if dirname is not None:\n os.chdir(dirname)\n yield\n finally:\n os.chdir(curdir)\n\n\ndef make_executable(script_path):\n \"\"\"Make `script_path` executable.\n\n :param script_path: The file to change\n \"\"\"\n status = os.stat(script_path)\n os.chmod(script_path, status.st_mode | stat.S_IEXEC)\n\n\ndef prompt_and_delete(path, no_input=False):\n \"\"\"\n Ask user if it's okay to delete the previously-downloaded file/directory.\n\n If yes, delete it. If no, checks to see if the old version should be\n reused. If yes, it's reused; otherwise, Cookiecutter exits.\n\n :param path: Previously downloaded zipfile.\n :param no_input: Suppress prompt to delete repo and just delete it.\n :return: True if the content was deleted\n \"\"\"\n # Suppress prompt if called via API\n if no_input:\n ok_to_delete = True\n else:\n question = (\n \"You've downloaded {} before. Is it okay to delete and re-download it?\"\n ).format(path)\n\n ok_to_delete = read_user_yes_no(question, 'yes')\n\n if ok_to_delete:\n if os.path.isdir(path):\n rmtree(path)\n else:\n os.remove(path)\n return True\n else:\n ok_to_reuse = read_user_yes_no(\n \"Do you want to re-use the existing version?\", 'yes'\n )\n\n if ok_to_reuse:\n return False\n\n sys.exit()\n", "path": "cookiecutter/utils.py"}]} | 1,410 | 134 |
gh_patches_debug_1356 | rasdani/github-patches | git_diff | kserve__kserve-2103 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot install required version of numpy on M1 mac
/kind bug
Issue:
Installing the v0.8.0 release candidate fails on Python 3.8 and 3.9 (and presumably every Python version) because of the pinned numpy requirement.
Expected behavior:
kserve's 0.8 release candidate can be installed on an M1 Mac.
Extra information:
Per https://github.com/numpy/numpy/releases/tag/v1.21.0, numpy 1.21+ can be installed on M1 Macs.
**Environment:**
- OS (e.g. from `/etc/os-release`): M1 mac
</issue>
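The underlying constraint is that numpy only started shipping Apple Silicon wheels with 1.21, so any pin that resolves to an older numpy (here, transitively via the very old `pandas == 0.25.3`) cannot install on an M1 machine. A sketch of pins that have arm64 wheels (exact versions are illustrative, not a tested recommendation):

```python
install_requires = [
    "numpy>=1.21.0",  # first numpy release installable on M1 Macs
    "pandas>=1.3.0",  # a pandas line with Apple Silicon support
]
```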
<code>
[start of python/lgbserver/setup.py]
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-asyncio',
20 'pytest-tornasync',
21 'mypy'
22 ]
23
24 setup(
25 name='lgbserver',
26 version='0.7.0',
27 author_email='[email protected]',
28 license='../../LICENSE.txt',
29 url='https://github.com/kserve/kserve/python/lgbserver',
30 description='Model Server implementation for LightGBM. \
31 Not intended for use outside KServe Frameworks Images',
32 long_description=open('README.md').read(),
33 python_requires='>3.4',
34 packages=find_packages("lgbserver"),
35 install_requires=[
36 "kserve>=0.7.0",
37 "lightgbm == 3.3.2",
38 "pandas == 0.25.3",
39 "argparse >= 1.4.0",
40 ],
41 tests_require=tests_require,
42 extras_require={'test': tests_require}
43 )
44
[end of python/lgbserver/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/lgbserver/setup.py b/python/lgbserver/setup.py
--- a/python/lgbserver/setup.py
+++ b/python/lgbserver/setup.py
@@ -35,7 +35,7 @@
install_requires=[
"kserve>=0.7.0",
"lightgbm == 3.3.2",
- "pandas == 0.25.3",
+ "pandas == 1.3.5",
"argparse >= 1.4.0",
],
tests_require=tests_require,
| {"golden_diff": "diff --git a/python/lgbserver/setup.py b/python/lgbserver/setup.py\n--- a/python/lgbserver/setup.py\n+++ b/python/lgbserver/setup.py\n@@ -35,7 +35,7 @@\n install_requires=[\n \"kserve>=0.7.0\",\n \"lightgbm == 3.3.2\",\n- \"pandas == 0.25.3\",\n+ \"pandas == 1.3.5\",\n \"argparse >= 1.4.0\",\n ],\n tests_require=tests_require,\n", "issue": "Cannot install required version of numpy on M1 mac\n/kind bug\r\n\r\nIssue:\r\nInstallation on python 3.8 or 3.9 (and presumably all versions of Python) of the v0.8.0 release candidate fails due to the pinned requirement of numpy.\r\n\r\nExpected behavior:\r\nkserve's release candidate for 0.8 can be installed on an M1 mac.\r\n\r\nExtra information:\r\nhttps://github.com/numpy/numpy/releases/tag/v1.21.0 numpy 1.21+ allows installation on M1 macs.\r\n\r\n\r\n**Environment:**\r\n\r\n- OS (e.g. from `/etc/os-release`): M1 mac\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='lgbserver',\n version='0.7.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kserve/kserve/python/lgbserver',\n description='Model Server implementation for LightGBM. \\\n Not intended for use outside KServe Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>3.4',\n packages=find_packages(\"lgbserver\"),\n install_requires=[\n \"kserve>=0.7.0\",\n \"lightgbm == 3.3.2\",\n \"pandas == 0.25.3\",\n \"argparse >= 1.4.0\",\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/lgbserver/setup.py"}]} | 1,101 | 125 |
gh_patches_debug_18073 | rasdani/github-patches | git_diff | kartoza__prj.app-895 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Display the version number in the footer

In the footer, next to "Available on github", we can display the version number from this file: https://github.com/kartoza/projecta/blob/develop/django_project/.version
This lets us tell which version is running on staging versus production.
Sentry is already reading this file: https://github.com/kartoza/projecta/blob/develop/django_project/core/settings/prod.py#L47
</issue>
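The mechanics are straightforward: read the `.version` file at render time and hand the value to the footer template. A minimal sketch of the read, with a fallback for environments where the file is missing (the path and fallback text are illustrative):

```python
def read_version(version_file=".version"):
    """Return the release identifier, or a placeholder if the file is absent."""
    try:
        with open(version_file) as f:
            return f.read().strip()
    except IOError:
        return "Unknown"

print(read_version())
```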
<code>
[start of django_project/lesson/templatetags/lesson_tags.py]
1 # coding=utf-8
2 """Custom tags for lesson app."""
3
4 from django import template
5 from django.utils.safestring import mark_safe
6
7 register = template.Library()
8
9
10 @register.filter(name='is_translation_up_to_date')
11 def is_translation_up_to_date(value):
12 if not value.is_translation_up_to_date:
13 return mark_safe(
14 '<span title="Translation is outdated"><sup>❗</sup></span>')
15 else:
16 return mark_safe('')
17
[end of django_project/lesson/templatetags/lesson_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django_project/lesson/templatetags/lesson_tags.py b/django_project/lesson/templatetags/lesson_tags.py
--- a/django_project/lesson/templatetags/lesson_tags.py
+++ b/django_project/lesson/templatetags/lesson_tags.py
@@ -1,8 +1,9 @@
# coding=utf-8
"""Custom tags for lesson app."""
-
from django import template
from django.utils.safestring import mark_safe
+from core.settings.utils import absolute_path
+
register = template.Library()
@@ -14,3 +15,16 @@
'<span title="Translation is outdated"><sup>❗</sup></span>')
else:
return mark_safe('')
+
+
[email protected]_tag(takes_context=True)
+def version_tag(context):
+ """Reads current project release from the .version file."""
+ version_file = absolute_path('.version')
+ try:
+ with open(version_file, 'r') as file:
+ version = file.read()
+ context['version'] = version
+ except IOError:
+ context['version'] = 'Unknown'
+ return context['version']
| {"golden_diff": "diff --git a/django_project/lesson/templatetags/lesson_tags.py b/django_project/lesson/templatetags/lesson_tags.py\n--- a/django_project/lesson/templatetags/lesson_tags.py\n+++ b/django_project/lesson/templatetags/lesson_tags.py\n@@ -1,8 +1,9 @@\n # coding=utf-8\n \"\"\"Custom tags for lesson app.\"\"\"\n-\n from django import template\n from django.utils.safestring import mark_safe\n+from core.settings.utils import absolute_path\n+\n \n register = template.Library()\n \n@@ -14,3 +15,16 @@\n '<span title=\"Translation is outdated\"><sup>❗</sup></span>')\n else:\n return mark_safe('')\n+\n+\[email protected]_tag(takes_context=True)\n+def version_tag(context):\n+ \"\"\"Reads current project release from the .version file.\"\"\"\n+ version_file = absolute_path('.version')\n+ try:\n+ with open(version_file, 'r') as file:\n+ version = file.read()\n+ context['version'] = version\n+ except IOError:\n+ context['version'] = 'Unknown'\n+ return context['version']\n", "issue": "Display the version number in the footer\n\r\n\r\nIn the footer, next to \"Available on github\", we can display the version number from this file: https://github.com/kartoza/projecta/blob/develop/django_project/.version\r\n\r\nTo be able to know between staging and production which version we are running\r\n\r\nSentry is already reading this file: https://github.com/kartoza/projecta/blob/develop/django_project/core/settings/prod.py#L47\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Custom tags for lesson app.\"\"\"\n\nfrom django import template\nfrom django.utils.safestring import mark_safe\n\nregister = template.Library()\n\n\[email protected](name='is_translation_up_to_date')\ndef is_translation_up_to_date(value):\n if not value.is_translation_up_to_date:\n return mark_safe(\n '<span title=\"Translation is outdated\"><sup>❗</sup></span>')\n else:\n return mark_safe('')\n", "path": "django_project/lesson/templatetags/lesson_tags.py"}]} | 856 | 265 |
gh_patches_debug_1969 | rasdani/github-patches | git_diff | kserve__kserve-1053 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tabular Explainer e2e test failing
/kind bug
```
(base) C02YJ034JGH5:~ dsun20$ kubectl logs isvc-explainer-tabular-explainer-default-7cnkj-deployment-4q4hn -n kfserving-ci-e2e-test kfserving-container
[I 200828 13:12:28 font_manager:1423] Generating new fontManager, this may take some time...
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/local/lib/python3.7/runpy.py", line 142, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/local/lib/python3.7/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/alibiexplainer/alibiexplainer/__init__.py", line 15, in <module>
from .explainer import AlibiExplainer
File "/alibiexplainer/alibiexplainer/explainer.py", line 21, in <module>
from alibiexplainer.anchor_images import AnchorImages
File "/alibiexplainer/alibiexplainer/anchor_images.py", line 17, in <module>
import alibi
File "/usr/local/lib/python3.7/site-packages/alibi/__init__.py", line 1, in <module>
from . import confidence, datasets, explainers, utils
File "/usr/local/lib/python3.7/site-packages/alibi/explainers/__init__.py", line 11, in <module>
from .kernel_shap import KernelShap
File "/usr/local/lib/python3.7/site-packages/alibi/explainers/kernel_shap.py", line 11, in <module>
from shap.common import DenseData, DenseDataWithIndex
ModuleNotFoundError: No module named 'shap.common'
```
</issue>
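The traceback bottoms out in alibi's `kernel_shap` module, which imports from `shap.common`; newer `shap` releases dropped that module, so `alibi == 0.4.0` only works against an older `shap`. A minimal reproduction sketch (assumes a `shap` version without `shap.common` is installed):

```python
try:
    from shap.common import DenseData  # what alibi 0.4.0 expects to find
except ModuleNotFoundError as err:
    print(err)  # No module named 'shap.common'
```

Pinning a compatible `shap` next to `alibi` in `setup.py` avoids the mismatch.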
<code>
[start of python/alibiexplainer/setup.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22
23 setup(
24 name='alibiexplainer',
25 version='0.4.0',
26 author_email='[email protected]',
27 license='../../LICENSE.txt',
28 url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',
29     description='Model Explanation Server. \
30 Not intended for use outside KFServing Frameworks Images',
31 long_description=open('README.md').read(),
32 python_requires='>=3.6',
33 packages=find_packages("alibiexplainer"),
34 install_requires=[
35 "kfserving>=0.4.0",
36 "alibi==0.4.0",
37 "scikit-learn>=0.20.3",
38 "argparse>=1.4.0",
39 "requests>=2.22.0",
40 "joblib>=0.13.2",
41 "pandas>=0.24.2",
42 "numpy>=1.16.3",
43 "dill>=0.3.0",
44 "spacy>=2.1.4"
45 ],
46 tests_require=tests_require,
47 extras_require={'test': tests_require}
48 )
49
[end of python/alibiexplainer/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py
--- a/python/alibiexplainer/setup.py
+++ b/python/alibiexplainer/setup.py
@@ -32,6 +32,7 @@
python_requires='>=3.6',
packages=find_packages("alibiexplainer"),
install_requires=[
+ "shap==0.35",
"kfserving>=0.4.0",
"alibi==0.4.0",
"scikit-learn>=0.20.3",
| {"golden_diff": "diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py\n--- a/python/alibiexplainer/setup.py\n+++ b/python/alibiexplainer/setup.py\n@@ -32,6 +32,7 @@\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n+ \"shap==0.35\",\n \"kfserving>=0.4.0\",\n \"alibi==0.4.0\",\n \"scikit-learn>=0.20.3\",\n", "issue": "Tabular Explainer e2e test failing\n/kind bug\r\n\r\n```\r\n(base) C02YJ034JGH5:~ dsun20$ kubectl logs isvc-explainer-tabular-explainer-default-7cnkj-deployment-4q4hn -n kfserving-ci-e2e-test kfserving-container\r\n[I 200828 13:12:28 font_manager:1423] Generating new fontManager, this may take some time...\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 183, in _run_module_as_main\r\n mod_name, mod_spec, code = _get_module_details(mod_name, _Error)\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 142, in _get_module_details\r\n return _get_module_details(pkg_main_name, error)\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 109, in _get_module_details\r\n __import__(pkg_name)\r\n File \"/alibiexplainer/alibiexplainer/__init__.py\", line 15, in <module>\r\n from .explainer import AlibiExplainer\r\n File \"/alibiexplainer/alibiexplainer/explainer.py\", line 21, in <module>\r\n from alibiexplainer.anchor_images import AnchorImages\r\n File \"/alibiexplainer/alibiexplainer/anchor_images.py\", line 17, in <module>\r\n import alibi\r\n File \"/usr/local/lib/python3.7/site-packages/alibi/__init__.py\", line 1, in <module>\r\n from . import confidence, datasets, explainers, utils\r\n File \"/usr/local/lib/python3.7/site-packages/alibi/explainers/__init__.py\", line 11, in <module>\r\n from .kernel_shap import KernelShap\r\n File \"/usr/local/lib/python3.7/site-packages/alibi/explainers/kernel_shap.py\", line 11, in <module>\r\n from shap.common import DenseData, DenseDataWithIndex\r\nModuleNotFoundError: No module named 'shap.common'\r\n```\r\n\r\n\r\n**What did you expect to happen:**\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version:\r\n- Kubeflow version:\r\n- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='alibiexplainer',\n version='0.4.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',\n description='Model Explaination Server. 
\\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"kfserving>=0.4.0\",\n \"alibi==0.4.0\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"dill>=0.3.0\",\n \"spacy>=2.1.4\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/alibiexplainer/setup.py"}]} | 1,644 | 124 |
gh_patches_debug_2597 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-539 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Increase development status to 'beta' or 'stable'.
I think we can say the project is waaaay beyond alpha. :wink:
</issue>
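Concretely this is a one-line change to the trove classifiers in `setup.py`; the candidate values below come from PyPI's standard classifier list (which one to pick is the maintainers' call):

```python
# Current: 'Development Status :: 3 - Alpha'
# Options: 'Development Status :: 4 - Beta'
#          'Development Status :: 5 - Production/Stable'
classifiers = ["Development Status :: 5 - Production/Stable"]
```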
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4 import sys
5
6 try:
7 from setuptools import setup
8 except ImportError:
9 from distutils.core import setup
10
11 version = "1.1.0"
12
13 if sys.argv[-1] == 'publish':
14 os.system('python setup.py sdist upload')
15 os.system('python setup.py bdist_wheel upload')
16 sys.exit()
17
18 if sys.argv[-1] == 'tag':
19 os.system("git tag -a %s -m 'version %s'" % (version, version))
20 os.system("git push --tags")
21 sys.exit()
22
23 with open('README.rst') as readme_file:
24 readme = readme_file.read()
25
26 with open('HISTORY.rst') as history_file:
27 history = history_file.read().replace('.. :changelog:', '')
28
29 requirements = [
30 'future>=0.15.2',
31 'binaryornot>=0.2.0',
32 'jinja2>=2.7',
33 'PyYAML>=3.10',
34 'click>=5.0',
35 'whichcraft>=0.1.1'
36 ]
37
38 long_description = readme + '\n\n' + history
39
40 if sys.argv[-1] == 'readme':
41 print(long_description)
42 sys.exit()
43
44
45 setup(
46 name='cookiecutter',
47 version=version,
48 description=('A command-line utility that creates projects from project '
49 'templates, e.g. creating a Python package project from a '
50 'Python package project template.'),
51 long_description=long_description,
52 author='Audrey Roy',
53 author_email='[email protected]',
54 url='https://github.com/audreyr/cookiecutter',
55 packages=[
56 'cookiecutter',
57 ],
58 package_dir={'cookiecutter': 'cookiecutter'},
59 entry_points={
60 'console_scripts': [
61 'cookiecutter = cookiecutter.cli:main',
62 ]
63 },
64 include_package_data=True,
65 install_requires=requirements,
66 license='BSD',
67 zip_safe=False,
68 classifiers=[
69 'Development Status :: 3 - Alpha',
70 'Environment :: Console',
71 'Intended Audience :: Developers',
72 'Natural Language :: English',
73 'License :: OSI Approved :: BSD License',
74 'Programming Language :: Python',
75 'Programming Language :: Python :: 2',
76 'Programming Language :: Python :: 2.7',
77 'Programming Language :: Python :: 3',
78 'Programming Language :: Python :: 3.3',
79 'Programming Language :: Python :: 3.4',
80 'Programming Language :: Python :: 3.5',
81 'Programming Language :: Python :: Implementation :: CPython',
82 'Programming Language :: Python :: Implementation :: PyPy',
83 'Topic :: Software Development',
84 ],
85 keywords=(
86 'cookiecutter, Python, projects, project templates, Jinja2, '
87 'skeleton, scaffolding, project directory, setup.py, package, '
88 'packaging'
89 ),
90 )
91
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -66,7 +66,7 @@
license='BSD',
zip_safe=False,
classifiers=[
- 'Development Status :: 3 - Alpha',
+ 'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: Developers',
'Natural Language :: English',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -66,7 +66,7 @@\n license='BSD',\n zip_safe=False,\n classifiers=[\n- 'Development Status :: 3 - Alpha',\n+ 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n", "issue": "Increase development status to 'beta' or 'stable'.\nI think we can say the project is waaaay beyond alpha. :wink: \n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport sys\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nversion = \"1.1.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith open('README.rst') as readme_file:\n readme = readme_file.read()\n\nwith open('HISTORY.rst') as history_file:\n history = history_file.read().replace('.. :changelog:', '')\n\nrequirements = [\n 'future>=0.15.2',\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'PyYAML>=3.10',\n 'click>=5.0',\n 'whichcraft>=0.1.1'\n]\n\nlong_description = readme + '\\n\\n' + history\n\nif sys.argv[-1] == 'readme':\n print(long_description)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'),\n long_description=long_description,\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/audreyr/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.cli:main',\n ]\n },\n include_package_data=True,\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development',\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}]} | 1,373 | 91 |
gh_patches_debug_51622 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3604 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Organisation report shown in project reports page
The "Project overview" report is displayed on the project report page, which is an organisation report and should not be displayed on the project report page.
</issue>
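The likely culprit is the substring filter used to select project reports: `url__icontains='project'` matches any report whose URL merely contains the word, including the organisation-level "Project overview" report, whereas only URLs carrying the `{project}` placeholder belong to true project reports. A sketch of the distinction (the URLs are hypothetical):

```python
reports = {
    "Project overview": "/en/reports/project_overview/",         # organisation report
    "Results and indicators": "/en/reports/{project}/results/",  # project report
}

for name, url in reports.items():
    print(name, "->", "project" in url, "{project}" in url)
# Both URLs contain "project", but only the second contains "{project}".
```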
<code>
[start of akvo/rest/views/report.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from django.db.models import Q
8 from django.shortcuts import get_object_or_404
9 from rest_framework import status
10 from rest_framework.decorators import api_view
11 from rest_framework.response import Response
12
13 from akvo.rsr.models import Report, ReportFormat, Project
14 from ..serializers import ReportSerializer, ReportFormatSerializer
15 from ..viewsets import BaseRSRViewSet
16
17
18 class ReportViewSet(BaseRSRViewSet):
19     """Viewset providing Report data."""
20
21 queryset = Report.objects.prefetch_related(
22 'organisations',
23 'formats',
24 )
25 serializer_class = ReportSerializer
26
27 def get_queryset(self):
28 """
29 Allow custom filter for sync_owner, since this field has been replaced by the
30 reporting org partnership.
31 """
32 reports = super(ReportViewSet, self).get_queryset()
33 user = self.request.user
34 is_admin = user.is_active and (user.is_superuser or user.is_admin)
35 if not is_admin:
36 # Show only those reports that the user is allowed to see
37 approved_orgs = user.approved_organisations() if not user.is_anonymous() else []
38 reports = reports.filter(
39 Q(organisations=None) | Q(organisations__in=approved_orgs)
40 ).distinct()
41 return reports
42
43
44 @api_view(['GET'])
45 def report_formats(request):
46 """
47 A view for displaying all report format information.
48 """
49 return Response({
50 'count': ReportFormat.objects.all().count(),
51 'results': [ReportFormatSerializer(f).data for f in ReportFormat.objects.all()],
52 })
53
54
55 @api_view(['GET'])
56 def project_reports(request, project_pk):
57 """A view for displaying project specific reports."""
58
59 project = get_object_or_404(Project, pk=project_pk)
60 reports = Report.objects.prefetch_related('formats', 'organisations')\
61 .filter(url__icontains='project')
62
63 user = request.user
64 if not user.has_perm('rsr.view_project', project):
65 return Response('Request not allowed', status=status.HTTP_403_FORBIDDEN)
66
67 is_admin = user.is_active and (user.is_superuser or user.is_admin)
68
69 if not is_admin:
70 partners_org = project.partner_organisation_pks()
71 reports = reports.filter(
72 Q(organisations=None) | Q(organisations__in=partners_org)
73 )
74
75 serializer = ReportSerializer(reports.distinct(), many=True)
76 return Response(serializer.data)
77
[end of akvo/rest/views/report.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/views/report.py b/akvo/rest/views/report.py
--- a/akvo/rest/views/report.py
+++ b/akvo/rest/views/report.py
@@ -58,7 +58,7 @@
project = get_object_or_404(Project, pk=project_pk)
reports = Report.objects.prefetch_related('formats', 'organisations')\
- .filter(url__icontains='project')
+ .filter(url__icontains='{project}')
user = request.user
if not user.has_perm('rsr.view_project', project):
| {"golden_diff": "diff --git a/akvo/rest/views/report.py b/akvo/rest/views/report.py\n--- a/akvo/rest/views/report.py\n+++ b/akvo/rest/views/report.py\n@@ -58,7 +58,7 @@\n \n project = get_object_or_404(Project, pk=project_pk)\n reports = Report.objects.prefetch_related('formats', 'organisations')\\\n- .filter(url__icontains='project')\n+ .filter(url__icontains='{project}')\n \n user = request.user\n if not user.has_perm('rsr.view_project', project):\n", "issue": "Organisation report shown in project reports page\nThe \"Project overview\" report is displayed on the project report page, which is an organisation report and should not be displayed on the project report page.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.db.models import Q\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework import status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\nfrom akvo.rsr.models import Report, ReportFormat, Project\nfrom ..serializers import ReportSerializer, ReportFormatSerializer\nfrom ..viewsets import BaseRSRViewSet\n\n\nclass ReportViewSet(BaseRSRViewSet):\n \"\"\"Viewset providing Result data.\"\"\"\n\n queryset = Report.objects.prefetch_related(\n 'organisations',\n 'formats',\n )\n serializer_class = ReportSerializer\n\n def get_queryset(self):\n \"\"\"\n Allow custom filter for sync_owner, since this field has been replaced by the\n reporting org partnership.\n \"\"\"\n reports = super(ReportViewSet, self).get_queryset()\n user = self.request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if not is_admin:\n # Show only those reports that the user is allowed to see\n approved_orgs = user.approved_organisations() if not user.is_anonymous() else []\n reports = reports.filter(\n Q(organisations=None) | Q(organisations__in=approved_orgs)\n ).distinct()\n return reports\n\n\n@api_view(['GET'])\ndef report_formats(request):\n \"\"\"\n A view for displaying all report format information.\n \"\"\"\n return Response({\n 'count': ReportFormat.objects.all().count(),\n 'results': [ReportFormatSerializer(f).data for f in ReportFormat.objects.all()],\n })\n\n\n@api_view(['GET'])\ndef project_reports(request, project_pk):\n \"\"\"A view for displaying project specific reports.\"\"\"\n\n project = get_object_or_404(Project, pk=project_pk)\n reports = Report.objects.prefetch_related('formats', 'organisations')\\\n .filter(url__icontains='project')\n\n user = request.user\n if not user.has_perm('rsr.view_project', project):\n return Response('Request not allowed', status=status.HTTP_403_FORBIDDEN)\n\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n\n if not is_admin:\n partners_org = project.partner_organisation_pks()\n reports = reports.filter(\n Q(organisations=None) | Q(organisations__in=partners_org)\n )\n\n serializer = ReportSerializer(reports.distinct(), many=True)\n return Response(serializer.data)\n", "path": "akvo/rest/views/report.py"}]} | 1,314 | 128 |
gh_patches_debug_16617 | rasdani/github-patches | git_diff | OCA__bank-payment-900 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[14.0] account_payment_purchase: changing many2one resets payment_mode
Hi,
We've seen that when the field purchase_vendor_bill_id is changed and a purchase is selected from it, the payment_mode_id is always reset because it is using the reference purchase_id.
`new_mode = self.purchase_id.payment_mode_id.id or False`
We've made this change, and it seems to work as it should.
`new_mode = self.purchase_vendor_bill_id.purchase_order_id.payment_mode_id.id or False`
The same goes for the partner_bank_id field.
@MiquelRForgeFlow
</issue>
<code>
[start of account_payment_purchase/models/account_invoice.py]
1 # Copyright 2016 Akretion (<http://www.akretion.com>).
2 # Copyright 2017 Tecnativa - Vicent Cubells.
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
4
5 from odoo import _, api, models
6
7
8 class AccountMove(models.Model):
9 _inherit = "account.move"
10
11 @api.onchange("purchase_vendor_bill_id", "purchase_id")
12 def _onchange_purchase_auto_complete(self):
13 new_mode = self.purchase_id.payment_mode_id.id or False
14 new_bank = self.purchase_id.supplier_partner_bank_id.id or False
15 res = super()._onchange_purchase_auto_complete() or {}
16 if self.payment_mode_id and new_mode and self.payment_mode_id.id != new_mode:
17 res["warning"] = {
18 "title": _("Warning"),
19 "message": _("Selected purchase order have different payment mode."),
20 }
21 return res
22 self.payment_mode_id = new_mode
23 if self.partner_bank_id and new_bank and self.partner_bank_id.id != new_bank:
24 res["warning"] = {
25 "title": _("Warning"),
26 "message": _("Selected purchase order have different supplier bank."),
27 }
28 return res
29 self.partner_bank_id = new_bank
30 return res
31
[end of account_payment_purchase/models/account_invoice.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/account_payment_purchase/models/account_invoice.py b/account_payment_purchase/models/account_invoice.py
--- a/account_payment_purchase/models/account_invoice.py
+++ b/account_payment_purchase/models/account_invoice.py
@@ -10,8 +10,16 @@
@api.onchange("purchase_vendor_bill_id", "purchase_id")
def _onchange_purchase_auto_complete(self):
- new_mode = self.purchase_id.payment_mode_id.id or False
- new_bank = self.purchase_id.supplier_partner_bank_id.id or False
+
+ new_mode = (
+ self.purchase_vendor_bill_id.purchase_order_id.payment_mode_id.id
+ or self.purchase_id.payment_mode_id.id
+ )
+ new_bank = (
+ self.purchase_vendor_bill_id.purchase_order_id.supplier_partner_bank_id.id
+ or self.purchase_id.supplier_partner_bank_id.id
+ )
+
res = super()._onchange_purchase_auto_complete() or {}
if self.payment_mode_id and new_mode and self.payment_mode_id.id != new_mode:
res["warning"] = {
| {"golden_diff": "diff --git a/account_payment_purchase/models/account_invoice.py b/account_payment_purchase/models/account_invoice.py\n--- a/account_payment_purchase/models/account_invoice.py\n+++ b/account_payment_purchase/models/account_invoice.py\n@@ -10,8 +10,16 @@\n \n @api.onchange(\"purchase_vendor_bill_id\", \"purchase_id\")\n def _onchange_purchase_auto_complete(self):\n- new_mode = self.purchase_id.payment_mode_id.id or False\n- new_bank = self.purchase_id.supplier_partner_bank_id.id or False\n+\n+ new_mode = (\n+ self.purchase_vendor_bill_id.purchase_order_id.payment_mode_id.id\n+ or self.purchase_id.payment_mode_id.id\n+ )\n+ new_bank = (\n+ self.purchase_vendor_bill_id.purchase_order_id.supplier_partner_bank_id.id\n+ or self.purchase_id.supplier_partner_bank_id.id\n+ )\n+\n res = super()._onchange_purchase_auto_complete() or {}\n if self.payment_mode_id and new_mode and self.payment_mode_id.id != new_mode:\n res[\"warning\"] = {\n", "issue": "[14.0] account_payment_purchase: changing many2one resets payment_mode\nHi,\r\nWe've seen that when the field purchase_vendor_bill_id is changed and a purchase is selected from it, the payment_mode_id is always reseted because it is using the reference purchase_id.\r\n`new_mode = self.purchase_id.payment_mode_id.id or False`\r\nWe've made this change, and it seems to work as it should.\r\n`new_mode = self.purchase_vendor_bill_id.purchase_order_id.payment_mode_id.id or False`\r\nThe same goes for the partner_bank_id field.\r\n@MiquelRForgeFlow \n", "before_files": [{"content": "# Copyright 2016 Akretion (<http://www.akretion.com>).\n# Copyright 2017 Tecnativa - Vicent Cubells.\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\n\nfrom odoo import _, api, models\n\n\nclass AccountMove(models.Model):\n _inherit = \"account.move\"\n\n @api.onchange(\"purchase_vendor_bill_id\", \"purchase_id\")\n def _onchange_purchase_auto_complete(self):\n new_mode = self.purchase_id.payment_mode_id.id or False\n new_bank = self.purchase_id.supplier_partner_bank_id.id or False\n res = super()._onchange_purchase_auto_complete() or {}\n if self.payment_mode_id and new_mode and self.payment_mode_id.id != new_mode:\n res[\"warning\"] = {\n \"title\": _(\"Warning\"),\n \"message\": _(\"Selected purchase order have different payment mode.\"),\n }\n return res\n self.payment_mode_id = new_mode\n if self.partner_bank_id and new_bank and self.partner_bank_id.id != new_bank:\n res[\"warning\"] = {\n \"title\": _(\"Warning\"),\n \"message\": _(\"Selected purchase order have different supplier bank.\"),\n }\n return res\n self.partner_bank_id = new_bank\n return res\n", "path": "account_payment_purchase/models/account_invoice.py"}]} | 994 | 227 |
gh_patches_debug_21021 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-593 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make creator a readonly field in django admin
With a large user base, the dropdown used for the user selection in django admin becomes unresponsive. As there is no apparent reason to change the creator of an object (comment, item, rate, poll, ...), the creator field should be made read_only.
The same problem occurs for every model where a user is set as a foreign key (Action: actor, Projects: member/moderator, Organisation: initiator)
The readonly property can either be set on every Admin class by setting ` readonly_fields = ('creator',)` individually or by using a custom `A4Admin(admin.ModelAdmin)` which has to be set as the parent for every Admin class used.
</issue>
<code>
[start of apps/polls/admin.py]
1 from django.contrib import admin
2
3 from . import models
4
5
6 class ChoiceInline(admin.TabularInline):
7 model = models.Choice
8
9
10 class QuestionAdmin(admin.ModelAdmin):
11 inlines = [
12 ChoiceInline
13 ]
14
15
16 class VoteAdmin(admin.ModelAdmin):
17 list_filter = ('choice__question',)
18
19
20 admin.site.register(models.Question, QuestionAdmin)
21 admin.site.register(models.Vote, VoteAdmin)
22
[end of apps/polls/admin.py]
[start of apps/documents/admin.py]
1 from django.contrib import admin
2
3 from . import models
4
5
6 class ParagraphAdmin(admin.ModelAdmin):
7 list_filter = ('chapter',)
8
9
10 admin.site.register(models.Chapter)
11 admin.site.register(models.Paragraph, ParagraphAdmin)
12
[end of apps/documents/admin.py]
[start of apps/offlineevents/admin.py]
1 from django.contrib import admin
2
3 from . import models
4
5 admin.site.register(models.OfflineEvent, admin.ModelAdmin)
6
[end of apps/offlineevents/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/documents/admin.py b/apps/documents/admin.py
--- a/apps/documents/admin.py
+++ b/apps/documents/admin.py
@@ -3,9 +3,12 @@
from . import models
[email protected](models.Paragraph)
class ParagraphAdmin(admin.ModelAdmin):
list_filter = ('chapter',)
+ readonly_fields = ('creator',)
-admin.site.register(models.Chapter)
-admin.site.register(models.Paragraph, ParagraphAdmin)
[email protected](models.Chapter)
+class ChapterAdmin(admin.ModelAdmin):
+ readonly_fields = ('creator', )
diff --git a/apps/offlineevents/admin.py b/apps/offlineevents/admin.py
--- a/apps/offlineevents/admin.py
+++ b/apps/offlineevents/admin.py
@@ -2,4 +2,7 @@
from . import models
-admin.site.register(models.OfflineEvent, admin.ModelAdmin)
+
[email protected](models.OfflineEvent)
+class OfflineEventAdmin(admin.ModelAdmin):
+ readonly_fields = ('creator', )
diff --git a/apps/polls/admin.py b/apps/polls/admin.py
--- a/apps/polls/admin.py
+++ b/apps/polls/admin.py
@@ -7,15 +7,8 @@
model = models.Choice
[email protected](models.Question)
class QuestionAdmin(admin.ModelAdmin):
inlines = [
ChoiceInline
]
-
-
-class VoteAdmin(admin.ModelAdmin):
- list_filter = ('choice__question',)
-
-
-admin.site.register(models.Question, QuestionAdmin)
-admin.site.register(models.Vote, VoteAdmin)
| {"golden_diff": "diff --git a/apps/documents/admin.py b/apps/documents/admin.py\n--- a/apps/documents/admin.py\n+++ b/apps/documents/admin.py\n@@ -3,9 +3,12 @@\n from . import models\n \n \[email protected](models.Paragraph)\n class ParagraphAdmin(admin.ModelAdmin):\n list_filter = ('chapter',)\n+ readonly_fields = ('creator',)\n \n \n-admin.site.register(models.Chapter)\n-admin.site.register(models.Paragraph, ParagraphAdmin)\[email protected](models.Chapter)\n+class ChapterAdmin(admin.ModelAdmin):\n+ readonly_fields = ('creator', )\ndiff --git a/apps/offlineevents/admin.py b/apps/offlineevents/admin.py\n--- a/apps/offlineevents/admin.py\n+++ b/apps/offlineevents/admin.py\n@@ -2,4 +2,7 @@\n \n from . import models\n \n-admin.site.register(models.OfflineEvent, admin.ModelAdmin)\n+\[email protected](models.OfflineEvent)\n+class OfflineEventAdmin(admin.ModelAdmin):\n+ readonly_fields = ('creator', )\ndiff --git a/apps/polls/admin.py b/apps/polls/admin.py\n--- a/apps/polls/admin.py\n+++ b/apps/polls/admin.py\n@@ -7,15 +7,8 @@\n model = models.Choice\n \n \[email protected](models.Question)\n class QuestionAdmin(admin.ModelAdmin):\n inlines = [\n ChoiceInline\n ]\n-\n-\n-class VoteAdmin(admin.ModelAdmin):\n- list_filter = ('choice__question',)\n-\n-\n-admin.site.register(models.Question, QuestionAdmin)\n-admin.site.register(models.Vote, VoteAdmin)\n", "issue": "Make creator a readonly field in django admin\nWith a large user base the dropdown used for the user selection in django admin becomes unresponsive. As there is no apparent reason to change the creator of an object (comment, item, rate, poll, ...) the creator field should be made read_only.\r\n\r\nThe same problem occurs for every model where a user is set as a foreign key (Action: actor, Projects: member/moderator, Organisation: initiator)\r\n\r\nThe readonly property can either be set on every Admin class by setting ` readonly_fields = ('creator',)` individually or by using a custom `A4Admin(admin.ModelAdmin)` which has to be set as the parent for every Admin class used.\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom . import models\n\n\nclass ChoiceInline(admin.TabularInline):\n model = models.Choice\n\n\nclass QuestionAdmin(admin.ModelAdmin):\n inlines = [\n ChoiceInline\n ]\n\n\nclass VoteAdmin(admin.ModelAdmin):\n list_filter = ('choice__question',)\n\n\nadmin.site.register(models.Question, QuestionAdmin)\nadmin.site.register(models.Vote, VoteAdmin)\n", "path": "apps/polls/admin.py"}, {"content": "from django.contrib import admin\n\nfrom . import models\n\n\nclass ParagraphAdmin(admin.ModelAdmin):\n list_filter = ('chapter',)\n\n\nadmin.site.register(models.Chapter)\nadmin.site.register(models.Paragraph, ParagraphAdmin)\n", "path": "apps/documents/admin.py"}, {"content": "from django.contrib import admin\n\nfrom . import models\n\nadmin.site.register(models.OfflineEvent, admin.ModelAdmin)\n", "path": "apps/offlineevents/admin.py"}]} | 921 | 321 |
gh_patches_debug_24554 | rasdani/github-patches | git_diff | litestar-org__litestar-174 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move `requests` to `testing` extra
I was inspecting starlight dependencies and was confused to see `requests` being required without any obvious reason.
I found #13 then, but I disagree with the resolution since I believe libs required for testing purposes should not be installed for normal use. Now it is only one lib (with several dependencies), but imagine if more dependencies were added for testing.
#### What I propose:
1. Move requests from required dependencies to `testing` extra (`pip install starlight[testing]`)
1. Remove import of `starlite.testing` from `starlite` package
2. When starlight is imported explicitly (`from starlight import testint`), check for requests installed. If not, raise `RuntimeError("To access starlight.testing install starlight with [testing] extra")`
How would `pyproject.toml` of end user look like:
```toml
[tool.poetry.dependencies]
python = "^3.10"
starlite = "^1.3.9"
[tool.poetry.dev-dependencies]
starlite = {extras = ["testing"], version = "*"} # whatever version is installed + testing dependencies
pytest = "^5.2"
```
I can send a PR if changes are welcomed.
</issue>
<code>
[start of starlite/__init__.py]
1 from starlite.datastructures import File, Redirect, State, Stream, Template
2
3 from .app import Starlite
4 from .config import (
5 CacheConfig,
6 CORSConfig,
7 OpenAPIConfig,
8 StaticFilesConfig,
9 TemplateConfig,
10 )
11 from .connection import Request, WebSocket
12 from .controller import Controller
13 from .dto import DTOFactory
14 from .enums import (
15 HttpMethod,
16 MediaType,
17 OpenAPIMediaType,
18 RequestEncodingType,
19 ScopeType,
20 )
21 from .exceptions import (
22 HTTPException,
23 ImproperlyConfiguredException,
24 InternalServerException,
25 MissingDependencyException,
26 NotAuthorizedException,
27 NotFoundException,
28 PermissionDeniedException,
29 ServiceUnavailableException,
30 StarLiteException,
31 ValidationException,
32 )
33 from .handlers import (
34 ASGIRouteHandler,
35 BaseRouteHandler,
36 HTTPRouteHandler,
37 WebsocketRouteHandler,
38 asgi,
39 delete,
40 get,
41 patch,
42 post,
43 put,
44 route,
45 websocket,
46 )
47 from .logging import LoggingConfig, QueueListenerHandler
48 from .middleware import AbstractAuthenticationMiddleware, AuthenticationResult
49 from .openapi.controller import OpenAPIController
50 from .params import Body, Dependency, Parameter
51 from .plugins import PluginProtocol
52 from .provide import Provide
53 from .response import Response
54 from .router import Router
55 from .routes import BaseRoute, HTTPRoute, WebSocketRoute
56 from .testing import TestClient, create_test_client, create_test_request
57 from .types import MiddlewareProtocol, Partial, ResponseHeader
58
59 __all__ = [
60 "ASGIRouteHandler",
61 "AbstractAuthenticationMiddleware",
62 "AuthenticationResult",
63 "BaseRoute",
64 "BaseRouteHandler",
65 "Body",
66 "CORSConfig",
67 "CacheConfig",
68 "Controller",
69 "Dependency",
70 "DTOFactory",
71 "File",
72 "HTTPException",
73 "HTTPRoute",
74 "HTTPRouteHandler",
75 "HttpMethod",
76 "ImproperlyConfiguredException",
77 "InternalServerException",
78 "LoggingConfig",
79 "MediaType",
80 "MiddlewareProtocol",
81 "MissingDependencyException",
82 "NotAuthorizedException",
83 "NotFoundException",
84 "OpenAPIConfig",
85 "OpenAPIController",
86 "OpenAPIMediaType",
87 "Parameter",
88 "Partial",
89 "PermissionDeniedException",
90 "PluginProtocol",
91 "Provide",
92 "QueueListenerHandler",
93 "Redirect",
94 "Request",
95 "RequestEncodingType",
96 "Response",
97 "ResponseHeader",
98 "Router",
99 "ScopeType",
100 "ServiceUnavailableException",
101 "StarLiteException",
102 "Starlite",
103 "State",
104 "StaticFilesConfig",
105 "Stream",
106 "Template",
107 "TemplateConfig",
108 "TestClient",
109 "ValidationException",
110 "WebSocket",
111 "WebSocketRoute",
112 "WebsocketRouteHandler",
113 "asgi",
114 "create_test_client",
115 "create_test_request",
116 "delete",
117 "get",
118 "patch",
119 "post",
120 "put",
121 "route",
122 "websocket",
123 ]
124
[end of starlite/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlite/__init__.py b/starlite/__init__.py
--- a/starlite/__init__.py
+++ b/starlite/__init__.py
@@ -1,3 +1,5 @@
+from typing import TYPE_CHECKING, Any
+
from starlite.datastructures import File, Redirect, State, Stream, Template
from .app import Starlite
@@ -53,9 +55,12 @@
from .response import Response
from .router import Router
from .routes import BaseRoute, HTTPRoute, WebSocketRoute
-from .testing import TestClient, create_test_client, create_test_request
from .types import MiddlewareProtocol, Partial, ResponseHeader
+if TYPE_CHECKING:
+ from .testing import TestClient, create_test_client, create_test_request
+
+
__all__ = [
"ASGIRouteHandler",
"AbstractAuthenticationMiddleware",
@@ -121,3 +126,17 @@
"route",
"websocket",
]
+
+_dynamic_imports = {"TestClient", "create_test_client", "create_test_request"}
+
+
+# pylint: disable=import-outside-toplevel
+def __getattr__(name: str) -> Any:
+ """Provide lazy importing as per https://peps.python.org/pep-0562/"""
+ if name not in _dynamic_imports:
+ raise AttributeError(f"Module {__package__} has no attribute {name}")
+
+ from . import testing
+
+ attr = globals()[name] = getattr(testing, name)
+ return attr
| {"golden_diff": "diff --git a/starlite/__init__.py b/starlite/__init__.py\n--- a/starlite/__init__.py\n+++ b/starlite/__init__.py\n@@ -1,3 +1,5 @@\n+from typing import TYPE_CHECKING, Any\n+\n from starlite.datastructures import File, Redirect, State, Stream, Template\n \n from .app import Starlite\n@@ -53,9 +55,12 @@\n from .response import Response\n from .router import Router\n from .routes import BaseRoute, HTTPRoute, WebSocketRoute\n-from .testing import TestClient, create_test_client, create_test_request\n from .types import MiddlewareProtocol, Partial, ResponseHeader\n \n+if TYPE_CHECKING:\n+ from .testing import TestClient, create_test_client, create_test_request\n+\n+\n __all__ = [\n \"ASGIRouteHandler\",\n \"AbstractAuthenticationMiddleware\",\n@@ -121,3 +126,17 @@\n \"route\",\n \"websocket\",\n ]\n+\n+_dynamic_imports = {\"TestClient\", \"create_test_client\", \"create_test_request\"}\n+\n+\n+# pylint: disable=import-outside-toplevel\n+def __getattr__(name: str) -> Any:\n+ \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n+ if name not in _dynamic_imports:\n+ raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n+\n+ from . import testing\n+\n+ attr = globals()[name] = getattr(testing, name)\n+ return attr\n", "issue": "Move `requests` to `testing` extra\nI was inspecting starlight dependencies and was confused to see `requests` being required without any obvious reason.\r\n\r\nI found #13 then, but I disagree with the resolution since I believe libs required for testing purposes should not be installed for normal use. Now it only one lib (with several dependencies), but imagine if more dependencies would be added for testing.\r\n\r\n#### What I propose:\r\n1. Move requests from required dependencies to `testing` extra (`pip install starlight[testing]`)\r\n1. Remove import of `starlite.testing` from `starlite` package\r\n2. When starlight is imported explicitly (`from starlight import testint`), check for requests installed. 
if not, raise `RuntimeError(\"To access starlight.testing install starlight with [testing] extra\")`\r\n\r\nHow would `pyproject.toml` of end user look like:\r\n```toml\r\n[tool.poetry.dependencies]\r\npython = \"^3.10\"\r\nstarlite = \"^1.3.9\"\r\n\r\n[tool.poetry.dev-dependencies]\r\nstarlite = {extras = [\"testing\"], version = \"*\"} # whatever version is installed + testing dependencies\r\npytest = \"^5.2\"\r\n```\r\n\r\n\r\nI can send a PR if changes are welcomed.\n", "before_files": [{"content": "from starlite.datastructures import File, Redirect, State, Stream, Template\n\nfrom .app import Starlite\nfrom .config import (\n CacheConfig,\n CORSConfig,\n OpenAPIConfig,\n StaticFilesConfig,\n TemplateConfig,\n)\nfrom .connection import Request, WebSocket\nfrom .controller import Controller\nfrom .dto import DTOFactory\nfrom .enums import (\n HttpMethod,\n MediaType,\n OpenAPIMediaType,\n RequestEncodingType,\n ScopeType,\n)\nfrom .exceptions import (\n HTTPException,\n ImproperlyConfiguredException,\n InternalServerException,\n MissingDependencyException,\n NotAuthorizedException,\n NotFoundException,\n PermissionDeniedException,\n ServiceUnavailableException,\n StarLiteException,\n ValidationException,\n)\nfrom .handlers import (\n ASGIRouteHandler,\n BaseRouteHandler,\n HTTPRouteHandler,\n WebsocketRouteHandler,\n asgi,\n delete,\n get,\n patch,\n post,\n put,\n route,\n websocket,\n)\nfrom .logging import LoggingConfig, QueueListenerHandler\nfrom .middleware import AbstractAuthenticationMiddleware, AuthenticationResult\nfrom .openapi.controller import OpenAPIController\nfrom .params import Body, Dependency, Parameter\nfrom .plugins import PluginProtocol\nfrom .provide import Provide\nfrom .response import Response\nfrom .router import Router\nfrom .routes import BaseRoute, HTTPRoute, WebSocketRoute\nfrom .testing import TestClient, create_test_client, create_test_request\nfrom .types import MiddlewareProtocol, Partial, ResponseHeader\n\n__all__ = [\n \"ASGIRouteHandler\",\n \"AbstractAuthenticationMiddleware\",\n \"AuthenticationResult\",\n \"BaseRoute\",\n \"BaseRouteHandler\",\n \"Body\",\n \"CORSConfig\",\n \"CacheConfig\",\n \"Controller\",\n \"Dependency\",\n \"DTOFactory\",\n \"File\",\n \"HTTPException\",\n \"HTTPRoute\",\n \"HTTPRouteHandler\",\n \"HttpMethod\",\n \"ImproperlyConfiguredException\",\n \"InternalServerException\",\n \"LoggingConfig\",\n \"MediaType\",\n \"MiddlewareProtocol\",\n \"MissingDependencyException\",\n \"NotAuthorizedException\",\n \"NotFoundException\",\n \"OpenAPIConfig\",\n \"OpenAPIController\",\n \"OpenAPIMediaType\",\n \"Parameter\",\n \"Partial\",\n \"PermissionDeniedException\",\n \"PluginProtocol\",\n \"Provide\",\n \"QueueListenerHandler\",\n \"Redirect\",\n \"Request\",\n \"RequestEncodingType\",\n \"Response\",\n \"ResponseHeader\",\n \"Router\",\n \"ScopeType\",\n \"ServiceUnavailableException\",\n \"StarLiteException\",\n \"Starlite\",\n \"State\",\n \"StaticFilesConfig\",\n \"Stream\",\n \"Template\",\n \"TemplateConfig\",\n \"TestClient\",\n \"ValidationException\",\n \"WebSocket\",\n \"WebSocketRoute\",\n \"WebsocketRouteHandler\",\n \"asgi\",\n \"create_test_client\",\n \"create_test_request\",\n \"delete\",\n \"get\",\n \"patch\",\n \"post\",\n \"put\",\n \"route\",\n \"websocket\",\n]\n", "path": "starlite/__init__.py"}]} | 1,710 | 341 |
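
The golden diff above leans on PEP 562's module-level `__getattr__`. A self-contained sketch of the mechanism, independent of Starlite (`pkg` and its `testing` submodule are placeholder names):

```python
# pkg/__init__.py -- lazy re-export, resolved only on first attribute access.
from typing import Any

_lazy = {"TestClient"}

def __getattr__(name: str) -> Any:
    if name not in _lazy:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
    from . import testing  # deferred import: heavy deps load only here
    value = getattr(testing, name)
    globals()[name] = value  # cache so __getattr__ runs at most once per name
    return value
```
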
gh_patches_debug_13005 | rasdani/github-patches | git_diff | tensorflow__tfx-25 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
base_component.BaseComponent.__str__ method raising KeyError
I got a `KeyError` when calling this method: `base_component.BaseComponent.__str__`
Here is the code to reproduce:
```python
import os
from tfx.utils.dsl_utils import csv_input
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
_taxi_root = os.path.join(os.environ['HOME'], 'taxi')
_data_root = os.path.join(_taxi_root, 'data/simple')
examples = csv_input(_data_root)
example_gen = CsvExampleGen(input_base=examples)
print(example_gen)
```
The error trace is:
```
/Users/alelevier/Documents/github/tfx/tfx/components/base/base_component.pyc in __str__(self)
89 input_dict=self.input_dict,
90 outputs=self.outputs,
---> 91 exec_properties=self.exec_properties)
92
93 def __repr__(self):
KeyError: '\n component_name'
```
I looked at the method; it needs to use double `{{` and `}}`, so change from:
```
def __str__(self):
return """
{
component_name: {component_name},
unique_name: {unique_name},
driver: {driver},
executor: {executor},
input_dict: {input_dict},
outputs: {outputs},
exec_properties: {exec_properties}
}
""".format( # pylint: disable=missing-format-argument-key
component_name=self.component_name,
unique_name=self.unique_name,
driver=self.driver,
executor=self.executor,
input_dict=self.input_dict,
outputs=self.outputs,
exec_properties=self.exec_properties)
```
To:
```
def __str__(self):
return """
{{
component_name: {component_name},
unique_name: {unique_name},
driver: {driver},
executor: {executor},
input_dict: {input_dict},
outputs: {outputs},
exec_properties: {exec_properties}
}}
""".format( # pylint: disable=missing-format-argument-key
component_name=self.component_name,
unique_name=self.unique_name,
driver=self.driver,
executor=self.executor,
input_dict=self.input_dict,
outputs=self.outputs,
exec_properties=self.exec_properties)
```
</issue>
<code>
[start of tfx/components/base/base_component.py]
1 # Copyright 2019 Google LLC. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Base class for all TFX components."""
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 import abc
21 from six import with_metaclass
22 from typing import Any
23 from typing import Dict
24 from typing import Optional
25 from typing import Text
26
27 from tfx.utils import channel
28
29
30 class ComponentOutputs(object):
31 """Helper class to wrap outputs from TFX components."""
32
33 def __init__(self, d):
34 self.__dict__ = d
35
36 def get_all(self):
37 return self.__dict__
38
39
40 class BaseComponent(with_metaclass(abc.ABCMeta, object)):
41 """Base TFX component.
42
43 This is the parent class of any TFX component.
44
45 Attributes:
46 component_name: Name of the component, should be unique per component class.
47 unique_name: Unique name for every component class instance.
48 driver: Driver class to handle pre-execution behaviors in a component.
49 executor: Executor class to do the real execution work.
50 input_dict: A [Text -> Channel] dict serving as the inputs to the component.
51 exec_properties: A [Text -> Any] dict serving as additional properties
52 needed for execution.
53 outputs: Optional Channel destinations of the component.
54 """
55
56 def __init__(self,
57 component_name,
58 driver,
59 executor,
60 input_dict,
61 exec_properties,
62 unique_name = '',
63 outputs = ComponentOutputs({})):
64 self.component_name = component_name
65 self.driver = driver
66 self.executor = executor
67 self.input_dict = input_dict
68 self.exec_properties = exec_properties
69 self.unique_name = unique_name
70 self.outputs = outputs or self._create_outputs()
71 self._type_check(self.input_dict, self.exec_properties)
72
73 def __str__(self):
74 return """
75 {
76 component_name: {component_name},
77 unique_name: {unique_name},
78 driver: {driver},
79 executor: {executor},
80 input_dict: {input_dict},
81 outputs: {outputs},
82 exec_properties: {exec_properties}
83 }
84 """.format( # pylint: disable=missing-format-argument-key
85 component_name=self.component_name,
86 unique_name=self.unique_name,
87 driver=self.driver,
88 executor=self.executor,
89 input_dict=self.input_dict,
90 outputs=self.outputs,
91 exec_properties=self.exec_properties)
92
93 def __repr__(self):
94 return self.__str__()
95
96 @abc.abstractmethod
97 def _create_outputs(self):
98 """Creates outputs placeholder for components.
99
100 Returns:
101 ComponentOutputs object containing the dict of [Text -> Channel]
102 """
103 raise NotImplementedError
104
105 @abc.abstractmethod
106 def _type_check(self, input_dict,
107 exec_properties):
108 """Does type checking for the inputs and exec_properties.
109
110 Args:
111 input_dict: A Dict[Text, Channel] as the inputs of the Component.
112 exec_properties: A Dict[Text, Any] as the execution properties of the
113 component.
114 """
115 raise NotImplementedError
116
[end of tfx/components/base/base_component.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tfx/components/base/base_component.py b/tfx/components/base/base_component.py
--- a/tfx/components/base/base_component.py
+++ b/tfx/components/base/base_component.py
@@ -72,7 +72,7 @@
def __str__(self):
return """
-{
+{{
component_name: {component_name},
unique_name: {unique_name},
driver: {driver},
@@ -80,7 +80,7 @@
input_dict: {input_dict},
outputs: {outputs},
exec_properties: {exec_properties}
-}
+}}
""".format( # pylint: disable=missing-format-argument-key
component_name=self.component_name,
unique_name=self.unique_name,
| {"golden_diff": "diff --git a/tfx/components/base/base_component.py b/tfx/components/base/base_component.py\n--- a/tfx/components/base/base_component.py\n+++ b/tfx/components/base/base_component.py\n@@ -72,7 +72,7 @@\n \n def __str__(self):\n return \"\"\"\n-{\n+{{\n component_name: {component_name},\n unique_name: {unique_name},\n driver: {driver},\n@@ -80,7 +80,7 @@\n input_dict: {input_dict},\n outputs: {outputs},\n exec_properties: {exec_properties}\n-}\n+}}\n \"\"\".format( # pylint: disable=missing-format-argument-key\n component_name=self.component_name,\n unique_name=self.unique_name,\n", "issue": "base_component.BaseComponent.__str__ method raising KeyError\nI got a `KeyError` when calling this method: `base_component.BaseComponent.__str__`\r\n\r\nHere is the code to reproduce:\r\n\r\n```python\r\nimport os\r\nfrom tfx.utils.dsl_utils import csv_input\r\nfrom tfx.components.example_gen.csv_example_gen.component import CsvExampleGen\r\n\r\n_taxi_root = os.path.join(os.environ['HOME'], 'taxi')\r\n_data_root = os.path.join(_taxi_root, 'data/simple')\r\nexamples = csv_input(_data_root)\r\nexample_gen = CsvExampleGen(input_base=examples)\r\nprint(example_gen)\r\n```\r\n\r\nThe error trace is:\r\n\r\n```\r\n/Users/alelevier/Documents/github/tfx/tfx/components/base/base_component.pyc in __str__(self)\r\n 89 input_dict=self.input_dict,\r\n 90 outputs=self.outputs,\r\n---> 91 exec_properties=self.exec_properties)\r\n 92 \r\n 93 def __repr__(self):\r\n\r\nKeyError: '\\n component_name'\r\n```\r\n\r\nI looked at the method, it needs use double `{{` and `}}` so change from:\r\n\r\n```\r\n def __str__(self):\r\n return \"\"\"\r\n{\r\n component_name: {component_name},\r\n unique_name: {unique_name},\r\n driver: {driver},\r\n executor: {executor},\r\n input_dict: {input_dict},\r\n outputs: {outputs},\r\n exec_properties: {exec_properties}\r\n}\r\n \"\"\".format( # pylint: disable=missing-format-argument-key\r\n component_name=self.component_name,\r\n unique_name=self.unique_name,\r\n driver=self.driver,\r\n executor=self.executor,\r\n input_dict=self.input_dict,\r\n outputs=self.outputs,\r\n exec_properties=self.exec_properties)\r\n```\r\n\r\nTo:\r\n\r\n```\r\n def __str__(self):\r\n return \"\"\"\r\n{{\r\n component_name: {component_name},\r\n unique_name: {unique_name},\r\n driver: {driver},\r\n executor: {executor},\r\n input_dict: {input_dict},\r\n outputs: {outputs},\r\n exec_properties: {exec_properties}\r\n}}\r\n \"\"\".format( # pylint: disable=missing-format-argument-key\r\n component_name=self.component_name,\r\n unique_name=self.unique_name,\r\n driver=self.driver,\r\n executor=self.executor,\r\n input_dict=self.input_dict,\r\n outputs=self.outputs,\r\n exec_properties=self.exec_properties)\r\n```\n", "before_files": [{"content": "# Copyright 2019 Google LLC. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Base class for all TFX components.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport abc\nfrom six import with_metaclass\nfrom typing import Any\nfrom typing import Dict\nfrom typing import Optional\nfrom typing import Text\n\nfrom tfx.utils import channel\n\n\nclass ComponentOutputs(object):\n \"\"\"Helper class to wrap outputs from TFX components.\"\"\"\n\n def __init__(self, d):\n self.__dict__ = d\n\n def get_all(self):\n return self.__dict__\n\n\nclass BaseComponent(with_metaclass(abc.ABCMeta, object)):\n \"\"\"Base TFX component.\n\n This is the parent class of any TFX component.\n\n Attributes:\n component_name: Name of the component, should be unique per component class.\n unique_name: Unique name for every component class instance.\n driver: Driver class to handle pre-execution behaviors in a component.\n executor: Executor class to do the real execution work.\n input_dict: A [Text -> Channel] dict serving as the inputs to the component.\n exec_properties: A [Text -> Any] dict serving as additional properties\n needed for execution.\n outputs: Optional Channel destinations of the component.\n \"\"\"\n\n def __init__(self,\n component_name,\n driver,\n executor,\n input_dict,\n exec_properties,\n unique_name = '',\n outputs = ComponentOutputs({})):\n self.component_name = component_name\n self.driver = driver\n self.executor = executor\n self.input_dict = input_dict\n self.exec_properties = exec_properties\n self.unique_name = unique_name\n self.outputs = outputs or self._create_outputs()\n self._type_check(self.input_dict, self.exec_properties)\n\n def __str__(self):\n return \"\"\"\n{\n component_name: {component_name},\n unique_name: {unique_name},\n driver: {driver},\n executor: {executor},\n input_dict: {input_dict},\n outputs: {outputs},\n exec_properties: {exec_properties}\n}\n \"\"\".format( # pylint: disable=missing-format-argument-key\n component_name=self.component_name,\n unique_name=self.unique_name,\n driver=self.driver,\n executor=self.executor,\n input_dict=self.input_dict,\n outputs=self.outputs,\n exec_properties=self.exec_properties)\n\n def __repr__(self):\n return self.__str__()\n\n @abc.abstractmethod\n def _create_outputs(self):\n \"\"\"Creates outputs placeholder for components.\n\n Returns:\n ComponentOutputs object containing the dict of [Text -> Channel]\n \"\"\"\n raise NotImplementedError\n\n @abc.abstractmethod\n def _type_check(self, input_dict,\n exec_properties):\n \"\"\"Does type checking for the inputs and exec_properties.\n\n Args:\n input_dict: A Dict[Text, Channel] as the inputs of the Component.\n exec_properties: A Dict[Text, Any] as the execution properties of the\n component.\n \"\"\"\n raise NotImplementedError\n", "path": "tfx/components/base/base_component.py"}]} | 2,044 | 159 |
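
The `KeyError` in the record above comes from `str.format` treating every `{...}` as a replacement field: an unescaped `{` makes it read everything up to the next `:` or `}` as a field name, which is why the reported key starts with a newline. A two-part demonstration:

```python
# Doubled braces are emitted literally; single braces mark replacement fields.
template = "{{ component_name: {component_name} }}"
print(template.format(component_name="CsvExampleGen"))  # { component_name: CsvExampleGen }

# The buggy single-brace form parses "\n  component_name" as one field name,
# reproducing the shape of the KeyError shown in the traceback above.
try:
    "{\n  component_name: {component_name}\n}".format(component_name="x")
except KeyError as err:
    print(err)  # '\n  component_name'
```
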
gh_patches_debug_14845 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1044 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`escalation.belongs_to` cache is broken
The problem is that the cache is checked with the incoming string type, but [the string type changes](https://github.com/HypothesisWorks/hypothesis-python/blob/3.44.1/src/hypothesis/internal/escalation.py#L41) before the value is inserted:
A simple (but not elegant) fix:
```diff
--- hypothesis/internal/escalation.py
+++ hypothesis/internal/escalation.py
@@ -34,13 +34,14 @@ def belongs_to(package):
cache = {text_type: {}, binary_type: {}}
def accept(filepath):
+ ftype = type(filepath)
try:
- return cache[type(filepath)][filepath]
+ return cache[ftype][filepath]
except KeyError:
pass
filepath = encoded_filepath(filepath)
result = os.path.abspath(filepath).startswith(root)
- cache[type(filepath)][filepath] = result
+ cache[ftype][filepath] = result
return result
accept.__name__ = 'is_%s_file' % (package.__name__,)
return accept
```
</issue>
<code>
[start of src/hypothesis/internal/escalation.py]
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2017 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import os
21 import sys
22
23 import coverage
24
25 import hypothesis
26 from hypothesis.errors import StopTest, DeadlineExceeded, \
27 HypothesisException, UnsatisfiedAssumption
28 from hypothesis.internal.compat import text_type, binary_type, \
29 encoded_filepath
30
31
32 def belongs_to(package):
33 root = os.path.dirname(package.__file__)
34 cache = {text_type: {}, binary_type: {}}
35
36 def accept(filepath):
37 try:
38 return cache[type(filepath)][filepath]
39 except KeyError:
40 pass
41 filepath = encoded_filepath(filepath)
42 result = os.path.abspath(filepath).startswith(root)
43 cache[type(filepath)][filepath] = result
44 return result
45 accept.__name__ = 'is_%s_file' % (package.__name__,)
46 return accept
47
48
49 PREVENT_ESCALATION = os.getenv('HYPOTHESIS_DO_NOT_ESCALATE') == 'true'
50
51 FILE_CACHE = {}
52
53
54 is_hypothesis_file = belongs_to(hypothesis)
55 is_coverage_file = belongs_to(coverage)
56
57 HYPOTHESIS_CONTROL_EXCEPTIONS = (
58 DeadlineExceeded, StopTest, UnsatisfiedAssumption
59 )
60
61
62 def mark_for_escalation(e):
63 if not isinstance(e, HYPOTHESIS_CONTROL_EXCEPTIONS):
64 e.hypothesis_internal_always_escalate = True
65
66
67 def escalate_hypothesis_internal_error():
68 if PREVENT_ESCALATION:
69 return
70 error_type, e, tb = sys.exc_info()
71 if getattr(e, 'hypothesis_internal_always_escalate', False):
72 raise
73 import traceback
74 filepath = traceback.extract_tb(tb)[-1][0]
75 if is_hypothesis_file(filepath) and not isinstance(
76 e, (HypothesisException,) + HYPOTHESIS_CONTROL_EXCEPTIONS,
77 ):
78 raise
79 # This is so that if we do something wrong and trigger an internal Coverage
80 # error we don't try to catch it. It should be impossible to trigger, but
81 # you never know.
82 if is_coverage_file(filepath): # pragma: no cover
83 raise
84
[end of src/hypothesis/internal/escalation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/hypothesis/internal/escalation.py b/src/hypothesis/internal/escalation.py
--- a/src/hypothesis/internal/escalation.py
+++ b/src/hypothesis/internal/escalation.py
@@ -34,13 +34,15 @@
cache = {text_type: {}, binary_type: {}}
def accept(filepath):
+ ftype = type(filepath)
try:
- return cache[type(filepath)][filepath]
+ return cache[ftype][filepath]
except KeyError:
pass
- filepath = encoded_filepath(filepath)
- result = os.path.abspath(filepath).startswith(root)
- cache[type(filepath)][filepath] = result
+ new_filepath = encoded_filepath(filepath)
+ result = os.path.abspath(new_filepath).startswith(root)
+ cache[ftype][filepath] = result
+ cache[type(new_filepath)][new_filepath] = result
return result
accept.__name__ = 'is_%s_file' % (package.__name__,)
return accept
| {"golden_diff": "diff --git a/src/hypothesis/internal/escalation.py b/src/hypothesis/internal/escalation.py\n--- a/src/hypothesis/internal/escalation.py\n+++ b/src/hypothesis/internal/escalation.py\n@@ -34,13 +34,15 @@\n cache = {text_type: {}, binary_type: {}}\n \n def accept(filepath):\n+ ftype = type(filepath)\n try:\n- return cache[type(filepath)][filepath]\n+ return cache[ftype][filepath]\n except KeyError:\n pass\n- filepath = encoded_filepath(filepath)\n- result = os.path.abspath(filepath).startswith(root)\n- cache[type(filepath)][filepath] = result\n+ new_filepath = encoded_filepath(filepath)\n+ result = os.path.abspath(new_filepath).startswith(root)\n+ cache[ftype][filepath] = result\n+ cache[type(new_filepath)][new_filepath] = result\n return result\n accept.__name__ = 'is_%s_file' % (package.__name__,)\n return accept\n", "issue": "`escalation.belongs_to` cache is broken\nThe problem is that the cache is checked with the incoming string type, but [the string type changes](https://github.com/HypothesisWorks/hypothesis-python/blob/3.44.1/src/hypothesis/internal/escalation.py#L41) before the value is inserted:\r\n\r\nA simple (but not elegant) fix:\r\n\r\n```diff\r\n--- hypothesis/internal/escalation.py\r\n+++ hypothesis/internal/escalation.py\r\n@@ -34,13 +34,14 @@ def belongs_to(package):\r\n cache = {text_type: {}, binary_type: {}}\r\n \r\n def accept(filepath):\r\n+ ftype = type(filepath)\r\n try:\r\n- return cache[type(filepath)][filepath]\r\n+ return cache[ftype][filepath]\r\n except KeyError:\r\n pass\r\n filepath = encoded_filepath(filepath)\r\n result = os.path.abspath(filepath).startswith(root)\r\n- cache[type(filepath)][filepath] = result\r\n+ cache[ftype][filepath] = result\r\n return result\r\n accept.__name__ = 'is_%s_file' % (package.__name__,)\r\n return accept\r\n```\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\n\nimport coverage\n\nimport hypothesis\nfrom hypothesis.errors import StopTest, DeadlineExceeded, \\\n HypothesisException, UnsatisfiedAssumption\nfrom hypothesis.internal.compat import text_type, binary_type, \\\n encoded_filepath\n\n\ndef belongs_to(package):\n root = os.path.dirname(package.__file__)\n cache = {text_type: {}, binary_type: {}}\n\n def accept(filepath):\n try:\n return cache[type(filepath)][filepath]\n except KeyError:\n pass\n filepath = encoded_filepath(filepath)\n result = os.path.abspath(filepath).startswith(root)\n cache[type(filepath)][filepath] = result\n return result\n accept.__name__ = 'is_%s_file' % (package.__name__,)\n return accept\n\n\nPREVENT_ESCALATION = os.getenv('HYPOTHESIS_DO_NOT_ESCALATE') == 'true'\n\nFILE_CACHE = {}\n\n\nis_hypothesis_file = belongs_to(hypothesis)\nis_coverage_file = belongs_to(coverage)\n\nHYPOTHESIS_CONTROL_EXCEPTIONS = (\n DeadlineExceeded, StopTest, UnsatisfiedAssumption\n)\n\n\ndef mark_for_escalation(e):\n if not isinstance(e, HYPOTHESIS_CONTROL_EXCEPTIONS):\n e.hypothesis_internal_always_escalate = True\n\n\ndef escalate_hypothesis_internal_error():\n if PREVENT_ESCALATION:\n return\n error_type, e, tb = sys.exc_info()\n if getattr(e, 'hypothesis_internal_always_escalate', False):\n raise\n import traceback\n filepath = traceback.extract_tb(tb)[-1][0]\n if is_hypothesis_file(filepath) and not isinstance(\n e, (HypothesisException,) + HYPOTHESIS_CONTROL_EXCEPTIONS,\n ):\n raise\n # This is so that if we do something wrong and trigger an internal Coverage\n # error we don't try to catch it. It should be impossible to trigger, but\n # you never know.\n if is_coverage_file(filepath): # pragma: no cover\n raise\n", "path": "src/hypothesis/internal/escalation.py"}]} | 1,584 | 222 |
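
A minimal reproduction of the caching bug described above, with `str`/`bytes` standing in for `text_type`/`binary_type` and a stub `encoded_filepath` that converts text to bytes: the lookup uses the caller's type, but `filepath` is rebound before the write, so results land in the wrong bucket and the cache never hits.

```python
cache = {str: {}, bytes: {}}

def encoded_filepath(p):  # stub: the real helper returns an encoded path
    return p.encode("utf-8") if isinstance(p, str) else p

def accept(filepath):
    try:
        return cache[type(filepath)][filepath]   # looked up under str...
    except KeyError:
        pass
    filepath = encoded_filepath(filepath)        # ...rebound, now bytes
    result = filepath.startswith(b"/pkg")
    cache[type(filepath)][filepath] = result     # ...stored under bytes
    return result

accept("/pkg/mod.py")
print(cache[str])    # {} -- the str bucket never fills, so repeat calls miss
print(cache[bytes])  # {b'/pkg/mod.py': True}
```
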
gh_patches_debug_36419 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-8552 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
house_au: split off new brand for House Bed & Bath
house_au captures two brands:
* [House](https://www.wikidata.org/wiki/Q117921987)
* [House Bed & Bath](https://www.wikidata.org/wiki/Q126176210)
Currently the spider doesn't differentiate the two brands. It should though.
Reference: https://globalretailbrands.net/
</issue>
<code>
[start of locations/spiders/house_au.py]
1 import reverse_geocoder
2 from scrapy import Request, Spider
3
4 from locations.categories import Categories
5 from locations.dict_parser import DictParser
6 from locations.hours import OpeningHours
7 from locations.pipelines.address_clean_up import clean_address
8
9
10 class HouseAUSpider(Spider):
11 name = "house_au"
12 item_attributes = {
13 "brand": "House",
14 "brand_wikidata": "Q117921987",
15 "extras": Categories.SHOP_HOUSEWARE.value,
16 }
17 allowed_domains = ["www.house.com.au"]
18 start_urls = ["https://www.house.com.au/api/get-stores"]
19
20 def start_requests(self):
21 for url in self.start_urls:
22 yield Request(url=url, method="POST")
23
24 def parse(self, response):
25 for location in response.json():
26 item = DictParser.parse(location)
27
28 # Some stores have wildly incorrect coordinates for
29 # locations as far away as France. Only add geometry
30 # where coordinates existing within Australia.
31 if result := reverse_geocoder.get((location["latitude"], location["longitude"]), mode=1, verbose=False):
32 if result["cc"] == "AU":
33 item["geometry"] = location["location"]
34
35 item["street_address"] = clean_address([location["address1"], location["address2"]])
36 item["website"] = "https://www.house.com.au/stores/" + location["slug"]
37 item["opening_hours"] = OpeningHours()
38 for day_name, hours in location["storeHours"].items():
39 if hours["open"] == "-" or hours["close"] == "-" or hours["close"] == "17:3016:00":
40 continue
41 item["opening_hours"].add_range(
42 day_name.title(),
43 hours["open"].replace(".", ":"),
44 hours["close"].replace(".", ":").replace(":-", ":"),
45 )
46 yield item
47
[end of locations/spiders/house_au.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/house_au.py b/locations/spiders/house_au.py
--- a/locations/spiders/house_au.py
+++ b/locations/spiders/house_au.py
@@ -9,13 +9,16 @@
class HouseAUSpider(Spider):
name = "house_au"
- item_attributes = {
- "brand": "House",
- "brand_wikidata": "Q117921987",
- "extras": Categories.SHOP_HOUSEWARE.value,
- }
allowed_domains = ["www.house.com.au"]
start_urls = ["https://www.house.com.au/api/get-stores"]
+ brands = {
+ "House Bed & Bath": {
+ "brand": "House Bed & Bath",
+ "brand_wikidata": "",
+ "extras": Categories.SHOP_HOUSEHOLD_LINEN.value,
+ },
+ "House": {"brand": "House", "brand_wikidata": "Q117921987", "extras": Categories.SHOP_HOUSEWARE.value},
+ }
def start_requests(self):
for url in self.start_urls:
@@ -25,6 +28,12 @@
for location in response.json():
item = DictParser.parse(location)
+ for brand_name in self.brands.keys():
+ if item["name"].startswith(f"{brand_name} "):
+ item.update(self.brands[brand_name])
+ item["branch"] = item["name"].replace(f"{brand_name} ", "")
+ break
+
# Some stores have wildly incorrect coordinates for
# locations as far away as France. Only add geometry
# where coordinates existing within Australia.
@@ -34,6 +43,7 @@
item["street_address"] = clean_address([location["address1"], location["address2"]])
item["website"] = "https://www.house.com.au/stores/" + location["slug"]
+
item["opening_hours"] = OpeningHours()
for day_name, hours in location["storeHours"].items():
if hours["open"] == "-" or hours["close"] == "-" or hours["close"] == "17:3016:00":
@@ -43,4 +53,5 @@
hours["open"].replace(".", ":"),
hours["close"].replace(".", ":").replace(":-", ":"),
)
+
yield item
| {"golden_diff": "diff --git a/locations/spiders/house_au.py b/locations/spiders/house_au.py\n--- a/locations/spiders/house_au.py\n+++ b/locations/spiders/house_au.py\n@@ -9,13 +9,16 @@\n \n class HouseAUSpider(Spider):\n name = \"house_au\"\n- item_attributes = {\n- \"brand\": \"House\",\n- \"brand_wikidata\": \"Q117921987\",\n- \"extras\": Categories.SHOP_HOUSEWARE.value,\n- }\n allowed_domains = [\"www.house.com.au\"]\n start_urls = [\"https://www.house.com.au/api/get-stores\"]\n+ brands = {\n+ \"House Bed & Bath\": {\n+ \"brand\": \"House Bed & Bath\",\n+ \"brand_wikidata\": \"\",\n+ \"extras\": Categories.SHOP_HOUSEHOLD_LINEN.value,\n+ },\n+ \"House\": {\"brand\": \"House\", \"brand_wikidata\": \"Q117921987\", \"extras\": Categories.SHOP_HOUSEWARE.value},\n+ }\n \n def start_requests(self):\n for url in self.start_urls:\n@@ -25,6 +28,12 @@\n for location in response.json():\n item = DictParser.parse(location)\n \n+ for brand_name in self.brands.keys():\n+ if item[\"name\"].startswith(f\"{brand_name} \"):\n+ item.update(self.brands[brand_name])\n+ item[\"branch\"] = item[\"name\"].replace(f\"{brand_name} \", \"\")\n+ break\n+\n # Some stores have wildly incorrect coordinates for\n # locations as far away as France. Only add geometry\n # where coordinates existing within Australia.\n@@ -34,6 +43,7 @@\n \n item[\"street_address\"] = clean_address([location[\"address1\"], location[\"address2\"]])\n item[\"website\"] = \"https://www.house.com.au/stores/\" + location[\"slug\"]\n+\n item[\"opening_hours\"] = OpeningHours()\n for day_name, hours in location[\"storeHours\"].items():\n if hours[\"open\"] == \"-\" or hours[\"close\"] == \"-\" or hours[\"close\"] == \"17:3016:00\":\n@@ -43,4 +53,5 @@\n hours[\"open\"].replace(\".\", \":\"),\n hours[\"close\"].replace(\".\", \":\").replace(\":-\", \":\"),\n )\n+\n yield item\n", "issue": "house_au: split off new brand for House Bed & Bath\nhouse_au captures two brands:\r\n* [House](https://www.wikidata.org/wiki/Q117921987)\r\n* House Bed & Bath (https://www.wikidata.org/wiki/Q126176210)\r\n\r\nCurrently the spider doesn't differentiate the two brands. It should though.\r\n\r\nReference: https://globalretailbrands.net/\n", "before_files": [{"content": "import reverse_geocoder\nfrom scrapy import Request, Spider\n\nfrom locations.categories import Categories\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\nfrom locations.pipelines.address_clean_up import clean_address\n\n\nclass HouseAUSpider(Spider):\n name = \"house_au\"\n item_attributes = {\n \"brand\": \"House\",\n \"brand_wikidata\": \"Q117921987\",\n \"extras\": Categories.SHOP_HOUSEWARE.value,\n }\n allowed_domains = [\"www.house.com.au\"]\n start_urls = [\"https://www.house.com.au/api/get-stores\"]\n\n def start_requests(self):\n for url in self.start_urls:\n yield Request(url=url, method=\"POST\")\n\n def parse(self, response):\n for location in response.json():\n item = DictParser.parse(location)\n\n # Some stores have wildly incorrect coordinates for\n # locations as far away as France. 
Only add geometry\n # where coordinates existing within Australia.\n if result := reverse_geocoder.get((location[\"latitude\"], location[\"longitude\"]), mode=1, verbose=False):\n if result[\"cc\"] == \"AU\":\n item[\"geometry\"] = location[\"location\"]\n\n item[\"street_address\"] = clean_address([location[\"address1\"], location[\"address2\"]])\n item[\"website\"] = \"https://www.house.com.au/stores/\" + location[\"slug\"]\n item[\"opening_hours\"] = OpeningHours()\n for day_name, hours in location[\"storeHours\"].items():\n if hours[\"open\"] == \"-\" or hours[\"close\"] == \"-\" or hours[\"close\"] == \"17:3016:00\":\n continue\n item[\"opening_hours\"].add_range(\n day_name.title(),\n hours[\"open\"].replace(\".\", \":\"),\n hours[\"close\"].replace(\".\", \":\").replace(\":-\", \":\"),\n )\n yield item\n", "path": "locations/spiders/house_au.py"}]} | 1,125 | 545 |
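The patch above distinguishes the two brands by matching the more specific name first. A small standalone sketch of that longest-prefix matching, with illustrative data; it relies on dict insertion order (Python 3.7+), so "House Bed & Bath" must be listed before "House":

```python
BRANDS = {
    "House Bed & Bath": {"brand": "House Bed & Bath"},
    "House": {"brand": "House"},
}

def split_brand(name):
    # Iterate in insertion order: most specific brand name first.
    for brand_name, attrs in BRANDS.items():
        if name.startswith(brand_name + " "):
            return attrs["brand"], name[len(brand_name) + 1:]
    return None, name

assert split_brand("House Bed & Bath Penrith") == ("House Bed & Bath", "Penrith")
assert split_brand("House Chermside") == ("House", "Chermside")
```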
gh_patches_debug_7899 | rasdani/github-patches | git_diff | cloudtools__troposphere-1692 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
implement AWS::CodeStarConnections changes from May 14, 2020 update
</issue>
<code>
[start of troposphere/codestarconnections.py]
1 # Copyright (c) 2012-2020, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6
7 from . import AWSObject
8
9
10 VALID_CONNECTION_PROVIDERTYPE = ('Bitbucket')
11
12
13 def validate_connection_providertype(connection_providertype):
14 """Validate ProviderType for Connection"""
15
16 if connection_providertype not in VALID_CONNECTION_PROVIDERTYPE:
17 raise ValueError("Connection ProviderType must be one of: %s" %
18 ", ".join(VALID_CONNECTION_PROVIDERTYPE))
19 return connection_providertype
20
21
22 class Connection(AWSObject):
23 resource_type = "AWS::CodeStarConnections::Connection"
24
25 props = {
26 'ConnectionName': (basestring, True),
27 'ProviderType': (validate_connection_providertype, True),
28 }
29
[end of troposphere/codestarconnections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/troposphere/codestarconnections.py b/troposphere/codestarconnections.py
--- a/troposphere/codestarconnections.py
+++ b/troposphere/codestarconnections.py
@@ -4,7 +4,7 @@
# See LICENSE file for full license.
-from . import AWSObject
+from . import AWSObject, Tags
VALID_CONNECTION_PROVIDERTYPE = ('Bitbucket')
@@ -25,4 +25,5 @@
props = {
'ConnectionName': (basestring, True),
'ProviderType': (validate_connection_providertype, True),
+ 'Tags': (Tags, False),
}
| {"golden_diff": "diff --git a/troposphere/codestarconnections.py b/troposphere/codestarconnections.py\n--- a/troposphere/codestarconnections.py\n+++ b/troposphere/codestarconnections.py\n@@ -4,7 +4,7 @@\n # See LICENSE file for full license.\n \n \n-from . import AWSObject\n+from . import AWSObject, Tags\n \n \n VALID_CONNECTION_PROVIDERTYPE = ('Bitbucket')\n@@ -25,4 +25,5 @@\n props = {\n 'ConnectionName': (basestring, True),\n 'ProviderType': (validate_connection_providertype, True),\n+ 'Tags': (Tags, False),\n }\n", "issue": "implement AWS::CodeStarConnections changes from May 14, 2020 update\n\n", "before_files": [{"content": "# Copyright (c) 2012-2020, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\n\nfrom . import AWSObject\n\n\nVALID_CONNECTION_PROVIDERTYPE = ('Bitbucket')\n\n\ndef validate_connection_providertype(connection_providertype):\n \"\"\"Validate ProviderType for Connection\"\"\"\n\n if connection_providertype not in VALID_CONNECTION_PROVIDERTYPE:\n raise ValueError(\"Connection ProviderType must be one of: %s\" %\n \", \".join(VALID_CONNECTION_PROVIDERTYPE))\n return connection_providertype\n\n\nclass Connection(AWSObject):\n resource_type = \"AWS::CodeStarConnections::Connection\"\n\n props = {\n 'ConnectionName': (basestring, True),\n 'ProviderType': (validate_connection_providertype, True),\n }\n", "path": "troposphere/codestarconnections.py"}]} | 790 | 143 |
gh_patches_debug_2344 | rasdani/github-patches | git_diff | ethereum__web3.py-3196 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove lru-dict dependency
lru-dict requires a wheel that is not pre-compiled for Python 3.11.
It is only used in one place, where it could be replaced with the built-in functools lru_cache: https://github.com/ethereum/web3.py/blob/master/web3/middleware/cache.py#L196
Removing this dependency would avoid future compatibility problems as well.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from setuptools import (
3 find_packages,
4 setup,
5 )
6
7 extras_require = {
8 "tester": [
9 "eth-tester[py-evm]==v0.9.1-b.1",
10 "py-geth>=3.11.0",
11 ],
12 "linter": [
13 "black>=22.1.0",
14 "flake8==3.8.3",
15 "isort>=5.11.0",
16 "mypy==1.4.1",
17 "types-setuptools>=57.4.4",
18 "types-requests>=2.26.1",
19 "types-protobuf==3.19.13",
20 ],
21 "docs": [
22 "sphinx>=5.3.0",
23 "sphinx_rtd_theme>=1.0.0",
24 "towncrier>=21,<22",
25 ],
26 "dev": [
27 "bumpversion",
28 "flaky>=3.7.0",
29 "hypothesis>=3.31.2",
30 "importlib-metadata<5.0;python_version<'3.8'",
31 "pytest>=7.0.0",
32 "pytest-asyncio>=0.18.1,<0.23",
33 "pytest-mock>=1.10",
34 "pytest-watch>=4.2",
35 "pytest-xdist>=1.29",
36 "setuptools>=38.6.0",
37 "tox>=3.18.0",
38 "tqdm>4.32",
39 "twine>=1.13",
40 "when-changed>=0.3.0",
41 "build>=0.9.0",
42 ],
43 "ipfs": [
44 "ipfshttpclient==0.8.0a2",
45 ],
46 }
47
48 extras_require["dev"] = (
49 extras_require["tester"]
50 + extras_require["linter"]
51 + extras_require["docs"]
52 + extras_require["ipfs"]
53 + extras_require["dev"]
54 )
55
56 with open("./README.md") as readme:
57 long_description = readme.read()
58
59 setup(
60 name="web3",
61 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
62 version="6.14.0",
63 description="""web3.py""",
64 long_description_content_type="text/markdown",
65 long_description=long_description,
66 author="The Ethereum Foundation",
67 author_email="[email protected]",
68 url="https://github.com/ethereum/web3.py",
69 include_package_data=True,
70 install_requires=[
71 "aiohttp>=3.7.4.post0",
72 "eth-abi>=4.0.0",
73 "eth-account>=0.8.0",
74 "eth-hash[pycryptodome]>=0.5.1",
75 "eth-typing>=3.0.0",
76 "eth-utils>=2.1.0",
77 "hexbytes>=0.1.0,<0.4.0",
78 "jsonschema>=4.0.0",
79 "lru-dict>=1.1.6,<1.3.0",
80 "protobuf>=4.21.6",
81 "pydantic>=2.4.0",
82 "pywin32>=223;platform_system=='Windows'",
83 "requests>=2.16.0",
84 "typing-extensions>=4.0.1",
85 "websockets>=10.0.0",
86 "pyunormalize>=15.0.0",
87 ],
88 python_requires=">=3.7.2",
89 extras_require=extras_require,
90 py_modules=["web3", "ens", "ethpm"],
91 entry_points={"pytest11": ["pytest_ethereum = web3.tools.pytest_ethereum.plugins"]},
92 license="MIT",
93 zip_safe=False,
94 keywords="ethereum",
95 packages=find_packages(exclude=["tests", "tests.*"]),
96 package_data={"web3": ["py.typed"]},
97 classifiers=[
98 "Development Status :: 5 - Production/Stable",
99 "Intended Audience :: Developers",
100 "License :: OSI Approved :: MIT License",
101 "Natural Language :: English",
102 "Programming Language :: Python :: 3",
103 "Programming Language :: Python :: 3.7",
104 "Programming Language :: Python :: 3.8",
105 "Programming Language :: Python :: 3.9",
106 "Programming Language :: Python :: 3.10",
107 "Programming Language :: Python :: 3.11",
108 ],
109 )
110
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -76,7 +76,6 @@
"eth-utils>=2.1.0",
"hexbytes>=0.1.0,<0.4.0",
"jsonschema>=4.0.0",
- "lru-dict>=1.1.6,<1.3.0",
"protobuf>=4.21.6",
"pydantic>=2.4.0",
"pywin32>=223;platform_system=='Windows'",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -76,7 +76,6 @@\n \"eth-utils>=2.1.0\",\n \"hexbytes>=0.1.0,<0.4.0\",\n \"jsonschema>=4.0.0\",\n- \"lru-dict>=1.1.6,<1.3.0\",\n \"protobuf>=4.21.6\",\n \"pydantic>=2.4.0\",\n \"pywin32>=223;platform_system=='Windows'\",\n", "issue": "Remove lru-dict dependency\nlru-dict requires a wheel that is not pre-compiled for Python 3.11.\r\n\r\nIt is only used in 1 place where it should be able to be replaced with the built-in functools lru cache: https://github.com/ethereum/web3.py/blob/master/web3/middleware/cache.py#L196\r\n\r\nRemoving this dependency would avoid future compatibility problems as well.\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.9.1-b.1\",\n \"py-geth>=3.11.0\",\n ],\n \"linter\": [\n \"black>=22.1.0\",\n \"flake8==3.8.3\",\n \"isort>=5.11.0\",\n \"mypy==1.4.1\",\n \"types-setuptools>=57.4.4\",\n \"types-requests>=2.26.1\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"sphinx>=5.3.0\",\n \"sphinx_rtd_theme>=1.0.0\",\n \"towncrier>=21,<22\",\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0\",\n \"hypothesis>=3.31.2\",\n \"importlib-metadata<5.0;python_version<'3.8'\",\n \"pytest>=7.0.0\",\n \"pytest-asyncio>=0.18.1,<0.23\",\n \"pytest-mock>=1.10\",\n \"pytest-watch>=4.2\",\n \"pytest-xdist>=1.29\",\n \"setuptools>=38.6.0\",\n \"tox>=3.18.0\",\n \"tqdm>4.32\",\n \"twine>=1.13\",\n \"when-changed>=0.3.0\",\n \"build>=0.9.0\",\n ],\n \"ipfs\": [\n \"ipfshttpclient==0.8.0a2\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"ipfs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"6.14.0\",\n description=\"\"\"web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"The Ethereum Foundation\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0\",\n \"eth-abi>=4.0.0\",\n \"eth-account>=0.8.0\",\n \"eth-hash[pycryptodome]>=0.5.1\",\n \"eth-typing>=3.0.0\",\n \"eth-utils>=2.1.0\",\n \"hexbytes>=0.1.0,<0.4.0\",\n \"jsonschema>=4.0.0\",\n \"lru-dict>=1.1.6,<1.3.0\",\n \"protobuf>=4.21.6\",\n \"pydantic>=2.4.0\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0\",\n \"typing-extensions>=4.0.1\",\n \"websockets>=10.0.0\",\n \"pyunormalize>=15.0.0\",\n ],\n python_requires=\">=3.7.2\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n)\n", "path": "setup.py"}]} | 1,838 | 131 |
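The issue suggests the one remaining lru-dict use could move to the standard library. A minimal sketch of that substitution with `functools.lru_cache`; `expensive_lookup` is a hypothetical stand-in, and note that lru_cache requires hashable arguments (tuples, not lists):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def expensive_lookup(method, params):
    # Stand-in for the real work (e.g. an RPC round trip).
    return {"method": method, "params": params}

expensive_lookup("eth_chainId", ())
expensive_lookup("eth_chainId", ())    # second call is served from the cache
print(expensive_lookup.cache_info())   # CacheInfo(hits=1, misses=1, ...)
```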
gh_patches_debug_527 | rasdani/github-patches | git_diff | mlflow__mlflow-351 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UUID dependency breaks python 3 under AWS linux
### System information
- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: No
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Amazon linux deep learning AMI 12.0 (like CentOS)
- **MLflow installed from (source or binary)**: source (PyPI)
- **MLflow version (run ``mlflow --version``)**: mlflow, version 0.5.0
- **Python version**: Python 3.6.6
- **npm version (if running the dev UI)**: N/A
- **Exact command to reproduce**: python -c "import mlflow"
### Describe the problem
```pip install mlflow``` also installs uuid==1.30 (which breaks under python3)
The default "uuid" library is included in the python standard library. On the AWS instance, the installed version shadows the default, and includes syntax which is only valid in python2.
On the computer I'm connecting to the instance from, the same script does not produce any errors, but ```uuid.__file__``` points to a standard library version and not the packaged 1.30
### Source code / logs
Full reproduction from a newly created instance:
```
source activate tensorflow_p36
virtualenv env --system-site-packages --python=$(which python) env
source env/bin/activate
pip install mlflow
python -c "import mlflow"
```
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/__init__.py", line 33, in <module>
import mlflow.projects as projects # noqa
File "/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/projects/__init__.py", line 17, in <module>
import mlflow.tracking as tracking
File "/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/tracking/__init__.py", line 7, in <module>
from mlflow.tracking.service import MLflowService, get_service
File "/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/tracking/service.py", line 13, in <module>
from mlflow.tracking.utils import _get_store
File "/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/tracking/utils.py", line 8, in <module>
from mlflow.store.file_store import FileStore
File "/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/store/file_store.py", line 3, in <module>
import uuid
File "/home/ec2-user/scratch/env/lib/python3.6/site-packages/uuid.py", line 138
if not 0 <= time_low < 1<<32L:
^
SyntaxError: invalid syntax
```
</issue>
<code>
[start of setup.py]
1 import imp
2 import os
3 from setuptools import setup, find_packages
4
5 version = imp.load_source(
6 'mlflow.version', os.path.join('mlflow', 'version.py')).VERSION
7
8
9 # Get a list of all files in the JS directory to include in our module
10 def package_files(directory):
11 paths = []
12 for (path, directories, filenames) in os.walk(directory):
13 for filename in filenames:
14 paths.append(os.path.join('..', path, filename))
15 return paths
16
17
18 # Prints out a set of paths (relative to the mlflow/ directory) of files in mlflow/server/js/build
19 # to include in the wheel, e.g. "../mlflow/server/js/build/index.html"
20 js_files = package_files('mlflow/server/js/build')
21 sagmaker_server_files = package_files("mlflow/sagemaker/container")
22
23 setup(
24 name='mlflow',
25 version=version,
26 packages=find_packages(exclude=['tests', 'tests.*']),
27 package_data={"mlflow": js_files + sagmaker_server_files},
28 install_requires=[
29 'awscli',
30 'click>=6.7',
31 'databricks-cli>=0.8.0',
32 'requests>=2.17.3',
33 'six>=1.10.0',
34 'uuid',
35 'gunicorn',
36 'Flask',
37 'numpy',
38 'pandas',
39 'scipy',
40 'scikit-learn',
41 'python-dateutil',
42 'protobuf>=3.6.0',
43 'gitpython>=2.1.0',
44 'pyyaml',
45 'boto3',
46 'querystring_parser',
47 'simplejson',
48 ],
49 entry_points='''
50 [console_scripts]
51 mlflow=mlflow.cli:cli
52 ''',
53 zip_safe=False,
54 author='Databricks',
55 description='MLflow: An ML Workflow Tool',
56 long_description=open('README.rst').read(),
57 license='Apache License 2.0',
58 classifiers=[
59 'Intended Audience :: Developers',
60 'Programming Language :: Python :: 2.7',
61 'Programming Language :: Python :: 3.6',
62 ],
63 keywords='ml ai databricks',
64 url='https://mlflow.org/'
65 )
66
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,6 @@
'databricks-cli>=0.8.0',
'requests>=2.17.3',
'six>=1.10.0',
- 'uuid',
'gunicorn',
'Flask',
'numpy',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,6 @@\n 'databricks-cli>=0.8.0',\n 'requests>=2.17.3',\n 'six>=1.10.0',\n- 'uuid',\n 'gunicorn',\n 'Flask',\n 'numpy',\n", "issue": "UUID dependency breaks python 3 under AWS linux\n### System information\r\n- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: No\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Amazon linux deep learning AMI 12.0 (like CentOS)\r\n- **MLflow installed from (source or binary)**: source (PyPI)\r\n- **MLflow version (run ``mlflow --version``)**: mlflow, version 0.5.0\r\n- **Python version**: Python 3.6.6\r\n- **npm version (if running the dev UI): N/A\r\n- **Exact command to reproduce**: python -c \"import mlflow\"\r\n\r\n### Describe the problem\r\n```pip install mlflow``` also installs uuid==1.30 (which breaks under python3)\r\n\r\nThe default \"uuid\" library is included in the python standard library. On the AWS instance, the installed version shadows the default, and includes syntax which is only valid in python2. \r\nOn the computer I'm connecting to the instance from, the same script does not produce any errors, but ```uuid.__file__``` points to a standard library version and not the packaged 1.30\r\n\r\n### Source code / logs\r\nFull reproduction from a newly created instance:\r\n```\r\nsource activate tensorflow_p36\r\nvirtualenv env --system-site-packages --python=$(which python) env\r\nsource env/bin/activate\r\npip install mlflow\r\npython -c \"import mlflow\"\r\n```\r\n```\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/__init__.py\", line 33, in <module>\r\n import mlflow.projects as projects # noqa\r\n File \"/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/projects/__init__.py\", line 17, in <module>\r\n import mlflow.tracking as tracking\r\n File \"/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/tracking/__init__.py\", line 7, in <module>\r\n from mlflow.tracking.service import MLflowService, get_service\r\n File \"/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/tracking/service.py\", line 13, in <module>\r\n from mlflow.tracking.utils import _get_store\r\n File \"/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/tracking/utils.py\", line 8, in <module>\r\n from mlflow.store.file_store import FileStore\r\n File \"/home/ec2-user/scratch/env/lib/python3.6/site-packages/mlflow/store/file_store.py\", line 3, in <module>\r\n import uuid\r\n File \"/home/ec2-user/scratch/env/lib/python3.6/site-packages/uuid.py\", line 138\r\n if not 0 <= time_low < 1<<32L:\r\n ^\r\nSyntaxError: invalid syntax\r\n```\n", "before_files": [{"content": "import imp\nimport os\nfrom setuptools import setup, find_packages\n\nversion = imp.load_source(\n 'mlflow.version', os.path.join('mlflow', 'version.py')).VERSION\n\n\n# Get a list of all files in the JS directory to include in our module\ndef package_files(directory):\n paths = []\n for (path, directories, filenames) in os.walk(directory):\n for filename in filenames:\n paths.append(os.path.join('..', path, filename))\n return paths\n\n\n# Prints out a set of paths (relative to the mlflow/ directory) of files in mlflow/server/js/build\n# to include in the wheel, e.g. 
\"../mlflow/server/js/build/index.html\"\njs_files = package_files('mlflow/server/js/build')\nsagmaker_server_files = package_files(\"mlflow/sagemaker/container\")\n\nsetup(\n name='mlflow',\n version=version,\n packages=find_packages(exclude=['tests', 'tests.*']),\n package_data={\"mlflow\": js_files + sagmaker_server_files},\n install_requires=[\n 'awscli',\n 'click>=6.7',\n 'databricks-cli>=0.8.0',\n 'requests>=2.17.3',\n 'six>=1.10.0',\n 'uuid',\n 'gunicorn',\n 'Flask',\n 'numpy',\n 'pandas',\n 'scipy',\n 'scikit-learn',\n 'python-dateutil',\n 'protobuf>=3.6.0',\n 'gitpython>=2.1.0',\n 'pyyaml',\n 'boto3',\n 'querystring_parser',\n 'simplejson',\n ],\n entry_points='''\n [console_scripts]\n mlflow=mlflow.cli:cli\n ''',\n zip_safe=False,\n author='Databricks',\n description='MLflow: An ML Workflow Tool',\n long_description=open('README.rst').read(),\n license='Apache License 2.0',\n classifiers=[\n 'Intended Audience :: Developers',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords='ml ai databricks',\n url='https://mlflow.org/'\n)\n", "path": "setup.py"}]} | 1,794 | 87 |
gh_patches_debug_3810 | rasdani/github-patches | git_diff | iterative__dvc-3129 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dvc: fix version generation
Looks like dynamic version generation got broken: https://travis-ci.com/iterative/dvc/jobs/274986530
</issue>
<code>
[start of dvc/version.py]
1 # Used in setup.py, so don't pull any additional dependencies
2 #
3 # Based on:
4 # - https://github.com/python/mypy/blob/master/mypy/version.py
5 # - https://github.com/python/mypy/blob/master/mypy/git.py
6 import os
7 import subprocess
8
9
10 _BASE_VERSION = "0.81.0"
11
12
13 def _generate_version(base_version):
14 """Generate a version with information about the git repository"""
15 pkg_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
16
17 if not _is_git_repo(pkg_dir) or not _have_git():
18 return base_version
19
20 if _is_release(pkg_dir, base_version) and not _is_dirty(pkg_dir):
21 return base_version
22
23 return "{base_version}+{short_sha}{dirty}".format(
24 base_version=base_version,
25 short_sha=_git_revision(pkg_dir).decode("utf-8")[0:6],
26 dirty=".mod" if _is_dirty(pkg_dir) else "",
27 )
28
29
30 def _is_git_repo(dir_path):
31 """Is the given directory version-controlled with git?"""
32 return os.path.exists(os.path.join(dir_path, ".git"))
33
34
35 def _have_git():
36 """Can we run the git executable?"""
37 try:
38 subprocess.check_output(["git", "--help"])
39 return True
40 except subprocess.CalledProcessError:
41 return False
42 except OSError:
43 return False
44
45
46 def _is_release(dir_path, base_version):
47 try:
48 output = subprocess.check_output(
49 ["git", "describe", "--tags", "--exact-match"],
50 cwd=dir_path,
51 stderr=subprocess.STDOUT,
52 )
53 tag = output.strip()
54 return tag == base_version
55 except subprocess.CalledProcessError:
56 return False
57
58
59 def _git_revision(dir_path):
60 """Get the SHA-1 of the HEAD of a git repository."""
61 return subprocess.check_output(
62 ["git", "rev-parse", "HEAD"], cwd=dir_path
63 ).strip()
64
65
66 def _is_dirty(dir_path):
67 """Check whether a git repository has uncommitted changes."""
68 try:
69 subprocess.check_call(["git", "diff", "--quiet"], cwd=dir_path)
70 return False
71 except subprocess.CalledProcessError:
72 return True
73
74
75 __version__ = _generate_version(_BASE_VERSION)
76
[end of dvc/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/version.py b/dvc/version.py
--- a/dvc/version.py
+++ b/dvc/version.py
@@ -49,7 +49,7 @@
["git", "describe", "--tags", "--exact-match"],
cwd=dir_path,
stderr=subprocess.STDOUT,
- )
+ ).decode("utf-8")
tag = output.strip()
return tag == base_version
except subprocess.CalledProcessError:
| {"golden_diff": "diff --git a/dvc/version.py b/dvc/version.py\n--- a/dvc/version.py\n+++ b/dvc/version.py\n@@ -49,7 +49,7 @@\n [\"git\", \"describe\", \"--tags\", \"--exact-match\"],\n cwd=dir_path,\n stderr=subprocess.STDOUT,\n- )\n+ ).decode(\"utf-8\")\n tag = output.strip()\n return tag == base_version\n except subprocess.CalledProcessError:\n", "issue": "dvc: fix version generatio\nLooks like dynamic version got broken https://travis-ci.com/iterative/dvc/jobs/274986530 .\r\n\n", "before_files": [{"content": "# Used in setup.py, so don't pull any additional dependencies\n#\n# Based on:\n# - https://github.com/python/mypy/blob/master/mypy/version.py\n# - https://github.com/python/mypy/blob/master/mypy/git.py\nimport os\nimport subprocess\n\n\n_BASE_VERSION = \"0.81.0\"\n\n\ndef _generate_version(base_version):\n \"\"\"Generate a version with information about the git repository\"\"\"\n pkg_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n if not _is_git_repo(pkg_dir) or not _have_git():\n return base_version\n\n if _is_release(pkg_dir, base_version) and not _is_dirty(pkg_dir):\n return base_version\n\n return \"{base_version}+{short_sha}{dirty}\".format(\n base_version=base_version,\n short_sha=_git_revision(pkg_dir).decode(\"utf-8\")[0:6],\n dirty=\".mod\" if _is_dirty(pkg_dir) else \"\",\n )\n\n\ndef _is_git_repo(dir_path):\n \"\"\"Is the given directory version-controlled with git?\"\"\"\n return os.path.exists(os.path.join(dir_path, \".git\"))\n\n\ndef _have_git():\n \"\"\"Can we run the git executable?\"\"\"\n try:\n subprocess.check_output([\"git\", \"--help\"])\n return True\n except subprocess.CalledProcessError:\n return False\n except OSError:\n return False\n\n\ndef _is_release(dir_path, base_version):\n try:\n output = subprocess.check_output(\n [\"git\", \"describe\", \"--tags\", \"--exact-match\"],\n cwd=dir_path,\n stderr=subprocess.STDOUT,\n )\n tag = output.strip()\n return tag == base_version\n except subprocess.CalledProcessError:\n return False\n\n\ndef _git_revision(dir_path):\n \"\"\"Get the SHA-1 of the HEAD of a git repository.\"\"\"\n return subprocess.check_output(\n [\"git\", \"rev-parse\", \"HEAD\"], cwd=dir_path\n ).strip()\n\n\ndef _is_dirty(dir_path):\n \"\"\"Check whether a git repository has uncommitted changes.\"\"\"\n try:\n subprocess.check_call([\"git\", \"diff\", \"--quiet\"], cwd=dir_path)\n return False\n except subprocess.CalledProcessError:\n return True\n\n\n__version__ = _generate_version(_BASE_VERSION)\n", "path": "dvc/version.py"}]} | 1,220 | 101 |
gh_patches_debug_31725 | rasdani/github-patches | git_diff | pyca__cryptography-3880 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RFC 5649 support
RFC 3394 (AES Key Wrap) was added a while back. I'd like to request support for RFC 5649 (AES Key Wrap with Padding), since it builds off of RFC 3394. It looks like OpenSSL handled this back in 2015:
https://rt.openssl.org/Ticket/Display.html?id=3675&user=guest&pass=guest
Is this feasible for cryptography in the not-too-distant future?
Thanks,
Peter
</issue>
<code>
[start of src/cryptography/hazmat/primitives/keywrap.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import struct
8
9 from cryptography.hazmat.primitives.ciphers import Cipher
10 from cryptography.hazmat.primitives.ciphers.algorithms import AES
11 from cryptography.hazmat.primitives.ciphers.modes import ECB
12 from cryptography.hazmat.primitives.constant_time import bytes_eq
13
14
15 def _wrap_core(wrapping_key, a, r, backend):
16 # RFC 3394 Key Wrap - 2.2.1 (index method)
17 encryptor = Cipher(AES(wrapping_key), ECB(), backend).encryptor()
18 n = len(r)
19 for j in range(6):
20 for i in range(n):
21 # every encryption operation is a discrete 16 byte chunk (because
22 # AES has a 128-bit block size) and since we're using ECB it is
23 # safe to reuse the encryptor for the entire operation
24 b = encryptor.update(a + r[i])
25 # pack/unpack are safe as these are always 64-bit chunks
26 a = struct.pack(
27 ">Q", struct.unpack(">Q", b[:8])[0] ^ ((n * j) + i + 1)
28 )
29 r[i] = b[-8:]
30
31 assert encryptor.finalize() == b""
32
33 return a + b"".join(r)
34
35
36 def aes_key_wrap(wrapping_key, key_to_wrap, backend):
37 if len(wrapping_key) not in [16, 24, 32]:
38 raise ValueError("The wrapping key must be a valid AES key length")
39
40 if len(key_to_wrap) < 16:
41 raise ValueError("The key to wrap must be at least 16 bytes")
42
43 if len(key_to_wrap) % 8 != 0:
44 raise ValueError("The key to wrap must be a multiple of 8 bytes")
45
46 a = b"\xa6\xa6\xa6\xa6\xa6\xa6\xa6\xa6"
47 r = [key_to_wrap[i:i + 8] for i in range(0, len(key_to_wrap), 8)]
48 return _wrap_core(wrapping_key, a, r, backend)
49
50
51 def _unwrap_core(wrapping_key, a, r, backend):
52 # Implement RFC 3394 Key Unwrap - 2.2.2 (index method)
53 decryptor = Cipher(AES(wrapping_key), ECB(), backend).decryptor()
54 n = len(r)
55 for j in reversed(range(6)):
56 for i in reversed(range(n)):
57 # pack/unpack are safe as these are always 64-bit chunks
58 atr = struct.pack(
59 ">Q", struct.unpack(">Q", a)[0] ^ ((n * j) + i + 1)
60 ) + r[i]
61 # every decryption operation is a discrete 16 byte chunk so
62 # it is safe to reuse the decryptor for the entire operation
63 b = decryptor.update(atr)
64 a = b[:8]
65 r[i] = b[-8:]
66
67 assert decryptor.finalize() == b""
68 return a, r
69
70
71 def aes_key_unwrap(wrapping_key, wrapped_key, backend):
72 if len(wrapped_key) < 24:
73 raise ValueError("Must be at least 24 bytes")
74
75 if len(wrapped_key) % 8 != 0:
76 raise ValueError("The wrapped key must be a multiple of 8 bytes")
77
78 if len(wrapping_key) not in [16, 24, 32]:
79 raise ValueError("The wrapping key must be a valid AES key length")
80
81 aiv = b"\xa6\xa6\xa6\xa6\xa6\xa6\xa6\xa6"
82 r = [wrapped_key[i:i + 8] for i in range(0, len(wrapped_key), 8)]
83 a = r.pop(0)
84 a, r = _unwrap_core(wrapping_key, a, r, backend)
85 if not bytes_eq(a, aiv):
86 raise InvalidUnwrap()
87
88 return b"".join(r)
89
90
91 class InvalidUnwrap(Exception):
92 pass
93
[end of src/cryptography/hazmat/primitives/keywrap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/hazmat/primitives/keywrap.py b/src/cryptography/hazmat/primitives/keywrap.py
--- a/src/cryptography/hazmat/primitives/keywrap.py
+++ b/src/cryptography/hazmat/primitives/keywrap.py
@@ -68,6 +68,63 @@
return a, r
+def aes_key_wrap_with_padding(wrapping_key, key_to_wrap, backend):
+ if len(wrapping_key) not in [16, 24, 32]:
+ raise ValueError("The wrapping key must be a valid AES key length")
+
+ aiv = b"\xA6\x59\x59\xA6" + struct.pack(">i", len(key_to_wrap))
+ # pad the key to wrap if necessary
+ pad = (8 - (len(key_to_wrap) % 8)) % 8
+ key_to_wrap = key_to_wrap + b"\x00" * pad
+ if len(key_to_wrap) == 8:
+ # RFC 5649 - 4.1 - exactly 8 octets after padding
+ encryptor = Cipher(AES(wrapping_key), ECB(), backend).encryptor()
+ b = encryptor.update(aiv + key_to_wrap)
+ assert encryptor.finalize() == b""
+ return b
+ else:
+ r = [key_to_wrap[i:i + 8] for i in range(0, len(key_to_wrap), 8)]
+ return _wrap_core(wrapping_key, aiv, r, backend)
+
+
+def aes_key_unwrap_with_padding(wrapping_key, wrapped_key, backend):
+ if len(wrapped_key) < 16:
+ raise ValueError("Must be at least 16 bytes")
+
+ if len(wrapping_key) not in [16, 24, 32]:
+ raise ValueError("The wrapping key must be a valid AES key length")
+
+ if len(wrapped_key) == 16:
+ # RFC 5649 - 4.2 - exactly two 64-bit blocks
+ decryptor = Cipher(AES(wrapping_key), ECB(), backend).decryptor()
+ b = decryptor.update(wrapped_key)
+ assert decryptor.finalize() == b""
+ a = b[:8]
+ data = b[8:]
+ n = 1
+ else:
+ r = [wrapped_key[i:i + 8] for i in range(0, len(wrapped_key), 8)]
+ encrypted_aiv = r.pop(0)
+ n = len(r)
+ a, r = _unwrap_core(wrapping_key, encrypted_aiv, r, backend)
+ data = b"".join(r)
+
+ # 1) Check that MSB(32,A) = A65959A6.
+ # 2) Check that 8*(n-1) < LSB(32,A) <= 8*n. If so, let
+ # MLI = LSB(32,A).
+ # 3) Let b = (8*n)-MLI, and then check that the rightmost b octets of
+ # the output data are zero.
+ (mli,) = struct.unpack(">I", a[4:])
+ b = (8 * n) - mli
+ if (
+ not bytes_eq(a[:4], b"\xa6\x59\x59\xa6") or not
+ 8 * (n - 1) < mli <= 8 * n or not bytes_eq(data[-b:], b"\x00" * b)
+ ):
+ raise InvalidUnwrap()
+
+ return data[:-b]
+
+
def aes_key_unwrap(wrapping_key, wrapped_key, backend):
if len(wrapped_key) < 24:
raise ValueError("Must be at least 24 bytes")
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/keywrap.py b/src/cryptography/hazmat/primitives/keywrap.py\n--- a/src/cryptography/hazmat/primitives/keywrap.py\n+++ b/src/cryptography/hazmat/primitives/keywrap.py\n@@ -68,6 +68,63 @@\n return a, r\n \n \n+def aes_key_wrap_with_padding(wrapping_key, key_to_wrap, backend):\n+ if len(wrapping_key) not in [16, 24, 32]:\n+ raise ValueError(\"The wrapping key must be a valid AES key length\")\n+\n+ aiv = b\"\\xA6\\x59\\x59\\xA6\" + struct.pack(\">i\", len(key_to_wrap))\n+ # pad the key to wrap if necessary\n+ pad = (8 - (len(key_to_wrap) % 8)) % 8\n+ key_to_wrap = key_to_wrap + b\"\\x00\" * pad\n+ if len(key_to_wrap) == 8:\n+ # RFC 5649 - 4.1 - exactly 8 octets after padding\n+ encryptor = Cipher(AES(wrapping_key), ECB(), backend).encryptor()\n+ b = encryptor.update(aiv + key_to_wrap)\n+ assert encryptor.finalize() == b\"\"\n+ return b\n+ else:\n+ r = [key_to_wrap[i:i + 8] for i in range(0, len(key_to_wrap), 8)]\n+ return _wrap_core(wrapping_key, aiv, r, backend)\n+\n+\n+def aes_key_unwrap_with_padding(wrapping_key, wrapped_key, backend):\n+ if len(wrapped_key) < 16:\n+ raise ValueError(\"Must be at least 16 bytes\")\n+\n+ if len(wrapping_key) not in [16, 24, 32]:\n+ raise ValueError(\"The wrapping key must be a valid AES key length\")\n+\n+ if len(wrapped_key) == 16:\n+ # RFC 5649 - 4.2 - exactly two 64-bit blocks\n+ decryptor = Cipher(AES(wrapping_key), ECB(), backend).decryptor()\n+ b = decryptor.update(wrapped_key)\n+ assert decryptor.finalize() == b\"\"\n+ a = b[:8]\n+ data = b[8:]\n+ n = 1\n+ else:\n+ r = [wrapped_key[i:i + 8] for i in range(0, len(wrapped_key), 8)]\n+ encrypted_aiv = r.pop(0)\n+ n = len(r)\n+ a, r = _unwrap_core(wrapping_key, encrypted_aiv, r, backend)\n+ data = b\"\".join(r)\n+\n+ # 1) Check that MSB(32,A) = A65959A6.\n+ # 2) Check that 8*(n-1) < LSB(32,A) <= 8*n. If so, let\n+ # MLI = LSB(32,A).\n+ # 3) Let b = (8*n)-MLI, and then check that the rightmost b octets of\n+ # the output data are zero.\n+ (mli,) = struct.unpack(\">I\", a[4:])\n+ b = (8 * n) - mli\n+ if (\n+ not bytes_eq(a[:4], b\"\\xa6\\x59\\x59\\xa6\") or not\n+ 8 * (n - 1) < mli <= 8 * n or not bytes_eq(data[-b:], b\"\\x00\" * b)\n+ ):\n+ raise InvalidUnwrap()\n+\n+ return data[:-b]\n+\n+\n def aes_key_unwrap(wrapping_key, wrapped_key, backend):\n if len(wrapped_key) < 24:\n raise ValueError(\"Must be at least 24 bytes\")\n", "issue": "RFC 5649 support\nRFC 3394 (AES Key Wrap) was added a while back. I'd like to request support for RFC 5649 (AES Key Wrap with Padding), since it builds off of RFC 3394. It looks like OpenSSL handled this back in 2015:\r\n\r\nhttps://rt.openssl.org/Ticket/Display.html?id=3675&user=guest&pass=guest\r\n\r\nIs this feasible for cryptography in the not-too-distant future?\r\n\r\nThanks,\r\nPeter\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport struct\n\nfrom cryptography.hazmat.primitives.ciphers import Cipher\nfrom cryptography.hazmat.primitives.ciphers.algorithms import AES\nfrom cryptography.hazmat.primitives.ciphers.modes import ECB\nfrom cryptography.hazmat.primitives.constant_time import bytes_eq\n\n\ndef _wrap_core(wrapping_key, a, r, backend):\n # RFC 3394 Key Wrap - 2.2.1 (index method)\n encryptor = Cipher(AES(wrapping_key), ECB(), backend).encryptor()\n n = len(r)\n for j in range(6):\n for i in range(n):\n # every encryption operation is a discrete 16 byte chunk (because\n # AES has a 128-bit block size) and since we're using ECB it is\n # safe to reuse the encryptor for the entire operation\n b = encryptor.update(a + r[i])\n # pack/unpack are safe as these are always 64-bit chunks\n a = struct.pack(\n \">Q\", struct.unpack(\">Q\", b[:8])[0] ^ ((n * j) + i + 1)\n )\n r[i] = b[-8:]\n\n assert encryptor.finalize() == b\"\"\n\n return a + b\"\".join(r)\n\n\ndef aes_key_wrap(wrapping_key, key_to_wrap, backend):\n if len(wrapping_key) not in [16, 24, 32]:\n raise ValueError(\"The wrapping key must be a valid AES key length\")\n\n if len(key_to_wrap) < 16:\n raise ValueError(\"The key to wrap must be at least 16 bytes\")\n\n if len(key_to_wrap) % 8 != 0:\n raise ValueError(\"The key to wrap must be a multiple of 8 bytes\")\n\n a = b\"\\xa6\\xa6\\xa6\\xa6\\xa6\\xa6\\xa6\\xa6\"\n r = [key_to_wrap[i:i + 8] for i in range(0, len(key_to_wrap), 8)]\n return _wrap_core(wrapping_key, a, r, backend)\n\n\ndef _unwrap_core(wrapping_key, a, r, backend):\n # Implement RFC 3394 Key Unwrap - 2.2.2 (index method)\n decryptor = Cipher(AES(wrapping_key), ECB(), backend).decryptor()\n n = len(r)\n for j in reversed(range(6)):\n for i in reversed(range(n)):\n # pack/unpack are safe as these are always 64-bit chunks\n atr = struct.pack(\n \">Q\", struct.unpack(\">Q\", a)[0] ^ ((n * j) + i + 1)\n ) + r[i]\n # every decryption operation is a discrete 16 byte chunk so\n # it is safe to reuse the decryptor for the entire operation\n b = decryptor.update(atr)\n a = b[:8]\n r[i] = b[-8:]\n\n assert decryptor.finalize() == b\"\"\n return a, r\n\n\ndef aes_key_unwrap(wrapping_key, wrapped_key, backend):\n if len(wrapped_key) < 24:\n raise ValueError(\"Must be at least 24 bytes\")\n\n if len(wrapped_key) % 8 != 0:\n raise ValueError(\"The wrapped key must be a multiple of 8 bytes\")\n\n if len(wrapping_key) not in [16, 24, 32]:\n raise ValueError(\"The wrapping key must be a valid AES key length\")\n\n aiv = b\"\\xa6\\xa6\\xa6\\xa6\\xa6\\xa6\\xa6\\xa6\"\n r = [wrapped_key[i:i + 8] for i in range(0, len(wrapped_key), 8)]\n a = r.pop(0)\n a, r = _unwrap_core(wrapping_key, a, r, backend)\n if not bytes_eq(a, aiv):\n raise InvalidUnwrap()\n\n return b\"\".join(r)\n\n\nclass InvalidUnwrap(Exception):\n pass\n", "path": "src/cryptography/hazmat/primitives/keywrap.py"}]} | 1,792 | 886 |
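A round-trip usage sketch for the `aes_key_wrap_with_padding` / `aes_key_unwrap_with_padding` functions added by the patch above. It assumes a cryptography build that includes this patch (the explicit backend argument matches the API of that era), and the 20-byte key is chosen deliberately because it is not a multiple of 8 bytes, which plain RFC 3394 wrapping rejects:

```python
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.keywrap import (
    aes_key_wrap_with_padding,
    aes_key_unwrap_with_padding,
)

kek = os.urandom(16)   # AES-128 wrapping key
key = os.urandom(20)   # deliberately not a multiple of 8 bytes
wrapped = aes_key_wrap_with_padding(kek, key, default_backend())
assert aes_key_unwrap_with_padding(kek, wrapped, default_backend()) == key
```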
gh_patches_debug_31639 | rasdani/github-patches | git_diff | ESMCI__cime-1136 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ERR test does not always report failures correctly
The ERR test runs four separate jobs; if one of these jobs completes but the next fails to launch, the test reports PASS. To reproduce this problem it's enough to edit the jobid_pattern field in config batch so that the dependency is incorrect - this causes the first job to exit and the second to fail to launch. But the TestStatus file indicates all PASS.
</issue>
<code>
[start of utils/python/CIME/case_submit.py]
1 #!/usr/bin/env python
2
3 """
4 case.submit - Submit a cesm workflow to the queueing system or run it
5 if there is no queueing system. A cesm workflow may include multiple
6 jobs.
7 """
8 import socket
9 from CIME.XML.standard_module_setup import *
10 from CIME.utils import expect, append_status
11 from CIME.preview_namelists import create_namelists
12 from CIME.check_lockedfiles import check_lockedfiles
13 from CIME.check_input_data import check_all_input_data
14 from CIME.case_cmpgen_namelists import case_cmpgen_namelists
15
16 logger = logging.getLogger(__name__)
17
18 def submit(case, job=None, resubmit=False, no_batch=False):
19 caseroot = case.get_value("CASEROOT")
20
21 if job is None:
22 if case.get_value("TEST"):
23 job = "case.test"
24 else:
25 job = "case.run"
26
27 if resubmit:
28 resub = case.get_value("RESUBMIT")
29 logger.info("Submitting job '%s', resubmit=%d" % (job, resub))
30 case.set_value("RESUBMIT",resub-1)
31 if case.get_value("RESUBMIT_SETS_CONTINUE_RUN"):
32 case.set_value("CONTINUE_RUN", True)
33 else:
34 if job in ("case.test","case.run"):
35 check_case(case, caseroot)
36 check_DA_settings(case)
37 if case.get_value("MACH") == "mira":
38 with open(".original_host","w") as fd:
39 fd.write( socket.gethostname())
40
41 # if case.submit is called with the no_batch flag then we assume that this
42 # flag will stay in effect for the duration of the RESUBMITs
43 env_batch = case.get_env("batch")
44 if not resubmit:
45 case.set_value("IS_FIRST_RUN", True)
46 if no_batch:
47 batch_system = "none"
48 else:
49 batch_system = env_batch.get_batch_system_type()
50 case.set_value("BATCH_SYSTEM", batch_system)
51 else:
52 if env_batch.get_batch_system_type() == "none":
53 no_batch = True
54
55 # This is a resubmission, do not reinitialize test values
56 case.set_value("IS_FIRST_RUN", False)
57
58 #Load Modules
59 case.load_env()
60
61 case.set_value("RUN_WITH_SUBMIT",True)
62 case.flush()
63
64 logger.warn("submit_jobs %s"%job)
65 job_ids = case.submit_jobs(no_batch=no_batch, job=job)
66 msg = "Submitted jobs %s"%job_ids
67 append_status(msg, caseroot=caseroot, sfile="CaseStatus")
68
69 def check_case(case, caseroot):
70 check_lockedfiles(caseroot)
71 create_namelists(case) # Must be called before check_all_input_data
72 logger.info("Checking that inputdata is available as part of case submission")
73 check_all_input_data(case)
74 # Now that we have baselines, do baseline operations
75 if case.get_value("TEST"):
76 case_cmpgen_namelists(case)
77
78 expect(case.get_value("BUILD_COMPLETE"), "Build complete is "
79 "not True please rebuild the model by calling case.build")
80 logger.info("Check case OK")
81
82 def check_DA_settings(case):
83 if case.get_value("DATA_ASSIMILATION"):
84 script = case.get_value("DATA_ASSIMILATION_SCRIPT")
85 cycles = case.get_value("DATA_ASSIMILATION_CYCLES")
86 logger.info("Data Assimilation enabled using script %s with %d cycles"%(script,cycles))
87
88
[end of utils/python/CIME/case_submit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/utils/python/CIME/case_submit.py b/utils/python/CIME/case_submit.py
--- a/utils/python/CIME/case_submit.py
+++ b/utils/python/CIME/case_submit.py
@@ -12,10 +12,11 @@
from CIME.check_lockedfiles import check_lockedfiles
from CIME.check_input_data import check_all_input_data
from CIME.case_cmpgen_namelists import case_cmpgen_namelists
+from CIME.test_status import *
logger = logging.getLogger(__name__)
-def submit(case, job=None, resubmit=False, no_batch=False):
+def _submit(case, job=None, resubmit=False, no_batch=False):
caseroot = case.get_value("CASEROOT")
if job is None:
@@ -61,11 +62,27 @@
case.set_value("RUN_WITH_SUBMIT",True)
case.flush()
- logger.warn("submit_jobs %s"%job)
+ logger.warn("submit_jobs %s" % job)
job_ids = case.submit_jobs(no_batch=no_batch, job=job)
- msg = "Submitted jobs %s"%job_ids
+ msg = "Submitted jobs %s" % job_ids
append_status(msg, caseroot=caseroot, sfile="CaseStatus")
+def submit(case, job=None, resubmit=False, no_batch=False):
+ try:
+ _submit(case, job=job, resubmit=resubmit, no_batch=no_batch)
+ except:
+ # If something failed in the batch system, make sure to mark
+ # the test as failed if we are running a test.
+ if case.get_value("TEST"):
+ caseroot = case.get_value("CASEROOT")
+ casebaseid = case.get_value("CASEBASEID")
+ with TestStatus(test_dir=caseroot, test_name=casebaseid, lock=True) as ts:
+ ts.set_status(RUN_PHASE, TEST_FAIL_STATUS, comments="batch system failure")
+
+ append_status("Batch submission failed, TestStatus file changed to read-only", caseroot=caseroot, sfile="TestStatus.log")
+
+ raise
+
def check_case(case, caseroot):
check_lockedfiles(caseroot)
create_namelists(case) # Must be called before check_all_input_data
| {"golden_diff": "diff --git a/utils/python/CIME/case_submit.py b/utils/python/CIME/case_submit.py\n--- a/utils/python/CIME/case_submit.py\n+++ b/utils/python/CIME/case_submit.py\n@@ -12,10 +12,11 @@\n from CIME.check_lockedfiles import check_lockedfiles\n from CIME.check_input_data import check_all_input_data\n from CIME.case_cmpgen_namelists import case_cmpgen_namelists\n+from CIME.test_status import *\n \n logger = logging.getLogger(__name__)\n \n-def submit(case, job=None, resubmit=False, no_batch=False):\n+def _submit(case, job=None, resubmit=False, no_batch=False):\n caseroot = case.get_value(\"CASEROOT\")\n \n if job is None:\n@@ -61,11 +62,27 @@\n case.set_value(\"RUN_WITH_SUBMIT\",True)\n case.flush()\n \n- logger.warn(\"submit_jobs %s\"%job)\n+ logger.warn(\"submit_jobs %s\" % job)\n job_ids = case.submit_jobs(no_batch=no_batch, job=job)\n- msg = \"Submitted jobs %s\"%job_ids\n+ msg = \"Submitted jobs %s\" % job_ids\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n \n+def submit(case, job=None, resubmit=False, no_batch=False):\n+ try:\n+ _submit(case, job=job, resubmit=resubmit, no_batch=no_batch)\n+ except:\n+ # If something failed in the batch system, make sure to mark\n+ # the test as failed if we are running a test.\n+ if case.get_value(\"TEST\"):\n+ caseroot = case.get_value(\"CASEROOT\")\n+ casebaseid = case.get_value(\"CASEBASEID\")\n+ with TestStatus(test_dir=caseroot, test_name=casebaseid, lock=True) as ts:\n+ ts.set_status(RUN_PHASE, TEST_FAIL_STATUS, comments=\"batch system failure\")\n+\n+ append_status(\"Batch submission failed, TestStatus file changed to read-only\", caseroot=caseroot, sfile=\"TestStatus.log\")\n+\n+ raise\n+\n def check_case(case, caseroot):\n check_lockedfiles(caseroot)\n create_namelists(case) # Must be called before check_all_input_data\n", "issue": "ERR test does not always report failures correctly\nThe ERR test runs four separate jobs, if one of these jobs completes but the next fails to launch, the test reports PASS. To reproduce this problem its enough to edit the jobid_pattern field in config batch so that the dependency is incorrect - this causes the first job to exit and the second to fail to launch. But the TestStatus file indicates all PASS. \n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\ncase.submit - Submit a cesm workflow to the queueing system or run it\nif there is no queueing system. 
A cesm workflow may include multiple\njobs.\n\"\"\"\nimport socket\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, append_status\nfrom CIME.preview_namelists import create_namelists\nfrom CIME.check_lockedfiles import check_lockedfiles\nfrom CIME.check_input_data import check_all_input_data\nfrom CIME.case_cmpgen_namelists import case_cmpgen_namelists\n\nlogger = logging.getLogger(__name__)\n\ndef submit(case, job=None, resubmit=False, no_batch=False):\n caseroot = case.get_value(\"CASEROOT\")\n\n if job is None:\n if case.get_value(\"TEST\"):\n job = \"case.test\"\n else:\n job = \"case.run\"\n\n if resubmit:\n resub = case.get_value(\"RESUBMIT\")\n logger.info(\"Submitting job '%s', resubmit=%d\" % (job, resub))\n case.set_value(\"RESUBMIT\",resub-1)\n if case.get_value(\"RESUBMIT_SETS_CONTINUE_RUN\"):\n case.set_value(\"CONTINUE_RUN\", True)\n else:\n if job in (\"case.test\",\"case.run\"):\n check_case(case, caseroot)\n check_DA_settings(case)\n if case.get_value(\"MACH\") == \"mira\":\n with open(\".original_host\",\"w\") as fd:\n fd.write( socket.gethostname())\n\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n if not resubmit:\n case.set_value(\"IS_FIRST_RUN\", True)\n if no_batch:\n batch_system = \"none\"\n else:\n batch_system = env_batch.get_batch_system_type()\n case.set_value(\"BATCH_SYSTEM\", batch_system)\n else:\n if env_batch.get_batch_system_type() == \"none\":\n no_batch = True\n\n # This is a resubmission, do not reinitialize test values\n case.set_value(\"IS_FIRST_RUN\", False)\n\n #Load Modules\n case.load_env()\n\n case.set_value(\"RUN_WITH_SUBMIT\",True)\n case.flush()\n\n logger.warn(\"submit_jobs %s\"%job)\n job_ids = case.submit_jobs(no_batch=no_batch, job=job)\n msg = \"Submitted jobs %s\"%job_ids\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n\ndef check_case(case, caseroot):\n check_lockedfiles(caseroot)\n create_namelists(case) # Must be called before check_all_input_data\n logger.info(\"Checking that inputdata is available as part of case submission\")\n check_all_input_data(case)\n # Now that we have baselines, do baseline operations\n if case.get_value(\"TEST\"):\n case_cmpgen_namelists(case)\n\n expect(case.get_value(\"BUILD_COMPLETE\"), \"Build complete is \"\n \"not True please rebuild the model by calling case.build\")\n logger.info(\"Check case OK\")\n\ndef check_DA_settings(case):\n if case.get_value(\"DATA_ASSIMILATION\"):\n script = case.get_value(\"DATA_ASSIMILATION_SCRIPT\")\n cycles = case.get_value(\"DATA_ASSIMILATION_CYCLES\")\n logger.info(\"Data Assimilation enabled using script %s with %d cycles\"%(script,cycles))\n\n", "path": "utils/python/CIME/case_submit.py"}]} | 1,567 | 521 |
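
One detail of the fix above is worth spelling out: the public submit() becomes a thin wrapper that records a test failure before re-raising, so a batch-system error can no longer be swallowed as PASS. A minimal standalone sketch of the same wrap-and-re-raise pattern, using hypothetical helper and status names rather than CIME's real API:

```python
def submit_with_status(case, status_file):
    """Run the real submission; on any failure, record FAIL before propagating."""
    try:
        _do_submit(case)  # hypothetical internal helper doing the actual work
    except Exception:
        # Record the failure so later inspection sees FAIL, not a stale PASS.
        status_file.set_status("RUN", "FAIL", comments="batch system failure")
        raise  # re-raise so the caller still observes the original error
```
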
gh_patches_debug_21868 | rasdani/github-patches | git_diff | streamlink__streamlink-4885 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.btv: No playable streams found
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
The plugin is not functional. I am attaching a log.
### Debug log
```text
streamlink --loglevel debug "https://btvplus.bg/live/" best
[cli][debug] OS: Linux-5.15.0-50-generic-x86_64-with-glibc2.29
[cli][debug] Python: 3.8.10
[cli][debug] Streamlink: 5.0.1
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.0
[cli][debug] lxml: 4.6.4
[cli][debug] pycountry: 19.8.18
[cli][debug] pycryptodome: 3.9.9
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.26.0
[cli][debug] websocket-client: 1.2.1
[cli][debug] Arguments:
[cli][debug] url=https://btvplus.bg/live/
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin btv for URL https://btvplus.bg/live/
[utils.l10n][debug] Language code: bg_BG
error: No playable streams found on this URL: https://btvplus.bg/live/
```
</issue>
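
The log alone does not show why the schema stopped matching, so a useful first step is to fetch the player-config endpoint by hand and inspect the payload. A rough diagnostic sketch, with a placeholder media_id (the plugin below scrapes the real value from the live page):

```python
import requests

# media_id is a placeholder here; the plugin extracts the real value
# from the https://btvplus.bg/live/ page source.
resp = requests.get(
    "https://btvplus.bg/lbin/v3/btvplus/player_config.php",
    params={"media_id": "12345"},
)
print(resp.status_code)
print(resp.text[:500])  # check whether the JSON still carries a "config" string
```
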
<code>
[start of src/streamlink/plugins/btv.py]
1 """
2 $description A privately owned Bulgarian live TV channel.
3 $url btvplus.bg
4 $type live
5 $region Bulgaria
6 """
7
8 import logging
9 import re
10
11 from streamlink.plugin import Plugin, pluginmatcher
12 from streamlink.plugin.api import validate
13 from streamlink.stream.hls import HLSStream
14
15 log = logging.getLogger(__name__)
16
17
18 @pluginmatcher(re.compile(
19 r"https?://(?:www\.)?btvplus\.bg/live/?"
20 ))
21 class BTV(Plugin):
22 URL_API = "https://btvplus.bg/lbin/v3/btvplus/player_config.php"
23
24 def _get_streams(self):
25 media_id = self.session.http.get(self.url, schema=validate.Schema(
26 re.compile(r"media_id=(\d+)"),
27 validate.any(None, validate.get(1)),
28 ))
29 if media_id is None:
30 return
31
32 stream_url = self.session.http.get(
33 self.URL_API,
34 params={
35 "media_id": media_id,
36 },
37 schema=validate.Schema(
38 validate.any(
39 validate.all(
40 validate.regex(re.compile(r"geo_blocked_stream")),
41 validate.get(0),
42 ),
43 validate.all(
44 validate.parse_json(),
45 {
46 "status": "ok",
47 "config": str,
48 },
49 validate.get("config"),
50 re.compile(r"src: \"(http.*?)\""),
51 validate.none_or_all(
52 validate.get(1),
53 validate.url(),
54 ),
55 ),
56 ),
57 ),
58 )
59 if not stream_url:
60 return
61
62 if stream_url == "geo_blocked_stream":
63 log.error("The content is not available in your region")
64 return
65
66 return HLSStream.parse_variant_playlist(self.session, stream_url)
67
68
69 __plugin__ = BTV
70
[end of src/streamlink/plugins/btv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/btv.py b/src/streamlink/plugins/btv.py
--- a/src/streamlink/plugins/btv.py
+++ b/src/streamlink/plugins/btv.py
@@ -44,14 +44,11 @@
validate.parse_json(),
{
"status": "ok",
- "config": str,
+ "info": {
+ "file": validate.url(path=validate.endswith(".m3u8")),
+ },
},
- validate.get("config"),
- re.compile(r"src: \"(http.*?)\""),
- validate.none_or_all(
- validate.get(1),
- validate.url(),
- ),
+ validate.get(("info", "file")),
),
),
),
@@ -63,7 +60,7 @@
log.error("The content is not available in your region")
return
- return HLSStream.parse_variant_playlist(self.session, stream_url)
+ return {"live": HLSStream(self.session, stream_url)}
__plugin__ = BTV
| {"golden_diff": "diff --git a/src/streamlink/plugins/btv.py b/src/streamlink/plugins/btv.py\n--- a/src/streamlink/plugins/btv.py\n+++ b/src/streamlink/plugins/btv.py\n@@ -44,14 +44,11 @@\n validate.parse_json(),\n {\n \"status\": \"ok\",\n- \"config\": str,\n+ \"info\": {\n+ \"file\": validate.url(path=validate.endswith(\".m3u8\")),\n+ },\n },\n- validate.get(\"config\"),\n- re.compile(r\"src: \\\"(http.*?)\\\"\"),\n- validate.none_or_all(\n- validate.get(1),\n- validate.url(),\n- ),\n+ validate.get((\"info\", \"file\")),\n ),\n ),\n ),\n@@ -63,7 +60,7 @@\n log.error(\"The content is not available in your region\")\n return\n \n- return HLSStream.parse_variant_playlist(self.session, stream_url)\n+ return {\"live\": HLSStream(self.session, stream_url)}\n \n \n __plugin__ = BTV\n", "issue": "plugins.btv: No playable streams found\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nThe plugin is not functional. I am attaching a log.\n\n### Debug log\n\n```text\nstreamlink --loglevel debug \"https://btvplus.bg/live/\" best\r\n[cli][debug] OS: Linux-5.15.0-50-generic-x86_64-with-glibc2.29\r\n[cli][debug] Python: 3.8.10\r\n[cli][debug] Streamlink: 5.0.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] isodate: 0.6.0\r\n[cli][debug] lxml: 4.6.4\r\n[cli][debug] pycountry: 19.8.18\r\n[cli][debug] pycryptodome: 3.9.9\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.26.0\r\n[cli][debug] websocket-client: 1.2.1\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://btvplus.bg/live/\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin btv for URL https://btvplus.bg/live/\r\n[utils.l10n][debug] Language code: bg_BG\r\nerror: No playable streams found on this URL: https://btvplus.bg/live/\n```\n\n", "before_files": [{"content": "\"\"\"\n$description A privately owned Bulgarian live TV channel.\n$url btvplus.bg\n$type live\n$region Bulgaria\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?btvplus\\.bg/live/?\"\n))\nclass BTV(Plugin):\n URL_API = \"https://btvplus.bg/lbin/v3/btvplus/player_config.php\"\n\n def _get_streams(self):\n media_id = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"media_id=(\\d+)\"),\n validate.any(None, validate.get(1)),\n ))\n if media_id is None:\n return\n\n stream_url = self.session.http.get(\n self.URL_API,\n params={\n \"media_id\": media_id,\n },\n schema=validate.Schema(\n validate.any(\n validate.all(\n validate.regex(re.compile(r\"geo_blocked_stream\")),\n validate.get(0),\n ),\n validate.all(\n validate.parse_json(),\n {\n \"status\": \"ok\",\n \"config\": str,\n },\n validate.get(\"config\"),\n re.compile(r\"src: \\\"(http.*?)\\\"\"),\n validate.none_or_all(\n validate.get(1),\n validate.url(),\n ),\n ),\n ),\n ),\n )\n if not stream_url:\n 
return\n\n if stream_url == \"geo_blocked_stream\":\n log.error(\"The content is not available in your region\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n\n\n__plugin__ = BTV\n", "path": "src/streamlink/plugins/btv.py"}]} | 1,524 | 227 |
gh_patches_debug_32008 | rasdani/github-patches | git_diff | microsoft__AzureTRE-1039 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API app not reporting requests to AppInsights
**Description**
Ensure opencensus reports http requests to app insights.
</issue>
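
For a FastAPI app this typically means an HTTP middleware that opens an OpenCensus span per request and exports it with the Azure exporter; the fix below takes exactly that shape. A trimmed sketch of the idea, assuming the opencensus Azure extension is installed and APPINSIGHTS_INSTRUMENTATIONKEY is set:

```python
import os

from fastapi import FastAPI, Request
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

app = FastAPI()

@app.middleware("http")
async def report_requests(request: Request, call_next):
    # One exported span per incoming HTTP request.
    tracer = Tracer(
        exporter=AzureExporter(
            connection_string=f"InstrumentationKey={os.getenv('APPINSIGHTS_INSTRUMENTATIONKEY')}",
        ),
        sampler=ProbabilitySampler(1.0),
    )
    with tracer.span("main"):
        response = await call_next(request)
    return response
```
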
<code>
[start of api_app/main.py]
1 import logging
2 import uvicorn
3
4 from fastapi import FastAPI
5 from fastapi.exceptions import RequestValidationError
6 from fastapi_utils.tasks import repeat_every
7 from starlette.exceptions import HTTPException
8 from starlette.middleware.errors import ServerErrorMiddleware
9
10 from api.routes.api import router as api_router
11 from api.routes.api import tags_metadata
12 from api.errors.http_error import http_error_handler
13 from api.errors.validation_error import http422_error_handler
14 from api.errors.generic_error import generic_error_handler
15 from core import config
16 from core.events import create_start_app_handler, create_stop_app_handler
17 from services.logging import disable_unwanted_loggers, initialize_logging
18 from service_bus.deployment_status_update import receive_message_and_update_deployment
19
20
21 def get_application() -> FastAPI:
22 application = FastAPI(
23 title=config.PROJECT_NAME,
24 debug=config.DEBUG,
25 description=config.API_DESCRIPTION,
26 version=config.VERSION,
27 docs_url="/api/docs",
28 swagger_ui_oauth2_redirect_url="/api/docs/oauth2-redirect",
29 swagger_ui_init_oauth={
30 "usePkceWithAuthorizationCodeGrant": True,
31 "clientId": config.SWAGGER_UI_CLIENT_ID,
32 "scopes": ["openid", "offline_access", f"api://{config.API_CLIENT_ID}/Workspace.Read", f"api://{config.API_CLIENT_ID}/Workspace.Write"]
33 },
34 openapi_tags=tags_metadata
35 )
36
37 application.add_event_handler("startup", create_start_app_handler(application))
38 application.add_event_handler("shutdown", create_stop_app_handler(application))
39
40 application.add_middleware(ServerErrorMiddleware, handler=generic_error_handler)
41 application.add_exception_handler(HTTPException, http_error_handler)
42 application.add_exception_handler(RequestValidationError, http422_error_handler)
43
44 application.include_router(api_router, prefix=config.API_PREFIX)
45 return application
46
47
48 app = get_application()
49
50
51 @app.on_event("startup")
52 async def initialize_logging_on_startup():
53 if config.DEBUG:
54 initialize_logging(logging.DEBUG)
55 else:
56 initialize_logging(logging.INFO)
57
58 disable_unwanted_loggers()
59
60
61 @app.on_event("startup")
62 @repeat_every(seconds=20, wait_first=True, logger=logging.getLogger())
63 async def update_deployment_status() -> None:
64 await receive_message_and_update_deployment(app)
65
66
67 if __name__ == "__main__":
68 uvicorn.run(app, host="0.0.0.0", port=8000)
69
[end of api_app/main.py]
[start of api_app/_version.py]
1 __version__ = "0.1.1"
2
[end of api_app/_version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.1.1"
+__version__ = "0.1.3"
diff --git a/api_app/main.py b/api_app/main.py
--- a/api_app/main.py
+++ b/api_app/main.py
@@ -1,7 +1,8 @@
import logging
+import os
import uvicorn
-from fastapi import FastAPI
+from fastapi import FastAPI, Request
from fastapi.exceptions import RequestValidationError
from fastapi_utils.tasks import repeat_every
from starlette.exceptions import HTTPException
@@ -17,6 +18,16 @@
from services.logging import disable_unwanted_loggers, initialize_logging
from service_bus.deployment_status_update import receive_message_and_update_deployment
+# Opencensus Azure imports
+from opencensus.ext.azure.trace_exporter import AzureExporter
+from opencensus.trace.attributes_helper import COMMON_ATTRIBUTES
+from opencensus.trace.samplers import ProbabilitySampler
+from opencensus.trace.span import SpanKind
+from opencensus.trace.tracer import Tracer
+
+HTTP_URL = COMMON_ATTRIBUTES['HTTP_URL']
+HTTP_STATUS_CODE = COMMON_ATTRIBUTES['HTTP_STATUS_CODE']
+
def get_application() -> FastAPI:
application = FastAPI(
@@ -64,5 +75,19 @@
await receive_message_and_update_deployment(app)
[email protected]("http")
+async def add_process_time_header(request: Request, call_next):
+ tracer = Tracer(exporter=AzureExporter(connection_string=f'InstrumentationKey={os.getenv("APPINSIGHTS_INSTRUMENTATIONKEY")}'), sampler=ProbabilitySampler(1.0))
+ with tracer.span("main") as span:
+ span.span_kind = SpanKind.SERVER
+
+ response = await call_next(request)
+
+ tracer.add_attribute_to_current_span(attribute_key=HTTP_STATUS_CODE, attribute_value=response.status_code)
+ tracer.add_attribute_to_current_span(attribute_key=HTTP_URL, attribute_value=str(request.url))
+
+ return response
+
+
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
| {"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.1.1\"\n+__version__ = \"0.1.3\"\ndiff --git a/api_app/main.py b/api_app/main.py\n--- a/api_app/main.py\n+++ b/api_app/main.py\n@@ -1,7 +1,8 @@\n import logging\n+import os\n import uvicorn\n \n-from fastapi import FastAPI\n+from fastapi import FastAPI, Request\n from fastapi.exceptions import RequestValidationError\n from fastapi_utils.tasks import repeat_every\n from starlette.exceptions import HTTPException\n@@ -17,6 +18,16 @@\n from services.logging import disable_unwanted_loggers, initialize_logging\n from service_bus.deployment_status_update import receive_message_and_update_deployment\n \n+# Opencensus Azure imports\n+from opencensus.ext.azure.trace_exporter import AzureExporter\n+from opencensus.trace.attributes_helper import COMMON_ATTRIBUTES\n+from opencensus.trace.samplers import ProbabilitySampler\n+from opencensus.trace.span import SpanKind\n+from opencensus.trace.tracer import Tracer\n+\n+HTTP_URL = COMMON_ATTRIBUTES['HTTP_URL']\n+HTTP_STATUS_CODE = COMMON_ATTRIBUTES['HTTP_STATUS_CODE']\n+\n \n def get_application() -> FastAPI:\n application = FastAPI(\n@@ -64,5 +75,19 @@\n await receive_message_and_update_deployment(app)\n \n \[email protected](\"http\")\n+async def add_process_time_header(request: Request, call_next):\n+ tracer = Tracer(exporter=AzureExporter(connection_string=f'InstrumentationKey={os.getenv(\"APPINSIGHTS_INSTRUMENTATIONKEY\")}'), sampler=ProbabilitySampler(1.0))\n+ with tracer.span(\"main\") as span:\n+ span.span_kind = SpanKind.SERVER\n+\n+ response = await call_next(request)\n+\n+ tracer.add_attribute_to_current_span(attribute_key=HTTP_STATUS_CODE, attribute_value=response.status_code)\n+ tracer.add_attribute_to_current_span(attribute_key=HTTP_URL, attribute_value=str(request.url))\n+\n+ return response\n+\n+\n if __name__ == \"__main__\":\n uvicorn.run(app, host=\"0.0.0.0\", port=8000)\n", "issue": "API app not reporting requests to AppInsights\n**Description**\r\nEnsure opencensus reports http requests to app insights.\n", "before_files": [{"content": "import logging\nimport uvicorn\n\nfrom fastapi import FastAPI\nfrom fastapi.exceptions import RequestValidationError\nfrom fastapi_utils.tasks import repeat_every\nfrom starlette.exceptions import HTTPException\nfrom starlette.middleware.errors import ServerErrorMiddleware\n\nfrom api.routes.api import router as api_router\nfrom api.routes.api import tags_metadata\nfrom api.errors.http_error import http_error_handler\nfrom api.errors.validation_error import http422_error_handler\nfrom api.errors.generic_error import generic_error_handler\nfrom core import config\nfrom core.events import create_start_app_handler, create_stop_app_handler\nfrom services.logging import disable_unwanted_loggers, initialize_logging\nfrom service_bus.deployment_status_update import receive_message_and_update_deployment\n\n\ndef get_application() -> FastAPI:\n application = FastAPI(\n title=config.PROJECT_NAME,\n debug=config.DEBUG,\n description=config.API_DESCRIPTION,\n version=config.VERSION,\n docs_url=\"/api/docs\",\n swagger_ui_oauth2_redirect_url=\"/api/docs/oauth2-redirect\",\n swagger_ui_init_oauth={\n \"usePkceWithAuthorizationCodeGrant\": True,\n \"clientId\": config.SWAGGER_UI_CLIENT_ID,\n \"scopes\": [\"openid\", \"offline_access\", f\"api://{config.API_CLIENT_ID}/Workspace.Read\", f\"api://{config.API_CLIENT_ID}/Workspace.Write\"]\n },\n 
openapi_tags=tags_metadata\n )\n\n application.add_event_handler(\"startup\", create_start_app_handler(application))\n application.add_event_handler(\"shutdown\", create_stop_app_handler(application))\n\n application.add_middleware(ServerErrorMiddleware, handler=generic_error_handler)\n application.add_exception_handler(HTTPException, http_error_handler)\n application.add_exception_handler(RequestValidationError, http422_error_handler)\n\n application.include_router(api_router, prefix=config.API_PREFIX)\n return application\n\n\napp = get_application()\n\n\[email protected]_event(\"startup\")\nasync def initialize_logging_on_startup():\n if config.DEBUG:\n initialize_logging(logging.DEBUG)\n else:\n initialize_logging(logging.INFO)\n\n disable_unwanted_loggers()\n\n\[email protected]_event(\"startup\")\n@repeat_every(seconds=20, wait_first=True, logger=logging.getLogger())\nasync def update_deployment_status() -> None:\n await receive_message_and_update_deployment(app)\n\n\nif __name__ == \"__main__\":\n uvicorn.run(app, host=\"0.0.0.0\", port=8000)\n", "path": "api_app/main.py"}, {"content": "__version__ = \"0.1.1\"\n", "path": "api_app/_version.py"}]} | 1,223 | 502 |
gh_patches_debug_7155 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-873 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No module named 'django.utils.importlib' (Django dev)
In mongoengine/django/mongo_auth/models.py
See https://github.com/django/django/tree/master/django/utils
No module named 'django.utils.importlib' (Django dev)
In mongoengine/django/mongo_auth/models.py
See https://github.com/django/django/tree/master/django/utils
</issue>
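
The module was removed upstream rather than renamed in place, so the usual remedy is a guarded import that tries the newer location first and falls back for older Django releases, which is what the patch below does. The pattern in isolation:

```python
try:
    # Newer Django versions
    from django.utils.module_loading import import_module
except ImportError:
    # Older Django versions still ship django.utils.importlib
    from django.utils.importlib import import_module
```

The standard library's importlib.import_module would be an equivalent, Django-independent fallback.
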
<code>
[start of mongoengine/django/mongo_auth/models.py]
1 from django.conf import settings
2 from django.contrib.auth.hashers import make_password
3 from django.contrib.auth.models import UserManager
4 from django.core.exceptions import ImproperlyConfigured
5 from django.db import models
6 from django.utils.importlib import import_module
7 from django.utils.translation import ugettext_lazy as _
8
9
10 __all__ = (
11 'get_user_document',
12 )
13
14
15 MONGOENGINE_USER_DOCUMENT = getattr(
16 settings, 'MONGOENGINE_USER_DOCUMENT', 'mongoengine.django.auth.User')
17
18
19 def get_user_document():
20 """Get the user document class used for authentication.
21
22 This is the class defined in settings.MONGOENGINE_USER_DOCUMENT, which
23 defaults to `mongoengine.django.auth.User`.
24
25 """
26
27 name = MONGOENGINE_USER_DOCUMENT
28 dot = name.rindex('.')
29 module = import_module(name[:dot])
30 return getattr(module, name[dot + 1:])
31
32
33 class MongoUserManager(UserManager):
34 """A User manager wich allows the use of MongoEngine documents in Django.
35
36 To use the manager, you must tell django.contrib.auth to use MongoUser as
37 the user model. In you settings.py, you need:
38
39 INSTALLED_APPS = (
40 ...
41 'django.contrib.auth',
42 'mongoengine.django.mongo_auth',
43 ...
44 )
45 AUTH_USER_MODEL = 'mongo_auth.MongoUser'
46
47 Django will use the model object to access the custom Manager, which will
48 replace the original queryset with MongoEngine querysets.
49
50 By default, mongoengine.django.auth.User will be used to store users. You
51 can specify another document class in MONGOENGINE_USER_DOCUMENT in your
52 settings.py.
53
54 The User Document class has the same requirements as a standard custom user
55 model: https://docs.djangoproject.com/en/dev/topics/auth/customizing/
56
57 In particular, the User Document class must define USERNAME_FIELD and
58 REQUIRED_FIELDS.
59
60 `AUTH_USER_MODEL` has been added in Django 1.5.
61
62 """
63
64 def contribute_to_class(self, model, name):
65 super(MongoUserManager, self).contribute_to_class(model, name)
66 self.dj_model = self.model
67 self.model = get_user_document()
68
69 self.dj_model.USERNAME_FIELD = self.model.USERNAME_FIELD
70 username = models.CharField(_('username'), max_length=30, unique=True)
71 username.contribute_to_class(self.dj_model, self.dj_model.USERNAME_FIELD)
72
73 self.dj_model.REQUIRED_FIELDS = self.model.REQUIRED_FIELDS
74 for name in self.dj_model.REQUIRED_FIELDS:
75 field = models.CharField(_(name), max_length=30)
76 field.contribute_to_class(self.dj_model, name)
77
78
79 def get(self, *args, **kwargs):
80 try:
81 return self.get_query_set().get(*args, **kwargs)
82 except self.model.DoesNotExist:
83 # ModelBackend expects this exception
84 raise self.dj_model.DoesNotExist
85
86 @property
87 def db(self):
88 raise NotImplementedError
89
90 def get_empty_query_set(self):
91 return self.model.objects.none()
92
93 def get_query_set(self):
94 return self.model.objects
95
96
97 class MongoUser(models.Model):
98 """"Dummy user model for Django.
99
100 MongoUser is used to replace Django's UserManager with MongoUserManager.
101 The actual user document class is mongoengine.django.auth.User or any
102 other document class specified in MONGOENGINE_USER_DOCUMENT.
103
104 To get the user document class, use `get_user_document()`.
105
106 """
107
108 objects = MongoUserManager()
109
110 class Meta:
111 app_label = 'mongo_auth'
112
113 def set_password(self, password):
114 """Doesn't do anything, but works around the issue with Django 1.6."""
115 make_password(password)
116
[end of mongoengine/django/mongo_auth/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mongoengine/django/mongo_auth/models.py b/mongoengine/django/mongo_auth/models.py
--- a/mongoengine/django/mongo_auth/models.py
+++ b/mongoengine/django/mongo_auth/models.py
@@ -3,7 +3,11 @@
from django.contrib.auth.models import UserManager
from django.core.exceptions import ImproperlyConfigured
from django.db import models
-from django.utils.importlib import import_module
+try:
+ from django.utils.module_loading import import_module
+except ImportError:
+ """Handle older versions of Django"""
+ from django.utils.importlib import import_module
from django.utils.translation import ugettext_lazy as _
| {"golden_diff": "diff --git a/mongoengine/django/mongo_auth/models.py b/mongoengine/django/mongo_auth/models.py\n--- a/mongoengine/django/mongo_auth/models.py\n+++ b/mongoengine/django/mongo_auth/models.py\n@@ -3,7 +3,11 @@\n from django.contrib.auth.models import UserManager\n from django.core.exceptions import ImproperlyConfigured\n from django.db import models\n-from django.utils.importlib import import_module\n+try:\n+ from django.utils.module_loading import import_module\n+except ImportError:\n+ \"\"\"Handle older versions of Django\"\"\"\n+ from django.utils.importlib import import_module\n from django.utils.translation import ugettext_lazy as _\n", "issue": "No module named 'django.utils.importlib' (Django dev)\nIn mongoengine/django/mongo_auth/models.py\nSee https://github.com/django/django/tree/master/django/utils\n\nNo module named 'django.utils.importlib' (Django dev)\nIn mongoengine/django/mongo_auth/models.py\nSee https://github.com/django/django/tree/master/django/utils\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.contrib.auth.hashers import make_password\nfrom django.contrib.auth.models import UserManager\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db import models\nfrom django.utils.importlib import import_module\nfrom django.utils.translation import ugettext_lazy as _\n\n\n__all__ = (\n 'get_user_document',\n)\n\n\nMONGOENGINE_USER_DOCUMENT = getattr(\n settings, 'MONGOENGINE_USER_DOCUMENT', 'mongoengine.django.auth.User')\n\n\ndef get_user_document():\n \"\"\"Get the user document class used for authentication.\n\n This is the class defined in settings.MONGOENGINE_USER_DOCUMENT, which\n defaults to `mongoengine.django.auth.User`.\n\n \"\"\"\n\n name = MONGOENGINE_USER_DOCUMENT\n dot = name.rindex('.')\n module = import_module(name[:dot])\n return getattr(module, name[dot + 1:])\n\n\nclass MongoUserManager(UserManager):\n \"\"\"A User manager wich allows the use of MongoEngine documents in Django.\n\n To use the manager, you must tell django.contrib.auth to use MongoUser as\n the user model. In you settings.py, you need:\n\n INSTALLED_APPS = (\n ...\n 'django.contrib.auth',\n 'mongoengine.django.mongo_auth',\n ...\n )\n AUTH_USER_MODEL = 'mongo_auth.MongoUser'\n\n Django will use the model object to access the custom Manager, which will\n replace the original queryset with MongoEngine querysets.\n\n By default, mongoengine.django.auth.User will be used to store users. 
You\n can specify another document class in MONGOENGINE_USER_DOCUMENT in your\n settings.py.\n\n The User Document class has the same requirements as a standard custom user\n model: https://docs.djangoproject.com/en/dev/topics/auth/customizing/\n\n In particular, the User Document class must define USERNAME_FIELD and\n REQUIRED_FIELDS.\n\n `AUTH_USER_MODEL` has been added in Django 1.5.\n\n \"\"\"\n\n def contribute_to_class(self, model, name):\n super(MongoUserManager, self).contribute_to_class(model, name)\n self.dj_model = self.model\n self.model = get_user_document()\n\n self.dj_model.USERNAME_FIELD = self.model.USERNAME_FIELD\n username = models.CharField(_('username'), max_length=30, unique=True)\n username.contribute_to_class(self.dj_model, self.dj_model.USERNAME_FIELD)\n\n self.dj_model.REQUIRED_FIELDS = self.model.REQUIRED_FIELDS\n for name in self.dj_model.REQUIRED_FIELDS:\n field = models.CharField(_(name), max_length=30)\n field.contribute_to_class(self.dj_model, name)\n\n\n def get(self, *args, **kwargs):\n try:\n return self.get_query_set().get(*args, **kwargs)\n except self.model.DoesNotExist:\n # ModelBackend expects this exception\n raise self.dj_model.DoesNotExist\n\n @property\n def db(self):\n raise NotImplementedError\n\n def get_empty_query_set(self):\n return self.model.objects.none()\n\n def get_query_set(self):\n return self.model.objects\n\n\nclass MongoUser(models.Model):\n \"\"\"\"Dummy user model for Django.\n\n MongoUser is used to replace Django's UserManager with MongoUserManager.\n The actual user document class is mongoengine.django.auth.User or any\n other document class specified in MONGOENGINE_USER_DOCUMENT.\n\n To get the user document class, use `get_user_document()`.\n\n \"\"\"\n\n objects = MongoUserManager()\n\n class Meta:\n app_label = 'mongo_auth'\n\n def set_password(self, password):\n \"\"\"Doesn't do anything, but works around the issue with Django 1.6.\"\"\"\n make_password(password)\n", "path": "mongoengine/django/mongo_auth/models.py"}]} | 1,671 | 146 |
gh_patches_debug_26284 | rasdani/github-patches | git_diff | python-poetry__poetry-123 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
poetry script: unable to find entry point module, if package resides in src
`poetry script` is unable to find the entry point module in `src`
```
[NameError]
name 'Module' is not defined
script <script-name> [<args>]...
```
</issue>
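
The generated one-liner runs import_module() from the project root, so a package living under src/ is simply not on sys.path there. A sketch of the workaround the fix builds on (module path and callable are illustrative):

```python
import sys
from importlib import import_module

# For a src/ layout, make the package directory importable first.
sys.path.append("src")

entry = import_module("my_package.console")  # hypothetical module path
entry.main()  # hypothetical callable taken from the script definition
```
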
<code>
[start of poetry/console/commands/script.py]
1 import sys
2
3 from .venv_command import VenvCommand
4
5
6 class ScriptCommand(VenvCommand):
7 """
8 Executes a script defined in <comment>pyproject.toml</comment>
9
10 script
11 { script-name : The name of the script to execute }
12 { args?* : The command and arguments/options to pass to the script. }
13 """
14
15 def handle(self):
16 script = self.argument('script-name')
17 argv = [script] + self.argument('args')
18
19 scripts = self.poetry.local_config.get('scripts')
20 if not scripts:
21 raise RuntimeError('No scripts defined in pyproject.toml')
22
23 if script not in scripts:
24 raise ValueError('Script {} is not defined'.format(script))
25
26 module, callable_ = scripts[script].split(':')
27
28 cmd = ['python', '-c']
29
30 cmd += [
31 '"import sys; '
32 'from importlib import import_module; '
33 'sys.argv = {!r}; '
34 'import_module(\'{}\').{}()"'.format(
35 argv, module, callable_
36 )
37 ]
38
39 self.venv.run(*cmd, shell=True, call=True)
40
41 def merge_application_definition(self, merge_args=True):
42 if self._application is None \
43 or (self._application_definition_merged
44 and (self._application_definition_merged_with_args or not merge_args)):
45 return
46
47 if merge_args:
48 current_arguments = self._definition.get_arguments()
49 self._definition.set_arguments(self._application.get_definition().get_arguments())
50 self._definition.add_arguments(current_arguments)
51
52 self._application_definition_merged = True
53 if merge_args:
54 self._application_definition_merged_with_args = True
55
[end of poetry/console/commands/script.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/poetry/console/commands/script.py b/poetry/console/commands/script.py
--- a/poetry/console/commands/script.py
+++ b/poetry/console/commands/script.py
@@ -1,5 +1,6 @@
import sys
+from ...masonry.utils.module import Module
from .venv_command import VenvCommand
@@ -25,19 +26,32 @@
module, callable_ = scripts[script].split(':')
+ src_in_sys_path = 'sys.path.append(\'src\'); '\
+ if self._module.is_in_src() else ''
+
cmd = ['python', '-c']
cmd += [
'"import sys; '
'from importlib import import_module; '
- 'sys.argv = {!r}; '
+ 'sys.argv = {!r}; {}'
'import_module(\'{}\').{}()"'.format(
- argv, module, callable_
+ argv, src_in_sys_path, module, callable_
)
]
self.venv.run(*cmd, shell=True, call=True)
+ @property
+ def _module(self):
+ poetry = self.poetry
+ package = poetry.package
+ path = poetry.file.parent
+ module = Module(
+ package.name, path.as_posix()
+ )
+ return module
+
def merge_application_definition(self, merge_args=True):
if self._application is None \
or (self._application_definition_merged
| {"golden_diff": "diff --git a/poetry/console/commands/script.py b/poetry/console/commands/script.py\n--- a/poetry/console/commands/script.py\n+++ b/poetry/console/commands/script.py\n@@ -1,5 +1,6 @@\n import sys\n \n+from ...masonry.utils.module import Module\n from .venv_command import VenvCommand\n \n \n@@ -25,19 +26,32 @@\n \n module, callable_ = scripts[script].split(':')\n \n+ src_in_sys_path = 'sys.path.append(\\'src\\'); '\\\n+ if self._module.is_in_src() else ''\n+\n cmd = ['python', '-c']\n \n cmd += [\n '\"import sys; '\n 'from importlib import import_module; '\n- 'sys.argv = {!r}; '\n+ 'sys.argv = {!r}; {}'\n 'import_module(\\'{}\\').{}()\"'.format(\n- argv, module, callable_\n+ argv, src_in_sys_path, module, callable_\n )\n ]\n \n self.venv.run(*cmd, shell=True, call=True)\n \n+ @property\n+ def _module(self):\n+ poetry = self.poetry\n+ package = poetry.package\n+ path = poetry.file.parent\n+ module = Module(\n+ package.name, path.as_posix()\n+ )\n+ return module\n+\n def merge_application_definition(self, merge_args=True):\n if self._application is None \\\n or (self._application_definition_merged\n", "issue": "poetry script: unable to find entry point module, if package resides in src\n`poetry script` is unable to find the entry point module in `src`\r\n\r\n```\r\n[NameError]\r\nname 'Module' is not defined\r\nscript <script-name> [<args>]...\r\n```\n", "before_files": [{"content": "import sys\n\nfrom .venv_command import VenvCommand\n\n\nclass ScriptCommand(VenvCommand):\n \"\"\"\n Executes a script defined in <comment>pyproject.toml</comment>\n\n script\n { script-name : The name of the script to execute }\n { args?* : The command and arguments/options to pass to the script. }\n \"\"\"\n\n def handle(self):\n script = self.argument('script-name')\n argv = [script] + self.argument('args')\n\n scripts = self.poetry.local_config.get('scripts')\n if not scripts:\n raise RuntimeError('No scripts defined in pyproject.toml')\n\n if script not in scripts:\n raise ValueError('Script {} is not defined'.format(script))\n\n module, callable_ = scripts[script].split(':')\n\n cmd = ['python', '-c']\n\n cmd += [\n '\"import sys; '\n 'from importlib import import_module; '\n 'sys.argv = {!r}; '\n 'import_module(\\'{}\\').{}()\"'.format(\n argv, module, callable_\n )\n ]\n\n self.venv.run(*cmd, shell=True, call=True)\n\n def merge_application_definition(self, merge_args=True):\n if self._application is None \\\n or (self._application_definition_merged\n and (self._application_definition_merged_with_args or not merge_args)):\n return\n\n if merge_args:\n current_arguments = self._definition.get_arguments()\n self._definition.set_arguments(self._application.get_definition().get_arguments())\n self._definition.add_arguments(current_arguments)\n\n self._application_definition_merged = True\n if merge_args:\n self._application_definition_merged_with_args = True\n", "path": "poetry/console/commands/script.py"}]} | 1,062 | 331 |
gh_patches_debug_23167 | rasdani/github-patches | git_diff | onnx__sklearn-onnx-59 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MemoryError when trying to convert TfIdf
Hello
Got an exception when trying to export a pipeline with a TfIdf:
Exception "unhandled MemoryError"
cv = CountVectorizer()
tt = TfidfTransformer()
lsvc = LinearSVC(penalty=penalty, dual=False, tol=1e-3)
text_clf = Pipeline([
('vect', cv),
('tfidf', tt),
('clf', lsvc),
])
text_clf.fit(twenty_train.data, twenty_train.target)
print("Converting text_clf to onnx...")
onnx = convert_sklearn(text_clf, target_opset=9, name='DocClassifierCV-Tfidf-LSVC',
initial_types=[('input', StringTensorType())]
)
Exception "unhandled MemoryError"
The stack is:
convert_sklearn()
convert_topology() :
_registration.get_converter(operator.type)(scope, operator, container)
convert_sklearn_tfidf_transformer()
if not isinstance(cst, numpy.ndarray):
cst = numpy.array(cst.todense())
toDense()
return np.asmatrix(self.toarray(order=order, out=out))
_process_toarray_args()
return np.zeros(self.shape, dtype=self.dtype, order=order)
Could make sense: the input sparse matrix is (strangely) 130000 × 130000, pretty big to densify.
</issue>
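
The costly call is todense() on _idf_diag, which allocates the full N × N array just to read N diagonal entries. The diagonal can be read straight off the sparse matrix in O(N) memory; a minimal sketch, assuming a scipy sparse diagonal matrix like the one TfidfTransformer stores:

```python
import numpy
from scipy.sparse import spdiags

n = 5
idf = spdiags(numpy.arange(1, n + 1, dtype=numpy.float32), 0, n, n)

# Cheap: only the diagonal is materialised, never the full n x n matrix.
cst = idf.diagonal()
print(cst)  # [1. 2. 3. 4. 5.]
```
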
<code>
[start of skl2onnx/operator_converters/TfIdfTransformer.py]
1 # -------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for
4 # license information.
5 # --------------------------------------------------------------------------
6
7 import numpy
8 import numbers
9 import warnings
10 from ..common._registration import register_converter
11 from ..common._apply_operation import apply_log, apply_add, apply_mul, apply_identity
12 from ..proto import onnx_proto
13
14
15 def convert_sklearn_tfidf_transformer(scope, operator, container):
16 # TODO: use sparse containers when available
17 op = operator.raw_operator
18 data = operator.input_full_names
19 final = operator.output_full_names
20 C = operator.inputs[0].type.shape[1]
21
22 if op.sublinear_tf:
23 # code scikit-learn
24 # np.log(X.data, X.data) --> does not apply on null coefficient
25 # X.data += 1
26 raise RuntimeError("ONNX does not support sparse tensors, sublinear_tf must be False")
27
28 logged = scope.get_unique_variable_name('logged')
29 apply_log(scope, data, logged, container)
30
31 if not op.use_idf and op.norm is None:
32 loggedplus1 = final
33 else:
34 loggedplus1 = scope.get_unique_variable_name('loggedplus1')
35 ones = scope.get_unique_variable_name('ones')
36 cst = numpy.ones((C,), dtype=numpy.float32)
37 container.add_initializer(ones, onnx_proto.TensorProto.FLOAT, [C], cst.flatten())
38 apply_add(scope, [logged, ones], loggedplus1, container, broadcast=1)
39
40 data = [loggedplus1]
41
42 if op.use_idf:
43 # code scikit-learn
44 # X = X * self._idf_diag
45 cst = op._idf_diag.astype(numpy.float32)
46 if not isinstance(cst, numpy.ndarray):
47 cst = numpy.array(cst.todense())
48 if len(cst.shape) > 1:
49 cst = numpy.diag(cst)
50 cst = cst.ravel().flatten()
51 shape = [len(cst)]
52 idfcst = scope.get_unique_variable_name('idfcst')
53 container.add_initializer(idfcst, onnx_proto.TensorProto.FLOAT, shape, cst)
54 idfed = final if op.norm is None else scope.get_unique_variable_name('idfed')
55 apply_mul(scope, data + [idfcst], idfed, container, broadcast=1)
56 data = [idfed]
57
58 if op.norm is not None:
59 op_type = 'Normalizer'
60 norm_map = {'max': 'MAX', 'l1': 'L1', 'l2': 'L2'}
61 attrs = {'name': scope.get_unique_operator_name(op_type)}
62 if op.norm in norm_map:
63 attrs['norm'] = norm_map[op.norm]
64 else:
65 raise RuntimeError('Invalid norm: %s' % op.norm)
66
67 container.add_node(op_type, data, operator.output_full_names, op_domain='ai.onnx.ml', **attrs)
68 data = None
69
70 if data == operator.input_full_names:
71 # Nothing happened --> identity
72 apply_identity(scope, data, final, container)
73
74
75 register_converter('SklearnTfidfTransformer', convert_sklearn_tfidf_transformer)
76
[end of skl2onnx/operator_converters/TfIdfTransformer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skl2onnx/operator_converters/TfIdfTransformer.py b/skl2onnx/operator_converters/TfIdfTransformer.py
--- a/skl2onnx/operator_converters/TfIdfTransformer.py
+++ b/skl2onnx/operator_converters/TfIdfTransformer.py
@@ -4,9 +4,10 @@
# license information.
# --------------------------------------------------------------------------
-import numpy
import numbers
import warnings
+import numpy
+from scipy.sparse import diags
from ..common._registration import register_converter
from ..common._apply_operation import apply_log, apply_add, apply_mul, apply_identity
from ..proto import onnx_proto
@@ -44,7 +45,11 @@
# X = X * self._idf_diag
cst = op._idf_diag.astype(numpy.float32)
if not isinstance(cst, numpy.ndarray):
- cst = numpy.array(cst.todense())
+ if len(cst.shape) > 1:
+ n = cst.shape[0]
+ cst = numpy.array([cst[i, i] for i in range(n)])
+ else:
+ cst = numpy.array(cst.todense())
if len(cst.shape) > 1:
cst = numpy.diag(cst)
cst = cst.ravel().flatten()
| {"golden_diff": "diff --git a/skl2onnx/operator_converters/TfIdfTransformer.py b/skl2onnx/operator_converters/TfIdfTransformer.py\n--- a/skl2onnx/operator_converters/TfIdfTransformer.py\n+++ b/skl2onnx/operator_converters/TfIdfTransformer.py\n@@ -4,9 +4,10 @@\n # license information.\n # --------------------------------------------------------------------------\n \n-import numpy\n import numbers\n import warnings\n+import numpy\n+from scipy.sparse import diags\n from ..common._registration import register_converter\n from ..common._apply_operation import apply_log, apply_add, apply_mul, apply_identity\n from ..proto import onnx_proto\n@@ -44,7 +45,11 @@\n # X = X * self._idf_diag\n cst = op._idf_diag.astype(numpy.float32)\n if not isinstance(cst, numpy.ndarray):\n- cst = numpy.array(cst.todense())\n+ if len(cst.shape) > 1:\n+ n = cst.shape[0]\n+ cst = numpy.array([cst[i, i] for i in range(n)])\n+ else:\n+ cst = numpy.array(cst.todense())\n if len(cst.shape) > 1:\n cst = numpy.diag(cst)\n cst = cst.ravel().flatten()\n", "issue": "MemoryError when trying to convert TfIdf\nHello\r\nGot an exception when trying to export a pipeline with a TfIdf : \r\nException \"unhandled MemoryError\"\r\n cv = CountVectorizer()\r\n tt = TfidfTransformer()\r\n lsvc = LinearSVC(penalty=penalty, dual=False, tol=1e-3)\r\n text_clf = Pipeline([ \r\n ('vect', cv),\r\n ('tfidf', tt),\r\n ('clf', lsvc),\r\n ])\r\n text_clf.fit(twenty_train.data, twenty_train.target) \r\n print(\"Converting text_clf to onnx...\")\r\n onnx = convert_sklearn(text_clf, target_opset=9, name='DocClassifierCV-Tfidf-LSVC', \r\n initial_types=[('input', StringTensorType())]\r\n )\r\n\r\nException \"unhandled MemoryError\"\r\n\r\nThe stack is:\r\n\r\nconvert_sklearn()\r\n\r\nconvert_topology() : \r\n _registration.get_converter(operator.type)(scope, operator, container)\r\n\r\nconvert_sklearn_tfidf_transformer()\r\n if not isinstance(cst, numpy.ndarray):\r\n cst = numpy.array(cst.todense())\r\n\r\ntoDense()\r\n return np.asmatrix(self.toarray(order=order, out=out))\r\n\r\n_process_toarray_args()\r\n return np.zeros(self.shape, dtype=self.dtype, order=order)\r\n\r\nCould make sens : the input sparse matrix is (strangely) 130000 per 130000, pretty big to be densified.\r\n\n", "before_files": [{"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for\n# license information.\n# --------------------------------------------------------------------------\n\nimport numpy\nimport numbers\nimport warnings\nfrom ..common._registration import register_converter\nfrom ..common._apply_operation import apply_log, apply_add, apply_mul, apply_identity\nfrom ..proto import onnx_proto\n\n\ndef convert_sklearn_tfidf_transformer(scope, operator, container):\n # TODO: use sparse containers when available\n op = operator.raw_operator\n data = operator.input_full_names\n final = operator.output_full_names\n C = operator.inputs[0].type.shape[1]\n \n if op.sublinear_tf:\n # code scikit-learn\n # np.log(X.data, X.data) --> does not apply on null coefficient\n # X.data += 1\n raise RuntimeError(\"ONNX does not support sparse tensors, sublinear_tf must be False\")\n \n logged = scope.get_unique_variable_name('logged')\n apply_log(scope, data, logged, container)\n \n if not op.use_idf and op.norm is None:\n loggedplus1 = final\n else:\n loggedplus1 = scope.get_unique_variable_name('loggedplus1')\n ones = scope.get_unique_variable_name('ones')\n cst = numpy.ones((C,), dtype=numpy.float32)\n container.add_initializer(ones, onnx_proto.TensorProto.FLOAT, [C], cst.flatten()) \n apply_add(scope, [logged, ones], loggedplus1, container, broadcast=1)\n \n data = [loggedplus1]\n \n if op.use_idf:\n # code scikit-learn\n # X = X * self._idf_diag\n cst = op._idf_diag.astype(numpy.float32)\n if not isinstance(cst, numpy.ndarray):\n cst = numpy.array(cst.todense())\n if len(cst.shape) > 1:\n cst = numpy.diag(cst)\n cst = cst.ravel().flatten()\n shape = [len(cst)]\n idfcst = scope.get_unique_variable_name('idfcst')\n container.add_initializer(idfcst, onnx_proto.TensorProto.FLOAT, shape, cst)\n idfed = final if op.norm is None else scope.get_unique_variable_name('idfed')\n apply_mul(scope, data + [idfcst], idfed, container, broadcast=1)\n data = [idfed]\n\n if op.norm is not None:\n op_type = 'Normalizer'\n norm_map = {'max': 'MAX', 'l1': 'L1', 'l2': 'L2'}\n attrs = {'name': scope.get_unique_operator_name(op_type)}\n if op.norm in norm_map:\n attrs['norm'] = norm_map[op.norm]\n else:\n raise RuntimeError('Invalid norm: %s' % op.norm)\n\n container.add_node(op_type, data, operator.output_full_names, op_domain='ai.onnx.ml', **attrs)\n data = None\n \n if data == operator.input_full_names:\n # Nothing happened --> identity\n apply_identity(scope, data, final, container)\n\n\nregister_converter('SklearnTfidfTransformer', convert_sklearn_tfidf_transformer)\n", "path": "skl2onnx/operator_converters/TfIdfTransformer.py"}]} | 1,703 | 286 |
gh_patches_debug_29507 | rasdani/github-patches | git_diff | translate__pootle-6487 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Paths dropdown missing some parent dirs
If a dir contains only directories and no active stores, it's not shown in the menu (on master).
</issue>
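
The dropdown is built from store paths, so every ancestor directory of each store has to be emitted, not only the immediate parent; a directory holding nothing but subdirectories otherwise vanishes. A small sketch of the difference (paths illustrative):

```python
import pathlib
import posixpath

store = "a/b/c/store.po"

# Old behaviour: only the immediate parent directory is produced.
print(posixpath.dirname(store))  # a/b/c

# Needed behaviour: every ancestor shows up in the menu.
parents = ["%s/" % p for p in pathlib.PurePosixPath(store).parents if str(p) != "."]
print(parents)  # ['a/b/c/', 'a/b/', 'a/']
```
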
<code>
[start of pootle/core/paths.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import posixpath
10
11 from pootle.core.decorators import persistent_property
12 from pootle.core.delegate import revision
13
14
15 class Paths(object):
16
17 def __init__(self, context, q, show_all=False):
18 self.context = context
19 self.q = q
20 self.show_all = show_all
21
22 @property
23 def rev_cache_key(self):
24 return revision.get(
25 self.context.directory.__class__)(
26 self.context.directory).get(key="stats")
27
28 @property
29 def cache_key(self):
30 return (
31 "%s.%s.%s"
32 % (self.q,
33 self.rev_cache_key,
34 self.show_all))
35
36 @property
37 def store_qs(self):
38 raise NotImplementedError
39
40 @property
41 def stores(self):
42 stores = self.store_qs.exclude(obsolete=True)
43 if not self.show_all:
44 stores = stores.exclude(
45 translation_project__project__disabled=True)
46 return stores.exclude(is_template=True).filter(
47 tp_path__contains=self.q).order_by()
48
49 @persistent_property
50 def paths(self):
51 stores = set(
52 st[1:]
53 for st
54 in self.stores.values_list("tp_path", flat=True))
55 dirs = set(
56 ("%s/" % posixpath.dirname(path))
57 for path
58 in stores
59 if (path.count("/") > 1
60 and self.q in path))
61 return sorted(
62 dirs | stores,
63 key=lambda path: (posixpath.dirname(path), posixpath.basename(path)))
64
[end of pootle/core/paths.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/core/paths.py b/pootle/core/paths.py
--- a/pootle/core/paths.py
+++ b/pootle/core/paths.py
@@ -6,7 +6,11 @@
# or later license. See the LICENSE file for a copy of the license and the
# AUTHORS file for copyright and authorship information.
+import pathlib
import posixpath
+from hashlib import md5
+
+from django.utils.encoding import force_bytes
from pootle.core.decorators import persistent_property
from pootle.core.delegate import revision
@@ -29,7 +33,7 @@
def cache_key(self):
return (
"%s.%s.%s"
- % (self.q,
+ % (md5(force_bytes(self.q)).hexdigest(),
self.rev_cache_key,
self.show_all))
@@ -52,12 +56,17 @@
st[1:]
for st
in self.stores.values_list("tp_path", flat=True))
- dirs = set(
- ("%s/" % posixpath.dirname(path))
- for path
- in stores
- if (path.count("/") > 1
- and self.q in path))
+ dirs = set()
+ for store in stores:
+ if posixpath.dirname(store) in dirs:
+ continue
+ dirs = (
+ dirs
+ | (set(
+ "%s/" % str(p)
+ for p
+ in pathlib.PosixPath(store).parents
+ if str(p) != ".")))
return sorted(
dirs | stores,
key=lambda path: (posixpath.dirname(path), posixpath.basename(path)))
| {"golden_diff": "diff --git a/pootle/core/paths.py b/pootle/core/paths.py\n--- a/pootle/core/paths.py\n+++ b/pootle/core/paths.py\n@@ -6,7 +6,11 @@\n # or later license. See the LICENSE file for a copy of the license and the\n # AUTHORS file for copyright and authorship information.\n \n+import pathlib\n import posixpath\n+from hashlib import md5\n+\n+from django.utils.encoding import force_bytes\n \n from pootle.core.decorators import persistent_property\n from pootle.core.delegate import revision\n@@ -29,7 +33,7 @@\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n- % (self.q,\n+ % (md5(force_bytes(self.q)).hexdigest(),\n self.rev_cache_key,\n self.show_all))\n \n@@ -52,12 +56,17 @@\n st[1:]\n for st\n in self.stores.values_list(\"tp_path\", flat=True))\n- dirs = set(\n- (\"%s/\" % posixpath.dirname(path))\n- for path\n- in stores\n- if (path.count(\"/\") > 1\n- and self.q in path))\n+ dirs = set()\n+ for store in stores:\n+ if posixpath.dirname(store) in dirs:\n+ continue\n+ dirs = (\n+ dirs\n+ | (set(\n+ \"%s/\" % str(p)\n+ for p\n+ in pathlib.PosixPath(store).parents\n+ if str(p) != \".\")))\n return sorted(\n dirs | stores,\n key=lambda path: (posixpath.dirname(path), posixpath.basename(path)))\n", "issue": "Paths dropdown missing some parent dirs\nif a dir contains only directories, not any active stores its not shown in menu (on master)\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport posixpath\n\nfrom pootle.core.decorators import persistent_property\nfrom pootle.core.delegate import revision\n\n\nclass Paths(object):\n\n def __init__(self, context, q, show_all=False):\n self.context = context\n self.q = q\n self.show_all = show_all\n\n @property\n def rev_cache_key(self):\n return revision.get(\n self.context.directory.__class__)(\n self.context.directory).get(key=\"stats\")\n\n @property\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n % (self.q,\n self.rev_cache_key,\n self.show_all))\n\n @property\n def store_qs(self):\n raise NotImplementedError\n\n @property\n def stores(self):\n stores = self.store_qs.exclude(obsolete=True)\n if not self.show_all:\n stores = stores.exclude(\n translation_project__project__disabled=True)\n return stores.exclude(is_template=True).filter(\n tp_path__contains=self.q).order_by()\n\n @persistent_property\n def paths(self):\n stores = set(\n st[1:]\n for st\n in self.stores.values_list(\"tp_path\", flat=True))\n dirs = set(\n (\"%s/\" % posixpath.dirname(path))\n for path\n in stores\n if (path.count(\"/\") > 1\n and self.q in path))\n return sorted(\n dirs | stores,\n key=lambda path: (posixpath.dirname(path), posixpath.basename(path)))\n", "path": "pootle/core/paths.py"}]} | 1,079 | 371 |
gh_patches_debug_843 | rasdani/github-patches | git_diff | obspy__obspy-2148 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FDSN routing client has a locale dependency
There's a dummy call to `time.strptime` in the module init that uses locale-specific formatting, which fails under locales that don't use the same names (ie. "Nov" for the 11th month of the year).
```
>>> import locale
>>> locale.setlocale(locale.LC_TIME, ('zh_CN', 'UTF-8'))
'zh_CN.UTF-8'
>>> from obspy.clients.fdsn.routing.routing_client import RoutingClient
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspace/anaconda/envs/django/lib/python2.7/site-packages/obspy/clients/fdsn/__init__.py", line 242, in <module>
from .routing.routing_client import RoutingClient # NOQA
File "/workspace/anaconda/envs/django/lib/python2.7/site-packages/obspy/clients/fdsn/routing/__init__.py", line 25, in <module>
time.strptime("30 Nov 00", "%d %b %y")
File "/workspace/anaconda/envs/django/lib/python2.7/_strptime.py", line 478, in _strptime_time
return _strptime(data_string, format)[0]
File "/workspace/anaconda/envs/django/lib/python2.7/_strptime.py", line 332, in _strptime
(data_string, format))
ValueError: time data u'30 Nov 00' does not match format u'%d %b %y'
```
I believe switching this to an ISO8601-like string would be locale-agnostic:
time.strptime("2000/11/30", "%Y/%m/%d")
</issue>
<code>
[start of obspy/clients/fdsn/routing/__init__.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 obspy.clients.fdsn.routing - Routing services for FDSN web services
5 ===================================================================
6
7 :copyright:
8 The ObsPy Development Team ([email protected])
9 Celso G Reyes, 2017
10 IRIS-DMC
11 :license:
12 GNU Lesser General Public License, Version 3
13 (https://www.gnu.org/copyleft/lesser.html)
14 """
15 from __future__ import (absolute_import, division, print_function,
16 unicode_literals)
17 from future.builtins import * # NOQA
18
19
20 # Extremely ugly way to avoid a race condition the first time strptime is
21 # imported which is not thread safe...
22 #
23 # See https://bugs.python.org/issue7980
24 import time
25 time.strptime("30 Nov 00", "%d %b %y")
26
27
28 if __name__ == '__main__': # pragma: no cover
29 import doctest
30 doctest.testmod(exclude_empty=True)
31
[end of obspy/clients/fdsn/routing/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/obspy/clients/fdsn/routing/__init__.py b/obspy/clients/fdsn/routing/__init__.py
--- a/obspy/clients/fdsn/routing/__init__.py
+++ b/obspy/clients/fdsn/routing/__init__.py
@@ -22,7 +22,7 @@
#
# See https://bugs.python.org/issue7980
import time
-time.strptime("30 Nov 00", "%d %b %y")
+time.strptime("2000/11/30", "%Y/%m/%d")
if __name__ == '__main__': # pragma: no cover
| {"golden_diff": "diff --git a/obspy/clients/fdsn/routing/__init__.py b/obspy/clients/fdsn/routing/__init__.py\n--- a/obspy/clients/fdsn/routing/__init__.py\n+++ b/obspy/clients/fdsn/routing/__init__.py\n@@ -22,7 +22,7 @@\n #\n # See https://bugs.python.org/issue7980\n import time\n-time.strptime(\"30 Nov 00\", \"%d %b %y\")\n+time.strptime(\"2000/11/30\", \"%Y/%m/%d\")\n \n \n if __name__ == '__main__': # pragma: no cover\n", "issue": "FDSN routing client has a locale dependency\nThere's a dummy call to `time.strptime` in the module init that uses locale-specific formatting, which fails under locales that don't use the same names (ie. \"Nov\" for the 11th month of the year).\r\n\r\n```\r\n>>> import locale\r\n>>> locale.setlocale(locale.LC_TIME, ('zh_CN', 'UTF-8'))\r\n'zh_CN.UTF-8'\r\n>>> from obspy.clients.fdsn.routing.routing_client import RoutingClient\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/workspace/anaconda/envs/django/lib/python2.7/site-packages/obspy/clients/fdsn/__init__.py\", line 242, in <module>\r\n from .routing.routing_client import RoutingClient # NOQA\r\n File \"/workspace/anaconda/envs/django/lib/python2.7/site-packages/obspy/clients/fdsn/routing/__init__.py\", line 25, in <module>\r\n time.strptime(\"30 Nov 00\", \"%d %b %y\")\r\n File \"/workspace/anaconda/envs/django/lib/python2.7/_strptime.py\", line 478, in _strptime_time\r\n return _strptime(data_string, format)[0]\r\n File \"/workspace/anaconda/envs/django/lib/python2.7/_strptime.py\", line 332, in _strptime\r\n (data_string, format))\r\nValueError: time data u'30 Nov 00' does not match format u'%d %b %y'\r\n```\r\n\r\nI believe switching this to an ISO8601-like string would be locale-agnostic:\r\n\r\n time.strptime(\"2000/11/30\", \"%Y/%m/%d\")\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nobspy.clients.fdsn.routing - Routing services for FDSN web services\n===================================================================\n\n:copyright:\n The ObsPy Development Team ([email protected])\n Celso G Reyes, 2017\n IRIS-DMC\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\n\n# Extremely ugly way to avoid a race condition the first time strptime is\n# imported which is not thread safe...\n#\n# See https://bugs.python.org/issue7980\nimport time\ntime.strptime(\"30 Nov 00\", \"%d %b %y\")\n\n\nif __name__ == '__main__': # pragma: no cover\n import doctest\n doctest.testmod(exclude_empty=True)\n", "path": "obspy/clients/fdsn/routing/__init__.py"}]} | 1,219 | 151 |
gh_patches_debug_8074 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2456 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider academy is broken
During the global build at 2021-05-21-20-28-08, spider **academy** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/logs/academy.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/academy.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/academy.geojson))
</issue>
<code>
[start of locations/spiders/academy.py]
1 # -*- coding: utf-8 -*-
2 import json
3 import re
4
5 import scrapy
6 from scrapy.utils.gz import gunzip
7
8 from locations.items import GeojsonPointItem
9 from locations.hours import OpeningHours
10
11
12 class AcademySpider(scrapy.Spider):
13 name = "academy"
14 item_attributes = {'brand': 'Academy Sports + Outdoors', 'brand_wikidata': 'Q4671380'}
15 allowed_domains = []
16 start_urls = [
17 'https://www.academy.com/sitemap_store_1.xml.gz',
18 ]
19
20 def parse(self, response):
21 body = gunzip(response.body)
22 body = scrapy.Selector(text=body)
23 body.remove_namespaces()
24 urls = body.xpath('//url/loc/text()').extract()
25 for path in urls:
26 store_url = re.compile(r'http://www.academy.com/shop/storelocator/.+?/.+?/store-\d+')
27 if re.search(store_url, path):
28 yield scrapy.Request(
29 path.strip(),
30 callback=self.parse_store
31 )
32
33 def parse_hours(self, hours):
34 opening_hours = OpeningHours()
35
36 for elem in hours:
37 day, open_time, close_time = re.search(r'([A-Za-z]+)\s([\d:]+)\s-\s([\d:]+)', elem).groups()
38 opening_hours.add_range(day=day[:2], open_time=open_time, close_time=close_time)
39
40 return opening_hours.as_opening_hours()
41
42 def parse_store(self, response):
43 properties = {
44 'ref': re.search(r'.+/(.+?)/?(?:\.html|$)', response.url).group(1),
45 'name': response.xpath('normalize-space(//h1[@itemprop="name"]//text())').extract_first(),
46 'addr_full': response.xpath('normalize-space(//span[@itemprop="streetAddress"]//text())').extract_first(),
47 'city': response.xpath('normalize-space(//span[@itemprop="addressLocality"]//text())').extract_first(),
48 'state': response.xpath('normalize-space(//span[@itemprop="addressRegion"]//text())').extract_first(),
49 'postcode': response.xpath('normalize-space(//span[@itemprop="postalCode"]//text())').extract_first(),
50 'phone': response.xpath('//a[@id="storePhone"]/text()').extract_first(),
51 'website': response.url,
52 'lat': float(response.xpath('//input[@id="params"]/@data-lat').extract_first()),
53 'lon': float(response.xpath('//input[@id="params"]/@data-lng').extract_first()),
54 }
55
56 properties['opening_hours'] = self.parse_hours(
57 response.xpath('//*[@itemprop="openingHours"]/@datetime').extract()
58 )
59
60 yield GeojsonPointItem(**properties)
61
[end of locations/spiders/academy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/academy.py b/locations/spiders/academy.py
--- a/locations/spiders/academy.py
+++ b/locations/spiders/academy.py
@@ -23,7 +23,7 @@
body.remove_namespaces()
urls = body.xpath('//url/loc/text()').extract()
for path in urls:
- store_url = re.compile(r'http://www.academy.com/shop/storelocator/.+?/.+?/store-\d+')
+ store_url = re.compile(r'https://www.academy.com/shop/storelocator/.+?/.+?/store-\d+')
if re.search(store_url, path):
yield scrapy.Request(
path.strip(),
| {"golden_diff": "diff --git a/locations/spiders/academy.py b/locations/spiders/academy.py\n--- a/locations/spiders/academy.py\n+++ b/locations/spiders/academy.py\n@@ -23,7 +23,7 @@\n body.remove_namespaces()\n urls = body.xpath('//url/loc/text()').extract()\n for path in urls:\n- store_url = re.compile(r'http://www.academy.com/shop/storelocator/.+?/.+?/store-\\d+')\n+ store_url = re.compile(r'https://www.academy.com/shop/storelocator/.+?/.+?/store-\\d+')\n if re.search(store_url, path):\n yield scrapy.Request(\n path.strip(),\n", "issue": "Spider academy is broken\nDuring the global build at 2021-05-21-20-28-08, spider **academy** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/logs/academy.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/academy.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/academy.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\n\nimport scrapy\nfrom scrapy.utils.gz import gunzip\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass AcademySpider(scrapy.Spider):\n name = \"academy\"\n item_attributes = {'brand': 'Academy Sports + Outdoors', 'brand_wikidata': 'Q4671380'}\n allowed_domains = []\n start_urls = [\n 'https://www.academy.com/sitemap_store_1.xml.gz',\n ]\n\n def parse(self, response):\n body = gunzip(response.body)\n body = scrapy.Selector(text=body)\n body.remove_namespaces()\n urls = body.xpath('//url/loc/text()').extract()\n for path in urls:\n store_url = re.compile(r'http://www.academy.com/shop/storelocator/.+?/.+?/store-\\d+')\n if re.search(store_url, path):\n yield scrapy.Request(\n path.strip(),\n callback=self.parse_store\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for elem in hours:\n day, open_time, close_time = re.search(r'([A-Za-z]+)\\s([\\d:]+)\\s-\\s([\\d:]+)', elem).groups()\n opening_hours.add_range(day=day[:2], open_time=open_time, close_time=close_time)\n\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n properties = {\n 'ref': re.search(r'.+/(.+?)/?(?:\\.html|$)', response.url).group(1),\n 'name': response.xpath('normalize-space(//h1[@itemprop=\"name\"]//text())').extract_first(),\n 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]//text())').extract_first(),\n 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]//text())').extract_first(),\n 'state': response.xpath('normalize-space(//span[@itemprop=\"addressRegion\"]//text())').extract_first(),\n 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]//text())').extract_first(),\n 'phone': response.xpath('//a[@id=\"storePhone\"]/text()').extract_first(),\n 'website': response.url,\n 'lat': float(response.xpath('//input[@id=\"params\"]/@data-lat').extract_first()),\n 'lon': float(response.xpath('//input[@id=\"params\"]/@data-lng').extract_first()),\n }\n \n properties['opening_hours'] = self.parse_hours(\n response.xpath('//*[@itemprop=\"openingHours\"]/@datetime').extract()\n )\n \n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/academy.py"}]} | 1,437 | 161 |
gh_patches_debug_14582 | rasdani/github-patches | git_diff | liqd__a4-product-139 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Translations incomplete
- partner page
</issue>
<code>
[start of liqd_product/apps/contrib/management/commands/makemessages.py]
1 from os import path
2
3 from django.conf import settings
4 from django.core.management.commands import makemessages
5
6
7 def get_module_dir(name):
8 module = __import__(name)
9 return path.dirname(module.__file__)
10
11
12 class Command(makemessages.Command):
13 msgmerge_options = (
14 makemessages.Command.msgmerge_options + ['--no-fuzzy-matching']
15 )
16
17 def handle(self, *args, **options):
18 if options['domain'] == 'djangojs':
19 if options['extensions'] is None:
20 options['extensions'] = ['js', 'jsx']
21 return super().handle(*args, **options)
22
23 def find_files(self, root):
24 a4js_paths = super().find_files(path.join(
25 settings.BASE_DIR, 'node_modules', 'adhocracy4', 'adhocracy4'
26 ))
27 a4_paths = super().find_files(get_module_dir('adhocracy4'))
28 liqd_product_paths = super().find_files(
29 path.relpath(get_module_dir('liqd_product'))
30 )
31
32 return a4js_paths + a4_paths + liqd_product_paths
33
[end of liqd_product/apps/contrib/management/commands/makemessages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/liqd_product/apps/contrib/management/commands/makemessages.py b/liqd_product/apps/contrib/management/commands/makemessages.py
--- a/liqd_product/apps/contrib/management/commands/makemessages.py
+++ b/liqd_product/apps/contrib/management/commands/makemessages.py
@@ -25,8 +25,15 @@
settings.BASE_DIR, 'node_modules', 'adhocracy4', 'adhocracy4'
))
a4_paths = super().find_files(get_module_dir('adhocracy4'))
+ mbjs_paths = super().find_files(path.join(
+ settings.BASE_DIR, 'node_modules', 'a4-meinberlin', 'meinberlin'
+ ))
+ mb_paths = super().find_files(get_module_dir('meinberlin'))
+
liqd_product_paths = super().find_files(
path.relpath(get_module_dir('liqd_product'))
)
- return a4js_paths + a4_paths + liqd_product_paths
+ return a4js_paths + a4_paths + \
+ mbjs_paths + mb_paths + \
+ liqd_product_paths
| {"golden_diff": "diff --git a/liqd_product/apps/contrib/management/commands/makemessages.py b/liqd_product/apps/contrib/management/commands/makemessages.py\n--- a/liqd_product/apps/contrib/management/commands/makemessages.py\n+++ b/liqd_product/apps/contrib/management/commands/makemessages.py\n@@ -25,8 +25,15 @@\n settings.BASE_DIR, 'node_modules', 'adhocracy4', 'adhocracy4'\n ))\n a4_paths = super().find_files(get_module_dir('adhocracy4'))\n+ mbjs_paths = super().find_files(path.join(\n+ settings.BASE_DIR, 'node_modules', 'a4-meinberlin', 'meinberlin'\n+ ))\n+ mb_paths = super().find_files(get_module_dir('meinberlin'))\n+\n liqd_product_paths = super().find_files(\n path.relpath(get_module_dir('liqd_product'))\n )\n \n- return a4js_paths + a4_paths + liqd_product_paths\n+ return a4js_paths + a4_paths + \\\n+ mbjs_paths + mb_paths + \\\n+ liqd_product_paths\n", "issue": "Translations incomplete\n- partner page\n", "before_files": [{"content": "from os import path\n\nfrom django.conf import settings\nfrom django.core.management.commands import makemessages\n\n\ndef get_module_dir(name):\n module = __import__(name)\n return path.dirname(module.__file__)\n\n\nclass Command(makemessages.Command):\n msgmerge_options = (\n makemessages.Command.msgmerge_options + ['--no-fuzzy-matching']\n )\n\n def handle(self, *args, **options):\n if options['domain'] == 'djangojs':\n if options['extensions'] is None:\n options['extensions'] = ['js', 'jsx']\n return super().handle(*args, **options)\n\n def find_files(self, root):\n a4js_paths = super().find_files(path.join(\n settings.BASE_DIR, 'node_modules', 'adhocracy4', 'adhocracy4'\n ))\n a4_paths = super().find_files(get_module_dir('adhocracy4'))\n liqd_product_paths = super().find_files(\n path.relpath(get_module_dir('liqd_product'))\n )\n\n return a4js_paths + a4_paths + liqd_product_paths\n", "path": "liqd_product/apps/contrib/management/commands/makemessages.py"}]} | 858 | 256 |
gh_patches_debug_8000 | rasdani/github-patches | git_diff | arviz-devs__arviz-203 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Changing the order of plotting and file load seems to cause netcdf errors
For some reason it seems that changing the order of plotting and loading data causes failures, not entirely sure why yet. This is only occurring for me on Ubuntu 16.04 with the latest version of arviz. It does not occur on my osx laptop
Load, Load, Plot, Plot doesn't work. Stack trace attached
[stack_trace.txt](https://github.com/arviz-devs/arviz/files/2349232/stack_trace.txt)
```
import arviz as az
az.style.use('arviz-darkgrid')
non_centered = az.load_arviz_data('non_centered_eight')
centered = az.load_arviz_data('centered_eight')
az.violintraceplot(non_centered, var_names=["mu", "tau"], textsize=8)
az.violintraceplot(centered, var_names=["mu", "tau"], textsize=8)
```
Load, Plot, Load, Plot works
```
import arviz as az
az.style.use('arviz-darkgrid')
non_centered = az.load_arviz_data('non_centered_eight')
az.violintraceplot(non_centered, var_names=["mu", "tau"], textsize=8)
centered = az.load_arviz_data('centered_eight')
az.violintraceplot(centered, var_names=["mu", "tau"], textsize=8)
```
</issue>
<code>
[start of arviz/inference_data.py]
1 """Data structure for using netcdf groups with xarray."""
2 import netCDF4 as nc
3 import xarray as xr
4
5
6 class InferenceData():
7 """Container for accessing netCDF files using xarray."""
8
9 def __init__(self, *_, **kwargs):
10 """Initialize InferenceData object from keyword xarray datasets.
11
12 Examples
13 --------
14 InferenceData(posterior=posterior, prior=prior)
15
16 Parameters
17 ----------
18 kwargs :
19 Keyword arguments of xarray datasets
20 """
21 self._groups = []
22 for key, dataset in kwargs.items():
23 if dataset is None:
24 continue
25 elif not isinstance(dataset, xr.Dataset):
26 raise ValueError('Arguments to InferenceData must be xarray Datasets '
27 '(argument "{}" was type "{}")'.format(key, type(dataset)))
28 setattr(self, key, dataset)
29 self._groups.append(key)
30
31 def __repr__(self):
32 """Make string representation of object."""
33 return 'Inference data with groups:\n\t> {options}'.format(
34 options='\n\t> '.join(self._groups)
35 )
36
37 @staticmethod
38 def from_netcdf(filename):
39 """Initialize object from a netcdf file.
40
41 Expects that the file will have groups, each of which can be loaded by xarray.
42
43 Parameters
44 ----------
45 filename : str
46 location of netcdf file
47
48 Returns
49 -------
50 InferenceData object
51 """
52 groups = {}
53 for group in nc.Dataset(filename, mode='r').groups:
54 groups[group] = xr.open_dataset(filename, group=group, autoclose=True)
55 return InferenceData(**groups)
56
57 def to_netcdf(self, filename):
58 """Write InferenceData to file using netcdf4.
59
60 Parameters
61 ----------
62 filename : str
63 Location to write to
64
65 Returns
66 -------
67 str
68 Location of netcdf file
69 """
70 mode = 'w' # overwrite first, then append
71 for group in self._groups:
72 data = getattr(self, group)
73 data.to_netcdf(filename, mode=mode, group=group)
74 data.close()
75 mode = 'a'
76 return filename
77
[end of arviz/inference_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/arviz/inference_data.py b/arviz/inference_data.py
--- a/arviz/inference_data.py
+++ b/arviz/inference_data.py
@@ -50,8 +50,12 @@
InferenceData object
"""
groups = {}
- for group in nc.Dataset(filename, mode='r').groups:
- groups[group] = xr.open_dataset(filename, group=group, autoclose=True)
+ with nc.Dataset(filename, mode='r') as data:
+ data_groups = list(data.groups)
+
+ for group in data_groups:
+ with xr.open_dataset(filename, group=group) as data:
+ groups[group] = data
return InferenceData(**groups)
def to_netcdf(self, filename):
| {"golden_diff": "diff --git a/arviz/inference_data.py b/arviz/inference_data.py\n--- a/arviz/inference_data.py\n+++ b/arviz/inference_data.py\n@@ -50,8 +50,12 @@\n InferenceData object\n \"\"\"\n groups = {}\n- for group in nc.Dataset(filename, mode='r').groups:\n- groups[group] = xr.open_dataset(filename, group=group, autoclose=True)\n+ with nc.Dataset(filename, mode='r') as data:\n+ data_groups = list(data.groups)\n+\n+ for group in data_groups:\n+ with xr.open_dataset(filename, group=group) as data:\n+ groups[group] = data\n return InferenceData(**groups)\n \n def to_netcdf(self, filename):\n", "issue": "Changing the order of plotting and file load seems to cause netcdf errors\nFor some reason it seems that changing the order of plotting and loading data causes failures, not entirely sure why yet. This is only occurring for me on Ubuntu 16.04 with the latest version of arviz. It does not occur on my osx laptop\r\n\r\n\r\nLoad, Load, Plot, Plot doesn't work. Stack trace attached\r\n[stack_trace.txt](https://github.com/arviz-devs/arviz/files/2349232/stack_trace.txt)\r\n\r\n```\r\nimport arviz as az\r\naz.style.use('arviz-darkgrid')\r\n\r\nnon_centered = az.load_arviz_data('non_centered_eight')\r\ncentered = az.load_arviz_data('centered_eight')\r\naz.violintraceplot(non_centered, var_names=[\"mu\", \"tau\"], textsize=8)\r\naz.violintraceplot(centered, var_names=[\"mu\", \"tau\"], textsize=8)\r\n```\r\n\r\nLoad, Plot, Load, Plot works\r\n\r\n```\r\nimport arviz as az\r\naz.style.use('arviz-darkgrid')\r\n\r\nnon_centered = az.load_arviz_data('non_centered_eight')\r\naz.violintraceplot(non_centered, var_names=[\"mu\", \"tau\"], textsize=8)\r\n\r\ncentered = az.load_arviz_data('centered_eight')\r\naz.violintraceplot(centered, var_names=[\"mu\", \"tau\"], textsize=8)\r\n```\n", "before_files": [{"content": "\"\"\"Data structure for using netcdf groups with xarray.\"\"\"\nimport netCDF4 as nc\nimport xarray as xr\n\n\nclass InferenceData():\n \"\"\"Container for accessing netCDF files using xarray.\"\"\"\n\n def __init__(self, *_, **kwargs):\n \"\"\"Initialize InferenceData object from keyword xarray datasets.\n\n Examples\n --------\n InferenceData(posterior=posterior, prior=prior)\n\n Parameters\n ----------\n kwargs :\n Keyword arguments of xarray datasets\n \"\"\"\n self._groups = []\n for key, dataset in kwargs.items():\n if dataset is None:\n continue\n elif not isinstance(dataset, xr.Dataset):\n raise ValueError('Arguments to InferenceData must be xarray Datasets '\n '(argument \"{}\" was type \"{}\")'.format(key, type(dataset)))\n setattr(self, key, dataset)\n self._groups.append(key)\n\n def __repr__(self):\n \"\"\"Make string representation of object.\"\"\"\n return 'Inference data with groups:\\n\\t> {options}'.format(\n options='\\n\\t> '.join(self._groups)\n )\n\n @staticmethod\n def from_netcdf(filename):\n \"\"\"Initialize object from a netcdf file.\n\n Expects that the file will have groups, each of which can be loaded by xarray.\n\n Parameters\n ----------\n filename : str\n location of netcdf file\n\n Returns\n -------\n InferenceData object\n \"\"\"\n groups = {}\n for group in nc.Dataset(filename, mode='r').groups:\n groups[group] = xr.open_dataset(filename, group=group, autoclose=True)\n return InferenceData(**groups)\n\n def to_netcdf(self, filename):\n \"\"\"Write InferenceData to file using netcdf4.\n\n Parameters\n ----------\n filename : str\n Location to write to\n\n Returns\n -------\n str\n Location of netcdf file\n \"\"\"\n mode = 'w' # overwrite first, then 
append\n for group in self._groups:\n data = getattr(self, group)\n data.to_netcdf(filename, mode=mode, group=group)\n data.close()\n mode = 'a'\n return filename\n", "path": "arviz/inference_data.py"}]} | 1,468 | 170 |
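The corrected `from_netcdf` leans on the fact that both `netCDF4.Dataset` and `xarray.open_dataset` are context managers, so each file handle is closed deterministically instead of lingering (the lingering handles are what made the failure depend on load/plot ordering). The pattern in isolation, assuming a local `trace.nc` file that contains groups:

```python
import netCDF4 as nc
import xarray as xr

filename = "trace.nc"  # assumption: an existing netCDF file with groups

with nc.Dataset(filename, mode="r") as data:
    group_names = list(data.groups)  # handle is closed when the block exits

groups = {}
for group in group_names:
    with xr.open_dataset(filename, group=group) as data:
        groups[group] = data  # mirrors the merged fix: open, read, close per group
```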
gh_patches_debug_6914 | rasdani/github-patches | git_diff | saulpw__visidata-1717 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create New Table from Clipboard
It would be nice to have a command which can create a new table directly from the clipboard.
For example, suppose I have tab separated data in my clipboard.
I would like to be able to execute a visidata command to load a table from the clipboard.
The command might ask me what filetype I am loading (tsv in this case).
Then it would load the data directly from the clipboard.
This would be particularly handy for quick file format conversions. e.g. if I copy a table from OneNote and want to convert it to a Markdown table, I could just import it into Visidata form the clipboard, and copy it back to the clipboard as a markdown table.
</issue>
<code>
[start of visidata/features/open_syspaste.py]
1 from visidata import vd, BaseSheet, Path
2
3
4 @BaseSheet.api
5 def open_syspaste(sheet, filetype='tsv'):
6 import io
7
8 v = vd.sysclipValue().strip() or vd.fail('nothing to open')
9
10 p = Path('syspaste'+'.'+filetype, fp=io.BytesIO(v.encode('utf-8')))
11 return vd.openSource(p, filetype=filetype)
12
13
14 BaseSheet.addCommand('', 'open-syspaste', 'vd.push(open_syspaste())', 'open clipboard as tsv')
15
[end of visidata/features/open_syspaste.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/visidata/features/open_syspaste.py b/visidata/features/open_syspaste.py
--- a/visidata/features/open_syspaste.py
+++ b/visidata/features/open_syspaste.py
@@ -1,3 +1,7 @@
+'''
+Load new table from system clipboard
+'''
+
from visidata import vd, BaseSheet, Path
@@ -11,4 +15,4 @@
return vd.openSource(p, filetype=filetype)
-BaseSheet.addCommand('', 'open-syspaste', 'vd.push(open_syspaste())', 'open clipboard as tsv')
+BaseSheet.addCommand('gShift+P', 'open-syspaste', 'vd.push(open_syspaste(filetype=vd.input("paste as filetype: ", value="tsv")))', 'open clipboard as filetype')
| {"golden_diff": "diff --git a/visidata/features/open_syspaste.py b/visidata/features/open_syspaste.py\n--- a/visidata/features/open_syspaste.py\n+++ b/visidata/features/open_syspaste.py\n@@ -1,3 +1,7 @@\n+'''\n+Load new table from system clipboard\n+'''\n+\n from visidata import vd, BaseSheet, Path\n \n \n@@ -11,4 +15,4 @@\n return vd.openSource(p, filetype=filetype)\n \n \n-BaseSheet.addCommand('', 'open-syspaste', 'vd.push(open_syspaste())', 'open clipboard as tsv')\n+BaseSheet.addCommand('gShift+P', 'open-syspaste', 'vd.push(open_syspaste(filetype=vd.input(\"paste as filetype: \", value=\"tsv\")))', 'open clipboard as filetype')\n", "issue": "Create New Table from Clipboard\nIt would be nice to have a command which can create a new table directly from the clipboard.\r\n\r\nFor example, suppose I have tab separated data in my clipboard.\r\nI would like to be able to execute a visidata command to load a table from the clipboard.\r\nThe command might ask me what filetype I am loading (tsv in this case).\r\nThen it would load the data directly from the clipboard.\r\n\r\nThis would be particularly handy for quick file format conversions. e.g. if I copy a table from OneNote and want to convert it to a Markdown table, I could just import it into Visidata form the clipboard, and copy it back to the clipboard as a markdown table.\n", "before_files": [{"content": "from visidata import vd, BaseSheet, Path\n\n\[email protected]\ndef open_syspaste(sheet, filetype='tsv'):\n import io\n\n v = vd.sysclipValue().strip() or vd.fail('nothing to open')\n\n p = Path('syspaste'+'.'+filetype, fp=io.BytesIO(v.encode('utf-8')))\n return vd.openSource(p, filetype=filetype)\n\n\nBaseSheet.addCommand('', 'open-syspaste', 'vd.push(open_syspaste())', 'open clipboard as tsv')\n", "path": "visidata/features/open_syspaste.py"}]} | 821 | 177 |
gh_patches_debug_9866 | rasdani/github-patches | git_diff | microsoft__ptvsd-882 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pyfile adds an additional new line in every line of code
## Environment data
- PTVSD version: master
- OS and version: any
- Python version (& distribution if applicable, e.g. Anaconda): 3.6
- Using VS Code or Visual Studio:
## Actual behavior
code:
```python
@pyfile
def foo():
print('one')
print('two')
```
The file generated by pyfile, `foo.py`:
```python
print('one')

print('two')

```
## Expected behavior
The file generated by pyfile, `foo.py`:
```python
print('one')
print('two')
```
Having the extra blank lines makes it confusing to set breakpoints in tests.
</issue>
<code>
[start of pytests/conftest.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 from __future__ import print_function, with_statement, absolute_import
6
7 import inspect
8 import pytest
9 import threading
10 import types
11
12 from .helpers.session import DebugSession
13
14
15 @pytest.fixture
16 def daemon():
17 """Provides a factory function for daemon threads. The returned thread is
18 started immediately, and it must not be alive by the time the test returns.
19 """
20
21 daemons = []
22
23 def factory(func, name_suffix=''):
24 name = func.__name__ + name_suffix
25 thread = threading.Thread(target=func, name=name)
26 thread.daemon = True
27 daemons.append(thread)
28 thread.start()
29 return thread
30
31 yield factory
32
33 for thread in daemons:
34 assert not thread.is_alive()
35
36
37 @pytest.fixture
38 def pyfile(request, tmpdir):
39 """A fixture providing a factory function that generates .py files.
40
41 The returned factory takes a single function with an empty argument list,
42 generates a temporary file that contains the code corresponding to the
43 function body, and returns the full path to the generated file. Idiomatic
44 use is as a decorator, e.g.:
45
46 @pyfile
47 def script_file():
48 print('fizz')
49 print('buzz')
50
51 will produce a temporary file named script_file.py containing:
52
53 print('fizz')
54 print('buzz')
55
56 and the variable script_file will contain the path to that file.
57
58 In order for the factory to be able to extract the function body properly,
59 function header ("def") must all be on a single line, with nothing after
60 the colon but whitespace.
61 """
62
63 def factory(source):
64 assert isinstance(source, types.FunctionType)
65 name = source.__name__
66 source, _ = inspect.getsourcelines(source)
67
68 # First, find the "def" line.
69 def_lineno = 0
70 for line in source:
71 line = line.strip()
72 if line.startswith('def') and line.endswith(':'):
73 break
74 def_lineno += 1
75 else:
76 raise ValueError('Failed to locate function header.')
77
78 # Remove everything up to and including "def".
79 source = source[def_lineno + 1:]
80 assert source
81
82 # Now we need to adjust indentation. Compute how much the first line of
83 # the body is indented by, then dedent all lines by that amount.
84 line = source[0]
85 indent = len(line) - len(line.lstrip())
86 source = [line[indent:] for line in source]
87 source = '\n'.join(source)
88
89 tmpfile = tmpdir.join(name + '.py')
90 assert not tmpfile.check()
91 tmpfile.write(source)
92 return tmpfile.strpath
93
94 return factory
95
96
97 @pytest.fixture(params=[
98 'launch', 'attach_socket' # 'attach_pid'
99 ])
100 def debug_session(request):
101 session = DebugSession(request.param)
102 yield session
103 try:
104 session.wait_for_exit()
105 finally:
106 session.stop()
107
108
109
[end of pytests/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytests/conftest.py b/pytests/conftest.py
--- a/pytests/conftest.py
+++ b/pytests/conftest.py
@@ -84,7 +84,7 @@
line = source[0]
indent = len(line) - len(line.lstrip())
source = [line[indent:] for line in source]
- source = '\n'.join(source)
+ source = ''.join(source)
tmpfile = tmpdir.join(name + '.py')
assert not tmpfile.check()
@@ -104,5 +104,3 @@
session.wait_for_exit()
finally:
session.stop()
-
-
| {"golden_diff": "diff --git a/pytests/conftest.py b/pytests/conftest.py\n--- a/pytests/conftest.py\n+++ b/pytests/conftest.py\n@@ -84,7 +84,7 @@\n line = source[0]\n indent = len(line) - len(line.lstrip())\n source = [line[indent:] for line in source]\n- source = '\\n'.join(source)\n+ source = ''.join(source)\n \n tmpfile = tmpdir.join(name + '.py')\n assert not tmpfile.check()\n@@ -104,5 +104,3 @@\n session.wait_for_exit()\n finally:\n session.stop()\n-\n-\n", "issue": "pyfile adds an additional new line in every line of code\n## Environment data\r\n\r\n- PTVSD version: master\r\n- OS and version: any\r\n- Python version (& distribution if applicable, e.g. Anaconda): 3.6\r\n- Using VS Code or Visual Studio:\r\n\r\n## Actual behavior\r\n\r\ncode:\r\n```python\r\n@pyfile\r\ndef foo():\r\n print('one')\r\n print('two')\r\n```\r\nThe file generated by pyfile, `foo.py`:\r\n```python\r\nprint('one')\r\n\r\nprint('two')\r\n\r\n```\r\n\r\n## Expected behavior\r\n\r\nThe file generated by pyfile, `foo.py`:\r\n```python\r\nprint('one')\r\nprint('two')\r\n```\r\n\r\nHaving the extra blank lines makes it confusing to set breakpoints in tests.\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nfrom __future__ import print_function, with_statement, absolute_import\n\nimport inspect\nimport pytest\nimport threading\nimport types\n\nfrom .helpers.session import DebugSession\n\n\[email protected]\ndef daemon():\n \"\"\"Provides a factory function for daemon threads. The returned thread is\n started immediately, and it must not be alive by the time the test returns.\n \"\"\"\n\n daemons = []\n\n def factory(func, name_suffix=''):\n name = func.__name__ + name_suffix\n thread = threading.Thread(target=func, name=name)\n thread.daemon = True\n daemons.append(thread)\n thread.start()\n return thread\n\n yield factory\n\n for thread in daemons:\n assert not thread.is_alive()\n\n\[email protected]\ndef pyfile(request, tmpdir):\n \"\"\"A fixture providing a factory function that generates .py files.\n\n The returned factory takes a single function with an empty argument list,\n generates a temporary file that contains the code corresponding to the\n function body, and returns the full path to the generated file. Idiomatic\n use is as a decorator, e.g.:\n\n @pyfile\n def script_file():\n print('fizz')\n print('buzz')\n\n will produce a temporary file named script_file.py containing:\n\n print('fizz')\n print('buzz')\n\n and the variable script_file will contain the path to that file.\n\n In order for the factory to be able to extract the function body properly,\n function header (\"def\") must all be on a single line, with nothing after\n the colon but whitespace.\n \"\"\"\n\n def factory(source):\n assert isinstance(source, types.FunctionType)\n name = source.__name__\n source, _ = inspect.getsourcelines(source)\n\n # First, find the \"def\" line.\n def_lineno = 0\n for line in source:\n line = line.strip()\n if line.startswith('def') and line.endswith(':'):\n break\n def_lineno += 1\n else:\n raise ValueError('Failed to locate function header.')\n\n # Remove everything up to and including \"def\".\n source = source[def_lineno + 1:]\n assert source\n\n # Now we need to adjust indentation. 
Compute how much the first line of\n # the body is indented by, then dedent all lines by that amount.\n line = source[0]\n indent = len(line) - len(line.lstrip())\n source = [line[indent:] for line in source]\n source = '\\n'.join(source)\n\n tmpfile = tmpdir.join(name + '.py')\n assert not tmpfile.check()\n tmpfile.write(source)\n return tmpfile.strpath\n\n return factory\n\n\[email protected](params=[\n 'launch', 'attach_socket' # 'attach_pid'\n])\ndef debug_session(request):\n session = DebugSession(request.param)\n yield session\n try:\n session.wait_for_exit()\n finally:\n session.stop()\n\n\n", "path": "pytests/conftest.py"}]} | 1,595 | 148 |
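The root cause in the pyfile fixture is that `inspect.getsourcelines` returns each line *with* its trailing newline, so `'\n'.join(...)` inserts a second newline between every pair of lines. A standalone demonstration (run as a script so `inspect` can locate the source file):

```python
import inspect


def sample():
    print('one')
    print('two')


source, _ = inspect.getsourcelines(sample)
print(repr(source[1]))  # "    print('one')\n" -- newline already present

body = [line[4:] for line in source[1:]]  # drop the "def" line, dedent 4 spaces
print(repr('\n'.join(body)))  # doubled: "print('one')\n\nprint('two')\n"
print(repr(''.join(body)))    # correct: "print('one')\nprint('two')\n"
```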
gh_patches_debug_8312 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-672 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: The location specified in bundle-manifest (manifest.yml) are invalid
### Describe the bug
A build manifest is bundled inside the OpenSearch bundle tarball (manifest.yml) which contains all the information about the components used to build the bundle. The bundle manifest contains a key `location` whose value is invalid for all components.
```
build:
architecture: x64
id: '317'
location: https://ci.opensearch.org/ci/bundles/1.1.0/317/opensearch-1.1.0-linux-x64.tar.gz
name: OpenSearch
version: 1.1.0
components:
- commit_id: 15e9f137622d878b79103df8f82d78d782b686a1
location: https://ci.opensearch.org/ci/builds/1.1.0/317/bundle/opensearch-min-1.1.0-linux-x64.tar.gz
name: OpenSearch
ref: '1.1'
repository: https://github.com/opensearch-project/OpenSearch.git
```
### To reproduce
Download the bundle. Untar it and see the manifest.yml
Try accessing the location url to download individual component. It will give `Access Denied` error
### Expected behavior
The URLs should be valid. Each component should be downloadable from the given location url
### Screenshots
_No response_
### Host / Environment
_No response_
### Additional context
_No response_
### Relevant log output
_No response_
</issue>
<code>
[start of bundle-workflow/src/assemble_workflow/bundle_recorder.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import os
8 from urllib.parse import urljoin
9
10 from manifests.bundle_manifest import BundleManifest
11
12
13 class BundleRecorder:
14 def __init__(self, build, output_dir, artifacts_dir):
15 self.output_dir = output_dir
16 self.build_id = build.id
17 self.public_url = os.getenv("PUBLIC_ARTIFACT_URL", None)
18 self.version = build.version
19 self.tar_name = self.__get_tar_name(build)
20 self.artifacts_dir = artifacts_dir
21 self.bundle_manifest = self.BundleManifestBuilder(
22 build.id,
23 build.name,
24 build.version,
25 build.architecture,
26 self.__get_tar_location(),
27 )
28
29 def __get_tar_name(self, build):
30 parts = [build.name.lower(), build.version, "linux", build.architecture]
31 return "-".join(parts) + ".tar.gz"
32
33 def __get_public_url_path(self, folder, rel_path):
34 path = "{}/{}/{}/{}".format(folder, self.version, self.build_id, rel_path)
35 return urljoin(self.public_url, path)
36
37 def __get_location(self, folder_name, rel_path, abs_path):
38 if self.public_url:
39 return self.__get_public_url_path(folder_name, rel_path)
40 return abs_path
41
42 # Assembled bundles are expected to be served from a separate "bundles" folder
43 # Example: https://artifacts.opensearch.org/bundles/1.0.0/<build-id
44 def __get_tar_location(self):
45 return self.__get_location(
46 "bundles", self.tar_name, os.path.join(self.output_dir, self.tar_name)
47 )
48
49 # Build artifacts are expected to be served from a "builds" folder
50 # Example: https://artifacts.opensearch.org/builds/1.0.0/<build-id>
51 def __get_component_location(self, component_rel_path):
52 abs_path = os.path.join(self.artifacts_dir, component_rel_path)
53 return self.__get_location("builds", component_rel_path, abs_path)
54
55 def record_component(self, component, rel_path):
56 self.bundle_manifest.append_component(
57 component.name,
58 component.repository,
59 component.ref,
60 component.commit_id,
61 self.__get_component_location(rel_path),
62 )
63
64 def get_manifest(self):
65 return self.bundle_manifest.to_manifest()
66
67 def write_manifest(self, folder):
68 manifest_path = os.path.join(folder, "manifest.yml")
69 self.get_manifest().to_file(manifest_path)
70
71 class BundleManifestBuilder:
72 def __init__(self, build_id, name, version, arch, location):
73 self.data = {}
74 self.data["build"] = {}
75 self.data["build"]["id"] = build_id
76 self.data["build"]["name"] = name
77 self.data["build"]["version"] = str(version)
78 self.data["build"]["architecture"] = arch
79 self.data["build"]["location"] = location
80 self.data["schema-version"] = "1.0"
81 # We need to store components as a hash so that we can append artifacts by component name
82 # When we convert to a BundleManifest this will get converted back into a list
83 self.data["components"] = []
84
85 def append_component(self, name, repository_url, ref, commit_id, location):
86 component = {
87 "name": name,
88 "repository": repository_url,
89 "ref": ref,
90 "commit_id": commit_id,
91 "location": location,
92 }
93 self.data["components"].append(component)
94
95 def to_manifest(self):
96 return BundleManifest(self.data)
97
[end of bundle-workflow/src/assemble_workflow/bundle_recorder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bundle-workflow/src/assemble_workflow/bundle_recorder.py b/bundle-workflow/src/assemble_workflow/bundle_recorder.py
--- a/bundle-workflow/src/assemble_workflow/bundle_recorder.py
+++ b/bundle-workflow/src/assemble_workflow/bundle_recorder.py
@@ -31,8 +31,8 @@
return "-".join(parts) + ".tar.gz"
def __get_public_url_path(self, folder, rel_path):
- path = "{}/{}/{}/{}".format(folder, self.version, self.build_id, rel_path)
- return urljoin(self.public_url, path)
+ path = "/".join((folder, self.version, self.build_id, rel_path))
+ return urljoin(self.public_url + "/", path)
def __get_location(self, folder_name, rel_path, abs_path):
if self.public_url:
| {"golden_diff": "diff --git a/bundle-workflow/src/assemble_workflow/bundle_recorder.py b/bundle-workflow/src/assemble_workflow/bundle_recorder.py\n--- a/bundle-workflow/src/assemble_workflow/bundle_recorder.py\n+++ b/bundle-workflow/src/assemble_workflow/bundle_recorder.py\n@@ -31,8 +31,8 @@\n return \"-\".join(parts) + \".tar.gz\"\n \n def __get_public_url_path(self, folder, rel_path):\n- path = \"{}/{}/{}/{}\".format(folder, self.version, self.build_id, rel_path)\n- return urljoin(self.public_url, path)\n+ path = \"/\".join((folder, self.version, self.build_id, rel_path))\n+ return urljoin(self.public_url + \"/\", path)\n \n def __get_location(self, folder_name, rel_path, abs_path):\n if self.public_url:\n", "issue": "[Bug]: The location specified in bundle-manifest (manifest.yml) are invalid\n### Describe the bug\n\nA build manifest is bundled inside the OpenSearch bundle tarball (manifest.yml) which contains all the information about the components used to build the bundle. The bundle manifest contains a key `location` whose value is invalid for all components.\r\n\r\n```\r\nbuild:\r\n architecture: x64\r\n id: '317'\r\n location: https://ci.opensearch.org/ci/bundles/1.1.0/317/opensearch-1.1.0-linux-x64.tar.gz\r\n name: OpenSearch\r\n version: 1.1.0\r\ncomponents:\r\n- commit_id: 15e9f137622d878b79103df8f82d78d782b686a1\r\n location: https://ci.opensearch.org/ci/builds/1.1.0/317/bundle/opensearch-min-1.1.0-linux-x64.tar.gz\r\n name: OpenSearch\r\n ref: '1.1'\r\n repository: https://github.com/opensearch-project/OpenSearch.git\r\n```\n\n### To reproduce\n\nDownload the bundle. Untar it and see the manifest.yml\r\n\r\nTry accessing the location url to download individual component. It will give `Access Denied` error\n\n### Expected behavior\n\nThe URLs should be valid. 
Each component should be downloadable from the given location url\n\n### Screenshots\n\n_No response_\n\n### Host / Environment\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Relevant log output\n\n_No response_\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\nfrom urllib.parse import urljoin\n\nfrom manifests.bundle_manifest import BundleManifest\n\n\nclass BundleRecorder:\n def __init__(self, build, output_dir, artifacts_dir):\n self.output_dir = output_dir\n self.build_id = build.id\n self.public_url = os.getenv(\"PUBLIC_ARTIFACT_URL\", None)\n self.version = build.version\n self.tar_name = self.__get_tar_name(build)\n self.artifacts_dir = artifacts_dir\n self.bundle_manifest = self.BundleManifestBuilder(\n build.id,\n build.name,\n build.version,\n build.architecture,\n self.__get_tar_location(),\n )\n\n def __get_tar_name(self, build):\n parts = [build.name.lower(), build.version, \"linux\", build.architecture]\n return \"-\".join(parts) + \".tar.gz\"\n\n def __get_public_url_path(self, folder, rel_path):\n path = \"{}/{}/{}/{}\".format(folder, self.version, self.build_id, rel_path)\n return urljoin(self.public_url, path)\n\n def __get_location(self, folder_name, rel_path, abs_path):\n if self.public_url:\n return self.__get_public_url_path(folder_name, rel_path)\n return abs_path\n\n # Assembled bundles are expected to be served from a separate \"bundles\" folder\n # Example: https://artifacts.opensearch.org/bundles/1.0.0/<build-id\n def __get_tar_location(self):\n return self.__get_location(\n \"bundles\", self.tar_name, os.path.join(self.output_dir, self.tar_name)\n )\n\n # Build artifacts are expected to be served from a \"builds\" folder\n # Example: https://artifacts.opensearch.org/builds/1.0.0/<build-id>\n def __get_component_location(self, component_rel_path):\n abs_path = os.path.join(self.artifacts_dir, component_rel_path)\n return self.__get_location(\"builds\", component_rel_path, abs_path)\n\n def record_component(self, component, rel_path):\n self.bundle_manifest.append_component(\n component.name,\n component.repository,\n component.ref,\n component.commit_id,\n self.__get_component_location(rel_path),\n )\n\n def get_manifest(self):\n return self.bundle_manifest.to_manifest()\n\n def write_manifest(self, folder):\n manifest_path = os.path.join(folder, \"manifest.yml\")\n self.get_manifest().to_file(manifest_path)\n\n class BundleManifestBuilder:\n def __init__(self, build_id, name, version, arch, location):\n self.data = {}\n self.data[\"build\"] = {}\n self.data[\"build\"][\"id\"] = build_id\n self.data[\"build\"][\"name\"] = name\n self.data[\"build\"][\"version\"] = str(version)\n self.data[\"build\"][\"architecture\"] = arch\n self.data[\"build\"][\"location\"] = location\n self.data[\"schema-version\"] = \"1.0\"\n # We need to store components as a hash so that we can append artifacts by component name\n # When we convert to a BundleManifest this will get converted back into a list\n self.data[\"components\"] = []\n\n def append_component(self, name, repository_url, ref, commit_id, location):\n component = {\n \"name\": name,\n \"repository\": repository_url,\n \"ref\": ref,\n \"commit_id\": commit_id,\n \"location\": location,\n }\n self.data[\"components\"].append(component)\n\n def to_manifest(self):\n return BundleManifest(self.data)\n", "path": 
"bundle-workflow/src/assemble_workflow/bundle_recorder.py"}]} | 1,893 | 192 |
gh_patches_debug_28949 | rasdani/github-patches | git_diff | google__mobly-417 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`SnippetEvent` should be loggable
Right now logging event object directly does not show the content of the event, which makes debugging difficult.
`logging.info(event)` should log the full content of the event.
</issue>
<code>
[start of setup.py]
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import platform
16 import setuptools
17 from setuptools.command import test
18 import sys
19
20 install_requires = [
21 'future',
22 # mock-1.0.1 is the last version compatible with setuptools <17.1,
23 # which is what comes with Ubuntu 14.04 LTS.
24 'mock<=1.0.1',
25 'portpicker',
26 'psutil',
27 'pytz',
28 'pyyaml',
29 'timeout_decorator',
30 'pyserial'
31 ]
32
33 if sys.version_info < (3, ):
34 install_requires.extend([
35 'enum34',
36 # "futures" is needed for py2 compatibility and it only works in 2.7
37 'futures',
38 ])
39
40 if platform.system() == 'Windows':
41 install_requires.append('pywin32')
42
43
44 class PyTest(test.test):
45 """Class used to execute unit tests using PyTest. This allows us to execute
46 unit tests without having to install the package.
47 """
48
49 def finalize_options(self):
50 test.test.finalize_options(self)
51 self.test_args = ['-x', "tests"]
52 self.test_suite = True
53
54 def run_tests(self):
55 import pytest
56 errno = pytest.main(self.test_args)
57 sys.exit(errno)
58
59
60 def main():
61 setuptools.setup(
62 name='mobly',
63 version='1.7.1',
64 maintainer = 'Ang Li',
65 maintainer_email = '[email protected]',
66 description='Automation framework for special end-to-end test cases',
67 license='Apache2.0',
68 url = 'https://github.com/google/mobly',
69 download_url = 'https://github.com/google/mobly/tarball/1.7.1',
70 packages=setuptools.find_packages(),
71 include_package_data=False,
72 scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],
73 tests_require=['pytest'],
74 install_requires=install_requires,
75 cmdclass={'test': PyTest}, )
76
77
78 if __name__ == '__main__':
79 main()
80
[end of setup.py]
[start of mobly/controllers/android_device_lib/snippet_event.py]
1 # Copyright 2017 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 def from_dict(event_dict):
17 """Create a SnippetEvent object from a dictionary.
18
19 Args:
20 event_dict: a dictionary representing an event.
21
22 Returns:
23 A SnippetEvent object.
24 """
25 return SnippetEvent(
26 callback_id=event_dict['callbackId'],
27 name=event_dict['name'],
28 creation_time=event_dict['time'],
29 data=event_dict['data'])
30
31
32 class SnippetEvent(object):
33 """The class that represents callback events for mobly snippet library.
34
35 Attributes:
36 callback_id: string, the callback ID associated with the event.
37 name: string, the name of the event.
38 creation_time: int, the epoch time when the event is created on the
39 Rpc server side.
40 data: dictionary, the data held by the event. Can be None.
41 """
42
43 def __init__(self, callback_id, name, creation_time, data):
44 self.callback_id = callback_id
45 self.name = name
46 self.creation_time = creation_time
47 self.data = data
48
[end of mobly/controllers/android_device_lib/snippet_event.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mobly/controllers/android_device_lib/snippet_event.py b/mobly/controllers/android_device_lib/snippet_event.py
--- a/mobly/controllers/android_device_lib/snippet_event.py
+++ b/mobly/controllers/android_device_lib/snippet_event.py
@@ -3,9 +3,9 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
-#
+#
# http://www.apache.org/licenses/LICENSE-2.0
-#
+#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -45,3 +45,8 @@
self.name = name
self.creation_time = creation_time
self.data = data
+
+ def __repr__(self):
+ return ('SnippetEvent(callback_id: %s, name: %s, creation_time: %s, '
+ 'data: %s)') % (self.callback_id, self.name,
+ self.creation_time, self.data)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -70,7 +70,7 @@
packages=setuptools.find_packages(),
include_package_data=False,
scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],
- tests_require=['pytest'],
+ tests_require=['pytest', 'testfixtures'],
install_requires=install_requires,
cmdclass={'test': PyTest}, )
| {"golden_diff": "diff --git a/mobly/controllers/android_device_lib/snippet_event.py b/mobly/controllers/android_device_lib/snippet_event.py\n--- a/mobly/controllers/android_device_lib/snippet_event.py\n+++ b/mobly/controllers/android_device_lib/snippet_event.py\n@@ -3,9 +3,9 @@\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n # You may obtain a copy of the License at\n-# \n+#\n # http://www.apache.org/licenses/LICENSE-2.0\n-# \n+#\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an \"AS IS\" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n@@ -45,3 +45,8 @@\n self.name = name\n self.creation_time = creation_time\n self.data = data\n+\n+ def __repr__(self):\n+ return ('SnippetEvent(callback_id: %s, name: %s, creation_time: %s, '\n+ 'data: %s)') % (self.callback_id, self.name,\n+ self.creation_time, self.data)\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -70,7 +70,7 @@\n packages=setuptools.find_packages(),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n- tests_require=['pytest'],\n+ tests_require=['pytest', 'testfixtures'],\n install_requires=install_requires,\n cmdclass={'test': PyTest}, )\n", "issue": "`SnippetEvent` should be loggable\nRight now logging event object directly does not show the content of the event, which makes debugging difficult.\r\n`logging.info(event)` should log the full content of the event.\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n# \n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport platform\nimport setuptools\nfrom setuptools.command import test\nimport sys\n\ninstall_requires = [\n 'future',\n # mock-1.0.1 is the last version compatible with setuptools <17.1,\n # which is what comes with Ubuntu 14.04 LTS.\n 'mock<=1.0.1',\n 'portpicker',\n 'psutil',\n 'pytz',\n 'pyyaml',\n 'timeout_decorator',\n 'pyserial'\n]\n\nif sys.version_info < (3, ):\n install_requires.extend([\n 'enum34',\n # \"futures\" is needed for py2 compatibility and it only works in 2.7\n 'futures',\n ])\n\nif platform.system() == 'Windows':\n install_requires.append('pywin32')\n\n\nclass PyTest(test.test):\n \"\"\"Class used to execute unit tests using PyTest. 
This allows us to execute\n unit tests without having to install the package.\n \"\"\"\n\n def finalize_options(self):\n test.test.finalize_options(self)\n self.test_args = ['-x', \"tests\"]\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ndef main():\n setuptools.setup(\n name='mobly',\n version='1.7.1',\n maintainer = 'Ang Li',\n maintainer_email = '[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url = 'https://github.com/google/mobly',\n download_url = 'https://github.com/google/mobly/tarball/1.7.1',\n packages=setuptools.find_packages(),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n tests_require=['pytest'],\n install_requires=install_requires,\n cmdclass={'test': PyTest}, )\n\n\nif __name__ == '__main__':\n main()\n", "path": "setup.py"}, {"content": "# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\ndef from_dict(event_dict):\n \"\"\"Create a SnippetEvent object from a dictionary.\n\n Args:\n event_dict: a dictionary representing an event.\n\n Returns:\n A SnippetEvent object.\n \"\"\"\n return SnippetEvent(\n callback_id=event_dict['callbackId'],\n name=event_dict['name'],\n creation_time=event_dict['time'],\n data=event_dict['data'])\n\n\nclass SnippetEvent(object):\n \"\"\"The class that represents callback events for mobly snippet library.\n\n Attributes:\n callback_id: string, the callback ID associated with the event.\n name: string, the name of the event.\n creation_time: int, the epoch time when the event is created on the\n Rpc server side.\n data: dictionary, the data held by the event. Can be None.\n \"\"\"\n\n def __init__(self, callback_id, name, creation_time, data):\n self.callback_id = callback_id\n self.name = name\n self.creation_time = creation_time\n self.data = data\n", "path": "mobly/controllers/android_device_lib/snippet_event.py"}]} | 1,764 | 365 |
gh_patches_debug_5700 | rasdani/github-patches | git_diff | psychopy__psychopy-1325 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
psychopyApp won't start with Matplotlib 1.5 installed
See http://discourse.psychopy.org/t/mac-specific-help/1540/3
We need to figure out
- whether this problem is Anaconda-specific (and would require fixing upstream)
- whether this problem is Mac-specific
</issue>
<code>
[start of psychopy/app/psychopyApp.py]
1 #!/usr/bin/env python2
2
3 # Part of the PsychoPy library
4 # Copyright (C) 2015 Jonathan Peirce
5 # Distributed under the terms of the GNU General Public License (GPL).
6
7 from __future__ import absolute_import, print_function
8
9 import sys
10 from psychopy.app._psychopyApp import PsychoPyApp, __version__
11
12 # NB the PsychoPyApp classes moved to _psychopyApp.py as of version 1.78.00
13 # to allow for better upgrading possibilities from the mac app bundle. this
14 # file now used solely as a launcher for the app, not as the app itself.
15
16 if __name__ == '__main__':
17 if '-x' in sys.argv:
18 # run a .py script from the command line using StandAlone python
19 targetScript = sys.argv[sys.argv.index('-x') + 1]
20 from psychopy import core
21 import os
22 core.shellCall([sys.executable, os.path.abspath(targetScript)])
23 sys.exit()
24 if '-v' in sys.argv or '--version' in sys.argv:
25 info = 'PsychoPy2, version %s (c)Jonathan Peirce 2015, GNU GPL license'
26 print(info % __version__)
27 sys.exit()
28 if '-h' in sys.argv or '--help' in sys.argv:
29 print("""Starts the PsychoPy2 application.
30
31 Usage: python PsychoPy.py [options] [file]
32
33 Without options or files provided this starts PsychoPy using prefs to
34 decide on the view(s) to open. If optional [file] is provided action
35 depends on the type of the [file]:
36
37 Python script 'file.py' -- opens coder
38
39 Experiment design 'file.psyexp' -- opens builder
40
41 Options:
42 -c, --coder, coder opens coder view only
43 -b, --builder, builder opens builder view only
44 -x script.py execute script.py using StandAlone python
45
46 -v, --version prints version and exits
47 -h, --help prints this help and exit
48
49 --firstrun launches configuration wizard
50 --no-splash suppresses splash screen
51
52 """)
53 sys.exit()
54
55 else:
56 showSplash = True
57 if '--no-splash' in sys.argv:
58 showSplash = False
59 del sys.argv[sys.argv.index('--no-splash')]
60 app = PsychoPyApp(0, showSplash=showSplash)
61 app.MainLoop()
62
[end of psychopy/app/psychopyApp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/psychopy/app/psychopyApp.py b/psychopy/app/psychopyApp.py
--- a/psychopy/app/psychopyApp.py
+++ b/psychopy/app/psychopyApp.py
@@ -8,6 +8,8 @@
import sys
from psychopy.app._psychopyApp import PsychoPyApp, __version__
+# fix OS X locale-bug on startup: sets locale to LC_ALL (must be defined!)
+import psychopy.locale_setup # noqa
# NB the PsychoPyApp classes moved to _psychopyApp.py as of version 1.78.00
# to allow for better upgrading possibilities from the mac app bundle. this
| {"golden_diff": "diff --git a/psychopy/app/psychopyApp.py b/psychopy/app/psychopyApp.py\n--- a/psychopy/app/psychopyApp.py\n+++ b/psychopy/app/psychopyApp.py\n@@ -8,6 +8,8 @@\n \n import sys\n from psychopy.app._psychopyApp import PsychoPyApp, __version__\n+# fix OS X locale-bug on startup: sets locale to LC_ALL (must be defined!)\n+import psychopy.locale_setup # noqa\n \n # NB the PsychoPyApp classes moved to _psychopyApp.py as of version 1.78.00\n # to allow for better upgrading possibilities from the mac app bundle. this\n", "issue": "psychopyApp won't start with Matplotlib 1.5 installed\nSee http://discourse.psychopy.org/t/mac-specific-help/1540/3\r\n\r\nWe need to figure out\r\n- whether this problem is Anaconda-specific (and would require fixing upstream)\r\n- whether this problem is Mac-specific\n", "before_files": [{"content": "#!/usr/bin/env python2\n\n# Part of the PsychoPy library\n# Copyright (C) 2015 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\nfrom __future__ import absolute_import, print_function\n\nimport sys\nfrom psychopy.app._psychopyApp import PsychoPyApp, __version__\n\n# NB the PsychoPyApp classes moved to _psychopyApp.py as of version 1.78.00\n# to allow for better upgrading possibilities from the mac app bundle. this\n# file now used solely as a launcher for the app, not as the app itself.\n\nif __name__ == '__main__':\n if '-x' in sys.argv:\n # run a .py script from the command line using StandAlone python\n targetScript = sys.argv[sys.argv.index('-x') + 1]\n from psychopy import core\n import os\n core.shellCall([sys.executable, os.path.abspath(targetScript)])\n sys.exit()\n if '-v' in sys.argv or '--version' in sys.argv:\n info = 'PsychoPy2, version %s (c)Jonathan Peirce 2015, GNU GPL license'\n print(info % __version__)\n sys.exit()\n if '-h' in sys.argv or '--help' in sys.argv:\n print(\"\"\"Starts the PsychoPy2 application.\n\nUsage: python PsychoPy.py [options] [file]\n\nWithout options or files provided this starts PsychoPy using prefs to\ndecide on the view(s) to open. If optional [file] is provided action\ndepends on the type of the [file]:\n\n Python script 'file.py' -- opens coder\n\n Experiment design 'file.psyexp' -- opens builder\n\nOptions:\n -c, --coder, coder opens coder view only\n -b, --builder, builder opens builder view only\n -x script.py execute script.py using StandAlone python\n\n -v, --version prints version and exits\n -h, --help prints this help and exit\n\n --firstrun launches configuration wizard\n --no-splash suppresses splash screen\n\n\"\"\")\n sys.exit()\n\n else:\n showSplash = True\n if '--no-splash' in sys.argv:\n showSplash = False\n del sys.argv[sys.argv.index('--no-splash')]\n app = PsychoPyApp(0, showSplash=showSplash)\n app.MainLoop()\n", "path": "psychopy/app/psychopyApp.py"}]} | 1,268 | 150 |
gh_patches_debug_19111 | rasdani/github-patches | git_diff | scrapy__scrapy-3045 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python3.3 support and requirements without it
Scrapy still supports py3.3 (at least according to its trove classifiers in setup.py and the CI conf)
but some of its dependencies dropped support some time ago.
https://github.com/pyca/service_identity/blob/master/CHANGELOG.rst#backward-incompatible-changes-1
https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst#20---2017-07-17
This caused some problems when testing the scrapy daemon with py3.3,
which was resolved by installing the [enum-compat virtual package](https://pypi.python.org/pypi/enum-compat/0.0.2)
There are several options here.
scrapy1.5 can drop support for python3.3,
scrapy1.4 can restrict the max versions for those dependencies
and enum-compat can become a requirement,
although there may be more things broken.
I didn't figure out why the python3.3 build for scrapy doesn't fail
but here is a failed scrapyd build https://travis-ci.org/scrapy/scrapyd/jobs/299029712
</issue>
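For context, a hedged sketch of one of the options mentioned above, as a fragment of a hypothetical setup.py (generic setuptools usage, not Scrapy's actual file): declaring `python_requires` makes pip refuse to install the release on excluded interpreters, which is the usual way to drop Python 3.3 cleanly.

```python
from setuptools import setup

setup(
    name='example-package',  # placeholder name, not Scrapy's
    version='0.1',
    # pip >= 9 enforces this marker, so Python 3.3 users never get the release.
    python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
)
```
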
<code>
[start of setup.py]
1 from os.path import dirname, join
2 from pkg_resources import parse_version
3 from setuptools import setup, find_packages, __version__ as setuptools_version
4
5
6 with open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:
7 version = f.read().decode('ascii').strip()
8
9
10 def has_environment_marker_platform_impl_support():
11 """Code extracted from 'pytest/setup.py'
12 https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31
13
14 The first known release to support environment marker with range operators
15 it is 18.5, see:
16 https://setuptools.readthedocs.io/en/latest/history.html#id235
17 """
18 return parse_version(setuptools_version) >= parse_version('18.5')
19
20
21 extras_require = {}
22
23 if has_environment_marker_platform_impl_support():
24 extras_require[':platform_python_implementation == "PyPy"'] = [
25 'PyPyDispatcher>=2.1.0',
26 ]
27
28
29 setup(
30 name='Scrapy',
31 version=version,
32 url='https://scrapy.org',
33 description='A high-level Web Crawling and Web Scraping framework',
34 long_description=open('README.rst').read(),
35 author='Scrapy developers',
36 maintainer='Pablo Hoffman',
37 maintainer_email='[email protected]',
38 license='BSD',
39 packages=find_packages(exclude=('tests', 'tests.*')),
40 include_package_data=True,
41 zip_safe=False,
42 entry_points={
43 'console_scripts': ['scrapy = scrapy.cmdline:execute']
44 },
45 classifiers=[
46 'Framework :: Scrapy',
47 'Development Status :: 5 - Production/Stable',
48 'Environment :: Console',
49 'Intended Audience :: Developers',
50 'License :: OSI Approved :: BSD License',
51 'Operating System :: OS Independent',
52 'Programming Language :: Python',
53 'Programming Language :: Python :: 2',
54 'Programming Language :: Python :: 2.7',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.3',
57 'Programming Language :: Python :: 3.4',
58 'Programming Language :: Python :: 3.5',
59 'Programming Language :: Python :: 3.6',
60 'Topic :: Internet :: WWW/HTTP',
61 'Topic :: Software Development :: Libraries :: Application Frameworks',
62 'Topic :: Software Development :: Libraries :: Python Modules',
63 ],
64 install_requires=[
65 'Twisted>=13.1.0',
66 'w3lib>=1.17.0',
67 'queuelib',
68 'lxml',
69 'pyOpenSSL',
70 'cssselect>=0.9',
71 'six>=1.5.2',
72 'parsel>=1.1',
73 'PyDispatcher>=2.0.5',
74 'service_identity',
75 ],
76 extras_require=extras_require,
77 )
78
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,7 +53,6 @@
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
@@ -61,6 +60,7 @@
'Topic :: Software Development :: Libraries :: Application Frameworks',
'Topic :: Software Development :: Libraries :: Python Modules',
],
+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
install_requires=[
'Twisted>=13.1.0',
'w3lib>=1.17.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,7 +53,6 @@\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n@@ -61,6 +60,7 @@\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n install_requires=[\n 'Twisted>=13.1.0',\n 'w3lib>=1.17.0',\n", "issue": "Python3.3 support and requirements without it\nScrapy still supports py3.3 (at least according to its trove classifiers in setup.py and the CI conf)\r\nbut some of its dependencies dropped support some time ago.\r\nhttps://github.com/pyca/service_identity/blob/master/CHANGELOG.rst#backward-incompatible-changes-1\r\nhttps://github.com/pyca/cryptography/blob/master/CHANGELOG.rst#20---2017-07-17\r\n\r\nThis caused some problems when testing scrapy daemon with py3.3,\r\nwhich was resolved by installing the [enum-compat virtual package](https://pypi.python.org/pypi/enum-compat/0.0.2)\r\n\r\nThere are several options here.\r\nscrapy1.5 can drop support for python3.3,\r\nscrapy1.4 can restrict the max versions for those dependencies\r\nand enum-compat can become a requirement,\r\nalthough there may be more things broken.\r\n\r\nI didn't figure out why the python3.3 build for scrapy doesn't fail\r\nbut here is a failed scrapyd build https://travis-ci.org/scrapy/scrapyd/jobs/299029712\n", "before_files": [{"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\nextras_require = {}\n\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n 'PyPyDispatcher>=2.1.0',\n ]\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python 
:: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n install_requires=[\n 'Twisted>=13.1.0',\n 'w3lib>=1.17.0',\n 'queuelib',\n 'lxml',\n 'pyOpenSSL',\n 'cssselect>=0.9',\n 'six>=1.5.2',\n 'parsel>=1.1',\n 'PyDispatcher>=2.0.5',\n 'service_identity',\n ],\n extras_require=extras_require,\n)\n", "path": "setup.py"}]} | 1,550 | 213 |
gh_patches_debug_1326 | rasdani/github-patches | git_diff | iterative__dvc-1757 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
typo in docs
super minor typo:
$dvc repro --help
-c CWD, --cwd CWD Directory within your repo to **reroduce** from.
dvc --version
0.30.1
</issue>
<code>
[start of dvc/command/repro.py]
1 from __future__ import unicode_literals
2
3 import os
4
5 import dvc.logger as logger
6 from dvc.command.base import CmdBase
7 from dvc.command.status import CmdDataStatus
8 from dvc.exceptions import DvcException
9
10
11 class CmdRepro(CmdBase):
12 def run(self):
13 recursive = not self.args.single_item
14 saved_dir = os.path.realpath(os.curdir)
15 if self.args.cwd:
16 os.chdir(self.args.cwd)
17
18 # Dirty hack so the for loop below can at least enter once
19 if self.args.all_pipelines:
20 self.args.targets = [None]
21 elif not self.args.targets:
22 self.args.targets = self.default_targets
23
24 ret = 0
25 for target in self.args.targets:
26 try:
27 stages = self.repo.reproduce(
28 target,
29 recursive=recursive,
30 force=self.args.force,
31 dry=self.args.dry,
32 interactive=self.args.interactive,
33 pipeline=self.args.pipeline,
34 all_pipelines=self.args.all_pipelines,
35 ignore_build_cache=self.args.ignore_build_cache,
36 no_commit=self.args.no_commit,
37 )
38
39 if len(stages) == 0:
40 logger.info(CmdDataStatus.UP_TO_DATE_MSG)
41
42 if self.args.metrics:
43 self.repo.metrics.show()
44 except DvcException:
45 logger.error()
46 ret = 1
47 break
48
49 os.chdir(saved_dir)
50 return ret
51
52
53 def add_parser(subparsers, parent_parser):
54 REPRO_HELP = "Reproduce DVC file. Default file name - 'Dvcfile'."
55 repro_parser = subparsers.add_parser(
56 "repro",
57 parents=[parent_parser],
58 description=REPRO_HELP,
59 help=REPRO_HELP,
60 )
61 repro_parser.add_argument(
62 "targets", nargs="*", help="DVC file to reproduce."
63 )
64 repro_parser.add_argument(
65 "-f",
66 "--force",
67 action="store_true",
68 default=False,
69 help="Reproduce even if dependencies were not changed.",
70 )
71 repro_parser.add_argument(
72 "-s",
73 "--single-item",
74 action="store_true",
75 default=False,
76 help="Reproduce only single data item without recursive dependencies "
77 "check.",
78 )
79 repro_parser.add_argument(
80 "-c",
81 "--cwd",
82 default=os.path.curdir,
83 help="Directory within your repo to reroduce from.",
84 )
85 repro_parser.add_argument(
86 "-m",
87 "--metrics",
88 action="store_true",
89 default=False,
90 help="Show metrics after reproduction.",
91 )
92 repro_parser.add_argument(
93 "--dry",
94 action="store_true",
95 default=False,
96 help="Only print the commands that would be executed without "
97 "actually executing.",
98 )
99 repro_parser.add_argument(
100 "-i",
101 "--interactive",
102 action="store_true",
103 default=False,
104 help="Ask for confirmation before reproducing each stage.",
105 )
106 repro_parser.add_argument(
107 "-p",
108 "--pipeline",
109 action="store_true",
110 default=False,
111 help="Reproduce the whole pipeline that the specified stage file "
112 "belongs to.",
113 )
114 repro_parser.add_argument(
115 "-P",
116 "--all-pipelines",
117 action="store_true",
118 default=False,
119 help="Reproduce all pipelines in the repo.",
120 )
121 repro_parser.add_argument(
122 "--ignore-build-cache",
123 action="store_true",
124 default=False,
125 help="Reproduce all descendants of a changed stage even if their "
126 "direct dependencies didn't change.",
127 )
128 repro_parser.add_argument(
129 "--no-commit",
130 action="store_true",
131 default=False,
132 help="Don't put files/directories into cache.",
133 )
134 repro_parser.set_defaults(func=CmdRepro)
135
[end of dvc/command/repro.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/command/repro.py b/dvc/command/repro.py
--- a/dvc/command/repro.py
+++ b/dvc/command/repro.py
@@ -80,7 +80,7 @@
"-c",
"--cwd",
default=os.path.curdir,
- help="Directory within your repo to reroduce from.",
+ help="Directory within your repo to reproduce from.",
)
repro_parser.add_argument(
"-m",
| {"golden_diff": "diff --git a/dvc/command/repro.py b/dvc/command/repro.py\n--- a/dvc/command/repro.py\n+++ b/dvc/command/repro.py\n@@ -80,7 +80,7 @@\n \"-c\",\n \"--cwd\",\n default=os.path.curdir,\n- help=\"Directory within your repo to reroduce from.\",\n+ help=\"Directory within your repo to reproduce from.\",\n )\n repro_parser.add_argument(\n \"-m\",\n", "issue": "typo in docs\nsuper minor typo:\r\n\r\n$dvc repro --help\r\n -c CWD, --cwd CWD Directory within your repo to **reroduce** from.\r\n\r\ndvc --version\r\n0.30.1\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\n\nimport dvc.logger as logger\nfrom dvc.command.base import CmdBase\nfrom dvc.command.status import CmdDataStatus\nfrom dvc.exceptions import DvcException\n\n\nclass CmdRepro(CmdBase):\n def run(self):\n recursive = not self.args.single_item\n saved_dir = os.path.realpath(os.curdir)\n if self.args.cwd:\n os.chdir(self.args.cwd)\n\n # Dirty hack so the for loop below can at least enter once\n if self.args.all_pipelines:\n self.args.targets = [None]\n elif not self.args.targets:\n self.args.targets = self.default_targets\n\n ret = 0\n for target in self.args.targets:\n try:\n stages = self.repo.reproduce(\n target,\n recursive=recursive,\n force=self.args.force,\n dry=self.args.dry,\n interactive=self.args.interactive,\n pipeline=self.args.pipeline,\n all_pipelines=self.args.all_pipelines,\n ignore_build_cache=self.args.ignore_build_cache,\n no_commit=self.args.no_commit,\n )\n\n if len(stages) == 0:\n logger.info(CmdDataStatus.UP_TO_DATE_MSG)\n\n if self.args.metrics:\n self.repo.metrics.show()\n except DvcException:\n logger.error()\n ret = 1\n break\n\n os.chdir(saved_dir)\n return ret\n\n\ndef add_parser(subparsers, parent_parser):\n REPRO_HELP = \"Reproduce DVC file. 
Default file name - 'Dvcfile'.\"\n repro_parser = subparsers.add_parser(\n \"repro\",\n parents=[parent_parser],\n description=REPRO_HELP,\n help=REPRO_HELP,\n )\n repro_parser.add_argument(\n \"targets\", nargs=\"*\", help=\"DVC file to reproduce.\"\n )\n repro_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce even if dependencies were not changed.\",\n )\n repro_parser.add_argument(\n \"-s\",\n \"--single-item\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce only single data item without recursive dependencies \"\n \"check.\",\n )\n repro_parser.add_argument(\n \"-c\",\n \"--cwd\",\n default=os.path.curdir,\n help=\"Directory within your repo to reroduce from.\",\n )\n repro_parser.add_argument(\n \"-m\",\n \"--metrics\",\n action=\"store_true\",\n default=False,\n help=\"Show metrics after reproduction.\",\n )\n repro_parser.add_argument(\n \"--dry\",\n action=\"store_true\",\n default=False,\n help=\"Only print the commands that would be executed without \"\n \"actually executing.\",\n )\n repro_parser.add_argument(\n \"-i\",\n \"--interactive\",\n action=\"store_true\",\n default=False,\n help=\"Ask for confirmation before reproducing each stage.\",\n )\n repro_parser.add_argument(\n \"-p\",\n \"--pipeline\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce the whole pipeline that the specified stage file \"\n \"belongs to.\",\n )\n repro_parser.add_argument(\n \"-P\",\n \"--all-pipelines\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all pipelines in the repo.\",\n )\n repro_parser.add_argument(\n \"--ignore-build-cache\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all descendants of a changed stage even if their \"\n \"direct dependencies didn't change.\",\n )\n repro_parser.add_argument(\n \"--no-commit\",\n action=\"store_true\",\n default=False,\n help=\"Don't put files/directories into cache.\",\n )\n repro_parser.set_defaults(func=CmdRepro)\n", "path": "dvc/command/repro.py"}]} | 1,686 | 102 |
gh_patches_debug_10319 | rasdani/github-patches | git_diff | voxel51__fiftyone-102 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need custom FloatField that supports more numeric types
In this code (taken from https://github.com/voxel51/fiftyone/blob/develop/examples/model_inference/README.md), I had to add `float(confidence)`, otherwise I got an error about `confidence`, which was a numpy float32 or something similar, not being a supported value for a `mongoengine.fields.FloatField`.
```py
for imgs, sample_ids in data_loader:
predictions, confidences = predict(model, imgs)
# Add predictions to your FiftyOne dataset
for sample_id, prediction, confidence in zip(
sample_ids, predictions, confidences
):
sample = dataset[sample_id]
sample[model_name] = fo.Classification(label=labels_map[prediction])
sample["confidence"] = float(confidence) # float() is required here, but shouldn't need to be...
sample.save()
```
Kind of hard to believe that MongoEngine doesn't handle casting a `np.float32` into a float, but, alas, it seems like our wrapper around `mongoengine.fields.FloatField` will need to override the `validate()` function below to cast non-int types with `float()` as well...
https://github.com/MongoEngine/mongoengine/blob/4275c2d7b791f5910308a4815a1ba39324dee373/mongoengine/fields.py#L377-L411
</issue>
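A minimal sketch of the override the issue suggests, adapted from the linked mongoengine implementation (illustrative only, not necessarily the project's final patch): calling `float()` first lets `numpy.float32` and similar scalars through before the range checks run.

```python
import mongoengine


class CastingFloatField(mongoengine.FloatField):
    """Hypothetical field that accepts anything float() can convert."""

    def validate(self, value):
        try:
            value = float(value)  # casts numpy scalars, Decimal, bools, ...
        except OverflowError:
            self.error('The value is too large to be converted to float')
        except (TypeError, ValueError):
            self.error('%s could not be converted to float' % value)

        if self.min_value is not None and value < self.min_value:
            self.error('Float value is too small')

        if self.max_value is not None and value > self.max_value:
            self.error('Float value is too large')
```
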
<code>
[start of fiftyone/core/fields.py]
1 """
2 Fields of dataset sample schemas.
3
4 | Copyright 2017-2020, Voxel51, Inc.
5 | `voxel51.com <https://voxel51.com/>`_
6 |
7 """
8 # pragma pylint: disable=redefined-builtin
9 # pragma pylint: disable=unused-wildcard-import
10 # pragma pylint: disable=wildcard-import
11 from __future__ import absolute_import
12 from __future__ import division
13 from __future__ import print_function
14 from __future__ import unicode_literals
15 from builtins import *
16
17 # pragma pylint: enable=redefined-builtin
18 # pragma pylint: enable=unused-wildcard-import
19 # pragma pylint: enable=wildcard-import
20
21 import mongoengine.fields
22
23
24 class Field(mongoengine.fields.BaseField):
25 pass
26
27
28 class BooleanField(mongoengine.BooleanField, Field):
29 pass
30
31
32 class IntField(mongoengine.IntField, Field):
33 pass
34
35
36 class FloatField(mongoengine.FloatField, Field):
37 pass
38
39
40 class StringField(mongoengine.StringField, Field):
41 pass
42
43
44 class ListField(mongoengine.ListField, Field):
45 pass
46
47
48 class DictField(mongoengine.DictField, Field):
49 pass
50
51
52 class EmbeddedDocumentField(mongoengine.EmbeddedDocumentField, Field):
53 pass
54
[end of fiftyone/core/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/fiftyone/core/fields.py b/fiftyone/core/fields.py
--- a/fiftyone/core/fields.py
+++ b/fiftyone/core/fields.py
@@ -34,7 +34,19 @@
class FloatField(mongoengine.FloatField, Field):
- pass
+ def validate(self, value):
+ try:
+ value = float(value)
+ except OverflowError:
+ self.error("The value is too large to be converted to float")
+ except (TypeError, ValueError):
+ self.error("%s could not be converted to float" % value)
+
+ if self.min_value is not None and value < self.min_value:
+ self.error("Float value is too small")
+
+ if self.max_value is not None and value > self.max_value:
+ self.error("Float value is too large")
class StringField(mongoengine.StringField, Field):
| {"golden_diff": "diff --git a/fiftyone/core/fields.py b/fiftyone/core/fields.py\n--- a/fiftyone/core/fields.py\n+++ b/fiftyone/core/fields.py\n@@ -34,7 +34,19 @@\n \n \n class FloatField(mongoengine.FloatField, Field):\n- pass\n+ def validate(self, value):\n+ try:\n+ value = float(value)\n+ except OverflowError:\n+ self.error(\"The value is too large to be converted to float\")\n+ except (TypeError, ValueError):\n+ self.error(\"%s could not be converted to float\" % value)\n+\n+ if self.min_value is not None and value < self.min_value:\n+ self.error(\"Float value is too small\")\n+\n+ if self.max_value is not None and value > self.max_value:\n+ self.error(\"Float value is too large\")\n \n \n class StringField(mongoengine.StringField, Field):\n", "issue": "Need custom FloatField that supports more numeric types\nIn this code (taken from https://github.com/voxel51/fiftyone/blob/develop/examples/model_inference/README.md), I had to add `float(confidence)` otherwise I got an error about `confidence`, which was a numpy float32 or something similar, not being a supported value for a `mongoengine.fields.FloatField`. \r\n\r\n```py\r\nfor imgs, sample_ids in data_loader:\r\n predictions, confidences = predict(model, imgs)\r\n\r\n # Add predictions to your FiftyOne dataset\r\n for sample_id, prediction, confidence in zip(\r\n sample_ids, predictions, confidences\r\n ):\r\n sample = dataset[sample_id]\r\n sample[model_name] = fo.Classification(label=labels_map[prediction])\r\n sample[\"confidence\"] = float(confidence) # float() is required here, but shouldn't need to be...\r\n sample.save()\r\n```\r\n\r\nKind of hard to believe that MongoEngine doesn't handle casting a `np.float32` into a float, but, alas, it seems like our wrapper around `mongoengine.fields.FloatField` will need to override the `validate()` function below to cast non-int types with `float()` as well...\r\n\r\nhttps://github.com/MongoEngine/mongoengine/blob/4275c2d7b791f5910308a4815a1ba39324dee373/mongoengine/fields.py#L377-L411\r\n\n", "before_files": [{"content": "\"\"\"\nFields of dataset sample schemas.\n\n| Copyright 2017-2020, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\n# pragma pylint: disable=redefined-builtin\n# pragma pylint: disable=unused-wildcard-import\n# pragma pylint: disable=wildcard-import\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\nfrom builtins import *\n\n# pragma pylint: enable=redefined-builtin\n# pragma pylint: enable=unused-wildcard-import\n# pragma pylint: enable=wildcard-import\n\nimport mongoengine.fields\n\n\nclass Field(mongoengine.fields.BaseField):\n pass\n\n\nclass BooleanField(mongoengine.BooleanField, Field):\n pass\n\n\nclass IntField(mongoengine.IntField, Field):\n pass\n\n\nclass FloatField(mongoengine.FloatField, Field):\n pass\n\n\nclass StringField(mongoengine.StringField, Field):\n pass\n\n\nclass ListField(mongoengine.ListField, Field):\n pass\n\n\nclass DictField(mongoengine.DictField, Field):\n pass\n\n\nclass EmbeddedDocumentField(mongoengine.EmbeddedDocumentField, Field):\n pass\n", "path": "fiftyone/core/fields.py"}]} | 1,240 | 201 |
gh_patches_debug_12410 | rasdani/github-patches | git_diff | sagemath__sage-36176 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
toml package is obsolete
### Steps To Reproduce
_No response_
### Expected Behavior
toml is not installed
### Actual Behavior
toml is installed
### Additional Information
I think our `toml` package is obsolete. The only other package listing it as a requirement is tox, but tox only needs it with ancient pythons: https://github.com/tox-dev/tox/blob/3.27.1/setup.cfg#L45
I think our dependency can be replaced with tomli, at which point toml can go.
### Environment
```markdown
Gentoo / git develop
```
### Checklist
- [X] I have searched the existing issues for a bug report that matches the one I want to file, without success.
- [X] I have read the documentation and troubleshoot guide
</issue>
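As a point of reference (not Sage-specific code), tomli covers the read-only use case that most consumers of toml actually need; the file name below is just an example:

```python
import tomli  # on Python 3.11+ the stdlib module tomllib has the same API

# tomli only parses; it has no dump(). Files must be opened in binary mode.
with open('pyproject.toml', 'rb') as f:
    config = tomli.load(f)

print(config.get('build-system', {}).get('requires', []))
```
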
<code>
[start of build/sage_bootstrap/creator.py]
1 # -*- coding: utf-8 -*-
2 """
3 Package Creator
4 """
5
6 # ****************************************************************************
7 # Copyright (C) 2016 Volker Braun <[email protected]>
8 #
9 # This program is free software: you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation, either version 2 of the License, or
12 # (at your option) any later version.
13 # https://www.gnu.org/licenses/
14 # ****************************************************************************
15
16 import os
17
18 import logging
19 log = logging.getLogger()
20
21 from sage_bootstrap.env import SAGE_ROOT
22
23
24 class PackageCreator(object):
25
26 def __init__(self, package_name):
27 self.package_name = package_name
28 self.path = os.path.join(SAGE_ROOT, 'build', 'pkgs', package_name)
29 try:
30 os.mkdir(self.path)
31 except OSError:
32 pass
33
34 def set_version(self, version):
35 """
36 Write the version to ``package-version.txt``
37 """
38 with open(os.path.join(self.path, 'package-version.txt'), 'w+') as f:
39 f.write(version)
40 f.write('\n')
41
42 def set_type(self, pkg_type):
43 """
44 Write the package type to ``type``
45 """
46 with open(os.path.join(self.path, 'type'), 'w+') as f:
47 f.write(pkg_type)
48 f.write('\n')
49
50 def set_tarball(self, tarball, upstream_url):
51 """
52 Write the tarball name pattern to ``checksums.ini``
53 """
54 with open(os.path.join(self.path, 'checksums.ini'), 'w+') as f:
55 f.write('tarball={0}'.format(tarball))
56 f.write('\n')
57 if upstream_url:
58 f.write('upstream_url={0}'.format(upstream_url))
59 f.write('\n')
60
61 def set_description(self, description, license, upstream_contact):
62 """
63 Write the ``SPKG.rst`` file
64 """
65 with open(os.path.join(self.path, 'SPKG.rst'), 'w+') as f:
66 def heading(title, char='-'):
67 return '{0}\n{1}\n\n'.format(title, char * len(title))
68 if description:
69 title = '{0}: {1}'.format(self.package_name, description)
70 else:
71 title = self.package_name
72 f.write(heading(title, '='))
73 f.write(heading('Description'))
74 if description:
75 f.write('{0}\n\n'.format(description))
76 f.write(heading('License'))
77 if license:
78 f.write('{0}\n\n'.format(license))
79 f.write(heading('Upstream Contact'))
80 if upstream_contact:
81 f.write('{0}\n\n'.format(upstream_contact))
82
83 def set_python_data_and_scripts(self, pypi_package_name=None, source='normal'):
84 """
85 Write the file ``dependencies`` and other files for Python packages.
86
87 If ``source`` is ``"normal"``, write the files ``spkg-install.in`` and
88 ``install-requires.txt``.
89
90 If ``source`` is ``"wheel"``, write the file ``install-requires.txt``.
91
92 If ``source`` is ``"pip"``, write the file ``requirements.txt``.
93 """
94 if pypi_package_name is None:
95 pypi_package_name = self.package_name
96 with open(os.path.join(self.path, 'dependencies'), 'w+') as f:
97 f.write('$(PYTHON) | $(PYTHON_TOOLCHAIN)\n\n')
98 f.write('----------\nAll lines of this file are ignored except the first.\n')
99 if source == 'normal':
100 with open(os.path.join(self.path, 'spkg-install.in'), 'w+') as f:
101 f.write('cd src\nsdh_pip_install .\n')
102 with open(os.path.join(self.path, 'install-requires.txt'), 'w+') as f:
103 f.write('{0}\n'.format(pypi_package_name))
104 elif source == 'wheel':
105 with open(os.path.join(self.path, 'install-requires.txt'), 'w+') as f:
106 f.write('{0}\n'.format(pypi_package_name))
107 elif source == 'pip':
108 with open(os.path.join(self.path, 'requirements.txt'), 'w+') as f:
109 f.write('{0}\n'.format(pypi_package_name))
110 elif source == 'script':
111 pass
112 else:
113 raise ValueError('package source must be one of normal, script, pip, or wheel')
114
[end of build/sage_bootstrap/creator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/build/sage_bootstrap/creator.py b/build/sage_bootstrap/creator.py
--- a/build/sage_bootstrap/creator.py
+++ b/build/sage_bootstrap/creator.py
@@ -94,7 +94,7 @@
if pypi_package_name is None:
pypi_package_name = self.package_name
with open(os.path.join(self.path, 'dependencies'), 'w+') as f:
- f.write('$(PYTHON) | $(PYTHON_TOOLCHAIN)\n\n')
+ f.write(' | $(PYTHON_TOOLCHAIN) $(PYTHON)\n\n')
f.write('----------\nAll lines of this file are ignored except the first.\n')
if source == 'normal':
with open(os.path.join(self.path, 'spkg-install.in'), 'w+') as f:
| {"golden_diff": "diff --git a/build/sage_bootstrap/creator.py b/build/sage_bootstrap/creator.py\n--- a/build/sage_bootstrap/creator.py\n+++ b/build/sage_bootstrap/creator.py\n@@ -94,7 +94,7 @@\n if pypi_package_name is None:\n pypi_package_name = self.package_name\n with open(os.path.join(self.path, 'dependencies'), 'w+') as f:\n- f.write('$(PYTHON) | $(PYTHON_TOOLCHAIN)\\n\\n')\n+ f.write(' | $(PYTHON_TOOLCHAIN) $(PYTHON)\\n\\n')\n f.write('----------\\nAll lines of this file are ignored except the first.\\n')\n if source == 'normal':\n with open(os.path.join(self.path, 'spkg-install.in'), 'w+') as f:\n", "issue": "toml package is obsolete\n### Steps To Reproduce\n\n_No response_\n\n### Expected Behavior\n\ntoml is not installed\n\n### Actual Behavior\n\ntoml is installed\n\n### Additional Information\n\nI think our `toml` package is obsolete. The only other package listing it as a requirement is tox, but tox only needs it with ancient pythons: https://github.com/tox-dev/tox/blob/3.27.1/setup.cfg#L45\r\n\r\nI think our dependency can be replaced with tomli at which point toml can go.\r\n\n\n### Environment\n\n```markdown\nGentoo / git develop\n```\n\n\n### Checklist\n\n- [X] I have searched the existing issues for a bug report that matches the one I want to file, without success.\n- [X] I have read the documentation and troubleshoot guide\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nPackage Creator\n\"\"\"\n\n# ****************************************************************************\n# Copyright (C) 2016 Volker Braun <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 2 of the License, or\n# (at your option) any later version.\n# https://www.gnu.org/licenses/\n# ****************************************************************************\n\nimport os\n\nimport logging\nlog = logging.getLogger()\n\nfrom sage_bootstrap.env import SAGE_ROOT\n\n\nclass PackageCreator(object):\n\n def __init__(self, package_name):\n self.package_name = package_name\n self.path = os.path.join(SAGE_ROOT, 'build', 'pkgs', package_name)\n try:\n os.mkdir(self.path)\n except OSError:\n pass\n\n def set_version(self, version):\n \"\"\"\n Write the version to ``package-version.txt``\n \"\"\"\n with open(os.path.join(self.path, 'package-version.txt'), 'w+') as f:\n f.write(version)\n f.write('\\n')\n\n def set_type(self, pkg_type):\n \"\"\"\n Write the package type to ``type``\n \"\"\"\n with open(os.path.join(self.path, 'type'), 'w+') as f:\n f.write(pkg_type)\n f.write('\\n')\n\n def set_tarball(self, tarball, upstream_url):\n \"\"\"\n Write the tarball name pattern to ``checksums.ini``\n \"\"\"\n with open(os.path.join(self.path, 'checksums.ini'), 'w+') as f:\n f.write('tarball={0}'.format(tarball))\n f.write('\\n')\n if upstream_url:\n f.write('upstream_url={0}'.format(upstream_url))\n f.write('\\n')\n\n def set_description(self, description, license, upstream_contact):\n \"\"\"\n Write the ``SPKG.rst`` file\n \"\"\"\n with open(os.path.join(self.path, 'SPKG.rst'), 'w+') as f:\n def heading(title, char='-'):\n return '{0}\\n{1}\\n\\n'.format(title, char * len(title))\n if description:\n title = '{0}: {1}'.format(self.package_name, description)\n else:\n title = self.package_name\n f.write(heading(title, '='))\n f.write(heading('Description'))\n if description:\n f.write('{0}\\n\\n'.format(description))\n 
f.write(heading('License'))\n if license:\n f.write('{0}\\n\\n'.format(license))\n f.write(heading('Upstream Contact'))\n if upstream_contact:\n f.write('{0}\\n\\n'.format(upstream_contact))\n\n def set_python_data_and_scripts(self, pypi_package_name=None, source='normal'):\n \"\"\"\n Write the file ``dependencies`` and other files for Python packages.\n\n If ``source`` is ``\"normal\"``, write the files ``spkg-install.in`` and\n ``install-requires.txt``.\n\n If ``source`` is ``\"wheel\"``, write the file ``install-requires.txt``.\n\n If ``source`` is ``\"pip\"``, write the file ``requirements.txt``.\n \"\"\"\n if pypi_package_name is None:\n pypi_package_name = self.package_name\n with open(os.path.join(self.path, 'dependencies'), 'w+') as f:\n f.write('$(PYTHON) | $(PYTHON_TOOLCHAIN)\\n\\n')\n f.write('----------\\nAll lines of this file are ignored except the first.\\n')\n if source == 'normal':\n with open(os.path.join(self.path, 'spkg-install.in'), 'w+') as f:\n f.write('cd src\\nsdh_pip_install .\\n')\n with open(os.path.join(self.path, 'install-requires.txt'), 'w+') as f:\n f.write('{0}\\n'.format(pypi_package_name))\n elif source == 'wheel':\n with open(os.path.join(self.path, 'install-requires.txt'), 'w+') as f:\n f.write('{0}\\n'.format(pypi_package_name))\n elif source == 'pip':\n with open(os.path.join(self.path, 'requirements.txt'), 'w+') as f:\n f.write('{0}\\n'.format(pypi_package_name))\n elif source == 'script':\n pass\n else:\n raise ValueError('package source must be one of normal, script, pip, or wheel')\n", "path": "build/sage_bootstrap/creator.py"}]} | 1,918 | 172 |
gh_patches_debug_20288 | rasdani/github-patches | git_diff | lk-geimfari__mimesis-926 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add UUID objects support for uuid()
- [x] Add parameter `as_object`
</issue>
<code>
[start of mimesis/providers/cryptographic.py]
1 # -*- coding: utf-8 -*-
2
3 """Cryptographic data provider."""
4
5 import hashlib
6 import secrets
7 from typing import Optional, Union
8 from uuid import UUID, uuid4
9
10 from mimesis.enums import Algorithm
11 from mimesis.providers.base import BaseProvider
12 from mimesis.providers.text import Text
13
14 __all__ = ['Cryptographic']
15
16
17 class Cryptographic(BaseProvider):
18 """Class that provides cryptographic data."""
19
20 def __init__(self, *args, **kwargs) -> None:
21 """Initialize attributes.
22
23 :param seed: Seed.
24 """
25 super().__init__(*args, **kwargs)
26 self.__words = Text('en')._data.get('words', {})
27
28 class Meta:
29 """Class for metadata."""
30
31 name = 'cryptographic'
32
33 @staticmethod
34 def uuid(as_object: bool = False) -> Union[UUID, str]:
35 """Generate random UUID4.
36
37 This method returns string by default,
38 but you can make it return uuid.UUID object using
39 parameter **as_object**
40
41 :param as_object: Returns uuid.UUID.
42 :return: UUID.
43 """
44 _uuid = uuid4()
45
46 if not as_object:
47 return str(_uuid)
48
49 return _uuid
50
51 def hash(self, algorithm: Algorithm = None) -> str: # noqa: A003
52 """Generate random hash.
53
54 To change hashing algorithm, pass parameter ``algorithm``
55 with needed value of the enum object :class:`~mimesis.enums.Algorithm`
56
57 :param algorithm: Enum object :class:`~mimesis.enums.Algorithm`.
58 :return: Hash.
59 :raises NonEnumerableError: When algorithm is unsupported.
60 """
61 key = self._validate_enum(algorithm, Algorithm)
62
63 if hasattr(hashlib, key):
64 fn = getattr(hashlib, key)
65 return fn(self.uuid().encode()).hexdigest() # type: ignore
66
67 @staticmethod
68 def token_bytes(entropy: int = 32) -> bytes:
69 """Generate byte string containing ``entropy`` bytes.
70
71 The string has ``entropy`` random bytes, each byte
72 converted to two hex digits.
73
74 .. warning:: Seed is not applicable to this method,
75 because of its cryptographic-safe nature.
76
77 :param entropy: Number of bytes (default: 32).
78 :return: Random bytes.
79 """
80 return secrets.token_bytes(entropy)
81
82 @staticmethod
83 def token_hex(entropy: int = 32) -> str:
84 """Return a random text string, in hexadecimal.
85
86 The string has *entropy* random bytes, each byte converted to two
87 hex digits. If *entropy* is ``None`` or not supplied, a reasonable
88 default is used.
89
90 .. warning:: Seed is not applicable to this method,
91 because of its cryptographic-safe nature.
92
93 :param entropy: Number of bytes (default: 32).
94 :return: Token.
95 """
96 return secrets.token_hex(entropy)
97
98 @staticmethod
99 def token_urlsafe(entropy: int = 32):
100 """Return a random URL-safe text string, in Base64 encoding.
101
102 The string has *entropy* random bytes. If *entropy* is ``None``
103 or not supplied, a reasonable default is used.
104
105 .. warning:: Seed is not applicable to this method,
106 because of its cryptographic-safe nature.
107
108 :param entropy: Number of bytes (default: 32).
109 :return: URL-safe token.
110 """
111 return secrets.token_urlsafe(entropy)
112
113 def mnemonic_phrase(self, length: int = 12,
114 separator: Optional[str] = None) -> str:
115 """Generate pseudo mnemonic phrase.
116
117 Please, keep in mind that this method generates
118 crypto-insecure values.
119
120 :param separator: Separator of phrases (Default is " ").
121 :param length: Number of words.
122 :return: Mnemonic phrase.
123 """
124 if not separator:
125 separator = ' '
126
127 words = self.__words['normal']
128 words_generator = (self.random.choice(words) for _ in range(length))
129 return '{}'.format(separator).join(words_generator)
130
[end of mimesis/providers/cryptographic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mimesis/providers/cryptographic.py b/mimesis/providers/cryptographic.py
--- a/mimesis/providers/cryptographic.py
+++ b/mimesis/providers/cryptographic.py
@@ -38,6 +38,9 @@
but you can make it return uuid.UUID object using
parameter **as_object**
+ .. warning:: Seed is not applicable to this method,
+ because of its cryptographic-safe nature.
+
:param as_object: Returns uuid.UUID.
:return: UUID.
"""
@@ -54,6 +57,9 @@
To change hashing algorithm, pass parameter ``algorithm``
with needed value of the enum object :class:`~mimesis.enums.Algorithm`
+ .. warning:: Seed is not applicable to this method,
+ because of its cryptographic-safe nature.
+
:param algorithm: Enum object :class:`~mimesis.enums.Algorithm`.
:return: Hash.
:raises NonEnumerableError: When algorithm is unsupported.
| {"golden_diff": "diff --git a/mimesis/providers/cryptographic.py b/mimesis/providers/cryptographic.py\n--- a/mimesis/providers/cryptographic.py\n+++ b/mimesis/providers/cryptographic.py\n@@ -38,6 +38,9 @@\n but you can make it return uuid.UUID object using\n parameter **as_object**\n \n+ .. warning:: Seed is not applicable to this method,\n+ because of its cryptographic-safe nature.\n+\n :param as_object: Returns uuid.UUID.\n :return: UUID.\n \"\"\"\n@@ -54,6 +57,9 @@\n To change hashing algorithm, pass parameter ``algorithm``\n with needed value of the enum object :class:`~mimesis.enums.Algorithm`\n \n+ .. warning:: Seed is not applicable to this method,\n+ because of its cryptographic-safe nature.\n+\n :param algorithm: Enum object :class:`~mimesis.enums.Algorithm`.\n :return: Hash.\n :raises NonEnumerableError: When algorithm is unsupported.\n", "issue": "Add UUID objects support for uuid()\n- [x] Add parameter `as_object` \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Cryptographic data provider.\"\"\"\n\nimport hashlib\nimport secrets\nfrom typing import Optional, Union\nfrom uuid import UUID, uuid4\n\nfrom mimesis.enums import Algorithm\nfrom mimesis.providers.base import BaseProvider\nfrom mimesis.providers.text import Text\n\n__all__ = ['Cryptographic']\n\n\nclass Cryptographic(BaseProvider):\n \"\"\"Class that provides cryptographic data.\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initialize attributes.\n\n :param seed: Seed.\n \"\"\"\n super().__init__(*args, **kwargs)\n self.__words = Text('en')._data.get('words', {})\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'cryptographic'\n\n @staticmethod\n def uuid(as_object: bool = False) -> Union[UUID, str]:\n \"\"\"Generate random UUID4.\n\n This method returns string by default,\n but you can make it return uuid.UUID object using\n parameter **as_object**\n\n :param as_object: Returns uuid.UUID.\n :return: UUID.\n \"\"\"\n _uuid = uuid4()\n\n if not as_object:\n return str(_uuid)\n\n return _uuid\n\n def hash(self, algorithm: Algorithm = None) -> str: # noqa: A003\n \"\"\"Generate random hash.\n\n To change hashing algorithm, pass parameter ``algorithm``\n with needed value of the enum object :class:`~mimesis.enums.Algorithm`\n\n :param algorithm: Enum object :class:`~mimesis.enums.Algorithm`.\n :return: Hash.\n :raises NonEnumerableError: When algorithm is unsupported.\n \"\"\"\n key = self._validate_enum(algorithm, Algorithm)\n\n if hasattr(hashlib, key):\n fn = getattr(hashlib, key)\n return fn(self.uuid().encode()).hexdigest() # type: ignore\n\n @staticmethod\n def token_bytes(entropy: int = 32) -> bytes:\n \"\"\"Generate byte string containing ``entropy`` bytes.\n\n The string has ``entropy`` random bytes, each byte\n converted to two hex digits.\n\n .. warning:: Seed is not applicable to this method,\n because of its cryptographic-safe nature.\n\n :param entropy: Number of bytes (default: 32).\n :return: Random bytes.\n \"\"\"\n return secrets.token_bytes(entropy)\n\n @staticmethod\n def token_hex(entropy: int = 32) -> str:\n \"\"\"Return a random text string, in hexadecimal.\n\n The string has *entropy* random bytes, each byte converted to two\n hex digits. If *entropy* is ``None`` or not supplied, a reasonable\n default is used.\n\n .. 
warning:: Seed is not applicable to this method,\n because of its cryptographic-safe nature.\n\n :param entropy: Number of bytes (default: 32).\n :return: Token.\n \"\"\"\n return secrets.token_hex(entropy)\n\n @staticmethod\n def token_urlsafe(entropy: int = 32):\n \"\"\"Return a random URL-safe text string, in Base64 encoding.\n\n The string has *entropy* random bytes. If *entropy* is ``None``\n or not supplied, a reasonable default is used.\n\n .. warning:: Seed is not applicable to this method,\n because of its cryptographic-safe nature.\n\n :param entropy: Number of bytes (default: 32).\n :return: URL-safe token.\n \"\"\"\n return secrets.token_urlsafe(entropy)\n\n def mnemonic_phrase(self, length: int = 12,\n separator: Optional[str] = None) -> str:\n \"\"\"Generate pseudo mnemonic phrase.\n\n Please, keep in mind that this method generates\n crypto-insecure values.\n\n :param separator: Separator of phrases (Default is \" \").\n :param length: Number of words.\n :return: Mnemonic phrase.\n \"\"\"\n if not separator:\n separator = ' '\n\n words = self.__words['normal']\n words_generator = (self.random.choice(words) for _ in range(length))\n return '{}'.format(separator).join(words_generator)\n", "path": "mimesis/providers/cryptographic.py"}]} | 1,743 | 217 |
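Note that the golden diff above only adds docstring warnings; the `as_object` parameter itself is already present in the file shown. A short usage sketch, assuming a mimesis build that includes the parameter:

```python
# Usage sketch for the uuid() API shown above.
from uuid import UUID
from mimesis.providers.cryptographic import Cryptographic

crypto = Cryptographic()
assert isinstance(crypto.uuid(), str)                 # default: string form
assert isinstance(crypto.uuid(as_object=True), UUID)  # uuid.UUID instance
```

The seed warning exists because `uuid4()` draws from `os.urandom` rather than the provider's seeded `random` instance, so results cannot be made reproducible.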
gh_patches_debug_21786 | rasdani/github-patches | git_diff | pre-commit__pre-commit-310 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Non-ascii prints in error handler without tty cause stacktrace
```
23:00:13 style runtests: commands[0] | pre-commit run --all-files
23:00:13 [INFO] Installing environment for [email protected]:mirrors/pre-commit/mirrors-jshint.
23:00:13 [INFO] Once installed this environment will be reused.
23:00:13 [INFO] This may take a few minutes...
23:01:33 Traceback (most recent call last):
23:01:33 File ".tox/style/bin/pre-commit", line 11, in <module>
23:01:33 sys.exit(main())
23:01:33 File ".../.tox/style/local/lib/python2.7/site-packages/pre_commit/main.py", line 157, in main
23:01:33 'Command {0} failed to exit with a returncode'.format(args.command)
23:01:33 File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
23:01:33 self.gen.throw(type, value, traceback)
23:01:33 File ".../.tox/style/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 41, in error_handler
23:01:33 traceback.format_exc(),
23:01:33 File ".../.tox/style/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 21, in _log_and_exit
23:01:33 print_fn(error_msg)
23:01:33 UnicodeEncodeError: 'ascii' codec can't encode characters in position 735-737: ordinal not in range(128)
```
</issue>
<code>
[start of pre_commit/error_handler.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import contextlib
6 import io
7 import os.path
8 import traceback
9
10 from pre_commit.errors import FatalError
11 from pre_commit.store import Store
12
13
14 # For testing purposes
15 class PreCommitSystemExit(SystemExit):
16 pass
17
18
19 def _log_and_exit(msg, exc, formatted, print_fn=print):
20 error_msg = '{0}: {1}: {2}'.format(msg, type(exc).__name__, exc)
21 print_fn(error_msg)
22 print_fn('Check the log at ~/.pre-commit/pre-commit.log')
23 store = Store()
24 store.require_created()
25 with io.open(os.path.join(store.directory, 'pre-commit.log'), 'w') as log:
26 log.write(error_msg + '\n')
27 log.write(formatted + '\n')
28 raise PreCommitSystemExit(1)
29
30
31 @contextlib.contextmanager
32 def error_handler():
33 try:
34 yield
35 except FatalError as e:
36 _log_and_exit('An error has occurred', e, traceback.format_exc())
37 except Exception as e:
38 _log_and_exit(
39 'An unexpected error has occurred',
40 e,
41 traceback.format_exc(),
42 )
43
[end of pre_commit/error_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py
--- a/pre_commit/error_handler.py
+++ b/pre_commit/error_handler.py
@@ -7,7 +7,9 @@
import os.path
import traceback
+from pre_commit import five
from pre_commit.errors import FatalError
+from pre_commit.output import sys_stdout_write_wrapper
from pre_commit.store import Store
@@ -16,15 +18,15 @@
pass
-def _log_and_exit(msg, exc, formatted, print_fn=print):
- error_msg = '{0}: {1}: {2}'.format(msg, type(exc).__name__, exc)
- print_fn(error_msg)
- print_fn('Check the log at ~/.pre-commit/pre-commit.log')
+def _log_and_exit(msg, exc, formatted, write_fn=sys_stdout_write_wrapper):
+ error_msg = '{0}: {1}: {2}\n'.format(msg, type(exc).__name__, exc)
+ write_fn(error_msg)
+ write_fn('Check the log at ~/.pre-commit/pre-commit.log\n')
store = Store()
store.require_created()
- with io.open(os.path.join(store.directory, 'pre-commit.log'), 'w') as log:
- log.write(error_msg + '\n')
- log.write(formatted + '\n')
+ with io.open(os.path.join(store.directory, 'pre-commit.log'), 'wb') as log:
+ log.write(five.to_bytes(error_msg))
+ log.write(five.to_bytes(formatted) + b'\n')
raise PreCommitSystemExit(1)
| {"golden_diff": "diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py\n--- a/pre_commit/error_handler.py\n+++ b/pre_commit/error_handler.py\n@@ -7,7 +7,9 @@\n import os.path\n import traceback\n \n+from pre_commit import five\n from pre_commit.errors import FatalError\n+from pre_commit.output import sys_stdout_write_wrapper\n from pre_commit.store import Store\n \n \n@@ -16,15 +18,15 @@\n pass\n \n \n-def _log_and_exit(msg, exc, formatted, print_fn=print):\n- error_msg = '{0}: {1}: {2}'.format(msg, type(exc).__name__, exc)\n- print_fn(error_msg)\n- print_fn('Check the log at ~/.pre-commit/pre-commit.log')\n+def _log_and_exit(msg, exc, formatted, write_fn=sys_stdout_write_wrapper):\n+ error_msg = '{0}: {1}: {2}\\n'.format(msg, type(exc).__name__, exc)\n+ write_fn(error_msg)\n+ write_fn('Check the log at ~/.pre-commit/pre-commit.log\\n')\n store = Store()\n store.require_created()\n- with io.open(os.path.join(store.directory, 'pre-commit.log'), 'w') as log:\n- log.write(error_msg + '\\n')\n- log.write(formatted + '\\n')\n+ with io.open(os.path.join(store.directory, 'pre-commit.log'), 'wb') as log:\n+ log.write(five.to_bytes(error_msg))\n+ log.write(five.to_bytes(formatted) + b'\\n')\n raise PreCommitSystemExit(1)\n", "issue": "Non-ascii prints in error handler without tty cause stacktrace\n```\n23:00:13 style runtests: commands[0] | pre-commit run --all-files\n23:00:13 [INFO] Installing environment for [email protected]:mirrors/pre-commit/mirrors-jshint.\n23:00:13 [INFO] Once installed this environment will be reused.\n23:00:13 [INFO] This may take a few minutes...\n23:01:33 Traceback (most recent call last):\n23:01:33 File \".tox/style/bin/pre-commit\", line 11, in <module>\n23:01:33 sys.exit(main())\n23:01:33 File \".../.tox/style/local/lib/python2.7/site-packages/pre_commit/main.py\", line 157, in main\n23:01:33 'Command {0} failed to exit with a returncode'.format(args.command)\n23:01:33 File \"/usr/lib64/python2.7/contextlib.py\", line 35, in __exit__\n23:01:33 self.gen.throw(type, value, traceback)\n23:01:33 File \".../.tox/style/local/lib/python2.7/site-packages/pre_commit/error_handler.py\", line 41, in error_handler\n23:01:33 traceback.format_exc(),\n23:01:33 File \".../.tox/style/local/lib/python2.7/site-packages/pre_commit/error_handler.py\", line 21, in _log_and_exit\n23:01:33 print_fn(error_msg)\n23:01:33 UnicodeEncodeError: 'ascii' codec can't encode characters in position 735-737: ordinal not in range(128)\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport os.path\nimport traceback\n\nfrom pre_commit.errors import FatalError\nfrom pre_commit.store import Store\n\n\n# For testing purposes\nclass PreCommitSystemExit(SystemExit):\n pass\n\n\ndef _log_and_exit(msg, exc, formatted, print_fn=print):\n error_msg = '{0}: {1}: {2}'.format(msg, type(exc).__name__, exc)\n print_fn(error_msg)\n print_fn('Check the log at ~/.pre-commit/pre-commit.log')\n store = Store()\n store.require_created()\n with io.open(os.path.join(store.directory, 'pre-commit.log'), 'w') as log:\n log.write(error_msg + '\\n')\n log.write(formatted + '\\n')\n raise PreCommitSystemExit(1)\n\n\[email protected]\ndef error_handler():\n try:\n yield\n except FatalError as e:\n _log_and_exit('An error has occurred', e, traceback.format_exc())\n except Exception as e:\n _log_and_exit(\n 'An unexpected error has occurred',\n e,\n traceback.format_exc(),\n 
)\n", "path": "pre_commit/error_handler.py"}]} | 1,302 | 344 |
gh_patches_debug_7181 | rasdani/github-patches | git_diff | vega__altair-3074 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include a suggestion to update frontend (Jupyterlab, ...) in mimetype error
Follow-up that comes out of #2585. Raised by @joelostblom:
> do you think we need to communicate the minimum version of JupyterLab that supports Altair 5 somewhere? I am thinking ideally directly in the error message if possible, but otherwise at least in the docs and release notes, what do you all think?
</issue>
<code>
[start of altair/vegalite/v5/display.py]
1 import os
2
3 from ...utils.mimebundle import spec_to_mimebundle
4 from ..display import Displayable
5 from ..display import default_renderer_base
6 from ..display import json_renderer_base
7 from ..display import RendererRegistry
8 from ..display import HTMLRenderer
9
10 from .schema import SCHEMA_VERSION
11
12 VEGALITE_VERSION = SCHEMA_VERSION.lstrip("v")
13 VEGA_VERSION = "5"
14 VEGAEMBED_VERSION = "6"
15
16
17 # ==============================================================================
18 # VegaLite v5 renderer logic
19 # ==============================================================================
20
21
22 # The MIME type for Vega-Lite 5.x releases.
23 VEGALITE_MIME_TYPE = "application/vnd.vegalite.v5+json" # type: str
24
25 # The entry point group that can be used by other packages to declare other
26 # renderers that will be auto-detected. Explicit registration is also
27 # allowed by the PluginRegistery API.
28 ENTRY_POINT_GROUP = "altair.vegalite.v5.renderer" # type: str
29
30 # The display message when rendering fails
31 DEFAULT_DISPLAY = """\
32 <VegaLite 5 object>
33
34 If you see this message, it means the renderer has not been properly enabled
35 for the frontend that you are using. For more information, see
36 https://altair-viz.github.io/user_guide/display_frontends.html#troubleshooting
37 """
38
39 renderers = RendererRegistry(entry_point_group=ENTRY_POINT_GROUP)
40
41 here = os.path.dirname(os.path.realpath(__file__))
42
43
44 def mimetype_renderer(spec, **metadata):
45 return default_renderer_base(spec, VEGALITE_MIME_TYPE, DEFAULT_DISPLAY, **metadata)
46
47
48 def json_renderer(spec, **metadata):
49 return json_renderer_base(spec, DEFAULT_DISPLAY, **metadata)
50
51
52 def png_renderer(spec, **metadata):
53 return spec_to_mimebundle(
54 spec,
55 format="png",
56 mode="vega-lite",
57 vega_version=VEGA_VERSION,
58 vegaembed_version=VEGAEMBED_VERSION,
59 vegalite_version=VEGALITE_VERSION,
60 **metadata,
61 )
62
63
64 def svg_renderer(spec, **metadata):
65 return spec_to_mimebundle(
66 spec,
67 format="svg",
68 mode="vega-lite",
69 vega_version=VEGA_VERSION,
70 vegaembed_version=VEGAEMBED_VERSION,
71 vegalite_version=VEGALITE_VERSION,
72 **metadata,
73 )
74
75
76 html_renderer = HTMLRenderer(
77 mode="vega-lite",
78 template="universal",
79 vega_version=VEGA_VERSION,
80 vegaembed_version=VEGAEMBED_VERSION,
81 vegalite_version=VEGALITE_VERSION,
82 )
83
84 renderers.register("default", html_renderer)
85 renderers.register("html", html_renderer)
86 renderers.register("colab", html_renderer)
87 renderers.register("kaggle", html_renderer)
88 renderers.register("zeppelin", html_renderer)
89 renderers.register("mimetype", mimetype_renderer)
90 renderers.register("jupyterlab", mimetype_renderer)
91 renderers.register("nteract", mimetype_renderer)
92 renderers.register("json", json_renderer)
93 renderers.register("png", png_renderer)
94 renderers.register("svg", svg_renderer)
95 renderers.enable("default")
96
97
98 class VegaLite(Displayable):
99 """An IPython/Jupyter display class for rendering VegaLite 5."""
100
101 renderers = renderers
102 schema_path = (__name__, "schema/vega-lite-schema.json")
103
104
105 def vegalite(spec, validate=True):
106 """Render and optionally validate a VegaLite 5 spec.
107
108 This will use the currently enabled renderer to render the spec.
109
110 Parameters
111 ==========
112 spec: dict
113 A fully compliant VegaLite 5 spec, with the data portion fully processed.
114 validate: bool
115 Should the spec be validated against the VegaLite 5 schema?
116 """
117 from IPython.display import display
118
119 display(VegaLite(spec, validate=validate))
120
[end of altair/vegalite/v5/display.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/vegalite/v5/display.py b/altair/vegalite/v5/display.py
--- a/altair/vegalite/v5/display.py
+++ b/altair/vegalite/v5/display.py
@@ -28,8 +28,8 @@
ENTRY_POINT_GROUP = "altair.vegalite.v5.renderer" # type: str
# The display message when rendering fails
-DEFAULT_DISPLAY = """\
-<VegaLite 5 object>
+DEFAULT_DISPLAY = f"""\
+<VegaLite {VEGALITE_VERSION.split('.')[0]} object>
If you see this message, it means the renderer has not been properly enabled
for the frontend that you are using. For more information, see
| {"golden_diff": "diff --git a/altair/vegalite/v5/display.py b/altair/vegalite/v5/display.py\n--- a/altair/vegalite/v5/display.py\n+++ b/altair/vegalite/v5/display.py\n@@ -28,8 +28,8 @@\n ENTRY_POINT_GROUP = \"altair.vegalite.v5.renderer\" # type: str\n \n # The display message when rendering fails\n-DEFAULT_DISPLAY = \"\"\"\\\n-<VegaLite 5 object>\n+DEFAULT_DISPLAY = f\"\"\"\\\n+<VegaLite {VEGALITE_VERSION.split('.')[0]} object>\n \n If you see this message, it means the renderer has not been properly enabled\n for the frontend that you are using. For more information, see\n", "issue": "Include a suggestion to update frontend (Jupyterlab, ...) in mimetype error\nFollow-up that comes out of #2585. Raised by @joelostblom:\r\n\r\n> do you think we need to communicate the minimum version of JuptyerLab that support Altair 5 somewhere? I am thinking ideally directly in the error message if possible, but otherwise at least in the docs and release notes, what do you all think?\n", "before_files": [{"content": "import os\n\nfrom ...utils.mimebundle import spec_to_mimebundle\nfrom ..display import Displayable\nfrom ..display import default_renderer_base\nfrom ..display import json_renderer_base\nfrom ..display import RendererRegistry\nfrom ..display import HTMLRenderer\n\nfrom .schema import SCHEMA_VERSION\n\nVEGALITE_VERSION = SCHEMA_VERSION.lstrip(\"v\")\nVEGA_VERSION = \"5\"\nVEGAEMBED_VERSION = \"6\"\n\n\n# ==============================================================================\n# VegaLite v5 renderer logic\n# ==============================================================================\n\n\n# The MIME type for Vega-Lite 5.x releases.\nVEGALITE_MIME_TYPE = \"application/vnd.vegalite.v5+json\" # type: str\n\n# The entry point group that can be used by other packages to declare other\n# renderers that will be auto-detected. Explicit registration is also\n# allowed by the PluginRegistery API.\nENTRY_POINT_GROUP = \"altair.vegalite.v5.renderer\" # type: str\n\n# The display message when rendering fails\nDEFAULT_DISPLAY = \"\"\"\\\n<VegaLite 5 object>\n\nIf you see this message, it means the renderer has not been properly enabled\nfor the frontend that you are using. 
For more information, see\nhttps://altair-viz.github.io/user_guide/display_frontends.html#troubleshooting\n\"\"\"\n\nrenderers = RendererRegistry(entry_point_group=ENTRY_POINT_GROUP)\n\nhere = os.path.dirname(os.path.realpath(__file__))\n\n\ndef mimetype_renderer(spec, **metadata):\n return default_renderer_base(spec, VEGALITE_MIME_TYPE, DEFAULT_DISPLAY, **metadata)\n\n\ndef json_renderer(spec, **metadata):\n return json_renderer_base(spec, DEFAULT_DISPLAY, **metadata)\n\n\ndef png_renderer(spec, **metadata):\n return spec_to_mimebundle(\n spec,\n format=\"png\",\n mode=\"vega-lite\",\n vega_version=VEGA_VERSION,\n vegaembed_version=VEGAEMBED_VERSION,\n vegalite_version=VEGALITE_VERSION,\n **metadata,\n )\n\n\ndef svg_renderer(spec, **metadata):\n return spec_to_mimebundle(\n spec,\n format=\"svg\",\n mode=\"vega-lite\",\n vega_version=VEGA_VERSION,\n vegaembed_version=VEGAEMBED_VERSION,\n vegalite_version=VEGALITE_VERSION,\n **metadata,\n )\n\n\nhtml_renderer = HTMLRenderer(\n mode=\"vega-lite\",\n template=\"universal\",\n vega_version=VEGA_VERSION,\n vegaembed_version=VEGAEMBED_VERSION,\n vegalite_version=VEGALITE_VERSION,\n)\n\nrenderers.register(\"default\", html_renderer)\nrenderers.register(\"html\", html_renderer)\nrenderers.register(\"colab\", html_renderer)\nrenderers.register(\"kaggle\", html_renderer)\nrenderers.register(\"zeppelin\", html_renderer)\nrenderers.register(\"mimetype\", mimetype_renderer)\nrenderers.register(\"jupyterlab\", mimetype_renderer)\nrenderers.register(\"nteract\", mimetype_renderer)\nrenderers.register(\"json\", json_renderer)\nrenderers.register(\"png\", png_renderer)\nrenderers.register(\"svg\", svg_renderer)\nrenderers.enable(\"default\")\n\n\nclass VegaLite(Displayable):\n \"\"\"An IPython/Jupyter display class for rendering VegaLite 5.\"\"\"\n\n renderers = renderers\n schema_path = (__name__, \"schema/vega-lite-schema.json\")\n\n\ndef vegalite(spec, validate=True):\n \"\"\"Render and optionally validate a VegaLite 5 spec.\n\n This will use the currently enabled renderer to render the spec.\n\n Parameters\n ==========\n spec: dict\n A fully compliant VegaLite 5 spec, with the data portion fully processed.\n validate: bool\n Should the spec be validated against the VegaLite 5 schema?\n \"\"\"\n from IPython.display import display\n\n display(VegaLite(spec, validate=validate))\n", "path": "altair/vegalite/v5/display.py"}]} | 1,720 | 168 |
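The one-line change derives the major version in the fallback message from `VEGALITE_VERSION`, so the hint stays correct across schema bumps. A standalone sketch with a placeholder version string (the real value comes from altair's `SCHEMA_VERSION`):

```python
# "5.8.0" is an assumed example value, not altair's actual constant.
VEGALITE_VERSION = "5.8.0"

DEFAULT_DISPLAY = f"""\
<VegaLite {VEGALITE_VERSION.split('.')[0]} object>

If you see this message, it means the renderer has not been properly enabled
for the frontend that you are using. For more information, see
https://altair-viz.github.io/user_guide/display_frontends.html#troubleshooting
"""

assert DEFAULT_DISPLAY.startswith("<VegaLite 5 object>")
```

Per the issue discussion, a further improvement could append a note about the minimum supported frontend (e.g. JupyterLab) version to this same string.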
gh_patches_debug_7111 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1757 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"'function' object has no attribute '_reset'" in `pwndbg/pwndbg/color/syntax_highlight.py`
<!--
Before reporting a new issue, make sure that we do not have any duplicates already open.
If there is one it might be good to take part in the discussion there.
Please make sure you have checked that the issue persists on LATEST pwndbg version.
Below is a template for BUG REPORTS.
Don't include it if this is a FEATURE REQUEST.
-->
### Description
Using the latest dev branch, with the `set syntax-highlight-style solarized-light` param set, the error below is thrown:
```bash
$ gdb ~/matrix-matrix-multiply/build/src/dgemm -ex 'start' -ex ''
pwndbg: loaded 141 pwndbg commands and 42 shell commands. Type pwndbg [--shell | --all] [filter] for a list.
pwndbg: created $rebase, $ida GDB functions (can be used with print/break)
Traceback (most recent call last):
File "/home/czg/pwndbg/pwndbg/gdblib/config.py", line 93, in __get_set_string_gdb_gte_9
trigger()
File "/home/czg/pwndbg/pwndbg/color/syntax_highlight.py", line 37, in check_style
get_highlight_source._reset()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'function' object has no attribute '_reset'
/home/czg/.gdbinit:19: Error in sourced command file:
Error occurred in Python: 'function' object has no attribute '_reset'
```
when I read `$ git log -p pwndbg/commands/context.py` I found nothing about `_reset`, but the error only appeared after I recently upgraded Python from `3.10` to `3.11.3` and updated pwndbg.
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
<!--
### Steps to reproduce
-->
<!--
What do we have to do to reproduce the problem?
If this is connected to particular C/asm code,
please provide the smallest C code that reproduces the issue.
-->
<!--
### My setup
-->
<!--
Show us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).
NOTE: We are currently supporting only Ubuntu installations.
It is known that pwndbg is not fully working e.g. on Arch Linux (the heap stuff is not working there).
If you would like to change this situation - help us improving pwndbg and supporting other distros!
This can be displayed in pwndbg through `version` command.
If it is somehow unavailable, use:
* `show version` - for gdb
* `py import sys; print(sys.version)` - for python
* pwndbg version/git commit id
-->
</issue>
<code>
[start of pwndbg/color/syntax_highlight.py]
1 import os.path
2 import re
3 from typing import Any
4 from typing import Dict
5
6 import pygments
7 import pygments.formatters
8 import pygments.lexers
9
10 import pwndbg.gdblib.config
11 from pwndbg.color import disable_colors
12 from pwndbg.color import message
13 from pwndbg.color import theme
14 from pwndbg.color.lexer import PwntoolsLexer
15
16 pwndbg.gdblib.config.add_param("syntax-highlight", True, "Source code / assembly syntax highlight")
17 style = theme.add_param(
18 "syntax-highlight-style",
19 "monokai",
20 "Source code / assembly syntax highlight stylename of pygments module",
21 )
22
23 formatter = pygments.formatters.Terminal256Formatter(style=str(style))
24 pwntools_lexer = PwntoolsLexer()
25 lexer_cache: Dict[str, Any] = {}
26
27
28 @pwndbg.gdblib.config.trigger(style)
29 def check_style() -> None:
30 global formatter
31 try:
32 formatter = pygments.formatters.Terminal256Formatter(style=str(style))
33
34 # Reset the highlighted source cache
35 from pwndbg.commands.context import get_highlight_source
36
37 get_highlight_source._reset()
38 except pygments.util.ClassNotFound:
39 print(
40 message.warn(f"The pygment formatter style '{style}' is not found, restore to default")
41 )
42 style.revert_default()
43
44
45 def syntax_highlight(code, filename=".asm"):
46 # No syntax highlight if pygment is not installed
47 if disable_colors:
48 return code
49
50 filename = os.path.basename(filename)
51
52 lexer = lexer_cache.get(filename, None)
53
54 # If source code is asm, use our customized lexer.
55 # Note: We can not register our Lexer to pygments and use their APIs,
56 # since the pygment only search the lexers installed via setuptools.
57 if not lexer:
58 for glob_pat in PwntoolsLexer.filenames:
59 pat = "^" + glob_pat.replace(".", r"\.").replace("*", r".*") + "$"
60 if re.match(pat, filename):
61 lexer = pwntools_lexer
62 break
63
64 if not lexer:
65 try:
66 lexer = pygments.lexers.guess_lexer_for_filename(filename, code, stripnl=False)
67 except pygments.util.ClassNotFound:
68 # no lexer for this file or invalid style
69 pass
70
71 if lexer:
72 lexer_cache[filename] = lexer
73
74 code = pygments.highlight(code, lexer, formatter).rstrip()
75
76 return code
77
[end of pwndbg/color/syntax_highlight.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/color/syntax_highlight.py b/pwndbg/color/syntax_highlight.py
--- a/pwndbg/color/syntax_highlight.py
+++ b/pwndbg/color/syntax_highlight.py
@@ -34,7 +34,7 @@
# Reset the highlighted source cache
from pwndbg.commands.context import get_highlight_source
- get_highlight_source._reset()
+ get_highlight_source.cache.clear()
except pygments.util.ClassNotFound:
print(
message.warn(f"The pygment formatter style '{style}' is not found, restore to default")
| {"golden_diff": "diff --git a/pwndbg/color/syntax_highlight.py b/pwndbg/color/syntax_highlight.py\n--- a/pwndbg/color/syntax_highlight.py\n+++ b/pwndbg/color/syntax_highlight.py\n@@ -34,7 +34,7 @@\n # Reset the highlighted source cache\n from pwndbg.commands.context import get_highlight_source\n \n- get_highlight_source._reset()\n+ get_highlight_source.cache.clear()\n except pygments.util.ClassNotFound:\n print(\n message.warn(f\"The pygment formatter style '{style}' is not found, restore to default\")\n", "issue": "\"'function' object has no attribute '_reset'\" in `pwndbg/pwndbg/color/syntax_highlight.py`\n<!--\r\nBefore reporting a new issue, make sure that we do not have any duplicates already open.\r\nIf there is one it might be good to take part in the discussion there.\r\n\r\nPlease make sure you have checked that the issue persists on LATEST pwndbg version.\r\n\r\nBelow is a template for BUG REPORTS.\r\nDon't include it if this is a FEATURE REQUEST.\r\n-->\r\n\r\n\r\n### Description\r\n\r\nusing latest dev branch, with `set syntax-highlight-style solarized-light` param, above error will be thrown\r\n\r\n```bash\r\n$ gdb ~/matrix-matrix-multiply/build/src/dgemm -ex 'start' -ex ''\r\npwndbg: loaded 141 pwndbg commands and 42 shell commands. Type pwndbg [--shell | --all] [filter] for a list.\r\npwndbg: created $rebase, $ida GDB functions (can be used with print/break)\r\nTraceback (most recent call last):\r\n File \"/home/czg/pwndbg/pwndbg/gdblib/config.py\", line 93, in __get_set_string_gdb_gte_9\r\n trigger()\r\n File \"/home/czg/pwndbg/pwndbg/color/syntax_highlight.py\", line 37, in check_style\r\n get_highlight_source._reset()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: 'function' object has no attribute '_reset'\r\n/home/czg/.gdbinit:19: Error in sourced command file:\r\nError occurred in Python: 'function' object has no attribute '_reset'\r\n```\r\n\r\nwhen I read `$ git log -p pwndbg/commands/context.py` I found nothing about `_reset`, but the error only exists after I upgrade python from `3.10` to `3.11.3` and pwndbg recently.\r\n\r\n<!--\r\nBriefly describe the problem you are having in a few paragraphs.\r\n-->\r\n\r\n<!--\r\n### Steps to reproduce\r\n-->\r\n\r\n<!--\r\nWhat do we have to do to reproduce the problem?\r\nIf this is connected to particular C/asm code, \r\nplease provide the smallest C code that reproduces the issue.\r\n-->\r\n\r\n<!--\r\n### My setup\r\n-->\r\n\r\n<!--\r\nShow us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).\r\n\r\nNOTE: We are currently supporting only Ubuntu installations.\r\nIt is known that pwndbg is not fully working e.g. 
on Arch Linux (the heap stuff is not working there).\r\nIf you would like to change this situation - help us improving pwndbg and supporting other distros!\r\n\r\nThis can be displayed in pwndbg through `version` command.\r\n\r\nIf it is somehow unavailable, use:\r\n* `show version` - for gdb\r\n* `py import sys; print(sys.version)` - for python\r\n* pwndbg version/git commit id\r\n-->\r\n\n", "before_files": [{"content": "import os.path\nimport re\nfrom typing import Any\nfrom typing import Dict\n\nimport pygments\nimport pygments.formatters\nimport pygments.lexers\n\nimport pwndbg.gdblib.config\nfrom pwndbg.color import disable_colors\nfrom pwndbg.color import message\nfrom pwndbg.color import theme\nfrom pwndbg.color.lexer import PwntoolsLexer\n\npwndbg.gdblib.config.add_param(\"syntax-highlight\", True, \"Source code / assembly syntax highlight\")\nstyle = theme.add_param(\n \"syntax-highlight-style\",\n \"monokai\",\n \"Source code / assembly syntax highlight stylename of pygments module\",\n)\n\nformatter = pygments.formatters.Terminal256Formatter(style=str(style))\npwntools_lexer = PwntoolsLexer()\nlexer_cache: Dict[str, Any] = {}\n\n\[email protected](style)\ndef check_style() -> None:\n global formatter\n try:\n formatter = pygments.formatters.Terminal256Formatter(style=str(style))\n\n # Reset the highlighted source cache\n from pwndbg.commands.context import get_highlight_source\n\n get_highlight_source._reset()\n except pygments.util.ClassNotFound:\n print(\n message.warn(f\"The pygment formatter style '{style}' is not found, restore to default\")\n )\n style.revert_default()\n\n\ndef syntax_highlight(code, filename=\".asm\"):\n # No syntax highlight if pygment is not installed\n if disable_colors:\n return code\n\n filename = os.path.basename(filename)\n\n lexer = lexer_cache.get(filename, None)\n\n # If source code is asm, use our customized lexer.\n # Note: We can not register our Lexer to pygments and use their APIs,\n # since the pygment only search the lexers installed via setuptools.\n if not lexer:\n for glob_pat in PwntoolsLexer.filenames:\n pat = \"^\" + glob_pat.replace(\".\", r\"\\.\").replace(\"*\", r\".*\") + \"$\"\n if re.match(pat, filename):\n lexer = pwntools_lexer\n break\n\n if not lexer:\n try:\n lexer = pygments.lexers.guess_lexer_for_filename(filename, code, stripnl=False)\n except pygments.util.ClassNotFound:\n # no lexer for this file or invalid style\n pass\n\n if lexer:\n lexer_cache[filename] = lexer\n\n code = pygments.highlight(code, lexer, formatter).rstrip()\n\n return code\n", "path": "pwndbg/color/syntax_highlight.py"}]} | 1,840 | 124 |
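The crash comes from calling `_reset()`, an attribute the newer caching decorator no longer provides; the fix clears the decorator's exposed `cache` mapping instead. A toy memoizer showing the same call shape (illustrative only, not pwndbg's actual decorator):

```python
import functools

def cached(func):
    """Toy memoizer exposing a .cache dict, mirroring the fixed call site."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    wrapper.cache = cache  # callers can invalidate via wrapper.cache.clear()
    return wrapper

@cached
def get_highlight_source(filename):
    return f"highlighted:{filename}"

get_highlight_source("a.c")
get_highlight_source.cache.clear()  # what the fix calls after a style change
```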
gh_patches_debug_4793 | rasdani/github-patches | git_diff | iterative__dvc-6688 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
http: allow reading proxies from the current environment
This was something enabled by default in requests, but not in aiohttp; we have to enable it explicitly to keep the current behavior. https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support. Discord context: https://discord.com/channels/485586884165107732/563406153334128681/891230518992052274
</issue>
<code>
[start of dvc/fs/http.py]
1 import threading
2
3 from funcy import cached_property, memoize, wrap_with
4
5 from dvc import prompt
6 from dvc.path_info import HTTPURLInfo
7 from dvc.scheme import Schemes
8
9 from .fsspec_wrapper import FSSpecWrapper, NoDirectoriesMixin
10
11
12 @wrap_with(threading.Lock())
13 @memoize
14 def ask_password(host, user):
15 return prompt.password(
16 "Enter a password for "
17 "host '{host}' user '{user}'".format(host=host, user=user)
18 )
19
20
21 def make_context(ssl_verify):
22 if isinstance(ssl_verify, bool) or ssl_verify is None:
23 return ssl_verify
24
25 # If this is a path, then we will create an
26 # SSL context for it, and load the given certificate.
27 import ssl
28
29 context = ssl.create_default_context()
30 context.load_verify_locations(ssl_verify)
31 return context
32
33
34 # pylint: disable=abstract-method
35 class HTTPFileSystem(NoDirectoriesMixin, FSSpecWrapper):
36 scheme = Schemes.HTTP
37 PATH_CLS = HTTPURLInfo
38 PARAM_CHECKSUM = "checksum"
39 REQUIRES = {"aiohttp": "aiohttp", "aiohttp-retry": "aiohttp_retry"}
40 CAN_TRAVERSE = False
41
42 SESSION_RETRIES = 5
43 SESSION_BACKOFF_FACTOR = 0.1
44 REQUEST_TIMEOUT = 60
45
46 def _prepare_credentials(self, **config):
47 import aiohttp
48 from fsspec.asyn import fsspec_loop
49
50 from dvc.config import ConfigError
51
52 credentials = {}
53 client_kwargs = credentials.setdefault("client_kwargs", {})
54
55 if config.get("auth"):
56 user = config.get("user")
57 password = config.get("password")
58 custom_auth_header = config.get("custom_auth_header")
59
60 if password is None and config.get("ask_password"):
61 password = ask_password(config.get("url"), user or "custom")
62
63 auth_method = config["auth"]
64 if auth_method == "basic":
65 if user is None or password is None:
66 raise ConfigError(
67 "HTTP 'basic' authentication require both "
68 "'user' and 'password'"
69 )
70
71 client_kwargs["auth"] = aiohttp.BasicAuth(user, password)
72 elif auth_method == "custom":
73 if custom_auth_header is None or password is None:
74 raise ConfigError(
75 "HTTP 'custom' authentication require both "
76 "'custom_auth_header' and 'password'"
77 )
78 credentials["headers"] = {custom_auth_header: password}
79 else:
80 raise NotImplementedError(
81 f"Auth method {auth_method!r} is not supported."
82 )
83
84 if "ssl_verify" in config:
85 with fsspec_loop():
86 client_kwargs["connector"] = aiohttp.TCPConnector(
87 ssl=make_context(config["ssl_verify"])
88 )
89
90 credentials["get_client"] = self.get_client
91 self.upload_method = config.get("method", "POST")
92 return credentials
93
94 async def get_client(self, **kwargs):
95 import aiohttp
96 from aiohttp_retry import ExponentialRetry, RetryClient
97
98 kwargs["retry_options"] = ExponentialRetry(
99 attempts=self.SESSION_RETRIES,
100 factor=self.SESSION_BACKOFF_FACTOR,
101 max_timeout=self.REQUEST_TIMEOUT,
102 )
103
104 # The default timeout for the aiohttp is 300 seconds
105 # which is too low for DVC's interactions (especially
106 # on the read) when dealing with large data blobs. We
107 # unlimit the total time to read, and only limit the
108 # time that is spent when connecting to the remote server.
109 kwargs["timeout"] = aiohttp.ClientTimeout(
110 total=None,
111 connect=self.REQUEST_TIMEOUT,
112 sock_connect=self.REQUEST_TIMEOUT,
113 sock_read=None,
114 )
115
116 return RetryClient(**kwargs)
117
118 @cached_property
119 def fs(self):
120 from fsspec.implementations.http import (
121 HTTPFileSystem as _HTTPFileSystem,
122 )
123
124 return _HTTPFileSystem(**self.fs_args)
125
126 def _entry_hook(self, entry):
127 entry["checksum"] = entry.get("ETag") or entry.get("Content-MD5")
128 return entry
129
[end of dvc/fs/http.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/fs/http.py b/dvc/fs/http.py
--- a/dvc/fs/http.py
+++ b/dvc/fs/http.py
@@ -87,6 +87,9 @@
ssl=make_context(config["ssl_verify"])
)
+ # Allow reading proxy configurations from the environment.
+ client_kwargs["trust_env"] = True
+
credentials["get_client"] = self.get_client
self.upload_method = config.get("method", "POST")
return credentials
| {"golden_diff": "diff --git a/dvc/fs/http.py b/dvc/fs/http.py\n--- a/dvc/fs/http.py\n+++ b/dvc/fs/http.py\n@@ -87,6 +87,9 @@\n ssl=make_context(config[\"ssl_verify\"])\n )\n \n+ # Allow reading proxy configurations from the environment.\n+ client_kwargs[\"trust_env\"] = True\n+\n credentials[\"get_client\"] = self.get_client\n self.upload_method = config.get(\"method\", \"POST\")\n return credentials\n", "issue": "http: allow reading proxies from the current environment\nThis was something enabled as default by the requests, but not on aiohttp. We have to explicitly enable it to keep the current behavior. https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support. Discord context: https://discord.com/channels/485586884165107732/563406153334128681/891230518992052274\n", "before_files": [{"content": "import threading\n\nfrom funcy import cached_property, memoize, wrap_with\n\nfrom dvc import prompt\nfrom dvc.path_info import HTTPURLInfo\nfrom dvc.scheme import Schemes\n\nfrom .fsspec_wrapper import FSSpecWrapper, NoDirectoriesMixin\n\n\n@wrap_with(threading.Lock())\n@memoize\ndef ask_password(host, user):\n return prompt.password(\n \"Enter a password for \"\n \"host '{host}' user '{user}'\".format(host=host, user=user)\n )\n\n\ndef make_context(ssl_verify):\n if isinstance(ssl_verify, bool) or ssl_verify is None:\n return ssl_verify\n\n # If this is a path, then we will create an\n # SSL context for it, and load the given certificate.\n import ssl\n\n context = ssl.create_default_context()\n context.load_verify_locations(ssl_verify)\n return context\n\n\n# pylint: disable=abstract-method\nclass HTTPFileSystem(NoDirectoriesMixin, FSSpecWrapper):\n scheme = Schemes.HTTP\n PATH_CLS = HTTPURLInfo\n PARAM_CHECKSUM = \"checksum\"\n REQUIRES = {\"aiohttp\": \"aiohttp\", \"aiohttp-retry\": \"aiohttp_retry\"}\n CAN_TRAVERSE = False\n\n SESSION_RETRIES = 5\n SESSION_BACKOFF_FACTOR = 0.1\n REQUEST_TIMEOUT = 60\n\n def _prepare_credentials(self, **config):\n import aiohttp\n from fsspec.asyn import fsspec_loop\n\n from dvc.config import ConfigError\n\n credentials = {}\n client_kwargs = credentials.setdefault(\"client_kwargs\", {})\n\n if config.get(\"auth\"):\n user = config.get(\"user\")\n password = config.get(\"password\")\n custom_auth_header = config.get(\"custom_auth_header\")\n\n if password is None and config.get(\"ask_password\"):\n password = ask_password(config.get(\"url\"), user or \"custom\")\n\n auth_method = config[\"auth\"]\n if auth_method == \"basic\":\n if user is None or password is None:\n raise ConfigError(\n \"HTTP 'basic' authentication require both \"\n \"'user' and 'password'\"\n )\n\n client_kwargs[\"auth\"] = aiohttp.BasicAuth(user, password)\n elif auth_method == \"custom\":\n if custom_auth_header is None or password is None:\n raise ConfigError(\n \"HTTP 'custom' authentication require both \"\n \"'custom_auth_header' and 'password'\"\n )\n credentials[\"headers\"] = {custom_auth_header: password}\n else:\n raise NotImplementedError(\n f\"Auth method {auth_method!r} is not supported.\"\n )\n\n if \"ssl_verify\" in config:\n with fsspec_loop():\n client_kwargs[\"connector\"] = aiohttp.TCPConnector(\n ssl=make_context(config[\"ssl_verify\"])\n )\n\n credentials[\"get_client\"] = self.get_client\n self.upload_method = config.get(\"method\", \"POST\")\n return credentials\n\n async def get_client(self, **kwargs):\n import aiohttp\n from aiohttp_retry import ExponentialRetry, RetryClient\n\n kwargs[\"retry_options\"] = ExponentialRetry(\n 
attempts=self.SESSION_RETRIES,\n factor=self.SESSION_BACKOFF_FACTOR,\n max_timeout=self.REQUEST_TIMEOUT,\n )\n\n # The default timeout for the aiohttp is 300 seconds\n # which is too low for DVC's interactions (especially\n # on the read) when dealing with large data blobs. We\n # unlimit the total time to read, and only limit the\n # time that is spent when connecting to the remote server.\n kwargs[\"timeout\"] = aiohttp.ClientTimeout(\n total=None,\n connect=self.REQUEST_TIMEOUT,\n sock_connect=self.REQUEST_TIMEOUT,\n sock_read=None,\n )\n\n return RetryClient(**kwargs)\n\n @cached_property\n def fs(self):\n from fsspec.implementations.http import (\n HTTPFileSystem as _HTTPFileSystem,\n )\n\n return _HTTPFileSystem(**self.fs_args)\n\n def _entry_hook(self, entry):\n entry[\"checksum\"] = entry.get(\"ETag\") or entry.get(\"Content-MD5\")\n return entry\n", "path": "dvc/fs/http.py"}]} | 1,845 | 109 |
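For context: `requests` reads `HTTP_PROXY`/`HTTPS_PROXY`/`NO_PROXY` from the environment by default, while aiohttp only does so when `trust_env=True` is passed to the session, which is exactly what the one-line fix sets. A minimal sketch:

```python
# Minimal aiohttp sketch of the behavior enabled by
# client_kwargs["trust_env"] = True in the fix.
import asyncio
import aiohttp

async def fetch_status(url):
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(url) as resp:
            return resp.status

# With HTTPS_PROXY set in the environment, the request goes through the proxy:
# print(asyncio.run(fetch_status("https://example.com")))
```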
gh_patches_debug_4485 | rasdani/github-patches | git_diff | goauthentik__authentik-8146 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
2023.10.6 - "Please select a username" after Azure AD login
**Describe your question**
Is it now an expected behavior in version 2023.10.6 to ask every user for a username after logging in with Azure AD?

In previous versions it was simply authenticating without any prompt, using the email address from Azure AD as the username.
Now it expects the user to input a username (which leads to duplicated accounts, because users with their email as the username already exist), and if you enter an already existing email as the username it shows an error:

I think it can be related to this fix:
https://github.com/goauthentik/authentik/pull/7970
Is it possible somehow to set this username automatically, or to revert back to using the email address so old user accounts will work again?
**Version and Deployment (please complete the following information):**
- authentik version: 2023.10.6
- Deployment: helm
</issue>
<code>
[start of authentik/sources/oauth/types/azure_ad.py]
1 """AzureAD OAuth2 Views"""
2 from typing import Any
3
4 from structlog.stdlib import get_logger
5
6 from authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient
7 from authentik.sources.oauth.types.oidc import OpenIDConnectOAuth2Callback
8 from authentik.sources.oauth.types.registry import SourceType, registry
9 from authentik.sources.oauth.views.redirect import OAuthRedirect
10
11 LOGGER = get_logger()
12
13
14 class AzureADOAuthRedirect(OAuthRedirect):
15 """Azure AD OAuth2 Redirect"""
16
17 def get_additional_parameters(self, source): # pragma: no cover
18 return {
19 "scope": ["openid", "https://graph.microsoft.com/User.Read"],
20 }
21
22
23 class AzureADOAuthCallback(OpenIDConnectOAuth2Callback):
24 """AzureAD OAuth2 Callback"""
25
26 client_class = UserprofileHeaderAuthClient
27
28 def get_user_enroll_context(
29 self,
30 info: dict[str, Any],
31 ) -> dict[str, Any]:
32 mail = info.get("mail", None) or info.get("otherMails", [None])[0]
33 return {
34 "username": info.get("userPrincipalName"),
35 "email": mail,
36 "name": info.get("displayName"),
37 }
38
39
40 @registry.register()
41 class AzureADType(SourceType):
42 """Azure AD Type definition"""
43
44 callback_view = AzureADOAuthCallback
45 redirect_view = AzureADOAuthRedirect
46 verbose_name = "Azure AD"
47 name = "azuread"
48
49 urls_customizable = True
50
51 authorization_url = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
52 access_token_url = "https://login.microsoftonline.com/common/oauth2/v2.0/token" # nosec
53 profile_url = "https://graph.microsoft.com/v1.0/me"
54 oidc_well_known_url = (
55 "https://login.microsoftonline.com/common/.well-known/openid-configuration"
56 )
57 oidc_jwks_url = "https://login.microsoftonline.com/common/discovery/keys"
58
[end of authentik/sources/oauth/types/azure_ad.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/sources/oauth/types/azure_ad.py b/authentik/sources/oauth/types/azure_ad.py
--- a/authentik/sources/oauth/types/azure_ad.py
+++ b/authentik/sources/oauth/types/azure_ad.py
@@ -25,6 +25,11 @@
client_class = UserprofileHeaderAuthClient
+ def get_user_id(self, info: dict[str, str]) -> str:
+ # Default try to get `id` for the Graph API endpoint
+ # fallback to OpenID logic in case the profile URL was changed
+ return info.get("id", super().get_user_id(info))
+
def get_user_enroll_context(
self,
info: dict[str, Any],
| {"golden_diff": "diff --git a/authentik/sources/oauth/types/azure_ad.py b/authentik/sources/oauth/types/azure_ad.py\n--- a/authentik/sources/oauth/types/azure_ad.py\n+++ b/authentik/sources/oauth/types/azure_ad.py\n@@ -25,6 +25,11 @@\n \n client_class = UserprofileHeaderAuthClient\n \n+ def get_user_id(self, info: dict[str, str]) -> str:\n+ # Default try to get `id` for the Graph API endpoint\n+ # fallback to OpenID logic in case the profile URL was changed\n+ return info.get(\"id\", super().get_user_id(info))\n+\n def get_user_enroll_context(\n self,\n info: dict[str, Any],\n", "issue": "2023.10.6 - \"Please select a username\" after Azure AD login\n**Describe your question/**\r\n\r\nIs it now a expected behavior in 2023.10.6 version to ask every user for username input after logging in with azure ad?\r\n\r\n\r\nIn previous versions it was simply authenticating without any prompt, using email address from Azure AD as username.\r\n\r\nNow it expects user to input username (and it leads to duplicated accounts, because users with mail as username already exist), and if you enter already existing mail as username it shows error:\r\n\r\n\r\nI think it can be related to this fix:\r\nhttps://github.com/goauthentik/authentik/pull/7970\r\n\r\nIs it possible somehow to set this username automatically, or revert back to using email address so old user accounts will work again?\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.10.6\r\n- Deployment: helm\r\n\r\n\n", "before_files": [{"content": "\"\"\"AzureAD OAuth2 Views\"\"\"\nfrom typing import Any\n\nfrom structlog.stdlib import get_logger\n\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\nfrom authentik.sources.oauth.types.oidc import OpenIDConnectOAuth2Callback\nfrom authentik.sources.oauth.types.registry import SourceType, registry\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\n\nLOGGER = get_logger()\n\n\nclass AzureADOAuthRedirect(OAuthRedirect):\n \"\"\"Azure AD OAuth2 Redirect\"\"\"\n\n def get_additional_parameters(self, source): # pragma: no cover\n return {\n \"scope\": [\"openid\", \"https://graph.microsoft.com/User.Read\"],\n }\n\n\nclass AzureADOAuthCallback(OpenIDConnectOAuth2Callback):\n \"\"\"AzureAD OAuth2 Callback\"\"\"\n\n client_class = UserprofileHeaderAuthClient\n\n def get_user_enroll_context(\n self,\n info: dict[str, Any],\n ) -> dict[str, Any]:\n mail = info.get(\"mail\", None) or info.get(\"otherMails\", [None])[0]\n return {\n \"username\": info.get(\"userPrincipalName\"),\n \"email\": mail,\n \"name\": info.get(\"displayName\"),\n }\n\n\[email protected]()\nclass AzureADType(SourceType):\n \"\"\"Azure AD Type definition\"\"\"\n\n callback_view = AzureADOAuthCallback\n redirect_view = AzureADOAuthRedirect\n verbose_name = \"Azure AD\"\n name = \"azuread\"\n\n urls_customizable = True\n\n authorization_url = \"https://login.microsoftonline.com/common/oauth2/v2.0/authorize\"\n access_token_url = \"https://login.microsoftonline.com/common/oauth2/v2.0/token\" # nosec\n profile_url = \"https://graph.microsoft.com/v1.0/me\"\n oidc_well_known_url = (\n \"https://login.microsoftonline.com/common/.well-known/openid-configuration\"\n )\n oidc_jwks_url = \"https://login.microsoftonline.com/common/discovery/keys\"\n", "path": "authentik/sources/oauth/types/azure_ad.py"}]} | 1,413 | 162 |
gh_patches_debug_4482 | rasdani/github-patches | git_diff | aio-libs-abandoned__aioredis-py-313 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing PyPI classifier for Python 3.6 support
Currently, the PyPI page shows the following as supported:
> Programming Language :: Python
> Programming Language :: Python :: 3
> Programming Language :: Python :: 3.3
> Programming Language :: Python :: 3.4
> Programming Language :: Python :: 3.5
However, Python 3.6 is part of the automated tests, and the README states it is supported,
so I'm presuming this is just an omission.
</issue>
<code>
[start of setup.py]
1 import re
2 import os.path
3 import sys
4 import platform
5 from setuptools import setup, find_packages
6
7
8 install_requires = ['async-timeout']
9 if platform.python_implementation() == 'CPython':
10 install_requires.append('hiredis')
11
12 PY_VER = sys.version_info
13
14 if PY_VER >= (3, 4):
15 pass
16 elif PY_VER >= (3, 3):
17 install_requires.append('asyncio')
18 else:
19 raise RuntimeError("aioredis doesn't support Python version prior 3.3")
20
21
22 def read(*parts):
23 with open(os.path.join(*parts), 'rt') as f:
24 return f.read().strip()
25
26
27 def read_version():
28 regexp = re.compile(r"^__version__\W*=\W*'([\d.abrc]+)'")
29 init_py = os.path.join(os.path.dirname(__file__),
30 'aioredis', '__init__.py')
31 with open(init_py) as f:
32 for line in f:
33 match = regexp.match(line)
34 if match is not None:
35 return match.group(1)
36 else:
37 raise RuntimeError('Cannot find version in aioredis/__init__.py')
38
39
40 classifiers = [
41 'License :: OSI Approved :: MIT License',
42 'Development Status :: 4 - Beta',
43 'Programming Language :: Python',
44 'Programming Language :: Python :: 3',
45 'Programming Language :: Python :: 3.3',
46 'Programming Language :: Python :: 3.4',
47 'Programming Language :: Python :: 3.5',
48 'Operating System :: POSIX',
49 'Environment :: Web Environment',
50 'Intended Audience :: Developers',
51 'Topic :: Software Development',
52 'Topic :: Software Development :: Libraries',
53 'Framework :: AsyncIO',
54 ]
55
56 setup(name='aioredis',
57 version=read_version(),
58 description=("asyncio (PEP 3156) Redis support"),
59 long_description="\n\n".join((read('README.rst'), read('CHANGES.txt'))),
60 classifiers=classifiers,
61 platforms=["POSIX"],
62 author="Alexey Popravka",
63 author_email="[email protected]",
64 url="https://github.com/aio-libs/aioredis",
65 license="MIT",
66 packages=find_packages(exclude=["tests"]),
67 install_requires=install_requires,
68 include_package_data=True,
69 )
70
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,6 +45,7 @@
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
+ 'Programming Language :: Python :: 3.6',
'Operating System :: POSIX',
'Environment :: Web Environment',
'Intended Audience :: Developers',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,6 +45,7 @@\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n+ 'Programming Language :: Python :: 3.6',\n 'Operating System :: POSIX',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n", "issue": "Missing PyPI classifier for Python 3.6 support\nCurrently, the PyPI page shows the following as supported:\r\n\r\n> Programming Language :: Python\r\n> Programming Language :: Python :: 3\r\n> Programming Language :: Python :: 3.3\r\n> Programming Language :: Python :: 3.4\r\n> Programming Language :: Python :: 3.5\r\n\r\nHowever, Python 3.6 is part of the automated tests, and the README states it is supported,\r\nso I'm presuming this is just an omission.\n", "before_files": [{"content": "import re\nimport os.path\nimport sys\nimport platform\nfrom setuptools import setup, find_packages\n\n\ninstall_requires = ['async-timeout']\nif platform.python_implementation() == 'CPython':\n install_requires.append('hiredis')\n\nPY_VER = sys.version_info\n\nif PY_VER >= (3, 4):\n pass\nelif PY_VER >= (3, 3):\n install_requires.append('asyncio')\nelse:\n raise RuntimeError(\"aioredis doesn't support Python version prior 3.3\")\n\n\ndef read(*parts):\n with open(os.path.join(*parts), 'rt') as f:\n return f.read().strip()\n\n\ndef read_version():\n regexp = re.compile(r\"^__version__\\W*=\\W*'([\\d.abrc]+)'\")\n init_py = os.path.join(os.path.dirname(__file__),\n 'aioredis', '__init__.py')\n with open(init_py) as f:\n for line in f:\n match = regexp.match(line)\n if match is not None:\n return match.group(1)\n else:\n raise RuntimeError('Cannot find version in aioredis/__init__.py')\n\n\nclassifiers = [\n 'License :: OSI Approved :: MIT License',\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Operating System :: POSIX',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Framework :: AsyncIO',\n]\n\nsetup(name='aioredis',\n version=read_version(),\n description=(\"asyncio (PEP 3156) Redis support\"),\n long_description=\"\\n\\n\".join((read('README.rst'), read('CHANGES.txt'))),\n classifiers=classifiers,\n platforms=[\"POSIX\"],\n author=\"Alexey Popravka\",\n author_email=\"[email protected]\",\n url=\"https://github.com/aio-libs/aioredis\",\n license=\"MIT\",\n packages=find_packages(exclude=[\"tests\"]),\n install_requires=install_requires,\n include_package_data=True,\n )\n", "path": "setup.py"}]} | 1,275 | 104 |
gh_patches_debug_20523 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-507 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"cannot decode v0.3 services payload" error when tracing aiopg
I have a Python application using aiopg and monitored by Datadog. When starting, it sends a bad frame to `trace-agent` and then everything goes fine.
Versions:
- datadog-agent: 6.2.1
- ddtrace-py: 0.12.1
- aiopg: 0.14.0
Here is a minimalist test case:
```python
import asyncio
import aiopg
from ddtrace import patch
patch(aiopg=True)
async def main():
async with aiopg.connect(host=None) as db:
pass
while True:
await asyncio.sleep(0.1)
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
This logs the following error:
```
failed_to_send services to Agent: HTTP error status 400, reason Bad Request, message Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff
Date: Mon, 18 Jun 2018 15:25:18 GMT
Content-Length: 59
```
And then `trace-agent` reports:
```
trace-agent[4437]: 2018-06-18 15:31:16 ERROR (receiver.go:275) - cannot decode v0.3 services payload: msgp: attempted to decode type "nil" with method for "str"
```
I believe this is related to https://github.com/DataDog/datadog-trace-agent/issues/350.
</issue>
<code>
[start of ddtrace/contrib/aiopg/connection.py]
1 import asyncio
2 import wrapt
3
4 from aiopg.utils import _ContextManager
5
6 from .. import dbapi
7 from ...ext import sql
8 from ...pin import Pin
9
10
11 class AIOTracedCursor(wrapt.ObjectProxy):
12 """ TracedCursor wraps a psql cursor and traces it's queries. """
13
14 def __init__(self, cursor, pin):
15 super(AIOTracedCursor, self).__init__(cursor)
16 pin.onto(self)
17 name = pin.app or 'sql'
18 self._datadog_name = '%s.query' % name
19
20 @asyncio.coroutine
21 def _trace_method(self, method, resource, extra_tags, *args, **kwargs):
22 pin = Pin.get_from(self)
23 if not pin or not pin.enabled():
24 result = yield from method(*args, **kwargs) # noqa: E999
25 return result
26 service = pin.service
27
28 with pin.tracer.trace(self._datadog_name, service=service,
29 resource=resource) as s:
30 s.span_type = sql.TYPE
31 s.set_tag(sql.QUERY, resource)
32 s.set_tags(pin.tags)
33 s.set_tags(extra_tags)
34
35 try:
36 result = yield from method(*args, **kwargs)
37 return result
38 finally:
39 s.set_metric("db.rowcount", self.rowcount)
40
41 @asyncio.coroutine
42 def executemany(self, query, *args, **kwargs):
43 # FIXME[matt] properly handle kwargs here. arg names can be different
44 # with different libs.
45 result = yield from self._trace_method(
46 self.__wrapped__.executemany, query, {'sql.executemany': 'true'},
47 query, *args, **kwargs) # noqa: E999
48 return result
49
50 @asyncio.coroutine
51 def execute(self, query, *args, **kwargs):
52 result = yield from self._trace_method(
53 self.__wrapped__.execute, query, {}, query, *args, **kwargs)
54 return result
55
56 @asyncio.coroutine
57 def callproc(self, proc, args):
58 result = yield from self._trace_method(
59 self.__wrapped__.callproc, proc, {}, proc, args) # noqa: E999
60 return result
61
62
63 class AIOTracedConnection(wrapt.ObjectProxy):
64 """ TracedConnection wraps a Connection with tracing code. """
65
66 def __init__(self, conn):
67 super(AIOTracedConnection, self).__init__(conn)
68 name = dbapi._get_vendor(conn)
69 Pin(service=name, app=name).onto(self)
70
71 def cursor(self, *args, **kwargs):
72 # unfortunately we also need to patch this method as otherwise "self"
73 # ends up being the aiopg connection object
74 coro = self._cursor(*args, **kwargs)
75 return _ContextManager(coro)
76
77 @asyncio.coroutine
78 def _cursor(self, *args, **kwargs):
79 cursor = yield from self.__wrapped__._cursor(*args, **kwargs) # noqa: E999
80 pin = Pin.get_from(self)
81 if not pin:
82 return cursor
83 return AIOTracedCursor(cursor, pin)
84
[end of ddtrace/contrib/aiopg/connection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/aiopg/connection.py b/ddtrace/contrib/aiopg/connection.py
--- a/ddtrace/contrib/aiopg/connection.py
+++ b/ddtrace/contrib/aiopg/connection.py
@@ -4,8 +4,8 @@
from aiopg.utils import _ContextManager
from .. import dbapi
-from ...ext import sql
from ...pin import Pin
+from ...ext import sql, AppTypes
class AIOTracedCursor(wrapt.ObjectProxy):
@@ -63,10 +63,11 @@
class AIOTracedConnection(wrapt.ObjectProxy):
""" TracedConnection wraps a Connection with tracing code. """
- def __init__(self, conn):
+ def __init__(self, conn, pin=None):
super(AIOTracedConnection, self).__init__(conn)
name = dbapi._get_vendor(conn)
- Pin(service=name, app=name).onto(self)
+ db_pin = pin or Pin(service=name, app=name, app_type=AppTypes.db)
+ db_pin.onto(self)
def cursor(self, *args, **kwargs):
# unfortunately we also need to patch this method as otherwise "self"
| {"golden_diff": "diff --git a/ddtrace/contrib/aiopg/connection.py b/ddtrace/contrib/aiopg/connection.py\n--- a/ddtrace/contrib/aiopg/connection.py\n+++ b/ddtrace/contrib/aiopg/connection.py\n@@ -4,8 +4,8 @@\n from aiopg.utils import _ContextManager\n \n from .. import dbapi\n-from ...ext import sql\n from ...pin import Pin\n+from ...ext import sql, AppTypes\n \n \n class AIOTracedCursor(wrapt.ObjectProxy):\n@@ -63,10 +63,11 @@\n class AIOTracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. \"\"\"\n \n- def __init__(self, conn):\n+ def __init__(self, conn, pin=None):\n super(AIOTracedConnection, self).__init__(conn)\n name = dbapi._get_vendor(conn)\n- Pin(service=name, app=name).onto(self)\n+ db_pin = pin or Pin(service=name, app=name, app_type=AppTypes.db)\n+ db_pin.onto(self)\n \n def cursor(self, *args, **kwargs):\n # unfortunately we also need to patch this method as otherwise \"self\"\n", "issue": "\"cannot decode v0.3 services payload\" error when tracing aiopg\nI have a Python application using aiopg and monitored by Datadog. When starting, it sends a bad frame to `trace-agent` and then everything goes fine.\r\n\r\nVersions:\r\n- datadog-agent: 6.2.1\r\n- ddtrace-py: 0.12.1\r\n- aiopg: 0.14.0\r\n\r\nHere is a minimalist test case:\r\n\r\n```python\r\nimport asyncio\r\nimport aiopg\r\n\r\nfrom ddtrace import patch\r\n\r\npatch(aiopg=True)\r\n\r\n\r\nasync def main():\r\n async with aiopg.connect(host=None) as db:\r\n pass\r\n\r\n while True:\r\n await asyncio.sleep(0.1)\r\n\r\n\r\nloop = asyncio.get_event_loop()\r\nloop.run_until_complete(main())\r\n```\r\n\r\nThis logs the following error:\r\n```\r\nfailed_to_send services to Agent: HTTP error status 400, reason Bad Request, message Content-Type: text/plain; charset=utf-8\r\nX-Content-Type-Options: nosniff\r\nDate: Mon, 18 Jun 2018 15:25:18 GMT\r\nContent-Length: 59\r\n```\r\n\r\nAnd then `trace-agent` reports:\r\n```\r\ntrace-agent[4437]: 2018-06-18 15:31:16 ERROR (receiver.go:275) - cannot decode v0.3 services payload: msgp: attempted to decode type \"nil\" with method for \"str\"\r\n```\r\n\r\nI believe this is related to https://github.com/DataDog/datadog-trace-agent/issues/350.\n", "before_files": [{"content": "import asyncio\nimport wrapt\n\nfrom aiopg.utils import _ContextManager\n\nfrom .. import dbapi\nfrom ...ext import sql\nfrom ...pin import Pin\n\n\nclass AIOTracedCursor(wrapt.ObjectProxy):\n \"\"\" TracedCursor wraps a psql cursor and traces it's queries. \"\"\"\n\n def __init__(self, cursor, pin):\n super(AIOTracedCursor, self).__init__(cursor)\n pin.onto(self)\n name = pin.app or 'sql'\n self._datadog_name = '%s.query' % name\n\n @asyncio.coroutine\n def _trace_method(self, method, resource, extra_tags, *args, **kwargs):\n pin = Pin.get_from(self)\n if not pin or not pin.enabled():\n result = yield from method(*args, **kwargs) # noqa: E999\n return result\n service = pin.service\n\n with pin.tracer.trace(self._datadog_name, service=service,\n resource=resource) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, resource)\n s.set_tags(pin.tags)\n s.set_tags(extra_tags)\n\n try:\n result = yield from method(*args, **kwargs)\n return result\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n @asyncio.coroutine\n def executemany(self, query, *args, **kwargs):\n # FIXME[matt] properly handle kwargs here. 
arg names can be different\n # with different libs.\n result = yield from self._trace_method(\n self.__wrapped__.executemany, query, {'sql.executemany': 'true'},\n query, *args, **kwargs) # noqa: E999\n return result\n\n @asyncio.coroutine\n def execute(self, query, *args, **kwargs):\n result = yield from self._trace_method(\n self.__wrapped__.execute, query, {}, query, *args, **kwargs)\n return result\n\n @asyncio.coroutine\n def callproc(self, proc, args):\n result = yield from self._trace_method(\n self.__wrapped__.callproc, proc, {}, proc, args) # noqa: E999\n return result\n\n\nclass AIOTracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. \"\"\"\n\n def __init__(self, conn):\n super(AIOTracedConnection, self).__init__(conn)\n name = dbapi._get_vendor(conn)\n Pin(service=name, app=name).onto(self)\n\n def cursor(self, *args, **kwargs):\n # unfortunately we also need to patch this method as otherwise \"self\"\n # ends up being the aiopg connection object\n coro = self._cursor(*args, **kwargs)\n return _ContextManager(coro)\n\n @asyncio.coroutine\n def _cursor(self, *args, **kwargs):\n cursor = yield from self.__wrapped__._cursor(*args, **kwargs) # noqa: E999\n pin = Pin.get_from(self)\n if not pin:\n return cursor\n return AIOTracedCursor(cursor, pin)\n", "path": "ddtrace/contrib/aiopg/connection.py"}]} | 1,768 | 269 |
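
The agent error in this row comes from the services payload carrying a `nil` where a string is expected: the `Pin` attached by the aiopg integration had no `app_type`. The golden diff threads `AppTypes.db` through; a minimal sketch of the corrected pin, assuming the ddtrace 0.12.x API shown in the diff:

```python
from ddtrace import Pin
from ddtrace.ext import AppTypes  # available in the ddtrace 0.12.x line used here


def attach_db_pin(conn, name="postgres", pin=None):
    """Attach a Pin carrying an explicit app_type, as the golden diff does.

    Without app_type, the services payload sent to trace-agent contains a
    nil where a string is expected, triggering the msgp decode error.
    """
    db_pin = pin or Pin(service=name, app=name, app_type=AppTypes.db)
    db_pin.onto(conn)
    return conn
```
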
gh_patches_debug_12558 | rasdani/github-patches | git_diff | hydroshare__hydroshare-1690 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Web App" needs a space (not "WebApp")
On the open web app button there needs to be a space:

</issue>
<code>
[start of hs_tools_resource/migrations/0010_auto_20161203_1913.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.db import migrations, models
5
6
7 class Migration(migrations.Migration):
8
9 dependencies = [
10 ('hs_tools_resource', '0009_auto_20160929_1543'),
11 ]
12
13 operations = [
14 migrations.RemoveField(
15 model_name='toolicon',
16 name='url',
17 ),
18 migrations.AddField(
19 model_name='toolicon',
20 name='value',
21 field=models.CharField(default=b'', max_length=1024, blank=True),
22 ),
23 migrations.AlterField(
24 model_name='apphomepageurl',
25 name='value',
26 field=models.CharField(default=b'', max_length=1024, blank=True),
27 ),
28 migrations.AlterField(
29 model_name='requesturlbase',
30 name='value',
31 field=models.CharField(default=b'', max_length=1024, blank=True),
32 ),
33 migrations.AlterField(
34 model_name='supportedrestypes',
35 name='supported_res_types',
36 field=models.ManyToManyField(to='hs_tools_resource.SupportedResTypeChoices', blank=True),
37 ),
38 migrations.AlterField(
39 model_name='supportedsharingstatus',
40 name='sharing_status',
41 field=models.ManyToManyField(to='hs_tools_resource.SupportedSharingStatusChoices', blank=True),
42 ),
43 ]
44
[end of hs_tools_resource/migrations/0010_auto_20161203_1913.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hs_tools_resource/migrations/0010_auto_20161203_1913.py b/hs_tools_resource/migrations/0010_auto_20161203_1913.py
--- a/hs_tools_resource/migrations/0010_auto_20161203_1913.py
+++ b/hs_tools_resource/migrations/0010_auto_20161203_1913.py
@@ -11,14 +11,15 @@
]
operations = [
- migrations.RemoveField(
+ migrations.AlterField(
model_name='toolicon',
name='url',
+ field=models.CharField(default=b'', max_length=1024, blank=True),
),
- migrations.AddField(
+ migrations.RenameField(
model_name='toolicon',
- name='value',
- field=models.CharField(default=b'', max_length=1024, blank=True),
+ old_name='url',
+ new_name='value'
),
migrations.AlterField(
model_name='apphomepageurl',
| {"golden_diff": "diff --git a/hs_tools_resource/migrations/0010_auto_20161203_1913.py b/hs_tools_resource/migrations/0010_auto_20161203_1913.py\n--- a/hs_tools_resource/migrations/0010_auto_20161203_1913.py\n+++ b/hs_tools_resource/migrations/0010_auto_20161203_1913.py\n@@ -11,14 +11,15 @@\n ]\n \n operations = [\n- migrations.RemoveField(\n+ migrations.AlterField(\n model_name='toolicon',\n name='url',\n+ field=models.CharField(default=b'', max_length=1024, blank=True),\n ),\n- migrations.AddField(\n+ migrations.RenameField(\n model_name='toolicon',\n- name='value',\n- field=models.CharField(default=b'', max_length=1024, blank=True),\n+ old_name='url',\n+ new_name='value'\n ),\n migrations.AlterField(\n model_name='apphomepageurl',\n", "issue": "\"Web App\" needs a space (not \"WebApp\")\nOn the open web app button there needs to be a space:\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('hs_tools_resource', '0009_auto_20160929_1543'),\n ]\n\n operations = [\n migrations.RemoveField(\n model_name='toolicon',\n name='url',\n ),\n migrations.AddField(\n model_name='toolicon',\n name='value',\n field=models.CharField(default=b'', max_length=1024, blank=True),\n ),\n migrations.AlterField(\n model_name='apphomepageurl',\n name='value',\n field=models.CharField(default=b'', max_length=1024, blank=True),\n ),\n migrations.AlterField(\n model_name='requesturlbase',\n name='value',\n field=models.CharField(default=b'', max_length=1024, blank=True),\n ),\n migrations.AlterField(\n model_name='supportedrestypes',\n name='supported_res_types',\n field=models.ManyToManyField(to='hs_tools_resource.SupportedResTypeChoices', blank=True),\n ),\n migrations.AlterField(\n model_name='supportedsharingstatus',\n name='sharing_status',\n field=models.ManyToManyField(to='hs_tools_resource.SupportedSharingStatusChoices', blank=True),\n ),\n ]\n", "path": "hs_tools_resource/migrations/0010_auto_20161203_1913.py"}]} | 1,031 | 260 |
gh_patches_debug_14598 | rasdani/github-patches | git_diff | sunpy__sunpy-3076 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add list of constants to docs
Currently if one wants to know what constants `sunpy.sun` has built in you have to run `sunpy.sun.constants.print_all()`, but it would be nice to have a table that lists the constants here: https://docs.sunpy.org/en/latest/code_ref/sun.html#module-sunpy.sun.constants (like AstroPy: http://docs.astropy.org/en/stable/constants/index.html#reference-api)
</issue>
<code>
[start of sunpy/sun/constants.py]
1 """
2 This module provides fundamental solar physical constants.
3 """
4 from astropy.table import Table
5
6 from sunpy.sun import _constants as _con
7
8 __all__ = [
9 'get', 'find', 'print_all', 'spectral_classification', 'au', 'mass', 'equatorial_radius',
10 'volume', 'surface_area', 'average_density', 'equatorial_surface_gravity',
11 'effective_temperature', 'luminosity', 'mass_conversion_rate', 'escape_velocity', 'sfu',
12 'average_angular_size'
13 ]
14
15 constants = _con.physical_constants
16
17
18 def get(key):
19 """
20 Retrieve a constant by key. This is just a short cut into a dictionary.
21
22 Parameters
23 ----------
24 key : `str`
25 Key in dictionary in ``constants``.
26
27 Returns
28 -------
29 constant : `~astropy.units.Constant`
30
31 See Also
32 --------
33 `sunpy.sun.constants`
34 Contains the description of ``constants``, which, as a dictionary literal object, does not itself possess a docstring.
35
36 Examples
37 --------
38 >>> from sunpy.sun import constants
39 >>> constants.get('mass')
40 <<class 'astropy.constants.iau2015.IAU2015'> name='Solar mass' value=1.9884754153381438e+30 uncertainty=9.236140093538353e+25 unit='kg' reference='IAU 2015 Resolution B 3 + CODATA 2014'>
41 """
42 return constants[key]
43
44
45 def find(sub=None):
46 """
47 Return list of constants keys containing a given string.
48
49 Parameters
50 ----------
51 sub : `str`, optional
52 Sub-string to search keys for. By default set to `None` and returns all keys.
53
54 Returns
55 -------
56 `None`, `list`
57 The matching keys.
58
59 See Also
60 --------
61 `sunpy.sun.constants`
62 Contains the description of ``constants``, which, as a dictionary literal object, does not itself possess a docstring.
63 """
64 if sub is None:
65 result = list(constants.keys())
66 else:
67 result = [key for key in constants if sub.lower() in key.lower()]
68
69 result.sort()
70 return result
71
72
73 def print_all():
74 """
75 Provides a table of the complete list of constants.
76
77 Returns
78 -------
79 `astropy.table.Table`
80 """
81 data_rows = []
82 for key, this_constant in constants.items():
83 data_rows.append([
84 key, this_constant.name, this_constant.value, this_constant.uncertainty,
85 str(this_constant.unit), this_constant.reference
86 ])
87
88 t = Table(rows=data_rows, names=('key', 'name', 'value', 'uncertainty', 'unit', 'Reference'))
89 return t
90
91
92 # Spectral class is not included in physical constants since it is not a number
93 spectral_classification = 'G2V'
94 au = astronomical_unit = get('mean distance')
95 # The following variables from _gets are brought out by making them
96 # accessible through a call such as sun.volume
97 mass = get('mass')
98 equatorial_radius = radius = get('radius')
99 volume = get('volume')
100 surface_area = get('surface area')
101 average_density = density = get('average density')
102 equatorial_surface_gravity = surface_gravity = get('surface gravity')
103 effective_temperature = get('effective temperature')
104 luminosity = get('luminosity')
105 mass_conversion_rate = get('mass conversion rate')
106 escape_velocity = get('escape velocity')
107 sfu = get('solar flux unit')
108 # Observable parameters
109 average_angular_size = get('average angular size')
110
[end of sunpy/sun/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sunpy/sun/constants.py b/sunpy/sun/constants.py
--- a/sunpy/sun/constants.py
+++ b/sunpy/sun/constants.py
@@ -89,6 +89,20 @@
return t
+# Add a list of constants to the docs
+_lines = [
+ 'The following constants are available:\n',
+ '====================== ============== ================ =================================',
+ ' Name Value Unit Description',
+ '====================== ============== ================ =================================',
+]
+for key, const in constants.items():
+ _lines.append('{0:^22} {1:^14.9g} {2:^16} {3}'.format(
+ key, const.value, const._unit_string, const.name))
+_lines.append(_lines[1])
+if __doc__ is not None:
+ __doc__ += '\n'.join(_lines)
+
# Spectral class is not included in physical constants since it is not a number
spectral_classification = 'G2V'
au = astronomical_unit = get('mean distance')
| {"golden_diff": "diff --git a/sunpy/sun/constants.py b/sunpy/sun/constants.py\n--- a/sunpy/sun/constants.py\n+++ b/sunpy/sun/constants.py\n@@ -89,6 +89,20 @@\n return t\n \n \n+# Add a list of constants to the docs\n+_lines = [\n+ 'The following constants are available:\\n',\n+ '====================== ============== ================ =================================',\n+ ' Name Value Unit Description',\n+ '====================== ============== ================ =================================',\n+]\n+for key, const in constants.items():\n+ _lines.append('{0:^22} {1:^14.9g} {2:^16} {3}'.format(\n+ key, const.value, const._unit_string, const.name))\n+_lines.append(_lines[1])\n+if __doc__ is not None:\n+ __doc__ += '\\n'.join(_lines)\n+\n # Spectral class is not included in physical constants since it is not a number\n spectral_classification = 'G2V'\n au = astronomical_unit = get('mean distance')\n", "issue": "Add list of constants to docs\nCurrently if one wants to know what constants `sunpy.sun` has built in you have to run `sunpy.sun.constants.print_all()`, but it would be nice to have a table that lists the constants here: https://docs.sunpy.org/en/latest/code_ref/sun.html#module-sunpy.sun.constants (like AstroPy: http://docs.astropy.org/en/stable/constants/index.html#reference-api)\n", "before_files": [{"content": "\"\"\"\nThis module provides fundamental solar physical constants.\n\"\"\"\nfrom astropy.table import Table\n\nfrom sunpy.sun import _constants as _con\n\n__all__ = [\n 'get', 'find', 'print_all', 'spectral_classification', 'au', 'mass', 'equatorial_radius',\n 'volume', 'surface_area', 'average_density', 'equatorial_surface_gravity',\n 'effective_temperature', 'luminosity', 'mass_conversion_rate', 'escape_velocity', 'sfu',\n 'average_angular_size'\n]\n\nconstants = _con.physical_constants\n\n\ndef get(key):\n \"\"\"\n Retrieve a constant by key. This is just a short cut into a dictionary.\n\n Parameters\n ----------\n key : `str`\n Key in dictionary in ``constants``.\n\n Returns\n -------\n constant : `~astropy.units.Constant`\n\n See Also\n --------\n `sunpy.sun.constants`\n Contains the description of ``constants``, which, as a dictionary literal object, does not itself possess a docstring.\n\n Examples\n --------\n >>> from sunpy.sun import constants\n >>> constants.get('mass')\n <<class 'astropy.constants.iau2015.IAU2015'> name='Solar mass' value=1.9884754153381438e+30 uncertainty=9.236140093538353e+25 unit='kg' reference='IAU 2015 Resolution B 3 + CODATA 2014'>\n \"\"\"\n return constants[key]\n\n\ndef find(sub=None):\n \"\"\"\n Return list of constants keys containing a given string.\n\n Parameters\n ----------\n sub : `str`, optional\n Sub-string to search keys for. 
By default set to `None` and returns all keys.\n\n Returns\n -------\n `None`, `list`\n The matching keys.\n\n See Also\n --------\n `sunpy.sun.constants`\n Contains the description of ``constants``, which, as a dictionary literal object, does not itself possess a docstring.\n \"\"\"\n if sub is None:\n result = list(constants.keys())\n else:\n result = [key for key in constants if sub.lower() in key.lower()]\n\n result.sort()\n return result\n\n\ndef print_all():\n \"\"\"\n Provides a table of the complete list of constants.\n\n Returns\n -------\n `astropy.table.Table`\n \"\"\"\n data_rows = []\n for key, this_constant in constants.items():\n data_rows.append([\n key, this_constant.name, this_constant.value, this_constant.uncertainty,\n str(this_constant.unit), this_constant.reference\n ])\n\n t = Table(rows=data_rows, names=('key', 'name', 'value', 'uncertainty', 'unit', 'Reference'))\n return t\n\n\n# Spectral class is not included in physical constants since it is not a number\nspectral_classification = 'G2V'\nau = astronomical_unit = get('mean distance')\n# The following variables from _gets are brought out by making them\n# accessible through a call such as sun.volume\nmass = get('mass')\nequatorial_radius = radius = get('radius')\nvolume = get('volume')\nsurface_area = get('surface area')\naverage_density = density = get('average density')\nequatorial_surface_gravity = surface_gravity = get('surface gravity')\neffective_temperature = get('effective temperature')\nluminosity = get('luminosity')\nmass_conversion_rate = get('mass conversion rate')\nescape_velocity = get('escape velocity')\nsfu = get('solar flux unit')\n# Observable parameters\naverage_angular_size = get('average angular size')\n", "path": "sunpy/sun/constants.py"}]} | 1,647 | 241 |
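
The golden diff builds an RST table at import time and appends it to the module docstring; the `if __doc__ is not None` guard matters because docstrings are stripped (and `__doc__` is `None`) under `python -OO`. A condensed, self-contained sketch of the same pattern with a toy constants mapping:

```python
"""Toy module docstring."""


# Illustrative stand-in for sunpy's physical_constants mapping.
class _Const:
    def __init__(self, value, unit, name):
        self.value, self._unit_string, self.name = value, unit, name


constants = {
    "mass": _Const(1.988e30, "kg", "Solar mass"),
    "radius": _Const(6.957e8, "m", "Solar radius"),
}

_lines = [
    "The following constants are available:\n",
    "========== ============== ====== ============",
    "   Name        Value       Unit  Description",
    "========== ============== ====== ============",
]
for key, const in constants.items():
    _lines.append("{0:^10} {1:^14.9g} {2:^6} {3}".format(
        key, const.value, const._unit_string, const.name))
_lines.append(_lines[1])  # closing rule of the RST table

if __doc__ is not None:  # __doc__ is None under `python -OO`
    __doc__ += "\n".join(_lines)

print(__doc__)
```
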
gh_patches_debug_72 | rasdani/github-patches | git_diff | Kinto__kinto-7 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PostgreSQL by default ?
- put `cliquet[postgresql]` in requirements
- put storage_url in config (default postgres:postgres@localhost/postgres)
</issue>
<code>
[start of setup.py]
1 import os
2 from setuptools import setup, find_packages
3
4 here = os.path.abspath(os.path.dirname(__file__))
5
6 with open(os.path.join(here, 'README.rst')) as f:
7 README = f.read()
8
9 REQUIREMENTS = [
10 'colander',
11 'cornice',
12 'six',
13 'waitress',
14 'cliquet'
15 ]
16
17 ENTRY_POINTS = {
18 'paste.app_factory': [
19 'main = kinto:main',
20 ]}
21
22 setup(name='kinto',
23 version='0.1.dev0',
24 description='kinto',
25 long_description=README,
26 classifiers=[
27 "Programming Language :: Python",
28 "Topic :: Internet :: WWW/HTTP",
29 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application"
30 ],
31 keywords="web services",
32 author='Mozilla Services',
33 author_email='[email protected]',
34 url='',
35 packages=find_packages(),
36 include_package_data=True,
37 zip_safe=False,
38 install_requires=REQUIREMENTS,
39 entry_points=ENTRY_POINTS)
40
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,7 +11,7 @@
'cornice',
'six',
'waitress',
- 'cliquet'
+ 'cliquet[postgresql]'
]
ENTRY_POINTS = {
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,7 +11,7 @@\n 'cornice',\n 'six',\n 'waitress',\n- 'cliquet'\n+ 'cliquet[postgresql]'\n ]\n \n ENTRY_POINTS = {\n", "issue": "PostgreSQL by default ?\n- put `cliquet[postgresql]` in requirements\n- put storage_url in config (default postgres:postgres@localhost/postgres)\n\n", "before_files": [{"content": "import os\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n\nREQUIREMENTS = [\n 'colander',\n 'cornice',\n 'six',\n 'waitress',\n 'cliquet'\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ]}\n\nsetup(name='kinto',\n version='0.1.dev0',\n description='kinto',\n long_description=README,\n classifiers=[\n \"Programming Language :: Python\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\"\n ],\n keywords=\"web services\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n entry_points=ENTRY_POINTS)\n", "path": "setup.py"}]} | 857 | 69 |
gh_patches_debug_25562 | rasdani/github-patches | git_diff | pydantic__pydantic-107 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing HISTORY.rst
pydantic installs fine from pip but via a dependency in setup.py it fails with a missing HISTORY.rst due to your long_description in setup.py. Basically, you need a MANIFEST.in that includes that file.
```
Processing pydantic-0.6.2.tar.gz
Writing /var/folders/4j/00jv8sg138n61hsj6ppf60pm0000gn/T/easy_install-o7rp3h7o/pydantic-0.6.2/setup.cfg
Running pydantic-0.6.2/setup.py -q bdist_egg --dist-dir /var/folders/4j/00jv8sg138n61hsj6ppf60pm0000gn/T/easy_install-o7rp3h7o/pydantic-0.6.2/egg-dist-tmp-7bd8a1a8
error: [Errno 2] No such file or directory: '/private/var/folders/4j/00jv8sg138n61hsj6ppf60pm0000gn/T/easy_install-o7rp3h7o/pydantic-0.6.2/HISTORY.rst'
```
</issue>
<code>
[start of setup.py]
1 from importlib.machinery import SourceFileLoader
2 from pathlib import Path
3 from setuptools import setup
4
5 THIS_DIR = Path(__file__).resolve().parent
6 long_description = (
7 THIS_DIR.joinpath('README.rst').read_text() +
8 '\n\n' +
9 THIS_DIR.joinpath('HISTORY.rst').read_text()
10 )
11
12 # avoid loading the package before requirements are installed:
13 version = SourceFileLoader('version', 'pydantic/version.py').load_module()
14
15 setup(
16 name='pydantic',
17 version=str(version.VERSION),
18 description='Data validation and settings management using python 3.6 type hinting',
19 long_description=long_description,
20 classifiers=[
21 'Development Status :: 4 - Beta',
22 'Environment :: Console',
23 'Programming Language :: Python',
24 'Programming Language :: Python :: 3',
25 'Programming Language :: Python :: 3 :: Only',
26 'Programming Language :: Python :: 3.6',
27 'Intended Audience :: Developers',
28 'Intended Audience :: Information Technology',
29 'Intended Audience :: System Administrators',
30 'License :: OSI Approved :: MIT License',
31 'Operating System :: Unix',
32 'Operating System :: POSIX :: Linux',
33 'Environment :: MacOS X',
34 'Topic :: Software Development :: Libraries :: Python Modules',
35 'Topic :: Internet',
36 ],
37 author='Samuel Colvin',
38 author_email='[email protected]',
39 url='https://github.com/samuelcolvin/pydantic',
40 license='MIT',
41 packages=['pydantic'],
42 python_requires='>=3.6',
43 zip_safe=True,
44 extras_require={
45 'msgpack': ['msgpack-python>=0.4.8'],
46 'ujson': ['ujson>=1.35'],
47 }
48 )
49
[end of setup.py]
[start of pydantic/version.py]
1 from distutils.version import StrictVersion
2
3 __all__ = ['VERSION']
4
5 VERSION = StrictVersion('0.6.2')
6
[end of pydantic/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pydantic/version.py b/pydantic/version.py
--- a/pydantic/version.py
+++ b/pydantic/version.py
@@ -2,4 +2,4 @@
__all__ = ['VERSION']
-VERSION = StrictVersion('0.6.2')
+VERSION = StrictVersion('0.6.3')
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -2,12 +2,15 @@
from pathlib import Path
from setuptools import setup
+description = 'Data validation and settings management using python 3.6 type hinting'
THIS_DIR = Path(__file__).resolve().parent
-long_description = (
- THIS_DIR.joinpath('README.rst').read_text() +
- '\n\n' +
- THIS_DIR.joinpath('HISTORY.rst').read_text()
-)
+try:
+ long_description = '\n\n'.join([
+ THIS_DIR.joinpath('README.rst').read_text(),
+ THIS_DIR.joinpath('HISTORY.rst').read_text()
+ ])
+except FileNotFoundError:
+ long_description = description + '.\n\nSee https://pydantic-docs.helpmanual.io/ for documentation.'
# avoid loading the package before requirements are installed:
version = SourceFileLoader('version', 'pydantic/version.py').load_module()
@@ -15,7 +18,7 @@
setup(
name='pydantic',
version=str(version.VERSION),
- description='Data validation and settings management using python 3.6 type hinting',
+ description=description,
long_description=long_description,
classifiers=[
'Development Status :: 4 - Beta',
| {"golden_diff": "diff --git a/pydantic/version.py b/pydantic/version.py\n--- a/pydantic/version.py\n+++ b/pydantic/version.py\n@@ -2,4 +2,4 @@\n \n __all__ = ['VERSION']\n \n-VERSION = StrictVersion('0.6.2')\n+VERSION = StrictVersion('0.6.3')\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -2,12 +2,15 @@\n from pathlib import Path\n from setuptools import setup\n \n+description = 'Data validation and settings management using python 3.6 type hinting'\n THIS_DIR = Path(__file__).resolve().parent\n-long_description = (\n- THIS_DIR.joinpath('README.rst').read_text() +\n- '\\n\\n' +\n- THIS_DIR.joinpath('HISTORY.rst').read_text()\n-)\n+try:\n+ long_description = '\\n\\n'.join([\n+ THIS_DIR.joinpath('README.rst').read_text(),\n+ THIS_DIR.joinpath('HISTORY.rst').read_text()\n+ ])\n+except FileNotFoundError:\n+ long_description = description + '.\\n\\nSee https://pydantic-docs.helpmanual.io/ for documentation.'\n \n # avoid loading the package before requirements are installed:\n version = SourceFileLoader('version', 'pydantic/version.py').load_module()\n@@ -15,7 +18,7 @@\n setup(\n name='pydantic',\n version=str(version.VERSION),\n- description='Data validation and settings management using python 3.6 type hinting',\n+ description=description,\n long_description=long_description,\n classifiers=[\n 'Development Status :: 4 - Beta',\n", "issue": "Missing HISTORY.rst\npydantic installs fine from pip but via a dependency in setup.py it fails with a missing HISTORY.rst due to your long_description in setup.py. Basically, you need a MANIFEST.in that includes that file.\r\n\r\n```\r\nProcessing pydantic-0.6.2.tar.gz\r\nWriting /var/folders/4j/00jv8sg138n61hsj6ppf60pm0000gn/T/easy_install-o7rp3h7o/pydantic-0.6.2/setup.cfg\r\nRunning pydantic-0.6.2/setup.py -q bdist_egg --dist-dir /var/folders/4j/00jv8sg138n61hsj6ppf60pm0000gn/T/easy_install-o7rp3h7o/pydantic-0.6.2/egg-dist-tmp-7bd8a1a8\r\nerror: [Errno 2] No such file or directory: '/private/var/folders/4j/00jv8sg138n61hsj6ppf60pm0000gn/T/easy_install-o7rp3h7o/pydantic-0.6.2/HISTORY.rst'\r\n```\n", "before_files": [{"content": "from importlib.machinery import SourceFileLoader\nfrom pathlib import Path\nfrom setuptools import setup\n\nTHIS_DIR = Path(__file__).resolve().parent\nlong_description = (\n THIS_DIR.joinpath('README.rst').read_text() +\n '\\n\\n' +\n THIS_DIR.joinpath('HISTORY.rst').read_text()\n)\n\n# avoid loading the package before requirements are installed:\nversion = SourceFileLoader('version', 'pydantic/version.py').load_module()\n\nsetup(\n name='pydantic',\n version=str(version.VERSION),\n description='Data validation and settings management using python 3.6 type hinting',\n long_description=long_description,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.6',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: Unix',\n 'Operating System :: POSIX :: Linux',\n 'Environment :: MacOS X',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet',\n ],\n author='Samuel Colvin',\n author_email='[email protected]',\n url='https://github.com/samuelcolvin/pydantic',\n license='MIT',\n packages=['pydantic'],\n python_requires='>=3.6',\n zip_safe=True,\n 
extras_require={\n 'msgpack': ['msgpack-python>=0.4.8'],\n 'ujson': ['ujson>=1.35'],\n }\n)\n", "path": "setup.py"}, {"content": "from distutils.version import StrictVersion\n\n__all__ = ['VERSION']\n\nVERSION = StrictVersion('0.6.2')\n", "path": "pydantic/version.py"}]} | 1,333 | 368 |
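
Two standard remedies exist for this failure: ship the file by adding `include HISTORY.rst` (and `include README.rst`) to a `MANIFEST.in`, or make `setup.py` tolerant of the file being absent — which is what the merged diff does. A sketch of the tolerant read, mirroring the diff's shape (the fallback wording here is illustrative):

```python
from pathlib import Path

description = "Data validation and settings management using python 3.6 type hinting"
THIS_DIR = Path(__file__).resolve().parent

try:
    # Works from a git checkout or a complete sdist.
    long_description = "\n\n".join([
        THIS_DIR.joinpath("README.rst").read_text(),
        THIS_DIR.joinpath("HISTORY.rst").read_text(),
    ])
except FileNotFoundError:
    # Works when the sdist lacks the docs files (the failure in this issue).
    long_description = description + ". See the project docs for details."
```
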
gh_patches_debug_17096 | rasdani/github-patches | git_diff | inventree__InvenTree-5045 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make event test more specific
@matmair [this CI failure](https://github.com/inventree/InvenTree/actions/runs/5259899543/jobs/9506168595?pr=4732) is the last in the current batch of weird ones. I'm seeing it on multiple PRs, about 50% of the time.
Here's the failing line:
https://github.com/inventree/InvenTree/blob/c8365ccd0c9371ea4d127fe616e0029f35b3c19c/InvenTree/plugin/samples/event/test_event_sample.py#L27
Sometimes, `cm.warning.args[0]` returns an "unclosed file object", rather than a string. Any ideas?
_Originally posted by @SchrodingersGat in https://github.com/inventree/InvenTree/issues/4732#issuecomment-1590219025_
</issue>
<code>
[start of InvenTree/plugin/samples/event/event_sample.py]
1 """Sample plugin which responds to events."""
2
3 import warnings
4
5 from django.conf import settings
6
7 from plugin import InvenTreePlugin
8 from plugin.mixins import EventMixin
9
10
11 class EventPluginSample(EventMixin, InvenTreePlugin):
12 """A sample plugin which provides supports for triggered events."""
13
14 NAME = "EventPlugin"
15 SLUG = "sampleevent"
16 TITLE = "Triggered Events"
17
18 def process_event(self, event, *args, **kwargs):
19 """Custom event processing."""
20 print(f"Processing triggered event: '{event}'")
21 print("args:", str(args))
22 print("kwargs:", str(kwargs))
23
24 # Issue warning that we can test for
25 if settings.PLUGIN_TESTING:
26 warnings.warn(f'Event `{event}` triggered', stacklevel=2)
27
[end of InvenTree/plugin/samples/event/event_sample.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/InvenTree/plugin/samples/event/event_sample.py b/InvenTree/plugin/samples/event/event_sample.py
--- a/InvenTree/plugin/samples/event/event_sample.py
+++ b/InvenTree/plugin/samples/event/event_sample.py
@@ -1,12 +1,14 @@
"""Sample plugin which responds to events."""
-import warnings
+import logging
from django.conf import settings
from plugin import InvenTreePlugin
from plugin.mixins import EventMixin
+logger = logging.getLogger('inventree')
+
class EventPluginSample(EventMixin, InvenTreePlugin):
"""A sample plugin which provides supports for triggered events."""
@@ -23,4 +25,4 @@
# Issue warning that we can test for
if settings.PLUGIN_TESTING:
- warnings.warn(f'Event `{event}` triggered', stacklevel=2)
+ logger.debug(f'Event `{event}` triggered in sample plugin')
| {"golden_diff": "diff --git a/InvenTree/plugin/samples/event/event_sample.py b/InvenTree/plugin/samples/event/event_sample.py\n--- a/InvenTree/plugin/samples/event/event_sample.py\n+++ b/InvenTree/plugin/samples/event/event_sample.py\n@@ -1,12 +1,14 @@\n \"\"\"Sample plugin which responds to events.\"\"\"\n \n-import warnings\n+import logging\n \n from django.conf import settings\n \n from plugin import InvenTreePlugin\n from plugin.mixins import EventMixin\n \n+logger = logging.getLogger('inventree')\n+\n \n class EventPluginSample(EventMixin, InvenTreePlugin):\n \"\"\"A sample plugin which provides supports for triggered events.\"\"\"\n@@ -23,4 +25,4 @@\n \n # Issue warning that we can test for\n if settings.PLUGIN_TESTING:\n- warnings.warn(f'Event `{event}` triggered', stacklevel=2)\n+ logger.debug(f'Event `{event}` triggered in sample plugin')\n", "issue": "Make event test more specific\n @matmair [this CI failure](https://github.com/inventree/InvenTree/actions/runs/5259899543/jobs/9506168595?pr=4732) is the last in the current batch of weird ones. I'm seeing it on multiple PRs, about 50% of the time.\r\n\r\nHere's the failing line:\r\n\r\nhttps://github.com/inventree/InvenTree/blob/c8365ccd0c9371ea4d127fe616e0029f35b3c19c/InvenTree/plugin/samples/event/test_event_sample.py#L27\r\n\r\nSometimes, `cm.warning.args[0]` returns an \"unclosed file object\", rather than a string. Any ideas?\r\n\r\n_Originally posted by @SchrodingersGat in https://github.com/inventree/InvenTree/issues/4732#issuecomment-1590219025_\r\n \n", "before_files": [{"content": "\"\"\"Sample plugin which responds to events.\"\"\"\n\nimport warnings\n\nfrom django.conf import settings\n\nfrom plugin import InvenTreePlugin\nfrom plugin.mixins import EventMixin\n\n\nclass EventPluginSample(EventMixin, InvenTreePlugin):\n \"\"\"A sample plugin which provides supports for triggered events.\"\"\"\n\n NAME = \"EventPlugin\"\n SLUG = \"sampleevent\"\n TITLE = \"Triggered Events\"\n\n def process_event(self, event, *args, **kwargs):\n \"\"\"Custom event processing.\"\"\"\n print(f\"Processing triggered event: '{event}'\")\n print(\"args:\", str(args))\n print(\"kwargs:\", str(kwargs))\n\n # Issue warning that we can test for\n if settings.PLUGIN_TESTING:\n warnings.warn(f'Event `{event}` triggered', stacklevel=2)\n", "path": "InvenTree/plugin/samples/event/event_sample.py"}]} | 995 | 206 |