Dataset schema (column name, type and value range):

| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.71k-9.01k | stringlengths 151-4.94k | stringlengths 465-11.3k | int64 557-2.05k | int64 48-1.02k |
problem_id: gh_patches_debug_4069 | source: rasdani/github-patches | task_type: git_diff | in_source_id: goauthentik__authentik-7454

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Patreon login doesn't work/setup is not intuitive
**Describe the bug**
While trying to set up the Patreon social integration, I realised that the required fields of Consumer Key and Consumer Secret don't seem to apply to the data that Patreon provides with its API - or at least the terminology is confusing. But outside of that, the default scopes that it seems to be presenting Patreon with upon setup are not part of Patreon's API, and will always error out with an "Invalid Scope" unless manually replaced with the correct ones. If this social integration is working and I'm mistaken, it is missing documentation that would definitely make it easier on new users.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the social integration settings.
2. Click on the Patreon integration.
3. Enter the Client ID and Secret into the Key and Secret fields (assuming that's what you're supposed to use)
4. Get an invalid_scope error when trying to sign in
**Expected behavior**
Should allow users to log in via Patreon.
**Screenshots**
N/A
**Logs**
N/A
**Version and Deployment (please complete the following information):**
authentik version: 2023.6.1
Deployment: TrueNAS
</issue>
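The scope names that resolved this are visible in the golden_diff further down this row. As a quick illustration, here is a hedged sketch of the redirect class using Patreon's documented scope names; the imports and class shape are copied from the file below, and the docs URL is Patreon's scope reference:

```python
from authentik.sources.oauth.models import OAuthSource
from authentik.sources.oauth.views.redirect import OAuthRedirect


class PatreonOAuthRedirect(OAuthRedirect):
    """Patreon OAuth2 Redirect"""

    def get_additional_parameters(self, source: OAuthSource):
        # Patreon does not accept the generic OIDC scopes ("openid", "email",
        # "profile"); it expects its own names, see https://docs.patreon.com/#scopes
        return {
            "scope": ["identity", "identity[email]"],
        }
```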
<code>
[start of authentik/sources/oauth/types/patreon.py]
1 """Patreon OAuth Views"""
2 from typing import Any
3
4 from authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient
5 from authentik.sources.oauth.models import OAuthSource
6 from authentik.sources.oauth.types.registry import SourceType, registry
7 from authentik.sources.oauth.views.callback import OAuthCallback
8 from authentik.sources.oauth.views.redirect import OAuthRedirect
9
10
11 class PatreonOAuthRedirect(OAuthRedirect):
12 """Patreon OAuth2 Redirect"""
13
14 def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
15 return {
16 "scope": ["openid", "email", "profile"],
17 }
18
19
20 class PatreonOAuthCallback(OAuthCallback):
21 """Patreon OAuth2 Callback"""
22
23 client_class: UserprofileHeaderAuthClient
24
25 def get_user_id(self, info: dict[str, str]) -> str:
26 return info.get("data", {}).get("id")
27
28 def get_user_enroll_context(
29 self,
30 info: dict[str, Any],
31 ) -> dict[str, Any]:
32 return {
33 "username": info.get("data", {}).get("attributes", {}).get("vanity"),
34 "email": info.get("data", {}).get("attributes", {}).get("email"),
35 "name": info.get("data", {}).get("attributes", {}).get("full_name"),
36 }
37
38
39 @registry.register()
40 class PatreonType(SourceType):
41 """OpenIDConnect Type definition"""
42
43 callback_view = PatreonOAuthCallback
44 redirect_view = PatreonOAuthRedirect
45 name = "Patreon"
46 slug = "patreon"
47
48 authorization_url = "https://www.patreon.com/oauth2/authorize"
49 access_token_url = "https://www.patreon.com/api/oauth2/token" # nosec
50 profile_url = "https://www.patreon.com/api/oauth2/api/current_user"
51
[end of authentik/sources/oauth/types/patreon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/authentik/sources/oauth/types/patreon.py b/authentik/sources/oauth/types/patreon.py
--- a/authentik/sources/oauth/types/patreon.py
+++ b/authentik/sources/oauth/types/patreon.py
@@ -12,8 +12,9 @@
"""Patreon OAuth2 Redirect"""
def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
+ # https://docs.patreon.com/#scopes
return {
- "scope": ["openid", "email", "profile"],
+ "scope": ["identity", "identity[email]"],
}
| {"golden_diff": "diff --git a/authentik/sources/oauth/types/patreon.py b/authentik/sources/oauth/types/patreon.py\n--- a/authentik/sources/oauth/types/patreon.py\n+++ b/authentik/sources/oauth/types/patreon.py\n@@ -12,8 +12,9 @@\n \"\"\"Patreon OAuth2 Redirect\"\"\"\r\n \r\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\r\n+ # https://docs.patreon.com/#scopes\r\n return {\r\n- \"scope\": [\"openid\", \"email\", \"profile\"],\r\n+ \"scope\": [\"identity\", \"identity[email]\"],\r\n }\n", "issue": "Patreon login doesn't work/setup is not intuitive\n**Describe the bug**\r\nWhile trying to set up the Patreon social integration, I realised that the required fields of Consumer Key and Consumer Secret don't seem to apply to the data that Patreon provides with its API - or at least the terminology is confusing. But outside of that, the default scopes that it seems to be presenting Patreon with upon setup are not part of Patreon's API, and will always error out with an \"Invalid Scope\" unless manually replaced with the correct ones. If this social integration is working and I'm mistaken, it is missing documentation that would definitely make it easier on new users.\r\n\r\nTo Reproduce\r\nSteps to reproduce the behavior:\r\n\r\n1. Go to the social integration settings.\r\n2. Click on the Patreon integration.\r\n3. Enter the Client ID and Secret into the Key and Secret fields (assuming that's what you're supposed to use)\r\n4. Get an invalid_scope error when trying to sign in\r\n\r\nExpected behavior\r\nShould allow users to log in via Patreon.\r\n\r\nScreenshots\r\nN/A\r\n\r\nLogs\r\nN/A\r\n\r\nVersion and Deployment (please complete the following information):\r\n\r\nauthentik version: 2023.6.1\r\nDeployment: TrueNAS\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Patreon OAuth Views\"\"\"\r\nfrom typing import Any\r\n\r\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\r\nfrom authentik.sources.oauth.models import OAuthSource\r\nfrom authentik.sources.oauth.types.registry import SourceType, registry\r\nfrom authentik.sources.oauth.views.callback import OAuthCallback\r\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\r\n\r\n\r\nclass PatreonOAuthRedirect(OAuthRedirect):\r\n \"\"\"Patreon OAuth2 Redirect\"\"\"\r\n\r\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\r\n return {\r\n \"scope\": [\"openid\", \"email\", \"profile\"],\r\n }\r\n\r\n\r\nclass PatreonOAuthCallback(OAuthCallback):\r\n \"\"\"Patreon OAuth2 Callback\"\"\"\r\n\r\n client_class: UserprofileHeaderAuthClient\r\n\r\n def get_user_id(self, info: dict[str, str]) -> str:\r\n return info.get(\"data\", {}).get(\"id\")\r\n\r\n def get_user_enroll_context(\r\n self,\r\n info: dict[str, Any],\r\n ) -> dict[str, Any]:\r\n return {\r\n \"username\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"vanity\"),\r\n \"email\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"email\"),\r\n \"name\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"full_name\"),\r\n }\r\n\r\n\r\[email protected]()\r\nclass PatreonType(SourceType):\r\n \"\"\"OpenIDConnect Type definition\"\"\"\r\n\r\n callback_view = PatreonOAuthCallback\r\n redirect_view = PatreonOAuthRedirect\r\n name = \"Patreon\"\r\n slug = \"patreon\"\r\n\r\n authorization_url = \"https://www.patreon.com/oauth2/authorize\"\r\n access_token_url = \"https://www.patreon.com/api/oauth2/token\" # nosec\r\n profile_url = 
\"https://www.patreon.com/api/oauth2/api/current_user\"\r\n", "path": "authentik/sources/oauth/types/patreon.py"}]} | 1,291 | 139 |
problem_id: gh_patches_debug_1861 | source: rasdani/github-patches | task_type: git_diff | in_source_id: carpentries__amy-690

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No reverse match for rest_framework namespace
For a very strange reason, the error shows up when accessing these URLs:
https://github.com/swcarpentry/amy/blob/develop/api/urls.py#L57
I wasn't able to get rid of it; it's not being used at all, so maybe it should be removedโฆ?
</issue>
<code>
[start of api/urls.py]
1 from django.conf.urls import url, include
2 from rest_framework_nested import routers
3 from rest_framework.urlpatterns import format_suffix_patterns
4
5 from . import views
6
7 # new in Django 1.9: this defines a namespace for URLs; there's no need for
8 # `namespace='api'` in the include()
9 app_name = 'api'
10
11 # routers generate URLs for methods like `.list` or `.retrieve`
12 router = routers.SimpleRouter()
13 router.register('reports', views.ReportsViewSet, base_name='reports')
14 router.register('persons', views.PersonViewSet)
15 awards_router = routers.NestedSimpleRouter(router, 'persons', lookup='person')
16 awards_router.register('awards', views.AwardViewSet, base_name='person-awards')
17 person_task_router = routers.NestedSimpleRouter(router, 'persons',
18 lookup='person')
19 person_task_router.register('tasks', views.PersonTaskViewSet,
20 base_name='person-tasks')
21 router.register('events', views.EventViewSet)
22 tasks_router = routers.NestedSimpleRouter(router, 'events', lookup='event')
23 tasks_router.register('tasks', views.TaskViewSet, base_name='event-tasks')
24 todos_router = routers.NestedSimpleRouter(router, 'events', lookup='event')
25 todos_router.register('todos', views.TodoViewSet, base_name='event-todos')
26 router.register('hosts', views.HostViewSet)
27 router.register('airports', views.AirportViewSet)
28
29 urlpatterns = [
30 url('^$', views.ApiRoot.as_view(), name='root'),
31 # TODO: turn these export views into ViewSets and add them to the router
32 url('^export/badges/$',
33 views.ExportBadgesView.as_view(),
34 name='export-badges'),
35 url('^export/instructors/$',
36 views.ExportInstructorLocationsView.as_view(),
37 name='export-instructors'),
38 url('^export/members/$',
39 views.ExportMembersView.as_view(),
40 name='export-members'),
41 url('^events/published/$',
42 views.PublishedEvents.as_view(),
43 name='events-published'),
44 url('^todos/user/$',
45 views.UserTodoItems.as_view(),
46 name='user-todos'),
47
48 url('^', include(router.urls)),
49 url('^', include(awards_router.urls)),
50 url('^', include(person_task_router.urls)),
51 url('^', include(tasks_router.urls)),
52 url('^', include(todos_router.urls)),
53 ]
54
55 # for login-logout functionality
56 urlpatterns += [
57 url(r'^api-auth/',
58 include('rest_framework.urls', namespace='rest_framework')),
59 ]
60
61 urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format
62
[end of api/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/api/urls.py b/api/urls.py
--- a/api/urls.py
+++ b/api/urls.py
@@ -52,10 +52,4 @@
url('^', include(todos_router.urls)),
]
-# for login-logout functionality
-urlpatterns += [
- url(r'^api-auth/',
- include('rest_framework.urls', namespace='rest_framework')),
-]
-
urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format
| {"golden_diff": "diff --git a/api/urls.py b/api/urls.py\n--- a/api/urls.py\n+++ b/api/urls.py\n@@ -52,10 +52,4 @@\n url('^', include(todos_router.urls)),\n ]\n \n-# for login-logout functionality\n-urlpatterns += [\n- url(r'^api-auth/',\n- include('rest_framework.urls', namespace='rest_framework')),\n-]\n-\n urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format\n", "issue": "No reverse match for rest_framework namespace\nThe error for a very strange reason shows when accessing these URLs:\nhttps://github.com/swcarpentry/amy/blob/develop/api/urls.py#L57\n\nI wasn't able to get rid of it; it's not being used at all, so maybe it should be removed\u2026?\n\n", "before_files": [{"content": "from django.conf.urls import url, include\nfrom rest_framework_nested import routers\nfrom rest_framework.urlpatterns import format_suffix_patterns\n\nfrom . import views\n\n# new in Django 1.9: this defines a namespace for URLs; there's no need for\n# `namespace='api'` in the include()\napp_name = 'api'\n\n# routers generate URLs for methods like `.list` or `.retrieve`\nrouter = routers.SimpleRouter()\nrouter.register('reports', views.ReportsViewSet, base_name='reports')\nrouter.register('persons', views.PersonViewSet)\nawards_router = routers.NestedSimpleRouter(router, 'persons', lookup='person')\nawards_router.register('awards', views.AwardViewSet, base_name='person-awards')\nperson_task_router = routers.NestedSimpleRouter(router, 'persons',\n lookup='person')\nperson_task_router.register('tasks', views.PersonTaskViewSet,\n base_name='person-tasks')\nrouter.register('events', views.EventViewSet)\ntasks_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntasks_router.register('tasks', views.TaskViewSet, base_name='event-tasks')\ntodos_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntodos_router.register('todos', views.TodoViewSet, base_name='event-todos')\nrouter.register('hosts', views.HostViewSet)\nrouter.register('airports', views.AirportViewSet)\n\nurlpatterns = [\n url('^$', views.ApiRoot.as_view(), name='root'),\n # TODO: turn these export views into ViewSets and add them to the router\n url('^export/badges/$',\n views.ExportBadgesView.as_view(),\n name='export-badges'),\n url('^export/instructors/$',\n views.ExportInstructorLocationsView.as_view(),\n name='export-instructors'),\n url('^export/members/$',\n views.ExportMembersView.as_view(),\n name='export-members'),\n url('^events/published/$',\n views.PublishedEvents.as_view(),\n name='events-published'),\n url('^todos/user/$',\n views.UserTodoItems.as_view(),\n name='user-todos'),\n\n url('^', include(router.urls)),\n url('^', include(awards_router.urls)),\n url('^', include(person_task_router.urls)),\n url('^', include(tasks_router.urls)),\n url('^', include(todos_router.urls)),\n]\n\n# for login-logout functionality\nurlpatterns += [\n url(r'^api-auth/',\n include('rest_framework.urls', namespace='rest_framework')),\n]\n\nurlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format\n", "path": "api/urls.py"}]} | 1,261 | 105 |
problem_id: gh_patches_debug_8186 | source: rasdani/github-patches | task_type: git_diff | in_source_id: saleor__saleor-5117

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow searching for products by SKU in the admin dashboard
### What I'm trying to achieve
I'm looking to find a product by its SKU
### Describe a proposed solution
Tweak search engine settings to return products with full/partial SKU match.
</issue>
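As an illustration of the proposed solution, a hedged sketch of how the storefront search could also match variant SKUs; the `variants__sku` lookup assumes the variant model exposes a `sku` field, the absolute import path is assumed, and the filter that was actually merged may differ:

```python
from django.contrib.postgres.search import TrigramSimilarity
from django.db.models import Q

from saleor.product.models import Product  # absolute import path assumed


def search(phrase):
    """Return products matching by name, description or variant SKU."""
    name_sim = TrigramSimilarity("name", phrase)
    ft_in_description = Q(description__search=phrase)
    ft_by_sku = Q(variants__sku__search=phrase)  # SKU match via postgres full-text search
    name_similar = Q(name_sim__gt=0.2)
    return Product.objects.annotate(name_sim=name_sim).filter(
        ft_in_description | name_similar | ft_by_sku
    )
```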
<code>
[start of saleor/search/backends/postgresql_storefront.py]
1 from django.contrib.postgres.search import TrigramSimilarity
2 from django.db.models import Q
3
4 from ...product.models import Product
5
6
7 def search(phrase):
8 """Return matching products for storefront views.
9
10 Fuzzy storefront search that is resistant to small typing errors made
11 by user. Name is matched using trigram similarity, description uses
12 standard postgres full text search.
13
14 Args:
15 phrase (str): searched phrase
16
17 """
18 name_sim = TrigramSimilarity("name", phrase)
19 published = Q(is_published=True)
20 ft_in_description = Q(description__search=phrase)
21 name_similar = Q(name_sim__gt=0.2)
22 return Product.objects.annotate(name_sim=name_sim).filter(
23 (ft_in_description | name_similar) & published
24 )
25
[end of saleor/search/backends/postgresql_storefront.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/saleor/search/backends/postgresql_storefront.py b/saleor/search/backends/postgresql_storefront.py
--- a/saleor/search/backends/postgresql_storefront.py
+++ b/saleor/search/backends/postgresql_storefront.py
@@ -16,9 +16,9 @@
"""
name_sim = TrigramSimilarity("name", phrase)
- published = Q(is_published=True)
ft_in_description = Q(description__search=phrase)
+ ft_by_sku = Q(variants__sku__search=phrase)
name_similar = Q(name_sim__gt=0.2)
return Product.objects.annotate(name_sim=name_sim).filter(
- (ft_in_description | name_similar) & published
+ (ft_in_description | name_similar | ft_by_sku)
)
| {"golden_diff": "diff --git a/saleor/search/backends/postgresql_storefront.py b/saleor/search/backends/postgresql_storefront.py\n--- a/saleor/search/backends/postgresql_storefront.py\n+++ b/saleor/search/backends/postgresql_storefront.py\n@@ -16,9 +16,9 @@\n \n \"\"\"\n name_sim = TrigramSimilarity(\"name\", phrase)\n- published = Q(is_published=True)\n ft_in_description = Q(description__search=phrase)\n+ ft_by_sku = Q(variants__sku__search=phrase)\n name_similar = Q(name_sim__gt=0.2)\n return Product.objects.annotate(name_sim=name_sim).filter(\n- (ft_in_description | name_similar) & published\n+ (ft_in_description | name_similar | ft_by_sku)\n )\n", "issue": "Allow to search for products by SKU in admin dashboard\n### What I'm trying to achieve\r\nI'm looking to find a product by its SKU\r\n\r\n### Describe a proposed solution\r\nTweak search engine settings to return products with full/partial SKU match.\r\n\r\n\n", "before_files": [{"content": "from django.contrib.postgres.search import TrigramSimilarity\nfrom django.db.models import Q\n\nfrom ...product.models import Product\n\n\ndef search(phrase):\n \"\"\"Return matching products for storefront views.\n\n Fuzzy storefront search that is resistant to small typing errors made\n by user. Name is matched using trigram similarity, description uses\n standard postgres full text search.\n\n Args:\n phrase (str): searched phrase\n\n \"\"\"\n name_sim = TrigramSimilarity(\"name\", phrase)\n published = Q(is_published=True)\n ft_in_description = Q(description__search=phrase)\n name_similar = Q(name_sim__gt=0.2)\n return Product.objects.annotate(name_sim=name_sim).filter(\n (ft_in_description | name_similar) & published\n )\n", "path": "saleor/search/backends/postgresql_storefront.py"}]} | 810 | 180 |
problem_id: gh_patches_debug_17752 | source: rasdani/github-patches | task_type: git_diff | in_source_id: nf-core__tools-1590

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lint warning on Nextflow minimum version badge
### Description of the bug
`nf-core lint` complains that the minimum version badge for Nextflow could not be found, even though it is present in the `README.md`.
It occurred after the `template-merge-2.4` update.
It appears to be a bug.
### Command used and terminal output
```console
(nextflow2) rnavar$ nf-core lint
,--./,-.
___ __ __ __ ___ /,-._.--~\
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/tools version 2.4.1 - https://nf-co.re
INFO Testing pipeline: . __init__.py:244
โญโ [!] 1 Pipeline Test Warning โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ โ
โ readme: README did not have a Nextflow minimum version badge. โ
โ โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
```
### System information
_No response_
</issue>
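The mismatch comes down to the linter's regex still expecting a `?labelColor=000000` query string that the 2.4 template dropped from the badge URL. A small, self-contained sketch of the corrected pattern; the badge string below is only an example, and the real fix is in this row's golden_diff:

```python
import re

# Badge as produced by the 2.4 template (no "?labelColor=000000" part).
readme_text = (
    "[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-"
    "%E2%89%A521.10.3-23aa62.svg)](https://www.nextflow.io/)"
)

nf_badge_re = (
    r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-"
    r"%E2%89%A5([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
)

match = re.search(nf_badge_re, readme_text)
print(match.group(1) if match else "no badge found")  # -> 21.10.3
```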
<code>
[start of nf_core/lint/readme.py]
1 #!/usr/bin/env python
2
3 import os
4 import re
5
6
7 def readme(self):
8 """Repository ``README.md`` tests
9
10 The ``README.md`` files for a project are very important and must meet some requirements:
11
12 * Nextflow badge
13
14 * If no Nextflow badge is found, a warning is given
15 * If a badge is found but the version doesn't match the minimum version in the config file, the test fails
16 * Example badge code:
17
18 .. code-block:: md
19
20 [](https://www.nextflow.io/)
21
22 * Bioconda badge
23
24 * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required
25 * Required badge code:
26
27 .. code-block:: md
28
29 [](https://bioconda.github.io/)
30
31 .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.
32 """
33 passed = []
34 warned = []
35 failed = []
36
37 with open(os.path.join(self.wf_path, "README.md"), "r") as fh:
38 content = fh.read()
39
40 # Check that there is a readme badge showing the minimum required version of Nextflow
41 # [](https://www.nextflow.io/)
42 # and that it has the correct version
43 nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\?labelColor=000000\)\]\(https://www\.nextflow\.io/\)"
44 match = re.search(nf_badge_re, content)
45 if match:
46 nf_badge_version = match.group(1).strip("'\"")
47 try:
48 assert nf_badge_version == self.minNextflowVersion
49 except (AssertionError, KeyError):
50 failed.append(
51 "README Nextflow minimum version badge does not match config. Badge: `{}`, Config: `{}`".format(
52 nf_badge_version, self.minNextflowVersion
53 )
54 )
55 else:
56 passed.append(
57 "README Nextflow minimum version badge matched config. Badge: `{}`, Config: `{}`".format(
58 nf_badge_version, self.minNextflowVersion
59 )
60 )
61 else:
62 warned.append("README did not have a Nextflow minimum version badge.")
63
64 # Check that the minimum version mentioned in the quick start section is consistent
65 # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)"
66 nf_version_re = r"1\.\s*Install\s*\[`Nextflow`\]\(https://www.nextflow.io/docs/latest/getstarted.html#installation\)\s*\(`>=(\d*\.\d*\.\d*)`\)"
67 match = re.search(nf_version_re, content)
68 if match:
69 nf_quickstart_version = match.group(1)
70 try:
71 assert nf_quickstart_version == self.minNextflowVersion
72 except (AssertionError, KeyError):
73 failed.append(
74 f"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`"
75 )
76 else:
77 passed.append(
78 f"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`"
79 )
80 else:
81 warned.append("README did not have a Nextflow minimum version mentioned in Quick Start section.")
82
83 return {"passed": passed, "warned": warned, "failed": failed}
84
[end of nf_core/lint/readme.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py
--- a/nf_core/lint/readme.py
+++ b/nf_core/lint/readme.py
@@ -38,9 +38,9 @@
content = fh.read()
# Check that there is a readme badge showing the minimum required version of Nextflow
- # [](https://www.nextflow.io/)
+ # [](https://www.nextflow.io/)
# and that it has the correct version
- nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\?labelColor=000000\)\]\(https://www\.nextflow\.io/\)"
+ nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
match = re.search(nf_badge_re, content)
if match:
nf_badge_version = match.group(1).strip("'\"")
| {"golden_diff": "diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py\n--- a/nf_core/lint/readme.py\n+++ b/nf_core/lint/readme.py\n@@ -38,9 +38,9 @@\n content = fh.read()\n \n # Check that there is a readme badge showing the minimum required version of Nextflow\n- # [](https://www.nextflow.io/)\n+ # [](https://www.nextflow.io/)\n # and that it has the correct version\n- nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\?labelColor=000000\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n+ nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n", "issue": "Lint warning on Nextflow minimum version badge\n### Description of the bug\n\n`nf-core lint` complains that the minimum version badge for Nextflow could not found, however it was present in the `README.md`.\r\nIt occurred after the `template-merge-2.4`\r\nIt appears to be a bug.\r\n\r\n\n\n### Command used and terminal output\n\n```console\n(nextflow2) rnavar$ nf-core lint\r\n\r\n\r\n\r\n ,--./,-.\r\n\r\n ___ __ __ __ ___ /,-._.--~\\\r\n\r\n |\\ | |__ __ / ` / \\ |__) |__ } {\r\n\r\n | \\| | \\__, \\__/ | \\ |___ \\`-._,-`-,\r\n\r\n `._,._,'\r\n\r\n\r\n\r\n nf-core/tools version 2.4.1 - https://nf-co.re\r\n\r\n\r\n\r\n\r\n\r\nINFO Testing pipeline: . __init__.py:244\r\n\r\n\r\n\r\n\u256d\u2500 [!] 1 Pipeline Test Warning \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\r\n\u2502 \u2502\r\n\r\n\u2502 readme: README did not have a Nextflow minimum version badge. 
\u2502\r\n\r\n\u2502 \u2502\r\n\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n```\n\n\n### System information\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport re\n\n\ndef readme(self):\n \"\"\"Repository ``README.md`` tests\n\n The ``README.md`` files for a project are very important and must meet some requirements:\n\n * Nextflow badge\n\n * If no Nextflow badge is found, a warning is given\n * If a badge is found but the version doesn't match the minimum version in the config file, the test fails\n * Example badge code:\n\n .. code-block:: md\n\n [](https://www.nextflow.io/)\n\n * Bioconda badge\n\n * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required\n * Required badge code:\n\n .. code-block:: md\n\n [](https://bioconda.github.io/)\n\n .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.\n \"\"\"\n passed = []\n warned = []\n failed = []\n\n with open(os.path.join(self.wf_path, \"README.md\"), \"r\") as fh:\n content = fh.read()\n\n # Check that there is a readme badge showing the minimum required version of Nextflow\n # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\?labelColor=000000\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n try:\n assert nf_badge_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n \"README Nextflow minimum version badge does not match config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n passed.append(\n \"README Nextflow minimum version badge matched config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n\n # Check that the minimum version mentioned in the quick start section is consistent\n # Looking for: \"1. 
Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n nf_quickstart_version = match.group(1)\n try:\n assert nf_quickstart_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version mentioned in Quick Start section.\")\n\n return {\"passed\": passed, \"warned\": warned, \"failed\": failed}\n", "path": "nf_core/lint/readme.py"}]} | 1,944 | 393 |
problem_id: gh_patches_debug_8873 | source: rasdani/github-patches | task_type: git_diff | in_source_id: kubeflow__pipelines-4132

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific MinIO service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make this flexible, e.g. allow using S3, or change the namespace or bucket names.
I suggest making it configurable; I can do such a PR if we agree it's needed.
**flexible pipeline service (host) path in client SDK**
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); it seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
</issue>
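As a rough sketch of the second suggestion (loading the default host from the environment): `ML_PIPELINE_DNS_NAME` is the variable name proposed above, not something the released SDK necessarily reads, and `resolve_host` is a hypothetical helper name used only for illustration:

```python
import os

IN_CLUSTER_DNS_NAME = "ml-pipeline.kubeflow.svc.cluster.local:8888"


def resolve_host(host=None):
    """Explicit argument wins, then the env variable, then the in-cluster default."""
    return host or os.environ.get("ML_PIPELINE_DNS_NAME", IN_CLUSTER_DNS_NAME)
```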
<code>
[start of sdk/python/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17 from setuptools import setup
18
19 NAME = 'kfp'
20 #VERSION = .... Change the version in kfp/__init__.py
21
22 REQUIRES = [
23 'PyYAML',
24 'google-cloud-storage>=1.13.0',
25 'kubernetes>=8.0.0, <12.0.0',
26 'google-auth>=1.6.1',
27 'requests_toolbelt>=0.8.0',
28 'cloudpickle',
29 # Update the upper version whenever a new major version of the
30 # kfp-server-api package is released.
31 # Update the lower version when kfp sdk depends on new apis/fields in
32 # kfp-server-api.
33 # Note, please also update ./requirements.in
34 'kfp-server-api>=0.2.5, <2.0.0',
35 'jsonschema >= 3.0.1',
36 'tabulate',
37 'click',
38 'Deprecated',
39 'strip-hints',
40 ]
41
42
43 def find_version(*file_path_parts):
44 here = os.path.abspath(os.path.dirname(__file__))
45 with open(os.path.join(here, *file_path_parts), 'r') as fp:
46 version_file_text = fp.read()
47
48 version_match = re.search(
49 r"^__version__ = ['\"]([^'\"]*)['\"]",
50 version_file_text,
51 re.M,
52 )
53 if version_match:
54 return version_match.group(1)
55
56 raise RuntimeError('Unable to find version string.')
57
58
59 setup(
60 name=NAME,
61 version=find_version('kfp', '__init__.py'),
62 description='KubeFlow Pipelines SDK',
63 author='google',
64 install_requires=REQUIRES,
65 packages=[
66 'kfp',
67 'kfp.cli',
68 'kfp.cli.diagnose_me',
69 'kfp.compiler',
70 'kfp.components',
71 'kfp.components.structures',
72 'kfp.components.structures.kubernetes',
73 'kfp.containers',
74 'kfp.dsl',
75 'kfp.dsl.extensions',
76 'kfp.notebook',
77 ],
78 classifiers=[
79 'Intended Audience :: Developers',
80 'Intended Audience :: Education',
81 'Intended Audience :: Science/Research',
82 'License :: OSI Approved :: Apache Software License',
83 'Programming Language :: Python :: 3',
84 'Programming Language :: Python :: 3.5',
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Topic :: Scientific/Engineering',
88 'Topic :: Scientific/Engineering :: Artificial Intelligence',
89 'Topic :: Software Development',
90 'Topic :: Software Development :: Libraries',
91 'Topic :: Software Development :: Libraries :: Python Modules',
92 ],
93 python_requires='>=3.5.3',
94 include_package_data=True,
95 entry_points={
96 'console_scripts': [
97 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'
98 ]
99 })
100
[end of sdk/python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -39,6 +39,10 @@
'strip-hints',
]
+TESTS_REQUIRE = [
+ 'mock',
+]
+
def find_version(*file_path_parts):
here = os.path.abspath(os.path.dirname(__file__))
@@ -62,6 +66,7 @@
description='KubeFlow Pipelines SDK',
author='google',
install_requires=REQUIRES,
+ tests_require=TESTS_REQUIRE,
packages=[
'kfp',
'kfp.cli',
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -39,6 +39,10 @@\n 'strip-hints',\n ]\n \n+TESTS_REQUIRE = [\n+ 'mock',\n+]\n+\n \n def find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n@@ -62,6 +66,7 @@\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n+ tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.cli',\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <12.0.0',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=0.2.5, <2.0.0',\n 'jsonschema >= 3.0.1',\n 'tabulate',\n 'click',\n 'Deprecated',\n 'strip-hints',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}]} | 1,851 | 142 |
problem_id: gh_patches_debug_29733 | source: rasdani/github-patches | task_type: git_diff | in_source_id: bridgecrewio__checkov-4879

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to Evaluate Final Result from condition
**Describe the issue**
CKV_GCP_43: "Ensure KMS encryption keys are rotated within a period of 90 days"
**Examples**
Check: CKV_GCP_43: "Ensure KMS encryption keys are rotated within a period of 90 days"
FAILED for resource: module.kms.google_kms_crypto_key.key
File: /main.tf:11-29
Calling File: /example/production/main.tf:1-6
Guide: https://docs.bridgecrew.io/docs/bc_gcp_general_4
11 | resource "google_kms_crypto_key" "key" {
12 | count = var.prevent_destroy ? length(var.keys) : 0
13 | name = var.keys[count.index]
14 | key_ring = google_kms_key_ring.key_ring.id
15 | rotation_period = contains(["ASYMMETRIC_SIGN", "ASYMMETRIC_DECRYPT"], var.purpose) ? null : var.key_rotation_period
16 | #rotation_period = var.key_rotation_period
17 | purpose = var.purpose
18 |
19 | lifecycle {
20 | prevent_destroy = true
21 | }
22 |
23 | version_template {
24 | algorithm = var.key_algorithm
25 | protection_level = var.key_protection_level
26 | }
27 |
28 | labels = var.labels
29 | }
Checkov should only raise an error for ASYMMETRIC key creation, not for the ENCRYPT_DECRYPT purpose of a KMS key. Even after setting the purpose to ENCRYPT_DECRYPT and the key_rotation_period variable to 90 days (7776000s), the check is failing.
**Version (please complete the following information):**
- Checkov Version 2.3.156
**Additional context**
`contains(["ASYMMETRIC_SIGN", "ASYMMETRIC_DECRYPT"], var.purpose) ? null : var.key_rotation_period`
The above line should be evaluated and marked as passed for GCP KMS, since an ASYMMETRIC key does not support automatic rotation.
</issue>
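The fix that landed (see this row's golden_diff) skips the rotation check whenever the key purpose is asymmetric. A hedged stand-alone sketch of that guard; `is_exempt_from_rotation_check` is a hypothetical helper name, and `conf` is the parsed resource block as Checkov passes it (attribute values wrapped in lists):

```python
# Asymmetric keys cannot be auto-rotated, so the 90-day rotation check should not
# flag them: https://cloud.google.com/kms/docs/key-rotation#asymmetric
ASYMMETRIC_KEYS = {"ASYMMETRIC_DECRYPT", "ASYMMETRIC_SIGN"}


def is_exempt_from_rotation_check(conf):
    """Return True when the google_kms_crypto_key purpose is asymmetric."""
    purpose = conf.get("purpose")
    return bool(purpose) and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS
```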
<code>
[start of checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py]
1 from typing import Dict, List, Any
2
3 from checkov.common.util.type_forcers import force_int
4
5 from checkov.common.models.enums import CheckResult, CheckCategories
6 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
7
8 # rotation_period time unit is seconds
9 ONE_DAY = 24 * 60 * 60
10 NINETY_DAYS = 90 * ONE_DAY
11
12
13 class GoogleKMSKeyRotationPeriod(BaseResourceCheck):
14 def __init__(self) -> None:
15 name = "Ensure KMS encryption keys are rotated within a period of 90 days"
16 id = "CKV_GCP_43"
17 supported_resources = ["google_kms_crypto_key"]
18 categories = [CheckCategories.GENERAL_SECURITY]
19 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
20
21 def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
22 self.evaluated_keys = ["rotation_period"]
23 rotation = conf.get("rotation_period")
24 if rotation and rotation[0]:
25 time = force_int(rotation[0][:-1])
26 if time and ONE_DAY <= time <= NINETY_DAYS:
27 return CheckResult.PASSED
28 return CheckResult.FAILED
29
30
31 check = GoogleKMSKeyRotationPeriod()
32
[end of checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
--- a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
+++ b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
@@ -5,6 +5,7 @@
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
+ASYMMETRIC_KEYS = {"ASYMMETRIC_DECRYPT", "ASYMMETRIC_SIGN"}
# rotation_period time unit is seconds
ONE_DAY = 24 * 60 * 60
NINETY_DAYS = 90 * ONE_DAY
@@ -14,11 +15,17 @@
def __init__(self) -> None:
name = "Ensure KMS encryption keys are rotated within a period of 90 days"
id = "CKV_GCP_43"
- supported_resources = ["google_kms_crypto_key"]
- categories = [CheckCategories.GENERAL_SECURITY]
+ supported_resources = ("google_kms_crypto_key",)
+ categories = (CheckCategories.GENERAL_SECURITY,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
+ purpose = conf.get("purpose")
+ if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:
+ # https://cloud.google.com/kms/docs/key-rotation#asymmetric
+ # automatic key rotation is not supported for asymmetric keys
+ return CheckResult.UNKNOWN
+
self.evaluated_keys = ["rotation_period"]
rotation = conf.get("rotation_period")
if rotation and rotation[0]:
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n--- a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n+++ b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n@@ -5,6 +5,7 @@\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n+ASYMMETRIC_KEYS = {\"ASYMMETRIC_DECRYPT\", \"ASYMMETRIC_SIGN\"}\n # rotation_period time unit is seconds\n ONE_DAY = 24 * 60 * 60\n NINETY_DAYS = 90 * ONE_DAY\n@@ -14,11 +15,17 @@\n def __init__(self) -> None:\n name = \"Ensure KMS encryption keys are rotated within a period of 90 days\"\n id = \"CKV_GCP_43\"\n- supported_resources = [\"google_kms_crypto_key\"]\n- categories = [CheckCategories.GENERAL_SECURITY]\n+ supported_resources = (\"google_kms_crypto_key\",)\n+ categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n+ purpose = conf.get(\"purpose\")\n+ if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:\n+ # https://cloud.google.com/kms/docs/key-rotation#asymmetric\n+ # automatic key rotation is not supported for asymmetric keys\n+ return CheckResult.UNKNOWN\n+\n self.evaluated_keys = [\"rotation_period\"]\n rotation = conf.get(\"rotation_period\")\n if rotation and rotation[0]:\n", "issue": "Unable to Evaluate Final Result from condition \n**Describe the issue**\r\nCKV_GCP_43: \"Ensure KMS encryption keys are rotated within a period of 90 days\"\r\n\r\n**Examples**\r\nCheck: CKV_GCP_43: \"Ensure KMS encryption keys are rotated within a period of 90 days\"\r\n\tFAILED for resource: module.kms.google_kms_crypto_key.key\r\n\tFile: /main.tf:11-29\r\n\tCalling File: /example/production/main.tf:1-6\r\n\tGuide: https://docs.bridgecrew.io/docs/bc_gcp_general_4\r\n\r\n\t\t11 | resource \"google_kms_crypto_key\" \"key\" {\r\n\t\t12 | count = var.prevent_destroy ? length(var.keys) : 0\r\n\t\t13 | name = var.keys[count.index]\r\n\t\t14 | key_ring = google_kms_key_ring.key_ring.id\r\n\t\t15 | rotation_period = contains([\"ASYMMETRIC_SIGN\", \"ASYMMETRIC_DECRYPT\"], var.purpose) ? null : var.key_rotation_period\r\n\t\t16 | #rotation_period = var.key_rotation_period\r\n\t\t17 | purpose = var.purpose\r\n\t\t18 |\r\n\t\t19 | lifecycle {\r\n\t\t20 | prevent_destroy = true\r\n\t\t21 | }\r\n\t\t22 |\r\n\t\t23 | version_template {\r\n\t\t24 | algorithm = var.key_algorithm\r\n\t\t25 | protection_level = var.key_protection_level\r\n\t\t26 | }\r\n\t\t27 |\r\n\t\t28 | labels = var.labels\r\n\t\t29 | }\r\n\r\nCheckov should providing error only in ASYMMETRIC key creation not the ENCRYPT_DCRYPT purpose for KMS key. Even after setting the purpose to ENCRYPT_DCRYPT and key_rotation_period variable to 90 days(7776000s), check is failing.\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.3.156\r\n\r\n**Additional context**\r\n`contains([\"ASYMMETRIC_SIGN\", \"ASYMMETRIC_DECRYPT\"], var.purpose) ? 
null : var.key_rotation_period`\r\nAbove line should be evaluated and marked as passed for GCP KMS as ASYMMETRIC key is not supporting Automatic rotation.\r\n\n", "before_files": [{"content": "from typing import Dict, List, Any\n\nfrom checkov.common.util.type_forcers import force_int\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n# rotation_period time unit is seconds\nONE_DAY = 24 * 60 * 60\nNINETY_DAYS = 90 * ONE_DAY\n\n\nclass GoogleKMSKeyRotationPeriod(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure KMS encryption keys are rotated within a period of 90 days\"\n id = \"CKV_GCP_43\"\n supported_resources = [\"google_kms_crypto_key\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n self.evaluated_keys = [\"rotation_period\"]\n rotation = conf.get(\"rotation_period\")\n if rotation and rotation[0]:\n time = force_int(rotation[0][:-1])\n if time and ONE_DAY <= time <= NINETY_DAYS:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = GoogleKMSKeyRotationPeriod()\n", "path": "checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py"}]} | 1,409 | 418 |
problem_id: gh_patches_debug_29450 | source: rasdani/github-patches | task_type: git_diff | in_source_id: bridgecrewio__checkov-4476

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AWS_7 false positive on asymmetric key check in CloudFormation
**Describe the issue**
In Terraform, the check avoids false positives with an extra check against symmetric keys before checking whether rotation is enabled. This same check hasn't been configured for CloudFormation:
```
def scan_resource_conf(self, conf):
# Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
spec = conf.get('customer_master_key_spec')
if not spec or 'SYMMETRIC_DEFAULT' in spec:
return super().scan_resource_conf(conf)
else:
return CheckResult.PASSED
```
**Examples**
```
RSASigningKey:
Type: 'AWS::KMS::Key'
Properties:
Description: RSA-3072 asymmetric KMS key for signing and verification
KeySpec: RSA_3072
KeyUsage: SIGN_VERIFY
KeyPolicy:
Version: 2012-10-17
Id: key-default-1
Statement:
- Sid: Enable IAM User Permissions
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:root'
Action: 'kms:*'
Resource: '*'
- Sid: Allow administration of the key
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:role/Admin'
Action:
- 'kms:Create*'
- 'kms:Describe*'
- 'kms:Enable*'
- 'kms:List*'
- 'kms:Put*'
- 'kms:Update*'
- 'kms:Revoke*'
- 'kms:Disable*'
- 'kms:Get*'
- 'kms:Delete*'
- 'kms:ScheduleKeyDeletion'
- 'kms:CancelKeyDeletion'
Resource: '*'
- Sid: Allow use of the key
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:role/Developer'
Action:
- 'kms:Sign'
- 'kms:Verify'
- 'kms:DescribeKey'
Resource: '*'
```
**Version (please complete the following information):**
- Checkov Version [e.g. 2.3.0]
**Additional context**
This blocks Checkov from working with asymmetric keys in CFN.
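For illustration, one possible shape for the fix is sketched below, mirroring the Terraform guard. This is an assumption about the implementation, not a confirmed fix (maintainers might prefer a different result for asymmetric keys or a different property lookup); it assumes the `KeySpec` property, as in the `RSA_3072` example above, is the right signal:

```
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck


class KMSRotation(BaseResourceValueCheck):
    def __init__(self) -> None:
        name = "Ensure rotation for customer created CMKs is enabled"
        id = "CKV_AWS_7"
        supported_resources = ("AWS::KMS::Key",)
        categories = (CheckCategories.ENCRYPTION,)
        super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)

    def get_inspected_key(self) -> str:
        return "Properties/EnableKeyRotation"

    def scan_resource_conf(self, conf):
        # Asymmetric keys (e.g. KeySpec: RSA_3072) do not support automatic rotation,
        # so only enforce EnableKeyRotation when the key is symmetric.
        properties = conf.get("Properties")
        if isinstance(properties, dict):
            spec = properties.get("KeySpec")
            if isinstance(spec, str) and "SYMMETRIC_DEFAULT" not in spec:
                return CheckResult.PASSED
        return super().scan_resource_conf(conf)
```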
</issue>
<code>
[start of checkov/cloudformation/checks/resource/aws/KMSRotation.py]
1 from checkov.common.models.enums import CheckCategories
2 from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class KMSRotation(BaseResourceValueCheck):
6 def __init__(self) -> None:
7 name = "Ensure rotation for customer created CMKs is enabled"
8 id = "CKV_AWS_7"
9 supported_resources = ("AWS::KMS::Key",)
10 categories = (CheckCategories.ENCRYPTION,)
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self) -> str:
14 return "Properties/EnableKeyRotation"
15
16
17 check = KMSRotation()
18
[end of checkov/cloudformation/checks/resource/aws/KMSRotation.py]
[start of checkov/terraform/checks/resource/aws/KMSRotation.py]
1 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
2 from checkov.common.models.enums import CheckCategories, CheckResult
3
4
5 class KMSRotation(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure rotation for customer created CMKs is enabled"
8 id = "CKV_AWS_7"
9 supported_resources = ['aws_kms_key']
10 categories = [CheckCategories.ENCRYPTION]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "enable_key_rotation"
15
16 def scan_resource_conf(self, conf):
17 # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
18 spec = conf.get('customer_master_key_spec')
19 if not spec or 'SYMMETRIC_DEFAULT' in spec:
20 return super().scan_resource_conf(conf)
21 else:
22 return CheckResult.PASSED
23
24
25 check = KMSRotation()
26
[end of checkov/terraform/checks/resource/aws/KMSRotation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/cloudformation/checks/resource/aws/KMSRotation.py b/checkov/cloudformation/checks/resource/aws/KMSRotation.py
--- a/checkov/cloudformation/checks/resource/aws/KMSRotation.py
+++ b/checkov/cloudformation/checks/resource/aws/KMSRotation.py
@@ -1,4 +1,4 @@
-from checkov.common.models.enums import CheckCategories
+from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
@@ -13,5 +13,15 @@
def get_inspected_key(self) -> str:
return "Properties/EnableKeyRotation"
+ def scan_resource_conf(self, conf):
+ # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
+ properties = conf.get("Properties")
+ if properties and isinstance(properties, dict):
+ spec = properties.get("KeySpec")
+ if spec and isinstance(spec, str):
+ if 'SYMMETRIC_DEFAULT' not in spec and 'HMAC' not in spec:
+ return CheckResult.UNKNOWN
+ return super().scan_resource_conf(conf)
+
check = KMSRotation()
diff --git a/checkov/terraform/checks/resource/aws/KMSRotation.py b/checkov/terraform/checks/resource/aws/KMSRotation.py
--- a/checkov/terraform/checks/resource/aws/KMSRotation.py
+++ b/checkov/terraform/checks/resource/aws/KMSRotation.py
@@ -16,10 +16,10 @@
def scan_resource_conf(self, conf):
# Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
spec = conf.get('customer_master_key_spec')
- if not spec or 'SYMMETRIC_DEFAULT' in spec:
+ if not spec or 'SYMMETRIC_DEFAULT' in spec or 'HMAC' in spec:
return super().scan_resource_conf(conf)
else:
- return CheckResult.PASSED
+ return CheckResult.UNKNOWN
check = KMSRotation()
| {"golden_diff": "diff --git a/checkov/cloudformation/checks/resource/aws/KMSRotation.py b/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n--- a/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n+++ b/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n@@ -1,4 +1,4 @@\n-from checkov.common.models.enums import CheckCategories\n+from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n \n \n@@ -13,5 +13,15 @@\n def get_inspected_key(self) -> str:\n return \"Properties/EnableKeyRotation\"\n \n+ def scan_resource_conf(self, conf):\n+ # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n+ properties = conf.get(\"Properties\")\n+ if properties and isinstance(properties, dict):\n+ spec = properties.get(\"KeySpec\")\n+ if spec and isinstance(spec, str):\n+ if 'SYMMETRIC_DEFAULT' not in spec and 'HMAC' not in spec:\n+ return CheckResult.UNKNOWN\n+ return super().scan_resource_conf(conf)\n+\n \n check = KMSRotation()\ndiff --git a/checkov/terraform/checks/resource/aws/KMSRotation.py b/checkov/terraform/checks/resource/aws/KMSRotation.py\n--- a/checkov/terraform/checks/resource/aws/KMSRotation.py\n+++ b/checkov/terraform/checks/resource/aws/KMSRotation.py\n@@ -16,10 +16,10 @@\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n spec = conf.get('customer_master_key_spec')\n- if not spec or 'SYMMETRIC_DEFAULT' in spec:\n+ if not spec or 'SYMMETRIC_DEFAULT' in spec or 'HMAC' in spec:\n return super().scan_resource_conf(conf)\n else:\n- return CheckResult.PASSED\n+ return CheckResult.UNKNOWN\n \n \n check = KMSRotation()\n", "issue": "AWS_CKV_7 False Positive on assymetric key check in Cloudformation\n**Describe the issue**\r\nIn terraform, the check avoids false positives with an extra check against symmetric keys before checking whether rotation is enabled. This same check hasn't been configured for cloudformation:\r\n\r\n```\r\ndef scan_resource_conf(self, conf):\r\n # Only symmetric keys support auto rotation. 
The attribute is optional and defaults to symmetric.\r\n spec = conf.get('customer_master_key_spec')\r\n if not spec or 'SYMMETRIC_DEFAULT' in spec:\r\n return super().scan_resource_conf(conf)\r\n else:\r\n return CheckResult.PASSED\r\n```\r\n\r\n**Examples**\r\n\r\n```\r\nRSASigningKey:\r\n Type: 'AWS::KMS::Key'\r\n Properties:\r\n Description: RSA-3072 asymmetric KMS key for signing and verification\r\n KeySpec: RSA_3072\r\n KeyUsage: SIGN_VERIFY\r\n KeyPolicy:\r\n Version: 2012-10-17\r\n Id: key-default-1\r\n Statement:\r\n - Sid: Enable IAM User Permissions\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:root'\r\n Action: 'kms:*'\r\n Resource: '*'\r\n - Sid: Allow administration of the key\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:role/Admin'\r\n Action:\r\n - 'kms:Create*'\r\n - 'kms:Describe*'\r\n - 'kms:Enable*'\r\n - 'kms:List*'\r\n - 'kms:Put*'\r\n - 'kms:Update*'\r\n - 'kms:Revoke*'\r\n - 'kms:Disable*'\r\n - 'kms:Get*'\r\n - 'kms:Delete*'\r\n - 'kms:ScheduleKeyDeletion'\r\n - 'kms:CancelKeyDeletion'\r\n Resource: '*'\r\n - Sid: Allow use of the key\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:role/Developer'\r\n Action:\r\n - 'kms:Sign'\r\n - 'kms:Verify'\r\n - 'kms:DescribeKey'\r\n Resource: '*'\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version [e.g. 2.3.0]\r\n\r\n**Additional context**\r\n\r\nThis blocks checkov working for assymetric keys in CFN.\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = (\"AWS::KMS::Key\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"Properties/EnableKeyRotation\"\n\n\ncheck = KMSRotation()\n", "path": "checkov/cloudformation/checks/resource/aws/KMSRotation.py"}, {"content": "from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = ['aws_kms_key']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"enable_key_rotation\"\n\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n spec = conf.get('customer_master_key_spec')\n if not spec or 'SYMMETRIC_DEFAULT' in spec:\n return super().scan_resource_conf(conf)\n else:\n return CheckResult.PASSED\n\n\ncheck = KMSRotation()\n", "path": "checkov/terraform/checks/resource/aws/KMSRotation.py"}]} | 1,583 | 448 |
gh_patches_debug_3608 | rasdani/github-patches | git_diff | bokeh__bokeh-5620 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Correctly handle data values <= 0 on a log scale
This is a continuation from issue #5389, partially addressed by PR #5477. An issue persists where negative data is not handled correctly: all data <= 0 should be discarded before generating the plot.
As is, if `values = np.linspace(-0.1, 0.9)`, a JS error complains that it "could not set initial ranges", probably because `log(n)` for `n <= 0` is not defined.
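For reference, a user-side workaround (a sketch only, not the proposed library fix) is to drop the non-positive values before handing the data to `figure`:

```
import numpy as np
from bokeh.plotting import figure, output_file, show

values = np.linspace(-0.1, 0.9)
y = values[values > 0]   # discard data that a log scale cannot represent
x = np.arange(len(y))

output_file("log_filtered.html")
p = figure(plot_width=400, plot_height=400, y_axis_type="log")
p.line(x, y, line_width=2)
show(p)
```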
</issue>
<code>
[start of sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py]
1 from bokeh.plotting import figure, output_file, show
2
3 x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
4 y = [10**xx for xx in x]
5
6 output_file("log.html")
7
8 # create a new plot with a log axis type
9 p = figure(plot_width=400, plot_height=400,
10 y_axis_type="log", y_range=(10**-1, 10**4))
11
12 p.line(x, y, line_width=2)
13 p.circle(x, y, fill_color="white", size=8)
14
15 show(p)
16
[end of sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
--- a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
+++ b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
@@ -6,8 +6,7 @@
output_file("log.html")
# create a new plot with a log axis type
-p = figure(plot_width=400, plot_height=400,
- y_axis_type="log", y_range=(10**-1, 10**4))
+p = figure(plot_width=400, plot_height=400, y_axis_type="log")
p.line(x, y, line_width=2)
p.circle(x, y, fill_color="white", size=8)
| {"golden_diff": "diff --git a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n--- a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n+++ b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n@@ -6,8 +6,7 @@\n output_file(\"log.html\")\n \n # create a new plot with a log axis type\n-p = figure(plot_width=400, plot_height=400,\n- y_axis_type=\"log\", y_range=(10**-1, 10**4))\n+p = figure(plot_width=400, plot_height=400, y_axis_type=\"log\")\n \n p.line(x, y, line_width=2)\n p.circle(x, y, fill_color=\"white\", size=8)\n", "issue": "Correctly handle data values <= 0 on a log scale\nThis is a continuation from issue #5389, partially adressed by PR #5477. There persists an issue where negative data is not handled correctly. All data <= 0 should be discarded before generating the plot.\r\n\r\nAs is, if `values = np.linspace(-0.1, 0.9), a JS error complains that it \"could not set initial ranges\", probably because `log(n)` for `n<=0` is not defined.\n", "before_files": [{"content": "from bokeh.plotting import figure, output_file, show\n\nx = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]\ny = [10**xx for xx in x]\n\noutput_file(\"log.html\")\n\n# create a new plot with a log axis type\np = figure(plot_width=400, plot_height=400,\n y_axis_type=\"log\", y_range=(10**-1, 10**4))\n\np.line(x, y, line_width=2)\np.circle(x, y, fill_color=\"white\", size=8)\n\nshow(p)\n", "path": "sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py"}]} | 840 | 187 |
gh_patches_debug_26426 | rasdani/github-patches | git_diff | weni-ai__bothub-engine-106 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Disallow samples without intent or entities
Disallow sample creation without an intent or at least one entity.
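A sketch of how this could be enforced at the serializer level, following the style of the existing validators in `bothub/api/validators.py`; the exact message and the wiring into the serializer are assumptions, not the final implementation:

```
from django.utils.translation import gettext as _
from rest_framework.exceptions import ValidationError


class ExampleWithIntentOrEntityValidator(object):
    """Reject example submissions that carry neither an intent nor any entity."""

    def __call__(self, attrs):
        intent = attrs.get('intent')
        entities = attrs.get('entities')
        if not intent and not entities:
            raise ValidationError(_('Define an intent or at least one entity'))
```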
</issue>
<code>
[start of bothub/api/validators.py]
1 from django.utils.translation import gettext as _
2 from rest_framework.exceptions import PermissionDenied
3 from rest_framework.exceptions import ValidationError
4
5 from bothub.common.models import RepositoryTranslatedExample
6
7
8 class CanContributeInRepositoryValidator(object):
9 def __call__(self, value):
10 user_authorization = value.get_user_authorization(
11 self.request.user)
12 if not user_authorization.can_contribute:
13 raise PermissionDenied(
14 _('You can\'t contribute in this repository'))
15
16 def set_context(self, serializer):
17 self.request = serializer.context.get('request')
18
19
20 class CanContributeInRepositoryExampleValidator(object):
21 def __call__(self, value):
22 repository = value.repository_update.repository
23 user_authorization = repository.get_user_authorization(
24 self.request.user)
25 if not user_authorization.can_contribute:
26 raise PermissionDenied(
27 _('You can\'t contribute in this repository'))
28
29 def set_context(self, serializer):
30 self.request = serializer.context.get('request')
31
32
33 class CanContributeInRepositoryTranslatedExampleValidator(object):
34 def __call__(self, value):
35 repository = value.original_example.repository_update.repository
36 user_authorization = repository.get_user_authorization(
37 self.request.user)
38 if not user_authorization.can_contribute:
39 raise PermissionDenied(
40 _('You can\'t contribute in this repository'))
41
42 def set_context(self, serializer):
43 self.request = serializer.context.get('request')
44
45
46 class TranslatedExampleEntitiesValidator(object):
47 def __call__(self, attrs):
48 original_example = attrs.get('original_example')
49 entities_valid = RepositoryTranslatedExample.same_entities_validator(
50 list(map(lambda x: dict(x), attrs.get('entities'))),
51 list(map(lambda x: x.to_dict, original_example.entities.all())))
52 if not entities_valid:
53 raise ValidationError({'entities': _('Invalid entities')})
54
55
56 class TranslatedExampleLanguageValidator(object):
57 def __call__(self, attrs):
58 original_example = attrs.get('original_example')
59 language = attrs.get('language')
60 if original_example.repository_update.language == language:
61 raise ValidationError({'language': _(
62 'Can\'t translate to the same language')})
63
[end of bothub/api/validators.py]
[start of bothub/api/serializers/example.py]
1 from rest_framework import serializers
2
3 from django.utils.translation import gettext as _
4
5 from bothub.common.models import Repository
6 from bothub.common.models import RepositoryExample
7 from bothub.common.models import RepositoryExampleEntity
8
9 from ..fields import EntityText
10 from ..validators import CanContributeInRepositoryExampleValidator
11 from ..validators import CanContributeInRepositoryValidator
12 from .translate import RepositoryTranslatedExampleSerializer
13
14
15 class RepositoryExampleEntitySerializer(serializers.ModelSerializer):
16 class Meta:
17 model = RepositoryExampleEntity
18 fields = [
19 'id',
20 'repository_example',
21 'start',
22 'end',
23 'entity',
24 'created_at',
25 'value',
26 ]
27
28 repository_example = serializers.PrimaryKeyRelatedField(
29 queryset=RepositoryExample.objects,
30 validators=[
31 CanContributeInRepositoryExampleValidator(),
32 ],
33 help_text=_('Example\'s ID'))
34 value = serializers.SerializerMethodField()
35
36 def get_value(self, obj):
37 return obj.value
38
39
40 class NewRepositoryExampleEntitySerializer(serializers.ModelSerializer):
41 class Meta:
42 model = RepositoryExampleEntity
43 fields = [
44 'repository_example',
45 'start',
46 'end',
47 'entity',
48 ]
49
50
51 class RepositoryExampleSerializer(serializers.ModelSerializer):
52 class Meta:
53 model = RepositoryExample
54 fields = [
55 'id',
56 'repository_update',
57 'deleted_in',
58 'text',
59 'intent',
60 'language',
61 'created_at',
62 'entities',
63 'translations',
64 ]
65 read_only_fields = [
66 'repository_update',
67 'deleted_in',
68 ]
69
70 entities = RepositoryExampleEntitySerializer(
71 many=True,
72 read_only=True)
73 translations = RepositoryTranslatedExampleSerializer(
74 many=True,
75 read_only=True)
76 language = serializers.SerializerMethodField()
77
78 def get_language(self, obj):
79 return obj.language
80
81
82 class NewRepositoryExampleSerializer(serializers.ModelSerializer):
83 class Meta:
84 model = RepositoryExample
85 fields = [
86 'id',
87 'repository',
88 'repository_update',
89 'text',
90 'entities',
91 'intent',
92 ]
93
94 id = serializers.PrimaryKeyRelatedField(
95 read_only=True,
96 style={'show': False})
97 text = EntityText(style={'entities_field': 'entities'})
98 repository = serializers.PrimaryKeyRelatedField(
99 queryset=Repository.objects,
100 validators=[
101 CanContributeInRepositoryValidator(),
102 ],
103 source='repository_update',
104 style={'show': False})
105 repository_update = serializers.PrimaryKeyRelatedField(
106 read_only=True,
107 style={'show': False})
108 entities = NewRepositoryExampleEntitySerializer(
109 many=True,
110 style={'text_field': 'text'})
111
112 def validate_repository(self, repository):
113 return repository.current_update()
114
115 def create(self, validated_data):
116 entities_data = validated_data.pop('entities')
117 example = self.Meta.model.objects.create(**validated_data)
118 for entity_data in entities_data:
119 RepositoryExampleEntity.objects.create(
120 repository_example=example,
121 **entity_data)
122 return example
123
[end of bothub/api/serializers/example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bothub/api/serializers/example.py b/bothub/api/serializers/example.py
--- a/bothub/api/serializers/example.py
+++ b/bothub/api/serializers/example.py
@@ -9,6 +9,7 @@
from ..fields import EntityText
from ..validators import CanContributeInRepositoryExampleValidator
from ..validators import CanContributeInRepositoryValidator
+from ..validators import ExampleWithIntentOrEntityValidator
from .translate import RepositoryTranslatedExampleSerializer
@@ -109,6 +110,10 @@
many=True,
style={'text_field': 'text'})
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.validators.append(ExampleWithIntentOrEntityValidator())
+
def validate_repository(self, repository):
return repository.current_update()
diff --git a/bothub/api/validators.py b/bothub/api/validators.py
--- a/bothub/api/validators.py
+++ b/bothub/api/validators.py
@@ -60,3 +60,12 @@
if original_example.repository_update.language == language:
raise ValidationError({'language': _(
'Can\'t translate to the same language')})
+
+
+class ExampleWithIntentOrEntityValidator(object):
+ def __call__(self, attrs):
+ intent = attrs.get('intent')
+ entities = attrs.get('entities')
+
+ if not intent and not entities:
+ raise ValidationError(_('Define a intent or one entity'))
| {"golden_diff": "diff --git a/bothub/api/serializers/example.py b/bothub/api/serializers/example.py\n--- a/bothub/api/serializers/example.py\n+++ b/bothub/api/serializers/example.py\n@@ -9,6 +9,7 @@\n from ..fields import EntityText\n from ..validators import CanContributeInRepositoryExampleValidator\n from ..validators import CanContributeInRepositoryValidator\n+from ..validators import ExampleWithIntentOrEntityValidator\n from .translate import RepositoryTranslatedExampleSerializer\n \n \n@@ -109,6 +110,10 @@\n many=True,\n style={'text_field': 'text'})\n \n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.validators.append(ExampleWithIntentOrEntityValidator())\n+\n def validate_repository(self, repository):\n return repository.current_update()\n \ndiff --git a/bothub/api/validators.py b/bothub/api/validators.py\n--- a/bothub/api/validators.py\n+++ b/bothub/api/validators.py\n@@ -60,3 +60,12 @@\n if original_example.repository_update.language == language:\n raise ValidationError({'language': _(\n 'Can\\'t translate to the same language')})\n+\n+\n+class ExampleWithIntentOrEntityValidator(object):\n+ def __call__(self, attrs):\n+ intent = attrs.get('intent')\n+ entities = attrs.get('entities')\n+\n+ if not intent and not entities:\n+ raise ValidationError(_('Define a intent or one entity'))\n", "issue": "Disallow samples without intent or entities\nDisallow samples creation without an intent or one entity at least.\n", "before_files": [{"content": "from django.utils.translation import gettext as _\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.exceptions import ValidationError\n\nfrom bothub.common.models import RepositoryTranslatedExample\n\n\nclass CanContributeInRepositoryValidator(object):\n def __call__(self, value):\n user_authorization = value.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryExampleValidator(object):\n def __call__(self, value):\n repository = value.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryTranslatedExampleValidator(object):\n def __call__(self, value):\n repository = value.original_example.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass TranslatedExampleEntitiesValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n entities_valid = RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: dict(x), attrs.get('entities'))),\n list(map(lambda x: x.to_dict, original_example.entities.all())))\n if not entities_valid:\n raise ValidationError({'entities': _('Invalid entities')})\n\n\nclass TranslatedExampleLanguageValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n language = 
attrs.get('language')\n if original_example.repository_update.language == language:\n raise ValidationError({'language': _(\n 'Can\\'t translate to the same language')})\n", "path": "bothub/api/validators.py"}, {"content": "from rest_framework import serializers\n\nfrom django.utils.translation import gettext as _\n\nfrom bothub.common.models import Repository\nfrom bothub.common.models import RepositoryExample\nfrom bothub.common.models import RepositoryExampleEntity\n\nfrom ..fields import EntityText\nfrom ..validators import CanContributeInRepositoryExampleValidator\nfrom ..validators import CanContributeInRepositoryValidator\nfrom .translate import RepositoryTranslatedExampleSerializer\n\n\nclass RepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'id',\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n 'created_at',\n 'value',\n ]\n\n repository_example = serializers.PrimaryKeyRelatedField(\n queryset=RepositoryExample.objects,\n validators=[\n CanContributeInRepositoryExampleValidator(),\n ],\n help_text=_('Example\\'s ID'))\n value = serializers.SerializerMethodField()\n\n def get_value(self, obj):\n return obj.value\n\n\nclass NewRepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n ]\n\n\nclass RepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExample\n fields = [\n 'id',\n 'repository_update',\n 'deleted_in',\n 'text',\n 'intent',\n 'language',\n 'created_at',\n 'entities',\n 'translations',\n ]\n read_only_fields = [\n 'repository_update',\n 'deleted_in',\n ]\n\n entities = RepositoryExampleEntitySerializer(\n many=True,\n read_only=True)\n translations = RepositoryTranslatedExampleSerializer(\n many=True,\n read_only=True)\n language = serializers.SerializerMethodField()\n\n def get_language(self, obj):\n return obj.language\n\n\nclass NewRepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExample\n fields = [\n 'id',\n 'repository',\n 'repository_update',\n 'text',\n 'entities',\n 'intent',\n ]\n\n id = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n text = EntityText(style={'entities_field': 'entities'})\n repository = serializers.PrimaryKeyRelatedField(\n queryset=Repository.objects,\n validators=[\n CanContributeInRepositoryValidator(),\n ],\n source='repository_update',\n style={'show': False})\n repository_update = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n entities = NewRepositoryExampleEntitySerializer(\n many=True,\n style={'text_field': 'text'})\n\n def validate_repository(self, repository):\n return repository.current_update()\n\n def create(self, validated_data):\n entities_data = validated_data.pop('entities')\n example = self.Meta.model.objects.create(**validated_data)\n for entity_data in entities_data:\n RepositoryExampleEntity.objects.create(\n repository_example=example,\n **entity_data)\n return example\n", "path": "bothub/api/serializers/example.py"}]} | 2,040 | 339 |
gh_patches_debug_14973 | rasdani/github-patches | git_diff | chainer__chainer-104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove `chainer-cuda-requirements`, which is deprecated
`pip install chainer-cuda-deps` is recommended, and `chainer-cuda-requirements` is now deprecated. It will be removed in a future minor release.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from setuptools import setup
3
4 setup(
5 name='chainer',
6 version='1.0.1',
7 description='A flexible framework of neural networks',
8 author='Seiya Tokui',
9 author_email='[email protected]',
10 url='http://chainer.org/',
11 packages=['chainer',
12 'chainer.cudnn',
13 'chainer.functions',
14 'chainer.optimizers',
15 'chainer.requirements',
16 'chainer.utils'],
17 package_data={'chainer.requirements': ['cuda-requirements.txt']},
18 install_requires=['numpy',
19 'six>=1.9.0'],
20 scripts=['scripts/chainer-cuda-requirements'],
21 tests_require=['nose'],
22 )
23
[end of setup.py]
[start of chainer/requirements/__init__.py]
1 import os
2
3
4 def get_cuda_requirements_path():
5 return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')
6
7
8 def get_cuda_requirements():
9 with open(get_cuda_requirements_path()) as f:
10 return f.read()
11
[end of chainer/requirements/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/requirements/__init__.py b/chainer/requirements/__init__.py
deleted file mode 100644
--- a/chainer/requirements/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import os
-
-
-def get_cuda_requirements_path():
- return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')
-
-
-def get_cuda_requirements():
- with open(get_cuda_requirements_path()) as f:
- return f.read()
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,11 +12,8 @@
'chainer.cudnn',
'chainer.functions',
'chainer.optimizers',
- 'chainer.requirements',
'chainer.utils'],
- package_data={'chainer.requirements': ['cuda-requirements.txt']},
install_requires=['numpy',
'six>=1.9.0'],
- scripts=['scripts/chainer-cuda-requirements'],
tests_require=['nose'],
)
| {"golden_diff": "diff --git a/chainer/requirements/__init__.py b/chainer/requirements/__init__.py\ndeleted file mode 100644\n--- a/chainer/requirements/__init__.py\n+++ /dev/null\n@@ -1,10 +0,0 @@\n-import os\n-\n-\n-def get_cuda_requirements_path():\n- return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')\n-\n-\n-def get_cuda_requirements():\n- with open(get_cuda_requirements_path()) as f:\n- return f.read()\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,11 +12,8 @@\n 'chainer.cudnn',\n 'chainer.functions',\n 'chainer.optimizers',\n- 'chainer.requirements',\n 'chainer.utils'],\n- package_data={'chainer.requirements': ['cuda-requirements.txt']},\n install_requires=['numpy',\n 'six>=1.9.0'],\n- scripts=['scripts/chainer-cuda-requirements'],\n tests_require=['nose'],\n )\n", "issue": "Remove `chainer-cuda-requirements` that is deprecated\n`pip install chainer-cuda-deps` is recommended, and `chainer-cuda-requirements` is deprecated now. It will be removed in the future minor release.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import setup\n\nsetup(\n name='chainer',\n version='1.0.1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='http://chainer.org/',\n packages=['chainer',\n 'chainer.cudnn',\n 'chainer.functions',\n 'chainer.optimizers',\n 'chainer.requirements',\n 'chainer.utils'],\n package_data={'chainer.requirements': ['cuda-requirements.txt']},\n install_requires=['numpy',\n 'six>=1.9.0'],\n scripts=['scripts/chainer-cuda-requirements'],\n tests_require=['nose'],\n)\n", "path": "setup.py"}, {"content": "import os\n\n\ndef get_cuda_requirements_path():\n return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')\n\n\ndef get_cuda_requirements():\n with open(get_cuda_requirements_path()) as f:\n return f.read()\n", "path": "chainer/requirements/__init__.py"}]} | 860 | 235 |
gh_patches_debug_51565 | rasdani/github-patches | git_diff | ray-project__ray-1413 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Worker dies when passed pandas DataFrame.
### System information
- **Ray version**: 0.3.0
- **Python version**: 3.6.0
- **Exact command to reproduce**:
```python
import pandas as pd
import ray
pd.__version__ # '0.19.2'
ray.init()
df = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})
@ray.remote
def f(x):
pass
f.remote(df)
```
The last line causes the following error to be printed in the background.
```
A worker died or was killed while executing a task.
```
cc @devin-petersohn
</issue>
<code>
[start of python/ray/dataframe/__init__.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 from .dataframe import DataFrame
6 from .dataframe import from_pandas
7 from .dataframe import to_pandas
8 from .series import Series
9 import ray
10 import pandas as pd
11
12 __all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
13
14 ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
15 ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
16
[end of python/ray/dataframe/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py
--- a/python/ray/dataframe/__init__.py
+++ b/python/ray/dataframe/__init__.py
@@ -6,10 +6,5 @@
from .dataframe import from_pandas
from .dataframe import to_pandas
from .series import Series
-import ray
-import pandas as pd
__all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
-
-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
| {"golden_diff": "diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py\n--- a/python/ray/dataframe/__init__.py\n+++ b/python/ray/dataframe/__init__.py\n@@ -6,10 +6,5 @@\n from .dataframe import from_pandas\n from .dataframe import to_pandas\n from .series import Series\n-import ray\n-import pandas as pd\n \n __all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n-\n-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)\n-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "issue": "Worker dies when passed pandas DataFrame.\n### System information\r\n- **Ray version**: 0.3.0\r\n- **Python version**: 3.6.0\r\n- **Exact command to reproduce**:\r\n\r\n```python\r\nimport pandas as pd\r\nimport ray\r\n\r\npd.__version__ # '0.19.2'\r\n\r\nray.init()\r\n\r\ndf = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})\r\n\r\[email protected]\r\ndef f(x):\r\n pass\r\n\r\nf.remote(df)\r\n```\r\n\r\nThe last line causes the following error to be printed in the background.\r\n\r\n```\r\nA worker died or was killed while executing a task.\r\n```\r\n\r\ncc @devin-petersohn\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom .dataframe import DataFrame\nfrom .dataframe import from_pandas\nfrom .dataframe import to_pandas\nfrom .series import Series\nimport ray\nimport pandas as pd\n\n__all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n\nray.register_custom_serializer(pd.DataFrame, use_pickle=True)\nray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "path": "python/ray/dataframe/__init__.py"}]} | 841 | 139 |
gh_patches_debug_549 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1412 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🪲 ARM test fails 
~~~
ValueError: 'orso/bitarray/cbitarray.pyx' doesn't match any files
~~~
https://github.com/mabel-dev/opteryx/actions/runs/7535073365/job/20510453555
</issue>
<code>
[start of opteryx/__version__.py]
1 __build__ = 244
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 12
31 _revision = 5
32 _status = VersionStatus.BETA
33
34 __version__ = f"{_major}.{_minor}.{_revision}" + (
35 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
36 )
37
[end of opteryx/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 244
+__build__ = 248
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 244\n+__build__ = 248\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n", "issue": "\ud83e\udeb2 ARM test fails \n\r\n~~~\r\nValueError: 'orso/bitarray/cbitarray.pyx' doesn't match any files\r\n~~~\r\n\r\nhttps://github.com/mabel-dev/opteryx/actions/runs/7535073365/job/20510453555\n", "before_files": [{"content": "__build__ = 244\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 12\n_revision = 5\n_status = VersionStatus.BETA\n\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]} | 953 | 102 |
gh_patches_debug_17327 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3950 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Primrose Schools
The spider is generating 1,221 errors. Adding an if statement for `content` should fix it. It could also be turned into a sitemap spider.
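A minimal sketch of that guard inside the spider class (the rest of `parse_search` stays as shown below):

```
def parse_search(self, response):
    content = response.xpath('//script[@type="application/json"]/text()').get()
    if content is None:
        # Some result pages carry no embedded JSON payload; skip them rather
        # than letting json.loads(content) fail when content is None.
        return
    schools = json.loads(content)
    # ... continue with the existing per-school loop ...
```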
</issue>
<code>
[start of locations/spiders/primrose_schools.py]
1 import json
2
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6
7
8 class PrimroseSchoolsSpider(scrapy.Spider):
9 name = "primrose_schools"
10 item_attributes = {"brand": "Primrose Schools", "brand_wikidata": "Q7243677"}
11 allowed_domains = ["primroseschools.com"]
12
13 start_urls = ["https://www.primroseschools.com/find-a-school/"]
14
15 def parse(self, response):
16 with open(
17 "./locations/searchable_points/us_centroids_50mile_radius.csv"
18 ) as points:
19 next(points)
20 for point in points:
21 row = point.replace("\n", "").split(",")
22 lati = row[1]
23 long = row[2]
24 searchurl = "https://www.primroseschools.com/find-a-school/?search_string=USA&latitude={la}&longitude={lo}".format(
25 la=lati, lo=long
26 )
27 yield scrapy.Request(
28 response.urljoin(searchurl), callback=self.parse_search
29 )
30
31 def parse_search(self, response):
32 content = response.xpath('//script[@type="application/json"]/text()').get()
33 schools = json.loads(content)
34 for i in schools:
35 if i["address_1"]:
36 properties = {
37 "name": i["name"],
38 "addr_full": i["address_1"] + " " + i["address_2"],
39 "city": i["city"],
40 "state": i["state"],
41 "postcode": i["zip_code"],
42 "phone": i["phone"],
43 "ref": i["id"],
44 "website": "https://www.primroseschools.com" + i["url"],
45 "lat": float(i["latitude"]),
46 "lon": float(i["longitude"]),
47 }
48 yield GeojsonPointItem(**properties)
49
[end of locations/spiders/primrose_schools.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/primrose_schools.py b/locations/spiders/primrose_schools.py
--- a/locations/spiders/primrose_schools.py
+++ b/locations/spiders/primrose_schools.py
@@ -30,12 +30,17 @@
def parse_search(self, response):
content = response.xpath('//script[@type="application/json"]/text()').get()
+ if content is None:
+ return
+
schools = json.loads(content)
for i in schools:
if i["address_1"]:
properties = {
"name": i["name"],
- "addr_full": i["address_1"] + " " + i["address_2"],
+ "street_address": ", ".join(
+ filter(None, [i["address_1"], i["address_2"]])
+ ),
"city": i["city"],
"state": i["state"],
"postcode": i["zip_code"],
| {"golden_diff": "diff --git a/locations/spiders/primrose_schools.py b/locations/spiders/primrose_schools.py\n--- a/locations/spiders/primrose_schools.py\n+++ b/locations/spiders/primrose_schools.py\n@@ -30,12 +30,17 @@\n \n def parse_search(self, response):\n content = response.xpath('//script[@type=\"application/json\"]/text()').get()\n+ if content is None:\n+ return\n+\n schools = json.loads(content)\n for i in schools:\n if i[\"address_1\"]:\n properties = {\n \"name\": i[\"name\"],\n- \"addr_full\": i[\"address_1\"] + \" \" + i[\"address_2\"],\n+ \"street_address\": \", \".join(\n+ filter(None, [i[\"address_1\"], i[\"address_2\"]])\n+ ),\n \"city\": i[\"city\"],\n \"state\": i[\"state\"],\n \"postcode\": i[\"zip_code\"],\n", "issue": "Primrose Schools\nIs generating 1,221 errors. Adding a if statement for `content` should fix it. Could also be turned into a sitemap spider.\n", "before_files": [{"content": "import json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PrimroseSchoolsSpider(scrapy.Spider):\n name = \"primrose_schools\"\n item_attributes = {\"brand\": \"Primrose Schools\", \"brand_wikidata\": \"Q7243677\"}\n allowed_domains = [\"primroseschools.com\"]\n\n start_urls = [\"https://www.primroseschools.com/find-a-school/\"]\n\n def parse(self, response):\n with open(\n \"./locations/searchable_points/us_centroids_50mile_radius.csv\"\n ) as points:\n next(points)\n for point in points:\n row = point.replace(\"\\n\", \"\").split(\",\")\n lati = row[1]\n long = row[2]\n searchurl = \"https://www.primroseschools.com/find-a-school/?search_string=USA&latitude={la}&longitude={lo}\".format(\n la=lati, lo=long\n )\n yield scrapy.Request(\n response.urljoin(searchurl), callback=self.parse_search\n )\n\n def parse_search(self, response):\n content = response.xpath('//script[@type=\"application/json\"]/text()').get()\n schools = json.loads(content)\n for i in schools:\n if i[\"address_1\"]:\n properties = {\n \"name\": i[\"name\"],\n \"addr_full\": i[\"address_1\"] + \" \" + i[\"address_2\"],\n \"city\": i[\"city\"],\n \"state\": i[\"state\"],\n \"postcode\": i[\"zip_code\"],\n \"phone\": i[\"phone\"],\n \"ref\": i[\"id\"],\n \"website\": \"https://www.primroseschools.com\" + i[\"url\"],\n \"lat\": float(i[\"latitude\"]),\n \"lon\": float(i[\"longitude\"]),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/primrose_schools.py"}]} | 1,075 | 218 |
gh_patches_debug_16325 | rasdani/github-patches | git_diff | rasterio__rasterio-670 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rio stack output empty
`rio stack`ing one or more rasters without an explicit band index results in a raster with all nulls
```
$ rio info --tell-me-more tests/data/RGB.byte.tif | jq .stats[0].max
255
$ rio stack tests/data/RGB.byte.tif /tmp/test.tif && \
rio info --tell-me-more /tmp/test.tif | jq .stats[0].max
null
```
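Looking at the write loop in `stack.py` below, the default (no `--bidx`) path appends `src.indexes` for each input, while only explicit `int`/`list` indexes are handled when writing bands, so passing the band indexes explicitly appears to sidestep the empty output. Offered as a workaround observation, not a fix:

```
$ rio stack tests/data/RGB.byte.tif --bidx 1,2,3 -o /tmp/test.tif
```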
</issue>
<code>
[start of rasterio/rio/stack.py]
1 """Commands for operating on bands of datasets."""
2 import logging
3
4 import click
5 from cligj import files_inout_arg, format_opt
6
7 from .helpers import resolve_inout
8 from . import options
9 import rasterio
10 from rasterio.five import zip_longest
11
12
13 # Stack command.
14 @click.command(short_help="Stack a number of bands into a multiband dataset.")
15 @files_inout_arg
16 @options.output_opt
17 @format_opt
18 @options.bidx_mult_opt
19 @options.rgb_opt
20 @options.force_overwrite_opt
21 @options.creation_options
22 @click.pass_context
23 def stack(ctx, files, output, driver, bidx, photometric, force_overwrite,
24 creation_options):
25 """Stack a number of bands from one or more input files into a
26 multiband dataset.
27
28 Input datasets must be of a kind: same data type, dimensions, etc. The
29 output is cloned from the first input.
30
31 By default, rio-stack will take all bands from each input and write them
32 in same order to the output. Optionally, bands for each input may be
33 specified using a simple syntax:
34
35 --bidx N takes the Nth band from the input (first band is 1).
36
37 --bidx M,N,0 takes bands M, N, and O.
38
39 --bidx M..O takes bands M-O, inclusive.
40
41 --bidx ..N takes all bands up to and including N.
42
43 --bidx N.. takes all bands from N to the end.
44
45 Examples, using the Rasterio testing dataset, which produce a copy.
46
47 rio stack RGB.byte.tif -o stacked.tif
48
49 rio stack RGB.byte.tif --bidx 1,2,3 -o stacked.tif
50
51 rio stack RGB.byte.tif --bidx 1..3 -o stacked.tif
52
53 rio stack RGB.byte.tif --bidx ..2 RGB.byte.tif --bidx 3.. -o stacked.tif
54
55 """
56
57 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 2
58 logger = logging.getLogger('rio')
59 try:
60 with rasterio.drivers(CPL_DEBUG=verbosity>2):
61 output, files = resolve_inout(files=files, output=output,
62 force_overwrite=force_overwrite)
63 output_count = 0
64 indexes = []
65 for path, item in zip_longest(files, bidx, fillvalue=None):
66 with rasterio.open(path) as src:
67 src_indexes = src.indexes
68 if item is None:
69 indexes.append(src_indexes)
70 output_count += len(src_indexes)
71 elif '..' in item:
72 start, stop = map(
73 lambda x: int(x) if x else None, item.split('..'))
74 if start is None:
75 start = 1
76 indexes.append(src_indexes[slice(start-1, stop)])
77 output_count += len(src_indexes[slice(start-1, stop)])
78 else:
79 parts = list(map(int, item.split(',')))
80 if len(parts) == 1:
81 indexes.append(parts[0])
82 output_count += 1
83 else:
84 parts = list(parts)
85 indexes.append(parts)
86 output_count += len(parts)
87
88 with rasterio.open(files[0]) as first:
89 kwargs = first.meta
90 kwargs.update(**creation_options)
91 kwargs['transform'] = kwargs.pop('affine')
92
93 kwargs.update(
94 driver=driver,
95 count=output_count)
96
97 if photometric:
98 kwargs['photometric'] = photometric
99
100 with rasterio.open(output, 'w', **kwargs) as dst:
101 dst_idx = 1
102 for path, index in zip(files, indexes):
103 with rasterio.open(path) as src:
104 if isinstance(index, int):
105 data = src.read(index)
106 dst.write(data, dst_idx)
107 dst_idx += 1
108 elif isinstance(index, list):
109 data = src.read(index)
110 dst.write(data, range(dst_idx, dst_idx+len(index)))
111 dst_idx += len(index)
112
113 except Exception:
114 logger.exception("Exception caught during processing")
115 raise click.Abort()
116
[end of rasterio/rio/stack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/rio/stack.py b/rasterio/rio/stack.py
--- a/rasterio/rio/stack.py
+++ b/rasterio/rio/stack.py
@@ -1,4 +1,5 @@
"""Commands for operating on bands of datasets."""
+import collections
import logging
import click
@@ -105,7 +106,7 @@
data = src.read(index)
dst.write(data, dst_idx)
dst_idx += 1
- elif isinstance(index, list):
+ elif isinstance(index, collections.Iterable):
data = src.read(index)
dst.write(data, range(dst_idx, dst_idx+len(index)))
dst_idx += len(index)
| {"golden_diff": "diff --git a/rasterio/rio/stack.py b/rasterio/rio/stack.py\n--- a/rasterio/rio/stack.py\n+++ b/rasterio/rio/stack.py\n@@ -1,4 +1,5 @@\n \"\"\"Commands for operating on bands of datasets.\"\"\"\n+import collections\n import logging\n \n import click\n@@ -105,7 +106,7 @@\n data = src.read(index)\n dst.write(data, dst_idx)\n dst_idx += 1\n- elif isinstance(index, list):\n+ elif isinstance(index, collections.Iterable):\n data = src.read(index)\n dst.write(data, range(dst_idx, dst_idx+len(index)))\n dst_idx += len(index)\n", "issue": "rio stack output empty\n`rio stack`ing one or more rasters without an explicit band index results in a raster with all nulls\n\n```\n$ rio info --tell-me-more tests/data/RGB.byte.tif | jq .stats[0].max\n255\n$ rio stack tests/data/RGB.byte.tif /tmp/test.tif && \\\n rio info --tell-me-more /tmp/test.tif | jq .stats[0].max\nnull\n```\n\n", "before_files": [{"content": "\"\"\"Commands for operating on bands of datasets.\"\"\"\nimport logging\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.five import zip_longest\n\n\n# Stack command.\[email protected](short_help=\"Stack a number of bands into a multiband dataset.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_mult_opt\[email protected]_opt\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef stack(ctx, files, output, driver, bidx, photometric, force_overwrite,\n creation_options):\n \"\"\"Stack a number of bands from one or more input files into a\n multiband dataset.\n\n Input datasets must be of a kind: same data type, dimensions, etc. The\n output is cloned from the first input.\n\n By default, rio-stack will take all bands from each input and write them\n in same order to the output. Optionally, bands for each input may be\n specified using a simple syntax:\n\n --bidx N takes the Nth band from the input (first band is 1).\n\n --bidx M,N,0 takes bands M, N, and O.\n\n --bidx M..O takes bands M-O, inclusive.\n\n --bidx ..N takes all bands up to and including N.\n\n --bidx N.. takes all bands from N to the end.\n\n Examples, using the Rasterio testing dataset, which produce a copy.\n\n rio stack RGB.byte.tif -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1,2,3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1..3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx ..2 RGB.byte.tif --bidx 3.. -o stacked.tif\n\n \"\"\"\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 2\n logger = logging.getLogger('rio')\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output, files = resolve_inout(files=files, output=output,\n force_overwrite=force_overwrite)\n output_count = 0\n indexes = []\n for path, item in zip_longest(files, bidx, fillvalue=None):\n with rasterio.open(path) as src:\n src_indexes = src.indexes\n if item is None:\n indexes.append(src_indexes)\n output_count += len(src_indexes)\n elif '..' 
in item:\n start, stop = map(\n lambda x: int(x) if x else None, item.split('..'))\n if start is None:\n start = 1\n indexes.append(src_indexes[slice(start-1, stop)])\n output_count += len(src_indexes[slice(start-1, stop)])\n else:\n parts = list(map(int, item.split(',')))\n if len(parts) == 1:\n indexes.append(parts[0])\n output_count += 1\n else:\n parts = list(parts)\n indexes.append(parts)\n output_count += len(parts)\n\n with rasterio.open(files[0]) as first:\n kwargs = first.meta\n kwargs.update(**creation_options)\n kwargs['transform'] = kwargs.pop('affine')\n\n kwargs.update(\n driver=driver,\n count=output_count)\n\n if photometric:\n kwargs['photometric'] = photometric\n\n with rasterio.open(output, 'w', **kwargs) as dst:\n dst_idx = 1\n for path, index in zip(files, indexes):\n with rasterio.open(path) as src:\n if isinstance(index, int):\n data = src.read(index)\n dst.write(data, dst_idx)\n dst_idx += 1\n elif isinstance(index, list):\n data = src.read(index)\n dst.write(data, range(dst_idx, dst_idx+len(index)))\n dst_idx += len(index)\n\n except Exception:\n logger.exception(\"Exception caught during processing\")\n raise click.Abort()\n", "path": "rasterio/rio/stack.py"}]} | 1,761 | 156 |
gh_patches_debug_25922 | rasdani/github-patches | git_diff | freedomofpress__securedrop-5834 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restoring tarball to Focal shows error for v2
## Description
When restoring a v2-only Xenial backup tarball to a v3-only Focal instance, the restore action fails. It fails even if the admin explicitly requests that the tor config be preserved as-is.
## Steps to Reproduce
I used libvirt-based VMs for testing, and performed all admin actions from a virtualized Tails v4.16 VM.
1. Create a v2-only backup tarball from a Xenial host.
2. Perform a clean install of Focal, with v3-only vars.
3. Attempt to restore the backup: `./securedrop-admin --force restore --preserve-tor-config ~/Persistent/backups/xenial-v2-only/sd-backup-2021-02-26--15-57-06.tar.gz`
## Expected Behavior
Restore action completes, old URLs are restored, and I can proceed with regenerating new v3 URL and finalizing the Xenial -> Focal migration.
## Actual Behavior
Restore action fails. Even when I include the `--preserve-tor-config` flag, it still fails.
## Comments
On one hand, the failure is expected, since Focal is v3-only; but in the context of a migration from Xenial, it's likely that admins will be migrating to Focal from a recently created backup, so I recommend we defer the fail-closed behavior to a subsequent release. That'd have bearing on WIP docs changes in e.g. https://github.com/freedomofpress/securedrop-docs/pull/133
The above is a policy question, but this ticket is also pointing out some bugs that should be fixed. For one, `--preserve-tor-config` is not honored, and it should be.
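To make the version mismatch concrete, here is a small sketch that applies the same `HiddenServiceDir` classification used by `compare_torrc.py` (included below) to a v2-only backup and a v3-only server; the directory names are invented for the example and are not taken from a real install:
```python
import re

# Same pattern as get_tor_versions() in compare_torrc.py below: the last
# path component of each HiddenServiceDir decides whether it is v3.
service_re = re.compile(r"HiddenServiceDir\s+(?:.*)/(.*)")

def versions(torrc_lines):
    found = set()
    for line in torrc_lines:
        m = service_re.match(line)
        if m:
            found.add(3 if "v3" in m.group(1) else 2)
    return found

backup_versions = versions(["HiddenServiceDir /var/lib/tor/services/source"])    # {2}
server_versions = versions(["HiddenServiceDir /var/lib/tor/services/sourcev3"])  # {3}

# compare_torrc.py exits non-zero here: the sets differ, v3 is not present in
# both, and the current restore flow never consults --preserve-tor-config.
print(backup_versions == server_versions)             # False
print(3 in backup_versions and 3 in server_versions)  # False
```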
</issue>
<code>
[start of install_files/ansible-base/roles/restore/files/compare_torrc.py]
1 #!/usr/bin/env python
2
3 #
4 # Compares Tor configurations on the app server and from a backup. If
5 # restoring the backup would alter the server's Tor configuration,
6 # print a warning and exit.
7 #
8
9 from __future__ import print_function
10
11 import os
12 import re
13 import sys
14
15
16 def get_tor_versions(path):
17 """
18 Determine which service versions are offered in the given torrc.
19 """
20 service_re = re.compile(r"HiddenServiceDir\s+(?:.*)/(.*)")
21 versions = set([])
22 with open(path) as f:
23 for line in f:
24 m = service_re.match(line)
25 if m:
26 service = m.group(1)
27 if "v3" in service:
28 versions.add(3)
29 else:
30 versions.add(2)
31
32 return versions
33
34
35 def strset(s):
36 """
37 Sort the given set and join members with "and".
38 """
39 return " and ".join(str(v) for v in sorted(s))
40
41
42 if __name__ == "__main__":
43 tempdir = sys.argv[1]
44
45 server_versions = get_tor_versions(os.path.join(tempdir, "app/etc/tor/torrc"))
46 backup_versions = get_tor_versions(os.path.join(tempdir, "backup/etc/tor/torrc"))
47
48 if server_versions == backup_versions:
49 print("The Tor configuration in the backup matches the server.")
50 sys.exit(0)
51
52 if (3 in server_versions) and (3 in backup_versions):
53 print("V3 services detected in backup and server - proceeding with v3-only restore")
54 sys.exit(0)
55
56 print(
57 "The Tor configuration on the app server offers version {} services.".format(
58 strset(server_versions)
59 )
60 )
61
62 print(
63 "The Tor configuration in this backup offers version {} services.".format(
64 strset(backup_versions)
65 )
66 )
67
68 print("\nRestoring a backup with a different Tor configuration than the server ")
69 print("is currently unsupported. If you require technical assistance, please ")
70 print("contact the SecureDrop team via the support portal or at ")
71 print("[email protected].")
72
73 sys.exit(1)
74
[end of install_files/ansible-base/roles/restore/files/compare_torrc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/install_files/ansible-base/roles/restore/files/compare_torrc.py b/install_files/ansible-base/roles/restore/files/compare_torrc.py
--- a/install_files/ansible-base/roles/restore/files/compare_torrc.py
+++ b/install_files/ansible-base/roles/restore/files/compare_torrc.py
@@ -46,11 +46,11 @@
backup_versions = get_tor_versions(os.path.join(tempdir, "backup/etc/tor/torrc"))
if server_versions == backup_versions:
- print("The Tor configuration in the backup matches the server.")
+ print("Valid configuration: the Tor configuration in the backup matches the server.")
sys.exit(0)
if (3 in server_versions) and (3 in backup_versions):
- print("V3 services detected in backup and server - proceeding with v3-only restore")
+ print("Valid configuration: V3 services only`")
sys.exit(0)
print(
@@ -65,9 +65,11 @@
)
)
- print("\nRestoring a backup with a different Tor configuration than the server ")
- print("is currently unsupported. If you require technical assistance, please ")
- print("contact the SecureDrop team via the support portal or at ")
+ print("\nIncompatible configuration: Restoring a backup including a different ")
+ print("Tor configuration than the server Tor configuration is unsupported. ")
+ print("Optionally, use --preserve-tor-config to apply a data-only backup.")
+ print("If you require technical assistance, please contact the ")
+ print("SecureDrop team via the support portal or at ")
print("[email protected].")
sys.exit(1)
| {"golden_diff": "diff --git a/install_files/ansible-base/roles/restore/files/compare_torrc.py b/install_files/ansible-base/roles/restore/files/compare_torrc.py\n--- a/install_files/ansible-base/roles/restore/files/compare_torrc.py\n+++ b/install_files/ansible-base/roles/restore/files/compare_torrc.py\n@@ -46,11 +46,11 @@\n backup_versions = get_tor_versions(os.path.join(tempdir, \"backup/etc/tor/torrc\"))\n \n if server_versions == backup_versions:\n- print(\"The Tor configuration in the backup matches the server.\")\n+ print(\"Valid configuration: the Tor configuration in the backup matches the server.\")\n sys.exit(0)\n \n if (3 in server_versions) and (3 in backup_versions):\n- print(\"V3 services detected in backup and server - proceeding with v3-only restore\")\n+ print(\"Valid configuration: V3 services only`\")\n sys.exit(0)\n \n print(\n@@ -65,9 +65,11 @@\n )\n )\n \n- print(\"\\nRestoring a backup with a different Tor configuration than the server \")\n- print(\"is currently unsupported. If you require technical assistance, please \")\n- print(\"contact the SecureDrop team via the support portal or at \")\n+ print(\"\\nIncompatible configuration: Restoring a backup including a different \")\n+ print(\"Tor configuration than the server Tor configuration is unsupported. \")\n+ print(\"Optionally, use --preserve-tor-config to apply a data-only backup.\")\n+ print(\"If you require technical assistance, please contact the \")\n+ print(\"SecureDrop team via the support portal or at \")\n print(\"[email protected].\")\n \n sys.exit(1)\n", "issue": "Restoring tarball to Focal shows error for v2\n## Description\r\n\r\nWhen restoring a v2-only Xenial backup tarball to a v3-only Focal instance, the restore action fails. It fails even if the admin explicitly requests that the tor config be preserved as-is. \r\n\r\n## Steps to Reproduce\r\nI used libvirt-based VMs for testing, and performed all admin actions from a virtualized Tails v4.16 VM.\r\n\r\n1. Create a v2-only backup tarball from a Xenial host.\r\n2. Perform a clean install of Focal, with v3-only vars.\r\n3. Attempt to restore the backup: `./securedrop-admin --force restore --preserve-tor-config ~/Persistent/backups/xenial-v2-only/sd-backup-2021-02-26--15-57-06.tar.gz`\r\n\r\n## Expected Behavior\r\n\r\nRestore action completes, old URLs are restored, and I can proceed with regenerating new v3 URL and finalizing the Xenial -> Focal migration. \r\n\r\n\r\n## Actual Behavior\r\n\r\nRestore action fails. Even when I include the `--preserve-tor-config` flag, it still fails. \r\n\r\n## Comments\r\nOn one hand, the failure is expected, since Focal is v3-only, but in the context of a migration from Xenial, it's likely we're going to have admins migrating to Focal from a recently created backup, so I recommend we defer the fail-closed behavior to a subsequent release. That'd have bearing on WIP docs changes in e..g. https://github.com/freedomofpress/securedrop-docs/pull/133\r\n\r\nThe above is a policy question, but this ticket is also pointing out some bugs that should be fixed. For one, `--preserve-tor-config` is not honored, and it should be.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n#\n# Compares Tor configurations on the app server and from a backup. 
If\n# restoring the backup would alter the server's Tor configuration,\n# print a warning and exit.\n#\n\nfrom __future__ import print_function\n\nimport os\nimport re\nimport sys\n\n\ndef get_tor_versions(path):\n \"\"\"\n Determine which service versions are offered in the given torrc.\n \"\"\"\n service_re = re.compile(r\"HiddenServiceDir\\s+(?:.*)/(.*)\")\n versions = set([])\n with open(path) as f:\n for line in f:\n m = service_re.match(line)\n if m:\n service = m.group(1)\n if \"v3\" in service:\n versions.add(3)\n else:\n versions.add(2)\n\n return versions\n\n\ndef strset(s):\n \"\"\"\n Sort the given set and join members with \"and\".\n \"\"\"\n return \" and \".join(str(v) for v in sorted(s))\n\n\nif __name__ == \"__main__\":\n tempdir = sys.argv[1]\n\n server_versions = get_tor_versions(os.path.join(tempdir, \"app/etc/tor/torrc\"))\n backup_versions = get_tor_versions(os.path.join(tempdir, \"backup/etc/tor/torrc\"))\n\n if server_versions == backup_versions:\n print(\"The Tor configuration in the backup matches the server.\")\n sys.exit(0)\n\n if (3 in server_versions) and (3 in backup_versions):\n print(\"V3 services detected in backup and server - proceeding with v3-only restore\")\n sys.exit(0)\n\n print(\n \"The Tor configuration on the app server offers version {} services.\".format(\n strset(server_versions)\n )\n )\n\n print(\n \"The Tor configuration in this backup offers version {} services.\".format(\n strset(backup_versions)\n )\n )\n\n print(\"\\nRestoring a backup with a different Tor configuration than the server \")\n print(\"is currently unsupported. If you require technical assistance, please \")\n print(\"contact the SecureDrop team via the support portal or at \")\n print(\"[email protected].\")\n\n sys.exit(1)\n", "path": "install_files/ansible-base/roles/restore/files/compare_torrc.py"}]} | 1,561 | 384 |
gh_patches_debug_20393 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-405 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add doc page on authors and credits
It would be really helpful to have a page in our `docs` directory that lists the Coordinating Committee members and a full list of authors of the code, along with other credits. Some examples are Astropy's [Authors and Credits page](http://docs.astropy.org/en/stable/credits.html), and SunPy's [The Project](http://sunpy.org/team.html). The list of code contributors can already be accessed from our GitHub repository and the commit log; however, this often does not include full names. We might be able to find a way to automate this, though that's low priority. We should do this prior to our 0.1 release.
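On the automation point, a rough sketch of what pulling the contributor list out of git could look like (an illustration only: it has to be run inside a clone of the repository, and filling in missing full names would rely on something like a `.mailmap` file, which git applies automatically but which PlasmaPy is not assumed to have yet):
```python
import subprocess

def contributors():
    """Return (commit_count, name, email) tuples from the local git history."""
    out = subprocess.run(
        ["git", "shortlog", "-sne", "HEAD"],   # summary, numbered, with emails
        capture_output=True, text=True, check=True,
    ).stdout
    entries = []
    for line in out.splitlines():
        count, rest = line.strip().split("\t", 1)
        name, _, email = rest.rpartition(" <")
        entries.append((int(count), name, email.rstrip(">")))
    return sorted(entries, reverse=True)

for n_commits, name, _email in contributors():
    print(f"{name} ({n_commits} commits)")
```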
To help with the organization, we should probably create an `about` subdirectory that will include pages about the PlasmaPy project as a whole, including this one. The `docs/stability.rst` page could go in this directory too.
</issue>
<code>
[start of plasmapy/constants/__init__.py]
1 """Physical and mathematical constants."""
2
3 from numpy import pi
4
5 from astropy.constants.si import (
6 e,
7 mu0,
8 eps0,
9 k_B,
10 c,
11 G,
12 h,
13 hbar,
14 m_p,
15 m_n,
16 m_e,
17 u,
18 sigma_sb,
19 N_A,
20 R,
21 Ryd,
22 a0,
23 muB,
24 sigma_T,
25 au,
26 pc,
27 kpc,
28 g0,
29 L_sun,
30 M_sun,
31 R_sun,
32 M_earth,
33 R_earth,
34 )
35
36 from astropy.constants import atm
37
[end of plasmapy/constants/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/plasmapy/constants/__init__.py b/plasmapy/constants/__init__.py
--- a/plasmapy/constants/__init__.py
+++ b/plasmapy/constants/__init__.py
@@ -1,4 +1,8 @@
-"""Physical and mathematical constants."""
+"""
+Contains physical and mathematical constants commonly used in plasma
+physics.
+
+"""
from numpy import pi
@@ -34,3 +38,26 @@
)
from astropy.constants import atm
+
+# The following code is modified from astropy.constants to produce a
+# table containing information on the constants contained with PlasmaPy.
+# Mathematical constants can be just entered.
+
+_lines = [
+ 'The following constants are available:\n',
+ '========== ================= ================ ============================================',
+ 'Name Value Units Description',
+ '========== ================= ================ ============================================',
+ " pi 3.141592653589793 Ratio of circumference to diameter of circle",
+]
+
+_constants = [eval(item) for item in dir() if item[0] != '_' and item != 'pi']
+for _const in _constants:
+ _lines.append('{0:^10} {1:^17.12g} {2:^16} {3}'
+ .format(_const.abbrev, _const.value, _const._unit_string, _const.name))
+
+_lines.append(_lines[1])
+
+__doc__ += '\n'.join(_lines)
+
+del _lines, _const, _constants
| {"golden_diff": "diff --git a/plasmapy/constants/__init__.py b/plasmapy/constants/__init__.py\n--- a/plasmapy/constants/__init__.py\n+++ b/plasmapy/constants/__init__.py\n@@ -1,4 +1,8 @@\n-\"\"\"Physical and mathematical constants.\"\"\"\n+\"\"\"\n+Contains physical and mathematical constants commonly used in plasma\n+physics.\n+\n+\"\"\"\n \n from numpy import pi\n \n@@ -34,3 +38,26 @@\n )\n \n from astropy.constants import atm\n+\n+# The following code is modified from astropy.constants to produce a\n+# table containing information on the constants contained with PlasmaPy.\n+# Mathematical constants can be just entered.\n+\n+_lines = [\n+ 'The following constants are available:\\n',\n+ '========== ================= ================ ============================================',\n+ 'Name Value Units Description',\n+ '========== ================= ================ ============================================',\n+ \" pi 3.141592653589793 Ratio of circumference to diameter of circle\",\n+]\n+\n+_constants = [eval(item) for item in dir() if item[0] != '_' and item != 'pi']\n+for _const in _constants:\n+ _lines.append('{0:^10} {1:^17.12g} {2:^16} {3}'\n+ .format(_const.abbrev, _const.value, _const._unit_string, _const.name))\n+\n+_lines.append(_lines[1])\n+\n+__doc__ += '\\n'.join(_lines)\n+\n+del _lines, _const, _constants\n", "issue": "Add doc page on authors and credits\nIt would be really helpful to have a page in our `docs` directory that lists the Coordinating Committee members and a full list of authors of the code, along with other credits. Some examples are Astropy's [Authors and Credits page](http://docs.astropy.org/en/stable/credits.html), and SunPy's [The Project](http://sunpy.org/team.html). The list of code contributors can already be accessed from our GitHub repository and the commit log; however, this often does not include full names. We might be able to find a way to automate this, though that's low priority. We should do this prior to our 0.1 release.\r\n\r\nTo help with the organization, we should probably create an `about` subdirectory that will include pages about the PlasmaPy project as a whole, including this one. The `docs/stability.rst` page could go in this directory too.\n", "before_files": [{"content": "\"\"\"Physical and mathematical constants.\"\"\"\n\nfrom numpy import pi\n\nfrom astropy.constants.si import (\n e,\n mu0,\n eps0,\n k_B,\n c,\n G,\n h,\n hbar,\n m_p,\n m_n,\n m_e,\n u,\n sigma_sb,\n N_A,\n R,\n Ryd,\n a0,\n muB,\n sigma_T,\n au,\n pc,\n kpc,\n g0,\n L_sun,\n M_sun,\n R_sun,\n M_earth,\n R_earth,\n)\n\nfrom astropy.constants import atm\n", "path": "plasmapy/constants/__init__.py"}]} | 944 | 351 |
gh_patches_debug_20348 | rasdani/github-patches | git_diff | google__personfinder-397 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Internal server error on multiview.py with invalid record ID
multiview.py returns an Internal Server Error when one of the specified IDs is invalid. It should return a 404 (or a similar client-facing error) instead.
```
AttributeError: 'NoneType' object has no attribute 'person_record_id'
at get (multiview.py:47)
at serve (main.py:622)
at get (main.py:647)
```
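For context, the failure is straightforward to see in isolation: `Person.get()` returns `None` for an unknown record ID, and the view dereferences that value without a check. A minimal sketch (the record IDs are invented; the guard at the end mirrors what the actual patch later in this entry does):
```python
# Minimal stand-in for the datastore lookup: like Person.get(), it returns
# None when no record matches the requested ID instead of raising.
_records = {"example.org/person.1": {"person_record_id": "example.org/person.1"}}

def person_get(record_id):
    return _records.get(record_id)

p = person_get("example.org/no-such-id")   # -> None, not an exception
try:
    p.person_record_id                      # the same failure as the traceback
except AttributeError as exc:
    print(exc)                              # 'NoneType' object has no attribute ...

# Checking for None before touching the record turns the 500 into a 404.
if p is None:
    print("404: This person's entry does not exist or has been deleted.")
```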
</issue>
<code>
[start of app/multiview.py]
1 #!/usr/bin/python2.7
2 # Copyright 2010 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from model import *
17 from utils import *
18 import pfif
19 import reveal
20 import subscribe
21 import view
22
23 from django.utils.translation import ugettext as _
24
25 # Fields to show for side-by-side comparison.
26 COMPARE_FIELDS = pfif.PFIF_1_4.fields['person'] + ['primary_full_name']
27
28
29 class Handler(BaseHandler):
30 def get(self):
31 # To handle multiple persons, we create a single object where
32 # each property is a list of values, one for each person.
33 # This makes page rendering easier.
34 person = dict([(prop, []) for prop in COMPARE_FIELDS])
35 any_person = dict([(prop, None) for prop in COMPARE_FIELDS])
36
37 # Get all persons from db.
38 # TODO: Can later optimize to use fewer DB calls.
39 for i in [1, 2, 3]:
40 id = self.request.get('id%d' % i)
41 if not id:
42 break
43 p = Person.get(self.repo, id)
44 sanitize_urls(p)
45
46 for prop in COMPARE_FIELDS:
47 val = getattr(p, prop)
48 if prop == 'sex': # convert enum value to localized text
49 val = get_person_sex_text(p)
50 person[prop].append(val)
51 any_person[prop] = any_person[prop] or val
52
53 # Compute the local times for the date fields on the person and format.
54 person['source_datetime_local_string'] = map(
55 self.to_formatted_local_datetime, person['source_date'])
56
57 # Check if private info should be revealed.
58 content_id = 'multiview:' + ','.join(person['person_record_id'])
59 reveal_url = reveal.make_reveal_url(self, content_id)
60 show_private_info = reveal.verify(content_id, self.params.signature)
61
62 standalone = self.request.get('standalone')
63
64 # TODO: Handle no persons found.
65
66 person['profile_pages'] = [view.get_profile_pages(profile_urls, self)
67 for profile_urls in person['profile_urls']]
68 any_person['profile_pages'] = any(person['profile_pages'])
69
70 # Note: we're not showing notes and linked persons information
71 # here at the moment.
72 self.render('multiview.html',
73 person=person, any=any_person, standalone=standalone,
74 cols=len(person['full_name']) + 1,
75 onload_function='view_page_loaded()', markdup=True,
76 show_private_info=show_private_info, reveal_url=reveal_url)
77
78 def post(self):
79 if not self.params.text:
80 return self.error(
81 200, _('Message is required. Please go back and try again.'))
82
83 if not self.params.author_name:
84 return self.error(
85 200, _('Your name is required in the "About you" section. Please go back and try again.'))
86
87 # TODO: To reduce possible abuse, we currently limit to 3 person
88 # match. We could guard using e.g. an XSRF token, which I don't know how
89 # to build in GAE.
90
91 ids = set()
92 for i in [1, 2, 3]:
93 id = getattr(self.params, 'id%d' % i)
94 if not id:
95 break
96 ids.add(id)
97
98 if len(ids) > 1:
99 notes = []
100 for person_id in ids:
101 person = Person.get(self.repo, person_id)
102 person_notes = []
103 for other_id in ids - set([person_id]):
104 note = Note.create_original(
105 self.repo,
106 entry_date=get_utcnow(),
107 person_record_id=person_id,
108 linked_person_record_id=other_id,
109 text=self.params.text,
110 author_name=self.params.author_name,
111 author_phone=self.params.author_phone,
112 author_email=self.params.author_email,
113 source_date=get_utcnow())
114 person_notes.append(note)
115 # Notify person's subscribers of all new duplicates. We do not
116 # follow links since each Person record in the ids list gets its
117 # own note. However, 1) when > 2 records are marked as
118 # duplicates, subscribers will still receive multiple
119 # notifications, and 2) subscribers to already-linked Persons
120 # will not be notified of the new link.
121 subscribe.send_notifications(self, person, person_notes, False)
122 notes += person_notes
123 # Write all notes to store
124 db.put(notes)
125 self.redirect('/view', id=self.params.id1)
126
[end of app/multiview.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/app/multiview.py b/app/multiview.py
--- a/app/multiview.py
+++ b/app/multiview.py
@@ -41,6 +41,11 @@
if not id:
break
p = Person.get(self.repo, id)
+ if not p:
+ return self.error(
+ 404,
+ _("This person's entry does not exist or has been "
+ "deleted."))
sanitize_urls(p)
for prop in COMPARE_FIELDS:
@@ -103,7 +108,7 @@
for other_id in ids - set([person_id]):
note = Note.create_original(
self.repo,
- entry_date=get_utcnow(),
+ entry_date=get_utcnow(),
person_record_id=person_id,
linked_person_record_id=other_id,
text=self.params.text,
| {"golden_diff": "diff --git a/app/multiview.py b/app/multiview.py\n--- a/app/multiview.py\n+++ b/app/multiview.py\n@@ -41,6 +41,11 @@\n if not id:\n break\n p = Person.get(self.repo, id)\n+ if not p:\n+ return self.error(\n+ 404,\n+ _(\"This person's entry does not exist or has been \"\n+ \"deleted.\"))\n sanitize_urls(p)\n \n for prop in COMPARE_FIELDS:\n@@ -103,7 +108,7 @@\n for other_id in ids - set([person_id]):\n note = Note.create_original(\n self.repo,\n- entry_date=get_utcnow(), \n+ entry_date=get_utcnow(),\n person_record_id=person_id,\n linked_person_record_id=other_id,\n text=self.params.text,\n", "issue": "Internal server error on multiview.py with invalid record ID\nmultiview.py returns Internal server error when one of the specified IDs is invalid. It should return 404 or something instead.\r\n\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'person_record_id'\r\nat get (multiview.py:47)\r\nat serve (main.py:622)\r\nat get (main.py:647)\r\n```\n", "before_files": [{"content": "#!/usr/bin/python2.7\n# Copyright 2010 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom model import *\nfrom utils import *\nimport pfif\nimport reveal\nimport subscribe\nimport view\n\nfrom django.utils.translation import ugettext as _\n\n# Fields to show for side-by-side comparison.\nCOMPARE_FIELDS = pfif.PFIF_1_4.fields['person'] + ['primary_full_name']\n\n\nclass Handler(BaseHandler):\n def get(self):\n # To handle multiple persons, we create a single object where\n # each property is a list of values, one for each person.\n # This makes page rendering easier.\n person = dict([(prop, []) for prop in COMPARE_FIELDS])\n any_person = dict([(prop, None) for prop in COMPARE_FIELDS])\n\n # Get all persons from db.\n # TODO: Can later optimize to use fewer DB calls.\n for i in [1, 2, 3]:\n id = self.request.get('id%d' % i)\n if not id:\n break\n p = Person.get(self.repo, id)\n sanitize_urls(p)\n\n for prop in COMPARE_FIELDS:\n val = getattr(p, prop)\n if prop == 'sex': # convert enum value to localized text\n val = get_person_sex_text(p)\n person[prop].append(val)\n any_person[prop] = any_person[prop] or val\n\n # Compute the local times for the date fields on the person and format.\n person['source_datetime_local_string'] = map(\n self.to_formatted_local_datetime, person['source_date'])\n\n # Check if private info should be revealed.\n content_id = 'multiview:' + ','.join(person['person_record_id'])\n reveal_url = reveal.make_reveal_url(self, content_id)\n show_private_info = reveal.verify(content_id, self.params.signature)\n\n standalone = self.request.get('standalone')\n\n # TODO: Handle no persons found.\n\n person['profile_pages'] = [view.get_profile_pages(profile_urls, self)\n for profile_urls in person['profile_urls']]\n any_person['profile_pages'] = any(person['profile_pages'])\n\n # Note: we're not showing notes and linked persons information\n # here at the moment.\n self.render('multiview.html',\n person=person, any=any_person, standalone=standalone,\n 
cols=len(person['full_name']) + 1,\n onload_function='view_page_loaded()', markdup=True,\n show_private_info=show_private_info, reveal_url=reveal_url)\n\n def post(self):\n if not self.params.text:\n return self.error(\n 200, _('Message is required. Please go back and try again.'))\n\n if not self.params.author_name:\n return self.error(\n 200, _('Your name is required in the \"About you\" section. Please go back and try again.'))\n\n # TODO: To reduce possible abuse, we currently limit to 3 person\n # match. We could guard using e.g. an XSRF token, which I don't know how\n # to build in GAE.\n\n ids = set()\n for i in [1, 2, 3]:\n id = getattr(self.params, 'id%d' % i)\n if not id:\n break\n ids.add(id)\n\n if len(ids) > 1:\n notes = []\n for person_id in ids:\n person = Person.get(self.repo, person_id)\n person_notes = []\n for other_id in ids - set([person_id]):\n note = Note.create_original(\n self.repo,\n entry_date=get_utcnow(), \n person_record_id=person_id,\n linked_person_record_id=other_id,\n text=self.params.text,\n author_name=self.params.author_name,\n author_phone=self.params.author_phone,\n author_email=self.params.author_email,\n source_date=get_utcnow())\n person_notes.append(note)\n # Notify person's subscribers of all new duplicates. We do not\n # follow links since each Person record in the ids list gets its\n # own note. However, 1) when > 2 records are marked as\n # duplicates, subscribers will still receive multiple\n # notifications, and 2) subscribers to already-linked Persons\n # will not be notified of the new link.\n subscribe.send_notifications(self, person, person_notes, False)\n notes += person_notes\n # Write all notes to store\n db.put(notes)\n self.redirect('/view', id=self.params.id1)\n", "path": "app/multiview.py"}]} | 1,996 | 198 |
gh_patches_debug_59251 | rasdani/github-patches | git_diff | ephios-dev__ephios-639 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PWA does not respect orientation lock on Android
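A note on the likely mechanism, since the report itself is terse: the manifest served by `pwa.py` below declares `"orientation": "any"`, and an explicit orientation member in a Web App Manifest is treated as the installed app's own orientation preference on Android, which presumably is why the system rotation lock is ignored. A minimal sketch of the same dictionary with the member simply left out (matching the patch at the end of this entry):
```python
# Sketch only: the manifest from pwa.py below without an "orientation" member,
# so the installed PWA falls back to the device's own rotation setting.
manifest_json = {
    "name": "ephios",
    "short_name": "ephios",
    "start_url": "/",
    "display": "standalone",
    "scope": "/",
    "background_color": "#fff",
    "theme_color": "#000",
}
```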
</issue>
<code>
[start of ephios/core/views/pwa.py]
1 import functools
2
3 from django.conf import settings
4 from django.contrib.staticfiles import finders
5 from django.http import HttpResponse, JsonResponse
6 from django.shortcuts import render
7 from django.utils.translation import get_language
8
9
10 def manifest(request):
11 manifest_json = {
12 "name": "ephios",
13 "short_name": "ephios",
14 "description": "ephios manages events for medical services",
15 "start_url": "/",
16 "display": "standalone",
17 "scope": "/",
18 "orientation": "any",
19 "background_color": "#fff",
20 "theme_color": "#000",
21 "status_bar": "default",
22 "dir": "auto",
23 "icons": settings.PWA_APP_ICONS,
24 "lang": get_language(),
25 }
26 response = JsonResponse(manifest_json)
27 response["Service-Worker-Allowed"] = "/"
28 return response
29
30
31 @functools.lru_cache
32 def serviceworker_js():
33 with open(finders.find("ephios/js/serviceworker.js"), "rb") as sw_js:
34 return sw_js.read()
35
36
37 def serviceworker(request):
38 return HttpResponse(
39 serviceworker_js(),
40 content_type="application/javascript",
41 )
42
43
44 def offline(request):
45 return render(request, "offline.html")
46
[end of ephios/core/views/pwa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/ephios/core/views/pwa.py b/ephios/core/views/pwa.py
--- a/ephios/core/views/pwa.py
+++ b/ephios/core/views/pwa.py
@@ -15,7 +15,6 @@
"start_url": "/",
"display": "standalone",
"scope": "/",
- "orientation": "any",
"background_color": "#fff",
"theme_color": "#000",
"status_bar": "default",
| {"golden_diff": "diff --git a/ephios/core/views/pwa.py b/ephios/core/views/pwa.py\n--- a/ephios/core/views/pwa.py\n+++ b/ephios/core/views/pwa.py\n@@ -15,7 +15,6 @@\n \"start_url\": \"/\",\n \"display\": \"standalone\",\n \"scope\": \"/\",\n- \"orientation\": \"any\",\n \"background_color\": \"#fff\",\n \"theme_color\": \"#000\",\n \"status_bar\": \"default\",\n", "issue": "PWA does not respect orientation lock on Android\n\n", "before_files": [{"content": "import functools\n\nfrom django.conf import settings\nfrom django.contrib.staticfiles import finders\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import render\nfrom django.utils.translation import get_language\n\n\ndef manifest(request):\n manifest_json = {\n \"name\": \"ephios\",\n \"short_name\": \"ephios\",\n \"description\": \"ephios manages events for medical services\",\n \"start_url\": \"/\",\n \"display\": \"standalone\",\n \"scope\": \"/\",\n \"orientation\": \"any\",\n \"background_color\": \"#fff\",\n \"theme_color\": \"#000\",\n \"status_bar\": \"default\",\n \"dir\": \"auto\",\n \"icons\": settings.PWA_APP_ICONS,\n \"lang\": get_language(),\n }\n response = JsonResponse(manifest_json)\n response[\"Service-Worker-Allowed\"] = \"/\"\n return response\n\n\[email protected]_cache\ndef serviceworker_js():\n with open(finders.find(\"ephios/js/serviceworker.js\"), \"rb\") as sw_js:\n return sw_js.read()\n\n\ndef serviceworker(request):\n return HttpResponse(\n serviceworker_js(),\n content_type=\"application/javascript\",\n )\n\n\ndef offline(request):\n return render(request, \"offline.html\")\n", "path": "ephios/core/views/pwa.py"}]} | 909 | 112 |
gh_patches_debug_31130 | rasdani/github-patches | git_diff | kedro-org__kedro-1789 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add parameters to `%reload_kedro` line magic
## Description
Currently you cannot pass things like `env` or `extra_params` via the line magic, but you can by importing the function.
https://github.com/kedro-org/kedro/blob/5ae97cfb70e5b0d4490132847977d482f13c840f/kedro/extras/extensions/ipython.py#L38
Why don't we introduce feature parity here?
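To make the gap concrete, here is a sketch against the current signature of `reload_kedro` (shown in `ipython.py` below); the project path, environment name and parameter key are placeholders, not values from a real project:
```python
# Inside an IPython / Jupyter session started for a Kedro project.
from kedro.extras.extensions.ipython import reload_kedro

# Works today: the import route exposes every argument.
reload_kedro(
    "/path/to/my-kedro-project",                 # placeholder path
    env="staging",                               # placeholder environment
    extra_params={"model.learning_rate": 0.01},  # placeholder override
)

# The line magic, by contrast, only receives the line as a single string,
# which is bound to `path`:
#
#   %reload_kedro /path/to/my-kedro-project
#
# so there is currently no way to express env or extra_params in that form.
```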
</issue>
<code>
[start of kedro/extras/extensions/ipython.py]
1 # pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals
2 """
3 This script creates an IPython extension to load Kedro-related variables in
4 local scope.
5 """
6 import logging
7 import sys
8 from pathlib import Path
9 from typing import Any, Dict
10
11 logger = logging.getLogger(__name__)
12 default_project_path = Path.cwd()
13
14
15 def _remove_cached_modules(package_name):
16 to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]
17 # `del` is used instead of `reload()` because: If the new version of a module does not
18 # define a name that was defined by the old version, the old definition remains.
19 for module in to_remove:
20 del sys.modules[module] # pragma: no cover
21
22
23 def _find_kedro_project(current_dir: Path): # pragma: no cover
24 from kedro.framework.startup import _is_project
25
26 while current_dir != current_dir.parent:
27 if _is_project(current_dir):
28 return current_dir
29 current_dir = current_dir.parent
30
31 return None
32
33
34 def reload_kedro(
35 path: str = None, env: str = None, extra_params: Dict[str, Any] = None
36 ):
37 """Line magic which reloads all Kedro default variables.
38 Setting the path will also make it default for subsequent calls.
39 """
40 from IPython import get_ipython
41 from IPython.core.magic import needs_local_scope, register_line_magic
42
43 from kedro.framework.cli import load_entry_points
44 from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import
45 from kedro.framework.project import configure_project, pipelines
46 from kedro.framework.session import KedroSession
47 from kedro.framework.startup import bootstrap_project
48
49 # If a path is provided, set it as default for subsequent calls
50 global default_project_path
51 if path:
52 default_project_path = Path(path).expanduser().resolve()
53 logger.info("Updated path to Kedro project: %s", default_project_path)
54 else:
55 logger.info("No path argument was provided. Using: %s", default_project_path)
56
57 metadata = bootstrap_project(default_project_path)
58 _remove_cached_modules(metadata.package_name)
59 configure_project(metadata.package_name)
60
61 session = KedroSession.create(
62 metadata.package_name, default_project_path, env=env, extra_params=extra_params
63 )
64 context = session.load_context()
65 catalog = context.catalog
66
67 get_ipython().push(
68 variables={
69 "context": context,
70 "catalog": catalog,
71 "session": session,
72 "pipelines": pipelines,
73 }
74 )
75
76 logger.info("Kedro project %s", str(metadata.project_name))
77 logger.info(
78 "Defined global variable 'context', 'session', 'catalog' and 'pipelines'"
79 )
80
81 for line_magic in load_entry_points("line_magic"):
82 register_line_magic(needs_local_scope(line_magic))
83 logger.info("Registered line magic '%s'", line_magic.__name__) # type: ignore
84
85
86 def load_ipython_extension(ipython):
87 """Main entry point when %load_ext is executed"""
88
89 global default_project_path
90
91 ipython.register_magic_function(reload_kedro, "line", "reload_kedro")
92
93 default_project_path = _find_kedro_project(Path.cwd())
94
95 if default_project_path is None:
96 logger.warning(
97 "Kedro extension was registered but couldn't find a Kedro project. "
98 "Make sure you run '%reload_kedro <project_root>'."
99 )
100 return
101
102 reload_kedro(default_project_path)
103
[end of kedro/extras/extensions/ipython.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py
--- a/kedro/extras/extensions/ipython.py
+++ b/kedro/extras/extensions/ipython.py
@@ -8,6 +8,9 @@
from pathlib import Path
from typing import Any, Dict
+from kedro.framework.cli.project import PARAMS_ARG_HELP
+from kedro.framework.cli.utils import ENV_HELP, _split_params
+
logger = logging.getLogger(__name__)
default_project_path = Path.cwd()
@@ -84,12 +87,46 @@
def load_ipython_extension(ipython):
- """Main entry point when %load_ext is executed"""
+ """
+ Main entry point when %load_ext is executed.
+ IPython will look for this function specifically.
+ See https://ipython.readthedocs.io/en/stable/config/extensions/index.html
- global default_project_path
+ This function is called when users do `%load_ext kedro.extras.extensions.ipython`.
+ When user use `kedro jupyter notebook` or `jupyter ipython`, this extension is
+ loaded automatically.
+ """
+ from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
+
+ @magic_arguments()
+ @argument(
+ "path",
+ type=str,
+ help=(
+ "Path to the project root directory. If not given, use the previously set"
+ "project root."
+ ),
+ nargs="?",
+ default=None,
+ )
+ @argument("-e", "--env", type=str, default=None, help=ENV_HELP)
+ @argument(
+ "--params",
+ type=lambda value: _split_params(None, None, value),
+ default=None,
+ help=PARAMS_ARG_HELP,
+ )
+ def magic_reload_kedro(line: str):
+ """
+ The `%reload_kedro` IPython line magic. See
+ https://kedro.readthedocs.io/en/stable/tools_integration/ipython.html for more.
+ """
+ args = parse_argstring(magic_reload_kedro, line)
+ reload_kedro(args.path, args.env, args.params)
- ipython.register_magic_function(reload_kedro, "line", "reload_kedro")
+ global default_project_path
+ ipython.register_magic_function(magic_reload_kedro, magic_name="reload_kedro")
default_project_path = _find_kedro_project(Path.cwd())
if default_project_path is None:
| {"golden_diff": "diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py\n--- a/kedro/extras/extensions/ipython.py\n+++ b/kedro/extras/extensions/ipython.py\n@@ -8,6 +8,9 @@\n from pathlib import Path\n from typing import Any, Dict\n \n+from kedro.framework.cli.project import PARAMS_ARG_HELP\n+from kedro.framework.cli.utils import ENV_HELP, _split_params\n+\n logger = logging.getLogger(__name__)\n default_project_path = Path.cwd()\n \n@@ -84,12 +87,46 @@\n \n \n def load_ipython_extension(ipython):\n- \"\"\"Main entry point when %load_ext is executed\"\"\"\n+ \"\"\"\n+ Main entry point when %load_ext is executed.\n+ IPython will look for this function specifically.\n+ See https://ipython.readthedocs.io/en/stable/config/extensions/index.html\n \n- global default_project_path\n+ This function is called when users do `%load_ext kedro.extras.extensions.ipython`.\n+ When user use `kedro jupyter notebook` or `jupyter ipython`, this extension is\n+ loaded automatically.\n+ \"\"\"\n+ from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring\n+\n+ @magic_arguments()\n+ @argument(\n+ \"path\",\n+ type=str,\n+ help=(\n+ \"Path to the project root directory. If not given, use the previously set\"\n+ \"project root.\"\n+ ),\n+ nargs=\"?\",\n+ default=None,\n+ )\n+ @argument(\"-e\", \"--env\", type=str, default=None, help=ENV_HELP)\n+ @argument(\n+ \"--params\",\n+ type=lambda value: _split_params(None, None, value),\n+ default=None,\n+ help=PARAMS_ARG_HELP,\n+ )\n+ def magic_reload_kedro(line: str):\n+ \"\"\"\n+ The `%reload_kedro` IPython line magic. See\n+ https://kedro.readthedocs.io/en/stable/tools_integration/ipython.html for more.\n+ \"\"\"\n+ args = parse_argstring(magic_reload_kedro, line)\n+ reload_kedro(args.path, args.env, args.params)\n \n- ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n+ global default_project_path\n \n+ ipython.register_magic_function(magic_reload_kedro, magic_name=\"reload_kedro\")\n default_project_path = _find_kedro_project(Path.cwd())\n \n if default_project_path is None:\n", "issue": "Add parameters to `%reload_kedro` line magic \n## Description\r\n\r\nCurrently you cannot pass things like `env` or `extra_params` via the line magic, but you can by importing the function.\r\n\r\nhttps://github.com/kedro-org/kedro/blob/5ae97cfb70e5b0d4490132847977d482f13c840f/kedro/extras/extensions/ipython.py#L38\r\n\r\nWhy don't we introduce feature parity here? 
\n", "before_files": [{"content": "# pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals\n\"\"\"\nThis script creates an IPython extension to load Kedro-related variables in\nlocal scope.\n\"\"\"\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict\n\nlogger = logging.getLogger(__name__)\ndefault_project_path = Path.cwd()\n\n\ndef _remove_cached_modules(package_name):\n to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]\n # `del` is used instead of `reload()` because: If the new version of a module does not\n # define a name that was defined by the old version, the old definition remains.\n for module in to_remove:\n del sys.modules[module] # pragma: no cover\n\n\ndef _find_kedro_project(current_dir: Path): # pragma: no cover\n from kedro.framework.startup import _is_project\n\n while current_dir != current_dir.parent:\n if _is_project(current_dir):\n return current_dir\n current_dir = current_dir.parent\n\n return None\n\n\ndef reload_kedro(\n path: str = None, env: str = None, extra_params: Dict[str, Any] = None\n):\n \"\"\"Line magic which reloads all Kedro default variables.\n Setting the path will also make it default for subsequent calls.\n \"\"\"\n from IPython import get_ipython\n from IPython.core.magic import needs_local_scope, register_line_magic\n\n from kedro.framework.cli import load_entry_points\n from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import\n from kedro.framework.project import configure_project, pipelines\n from kedro.framework.session import KedroSession\n from kedro.framework.startup import bootstrap_project\n\n # If a path is provided, set it as default for subsequent calls\n global default_project_path\n if path:\n default_project_path = Path(path).expanduser().resolve()\n logger.info(\"Updated path to Kedro project: %s\", default_project_path)\n else:\n logger.info(\"No path argument was provided. Using: %s\", default_project_path)\n\n metadata = bootstrap_project(default_project_path)\n _remove_cached_modules(metadata.package_name)\n configure_project(metadata.package_name)\n\n session = KedroSession.create(\n metadata.package_name, default_project_path, env=env, extra_params=extra_params\n )\n context = session.load_context()\n catalog = context.catalog\n\n get_ipython().push(\n variables={\n \"context\": context,\n \"catalog\": catalog,\n \"session\": session,\n \"pipelines\": pipelines,\n }\n )\n\n logger.info(\"Kedro project %s\", str(metadata.project_name))\n logger.info(\n \"Defined global variable 'context', 'session', 'catalog' and 'pipelines'\"\n )\n\n for line_magic in load_entry_points(\"line_magic\"):\n register_line_magic(needs_local_scope(line_magic))\n logger.info(\"Registered line magic '%s'\", line_magic.__name__) # type: ignore\n\n\ndef load_ipython_extension(ipython):\n \"\"\"Main entry point when %load_ext is executed\"\"\"\n\n global default_project_path\n\n ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n\n default_project_path = _find_kedro_project(Path.cwd())\n\n if default_project_path is None:\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. \"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n return\n\n reload_kedro(default_project_path)\n", "path": "kedro/extras/extensions/ipython.py"}]} | 1,659 | 570 |
gh_patches_debug_7545 | rasdani/github-patches | git_diff | deeppavlov__DeepPavlov-861 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 3.7.0 support
DeepPavlov pins scikit-learn to v0.19.1, but that version's C-extension build fails on Python 3.7.0 (at least on macOS); please see the [scikit-learn issue](https://github.com/scikit-learn/scikit-learn/issues/11320).
This has been fixed in the scikit-learn v0.19.2 release, so you have to bump at least the minor version to enable Python 3.7.0 support.
I can check Python 3.7.0 compatibility of the other packages and prepare a pull request, if you want.
</issue>
<code>
[start of deeppavlov/__init__.py]
1 # Copyright 2017 Neural Networks and Deep Learning lab, MIPT
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import sys
16 from pathlib import Path
17
18 from .core.common.log import init_logger
19
20 try:
21 from .configs import configs
22 # noinspection PyUnresolvedReferences
23 from .core.commands.infer import build_model
24 # noinspection PyUnresolvedReferences
25 from .core.commands.train import train_evaluate_model_from_config
26 from .download import deep_download
27 from .core.common.chainer import Chainer
28
29 # TODO: make better
30 def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:
31 train_evaluate_model_from_config(config, download=download, recursive=recursive)
32 return build_model(config, load_trained=True)
33
34 def evaluate_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> dict:
35 return train_evaluate_model_from_config(config, to_train=False, download=download, recursive=recursive)
36
37 except ImportError:
38 'Assuming that requirements are not yet installed'
39
40 __version__ = '0.4.0'
41 __author__ = 'Neural Networks and Deep Learning lab, MIPT'
42 __description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'
43 __keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']
44 __license__ = 'Apache License, Version 2.0'
45 __email__ = '[email protected]'
46
47 # check version
48 assert sys.hexversion >= 0x3060000, 'Does not work in python3.5 or lower'
49
50 # resolve conflicts with previous DeepPavlov installations versioned up to 0.0.9
51 dot_dp_path = Path('~/.deeppavlov').expanduser().resolve()
52 if dot_dp_path.is_file():
53 dot_dp_path.unlink()
54
55 # initiate logging
56 init_logger()
57
[end of deeppavlov/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/deeppavlov/__init__.py b/deeppavlov/__init__.py
--- a/deeppavlov/__init__.py
+++ b/deeppavlov/__init__.py
@@ -37,7 +37,7 @@
except ImportError:
'Assuming that requirements are not yet installed'
-__version__ = '0.4.0'
+__version__ = '0.5.0'
__author__ = 'Neural Networks and Deep Learning lab, MIPT'
__description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'
__keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']
| {"golden_diff": "diff --git a/deeppavlov/__init__.py b/deeppavlov/__init__.py\n--- a/deeppavlov/__init__.py\n+++ b/deeppavlov/__init__.py\n@@ -37,7 +37,7 @@\n except ImportError:\n 'Assuming that requirements are not yet installed'\n \n-__version__ = '0.4.0'\n+__version__ = '0.5.0'\n __author__ = 'Neural Networks and Deep Learning lab, MIPT'\n __description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'\n __keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']\n", "issue": "Python 3.7.0 support\nDeepPavlov has scikit-learn version fixed to v0.19.1, but its c-extensions build fails on python 3.7.0 (at least on macOS), please see [scikit-learn issue](https://github.com/scikit-learn/scikit-learn/issues/11320).\r\n\r\nThis issue has been fixed in scikit-learn v0.19.2 release, so you have to up at least minor version to enable python 3.7.0 support.\r\n\r\nI can try python 3.7.0 compatibility of other packages and prepare a pull-request, if you want.\n", "before_files": [{"content": "# Copyright 2017 Neural Networks and Deep Learning lab, MIPT\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nfrom pathlib import Path\n\nfrom .core.common.log import init_logger\n\ntry:\n from .configs import configs\n # noinspection PyUnresolvedReferences\n from .core.commands.infer import build_model\n # noinspection PyUnresolvedReferences\n from .core.commands.train import train_evaluate_model_from_config\n from .download import deep_download\n from .core.common.chainer import Chainer\n\n # TODO: make better\n def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:\n train_evaluate_model_from_config(config, download=download, recursive=recursive)\n return build_model(config, load_trained=True)\n\n def evaluate_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> dict:\n return train_evaluate_model_from_config(config, to_train=False, download=download, recursive=recursive)\n\nexcept ImportError:\n 'Assuming that requirements are not yet installed'\n\n__version__ = '0.4.0'\n__author__ = 'Neural Networks and Deep Learning lab, MIPT'\n__description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'\n__keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']\n__license__ = 'Apache License, Version 2.0'\n__email__ = '[email protected]'\n\n# check version\nassert sys.hexversion >= 0x3060000, 'Does not work in python3.5 or lower'\n\n# resolve conflicts with previous DeepPavlov installations versioned up to 0.0.9\ndot_dp_path = Path('~/.deeppavlov').expanduser().resolve()\nif dot_dp_path.is_file():\n dot_dp_path.unlink()\n\n# initiate logging\ninit_logger()\n", "path": "deeppavlov/__init__.py"}]} | 1,341 | 161 |
gh_patches_debug_12274 | rasdani/github-patches | git_diff | wagtail__wagtail-11223 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Report pages performance regression
### Issue Summary
Various report pages have a performance regression in Wagtail 5.2, which I've tracked down to:
https://github.com/wagtail/wagtail/commit/7ba1afb8a402a09be5838a026523be78f08ea877
https://github.com/wagtail/wagtail/pull/10822
On a few sites we've upgraded to Wagtail 5.2 - performance in the Site History report has been significantly reduced:
Before:
<img width="1717" alt="Screenshot 2023-11-11 at 21 12 02" src="https://github.com/wagtail/wagtail/assets/177332/79650e6b-9c96-4d21-bbdf-23b98c862bf4">
After:
<img width="1716" alt="Screenshot 2023-11-11 at 21 13 09" src="https://github.com/wagtail/wagtail/assets/177332/e719e250-5c9c-4dc8-823b-1e1c3b40a74c">
<img width="900" alt="Screenshot 2023-11-11 at 21 13 19" src="https://github.com/wagtail/wagtail/assets/177332/5623467b-a0ca-4472-aa46-540ff568ac82">
### Steps to Reproduce
Find an existing Wagtail project with lots of pages, and log entries.
Check http://127.0.0.1:9000/admin/reports/site-history/ with the project running Wagtail 5.2 - page will probably be slow to load.
(Note: I did try and create a quick script to test this with Wagtail's starter project - but the performance of SQLite and a lack of a debug toolbar slowing things down made it a bit tricky!).
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.11 / any
- Django version: 4.2 / any
- Wagtail version: 5.2 / main
- Browser version: n/a
</issue>
<code>
[start of wagtail/admin/views/reports/base.py]
1 from django.utils.translation import gettext_lazy as _
2
3 from wagtail.admin.views.generic.models import IndexView
4
5
6 class ReportView(IndexView):
7 template_name = "wagtailadmin/reports/base_report.html"
8 title = ""
9 paginate_by = 50
10
11 def get_filtered_queryset(self):
12 return self.filter_queryset(self.get_queryset())
13
14 def decorate_paginated_queryset(self, object_list):
15 # A hook point to allow rewriting the object list after pagination has been applied
16 return object_list
17
18 def get(self, request, *args, **kwargs):
19 self.filters, self.object_list = self.get_filtered_queryset()
20 self.object_list = self.decorate_paginated_queryset(self.object_list)
21 context = self.get_context_data()
22 return self.render_to_response(context)
23
24 def get_context_data(self, *args, **kwargs):
25 context = super().get_context_data(*args, **kwargs)
26 context["title"] = self.title
27 return context
28
29
30 class PageReportView(ReportView):
31 template_name = "wagtailadmin/reports/base_page_report.html"
32 export_headings = {
33 "latest_revision_created_at": _("Updated"),
34 "status_string": _("Status"),
35 "content_type.model_class._meta.verbose_name.title": _("Type"),
36 }
37 list_export = [
38 "title",
39 "latest_revision_created_at",
40 "status_string",
41 "content_type.model_class._meta.verbose_name.title",
42 ]
43
[end of wagtail/admin/views/reports/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/admin/views/reports/base.py b/wagtail/admin/views/reports/base.py
--- a/wagtail/admin/views/reports/base.py
+++ b/wagtail/admin/views/reports/base.py
@@ -17,8 +17,12 @@
def get(self, request, *args, **kwargs):
self.filters, self.object_list = self.get_filtered_queryset()
- self.object_list = self.decorate_paginated_queryset(self.object_list)
context = self.get_context_data()
+ # Decorate the queryset *after* Django's BaseListView has returned a paginated/reduced
+ # list of objects
+ context["object_list"] = self.decorate_paginated_queryset(
+ context["object_list"]
+ )
return self.render_to_response(context)
def get_context_data(self, *args, **kwargs):
| {"golden_diff": "diff --git a/wagtail/admin/views/reports/base.py b/wagtail/admin/views/reports/base.py\n--- a/wagtail/admin/views/reports/base.py\n+++ b/wagtail/admin/views/reports/base.py\n@@ -17,8 +17,12 @@\n \n def get(self, request, *args, **kwargs):\n self.filters, self.object_list = self.get_filtered_queryset()\n- self.object_list = self.decorate_paginated_queryset(self.object_list)\n context = self.get_context_data()\n+ # Decorate the queryset *after* Django's BaseListView has returned a paginated/reduced\n+ # list of objects\n+ context[\"object_list\"] = self.decorate_paginated_queryset(\n+ context[\"object_list\"]\n+ )\n return self.render_to_response(context)\n \n def get_context_data(self, *args, **kwargs):\n", "issue": "Report pages performance regression\n### Issue Summary\r\n\r\nVarious report pages have a performance regression in Wagtail 5.2, which I've tracked down to:\r\n\r\nhttps://github.com/wagtail/wagtail/commit/7ba1afb8a402a09be5838a026523be78f08ea877\r\nhttps://github.com/wagtail/wagtail/pull/10822\r\n\r\nOn a few sites we've upgraded to Wagtail 5.2 - performance in the Site History report has been significantly reduced:\r\n\r\nBefore:\r\n<img width=\"1717\" alt=\"Screenshot 2023-11-11 at 21 12 02\" src=\"https://github.com/wagtail/wagtail/assets/177332/79650e6b-9c96-4d21-bbdf-23b98c862bf4\">\r\n\r\nAfter:\r\n<img width=\"1716\" alt=\"Screenshot 2023-11-11 at 21 13 09\" src=\"https://github.com/wagtail/wagtail/assets/177332/e719e250-5c9c-4dc8-823b-1e1c3b40a74c\">\r\n<img width=\"900\" alt=\"Screenshot 2023-11-11 at 21 13 19\" src=\"https://github.com/wagtail/wagtail/assets/177332/5623467b-a0ca-4472-aa46-540ff568ac82\">\r\n\r\n### Steps to Reproduce\r\n\r\nFind an existing Wagtail project with lots of pages, and log entries.\r\n\r\nCheck http://127.0.0.1:9000/admin/reports/site-history/ with the project running Wagtail 5.2 - page will probably be slow to load.\r\n\r\n(Note: I did try and create a quick script to test this with Wagtail's starter project - but the performance of SQLite and a lack of a debug toolbar slowing things down made it a bit tricky!).\r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n\r\n### Technical details\r\n\r\n- Python version: 3.11 / any\r\n- Django version: 4.2 / any\r\n- Wagtail version: 5.2 / main\r\n- Browser version: n/a\n", "before_files": [{"content": "from django.utils.translation import gettext_lazy as _\n\nfrom wagtail.admin.views.generic.models import IndexView\n\n\nclass ReportView(IndexView):\n template_name = \"wagtailadmin/reports/base_report.html\"\n title = \"\"\n paginate_by = 50\n\n def get_filtered_queryset(self):\n return self.filter_queryset(self.get_queryset())\n\n def decorate_paginated_queryset(self, object_list):\n # A hook point to allow rewriting the object list after pagination has been applied\n return object_list\n\n def get(self, request, *args, **kwargs):\n self.filters, self.object_list = self.get_filtered_queryset()\n self.object_list = self.decorate_paginated_queryset(self.object_list)\n context = self.get_context_data()\n return self.render_to_response(context)\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context[\"title\"] = self.title\n return context\n\n\nclass PageReportView(ReportView):\n template_name = \"wagtailadmin/reports/base_page_report.html\"\n export_headings = {\n \"latest_revision_created_at\": _(\"Updated\"),\n \"status_string\": _(\"Status\"),\n 
\"content_type.model_class._meta.verbose_name.title\": _(\"Type\"),\n }\n list_export = [\n \"title\",\n \"latest_revision_created_at\",\n \"status_string\",\n \"content_type.model_class._meta.verbose_name.title\",\n ]\n", "path": "wagtail/admin/views/reports/base.py"}]} | 1,493 | 187 |
gh_patches_debug_38940 | rasdani/github-patches | git_diff | streamlink__streamlink-205 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
picarto updated streamlink no longer works
Hey guys picarto no longer works because they said they updated the player so html5 can be default soon.
when you run the program it says found matching plugin picarto for url https:// https://picarto.tv/picknamehere
then the it says error: no stream on this URL: https://picarto.tv/picknamehere.
thanks guys for the awesome program hopefully it gets solved soon!
</issue>
<code>
[start of src/streamlink/plugins/picarto.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import http
5 from streamlink.stream import RTMPStream
6
7 API_CHANNEL_INFO = "https://picarto.tv/process/channel"
8 RTMP_URL = "rtmp://{}:1935/play/"
9 RTMP_PLAYPATH = "golive+{}?token={}"
10
11 _url_re = re.compile(r"""
12 https?://(\w+\.)?picarto\.tv/[^&?/]
13 """, re.VERBOSE)
14
15 _channel_casing_re = re.compile(r"""
16 <script>placeStreamChannel(Flash)?\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\);</script>
17 """, re.VERBOSE)
18
19
20 class Picarto(Plugin):
21 @classmethod
22 def can_handle_url(self, url):
23 return _url_re.match(url)
24
25 def _get_streams(self):
26 page_res = http.get(self.url)
27 match = _channel_casing_re.search(page_res.text)
28
29 if not match:
30 return {}
31
32 channel = match.group("channel")
33 visibility = match.group("visibility")
34
35 channel_server_res = http.post(API_CHANNEL_INFO, data={
36 "loadbalancinginfo": channel
37 })
38
39 streams = {}
40 streams["live"] = RTMPStream(self.session, {
41 "rtmp": RTMP_URL.format(channel_server_res.text),
42 "playpath": RTMP_PLAYPATH.format(channel, visibility),
43 "pageUrl": self.url,
44 "live": True
45 })
46 return streams
47
48 __plugin__ = Picarto
49
[end of src/streamlink/plugins/picarto.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py
--- a/src/streamlink/plugins/picarto.py
+++ b/src/streamlink/plugins/picarto.py
@@ -2,47 +2,69 @@
from streamlink.plugin import Plugin
from streamlink.plugin.api import http
+from streamlink.stream import HLSStream
from streamlink.stream import RTMPStream
API_CHANNEL_INFO = "https://picarto.tv/process/channel"
RTMP_URL = "rtmp://{}:1935/play/"
RTMP_PLAYPATH = "golive+{}?token={}"
+HLS_URL = "https://{}/hls/{}/index.m3u8?token={}"
_url_re = re.compile(r"""
https?://(\w+\.)?picarto\.tv/[^&?/]
""", re.VERBOSE)
+# placeStream(channel, playerID, product, offlineImage, online, token, tech)
_channel_casing_re = re.compile(r"""
- <script>placeStreamChannel(Flash)?\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\);</script>
+ <script>\s*placeStream\s*\((.*?)\);?\s*</script>
""", re.VERBOSE)
class Picarto(Plugin):
@classmethod
- def can_handle_url(self, url):
- return _url_re.match(url)
+ def can_handle_url(cls, url):
+ return _url_re.match(url) is not None
+
+ @staticmethod
+ def _get_stream_arguments(page):
+ match = _channel_casing_re.search(page.text)
+ if not match:
+ raise ValueError
+
+ # transform the arguments
+ channel, player_id, product, offline_image, online, visibility, is_flash = \
+ map(lambda a: a.strip("' \""), match.group(1).split(","))
+ player_id, product, offline_image, online, is_flash = \
+ map(lambda a: bool(int(a)), [player_id, product, offline_image, online, is_flash])
+
+ return channel, player_id, product, offline_image, online, visibility, is_flash
def _get_streams(self):
- page_res = http.get(self.url)
- match = _channel_casing_re.search(page_res.text)
+ page = http.get(self.url)
- if not match:
- return {}
+ try:
+ channel, _, _, _, online, visibility, is_flash = self._get_stream_arguments(page)
+ except ValueError:
+ return
- channel = match.group("channel")
- visibility = match.group("visibility")
+ if not online:
+ self.logger.error("This stream is currently offline")
+ return
channel_server_res = http.post(API_CHANNEL_INFO, data={
"loadbalancinginfo": channel
})
- streams = {}
- streams["live"] = RTMPStream(self.session, {
- "rtmp": RTMP_URL.format(channel_server_res.text),
- "playpath": RTMP_PLAYPATH.format(channel, visibility),
- "pageUrl": self.url,
- "live": True
- })
- return streams
+ if is_flash:
+ return {"live": RTMPStream(self.session, {
+ "rtmp": RTMP_URL.format(channel_server_res.text),
+ "playpath": RTMP_PLAYPATH.format(channel, visibility),
+ "pageUrl": self.url,
+ "live": True
+ })}
+ else:
+ return HLSStream.parse_variant_playlist(self.session,
+ HLS_URL.format(channel_server_res.text, channel, visibility),
+ verify=False)
__plugin__ = Picarto
| {"golden_diff": "diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py\n--- a/src/streamlink/plugins/picarto.py\n+++ b/src/streamlink/plugins/picarto.py\n@@ -2,47 +2,69 @@\n \n from streamlink.plugin import Plugin\n from streamlink.plugin.api import http\n+from streamlink.stream import HLSStream\n from streamlink.stream import RTMPStream\n \n API_CHANNEL_INFO = \"https://picarto.tv/process/channel\"\n RTMP_URL = \"rtmp://{}:1935/play/\"\n RTMP_PLAYPATH = \"golive+{}?token={}\"\n+HLS_URL = \"https://{}/hls/{}/index.m3u8?token={}\"\n \n _url_re = re.compile(r\"\"\"\n https?://(\\w+\\.)?picarto\\.tv/[^&?/]\n \"\"\", re.VERBOSE)\n \n+# placeStream(channel, playerID, product, offlineImage, online, token, tech)\n _channel_casing_re = re.compile(r\"\"\"\n- <script>placeStreamChannel(Flash)?\\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\\);</script>\n+ <script>\\s*placeStream\\s*\\((.*?)\\);?\\s*</script>\n \"\"\", re.VERBOSE)\n \n \n class Picarto(Plugin):\n @classmethod\n- def can_handle_url(self, url):\n- return _url_re.match(url)\n+ def can_handle_url(cls, url):\n+ return _url_re.match(url) is not None\n+\n+ @staticmethod\n+ def _get_stream_arguments(page):\n+ match = _channel_casing_re.search(page.text)\n+ if not match:\n+ raise ValueError\n+\n+ # transform the arguments\n+ channel, player_id, product, offline_image, online, visibility, is_flash = \\\n+ map(lambda a: a.strip(\"' \\\"\"), match.group(1).split(\",\"))\n+ player_id, product, offline_image, online, is_flash = \\\n+ map(lambda a: bool(int(a)), [player_id, product, offline_image, online, is_flash])\n+\n+ return channel, player_id, product, offline_image, online, visibility, is_flash\n \n def _get_streams(self):\n- page_res = http.get(self.url)\n- match = _channel_casing_re.search(page_res.text)\n+ page = http.get(self.url)\n \n- if not match:\n- return {}\n+ try:\n+ channel, _, _, _, online, visibility, is_flash = self._get_stream_arguments(page)\n+ except ValueError:\n+ return\n \n- channel = match.group(\"channel\")\n- visibility = match.group(\"visibility\")\n+ if not online:\n+ self.logger.error(\"This stream is currently offline\")\n+ return\n \n channel_server_res = http.post(API_CHANNEL_INFO, data={\n \"loadbalancinginfo\": channel\n })\n \n- streams = {}\n- streams[\"live\"] = RTMPStream(self.session, {\n- \"rtmp\": RTMP_URL.format(channel_server_res.text),\n- \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n- \"pageUrl\": self.url,\n- \"live\": True\n- })\n- return streams\n+ if is_flash:\n+ return {\"live\": RTMPStream(self.session, {\n+ \"rtmp\": RTMP_URL.format(channel_server_res.text),\n+ \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n+ \"pageUrl\": self.url,\n+ \"live\": True\n+ })}\n+ else:\n+ return HLSStream.parse_variant_playlist(self.session,\n+ HLS_URL.format(channel_server_res.text, channel, visibility),\n+ verify=False)\n \n __plugin__ = Picarto\n", "issue": "picarto updated streamlink no longer works\nHey guys picarto no longer works because they said they updated the player so html5 can be default soon.\r\nwhen you run the program it says found matching plugin picarto for url https:// https://picarto.tv/picknamehere\r\nthen the it says error: no stream on this URL: https://picarto.tv/picknamehere.\r\nthanks guys for the awesome program hopefully it gets solved soon!\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import 
RTMPStream\n\nAPI_CHANNEL_INFO = \"https://picarto.tv/process/channel\"\nRTMP_URL = \"rtmp://{}:1935/play/\"\nRTMP_PLAYPATH = \"golive+{}?token={}\"\n\n_url_re = re.compile(r\"\"\"\n https?://(\\w+\\.)?picarto\\.tv/[^&?/]\n\"\"\", re.VERBOSE)\n\n_channel_casing_re = re.compile(r\"\"\"\n <script>placeStreamChannel(Flash)?\\('(?P<channel>[^']+)',[^,]+,[^,]+,'(?P<visibility>[^']+)'(,[^,]+)?\\);</script>\n\"\"\", re.VERBOSE)\n\n\nclass Picarto(Plugin):\n @classmethod\n def can_handle_url(self, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n page_res = http.get(self.url)\n match = _channel_casing_re.search(page_res.text)\n\n if not match:\n return {}\n\n channel = match.group(\"channel\")\n visibility = match.group(\"visibility\")\n\n channel_server_res = http.post(API_CHANNEL_INFO, data={\n \"loadbalancinginfo\": channel\n })\n\n streams = {}\n streams[\"live\"] = RTMPStream(self.session, {\n \"rtmp\": RTMP_URL.format(channel_server_res.text),\n \"playpath\": RTMP_PLAYPATH.format(channel, visibility),\n \"pageUrl\": self.url,\n \"live\": True\n })\n return streams\n\n__plugin__ = Picarto\n", "path": "src/streamlink/plugins/picarto.py"}]} | 1,076 | 827 |
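The heart of the patch above is parsing the page's `placeStream(...)` call: one regex captures the raw argument list, the values are unquoted, and the flags decide between the RTMP and HLS branches. A self-contained sketch of just that step, using a made-up page snippet (the channel name and token are placeholders, not real picarto.tv data):

```python
import re

page = "<script>placeStream('somechannel',1,1,0,1,'sometoken',0);</script>"

pattern = re.compile(r"<script>\s*placeStream\s*\((.*?)\);?\s*</script>")
raw_args = pattern.search(page).group(1)
args = [a.strip("' \"") for a in raw_args.split(",")]

channel, player_id, product, offline_image, online, visibility, is_flash = args
print(channel, bool(int(online)), bool(int(is_flash)))
# somechannel True False -> the non-Flash branch, i.e. the HLS playlist URL
```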
gh_patches_debug_4348 | rasdani/github-patches | git_diff | pwndbg__pwndbg-747 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bad unsigned casting
### Description
`pwndbg.memory.u` returns signed integers (with minus `-` sign).
### Steps to reproduce
```c
#include <stdio.h>
#include <stdint.h>
int main(int argc, char const *argv[])
{
uint64_t x = 0xb60ad86e8fb52ea8;
printf("%p\n", &x);
getc(stdin);
return 0;
}
```
```
clang bad_u.c -g -o bad_u
gdb ./bad_u
pwndbg> x/xg 0x7fffffffab18
0x7fffffffab18: 0xb60ad86e8fb52ea8
pwndbg> python-interactive
>>> pwndbg.memory.u(0x7fffffffab18)
-5329209239670542680
```
Idk why it doesn't break the pwndbg visibly. Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).
### My setup
```
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
python: 3.6.9 (default, Nov 7 2019, 10:44:02)
pwndbg: dev branch
```
Bad unsigned casting
### Description
`pwndbg.memory.u` returns signed integers (with minus `-` sign).
### Steps to reproduce
```c
#include <stdio.h>
#include <stdint.h>
int main(int argc, char const *argv[])
{
uint64_t x = 0xb60ad86e8fb52ea8;
printf("%p\n", &x);
getc(stdin);
return 0;
}
```
```
clang bad_u.c -g -o bad_u
gdb ./bad_u
pwndbg> x/xg 0x7fffffffab18
0x7fffffffab18: 0xb60ad86e8fb52ea8
pwndbg> python-interactive
>>> pwndbg.memory.u(0x7fffffffab18)
-5329209239670542680
```
Idk why it doesn't break the pwndbg visibly. Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).
### My setup
```
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
python: 3.6.9 (default, Nov 7 2019, 10:44:02)
pwndbg: dev branch
```
</issue>
<code>
[start of pwndbg/inthook.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 This hook is necessary for compatibility with Python2.7 versions of GDB
5 since they cannot directly cast to integer a gdb.Value object that is
6 not already an integer type.
7 """
8 from __future__ import absolute_import
9 from __future__ import division
10 from __future__ import print_function
11 from __future__ import unicode_literals
12
13 import enum
14 import os
15
16 import gdb
17 import six
18 from future.utils import with_metaclass
19
20 import pwndbg.typeinfo
21
22 if six.PY2:
23 import __builtin__ as builtins
24 else:
25 import builtins
26
27 _int = builtins.int
28
29
30 # We need this class to get isinstance(7, xint) to return True
31 class IsAnInt(type):
32 def __instancecheck__(self, other):
33 return isinstance(other, _int)
34
35
36 class xint(with_metaclass(IsAnInt, builtins.int)):
37 def __new__(cls, value, *a, **kw):
38 if isinstance(value, gdb.Value):
39 if pwndbg.typeinfo.is_pointer(value):
40 value = value.cast(pwndbg.typeinfo.size_t)
41 else:
42 value = value.cast(pwndbg.typeinfo.ssize_t)
43
44 elif isinstance(value, gdb.Symbol):
45 symbol = value
46 value = symbol.value()
47 if symbol.is_function:
48 value = value.cast(pwndbg.typeinfo.size_t)
49
50 elif not isinstance(value, (six.string_types, six.integer_types)) \
51 or isinstance(cls, enum.EnumMeta):
52 # without check for EnumMeta math operations with enums were failing e.g.:
53 # pwndbg> py import re; flags = 1 | re.MULTILINE
54 return _int.__new__(cls, value, *a, **kw)
55
56 return _int(_int(value, *a, **kw))
57
58 # Do not hook 'int' if we are just generating documentation
59 if os.environ.get('SPHINX', None) is None:
60 builtins.int = xint
61 globals()['int'] = xint
62 if six.PY3:
63 builtins.long = xint
64 globals()['long'] = xint
65
[end of pwndbg/inthook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/inthook.py b/pwndbg/inthook.py
--- a/pwndbg/inthook.py
+++ b/pwndbg/inthook.py
@@ -39,7 +39,7 @@
if pwndbg.typeinfo.is_pointer(value):
value = value.cast(pwndbg.typeinfo.size_t)
else:
- value = value.cast(pwndbg.typeinfo.ssize_t)
+ return _int.__new__(cls, value, *a, **kw)
elif isinstance(value, gdb.Symbol):
symbol = value
| {"golden_diff": "diff --git a/pwndbg/inthook.py b/pwndbg/inthook.py\n--- a/pwndbg/inthook.py\n+++ b/pwndbg/inthook.py\n@@ -39,7 +39,7 @@\n if pwndbg.typeinfo.is_pointer(value):\n value = value.cast(pwndbg.typeinfo.size_t)\n else:\n- value = value.cast(pwndbg.typeinfo.ssize_t)\n+ return _int.__new__(cls, value, *a, **kw)\n \n elif isinstance(value, gdb.Symbol):\n symbol = value\n", "issue": "Bad unsigned casting\n### Description\r\n\r\n`pwndbg.memory.u` returns signed integers (with minus `-` sign).\r\n\r\n### Steps to reproduce\r\n\r\n\r\n```c\r\n#include <stdio.h>\r\n#include <stdint.h>\r\n\r\nint main(int argc, char const *argv[])\r\n{\r\n uint64_t x = 0xb60ad86e8fb52ea8;\r\n printf(\"%p\\n\", &x);\r\n getc(stdin);\r\n return 0;\r\n}\r\n```\r\n\r\n```\r\nclang bad_u.c -g -o bad_u\r\ngdb ./bad_u\r\n\r\npwndbg> x/xg 0x7fffffffab18\r\n0x7fffffffab18:\t0xb60ad86e8fb52ea8\r\npwndbg> python-interactive \r\n>>> pwndbg.memory.u(0x7fffffffab18)\r\n-5329209239670542680\r\n```\r\n\r\nIdk why it doesn't break the pwndbg visibly. Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).\r\n\r\n### My setup\r\n\r\n```\r\nGNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git\r\npython: 3.6.9 (default, Nov 7 2019, 10:44:02)\r\npwndbg: dev branch\r\n```\nBad unsigned casting\n### Description\r\n\r\n`pwndbg.memory.u` returns signed integers (with minus `-` sign).\r\n\r\n### Steps to reproduce\r\n\r\n\r\n```c\r\n#include <stdio.h>\r\n#include <stdint.h>\r\n\r\nint main(int argc, char const *argv[])\r\n{\r\n uint64_t x = 0xb60ad86e8fb52ea8;\r\n printf(\"%p\\n\", &x);\r\n getc(stdin);\r\n return 0;\r\n}\r\n```\r\n\r\n```\r\nclang bad_u.c -g -o bad_u\r\ngdb ./bad_u\r\n\r\npwndbg> x/xg 0x7fffffffab18\r\n0x7fffffffab18:\t0xb60ad86e8fb52ea8\r\npwndbg> python-interactive \r\n>>> pwndbg.memory.u(0x7fffffffab18)\r\n-5329209239670542680\r\n```\r\n\r\nIdk why it doesn't break the pwndbg visibly. 
Found it running `vis_heap_chunks` on arbitrary addresses (the minus were printed in few places).\r\n\r\n### My setup\r\n\r\n```\r\nGNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git\r\npython: 3.6.9 (default, Nov 7 2019, 10:44:02)\r\npwndbg: dev branch\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nThis hook is necessary for compatibility with Python2.7 versions of GDB\nsince they cannot directly cast to integer a gdb.Value object that is\nnot already an integer type.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport enum\nimport os\n\nimport gdb\nimport six\nfrom future.utils import with_metaclass\n\nimport pwndbg.typeinfo\n\nif six.PY2:\n import __builtin__ as builtins\nelse:\n import builtins\n\n_int = builtins.int\n\n\n# We need this class to get isinstance(7, xint) to return True\nclass IsAnInt(type):\n def __instancecheck__(self, other):\n return isinstance(other, _int)\n\n\nclass xint(with_metaclass(IsAnInt, builtins.int)):\n def __new__(cls, value, *a, **kw):\n if isinstance(value, gdb.Value):\n if pwndbg.typeinfo.is_pointer(value):\n value = value.cast(pwndbg.typeinfo.size_t)\n else:\n value = value.cast(pwndbg.typeinfo.ssize_t)\n\n elif isinstance(value, gdb.Symbol):\n symbol = value\n value = symbol.value()\n if symbol.is_function:\n value = value.cast(pwndbg.typeinfo.size_t)\n\n elif not isinstance(value, (six.string_types, six.integer_types)) \\\n or isinstance(cls, enum.EnumMeta):\n # without check for EnumMeta math operations with enums were failing e.g.:\n # pwndbg> py import re; flags = 1 | re.MULTILINE\n return _int.__new__(cls, value, *a, **kw)\n\n return _int(_int(value, *a, **kw))\n\n# Do not hook 'int' if we are just generating documentation\nif os.environ.get('SPHINX', None) is None:\n builtins.int = xint\n globals()['int'] = xint\n if six.PY3:\n builtins.long = xint\n globals()['long'] = xint\n", "path": "pwndbg/inthook.py"}]} | 1,767 | 127 |
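The numbers in the report line up once the old cast is read as a two's-complement reinterpretation: the 64-bit pattern `0xb60ad86e8fb52ea8` has its top bit set, so viewing it through a signed `ssize_t` yields exactly the negative value shown above. A standalone check:

```python
raw = 0xb60ad86e8fb52ea8           # bit pattern stored at 0x7fffffffab18
bits = 64

# Interpret the same bits as a signed (ssize_t-style) integer.
signed = raw - (1 << bits) if raw >= (1 << (bits - 1)) else raw
print(signed)                              # -5329209239670542680, as reported

# Masking back to 64 bits recovers the unsigned (size_t-style) view.
print(hex(signed & ((1 << bits) - 1)))     # 0xb60ad86e8fb52ea8
```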
gh_patches_debug_11149 | rasdani/github-patches | git_diff | open-mmlab__mmocr-285 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LineStrParser: separator behaviour
I've a question regarding this snippet of code:
https://github.com/open-mmlab/mmocr/blob/01d8d63be945882fb2d9eaca5e1c1b39cb45f274/mmocr/datasets/utils/parser.py#L33-L36
Is there a particular reason to use these 4 lines of code instead of simply `line_str = line_str.split(self.separator)`?
I'm asking this because for my own use case I have:
- a TSV file with `filename` and `text` as keys for text recognition task
- some blank spaces in `filename` e.g. `my cropped image.png`
Hence, LineStrParser is configured as follows:
```python
parser=dict(
type='LineStrParser',
keys=['filename', 'text'],
keys_idx=[0, 1],
separator='\t'))
```
but with the 4-lines code snippet, the line parsing fails. Instead, with simply `line_str = line_str.split(self.separator)` everything works well.
</issue>
<code>
[start of mmocr/datasets/utils/parser.py]
1 import json
2
3 from mmocr.datasets.builder import PARSERS
4
5
6 @PARSERS.register_module()
7 class LineStrParser:
8 """Parse string of one line in annotation file to dict format.
9
10 Args:
11 keys (list[str]): Keys in result dict.
12 keys_idx (list[int]): Value index in sub-string list
13 for each key above.
14 separator (str): Separator to separate string to list of sub-string.
15 """
16
17 def __init__(self,
18 keys=['filename', 'text'],
19 keys_idx=[0, 1],
20 separator=' '):
21 assert isinstance(keys, list)
22 assert isinstance(keys_idx, list)
23 assert isinstance(separator, str)
24 assert len(keys) > 0
25 assert len(keys) == len(keys_idx)
26 self.keys = keys
27 self.keys_idx = keys_idx
28 self.separator = separator
29
30 def get_item(self, data_ret, index):
31 map_index = index % len(data_ret)
32 line_str = data_ret[map_index]
33 for split_key in self.separator:
34 if split_key != ' ':
35 line_str = line_str.replace(split_key, ' ')
36 line_str = line_str.split()
37 if len(line_str) <= max(self.keys_idx):
38 raise Exception(
39 f'key index: {max(self.keys_idx)} out of range: {line_str}')
40
41 line_info = {}
42 for i, key in enumerate(self.keys):
43 line_info[key] = line_str[self.keys_idx[i]]
44 return line_info
45
46
47 @PARSERS.register_module()
48 class LineJsonParser:
49 """Parse json-string of one line in annotation file to dict format.
50
51 Args:
52 keys (list[str]): Keys in both json-string and result dict.
53 """
54
55 def __init__(self, keys=[], **kwargs):
56 assert isinstance(keys, list)
57 assert len(keys) > 0
58 self.keys = keys
59
60 def get_item(self, data_ret, index):
61 map_index = index % len(data_ret)
62 line_json_obj = json.loads(data_ret[map_index])
63 line_info = {}
64 for key in self.keys:
65 if key not in line_json_obj:
66 raise Exception(f'key {key} not in line json {line_json_obj}')
67 line_info[key] = line_json_obj[key]
68
69 return line_info
70
[end of mmocr/datasets/utils/parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmocr/datasets/utils/parser.py b/mmocr/datasets/utils/parser.py
--- a/mmocr/datasets/utils/parser.py
+++ b/mmocr/datasets/utils/parser.py
@@ -30,10 +30,7 @@
def get_item(self, data_ret, index):
map_index = index % len(data_ret)
line_str = data_ret[map_index]
- for split_key in self.separator:
- if split_key != ' ':
- line_str = line_str.replace(split_key, ' ')
- line_str = line_str.split()
+ line_str = line_str.split(self.separator)
if len(line_str) <= max(self.keys_idx):
raise Exception(
f'key index: {max(self.keys_idx)} out of range: {line_str}')
| {"golden_diff": "diff --git a/mmocr/datasets/utils/parser.py b/mmocr/datasets/utils/parser.py\n--- a/mmocr/datasets/utils/parser.py\n+++ b/mmocr/datasets/utils/parser.py\n@@ -30,10 +30,7 @@\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_str = data_ret[map_index]\n- for split_key in self.separator:\n- if split_key != ' ':\n- line_str = line_str.replace(split_key, ' ')\n- line_str = line_str.split()\n+ line_str = line_str.split(self.separator)\n if len(line_str) <= max(self.keys_idx):\n raise Exception(\n f'key index: {max(self.keys_idx)} out of range: {line_str}')\n", "issue": "LineStrParser: separator behaviour\nI've a question regarding this snippet of code:\r\nhttps://github.com/open-mmlab/mmocr/blob/01d8d63be945882fb2d9eaca5e1c1b39cb45f274/mmocr/datasets/utils/parser.py#L33-L36\r\n\r\nIs there a particular reason to use these 4 lines of code instead of simply `line_str = line_str.split(self.separator)`?\r\n\r\nI'm asking this because for my own use case I have:\r\n- a TSV file with `filename` and `text` as keys for text recognition task\r\n- some blank spaces in `filename` e.g. `my cropped image.png`\r\n \r\nHence, LineStrParser is configured as follows:\r\n```python\r\nparser=dict(\r\n type='LineStrParser',\r\n keys=['filename', 'text'],\r\n keys_idx=[0, 1],\r\n separator='\\t'))\r\n```\r\nbut with the 4-lines code snippet, the line parsing fails. Instead, with simply `line_str = line_str.split(self.separator)` everything works well.\n", "before_files": [{"content": "import json\n\nfrom mmocr.datasets.builder import PARSERS\n\n\[email protected]_module()\nclass LineStrParser:\n \"\"\"Parse string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in result dict.\n keys_idx (list[int]): Value index in sub-string list\n for each key above.\n separator (str): Separator to separate string to list of sub-string.\n \"\"\"\n\n def __init__(self,\n keys=['filename', 'text'],\n keys_idx=[0, 1],\n separator=' '):\n assert isinstance(keys, list)\n assert isinstance(keys_idx, list)\n assert isinstance(separator, str)\n assert len(keys) > 0\n assert len(keys) == len(keys_idx)\n self.keys = keys\n self.keys_idx = keys_idx\n self.separator = separator\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_str = data_ret[map_index]\n for split_key in self.separator:\n if split_key != ' ':\n line_str = line_str.replace(split_key, ' ')\n line_str = line_str.split()\n if len(line_str) <= max(self.keys_idx):\n raise Exception(\n f'key index: {max(self.keys_idx)} out of range: {line_str}')\n\n line_info = {}\n for i, key in enumerate(self.keys):\n line_info[key] = line_str[self.keys_idx[i]]\n return line_info\n\n\[email protected]_module()\nclass LineJsonParser:\n \"\"\"Parse json-string of one line in annotation file to dict format.\n\n Args:\n keys (list[str]): Keys in both json-string and result dict.\n \"\"\"\n\n def __init__(self, keys=[], **kwargs):\n assert isinstance(keys, list)\n assert len(keys) > 0\n self.keys = keys\n\n def get_item(self, data_ret, index):\n map_index = index % len(data_ret)\n line_json_obj = json.loads(data_ret[map_index])\n line_info = {}\n for key in self.keys:\n if key not in line_json_obj:\n raise Exception(f'key {key} not in line json {line_json_obj}')\n line_info[key] = line_json_obj[key]\n\n return line_info\n", "path": "mmocr/datasets/utils/parser.py"}]} | 1,408 | 172 |
gh_patches_debug_17996 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-768 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong hook executed when a tilde-suffixed file of the same name exists
- Cookiecutter version: 1.4.0
- Template project url: https://github.com/thorgate/django-project-template
- Python version: 3.4
- Operating System: Ubuntu 15.10 wily
### Description:
When using gedit or some other text editor that pollutes the directory with backup files ending with a tilde, cookiecutter mistakes that for the "real" hook it should run. This resulted in cookiecutter running a ridiculously outdated version of my pre-gen hook.
The obvious solution is to just remove `pre_gen_project.py~`, which works, but I believe ideally cookiecutter shouldn't be running it in the first place.
### What I've run:
```
gedit django-template/hooks/pre_gen_project.py
cookiecutter django-template
```
</issue>
<code>
[start of cookiecutter/hooks.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 cookiecutter.hooks
5 ------------------
6
7 Functions for discovering and executing various cookiecutter hooks.
8 """
9
10 import io
11 import logging
12 import os
13 import subprocess
14 import sys
15 import tempfile
16
17 from jinja2 import Template
18
19 from cookiecutter import utils
20 from .exceptions import FailedHookException
21
22
23 _HOOKS = [
24 'pre_gen_project',
25 'post_gen_project',
26 # TODO: other hooks should be listed here
27 ]
28 EXIT_SUCCESS = 0
29
30
31 def find_hooks():
32 """
33 Must be called with the project template as the current working directory.
34 Returns a dict of all hook scripts provided.
35 Dict's key will be the hook/script's name, without extension, while
36 values will be the absolute path to the script.
37 Missing scripts will not be included in the returned dict.
38 """
39 hooks_dir = 'hooks'
40 r = {}
41 logging.debug('hooks_dir is {0}'.format(hooks_dir))
42 if not os.path.isdir(hooks_dir):
43 logging.debug('No hooks/ dir in template_dir')
44 return r
45 for f in os.listdir(hooks_dir):
46 basename = os.path.splitext(os.path.basename(f))[0]
47 if basename in _HOOKS:
48 r[basename] = os.path.abspath(os.path.join(hooks_dir, f))
49 return r
50
51
52 def run_script(script_path, cwd='.'):
53 """
54 Executes a script from a working directory.
55
56 :param script_path: Absolute path to the script to run.
57 :param cwd: The directory to run the script from.
58 """
59 run_thru_shell = sys.platform.startswith('win')
60 if script_path.endswith('.py'):
61 script_command = [sys.executable, script_path]
62 else:
63 script_command = [script_path]
64
65 utils.make_executable(script_path)
66
67 proc = subprocess.Popen(
68 script_command,
69 shell=run_thru_shell,
70 cwd=cwd
71 )
72 exit_status = proc.wait()
73 if exit_status != EXIT_SUCCESS:
74 raise FailedHookException(
75 "Hook script failed (exit status: %d)" % exit_status)
76
77
78 def run_script_with_context(script_path, cwd, context):
79 """
80 Executes a script after rendering with it Jinja.
81
82 :param script_path: Absolute path to the script to run.
83 :param cwd: The directory to run the script from.
84 :param context: Cookiecutter project template context.
85 """
86 _, extension = os.path.splitext(script_path)
87
88 contents = io.open(script_path, 'r', encoding='utf-8').read()
89
90 with tempfile.NamedTemporaryFile(
91 delete=False,
92 mode='wb',
93 suffix=extension
94 ) as temp:
95 output = Template(contents).render(**context)
96 temp.write(output.encode('utf-8'))
97
98 run_script(temp.name, cwd)
99
100
101 def run_hook(hook_name, project_dir, context):
102 """
103 Try to find and execute a hook from the specified project directory.
104
105 :param hook_name: The hook to execute.
106 :param project_dir: The directory to execute the script from.
107 :param context: Cookiecutter project context.
108 """
109 script = find_hooks().get(hook_name)
110 if script is None:
111 logging.debug('No hooks found')
112 return
113 run_script_with_context(script, project_dir, context)
114
[end of cookiecutter/hooks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py
--- a/cookiecutter/hooks.py
+++ b/cookiecutter/hooks.py
@@ -37,16 +37,20 @@
Missing scripts will not be included in the returned dict.
"""
hooks_dir = 'hooks'
- r = {}
+ hooks = {}
logging.debug('hooks_dir is {0}'.format(hooks_dir))
+
if not os.path.isdir(hooks_dir):
logging.debug('No hooks/ dir in template_dir')
- return r
+ return hooks
+
for f in os.listdir(hooks_dir):
- basename = os.path.splitext(os.path.basename(f))[0]
- if basename in _HOOKS:
- r[basename] = os.path.abspath(os.path.join(hooks_dir, f))
- return r
+ filename = os.path.basename(f)
+ basename = os.path.splitext(filename)[0]
+
+ if basename in _HOOKS and not filename.endswith('~'):
+ hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))
+ return hooks
def run_script(script_path, cwd='.'):
| {"golden_diff": "diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py\n--- a/cookiecutter/hooks.py\n+++ b/cookiecutter/hooks.py\n@@ -37,16 +37,20 @@\n Missing scripts will not be included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n- r = {}\n+ hooks = {}\n logging.debug('hooks_dir is {0}'.format(hooks_dir))\n+\n if not os.path.isdir(hooks_dir):\n logging.debug('No hooks/ dir in template_dir')\n- return r\n+ return hooks\n+\n for f in os.listdir(hooks_dir):\n- basename = os.path.splitext(os.path.basename(f))[0]\n- if basename in _HOOKS:\n- r[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n- return r\n+ filename = os.path.basename(f)\n+ basename = os.path.splitext(filename)[0]\n+\n+ if basename in _HOOKS and not filename.endswith('~'):\n+ hooks[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n+ return hooks\n \n \n def run_script(script_path, cwd='.'):\n", "issue": "Wrong hook executed when a tilde-suffixed file of the same name exists\n- Cookiecutter version: 1.4.0\n- Template project url: https://github.com/thorgate/django-project-template\n- Python version: 3.4\n- Operating System: Ubuntu 15.10 wily\n### Description:\n\nWhen using gedit or some other text editor that pollutes the directory with backup files ending with a tilde, cookiecutter mistakes that for the \"real\" hook it should run. This resulted in cookiecutter running a ridiculously outdated version of my pre-gen hook.\n\nThe obvious solution is to just remove `pre_gen_project.py~`, which works, but I believe ideally cookiecutter shouldn't be running it in the first place.\n### What I've run:\n\n```\ngedit django-template/hooks/pre_gen_project.py\ncookiecutter django-template\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.hooks\n------------------\n\nFunctions for discovering and executing various cookiecutter hooks.\n\"\"\"\n\nimport io\nimport logging\nimport os\nimport subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom cookiecutter import utils\nfrom .exceptions import FailedHookException\n\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n # TODO: other hooks should be listed here\n]\nEXIT_SUCCESS = 0\n\n\ndef find_hooks():\n \"\"\"\n Must be called with the project template as the current working directory.\n Returns a dict of all hook scripts provided.\n Dict's key will be the hook/script's name, without extension, while\n values will be the absolute path to the script.\n Missing scripts will not be included in the returned dict.\n \"\"\"\n hooks_dir = 'hooks'\n r = {}\n logging.debug('hooks_dir is {0}'.format(hooks_dir))\n if not os.path.isdir(hooks_dir):\n logging.debug('No hooks/ dir in template_dir')\n return r\n for f in os.listdir(hooks_dir):\n basename = os.path.splitext(os.path.basename(f))[0]\n if basename in _HOOKS:\n r[basename] = os.path.abspath(os.path.join(hooks_dir, f))\n return r\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"\n Executes a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n \"Hook script 
failed (exit status: %d)\" % exit_status)\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"\n Executes a script after rendering with it Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hooks().get(hook_name)\n if script is None:\n logging.debug('No hooks found')\n return\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}]} | 1,679 | 257 |
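The root cause is that `os.path.splitext` splits at the last dot, so an editor backup such as `pre_gen_project.py~` still reduces to the basename `pre_gen_project` and matches `_HOOKS`; the patch simply rejects filenames ending in `~`. A standalone check of that behaviour (no hooks directory needed):

```python
import os

_HOOKS = ["pre_gen_project", "post_gen_project"]

for filename in ["pre_gen_project.py", "pre_gen_project.py~"]:
    basename = os.path.splitext(filename)[0]
    print(filename, "->", basename,
          "| hook match:", basename in _HOOKS,
          "| accepted after fix:", basename in _HOOKS and not filename.endswith("~"))

# pre_gen_project.py  -> pre_gen_project | hook match: True | accepted after fix: True
# pre_gen_project.py~ -> pre_gen_project | hook match: True | accepted after fix: False
```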
gh_patches_debug_56255 | rasdani/github-patches | git_diff | litestar-org__litestar-1377 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
</issue>
<code>
[start of starlite/events/emitter.py]
1 from __future__ import annotations
2
3 from abc import ABC, abstractmethod
4 from asyncio import CancelledError, Queue, Task, create_task
5 from collections import defaultdict
6 from contextlib import suppress
7 from typing import TYPE_CHECKING, Any, DefaultDict, Sequence
8
9 import sniffio
10
11 from starlite.exceptions import ImproperlyConfiguredException
12
13 __all__ = ("BaseEventEmitterBackend", "SimpleEventEmitter")
14
15
16 if TYPE_CHECKING:
17 from starlite.events.listener import EventListener
18
19
20 class BaseEventEmitterBackend(ABC):
21 """Abstract class used to define event emitter backends."""
22
23 __slots__ = ("listeners",)
24
25 listeners: DefaultDict[str, set[EventListener]]
26
27 def __init__(self, listeners: Sequence[EventListener]):
28 """Create an event emitter instance.
29
30 Args:
31 listeners: A list of listeners.
32 """
33 self.listeners = defaultdict(set)
34 for listener in listeners:
35 for event_id in listener.event_ids:
36 self.listeners[event_id].add(listener)
37
38 @abstractmethod
39 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None: # pragma: no cover
40 """Emit an event to all attached listeners.
41
42 Args:
43 event_id: The ID of the event to emit, e.g 'my_event'.
44 *args: args to pass to the listener(s).
45 **kwargs: kwargs to pass to the listener(s)
46
47 Returns:
48 None
49 """
50 raise NotImplementedError("not implemented")
51
52 @abstractmethod
53 async def on_startup(self) -> None: # pragma: no cover
54 """Hook called on application startup, used to establish connection or perform other async operations.
55
56 Returns:
57 None
58 """
59 raise NotImplementedError("not implemented")
60
61 @abstractmethod
62 async def on_shutdown(self) -> None: # pragma: no cover
63 """Hook called on application shutdown, used to perform cleanup.
64
65 Returns:
66 None
67 """
68 raise NotImplementedError("not implemented")
69
70
71 class SimpleEventEmitter(BaseEventEmitterBackend):
72 """Event emitter the works only in the current process"""
73
74 __slots__ = ("_queue", "_worker_task")
75
76 _worker_task: Task | None
77
78 def __init__(self, listeners: Sequence[EventListener]):
79 """Create an event emitter instance.
80
81 Args:
82 listeners: A list of listeners.
83 """
84 super().__init__(listeners=listeners)
85 self._queue: Queue | None = None
86 self._worker_task = None
87
88 async def _worker(self) -> None:
89 """Worker that runs in a separate task and continuously pulls events from asyncio queue.
90
91 Returns:
92 None
93 """
94 while self._queue:
95 fn, args, kwargs = await self._queue.get()
96 await fn(*args, *kwargs)
97 self._queue.task_done()
98
99 async def on_startup(self) -> None:
100 """Hook called on application startup, used to establish connection or perform other async operations.
101
102 Returns:
103 None
104 """
105 if sniffio.current_async_library() != "asyncio":
106 return
107
108 self._queue = Queue()
109 self._worker_task = create_task(self._worker())
110
111 async def on_shutdown(self) -> None:
112 """Hook called on application shutdown, used to perform cleanup.
113
114 Returns:
115 None
116 """
117
118 if self._queue:
119 await self._queue.join()
120
121 if self._worker_task:
122 self._worker_task.cancel()
123 with suppress(CancelledError):
124 await self._worker_task
125
126 self._worker_task = None
127 self._queue = None
128
129 def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:
130 """Emit an event to all attached listeners.
131
132 Args:
133 event_id: The ID of the event to emit, e.g 'my_event'.
134 *args: args to pass to the listener(s).
135 **kwargs: kwargs to pass to the listener(s)
136
137 Returns:
138 None
139 """
140 if not (self._worker_task and self._queue):
141 if sniffio.current_async_library() != "asyncio":
142 raise ImproperlyConfiguredException("{type(self).__name__} only supports 'asyncio' based event loops")
143
144 raise ImproperlyConfiguredException("Worker not running")
145
146 if listeners := self.listeners.get(event_id):
147 for listener in listeners:
148 self._queue.put_nowait((listener.fn, args, kwargs))
149 return
150 raise ImproperlyConfiguredException(f"no event listeners are registered for event ID: {event_id}")
151
[end of starlite/events/emitter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlite/events/emitter.py b/starlite/events/emitter.py
--- a/starlite/events/emitter.py
+++ b/starlite/events/emitter.py
@@ -93,7 +93,7 @@
"""
while self._queue:
fn, args, kwargs = await self._queue.get()
- await fn(*args, *kwargs)
+ await fn(*args, **kwargs)
self._queue.task_done()
async def on_startup(self) -> None:
| {"golden_diff": "diff --git a/starlite/events/emitter.py b/starlite/events/emitter.py\n--- a/starlite/events/emitter.py\n+++ b/starlite/events/emitter.py\n@@ -93,7 +93,7 @@\n \"\"\"\n while self._queue:\n fn, args, kwargs = await self._queue.get()\n- await fn(*args, *kwargs)\n+ await fn(*args, **kwargs)\n self._queue.task_done()\n \n async def on_startup(self) -> None:\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom asyncio import CancelledError, Queue, Task, create_task\nfrom collections import defaultdict\nfrom contextlib import suppress\nfrom typing import TYPE_CHECKING, Any, DefaultDict, Sequence\n\nimport sniffio\n\nfrom starlite.exceptions import ImproperlyConfiguredException\n\n__all__ = (\"BaseEventEmitterBackend\", \"SimpleEventEmitter\")\n\n\nif TYPE_CHECKING:\n from starlite.events.listener import EventListener\n\n\nclass BaseEventEmitterBackend(ABC):\n \"\"\"Abstract class used to define event emitter backends.\"\"\"\n\n __slots__ = (\"listeners\",)\n\n listeners: DefaultDict[str, set[EventListener]]\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n self.listeners = defaultdict(set)\n for listener in listeners:\n for event_id in listener.event_ids:\n self.listeners[event_id].add(listener)\n\n @abstractmethod\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None: # pragma: no cover\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_startup(self) -> None: # pragma: no cover\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n @abstractmethod\n async def on_shutdown(self) -> None: # pragma: no cover\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n raise NotImplementedError(\"not implemented\")\n\n\nclass SimpleEventEmitter(BaseEventEmitterBackend):\n \"\"\"Event emitter the works only in the current process\"\"\"\n\n __slots__ = (\"_queue\", \"_worker_task\")\n\n _worker_task: Task | None\n\n def __init__(self, listeners: Sequence[EventListener]):\n \"\"\"Create an event emitter instance.\n\n Args:\n listeners: A list of listeners.\n \"\"\"\n super().__init__(listeners=listeners)\n self._queue: Queue | None = None\n self._worker_task = None\n\n async def _worker(self) -> None:\n \"\"\"Worker that runs in a separate task and continuously pulls events from asyncio queue.\n\n Returns:\n 
None\n \"\"\"\n while self._queue:\n fn, args, kwargs = await self._queue.get()\n await fn(*args, *kwargs)\n self._queue.task_done()\n\n async def on_startup(self) -> None:\n \"\"\"Hook called on application startup, used to establish connection or perform other async operations.\n\n Returns:\n None\n \"\"\"\n if sniffio.current_async_library() != \"asyncio\":\n return\n\n self._queue = Queue()\n self._worker_task = create_task(self._worker())\n\n async def on_shutdown(self) -> None:\n \"\"\"Hook called on application shutdown, used to perform cleanup.\n\n Returns:\n None\n \"\"\"\n\n if self._queue:\n await self._queue.join()\n\n if self._worker_task:\n self._worker_task.cancel()\n with suppress(CancelledError):\n await self._worker_task\n\n self._worker_task = None\n self._queue = None\n\n def emit(self, event_id: str, *args: Any, **kwargs: Any) -> None:\n \"\"\"Emit an event to all attached listeners.\n\n Args:\n event_id: The ID of the event to emit, e.g 'my_event'.\n *args: args to pass to the listener(s).\n **kwargs: kwargs to pass to the listener(s)\n\n Returns:\n None\n \"\"\"\n if not (self._worker_task and self._queue):\n if sniffio.current_async_library() != \"asyncio\":\n raise ImproperlyConfiguredException(\"{type(self).__name__} only supports 'asyncio' based event loops\")\n\n raise ImproperlyConfiguredException(\"Worker not running\")\n\n if listeners := self.listeners.get(event_id):\n for listener in listeners:\n self._queue.put_nowait((listener.fn, args, kwargs))\n return\n raise ImproperlyConfiguredException(f\"no event listeners are registered for event ID: {event_id}\")\n", "path": "starlite/events/emitter.py"}]} | 2,045 | 108 |
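The one-character diff embedded in the record above changes `await fn(*args, *kwargs)` to `await fn(*args, **kwargs)`. As a standalone illustration of why that matters (the generic `listener` function below is invented for the example and is not Starlite code):

```python
def listener(*args, **kwargs):
    # Collects whatever it receives so the two call styles can be compared.
    return args, kwargs

args = ("user.created", 42)
kwargs = {"source": "api"}

# Single-star unpacking of a dict iterates its *keys*, so they arrive as extra
# positional arguments and the keyword values are silently lost.
print(listener(*args, *kwargs))   # (('user.created', 42, 'source'), {})

# Double-star unpacking forwards the key/value pairs as keyword arguments.
print(listener(*args, **kwargs))  # (('user.created', 42), {'source': 'api'})
```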
gh_patches_debug_26955 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3411 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BMO Harris Bank
https://branchlocator.bmoharris.com/
</issue>
<code>
[start of locations/spiders/bmo_harris.py]
1 import html
2 import json
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 class BMOHarrisSpider(scrapy.Spider):
10 name = "bmo-harris"
11 item_attributes = { 'brand': "BMO Harris Bank" }
12 allowed_domains = ["branches.bmoharris.com"]
13 download_delay = 0.5
14 start_urls = (
15 'https://branches.bmoharris.com/',
16 )
17
18 def parse_store(self, response):
19 properties = {
20 'addr_full': response.xpath('//meta[@property="business:contact_data:street_address"]/@content').extract_first(),
21 'phone': response.xpath('//meta[@property="business:contact_data:phone_number"]/@content').extract_first(),
22 'city': response.xpath('//meta[@property="business:contact_data:locality"]/@content').extract_first(),
23 'state': response.xpath('//meta[@property="business:contact_data:region"]/@content').extract_first(),
24 'postcode': response.xpath('//meta[@property="business:contact_data:postal_code"]/@content').extract_first(),
25 'country': response.xpath('//meta[@property="business:contact_data:country_name"]/@content').extract_first(),
26 'ref': response.url,
27 'website': response.url,
28 'lat': response.xpath('//meta[@property="place:location:latitude"]/@content').extract_first(),
29 'lon': response.xpath('//meta[@property="place:location:longitude"]/@content').extract_first(),
30 }
31
32 yield GeojsonPointItem(**properties)
33
34 def parse(self, response):
35 # Step into hierarchy of place
36 for url in response.xpath("//div[@class='itemlist']/p/a/@href").extract():
37 yield scrapy.Request(response.urljoin(url))
38
39 # Look for links to stores
40 for url in response.xpath("//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href").extract():
41 yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
42
[end of locations/spiders/bmo_harris.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/bmo_harris.py b/locations/spiders/bmo_harris.py
--- a/locations/spiders/bmo_harris.py
+++ b/locations/spiders/bmo_harris.py
@@ -7,13 +7,14 @@
class BMOHarrisSpider(scrapy.Spider):
- name = "bmo-harris"
- item_attributes = { 'brand': "BMO Harris Bank" }
+ name = "bmo_harris"
+ item_attributes = {'brand': "BMO Harris Bank", 'brand_wikidata': "Q4835981"}
allowed_domains = ["branches.bmoharris.com"]
download_delay = 0.5
start_urls = (
'https://branches.bmoharris.com/',
)
+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
def parse_store(self, response):
properties = {
@@ -33,9 +34,9 @@
def parse(self, response):
# Step into hierarchy of place
- for url in response.xpath("//div[@class='itemlist']/p/a/@href").extract():
+ for url in response.xpath("//ul[@class='itemlist']/li/a/@href").extract():
yield scrapy.Request(response.urljoin(url))
# Look for links to stores
- for url in response.xpath("//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href").extract():
+ for url in response.xpath("//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href").extract():
yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
| {"golden_diff": "diff --git a/locations/spiders/bmo_harris.py b/locations/spiders/bmo_harris.py\n--- a/locations/spiders/bmo_harris.py\n+++ b/locations/spiders/bmo_harris.py\n@@ -7,13 +7,14 @@\n \n \n class BMOHarrisSpider(scrapy.Spider):\n- name = \"bmo-harris\"\n- item_attributes = { 'brand': \"BMO Harris Bank\" }\n+ name = \"bmo_harris\"\n+ item_attributes = {'brand': \"BMO Harris Bank\", 'brand_wikidata': \"Q4835981\"}\n allowed_domains = [\"branches.bmoharris.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://branches.bmoharris.com/',\n )\n+ user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'\n \n def parse_store(self, response):\n properties = {\n@@ -33,9 +34,9 @@\n \n def parse(self, response):\n # Step into hierarchy of place\n- for url in response.xpath(\"//div[@class='itemlist']/p/a/@href\").extract():\n+ for url in response.xpath(\"//ul[@class='itemlist']/li/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url))\n \n # Look for links to stores\n- for url in response.xpath(\"//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href\").extract():\n+ for url in response.xpath(\"//ul[@class='itemlist']/li/div/span[@itemprop='streetAddress']/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n", "issue": "BMO Harris Bank\nhttps://branchlocator.bmoharris.com/\n", "before_files": [{"content": "import html\nimport json\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass BMOHarrisSpider(scrapy.Spider):\n name = \"bmo-harris\"\n item_attributes = { 'brand': \"BMO Harris Bank\" }\n allowed_domains = [\"branches.bmoharris.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://branches.bmoharris.com/',\n )\n\n def parse_store(self, response):\n properties = {\n 'addr_full': response.xpath('//meta[@property=\"business:contact_data:street_address\"]/@content').extract_first(),\n 'phone': response.xpath('//meta[@property=\"business:contact_data:phone_number\"]/@content').extract_first(),\n 'city': response.xpath('//meta[@property=\"business:contact_data:locality\"]/@content').extract_first(),\n 'state': response.xpath('//meta[@property=\"business:contact_data:region\"]/@content').extract_first(),\n 'postcode': response.xpath('//meta[@property=\"business:contact_data:postal_code\"]/@content').extract_first(),\n 'country': response.xpath('//meta[@property=\"business:contact_data:country_name\"]/@content').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n 'lat': response.xpath('//meta[@property=\"place:location:latitude\"]/@content').extract_first(),\n 'lon': response.xpath('//meta[@property=\"place:location:longitude\"]/@content').extract_first(),\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n # Step into hierarchy of place\n for url in response.xpath(\"//div[@class='itemlist']/p/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url))\n\n # Look for links to stores\n for url in response.xpath(\"//div[@class='itemlist']/li/span[@itemprop='streetAddress']/a/@href\").extract():\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n", "path": "locations/spiders/bmo_harris.py"}]} | 1,067 | 422 |
gh_patches_debug_5298
<issue>
_ModuleWithDeprecations doesn't handle patching properly.
`_ModuleWithDeprecations` catches `__getattr__` and `__setattr__` to patch through to the underlying module, but does not intercept `__delattr__`. That means that if you're using something like `mock.patch`, the mock successfully lands in place, but cannot be removed: the mock was applied to the underlying module, but the delete comes from the proxy.
Should be easily fixed.
</issue>
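For illustration, a minimal self-contained reproduction of the failure mode described above. The `Proxy` class here is a made-up stand-in for `_ModuleWithDeprecations`, not the library's real code; the point is only that `mock.patch` needs `delattr` forwarded to the wrapped module when it tears a patch down:

```python
import types
from unittest import mock

real = types.ModuleType("real")
real.VALUE = 1


class Proxy:
    """Hypothetical module proxy that forwards attribute access."""

    def __init__(self, module):
        self.__dict__["_module"] = module

    def __getattr__(self, attr):
        return getattr(self._module, attr)

    def __setattr__(self, attr, value):
        setattr(self._module, attr, value)

    def __delattr__(self, attr):
        # Without this method, mock.patch's cleanup calls delattr() on the
        # proxy, finds nothing in the proxy's own __dict__, and the context
        # manager exit raises AttributeError instead of removing the mock.
        delattr(self._module, attr)


proxy = Proxy(real)
with mock.patch.object(proxy, "VALUE", 42):
    assert proxy.VALUE == 42
print("patch removed cleanly, VALUE is", proxy.VALUE)
```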
<code>
[start of src/cryptography/utils.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 import binascii
9 import inspect
10 import struct
11 import sys
12 import warnings
13
14
15 # the functions deprecated in 1.0 are on an arbitrarily extended deprecation
16 # cycle and should not be removed until we agree on when that cycle ends.
17 DeprecatedIn10 = DeprecationWarning
18 DeprecatedIn12 = DeprecationWarning
19
20
21 def read_only_property(name):
22 return property(lambda self: getattr(self, name))
23
24
25 def register_interface(iface):
26 def register_decorator(klass):
27 verify_interface(iface, klass)
28 iface.register(klass)
29 return klass
30 return register_decorator
31
32
33 if hasattr(int, "from_bytes"):
34 int_from_bytes = int.from_bytes
35 else:
36 def int_from_bytes(data, byteorder, signed=False):
37 assert byteorder == 'big'
38 assert not signed
39
40 if len(data) % 4 != 0:
41 data = (b'\x00' * (4 - (len(data) % 4))) + data
42
43 result = 0
44
45 while len(data) > 0:
46 digit, = struct.unpack('>I', data[:4])
47 result = (result << 32) + digit
48 # TODO: this is quadratic in the length of data
49 data = data[4:]
50
51 return result
52
53
54 def int_to_bytes(integer, length=None):
55 hex_string = '%x' % integer
56 if length is None:
57 n = len(hex_string)
58 else:
59 n = length * 2
60 return binascii.unhexlify(hex_string.zfill(n + (n & 1)))
61
62
63 class InterfaceNotImplemented(Exception):
64 pass
65
66
67 if hasattr(inspect, "signature"):
68 signature = inspect.signature
69 else:
70 signature = inspect.getargspec
71
72
73 def verify_interface(iface, klass):
74 for method in iface.__abstractmethods__:
75 if not hasattr(klass, method):
76 raise InterfaceNotImplemented(
77 "{0} is missing a {1!r} method".format(klass, method)
78 )
79 if isinstance(getattr(iface, method), abc.abstractproperty):
80 # Can't properly verify these yet.
81 continue
82 sig = signature(getattr(iface, method))
83 actual = signature(getattr(klass, method))
84 if sig != actual:
85 raise InterfaceNotImplemented(
86 "{0}.{1}'s signature differs from the expected. Expected: "
87 "{2!r}. Received: {3!r}".format(
88 klass, method, sig, actual
89 )
90 )
91
92
93 if sys.version_info >= (2, 7):
94 def bit_length(x):
95 return x.bit_length()
96 else:
97 def bit_length(x):
98 return len(bin(x)) - (2 + (x <= 0))
99
100
101 class _DeprecatedValue(object):
102 def __init__(self, value, message, warning_class):
103 self.value = value
104 self.message = message
105 self.warning_class = warning_class
106
107
108 class _ModuleWithDeprecations(object):
109 def __init__(self, module):
110 self.__dict__["_module"] = module
111
112 def __getattr__(self, attr):
113 obj = getattr(self._module, attr)
114 if isinstance(obj, _DeprecatedValue):
115 warnings.warn(obj.message, obj.warning_class, stacklevel=2)
116 obj = obj.value
117 return obj
118
119 def __setattr__(self, attr, value):
120 setattr(self._module, attr, value)
121
122 def __dir__(self):
123 return ["_module"] + dir(self._module)
124
125
126 def deprecated(value, module_name, message, warning_class):
127 module = sys.modules[module_name]
128 if not isinstance(module, _ModuleWithDeprecations):
129 sys.modules[module_name] = module = _ModuleWithDeprecations(module)
130 return _DeprecatedValue(value, message, warning_class)
131
[end of src/cryptography/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py
--- a/src/cryptography/utils.py
+++ b/src/cryptography/utils.py
@@ -119,6 +119,13 @@
def __setattr__(self, attr, value):
setattr(self._module, attr, value)
+ def __delattr__(self, attr):
+ obj = getattr(self._module, attr)
+ if isinstance(obj, _DeprecatedValue):
+ warnings.warn(obj.message, obj.warning_class, stacklevel=2)
+
+ delattr(self._module, attr)
+
def __dir__(self):
return ["_module"] + dir(self._module)
| {"golden_diff": "diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py\n--- a/src/cryptography/utils.py\n+++ b/src/cryptography/utils.py\n@@ -119,6 +119,13 @@\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n \n+ def __delattr__(self, attr):\n+ obj = getattr(self._module, attr)\n+ if isinstance(obj, _DeprecatedValue):\n+ warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n+\n+ delattr(self._module, attr)\n+\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n", "issue": "_ModuleWithDeprecations doesn't handle patching properly.\n`_ModuleWithDeprecations` catches `__getattr__` and `__setattr__` to patch through to the underlying module, but does not intercept `__delattr__`. That means that if you're using something like `mock.patch`, the mock successfully lands in place, but cannot be removed: the mock was applied to the underlying module, but the delete comes from the proxy.\n\nShould be easily fixed.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nimport binascii\nimport inspect\nimport struct\nimport sys\nimport warnings\n\n\n# the functions deprecated in 1.0 are on an arbitrarily extended deprecation\n# cycle and should not be removed until we agree on when that cycle ends.\nDeprecatedIn10 = DeprecationWarning\nDeprecatedIn12 = DeprecationWarning\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\nif hasattr(int, \"from_bytes\"):\n int_from_bytes = int.from_bytes\nelse:\n def int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n # TODO: this is quadratic in the length of data\n data = data[4:]\n\n return result\n\n\ndef int_to_bytes(integer, length=None):\n hex_string = '%x' % integer\n if length is None:\n n = len(hex_string)\n else:\n n = length * 2\n return binascii.unhexlify(hex_string.zfill(n + (n & 1)))\n\n\nclass InterfaceNotImplemented(Exception):\n pass\n\n\nif hasattr(inspect, \"signature\"):\n signature = inspect.signature\nelse:\n signature = inspect.getargspec\n\n\ndef verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n raise InterfaceNotImplemented(\n \"{0} is missing a {1!r} method\".format(klass, method)\n )\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n sig = signature(getattr(iface, method))\n actual = signature(getattr(klass, method))\n if sig != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. 
Received: {3!r}\".format(\n klass, method, sig, actual\n )\n )\n\n\nif sys.version_info >= (2, 7):\n def bit_length(x):\n return x.bit_length()\nelse:\n def bit_length(x):\n return len(bin(x)) - (2 + (x <= 0))\n\n\nclass _DeprecatedValue(object):\n def __init__(self, value, message, warning_class):\n self.value = value\n self.message = message\n self.warning_class = warning_class\n\n\nclass _ModuleWithDeprecations(object):\n def __init__(self, module):\n self.__dict__[\"_module\"] = module\n\n def __getattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n obj = obj.value\n return obj\n\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n\n\ndef deprecated(value, module_name, message, warning_class):\n module = sys.modules[module_name]\n if not isinstance(module, _ModuleWithDeprecations):\n sys.modules[module_name] = module = _ModuleWithDeprecations(module)\n return _DeprecatedValue(value, message, warning_class)\n", "path": "src/cryptography/utils.py"}]} | 1,801 | 149 |
gh_patches_debug_6466
<issue>
Return HTTP errors in proper format
Proposer: Eric Brehault
Seconder:
# Motivation
When a page does not exist, or has an error, or is not allowed for the user, Plone returns the appropriate HTTP error (404, 500, ...), and the response is a human readable page, properly skinned, which is nice for the user.
And if the requested resource is not a page (an image, a JS file, an AJAX call, etc.), Plone also returns this human readable page.
It is useless because the page will not be displayed, and it produces many problems:
- the response is very heavy,
- it involves a lot of processing (Plone will render an entire page for nothing),
- for AJAX calls, the response cannot be easily interpreted,
- it might produce a cascade of errors (for instance: the regular response is not supposed to be rendered via Diazo, as it is not an HTML page, but the error is rendered by Diazo, and it might produce another error).
# Proposed solution
We could display the human readable error page only if the current request `HTTP_ACCEPT` parameter contains `text/html`; in other cases, we would just return a simple JSON error response (a minimal sketch follows the issue).
# Proposed implementation
Test the `HTTP_ACCEPT` value in `Products/CMFPlone/skins/plone_templates/standard_error_message.py`, and call the existing template or make a JSON response accordingly.
# Risks
No identified risks.
</issue>
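A framework-free sketch of the proposed content negotiation (the helper name and return convention are invented for the example; the real change would live in the `standard_error_message` script):

```python
import json


def render_error(accept_header, error_type):
    """Return a (content_type, body) pair for an error response."""
    # Only clients that actually render HTML get the full, skinned error page.
    if "text/html" in accept_header:
        return "text/html", "<html>... full skinned error page ...</html>"
    # AJAX calls, images, JS files, etc. get a small machine-readable payload.
    return "application/json", json.dumps({"error_type": error_type})


print(render_error("text/html,application/xhtml+xml", "NotFound"))
print(render_error("application/json, text/javascript", "NotFound"))
```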
<code>
[start of Products/CMFPlone/skins/plone_templates/standard_error_message.py]
1 ## Script (Python) "standard_error_message"
2 ##bind container=container
3 ##bind context=context
4 ##bind namespace=
5 ##bind script=script
6 ##bind subpath=traverse_subpath
7 ##parameters=**kwargs
8 ##title=Dispatches to relevant error view
9
10 ## by default we handle everything in 1 PageTemplate.
11 # you could easily check for the error_type and
12 # dispatch to an appropriate PageTemplate.
13
14 # Check if the object is traversable, if not it might be a view, get its parent
15 # because we need to render the error on an actual content object
16 from AccessControl import Unauthorized
17 try:
18 while not hasattr(context.aq_explicit, 'restrictedTraverse'):
19 context = context.aq_parent
20 except (Unauthorized, AttributeError):
21 context = context.portal_url.getPortalObject()
22
23 error_type = kwargs.get('error_type', None)
24 error_message = kwargs.get('error_message', None)
25 error_log_url = kwargs.get('error_log_url', None)
26 error_tb = kwargs.get('error_tb', None)
27 error_traceback = kwargs.get('error_traceback', None)
28 error_value = kwargs.get('error_value', None)
29
30 if error_log_url:
31 error_log_id = error_log_url.split('?id=')[1]
32 else:
33 error_log_id = None
34
35
36 no_actions = {'folder': [], 'user': [], 'global': [], 'workflow': []}
37 error_page = context.default_error_message(
38 error_type=error_type,
39 error_message=error_message,
40 error_tb=error_tb,
41 error_value=error_value,
42 error_log_url=error_log_url,
43 error_log_id=error_log_id,
44 no_portlets=True,
45 actions=no_actions)
46
47 return error_page
48
[end of Products/CMFPlone/skins/plone_templates/standard_error_message.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/skins/plone_templates/standard_error_message.py b/Products/CMFPlone/skins/plone_templates/standard_error_message.py
--- a/Products/CMFPlone/skins/plone_templates/standard_error_message.py
+++ b/Products/CMFPlone/skins/plone_templates/standard_error_message.py
@@ -27,6 +27,10 @@
error_traceback = kwargs.get('error_traceback', None)
error_value = kwargs.get('error_value', None)
+if "text/html" not in context.REQUEST.getHeader('Accept', ''):
+ context.REQUEST.RESPONSE.setHeader("Content-Type", "application/json")
+ return '{"error_type": "{0:s}"}'.format(error_type)
+
if error_log_url:
error_log_id = error_log_url.split('?id=')[1]
else:
| {"golden_diff": "diff --git a/Products/CMFPlone/skins/plone_templates/standard_error_message.py b/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n--- a/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n+++ b/Products/CMFPlone/skins/plone_templates/standard_error_message.py\n@@ -27,6 +27,10 @@\n error_traceback = kwargs.get('error_traceback', None)\n error_value = kwargs.get('error_value', None)\n \n+if \"text/html\" not in context.REQUEST.getHeader('Accept', ''):\n+ context.REQUEST.RESPONSE.setHeader(\"Content-Type\", \"application/json\")\n+ return '{\"error_type\": \"{0:s}\"}'.format(error_type)\n+\n if error_log_url:\n error_log_id = error_log_url.split('?id=')[1]\n else:\n", "issue": "Return HTTP errors in proper format\nProposer: Eric Brehault\nSeconder:\n# Motivation\n\nWhen a page does not exist, or has an error, or is not allowed for the user, Plone returns the appropriate HTTP error (404, 500, ...), and the response is a human readable page, properly skinned, which nice for the user.\nAnd if the requested resource is not a page (an image, a JS file, an AJAX call, etc.), Plone also returns this human readable page.\nIt is useless because the page will not be displayed, and it produces many problems:\n- the response is very heavy,\n- it involves a lot of processing (Plone will render an entire page for nothing),\n- for AJAX call, the response cannot be easily interperted,\n- it might produce a cascade of errors (for instance: the regular response is not supposed to be rendered via Diazo, as it is not an HTML page, but the error is rendered by Diazo, and it might produce another error).\n# Proposed solution\n\nWe could display the human readable error page only if the current request `HTTP_ACCEPT` parameter contains `text/html`, in other cases, we would just return a simple JSON error reponse.\n# Proposed implementation\n\nTest the `HTTP_ACCEPT` value in `Products/CMFPlone/skins/plone_templates/standard_error_message.py`, and call the existing template or make a JSON response accordingly.\n# Risks\n\nNo identified risks.\n\n", "before_files": [{"content": "## Script (Python) \"standard_error_message\"\n##bind container=container\n##bind context=context\n##bind namespace=\n##bind script=script\n##bind subpath=traverse_subpath\n##parameters=**kwargs\n##title=Dispatches to relevant error view\n\n## by default we handle everything in 1 PageTemplate.\n# you could easily check for the error_type and\n# dispatch to an appropriate PageTemplate.\n\n# Check if the object is traversable, if not it might be a view, get its parent\n# because we need to render the error on an actual content object\nfrom AccessControl import Unauthorized\ntry:\n while not hasattr(context.aq_explicit, 'restrictedTraverse'):\n context = context.aq_parent\nexcept (Unauthorized, AttributeError):\n context = context.portal_url.getPortalObject()\n\nerror_type = kwargs.get('error_type', None)\nerror_message = kwargs.get('error_message', None)\nerror_log_url = kwargs.get('error_log_url', None)\nerror_tb = kwargs.get('error_tb', None)\nerror_traceback = kwargs.get('error_traceback', None)\nerror_value = kwargs.get('error_value', None)\n\nif error_log_url:\n error_log_id = error_log_url.split('?id=')[1]\nelse:\n error_log_id = None\n\n\nno_actions = {'folder': [], 'user': [], 'global': [], 'workflow': []}\nerror_page = context.default_error_message(\n error_type=error_type,\n error_message=error_message,\n error_tb=error_tb,\n error_value=error_value,\n error_log_url=error_log_url,\n 
error_log_id=error_log_id,\n no_portlets=True,\n actions=no_actions)\n\nreturn error_page\n", "path": "Products/CMFPlone/skins/plone_templates/standard_error_message.py"}]} | 1,322 | 191 |
gh_patches_debug_3520
<issue>
No `python_requires` defined
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
It seems that no `python_requires` is defined for the `uvicorn` package, which in turn results in the latest version being installed in a Python 3.6 (CI) environment (that subsequently fails).
If `python_requires` were defined to restrict the package to supported versions of the interpreter, I would have got an older version (that supported `py36`) instead.
### Steps to reproduce the bug
In a `py36` environment
```
pip install uvicorn
# Run uvicorn
# ...
```
### Expected behavior
An older version is installed that works.
### Actual behavior
`uvicorn` errors out, says `py36` is unsupported.
### Debugging material
_No response_
### Environment
CPython 3.6
### Additional context
_No response_
</issue>
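For reference, a minimal sketch of the kind of declaration the report asks for (hypothetical package name and version, not uvicorn's actual `setup.py`). Note that pip can only honour the marker for releases that were published with it in their metadata:

```python
from setuptools import setup

setup(
    name="example-package",  # hypothetical project for illustration
    version="0.1.0",
    # pip running under Python 3.6 will skip this release and resolve to the
    # newest release whose python_requires still admits the interpreter.
    python_requires=">=3.7",
)
```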
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 path = os.path.join(package, "__init__.py")
15 init_py = open(path, "r", encoding="utf8").read()
16 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
17
18
19 def get_long_description():
20 """
21 Return the README.
22 """
23 return open("README.md", "r", encoding="utf8").read()
24
25
26 def get_packages(package):
27 """
28 Return root package and all sub-packages.
29 """
30 return [
31 dirpath
32 for dirpath, dirnames, filenames in os.walk(package)
33 if os.path.exists(os.path.join(dirpath, "__init__.py"))
34 ]
35
36
37 env_marker_cpython = (
38 "sys_platform != 'win32'"
39 " and (sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'PyPy')"
41 )
42
43 env_marker_win = "sys_platform == 'win32'"
44 env_marker_below_38 = "python_version < '3.8'"
45
46 minimal_requirements = [
47 "asgiref>=3.4.0",
48 "click>=7.0",
49 "h11>=0.8",
50 "typing-extensions;" + env_marker_below_38,
51 ]
52
53
54 extra_requirements = [
55 "websockets>=10.0",
56 "httptools>=0.2.0,<0.4.0",
57 "uvloop>=0.14.0,!=0.15.0,!=0.15.1; " + env_marker_cpython,
58 "colorama>=0.4;" + env_marker_win,
59 "watchgod>=0.6",
60 "python-dotenv>=0.13",
61 "PyYAML>=5.1",
62 ]
63
64
65 setup(
66 name="uvicorn",
67 version=get_version("uvicorn"),
68 url="https://www.uvicorn.org/",
69 license="BSD",
70 description="The lightning-fast ASGI server.",
71 long_description=get_long_description(),
72 long_description_content_type="text/markdown",
73 author="Tom Christie",
74 author_email="[email protected]",
75 packages=get_packages("uvicorn"),
76 install_requires=minimal_requirements,
77 extras_require={"standard": extra_requirements},
78 include_package_data=True,
79 classifiers=[
80 "Development Status :: 4 - Beta",
81 "Environment :: Web Environment",
82 "Intended Audience :: Developers",
83 "License :: OSI Approved :: BSD License",
84 "Operating System :: OS Independent",
85 "Topic :: Internet :: WWW/HTTP",
86 "Programming Language :: Python :: 3",
87 "Programming Language :: Python :: 3.7",
88 "Programming Language :: Python :: 3.8",
89 "Programming Language :: Python :: 3.9",
90 "Programming Language :: Python :: 3.10",
91 "Programming Language :: Python :: Implementation :: CPython",
92 "Programming Language :: Python :: Implementation :: PyPy",
93 ],
94 entry_points="""
95 [console_scripts]
96 uvicorn=uvicorn.main:main
97 """,
98 project_urls={
99 "Funding": "https://github.com/sponsors/encode",
100 "Source": "https://github.com/encode/uvicorn",
101 "Changelog": "https://github.com/encode/uvicorn/blob/master/CHANGELOG.md",
102 },
103 )
104
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -73,6 +73,7 @@
author="Tom Christie",
author_email="[email protected]",
packages=get_packages("uvicorn"),
+ python_requires=">=3.7",
install_requires=minimal_requirements,
extras_require={"standard": extra_requirements},
include_package_data=True,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -73,6 +73,7 @@\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n+ python_requires=\">=3.7\",\n install_requires=minimal_requirements,\n extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n", "issue": "No `python_requires` defined\n### Checklist\r\n\r\n- [X] The bug is reproducible against the latest release or `master`.\r\n- [X] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\nIt seems that no `python_requires` is defined for the `uvicorn` package, which in turn results in the latest version being installed in a Python 3.6 (CI) environment (that subsequently fails).\r\n\r\nIf `python_requires` were defined to restrict the package to supported versions of the interpreter, I would have got an older version (that supported `py36`) instead.\r\n\r\n### Steps to reproduce the bug\r\n\r\nIn a `py36` environment\r\n```\r\npip install uvicorn\r\n# Run uvicorn\r\n# ...\r\n```\r\n\r\n### Expected behavior\r\n\r\nAn older version is installed that works.\r\n\r\n### Actual behavior\r\n\r\n`uvicorn` errors out, says `py36` is unsupported.\r\n\r\n### Debugging material\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\nCPython 3.6\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, \"__init__.py\")\n init_py = open(path, \"r\", encoding=\"utf8\").read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open(\"README.md\", \"r\", encoding=\"utf8\").read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [\n dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, \"__init__.py\"))\n ]\n\n\nenv_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and (sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy')\"\n)\n\nenv_marker_win = \"sys_platform == 'win32'\"\nenv_marker_below_38 = \"python_version < '3.8'\"\n\nminimal_requirements = [\n \"asgiref>=3.4.0\",\n \"click>=7.0\",\n \"h11>=0.8\",\n \"typing-extensions;\" + env_marker_below_38,\n]\n\n\nextra_requirements = [\n \"websockets>=10.0\",\n \"httptools>=0.2.0,<0.4.0\",\n \"uvloop>=0.14.0,!=0.15.0,!=0.15.1; \" + env_marker_cpython,\n \"colorama>=0.4;\" + env_marker_win,\n \"watchgod>=0.6\",\n \"python-dotenv>=0.13\",\n \"PyYAML>=5.1\",\n]\n\n\nsetup(\n name=\"uvicorn\",\n version=get_version(\"uvicorn\"),\n url=\"https://www.uvicorn.org/\",\n license=\"BSD\",\n description=\"The lightning-fast ASGI server.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n install_requires=minimal_requirements,\n extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS 
Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\",\n project_urls={\n \"Funding\": \"https://github.com/sponsors/encode\",\n \"Source\": \"https://github.com/encode/uvicorn\",\n \"Changelog\": \"https://github.com/encode/uvicorn/blob/master/CHANGELOG.md\",\n },\n)\n", "path": "setup.py"}]} | 1,724 | 90 |
gh_patches_debug_1016
<issue>
docs build failing on Pygments lexer warning
Hm. Something related to https://github.com/spatialaudio/nbsphinx/issues/24 is breaking the docs build. We're getting
```pytb
WARNING: Pygments lexer name 'ipython3' is not known
```
for all the notebooks during the docs build and we fail on warnings.
_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/2066#issuecomment-1329937208_
</issue>
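One common remedy for this kind of breakage is an exclusion pin on the dependency that regressed. The snippet below only demonstrates what such a pin admits, using the `packaging` library; the specific version number is an assumption tied to this issue's eventual fix rather than something stated in the quoted report:

```python
from packaging.specifiers import SpecifierSet

# An exclusion pin keeps every release except the single problematic one.
spec = SpecifierSet("!=8.7.0")

for version in ("8.6.0", "8.7.0", "8.8.0"):
    print(version, "allowed" if version in spec else "excluded")
```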
<code>
[start of setup.py]
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow>=2.7.0', # c.f. PR #1962
7 'tensorflow-probability>=0.11.0', # c.f. PR #1657
8 ],
9 'torch': ['torch>=1.10.0'], # c.f. PR #1657
10 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.61,!=0.1.68'], # c.f. PR #1962, Issue #1501
11 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567
12 'minuit': ['iminuit>=2.7.0'], # c.f. PR #1895
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
23 extras_require['test'] = sorted(
24 set(
25 extras_require['backends']
26 + extras_require['xmlio']
27 + extras_require['contrib']
28 + extras_require['shellcomplete']
29 + [
30 'scikit-hep-testdata>=0.4.11',
31 'pytest>=6.0',
32 'coverage[toml]>=6.0.0',
33 'pytest-mock',
34 'requests-mock>=1.9.0',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'papermill~=2.3.4',
40 'scrapbook~=0.5.0',
41 'jupyter',
42 'graphviz',
43 'pytest-socket>=0.2.0', # c.f. PR #1917
44 ]
45 )
46 )
47 extras_require['docs'] = sorted(
48 set(
49 extras_require['xmlio']
50 + extras_require['contrib']
51 + [
52 'sphinx>=5.1.1', # c.f. https://github.com/scikit-hep/pyhf/pull/1926
53 'sphinxcontrib-bibtex~=2.1',
54 'sphinx-click',
55 'sphinx_rtd_theme',
56 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620
57 'ipywidgets',
58 'sphinx-issues',
59 'sphinx-copybutton>=0.3.2',
60 'sphinx-togglebutton>=0.3.0',
61 ]
62 )
63 )
64 extras_require['develop'] = sorted(
65 set(
66 extras_require['docs']
67 + extras_require['test']
68 + [
69 'nbdime',
70 'tbump>=6.7.0',
71 'ipython',
72 'pre-commit',
73 'nox',
74 'check-manifest',
75 'codemetapy>=2.3.0',
76 'twine',
77 ]
78 )
79 )
80 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
81
82
83 setup(
84 extras_require=extras_require,
85 use_scm_version=lambda: {'local_scheme': lambda version: ''},
86 )
87
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -58,6 +58,7 @@
'sphinx-issues',
'sphinx-copybutton>=0.3.2',
'sphinx-togglebutton>=0.3.0',
+ 'ipython!=8.7.0', # c.f. https://github.com/scikit-hep/pyhf/pull/2068
]
)
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -58,6 +58,7 @@\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n 'sphinx-togglebutton>=0.3.0',\n+ 'ipython!=8.7.0', # c.f. https://github.com/scikit-hep/pyhf/pull/2068\n ]\n )\n )\n", "issue": "docs build failing on Pygments lexter warning\nHm. Something related to https://github.com/spatialaudio/nbsphinx/issues/24 is breaking the docs build. We're getting\r\n\r\n```pytb\r\nWARNING: Pygments lexer name 'ipython3' is not known\r\n```\r\n\r\nfor all the notebooks during the docs build and we fail on warnings.\r\n\r\n_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/2066#issuecomment-1329937208_\r\n \n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow>=2.7.0', # c.f. PR #1962\n 'tensorflow-probability>=0.11.0', # c.f. PR #1657\n ],\n 'torch': ['torch>=1.10.0'], # c.f. PR #1657\n 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.61,!=0.1.68'], # c.f. PR #1962, Issue #1501\n 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567\n 'minuit': ['iminuit>=2.7.0'], # c.f. PR #1895\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'scikit-hep-testdata>=0.4.11',\n 'pytest>=6.0',\n 'coverage[toml]>=6.0.0',\n 'pytest-mock',\n 'requests-mock>=1.9.0',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'papermill~=2.3.4',\n 'scrapbook~=0.5.0',\n 'jupyter',\n 'graphviz',\n 'pytest-socket>=0.2.0', # c.f. PR #1917\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'sphinx>=5.1.1', # c.f. https://github.com/scikit-hep/pyhf/pull/1926\n 'sphinxcontrib-bibtex~=2.1',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n 'sphinx-togglebutton>=0.3.0',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['test']\n + [\n 'nbdime',\n 'tbump>=6.7.0',\n 'ipython',\n 'pre-commit',\n 'nox',\n 'check-manifest',\n 'codemetapy>=2.3.0',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 1,559 | 106 |
gh_patches_debug_52175
<issue>
Error reading integer
From VS (might not be a ptvsd bug, not sure at this point):
1. Create a new Python application
2. Add a new item: Python unit test
3. Set the unit test as the startup file
4. Press F5
Result:
```
---------------------------
Microsoft Visual Studio
---------------------------
Error reading integer. Unexpected token: Boolean. Path 'exitCode'.
---------------------------
OK
---------------------------
```
</issue>
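The dialog text suggests a boolean ended up in the `exitCode` field that the client parses as an integer. A standalone sketch of normalising `SystemExit.code` before reporting it (hypothetical helper, deliberately more defensive than a plain `int()` cast):

```python
def exit_code_from(exc):
    """Map SystemExit.code (None, bool, int, str, ...) onto an integer."""
    code = exc.code
    if code is None:            # bare sys.exit() means success
        return 0
    try:
        return int(code)        # covers ints and bools: int(False) == 0
    except (TypeError, ValueError):
        return 1                # sys.exit("message") conventionally means failure


for exc in (SystemExit(), SystemExit(3), SystemExit(False), SystemExit("boom")):
    print(repr(exc.code), "->", exit_code_from(exc))
```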
<code>
[start of ptvsd/debugger.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7
8 __author__ = "Microsoft Corporation <[email protected]>"
9 __version__ = "4.0.0a1"
10
11 DONT_DEBUG = []
12
13
14 def debug(filename, port_num, debug_id, debug_options, run_as):
15 # TODO: docstring
16
17 # import the wrapper first, so that it gets a chance
18 # to detour pydevd socket functionality.
19 import ptvsd.wrapper
20 import pydevd
21
22 args = [
23 '--port', str(port_num),
24 '--client', '127.0.0.1',
25 ]
26 if run_as == 'module':
27 args.append('--module')
28 args.extend(('--file', filename + ":"))
29 else:
30 args.extend(('--file', filename))
31 sys.argv[1:0] = args
32 try:
33 pydevd.main()
34 except SystemExit as ex:
35 ptvsd.wrapper.ptvsd_sys_exit_code = ex.code
36 raise
37
[end of ptvsd/debugger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -32,5 +32,5 @@
try:
pydevd.main()
except SystemExit as ex:
- ptvsd.wrapper.ptvsd_sys_exit_code = ex.code
+ ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)
raise
| {"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -32,5 +32,5 @@\n try:\n pydevd.main()\n except SystemExit as ex:\n- ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n+ ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)\n raise\n", "issue": "Error reading integer\nFrom VS (might not be a ptvsd bug, not sure at this point):\r\nCreate new python application\r\nAdd new item, python unit test\r\nSet the unit test as startup file\r\nF5\r\n\r\nResult:\r\n```\r\n---------------------------\r\nMicrosoft Visual Studio\r\n---------------------------\r\nError reading integer. Unexpected token: Boolean. Path 'exitCode'.\r\n---------------------------\r\nOK \r\n---------------------------\r\n```\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a1\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n\n # import the wrapper first, so that it gets a chance\n # to detour pydevd socket functionality.\n import ptvsd.wrapper\n import pydevd\n\n args = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n ]\n if run_as == 'module':\n args.append('--module')\n args.extend(('--file', filename + \":\"))\n else:\n args.extend(('--file', filename))\n sys.argv[1:0] = args\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n raise\n", "path": "ptvsd/debugger.py"}]} | 932 | 105 |
gh_patches_debug_17990
<issue>
Upgrading DB from v26->v27 can fail if user balancer LP events stored in their DB
## Problem Definition
A user who upgraded from v1.16.2 to v1.18.1 notified us that they saw a DB upgrade failure from v26->v27, which corresponds to the app versions v1.17.2 to v1.18.0.
It turns out that for user DBs that had some Balancer LP events detected, with both the balancer events and the balancer pools DB tables populated, the DB upgrade would fail, since the upgrade deletes the balancer pools table first and can therefore hit a constraint (illustrated just after the issue).
## Workaround
Workaround is rather easy. Download v1.17.0-v1.17.2, since that can open v26 DB, purge all uniswap and balancer data, and then open with v1.18.XX.
## Task
Fix the upgrade so that this does not occur even for this special case of users.
</issue>
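A self-contained illustration of the constraint being hit, using a throwaway in-memory SQLite schema (table and column names are simplified for the example and are not the application's real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON;")
con.execute("CREATE TABLE pools (address TEXT PRIMARY KEY);")
con.execute(
    "CREATE TABLE events ("
    " id INTEGER PRIMARY KEY,"
    " pool TEXT NOT NULL REFERENCES pools(address));"
)
con.execute("INSERT INTO pools VALUES ('0xpool');")
con.execute("INSERT INTO events VALUES (1, '0xpool');")

try:
    # Dropping the referenced (parent) table while referencing rows still
    # exist violates the foreign key -- the situation described for DBs that
    # had both tables populated.
    con.execute("DROP TABLE pools;")
except sqlite3.DatabaseError as err:
    print("dropping the parent table first fails:", err)

con.execute("DROP TABLE events;")  # drop the referencing table first ...
con.execute("DROP TABLE pools;")   # ... then the parent goes cleanly
print("dropping the child table before the parent succeeds")
```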
<code>
[start of rotkehlchen/db/upgrades/v26_v27.py]
1 from typing import TYPE_CHECKING
2
3 if TYPE_CHECKING:
4 from rotkehlchen.db.dbhandler import DBHandler
5
6
7 def upgrade_v26_to_v27(db: 'DBHandler') -> None:
8 """Upgrades the DB from v26 to v27
9
10 - Deletes and recreates the tables that were changed after removing UnknownEthereumToken
11 """
12 cursor = db.conn.cursor()
13 cursor.execute('DROP TABLE IF EXISTS balancer_pools;')
14
15 cursor.execute('DROP TABLE IF EXISTS balancer_events;')
16 cursor.execute("""
17 CREATE TABLE IF NOT EXISTS balancer_events (
18 tx_hash VARCHAR[42] NOT NULL,
19 log_index INTEGER NOT NULL,
20 address VARCHAR[42] NOT NULL,
21 timestamp INTEGER NOT NULL,
22 type TEXT NOT NULL,
23 pool_address_token TEXT NOT NULL,
24 lp_amount TEXT NOT NULL,
25 usd_value TEXT NOT NULL,
26 amount0 TEXT NOT NULL,
27 amount1 TEXT NOT NULL,
28 amount2 TEXT,
29 amount3 TEXT,
30 amount4 TEXT,
31 amount5 TEXT,
32 amount6 TEXT,
33 amount7 TEXT,
34 FOREIGN KEY (pool_address_token) REFERENCES assets(identifier) ON UPDATE CASCADE,
35 PRIMARY KEY (tx_hash, log_index)
36 );
37 """)
38 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "balancer_events%";')
39
40 cursor.execute('DROP TABLE IF EXISTS amm_swaps;')
41 cursor.execute("""
42 CREATE TABLE IF NOT EXISTS amm_swaps (
43 tx_hash VARCHAR[42] NOT NULL,
44 log_index INTEGER NOT NULL,
45 address VARCHAR[42] NOT NULL,
46 from_address VARCHAR[42] NOT NULL,
47 to_address VARCHAR[42] NOT NULL,
48 timestamp INTEGER NOT NULL,
49 location CHAR(1) NOT NULL DEFAULT('A') REFERENCES location(location),
50 token0_identifier TEXT NOT NULL,
51 token1_identifier TEXT NOT NULL,
52 amount0_in TEXT,
53 amount1_in TEXT,
54 amount0_out TEXT,
55 amount1_out TEXT,
56 FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
57 FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
58 PRIMARY KEY (tx_hash, log_index)
59 );""")
60 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "balancer_trades%";')
61 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "uniswap_trades%";')
62
63 cursor.execute('DROP TABLE IF EXISTS uniswap_events;')
64 cursor.execute("""
65 CREATE TABLE IF NOT EXISTS uniswap_events (
66 tx_hash VARCHAR[42] NOT NULL,
67 log_index INTEGER NOT NULL,
68 address VARCHAR[42] NOT NULL,
69 timestamp INTEGER NOT NULL,
70 type TEXT NOT NULL,
71 pool_address VARCHAR[42] NOT NULL,
72 token0_identifier TEXT NOT NULL,
73 token1_identifier TEXT NOT NULL,
74 amount0 TEXT,
75 amount1 TEXT,
76 usd_price TEXT,
77 lp_amount TEXT,
78 FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
79 FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,
80 PRIMARY KEY (tx_hash, log_index)
81 );""")
82 cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "uniswap_events%";')
83
84 db.conn.commit()
85
[end of rotkehlchen/db/upgrades/v26_v27.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rotkehlchen/db/upgrades/v26_v27.py b/rotkehlchen/db/upgrades/v26_v27.py
--- a/rotkehlchen/db/upgrades/v26_v27.py
+++ b/rotkehlchen/db/upgrades/v26_v27.py
@@ -10,8 +10,6 @@
- Deletes and recreates the tables that were changed after removing UnknownEthereumToken
"""
cursor = db.conn.cursor()
- cursor.execute('DROP TABLE IF EXISTS balancer_pools;')
-
cursor.execute('DROP TABLE IF EXISTS balancer_events;')
cursor.execute("""
CREATE TABLE IF NOT EXISTS balancer_events (
@@ -35,6 +33,7 @@
PRIMARY KEY (tx_hash, log_index)
);
""")
+ cursor.execute('DROP TABLE IF EXISTS balancer_pools;')
cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE "balancer_events%";')
cursor.execute('DROP TABLE IF EXISTS amm_swaps;')
| {"golden_diff": "diff --git a/rotkehlchen/db/upgrades/v26_v27.py b/rotkehlchen/db/upgrades/v26_v27.py\n--- a/rotkehlchen/db/upgrades/v26_v27.py\n+++ b/rotkehlchen/db/upgrades/v26_v27.py\n@@ -10,8 +10,6 @@\n - Deletes and recreates the tables that were changed after removing UnknownEthereumToken\n \"\"\"\n cursor = db.conn.cursor()\n- cursor.execute('DROP TABLE IF EXISTS balancer_pools;')\n-\n cursor.execute('DROP TABLE IF EXISTS balancer_events;')\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS balancer_events (\n@@ -35,6 +33,7 @@\n PRIMARY KEY (tx_hash, log_index)\n );\n \"\"\")\n+ cursor.execute('DROP TABLE IF EXISTS balancer_pools;')\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_events%\";')\n \n cursor.execute('DROP TABLE IF EXISTS amm_swaps;')\n", "issue": "Upgrading DB from v26->v27 can fail if user balancer LP events stored in their DB\n## Problem Definition\r\n\r\nA user who upgraded from v1.16.2 to v1.18.1 notified us that they saw a DB upgrade failure from v26->v27. Which means the app versions v1.17.2 to v1.18.0.\r\n\r\nTurns out that for specific user DBs who have had some Balancer LP events detected and had both the balancer events and the balancer pools DB table populated the DB upgrade would fail, since the upgrade deletes the balancer pools table first, hence possibly hitting a constraint.\r\n\r\n## Workaround\r\n\r\nWorkaround is rather easy. Download v1.17.0-v1.17.2, since that can open v26 DB, purge all uniswap and balancer data, and then open with v1.18.XX.\r\n\r\n## Task\r\n\r\nFix the upgrade so that this does not occur even for this special case of users.\n", "before_files": [{"content": "from typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from rotkehlchen.db.dbhandler import DBHandler\n\n\ndef upgrade_v26_to_v27(db: 'DBHandler') -> None:\n \"\"\"Upgrades the DB from v26 to v27\n\n - Deletes and recreates the tables that were changed after removing UnknownEthereumToken\n \"\"\"\n cursor = db.conn.cursor()\n cursor.execute('DROP TABLE IF EXISTS balancer_pools;')\n\n cursor.execute('DROP TABLE IF EXISTS balancer_events;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS balancer_events (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n type TEXT NOT NULL,\n pool_address_token TEXT NOT NULL,\n lp_amount TEXT NOT NULL,\n usd_value TEXT NOT NULL,\n amount0 TEXT NOT NULL,\n amount1 TEXT NOT NULL,\n amount2 TEXT,\n amount3 TEXT,\n amount4 TEXT,\n amount5 TEXT,\n amount6 TEXT,\n amount7 TEXT,\n FOREIGN KEY (pool_address_token) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\n\"\"\")\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_events%\";')\n\n cursor.execute('DROP TABLE IF EXISTS amm_swaps;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS amm_swaps (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n from_address VARCHAR[42] NOT NULL,\n to_address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n location CHAR(1) NOT NULL DEFAULT('A') REFERENCES location(location),\n token0_identifier TEXT NOT NULL,\n token1_identifier TEXT NOT NULL,\n amount0_in TEXT,\n amount1_in TEXT,\n amount0_out TEXT,\n amount1_out TEXT,\n FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\"\"\")\n 
cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"balancer_trades%\";')\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"uniswap_trades%\";')\n\n cursor.execute('DROP TABLE IF EXISTS uniswap_events;')\n cursor.execute(\"\"\"\nCREATE TABLE IF NOT EXISTS uniswap_events (\n tx_hash VARCHAR[42] NOT NULL,\n log_index INTEGER NOT NULL,\n address VARCHAR[42] NOT NULL,\n timestamp INTEGER NOT NULL,\n type TEXT NOT NULL,\n pool_address VARCHAR[42] NOT NULL,\n token0_identifier TEXT NOT NULL,\n token1_identifier TEXT NOT NULL,\n amount0 TEXT,\n amount1 TEXT,\n usd_price TEXT,\n lp_amount TEXT,\n FOREIGN KEY(token0_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n FOREIGN KEY(token1_identifier) REFERENCES assets(identifier) ON UPDATE CASCADE,\n PRIMARY KEY (tx_hash, log_index)\n);\"\"\")\n cursor.execute('DELETE FROM used_query_ranges WHERE name LIKE \"uniswap_events%\";')\n\n db.conn.commit()\n", "path": "rotkehlchen/db/upgrades/v26_v27.py"}]} | 1,630 | 229 |
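The constraint failure described in the issue above is the classic "drop the referenced table while dependent rows still exist" problem. A minimal standalone sketch of that failure mode and of the reordering fix, using plain `sqlite3` with made-up table names (this is not rotkehlchen's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
con.execute("PRAGMA foreign_keys = ON;")
con.execute("CREATE TABLE pools (id TEXT PRIMARY KEY);")
con.execute(
    "CREATE TABLE events ("
    " id INTEGER PRIMARY KEY,"
    " pool_id TEXT REFERENCES pools(id)"
    ");"
)
con.execute("INSERT INTO pools VALUES ('0xpool');")
con.execute("INSERT INTO events VALUES (1, '0xpool');")

try:
    # Dropping the referenced table first fails while dependent rows exist.
    con.execute("DROP TABLE pools;")
except sqlite3.IntegrityError as err:
    print("drop failed:", err)

# Dropping/recreating the referencing table first avoids the violation,
# which is essentially what the reordered upgrade does.
con.execute("DROP TABLE events;")
con.execute("DROP TABLE pools;")
print("dropped cleanly after reordering")
```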
gh_patches_debug_11800 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-2017 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Backwards incompatible change to MSE for pixelwise regression
## 🐛 Bug
#1937 introduces an unintended consequence: pixelwise regression is no longer supported.
### To Reproduce
Run the following script:
```python
import torch
import torchmetrics
B = 4
H = W = 3
x = torch.rand(B, H, W)
y = torch.rand(B, H, W)
torchmetrics.functional.mean_squared_error(x, y)
```
This results in the following error msg:
```
Traceback (most recent call last):
File "test.py", line 10, in <module>
torchmetrics.functional.mean_squared_error(x, y, num_outputs=H * W)
File "lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py", line 84, in mean_squared_error
sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)
File "lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py", line 35, in _mean_squared_error_update
_check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)
File "lib/python3.10/site-packages/torchmetrics/functional/regression/utils.py", line 31, in _check_data_shape_to_num_outputs
raise ValueError(
ValueError: Expected both predictions and target to be either 1- or 2-dimensional tensors, but got 3 and 3.
```
### Expected behavior
I would expect the MSE metrics to support pixelwise regression (predicting a single regression value for each pixel in an image). The above script works fine with torchmetrics 1.0.3.
### Environment
- TorchMetrics version: 1.1.0, spack
- Python & PyTorch Version: 3.10.10, 2.1.0
- Any other relevant information such as OS: macOS
### Additional context
@SkafteNicki @Borda @justusschock
</issue>
<code>
[start of src/torchmetrics/functional/regression/mse.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Tuple, Union
15
16 import torch
17 from torch import Tensor
18
19 from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs
20 from torchmetrics.utilities.checks import _check_same_shape
21
22
23 def _mean_squared_error_update(preds: Tensor, target: Tensor, num_outputs: int) -> Tuple[Tensor, int]:
24 """Update and returns variables required to compute Mean Squared Error.
25
26 Check for same shape of input tensors.
27
28 Args:
29 preds: Predicted tensor
30 target: Ground truth tensor
31 num_outputs: Number of outputs in multioutput setting
32
33 """
34 _check_same_shape(preds, target)
35 _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)
36 if num_outputs == 1:
37 preds = preds.view(-1)
38 target = target.view(-1)
39 diff = preds - target
40 sum_squared_error = torch.sum(diff * diff, dim=0)
41 n_obs = target.shape[0]
42 return sum_squared_error, n_obs
43
44
45 def _mean_squared_error_compute(sum_squared_error: Tensor, n_obs: Union[int, Tensor], squared: bool = True) -> Tensor:
46 """Compute Mean Squared Error.
47
48 Args:
49 sum_squared_error: Sum of square of errors over all observations
50 n_obs: Number of predictions or observations
51 squared: Returns RMSE value if set to False.
52
53 Example:
54 >>> preds = torch.tensor([0., 1, 2, 3])
55 >>> target = torch.tensor([0., 1, 2, 2])
56 >>> sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=1)
57 >>> _mean_squared_error_compute(sum_squared_error, n_obs)
58 tensor(0.2500)
59
60 """
61 return sum_squared_error / n_obs if squared else torch.sqrt(sum_squared_error / n_obs)
62
63
64 def mean_squared_error(preds: Tensor, target: Tensor, squared: bool = True, num_outputs: int = 1) -> Tensor:
65 """Compute mean squared error.
66
67 Args:
68 preds: estimated labels
69 target: ground truth labels
70 squared: returns RMSE value if set to False
71 num_outputs: Number of outputs in multioutput setting
72
73 Return:
74 Tensor with MSE
75
76 Example:
77 >>> from torchmetrics.functional.regression import mean_squared_error
78 >>> x = torch.tensor([0., 1, 2, 3])
79 >>> y = torch.tensor([0., 1, 2, 2])
80 >>> mean_squared_error(x, y)
81 tensor(0.2500)
82
83 """
84 sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)
85 return _mean_squared_error_compute(sum_squared_error, n_obs, squared=squared)
86
[end of src/torchmetrics/functional/regression/mse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/functional/regression/mse.py b/src/torchmetrics/functional/regression/mse.py
--- a/src/torchmetrics/functional/regression/mse.py
+++ b/src/torchmetrics/functional/regression/mse.py
@@ -16,7 +16,6 @@
import torch
from torch import Tensor
-from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs
from torchmetrics.utilities.checks import _check_same_shape
@@ -32,7 +31,6 @@
"""
_check_same_shape(preds, target)
- _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)
if num_outputs == 1:
preds = preds.view(-1)
target = target.view(-1)
| {"golden_diff": "diff --git a/src/torchmetrics/functional/regression/mse.py b/src/torchmetrics/functional/regression/mse.py\n--- a/src/torchmetrics/functional/regression/mse.py\n+++ b/src/torchmetrics/functional/regression/mse.py\n@@ -16,7 +16,6 @@\n import torch\n from torch import Tensor\n \n-from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs\n from torchmetrics.utilities.checks import _check_same_shape\n \n \n@@ -32,7 +31,6 @@\n \n \"\"\"\n _check_same_shape(preds, target)\n- _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)\n if num_outputs == 1:\n preds = preds.view(-1)\n target = target.view(-1)\n", "issue": "Backwards incompatible change to MSE for pixelwise regression\n## \ud83d\udc1b Bug\r\n\r\n#1937 introduces an unintended consequence: pixelwise regression is no longer supported.\r\n\r\n### To Reproduce\r\n\r\nRun the following script:\r\n```python\r\nimport torch\r\nimport torchmetrics\r\n\r\nB = 4\r\nH = W = 3\r\n\r\nx = torch.rand(B, H, W)\r\ny = torch.rand(B, H, W)\r\n\r\ntorchmetrics.functional.mean_squared_error(x, y)\r\n```\r\nThis results in the following error msg:\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 10, in <module>\r\n torchmetrics.functional.mean_squared_error(x, y, num_outputs=H * W)\r\n File \"lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py\", line 84, in mean_squared_error\r\n sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)\r\n File \"lib/python3.10/site-packages/torchmetrics/functional/regression/mse.py\", line 35, in _mean_squared_error_update\r\n _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)\r\n File \"lib/python3.10/site-packages/torchmetrics/functional/regression/utils.py\", line 31, in _check_data_shape_to_num_outputs\r\n raise ValueError(\r\nValueError: Expected both predictions and target to be either 1- or 2-dimensional tensors, but got 3 and 3.\r\n```\r\n\r\n### Expected behavior\r\n\r\nI would expect the MSE metrics to support pixelwise regression (predicting a single regression value for each pixel in an image). 
The above script works fine with torchmetrics 1.0.3.\r\n\r\n### Environment\r\n\r\n- TorchMetrics version: 1.1.0, spack\r\n- Python & PyTorch Version: 3.10.10, 2.1.0\r\n- Any other relevant information such as OS: macOS\r\n\r\n### Additional context\r\n\r\n@SkafteNicki @Borda @justusschock \n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Tuple, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs\nfrom torchmetrics.utilities.checks import _check_same_shape\n\n\ndef _mean_squared_error_update(preds: Tensor, target: Tensor, num_outputs: int) -> Tuple[Tensor, int]:\n \"\"\"Update and returns variables required to compute Mean Squared Error.\n\n Check for same shape of input tensors.\n\n Args:\n preds: Predicted tensor\n target: Ground truth tensor\n num_outputs: Number of outputs in multioutput setting\n\n \"\"\"\n _check_same_shape(preds, target)\n _check_data_shape_to_num_outputs(preds, target, num_outputs, allow_1d_reshape=True)\n if num_outputs == 1:\n preds = preds.view(-1)\n target = target.view(-1)\n diff = preds - target\n sum_squared_error = torch.sum(diff * diff, dim=0)\n n_obs = target.shape[0]\n return sum_squared_error, n_obs\n\n\ndef _mean_squared_error_compute(sum_squared_error: Tensor, n_obs: Union[int, Tensor], squared: bool = True) -> Tensor:\n \"\"\"Compute Mean Squared Error.\n\n Args:\n sum_squared_error: Sum of square of errors over all observations\n n_obs: Number of predictions or observations\n squared: Returns RMSE value if set to False.\n\n Example:\n >>> preds = torch.tensor([0., 1, 2, 3])\n >>> target = torch.tensor([0., 1, 2, 2])\n >>> sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=1)\n >>> _mean_squared_error_compute(sum_squared_error, n_obs)\n tensor(0.2500)\n\n \"\"\"\n return sum_squared_error / n_obs if squared else torch.sqrt(sum_squared_error / n_obs)\n\n\ndef mean_squared_error(preds: Tensor, target: Tensor, squared: bool = True, num_outputs: int = 1) -> Tensor:\n \"\"\"Compute mean squared error.\n\n Args:\n preds: estimated labels\n target: ground truth labels\n squared: returns RMSE value if set to False\n num_outputs: Number of outputs in multioutput setting\n\n Return:\n Tensor with MSE\n\n Example:\n >>> from torchmetrics.functional.regression import mean_squared_error\n >>> x = torch.tensor([0., 1, 2, 3])\n >>> y = torch.tensor([0., 1, 2, 2])\n >>> mean_squared_error(x, y)\n tensor(0.2500)\n\n \"\"\"\n sum_squared_error, n_obs = _mean_squared_error_update(preds, target, num_outputs=num_outputs)\n return _mean_squared_error_compute(sum_squared_error, n_obs, squared=squared)\n", "path": "src/torchmetrics/functional/regression/mse.py"}]} | 1,909 | 180 |
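With the shape check removed as in the diff above, the reproduction script from the issue should run again. A quick sanity check, assuming a patched torchmetrics build is installed:

```python
import torch
from torchmetrics.functional import mean_squared_error

B, H, W = 4, 3, 3
preds = torch.rand(B, H, W)
target = torch.rand(B, H, W)

# With num_outputs=1 (the default) both tensors are flattened internally,
# so pixelwise shapes such as (B, H, W) reduce to a single scalar MSE again.
mse = mean_squared_error(preds, target)
print(mse.shape)  # torch.Size([]) -- a scalar tensor
```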
gh_patches_debug_7053 | rasdani/github-patches | git_diff | zulip__zulip-21237 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs: make links equally browsable on both GitHub and ReadTheDocs
Once upstream bug https://github.com/readthedocs/recommonmark/issues/179 is fixed, we can replace the `.html` part in links of the form `file_name.html#anchor` with `.md`.
This is a followup to https://github.com/zulip/zulip/pull/13232.
</issue>
<code>
[start of version.py]
1 import os
2
3 ZULIP_VERSION = "5.0-dev+git"
4
5 # Add information on number of commits and commit hash to version, if available
6 zulip_git_version_file = os.path.join(
7 os.path.dirname(os.path.abspath(__file__)), "zulip-git-version"
8 )
9 lines = [ZULIP_VERSION, ""]
10 if os.path.exists(zulip_git_version_file):
11 with open(zulip_git_version_file) as f:
12 lines = f.readlines() + ["", ""]
13 ZULIP_VERSION = lines.pop(0).strip()
14 ZULIP_MERGE_BASE = lines.pop(0).strip()
15
16 LATEST_MAJOR_VERSION = "4.0"
17 LATEST_RELEASE_VERSION = "4.10"
18 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.com/2021/05/13/zulip-4-0-released/"
19
20 # Versions of the desktop app below DESKTOP_MINIMUM_VERSION will be
21 # prevented from connecting to the Zulip server. Versions above
22 # DESKTOP_MINIMUM_VERSION but below DESKTOP_WARNING_VERSION will have
23 # a banner at the top of the page asking the user to upgrade.
24 DESKTOP_MINIMUM_VERSION = "5.2.0"
25 DESKTOP_WARNING_VERSION = "5.4.3"
26
27 # Bump the API_FEATURE_LEVEL whenever an API change is made
28 # that clients might want to condition on. If we forget at
29 # the time we make the change, then bump it later as soon
30 # as we notice; clients using API_FEATURE_LEVEL will just not
31 # use the new feature/API until the bump.
32 #
33 # Changes should be accompanied by documentation explaining what the
34 # new level means in templates/zerver/api/changelog.md, as well as
35 # "**Changes**" entries in the endpoint's documentation in `zulip.yaml`.
36 API_FEATURE_LEVEL = 117
37
38 # Bump the minor PROVISION_VERSION to indicate that folks should provision
39 # only when going from an old version of the code to a newer version. Bump
40 # the major version to indicate that folks should provision in both
41 # directions.
42
43 # Typically,
44 # * adding a dependency only requires a minor version bump;
45 # * removing a dependency requires a major version bump;
46 # * upgrading a dependency requires a major version bump, unless the
47 # upgraded dependency is backwards compatible with all of our
48 # historical commits sharing the same major version, in which case a
49 # minor version bump suffices.
50
51 PROVISION_VERSION = "179.0"
52
[end of version.py]
[start of docs/conf.py]
1 # For documentation on Sphinx configuration options, see:
2 # https://www.sphinx-doc.org/en/master/usage/configuration.html
3 # https://myst-parser.readthedocs.io/en/latest/sphinx/reference.html
4 # https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html
5
6 import os
7 import sys
8 from typing import Any
9
10 sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
11 from version import LATEST_RELEASE_VERSION, ZULIP_VERSION
12
13 on_rtd = os.environ.get("READTHEDOCS") == "True"
14
15 # General configuration
16
17 extensions = [
18 "myst_parser",
19 "sphinx_rtd_theme",
20 ]
21 templates_path = ["_templates"]
22 project = "Zulip"
23 copyright = "2012–2015 Dropbox, Inc., 2015–2021 Kandra Labs, Inc., and contributors"
24 author = "The Zulip Team"
25 version = ZULIP_VERSION
26 release = ZULIP_VERSION
27 exclude_patterns = ["_build", "README.md"]
28 suppress_warnings = [
29 "myst.header",
30 ]
31 pygments_style = "sphinx"
32
33 # Options for Markdown parser
34
35 myst_enable_extensions = [
36 "colon_fence",
37 "substitution",
38 ]
39 myst_substitutions = {
40 "LATEST_RELEASE_VERSION": LATEST_RELEASE_VERSION,
41 }
42
43 # Options for HTML output
44
45 html_theme = "sphinx_rtd_theme"
46 html_theme_options = {
47 "collapse_navigation": not on_rtd, # makes local builds much faster
48 "logo_only": True,
49 }
50 html_logo = "images/zulip-logo.svg"
51 html_static_path = ["_static"]
52
53
54 def setup(app: Any) -> None:
55 # overrides for wide tables in RTD theme
56 app.add_css_file("theme_overrides.css") # path relative to _static
57
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -36,6 +36,7 @@
"colon_fence",
"substitution",
]
+myst_heading_anchors = 6
myst_substitutions = {
"LATEST_RELEASE_VERSION": LATEST_RELEASE_VERSION,
}
diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -48,4 +48,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = "179.0"
+PROVISION_VERSION = "180.0"
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -36,6 +36,7 @@\n \"colon_fence\",\n \"substitution\",\n ]\n+myst_heading_anchors = 6\n myst_substitutions = {\n \"LATEST_RELEASE_VERSION\": LATEST_RELEASE_VERSION,\n }\ndiff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -48,4 +48,4 @@\n # historical commits sharing the same major version, in which case a\n # minor version bump suffices.\n \n-PROVISION_VERSION = \"179.0\"\n+PROVISION_VERSION = \"180.0\"\n", "issue": "docs: make links equally browsable on both GitHub and ReadTheDocs\nOnce upstream bug https://github.com/readthedocs/recommonmark/issues/179 is fixed, we can replace the `.html` part in links of the form `file_name.html#anchor` with `.md`.\r\n\r\nThis is a followup to https://github.com/zulip/zulip/pull/13232.\n", "before_files": [{"content": "import os\n\nZULIP_VERSION = \"5.0-dev+git\"\n\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"zulip-git-version\"\n)\nlines = [ZULIP_VERSION, \"\"]\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n lines = f.readlines() + [\"\", \"\"]\nZULIP_VERSION = lines.pop(0).strip()\nZULIP_MERGE_BASE = lines.pop(0).strip()\n\nLATEST_MAJOR_VERSION = \"4.0\"\nLATEST_RELEASE_VERSION = \"4.10\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.com/2021/05/13/zulip-4-0-released/\"\n\n# Versions of the desktop app below DESKTOP_MINIMUM_VERSION will be\n# prevented from connecting to the Zulip server. Versions above\n# DESKTOP_MINIMUM_VERSION but below DESKTOP_WARNING_VERSION will have\n# a banner at the top of the page asking the user to upgrade.\nDESKTOP_MINIMUM_VERSION = \"5.2.0\"\nDESKTOP_WARNING_VERSION = \"5.4.3\"\n\n# Bump the API_FEATURE_LEVEL whenever an API change is made\n# that clients might want to condition on. If we forget at\n# the time we make the change, then bump it later as soon\n# as we notice; clients using API_FEATURE_LEVEL will just not\n# use the new feature/API until the bump.\n#\n# Changes should be accompanied by documentation explaining what the\n# new level means in templates/zerver/api/changelog.md, as well as\n# \"**Changes**\" entries in the endpoint's documentation in `zulip.yaml`.\nAPI_FEATURE_LEVEL = 117\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. 
Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = \"179.0\"\n", "path": "version.py"}, {"content": "# For documentation on Sphinx configuration options, see:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n# https://myst-parser.readthedocs.io/en/latest/sphinx/reference.html\n# https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html\n\nimport os\nimport sys\nfrom typing import Any\n\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\")))\nfrom version import LATEST_RELEASE_VERSION, ZULIP_VERSION\n\non_rtd = os.environ.get(\"READTHEDOCS\") == \"True\"\n\n# General configuration\n\nextensions = [\n \"myst_parser\",\n \"sphinx_rtd_theme\",\n]\ntemplates_path = [\"_templates\"]\nproject = \"Zulip\"\ncopyright = \"2012\u20132015 Dropbox, Inc., 2015\u20132021 Kandra Labs, Inc., and contributors\"\nauthor = \"The Zulip Team\"\nversion = ZULIP_VERSION\nrelease = ZULIP_VERSION\nexclude_patterns = [\"_build\", \"README.md\"]\nsuppress_warnings = [\n \"myst.header\",\n]\npygments_style = \"sphinx\"\n\n# Options for Markdown parser\n\nmyst_enable_extensions = [\n \"colon_fence\",\n \"substitution\",\n]\nmyst_substitutions = {\n \"LATEST_RELEASE_VERSION\": LATEST_RELEASE_VERSION,\n}\n\n# Options for HTML output\n\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_options = {\n \"collapse_navigation\": not on_rtd, # makes local builds much faster\n \"logo_only\": True,\n}\nhtml_logo = \"images/zulip-logo.svg\"\nhtml_static_path = [\"_static\"]\n\n\ndef setup(app: Any) -> None:\n # overrides for wide tables in RTD theme\n app.add_css_file(\"theme_overrides.css\") # path relative to _static\n", "path": "docs/conf.py"}]} | 1,781 | 155 |
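The one functional line the patch adds to `docs/conf.py` is `myst_heading_anchors = 6`, which makes MyST generate GitHub-style anchors for headings, so cross-document links can target `file.md#heading` and resolve both on GitHub and on ReadTheDocs. A sketch of the resulting configuration (the link path below is illustrative, not an actual Zulip docs path):

```python
# docs/conf.py (excerpt): generate GitHub-style slugs for headings up to level 6.
myst_heading_anchors = 6

# A Markdown cross-reference can then point at another file's heading directly:
#
#   See the [deployment notes](production/deploy.md#requirements) for details.
#
# and the same .md#anchor form works when the file is browsed on GitHub.
```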
gh_patches_debug_22323 | rasdani/github-patches | git_diff | scikit-hep__pyhf-937 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document simplemodels API
# Description
In discussion today with @coolalexzb, I realized that the [`pyhf.simplemodels`](https://github.com/scikit-hep/pyhf/blob/79984be837ef6e53bdd12a82163c34d47d507dba/src/pyhf/simplemodels.py) API is not documented in our docs. Even though this isn't something we want people to really use, we still show it in our examples and so it needs documentation.
</issue>
<code>
[start of src/pyhf/simplemodels.py]
1 from . import Model
2
3
4 def hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):
5 spec = {
6 'channels': [
7 {
8 'name': 'singlechannel',
9 'samples': [
10 {
11 'name': 'signal',
12 'data': signal_data,
13 'modifiers': [
14 {'name': 'mu', 'type': 'normfactor', 'data': None}
15 ],
16 },
17 {
18 'name': 'background',
19 'data': bkg_data,
20 'modifiers': [
21 {
22 'name': 'uncorr_bkguncrt',
23 'type': 'shapesys',
24 'data': bkg_uncerts,
25 }
26 ],
27 },
28 ],
29 }
30 ]
31 }
32 return Model(spec, batch_size=batch_size)
33
[end of src/pyhf/simplemodels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pyhf/simplemodels.py b/src/pyhf/simplemodels.py
--- a/src/pyhf/simplemodels.py
+++ b/src/pyhf/simplemodels.py
@@ -2,6 +2,38 @@
def hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):
+ """
+ Construct a simple single channel :class:`~pyhf.pdf.Model` with a
+ :class:`~pyhf.modifiers.shapesys` modifier representing an uncorrelated
+ background uncertainty.
+
+ Example:
+ >>> import pyhf
+ >>> pyhf.set_backend("numpy")
+ >>> model = pyhf.simplemodels.hepdata_like(
+ ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
+ ... )
+ >>> model.schema
+ 'model.json'
+ >>> model.config.channels
+ ['singlechannel']
+ >>> model.config.samples
+ ['background', 'signal']
+ >>> model.config.parameters
+ ['mu', 'uncorr_bkguncrt']
+ >>> model.expected_data(model.config.suggested_init())
+ array([ 62. , 63. , 277.77777778, 55.18367347])
+
+ Args:
+ signal_data (`list`): The data in the signal sample
+ bkg_data (`list`): The data in the background sample
+ bkg_uncerts (`list`): The statistical uncertainty on the background sample counts
+ batch_size (`None` or `int`): Number of simultaneous (batched) Models to compute
+
+ Returns:
+ ~pyhf.pdf.Model: The statistical model adhering to the :obj:`model.json` schema
+
+ """
spec = {
'channels': [
{
| {"golden_diff": "diff --git a/src/pyhf/simplemodels.py b/src/pyhf/simplemodels.py\n--- a/src/pyhf/simplemodels.py\n+++ b/src/pyhf/simplemodels.py\n@@ -2,6 +2,38 @@\n \n \n def hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):\n+ \"\"\"\n+ Construct a simple single channel :class:`~pyhf.pdf.Model` with a\n+ :class:`~pyhf.modifiers.shapesys` modifier representing an uncorrelated\n+ background uncertainty.\n+\n+ Example:\n+ >>> import pyhf\n+ >>> pyhf.set_backend(\"numpy\")\n+ >>> model = pyhf.simplemodels.hepdata_like(\n+ ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n+ ... )\n+ >>> model.schema\n+ 'model.json'\n+ >>> model.config.channels\n+ ['singlechannel']\n+ >>> model.config.samples\n+ ['background', 'signal']\n+ >>> model.config.parameters\n+ ['mu', 'uncorr_bkguncrt']\n+ >>> model.expected_data(model.config.suggested_init())\n+ array([ 62. , 63. , 277.77777778, 55.18367347])\n+\n+ Args:\n+ signal_data (`list`): The data in the signal sample\n+ bkg_data (`list`): The data in the background sample\n+ bkg_uncerts (`list`): The statistical uncertainty on the background sample counts\n+ batch_size (`None` or `int`): Number of simultaneous (batched) Models to compute\n+\n+ Returns:\n+ ~pyhf.pdf.Model: The statistical model adhering to the :obj:`model.json` schema\n+\n+ \"\"\"\n spec = {\n 'channels': [\n {\n", "issue": "Document simplemodels API\n# Description\r\n\r\nIn discussion today with @coolalexzb, I realized that the [`pyhf.simplemodels`](https://github.com/scikit-hep/pyhf/blob/79984be837ef6e53bdd12a82163c34d47d507dba/src/pyhf/simplemodels.py) API is not documented in our docs. Even thought this isn't something we want people to really use, we still show it in our examples and so it needs documentation.\n", "before_files": [{"content": "from . import Model\n\n\ndef hepdata_like(signal_data, bkg_data, bkg_uncerts, batch_size=None):\n spec = {\n 'channels': [\n {\n 'name': 'singlechannel',\n 'samples': [\n {\n 'name': 'signal',\n 'data': signal_data,\n 'modifiers': [\n {'name': 'mu', 'type': 'normfactor', 'data': None}\n ],\n },\n {\n 'name': 'background',\n 'data': bkg_data,\n 'modifiers': [\n {\n 'name': 'uncorr_bkguncrt',\n 'type': 'shapesys',\n 'data': bkg_uncerts,\n }\n ],\n },\n ],\n }\n ]\n }\n return Model(spec, batch_size=batch_size)\n", "path": "src/pyhf/simplemodels.py"}]} | 881 | 438 |
gh_patches_debug_22770 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-334 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error log still occurs when tracer is disabled (Django)
The tracer is logging the following error when disabled:
> 2017-07-05 12:54:36,552:[none]:[ddtrace.writer:134]:ERROR cannot send services: [Errno 111] Connection refused
This is occurring when integrated with Django with the following configuration:
```python
DATADOG_TRACE = {
'ENABLED': False
}
```
From reading the [documentation](http://pypi.datadoghq.com/trace/docs/#module-ddtrace.contrib.django) which states:
> ENABLED (default: not django_settings.DEBUG): defines if the tracer is enabled or not. If set to false, the code is still instrumented but no spans are sent to the trace agent. This setting cannot be changed at runtime and a restart is required. By default the tracer is disabled when in DEBUG mode, enabled otherwise.
It seems this log should not occur. If no spans are sent to the trace agent then presumably a connection should not be established?
Package Info
------------------
> datadog==0.15.0
> ddtrace==0.8.5
</issue>
<code>
[start of ddtrace/contrib/django/apps.py]
1 import logging
2
3 # 3rd party
4 from django.apps import AppConfig
5
6 # project
7 from .db import patch_db
8 from .conf import settings
9 from .cache import patch_cache
10 from .templates import patch_template
11 from .middleware import insert_exception_middleware
12
13 from ...ext import AppTypes
14
15
16 log = logging.getLogger(__name__)
17
18
19 class TracerConfig(AppConfig):
20 name = 'ddtrace.contrib.django'
21 label = 'datadog_django'
22
23 def ready(self):
24 """
25 Ready is called as soon as the registry is fully populated.
26 Tracing capabilities must be enabled in this function so that
27 all Django internals are properly configured.
28 """
29 tracer = settings.TRACER
30
31 if settings.TAGS:
32 tracer.set_tags(settings.TAGS)
33
34 # define the service details
35 tracer.set_service_info(
36 app='django',
37 app_type=AppTypes.web,
38 service=settings.DEFAULT_SERVICE,
39 )
40
41 # configure the tracer instance
42 # TODO[manu]: we may use configure() but because it creates a new
43 # AgentWriter, it breaks all tests. The configure() behavior must
44 # be changed to use it in this integration
45 tracer.enabled = settings.ENABLED
46 tracer.writer.api.hostname = settings.AGENT_HOSTNAME
47 tracer.writer.api.port = settings.AGENT_PORT
48
49 if settings.AUTO_INSTRUMENT:
50 # trace Django internals
51 insert_exception_middleware()
52 try:
53 patch_db(tracer)
54 except Exception:
55 log.exception('error patching Django database connections')
56
57 try:
58 patch_template(tracer)
59 except Exception:
60 log.exception('error patching Django template rendering')
61
62 try:
63 patch_cache(tracer)
64 except Exception:
65 log.exception('error patching Django cache')
66
[end of ddtrace/contrib/django/apps.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/django/apps.py b/ddtrace/contrib/django/apps.py
--- a/ddtrace/contrib/django/apps.py
+++ b/ddtrace/contrib/django/apps.py
@@ -31,13 +31,6 @@
if settings.TAGS:
tracer.set_tags(settings.TAGS)
- # define the service details
- tracer.set_service_info(
- app='django',
- app_type=AppTypes.web,
- service=settings.DEFAULT_SERVICE,
- )
-
# configure the tracer instance
# TODO[manu]: we may use configure() but because it creates a new
# AgentWriter, it breaks all tests. The configure() behavior must
@@ -46,6 +39,13 @@
tracer.writer.api.hostname = settings.AGENT_HOSTNAME
tracer.writer.api.port = settings.AGENT_PORT
+ # define the service details
+ tracer.set_service_info(
+ app='django',
+ app_type=AppTypes.web,
+ service=settings.DEFAULT_SERVICE,
+ )
+
if settings.AUTO_INSTRUMENT:
# trace Django internals
insert_exception_middleware()
| {"golden_diff": "diff --git a/ddtrace/contrib/django/apps.py b/ddtrace/contrib/django/apps.py\n--- a/ddtrace/contrib/django/apps.py\n+++ b/ddtrace/contrib/django/apps.py\n@@ -31,13 +31,6 @@\n if settings.TAGS:\n tracer.set_tags(settings.TAGS)\n \n- # define the service details\n- tracer.set_service_info(\n- app='django',\n- app_type=AppTypes.web,\n- service=settings.DEFAULT_SERVICE,\n- )\n-\n # configure the tracer instance\n # TODO[manu]: we may use configure() but because it creates a new\n # AgentWriter, it breaks all tests. The configure() behavior must\n@@ -46,6 +39,13 @@\n tracer.writer.api.hostname = settings.AGENT_HOSTNAME\n tracer.writer.api.port = settings.AGENT_PORT\n \n+ # define the service details\n+ tracer.set_service_info(\n+ app='django',\n+ app_type=AppTypes.web,\n+ service=settings.DEFAULT_SERVICE,\n+ )\n+\n if settings.AUTO_INSTRUMENT:\n # trace Django internals\n insert_exception_middleware()\n", "issue": "Error log still occurs when tracer is disabled (Django)\nThe tracer is logging the following error when disabled:\r\n\r\n> 2017-07-05 12:54:36,552:[none]:[ddtrace.writer:134]:ERROR cannot send services: [Errno 111] Connection refused\r\n\r\nThis is occurring when integrated with Django with the following configuration:\r\n\r\n```python\r\nDATADOG_TRACE = {\r\n 'ENABLED': False\r\n}\r\n```\r\nFrom reading the [documentation](http://pypi.datadoghq.com/trace/docs/#module-ddtrace.contrib.django) which states:\r\n> ENABLED (default: not django_settings.DEBUG): defines if the tracer is enabled or not. If set to false, the code is still instrumented but no spans are sent to the trace agent. This setting cannot be changed at runtime and a restart is required. By default the tracer is disabled when in DEBUG mode, enabled otherwise.\r\n\r\nIt seems this log should not occur. If no spans are sent to the trace agent then presumably a connection should not be established?\r\n\r\nPackage Info\r\n------------------\r\n\r\n> datadog==0.15.0\r\n> ddtrace==0.8.5 \r\n\n", "before_files": [{"content": "import logging\n\n# 3rd party\nfrom django.apps import AppConfig\n\n# project\nfrom .db import patch_db\nfrom .conf import settings\nfrom .cache import patch_cache\nfrom .templates import patch_template\nfrom .middleware import insert_exception_middleware\n\nfrom ...ext import AppTypes\n\n\nlog = logging.getLogger(__name__)\n\n\nclass TracerConfig(AppConfig):\n name = 'ddtrace.contrib.django'\n label = 'datadog_django'\n\n def ready(self):\n \"\"\"\n Ready is called as soon as the registry is fully populated.\n Tracing capabilities must be enabled in this function so that\n all Django internals are properly configured.\n \"\"\"\n tracer = settings.TRACER\n\n if settings.TAGS:\n tracer.set_tags(settings.TAGS)\n\n # define the service details\n tracer.set_service_info(\n app='django',\n app_type=AppTypes.web,\n service=settings.DEFAULT_SERVICE,\n )\n\n # configure the tracer instance\n # TODO[manu]: we may use configure() but because it creates a new\n # AgentWriter, it breaks all tests. 
The configure() behavior must\n # be changed to use it in this integration\n tracer.enabled = settings.ENABLED\n tracer.writer.api.hostname = settings.AGENT_HOSTNAME\n tracer.writer.api.port = settings.AGENT_PORT\n\n if settings.AUTO_INSTRUMENT:\n # trace Django internals\n insert_exception_middleware()\n try:\n patch_db(tracer)\n except Exception:\n log.exception('error patching Django database connections')\n\n try:\n patch_template(tracer)\n except Exception:\n log.exception('error patching Django template rendering')\n\n try:\n patch_cache(tracer)\n except Exception:\n log.exception('error patching Django cache')\n", "path": "ddtrace/contrib/django/apps.py"}]} | 1,311 | 255 |
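The fix above is purely an ordering change: `tracer.enabled`, the agent hostname and the port are applied before `set_service_info()` is called, so a disabled tracer never tries to flush service metadata to the default agent address. The toy sketch below illustrates only that ordering principle; it is not ddtrace's real writer implementation:

```python
class ToyWriter:
    """Stand-in for an agent writer: "sends" only when the tracer is enabled."""

    def __init__(self):
        self.sent = []

    def send_services(self, enabled, service):
        if enabled:
            self.sent.append(service)


class ToyTracer:
    def __init__(self):
        self.enabled = True
        self.writer = ToyWriter()

    def set_service_info(self, service):
        # Whatever "enabled" is *right now* decides whether this flushes.
        self.writer.send_services(self.enabled, service)


buggy = ToyTracer()
buggy.set_service_info("django")   # announced before the config is applied
buggy.enabled = False              # too late: the flush already happened

fixed = ToyTracer()
fixed.enabled = False              # apply configuration first (the patch's order)
fixed.set_service_info("django")   # no flush, so no connection attempt

print(buggy.writer.sent, fixed.writer.sent)  # ['django'] []
```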
gh_patches_debug_6077 | rasdani/github-patches | git_diff | learningequality__kolibri-3759 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remove translations for user kinds on backend
### Observed behavior
In role kinds we use the string "Classroom Assignable Coach": https://github.com/learningequality/kolibri/blob/develop/kolibri/auth/constants/role_kinds.py#L15
This string is not something that should be user-facing
### Expected behavior
implementation details hidden from user
### User-facing consequences
confusing, inconsistent terminology
### Context
https://crowdin.com/translate/kolibri/498/en-es#37506
</issue>
<code>
[start of kolibri/auth/constants/role_kinds.py]
1 """
2 This module contains constants representing the kinds of "roles" that a user can have with respect to a Collection.
3 """
4 from __future__ import unicode_literals
5
6 from django.utils.translation import ugettext_lazy as _
7
8 ADMIN = "admin"
9 COACH = "coach"
10 ASSIGNABLE_COACH = "classroom assignable coach"
11
12 choices = (
13 (ADMIN, _("Admin")),
14 (COACH, _("Coach")),
15 (ASSIGNABLE_COACH, _("Classroom Assignable Coach")),
16 )
17
[end of kolibri/auth/constants/role_kinds.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/auth/constants/role_kinds.py b/kolibri/auth/constants/role_kinds.py
--- a/kolibri/auth/constants/role_kinds.py
+++ b/kolibri/auth/constants/role_kinds.py
@@ -3,14 +3,12 @@
"""
from __future__ import unicode_literals
-from django.utils.translation import ugettext_lazy as _
-
ADMIN = "admin"
COACH = "coach"
ASSIGNABLE_COACH = "classroom assignable coach"
choices = (
- (ADMIN, _("Admin")),
- (COACH, _("Coach")),
- (ASSIGNABLE_COACH, _("Classroom Assignable Coach")),
+ (ADMIN, "Admin"),
+ (COACH, "Coach"),
+ (ASSIGNABLE_COACH, "Classroom Assignable Coach"),
)
| {"golden_diff": "diff --git a/kolibri/auth/constants/role_kinds.py b/kolibri/auth/constants/role_kinds.py\n--- a/kolibri/auth/constants/role_kinds.py\n+++ b/kolibri/auth/constants/role_kinds.py\n@@ -3,14 +3,12 @@\n \"\"\"\n from __future__ import unicode_literals\n \n-from django.utils.translation import ugettext_lazy as _\n-\n ADMIN = \"admin\"\n COACH = \"coach\"\n ASSIGNABLE_COACH = \"classroom assignable coach\"\n \n choices = (\n- (ADMIN, _(\"Admin\")),\n- (COACH, _(\"Coach\")),\n- (ASSIGNABLE_COACH, _(\"Classroom Assignable Coach\")),\n+ (ADMIN, \"Admin\"),\n+ (COACH, \"Coach\"),\n+ (ASSIGNABLE_COACH, \"Classroom Assignable Coach\"),\n )\n", "issue": "remove translations for user kinds on backend\n### Observed behavior\r\n\r\nIn role kinds we use the string \"Classroom Assignable Coach\": https://github.com/learningequality/kolibri/blob/develop/kolibri/auth/constants/role_kinds.py#L15\r\n\r\nThis string is not something that should be user-facing\r\n\r\n### Expected behavior\r\n\r\nimplementation details hidden from user\r\n\r\n### User-facing consequences\r\n\r\nconfusing, inconsistent terminology\r\n\r\n\r\n### Context\r\n\r\nhttps://crowdin.com/translate/kolibri/498/en-es#37506\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThis module contains constants representing the kinds of \"roles\" that a user can have with respect to a Collection.\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom django.utils.translation import ugettext_lazy as _\n\nADMIN = \"admin\"\nCOACH = \"coach\"\nASSIGNABLE_COACH = \"classroom assignable coach\"\n\nchoices = (\n (ADMIN, _(\"Admin\")),\n (COACH, _(\"Coach\")),\n (ASSIGNABLE_COACH, _(\"Classroom Assignable Coach\")),\n)\n", "path": "kolibri/auth/constants/role_kinds.py"}]} | 785 | 178 |
gh_patches_debug_33088 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-317 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User should be able to configure multiple databases in settings
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Currently, the user can only configure one Mathesar database in the settings. They should be able to configure as many databases to connect to Mathesar as they want.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
The user should be able to configure multiple databases in the `.env` file.
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
We might want to use `python-decouple`'s [built in CSV helper](https://github.com/henriquebastos/python-decouple/#built-in-csv-helper) for this.
Ideally, the user would be able to associate the database key with the connection information directly using a tuple or something like that.
</issue>
<code>
[start of config/settings.py]
1 """
2 Django settings for config project.
3
4 Generated by 'django-admin startproject' using Django 3.1.7.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.1/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.1/ref/settings/
11 """
12
13 import os
14 from pathlib import Path
15
16 from decouple import Csv, config as decouple_config
17 from dj_database_url import parse as db_url
18
19 # Build paths inside the project like this: BASE_DIR / 'subdir'.
20 BASE_DIR = Path(__file__).resolve().parent.parent
21
22 # Application definition
23
24 INSTALLED_APPS = [
25 "django.contrib.admin",
26 "django.contrib.auth",
27 "django.contrib.contenttypes",
28 "django.contrib.sessions",
29 "django.contrib.messages",
30 "django.contrib.staticfiles",
31 "rest_framework",
32 "django_filters",
33 "django_property_filter",
34 "mathesar",
35 ]
36
37 MIDDLEWARE = [
38 "django.middleware.security.SecurityMiddleware",
39 "django.contrib.sessions.middleware.SessionMiddleware",
40 "django.middleware.common.CommonMiddleware",
41 "django.middleware.csrf.CsrfViewMiddleware",
42 "django.contrib.auth.middleware.AuthenticationMiddleware",
43 "django.contrib.messages.middleware.MessageMiddleware",
44 "django.middleware.clickjacking.XFrameOptionsMiddleware",
45 ]
46
47 ROOT_URLCONF = "config.urls"
48
49 TEMPLATES = [
50 {
51 "BACKEND": "django.template.backends.django.DjangoTemplates",
52 "DIRS": [],
53 "APP_DIRS": True,
54 "OPTIONS": {
55 "context_processors": [
56 "config.context_processors.get_settings",
57 "django.template.context_processors.debug",
58 "django.template.context_processors.request",
59 "django.contrib.auth.context_processors.auth",
60 "django.contrib.messages.context_processors.messages",
61 ],
62 },
63 },
64 ]
65
66 WSGI_APPLICATION = "config.wsgi.application"
67
68 # Database
69 # https://docs.djangoproject.com/en/3.1/ref/settings/#databases
70
71 # TODO: Add to documentation that database keys should not be than 128 characters.
72 DATABASES = {
73 decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),
74 decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)
75 }
76
77 # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
78 # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
79 if decouple_config('TEST', default=False, cast=bool):
80 DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {
81 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']
82 }
83
84
85 # Quick-start development settings - unsuitable for production
86 # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
87
88 # SECURITY WARNING: keep the secret key used in production secret!
89 SECRET_KEY = decouple_config('SECRET_KEY')
90
91 # SECURITY WARNING: don't run with debug turned on in production!
92 DEBUG = decouple_config('DEBUG', default=False, cast=bool)
93
94 ALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())
95
96 # Password validation
97 # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
98
99 AUTH_PASSWORD_VALIDATORS = [
100 {
101 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
102 },
103 {
104 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
105 },
106 {
107 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
108 },
109 {
110 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
111 },
112 ]
113
114
115 # Internationalization
116 # https://docs.djangoproject.com/en/3.1/topics/i18n/
117
118 LANGUAGE_CODE = "en-us"
119
120 TIME_ZONE = "UTC"
121
122 USE_I18N = True
123
124 USE_L10N = True
125
126 USE_TZ = True
127
128
129 # Static files (CSS, JavaScript, Images)
130 # https://docs.djangoproject.com/en/3.1/howto/static-files/
131
132 STATIC_URL = "/static/"
133
134 CLIENT_DEV_URL = "http://localhost:3000"
135
136
137 # Media files (uploaded by the user)
138
139 MEDIA_ROOT = os.path.join(BASE_DIR, '.media/')
140
141 MEDIA_URL = "/media/"
142
[end of config/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/config/settings.py b/config/settings.py
--- a/config/settings.py
+++ b/config/settings.py
@@ -16,6 +16,16 @@
from decouple import Csv, config as decouple_config
from dj_database_url import parse as db_url
+
+# We use a 'tuple' with pipes as delimiters as decople naively splits the global
+# variables on commas when casting to Csv()
+def pipe_delim(pipe_string):
+ # Remove opening and closing brackets
+ pipe_string = pipe_string[1:-1]
+ # Split on pipe delim
+ return pipe_string.split("|")
+
+
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
@@ -69,17 +79,20 @@
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
# TODO: Add to documentation that database keys should not be than 128 characters.
+
+# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'
+# See pipe_delim above for why we use pipes as delimiters
DATABASES = {
- decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),
- decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)
+ db_key: db_url(url_string)
+ for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))
}
+DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)
# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
if decouple_config('TEST', default=False, cast=bool):
- DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {
- 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']
- }
+ for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):
+ DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}
# Quick-start development settings - unsuitable for production
| {"golden_diff": "diff --git a/config/settings.py b/config/settings.py\n--- a/config/settings.py\n+++ b/config/settings.py\n@@ -16,6 +16,16 @@\n from decouple import Csv, config as decouple_config\n from dj_database_url import parse as db_url\n \n+\n+# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n+# variables on commas when casting to Csv()\n+def pipe_delim(pipe_string):\n+ # Remove opening and closing brackets\n+ pipe_string = pipe_string[1:-1]\n+ # Split on pipe delim\n+ return pipe_string.split(\"|\")\n+\n+\n # Build paths inside the project like this: BASE_DIR / 'subdir'.\n BASE_DIR = Path(__file__).resolve().parent.parent\n \n@@ -69,17 +79,20 @@\n # https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n \n # TODO: Add to documentation that database keys should not be than 128 characters.\n+\n+# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n+# See pipe_delim above for why we use pipes as delimiters\n DATABASES = {\n- decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),\n- decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)\n+ db_key: db_url(url_string)\n+ for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n }\n+DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n \n # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\n if decouple_config('TEST', default=False, cast=bool):\n- DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {\n- 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']\n- }\n+ for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n+ DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n \n \n # Quick-start development settings - unsuitable for production\n", "issue": "User should be able to configure multiple databases in settings\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nCurrently, the user can only configure one Mathesar database in the settings. They should be able to configure as many databases to connect to Mathesar as they want.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. 
-->\r\nThe user should be able to configure multiple databases in the `.env` file.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nWe might want to use `python-decouple`'s [built in CSV helper](https://github.com/henriquebastos/python-decouple/#built-in-csv-helper) for this.\r\n\r\nIdeally, the user would be able to associate the database key with the connection information directly using a tuple or something like that.\n", "before_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"config.context_processors.get_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\nDATABASES = {\n decouple_config('DJANGO_DATABASE_KEY'): decouple_config('DJANGO_DATABASE_URL', cast=db_url),\n decouple_config('MATHESAR_DATABASE_KEY'): decouple_config('MATHESAR_DATABASE_URL', cast=db_url)\n}\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['TEST'] = {\n 'NAME': DATABASES[decouple_config('MATHESAR_DATABASE_KEY')]['NAME']\n }\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = 
decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nCLIENT_DEV_URL = \"http://localhost:3000\"\n\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n", "path": "config/settings.py"}]} | 1,987 | 546 |
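A minimal sketch of the `pipe_delim` parsing scheme shown in the `golden_diff` above, with a made-up database name and URL: `decouple`'s `Csv()` splits the variable on commas, so each entry is wrapped in `(name|url)` and the helper strips the brackets and splits on the pipe.

```python
def pipe_delim(pipe_string):
    # Remove the surrounding brackets, then split on the pipe delimiter
    return pipe_string[1:-1].split("|")

entry = "(mathesar_tables|postgres://mathesar:secret@db:5432/mathesar)"  # hypothetical entry
print(pipe_delim(entry))  # ['mathesar_tables', 'postgres://mathesar:secret@db:5432/mathesar']
```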
gh_patches_debug_3923 | rasdani/github-patches | git_diff | deepset-ai__haystack-6173 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create a script for 2.0 API Reference docs
</issue>
<code>
[start of docs/pydoc/renderers.py]
1 import os
2 import sys
3 import io
4 import dataclasses
5 import typing as t
6 import base64
7 import warnings
8 from pathlib import Path
9
10 import requests
11 import docspec
12 from pydoc_markdown.interfaces import Context, Renderer
13 from pydoc_markdown.contrib.renderers.markdown import MarkdownRenderer
14
15
16 README_FRONTMATTER = """---
17 title: {title}
18 excerpt: {excerpt}
19 category: {category}
20 slug: {slug}
21 parentDoc: {parent_doc}
22 order: {order}
23 hidden: false
24 ---
25
26 """
27
28
29 def create_headers(version: str):
30 # Utility function to create Readme.io headers.
31 # We assume the README_API_KEY env var is set since we check outside
32 # to show clearer error messages.
33 README_API_KEY = os.getenv("README_API_KEY")
34 token = base64.b64encode(f"{README_API_KEY}:".encode()).decode()
35 return {"authorization": f"Basic {token}", "x-readme-version": version}
36
37
38 @dataclasses.dataclass
39 class ReadmeRenderer(Renderer):
40 """
41 This custom Renderer is heavily based on the `MarkdownRenderer`,
42 it just prepends a front matter so that the output can be published
43 directly to readme.io.
44 """
45
46 # These settings will be used in the front matter output
47 title: str
48 category_slug: str
49 excerpt: str
50 slug: str
51 order: int
52 parent_doc_slug: str = ""
53 # Docs categories fetched from Readme.io
54 categories: t.Dict[str, str] = dataclasses.field(init=False)
55 # This exposes a special `markdown` settings value that can be used to pass
56 # parameters to the underlying `MarkdownRenderer`
57 markdown: MarkdownRenderer = dataclasses.field(default_factory=MarkdownRenderer)
58
59 def init(self, context: Context) -> None:
60 self.markdown.init(context)
61 self.version = self._doc_version()
62 self.categories = self._readme_categories(self.version)
63
64 def _doc_version(self) -> str:
65 """
66 Returns the docs version.
67 """
68 root = Path(__file__).absolute().parent.parent.parent
69 full_version = (root / "VERSION.txt").read_text()
70 major, minor = full_version.split(".")[:2]
71 if "rc0" in full_version:
72 return f"v{major}.{minor}-unstable"
73 return f"v{major}.{minor}"
74
75 def _readme_categories(self, version: str) -> t.Dict[str, str]:
76 """
77 Fetch the categories of the given version from Readme.io.
78 README_API_KEY env var must be set to correctly get the categories.
79 Returns dictionary containing all the categories slugs and their ids.
80 """
81 README_API_KEY = os.getenv("README_API_KEY")
82 if not README_API_KEY:
83 warnings.warn("README_API_KEY env var is not set, using a placeholder category ID")
84 return {"haystack-classes": "ID"}
85
86 headers = create_headers(version)
87
88 res = requests.get("https://dash.readme.com/api/v1/categories", headers=headers, timeout=60)
89
90 if not res.ok:
91 sys.exit(f"Error requesting {version} categories")
92
93 return {c["slug"]: c["id"] for c in res.json()}
94
95 def _doc_id(self, doc_slug: str, version: str) -> str:
96 """
97 Fetch the doc id of the given doc slug and version from Readme.io.
98 README_API_KEY env var must be set to correctly get the id.
99 If doc_slug is an empty string return an empty string.
100 """
101 if not doc_slug:
102 # Not all docs have a parent doc, in case we get no slug
103 # we just return an empty string.
104 return ""
105
106 README_API_KEY = os.getenv("README_API_KEY")
107 if not README_API_KEY:
108 warnings.warn("README_API_KEY env var is not set, using a placeholder doc ID")
109 return "fake-doc-id"
110
111 headers = create_headers(version)
112 res = requests.get(f"https://dash.readme.com/api/v1/docs/{doc_slug}", headers=headers, timeout=60)
113 if not res.ok:
114 sys.exit(f"Error requesting {doc_slug} doc for version {version}")
115
116 return res.json()["id"]
117
118 def render(self, modules: t.List[docspec.Module]) -> None:
119 if self.markdown.filename is None:
120 sys.stdout.write(self._frontmatter())
121 self.markdown.render_single_page(sys.stdout, modules)
122 else:
123 with io.open(self.markdown.filename, "w", encoding=self.markdown.encoding) as fp:
124 fp.write(self._frontmatter())
125 self.markdown.render_single_page(t.cast(t.TextIO, fp), modules)
126
127 def _frontmatter(self) -> str:
128 return README_FRONTMATTER.format(
129 title=self.title,
130 category=self.categories[self.category_slug],
131 parent_doc=self._doc_id(self.parent_doc_slug, self.version),
132 excerpt=self.excerpt,
133 slug=self.slug,
134 order=self.order,
135 )
136
[end of docs/pydoc/renderers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/pydoc/renderers.py b/docs/pydoc/renderers.py
--- a/docs/pydoc/renderers.py
+++ b/docs/pydoc/renderers.py
@@ -133,3 +133,16 @@
slug=self.slug,
order=self.order,
)
+
+
[email protected]
+class ReadmePreviewRenderer(ReadmeRenderer):
+ """
+ This custom Renderer behaves just like the ReadmeRenderer but renders docs with the hardcoded version 2.0 to generate correct category ids.
+ """
+
+ def _doc_version(self) -> str:
+ """
+ Returns the hardcoded docs version 2.0.
+ """
+ return "v2.0"
| {"golden_diff": "diff --git a/docs/pydoc/renderers.py b/docs/pydoc/renderers.py\n--- a/docs/pydoc/renderers.py\n+++ b/docs/pydoc/renderers.py\n@@ -133,3 +133,16 @@\n slug=self.slug,\n order=self.order,\n )\n+\n+\[email protected]\n+class ReadmePreviewRenderer(ReadmeRenderer):\n+ \"\"\"\n+ This custom Renderer behaves just like the ReadmeRenderer but renders docs with the hardcoded version 2.0 to generate correct category ids.\n+ \"\"\"\n+\n+ def _doc_version(self) -> str:\n+ \"\"\"\n+ Returns the hardcoded docs version 2.0.\n+ \"\"\"\n+ return \"v2.0\"\n", "issue": "Create a script for 2.0 API Reference docs\n\n", "before_files": [{"content": "import os\nimport sys\nimport io\nimport dataclasses\nimport typing as t\nimport base64\nimport warnings\nfrom pathlib import Path\n\nimport requests\nimport docspec\nfrom pydoc_markdown.interfaces import Context, Renderer\nfrom pydoc_markdown.contrib.renderers.markdown import MarkdownRenderer\n\n\nREADME_FRONTMATTER = \"\"\"---\ntitle: {title}\nexcerpt: {excerpt}\ncategory: {category}\nslug: {slug}\nparentDoc: {parent_doc}\norder: {order}\nhidden: false\n---\n\n\"\"\"\n\n\ndef create_headers(version: str):\n # Utility function to create Readme.io headers.\n # We assume the README_API_KEY env var is set since we check outside\n # to show clearer error messages.\n README_API_KEY = os.getenv(\"README_API_KEY\")\n token = base64.b64encode(f\"{README_API_KEY}:\".encode()).decode()\n return {\"authorization\": f\"Basic {token}\", \"x-readme-version\": version}\n\n\[email protected]\nclass ReadmeRenderer(Renderer):\n \"\"\"\n This custom Renderer is heavily based on the `MarkdownRenderer`,\n it just prepends a front matter so that the output can be published\n directly to readme.io.\n \"\"\"\n\n # These settings will be used in the front matter output\n title: str\n category_slug: str\n excerpt: str\n slug: str\n order: int\n parent_doc_slug: str = \"\"\n # Docs categories fetched from Readme.io\n categories: t.Dict[str, str] = dataclasses.field(init=False)\n # This exposes a special `markdown` settings value that can be used to pass\n # parameters to the underlying `MarkdownRenderer`\n markdown: MarkdownRenderer = dataclasses.field(default_factory=MarkdownRenderer)\n\n def init(self, context: Context) -> None:\n self.markdown.init(context)\n self.version = self._doc_version()\n self.categories = self._readme_categories(self.version)\n\n def _doc_version(self) -> str:\n \"\"\"\n Returns the docs version.\n \"\"\"\n root = Path(__file__).absolute().parent.parent.parent\n full_version = (root / \"VERSION.txt\").read_text()\n major, minor = full_version.split(\".\")[:2]\n if \"rc0\" in full_version:\n return f\"v{major}.{minor}-unstable\"\n return f\"v{major}.{minor}\"\n\n def _readme_categories(self, version: str) -> t.Dict[str, str]:\n \"\"\"\n Fetch the categories of the given version from Readme.io.\n README_API_KEY env var must be set to correctly get the categories.\n Returns dictionary containing all the categories slugs and their ids.\n \"\"\"\n README_API_KEY = os.getenv(\"README_API_KEY\")\n if not README_API_KEY:\n warnings.warn(\"README_API_KEY env var is not set, using a placeholder category ID\")\n return {\"haystack-classes\": \"ID\"}\n\n headers = create_headers(version)\n\n res = requests.get(\"https://dash.readme.com/api/v1/categories\", headers=headers, timeout=60)\n\n if not res.ok:\n sys.exit(f\"Error requesting {version} categories\")\n\n return {c[\"slug\"]: c[\"id\"] for c in res.json()}\n\n def _doc_id(self, doc_slug: str, version: 
str) -> str:\n \"\"\"\n Fetch the doc id of the given doc slug and version from Readme.io.\n README_API_KEY env var must be set to correctly get the id.\n If doc_slug is an empty string return an empty string.\n \"\"\"\n if not doc_slug:\n # Not all docs have a parent doc, in case we get no slug\n # we just return an empty string.\n return \"\"\n\n README_API_KEY = os.getenv(\"README_API_KEY\")\n if not README_API_KEY:\n warnings.warn(\"README_API_KEY env var is not set, using a placeholder doc ID\")\n return \"fake-doc-id\"\n\n headers = create_headers(version)\n res = requests.get(f\"https://dash.readme.com/api/v1/docs/{doc_slug}\", headers=headers, timeout=60)\n if not res.ok:\n sys.exit(f\"Error requesting {doc_slug} doc for version {version}\")\n\n return res.json()[\"id\"]\n\n def render(self, modules: t.List[docspec.Module]) -> None:\n if self.markdown.filename is None:\n sys.stdout.write(self._frontmatter())\n self.markdown.render_single_page(sys.stdout, modules)\n else:\n with io.open(self.markdown.filename, \"w\", encoding=self.markdown.encoding) as fp:\n fp.write(self._frontmatter())\n self.markdown.render_single_page(t.cast(t.TextIO, fp), modules)\n\n def _frontmatter(self) -> str:\n return README_FRONTMATTER.format(\n title=self.title,\n category=self.categories[self.category_slug],\n parent_doc=self._doc_id(self.parent_doc_slug, self.version),\n excerpt=self.excerpt,\n slug=self.slug,\n order=self.order,\n )\n", "path": "docs/pydoc/renderers.py"}]} | 1,928 | 158 |
gh_patches_debug_22565 | rasdani/github-patches | git_diff | pre-commit__pre-commit-874 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Concurrent execution results in uneven work per thread
I'm running `pre-commit` from current `master` to test the concurrency feature introduced with #851. While it in general seems to work, work is distributed pretty uneven. One hook we run is [`prospector`](https://github.com/guykisel/prospector-mirror) which is nice for testing, because it takes a relatively long time and it prints the time taken in its output.
Running `pre-commit run -a --verbose prospector | grep "Time Taken"` on a medium sized project (~100 Python files) results in the following distribution of work to the available 4 logical CPU cores:
```
Time Taken: 17.10 seconds
Time Taken: 8.70 seconds
Time Taken: 18.68 seconds
Time Taken: 108.02 seconds
```
Especially compared to running it with concurrency disabled (using `PRE_COMMIT_NO_CONCURRENCY`), it's pretty obvious that concurrency doesn't provide any real benefit here:
```
Time Taken: 116.95 seconds
```
I'd be happy to help debugging this further. Just tell me what other information you need. :slightly_smiling_face:
</issue>
<code>
[start of pre_commit/languages/helpers.py]
1 from __future__ import unicode_literals
2
3 import multiprocessing
4 import os
5 import shlex
6
7 from pre_commit.util import cmd_output
8 from pre_commit.xargs import xargs
9
10
11 def run_setup_cmd(prefix, cmd):
12 cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)
13
14
15 def environment_dir(ENVIRONMENT_DIR, language_version):
16 if ENVIRONMENT_DIR is None:
17 return None
18 else:
19 return '{}-{}'.format(ENVIRONMENT_DIR, language_version)
20
21
22 def to_cmd(hook):
23 return tuple(shlex.split(hook['entry'])) + tuple(hook['args'])
24
25
26 def assert_version_default(binary, version):
27 if version != 'default':
28 raise AssertionError(
29 'For now, pre-commit requires system-installed {}'.format(binary),
30 )
31
32
33 def assert_no_additional_deps(lang, additional_deps):
34 if additional_deps:
35 raise AssertionError(
36 'For now, pre-commit does not support '
37 'additional_dependencies for {}'.format(lang),
38 )
39
40
41 def basic_get_default_version():
42 return 'default'
43
44
45 def basic_healthy(prefix, language_version):
46 return True
47
48
49 def no_install(prefix, version, additional_dependencies):
50 raise AssertionError('This type is not installable')
51
52
53 def target_concurrency(hook):
54 if hook['require_serial'] or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:
55 return 1
56 else:
57 # Travis appears to have a bunch of CPUs, but we can't use them all.
58 if 'TRAVIS' in os.environ:
59 return 2
60 else:
61 try:
62 return multiprocessing.cpu_count()
63 except NotImplementedError:
64 return 1
65
66
67 def run_xargs(hook, cmd, file_args):
68 return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))
69
[end of pre_commit/languages/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py
--- a/pre_commit/languages/helpers.py
+++ b/pre_commit/languages/helpers.py
@@ -2,12 +2,18 @@
import multiprocessing
import os
+import random
import shlex
+import six
+
from pre_commit.util import cmd_output
from pre_commit.xargs import xargs
+FIXED_RANDOM_SEED = 1542676186
+
+
def run_setup_cmd(prefix, cmd):
cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)
@@ -64,5 +70,21 @@
return 1
+def _shuffled(seq):
+ """Deterministically shuffle identically under both py2 + py3."""
+ fixed_random = random.Random()
+ if six.PY2: # pragma: no cover (py2)
+ fixed_random.seed(FIXED_RANDOM_SEED)
+ else:
+ fixed_random.seed(FIXED_RANDOM_SEED, version=1)
+
+ seq = list(seq)
+ random.shuffle(seq, random=fixed_random.random)
+ return seq
+
+
def run_xargs(hook, cmd, file_args):
+ # Shuffle the files so that they more evenly fill out the xargs partitions,
+ # but do it deterministically in case a hook cares about ordering.
+ file_args = _shuffled(file_args)
return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))
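A standalone sketch of the Python 3 branch of the patch above: shuffling the file list with a privately seeded `random.Random` spreads slow files across the xargs partitions while keeping the order identical on every run.

```python
import random

FIXED_RANDOM_SEED = 1542676186  # same constant as in the patch

def _shuffled(seq):
    fixed_random = random.Random()
    fixed_random.seed(FIXED_RANDOM_SEED, version=1)
    seq = list(seq)
    random.shuffle(seq, random=fixed_random.random)
    return seq

files = ["a.py", "big_package/models.py", "c.py", "d.py", "e.py", "f.py"]
print(_shuffled(files))  # same order on every invocation, so hook behaviour stays reproducible
```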
| {"golden_diff": "diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py\n--- a/pre_commit/languages/helpers.py\n+++ b/pre_commit/languages/helpers.py\n@@ -2,12 +2,18 @@\n \n import multiprocessing\n import os\n+import random\n import shlex\n \n+import six\n+\n from pre_commit.util import cmd_output\n from pre_commit.xargs import xargs\n \n \n+FIXED_RANDOM_SEED = 1542676186\n+\n+\n def run_setup_cmd(prefix, cmd):\n cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)\n \n@@ -64,5 +70,21 @@\n return 1\n \n \n+def _shuffled(seq):\n+ \"\"\"Deterministically shuffle identically under both py2 + py3.\"\"\"\n+ fixed_random = random.Random()\n+ if six.PY2: # pragma: no cover (py2)\n+ fixed_random.seed(FIXED_RANDOM_SEED)\n+ else:\n+ fixed_random.seed(FIXED_RANDOM_SEED, version=1)\n+\n+ seq = list(seq)\n+ random.shuffle(seq, random=fixed_random.random)\n+ return seq\n+\n+\n def run_xargs(hook, cmd, file_args):\n+ # Shuffle the files so that they more evenly fill out the xargs partitions,\n+ # but do it deterministically in case a hook cares about ordering.\n+ file_args = _shuffled(file_args)\n return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))\n", "issue": "Concurrent execution results in uneven work per thread\nI'm running `pre-commit` from current `master` to test the concurrency feature introduced with #851. While it in general seems to work, work is distributed pretty uneven. One hook we run is [`prospector`](https://github.com/guykisel/prospector-mirror) which is nice for testing, because it takes a relatively long time and it prints the time taken in its output.\r\n\r\nRunning `pre-commit run -a --verbose prospector | grep \"Time Taken\"` on a medium sized project (~100 Python files) results in the following distribution of work to the available 4 logical CPU cores:\r\n```\r\nTime Taken: 17.10 seconds\r\nTime Taken: 8.70 seconds\r\nTime Taken: 18.68 seconds\r\nTime Taken: 108.02 seconds\r\n```\r\n\r\nEspecially compared to running it with concurrency disabled (using `PRE_COMMIT_NO_CONCURRENCY`), it's pretty obvious that concurrency doesn't provide any real benefit here:\r\n```\r\nTime Taken: 116.95 seconds\r\n```\r\n\r\nI'd be happy to help debugging this further. Just tell me what other information you need. 
:slightly_smiling_face: \n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport multiprocessing\nimport os\nimport shlex\n\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\ndef run_setup_cmd(prefix, cmd):\n cmd_output(*cmd, cwd=prefix.prefix_dir, encoding=None)\n\n\ndef environment_dir(ENVIRONMENT_DIR, language_version):\n if ENVIRONMENT_DIR is None:\n return None\n else:\n return '{}-{}'.format(ENVIRONMENT_DIR, language_version)\n\n\ndef to_cmd(hook):\n return tuple(shlex.split(hook['entry'])) + tuple(hook['args'])\n\n\ndef assert_version_default(binary, version):\n if version != 'default':\n raise AssertionError(\n 'For now, pre-commit requires system-installed {}'.format(binary),\n )\n\n\ndef assert_no_additional_deps(lang, additional_deps):\n if additional_deps:\n raise AssertionError(\n 'For now, pre-commit does not support '\n 'additional_dependencies for {}'.format(lang),\n )\n\n\ndef basic_get_default_version():\n return 'default'\n\n\ndef basic_healthy(prefix, language_version):\n return True\n\n\ndef no_install(prefix, version, additional_dependencies):\n raise AssertionError('This type is not installable')\n\n\ndef target_concurrency(hook):\n if hook['require_serial'] or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:\n return 1\n else:\n # Travis appears to have a bunch of CPUs, but we can't use them all.\n if 'TRAVIS' in os.environ:\n return 2\n else:\n try:\n return multiprocessing.cpu_count()\n except NotImplementedError:\n return 1\n\n\ndef run_xargs(hook, cmd, file_args):\n return xargs(cmd, file_args, target_concurrency=target_concurrency(hook))\n", "path": "pre_commit/languages/helpers.py"}]} | 1,318 | 334 |
gh_patches_debug_9732 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-9452 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-3190] Pinning detective work
### Housekeeping
- [X] I am a maintainer of dbt-core
### Short description
We recently pinned `types-requests<2.31.0` because it had a dependency conflict with `urllib3`, which we have pinned to `~=1.0` because of another conflict with `requests` requiring `openssl`.

This ticket is to look into whether those pins are still required and to clean them up if not. 
### Acceptance criteria
We have confirmed that the pins are
- required to continue to work
_or_
- not required and we have re-pinned appropriately
### Impact to Other Teams
adapters - based on the notes it seems like `urllib3` is pinned for the snowflake adapter as well so we will want to ensure changing the dependencies does not adversely affect them
### Will backports be required?
no
### Context
_No response_
</issue>
<code>
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 8):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.8 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.8.0a1"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": ["dbt = dbt.cli.main:cli"],
47 },
48 install_requires=[
49 # ----
50 # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).
51 # Pin to the patch or minor version, and bump in each new minor version of dbt-core.
52 "agate~=1.7.0",
53 "Jinja2~=3.1.2",
54 "mashumaro[msgpack]~=3.9",
55 # ----
56 # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)
57 # The dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core
58 "logbook>=1.5,<1.6",
59 # ----
60 # dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility
61 # with major versions in each new minor version of dbt-core.
62 "click>=8.0.2,<9",
63 "networkx>=2.3,<4",
64 # ----
65 # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)
66 # and check compatibility / bump in each new minor version of dbt-core.
67 "pathspec>=0.9,<0.12",
68 "sqlparse>=0.2.3,<0.5",
69 # ----
70 # These are major-version-0 packages also maintained by dbt-labs. Accept patches.
71 "dbt-extractor~=0.5.0",
72 "minimal-snowplow-tracker~=0.0.2",
73 "dbt-semantic-interfaces~=0.5.0a2",
74 "dbt-common~=0.1.0",
75 "dbt-adapters~=0.1.0a2",
76 # ----
77 # Expect compatibility with all new versions of these packages, so lower bounds only.
78 "packaging>20.9",
79 "protobuf>=4.0.0",
80 "pytz>=2015.7",
81 "pyyaml>=6.0",
82 "daff>=1.3.46",
83 "typing-extensions>=4.4",
84 # ----
85 ],
86 zip_safe=False,
87 classifiers=[
88 "Development Status :: 5 - Production/Stable",
89 "License :: OSI Approved :: Apache Software License",
90 "Operating System :: Microsoft :: Windows",
91 "Operating System :: MacOS :: MacOS X",
92 "Operating System :: POSIX :: Linux",
93 "Programming Language :: Python :: 3.8",
94 "Programming Language :: Python :: 3.9",
95 "Programming Language :: Python :: 3.10",
96 "Programming Language :: Python :: 3.11",
97 ],
98 python_requires=">=3.8",
99 )
100
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -61,6 +61,7 @@
# with major versions in each new minor version of dbt-core.
"click>=8.0.2,<9",
"networkx>=2.3,<4",
+ "requests<3.0.0", # should match dbt-common
# ----
# These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)
# and check compatibility / bump in each new minor version of dbt-core.
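One low-tech way to do the detective work the ticket asks for, sketched with the standard library only; the package names come from the issue and nothing here is dbt-specific.

```python
from importlib.metadata import PackageNotFoundError, requires, version

for pkg in ("requests", "urllib3", "types-requests", "dbt-core"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")

# The urllib3 range that requests itself declares -- the constraint behind the original pins:
print([req for req in (requires("requests") or []) if req.startswith("urllib3")])
```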
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -61,6 +61,7 @@\n # with major versions in each new minor version of dbt-core.\n \"click>=8.0.2,<9\",\n \"networkx>=2.3,<4\",\n+ \"requests<3.0.0\", # should match dbt-common\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n # and check compatibility / bump in each new minor version of dbt-core.\n", "issue": "[CT-3190] Pinning detective work\n### Housekeeping\n\n- [X] I am a maintainer of dbt-core\n\n### Short description\n\nWe recently pinned `types-requests<2.31.0` because it had a dependency conflict with `urllib3` which we have pinned to `~=1.0` because of another conflict with `requests` requiring `openssl`.\r\n\r\nThis ticket is to look into if those pins are still required and clean them up if not. \n\n### Acceptance criteria\n\nWe have confirmed that the pins are\r\n- required to continue to work\r\n_or_\r\n- not required and we have re-pinned appropriately\n\n### Impact to Other Teams\n\nadapters - based on the notes it seems like `urllib3` is pinned for the snowflake adapter as well so we will want to ensure changing the dependencies does not adversely affect them\n\n### Will backports be required?\n\nno\n\n### Context\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 8):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.8 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.8.0a1\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n # ----\n # dbt-core uses these packages deeply, throughout the codebase, and there have been breaking changes in past patch releases (even though these are major-version-one).\n # Pin to the patch or minor version, and bump in each new minor version of dbt-core.\n \"agate~=1.7.0\",\n \"Jinja2~=3.1.2\",\n \"mashumaro[msgpack]~=3.9\",\n # ----\n # Legacy: This package has not been updated since 2019, and it is unused in dbt's logging system (since v1.0)\n # The dependency here will be removed along with the removal of 'legacy logging', in a future release of dbt-core\n \"logbook>=1.5,<1.6\",\n # ----\n # dbt-core uses these packages in standard ways. 
Pin to the major version, and check compatibility\n # with major versions in each new minor version of dbt-core.\n \"click>=8.0.2,<9\",\n \"networkx>=2.3,<4\",\n # ----\n # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)\n # and check compatibility / bump in each new minor version of dbt-core.\n \"pathspec>=0.9,<0.12\",\n \"sqlparse>=0.2.3,<0.5\",\n # ----\n # These are major-version-0 packages also maintained by dbt-labs. Accept patches.\n \"dbt-extractor~=0.5.0\",\n \"minimal-snowplow-tracker~=0.0.2\",\n \"dbt-semantic-interfaces~=0.5.0a2\",\n \"dbt-common~=0.1.0\",\n \"dbt-adapters~=0.1.0a2\",\n # ----\n # Expect compatibility with all new versions of these packages, so lower bounds only.\n \"packaging>20.9\",\n \"protobuf>=4.0.0\",\n \"pytz>=2015.7\",\n \"pyyaml>=6.0\",\n \"daff>=1.3.46\",\n \"typing-extensions>=4.4\",\n # ----\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.8\",\n)\n", "path": "core/setup.py"}]} | 1,883 | 139 |
gh_patches_debug_40781 | rasdani/github-patches | git_diff | bokeh__bokeh-6812 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inline, Minified Resources do not work in classic notebooks
This is due to an interaction with the classic notebook's use of jQuery when output is published as `text/html`. The new notebook code published a div and a script together as `text/html`. The proposed fix is to publish a single script as `application/javascript` (which should work) that creates the necessary div itself.
</issue>
<code>
[start of bokeh/util/notebook.py]
1 ''' Functions useful for loading Bokeh code and data in Jupyter/Zeppelin notebooks.
2
3 '''
4 from __future__ import absolute_import
5
6 from IPython.display import publish_display_data
7
8 from ..embed import _wrap_in_script_tag
9
10 LOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'
11 EXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'
12
13 _notebook_loaded = None
14
15 # TODO (bev) notebook_type and zeppelin bits should be removed after external zeppelin hook available
16 def load_notebook(resources=None, verbose=False, hide_banner=False, load_timeout=5000, notebook_type='jupyter'):
17 ''' Prepare the IPython notebook for displaying Bokeh plots.
18
19 Args:
20 resources (Resource, optional) :
21 how and where to load BokehJS from (default: CDN)
22
23 verbose (bool, optional) :
24 whether to report detailed settings (default: False)
25
26 hide_banner (bool, optional):
27 whether to hide the Bokeh banner (default: False)
28
29 load_timeout (int, optional) :
30 Timeout in milliseconds when plots assume load timed out (default: 5000)
31
32 notebook_type (string):
33 notebook_type (default: jupyter)
34
35 .. warning::
36 Clearing the output cell containing the published BokehJS
37 resources HTML code may cause Bokeh CSS styling to be removed.
38
39 Returns:
40 None
41
42 '''
43 nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)
44 lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)
45 if notebook_type=='jupyter':
46 publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),
47 LOAD_MIME_TYPE: {"script": lab_js, "div": lab_html}})
48 else:
49 _publish_zeppelin_data(lab_html, lab_js)
50
51
52 FINALIZE_JS = """
53 document.getElementById("%s").textContent = "BokehJS is loading...";
54 """
55
56 # TODO (bev) This will eventually go away
57 def _publish_zeppelin_data(html, js):
58 print('%html ' + html)
59 print('%html ' + '<script type="text/javascript">' + js + "</script>")
60
61 def _load_notebook_html(resources=None, verbose=False, hide_banner=False,
62 load_timeout=5000, register_mimetype=True):
63 global _notebook_loaded
64
65 from .. import __version__
66 from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD
67 from ..util.serialization import make_id
68 from ..util.compiler import bundle_all_models
69 from ..resources import CDN
70
71 if resources is None:
72 resources = CDN
73
74 if resources.mode == 'inline':
75 js_info = 'inline'
76 css_info = 'inline'
77 else:
78 js_info = resources.js_files[0] if len(resources.js_files) == 1 else resources.js_files
79 css_info = resources.css_files[0] if len(resources.css_files) == 1 else resources.css_files
80
81 warnings = ["Warning: " + msg['text'] for msg in resources.messages if msg['type'] == 'warn']
82
83 if _notebook_loaded and verbose:
84 warnings.append('Warning: BokehJS previously loaded')
85
86 _notebook_loaded = resources
87
88 element_id = make_id()
89
90 html = NOTEBOOK_LOAD.render(
91 element_id = element_id,
92 verbose = verbose,
93 js_info = js_info,
94 css_info = css_info,
95 bokeh_version = __version__,
96 warnings = warnings,
97 hide_banner = hide_banner,
98 )
99
100 custom_models_js = bundle_all_models()
101
102 js = AUTOLOAD_NB_JS.render(
103 elementid = '' if hide_banner else element_id,
104 js_urls = resources.js_files,
105 css_urls = resources.css_files,
106 js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),
107 css_raw = resources.css_raw_str,
108 force = True,
109 timeout = load_timeout,
110 register_mimetype = register_mimetype
111 )
112
113 return html, js
114
115 def get_comms(target_name):
116 ''' Create a Jupyter comms object for a specific target, that can
117 be used to update Bokeh documents in the Jupyter notebook.
118
119 Args:
120 target_name (str) : the target name the Comms object should connect to
121
122 Returns
123 Jupyter Comms
124
125 '''
126 from ipykernel.comm import Comm
127 return Comm(target_name=target_name, data={})
128
[end of bokeh/util/notebook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bokeh/util/notebook.py b/bokeh/util/notebook.py
--- a/bokeh/util/notebook.py
+++ b/bokeh/util/notebook.py
@@ -5,8 +5,7 @@
from IPython.display import publish_display_data
-from ..embed import _wrap_in_script_tag
-
+JS_MIME_TYPE = 'application/javascript'
LOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'
EXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'
@@ -40,33 +39,14 @@
None
'''
- nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)
- lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)
- if notebook_type=='jupyter':
- publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),
- LOAD_MIME_TYPE: {"script": lab_js, "div": lab_html}})
- else:
- _publish_zeppelin_data(lab_html, lab_js)
-
-FINALIZE_JS = """
-document.getElementById("%s").textContent = "BokehJS is loading...";
-"""
-
-# TODO (bev) This will eventually go away
-def _publish_zeppelin_data(html, js):
- print('%html ' + html)
- print('%html ' + '<script type="text/javascript">' + js + "</script>")
-
-def _load_notebook_html(resources=None, verbose=False, hide_banner=False,
- load_timeout=5000, register_mimetype=True):
global _notebook_loaded
from .. import __version__
- from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD
+ from ..core.templates import NOTEBOOK_LOAD
from ..util.serialization import make_id
- from ..util.compiler import bundle_all_models
from ..resources import CDN
+ from ..util.compiler import bundle_all_models
if resources is None:
resources = CDN
@@ -99,18 +79,48 @@
custom_models_js = bundle_all_models()
+ nb_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=True)
+ jl_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=False)
+
+ if notebook_type=='jupyter':
+
+ if not hide_banner:
+ publish_display_data({'text/html': html})
+
+ publish_display_data({
+ JS_MIME_TYPE : nb_js,
+ LOAD_MIME_TYPE : {"script": jl_js}
+ })
+
+ else:
+ _publish_zeppelin_data(html, jl_js)
+
+
+FINALIZE_JS = """
+document.getElementById("%s").textContent = "BokehJS is loading...";
+"""
+
+# TODO (bev) This will eventually go away
+def _publish_zeppelin_data(html, js):
+ print('%html ' + html)
+ print('%html ' + '<script type="text/javascript">' + js + "</script>")
+
+def _loading_js(resources, element_id, custom_models_js, load_timeout=5000, register_mime=True):
+
+ from ..core.templates import AUTOLOAD_NB_JS
+
js = AUTOLOAD_NB_JS.render(
- elementid = '' if hide_banner else element_id,
- js_urls = resources.js_files,
- css_urls = resources.css_files,
- js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),
- css_raw = resources.css_raw_str,
- force = True,
- timeout = load_timeout,
- register_mimetype = register_mimetype
+ elementid = element_id,
+ js_urls = resources.js_files,
+ css_urls = resources.css_files,
+ js_raw = resources.js_raw + [custom_models_js] + [FINALIZE_JS % element_id],
+ css_raw = resources.css_raw_str,
+ force = True,
+ timeout = load_timeout,
+ register_mime = register_mime
)
- return html, js
+ return js
def get_comms(target_name):
''' Create a Jupyter comms object for a specific target, that can
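A rough sketch of the approach the issue proposes, separate from the patch above: publish one `application/javascript` payload that creates its own container div, so nothing relies on how the classic notebook post-processes `text/html`. The `element_id` is made up, and `element` is assumed to be the output-area handle the classic notebook exposes to inline JavaScript.

```python
from IPython.display import publish_display_data

element_id = "bk-notebook-load-0"  # hypothetical id
js = """
var div = document.createElement("div");
div.id = "%s";
div.textContent = "Loading BokehJS ...";
element.append(div);  // `element` is the output area in the classic notebook
""" % element_id

publish_display_data({"application/javascript": js})
```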
| {"golden_diff": "diff --git a/bokeh/util/notebook.py b/bokeh/util/notebook.py\n--- a/bokeh/util/notebook.py\n+++ b/bokeh/util/notebook.py\n@@ -5,8 +5,7 @@\n \n from IPython.display import publish_display_data\n \n-from ..embed import _wrap_in_script_tag\n-\n+JS_MIME_TYPE = 'application/javascript'\n LOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'\n EXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'\n \n@@ -40,33 +39,14 @@\n None\n \n '''\n- nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)\n- lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)\n- if notebook_type=='jupyter':\n- publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),\n- LOAD_MIME_TYPE: {\"script\": lab_js, \"div\": lab_html}})\n- else:\n- _publish_zeppelin_data(lab_html, lab_js)\n \n-\n-FINALIZE_JS = \"\"\"\n-document.getElementById(\"%s\").textContent = \"BokehJS is loading...\";\n-\"\"\"\n-\n-# TODO (bev) This will eventually go away\n-def _publish_zeppelin_data(html, js):\n- print('%html ' + html)\n- print('%html ' + '<script type=\"text/javascript\">' + js + \"</script>\")\n-\n-def _load_notebook_html(resources=None, verbose=False, hide_banner=False,\n- load_timeout=5000, register_mimetype=True):\n global _notebook_loaded\n \n from .. import __version__\n- from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD\n+ from ..core.templates import NOTEBOOK_LOAD\n from ..util.serialization import make_id\n- from ..util.compiler import bundle_all_models\n from ..resources import CDN\n+ from ..util.compiler import bundle_all_models\n \n if resources is None:\n resources = CDN\n@@ -99,18 +79,48 @@\n \n custom_models_js = bundle_all_models()\n \n+ nb_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=True)\n+ jl_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=False)\n+\n+ if notebook_type=='jupyter':\n+\n+ if not hide_banner:\n+ publish_display_data({'text/html': html})\n+\n+ publish_display_data({\n+ JS_MIME_TYPE : nb_js,\n+ LOAD_MIME_TYPE : {\"script\": jl_js}\n+ })\n+\n+ else:\n+ _publish_zeppelin_data(html, jl_js)\n+\n+\n+FINALIZE_JS = \"\"\"\n+document.getElementById(\"%s\").textContent = \"BokehJS is loading...\";\n+\"\"\"\n+\n+# TODO (bev) This will eventually go away\n+def _publish_zeppelin_data(html, js):\n+ print('%html ' + html)\n+ print('%html ' + '<script type=\"text/javascript\">' + js + \"</script>\")\n+\n+def _loading_js(resources, element_id, custom_models_js, load_timeout=5000, register_mime=True):\n+\n+ from ..core.templates import AUTOLOAD_NB_JS\n+\n js = AUTOLOAD_NB_JS.render(\n- elementid = '' if hide_banner else element_id,\n- js_urls = resources.js_files,\n- css_urls = resources.css_files,\n- js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),\n- css_raw = resources.css_raw_str,\n- force = True,\n- timeout = load_timeout,\n- register_mimetype = register_mimetype\n+ elementid = element_id,\n+ js_urls = resources.js_files,\n+ css_urls = resources.css_files,\n+ js_raw = resources.js_raw + [custom_models_js] + [FINALIZE_JS % element_id],\n+ css_raw = resources.css_raw_str,\n+ force = True,\n+ timeout = load_timeout,\n+ register_mime = register_mime\n )\n \n- return html, js\n+ return js\n \n def get_comms(target_name):\n ''' Create a Jupyter comms object for a specific target, that can\n", "issue": "Inline, Minified Resources do not work in classic notebooks\nThis 
is due to an interaction with the classic notebooks use of JQuery, when output is published as `text/html`. New notebook code published a div and a script together as `text/html`. Propose to solve by publishing a single script as `application/javascript` (which should work) that creates the necessary div itself \n", "before_files": [{"content": "''' Functions useful for loading Bokeh code and data in Jupyter/Zeppelin notebooks.\n\n'''\nfrom __future__ import absolute_import\n\nfrom IPython.display import publish_display_data\n\nfrom ..embed import _wrap_in_script_tag\n\nLOAD_MIME_TYPE = 'application/vnd.bokehjs_load.v0+json'\nEXEC_MIME_TYPE = 'application/vnd.bokehjs_exec.v0+json'\n\n_notebook_loaded = None\n\n# TODO (bev) notebook_type and zeppelin bits should be removed after external zeppelin hook available\ndef load_notebook(resources=None, verbose=False, hide_banner=False, load_timeout=5000, notebook_type='jupyter'):\n ''' Prepare the IPython notebook for displaying Bokeh plots.\n\n Args:\n resources (Resource, optional) :\n how and where to load BokehJS from (default: CDN)\n\n verbose (bool, optional) :\n whether to report detailed settings (default: False)\n\n hide_banner (bool, optional):\n whether to hide the Bokeh banner (default: False)\n\n load_timeout (int, optional) :\n Timeout in milliseconds when plots assume load timed out (default: 5000)\n\n notebook_type (string):\n notebook_type (default: jupyter)\n\n .. warning::\n Clearing the output cell containing the published BokehJS\n resources HTML code may cause Bokeh CSS styling to be removed.\n\n Returns:\n None\n\n '''\n nb_html, nb_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout)\n lab_html, lab_js = _load_notebook_html(resources, verbose, hide_banner, load_timeout, register_mimetype=False)\n if notebook_type=='jupyter':\n publish_display_data({'text/html': nb_html + _wrap_in_script_tag(nb_js),\n LOAD_MIME_TYPE: {\"script\": lab_js, \"div\": lab_html}})\n else:\n _publish_zeppelin_data(lab_html, lab_js)\n\n\nFINALIZE_JS = \"\"\"\ndocument.getElementById(\"%s\").textContent = \"BokehJS is loading...\";\n\"\"\"\n\n# TODO (bev) This will eventually go away\ndef _publish_zeppelin_data(html, js):\n print('%html ' + html)\n print('%html ' + '<script type=\"text/javascript\">' + js + \"</script>\")\n\ndef _load_notebook_html(resources=None, verbose=False, hide_banner=False,\n load_timeout=5000, register_mimetype=True):\n global _notebook_loaded\n\n from .. 
import __version__\n from ..core.templates import AUTOLOAD_NB_JS, NOTEBOOK_LOAD\n from ..util.serialization import make_id\n from ..util.compiler import bundle_all_models\n from ..resources import CDN\n\n if resources is None:\n resources = CDN\n\n if resources.mode == 'inline':\n js_info = 'inline'\n css_info = 'inline'\n else:\n js_info = resources.js_files[0] if len(resources.js_files) == 1 else resources.js_files\n css_info = resources.css_files[0] if len(resources.css_files) == 1 else resources.css_files\n\n warnings = [\"Warning: \" + msg['text'] for msg in resources.messages if msg['type'] == 'warn']\n\n if _notebook_loaded and verbose:\n warnings.append('Warning: BokehJS previously loaded')\n\n _notebook_loaded = resources\n\n element_id = make_id()\n\n html = NOTEBOOK_LOAD.render(\n element_id = element_id,\n verbose = verbose,\n js_info = js_info,\n css_info = css_info,\n bokeh_version = __version__,\n warnings = warnings,\n hide_banner = hide_banner,\n )\n\n custom_models_js = bundle_all_models()\n\n js = AUTOLOAD_NB_JS.render(\n elementid = '' if hide_banner else element_id,\n js_urls = resources.js_files,\n css_urls = resources.css_files,\n js_raw = resources.js_raw + [custom_models_js] + ([] if hide_banner else [FINALIZE_JS % element_id]),\n css_raw = resources.css_raw_str,\n force = True,\n timeout = load_timeout,\n register_mimetype = register_mimetype\n )\n\n return html, js\n\ndef get_comms(target_name):\n ''' Create a Jupyter comms object for a specific target, that can\n be used to update Bokeh documents in the Jupyter notebook.\n\n Args:\n target_name (str) : the target name the Comms object should connect to\n\n Returns\n Jupyter Comms\n\n '''\n from ipykernel.comm import Comm\n return Comm(target_name=target_name, data={})\n", "path": "bokeh/util/notebook.py"}]} | 1,902 | 964 |
gh_patches_debug_59717 | rasdani/github-patches | git_diff | pytorch__audio-1339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Making `AudioMetaData` print friendly
The `AudioMetaData` class reports metadata about an audio source. It is, however, not print-friendly.
```python
print(torchaudio.info(src))
>>> <torchaudio.backend.common.AudioMetaData object at 0x7f1bc5cd2890>
```
It would be nice if we could simply print the attributes, the way `dataclass` objects do.
```python
print(torchaudio.info(src))
>>> AudioMetaData(sample_rate=900, encoding="PCM", ...)
```
## Steps
There are two approaches I can think of
1. Add `__str__` method.
2. Use `dataclasses.dataclass`
For 2, the `info` function has to be TorchScript-compatible. This means that its return type `AudioMetaData` has to be TorchScript-able. For this reason, `dataclass` might not be applicable. This can be checked with the following test;
```bash
(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py)
```
## Build and test
Please refer to the [contribution guide](https://github.com/pytorch/audio/blob/master/CONTRIBUTING.md) for how to setup development environment.
To test,
```bash
(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py torchaudio_unittest/backend/sox_io/info_test.py torchaudio_unittest/backend/soundfile_io/info_test.py)
```
</issue>
<code>
[start of torchaudio/backend/common.py]
1 class AudioMetaData:
2 """Return type of ``torchaudio.info`` function.
3
4 This class is used by :ref:`"sox_io" backend<sox_io_backend>` and
5 :ref:`"soundfile" backend with the new interface<soundfile_backend>`.
6
7 :ivar int sample_rate: Sample rate
8 :ivar int num_frames: The number of frames
9 :ivar int num_channels: The number of channels
10 :ivar int bits_per_sample: The number of bits per sample. This is 0 for lossy formats,
11 or when it cannot be accurately inferred.
12 :ivar str encoding: Audio encoding
13 The values encoding can take are one of the following:
14
15 * ``PCM_S``: Signed integer linear PCM
16 * ``PCM_U``: Unsigned integer linear PCM
17 * ``PCM_F``: Floating point linear PCM
18 * ``FLAC``: Flac, Free Lossless Audio Codec
19 * ``ULAW``: Mu-law
20 * ``ALAW``: A-law
21 * ``MP3`` : MP3, MPEG-1 Audio Layer III
22 * ``VORBIS``: OGG Vorbis
23 * ``AMR_WB``: Adaptive Multi-Rate
24 * ``AMR_NB``: Adaptive Multi-Rate Wideband
25 * ``OPUS``: Opus
26 * ``UNKNOWN`` : None of above
27 """
28 def __init__(
29 self,
30 sample_rate: int,
31 num_frames: int,
32 num_channels: int,
33 bits_per_sample: int,
34 encoding: str,
35 ):
36 self.sample_rate = sample_rate
37 self.num_frames = num_frames
38 self.num_channels = num_channels
39 self.bits_per_sample = bits_per_sample
40 self.encoding = encoding
41
[end of torchaudio/backend/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchaudio/backend/common.py b/torchaudio/backend/common.py
--- a/torchaudio/backend/common.py
+++ b/torchaudio/backend/common.py
@@ -38,3 +38,14 @@
self.num_channels = num_channels
self.bits_per_sample = bits_per_sample
self.encoding = encoding
+
+ def __str__(self):
+ return (
+ f"AudioMetaData("
+ f"sample_rate={self.sample_rate}, "
+ f"num_frames={self.num_frames}, "
+ f"num_channels={self.num_channels}, "
+ f"bits_per_sample={self.bits_per_sample}, "
+ f"encoding={self.encoding}"
+ f")"
+ )
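With that `__str__` in place, printing the metadata object gives the readable summary the issue asks for; the numbers below are made up for illustration.

```python
meta = AudioMetaData(
    sample_rate=16000,
    num_frames=48000,
    num_channels=1,
    bits_per_sample=16,
    encoding="PCM_S",
)
print(meta)
# AudioMetaData(sample_rate=16000, num_frames=48000, num_channels=1, bits_per_sample=16, encoding=PCM_S)
```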
| {"golden_diff": "diff --git a/torchaudio/backend/common.py b/torchaudio/backend/common.py\n--- a/torchaudio/backend/common.py\n+++ b/torchaudio/backend/common.py\n@@ -38,3 +38,14 @@\n self.num_channels = num_channels\n self.bits_per_sample = bits_per_sample\n self.encoding = encoding\n+\n+ def __str__(self):\n+ return (\n+ f\"AudioMetaData(\"\n+ f\"sample_rate={self.sample_rate}, \"\n+ f\"num_frames={self.num_frames}, \"\n+ f\"num_channels={self.num_channels}, \"\n+ f\"bits_per_sample={self.bits_per_sample}, \"\n+ f\"encoding={self.encoding}\"\n+ f\")\"\n+ )\n", "issue": "Making `AudioMetaData` print friendly\n`AudioMetaData` class reports meta-data of audio source. It is however not print friendly.\r\n\r\n```python\r\nprint(torchaudio.info(src))\r\n>>> <torchaudio.backend.common.AudioMetaData object at 0x7f1bc5cd2890>\r\n```\r\n\r\nIt is nice if we can simply print the attributes like `dataclass` objects do.\r\n\r\n```python\r\nprint(torchaudio.info(src))\r\n>>> AudioMetaData(sample_rate=900, encoding=\"PCM\", ...)\r\n```\r\n\r\n## Steps\r\n\r\nThere are two approaches I can think of\r\n1. Add `__str__` method.\r\n2. Use `dataclasses.dataclass`\r\n\r\nFor 2, the `info` function has to be TorchScript-compatible. This means that its return type `AudioMetaData` has to be TorchScript-able. For this reason, `dataclass` might not be applicable. This can be checked with the following test;\r\n\r\n```bash\r\n(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py)\r\n```\r\n\r\n## Build and test\r\n\r\nPlease refer to the [contribution guide](https://github.com/pytorch/audio/blob/master/CONTRIBUTING.md) for how to setup development environment.\r\n\r\nTo test, \r\n\r\n```bash\r\n(cd test && pytest torchaudio_unittest/backend/sox_io/torchscript_test.py torchaudio_unittest/backend/sox_io/info_test.py torchaudio_unittest/backend/soundfile_io/info_test.py)\r\n```\n", "before_files": [{"content": "class AudioMetaData:\n \"\"\"Return type of ``torchaudio.info`` function.\n\n This class is used by :ref:`\"sox_io\" backend<sox_io_backend>` and\n :ref:`\"soundfile\" backend with the new interface<soundfile_backend>`.\n\n :ivar int sample_rate: Sample rate\n :ivar int num_frames: The number of frames\n :ivar int num_channels: The number of channels\n :ivar int bits_per_sample: The number of bits per sample. This is 0 for lossy formats,\n or when it cannot be accurately inferred.\n :ivar str encoding: Audio encoding\n The values encoding can take are one of the following:\n\n * ``PCM_S``: Signed integer linear PCM\n * ``PCM_U``: Unsigned integer linear PCM\n * ``PCM_F``: Floating point linear PCM\n * ``FLAC``: Flac, Free Lossless Audio Codec\n * ``ULAW``: Mu-law\n * ``ALAW``: A-law\n * ``MP3`` : MP3, MPEG-1 Audio Layer III\n * ``VORBIS``: OGG Vorbis\n * ``AMR_WB``: Adaptive Multi-Rate\n * ``AMR_NB``: Adaptive Multi-Rate Wideband\n * ``OPUS``: Opus\n * ``UNKNOWN`` : None of above\n \"\"\"\n def __init__(\n self,\n sample_rate: int,\n num_frames: int,\n num_channels: int,\n bits_per_sample: int,\n encoding: str,\n ):\n self.sample_rate = sample_rate\n self.num_frames = num_frames\n self.num_channels = num_channels\n self.bits_per_sample = bits_per_sample\n self.encoding = encoding\n", "path": "torchaudio/backend/common.py"}]} | 1,312 | 164 |
gh_patches_debug_4769 | rasdani/github-patches | git_diff | spotify__luigi-1447 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scheduler only hosts on unix socket when run in the background
Support for hosting the central scheduler on a unix socket was added, which is nice, but the scheduler ignores the `--unix-socket` argument from the command line when `--background` is not also supplied.
This will work properly, and the scheduler will listen on the provided unix socket:
```
luigid --unix-socket /path/to/socket --background
```
With this command, the scheduler will still listen on the default port (8082):
```
luigid --unix-socket /path/to/socket
```
Fixing this would be a simple matter of passing the `unix_socket` argument onto the call to `server.run` in the case where the server is not daemonized, but was there a reason this functionality was left out in the first place? If so, it probably ought to be in the documentation; as is, reading it gives me the impression that I should be able to tell the scheduler to listen on a unix socket regardless of whether it's running in the background.
</issue>
<code>
[start of luigi/cmdline.py]
1 import os
2 import argparse
3 import logging
4 import sys
5
6 from luigi.retcodes import run_with_retcodes
7
8
9 def luigi_run(argv=sys.argv[1:]):
10 run_with_retcodes(argv)
11
12
13 def luigid(argv=sys.argv[1:]):
14 import luigi.server
15 import luigi.process
16 import luigi.configuration
17 parser = argparse.ArgumentParser(description=u'Central luigi server')
18 parser.add_argument(u'--background', help=u'Run in background mode', action='store_true')
19 parser.add_argument(u'--pidfile', help=u'Write pidfile')
20 parser.add_argument(u'--logdir', help=u'log directory')
21 parser.add_argument(u'--state-path', help=u'Pickled state file')
22 parser.add_argument(u'--address', help=u'Listening interface')
23 parser.add_argument(u'--unix-socket', help=u'Unix socket path')
24 parser.add_argument(u'--port', default=8082, help=u'Listening port')
25
26 opts = parser.parse_args(argv)
27
28 if opts.state_path:
29 config = luigi.configuration.get_config()
30 config.set('scheduler', 'state_path', opts.state_path)
31
32 if opts.background:
33 # daemonize sets up logging to spooled log files
34 logging.getLogger().setLevel(logging.INFO)
35 luigi.process.daemonize(luigi.server.run, api_port=opts.port,
36 address=opts.address, pidfile=opts.pidfile,
37 logdir=opts.logdir, unix_socket=opts.unix_socket)
38 else:
39 if opts.logdir:
40 logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format(),
41 filename=os.path.join(opts.logdir, "luigi-server.log"))
42 else:
43 logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())
44 luigi.server.run(api_port=opts.port, address=opts.address)
45
[end of luigi/cmdline.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/luigi/cmdline.py b/luigi/cmdline.py
--- a/luigi/cmdline.py
+++ b/luigi/cmdline.py
@@ -41,4 +41,4 @@
filename=os.path.join(opts.logdir, "luigi-server.log"))
else:
logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())
- luigi.server.run(api_port=opts.port, address=opts.address)
+ luigi.server.run(api_port=opts.port, address=opts.address, unix_socket=opts.unix_socket)
| {"golden_diff": "diff --git a/luigi/cmdline.py b/luigi/cmdline.py\n--- a/luigi/cmdline.py\n+++ b/luigi/cmdline.py\n@@ -41,4 +41,4 @@\n filename=os.path.join(opts.logdir, \"luigi-server.log\"))\n else:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())\n- luigi.server.run(api_port=opts.port, address=opts.address)\n+ luigi.server.run(api_port=opts.port, address=opts.address, unix_socket=opts.unix_socket)\n", "issue": "Scheduler only hosts on unix socket when run in the background\nSupport for hosting the central scheduler on a unix socket was added, which is nice, but the scheduler ignores the `--unix-socket` argument from the command line when `--background` is not also supplied. \n\nThis will work properly, and the scheduler will listen on the provided unix socket:\n\n```\nluigid --unix-socket /path/to/socket --background\n```\n\nWith this command, the scheduler will still listen on the default port (8082):\n\n```\nluigid --unix-socket /path/to/socket\n```\n\nFixing this would be a simple matter of passing the `unix_socket` argument onto the call to `server.run` in the case where the server is not daemonized, but was there a reason this functionality was left out in the first place? If so, it probably ought to be in the documentation; as is, reading it gives me the impression that I should be able to tell the scheduler to listen on a unix socket regardless of whether it's running in the background.\n\n", "before_files": [{"content": "import os\nimport argparse\nimport logging\nimport sys\n\nfrom luigi.retcodes import run_with_retcodes\n\n\ndef luigi_run(argv=sys.argv[1:]):\n run_with_retcodes(argv)\n\n\ndef luigid(argv=sys.argv[1:]):\n import luigi.server\n import luigi.process\n import luigi.configuration\n parser = argparse.ArgumentParser(description=u'Central luigi server')\n parser.add_argument(u'--background', help=u'Run in background mode', action='store_true')\n parser.add_argument(u'--pidfile', help=u'Write pidfile')\n parser.add_argument(u'--logdir', help=u'log directory')\n parser.add_argument(u'--state-path', help=u'Pickled state file')\n parser.add_argument(u'--address', help=u'Listening interface')\n parser.add_argument(u'--unix-socket', help=u'Unix socket path')\n parser.add_argument(u'--port', default=8082, help=u'Listening port')\n\n opts = parser.parse_args(argv)\n\n if opts.state_path:\n config = luigi.configuration.get_config()\n config.set('scheduler', 'state_path', opts.state_path)\n\n if opts.background:\n # daemonize sets up logging to spooled log files\n logging.getLogger().setLevel(logging.INFO)\n luigi.process.daemonize(luigi.server.run, api_port=opts.port,\n address=opts.address, pidfile=opts.pidfile,\n logdir=opts.logdir, unix_socket=opts.unix_socket)\n else:\n if opts.logdir:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format(),\n filename=os.path.join(opts.logdir, \"luigi-server.log\"))\n else:\n logging.basicConfig(level=logging.INFO, format=luigi.process.get_log_format())\n luigi.server.run(api_port=opts.port, address=opts.address)\n", "path": "luigi/cmdline.py"}]} | 1,245 | 125 |
gh_patches_debug_20043 | rasdani/github-patches | git_diff | archlinux__archinstall-66 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue installing package groups (kde-applications for instance)
As mentioned in #61, support for package groups doesn't work.
The idea here is that it should be supported, we simply never verified that the [archinstall.find_package()](https://github.com/Torxed/archinstall/blob/master/archinstall/lib/packages.py#L7-L17) function can verify those, and apparently it can't. So we have to use another API endpoint or multiple to support this.
*The backplane supports it already, as the packages are sent as a unfiltered string to `pacman -S` more or less.*
</issue>
<code>
[start of archinstall/lib/packages.py]
1 import urllib.request, urllib.parse
2 import ssl, json
3 from .exceptions import *
4
5 BASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'
6
7 def find_package(name):
8 """
9 Finds a specific package via the package database.
10 It makes a simple web-request, which might be a bit slow.
11 """
12 ssl_context = ssl.create_default_context()
13 ssl_context.check_hostname = False
14 ssl_context.verify_mode = ssl.CERT_NONE
15 response = urllib.request.urlopen(BASE_URL.format(package=name), context=ssl_context)
16 data = response.read().decode('UTF-8')
17 return json.loads(data)
18
19 def find_packages(*names):
20 """
21 This function returns the search results for many packages.
22 The function itself is rather slow, so consider not sending to
23 many packages to the search query.
24 """
25 result = {}
26 for package in names:
27 result[package] = find_package(package)
28 return result
29
30 def validate_package_list(packages :list):
31 """
32 Validates a list of given packages.
33 Raises `RequirementError` if one or more packages are not found.
34 """
35 invalid_packages = []
36 for package in packages:
37 if not find_package(package)['results']:
38 invalid_packages.append(package)
39
40 if invalid_packages:
41 raise RequirementError(f"Invalid package names: {invalid_packages}")
42
43 return True
[end of archinstall/lib/packages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/archinstall/lib/packages.py b/archinstall/lib/packages.py
--- a/archinstall/lib/packages.py
+++ b/archinstall/lib/packages.py
@@ -3,6 +3,23 @@
from .exceptions import *
BASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'
+BASE_GROUP_URL = 'https://www.archlinux.org/groups/x86_64/{group}/'
+
+def find_group(name):
+ ssl_context = ssl.create_default_context()
+ ssl_context.check_hostname = False
+ ssl_context.verify_mode = ssl.CERT_NONE
+ try:
+ response = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)
+ except urllib.error.HTTPError as err:
+ if err.code == 404:
+ return False
+ else:
+ raise err
+
+ # Just to be sure some code didn't slip through the exception
+ if response.code == 200:
+ return True
def find_package(name):
"""
@@ -34,7 +51,7 @@
"""
invalid_packages = []
for package in packages:
- if not find_package(package)['results']:
+ if not find_package(package)['results'] and not find_group(package):
invalid_packages.append(package)
if invalid_packages:
| {"golden_diff": "diff --git a/archinstall/lib/packages.py b/archinstall/lib/packages.py\n--- a/archinstall/lib/packages.py\n+++ b/archinstall/lib/packages.py\n@@ -3,6 +3,23 @@\n from .exceptions import *\n \n BASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'\n+BASE_GROUP_URL = 'https://www.archlinux.org/groups/x86_64/{group}/'\n+\n+def find_group(name):\n+\tssl_context = ssl.create_default_context()\n+\tssl_context.check_hostname = False\n+\tssl_context.verify_mode = ssl.CERT_NONE\n+\ttry:\n+\t\tresponse = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)\n+\texcept urllib.error.HTTPError as err:\n+\t\tif err.code == 404:\n+\t\t\treturn False\n+\t\telse:\n+\t\t\traise err\n+\t\n+\t# Just to be sure some code didn't slip through the exception\n+\tif response.code == 200:\n+\t\treturn True\n \n def find_package(name):\n \t\"\"\"\n@@ -34,7 +51,7 @@\n \t\"\"\"\n \tinvalid_packages = []\n \tfor package in packages:\n-\t\tif not find_package(package)['results']:\n+\t\tif not find_package(package)['results'] and not find_group(package):\n \t\t\tinvalid_packages.append(package)\n \t\n \tif invalid_packages:\n", "issue": "Issue installing package groups (kde-applications for instance)\nAs mentioned in #61, support for package groups doesn't work.\r\nThe idea here is that it should be supported, we simply never verified that the [archinstall.find_package()](https://github.com/Torxed/archinstall/blob/master/archinstall/lib/packages.py#L7-L17) function can verify those, and apparently it can't. So we have to use another API endpoint or multiple to support this.\r\n\r\n*The backplane supports it already, as the packages are sent as a unfiltered string to `pacman -S` more or less.*\n", "before_files": [{"content": "import urllib.request, urllib.parse\nimport ssl, json\nfrom .exceptions import *\n\nBASE_URL = 'https://www.archlinux.org/packages/search/json/?name={package}'\n\ndef find_package(name):\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\tresponse = urllib.request.urlopen(BASE_URL.format(package=name), context=ssl_context)\n\tdata = response.read().decode('UTF-8')\n\treturn json.loads(data)\n\ndef find_packages(*names):\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tresult[package] = find_package(package)\n\treturn result\n\ndef validate_package_list(packages :list):\n\t\"\"\"\n\tValidates a list of given packages.\n\tRaises `RequirementError` if one or more packages are not found.\n\t\"\"\"\n\tinvalid_packages = []\n\tfor package in packages:\n\t\tif not find_package(package)['results']:\n\t\t\tinvalid_packages.append(package)\n\t\n\tif invalid_packages:\n\t\traise RequirementError(f\"Invalid package names: {invalid_packages}\")\n\n\treturn True", "path": "archinstall/lib/packages.py"}]} | 1,045 | 292 |
gh_patches_debug_63087 | rasdani/github-patches | git_diff | translate__pootle-5160 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ensure tests can be run with `--reuse-db`
When iterating over a test that require DB access (or a few of them), currently a site-wide setup is made which in such scenario ends up being relatively time-consuming and tedious.
Ideally one could use [pytest-django's `--reuse-db` flag](http://pytest-django.readthedocs.org/en/latest/database.html#reuse-db-reuse-the-testing-database-between-test-runs) to considerably reduce setup time on test iterations, however at the current state of things such feature cannot be used due to the way the Pootle test DB environment is setup.
Let's try to fix that so we can benefit from `--reuse-db`.
</issue>
<code>
[start of pytest_pootle/plugin.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import os
10 import shutil
11 from pkgutil import iter_modules
12
13 import pytest
14
15 from . import fixtures
16 from .env import PootleTestEnv
17 from .fixtures import models as fixtures_models
18 from .fixtures.core import management as fixtures_core_management
19 from .fixtures.core import utils as fixtures_core_utils
20 from .fixtures import formats as fixtures_formats
21 from .fixtures import pootle_fs as fixtures_fs
22
23
24 def _load_fixtures(*modules):
25 for mod in modules:
26 path = mod.__path__
27 prefix = '%s.' % mod.__name__
28
29 for loader_, name, is_pkg in iter_modules(path, prefix):
30 if not is_pkg:
31 yield name
32
33
34 @pytest.fixture
35 def po_test_dir(request, tmpdir):
36 po_dir = str(tmpdir.mkdir("po"))
37
38 def rm_po_dir():
39 if os.path.exists(po_dir):
40 shutil.rmtree(po_dir)
41
42 request.addfinalizer(rm_po_dir)
43 return po_dir
44
45
46 @pytest.fixture
47 def po_directory(request, po_test_dir, settings):
48 """Sets up a tmp directory for PO files."""
49 from pootle_store.models import fs
50
51 translation_directory = settings.POOTLE_TRANSLATION_DIRECTORY
52
53 # Adjust locations
54 settings.POOTLE_TRANSLATION_DIRECTORY = po_test_dir
55 fs.location = po_test_dir
56
57 def _cleanup():
58 settings.POOTLE_TRANSLATION_DIRECTORY = translation_directory
59
60 request.addfinalizer(_cleanup)
61
62
63 @pytest.fixture(scope='session')
64 def tests_use_db(request):
65 return bool(
66 [item for item in request.node.items
67 if item.get_marker('django_db')])
68
69
70 @pytest.fixture(scope='session')
71 def tests_use_vfolders(request):
72 return bool(
73 [item for item in request.node.items
74 if item.get_marker('pootle_vfolders')])
75
76
77 @pytest.fixture(scope='session')
78 def tests_use_migration(request, tests_use_db):
79 return bool(
80 tests_use_db
81 and [item for item in request.node.items
82 if item.get_marker('django_migration')])
83
84
85 @pytest.fixture(autouse=True, scope='session')
86 def setup_db_if_needed(request, tests_use_db):
87 """Sets up the site DB only if tests requested to use the DB (autouse)."""
88 if tests_use_db:
89 return request.getfuncargvalue('post_db_setup')
90
91
92 @pytest.fixture(scope='session')
93 def post_db_setup(translations_directory, django_db_setup, django_db_blocker,
94 tests_use_db, tests_use_vfolders, request):
95 """Sets up the site DB for the test session."""
96 if tests_use_db:
97 with django_db_blocker.unblock():
98 PootleTestEnv().setup(
99 vfolders=tests_use_vfolders)
100
101
102 @pytest.fixture(scope='session')
103 def django_db_use_migrations(tests_use_migration):
104 return tests_use_migration
105
106
107 pytest_plugins = tuple(
108 _load_fixtures(
109 fixtures,
110 fixtures_core_management,
111 fixtures_core_utils,
112 fixtures_formats,
113 fixtures_models,
114 fixtures_fs))
115
[end of pytest_pootle/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytest_pootle/plugin.py b/pytest_pootle/plugin.py
--- a/pytest_pootle/plugin.py
+++ b/pytest_pootle/plugin.py
@@ -85,7 +85,7 @@
@pytest.fixture(autouse=True, scope='session')
def setup_db_if_needed(request, tests_use_db):
"""Sets up the site DB only if tests requested to use the DB (autouse)."""
- if tests_use_db:
+ if tests_use_db and not request.config.getvalue('reuse_db'):
return request.getfuncargvalue('post_db_setup')
| {"golden_diff": "diff --git a/pytest_pootle/plugin.py b/pytest_pootle/plugin.py\n--- a/pytest_pootle/plugin.py\n+++ b/pytest_pootle/plugin.py\n@@ -85,7 +85,7 @@\n @pytest.fixture(autouse=True, scope='session')\n def setup_db_if_needed(request, tests_use_db):\n \"\"\"Sets up the site DB only if tests requested to use the DB (autouse).\"\"\"\n- if tests_use_db:\n+ if tests_use_db and not request.config.getvalue('reuse_db'):\n return request.getfuncargvalue('post_db_setup')\n", "issue": "Ensure tests can be run with `--reuse-db`\nWhen iterating over a test that require DB access (or a few of them), currently a site-wide setup is made which in such scenario ends up being relatively time-consuming and tedious.\n\nIdeally one could use [pytest-django's `--reuse-db` flag](http://pytest-django.readthedocs.org/en/latest/database.html#reuse-db-reuse-the-testing-database-between-test-runs) to considerably reduce setup time on test iterations, however at the current state of things such feature cannot be used due to the way the Pootle test DB environment is setup.\n\nLet's try to fix that so we can benefit from `--reuse-db`.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\nimport shutil\nfrom pkgutil import iter_modules\n\nimport pytest\n\nfrom . import fixtures\nfrom .env import PootleTestEnv\nfrom .fixtures import models as fixtures_models\nfrom .fixtures.core import management as fixtures_core_management\nfrom .fixtures.core import utils as fixtures_core_utils\nfrom .fixtures import formats as fixtures_formats\nfrom .fixtures import pootle_fs as fixtures_fs\n\n\ndef _load_fixtures(*modules):\n for mod in modules:\n path = mod.__path__\n prefix = '%s.' 
% mod.__name__\n\n for loader_, name, is_pkg in iter_modules(path, prefix):\n if not is_pkg:\n yield name\n\n\[email protected]\ndef po_test_dir(request, tmpdir):\n po_dir = str(tmpdir.mkdir(\"po\"))\n\n def rm_po_dir():\n if os.path.exists(po_dir):\n shutil.rmtree(po_dir)\n\n request.addfinalizer(rm_po_dir)\n return po_dir\n\n\[email protected]\ndef po_directory(request, po_test_dir, settings):\n \"\"\"Sets up a tmp directory for PO files.\"\"\"\n from pootle_store.models import fs\n\n translation_directory = settings.POOTLE_TRANSLATION_DIRECTORY\n\n # Adjust locations\n settings.POOTLE_TRANSLATION_DIRECTORY = po_test_dir\n fs.location = po_test_dir\n\n def _cleanup():\n settings.POOTLE_TRANSLATION_DIRECTORY = translation_directory\n\n request.addfinalizer(_cleanup)\n\n\[email protected](scope='session')\ndef tests_use_db(request):\n return bool(\n [item for item in request.node.items\n if item.get_marker('django_db')])\n\n\[email protected](scope='session')\ndef tests_use_vfolders(request):\n return bool(\n [item for item in request.node.items\n if item.get_marker('pootle_vfolders')])\n\n\[email protected](scope='session')\ndef tests_use_migration(request, tests_use_db):\n return bool(\n tests_use_db\n and [item for item in request.node.items\n if item.get_marker('django_migration')])\n\n\[email protected](autouse=True, scope='session')\ndef setup_db_if_needed(request, tests_use_db):\n \"\"\"Sets up the site DB only if tests requested to use the DB (autouse).\"\"\"\n if tests_use_db:\n return request.getfuncargvalue('post_db_setup')\n\n\[email protected](scope='session')\ndef post_db_setup(translations_directory, django_db_setup, django_db_blocker,\n tests_use_db, tests_use_vfolders, request):\n \"\"\"Sets up the site DB for the test session.\"\"\"\n if tests_use_db:\n with django_db_blocker.unblock():\n PootleTestEnv().setup(\n vfolders=tests_use_vfolders)\n\n\[email protected](scope='session')\ndef django_db_use_migrations(tests_use_migration):\n return tests_use_migration\n\n\npytest_plugins = tuple(\n _load_fixtures(\n fixtures,\n fixtures_core_management,\n fixtures_core_utils,\n fixtures_formats,\n fixtures_models,\n fixtures_fs))\n", "path": "pytest_pootle/plugin.py"}]} | 1,639 | 130 |
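The single-line guard in the diff works because pytest-django registers `--reuse-db` as an ordinary pytest option, so it can be read from the session-scoped fixture through `request.config`; when the flag is present the expensive site-wide fixture load is skipped and the kept database is used as-is. A condensed sketch of the pattern, with the fixture body shortened to the relevant check:

```python
import pytest

@pytest.fixture(autouse=True, scope='session')
def setup_db_if_needed(request, tests_use_db):
    """Populate the site DB once, unless --reuse-db asked to keep the old one."""
    if tests_use_db and not request.config.getvalue('reuse_db'):
        # Only rebuild the site fixtures on a fresh test database.
        return request.getfuncargvalue('post_db_setup')
```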
gh_patches_debug_15855 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-10668 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django: adapt admin code for 3.x
It seems that we missed an upgrade to make it fully compatible with Django 3.x
We are using `admin.ACTION_CHECKBOX_NAME` when it was deprecated and it was removed already:
> The compatibility import of django.contrib.admin.helpers.ACTION_CHECKBOX_NAME in django.contrib.admin is removed.
(from https://docs.djangoproject.com/en/4.0/releases/3.1/#id1)
The code lives at https://github.com/readthedocs/readthedocs.org/blob/e94c26074e9abdf7056b4e6502c52f8a6b128055/readthedocs/notifications/views.py#L48
</issue>
<code>
[start of readthedocs/notifications/views.py]
1 """Django views for the notifications app."""
2 from django.contrib import admin, messages
3 from django.http import HttpResponseRedirect
4 from django.views.generic import FormView
5
6 from .forms import SendNotificationForm
7
8
9 class SendNotificationView(FormView):
10
11 """
12 Form view for sending notifications to users from admin pages.
13
14 Accepts the following additional parameters:
15
16 :param queryset: Queryset to use to determine the users to send emails to
17 :param action_name: Name of the action to pass to the form template,
18 determines the action to pass back to the admin view
19 :param notification_classes: List of :py:class:`Notification` classes to
20 display in the form
21 """
22
23 form_class = SendNotificationForm
24 template_name = "notifications/send_notification_form.html"
25 action_name = "send_email"
26 notification_classes = []
27
28 def get_form_kwargs(self):
29 """
30 Override form kwargs based on input fields.
31
32 The admin posts to this view initially, so detect the send button on
33 form post variables. Drop additional fields if we see the send button.
34 """
35 kwargs = super().get_form_kwargs()
36 kwargs["notification_classes"] = self.notification_classes
37 if "send" not in self.request.POST:
38 kwargs.pop("data", None)
39 kwargs.pop("files", None)
40 return kwargs
41
42 def get_initial(self):
43 """Add selected ids to initial form data."""
44 initial = super().get_initial()
45 initial["_selected_action"] = self.request.POST.getlist(
46 admin.ACTION_CHECKBOX_NAME,
47 )
48 return initial
49
50 def form_valid(self, form):
51 """If form is valid, send notification to recipients."""
52 count = 0
53 notification_cls = form.cleaned_data["source"]
54 for obj in self.get_queryset().all():
55 for recipient in self.get_object_recipients(obj):
56 notification = notification_cls(
57 context_object=obj,
58 request=self.request,
59 user=recipient,
60 )
61 notification.send()
62 count += 1
63 if count == 0:
64 self.message_user("No recipients to send to", level=messages.ERROR)
65 else:
66 self.message_user("Queued {} messages".format(count))
67 return HttpResponseRedirect(self.request.get_full_path())
68
69 def get_object_recipients(self, obj):
70 """
71 Iterate over queryset objects and return User objects.
72
73 This allows for non-User querysets to pass back a list of Users to send
74 to. By default, assume we're working with :py:class:`User` objects and
75 just yield the single object.
76
77 For example, this could be made to return project owners with::
78
79 for owner in AdminPermission.members(project):
80 yield owner
81
82 :param obj: object from queryset, type is dependent on model class
83 :rtype: django.contrib.auth.models.User
84 """
85 yield obj
86
87 def get_queryset(self):
88 return self.kwargs.get("queryset")
89
90 def get_context_data(self, **kwargs):
91 """Return queryset in context."""
92 context = super().get_context_data(**kwargs)
93 recipients = []
94 for obj in self.get_queryset().all():
95 recipients.extend(self.get_object_recipients(obj))
96 context["recipients"] = recipients
97 context["action_name"] = self.action_name
98 return context
99
100 def message_user(
101 self,
102 message,
103 level=messages.INFO,
104 extra_tags="",
105 fail_silently=False,
106 ):
107 """
108 Implementation of.
109
110 :py:meth:`django.contrib.admin.options.ModelAdmin.message_user`
111
112 Send message through messages framework
113 """
114 # TODO generalize this or check if implementation in ModelAdmin is
115 # usable here
116 messages.add_message(
117 self.request,
118 level,
119 message,
120 extra_tags=extra_tags,
121 fail_silently=fail_silently,
122 )
123
[end of readthedocs/notifications/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/notifications/views.py b/readthedocs/notifications/views.py
--- a/readthedocs/notifications/views.py
+++ b/readthedocs/notifications/views.py
@@ -1,5 +1,5 @@
"""Django views for the notifications app."""
-from django.contrib import admin, messages
+from django.contrib import messages
from django.http import HttpResponseRedirect
from django.views.generic import FormView
@@ -42,9 +42,7 @@
def get_initial(self):
"""Add selected ids to initial form data."""
initial = super().get_initial()
- initial["_selected_action"] = self.request.POST.getlist(
- admin.ACTION_CHECKBOX_NAME,
- )
+ initial["_selected_action"] = self.request.POST.getlist("_selected_action")
return initial
def form_valid(self, form):
| {"golden_diff": "diff --git a/readthedocs/notifications/views.py b/readthedocs/notifications/views.py\n--- a/readthedocs/notifications/views.py\n+++ b/readthedocs/notifications/views.py\n@@ -1,5 +1,5 @@\n \"\"\"Django views for the notifications app.\"\"\"\n-from django.contrib import admin, messages\n+from django.contrib import messages\n from django.http import HttpResponseRedirect\n from django.views.generic import FormView\n \n@@ -42,9 +42,7 @@\n def get_initial(self):\n \"\"\"Add selected ids to initial form data.\"\"\"\n initial = super().get_initial()\n- initial[\"_selected_action\"] = self.request.POST.getlist(\n- admin.ACTION_CHECKBOX_NAME,\n- )\n+ initial[\"_selected_action\"] = self.request.POST.getlist(\"_selected_action\")\n return initial\n \n def form_valid(self, form):\n", "issue": "Django: adapt admin code for 3.x\nIt seems that we missed an upgrade to make it fully compatible with Django 3.x\r\n\r\nWe are using `admin.ACTION_CHECKBOX_NAME` when it was deprecated and it was removed already:\r\n\r\n> The compatibility import of django.contrib.admin.helpers.ACTION_CHECKBOX_NAME in django.contrib.admin is removed.\r\n\r\n(from https://docs.djangoproject.com/en/4.0/releases/3.1/#id1)\r\n\r\nThe code lives at https://github.com/readthedocs/readthedocs.org/blob/e94c26074e9abdf7056b4e6502c52f8a6b128055/readthedocs/notifications/views.py#L48\n", "before_files": [{"content": "\"\"\"Django views for the notifications app.\"\"\"\nfrom django.contrib import admin, messages\nfrom django.http import HttpResponseRedirect\nfrom django.views.generic import FormView\n\nfrom .forms import SendNotificationForm\n\n\nclass SendNotificationView(FormView):\n\n \"\"\"\n Form view for sending notifications to users from admin pages.\n\n Accepts the following additional parameters:\n\n :param queryset: Queryset to use to determine the users to send emails to\n :param action_name: Name of the action to pass to the form template,\n determines the action to pass back to the admin view\n :param notification_classes: List of :py:class:`Notification` classes to\n display in the form\n \"\"\"\n\n form_class = SendNotificationForm\n template_name = \"notifications/send_notification_form.html\"\n action_name = \"send_email\"\n notification_classes = []\n\n def get_form_kwargs(self):\n \"\"\"\n Override form kwargs based on input fields.\n\n The admin posts to this view initially, so detect the send button on\n form post variables. 
Drop additional fields if we see the send button.\n \"\"\"\n kwargs = super().get_form_kwargs()\n kwargs[\"notification_classes\"] = self.notification_classes\n if \"send\" not in self.request.POST:\n kwargs.pop(\"data\", None)\n kwargs.pop(\"files\", None)\n return kwargs\n\n def get_initial(self):\n \"\"\"Add selected ids to initial form data.\"\"\"\n initial = super().get_initial()\n initial[\"_selected_action\"] = self.request.POST.getlist(\n admin.ACTION_CHECKBOX_NAME,\n )\n return initial\n\n def form_valid(self, form):\n \"\"\"If form is valid, send notification to recipients.\"\"\"\n count = 0\n notification_cls = form.cleaned_data[\"source\"]\n for obj in self.get_queryset().all():\n for recipient in self.get_object_recipients(obj):\n notification = notification_cls(\n context_object=obj,\n request=self.request,\n user=recipient,\n )\n notification.send()\n count += 1\n if count == 0:\n self.message_user(\"No recipients to send to\", level=messages.ERROR)\n else:\n self.message_user(\"Queued {} messages\".format(count))\n return HttpResponseRedirect(self.request.get_full_path())\n\n def get_object_recipients(self, obj):\n \"\"\"\n Iterate over queryset objects and return User objects.\n\n This allows for non-User querysets to pass back a list of Users to send\n to. By default, assume we're working with :py:class:`User` objects and\n just yield the single object.\n\n For example, this could be made to return project owners with::\n\n for owner in AdminPermission.members(project):\n yield owner\n\n :param obj: object from queryset, type is dependent on model class\n :rtype: django.contrib.auth.models.User\n \"\"\"\n yield obj\n\n def get_queryset(self):\n return self.kwargs.get(\"queryset\")\n\n def get_context_data(self, **kwargs):\n \"\"\"Return queryset in context.\"\"\"\n context = super().get_context_data(**kwargs)\n recipients = []\n for obj in self.get_queryset().all():\n recipients.extend(self.get_object_recipients(obj))\n context[\"recipients\"] = recipients\n context[\"action_name\"] = self.action_name\n return context\n\n def message_user(\n self,\n message,\n level=messages.INFO,\n extra_tags=\"\",\n fail_silently=False,\n ):\n \"\"\"\n Implementation of.\n\n :py:meth:`django.contrib.admin.options.ModelAdmin.message_user`\n\n Send message through messages framework\n \"\"\"\n # TODO generalize this or check if implementation in ModelAdmin is\n # usable here\n messages.add_message(\n self.request,\n level,\n message,\n extra_tags=extra_tags,\n fail_silently=fail_silently,\n )\n", "path": "readthedocs/notifications/views.py"}]} | 1,771 | 179 |
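The removed name was only a compatibility re-export; the underlying constant lives in `django.contrib.admin.helpers` and its value is the literal form-field name `"_selected_action"`, which is why the patch can read the POST list directly. If the symbolic name is preferred over the hard-coded string, a sketch of the alternative, assuming the helpers module still exports the constant on the Django version in use:

```python
from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME  # == "_selected_action"
from django.views.generic import FormView

class SendNotificationView(FormView):
    def get_initial(self):
        initial = super().get_initial()
        initial["_selected_action"] = self.request.POST.getlist(ACTION_CHECKBOX_NAME)
        return initial
```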
gh_patches_debug_27494 | rasdani/github-patches | git_diff | shuup__shuup-1977 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Admin UI: Fix media browser file upload
An exception is raised when you manually select the file while uploading it. To reproduce:
- Go to Products
- Select/Create a product
- Go to Files section
- Click over the dropzone area
- In the media browser window, click Upload
- Select a file and check the console (error)

</issue>
<code>
[start of shuup/admin/browser_config.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of Shuup.
3 #
4 # Copyright (c) 2012-2019, Shoop Commerce Ltd. All rights reserved.
5 #
6 # This source code is licensed under the OSL-3.0 license found in the
7 # LICENSE file in the root directory of this source tree.
8 from django.conf import settings
9
10 from shuup.utils.i18n import get_current_babel_locale
11
12
13 class BaseBrowserConfigProvider(object):
14 @classmethod
15 def get_browser_urls(cls, request, **kwargs):
16 return {}
17
18 @classmethod
19 def get_gettings(cls, request, **kwargs):
20 return {}
21
22
23 class DefaultBrowserConfigProvider(BaseBrowserConfigProvider):
24 @classmethod
25 def get_browser_urls(cls, request, **kwargs):
26 return {
27 "edit": "shuup_admin:edit",
28 "select": "shuup_admin:select",
29 "media": "shuup_admin:media.browse",
30 "product": "shuup_admin:shop_product.list",
31 "contact": "shuup_admin:contact.list",
32 "setLanguage": "shuup_admin:set-language",
33 "tour": "shuup_admin:tour",
34 "menu_toggle": "shuup_admin:menu_toggle"
35 }
36
37 @classmethod
38 def get_gettings(cls, request, **kwargs):
39 return {
40 "minSearchInputLength": settings.SHUUP_ADMIN_MINIMUM_INPUT_LENGTH_SEARCH or 1,
41 "dateInputFormat": settings.SHUUP_ADMIN_DATE_INPUT_FORMAT,
42 "datetimeInputFormat": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,
43 "timeInputFormat": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,
44 "datetimeInputStep": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,
45 "dateInputLocale": get_current_babel_locale().language
46 }
47
[end of shuup/admin/browser_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shuup/admin/browser_config.py b/shuup/admin/browser_config.py
--- a/shuup/admin/browser_config.py
+++ b/shuup/admin/browser_config.py
@@ -7,6 +7,7 @@
# LICENSE file in the root directory of this source tree.
from django.conf import settings
+from shuup.admin.utils.permissions import has_permission
from shuup.utils.i18n import get_current_babel_locale
@@ -26,7 +27,7 @@
return {
"edit": "shuup_admin:edit",
"select": "shuup_admin:select",
- "media": "shuup_admin:media.browse",
+ "media": ("shuup_admin:media.browse" if has_permission(request.user, "shuup_admin:media.browse") else None),
"product": "shuup_admin:shop_product.list",
"contact": "shuup_admin:contact.list",
"setLanguage": "shuup_admin:set-language",
@@ -42,5 +43,6 @@
"datetimeInputFormat": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,
"timeInputFormat": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,
"datetimeInputStep": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,
- "dateInputLocale": get_current_babel_locale().language
+ "dateInputLocale": get_current_babel_locale().language,
+ "staticPrefix": settings.STATIC_URL,
}
| {"golden_diff": "diff --git a/shuup/admin/browser_config.py b/shuup/admin/browser_config.py\n--- a/shuup/admin/browser_config.py\n+++ b/shuup/admin/browser_config.py\n@@ -7,6 +7,7 @@\n # LICENSE file in the root directory of this source tree.\n from django.conf import settings\n \n+from shuup.admin.utils.permissions import has_permission\n from shuup.utils.i18n import get_current_babel_locale\n \n \n@@ -26,7 +27,7 @@\n return {\n \"edit\": \"shuup_admin:edit\",\n \"select\": \"shuup_admin:select\",\n- \"media\": \"shuup_admin:media.browse\",\n+ \"media\": (\"shuup_admin:media.browse\" if has_permission(request.user, \"shuup_admin:media.browse\") else None),\n \"product\": \"shuup_admin:shop_product.list\",\n \"contact\": \"shuup_admin:contact.list\",\n \"setLanguage\": \"shuup_admin:set-language\",\n@@ -42,5 +43,6 @@\n \"datetimeInputFormat\": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,\n \"timeInputFormat\": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,\n \"datetimeInputStep\": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,\n- \"dateInputLocale\": get_current_babel_locale().language\n+ \"dateInputLocale\": get_current_babel_locale().language,\n+ \"staticPrefix\": settings.STATIC_URL,\n }\n", "issue": " Admin UI: Fix media browser file upload\nAn exception is raised when you manually select the file while uploading it. To reproduce:\r\n- Go to Products\r\n- Select/Create a product\r\n- Go to Files section\r\n- Click over the dropzone area\r\n- In the media browser window, click Upload\r\n- Select a file and check the console (error)\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2019, Shoop Commerce Ltd. All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.conf import settings\n\nfrom shuup.utils.i18n import get_current_babel_locale\n\n\nclass BaseBrowserConfigProvider(object):\n @classmethod\n def get_browser_urls(cls, request, **kwargs):\n return {}\n\n @classmethod\n def get_gettings(cls, request, **kwargs):\n return {}\n\n\nclass DefaultBrowserConfigProvider(BaseBrowserConfigProvider):\n @classmethod\n def get_browser_urls(cls, request, **kwargs):\n return {\n \"edit\": \"shuup_admin:edit\",\n \"select\": \"shuup_admin:select\",\n \"media\": \"shuup_admin:media.browse\",\n \"product\": \"shuup_admin:shop_product.list\",\n \"contact\": \"shuup_admin:contact.list\",\n \"setLanguage\": \"shuup_admin:set-language\",\n \"tour\": \"shuup_admin:tour\",\n \"menu_toggle\": \"shuup_admin:menu_toggle\"\n }\n\n @classmethod\n def get_gettings(cls, request, **kwargs):\n return {\n \"minSearchInputLength\": settings.SHUUP_ADMIN_MINIMUM_INPUT_LENGTH_SEARCH or 1,\n \"dateInputFormat\": settings.SHUUP_ADMIN_DATE_INPUT_FORMAT,\n \"datetimeInputFormat\": settings.SHUUP_ADMIN_DATETIME_INPUT_FORMAT,\n \"timeInputFormat\": settings.SHUUP_ADMIN_TIME_INPUT_FORMAT,\n \"datetimeInputStep\": settings.SHUUP_ADMIN_DATETIME_INPUT_STEP,\n \"dateInputLocale\": get_current_babel_locale().language\n }\n", "path": "shuup/admin/browser_config.py"}]} | 1,166 | 328 |
gh_patches_debug_12240 | rasdani/github-patches | git_diff | GPflow__GPflow-1355 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
setup.py depends on external dataclasses package for python >= 3.8
Setup.py has a check
```python
is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7
```
and adds the PyPI `dataclasses` package to the requirements when `not is_py37`. (`dataclasses` has been incorporated in the stdlib in python 3.7.) With python 3.8 released, this check is inaccurate, as setup.py currently adds the dependency on dataclasses when the python version is 3.8 or later, not just when it's less than 3.7.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # pylint: skip-file
5
6 import os
7 import sys
8 from pathlib import Path
9
10 from pkg_resources import parse_version
11 from setuptools import find_packages, setup
12
13 is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7
14 on_rtd = os.environ.get("READTHEDOCS", None) == "True" # copied from the docs
15
16 # Dependencies of GPflow
17 requirements = ["numpy>=1.10.0", "scipy>=0.18.0", "multipledispatch>=0.4.9", "tabulate"]
18
19 if not is_py37:
20 requirements.append("dataclasses")
21
22 if not on_rtd:
23 requirements.append("tensorflow-probability>=0.9")
24
25 min_tf_version = "2.1.0"
26 tf_cpu = "tensorflow"
27 tf_gpu = "tensorflow-gpu"
28
29
30 # for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:
31 def latest_version(package_name):
32 import json
33 from urllib import request
34 import re
35
36 url = f"https://pypi.python.org/pypi/{package_name}/json"
37 data = json.load(request.urlopen(url))
38 # filter out rc and beta releases and, more generally, any releases that
39 # do not contain exclusively numbers and dots.
40 versions = [parse_version(v) for v in data["releases"].keys() if re.match("^[0-9.]+$", v)]
41 versions.sort()
42 return versions[-1] # return latest version
43
44
45 # Only detect TF if not installed or outdated. If not, do not do not list as
46 # requirement to avoid installing over e.g. tensorflow-gpu
47 # To avoid this, rely on importing rather than the package name (like pip).
48
49 try:
50 # If tf not installed, import raises ImportError
51 import tensorflow as tf
52
53 if parse_version(tf.__version__) < parse_version(min_tf_version):
54 # TF pre-installed, but below the minimum required version
55 raise DeprecationWarning("TensorFlow version below minimum requirement")
56 except (ImportError, DeprecationWarning):
57 # Add TensorFlow to dependencies to trigger installation/update
58 if not on_rtd:
59 # Do not add TF if we are installing GPflow on readthedocs
60 requirements.append(tf_cpu)
61 gast_requirement = (
62 "gast>=0.2.2,<0.3"
63 if latest_version("tensorflow") < parse_version("2.2")
64 else "gast>=0.3.3"
65 )
66 requirements.append(gast_requirement)
67
68
69 with open(str(Path(".", "VERSION").absolute())) as version_file:
70 version = version_file.read().strip()
71
72 packages = find_packages(".", exclude=["tests"])
73
74 setup(
75 name="gpflow",
76 version=version,
77 author="James Hensman, Alex Matthews",
78 author_email="[email protected]",
79 description="Gaussian process methods in TensorFlow",
80 license="Apache License 2.0",
81 keywords="machine-learning gaussian-processes kernels tensorflow",
82 url="http://github.com/GPflow/GPflow",
83 packages=packages,
84 include_package_data=True,
85 install_requires=requirements,
86 extras_require={"Tensorflow with GPU": [tf_gpu]},
87 python_requires=">=3.6",
88 classifiers=[
89 "License :: OSI Approved :: Apache Software License",
90 "Natural Language :: English",
91 "Operating System :: MacOS :: MacOS X",
92 "Operating System :: Microsoft :: Windows",
93 "Operating System :: POSIX :: Linux",
94 "Programming Language :: Python :: 3.6",
95 "Topic :: Scientific/Engineering :: Artificial Intelligence",
96 ],
97 )
98
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,13 +10,13 @@
from pkg_resources import parse_version
from setuptools import find_packages, setup
-is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7
on_rtd = os.environ.get("READTHEDOCS", None) == "True" # copied from the docs
# Dependencies of GPflow
requirements = ["numpy>=1.10.0", "scipy>=0.18.0", "multipledispatch>=0.4.9", "tabulate"]
-if not is_py37:
+if sys.version_info < (3, 7):
+ # became part of stdlib in python 3.7
requirements.append("dataclasses")
if not on_rtd:
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,13 +10,13 @@\n from pkg_resources import parse_version\n from setuptools import find_packages, setup\n \n-is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\n on_rtd = os.environ.get(\"READTHEDOCS\", None) == \"True\" # copied from the docs\n \n # Dependencies of GPflow\n requirements = [\"numpy>=1.10.0\", \"scipy>=0.18.0\", \"multipledispatch>=0.4.9\", \"tabulate\"]\n \n-if not is_py37:\n+if sys.version_info < (3, 7):\n+ # became part of stdlib in python 3.7\n requirements.append(\"dataclasses\")\n \n if not on_rtd:\n", "issue": "setup.py depends on external dataclasses package for python >= 3.8\nSetup.py has a check\r\n```python\r\nis_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\r\n```\r\nand adds the PyPI `dataclasses` package to the requirements when `not is_py37`. (`dataclasses` has been incorporated in the stdlib in python 3.7.) With python 3.8 released, this check is inaccurate, as setup.py currently adds the dependency on dataclasses when the python version is 3.8 or later, not just when it's less than 3.7.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\nfrom pathlib import Path\n\nfrom pkg_resources import parse_version\nfrom setuptools import find_packages, setup\n\nis_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\non_rtd = os.environ.get(\"READTHEDOCS\", None) == \"True\" # copied from the docs\n\n# Dependencies of GPflow\nrequirements = [\"numpy>=1.10.0\", \"scipy>=0.18.0\", \"multipledispatch>=0.4.9\", \"tabulate\"]\n\nif not is_py37:\n requirements.append(\"dataclasses\")\n\nif not on_rtd:\n requirements.append(\"tensorflow-probability>=0.9\")\n\nmin_tf_version = \"2.1.0\"\ntf_cpu = \"tensorflow\"\ntf_gpu = \"tensorflow-gpu\"\n\n\n# for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:\ndef latest_version(package_name):\n import json\n from urllib import request\n import re\n\n url = f\"https://pypi.python.org/pypi/{package_name}/json\"\n data = json.load(request.urlopen(url))\n # filter out rc and beta releases and, more generally, any releases that\n # do not contain exclusively numbers and dots.\n versions = [parse_version(v) for v in data[\"releases\"].keys() if re.match(\"^[0-9.]+$\", v)]\n versions.sort()\n return versions[-1] # return latest version\n\n\n# Only detect TF if not installed or outdated. If not, do not do not list as\n# requirement to avoid installing over e.g. 
tensorflow-gpu\n# To avoid this, rely on importing rather than the package name (like pip).\n\ntry:\n # If tf not installed, import raises ImportError\n import tensorflow as tf\n\n if parse_version(tf.__version__) < parse_version(min_tf_version):\n # TF pre-installed, but below the minimum required version\n raise DeprecationWarning(\"TensorFlow version below minimum requirement\")\nexcept (ImportError, DeprecationWarning):\n # Add TensorFlow to dependencies to trigger installation/update\n if not on_rtd:\n # Do not add TF if we are installing GPflow on readthedocs\n requirements.append(tf_cpu)\n gast_requirement = (\n \"gast>=0.2.2,<0.3\"\n if latest_version(\"tensorflow\") < parse_version(\"2.2\")\n else \"gast>=0.3.3\"\n )\n requirements.append(gast_requirement)\n\n\nwith open(str(Path(\".\", \"VERSION\").absolute())) as version_file:\n version = version_file.read().strip()\n\npackages = find_packages(\".\", exclude=[\"tests\"])\n\nsetup(\n name=\"gpflow\",\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"http://github.com/GPflow/GPflow\",\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={\"Tensorflow with GPU\": [tf_gpu]},\n python_requires=\">=3.6\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "setup.py"}]} | 1,663 | 193 |
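The replacement check compares `sys.version_info`, a named tuple, against a plain tuple, so it is true on 3.6 and older and false on 3.7, 3.8 and anything newer; that is exactly the set of interpreters whose standard library lacks `dataclasses`. A tiny standalone illustration:

```python
import sys

requirements = []

# dataclasses joined the standard library in Python 3.7; only add the PyPI
# backport on older interpreters.
if sys.version_info < (3, 7):
    requirements.append("dataclasses")

print(requirements)  # [] on Python 3.7+, ['dataclasses'] on 3.6 and older
```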
gh_patches_debug_9864 | rasdani/github-patches | git_diff | getpelican__pelican-2720 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fixing some warnings and errors in the sample content.
The current sample content is not up-to-date with the current Pelican mechanism.
This will help new comers to understand better how Pelican works.
* More valid articles.
* More translations.
* Images are now correctly displayed.
</issue>
<code>
[start of samples/pelican.conf.py]
1 # -*- coding: utf-8 -*-
2
3 AUTHOR = 'Alexis Métaireau'
4 SITENAME = "Alexis' log"
5 SITESUBTITLE = 'A personal blog.'
6 SITEURL = 'http://blog.notmyidea.org'
7 TIMEZONE = "Europe/Paris"
8
9 # can be useful in development, but set to False when you're ready to publish
10 RELATIVE_URLS = True
11
12 GITHUB_URL = 'http://github.com/ametaireau/'
13 DISQUS_SITENAME = "blog-notmyidea"
14 REVERSE_CATEGORY_ORDER = True
15 LOCALE = "C"
16 DEFAULT_PAGINATION = 4
17 DEFAULT_DATE = (2012, 3, 2, 14, 1, 1)
18
19 FEED_ALL_RSS = 'feeds/all.rss.xml'
20 CATEGORY_FEED_RSS = 'feeds/{slug}.rss.xml'
21
22 LINKS = (('Biologeek', 'http://biologeek.org'),
23 ('Filyb', "http://filyb.info/"),
24 ('Libert-fr', "http://www.libert-fr.com"),
25 ('N1k0', "http://prendreuncafe.com/blog/"),
26 ('Tarek Ziadé', "http://ziade.org/blog"),
27 ('Zubin Mithra', "http://zubin71.wordpress.com/"),)
28
29 SOCIAL = (('twitter', 'http://twitter.com/ametaireau'),
30 ('lastfm', 'http://lastfm.com/user/akounet'),
31 ('github', 'http://github.com/ametaireau'),)
32
33 # global metadata to all the contents
34 DEFAULT_METADATA = {'yeah': 'it is'}
35
36 # path-specific metadata
37 EXTRA_PATH_METADATA = {
38 'extra/robots.txt': {'path': 'robots.txt'},
39 }
40
41 # static paths will be copied without parsing their contents
42 STATIC_PATHS = [
43 'pictures',
44 'extra/robots.txt',
45 ]
46
47 # custom page generated with a jinja2 template
48 TEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}
49
50 # code blocks with line numbers
51 PYGMENTS_RST_OPTIONS = {'linenos': 'table'}
52
53 # foobar will not be used, because it's not in caps. All configuration keys
54 # have to be in caps
55 foobar = "barbaz"
56
[end of samples/pelican.conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/samples/pelican.conf.py b/samples/pelican.conf.py
--- a/samples/pelican.conf.py
+++ b/samples/pelican.conf.py
@@ -40,13 +40,16 @@
# static paths will be copied without parsing their contents
STATIC_PATHS = [
- 'pictures',
+ 'images',
'extra/robots.txt',
]
# custom page generated with a jinja2 template
TEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}
+# there is no other HTML content
+READERS = {'html': None}
+
# code blocks with line numbers
PYGMENTS_RST_OPTIONS = {'linenos': 'table'}
| {"golden_diff": "diff --git a/samples/pelican.conf.py b/samples/pelican.conf.py\n--- a/samples/pelican.conf.py\n+++ b/samples/pelican.conf.py\n@@ -40,13 +40,16 @@\n \n # static paths will be copied without parsing their contents\n STATIC_PATHS = [\n- 'pictures',\n+ 'images',\n 'extra/robots.txt',\n ]\n \n # custom page generated with a jinja2 template\n TEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}\n \n+# there is no other HTML content\n+READERS = {'html': None}\n+\n # code blocks with line numbers\n PYGMENTS_RST_OPTIONS = {'linenos': 'table'}\n", "issue": "Fixing some warnings and errors in the sample content.\nThe current sample content is not up-to-date with the current Pelican mechanism.\r\n\r\nThis will help new comers to understand better how Pelican works.\r\n\r\n* More valid articles.\r\n* More translations.\r\n* Images are now correctly displayed.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nAUTHOR = 'Alexis M\u00e9taireau'\nSITENAME = \"Alexis' log\"\nSITESUBTITLE = 'A personal blog.'\nSITEURL = 'http://blog.notmyidea.org'\nTIMEZONE = \"Europe/Paris\"\n\n# can be useful in development, but set to False when you're ready to publish\nRELATIVE_URLS = True\n\nGITHUB_URL = 'http://github.com/ametaireau/'\nDISQUS_SITENAME = \"blog-notmyidea\"\nREVERSE_CATEGORY_ORDER = True\nLOCALE = \"C\"\nDEFAULT_PAGINATION = 4\nDEFAULT_DATE = (2012, 3, 2, 14, 1, 1)\n\nFEED_ALL_RSS = 'feeds/all.rss.xml'\nCATEGORY_FEED_RSS = 'feeds/{slug}.rss.xml'\n\nLINKS = (('Biologeek', 'http://biologeek.org'),\n ('Filyb', \"http://filyb.info/\"),\n ('Libert-fr', \"http://www.libert-fr.com\"),\n ('N1k0', \"http://prendreuncafe.com/blog/\"),\n ('Tarek Ziad\u00e9', \"http://ziade.org/blog\"),\n ('Zubin Mithra', \"http://zubin71.wordpress.com/\"),)\n\nSOCIAL = (('twitter', 'http://twitter.com/ametaireau'),\n ('lastfm', 'http://lastfm.com/user/akounet'),\n ('github', 'http://github.com/ametaireau'),)\n\n# global metadata to all the contents\nDEFAULT_METADATA = {'yeah': 'it is'}\n\n# path-specific metadata\nEXTRA_PATH_METADATA = {\n 'extra/robots.txt': {'path': 'robots.txt'},\n }\n\n# static paths will be copied without parsing their contents\nSTATIC_PATHS = [\n 'pictures',\n 'extra/robots.txt',\n ]\n\n# custom page generated with a jinja2 template\nTEMPLATE_PAGES = {'pages/jinja2_template.html': 'jinja2_template.html'}\n\n# code blocks with line numbers\nPYGMENTS_RST_OPTIONS = {'linenos': 'table'}\n\n# foobar will not be used, because it's not in caps. All configuration keys\n# have to be in caps\nfoobar = \"barbaz\"\n", "path": "samples/pelican.conf.py"}]} | 1,199 | 160 |
gh_patches_debug_12299 | rasdani/github-patches | git_diff | medtagger__MedTagger-145 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Race condition between API & Worker containers that updates fixtures on start
## Expected Behavior
Containers should start without any issues.
## Actual Behavior
Any of these two containers may fail to start due to unexpected exceptions while applying fixtures.
## Steps to Reproduce the Problem
1. Run `docker-compose up`.
2. Be lucky.
## Additional comment
Both of these containers start script for applying fixtures. Maybe only one should do it (preferrably API)? Maybe this script should be better protected for errors?
</issue>
<code>
[start of backend/medtagger/database/fixtures.py]
1 """Insert all database fixtures."""
2 import logging.config
3
4 from sqlalchemy import exists
5
6 from medtagger.database import db_session
7 from medtagger.database.models import ScanCategory, Role
8
9 logging.config.fileConfig('logging.conf')
10 logger = logging.getLogger(__name__)
11
12 CATEGORIES = [{
13 'key': 'KIDNEYS',
14 'name': 'Kidneys',
15 'image_path': '../../../assets/icon/kidneys_category_icon.svg',
16 }, {
17 'key': 'LIVER',
18 'name': 'Liver',
19 'image_path': '../../../assets/icon/liver_category_icon.svg',
20 }, {
21 'key': 'HEART',
22 'name': 'Hearth',
23 'image_path': '../../../assets/icon/heart_category_icon.svg',
24 }, {
25 'key': 'LUNGS',
26 'name': 'Lungs',
27 'image_path': '../../../assets/icon/lungs_category_icon.svg',
28 }]
29
30 ROLES = [
31 {
32 'name': 'admin',
33 },
34 {
35 'name': 'doctor',
36 },
37 {
38 'name': 'volunteer',
39 },
40 ]
41
42
43 def insert_scan_categories() -> None:
44 """Insert all default Scan Categories if don't exist."""
45 with db_session() as session:
46 for row in CATEGORIES:
47 category_key = row.get('key', '')
48 category_exists = session.query(exists().where(ScanCategory.key == category_key)).scalar()
49 if category_exists:
50 logger.info('Scan Category exists with key "%s"', category_key)
51 continue
52
53 category = ScanCategory(**row)
54 session.add(category)
55 logger.info('Scan Category added for key "%s"', category_key)
56
57
58 def insert_user_roles() -> None:
59 """Insert default user Roles."""
60 with db_session() as session:
61 for row in ROLES:
62 role_name = row.get('name', '')
63 role_exists = session.query(exists().where(Role.name == role_name)).scalar()
64 if role_exists:
65 logger.info('Role exists with name "%s"', role_name)
66 continue
67
68 role = Role(**row)
69 session.add(role)
70 logger.info('Role added for name "%s"', role_name)
71
72
73 def apply_all_fixtures() -> None:
74 """Apply all available fixtures."""
75 logger.info('Applying fixtures for Scan Categories...')
76 insert_scan_categories()
77 logger.info('Applying fixtures for user Roles...')
78 insert_user_roles()
79
80
81 if __name__ == '__main__':
82 apply_all_fixtures()
83
[end of backend/medtagger/database/fixtures.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/medtagger/database/fixtures.py b/backend/medtagger/database/fixtures.py
--- a/backend/medtagger/database/fixtures.py
+++ b/backend/medtagger/database/fixtures.py
@@ -2,6 +2,7 @@
import logging.config
from sqlalchemy import exists
+from sqlalchemy.exc import IntegrityError
from medtagger.database import db_session
from medtagger.database.models import ScanCategory, Role
@@ -79,4 +80,8 @@
if __name__ == '__main__':
- apply_all_fixtures()
+ try:
+ apply_all_fixtures()
+ except IntegrityError:
+ logger.error('An error occurred while applying fixtures! It is highly possible that there was'
+ 'a race condition between multiple processes applying fixtures at the same time.')
| {"golden_diff": "diff --git a/backend/medtagger/database/fixtures.py b/backend/medtagger/database/fixtures.py\n--- a/backend/medtagger/database/fixtures.py\n+++ b/backend/medtagger/database/fixtures.py\n@@ -2,6 +2,7 @@\n import logging.config\n \n from sqlalchemy import exists\n+from sqlalchemy.exc import IntegrityError\n \n from medtagger.database import db_session\n from medtagger.database.models import ScanCategory, Role\n@@ -79,4 +80,8 @@\n \n \n if __name__ == '__main__':\n- apply_all_fixtures()\n+ try:\n+ apply_all_fixtures()\n+ except IntegrityError:\n+ logger.error('An error occurred while applying fixtures! It is highly possible that there was'\n+ 'a race condition between multiple processes applying fixtures at the same time.')\n", "issue": "Race condition between API & Worker containers that updates fixtures on start\n## Expected Behavior\r\n\r\nContainers should start without any issues.\r\n\r\n## Actual Behavior\r\n\r\nAny of these two containers may fail to start due to unexpected exceptions while applying fixtures.\r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. Run `docker-compose up`.\r\n 2. Be lucky.\r\n\r\n## Additional comment\r\n\r\nBoth of these containers start script for applying fixtures. Maybe only one should do it (preferrably API)? Maybe this script should be better protected for errors?\r\n\n", "before_files": [{"content": "\"\"\"Insert all database fixtures.\"\"\"\nimport logging.config\n\nfrom sqlalchemy import exists\n\nfrom medtagger.database import db_session\nfrom medtagger.database.models import ScanCategory, Role\n\nlogging.config.fileConfig('logging.conf')\nlogger = logging.getLogger(__name__)\n\nCATEGORIES = [{\n 'key': 'KIDNEYS',\n 'name': 'Kidneys',\n 'image_path': '../../../assets/icon/kidneys_category_icon.svg',\n}, {\n 'key': 'LIVER',\n 'name': 'Liver',\n 'image_path': '../../../assets/icon/liver_category_icon.svg',\n}, {\n 'key': 'HEART',\n 'name': 'Hearth',\n 'image_path': '../../../assets/icon/heart_category_icon.svg',\n}, {\n 'key': 'LUNGS',\n 'name': 'Lungs',\n 'image_path': '../../../assets/icon/lungs_category_icon.svg',\n}]\n\nROLES = [\n {\n 'name': 'admin',\n },\n {\n 'name': 'doctor',\n },\n {\n 'name': 'volunteer',\n },\n]\n\n\ndef insert_scan_categories() -> None:\n \"\"\"Insert all default Scan Categories if don't exist.\"\"\"\n with db_session() as session:\n for row in CATEGORIES:\n category_key = row.get('key', '')\n category_exists = session.query(exists().where(ScanCategory.key == category_key)).scalar()\n if category_exists:\n logger.info('Scan Category exists with key \"%s\"', category_key)\n continue\n\n category = ScanCategory(**row)\n session.add(category)\n logger.info('Scan Category added for key \"%s\"', category_key)\n\n\ndef insert_user_roles() -> None:\n \"\"\"Insert default user Roles.\"\"\"\n with db_session() as session:\n for row in ROLES:\n role_name = row.get('name', '')\n role_exists = session.query(exists().where(Role.name == role_name)).scalar()\n if role_exists:\n logger.info('Role exists with name \"%s\"', role_name)\n continue\n\n role = Role(**row)\n session.add(role)\n logger.info('Role added for name \"%s\"', role_name)\n\n\ndef apply_all_fixtures() -> None:\n \"\"\"Apply all available fixtures.\"\"\"\n logger.info('Applying fixtures for Scan Categories...')\n insert_scan_categories()\n logger.info('Applying fixtures for user Roles...')\n insert_user_roles()\n\n\nif __name__ == '__main__':\n apply_all_fixtures()\n", "path": "backend/medtagger/database/fixtures.py"}]} | 1,331 | 174 |
gh_patches_debug_33821 | rasdani/github-patches | git_diff | airctic__icevision-993 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SSD model doesn't work
## 🐛 Bug
SSD model doesn't work anymore. It seems related to MMDetection updates made here:
https://github.com/open-mmlab/mmdetection/pull/5789/files
Refer to discussion on our Discord forum:
https://discord.com/channels/735877944085446747/780951885485965352/920249646964670464
</issue>
<code>
[start of icevision/models/mmdet/utils.py]
1 __all__ = [
2 "MMDetBackboneConfig",
3 "mmdet_configs_path",
4 "param_groups",
5 "MMDetBackboneConfig",
6 "create_model_config",
7 ]
8
9 from icevision.imports import *
10 from icevision.utils import *
11 from icevision.backbones import BackboneConfig
12 from icevision.models.mmdet.download_configs import download_mmdet_configs
13 from mmdet.models.detectors import *
14 from mmcv import Config
15 from mmdet.models.backbones.ssd_vgg import SSDVGG
16 from mmdet.models.backbones.csp_darknet import CSPDarknet
17
18
19 mmdet_configs_path = download_mmdet_configs()
20
21
22 class MMDetBackboneConfig(BackboneConfig):
23 def __init__(self, model_name, config_path, weights_url):
24 self.model_name = model_name
25 self.config_path = config_path
26 self.weights_url = weights_url
27 self.pretrained: bool
28
29 def __call__(self, pretrained: bool = True) -> "MMDetBackboneConfig":
30 self.pretrained = pretrained
31 return self
32
33
34 def param_groups(model):
35 body = model.backbone
36
37 layers = []
38 if isinstance(body, SSDVGG):
39 layers += [body.features]
40 layers += [body.extra, body.l2_norm]
41 elif isinstance(body, CSPDarknet):
42 layers += [body.stem.conv.conv, body.stem.conv.bn]
43 layers += [body.stage1, body.stage2, body.stage3, body.stage4]
44 layers += [model.neck]
45 else:
46 layers += [nn.Sequential(body.conv1, body.bn1)]
47 layers += [getattr(body, l) for l in body.res_layers]
48 layers += [model.neck]
49
50 if isinstance(model, SingleStageDetector):
51 layers += [model.bbox_head]
52 elif isinstance(model, TwoStageDetector):
53 layers += [nn.Sequential(model.rpn_head, model.roi_head)]
54 else:
55 raise RuntimeError(
56 "{model} must inherit either from SingleStageDetector or TwoStageDetector class"
57 )
58
59 _param_groups = [list(layer.parameters()) for layer in layers]
60 check_all_model_params_in_groups2(model, _param_groups)
61 return _param_groups
62
63
64 def create_model_config(
65 backbone: MMDetBackboneConfig,
66 pretrained: bool = True,
67 checkpoints_path: Optional[Union[str, Path]] = "checkpoints",
68 force_download=False,
69 cfg_options=None,
70 ):
71
72 model_name = backbone.model_name
73 config_path = backbone.config_path
74 weights_url = backbone.weights_url
75
76 # download weights
77 weights_path = None
78 if pretrained and weights_url:
79 save_dir = Path(checkpoints_path) / model_name
80 save_dir.mkdir(exist_ok=True, parents=True)
81
82 fname = Path(weights_url).name
83 weights_path = save_dir / fname
84
85 if not weights_path.exists() or force_download:
86 download_url(url=weights_url, save_path=str(weights_path))
87
88 cfg = Config.fromfile(config_path)
89
90 if cfg_options is not None:
91 cfg.merge_from_dict(cfg_options)
92
93 return cfg, weights_path
94
[end of icevision/models/mmdet/utils.py]
[start of icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py]
1 __all__ = [
2 "ssd300",
3 "ssd512",
4 ]
5
6 from icevision.imports import *
7 from icevision.models.mmdet.utils import *
8
9
10 class MMDetSSDBackboneConfig(MMDetBackboneConfig):
11 def __init__(self, **kwargs):
12 super().__init__(model_name="ssd", **kwargs)
13
14
15 base_config_path = mmdet_configs_path / "ssd"
16 base_weights_url = "http://download.openmmlab.com/mmdetection/v2.0/ssd"
17
18 ssd300 = MMDetSSDBackboneConfig(
19 config_path=base_config_path / "ssd300_coco.py",
20 weights_url=f"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth",
21 )
22
23 ssd512 = MMDetSSDBackboneConfig(
24 config_path=base_config_path / "ssd512_coco.py",
25 weights_url=f"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth",
26 )
27
[end of icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py
--- a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py
+++ b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py
@@ -1,6 +1,7 @@
__all__ = [
"ssd300",
"ssd512",
+ "ssdlite_mobilenetv2",
]
from icevision.imports import *
@@ -17,10 +18,15 @@
ssd300 = MMDetSSDBackboneConfig(
config_path=base_config_path / "ssd300_coco.py",
- weights_url=f"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth",
+ weights_url=f"{base_weights_url}/ssd300_coco/ssd300_coco_20210803_015428-d231a06e.pth",
)
ssd512 = MMDetSSDBackboneConfig(
config_path=base_config_path / "ssd512_coco.py",
- weights_url=f"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth",
+ weights_url=f"{base_weights_url}/ssd512_coco/ssd512_coco_20210803_022849-0a47a1ca.pth",
+)
+
+ssdlite_mobilenetv2 = MMDetSSDBackboneConfig(
+ config_path=base_config_path / "ssdlite_mobilenetv2_scratch_600e_coco.py",
+ weights_url=f"{base_weights_url}/ssd512_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth",
)
diff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py
--- a/icevision/models/mmdet/utils.py
+++ b/icevision/models/mmdet/utils.py
@@ -35,18 +35,21 @@
body = model.backbone
layers = []
+
+ # add the backbone
if isinstance(body, SSDVGG):
layers += [body.features]
- layers += [body.extra, body.l2_norm]
elif isinstance(body, CSPDarknet):
layers += [body.stem.conv.conv, body.stem.conv.bn]
layers += [body.stage1, body.stage2, body.stage3, body.stage4]
- layers += [model.neck]
else:
layers += [nn.Sequential(body.conv1, body.bn1)]
layers += [getattr(body, l) for l in body.res_layers]
- layers += [model.neck]
+ # add the neck
+ layers += [model.neck]
+
+ # add the head
if isinstance(model, SingleStageDetector):
layers += [model.bbox_head]
elif isinstance(model, TwoStageDetector):
| {"golden_diff": "diff --git a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py\n--- a/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py\n+++ b/icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py\n@@ -1,6 +1,7 @@\n __all__ = [\n \"ssd300\",\n \"ssd512\",\n+ \"ssdlite_mobilenetv2\",\n ]\n \n from icevision.imports import *\n@@ -17,10 +18,15 @@\n \n ssd300 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd300_coco.py\",\n- weights_url=f\"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth\",\n+ weights_url=f\"{base_weights_url}/ssd300_coco/ssd300_coco_20210803_015428-d231a06e.pth\",\n )\n \n ssd512 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd512_coco.py\",\n- weights_url=f\"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth\",\n+ weights_url=f\"{base_weights_url}/ssd512_coco/ssd512_coco_20210803_022849-0a47a1ca.pth\",\n+)\n+\n+ssdlite_mobilenetv2 = MMDetSSDBackboneConfig(\n+ config_path=base_config_path / \"ssdlite_mobilenetv2_scratch_600e_coco.py\",\n+ weights_url=f\"{base_weights_url}/ssd512_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth\",\n )\ndiff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py\n--- a/icevision/models/mmdet/utils.py\n+++ b/icevision/models/mmdet/utils.py\n@@ -35,18 +35,21 @@\n body = model.backbone\n \n layers = []\n+\n+ # add the backbone\n if isinstance(body, SSDVGG):\n layers += [body.features]\n- layers += [body.extra, body.l2_norm]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n- layers += [model.neck]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n- layers += [model.neck]\n \n+ # add the neck\n+ layers += [model.neck]\n+\n+ # add the head\n if isinstance(model, SingleStageDetector):\n layers += [model.bbox_head]\n elif isinstance(model, TwoStageDetector):\n", "issue": "SSD model doesn't work\n## \ud83d\udc1b Bug\r\n\r\nSSD model doesn't work anymore. 
It seems related to MMDetection updates made here:\r\nhttps://github.com/open-mmlab/mmdetection/pull/5789/files\r\n\r\nRefer to discussion on our Discord forum:\r\nhttps://discord.com/channels/735877944085446747/780951885485965352/920249646964670464\n", "before_files": [{"content": "__all__ = [\n \"MMDetBackboneConfig\",\n \"mmdet_configs_path\",\n \"param_groups\",\n \"MMDetBackboneConfig\",\n \"create_model_config\",\n]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.backbones import BackboneConfig\nfrom icevision.models.mmdet.download_configs import download_mmdet_configs\nfrom mmdet.models.detectors import *\nfrom mmcv import Config\nfrom mmdet.models.backbones.ssd_vgg import SSDVGG\nfrom mmdet.models.backbones.csp_darknet import CSPDarknet\n\n\nmmdet_configs_path = download_mmdet_configs()\n\n\nclass MMDetBackboneConfig(BackboneConfig):\n def __init__(self, model_name, config_path, weights_url):\n self.model_name = model_name\n self.config_path = config_path\n self.weights_url = weights_url\n self.pretrained: bool\n\n def __call__(self, pretrained: bool = True) -> \"MMDetBackboneConfig\":\n self.pretrained = pretrained\n return self\n\n\ndef param_groups(model):\n body = model.backbone\n\n layers = []\n if isinstance(body, SSDVGG):\n layers += [body.features]\n layers += [body.extra, body.l2_norm]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n layers += [model.neck]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n layers += [model.neck]\n\n if isinstance(model, SingleStageDetector):\n layers += [model.bbox_head]\n elif isinstance(model, TwoStageDetector):\n layers += [nn.Sequential(model.rpn_head, model.roi_head)]\n else:\n raise RuntimeError(\n \"{model} must inherit either from SingleStageDetector or TwoStageDetector class\"\n )\n\n _param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, _param_groups)\n return _param_groups\n\n\ndef create_model_config(\n backbone: MMDetBackboneConfig,\n pretrained: bool = True,\n checkpoints_path: Optional[Union[str, Path]] = \"checkpoints\",\n force_download=False,\n cfg_options=None,\n):\n\n model_name = backbone.model_name\n config_path = backbone.config_path\n weights_url = backbone.weights_url\n\n # download weights\n weights_path = None\n if pretrained and weights_url:\n save_dir = Path(checkpoints_path) / model_name\n save_dir.mkdir(exist_ok=True, parents=True)\n\n fname = Path(weights_url).name\n weights_path = save_dir / fname\n\n if not weights_path.exists() or force_download:\n download_url(url=weights_url, save_path=str(weights_path))\n\n cfg = Config.fromfile(config_path)\n\n if cfg_options is not None:\n cfg.merge_from_dict(cfg_options)\n\n return cfg, weights_path\n", "path": "icevision/models/mmdet/utils.py"}, {"content": "__all__ = [\n \"ssd300\",\n \"ssd512\",\n]\n\nfrom icevision.imports import *\nfrom icevision.models.mmdet.utils import *\n\n\nclass MMDetSSDBackboneConfig(MMDetBackboneConfig):\n def __init__(self, **kwargs):\n super().__init__(model_name=\"ssd\", **kwargs)\n\n\nbase_config_path = mmdet_configs_path / \"ssd\"\nbase_weights_url = \"http://download.openmmlab.com/mmdetection/v2.0/ssd\"\n\nssd300 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd300_coco.py\",\n 
weights_url=f\"{base_weights_url}/ssd300_coco/ssd300_coco_20200307-a92d2092.pth\",\n)\n\nssd512 = MMDetSSDBackboneConfig(\n config_path=base_config_path / \"ssd512_coco.py\",\n weights_url=f\"{base_weights_url}/ssd512_coco/ssd512_coco_20200308-038c5591.pth\",\n)\n", "path": "icevision/models/mmdet/models/ssd/backbones/resnet_fpn.py"}]} | 1,888 | 772 |
gh_patches_debug_2569 | rasdani/github-patches | git_diff | ephios-dev__ephios-1244 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: `/api/users/by_email` returns 404 error for email addresses with dots before the @
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to `[ephios-url]/api/users/by_email/[email protected]/`
**Expected behaviour**
Assuming the user exists, the information about the user should be returned.
**Screenshots**
Instead the page 404s.
<img width="1511" alt="Screenshot 2024-03-27 at 18 54 08" src="https://github.com/ephios-dev/ephios/assets/2546622/1383feee-28b0-4825-a31e-c39e2cc3f2ab">
**Environment**
State which device, operating system, browser and browser version you are using.
MacOS 14.2.1 (23C71), Version 17.2.1 (19617.1.17.11.12)
**Additional context**
* The problem does not appear for the test emails `usaaa@localhost/`, `admin@localhost/` or `[email protected]`.
</issue>
<code>
[start of ephios/api/views/users.py]
1 from django.db.models import Q
2 from django.utils import timezone
3 from django_filters.rest_framework import DjangoFilterBackend
4 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
5 from rest_framework import viewsets
6 from rest_framework.exceptions import PermissionDenied
7 from rest_framework.fields import SerializerMethodField
8 from rest_framework.filters import SearchFilter
9 from rest_framework.generics import RetrieveAPIView
10 from rest_framework.mixins import RetrieveModelMixin
11 from rest_framework.permissions import DjangoObjectPermissions
12 from rest_framework.relations import SlugRelatedField
13 from rest_framework.schemas.openapi import AutoSchema
14 from rest_framework.serializers import ModelSerializer
15 from rest_framework.viewsets import GenericViewSet
16 from rest_framework_guardian.filters import ObjectPermissionsFilter
17
18 from ephios.api.views.events import ParticipationSerializer
19 from ephios.core.models import LocalParticipation, Qualification, UserProfile
20 from ephios.core.services.qualification import collect_all_included_qualifications
21
22
23 class QualificationSerializer(ModelSerializer):
24 category = SlugRelatedField(slug_field="uuid", read_only=True)
25 includes = SerializerMethodField()
26
27 class Meta:
28 model = Qualification
29 fields = [
30 "uuid",
31 "title",
32 "abbreviation",
33 "category",
34 "includes",
35 ]
36
37 def get_includes(self, obj):
38 return [q.uuid for q in collect_all_included_qualifications(obj.includes.all())]
39
40
41 class UserProfileSerializer(ModelSerializer):
42 qualifications = SerializerMethodField()
43
44 class Meta:
45 model = UserProfile
46 fields = [
47 "id",
48 "display_name",
49 "date_of_birth",
50 "email",
51 "qualifications",
52 ]
53
54 def get_qualifications(self, obj):
55 return QualificationSerializer(
56 Qualification.objects.filter(
57 Q(grants__user=obj)
58 & (Q(grants__expires__gte=timezone.now()) | Q(grants__expires__isnull=True))
59 ),
60 many=True,
61 ).data
62
63
64 class UserProfileMeView(RetrieveAPIView):
65 serializer_class = UserProfileSerializer
66 queryset = UserProfile.objects.all()
67 permission_classes = [IsAuthenticatedOrTokenHasScope]
68 required_scopes = ["ME_READ"]
69 schema = AutoSchema(operation_id_base="OwnUserProfile")
70
71 def get_object(self):
72 if self.request.user is None:
73 raise PermissionDenied()
74 return self.request.user
75
76
77 class UserViewSet(viewsets.ReadOnlyModelViewSet):
78 serializer_class = UserProfileSerializer
79 queryset = UserProfile.objects.all()
80 permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]
81 required_scopes = ["CONFIDENTIAL_READ"]
82 search_fields = ["display_name", "email"]
83
84 filter_backends = [
85 DjangoFilterBackend,
86 SearchFilter,
87 ObjectPermissionsFilter,
88 ]
89
90
91 class UserByMailView(RetrieveModelMixin, GenericViewSet):
92 serializer_class = UserProfileSerializer
93 queryset = UserProfile.objects.all()
94 permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]
95 required_scopes = ["CONFIDENTIAL_READ"]
96 filter_backends = [ObjectPermissionsFilter]
97 lookup_url_kwarg = "email"
98 lookup_field = "email"
99 schema = AutoSchema(operation_id_base="UserProfileByMail")
100
101
102 class UserParticipationView(viewsets.ReadOnlyModelViewSet):
103 serializer_class = ParticipationSerializer
104 permission_classes = [IsAuthenticatedOrTokenHasScope]
105 filter_backends = [ObjectPermissionsFilter, DjangoFilterBackend]
106 filterset_fields = ["state"]
107 required_scopes = ["CONFIDENTIAL_READ"]
108
109 def get_queryset(self):
110 return LocalParticipation.objects.filter(user=self.kwargs.get("user"))
111
[end of ephios/api/views/users.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ephios/api/views/users.py b/ephios/api/views/users.py
--- a/ephios/api/views/users.py
+++ b/ephios/api/views/users.py
@@ -96,6 +96,7 @@
filter_backends = [ObjectPermissionsFilter]
lookup_url_kwarg = "email"
lookup_field = "email"
+ lookup_value_regex = "[^/]+" # customize to allow dots (".") in the lookup value
schema = AutoSchema(operation_id_base="UserProfileByMail")
| {"golden_diff": "diff --git a/ephios/api/views/users.py b/ephios/api/views/users.py\n--- a/ephios/api/views/users.py\n+++ b/ephios/api/views/users.py\n@@ -96,6 +96,7 @@\n filter_backends = [ObjectPermissionsFilter]\n lookup_url_kwarg = \"email\"\n lookup_field = \"email\"\n+ lookup_value_regex = \"[^/]+\" # customize to allow dots (\".\") in the lookup value\n schema = AutoSchema(operation_id_base=\"UserProfileByMail\")\n", "issue": "API: `/api/users/by_email` returns 404 error for email addresses with dots before the @\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to `[ephios-url]/api/users/by_email/[email protected]/`\r\n\r\n**Expected behaviour**\r\nAssuming the user exists, the information about the user should be returned.\r\n\r\n**Screenshots**\r\nInstead the page 404s.\r\n\r\n<img width=\"1511\" alt=\"Screenshot 2024-03-27 at 18 54 08\" src=\"https://github.com/ephios-dev/ephios/assets/2546622/1383feee-28b0-4825-a31e-c39e2cc3f2ab\">\r\n\r\n**Environment**\r\nState which device, operating system, browser and browser version you are using.\r\nMacOS 14.2.1 (23C71), Version 17.2.1 (19617.1.17.11.12)\r\n\r\n**Additional context**\r\n* The problem does not appear for the test emails `usaaa@localhost/`, `admin@localhost/` or `[email protected]`.\n", "before_files": [{"content": "from django.db.models import Q\nfrom django.utils import timezone\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import viewsets\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.fields import SerializerMethodField\nfrom rest_framework.filters import SearchFilter\nfrom rest_framework.generics import RetrieveAPIView\nfrom rest_framework.mixins import RetrieveModelMixin\nfrom rest_framework.permissions import DjangoObjectPermissions\nfrom rest_framework.relations import SlugRelatedField\nfrom rest_framework.schemas.openapi import AutoSchema\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import GenericViewSet\nfrom rest_framework_guardian.filters import ObjectPermissionsFilter\n\nfrom ephios.api.views.events import ParticipationSerializer\nfrom ephios.core.models import LocalParticipation, Qualification, UserProfile\nfrom ephios.core.services.qualification import collect_all_included_qualifications\n\n\nclass QualificationSerializer(ModelSerializer):\n category = SlugRelatedField(slug_field=\"uuid\", read_only=True)\n includes = SerializerMethodField()\n\n class Meta:\n model = Qualification\n fields = [\n \"uuid\",\n \"title\",\n \"abbreviation\",\n \"category\",\n \"includes\",\n ]\n\n def get_includes(self, obj):\n return [q.uuid for q in collect_all_included_qualifications(obj.includes.all())]\n\n\nclass UserProfileSerializer(ModelSerializer):\n qualifications = SerializerMethodField()\n\n class Meta:\n model = UserProfile\n fields = [\n \"id\",\n \"display_name\",\n \"date_of_birth\",\n \"email\",\n \"qualifications\",\n ]\n\n def get_qualifications(self, obj):\n return QualificationSerializer(\n Qualification.objects.filter(\n Q(grants__user=obj)\n & (Q(grants__expires__gte=timezone.now()) | Q(grants__expires__isnull=True))\n ),\n many=True,\n ).data\n\n\nclass UserProfileMeView(RetrieveAPIView):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = 
[IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"ME_READ\"]\n schema = AutoSchema(operation_id_base=\"OwnUserProfile\")\n\n def get_object(self):\n if self.request.user is None:\n raise PermissionDenied()\n return self.request.user\n\n\nclass UserViewSet(viewsets.ReadOnlyModelViewSet):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n search_fields = [\"display_name\", \"email\"]\n\n filter_backends = [\n DjangoFilterBackend,\n SearchFilter,\n ObjectPermissionsFilter,\n ]\n\n\nclass UserByMailView(RetrieveModelMixin, GenericViewSet):\n serializer_class = UserProfileSerializer\n queryset = UserProfile.objects.all()\n permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoObjectPermissions]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n filter_backends = [ObjectPermissionsFilter]\n lookup_url_kwarg = \"email\"\n lookup_field = \"email\"\n schema = AutoSchema(operation_id_base=\"UserProfileByMail\")\n\n\nclass UserParticipationView(viewsets.ReadOnlyModelViewSet):\n serializer_class = ParticipationSerializer\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n filter_backends = [ObjectPermissionsFilter, DjangoFilterBackend]\n filterset_fields = [\"state\"]\n required_scopes = [\"CONFIDENTIAL_READ\"]\n\n def get_queryset(self):\n return LocalParticipation.objects.filter(user=self.kwargs.get(\"user\"))\n", "path": "ephios/api/views/users.py"}]} | 1,813 | 116 |
gh_patches_debug_8988 | rasdani/github-patches | git_diff | beeware__toga-1454 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Your first Toga app" (helloworld) does not work as shown in the docs
I copy-pasted the code [found here](https://toga.readthedocs.io/en/latest/tutorial/tutorial-0.html). When I ran it, I got this error message:
```
$ python -m helloworld
Traceback (most recent call last):
File "C:\Users\brendan\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\brendan\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\bjk\tmp\beeware-toga\helloworld.py", line 2, in <module>
from tutorial import __version__
ModuleNotFoundError: No module named 'tutorial'
```
If I comment out the line `from tutorial import __version__` and delete the kwarg `version=__version__` in the call to `toga.App`, the module does run; i.e., the GUI window pops up and seems to work. However, during the run, I get a warning:
```
$ python -m helloworld
C:\Users\brendan\AppData\Local\Programs\Python\Python39\lib\site-packages\clr_loader\wrappers.py:20: DeprecationWarning:
builtin type GC Offset Base has no __module__ attribute
return self._callable(ffi.cast("void*", buf_arr), len(buf_arr))
```
Maybe it's just out-of-date documentation?
P.S. FWIW, a straight copy-paste of the second tutorial, "[A slightly less toy example](https://toga.readthedocs.io/en/latest/tutorial/tutorial-1.html)" works as is, although it does produce the same DeprecationWarning.
P.P.S. Ditto the fourth tutorial, "[Let’s build a browser!](https://toga.readthedocs.io/en/latest/tutorial/tutorial-3.html)"
</issue>
<code>
[start of examples/tutorial0/tutorial/app.py]
1 import toga
2 from tutorial import __version__
3
4
5 def button_handler(widget):
6 print("hello")
7
8
9 def build(app):
10 box = toga.Box()
11
12 button = toga.Button('Hello world', on_press=button_handler)
13 button.style.padding = 50
14 button.style.flex = 1
15 box.add(button)
16
17 return box
18
19
20 def main():
21 return toga.App(
22 'First App',
23 'org.beeware.helloworld',
24 author='Tiberius Yak',
25 description="A testing app",
26 version=__version__,
27 home_page="https://beeware.org",
28 startup=build
29 )
30
31
32 if __name__ == '__main__':
33 main().main_loop()
34
[end of examples/tutorial0/tutorial/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/tutorial0/tutorial/app.py b/examples/tutorial0/tutorial/app.py
--- a/examples/tutorial0/tutorial/app.py
+++ b/examples/tutorial0/tutorial/app.py
@@ -1,5 +1,4 @@
import toga
-from tutorial import __version__
def button_handler(widget):
@@ -18,15 +17,7 @@
def main():
- return toga.App(
- 'First App',
- 'org.beeware.helloworld',
- author='Tiberius Yak',
- description="A testing app",
- version=__version__,
- home_page="https://beeware.org",
- startup=build
- )
+ return toga.App('First App', 'org.beeware.helloworld', startup=build)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/examples/tutorial0/tutorial/app.py b/examples/tutorial0/tutorial/app.py\n--- a/examples/tutorial0/tutorial/app.py\n+++ b/examples/tutorial0/tutorial/app.py\n@@ -1,5 +1,4 @@\n import toga\n-from tutorial import __version__\n \n \n def button_handler(widget):\n@@ -18,15 +17,7 @@\n \n \n def main():\n- return toga.App(\n- 'First App',\n- 'org.beeware.helloworld',\n- author='Tiberius Yak',\n- description=\"A testing app\",\n- version=__version__,\n- home_page=\"https://beeware.org\",\n- startup=build\n- )\n+ return toga.App('First App', 'org.beeware.helloworld', startup=build)\n \n \n if __name__ == '__main__':\n", "issue": "\"Your first Toga app\" (helloworld) does not work as shown in the docs\nI copy-pasted the code [found here](https://toga.readthedocs.io/en/latest/tutorial/tutorial-0.html). When I ran it, I got this error message:\r\n\r\n```\r\n$ python -m helloworld\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\brendan\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\brendan\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\bjk\\tmp\\beeware-toga\\helloworld.py\", line 2, in <module>\r\n from tutorial import __version__\r\nModuleNotFoundError: No module named 'tutorial'\r\n```\r\n\r\nIf I comment out the line `from tutorial import __version__` and delete the kwarg `version=__version__` in the call to `toga.App`, the module does run; i.e., the GUI window pops up and seems to work. However, during the run, I get a warning:\r\n\r\n```\r\n$ python -m helloworld\r\nC:\\Users\\brendan\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\clr_loader\\wrappers.py:20: DeprecationWarning:\r\n builtin type GC Offset Base has no __module__ attribute\r\n return self._callable(ffi.cast(\"void*\", buf_arr), len(buf_arr))\r\n```\r\n\r\nMaybe it's just out-of-date documentation?\r\n\r\nP.S. FWIW, a straight copy-paste of the second tutorial, \"[A slightly less toy example](https://toga.readthedocs.io/en/latest/tutorial/tutorial-1.html)\" works as is, although it does produce the same DeprecationWarning.\r\n\r\nP.P.S. Ditto the fourth tutorial, \"[Let\u2019s build a browser!](https://toga.readthedocs.io/en/latest/tutorial/tutorial-3.html)\"\n", "before_files": [{"content": "import toga\nfrom tutorial import __version__\n\n\ndef button_handler(widget):\n print(\"hello\")\n\n\ndef build(app):\n box = toga.Box()\n\n button = toga.Button('Hello world', on_press=button_handler)\n button.style.padding = 50\n button.style.flex = 1\n box.add(button)\n\n return box\n\n\ndef main():\n return toga.App(\n 'First App',\n 'org.beeware.helloworld',\n author='Tiberius Yak',\n description=\"A testing app\",\n version=__version__,\n home_page=\"https://beeware.org\",\n startup=build\n )\n\n\nif __name__ == '__main__':\n main().main_loop()\n", "path": "examples/tutorial0/tutorial/app.py"}]} | 1,214 | 175 |
gh_patches_debug_3755 | rasdani/github-patches | git_diff | dotkom__onlineweb4-486 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API should not show marks publicly
The API shows all marks for all users publicly. Should be unregistered from API if it is not utterly necessary by some client-side ajax call.
</issue>
<code>
[start of apps/api/v0/urls.py]
1 # -*- coding: utf-8 -*-
2
3 from django.conf.urls import patterns, url, include
4
5 from tastypie.api import Api
6
7 from apps.api.v0.article import ArticleResource, ArticleLatestResource
8 from apps.api.v0.authentication import UserResource
9 from apps.api.v0.events import EventResource, AttendanceEventResource, AttendeeResource, CompanyResource, CompanyEventResource
10 from apps.api.v0.marks import MarkResource, EntryResource, MyMarksResource, MyActiveMarksResource
11 from apps.api.v0.offline import IssueResource
12
13 v0_api = Api(api_name='v0')
14
15 # users
16 v0_api.register(UserResource())
17
18 # event
19 v0_api.register(EventResource())
20 v0_api.register(AttendanceEventResource())
21 v0_api.register(CompanyResource())
22 v0_api.register(CompanyEventResource())
23
24 # article
25 v0_api.register(ArticleResource())
26 v0_api.register(ArticleLatestResource())
27
28 # marks
29 v0_api.register(MarkResource())
30 v0_api.register(EntryResource())
31 v0_api.register(MyMarksResource())
32 v0_api.register(MyActiveMarksResource())
33
34 # offline
35 v0_api.register(IssueResource())
36
37 # Set the urls to be included.
38 urlpatterns = patterns('',
39 url(r'^', include(v0_api.urls)),
40 )
41
[end of apps/api/v0/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/api/v0/urls.py b/apps/api/v0/urls.py
--- a/apps/api/v0/urls.py
+++ b/apps/api/v0/urls.py
@@ -26,10 +26,10 @@
v0_api.register(ArticleLatestResource())
# marks
-v0_api.register(MarkResource())
-v0_api.register(EntryResource())
-v0_api.register(MyMarksResource())
-v0_api.register(MyActiveMarksResource())
+#v0_api.register(MarkResource())
+#v0_api.register(EntryResource())
+#v0_api.register(MyMarksResource())
+#v0_api.register(MyActiveMarksResource())
# offline
v0_api.register(IssueResource())
| {"golden_diff": "diff --git a/apps/api/v0/urls.py b/apps/api/v0/urls.py\n--- a/apps/api/v0/urls.py\n+++ b/apps/api/v0/urls.py\n@@ -26,10 +26,10 @@\n v0_api.register(ArticleLatestResource())\n \n # marks\n-v0_api.register(MarkResource())\n-v0_api.register(EntryResource())\n-v0_api.register(MyMarksResource())\n-v0_api.register(MyActiveMarksResource())\n+#v0_api.register(MarkResource())\n+#v0_api.register(EntryResource())\n+#v0_api.register(MyMarksResource())\n+#v0_api.register(MyActiveMarksResource())\n \n # offline\n v0_api.register(IssueResource())\n", "issue": "API should not show marks publicly\nThe API shows all marks for all users publicly. Should be unregistered from API if it is not utterly necessary by some client-side ajax call.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf.urls import patterns, url, include\n\nfrom tastypie.api import Api\n\nfrom apps.api.v0.article import ArticleResource, ArticleLatestResource\nfrom apps.api.v0.authentication import UserResource\nfrom apps.api.v0.events import EventResource, AttendanceEventResource, AttendeeResource, CompanyResource, CompanyEventResource\nfrom apps.api.v0.marks import MarkResource, EntryResource, MyMarksResource, MyActiveMarksResource\nfrom apps.api.v0.offline import IssueResource\n\nv0_api = Api(api_name='v0')\n\n# users\nv0_api.register(UserResource())\n\n# event\nv0_api.register(EventResource())\nv0_api.register(AttendanceEventResource())\nv0_api.register(CompanyResource())\nv0_api.register(CompanyEventResource())\n\n# article\nv0_api.register(ArticleResource())\nv0_api.register(ArticleLatestResource())\n\n# marks\nv0_api.register(MarkResource())\nv0_api.register(EntryResource())\nv0_api.register(MyMarksResource())\nv0_api.register(MyActiveMarksResource())\n\n# offline\nv0_api.register(IssueResource())\n\n# Set the urls to be included.\nurlpatterns = patterns('',\n url(r'^', include(v0_api.urls)),\n)\n", "path": "apps/api/v0/urls.py"}]} | 919 | 150 |
gh_patches_debug_5292 | rasdani/github-patches | git_diff | freedomofpress__securedrop-6881 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release SecureDrop 2.6.0
This is a tracking issue for the release of SecureDrop 2.6.0
Tentatively scheduled as follows:
**Pre-release announcement:** 06-15-2023
**Release date:** 06-22-2023
**Release manager:** @legoktm
**Deputy release manager:** @zenmonkeykstop
**Localization manager:** @cfm
**Communications manager:** @nathandyer
_SecureDrop maintainers and testers:_ As you QA 2.6.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them "QA: Release".
Test debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).
# [QA Matrix for 2.6.0](https://docs.google.com/spreadsheets/d/1j-F9e45O9TWkbWZVzKUdbyYIP69yfALzlDx9bR49UHI/edit#gid=361662860)
# [Test Plan for 2.6.0](https://github.com/freedomofpress/securedrop/wiki/2.6.0-Test-Plan)
# Prepare release candidate (2.6.0~rc1)
- [ ] Link to latest version of Tails, including release candidates, to test against during QA
- [x] Prepare 2.6.0~rc1 release changelog
- [x] Branch off release/2.6.0 from develop
- [x] Prepare 2.6.0
- [x] Build debs, preserving build log, and put up `2.6.0~rc1` on test apt server
- [x] Commit build log.
After each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.
# Final release
- [x] Ensure builder in release branch is updated and/or update builder image
- [x] Push signed tag
- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)
- [x] Build final Debian packages(and preserve build log)
- [x] Commit package build log to https://github.com/freedomofpress/build-logs
- [x] Pre-Flight: Test that install and upgrade from 2.5.2 to 2.6.0 works w/ prod repo debs (apt-qa.freedom.press polls the `release` branch in the LFS repo for the debs)
- [x] Flip apt QA server to prod status (merge to `main` in the LFS repo)
- [x] Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo
- [x] Prepare release messaging
# Post release
- [x] Create GitHub release object
- [x] Once release object is created, update versions in `securedrop-docs` and Wagtail
- [x] Verify new docs show up on https://docs.securedrop.org
- [x] Publish announcements
- [ ] Merge changelog back to `develop`
- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap
</issue>
<code>
[start of securedrop/version.py]
1 __version__ = "2.6.0~rc1"
2
[end of securedrop/version.py]
[start of securedrop/setup.py]
1 import setuptools
2
3 long_description = "The SecureDrop whistleblower platform."
4
5 setuptools.setup(
6 name="securedrop-app-code",
7 version="2.6.0~rc1",
8 author="Freedom of the Press Foundation",
9 author_email="[email protected]",
10 description="SecureDrop Server",
11 long_description=long_description,
12 long_description_content_type="text/markdown",
13 license="AGPLv3+",
14 python_requires=">=3.8",
15 url="https://github.com/freedomofpress/securedrop",
16 classifiers=[
17 "Development Status :: 5 - Stable",
18 "Programming Language :: Python :: 3",
19 "Topic :: Software Development :: Libraries :: Python Modules",
20 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
21 "Intended Audience :: Developers",
22 "Operating System :: OS Independent",
23 ],
24 )
25
[end of securedrop/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/setup.py b/securedrop/setup.py
--- a/securedrop/setup.py
+++ b/securedrop/setup.py
@@ -4,7 +4,7 @@
setuptools.setup(
name="securedrop-app-code",
- version="2.6.0~rc1",
+ version="2.7.0~rc1",
author="Freedom of the Press Foundation",
author_email="[email protected]",
description="SecureDrop Server",
diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = "2.6.0~rc1"
+__version__ = "2.7.0~rc1"
| {"golden_diff": "diff --git a/securedrop/setup.py b/securedrop/setup.py\n--- a/securedrop/setup.py\n+++ b/securedrop/setup.py\n@@ -4,7 +4,7 @@\n \n setuptools.setup(\n name=\"securedrop-app-code\",\n- version=\"2.6.0~rc1\",\n+ version=\"2.7.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\ndiff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = \"2.6.0~rc1\"\n+__version__ = \"2.7.0~rc1\"\n", "issue": "Release SecureDrop 2.6.0\nThis is a tracking issue for the release of SecureDrop 2.6.0\r\n\r\nTentatively scheduled as follows:\r\n\r\n**Pre-release announcement:** 06-15-2023\r\n**Release date:** 06-22-2023\r\n\r\n**Release manager:** @legoktm \r\n**Deputy release manager:** @zenmonkeykstop \r\n**Localization manager:** @cfm\r\n**Communications manager:** @nathandyer \r\n\r\n_SecureDrop maintainers and testers:_ As you QA 2.6.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them \"QA: Release\".\r\n\r\nTest debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).\r\n\r\n# [QA Matrix for 2.6.0](https://docs.google.com/spreadsheets/d/1j-F9e45O9TWkbWZVzKUdbyYIP69yfALzlDx9bR49UHI/edit#gid=361662860)\r\n# [Test Plan for 2.6.0](https://github.com/freedomofpress/securedrop/wiki/2.6.0-Test-Plan)\r\n\r\n# Prepare release candidate (2.6.0~rc1)\r\n- [ ] Link to latest version of Tails, including release candidates, to test against during QA\r\n- [x] Prepare 2.6.0~rc1 release changelog\r\n- [x] Branch off release/2.6.0 from develop\r\n- [x] Prepare 2.6.0\r\n- [x] Build debs, preserving build log, and put up `2.6.0~rc1` on test apt server\r\n- [x] Commit build log.\r\n\r\nAfter each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.\r\n\r\n# Final release\r\n- [x] Ensure builder in release branch is updated and/or update builder image\n- [x] Push signed tag \n- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)\n- [x] Build final Debian packages(and preserve build log)\n- [x] Commit package build log to https://github.com/freedomofpress/build-logs\n- [x] Pre-Flight: Test that install and upgrade from 2.5.2 to 2.6.0 works w/ prod repo debs (apt-qa.freedom.press polls the `release` branch in the LFS repo for the debs)\n- [x] Flip apt QA server to prod status (merge to `main` in the LFS repo)\n- [x] Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo\n- [x] Prepare release messaging\n\r\n# Post release\r\n- [x] Create GitHub release object \n- [x] Once release object is created, update versions in `securedrop-docs` and Wagtail\r\n- [x] Verify new docs show up on https://docs.securedrop.org\r\n- [x] Publish announcements\r\n- [ ] Merge changelog back to `develop`\r\n- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap\n", "before_files": [{"content": "__version__ = \"2.6.0~rc1\"\n", "path": "securedrop/version.py"}, {"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n 
name=\"securedrop-app-code\",\n version=\"2.6.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.8\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=[\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ],\n)\n", "path": "securedrop/setup.py"}]} | 1,628 | 176 |
gh_patches_debug_2507 | rasdani/github-patches | git_diff | spotify__luigi-1494 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 3.5 support
Luigi may already work with Python 3.5, but since the README doesn't mention it I thought I'd ask.
Does Luigi support Python 3.5?
</issue>
<code>
[start of setup.py]
1 # Copyright (c) 2012 Spotify AB
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may not
4 # use this file except in compliance with the License. You may obtain a copy of
5 # the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations under
13 # the License.
14
15 import os
16
17 from setuptools import setup
18
19
20 def get_static_files(path):
21 return [os.path.join(dirpath.replace("luigi/", ""), ext)
22 for (dirpath, dirnames, filenames) in os.walk(path)
23 for ext in ["*.html", "*.js", "*.css", "*.png",
24 "*.eot", "*.svg", "*.ttf", "*.woff", "*.woff2"]]
25
26
27 luigi_package_data = sum(map(get_static_files, ["luigi/static", "luigi/templates"]), [])
28
29 readme_note = """\
30 .. note::
31
32 For the latest source, discussion, etc, please visit the
33 `GitHub repository <https://github.com/spotify/luigi>`_\n\n
34 """
35
36 with open('README.rst') as fobj:
37 long_description = readme_note + fobj.read()
38
39 install_requires = [
40 'tornado>=4.0,<5',
41 'python-daemon<3.0',
42 ]
43
44 if os.environ.get('READTHEDOCS', None) == 'True':
45 # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla
46 install_requires.append('sqlalchemy')
47 # readthedocs don't like python-daemon, see #1342
48 install_requires.remove('python-daemon<3.0')
49
50 setup(
51 name='luigi',
52 version='2.0.1',
53 description='Workflow mgmgt + task scheduling + dependency resolution',
54 long_description=long_description,
55 author='Erik Bernhardsson',
56 url='https://github.com/spotify/luigi',
57 license='Apache License 2.0',
58 packages=[
59 'luigi',
60 'luigi.contrib',
61 'luigi.contrib.hdfs',
62 'luigi.tools'
63 ],
64 package_data={
65 'luigi': luigi_package_data
66 },
67 entry_points={
68 'console_scripts': [
69 'luigi = luigi.cmdline:luigi_run',
70 'luigid = luigi.cmdline:luigid',
71 'luigi-grep = luigi.tools.luigi_grep:main',
72 'luigi-deps = luigi.tools.deps:main',
73 'luigi-migrate = luigi.tools.migrate:main'
74 ]
75 },
76 install_requires=install_requires,
77 classifiers=[
78 'Development Status :: 5 - Production/Stable',
79 'Environment :: Console',
80 'Environment :: Web Environment',
81 'Intended Audience :: Developers',
82 'Intended Audience :: System Administrators',
83 'License :: OSI Approved :: Apache Software License',
84 'Programming Language :: Python :: 2.7',
85 'Programming Language :: Python :: 3.3',
86 'Programming Language :: Python :: 3.4',
87 'Topic :: System :: Monitoring',
88 ],
89 )
90
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -84,6 +84,7 @@
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
+ 'Programming Language :: Python :: 3.5',
'Topic :: System :: Monitoring',
],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -84,6 +84,7 @@\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n+ 'Programming Language :: Python :: 3.5',\n 'Topic :: System :: Monitoring',\n ],\n )\n", "issue": "Python 3.5 support\nLuigi may already work with Python 3.5, but since the README doesn't mention it I thought I'd ask.\n\nDoes Luigi support Python 3.5?\n\n", "before_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\n\nfrom setuptools import setup\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext)\n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\",\n \"*.eot\", \"*.svg\", \"*.ttf\", \"*.woff\", \"*.woff2\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\ninstall_requires = [\n 'tornado>=4.0,<5',\n 'python-daemon<3.0',\n]\n\nif os.environ.get('READTHEDOCS', None) == 'True':\n # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla\n install_requires.append('sqlalchemy')\n # readthedocs don't like python-daemon, see #1342\n install_requires.remove('python-daemon<3.0')\n\nsetup(\n name='luigi',\n version='2.0.1',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='Erik Bernhardsson',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.contrib',\n 'luigi.contrib.hdfs',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n entry_points={\n 'console_scripts': [\n 'luigi = luigi.cmdline:luigi_run',\n 'luigid = luigi.cmdline:luigid',\n 'luigi-grep = luigi.tools.luigi_grep:main',\n 'luigi-deps = luigi.tools.deps:main',\n 'luigi-migrate = luigi.tools.migrate:main'\n ]\n },\n install_requires=install_requires,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: System :: Monitoring',\n ],\n)\n", "path": "setup.py"}]} | 1,478 | 93 |
gh_patches_debug_12339 | rasdani/github-patches | git_diff | nextcloud__appstore-73 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nightly support
- nightlies don't have a separate version number but a flag:
```
curl -X POST -u "user:password" http://localhost:8000/api/v1/apps/releases -H "Content-Type: application/json" -d '{"download":"https://example.com/release.tar.gz", "nightly":true }'
```
- this is also listed in the "get all apps" API with a `nightly: true` attribute https://nextcloudappstore.readthedocs.io/en/latest/restapi.html#get-all-apps-and-releases
- upload of a new nightly will delete the previous one for that app
- this allows to upgrade to a nightly (needs to be invoked by the admin and can be undone -> next regular release of the app will be installed)
</issue>
<code>
[start of nextcloudappstore/core/api/v1/urls.py]
1 from django.conf.urls import url
2 from django.views.decorators.http import etag
3 from nextcloudappstore.core.api.v1.views import Apps, AppReleases, \
4 app_api_etag, Categories, category_api_etag
5
6 urlpatterns = [
7 url(r'^platform/(?P<version>\d+\.\d+\.\d+)/apps\.json$',
8 etag(app_api_etag)(Apps.as_view()), name='apps'),
9 url(r'^apps/releases/?$', AppReleases.as_view(),
10 name='app-release-create'),
11 url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),
12 url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\d+\.\d+\.\d+)/?$',
13 AppReleases.as_view(), name='app-release-delete'),
14 url(r'^categories.json$',
15 etag(category_api_etag)(Categories.as_view()), name='categories'),
16 ]
17
[end of nextcloudappstore/core/api/v1/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nextcloudappstore/core/api/v1/urls.py b/nextcloudappstore/core/api/v1/urls.py
--- a/nextcloudappstore/core/api/v1/urls.py
+++ b/nextcloudappstore/core/api/v1/urls.py
@@ -9,7 +9,8 @@
url(r'^apps/releases/?$', AppReleases.as_view(),
name='app-release-create'),
url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),
- url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\d+\.\d+\.\d+)/?$',
+ url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\d+\.\d+\.\d+'
+ r'(?:-nightly)?)/?$',
AppReleases.as_view(), name='app-release-delete'),
url(r'^categories.json$',
etag(category_api_etag)(Categories.as_view()), name='categories'),
| {"golden_diff": "diff --git a/nextcloudappstore/core/api/v1/urls.py b/nextcloudappstore/core/api/v1/urls.py\n--- a/nextcloudappstore/core/api/v1/urls.py\n+++ b/nextcloudappstore/core/api/v1/urls.py\n@@ -9,7 +9,8 @@\n url(r'^apps/releases/?$', AppReleases.as_view(),\n name='app-release-create'),\n url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),\n- url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\\d+\\.\\d+\\.\\d+)/?$',\n+ url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\\d+\\.\\d+\\.\\d+'\n+ r'(?:-nightly)?)/?$',\n AppReleases.as_view(), name='app-release-delete'),\n url(r'^categories.json$',\n etag(category_api_etag)(Categories.as_view()), name='categories'),\n", "issue": "Nightly support\n- nightlies don't have a separate version number but a flag:\n\n```\ncurl -X POST -u \"user:password\" http://localhost:8000/api/v1/apps/releases -H \"Content-Type: application/json\" -d '{\"download\":\"https://example.com/release.tar.gz\", \"nightly\":true }'\n```\n- this is also listed in the \"get all apps\" API with a `nightly: true` attribute https://nextcloudappstore.readthedocs.io/en/latest/restapi.html#get-all-apps-and-releases\n- upload of a new nightly will delete the previous one for that app\n- this allows to upgrade to a nightly (needs to be invoked by the admin and can be undone -> next regular release of the app will be installed)\n\n", "before_files": [{"content": "from django.conf.urls import url\nfrom django.views.decorators.http import etag\nfrom nextcloudappstore.core.api.v1.views import Apps, AppReleases, \\\n app_api_etag, Categories, category_api_etag\n\nurlpatterns = [\n url(r'^platform/(?P<version>\\d+\\.\\d+\\.\\d+)/apps\\.json$',\n etag(app_api_etag)(Apps.as_view()), name='apps'),\n url(r'^apps/releases/?$', AppReleases.as_view(),\n name='app-release-create'),\n url(r'^apps/(?P<pk>[a-z_]+)/?$', Apps.as_view(), name='app-delete'),\n url(r'^apps/(?P<app>[a-z_]+)/releases/(?P<version>\\d+\\.\\d+\\.\\d+)/?$',\n AppReleases.as_view(), name='app-release-delete'),\n url(r'^categories.json$',\n etag(category_api_etag)(Categories.as_view()), name='categories'),\n]\n", "path": "nextcloudappstore/core/api/v1/urls.py"}]} | 944 | 229 |
gh_patches_debug_19020 | rasdani/github-patches | git_diff | iterative__dvc-5888 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
list: Add --show-json (or similar flag)
In the vs code project we have a view that uses `dvc list . --dvc-only` to show all paths that are tracked by DVC in a tree view. Reasons for this view, some discussion around it and a short demo are shown here: https://github.com/iterative/vscode-dvc/issues/318.
At the moment we take the stdout from the command, split the string into a list (using `\n` as a delimiter) and then post process to work out whether or not the paths relate to files or directories. I can see from the output of the command that directories are already highlighted:

From the above I assume that the work to determine what the path is (file or dir) has already been done by the cli. Rather than working out this information again it would be ideal if the cli could pass us json that contains the aforementioned information.
This will reduce the amount of code required in the extension and should increase performance (ever so slightly).
Please let me know if any of the above is unclear.
Thanks
</issue>
<code>
[start of dvc/command/ls/__init__.py]
1 import argparse
2 import logging
3 import sys
4
5 from dvc.command import completion
6 from dvc.command.base import CmdBaseNoRepo, append_doc_link
7 from dvc.command.ls.ls_colors import LsColors
8 from dvc.exceptions import DvcException
9
10 logger = logging.getLogger(__name__)
11
12
13 def _prettify(entries, with_color=False):
14 if with_color:
15 ls_colors = LsColors()
16 fmt = ls_colors.format
17 else:
18
19 def fmt(entry):
20 return entry["path"]
21
22 return [fmt(entry) for entry in entries]
23
24
25 class CmdList(CmdBaseNoRepo):
26 def run(self):
27 from dvc.repo import Repo
28
29 try:
30 entries = Repo.ls(
31 self.args.url,
32 self.args.path,
33 rev=self.args.rev,
34 recursive=self.args.recursive,
35 dvc_only=self.args.dvc_only,
36 )
37 if entries:
38 entries = _prettify(entries, sys.stdout.isatty())
39 logger.info("\n".join(entries))
40 return 0
41 except DvcException:
42 logger.exception(f"failed to list '{self.args.url}'")
43 return 1
44
45
46 def add_parser(subparsers, parent_parser):
47 LIST_HELP = (
48 "List repository contents, including files"
49 " and directories tracked by DVC and by Git."
50 )
51 list_parser = subparsers.add_parser(
52 "list",
53 parents=[parent_parser],
54 description=append_doc_link(LIST_HELP, "list"),
55 help=LIST_HELP,
56 formatter_class=argparse.RawTextHelpFormatter,
57 )
58 list_parser.add_argument("url", help="Location of DVC repository to list")
59 list_parser.add_argument(
60 "-R",
61 "--recursive",
62 action="store_true",
63 help="Recursively list files.",
64 )
65 list_parser.add_argument(
66 "--dvc-only", action="store_true", help="Show only DVC outputs."
67 )
68 list_parser.add_argument(
69 "--rev",
70 nargs="?",
71 help="Git revision (e.g. SHA, branch, tag)",
72 metavar="<commit>",
73 )
74 list_parser.add_argument(
75 "path",
76 nargs="?",
77 help="Path to directory within the repository to list outputs for",
78 ).complete = completion.DIR
79 list_parser.set_defaults(func=CmdList)
80
[end of dvc/command/ls/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py
--- a/dvc/command/ls/__init__.py
+++ b/dvc/command/ls/__init__.py
@@ -34,7 +34,11 @@
recursive=self.args.recursive,
dvc_only=self.args.dvc_only,
)
- if entries:
+ if self.args.show_json:
+ import json
+
+ logger.info(json.dumps(entries))
+ elif entries:
entries = _prettify(entries, sys.stdout.isatty())
logger.info("\n".join(entries))
return 0
@@ -65,6 +69,9 @@
list_parser.add_argument(
"--dvc-only", action="store_true", help="Show only DVC outputs."
)
+ list_parser.add_argument(
+ "--show-json", action="store_true", help="Show output in JSON format."
+ )
list_parser.add_argument(
"--rev",
nargs="?",
| {"golden_diff": "diff --git a/dvc/command/ls/__init__.py b/dvc/command/ls/__init__.py\n--- a/dvc/command/ls/__init__.py\n+++ b/dvc/command/ls/__init__.py\n@@ -34,7 +34,11 @@\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n- if entries:\n+ if self.args.show_json:\n+ import json\n+\n+ logger.info(json.dumps(entries))\n+ elif entries:\n entries = _prettify(entries, sys.stdout.isatty())\n logger.info(\"\\n\".join(entries))\n return 0\n@@ -65,6 +69,9 @@\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n+ list_parser.add_argument(\n+ \"--show-json\", action=\"store_true\", help=\"Show output in JSON format.\"\n+ )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n", "issue": "list: Add --show-json (or similar flag)\nIn the vs code project we have a view that uses `dvc list . --dvc-only` to show all paths that are tracked by DVC in a tree view. Reasons for this view, some discussion around it and a short demo are shown here: https://github.com/iterative/vscode-dvc/issues/318.\r\n\r\nAt the moment we take the stdout from the command, split the string into a list (using `\\n` as a delimiter) and then post process to work out whether or not the paths relate to files or directories. I can see from the output of the command that directories are already highlighted: \r\n\r\n\r\n\r\nFrom the above I assume that the work to determine what the path is (file or dir) has already been done by the cli. Rather than working out this information again it would be ideal if the cli could pass us json that contains the aforementioned information.\r\n\r\nThis will reduce the amount of code required in the extension and should increase performance (ever so slightly).\r\n\r\nPlease let me know if any of the above is unclear.\r\n\r\nThanks\n", "before_files": [{"content": "import argparse\nimport logging\nimport sys\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.command.ls.ls_colors import LsColors\nfrom dvc.exceptions import DvcException\n\nlogger = logging.getLogger(__name__)\n\n\ndef _prettify(entries, with_color=False):\n if with_color:\n ls_colors = LsColors()\n fmt = ls_colors.format\n else:\n\n def fmt(entry):\n return entry[\"path\"]\n\n return [fmt(entry) for entry in entries]\n\n\nclass CmdList(CmdBaseNoRepo):\n def run(self):\n from dvc.repo import Repo\n\n try:\n entries = Repo.ls(\n self.args.url,\n self.args.path,\n rev=self.args.rev,\n recursive=self.args.recursive,\n dvc_only=self.args.dvc_only,\n )\n if entries:\n entries = _prettify(entries, sys.stdout.isatty())\n logger.info(\"\\n\".join(entries))\n return 0\n except DvcException:\n logger.exception(f\"failed to list '{self.args.url}'\")\n return 1\n\n\ndef add_parser(subparsers, parent_parser):\n LIST_HELP = (\n \"List repository contents, including files\"\n \" and directories tracked by DVC and by Git.\"\n )\n list_parser = subparsers.add_parser(\n \"list\",\n parents=[parent_parser],\n description=append_doc_link(LIST_HELP, \"list\"),\n help=LIST_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n list_parser.add_argument(\"url\", help=\"Location of DVC repository to list\")\n list_parser.add_argument(\n \"-R\",\n \"--recursive\",\n action=\"store_true\",\n help=\"Recursively list files.\",\n )\n list_parser.add_argument(\n \"--dvc-only\", action=\"store_true\", help=\"Show only DVC outputs.\"\n )\n list_parser.add_argument(\n \"--rev\",\n nargs=\"?\",\n help=\"Git revision (e.g. 
SHA, branch, tag)\",\n metavar=\"<commit>\",\n )\n list_parser.add_argument(\n \"path\",\n nargs=\"?\",\n help=\"Path to directory within the repository to list outputs for\",\n ).complete = completion.DIR\n list_parser.set_defaults(func=CmdList)\n", "path": "dvc/command/ls/__init__.py"}]} | 1,479 | 223 |
gh_patches_debug_3062 | rasdani/github-patches | git_diff | facebookresearch__hydra-1281 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release new version of Hydra
# ๐ Feature Request
I would like you to release Hydra that includes this PR: https://github.com/facebookresearch/hydra/pull/1197
## Motivation
currently I am using python 3.9 and I can't run Hydra due to a bug that is solved in above PR
</issue>
<code>
[start of hydra/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 # Source of truth for Hydra's version
4 __version__ = "1.0.4"
5 from hydra import utils
6 from hydra.errors import MissingConfigException
7 from hydra.main import main
8 from hydra.types import TaskFunction
9
10 __all__ = ["__version__", "MissingConfigException", "main", "utils", "TaskFunction"]
11
[end of hydra/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/__init__.py b/hydra/__init__.py
--- a/hydra/__init__.py
+++ b/hydra/__init__.py
@@ -1,7 +1,7 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# Source of truth for Hydra's version
-__version__ = "1.0.4"
+__version__ = "1.0.5"
from hydra import utils
from hydra.errors import MissingConfigException
from hydra.main import main
| {"golden_diff": "diff --git a/hydra/__init__.py b/hydra/__init__.py\n--- a/hydra/__init__.py\n+++ b/hydra/__init__.py\n@@ -1,7 +1,7 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n \n # Source of truth for Hydra's version\n-__version__ = \"1.0.4\"\n+__version__ = \"1.0.5\"\n from hydra import utils\n from hydra.errors import MissingConfigException\n from hydra.main import main\n", "issue": "Release new version of Hydra\n# \ud83d\ude80 Feature Request\r\n\r\nI would like you to release Hydra that includes this PR: https://github.com/facebookresearch/hydra/pull/1197\r\n\r\n## Motivation\r\n\r\ncurrently I am using python 3.9 and I can't run Hydra due to a bug that is solved in above PR\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n# Source of truth for Hydra's version\n__version__ = \"1.0.4\"\nfrom hydra import utils\nfrom hydra.errors import MissingConfigException\nfrom hydra.main import main\nfrom hydra.types import TaskFunction\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\", \"TaskFunction\"]\n", "path": "hydra/__init__.py"}]} | 718 | 123 |
gh_patches_debug_47930 | rasdani/github-patches | git_diff | liqd__a4-opin-614 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
project page header: more vertical space for byline
The byline in the project page's header area, which show's the project's organization is vertically too close to the headline of the project.

</issue>
<code>
[start of euth/organisations/views.py]
1 from django.views import generic
2
3 from . import models
4
5
6 class OrganisationDetailView(generic.DetailView):
7 model = models.Organisation
8
9 def visible_projects(self):
10 if self.request.user in self.object.initiators.all():
11 return self.object.project_set.all()
12 else:
13 return self.object.project_set.filter(is_draft=False)
14
15
16 class OrganisationListView(generic.ListView):
17 model = models.Organisation
18 paginate_by = 10
19
[end of euth/organisations/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/organisations/views.py b/euth/organisations/views.py
--- a/euth/organisations/views.py
+++ b/euth/organisations/views.py
@@ -15,4 +15,4 @@
class OrganisationListView(generic.ListView):
model = models.Organisation
- paginate_by = 10
+ paginate_by = 12
| {"golden_diff": "diff --git a/euth/organisations/views.py b/euth/organisations/views.py\n--- a/euth/organisations/views.py\n+++ b/euth/organisations/views.py\n@@ -15,4 +15,4 @@\n \n class OrganisationListView(generic.ListView):\n model = models.Organisation\n- paginate_by = 10\n+ paginate_by = 12\n", "issue": "project page header: more vertical space for byline\nThe byline in the project page\u2019s header area, which show\u2019s the project\u2019s organization is vertically too close to the headline of the project. \r\n\r\n\n", "before_files": [{"content": "from django.views import generic\n\nfrom . import models\n\n\nclass OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n\n def visible_projects(self):\n if self.request.user in self.object.initiators.all():\n return self.object.project_set.all()\n else:\n return self.object.project_set.filter(is_draft=False)\n\n\nclass OrganisationListView(generic.ListView):\n model = models.Organisation\n paginate_by = 10\n", "path": "euth/organisations/views.py"}]} | 791 | 87 |
gh_patches_debug_26170 | rasdani/github-patches | git_diff | zulip__zulip-22270 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
management: `rename_stream` management command does not work
`rename_stream` uses the `do_rename_stream` function to rename the stream. However, it accesses a non-existent attribute when calling it.
```
do_rename_stream(stream, new_name, self.user_profile) # self.user_profile does not exist
```
To replicate this, run:
```
python manage.py rename_stream Denmark bar -r zulip
```
and you should see:
```
AttributeError: 'Command' object has no attribute 'user_profile'
```
You might want to look at `zerver/management/commands/rename_stream.py` and `zerver/actions/streams.py`.
The fix should refactor `do_rename_stream` to accept `user_profile: Optional[UserProfile]` with the `None` default, and correctly handle what should happen for the notification message that might be sent when the stream is renamed (which currently mentions the name of the acting user that renames it).
</issue>
<code>
[start of zerver/management/commands/rename_stream.py]
1 from argparse import ArgumentParser
2 from typing import Any
3
4 from zerver.actions.streams import do_rename_stream
5 from zerver.lib.management import ZulipBaseCommand
6 from zerver.models import get_stream
7
8
9 class Command(ZulipBaseCommand):
10 help = """Change the stream name for a realm."""
11
12 def add_arguments(self, parser: ArgumentParser) -> None:
13 parser.add_argument("old_name", metavar="<old name>", help="name of stream to be renamed")
14 parser.add_argument(
15 "new_name", metavar="<new name>", help="new name to rename the stream to"
16 )
17 self.add_realm_args(parser, required=True)
18
19 def handle(self, *args: Any, **options: str) -> None:
20 realm = self.get_realm(options)
21 assert realm is not None # Should be ensured by parser
22 old_name = options["old_name"]
23 new_name = options["new_name"]
24
25 stream = get_stream(old_name, realm)
26 do_rename_stream(stream, new_name, self.user_profile)
27
[end of zerver/management/commands/rename_stream.py]
[start of zilencer/management/commands/migrate_stream_notifications.py]
1 from typing import Any
2
3 from django.core.management.base import BaseCommand
4
5 from zerver.models import Subscription
6
7
8 class Command(BaseCommand):
9 help = """One-off script to migration users' stream notification settings."""
10
11 def handle(self, *args: Any, **options: Any) -> None:
12 for subscription in Subscription.objects.all():
13 subscription.desktop_notifications = subscription.notifications
14 subscription.audible_notifications = subscription.notifications
15 subscription.save(update_fields=["desktop_notifications", "audible_notifications"])
16
[end of zilencer/management/commands/migrate_stream_notifications.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/management/commands/rename_stream.py b/zerver/management/commands/rename_stream.py
deleted file mode 100644
--- a/zerver/management/commands/rename_stream.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from argparse import ArgumentParser
-from typing import Any
-
-from zerver.actions.streams import do_rename_stream
-from zerver.lib.management import ZulipBaseCommand
-from zerver.models import get_stream
-
-
-class Command(ZulipBaseCommand):
- help = """Change the stream name for a realm."""
-
- def add_arguments(self, parser: ArgumentParser) -> None:
- parser.add_argument("old_name", metavar="<old name>", help="name of stream to be renamed")
- parser.add_argument(
- "new_name", metavar="<new name>", help="new name to rename the stream to"
- )
- self.add_realm_args(parser, required=True)
-
- def handle(self, *args: Any, **options: str) -> None:
- realm = self.get_realm(options)
- assert realm is not None # Should be ensured by parser
- old_name = options["old_name"]
- new_name = options["new_name"]
-
- stream = get_stream(old_name, realm)
- do_rename_stream(stream, new_name, self.user_profile)
diff --git a/zilencer/management/commands/migrate_stream_notifications.py b/zilencer/management/commands/migrate_stream_notifications.py
deleted file mode 100644
--- a/zilencer/management/commands/migrate_stream_notifications.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from typing import Any
-
-from django.core.management.base import BaseCommand
-
-from zerver.models import Subscription
-
-
-class Command(BaseCommand):
- help = """One-off script to migration users' stream notification settings."""
-
- def handle(self, *args: Any, **options: Any) -> None:
- for subscription in Subscription.objects.all():
- subscription.desktop_notifications = subscription.notifications
- subscription.audible_notifications = subscription.notifications
- subscription.save(update_fields=["desktop_notifications", "audible_notifications"])
| {"golden_diff": "diff --git a/zerver/management/commands/rename_stream.py b/zerver/management/commands/rename_stream.py\ndeleted file mode 100644\n--- a/zerver/management/commands/rename_stream.py\n+++ /dev/null\n@@ -1,26 +0,0 @@\n-from argparse import ArgumentParser\n-from typing import Any\n-\n-from zerver.actions.streams import do_rename_stream\n-from zerver.lib.management import ZulipBaseCommand\n-from zerver.models import get_stream\n-\n-\n-class Command(ZulipBaseCommand):\n- help = \"\"\"Change the stream name for a realm.\"\"\"\n-\n- def add_arguments(self, parser: ArgumentParser) -> None:\n- parser.add_argument(\"old_name\", metavar=\"<old name>\", help=\"name of stream to be renamed\")\n- parser.add_argument(\n- \"new_name\", metavar=\"<new name>\", help=\"new name to rename the stream to\"\n- )\n- self.add_realm_args(parser, required=True)\n-\n- def handle(self, *args: Any, **options: str) -> None:\n- realm = self.get_realm(options)\n- assert realm is not None # Should be ensured by parser\n- old_name = options[\"old_name\"]\n- new_name = options[\"new_name\"]\n-\n- stream = get_stream(old_name, realm)\n- do_rename_stream(stream, new_name, self.user_profile)\ndiff --git a/zilencer/management/commands/migrate_stream_notifications.py b/zilencer/management/commands/migrate_stream_notifications.py\ndeleted file mode 100644\n--- a/zilencer/management/commands/migrate_stream_notifications.py\n+++ /dev/null\n@@ -1,15 +0,0 @@\n-from typing import Any\n-\n-from django.core.management.base import BaseCommand\n-\n-from zerver.models import Subscription\n-\n-\n-class Command(BaseCommand):\n- help = \"\"\"One-off script to migration users' stream notification settings.\"\"\"\n-\n- def handle(self, *args: Any, **options: Any) -> None:\n- for subscription in Subscription.objects.all():\n- subscription.desktop_notifications = subscription.notifications\n- subscription.audible_notifications = subscription.notifications\n- subscription.save(update_fields=[\"desktop_notifications\", \"audible_notifications\"])\n", "issue": "management: `rename_stream` management command does not work\n`rename_stream` uses the `do_rename_stream` function to rename the stream. 
However, it accesses a non-existent attribute when calling it.\r\n\r\n```\r\ndo_rename_stream(stream, new_name, self.user_profile) # self.user_profile does not exist\r\n```\r\n\r\nTo replicate this, run:\r\n```\r\npython manage.py rename_stream Denmark bar -r zulip\r\n```\r\nand you should see:\r\n```\r\nAttributeError: 'Command' object has no attribute 'user_profile'\r\n```\r\nYou might want to look at `zerver/management/commands/rename_stream.py` and `zerver/actions/streams.py`.\r\n\r\nThe fix should refactor `do_rename_stream` to accept `user_profile: Optional[UserProfile]` with the `None` default, and correctly handle what should happen for the notification message that might be sent when the stream is renamed (which currently mentions the name of the acting user that renames it).\n", "before_files": [{"content": "from argparse import ArgumentParser\nfrom typing import Any\n\nfrom zerver.actions.streams import do_rename_stream\nfrom zerver.lib.management import ZulipBaseCommand\nfrom zerver.models import get_stream\n\n\nclass Command(ZulipBaseCommand):\n help = \"\"\"Change the stream name for a realm.\"\"\"\n\n def add_arguments(self, parser: ArgumentParser) -> None:\n parser.add_argument(\"old_name\", metavar=\"<old name>\", help=\"name of stream to be renamed\")\n parser.add_argument(\n \"new_name\", metavar=\"<new name>\", help=\"new name to rename the stream to\"\n )\n self.add_realm_args(parser, required=True)\n\n def handle(self, *args: Any, **options: str) -> None:\n realm = self.get_realm(options)\n assert realm is not None # Should be ensured by parser\n old_name = options[\"old_name\"]\n new_name = options[\"new_name\"]\n\n stream = get_stream(old_name, realm)\n do_rename_stream(stream, new_name, self.user_profile)\n", "path": "zerver/management/commands/rename_stream.py"}, {"content": "from typing import Any\n\nfrom django.core.management.base import BaseCommand\n\nfrom zerver.models import Subscription\n\n\nclass Command(BaseCommand):\n help = \"\"\"One-off script to migration users' stream notification settings.\"\"\"\n\n def handle(self, *args: Any, **options: Any) -> None:\n for subscription in Subscription.objects.all():\n subscription.desktop_notifications = subscription.notifications\n subscription.audible_notifications = subscription.notifications\n subscription.save(update_fields=[\"desktop_notifications\", \"audible_notifications\"])\n", "path": "zilencer/management/commands/migrate_stream_notifications.py"}]} | 1,174 | 487 |
gh_patches_debug_12810 | rasdani/github-patches | git_diff | joke2k__faker-626 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Uk mobile number
It seems like the uk mobile number is not in the right format
it's completely not valid
some examples of them:
+44(0)9128 405119
(01414) 35336
01231052134
</issue>
<code>
[start of faker/providers/phone_number/en_GB/__init__.py]
1 from __future__ import unicode_literals
2 from .. import Provider as PhoneNumberProvider
3
4
5 class Provider(PhoneNumberProvider):
6 formats = (
7 '+44(0)##########',
8 '+44(0)#### ######',
9 '+44(0)#########',
10 '+44(0)#### #####',
11 '0##########',
12 '0#########',
13 '0#### ######',
14 '0#### #####',
15 '(0####) ######',
16 '(0####) #####',
17 )
18
[end of faker/providers/phone_number/en_GB/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/providers/phone_number/en_GB/__init__.py b/faker/providers/phone_number/en_GB/__init__.py
--- a/faker/providers/phone_number/en_GB/__init__.py
+++ b/faker/providers/phone_number/en_GB/__init__.py
@@ -3,6 +3,15 @@
class Provider(PhoneNumberProvider):
+ # Source: https://en.wikipedia.org/wiki/Telephone_numbers_in_the_United_Kingdom
+
+ cellphone_formats = (
+ '+44 7### ######',
+ '+44 7#########',
+ '07### ######',
+ '07#########',
+ )
+
formats = (
'+44(0)##########',
'+44(0)#### ######',
@@ -15,3 +24,7 @@
'(0####) ######',
'(0####) #####',
)
+
+ def cellphone_number(self):
+ pattern = self.random_element(self.cellphone_formats)
+ return self.numerify(self.generator.parse(pattern))
| {"golden_diff": "diff --git a/faker/providers/phone_number/en_GB/__init__.py b/faker/providers/phone_number/en_GB/__init__.py\n--- a/faker/providers/phone_number/en_GB/__init__.py\n+++ b/faker/providers/phone_number/en_GB/__init__.py\n@@ -3,6 +3,15 @@\n \n \n class Provider(PhoneNumberProvider):\n+ # Source: https://en.wikipedia.org/wiki/Telephone_numbers_in_the_United_Kingdom\n+\n+ cellphone_formats = (\n+ '+44 7### ######',\n+ '+44 7#########',\n+ '07### ######',\n+ '07#########',\n+ )\n+\n formats = (\n '+44(0)##########',\n '+44(0)#### ######',\n@@ -15,3 +24,7 @@\n '(0####) ######',\n '(0####) #####',\n )\n+\n+ def cellphone_number(self):\n+ pattern = self.random_element(self.cellphone_formats)\n+ return self.numerify(self.generator.parse(pattern))\n", "issue": "Uk mobile number\nIt seems like the uk mobile number is not in the right format \r\nit's completely not valid\r\nsome examples of them: \r\n+44(0)9128 405119\r\n(01414) 35336\r\n01231052134\nUk mobile number\nIt seems like the uk mobile number is not in the right format \r\nit's completely not valid\r\nsome examples of them: \r\n+44(0)9128 405119\r\n(01414) 35336\r\n01231052134\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom .. import Provider as PhoneNumberProvider\n\n\nclass Provider(PhoneNumberProvider):\n formats = (\n '+44(0)##########',\n '+44(0)#### ######',\n '+44(0)#########',\n '+44(0)#### #####',\n '0##########',\n '0#########',\n '0#### ######',\n '0#### #####',\n '(0####) ######',\n '(0####) #####',\n )\n", "path": "faker/providers/phone_number/en_GB/__init__.py"}]} | 836 | 236 |
gh_patches_debug_3977 | rasdani/github-patches | git_diff | activeloopai__deeplake-1513 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Pytorch dataloader because of transforms and shuffle=false
## 🐛🐛 Bug Report
### ⚗️ Current Behavior
A clear and concise description of the behavior.
```python
import hub
ds = hub.load("hub://activeloop/mnist-test")
dataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={"images": None, "labels": None})
for (images, labels) in dataloader:
print(images.shape, labels.shape)
break
```
```
Opening dataset in read-only mode as you don't have write permissions.
hub://activeloop/mnist-test loaded successfully.
This dataset can be visualized at https://app.activeloop.ai/activeloop/mnist-test.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-21-22b652d8dbed>](https://localhost:8080/#) in <module>()
4 dataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={"images": None, "labels": None})
5 for (images, labels) in dataloader:
----> 6 print(images.shape, labels.shape)
7 break
AttributeError: 'str' object has no attribute 'shape'
```
but when you remove the argument `transform` from dataloader that script works.
### ⚙️ Environment
- Google colab
</issue>
<code>
[start of hub/integrations/pytorch/common.py]
1 from typing import Callable, Dict, List, Optional
2 from hub.util.iterable_ordered_dict import IterableOrderedDict
3 import numpy as np
4
5
6 def collate_fn(batch):
7 import torch
8
9 elem = batch[0]
10
11 if isinstance(elem, IterableOrderedDict):
12 return IterableOrderedDict(
13 (key, collate_fn([d[key] for d in batch])) for key in elem.keys()
14 )
15
16 if isinstance(elem, np.ndarray) and elem.size > 0 and isinstance(elem[0], str):
17 batch = [it[0] for it in batch]
18 return torch.utils.data._utils.collate.default_collate(batch)
19
20
21 def convert_fn(data):
22 import torch
23
24 if isinstance(data, IterableOrderedDict):
25 return IterableOrderedDict((k, convert_fn(v)) for k, v in data.items())
26 if isinstance(data, np.ndarray) and data.size > 0 and isinstance(data[0], str):
27 data = data[0]
28
29 return torch.utils.data._utils.collate.default_convert(data)
30
31
32 class PytorchTransformFunction:
33 def __init__(
34 self,
35 transform_dict: Optional[Dict[str, Optional[Callable]]] = None,
36 composite_transform: Optional[Callable] = None,
37 tensors: List[str] = None,
38 ) -> None:
39 self.composite_transform = composite_transform
40 self.transform_dict = transform_dict
41 tensors = tensors or []
42
43 if transform_dict is not None:
44 for tensor in transform_dict:
45 if tensor not in tensors:
46 raise ValueError(f"Invalid transform. Tensor {tensor} not found.")
47
48 def __call__(self, data_in: Dict) -> Dict:
49 if self.composite_transform is not None:
50 return self.composite_transform(data_in)
51 elif self.transform_dict is not None:
52 data_out = {}
53 for tensor, fn in self.transform_dict.items():
54 value = data_in[tensor]
55 data_out[tensor] = value if fn is None else fn(value)
56 return data_out
57 return data_in
58
[end of hub/integrations/pytorch/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hub/integrations/pytorch/common.py b/hub/integrations/pytorch/common.py
--- a/hub/integrations/pytorch/common.py
+++ b/hub/integrations/pytorch/common.py
@@ -53,5 +53,6 @@
for tensor, fn in self.transform_dict.items():
value = data_in[tensor]
data_out[tensor] = value if fn is None else fn(value)
+ data_out = IterableOrderedDict(data_out)
return data_out
return data_in
| {"golden_diff": "diff --git a/hub/integrations/pytorch/common.py b/hub/integrations/pytorch/common.py\n--- a/hub/integrations/pytorch/common.py\n+++ b/hub/integrations/pytorch/common.py\n@@ -53,5 +53,6 @@\n for tensor, fn in self.transform_dict.items():\n value = data_in[tensor]\n data_out[tensor] = value if fn is None else fn(value)\n+ data_out = IterableOrderedDict(data_out)\n return data_out\n return data_in\n", "issue": "[BUG] Pytorch dataloader because of transforms and shuffle=false\n## \ud83d\udc1b\ud83d\udc1b Bug Report\r\n\r\n\r\n### \u2697\ufe0f Current Behavior\r\nA clear and concise description of the behavior.\r\n\r\n```python\r\nimport hub\r\nds = hub.load(\"hub://activeloop/mnist-test\")\r\n\r\ndataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={\"images\": None, \"labels\": None})\r\nfor (images, labels) in dataloader:\r\n print(images.shape, labels.shape)\r\n break\r\n```\r\n\r\n```\r\nOpening dataset in read-only mode as you don't have write permissions.\r\nhub://activeloop/mnist-test loaded successfully.\r\nThis dataset can be visualized at https://app.activeloop.ai/activeloop/mnist-test.\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-21-22b652d8dbed>](https://localhost:8080/#) in <module>()\r\n 4 dataloader = ds.pytorch(batch_size=2, num_workers=2, shuffle=False, transform={\"images\": None, \"labels\": None})\r\n 5 for (images, labels) in dataloader:\r\n----> 6 print(images.shape, labels.shape)\r\n 7 break\r\n\r\nAttributeError: 'str' object has no attribute 'shape'\r\n```\r\n\r\nbut when you remove the argument `transform` from dataloader that script works.\r\n\r\n### \u2699\ufe0f Environment\r\n\r\n- Google colab\r\n\n", "before_files": [{"content": "from typing import Callable, Dict, List, Optional\nfrom hub.util.iterable_ordered_dict import IterableOrderedDict\nimport numpy as np\n\n\ndef collate_fn(batch):\n import torch\n\n elem = batch[0]\n\n if isinstance(elem, IterableOrderedDict):\n return IterableOrderedDict(\n (key, collate_fn([d[key] for d in batch])) for key in elem.keys()\n )\n\n if isinstance(elem, np.ndarray) and elem.size > 0 and isinstance(elem[0], str):\n batch = [it[0] for it in batch]\n return torch.utils.data._utils.collate.default_collate(batch)\n\n\ndef convert_fn(data):\n import torch\n\n if isinstance(data, IterableOrderedDict):\n return IterableOrderedDict((k, convert_fn(v)) for k, v in data.items())\n if isinstance(data, np.ndarray) and data.size > 0 and isinstance(data[0], str):\n data = data[0]\n\n return torch.utils.data._utils.collate.default_convert(data)\n\n\nclass PytorchTransformFunction:\n def __init__(\n self,\n transform_dict: Optional[Dict[str, Optional[Callable]]] = None,\n composite_transform: Optional[Callable] = None,\n tensors: List[str] = None,\n ) -> None:\n self.composite_transform = composite_transform\n self.transform_dict = transform_dict\n tensors = tensors or []\n\n if transform_dict is not None:\n for tensor in transform_dict:\n if tensor not in tensors:\n raise ValueError(f\"Invalid transform. 
Tensor {tensor} not found.\")\n\n def __call__(self, data_in: Dict) -> Dict:\n if self.composite_transform is not None:\n return self.composite_transform(data_in)\n elif self.transform_dict is not None:\n data_out = {}\n for tensor, fn in self.transform_dict.items():\n value = data_in[tensor]\n data_out[tensor] = value if fn is None else fn(value)\n return data_out\n return data_in\n", "path": "hub/integrations/pytorch/common.py"}]} | 1,407 | 117 |
gh_patches_debug_2155 | rasdani/github-patches | git_diff | wright-group__WrightTools-878 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pcov TypeError in kit._leastsq
In kit._leastsq, if the line 62 if statement is not passed, the consequent else statement makes pcov data type float, triggering"TypeError: 'int' object is not subscriptable" in line 72-73:
72: try:
73: error.append(np.absolute(pcov[i][i]) ** 0.5)
Line 74 picks up index out of bound errors, not sure if it was meant to catch the type error.
74: except IndexError:
75: error.append(0.00)
Error is bypassed if I put a 2D array into line 68, but have not spent the time considering what this array should look like.
</issue>
<code>
[start of WrightTools/kit/_leastsq.py]
1 """Least-square fitting tools."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 from ._utilities import Timer
8
9 import numpy as np
10
11 from scipy import optimize as scipy_optimize
12
13
14 # --- define --------------------------------------------------------------------------------------
15
16
17 __all__ = ["leastsqfitter"]
18
19
20 # --- functions -----------------------------------------------------------------------------------
21
22
23 def leastsqfitter(p0, datax, datay, function, verbose=False, cov_verbose=False):
24 """Conveniently call scipy.optmize.leastsq().
25
26 Returns fit parameters and their errors.
27
28 Parameters
29 ----------
30 p0 : list
31 list of guess parameters to pass to function
32 datax : array
33 array of independent values
34 datay : array
35 array of dependent values
36 function : function
37 function object to fit data to. Must be of the callable form function(p, x)
38 verbose : bool
39 toggles printing of fit time, fit params, and fit param errors
40 cov_verbose : bool
41 toggles printing of covarience matrix
42
43 Returns
44 -------
45 pfit_leastsq : list
46 list of fit parameters. s.t. the error between datay and function(p, datax) is minimized
47 perr_leastsq : list
48 list of fit parameter errors (1 std)
49 """
50 timer = Timer(verbose=False)
51 with timer:
52 # define error function
53 def errfunc(p, x, y):
54 return y - function(p, x)
55
56 # run optimization
57 pfit_leastsq, pcov, infodict, errmsg, success = scipy_optimize.leastsq(
58 errfunc, p0, args=(datax, datay), full_output=1, epsfcn=0.0001
59 )
60 # calculate covarience matrix
61 # original idea https://stackoverflow.com/a/21844726
62 if (len(datay) > len(p0)) and pcov is not None:
63 s_sq = (errfunc(pfit_leastsq, datax, datay) ** 2).sum() / (len(datay) - len(p0))
64 pcov = pcov * s_sq
65 if cov_verbose:
66 print(pcov)
67 else:
68 pcov = np.inf
69 # calculate and write errors
70 error = []
71 for i in range(len(pfit_leastsq)):
72 try:
73 error.append(np.absolute(pcov[i][i]) ** 0.5)
74 except IndexError:
75 error.append(0.00)
76 perr_leastsq = np.array(error)
77 # exit
78 if verbose:
79 print("fit params: ", pfit_leastsq)
80 print("fit params error: ", perr_leastsq)
81 print("fitting done in %f seconds" % timer.interval)
82 return pfit_leastsq, perr_leastsq
83
[end of WrightTools/kit/_leastsq.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/WrightTools/kit/_leastsq.py b/WrightTools/kit/_leastsq.py
--- a/WrightTools/kit/_leastsq.py
+++ b/WrightTools/kit/_leastsq.py
@@ -65,7 +65,7 @@
if cov_verbose:
print(pcov)
else:
- pcov = np.inf
+ pcov = np.array(np.inf)
# calculate and write errors
error = []
for i in range(len(pfit_leastsq)):
| {"golden_diff": "diff --git a/WrightTools/kit/_leastsq.py b/WrightTools/kit/_leastsq.py\n--- a/WrightTools/kit/_leastsq.py\n+++ b/WrightTools/kit/_leastsq.py\n@@ -65,7 +65,7 @@\n if cov_verbose:\n print(pcov)\n else:\n- pcov = np.inf\n+ pcov = np.array(np.inf)\n # calculate and write errors\n error = []\n for i in range(len(pfit_leastsq)):\n", "issue": "pcov TypeError in kit._leastsq\nIn kit._leastsq, if the line 62 if statement is not passed, the consequent else statement makes pcov data type float, triggering\"TypeError: 'int' object is not subscriptable\" in line 72-73:\r\n\r\n72: try:\r\n73: error.append(np.absolute(pcov[i][i]) ** 0.5)\r\n\r\nLine 74 picks up index out of bound errors, not sure if it was meant to catch the type error.\r\n\r\n74: except IndexError:\r\n75: error.append(0.00)\r\n\r\nError is bypassed if I put a 2D array into line 68, but have not spent the time considering what this array should look like.\n", "before_files": [{"content": "\"\"\"Least-square fitting tools.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nfrom ._utilities import Timer\n\nimport numpy as np\n\nfrom scipy import optimize as scipy_optimize\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = [\"leastsqfitter\"]\n\n\n# --- functions -----------------------------------------------------------------------------------\n\n\ndef leastsqfitter(p0, datax, datay, function, verbose=False, cov_verbose=False):\n \"\"\"Conveniently call scipy.optmize.leastsq().\n\n Returns fit parameters and their errors.\n\n Parameters\n ----------\n p0 : list\n list of guess parameters to pass to function\n datax : array\n array of independent values\n datay : array\n array of dependent values\n function : function\n function object to fit data to. Must be of the callable form function(p, x)\n verbose : bool\n toggles printing of fit time, fit params, and fit param errors\n cov_verbose : bool\n toggles printing of covarience matrix\n\n Returns\n -------\n pfit_leastsq : list\n list of fit parameters. s.t. the error between datay and function(p, datax) is minimized\n perr_leastsq : list\n list of fit parameter errors (1 std)\n \"\"\"\n timer = Timer(verbose=False)\n with timer:\n # define error function\n def errfunc(p, x, y):\n return y - function(p, x)\n\n # run optimization\n pfit_leastsq, pcov, infodict, errmsg, success = scipy_optimize.leastsq(\n errfunc, p0, args=(datax, datay), full_output=1, epsfcn=0.0001\n )\n # calculate covarience matrix\n # original idea https://stackoverflow.com/a/21844726\n if (len(datay) > len(p0)) and pcov is not None:\n s_sq = (errfunc(pfit_leastsq, datax, datay) ** 2).sum() / (len(datay) - len(p0))\n pcov = pcov * s_sq\n if cov_verbose:\n print(pcov)\n else:\n pcov = np.inf\n # calculate and write errors\n error = []\n for i in range(len(pfit_leastsq)):\n try:\n error.append(np.absolute(pcov[i][i]) ** 0.5)\n except IndexError:\n error.append(0.00)\n perr_leastsq = np.array(error)\n # exit\n if verbose:\n print(\"fit params: \", pfit_leastsq)\n print(\"fit params error: \", perr_leastsq)\n print(\"fitting done in %f seconds\" % timer.interval)\n return pfit_leastsq, perr_leastsq\n", "path": "WrightTools/kit/_leastsq.py"}]} | 1,487 | 119 |
gh_patches_debug_20829 | rasdani/github-patches | git_diff | litestar-org__litestar-1114 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation: "older version" warning present on latest
Every page under https://starlite-api.github.io/starlite/latest/ has the "You are viewing the documentation for an older version of Starlite. Click here to get to the latest version" warning, which links back to the welcome page.
The message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.
Documentation: "older version" warning present on latest
Every page under https://starlite-api.github.io/starlite/latest/ has the "You are viewing the documentation for an older version of Starlite. Click here to get to the latest version" warning, which links back to the welcome page.
The message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.
</issue>
<code>
[start of tools/publish_docs.py]
1 import importlib.metadata
2 import json
3 import shutil
4 import subprocess
5 from pathlib import Path
6 import argparse
7 import shutil
8
9 parser = argparse.ArgumentParser()
10 parser.add_argument("--version", required=False)
11 parser.add_argument("--push", action="store_true")
12 parser.add_argument("--latest", action="store_true")
13
14
15 def update_versions_file(version: str) -> None:
16 versions_file = Path("versions.json")
17 versions = []
18 if versions_file.exists():
19 versions = json.loads(versions_file.read_text())
20
21 new_version_spec = {"version": version, "title": version, "aliases": []}
22 if any(v["version"] == version for v in versions):
23 versions = [v if v["version"] != version else new_version_spec for v in versions]
24 else:
25 versions.insert(0, new_version_spec)
26
27 versions_file.write_text(json.dumps(versions))
28
29
30 def make_version(version: str, push: bool, latest: bool) -> None:
31 subprocess.run(["make", "docs"], check=True)
32
33 subprocess.run(["git", "checkout", "gh-pages"], check=True)
34
35 update_versions_file(version)
36
37 docs_src_path = Path("docs/_build/html")
38 docs_dest_path = Path(version)
39 docs_dest_path_latest = Path("latest")
40 if docs_dest_path.exists():
41 shutil.rmtree(docs_dest_path)
42
43 docs_src_path.rename(docs_dest_path)
44 if latest:
45 if docs_dest_path_latest.exists():
46 shutil.rmtree(docs_dest_path_latest)
47 shutil.copytree(docs_dest_path, docs_dest_path_latest)
48 subprocess.run(["git", "add", "latest"], check=True)
49
50 subprocess.run(["git", "add", version], check=True)
51 subprocess.run(["git", "add", "versions.json"], check=True)
52 subprocess.run(["git", "commit", "-m", f"automated docs build: {version}"], check=True)
53 if push:
54 subprocess.run(["git", "push"], check=True)
55 subprocess.run(["git", "checkout", "-"], check=True)
56
57
58 def main() -> None:
59 args = parser.parse_args()
60 version = args.version or importlib.metadata.version("starlite").rsplit(".", 1)[0]
61 make_version(version=version, push=args.push, latest=args.latest)
62
63
64 if __name__ == "__main__":
65 main()
66
[end of tools/publish_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/publish_docs.py b/tools/publish_docs.py
--- a/tools/publish_docs.py
+++ b/tools/publish_docs.py
@@ -12,7 +12,7 @@
parser.add_argument("--latest", action="store_true")
-def update_versions_file(version: str) -> None:
+def add_to_versions_file(version: str, latest: bool) -> None:
versions_file = Path("versions.json")
versions = []
if versions_file.exists():
@@ -24,6 +24,11 @@
else:
versions.insert(0, new_version_spec)
+ if latest:
+ for version in versions:
+ version["aliases"] = []
+ versions[0]["aliases"] = ["latest"]
+
versions_file.write_text(json.dumps(versions))
@@ -32,7 +37,7 @@
subprocess.run(["git", "checkout", "gh-pages"], check=True)
- update_versions_file(version)
+ add_to_versions_file(version, latest)
docs_src_path = Path("docs/_build/html")
docs_dest_path = Path(version)
| {"golden_diff": "diff --git a/tools/publish_docs.py b/tools/publish_docs.py\n--- a/tools/publish_docs.py\n+++ b/tools/publish_docs.py\n@@ -12,7 +12,7 @@\n parser.add_argument(\"--latest\", action=\"store_true\")\n \n \n-def update_versions_file(version: str) -> None:\n+def add_to_versions_file(version: str, latest: bool) -> None:\n versions_file = Path(\"versions.json\")\n versions = []\n if versions_file.exists():\n@@ -24,6 +24,11 @@\n else:\n versions.insert(0, new_version_spec)\n \n+ if latest:\n+ for version in versions:\n+ version[\"aliases\"] = []\n+ versions[0][\"aliases\"] = [\"latest\"]\n+\n versions_file.write_text(json.dumps(versions))\n \n \n@@ -32,7 +37,7 @@\n \n subprocess.run([\"git\", \"checkout\", \"gh-pages\"], check=True)\n \n- update_versions_file(version)\n+ add_to_versions_file(version, latest)\n \n docs_src_path = Path(\"docs/_build/html\")\n docs_dest_path = Path(version)\n", "issue": "Documentation: \"older version\" warning present on latest\nEvery page under https://starlite-api.github.io/starlite/latest/ has the \"You are viewing the documentation for an older version of Starlite. Click here to get to the latest version\" warning, which links back to the welcome page. \r\n\r\nThe message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.\nDocumentation: \"older version\" warning present on latest\nEvery page under https://starlite-api.github.io/starlite/latest/ has the \"You are viewing the documentation for an older version of Starlite. Click here to get to the latest version\" warning, which links back to the welcome page. \r\n\r\nThe message is not present in https://starlite-api.github.io/starlite/1.50/, https://starlite-api.github.io/starlite/1.49/, or https://starlite-api.github.io/starlite/1.47/.\n", "before_files": [{"content": "import importlib.metadata\nimport json\nimport shutil\nimport subprocess\nfrom pathlib import Path\nimport argparse\nimport shutil\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--version\", required=False)\nparser.add_argument(\"--push\", action=\"store_true\")\nparser.add_argument(\"--latest\", action=\"store_true\")\n\n\ndef update_versions_file(version: str) -> None:\n versions_file = Path(\"versions.json\")\n versions = []\n if versions_file.exists():\n versions = json.loads(versions_file.read_text())\n\n new_version_spec = {\"version\": version, \"title\": version, \"aliases\": []}\n if any(v[\"version\"] == version for v in versions):\n versions = [v if v[\"version\"] != version else new_version_spec for v in versions]\n else:\n versions.insert(0, new_version_spec)\n\n versions_file.write_text(json.dumps(versions))\n\n\ndef make_version(version: str, push: bool, latest: bool) -> None:\n subprocess.run([\"make\", \"docs\"], check=True)\n\n subprocess.run([\"git\", \"checkout\", \"gh-pages\"], check=True)\n\n update_versions_file(version)\n\n docs_src_path = Path(\"docs/_build/html\")\n docs_dest_path = Path(version)\n docs_dest_path_latest = Path(\"latest\")\n if docs_dest_path.exists():\n shutil.rmtree(docs_dest_path)\n\n docs_src_path.rename(docs_dest_path)\n if latest:\n if docs_dest_path_latest.exists():\n shutil.rmtree(docs_dest_path_latest)\n shutil.copytree(docs_dest_path, docs_dest_path_latest)\n subprocess.run([\"git\", \"add\", \"latest\"], check=True)\n\n subprocess.run([\"git\", \"add\", version], check=True)\n subprocess.run([\"git\", \"add\", \"versions.json\"], check=True)\n 
subprocess.run([\"git\", \"commit\", \"-m\", f\"automated docs build: {version}\"], check=True)\n if push:\n subprocess.run([\"git\", \"push\"], check=True)\n subprocess.run([\"git\", \"checkout\", \"-\"], check=True)\n\n\ndef main() -> None:\n args = parser.parse_args()\n version = args.version or importlib.metadata.version(\"starlite\").rsplit(\".\", 1)[0]\n make_version(version=version, push=args.push, latest=args.latest)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/publish_docs.py"}]} | 1,381 | 243 |
gh_patches_debug_16074 | rasdani/github-patches | git_diff | Zeroto521__my-data-toolkit-645 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DEP: Depcrated `utm_crs`
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [x] whatsnew entry
`pyproj.database.query_utm_crs_info` is too slow to query all data.
A single point costs about 200 ms, but 2000 points cost about 200 s.
Even after trying to `parallelize` `utm_crs`, the speed is still far too low.
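
To make the scaling concrete, here is a rough sketch (the point count and WKT literal are illustrative assumptions; the timings are the ones quoted above, not a new benchmark). Internally `utm_crs()` runs `query_utm_crs_info` once per row via `bounds.apply`, so every extra geometry adds another database query:

```python
# Illustrative sketch only: each geometry triggers its own CRS-database query.
import geopandas as gpd

import dtoolkit.geoaccessor  # registers the GeoSeries.utm_crs() accessor

points = gpd.GeoSeries.from_wkt(["Point (120 50)"] * 2000, crs="epsg:4326")

# Roughly bounds.apply(query_utm_crs_info, axis=1) under the hood:
# ~2000 separate queries at ~200 ms each, i.e. on the order of 200 s.
crs_info = points.utm_crs()
```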
</issue>
<code>
[start of dtoolkit/geoaccessor/geoseries/utm_crs.py]
1 import geopandas as gpd
2 import pandas as pd
3 from pyproj.aoi import AreaOfInterest
4 from pyproj.database import query_utm_crs_info
5
6 from dtoolkit.geoaccessor.register import register_geoseries_method
7 from dtoolkit.util._decorator import warning
8
9
10 @register_geoseries_method
11 @warning(
12 "The 'utm_crs' is deprecated and will be removed in 0.0.17. "
13 "(Warning added DToolKit 0.0.16)",
14 DeprecationWarning,
15 stacklevel=3,
16 )
17 def utm_crs(s: gpd.GeoSeries, /, datum_name: str = "WGS 84") -> pd.Series:
18 """
19 Returns the estimated UTM CRS based on the bounds of each geometry.
20
21 .. deprecated:: 0.0.17
22 The 'utm_crs' is deprecated and will be removed in 0.0.17.
23 (Warning added DToolKit 0.0.16)
24
25 Parameters
26 ----------
27 datum_name : str, default 'WGS 84'
28         The name of the datum in the CRS name ('NAD27', 'NAD83', 'WGS 84', …).
29
30 Returns
31 -------
32 Series
33 The element type is :class:`~pyproj.database.CRSInfo`.
34
35 See Also
36 --------
37 dtoolkit.geoaccessor.geoseries.utm_crs
38 Returns the estimated UTM CRS based on the bounds of each geometry.
39
40 dtoolkit.geoaccessor.geodataframe.utm_crs
41 Returns the estimated UTM CRS based on the bounds of each geometry.
42
43 geopandas.GeoSeries.estimate_utm_crs
44 Returns the estimated UTM CRS based on the bounds of the dataset.
45
46 geopandas.GeoDataFrame.estimate_utm_crs
47 Returns the estimated UTM CRS based on the bounds of the dataset.
48
49 Examples
50 --------
51 >>> import dtoolkit.accessor
52 >>> import dtoolkit.geoaccessor
53 >>> import geopandas as gpd
54 >>> s = gpd.GeoSeries.from_wkt(["Point (120 50)", "Point (100 1)"], crs="epsg:4326")
55 >>> s.utm_crs()
56 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...
57 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...
58 dtype: object
59
60 Same operate for GeoDataFrame.
61
62 >>> s.to_frame("geometry").utm_crs()
63 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...
64 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...
65 dtype: object
66
67 Get the EPSG code.
68
69 >>> s.utm_crs().getattr("code")
70 0 32650
71 1 32647
72 dtype: object
73 """
74
75 return s.bounds.apply(
76 lambda bound: None
77 if bound.isna().all()
78 else query_utm_crs_info(
79 datum_name=datum_name,
80 area_of_interest=AreaOfInterest(
81 west_lon_degree=bound["minx"],
82 south_lat_degree=bound["miny"],
83 east_lon_degree=bound["maxx"],
84 north_lat_degree=bound["maxy"],
85 ),
86 )[0],
87 axis=1,
88 )
89
[end of dtoolkit/geoaccessor/geoseries/utm_crs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dtoolkit/geoaccessor/geoseries/utm_crs.py b/dtoolkit/geoaccessor/geoseries/utm_crs.py
--- a/dtoolkit/geoaccessor/geoseries/utm_crs.py
+++ b/dtoolkit/geoaccessor/geoseries/utm_crs.py
@@ -9,8 +9,8 @@
@register_geoseries_method
@warning(
- "The 'utm_crs' is deprecated and will be removed in 0.0.17. "
- "(Warning added DToolKit 0.0.16)",
+ "The 'utm_crs' is deprecated and will be removed in 0.0.18. "
+ "(Warning added DToolKit 0.0.17)",
DeprecationWarning,
stacklevel=3,
)
@@ -18,9 +18,9 @@
"""
Returns the estimated UTM CRS based on the bounds of each geometry.
- .. deprecated:: 0.0.17
- The 'utm_crs' is deprecated and will be removed in 0.0.17.
- (Warning added DToolKit 0.0.16)
+ .. deprecated:: 0.0.18
+ The 'utm_crs' is deprecated and will be removed in 0.0.18.
+ (Warning added DToolKit 0.0.17)
Parameters
----------
| {"golden_diff": "diff --git a/dtoolkit/geoaccessor/geoseries/utm_crs.py b/dtoolkit/geoaccessor/geoseries/utm_crs.py\n--- a/dtoolkit/geoaccessor/geoseries/utm_crs.py\n+++ b/dtoolkit/geoaccessor/geoseries/utm_crs.py\n@@ -9,8 +9,8 @@\n \n @register_geoseries_method\n @warning(\n- \"The 'utm_crs' is deprecated and will be removed in 0.0.17. \"\n- \"(Warning added DToolKit 0.0.16)\",\n+ \"The 'utm_crs' is deprecated and will be removed in 0.0.18. \"\n+ \"(Warning added DToolKit 0.0.17)\",\n DeprecationWarning,\n stacklevel=3,\n )\n@@ -18,9 +18,9 @@\n \"\"\"\n Returns the estimated UTM CRS based on the bounds of each geometry.\n \n- .. deprecated:: 0.0.17\n- The 'utm_crs' is deprecated and will be removed in 0.0.17.\n- (Warning added DToolKit 0.0.16)\n+ .. deprecated:: 0.0.18\n+ The 'utm_crs' is deprecated and will be removed in 0.0.18.\n+ (Warning added DToolKit 0.0.17)\n \n Parameters\n ----------\n", "issue": "DEP: Depcrated `utm_crs`\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [x] whatsnew entry\r\n\r\n`pyproj.database.query_utm_crs_info` too slow to query all data.\r\nFor 1 point will cost 200ms but for 2000 points will cost 200s.\r\nEven try `parallelize` to `utm_crs`, but the speed is still so lower.\n", "before_files": [{"content": "import geopandas as gpd\nimport pandas as pd\nfrom pyproj.aoi import AreaOfInterest\nfrom pyproj.database import query_utm_crs_info\n\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\nfrom dtoolkit.util._decorator import warning\n\n\n@register_geoseries_method\n@warning(\n \"The 'utm_crs' is deprecated and will be removed in 0.0.17. \"\n \"(Warning added DToolKit 0.0.16)\",\n DeprecationWarning,\n stacklevel=3,\n)\ndef utm_crs(s: gpd.GeoSeries, /, datum_name: str = \"WGS 84\") -> pd.Series:\n \"\"\"\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n .. 
deprecated:: 0.0.17\n The 'utm_crs' is deprecated and will be removed in 0.0.17.\n (Warning added DToolKit 0.0.16)\n\n Parameters\n ----------\n datum_name : str, default 'WGS 84'\n The name of the datum in the CRS name ('NAD27', 'NAD83', 'WGS 84', \u2026).\n\n Returns\n -------\n Series\n The element type is :class:`~pyproj.database.CRSInfo`.\n\n See Also\n --------\n dtoolkit.geoaccessor.geoseries.utm_crs\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n dtoolkit.geoaccessor.geodataframe.utm_crs\n Returns the estimated UTM CRS based on the bounds of each geometry.\n\n geopandas.GeoSeries.estimate_utm_crs\n Returns the estimated UTM CRS based on the bounds of the dataset.\n\n geopandas.GeoDataFrame.estimate_utm_crs\n Returns the estimated UTM CRS based on the bounds of the dataset.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import dtoolkit.geoaccessor\n >>> import geopandas as gpd\n >>> s = gpd.GeoSeries.from_wkt([\"Point (120 50)\", \"Point (100 1)\"], crs=\"epsg:4326\")\n >>> s.utm_crs()\n 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...\n 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...\n dtype: object\n\n Same operate for GeoDataFrame.\n\n >>> s.to_frame(\"geometry\").utm_crs()\n 0 (EPSG, 32650, WGS 84 / UTM zone 50N, PJType.PR...\n 1 (EPSG, 32647, WGS 84 / UTM zone 47N, PJType.PR...\n dtype: object\n\n Get the EPSG code.\n\n >>> s.utm_crs().getattr(\"code\")\n 0 32650\n 1 32647\n dtype: object\n \"\"\"\n\n return s.bounds.apply(\n lambda bound: None\n if bound.isna().all()\n else query_utm_crs_info(\n datum_name=datum_name,\n area_of_interest=AreaOfInterest(\n west_lon_degree=bound[\"minx\"],\n south_lat_degree=bound[\"miny\"],\n east_lon_degree=bound[\"maxx\"],\n north_lat_degree=bound[\"maxy\"],\n ),\n )[0],\n axis=1,\n )\n", "path": "dtoolkit/geoaccessor/geoseries/utm_crs.py"}]} | 1,806 | 329 |
gh_patches_debug_5539 | rasdani/github-patches | git_diff | acl-org__acl-anthology-2313 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ingestion request: Generation challenges at INLG 2022
This is to complete the ingestion of all papers from INLG; the Generation Challenges papers still need to be uploaded. See #1897 for the other papers.
Here are the papers and metadata from the INLG generation challenges, as generated using ACLPUB2: https://drive.google.com/file/d/1518aAVuvtbvHgw_6FREzJl0kip_lkqNg/view?usp=share_link
I think this matches the Anthology format, but I'm not sure as I added everything manually. (Export didn't work.) Could you check whether everything is OK to ingest in the Anthology? Many thanks!
</issue>
<code>
[start of bin/volumes_from_diff.py]
1 #!/usr/bin/env python3
2
3 """
4 Takes a list of XML files on STDIN, and prints all the volumes
5 within each of those files. e.g.,
6
7 git diff --name-only master | ./bin/volumes_from_xml.py https://preview.aclanthology.org/BRANCH
8
9 Used to find the list of volumes to generate previews for.
10 """
11
12 import sys
13 import argparse
14 import lxml.etree as etree
15 import subprocess
16
17
18 parser = argparse.ArgumentParser()
19 parser.add_argument("url_root")
20 args = parser.parse_args()
21
22 volumes = []
23 for filepath in sys.stdin:
24 try:
25 tree = etree.parse(filepath.rstrip())
26 except Exception as e:
27 continue
28 root = tree.getroot()
29 collection_id = root.attrib["id"]
30 for volume in root:
31 volume_name = volume.attrib["id"]
32 volume_id = f"{collection_id}-{volume_name}"
33 volumes.append(f"[{volume_id}]({args.url_root}/{volume_id})")
34
35 if len(volumes) > 50:
36 volumes = volumes[0:50] + [f"(plus {len(volumes)-50} more...)"]
37
38 print(", ".join(volumes))
39
[end of bin/volumes_from_diff.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bin/volumes_from_diff.py b/bin/volumes_from_diff.py
--- a/bin/volumes_from_diff.py
+++ b/bin/volumes_from_diff.py
@@ -27,7 +27,7 @@
continue
root = tree.getroot()
collection_id = root.attrib["id"]
- for volume in root:
+ for volume in root.findall("./volume"):
volume_name = volume.attrib["id"]
volume_id = f"{collection_id}-{volume_name}"
volumes.append(f"[{volume_id}]({args.url_root}/{volume_id})")
| {"golden_diff": "diff --git a/bin/volumes_from_diff.py b/bin/volumes_from_diff.py\n--- a/bin/volumes_from_diff.py\n+++ b/bin/volumes_from_diff.py\n@@ -27,7 +27,7 @@\n continue\n root = tree.getroot()\n collection_id = root.attrib[\"id\"]\n- for volume in root:\n+ for volume in root.findall(\"./volume\"):\n volume_name = volume.attrib[\"id\"]\n volume_id = f\"{collection_id}-{volume_name}\"\n volumes.append(f\"[{volume_id}]({args.url_root}/{volume_id})\")\n", "issue": "Ingestion request: Generation challenges at INLG 2022\nThis is to complete the ingestions of all papers from INLG; the Generation Challenges papers still needed to be uploaded. See #1897 for the other papers. \r\n\r\nHere are the papers and metadata from the INLG generation challenges, as generated using ACLPUB2: https://drive.google.com/file/d/1518aAVuvtbvHgw_6FREzJl0kip_lkqNg/view?usp=share_link\r\n\r\nI think this matches the Anthology format, but I'm not sure as I added everything manually. (Export didn't work.) Could you check whether everything is OK to ingest in the Anthology? Many thanks!\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"\nTakes a list of XML files on STDIN, and prints all the volumes\nwithin each of those files. e.g.,\n\n git diff --name-only master | ./bin/volumes_from_xml.py https://preview.aclanthology.org/BRANCH\n\nUsed to find the list of volumes to generate previews for.\n\"\"\"\n\nimport sys\nimport argparse\nimport lxml.etree as etree\nimport subprocess\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"url_root\")\nargs = parser.parse_args()\n\nvolumes = []\nfor filepath in sys.stdin:\n try:\n tree = etree.parse(filepath.rstrip())\n except Exception as e:\n continue\n root = tree.getroot()\n collection_id = root.attrib[\"id\"]\n for volume in root:\n volume_name = volume.attrib[\"id\"]\n volume_id = f\"{collection_id}-{volume_name}\"\n volumes.append(f\"[{volume_id}]({args.url_root}/{volume_id})\")\n\nif len(volumes) > 50:\n volumes = volumes[0:50] + [f\"(plus {len(volumes)-50} more...)\"]\n\nprint(\", \".join(volumes))\n", "path": "bin/volumes_from_diff.py"}]} | 1,012 | 124 |
gh_patches_debug_20906 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-346 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not using HA supported python version
HA supports the last two minor versions of Python, that is currently 3.10 and 3.9.
[calendar.py](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/calendar.py#L118) makes use of Python 3.10-only type-hinting features for optional arguments via unions:
`def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):`
The union str | None is not supported as a type hint by Python 3.9, so the waste collection schedule fails to load even though HA runs on a supported installation.
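
For illustration only (these are generic Python alternatives, not a proposed patch for this repository), the same optional argument can be written in a form that Python 3.9 accepts:

```python
# Sketch of Python 3.9-compatible equivalents of `str | None` (illustrative only).
from typing import Optional, Union


def calc_unique_calendar_id(scraper, type: Optional[str] = None):
    ...


def calc_unique_calendar_id_alt(scraper, type: Union[str, None] = None):
    ...

# Alternatively, `from __future__ import annotations` at the top of the module
# defers annotation evaluation, so the existing `str | None` spelling also
# imports cleanly on Python 3.9.
```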
</issue>
<code>
[start of custom_components/waste_collection_schedule/calendar.py]
1 """Calendar platform support for Waste Collection Schedule."""
2
3 import logging
4 from datetime import timedelta, timezone, datetime
5
6 from homeassistant.components.calendar import CalendarEntity, CalendarEvent
7 from homeassistant.core import HomeAssistant
8 from homeassistant.util.dt import DEFAULT_TIME_ZONE
9
10 from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (
11 Scraper,
12 )
13
14 _LOGGER = logging.getLogger(__name__)
15
16
17 async def async_setup_platform(hass, config, async_add_entities, discovery_info=None):
18 """Set up calendar platform."""
19 # We only want this platform to be set up via discovery.
20 if discovery_info is None:
21 return
22
23 entities = []
24
25 api = discovery_info["api"]
26
27 for scraper in api.scrapers:
28 dedicated_calendar_types = scraper.get_dedicated_calendar_types()
29 global_calendar_types = scraper.get_global_calendar_types()
30
31 if dedicated_calendar_types is not None:
32 for type in dedicated_calendar_types:
33 unique_id = calc_unique_calendar_id(scraper, type)
34
35 entities.append(
36 WasteCollectionCalendar(
37 api,
38 scraper,
39 scraper.get_calendar_title_for_type(type),
40 [scraper.get_collection_type(type)],
41 unique_id,
42 )
43 )
44
45 if global_calendar_types is not None or dedicated_calendar_types is None:
46 unique_id = calc_unique_calendar_id(scraper)
47 entities.append(
48 WasteCollectionCalendar(
49 api,
50 scraper,
51 scraper.calendar_title,
52 [
53 scraper.get_collection_type(type)
54 for type in global_calendar_types
55 ]
56 if global_calendar_types is not None
57 else None,
58 unique_id,
59 )
60 )
61
62 async_add_entities(entities)
63
64
65 class WasteCollectionCalendar(CalendarEntity):
66 """Calendar entity class."""
67
68 def __init__(self, api, scraper, name, types, unique_id: str):
69 self._api = api
70 self._scraper = scraper
71 self._name = name
72 self._types = types
73 self._unique_id = unique_id
74 self._attr_unique_id = unique_id
75
76 @property
77 def name(self):
78 """Return entity name."""
79 return self._name
80
81 @property
82 def event(self):
83 """Return next collection event."""
84 collections = self._scraper.get_upcoming(
85 count=1, include_today=True, types=self._types
86 )
87
88 if len(collections) == 0:
89 return None
90 else:
91 return self._convert(collections[0])
92
93 async def async_get_events(
94 self, hass: HomeAssistant, start_date: datetime, end_date: datetime
95 ):
96 """Return all events within specified time span."""
97 events = []
98
99 for collection in self._scraper.get_upcoming(
100 include_today=True, types=self._types
101 ):
102 event = self._convert(collection)
103
104 if start_date <= event.start_datetime_local <= end_date:
105 events.append(event)
106
107 return events
108
109 def _convert(self, collection) -> CalendarEvent:
110 """Convert an collection into a Home Assistant calendar event."""
111 return CalendarEvent(
112 summary=collection.type,
113 start=collection.date,
114 end=collection.date + timedelta(days=1),
115 )
116
117
118 def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):
119 return scraper.unique_id + ("_" + type if type is not None else "") + "_calendar"
120
[end of custom_components/waste_collection_schedule/calendar.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py
--- a/custom_components/waste_collection_schedule/calendar.py
+++ b/custom_components/waste_collection_schedule/calendar.py
@@ -1,15 +1,12 @@
"""Calendar platform support for Waste Collection Schedule."""
import logging
-from datetime import timedelta, timezone, datetime
+from datetime import datetime, timedelta
from homeassistant.components.calendar import CalendarEntity, CalendarEvent
from homeassistant.core import HomeAssistant
-from homeassistant.util.dt import DEFAULT_TIME_ZONE
-from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (
- Scraper,
-)
+from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import Scraper
_LOGGER = logging.getLogger(__name__)
@@ -115,5 +112,5 @@
)
-def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):
+def calc_unique_calendar_id(scraper: Scraper, type: str = None):
return scraper.unique_id + ("_" + type if type is not None else "") + "_calendar"
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py\n--- a/custom_components/waste_collection_schedule/calendar.py\n+++ b/custom_components/waste_collection_schedule/calendar.py\n@@ -1,15 +1,12 @@\n \"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n \n import logging\n-from datetime import timedelta, timezone, datetime\n+from datetime import datetime, timedelta\n \n from homeassistant.components.calendar import CalendarEntity, CalendarEvent\n from homeassistant.core import HomeAssistant\n-from homeassistant.util.dt import DEFAULT_TIME_ZONE\n \n-from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (\n- Scraper,\n-)\n+from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import Scraper\n \n _LOGGER = logging.getLogger(__name__)\n \n@@ -115,5 +112,5 @@\n )\n \n \n-def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):\n+def calc_unique_calendar_id(scraper: Scraper, type: str = None):\n return scraper.unique_id + (\"_\" + type if type is not None else \"\") + \"_calendar\"\n", "issue": "Not using HA supported python version\nHA supports the last two minor versions of Python, that is currently 3.10 and 3.9.\r\n[calendar.py](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/calendar.py#L118) makes use of Python 3.10 only type hinting features for optional arguments via unions:\r\n`def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):`\r\nThe union str | None is not supported as type hint by Python 3.9, hence the waste collection schedule fails to load albeit HA runs on a supported installation.\n", "before_files": [{"content": "\"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n\nimport logging\nfrom datetime import timedelta, timezone, datetime\n\nfrom homeassistant.components.calendar import CalendarEntity, CalendarEvent\nfrom homeassistant.core import HomeAssistant\nfrom homeassistant.util.dt import DEFAULT_TIME_ZONE\n\nfrom custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (\n Scraper,\n)\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def async_setup_platform(hass, config, async_add_entities, discovery_info=None):\n \"\"\"Set up calendar platform.\"\"\"\n # We only want this platform to be set up via discovery.\n if discovery_info is None:\n return\n\n entities = []\n\n api = discovery_info[\"api\"]\n\n for scraper in api.scrapers:\n dedicated_calendar_types = scraper.get_dedicated_calendar_types()\n global_calendar_types = scraper.get_global_calendar_types()\n\n if dedicated_calendar_types is not None:\n for type in dedicated_calendar_types:\n unique_id = calc_unique_calendar_id(scraper, type)\n\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.get_calendar_title_for_type(type),\n [scraper.get_collection_type(type)],\n unique_id,\n )\n )\n\n if global_calendar_types is not None or dedicated_calendar_types is None:\n unique_id = calc_unique_calendar_id(scraper)\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.calendar_title,\n [\n scraper.get_collection_type(type)\n for type in global_calendar_types\n ]\n if global_calendar_types is not None\n else None,\n unique_id,\n )\n )\n\n async_add_entities(entities)\n\n\nclass WasteCollectionCalendar(CalendarEntity):\n \"\"\"Calendar entity class.\"\"\"\n\n def 
__init__(self, api, scraper, name, types, unique_id: str):\n self._api = api\n self._scraper = scraper\n self._name = name\n self._types = types\n self._unique_id = unique_id\n self._attr_unique_id = unique_id\n\n @property\n def name(self):\n \"\"\"Return entity name.\"\"\"\n return self._name\n\n @property\n def event(self):\n \"\"\"Return next collection event.\"\"\"\n collections = self._scraper.get_upcoming(\n count=1, include_today=True, types=self._types\n )\n\n if len(collections) == 0:\n return None\n else:\n return self._convert(collections[0])\n\n async def async_get_events(\n self, hass: HomeAssistant, start_date: datetime, end_date: datetime\n ):\n \"\"\"Return all events within specified time span.\"\"\"\n events = []\n\n for collection in self._scraper.get_upcoming(\n include_today=True, types=self._types\n ):\n event = self._convert(collection)\n\n if start_date <= event.start_datetime_local <= end_date:\n events.append(event)\n\n return events\n\n def _convert(self, collection) -> CalendarEvent:\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n return CalendarEvent(\n summary=collection.type,\n start=collection.date,\n end=collection.date + timedelta(days=1),\n )\n\n\ndef calc_unique_calendar_id(scraper: Scraper, type: str | None = None):\n return scraper.unique_id + (\"_\" + type if type is not None else \"\") + \"_calendar\"\n", "path": "custom_components/waste_collection_schedule/calendar.py"}]} | 1,657 | 241 |
gh_patches_debug_2209 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1887 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Please allow markdown in the organization description field
Right now markdown is not allowed in that field. I believe that this is preventing me from adding paragraphs and other particular styles to the text in question.

<code>
[start of ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py]
1 '''
2 Created on Nov 3, 2014
3
4 @author: alexandru-m-g
5 '''
6
7 import logging
8
9 import ckan.lib.base as base
10 import ckan.logic as logic
11 import ckan.model as model
12 import ckan.common as common
13 import ckan.lib.helpers as h
14
15 import ckanext.hdx_crisis.dao.data_access as data_access
16 import ckanext.hdx_crisis.formatters.top_line_items_formatter as formatters
17
18 render = base.render
19 get_action = logic.get_action
20 c = common.c
21 request = common.request
22 _ = common._
23
24
25 log = logging.getLogger(__name__)
26
27
28 class CrisisController(base.BaseController):
29
30 def show(self):
31
32 context = {'model': model, 'session': model.Session,
33 'user': c.user or c.author, 'for_view': True,
34 'auth_user_obj': c.userobj}
35
36 crisis_data_access = data_access.EbolaCrisisDataAccess()
37 crisis_data_access.fetch_data(context)
38 c.top_line_items = crisis_data_access.get_top_line_items()
39
40 formatter = formatters.TopLineItemsFormatter(c.top_line_items)
41 formatter.format_results()
42
43 search_term = u'ebola'
44
45 self._generate_dataset_results(context, search_term)
46
47 self._generate_other_links(search_term)
48
49 return render('crisis/crisis.html')
50
51 def _generate_dataset_results(self, context, search_term):
52 limit = 25
53 c.q = search_term
54
55 page = int(request.params.get('page', 1))
56 data_dict = {'sort': u'metadata_modified desc',
57 'fq': '+dataset_type:dataset',
58 'rows': limit,
59 'q': c.q,
60 'start': (page - 1) * limit
61 }
62 query = get_action("package_search")(context, data_dict)
63
64 def pager_url(q=None, page=None):
65 url = h.url_for('show_crisis', page=page) + '#datasets-section'
66 return url
67
68 c.page = h.Page(
69 collection=query['results'],
70 page=page,
71 url=pager_url,
72 item_count=query['count'],
73 items_per_page=limit
74 )
75 c.items = query['results']
76 c.item_count = query['count']
77
78 def _generate_other_links(self, search_term):
79 c.other_links = {}
80 c.other_links['show_more'] = h.url_for(
81 "search", **{'q': search_term, 'sort': u'metadata_modified desc',
82 'ext_indicator': '0'})
83
[end of ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
--- a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
+++ b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
@@ -46,7 +46,7 @@
self._generate_other_links(search_term)
- return render('crisis/crisis.html')
+ return render('crisis/crisis-ebola.html')
def _generate_dataset_results(self, context, search_term):
limit = 25
| {"golden_diff": "diff --git a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n--- a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n+++ b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n@@ -46,7 +46,7 @@\n \n self._generate_other_links(search_term)\n \n- return render('crisis/crisis.html')\n+ return render('crisis/crisis-ebola.html')\n \n def _generate_dataset_results(self, context, search_term):\n limit = 25\n", "issue": "Please allow markdown to the organization description field\nRight now markdown is not allowed in that field. I believe that this is preventing me from adding paragraphs and other particular styles to the text in question. \n\n\n\n", "before_files": [{"content": "'''\nCreated on Nov 3, 2014\n\n@author: alexandru-m-g\n'''\n\nimport logging\n\nimport ckan.lib.base as base\nimport ckan.logic as logic\nimport ckan.model as model\nimport ckan.common as common\nimport ckan.lib.helpers as h\n\nimport ckanext.hdx_crisis.dao.data_access as data_access\nimport ckanext.hdx_crisis.formatters.top_line_items_formatter as formatters\n\nrender = base.render\nget_action = logic.get_action\nc = common.c\nrequest = common.request\n_ = common._\n\n\nlog = logging.getLogger(__name__)\n\n\nclass CrisisController(base.BaseController):\n\n def show(self):\n\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author, 'for_view': True,\n 'auth_user_obj': c.userobj}\n\n crisis_data_access = data_access.EbolaCrisisDataAccess()\n crisis_data_access.fetch_data(context)\n c.top_line_items = crisis_data_access.get_top_line_items()\n\n formatter = formatters.TopLineItemsFormatter(c.top_line_items)\n formatter.format_results()\n\n search_term = u'ebola'\n\n self._generate_dataset_results(context, search_term)\n\n self._generate_other_links(search_term)\n\n return render('crisis/crisis.html')\n\n def _generate_dataset_results(self, context, search_term):\n limit = 25\n c.q = search_term\n\n page = int(request.params.get('page', 1))\n data_dict = {'sort': u'metadata_modified desc',\n 'fq': '+dataset_type:dataset',\n 'rows': limit,\n 'q': c.q,\n 'start': (page - 1) * limit\n }\n query = get_action(\"package_search\")(context, data_dict)\n\n def pager_url(q=None, page=None):\n url = h.url_for('show_crisis', page=page) + '#datasets-section'\n return url\n\n c.page = h.Page(\n collection=query['results'],\n page=page,\n url=pager_url,\n item_count=query['count'],\n items_per_page=limit\n )\n c.items = query['results']\n c.item_count = query['count']\n\n def _generate_other_links(self, search_term):\n c.other_links = {}\n c.other_links['show_more'] = h.url_for(\n \"search\", **{'q': search_term, 'sort': u'metadata_modified desc',\n 'ext_indicator': '0'})\n", "path": "ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py"}]} | 1,408 | 180 |
gh_patches_debug_39448 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-355 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Instrument Starlette background tasks
Starlette supports [background tasks](https://www.starlette.io/background/). We should instrument these as background transactions.
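
For context, this is the kind of user code that would need a background transaction around it (plain Starlette usage; the handler and task names below are made up):

```python
# Standard Starlette background-task usage (illustrative names only).
from starlette.background import BackgroundTask
from starlette.responses import JSONResponse


async def send_welcome_email(address: str) -> None:
    ...  # runs only after the HTTP response has been sent


async def signup(request):
    task = BackgroundTask(send_welcome_email, "user@example.com")
    return JSONResponse({"status": "ok"}, background=task)
```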
</issue>
<code>
[start of src/scout_apm/async_/starlette.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from starlette.requests import Request
5
6 import scout_apm.core
7 from scout_apm.core.tracked_request import TrackedRequest
8 from scout_apm.core.web_requests import (
9 create_filtered_path,
10 ignore_path,
11 track_amazon_request_queue_time,
12 track_request_queue_time,
13 )
14
15
16 class ScoutMiddleware:
17 def __init__(self, app):
18 self.app = app
19 installed = scout_apm.core.install()
20 self._do_nothing = not installed
21
22 async def __call__(self, scope, receive, send):
23 if self._do_nothing or scope["type"] != "http":
24 await self.app(scope, receive, send)
25 return
26
27 request = Request(scope)
28 tracked_request = TrackedRequest.instance()
29 # Can't name controller until post-routing - see final clause
30 controller_span = tracked_request.start_span(operation="Controller/Unknown")
31
32 tracked_request.tag(
33 "path",
34 create_filtered_path(request.url.path, request.query_params.multi_items()),
35 )
36 if ignore_path(request.url.path):
37 tracked_request.tag("ignore_transaction", True)
38
39 user_ip = (
40 request.headers.get("x-forwarded-for", default="").split(",")[0]
41 or request.headers.get("client-ip", default="").split(",")[0]
42 or request.client.host
43 )
44 tracked_request.tag("user_ip", user_ip)
45
46 queue_time = request.headers.get(
47 "x-queue-start", default=""
48 ) or request.headers.get("x-request-start", default="")
49 tracked_queue_time = track_request_queue_time(queue_time, tracked_request)
50 if not tracked_queue_time:
51 amazon_queue_time = request.headers.get("x-amzn-trace-id", default="")
52 track_amazon_request_queue_time(amazon_queue_time, tracked_request)
53
54 try:
55 await self.app(scope, receive, send)
56 except Exception as exc:
57 tracked_request.tag("error", "true")
58 raise exc
59 finally:
60 if "endpoint" in scope:
61 endpoint = scope["endpoint"]
62 controller_span.operation = "Controller/{}.{}".format(
63 endpoint.__module__, endpoint.__qualname__
64 )
65 tracked_request.is_real_request = True
66 tracked_request.stop_span()
67
[end of src/scout_apm/async_/starlette.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py
--- a/src/scout_apm/async_/starlette.py
+++ b/src/scout_apm/async_/starlette.py
@@ -1,6 +1,8 @@
# coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
+import wrapt
+from starlette.background import BackgroundTask
from starlette.requests import Request
import scout_apm.core
@@ -18,6 +20,8 @@
self.app = app
installed = scout_apm.core.install()
self._do_nothing = not installed
+ if installed:
+ install_background_instrumentation()
async def __call__(self, scope, receive, send):
if self._do_nothing or scope["type"] != "http":
@@ -51,16 +55,57 @@
amazon_queue_time = request.headers.get("x-amzn-trace-id", default="")
track_amazon_request_queue_time(amazon_queue_time, tracked_request)
- try:
- await self.app(scope, receive, send)
- except Exception as exc:
- tracked_request.tag("error", "true")
- raise exc
- finally:
+ def rename_controller_span_from_endpoint():
if "endpoint" in scope:
+ # Rename top span
endpoint = scope["endpoint"]
controller_span.operation = "Controller/{}.{}".format(
endpoint.__module__, endpoint.__qualname__
)
tracked_request.is_real_request = True
+
+ async def wrapped_send(data):
+ # Finish HTTP span when body finishes sending, not later (e.g.
+ # after background tasks)
+ if data.get("type", None) == "http.response.body" and not data.get(
+ "more_body", False
+ ):
+ rename_controller_span_from_endpoint()
+ tracked_request.stop_span()
+ return await send(data)
+
+ try:
+ await self.app(scope, receive, wrapped_send)
+ except Exception as exc:
+ tracked_request.tag("error", "true")
+ raise exc
+ finally:
+ if tracked_request.end_time is None:
+ rename_controller_span_from_endpoint()
+ tracked_request.stop_span()
+
+
+background_instrumentation_installed = False
+
+
+def install_background_instrumentation():
+ global background_instrumentation_installed
+ if background_instrumentation_installed:
+ return
+ background_instrumentation_installed = True
+
+ @wrapt.decorator
+ async def wrapped_background_call(wrapped, instance, args, kwargs):
+ tracked_request = TrackedRequest.instance()
+ tracked_request.is_real_request = True
+ tracked_request.start_span(
+ operation="Job/{}.{}".format(
+ instance.func.__module__, instance.func.__qualname__
+ )
+ )
+ try:
+ return await wrapped(*args, **kwargs)
+ finally:
tracked_request.stop_span()
+
+ BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)
| {"golden_diff": "diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py\n--- a/src/scout_apm/async_/starlette.py\n+++ b/src/scout_apm/async_/starlette.py\n@@ -1,6 +1,8 @@\n # coding=utf-8\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n+import wrapt\n+from starlette.background import BackgroundTask\n from starlette.requests import Request\n \n import scout_apm.core\n@@ -18,6 +20,8 @@\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n+ if installed:\n+ install_background_instrumentation()\n \n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n@@ -51,16 +55,57 @@\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n \n- try:\n- await self.app(scope, receive, send)\n- except Exception as exc:\n- tracked_request.tag(\"error\", \"true\")\n- raise exc\n- finally:\n+ def rename_controller_span_from_endpoint():\n if \"endpoint\" in scope:\n+ # Rename top span\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n+\n+ async def wrapped_send(data):\n+ # Finish HTTP span when body finishes sending, not later (e.g.\n+ # after background tasks)\n+ if data.get(\"type\", None) == \"http.response.body\" and not data.get(\n+ \"more_body\", False\n+ ):\n+ rename_controller_span_from_endpoint()\n+ tracked_request.stop_span()\n+ return await send(data)\n+\n+ try:\n+ await self.app(scope, receive, wrapped_send)\n+ except Exception as exc:\n+ tracked_request.tag(\"error\", \"true\")\n+ raise exc\n+ finally:\n+ if tracked_request.end_time is None:\n+ rename_controller_span_from_endpoint()\n+ tracked_request.stop_span()\n+\n+\n+background_instrumentation_installed = False\n+\n+\n+def install_background_instrumentation():\n+ global background_instrumentation_installed\n+ if background_instrumentation_installed:\n+ return\n+ background_instrumentation_installed = True\n+\n+ @wrapt.decorator\n+ async def wrapped_background_call(wrapped, instance, args, kwargs):\n+ tracked_request = TrackedRequest.instance()\n+ tracked_request.is_real_request = True\n+ tracked_request.start_span(\n+ operation=\"Job/{}.{}\".format(\n+ instance.func.__module__, instance.func.__qualname__\n+ )\n+ )\n+ try:\n+ return await wrapped(*args, **kwargs)\n+ finally:\n tracked_request.stop_span()\n+\n+ BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)\n", "issue": "Instrument Starlette background tasks\nStarlette supports [background tasks](https://www.starlette.io/background/). 
We should instrument these as background transactions.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom starlette.requests import Request\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import (\n create_filtered_path,\n ignore_path,\n track_amazon_request_queue_time,\n track_request_queue_time,\n)\n\n\nclass ScoutMiddleware:\n def __init__(self, app):\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n\n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = Request(scope)\n tracked_request = TrackedRequest.instance()\n # Can't name controller until post-routing - see final clause\n controller_span = tracked_request.start_span(operation=\"Controller/Unknown\")\n\n tracked_request.tag(\n \"path\",\n create_filtered_path(request.url.path, request.query_params.multi_items()),\n )\n if ignore_path(request.url.path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n user_ip = (\n request.headers.get(\"x-forwarded-for\", default=\"\").split(\",\")[0]\n or request.headers.get(\"client-ip\", default=\"\").split(\",\")[0]\n or request.client.host\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = request.headers.get(\n \"x-queue-start\", default=\"\"\n ) or request.headers.get(\"x-request-start\", default=\"\")\n tracked_queue_time = track_request_queue_time(queue_time, tracked_request)\n if not tracked_queue_time:\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n\n try:\n await self.app(scope, receive, send)\n except Exception as exc:\n tracked_request.tag(\"error\", \"true\")\n raise exc\n finally:\n if \"endpoint\" in scope:\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n tracked_request.stop_span()\n", "path": "src/scout_apm/async_/starlette.py"}]} | 1,195 | 684 |
gh_patches_debug_27631 | rasdani/github-patches | git_diff | OpenMined__PySyft-2308 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TrainConfig parameter "epochs"
**TrainConfig parameter "epochs" doesn't have effect.**
After changing the number of epochs=1 to epochs=100. The worker still do only 1 epoch.
```
train_config = sy.TrainConfig(
model=traced_model,
loss_fn=loss_fn,
batch_size=batch_size,
shuffle=True,
#max_nr_batches=max_nr_batches,
epochs=100,
lr=lr,
)
```
</issue>
<code>
[start of syft/federated/federated_client.py]
1 import torch as th
2 from torch.utils.data import BatchSampler, RandomSampler, SequentialSampler
3
4 from syft.generic import ObjectStorage
5 from syft.federated.train_config import TrainConfig
6
7
8 class FederatedClient(ObjectStorage):
9 """A Client able to execute federated learning in local datasets."""
10
11 def __init__(self, datasets=None):
12 super().__init__()
13 self.datasets = datasets if datasets is not None else dict()
14 self.optimizer = None
15 self.train_config = None
16
17 def add_dataset(self, dataset, key: str):
18 self.datasets[key] = dataset
19
20 def remove_dataset(self, key: str):
21 if key in self.datasets:
22 del self.datasets[key]
23
24 def set_obj(self, obj: object):
25 """Registers objects checking if which objects it should cache.
26
27 Args:
28 obj: An object to be registered.
29 """
30 if isinstance(obj, TrainConfig):
31 self.train_config = obj
32 self.optimizer = None
33 else:
34 super().set_obj(obj)
35
36 def _build_optimizer(
37 self, optimizer_name: str, model, lr: float, weight_decay: float
38 ) -> th.optim.Optimizer:
39 """Build an optimizer if needed.
40
41 Args:
42 optimizer_name: A string indicating the optimizer name.
43 lr: A float indicating the learning rate.
44 weight_decay: Weight decay parameter of the optimizer
45 Returns:
46 A Torch Optimizer.
47 """
48 if self.optimizer is not None:
49 return self.optimizer
50
51 optimizer_name = optimizer_name.lower()
52 if optimizer_name == "sgd":
53 optim_args = dict()
54 optim_args["lr"] = lr
55 if weight_decay is not None:
56 optim_args["weight_decay"] = weight_decay
57 self.optimizer = th.optim.SGD(model.parameters(), **optim_args)
58 else:
59 raise ValueError("Unknown optimizer: {}".format(optimizer_name))
60 return self.optimizer
61
62 def fit(self, dataset_key: str, **kwargs):
63 """Fits a model on the local dataset as specified in the local TrainConfig object.
64
65 Args:
66 dataset_key: Identifier of the local dataset that shall be used for training.
67 **kwargs: Unused.
68
69 Returns:
70 loss: Training loss on the last batch of training data.
71 """
72 if self.train_config is None:
73 raise ValueError("TrainConfig not defined.")
74
75 model = self.get_obj(self.train_config._model_id).obj
76 loss_fn = self.get_obj(self.train_config._loss_fn_id).obj
77
78 self._build_optimizer(
79 self.train_config.optimizer,
80 model,
81 lr=self.train_config.lr,
82 weight_decay=self.train_config.weight_decay,
83 )
84
85 return self._fit(model=model, dataset_key=dataset_key, loss_fn=loss_fn)
86
87 def _create_data_loader(self, dataset_key: str, shuffle: bool = False):
88 data_range = range(len(self.datasets[dataset_key]))
89 if shuffle:
90 sampler = RandomSampler(data_range)
91 else:
92 sampler = SequentialSampler(data_range)
93 data_loader = th.utils.data.DataLoader(
94 self.datasets[dataset_key],
95 batch_size=self.train_config.batch_size,
96 sampler=sampler,
97 num_workers=0,
98 )
99 return data_loader
100
101 def _fit(self, model, dataset_key, loss_fn):
102 model.train()
103 data_loader = self._create_data_loader(
104 dataset_key=dataset_key, shuffle=self.train_config.shuffle
105 )
106
107 loss = None
108 iteration_count = 0
109 for (data, target) in data_loader:
110 # Set gradients to zero
111 self.optimizer.zero_grad()
112
113 # Update model
114 output = model(data)
115 loss = loss_fn(target=target, pred=output)
116 loss.backward()
117 self.optimizer.step()
118
119 # Update and check interation count
120 iteration_count += 1
121 if iteration_count >= self.train_config.max_nr_batches >= 0:
122 break
123
124 return loss
125
[end of syft/federated/federated_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/syft/federated/federated_client.py b/syft/federated/federated_client.py
--- a/syft/federated/federated_client.py
+++ b/syft/federated/federated_client.py
@@ -72,6 +72,9 @@
if self.train_config is None:
raise ValueError("TrainConfig not defined.")
+ if dataset_key not in self.datasets:
+ raise ValueError("Dataset {} unknown.".format(dataset_key))
+
model = self.get_obj(self.train_config._model_id).obj
loss_fn = self.get_obj(self.train_config._loss_fn_id).obj
@@ -106,19 +109,21 @@
loss = None
iteration_count = 0
- for (data, target) in data_loader:
- # Set gradients to zero
- self.optimizer.zero_grad()
-
- # Update model
- output = model(data)
- loss = loss_fn(target=target, pred=output)
- loss.backward()
- self.optimizer.step()
-
- # Update and check interation count
- iteration_count += 1
- if iteration_count >= self.train_config.max_nr_batches >= 0:
- break
+
+ for _ in range(self.train_config.epochs):
+ for (data, target) in data_loader:
+ # Set gradients to zero
+ self.optimizer.zero_grad()
+
+ # Update model
+ output = model(data)
+ loss = loss_fn(target=target, pred=output)
+ loss.backward()
+ self.optimizer.step()
+
+ # Update and check interation count
+ iteration_count += 1
+ if iteration_count >= self.train_config.max_nr_batches >= 0:
+ break
return loss
| {"golden_diff": "diff --git a/syft/federated/federated_client.py b/syft/federated/federated_client.py\n--- a/syft/federated/federated_client.py\n+++ b/syft/federated/federated_client.py\n@@ -72,6 +72,9 @@\n if self.train_config is None:\n raise ValueError(\"TrainConfig not defined.\")\n \n+ if dataset_key not in self.datasets:\n+ raise ValueError(\"Dataset {} unknown.\".format(dataset_key))\n+\n model = self.get_obj(self.train_config._model_id).obj\n loss_fn = self.get_obj(self.train_config._loss_fn_id).obj\n \n@@ -106,19 +109,21 @@\n \n loss = None\n iteration_count = 0\n- for (data, target) in data_loader:\n- # Set gradients to zero\n- self.optimizer.zero_grad()\n-\n- # Update model\n- output = model(data)\n- loss = loss_fn(target=target, pred=output)\n- loss.backward()\n- self.optimizer.step()\n-\n- # Update and check interation count\n- iteration_count += 1\n- if iteration_count >= self.train_config.max_nr_batches >= 0:\n- break\n+\n+ for _ in range(self.train_config.epochs):\n+ for (data, target) in data_loader:\n+ # Set gradients to zero\n+ self.optimizer.zero_grad()\n+\n+ # Update model\n+ output = model(data)\n+ loss = loss_fn(target=target, pred=output)\n+ loss.backward()\n+ self.optimizer.step()\n+\n+ # Update and check interation count\n+ iteration_count += 1\n+ if iteration_count >= self.train_config.max_nr_batches >= 0:\n+ break\n \n return loss\n", "issue": "TrainConfig parameter \"epochs\"\n**TrainConfig parameter \"epochs\" doesn't have effect.**\r\nAfter changing the number of epochs=1 to epochs=100. The worker still do only 1 epoch.\r\n\r\n```\r\ntrain_config = sy.TrainConfig(\r\n model=traced_model,\r\n loss_fn=loss_fn,\r\n batch_size=batch_size,\r\n shuffle=True,\r\n #max_nr_batches=max_nr_batches,\r\n epochs=100,\r\n lr=lr,\r\n )\r\n```\n", "before_files": [{"content": "import torch as th\nfrom torch.utils.data import BatchSampler, RandomSampler, SequentialSampler\n\nfrom syft.generic import ObjectStorage\nfrom syft.federated.train_config import TrainConfig\n\n\nclass FederatedClient(ObjectStorage):\n \"\"\"A Client able to execute federated learning in local datasets.\"\"\"\n\n def __init__(self, datasets=None):\n super().__init__()\n self.datasets = datasets if datasets is not None else dict()\n self.optimizer = None\n self.train_config = None\n\n def add_dataset(self, dataset, key: str):\n self.datasets[key] = dataset\n\n def remove_dataset(self, key: str):\n if key in self.datasets:\n del self.datasets[key]\n\n def set_obj(self, obj: object):\n \"\"\"Registers objects checking if which objects it should cache.\n\n Args:\n obj: An object to be registered.\n \"\"\"\n if isinstance(obj, TrainConfig):\n self.train_config = obj\n self.optimizer = None\n else:\n super().set_obj(obj)\n\n def _build_optimizer(\n self, optimizer_name: str, model, lr: float, weight_decay: float\n ) -> th.optim.Optimizer:\n \"\"\"Build an optimizer if needed.\n\n Args:\n optimizer_name: A string indicating the optimizer name.\n lr: A float indicating the learning rate.\n weight_decay: Weight decay parameter of the optimizer\n Returns:\n A Torch Optimizer.\n \"\"\"\n if self.optimizer is not None:\n return self.optimizer\n\n optimizer_name = optimizer_name.lower()\n if optimizer_name == \"sgd\":\n optim_args = dict()\n optim_args[\"lr\"] = lr\n if weight_decay is not None:\n optim_args[\"weight_decay\"] = weight_decay\n self.optimizer = th.optim.SGD(model.parameters(), **optim_args)\n else:\n raise ValueError(\"Unknown optimizer: {}\".format(optimizer_name))\n return self.optimizer\n\n def 
fit(self, dataset_key: str, **kwargs):\n \"\"\"Fits a model on the local dataset as specified in the local TrainConfig object.\n\n Args:\n dataset_key: Identifier of the local dataset that shall be used for training.\n **kwargs: Unused.\n\n Returns:\n loss: Training loss on the last batch of training data.\n \"\"\"\n if self.train_config is None:\n raise ValueError(\"TrainConfig not defined.\")\n\n model = self.get_obj(self.train_config._model_id).obj\n loss_fn = self.get_obj(self.train_config._loss_fn_id).obj\n\n self._build_optimizer(\n self.train_config.optimizer,\n model,\n lr=self.train_config.lr,\n weight_decay=self.train_config.weight_decay,\n )\n\n return self._fit(model=model, dataset_key=dataset_key, loss_fn=loss_fn)\n\n def _create_data_loader(self, dataset_key: str, shuffle: bool = False):\n data_range = range(len(self.datasets[dataset_key]))\n if shuffle:\n sampler = RandomSampler(data_range)\n else:\n sampler = SequentialSampler(data_range)\n data_loader = th.utils.data.DataLoader(\n self.datasets[dataset_key],\n batch_size=self.train_config.batch_size,\n sampler=sampler,\n num_workers=0,\n )\n return data_loader\n\n def _fit(self, model, dataset_key, loss_fn):\n model.train()\n data_loader = self._create_data_loader(\n dataset_key=dataset_key, shuffle=self.train_config.shuffle\n )\n\n loss = None\n iteration_count = 0\n for (data, target) in data_loader:\n # Set gradients to zero\n self.optimizer.zero_grad()\n\n # Update model\n output = model(data)\n loss = loss_fn(target=target, pred=output)\n loss.backward()\n self.optimizer.step()\n\n # Update and check interation count\n iteration_count += 1\n if iteration_count >= self.train_config.max_nr_batches >= 0:\n break\n\n return loss\n", "path": "syft/federated/federated_client.py"}]} | 1,759 | 395 |
gh_patches_debug_15340 | rasdani/github-patches | git_diff | pypa__setuptools-2134 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Removing compatibility for Python 2
In #1458 and the Setuptools 45 release, this project dropped declared support for Python 2, adding a warning when a late version was invoked on Python 2. This warning helped address many of the systemic uses of Setuptools 45+ on Python 2, but there continue to be users (presumably) reporting that they've [encountered the warning](https://github.com/pypa/setuptools/issues?q=is%3Aissue+in%3Atitle+%22incompatible+install%22+).
I say presumably because most of them have submitted a blank template without providing any information.
Since March, these users have been directed to the template via bit.ly, so I have metrics on the number of users encountering and following the link.

It seems there have been 50-100 clicks per day since Apr 11. I'm guessing bit.ly doesn't give me data older than 30 days.
To put that in perspective, Setuptools received over 45M downloads in the last month, so the number of people that followed that link (3.3k) is 0.007% of the downloads.
Still, that's upwards of 100 people per day whose workflow would be broken until they could fix their environment.
Let's also consider that each of these users encountering this issue are following discouraged if not deprecated workflows and are creating new or updated environments (new since setuptools 45 was released in January).
It seems to me we have two options - support Python 2 until the incidents of users encountering this error message reduces to a trickle (what is that threshold) or bite the bullet and drop support for Python 2.
I'd like to review the outstanding issues relating to this issue, but my inclination is to move forward with dropping support.
</issue>
<code>
[start of pkg_resources/py2_warn.py]
1 import sys
2 import warnings
3 import textwrap
4
5
6 msg = textwrap.dedent("""
7 You are running Setuptools on Python 2, which is no longer
8 supported and
9 >>> SETUPTOOLS WILL STOP WORKING <<<
10 in a subsequent release (no sooner than 2020-04-20).
11 Please ensure you are installing
12 Setuptools using pip 9.x or later or pin to `setuptools<45`
13 in your environment.
14 If you have done those things and are still encountering
15 this message, please follow up at
16 https://bit.ly/setuptools-py2-warning.
17 """)
18
19 pre = "Setuptools will stop working on Python 2\n"
20
21 sys.version_info < (3,) and warnings.warn(pre + "*" * 60 + msg + "*" * 60)
22
[end of pkg_resources/py2_warn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pkg_resources/py2_warn.py b/pkg_resources/py2_warn.py
--- a/pkg_resources/py2_warn.py
+++ b/pkg_resources/py2_warn.py
@@ -4,18 +4,13 @@
msg = textwrap.dedent("""
- You are running Setuptools on Python 2, which is no longer
- supported and
- >>> SETUPTOOLS WILL STOP WORKING <<<
- in a subsequent release (no sooner than 2020-04-20).
- Please ensure you are installing
- Setuptools using pip 9.x or later or pin to `setuptools<45`
- in your environment.
- If you have done those things and are still encountering
- this message, please follow up at
- https://bit.ly/setuptools-py2-warning.
+ Encountered a version of Setuptools that no longer supports
+ this version of Python. Please head to
+ https://bit.ly/setuptools-py2-warning for support.
""")
-pre = "Setuptools will stop working on Python 2\n"
+pre = "Setuptools no longer works on Python 2\n"
-sys.version_info < (3,) and warnings.warn(pre + "*" * 60 + msg + "*" * 60)
+if sys.version_info < (3,):
+ warnings.warn(pre + "*" * 60 + msg + "*" * 60)
+ raise SystemExit(32)
| {"golden_diff": "diff --git a/pkg_resources/py2_warn.py b/pkg_resources/py2_warn.py\n--- a/pkg_resources/py2_warn.py\n+++ b/pkg_resources/py2_warn.py\n@@ -4,18 +4,13 @@\n \n \n msg = textwrap.dedent(\"\"\"\n- You are running Setuptools on Python 2, which is no longer\n- supported and\n- >>> SETUPTOOLS WILL STOP WORKING <<<\n- in a subsequent release (no sooner than 2020-04-20).\n- Please ensure you are installing\n- Setuptools using pip 9.x or later or pin to `setuptools<45`\n- in your environment.\n- If you have done those things and are still encountering\n- this message, please follow up at\n- https://bit.ly/setuptools-py2-warning.\n+ Encountered a version of Setuptools that no longer supports\n+ this version of Python. Please head to\n+ https://bit.ly/setuptools-py2-warning for support.\n \"\"\")\n \n-pre = \"Setuptools will stop working on Python 2\\n\"\n+pre = \"Setuptools no longer works on Python 2\\n\"\n \n-sys.version_info < (3,) and warnings.warn(pre + \"*\" * 60 + msg + \"*\" * 60)\n+if sys.version_info < (3,):\n+ warnings.warn(pre + \"*\" * 60 + msg + \"*\" * 60)\n+ raise SystemExit(32)\n", "issue": "Removing compatibility for Python 2\nIn #1458 and the Setuptools 45 release, this project dropped declared support for Python 2, adding a warning when a late version was invoked on Python 2. This warning helped address many of the systemic uses of Setuptools 45+ on Python 2, but there continue to be users (presumably) reporting that they've [encountered the warning](https://github.com/pypa/setuptools/issues?q=is%3Aissue+in%3Atitle+%22incompatible+install%22+).\r\n\r\nI say presumably because most of them have submitted a blank template without providing any information.\r\n\r\nSince March, these users have been directed to the template via bit.ly, so I have metrics on the number of users encountering and following the link.\r\n\r\n\r\n\r\nIt seems there have been 50-100 clicks per day since Apr 11. 
I'm guessing bit.ly doesn't give me data older than 30 days.\r\n\r\nTo put that in perspective, Setuptools received over 45M downloads in the last month, so the number of people that followed that link (3.3k) is 0.007% of the downloads.\r\n\r\nStill, that's upwards of 100 people per day whose workflow would be broken until they could fix their environment.\r\n\r\nLet's also consider that each of these users encountering this issue are following discouraged if not deprecated workflows and are creating new or updated environments (new since setuptools 45 was released in January).\r\n\r\nIt seems to me we have two options - support Python 2 until the incidents of users encountering this error message reduces to a trickle (what is that threshold) or bite the bullet and drop support for Python 2.\r\n\r\nI'd like to review the outstanding issues relating to this issue, but my inclination is to move forward with dropping support.\n", "before_files": [{"content": "import sys\nimport warnings\nimport textwrap\n\n\nmsg = textwrap.dedent(\"\"\"\n You are running Setuptools on Python 2, which is no longer\n supported and\n >>> SETUPTOOLS WILL STOP WORKING <<<\n in a subsequent release (no sooner than 2020-04-20).\n Please ensure you are installing\n Setuptools using pip 9.x or later or pin to `setuptools<45`\n in your environment.\n If you have done those things and are still encountering\n this message, please follow up at\n https://bit.ly/setuptools-py2-warning.\n \"\"\")\n\npre = \"Setuptools will stop working on Python 2\\n\"\n\nsys.version_info < (3,) and warnings.warn(pre + \"*\" * 60 + msg + \"*\" * 60)\n", "path": "pkg_resources/py2_warn.py"}]} | 1,207 | 328 |
gh_patches_debug_10597 | rasdani/github-patches | git_diff | e2nIEE__pandapower-849 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Estimation results
Hello,
I think in python file pandapower -> estimation -> results.py there is a baseMVA missing in the calculation.
I think line 22 should be adjusted to this, or similar:
`Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA`
Thanks
</issue>
<code>
[start of pandapower/estimation/results.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6 import numpy as np
7
8 from pandapower.pypower.idx_bus import PD, QD
9 from pandapower.pf.ppci_variables import _get_pf_variables_from_ppci
10 from pandapower.pf.pfsoln_numba import pfsoln
11 from pandapower.results import _copy_results_ppci_to_ppc, _extract_results_se, init_results
12 from pandapower.auxiliary import _add_pf_options, get_values, _clean_up
13
14 def _calc_power_flow(ppci, V):
15 # store results for all elements
16 # calculate branch results (in ppc_i)
17 baseMVA, bus, gen, branch, ref, pv, pq, _, _, _, ref_gens = _get_pf_variables_from_ppci(ppci)
18 Ybus, Yf, Yt = ppci['internal']['Ybus'], ppci['internal']['Yf'], ppci['internal']['Yt']
19 ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)
20
21 # calculate bus power injections
22 Sbus = np.multiply(V, np.conj(Ybus * V))
23 ppci["bus"][:, PD] = -Sbus.real # saved in per unit, injection -> demand
24 ppci["bus"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand
25 return ppci
26
27
28 def _extract_result_ppci_to_pp(net, ppc, ppci):
29 # convert to pandapower indices
30 ppc = _copy_results_ppci_to_ppc(ppci, ppc, mode="se")
31
32 # extract results from ppc
33 try:
34 _add_pf_options(net, tolerance_mva=1e-8, trafo_loading="current",
35 numba=True, ac=True, algorithm='nr', max_iteration="auto")
36 except:
37 pass
38 # writes res_bus.vm_pu / va_degree and res_line
39 _extract_results_se(net, ppc)
40
41 # restore backup of previous results
42 _rename_results(net)
43
44 # additionally, write bus power demand results (these are not written in _extract_results)
45 mapping_table = net["_pd2ppc_lookups"]["bus"]
46 net.res_bus_est.index = net.bus.index
47 net.res_bus_est.p_mw = get_values(ppc["bus"][:, 2], net.bus.index.values,
48 mapping_table)
49 net.res_bus_est.q_mvar = get_values(ppc["bus"][:, 3], net.bus.index.values,
50 mapping_table)
51
52 _clean_up(net)
53 # delete results which are not correctly calculated
54 for k in list(net.keys()):
55 if k.startswith("res_") and k.endswith("_est") and \
56 k not in ("res_bus_est", "res_line_est", "res_trafo_est", "res_trafo3w_est"):
57 del net[k]
58 return net
59
60
61 def _copy_power_flow_results(net):
62 """
63 copy old power flow results (if they exist) into res_*_power_flow tables for backup
64 :param net: pandapower grid
65 :return:
66 """
67 elements_to_init = ["bus", "ext_grid", "line", "load", "load_3ph" "sgen", "sgen_3ph", "trafo", "trafo3w",
68 "shunt", "impedance", "gen", "ward", "xward", "dcline"]
69 for element in elements_to_init:
70 res_name = "res_" + element
71 res_name_pf = res_name + "_power_flow"
72 if res_name in net:
73 net[res_name_pf] = (net[res_name]).copy()
74 init_results(net)
75
76
77 def _rename_results(net):
78 """
79 write result tables to result tables for estimation (e.g., res_bus -> res_bus_est)
80 reset backed up result tables (e.g., res_bus_power_flow -> res_bus)
81 :param net: pandapower grid
82 :return:
83 """
84 elements_to_init = ["bus", "ext_grid", "line", "load", "sgen", "trafo", "trafo3w",
85 "shunt", "impedance", "gen", "ward", "xward", "dcline"]
86 # rename res_* tables to res_*_est and then res_*_power_flow to res_*
87 for element in elements_to_init:
88 res_name = "res_" + element
89 res_name_pf = res_name + "_power_flow"
90 res_name_est = res_name + "_est"
91 net[res_name_est] = net[res_name]
92 if res_name_pf in net:
93 net[res_name] = net[res_name_pf]
94 else:
95 del net[res_name]
96
97 def eppci2pp(net, ppc, eppci):
98 # calculate the branch power flow and bus power injection based on the estimated voltage vector
99 eppci = _calc_power_flow(eppci, eppci.V)
100
101 # extract the result from ppci to ppc and pandpower network
102 net = _extract_result_ppci_to_pp(net, ppc, eppci)
103 return net
104
105
[end of pandapower/estimation/results.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandapower/estimation/results.py b/pandapower/estimation/results.py
--- a/pandapower/estimation/results.py
+++ b/pandapower/estimation/results.py
@@ -19,7 +19,7 @@
ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)
# calculate bus power injections
- Sbus = np.multiply(V, np.conj(Ybus * V))
+ Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA
ppci["bus"][:, PD] = -Sbus.real # saved in per unit, injection -> demand
ppci["bus"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand
return ppci
| {"golden_diff": "diff --git a/pandapower/estimation/results.py b/pandapower/estimation/results.py\n--- a/pandapower/estimation/results.py\n+++ b/pandapower/estimation/results.py\n@@ -19,7 +19,7 @@\n ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)\n \n # calculate bus power injections\n- Sbus = np.multiply(V, np.conj(Ybus * V))\n+ Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA\n ppci[\"bus\"][:, PD] = -Sbus.real # saved in per unit, injection -> demand\n ppci[\"bus\"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand\n return ppci\n", "issue": "Estimation results\nHello,\r\n\r\nI think in python file pandapower -> estimation -> results.py there is a baseMVA missing in the calculation.\r\n\r\nI think line 22 should be adjusted to this, or similar:\r\n\r\n`Sbus = np.multiply(V, np.conj(Ybus * V)) * baseMVA`\r\n\r\nThanks\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\nimport numpy as np\n\nfrom pandapower.pypower.idx_bus import PD, QD\nfrom pandapower.pf.ppci_variables import _get_pf_variables_from_ppci\nfrom pandapower.pf.pfsoln_numba import pfsoln\nfrom pandapower.results import _copy_results_ppci_to_ppc, _extract_results_se, init_results\nfrom pandapower.auxiliary import _add_pf_options, get_values, _clean_up\n\ndef _calc_power_flow(ppci, V):\n # store results for all elements\n # calculate branch results (in ppc_i)\n baseMVA, bus, gen, branch, ref, pv, pq, _, _, _, ref_gens = _get_pf_variables_from_ppci(ppci)\n Ybus, Yf, Yt = ppci['internal']['Ybus'], ppci['internal']['Yf'], ppci['internal']['Yt']\n ppci['bus'], ppci['gen'], ppci['branch'] = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, ref_gens)\n\n # calculate bus power injections\n Sbus = np.multiply(V, np.conj(Ybus * V))\n ppci[\"bus\"][:, PD] = -Sbus.real # saved in per unit, injection -> demand\n ppci[\"bus\"][:, QD] = -Sbus.imag # saved in per unit, injection -> demand\n return ppci\n\n\ndef _extract_result_ppci_to_pp(net, ppc, ppci):\n # convert to pandapower indices\n ppc = _copy_results_ppci_to_ppc(ppci, ppc, mode=\"se\")\n\n # extract results from ppc\n try:\n _add_pf_options(net, tolerance_mva=1e-8, trafo_loading=\"current\",\n numba=True, ac=True, algorithm='nr', max_iteration=\"auto\")\n except:\n pass\n # writes res_bus.vm_pu / va_degree and res_line\n _extract_results_se(net, ppc)\n\n # restore backup of previous results\n _rename_results(net)\n\n # additionally, write bus power demand results (these are not written in _extract_results)\n mapping_table = net[\"_pd2ppc_lookups\"][\"bus\"]\n net.res_bus_est.index = net.bus.index\n net.res_bus_est.p_mw = get_values(ppc[\"bus\"][:, 2], net.bus.index.values,\n mapping_table)\n net.res_bus_est.q_mvar = get_values(ppc[\"bus\"][:, 3], net.bus.index.values,\n mapping_table)\n\n _clean_up(net)\n # delete results which are not correctly calculated\n for k in list(net.keys()):\n if k.startswith(\"res_\") and k.endswith(\"_est\") and \\\n k not in (\"res_bus_est\", \"res_line_est\", \"res_trafo_est\", \"res_trafo3w_est\"):\n del net[k]\n return net\n\n\ndef _copy_power_flow_results(net):\n \"\"\"\n copy old power flow results (if they exist) into res_*_power_flow tables for backup\n :param net: pandapower grid\n :return:\n \"\"\"\n elements_to_init = [\"bus\", \"ext_grid\", \"line\", \"load\", \"load_3ph\" \"sgen\", 
\"sgen_3ph\", \"trafo\", \"trafo3w\",\n \"shunt\", \"impedance\", \"gen\", \"ward\", \"xward\", \"dcline\"]\n for element in elements_to_init:\n res_name = \"res_\" + element\n res_name_pf = res_name + \"_power_flow\"\n if res_name in net:\n net[res_name_pf] = (net[res_name]).copy()\n init_results(net)\n\n\ndef _rename_results(net):\n \"\"\"\n write result tables to result tables for estimation (e.g., res_bus -> res_bus_est)\n reset backed up result tables (e.g., res_bus_power_flow -> res_bus)\n :param net: pandapower grid\n :return:\n \"\"\"\n elements_to_init = [\"bus\", \"ext_grid\", \"line\", \"load\", \"sgen\", \"trafo\", \"trafo3w\",\n \"shunt\", \"impedance\", \"gen\", \"ward\", \"xward\", \"dcline\"]\n # rename res_* tables to res_*_est and then res_*_power_flow to res_*\n for element in elements_to_init:\n res_name = \"res_\" + element\n res_name_pf = res_name + \"_power_flow\"\n res_name_est = res_name + \"_est\"\n net[res_name_est] = net[res_name]\n if res_name_pf in net:\n net[res_name] = net[res_name_pf]\n else:\n del net[res_name]\n\ndef eppci2pp(net, ppc, eppci):\n # calculate the branch power flow and bus power injection based on the estimated voltage vector\n eppci = _calc_power_flow(eppci, eppci.V)\n\n # extract the result from ppci to ppc and pandpower network\n net = _extract_result_ppci_to_pp(net, ppc, eppci)\n return net\n\n", "path": "pandapower/estimation/results.py"}]} | 2,017 | 212 |
gh_patches_debug_25376 | rasdani/github-patches | git_diff | team-ocean__veros-49 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Job resubmission with job scheduler doesn't work
I was not able to find out the reason behind resubmission issue with job scheduler, such as:
`veros-resubmit -i acc.lowres -n 50 -l 62208000 -c "python acc.py -b bohrium -v debug" --callback "/usr/bin/sbatch /groups/ocean/nutrik/veros_cases/paper/acc/veros_batch.sh"`
Although jobs with run length of up to 29 days are resubmitted fine, those with longer run length are not resubmitted and no errors or messages are reported.
In fact, jobs are successfully resubmitted without scheduler (`--callback "./veros_batch.sh"`) for any run length.
</issue>
<code>
[start of veros/cli/veros_resubmit.py]
1 #!/usr/bin/env python
2
3 import functools
4 import subprocess
5 import shlex
6 import sys
7 import os
8
9 import click
10
11 LAST_N_FILENAME = "{identifier}.current_run"
12
13
14 class ShellCommand(click.ParamType):
15 name = "command"
16
17 def convert(self, value, param, ctx):
18 return shlex.split(value)
19
20
21 def get_current_n(filename):
22 if not os.path.isfile(filename):
23 return 0
24
25 with open(filename, "r") as f:
26 return int(f.read())
27
28
29 def write_next_n(n, filename):
30 with open(filename, "w") as f:
31 f.write(str(n))
32
33
34 def call_veros(cmd, name, n, runlen):
35 identifier = "{name}.{n:0>4}".format(name=name, n=n)
36 prev_id = "{name}.{n:0>4}".format(name=name, n=n - 1)
37 args = ["-s", "identifier", identifier, "-s", "restart_output_filename",
38 "{identifier}.restart.h5", "-s", "runlen", "{}".format(runlen)]
39 if n:
40 args += ["-s", "restart_input_filename", "{prev_id}.restart.h5".format(prev_id=prev_id)]
41 sys.stdout.write("\n >>> {}\n\n".format(" ".join(cmd + args)))
42 sys.stdout.flush()
43 try:
44 subprocess.check_call(cmd + args)
45 except subprocess.CalledProcessError:
46 raise RuntimeError("Run {} failed, exiting".format(n))
47
48
49 def resubmit(identifier, num_runs, length_per_run, veros_cmd, callback):
50 """Performs several runs of Veros back to back, using the previous run as restart input.
51
52 Intended to be used with scheduling systems (e.g. SLURM or PBS).
53
54 """
55 last_n_filename = LAST_N_FILENAME.format(identifier=identifier)
56
57 current_n = get_current_n(last_n_filename)
58 if current_n >= num_runs:
59 return
60
61 call_veros(veros_cmd, identifier, current_n, length_per_run)
62 write_next_n(current_n + 1, last_n_filename)
63 subprocess.Popen(callback)
64
65
66 @click.command("veros-resubmit", short_help="Re-run a Veros setup several times")
67 @click.option("-i", "--identifier", required=True,
68 help="Base identifier of the simulation")
69 @click.option("-n", "--num-runs", type=click.INT, required=True,
70 help="Total number of runs to execute")
71 @click.option("-l", "--length-per-run", type=click.FLOAT, required=True,
72 help="Length (in seconds) of each run")
73 @click.option("-c", "--veros-cmd", type=ShellCommand(), required=True,
74 help="The command that is used to call veros (quoted)")
75 @click.option("--callback", metavar="CMD", type=ShellCommand(), default=None,
76 help="Command to call after each run has finished (quoted, default: call self)")
77 @functools.wraps(resubmit)
78 def cli(*args, **kwargs):
79 if kwargs["callback"] is None:
80 kwargs["callback"] = sys.argv
81 resubmit(*args, **kwargs)
82
83
84 if __name__ == "__main__":
85 cli()
86
[end of veros/cli/veros_resubmit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/veros/cli/veros_resubmit.py b/veros/cli/veros_resubmit.py
--- a/veros/cli/veros_resubmit.py
+++ b/veros/cli/veros_resubmit.py
@@ -5,10 +5,13 @@
import shlex
import sys
import os
+import time
import click
LAST_N_FILENAME = "{identifier}.current_run"
+CHILD_TIMEOUT = 10
+POLL_DELAY = 0.1
class ShellCommand(click.ParamType):
@@ -60,7 +63,21 @@
call_veros(veros_cmd, identifier, current_n, length_per_run)
write_next_n(current_n + 1, last_n_filename)
- subprocess.Popen(callback)
+ next_proc = subprocess.Popen(callback)
+
+ # catch immediately crashing processes
+ timeout = CHILD_TIMEOUT
+
+ while timeout > 0:
+ retcode = next_proc.poll()
+ if retcode is not None:
+ if retcode > 0:
+ # process crashed
+ raise RuntimeError("Callback exited with {}".format(retcode))
+ else:
+ break
+ time.sleep(POLL_DELAY)
+ timeout -= POLL_DELAY
@click.command("veros-resubmit", short_help="Re-run a Veros setup several times")
@@ -78,6 +95,7 @@
def cli(*args, **kwargs):
if kwargs["callback"] is None:
kwargs["callback"] = sys.argv
+
resubmit(*args, **kwargs)
| {"golden_diff": "diff --git a/veros/cli/veros_resubmit.py b/veros/cli/veros_resubmit.py\n--- a/veros/cli/veros_resubmit.py\n+++ b/veros/cli/veros_resubmit.py\n@@ -5,10 +5,13 @@\n import shlex\n import sys\n import os\n+import time\n \n import click\n \n LAST_N_FILENAME = \"{identifier}.current_run\"\n+CHILD_TIMEOUT = 10\n+POLL_DELAY = 0.1\n \n \n class ShellCommand(click.ParamType):\n@@ -60,7 +63,21 @@\n \n call_veros(veros_cmd, identifier, current_n, length_per_run)\n write_next_n(current_n + 1, last_n_filename)\n- subprocess.Popen(callback)\n+ next_proc = subprocess.Popen(callback)\n+\n+ # catch immediately crashing processes\n+ timeout = CHILD_TIMEOUT\n+\n+ while timeout > 0:\n+ retcode = next_proc.poll()\n+ if retcode is not None:\n+ if retcode > 0:\n+ # process crashed\n+ raise RuntimeError(\"Callback exited with {}\".format(retcode))\n+ else:\n+ break\n+ time.sleep(POLL_DELAY)\n+ timeout -= POLL_DELAY\n \n \n @click.command(\"veros-resubmit\", short_help=\"Re-run a Veros setup several times\")\n@@ -78,6 +95,7 @@\n def cli(*args, **kwargs):\n if kwargs[\"callback\"] is None:\n kwargs[\"callback\"] = sys.argv\n+\n resubmit(*args, **kwargs)\n", "issue": "Job resubmission with job scheduler doesn't work \nI was not able to find out the reason behind resubmission issue with job scheduler, such as:\r\n`veros-resubmit -i acc.lowres -n 50 -l 62208000 -c \"python acc.py -b bohrium -v debug\" --callback \"/usr/bin/sbatch /groups/ocean/nutrik/veros_cases/paper/acc/veros_batch.sh\"`\r\nAlthough jobs with run length of up to 29 days are resubmitted fine, those with longer run length are not resubmitted and no errors or messages are reported.\r\n\r\nIn fact, jobs are successfully resubmitted without scheduler (`--callback \"./veros_batch.sh\"`) for any run length.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport functools\nimport subprocess\nimport shlex\nimport sys\nimport os\n\nimport click\n\nLAST_N_FILENAME = \"{identifier}.current_run\"\n\n\nclass ShellCommand(click.ParamType):\n name = \"command\"\n\n def convert(self, value, param, ctx):\n return shlex.split(value)\n\n\ndef get_current_n(filename):\n if not os.path.isfile(filename):\n return 0\n\n with open(filename, \"r\") as f:\n return int(f.read())\n\n\ndef write_next_n(n, filename):\n with open(filename, \"w\") as f:\n f.write(str(n))\n\n\ndef call_veros(cmd, name, n, runlen):\n identifier = \"{name}.{n:0>4}\".format(name=name, n=n)\n prev_id = \"{name}.{n:0>4}\".format(name=name, n=n - 1)\n args = [\"-s\", \"identifier\", identifier, \"-s\", \"restart_output_filename\",\n \"{identifier}.restart.h5\", \"-s\", \"runlen\", \"{}\".format(runlen)]\n if n:\n args += [\"-s\", \"restart_input_filename\", \"{prev_id}.restart.h5\".format(prev_id=prev_id)]\n sys.stdout.write(\"\\n >>> {}\\n\\n\".format(\" \".join(cmd + args)))\n sys.stdout.flush()\n try:\n subprocess.check_call(cmd + args)\n except subprocess.CalledProcessError:\n raise RuntimeError(\"Run {} failed, exiting\".format(n))\n\n\ndef resubmit(identifier, num_runs, length_per_run, veros_cmd, callback):\n \"\"\"Performs several runs of Veros back to back, using the previous run as restart input.\n\n Intended to be used with scheduling systems (e.g. 
SLURM or PBS).\n\n \"\"\"\n last_n_filename = LAST_N_FILENAME.format(identifier=identifier)\n\n current_n = get_current_n(last_n_filename)\n if current_n >= num_runs:\n return\n\n call_veros(veros_cmd, identifier, current_n, length_per_run)\n write_next_n(current_n + 1, last_n_filename)\n subprocess.Popen(callback)\n\n\[email protected](\"veros-resubmit\", short_help=\"Re-run a Veros setup several times\")\[email protected](\"-i\", \"--identifier\", required=True,\n help=\"Base identifier of the simulation\")\[email protected](\"-n\", \"--num-runs\", type=click.INT, required=True,\n help=\"Total number of runs to execute\")\[email protected](\"-l\", \"--length-per-run\", type=click.FLOAT, required=True,\n help=\"Length (in seconds) of each run\")\[email protected](\"-c\", \"--veros-cmd\", type=ShellCommand(), required=True,\n help=\"The command that is used to call veros (quoted)\")\[email protected](\"--callback\", metavar=\"CMD\", type=ShellCommand(), default=None,\n help=\"Command to call after each run has finished (quoted, default: call self)\")\[email protected](resubmit)\ndef cli(*args, **kwargs):\n if kwargs[\"callback\"] is None:\n kwargs[\"callback\"] = sys.argv\n resubmit(*args, **kwargs)\n\n\nif __name__ == \"__main__\":\n cli()\n", "path": "veros/cli/veros_resubmit.py"}]} | 1,572 | 348 |
gh_patches_debug_30694 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1599 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: was_wolfsburg_de stopped fetching data
### I Have A Problem With:
A specific source
### What's Your Problem
The Source was_wolfsburg_de stopped fetching data for 2024. I suspect because the request link is no longer accurate.
I have experimented a bit, and with the following address I receive current data:
https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php
It only concerns "Restmüll, Bioabfall und Papierabfall". "Gelber Sack" is still functioning.
### Source (if relevant)
was_wolfsburg_de
### Logs
_No response_
### Relevant Configuration
_No response_
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [ ] Checked that the website of your service provider is still working
- [ ] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py]
1 import datetime
2 import re
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6 from waste_collection_schedule.service.ICS import ICS
7
8 TITLE = "Wolfsburger Abfallwirtschaft und Straßenreinigung"
9 DESCRIPTION = "Source for waste collections for WAS-Wolfsburg, Germany."
10 URL = "https://was-wolfsburg.de"
11 TEST_CASES = {
12 "Barnstorf": {"city": "Barnstorf", "street": "Bahnhofspassage"},
13 "Sรผlfeld": {"city": "Sรผlfeld", "street": "Bรคrheide"},
14 }
15 CHARACTER_MAP = {
16 ord("รผ"): "u",
17 ord("รถ"): "o", # doesn't appear to be needed
18 ord("รค"): "a", # doesn't appear to be needed
19 }
20
21
22 class Source:
23 def __init__(self, city: str, street: str):
24 self._city = city.translate(CHARACTER_MAP)
25 self._street = street.translate(CHARACTER_MAP)
26 self._ics = ICS()
27
28 def fetch(self):
29 # fetch "Gelber Sack"
30 args = {"g": self._city}
31 r = requests.get(
32 "https://was-wolfsburg.de/subgelberweihgarten/php/abfuhrgelber.php",
33 params=args,
34 )
35
36 entries = []
37 match = re.findall(r"(\d{2})\.(\d{2})\.(\d{4})", r.text)
38 for m in match:
39 date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))
40 entries.append(Collection(date, "Gelber Sack"))
41
42 # fetch remaining collections
43 args = {"ortabf": self._street}
44 r = requests.post(
45 "https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php",
46 data=args,
47 )
48 dates = self._ics.convert(r.text)
49 for d in dates:
50 entries.append(Collection(d[0], d[1]))
51
52 return entries
53
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py
@@ -12,6 +12,14 @@
"Barnstorf": {"city": "Barnstorf", "street": "Bahnhofspassage"},
"Sรผlfeld": {"city": "Sรผlfeld", "street": "Bรคrheide"},
}
+
+ICON_MAP = {
+ "Gelber Sack": "mdi:recycle",
+ "Bioabfall": "mdi:leaf",
+ "Restabfall": "mdi:trash-can",
+ "Altpapier": "mdi:file-document-outline",
+}
+
CHARACTER_MAP = {
ord("รผ"): "u",
ord("รถ"): "o", # doesn't appear to be needed
@@ -37,16 +45,21 @@
match = re.findall(r"(\d{2})\.(\d{2})\.(\d{4})", r.text)
for m in match:
date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))
- entries.append(Collection(date, "Gelber Sack"))
+ entries.append(
+ Collection(date, "Gelber Sack", icon=ICON_MAP["Gelber Sack"])
+ )
# fetch remaining collections
- args = {"ortabf": self._street}
- r = requests.post(
- "https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php",
- data=args,
+ args = {"k": self._street}
+ r = requests.get(
+ "https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php",
+ params=args,
+ )
+ match = re.findall(
+ r"(\d{2})\.(\d{2})\.(\d{4}).*?<em>\s*([A-Za-z- ]+)\s*</em>", r.text
)
- dates = self._ics.convert(r.text)
- for d in dates:
- entries.append(Collection(d[0], d[1]))
+ for m in match:
+ date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))
+ entries.append(Collection(date, m[3], icon=ICON_MAP[m[3]]))
return entries
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py\n@@ -12,6 +12,14 @@\n \"Barnstorf\": {\"city\": \"Barnstorf\", \"street\": \"Bahnhofspassage\"},\n \"S\u00fclfeld\": {\"city\": \"S\u00fclfeld\", \"street\": \"B\u00e4rheide\"},\n }\n+\n+ICON_MAP = {\n+ \"Gelber Sack\": \"mdi:recycle\",\n+ \"Bioabfall\": \"mdi:leaf\",\n+ \"Restabfall\": \"mdi:trash-can\",\n+ \"Altpapier\": \"mdi:file-document-outline\",\n+}\n+\n CHARACTER_MAP = {\n ord(\"\u00fc\"): \"u\",\n ord(\"\u00f6\"): \"o\", # doesn't appear to be needed\n@@ -37,16 +45,21 @@\n match = re.findall(r\"(\\d{2})\\.(\\d{2})\\.(\\d{4})\", r.text)\n for m in match:\n date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n- entries.append(Collection(date, \"Gelber Sack\"))\n+ entries.append(\n+ Collection(date, \"Gelber Sack\", icon=ICON_MAP[\"Gelber Sack\"])\n+ )\n \n # fetch remaining collections\n- args = {\"ortabf\": self._street}\n- r = requests.post(\n- \"https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php\",\n- data=args,\n+ args = {\"k\": self._street}\n+ r = requests.get(\n+ \"https://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php\",\n+ params=args,\n+ )\n+ match = re.findall(\n+ r\"(\\d{2})\\.(\\d{2})\\.(\\d{4}).*?<em>\\s*([A-Za-z- ]+)\\s*</em>\", r.text\n )\n- dates = self._ics.convert(r.text)\n- for d in dates:\n- entries.append(Collection(d[0], d[1]))\n+ for m in match:\n+ date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n+ entries.append(Collection(date, m[3], icon=ICON_MAP[m[3]]))\n \n return entries\n", "issue": "[Bug]: was_wolfsburg_de stopped fetching data\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nThe Source was_wolfsburg_de stopped fetching data for 2024. I suspect because the request link is no longer accurate.\r\nI have experimented a bit, and with the following address I receive current data: \r\n\r\nhttps://was-wolfsburg.de/subabfuhrtermine/php/abfuhrtermine.php\r\n\r\nIt only concerns \"Restm\u00fcll, Bioabfall und Papierabfall\". 
\"Gelber Sack\" is still functioning.\n\n### Source (if relevant)\n\nwas_wolfsburg_de\n\n### Logs\n\n_No response_\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [ ] Checked that the website of your service provider is still working\n- [ ] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import datetime\nimport re\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"Wolfsburger Abfallwirtschaft und Stra\u00dfenreinigung\"\nDESCRIPTION = \"Source for waste collections for WAS-Wolfsburg, Germany.\"\nURL = \"https://was-wolfsburg.de\"\nTEST_CASES = {\n \"Barnstorf\": {\"city\": \"Barnstorf\", \"street\": \"Bahnhofspassage\"},\n \"S\u00fclfeld\": {\"city\": \"S\u00fclfeld\", \"street\": \"B\u00e4rheide\"},\n}\nCHARACTER_MAP = {\n ord(\"\u00fc\"): \"u\",\n ord(\"\u00f6\"): \"o\", # doesn't appear to be needed\n ord(\"\u00e4\"): \"a\", # doesn't appear to be needed\n}\n\n\nclass Source:\n def __init__(self, city: str, street: str):\n self._city = city.translate(CHARACTER_MAP)\n self._street = street.translate(CHARACTER_MAP)\n self._ics = ICS()\n\n def fetch(self):\n # fetch \"Gelber Sack\"\n args = {\"g\": self._city}\n r = requests.get(\n \"https://was-wolfsburg.de/subgelberweihgarten/php/abfuhrgelber.php\",\n params=args,\n )\n\n entries = []\n match = re.findall(r\"(\\d{2})\\.(\\d{2})\\.(\\d{4})\", r.text)\n for m in match:\n date = datetime.date(day=int(m[0]), month=int(m[1]), year=int(m[2]))\n entries.append(Collection(date, \"Gelber Sack\"))\n\n # fetch remaining collections\n args = {\"ortabf\": self._street}\n r = requests.post(\n \"https://was-wolfsburg.de/subabfuhrtermine/ics_abfuhrtermine3.php\",\n data=args,\n )\n dates = self._ics.convert(r.text)\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/was_wolfsburg_de.py"}]} | 1,484 | 610 |
gh_patches_debug_18547 | rasdani/github-patches | git_diff | searx__searx-1501 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Asksteem is gone
The API has been discontinued so it should probably be removed as an option entirely.
</issue>
<code>
[start of searx/engines/asksteem.py]
1 """
2 Asksteem (general)
3
4 @website https://asksteem.com/
5 @provide-api yes
6
7 @using-api yes
8 @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)
9 @stable yes
10 @parse url, title, content
11 """
12
13 from json import loads
14 from searx.url_utils import urlencode
15
16 # engine dependent config
17 categories = ['general']
18 paging = True
19 language_support = False
20 disabled = True
21
22 # search-url
23 search_url = 'https://api.asksteem.com/search?{params}'
24 result_url = 'https://steemit.com/@{author}/{title}'
25
26
27 # do search-request
28 def request(query, params):
29 url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))
30 params['url'] = url
31 return params
32
33
34 # get response from search-request
35 def response(resp):
36 json = loads(resp.text)
37
38 results = []
39
40 for result in json.get('results', []):
41 results.append({'url': result_url.format(author=result['author'], title=result['permlink']),
42 'title': result['title'],
43 'content': result['summary']})
44 return results
45
[end of searx/engines/asksteem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/asksteem.py b/searx/engines/asksteem.py
deleted file mode 100644
--- a/searx/engines/asksteem.py
+++ /dev/null
@@ -1,44 +0,0 @@
-"""
- Asksteem (general)
-
- @website https://asksteem.com/
- @provide-api yes
-
- @using-api yes
- @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)
- @stable yes
- @parse url, title, content
-"""
-
-from json import loads
-from searx.url_utils import urlencode
-
-# engine dependent config
-categories = ['general']
-paging = True
-language_support = False
-disabled = True
-
-# search-url
-search_url = 'https://api.asksteem.com/search?{params}'
-result_url = 'https://steemit.com/@{author}/{title}'
-
-
-# do search-request
-def request(query, params):
- url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))
- params['url'] = url
- return params
-
-
-# get response from search-request
-def response(resp):
- json = loads(resp.text)
-
- results = []
-
- for result in json.get('results', []):
- results.append({'url': result_url.format(author=result['author'], title=result['permlink']),
- 'title': result['title'],
- 'content': result['summary']})
- return results
| {"golden_diff": "diff --git a/searx/engines/asksteem.py b/searx/engines/asksteem.py\ndeleted file mode 100644\n--- a/searx/engines/asksteem.py\n+++ /dev/null\n@@ -1,44 +0,0 @@\n-\"\"\"\n- Asksteem (general)\n-\n- @website https://asksteem.com/\n- @provide-api yes\n-\n- @using-api yes\n- @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)\n- @stable yes\n- @parse url, title, content\n-\"\"\"\n-\n-from json import loads\n-from searx.url_utils import urlencode\n-\n-# engine dependent config\n-categories = ['general']\n-paging = True\n-language_support = False\n-disabled = True\n-\n-# search-url\n-search_url = 'https://api.asksteem.com/search?{params}'\n-result_url = 'https://steemit.com/@{author}/{title}'\n-\n-\n-# do search-request\n-def request(query, params):\n- url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))\n- params['url'] = url\n- return params\n-\n-\n-# get response from search-request\n-def response(resp):\n- json = loads(resp.text)\n-\n- results = []\n-\n- for result in json.get('results', []):\n- results.append({'url': result_url.format(author=result['author'], title=result['permlink']),\n- 'title': result['title'],\n- 'content': result['summary']})\n- return results\n", "issue": "Asksteem is gone\nThe API has been discontinued so it should probably be removed as an option entirely.\n", "before_files": [{"content": "\"\"\"\n Asksteem (general)\n\n @website https://asksteem.com/\n @provide-api yes\n\n @using-api yes\n @results JSON (https://github.com/Hoxly/asksteem-docs/wiki)\n @stable yes\n @parse url, title, content\n\"\"\"\n\nfrom json import loads\nfrom searx.url_utils import urlencode\n\n# engine dependent config\ncategories = ['general']\npaging = True\nlanguage_support = False\ndisabled = True\n\n# search-url\nsearch_url = 'https://api.asksteem.com/search?{params}'\nresult_url = 'https://steemit.com/@{author}/{title}'\n\n\n# do search-request\ndef request(query, params):\n url = search_url.format(params=urlencode({'q': query, 'pg': params['pageno']}))\n params['url'] = url\n return params\n\n\n# get response from search-request\ndef response(resp):\n json = loads(resp.text)\n\n results = []\n\n for result in json.get('results', []):\n results.append({'url': result_url.format(author=result['author'], title=result['permlink']),\n 'title': result['title'],\n 'content': result['summary']})\n return results\n", "path": "searx/engines/asksteem.py"}]} | 924 | 360 |
gh_patches_debug_9 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1038 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update the version number on the logo and footer.
For sprint 25, we will increment to 0.3.2
</issue>
<code>
[start of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
1 hdx_version='v0.3.1'
[end of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version='v0.3.1'
\ No newline at end of file
+hdx_version='v0.3.2'
\ No newline at end of file
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version='v0.3.1'\n\\ No newline at end of file\n+hdx_version='v0.3.2'\n\\ No newline at end of file\n", "issue": "Update the version number on the logo and footer.\nFor sprint 25, we will increment to 0.3.2\n\n", "before_files": [{"content": "hdx_version='v0.3.1'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 594 | 121 |
gh_patches_debug_42493 | rasdani/github-patches | git_diff | PrefectHQ__prefect-3725 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow exporter arguments in Jupyter ExecuteNotebook task
## Current behavior
When running the `jupyter.jupyter.ExecuteNotebook` task with `output_format='html'` the default settings for the HTMLExporter are used. There is no way to pass arguments to this exporter.
## Proposed behavior
Allow passing arguments to the HTMLExporter.
## Implementation suggestion
Something like `html_exporter = nbconvert.HTMLExporter(**exporter_kwargs)` on the following line:
https://github.com/PrefectHQ/prefect/blob/master/src/prefect/tasks/jupyter/jupyter.py#L65
## Example use case
This allows you to exclude code cells, only showing their output, in the exported html document by passing the `exclude_input=True` argument to the exporter.
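For illustration, here is a small self-contained sketch of how a dict of exporter options (using the `exporter_kwargs` name from the suggestion above) could reach nbconvert's `HTMLExporter`; the notebook is built in memory only to keep the example runnable, and none of this is the task's current API:

```python
import nbformat
from nbconvert import HTMLExporter

# Hypothetical options a caller might want to forward to the exporter.
exporter_kwargs = {"exclude_input": True}

# Tiny in-memory notebook so the example stands alone.
nb = nbformat.v4.new_notebook(cells=[nbformat.v4.new_code_cell("print('hello')")])

# HTMLExporter accepts configuration such as exclude_input as keyword arguments,
# so unpacking a dict of options configures the export in one place.
html_exporter = HTMLExporter(**exporter_kwargs)
body, resources = html_exporter.from_notebook_node(nb)
print("rendered HTML length:", len(body))
```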
</issue>
<code>
[start of src/prefect/tasks/jupyter/jupyter.py]
1 import nbconvert
2 import nbformat
3 import papermill as pm
4
5 from prefect import Task
6 from prefect.utilities.tasks import defaults_from_attrs
7
8
9 class ExecuteNotebook(Task):
10 """
11 Task for running Jupyter Notebooks.
12 In order to parametrize the notebook, you need to mark the parameters cell as described in
13 the papermill documentation: https://papermill.readthedocs.io/en/latest/usage-parameterize.html
14
15 Args:
16 - path (string, optional): path to fetch the notebook from.
17 Can be a cloud storage path.
18 Can also be provided post-initialization by calling this task instance
19 - parameters (dict, optional): dictionary of parameters to use for the notebook
20 Can also be provided at runtime
21 - output_format (str, optional): Notebook output format.
22 Currently supported: json, html (default: json)
23 - kernel_name (string, optional): kernel name to run the notebook with.
24 If not provided, the default kernel will be used.
25 - **kwargs: additional keyword arguments to pass to the Task constructor
26 """
27
28 def __init__(
29 self,
30 path: str = None,
31 parameters: dict = None,
32 output_format: str = "json",
33 kernel_name: str = None,
34 **kwargs
35 ):
36 self.path = path
37 self.parameters = parameters
38 self.output_format = output_format
39 self.kernel_name = kernel_name
40 super().__init__(**kwargs)
41
42 @defaults_from_attrs("path", "parameters", "output_format")
43 def run(
44 self,
45 path: str = None,
46 parameters: dict = None,
47 output_format: str = None,
48 ) -> str:
49 """
50 Run a Jupyter notebook and output as HTML or JSON
51
52 Args:
53 - path (string, optional): path to fetch the notebook from; can also be
54 a cloud storage path
55 - parameters (dict, optional): dictionary of parameters to use for the notebook
56 - output_format (str, optional): Notebook output format.
57 Currently supported: json, html (default: json)
58 """
59 nb: nbformat.NotebookNode = pm.execute_notebook(
60 path, "-", parameters=parameters, kernel_name=self.kernel_name
61 )
62 if output_format == "json":
63 return nbformat.writes(nb)
64 if output_format == "html":
65 html_exporter = nbconvert.HTMLExporter()
66 (body, resources) = html_exporter.from_notebook_node(nb)
67 return body
68
69 raise NotImplementedError("Notebook output %s not supported", output_format)
70
[end of src/prefect/tasks/jupyter/jupyter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/tasks/jupyter/jupyter.py b/src/prefect/tasks/jupyter/jupyter.py
--- a/src/prefect/tasks/jupyter/jupyter.py
+++ b/src/prefect/tasks/jupyter/jupyter.py
@@ -18,8 +18,12 @@
Can also be provided post-initialization by calling this task instance
- parameters (dict, optional): dictionary of parameters to use for the notebook
Can also be provided at runtime
- - output_format (str, optional): Notebook output format.
- Currently supported: json, html (default: json)
+ - output_format (str, optional): Notebook output format, should be a valid
+ nbconvert Exporter name. 'json' is treated as 'notebook'.
+ Valid exporter names: asciidoc, custom, html, latex, markdown,
+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)
+ - exporter_kwargs (dict, optional): The arguments used for initializing
+ the exporter.
- kernel_name (string, optional): kernel name to run the notebook with.
If not provided, the default kernel will be used.
- **kwargs: additional keyword arguments to pass to the Task constructor
@@ -29,7 +33,8 @@
self,
path: str = None,
parameters: dict = None,
- output_format: str = "json",
+ output_format: str = "notebook",
+ exporter_kwargs: dict = None,
kernel_name: str = None,
**kwargs
):
@@ -37,33 +42,40 @@
self.parameters = parameters
self.output_format = output_format
self.kernel_name = kernel_name
+ self.exporter_kwargs = exporter_kwargs
super().__init__(**kwargs)
- @defaults_from_attrs("path", "parameters", "output_format")
+ @defaults_from_attrs("path", "parameters", "output_format", "exporter_kwargs")
def run(
self,
path: str = None,
parameters: dict = None,
output_format: str = None,
+ exporter_kwargs: dict = None,
) -> str:
"""
- Run a Jupyter notebook and output as HTML or JSON
+ Run a Jupyter notebook and output as HTML, notebook, or other formats.
Args:
- path (string, optional): path to fetch the notebook from; can also be
a cloud storage path
- parameters (dict, optional): dictionary of parameters to use for the notebook
- - output_format (str, optional): Notebook output format.
- Currently supported: json, html (default: json)
+ - output_format (str, optional): Notebook output format, should be a valid
+ nbconvert Exporter name. 'json' is treated as 'notebook'.
+ Valid exporter names: asciidoc, custom, html, latex, markdown,
+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)
+ - exporter_kwargs (dict, optional): The arguments used for initializing
+ the exporter.
"""
nb: nbformat.NotebookNode = pm.execute_notebook(
path, "-", parameters=parameters, kernel_name=self.kernel_name
)
if output_format == "json":
- return nbformat.writes(nb)
- if output_format == "html":
- html_exporter = nbconvert.HTMLExporter()
- (body, resources) = html_exporter.from_notebook_node(nb)
- return body
+ output_format = "notebook"
- raise NotImplementedError("Notebook output %s not supported", output_format)
+ if exporter_kwargs is None:
+ exporter_kwargs = {}
+
+ exporter = nbconvert.get_exporter(output_format)
+ body, resources = nbconvert.export(exporter, nb, **exporter_kwargs)
+ return body
| {"golden_diff": "diff --git a/src/prefect/tasks/jupyter/jupyter.py b/src/prefect/tasks/jupyter/jupyter.py\n--- a/src/prefect/tasks/jupyter/jupyter.py\n+++ b/src/prefect/tasks/jupyter/jupyter.py\n@@ -18,8 +18,12 @@\n Can also be provided post-initialization by calling this task instance\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n Can also be provided at runtime\n- - output_format (str, optional): Notebook output format.\n- Currently supported: json, html (default: json)\n+ - output_format (str, optional): Notebook output format, should be a valid\n+ nbconvert Exporter name. 'json' is treated as 'notebook'.\n+ Valid exporter names: asciidoc, custom, html, latex, markdown,\n+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)\n+ - exporter_kwargs (dict, optional): The arguments used for initializing\n+ the exporter.\n - kernel_name (string, optional): kernel name to run the notebook with.\n If not provided, the default kernel will be used.\n - **kwargs: additional keyword arguments to pass to the Task constructor\n@@ -29,7 +33,8 @@\n self,\n path: str = None,\n parameters: dict = None,\n- output_format: str = \"json\",\n+ output_format: str = \"notebook\",\n+ exporter_kwargs: dict = None,\n kernel_name: str = None,\n **kwargs\n ):\n@@ -37,33 +42,40 @@\n self.parameters = parameters\n self.output_format = output_format\n self.kernel_name = kernel_name\n+ self.exporter_kwargs = exporter_kwargs\n super().__init__(**kwargs)\n \n- @defaults_from_attrs(\"path\", \"parameters\", \"output_format\")\n+ @defaults_from_attrs(\"path\", \"parameters\", \"output_format\", \"exporter_kwargs\")\n def run(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = None,\n+ exporter_kwargs: dict = None,\n ) -> str:\n \"\"\"\n- Run a Jupyter notebook and output as HTML or JSON\n+ Run a Jupyter notebook and output as HTML, notebook, or other formats.\n \n Args:\n - path (string, optional): path to fetch the notebook from; can also be\n a cloud storage path\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n- - output_format (str, optional): Notebook output format.\n- Currently supported: json, html (default: json)\n+ - output_format (str, optional): Notebook output format, should be a valid\n+ nbconvert Exporter name. 'json' is treated as 'notebook'.\n+ Valid exporter names: asciidoc, custom, html, latex, markdown,\n+ notebook, pdf, python, rst, script, slides, webpdf. (default: notebook)\n+ - exporter_kwargs (dict, optional): The arguments used for initializing\n+ the exporter.\n \"\"\"\n nb: nbformat.NotebookNode = pm.execute_notebook(\n path, \"-\", parameters=parameters, kernel_name=self.kernel_name\n )\n if output_format == \"json\":\n- return nbformat.writes(nb)\n- if output_format == \"html\":\n- html_exporter = nbconvert.HTMLExporter()\n- (body, resources) = html_exporter.from_notebook_node(nb)\n- return body\n+ output_format = \"notebook\"\n \n- raise NotImplementedError(\"Notebook output %s not supported\", output_format)\n+ if exporter_kwargs is None:\n+ exporter_kwargs = {}\n+\n+ exporter = nbconvert.get_exporter(output_format)\n+ body, resources = nbconvert.export(exporter, nb, **exporter_kwargs)\n+ return body\n", "issue": "Allow exporter arguments in Jupyter ExecuteNotebook task\n## Current behavior\r\n\r\nWhen running the `jupyter.jupyter.ExecuteNotebook` task with `output_format='html'` the default settings for the HTMLExporter are used. 
There is no way to pass arguments to this exporter.\r\n\r\n## Proposed behavior\r\n\r\nAllow passing arguments to the HTMLExporter.\r\n\r\n## Implementation suggestion\r\n\r\nSomething like `html_exporter = nbconvert.HTMLExporter(**exporter_kwargs)` on the following line:\r\nhttps://github.com/PrefectHQ/prefect/blob/master/src/prefect/tasks/jupyter/jupyter.py#L65\r\n\r\n## Example usecase\r\n\r\nThis allows you to exclude code cells, only showing their output, in the exported html document by passing the `exclude_input=True` argument to the exporter.\n", "before_files": [{"content": "import nbconvert\nimport nbformat\nimport papermill as pm\n\nfrom prefect import Task\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass ExecuteNotebook(Task):\n \"\"\"\n Task for running Jupyter Notebooks.\n In order to parametrize the notebook, you need to mark the parameters cell as described in\n the papermill documentation: https://papermill.readthedocs.io/en/latest/usage-parameterize.html\n\n Args:\n - path (string, optional): path to fetch the notebook from.\n Can be a cloud storage path.\n Can also be provided post-initialization by calling this task instance\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n Can also be provided at runtime\n - output_format (str, optional): Notebook output format.\n Currently supported: json, html (default: json)\n - kernel_name (string, optional): kernel name to run the notebook with.\n If not provided, the default kernel will be used.\n - **kwargs: additional keyword arguments to pass to the Task constructor\n \"\"\"\n\n def __init__(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = \"json\",\n kernel_name: str = None,\n **kwargs\n ):\n self.path = path\n self.parameters = parameters\n self.output_format = output_format\n self.kernel_name = kernel_name\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"path\", \"parameters\", \"output_format\")\n def run(\n self,\n path: str = None,\n parameters: dict = None,\n output_format: str = None,\n ) -> str:\n \"\"\"\n Run a Jupyter notebook and output as HTML or JSON\n\n Args:\n - path (string, optional): path to fetch the notebook from; can also be\n a cloud storage path\n - parameters (dict, optional): dictionary of parameters to use for the notebook\n - output_format (str, optional): Notebook output format.\n Currently supported: json, html (default: json)\n \"\"\"\n nb: nbformat.NotebookNode = pm.execute_notebook(\n path, \"-\", parameters=parameters, kernel_name=self.kernel_name\n )\n if output_format == \"json\":\n return nbformat.writes(nb)\n if output_format == \"html\":\n html_exporter = nbconvert.HTMLExporter()\n (body, resources) = html_exporter.from_notebook_node(nb)\n return body\n\n raise NotImplementedError(\"Notebook output %s not supported\", output_format)\n", "path": "src/prefect/tasks/jupyter/jupyter.py"}]} | 1,382 | 858 |
gh_patches_debug_19122 | rasdani/github-patches | git_diff | aimhubio__aim-1917 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pytorch track_gradients_dists errors out if some parameters don't have gradients
## 🐛 Bug
When collecting gradients for each layer weight of a model, the function `get_model_layers` errors out if some model parameters don't have gradients.
### Expected behavior
Ignore weights if grad is None.
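As a purely illustrative sketch of that guard (a toy model in which one layer never receives a gradient; the layer names and printout are arbitrary, not Aim's code):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3), nn.Linear(3, 1))

# Only the second layer takes part in the loss, so the first layer's
# parameters keep grad=None after backward().
out = model[1](torch.randn(2, 3))
out.sum().backward()

for name, module in model.named_children():
    grad = getattr(module.weight, "grad", None)
    if grad is None:
        # The case that currently crashes: simply skip it.
        print(f"layer {name}: no gradient yet, skipping")
        continue
    print(f"layer {name}: grad shape {tuple(grad.shape)}")
```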
### Environment
- Aim Version (e.g., 3.11.1)
- Python version 3.10
- pip version 22.0
- Any OS
</issue>
<code>
[start of aim/sdk/adapters/pytorch.py]
1 def track_params_dists(model, run):
2 from aim import Distribution
3 data_hist = get_model_layers(model, 'data')
4
5 for name, params in data_hist.items():
6 if 'weight' in params:
7 run.track(
8 Distribution(params['weight']),
9 name=name,
10 context={
11 'type': 'data',
12 'params': 'weights',
13 }
14 )
15 if 'bias' in params:
16 run.track(
17 Distribution(params['bias']),
18 name=name,
19 context={
20 'type': 'data',
21 'params': 'biases',
22 }
23 )
24
25
26 def track_gradients_dists(model, run):
27 from aim import Distribution
28 grad_hist = get_model_layers(model, 'grad')
29
30 for name, params in grad_hist.items():
31 if 'weight' in params:
32 run.track(
33 Distribution(params['weight']),
34 name=name,
35 context={
36 'type': 'gradients',
37 'params': 'weights',
38 }
39 )
40 if 'bias' in params:
41 run.track(
42 Distribution(params['bias']),
43 name=name,
44 context={
45 'type': 'gradients',
46 'params': 'biases',
47 }
48 )
49
50
51 def get_model_layers(model, dt, parent_name=None):
52 layers = {}
53 for name, m in model.named_children():
54 layer_name = '{}__{}'.format(parent_name, name) \
55 if parent_name \
56 else name
57 layer_name += '.{}'.format(type(m).__name__)
58
59 if len(list(m.named_children())):
60 layers.update(get_model_layers(m, dt, layer_name))
61 else:
62 layers[layer_name] = {}
63 if hasattr(m, 'weight') \
64 and m.weight is not None \
65 and hasattr(m.weight, dt):
66 layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()
67
68 if hasattr(m, 'bias') \
69 and m.bias is not None \
70 and hasattr(m.bias, dt):
71 layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()
72
73 return layers
74
75
76 # Move tensor from GPU to CPU
77 def get_pt_tensor(t):
78 return t.cpu() if hasattr(t, 'is_cuda') and t.is_cuda else t
79
[end of aim/sdk/adapters/pytorch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/aim/sdk/adapters/pytorch.py b/aim/sdk/adapters/pytorch.py
--- a/aim/sdk/adapters/pytorch.py
+++ b/aim/sdk/adapters/pytorch.py
@@ -60,15 +60,17 @@
layers.update(get_model_layers(m, dt, layer_name))
else:
layers[layer_name] = {}
- if hasattr(m, 'weight') \
- and m.weight is not None \
- and hasattr(m.weight, dt):
- layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()
+ weight = None
+ if hasattr(m, 'weight') and m.weight is not None:
+ weight = getattr(m.weight, dt, None)
+ if weight is not None:
+ layers[layer_name]['weight'] = get_pt_tensor(weight).numpy()
- if hasattr(m, 'bias') \
- and m.bias is not None \
- and hasattr(m.bias, dt):
- layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()
+ bias = None
+ if hasattr(m, 'bias') and m.bias is not None:
+ bias = getattr(m.bias, dt, None)
+ if bias is not None:
+ layers[layer_name]['bias'] = get_pt_tensor(bias).numpy()
return layers
| {"golden_diff": "diff --git a/aim/sdk/adapters/pytorch.py b/aim/sdk/adapters/pytorch.py\n--- a/aim/sdk/adapters/pytorch.py\n+++ b/aim/sdk/adapters/pytorch.py\n@@ -60,15 +60,17 @@\n layers.update(get_model_layers(m, dt, layer_name))\n else:\n layers[layer_name] = {}\n- if hasattr(m, 'weight') \\\n- and m.weight is not None \\\n- and hasattr(m.weight, dt):\n- layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()\n+ weight = None\n+ if hasattr(m, 'weight') and m.weight is not None:\n+ weight = getattr(m.weight, dt, None)\n+ if weight is not None:\n+ layers[layer_name]['weight'] = get_pt_tensor(weight).numpy()\n \n- if hasattr(m, 'bias') \\\n- and m.bias is not None \\\n- and hasattr(m.bias, dt):\n- layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()\n+ bias = None\n+ if hasattr(m, 'bias') and m.bias is not None:\n+ bias = getattr(m.bias, dt, None)\n+ if bias is not None:\n+ layers[layer_name]['bias'] = get_pt_tensor(bias).numpy()\n \n return layers\n", "issue": "Pytorch track_gradients_dists errors out if some parameters don't have gradients\n## \ud83d\udc1b Bug\r\n\r\nWhen collecting gradients for each layer weight of a model, the function `get_model_layers` errors out if some model parameters don't have gradients.\r\n\r\n### Expected behavior\r\n\r\nIgnore weights if grad is None.\r\n\r\n### Environment\r\n\r\n- Aim Version (e.g., 3.11.1)\r\n- Python version 3.10\r\n- pip version 22.0\r\n- Any OS\r\n\r\n\n", "before_files": [{"content": "def track_params_dists(model, run):\n from aim import Distribution\n data_hist = get_model_layers(model, 'data')\n\n for name, params in data_hist.items():\n if 'weight' in params:\n run.track(\n Distribution(params['weight']),\n name=name,\n context={\n 'type': 'data',\n 'params': 'weights',\n }\n )\n if 'bias' in params:\n run.track(\n Distribution(params['bias']),\n name=name,\n context={\n 'type': 'data',\n 'params': 'biases',\n }\n )\n\n\ndef track_gradients_dists(model, run):\n from aim import Distribution\n grad_hist = get_model_layers(model, 'grad')\n\n for name, params in grad_hist.items():\n if 'weight' in params:\n run.track(\n Distribution(params['weight']),\n name=name,\n context={\n 'type': 'gradients',\n 'params': 'weights',\n }\n )\n if 'bias' in params:\n run.track(\n Distribution(params['bias']),\n name=name,\n context={\n 'type': 'gradients',\n 'params': 'biases',\n }\n )\n\n\ndef get_model_layers(model, dt, parent_name=None):\n layers = {}\n for name, m in model.named_children():\n layer_name = '{}__{}'.format(parent_name, name) \\\n if parent_name \\\n else name\n layer_name += '.{}'.format(type(m).__name__)\n\n if len(list(m.named_children())):\n layers.update(get_model_layers(m, dt, layer_name))\n else:\n layers[layer_name] = {}\n if hasattr(m, 'weight') \\\n and m.weight is not None \\\n and hasattr(m.weight, dt):\n layers[layer_name]['weight'] = get_pt_tensor(getattr(m.weight, dt)).numpy()\n\n if hasattr(m, 'bias') \\\n and m.bias is not None \\\n and hasattr(m.bias, dt):\n layers[layer_name]['bias'] = get_pt_tensor(getattr(m.bias, dt)).numpy()\n\n return layers\n\n\n# Move tensor from GPU to CPU\ndef get_pt_tensor(t):\n return t.cpu() if hasattr(t, 'is_cuda') and t.is_cuda else t\n", "path": "aim/sdk/adapters/pytorch.py"}]} | 1,275 | 303 |
gh_patches_debug_9161 | rasdani/github-patches | git_diff | ciudadanointeligente__votainteligente-portal-electoral-765 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Order proposals
By:
- [x] most recently created
- [x] created by an organization
- [x] most hearts
And by *default* it could be:
- Random
- By hearts, local meeting, whether it is an organization (see the sketch below).
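As a rough, framework-free sketch of what those orderings mean (plain Python only; the real change would live in the Django form/filter, and the attribute names below are illustrative stand-ins):

```python
from datetime import date

proposals = [
    {"title": "A", "hearts": 10, "is_organization": False, "created": date(2017, 5, 1)},
    {"title": "B", "hearts": 3, "is_organization": True, "created": date(2017, 6, 2)},
    {"title": "C", "hearts": 25, "is_organization": False, "created": date(2017, 4, 20)},
]

orderings = {
    "latest created": lambda p: p["created"],
    "organizations first": lambda p: p["is_organization"],
    "most hearts": lambda p: p["hearts"],
}

for label, key in orderings.items():
    ordered = sorted(proposals, key=key, reverse=True)
    print(label, "->", [p["title"] for p in ordered])
```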
</issue>
<code>
[start of popular_proposal/filters.py]
1 # coding=utf-8
2 from django_filters import (FilterSet,
3 ChoiceFilter,
4 ModelChoiceFilter,
5 )
6 from popular_proposal.models import PopularProposal
7 from popular_proposal.forms.form_texts import TOPIC_CHOICES
8 from elections.models import Area
9 from django.conf import settings
10 from constance import config
11 from django.forms import CharField, Form, ChoiceField
12 from haystack.query import SearchQuerySet
13
14
15 def filterable_areas(request):
16 if settings.FILTERABLE_AREAS_TYPE:
17 return Area.public.filter(classification__in=settings.FILTERABLE_AREAS_TYPE)
18 return Area.public.all()
19
20
21 class TextSearchForm(Form):
22     text = CharField(label=u'Qué buscas?', required=False)
23 order_by = ChoiceField(required=False,
24 label=u"Ordenar por",
25 choices=[('', u'Por apoyos'),
26                                     ('-created', u'Últimas primero'),
27 ])
28
29 def full_clean(self):
30 super(TextSearchForm, self).full_clean()
31 cleaned_data = {}
32 for k in self.cleaned_data:
33 v = self.cleaned_data.get(k, '')
34
35 if (isinstance(v, unicode) or isinstance(v, str)) and not v.strip():
36 cleaned_data[k] = None
37 self.cleaned_data.update(cleaned_data)
38
39
40 class ProposalWithoutAreaFilter(FilterSet):
41 clasification = ChoiceFilter(choices=TOPIC_CHOICES,
42 empty_label=u"Selecciona",
43                                  label=u"Clasificación")
44
45 def __init__(self,
46 data=None,
47 queryset=None,
48 prefix=None,
49 strict=None,
50 **kwargs):
51 self.area = kwargs.pop('area', None)
52 if self.area is None and data is not None:
53 self.area = data.get('area', None)
54 if self.area:
55 self.area = Area.objects.get(id=self.area)
56 if queryset is None:
57 queryset = PopularProposal.ordered.all()
58 if self.area is not None:
59 queryset = queryset.filter(area=self.area)
60 super(ProposalWithoutAreaFilter, self).__init__(data=data,
61 queryset=queryset,
62 prefix=prefix,
63 strict=strict)
64
65 @property
66 def form(self):
67 super(ProposalWithoutAreaFilter, self).form
68 is_filled_search = False
69 for k in self.data:
70 i = self.data[k]
71 is_filled_search = True
72 self._form.fields[k].initial = i
73 self._form.is_filled_search = is_filled_search
74 return self._form
75
76 @property
77 def qs(self):
78
79 super(ProposalWithoutAreaFilter, self).qs
80 self._qs = self._qs.exclude(area__id=config.HIDDEN_AREAS)
81 if not self.form.is_valid():
82 return self._qs
83 order_by = self.form.cleaned_data.get('order_by', None)
84 if order_by:
85 self._qs = self._qs.order_by(order_by)
86 else:
87 self._qs = self._qs.by_likers()
88 text = self.form.cleaned_data.get('text', '')
89
90 if text:
91 pks = []
92 text_search = SearchQuerySet().models(self._meta.model).auto_query(text)
93 for r in text_search:
94 pks.append(r.pk)
95 return self._qs.filter(id__in=pks)
96 return self._qs
97
98 class Meta:
99 model = PopularProposal
100 fields = ['clasification', ]
101 form = TextSearchForm
102
103
104 def possible_areas(request):
105 as_ = Area.public.all()
106 return as_
107
108
109 class ProposalWithAreaFilter(ProposalWithoutAreaFilter):
110 area = ModelChoiceFilter(queryset=possible_areas, label="Comuna donde fue generada")
111
112
113 class ProposalGeneratedAtFilter(ProposalWithoutAreaFilter):
114 generated_at = ModelChoiceFilter(queryset=filterable_areas,
115 empty_label=u"Selecciona",
116 label="Comuna donde fue generada")
117
[end of popular_proposal/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/popular_proposal/filters.py b/popular_proposal/filters.py
--- a/popular_proposal/filters.py
+++ b/popular_proposal/filters.py
@@ -24,6 +24,8 @@
label=u"Ordenar por",
choices=[('', u'Por apoyos'),
                                     ('-created', u'Últimas primero'),
+ ('-proposer__profile__is_organization', u'De organizaciones primero'),
+ ('-is_local_meeting', u'Encuentros locales primero'),
])
def full_clean(self):
| {"golden_diff": "diff --git a/popular_proposal/filters.py b/popular_proposal/filters.py\n--- a/popular_proposal/filters.py\n+++ b/popular_proposal/filters.py\n@@ -24,6 +24,8 @@\n label=u\"Ordenar por\",\n choices=[('', u'Por apoyos'),\n ('-created', u'\u00daltimas primero'),\n+ ('-proposer__profile__is_organization', u'De organizaciones primero'),\n+ ('-is_local_meeting', u'Encuentros locales primero'),\n ])\n \n def full_clean(self):\n", "issue": "Ordernar Propuestas\nPor:\r\n- [x] \u00faltimas creadas\r\n- [x] Creadas por organizaci\u00f3n\r\n- [x] Con m\u00e1s orazones.\r\n\r\nY por *defecto* puede ser:\r\n- Random\r\n- Por corazones, encuentro local, es organizaci\u00f3n.\n", "before_files": [{"content": "# coding=utf-8\nfrom django_filters import (FilterSet,\n ChoiceFilter,\n ModelChoiceFilter,\n )\nfrom popular_proposal.models import PopularProposal\nfrom popular_proposal.forms.form_texts import TOPIC_CHOICES\nfrom elections.models import Area\nfrom django.conf import settings\nfrom constance import config\nfrom django.forms import CharField, Form, ChoiceField\nfrom haystack.query import SearchQuerySet\n\n\ndef filterable_areas(request):\n if settings.FILTERABLE_AREAS_TYPE:\n return Area.public.filter(classification__in=settings.FILTERABLE_AREAS_TYPE)\n return Area.public.all()\n\n\nclass TextSearchForm(Form):\n text = CharField(label=u'Qu\u00e9 buscas?', required=False)\n order_by = ChoiceField(required=False,\n label=u\"Ordenar por\",\n choices=[('', u'Por apoyos'),\n ('-created', u'\u00daltimas primero'),\n ])\n\n def full_clean(self):\n super(TextSearchForm, self).full_clean()\n cleaned_data = {}\n for k in self.cleaned_data:\n v = self.cleaned_data.get(k, '')\n\n if (isinstance(v, unicode) or isinstance(v, str)) and not v.strip():\n cleaned_data[k] = None\n self.cleaned_data.update(cleaned_data)\n\n\nclass ProposalWithoutAreaFilter(FilterSet):\n clasification = ChoiceFilter(choices=TOPIC_CHOICES,\n empty_label=u\"Selecciona\",\n label=u\"Clasificaci\u00f3n\")\n\n def __init__(self,\n data=None,\n queryset=None,\n prefix=None,\n strict=None,\n **kwargs):\n self.area = kwargs.pop('area', None)\n if self.area is None and data is not None:\n self.area = data.get('area', None)\n if self.area:\n self.area = Area.objects.get(id=self.area)\n if queryset is None:\n queryset = PopularProposal.ordered.all()\n if self.area is not None:\n queryset = queryset.filter(area=self.area)\n super(ProposalWithoutAreaFilter, self).__init__(data=data,\n queryset=queryset,\n prefix=prefix,\n strict=strict)\n\n @property\n def form(self):\n super(ProposalWithoutAreaFilter, self).form\n is_filled_search = False\n for k in self.data:\n i = self.data[k]\n is_filled_search = True\n self._form.fields[k].initial = i\n self._form.is_filled_search = is_filled_search\n return self._form\n\n @property\n def qs(self):\n\n super(ProposalWithoutAreaFilter, self).qs\n self._qs = self._qs.exclude(area__id=config.HIDDEN_AREAS)\n if not self.form.is_valid():\n return self._qs\n order_by = self.form.cleaned_data.get('order_by', None)\n if order_by:\n self._qs = self._qs.order_by(order_by)\n else:\n self._qs = self._qs.by_likers()\n text = self.form.cleaned_data.get('text', '')\n\n if text:\n pks = []\n text_search = SearchQuerySet().models(self._meta.model).auto_query(text)\n for r in text_search:\n pks.append(r.pk)\n return self._qs.filter(id__in=pks)\n return self._qs\n\n class Meta:\n model = PopularProposal\n fields = ['clasification', ]\n form = TextSearchForm\n\n\ndef possible_areas(request):\n as_ = 
Area.public.all()\n return as_\n\n\nclass ProposalWithAreaFilter(ProposalWithoutAreaFilter):\n area = ModelChoiceFilter(queryset=possible_areas, label=\"Comuna donde fue generada\")\n\n\nclass ProposalGeneratedAtFilter(ProposalWithoutAreaFilter):\n generated_at = ModelChoiceFilter(queryset=filterable_areas,\n empty_label=u\"Selecciona\",\n label=\"Comuna donde fue generada\")\n", "path": "popular_proposal/filters.py"}]} | 1,671 | 129 |
gh_patches_debug_5999 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-4515 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Homebase spider webpage regex is too restrictive
The homebase_gb_ie.py spider contains a regex in sitemap_rules to restrict things to store pages:
`sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$", "parse_sd")]`
This regex is slightly too strict, as there's a store with a "." in the place level: https://store.homebase.co.uk/st.-albans/the-courtyard-alban-park , which is currently not returned.
To include this store, the regex should presumably be changed to
`sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-.\w]+\/[-.\w]+$", "parse_sd")]`
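A quick standalone check of both patterns against the St. Albans URL (plain `re`, nothing spider-specific) shows the difference:

```python
import re

url = "https://store.homebase.co.uk/st.-albans/the-courtyard-alban-park"

strict = r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$"
relaxed = r"https:\/\/store\.homebase\.co\.uk\/[-.\w]+\/[-.\w]+$"

print(bool(re.match(strict, url)))   # False: '.' in 'st.-albans' is not allowed
print(bool(re.match(relaxed, url)))  # True: the place segment may contain '.'
```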
</issue>
<code>
[start of locations/spiders/homebase_gb_ie.py]
1 from scrapy.spiders import SitemapSpider
2
3 from locations.structured_data_spider import StructuredDataSpider
4
5
6 class HomebaseGBIESpider(SitemapSpider, StructuredDataSpider):
7 name = "homebase_gb_ie"
8 item_attributes = {"brand": "Homebase", "brand_wikidata": "Q9293447"}
9 sitemap_urls = ["https://store.homebase.co.uk/robots.txt"]
10 sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$", "parse_sd")]
11 skip_auto_cc = True
12
[end of locations/spiders/homebase_gb_ie.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/homebase_gb_ie.py b/locations/spiders/homebase_gb_ie.py
--- a/locations/spiders/homebase_gb_ie.py
+++ b/locations/spiders/homebase_gb_ie.py
@@ -7,5 +7,5 @@
name = "homebase_gb_ie"
item_attributes = {"brand": "Homebase", "brand_wikidata": "Q9293447"}
sitemap_urls = ["https://store.homebase.co.uk/robots.txt"]
- sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-\w]+\/[-.\w]+$", "parse_sd")]
+ sitemap_rules = [(r"https:\/\/store\.homebase\.co\.uk\/[-.\w]+\/[-.\w]+$", "parse_sd")]
skip_auto_cc = True
| {"golden_diff": "diff --git a/locations/spiders/homebase_gb_ie.py b/locations/spiders/homebase_gb_ie.py\n--- a/locations/spiders/homebase_gb_ie.py\n+++ b/locations/spiders/homebase_gb_ie.py\n@@ -7,5 +7,5 @@\n name = \"homebase_gb_ie\"\n item_attributes = {\"brand\": \"Homebase\", \"brand_wikidata\": \"Q9293447\"}\n sitemap_urls = [\"https://store.homebase.co.uk/robots.txt\"]\n- sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-\\w]+\\/[-.\\w]+$\", \"parse_sd\")]\n+ sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-.\\w]+\\/[-.\\w]+$\", \"parse_sd\")]\n skip_auto_cc = True\n", "issue": "Homebase spider webpage regex is too restrictive\nThe homebase_gb_ie.py spider contains a regex in sitemap_rules to restrict things to store pages:\r\n`sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-\\w]+\\/[-.\\w]+$\", \"parse_sd\")]`\r\n\r\nThis regex is slightly too strict, as there's a store with a \".\" in the place level: https://store.homebase.co.uk/st.-albans/the-courtyard-alban-park , which is currently not returned.\r\n\r\nTo include this store, the regex should presumably be changed to\r\n`sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-.\\w]+\\/[-.\\w]+$\", \"parse_sd\")]`\n", "before_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass HomebaseGBIESpider(SitemapSpider, StructuredDataSpider):\n name = \"homebase_gb_ie\"\n item_attributes = {\"brand\": \"Homebase\", \"brand_wikidata\": \"Q9293447\"}\n sitemap_urls = [\"https://store.homebase.co.uk/robots.txt\"]\n sitemap_rules = [(r\"https:\\/\\/store\\.homebase\\.co\\.uk\\/[-\\w]+\\/[-.\\w]+$\", \"parse_sd\")]\n skip_auto_cc = True\n", "path": "locations/spiders/homebase_gb_ie.py"}]} | 845 | 185 |
gh_patches_debug_1557 | rasdani/github-patches | git_diff | WordPress__openverse-api-637 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Return secure URLs for the fields thumbnail, detail_url and related_url.
_(Framed the verbiage of the title as a feature request)_ 🙏
## Problem
The response for search and detail requests includes insecure URLs (`http` instead of `https`) in the fields `thumbnail`, `detail_url` and `related_url`.

e.g.:
**Search**
https://api.openverse.engineering/v1/images/?q=flower
**Detail:**
https://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/
## Description
When trying to integrate Openverse with some code in the browser, I ended up having to replace the scheme part of the URL to avoid notices like `xxxx was loaded over HTTPS, but requested an insecure resource 'http://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/'. This request has been blocked; the content must be served over HTTPS.`
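One easy way this kind of scheme rewriting can silently fail (whether that is the cause here is only a guess from reading the serializer below) is that Python strings are immutable, so `re.sub` returns a new string that has to be kept:

```python
import re

url = "http://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/"

re.sub(r"^\w+://", "https://", url, 1)        # return value dropped: url is unchanged
print(url)                                    # still http://...

url = re.sub(r"^\w+://", "https://", url, 1)  # keeping the result rewrites the scheme
print(url)                                    # https://...
```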
</issue>
<code>
[start of api/catalog/api/serializers/base.py]
1 import re
2
3 from django.conf import settings
4 from rest_framework import serializers
5
6
7 class SchemableHyperlinkedIdentityField(serializers.HyperlinkedIdentityField):
8 """
9 This field returns the link but allows the option to replace the URL scheme.
10 """
11
12 def __init__(self, scheme=settings.API_LINK_SCHEME, *args, **kwargs):
13 super().__init__(*args, **kwargs)
14
15 self.scheme = scheme
16
17 def get_url(self, *args, **kwargs):
18 url = super().get_url(*args, **kwargs)
19
20 # Only rewrite URLs if a fixed scheme is provided
21 if self.scheme is not None:
22 re.sub(r"^\w+://", f"{self.scheme}://", url, 1)
23
24 return url
25
[end of api/catalog/api/serializers/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api/catalog/api/serializers/base.py b/api/catalog/api/serializers/base.py
--- a/api/catalog/api/serializers/base.py
+++ b/api/catalog/api/serializers/base.py
@@ -19,6 +19,6 @@
# Only rewrite URLs if a fixed scheme is provided
if self.scheme is not None:
- re.sub(r"^\w+://", f"{self.scheme}://", url, 1)
+ url = re.sub(r"^\w+://", f"{self.scheme}://", url, 1)
return url
| {"golden_diff": "diff --git a/api/catalog/api/serializers/base.py b/api/catalog/api/serializers/base.py\n--- a/api/catalog/api/serializers/base.py\n+++ b/api/catalog/api/serializers/base.py\n@@ -19,6 +19,6 @@\n \n # Only rewrite URLs if a fixed scheme is provided\n if self.scheme is not None:\n- re.sub(r\"^\\w+://\", f\"{self.scheme}://\", url, 1)\n+ url = re.sub(r\"^\\w+://\", f\"{self.scheme}://\", url, 1)\n \n return url\n", "issue": "Return secure URLs for the fields thumbnail, detail_url and related_url.\n_(Framed the verbiage of the title as a feature request)_ \ud83d\ude4f \r\n\r\n## Problem\r\n\r\nThe response for search and detail requests includes insecure URLs (`http` instead of `https`) in the fields `thumbnail`, `detail_url` and `related_url`.\r\n\r\n\r\n\r\n\r\ne.g.:\r\n\r\n**Search**\r\n\r\nhttps://api.openverse.engineering/v1/images/?q=flower\r\n\r\n**Detail:**\r\n\r\nhttps://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/\r\n\r\n## Description\r\n\r\nWhen trying to integrate Openverse with some code on the browser I ended up having to replace the scheme part of the URL for avoiding notices like ```xxxx was loaded over HTTPS, but requested an insecure resource 'http://api.openverse.engineering/v1/images/6c1769b0-a3e5-4dae-8a36-8531a6e1430f/'. This request has been blocked; the content must be served over HTTPS.`\r\n \r\n\n", "before_files": [{"content": "import re\n\nfrom django.conf import settings\nfrom rest_framework import serializers\n\n\nclass SchemableHyperlinkedIdentityField(serializers.HyperlinkedIdentityField):\n \"\"\"\n This field returns the link but allows the option to replace the URL scheme.\n \"\"\"\n\n def __init__(self, scheme=settings.API_LINK_SCHEME, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.scheme = scheme\n\n def get_url(self, *args, **kwargs):\n url = super().get_url(*args, **kwargs)\n\n # Only rewrite URLs if a fixed scheme is provided\n if self.scheme is not None:\n re.sub(r\"^\\w+://\", f\"{self.scheme}://\", url, 1)\n\n return url\n", "path": "api/catalog/api/serializers/base.py"}]} | 1,060 | 131 |
gh_patches_debug_57595 | rasdani/github-patches | git_diff | joke2k__faker-704 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: module 'faker.providers' has no attribute '__file__'
I converted my Python code to an .exe using cx_Freeze. When I open the .exe file, I get this error:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\cx_Freeze\initscripts\__startup__.py", line 14, in run
module.run()
File "C:\Program Files\Python36\lib\site-packages\cx_Freeze\initscripts\Console.py", line 26, in run
exec(code, m.__dict__)
File "DataGenerator.py", line 7, in <module>
File "C:\Program Files\Python36\lib\site-packages\faker\__init__.py", line 4, in <module>
from faker.factory import Factory
File "C:\Program Files\Python36\lib\site-packages\faker\factory.py", line 10, in <module>
from faker.config import DEFAULT_LOCALE, PROVIDERS, AVAILABLE_LOCALES
File "C:\Program Files\Python36\lib\site-packages\faker\config.py", line 11, in <module>
PROVIDERS = find_available_providers([import_module(path) for path in META_PROVIDERS_MODULES])
File "C:\Program Files\Python36\lib\site-packages\faker\utils\loading.py", line 29, in find_available_providers
providers = ['.'.join([providers_mod.__package__, mod]) for mod in list_module(providers_mod)]
File "C:\Program Files\Python36\lib\site-packages\faker\utils\loading.py", line 7, in list_module
path = os.path.dirname(module.__file__)
AttributeError: module 'faker.providers' has no attribute '__file__'
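For illustration, the mapping a frozen build would need, from a dotted package name to a directory next to the executable, can be sketched independently of faker (the `lib` sub-folder is an assumption about how the frozen app is laid out):

```python
import os

def frozen_package_dir(base_dir, dotted_name):
    """Map a dotted package name to a folder under <base_dir>/lib (illustrative layout only)."""
    return os.path.join(base_dir, "lib", *dotted_name.split("."))

# e.g. on Windows: C:\app\lib\faker\providers
print(frozen_package_dir(r"C:\app", "faker.providers"))
```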
</issue>
<code>
[start of faker/utils/loading.py]
1 import os
2 import sys
3 from importlib import import_module
4 import pkgutil
5
6
7 def get_path(module):
8 if getattr(sys, 'frozen', False):
9 # frozen
10 path = os.path.dirname(sys.executable)
11 else:
12 # unfrozen
13 path = os.path.dirname(os.path.realpath(module.__file__))
14 return path
15
16
17 def list_module(module):
18 path = get_path(module)
19 modules = [name for finder, name,
20 is_pkg in pkgutil.iter_modules([path]) if is_pkg]
21 return modules
22
23
24 def find_available_locales(providers):
25 available_locales = set()
26
27 for provider_path in providers:
28
29 provider_module = import_module(provider_path)
30 if getattr(provider_module, 'localized', False):
31 langs = list_module(provider_module)
32 available_locales.update(langs)
33 return available_locales
34
35
36 def find_available_providers(modules):
37 available_providers = set()
38 for providers_mod in modules:
39 providers = [
40 '.'.join([providers_mod.__package__, mod])
41 for mod in list_module(providers_mod) if mod != '__pycache__'
42 ]
43 available_providers.update(providers)
44 return sorted(available_providers)
45
[end of faker/utils/loading.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/utils/loading.py b/faker/utils/loading.py
--- a/faker/utils/loading.py
+++ b/faker/utils/loading.py
@@ -7,7 +7,10 @@
def get_path(module):
if getattr(sys, 'frozen', False):
# frozen
- path = os.path.dirname(sys.executable)
+ base_dir = os.path.dirname(sys.executable)
+ lib_dir = os.path.join(base_dir, "lib")
+ module_to_rel_path = os.path.join(*module.__package__.split("."))
+ path = os.path.join(lib_dir, module_to_rel_path)
else:
# unfrozen
path = os.path.dirname(os.path.realpath(module.__file__))
| {"golden_diff": "diff --git a/faker/utils/loading.py b/faker/utils/loading.py\n--- a/faker/utils/loading.py\n+++ b/faker/utils/loading.py\n@@ -7,7 +7,10 @@\n def get_path(module):\n if getattr(sys, 'frozen', False):\n # frozen\n- path = os.path.dirname(sys.executable)\n+ base_dir = os.path.dirname(sys.executable)\n+ lib_dir = os.path.join(base_dir, \"lib\")\n+ module_to_rel_path = os.path.join(*module.__package__.split(\".\"))\n+ path = os.path.join(lib_dir, module_to_rel_path)\n else:\n # unfrozen\n path = os.path.dirname(os.path.realpath(module.__file__))\n", "issue": "AttributeError: module 'faker.providers' has no attribute '__file__'\nI converted my python code to .exe using cx_Freeze. While opening my .exe file I am getting this error.\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\cx_Freeze\\initscripts\\__startup__.py\", line 14, in run\r\n module.run()\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\cx_Freeze\\initscripts\\Console.py\", line 26, in run\r\n exec(code, m.__dict__)\r\n File \"DataGenerator.py\", line 7, in <module>\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\__init__.py\", line 4, in <module>\r\n from faker.factory import Factory\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\factory.py\", line 10, in <module>\r\n from faker.config import DEFAULT_LOCALE, PROVIDERS, AVAILABLE_LOCALES\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\config.py\", line 11, in <module>\r\n PROVIDERS = find_available_providers([import_module(path) for path in META_PROVIDERS_MODULES])\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\utils\\loading.py\", line 29, in find_available_providers\r\n providers = ['.'.join([providers_mod.__package__, mod]) for mod in list_module(providers_mod)]\r\n File \"C:\\Program Files\\Python36\\lib\\site-packages\\faker\\utils\\loading.py\", line 7, in list_module\r\n path = os.path.dirname(module.__file__)\r\nAttributeError: module 'faker.providers' has no attribute '__file__'\n", "before_files": [{"content": "import os\nimport sys\nfrom importlib import import_module\nimport pkgutil\n\n\ndef get_path(module):\n if getattr(sys, 'frozen', False):\n # frozen\n path = os.path.dirname(sys.executable)\n else:\n # unfrozen\n path = os.path.dirname(os.path.realpath(module.__file__))\n return path\n\n\ndef list_module(module):\n path = get_path(module)\n modules = [name for finder, name,\n is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n return modules\n\n\ndef find_available_locales(providers):\n available_locales = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n return available_locales\n\n\ndef find_available_providers(modules):\n available_providers = set()\n for providers_mod in modules:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}]} | 1,280 | 155 |
gh_patches_debug_8619 | rasdani/github-patches | git_diff | open-mmlab__mmdetection3d-647 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in indoor_converter
If `pkl_prefix == 'sunrgbd'` we fall into this [else](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/indoor_converter.py#L89) branch, which is meant for `s3dis`, and get a `FileNotFoundError`.
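Reading the converter below, every prefix other than `'scannet'` reaches that final `else`, including `'sunrgbd'`. A sketch of the presumably intended branching (an assumption about intent, not the project's actual patch) would gate the S3DIS-only work explicitly:

```python
def seg_info_step(pkl_prefix):
    # Illustrative control flow only; the real function takes paths and datasets.
    if pkl_prefix == 'scannet':
        return 'generate ScanNet seg infos'
    elif pkl_prefix == 's3dis':
        return 'generate per-area S3DIS infos'
    return 'nothing to do'  # e.g. sunrgbd would no longer hit the S3DIS branch

for prefix in ('sunrgbd', 'scannet', 's3dis'):
    print(prefix, '->', seg_info_step(prefix))
```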
</issue>
<code>
[start of tools/data_converter/indoor_converter.py]
1 import mmcv
2 import numpy as np
3 import os
4
5 from tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData
6 from tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData
7 from tools.data_converter.sunrgbd_data_utils import SUNRGBDData
8
9
10 def create_indoor_info_file(data_path,
11 pkl_prefix='sunrgbd',
12 save_path=None,
13 use_v1=False,
14 workers=4):
15 """Create indoor information file.
16
17 Get information of the raw data and save it to the pkl file.
18
19 Args:
20 data_path (str): Path of the data.
21 pkl_prefix (str): Prefix of the pkl to be saved. Default: 'sunrgbd'.
22 save_path (str): Path of the pkl to be saved. Default: None.
23 use_v1 (bool): Whether to use v1. Default: False.
24 workers (int): Number of threads to be used. Default: 4.
25 """
26 assert os.path.exists(data_path)
27 assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis'], \
28 f'unsupported indoor dataset {pkl_prefix}'
29 save_path = data_path if save_path is None else save_path
30 assert os.path.exists(save_path)
31
32 # generate infos for both detection and segmentation task
33 if pkl_prefix in ['sunrgbd', 'scannet']:
34 train_filename = os.path.join(save_path,
35 f'{pkl_prefix}_infos_train.pkl')
36 val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl')
37 if pkl_prefix == 'sunrgbd':
38 # SUN RGB-D has a train-val split
39 train_dataset = SUNRGBDData(
40 root_path=data_path, split='train', use_v1=use_v1)
41 val_dataset = SUNRGBDData(
42 root_path=data_path, split='val', use_v1=use_v1)
43 else:
44 # ScanNet has a train-val-test split
45 train_dataset = ScanNetData(root_path=data_path, split='train')
46 val_dataset = ScanNetData(root_path=data_path, split='val')
47 test_dataset = ScanNetData(root_path=data_path, split='test')
48 test_filename = os.path.join(save_path,
49 f'{pkl_prefix}_infos_test.pkl')
50
51 infos_train = train_dataset.get_infos(
52 num_workers=workers, has_label=True)
53 mmcv.dump(infos_train, train_filename, 'pkl')
54 print(f'{pkl_prefix} info train file is saved to {train_filename}')
55
56 infos_val = val_dataset.get_infos(num_workers=workers, has_label=True)
57 mmcv.dump(infos_val, val_filename, 'pkl')
58 print(f'{pkl_prefix} info val file is saved to {val_filename}')
59
60 if pkl_prefix == 'scannet':
61 infos_test = test_dataset.get_infos(
62 num_workers=workers, has_label=False)
63 mmcv.dump(infos_test, test_filename, 'pkl')
64 print(f'{pkl_prefix} info test file is saved to {test_filename}')
65
66 # generate infos for the semantic segmentation task
67 # e.g. re-sampled scene indexes and label weights
68 # scene indexes are used to re-sample rooms with different number of points
69 # label weights are used to balance classes with different number of points
70 if pkl_prefix == 'scannet':
71 # label weight computation function is adopted from
72 # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24
73 train_dataset = ScanNetSegData(
74 data_root=data_path,
75 ann_file=train_filename,
76 split='train',
77 num_points=8192,
78 label_weight_func=lambda x: 1.0 / np.log(1.2 + x))
79 # TODO: do we need to generate on val set?
80 val_dataset = ScanNetSegData(
81 data_root=data_path,
82 ann_file=val_filename,
83 split='val',
84 num_points=8192,
85 label_weight_func=lambda x: 1.0 / np.log(1.2 + x))
86 # no need to generate for test set
87 train_dataset.get_seg_infos()
88 val_dataset.get_seg_infos()
89 else:
90 # S3DIS doesn't have a fixed train-val split
91 # it has 6 areas instead, so we generate info file for each of them
92 # in training, we will use dataset to wrap different areas
93 splits = [f'Area_{i}' for i in [1, 2, 3, 4, 5, 6]]
94 for split in splits:
95 dataset = S3DISData(root_path=data_path, split=split)
96 info = dataset.get_infos(num_workers=workers, has_label=True)
97 filename = os.path.join(save_path,
98 f'{pkl_prefix}_infos_{split}.pkl')
99 mmcv.dump(info, filename, 'pkl')
100 print(f'{pkl_prefix} info {split} file is saved to {filename}')
101 seg_dataset = S3DISSegData(
102 data_root=data_path,
103 ann_file=filename,
104 split=split,
105 num_points=4096,
106 label_weight_func=lambda x: 1.0 / np.log(1.2 + x))
107 seg_dataset.get_seg_infos()
108
[end of tools/data_converter/indoor_converter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/data_converter/indoor_converter.py b/tools/data_converter/indoor_converter.py
--- a/tools/data_converter/indoor_converter.py
+++ b/tools/data_converter/indoor_converter.py
@@ -86,7 +86,7 @@
# no need to generate for test set
train_dataset.get_seg_infos()
val_dataset.get_seg_infos()
- else:
+ elif pkl_prefix == 's3dis':
# S3DIS doesn't have a fixed train-val split
# it has 6 areas instead, so we generate info file for each of them
# in training, we will use dataset to wrap different areas
| {"golden_diff": "diff --git a/tools/data_converter/indoor_converter.py b/tools/data_converter/indoor_converter.py\n--- a/tools/data_converter/indoor_converter.py\n+++ b/tools/data_converter/indoor_converter.py\n@@ -86,7 +86,7 @@\n # no need to generate for test set\n train_dataset.get_seg_infos()\n val_dataset.get_seg_infos()\n- else:\n+ elif pkl_prefix == 's3dis':\n # S3DIS doesn't have a fixed train-val split\n # it has 6 areas instead, so we generate info file for each of them\n # in training, we will use dataset to wrap different areas\n", "issue": "Bug in indoor_converter\nIf `pkl_prefix=='sunrgbd'` we go to this [else](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/indoor_converter.py#L89) for `s3dis` and get `FileNotFoundError`.\n", "before_files": [{"content": "import mmcv\nimport numpy as np\nimport os\n\nfrom tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData\nfrom tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData\nfrom tools.data_converter.sunrgbd_data_utils import SUNRGBDData\n\n\ndef create_indoor_info_file(data_path,\n pkl_prefix='sunrgbd',\n save_path=None,\n use_v1=False,\n workers=4):\n \"\"\"Create indoor information file.\n\n Get information of the raw data and save it to the pkl file.\n\n Args:\n data_path (str): Path of the data.\n pkl_prefix (str): Prefix of the pkl to be saved. Default: 'sunrgbd'.\n save_path (str): Path of the pkl to be saved. Default: None.\n use_v1 (bool): Whether to use v1. Default: False.\n workers (int): Number of threads to be used. Default: 4.\n \"\"\"\n assert os.path.exists(data_path)\n assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis'], \\\n f'unsupported indoor dataset {pkl_prefix}'\n save_path = data_path if save_path is None else save_path\n assert os.path.exists(save_path)\n\n # generate infos for both detection and segmentation task\n if pkl_prefix in ['sunrgbd', 'scannet']:\n train_filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_train.pkl')\n val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl')\n if pkl_prefix == 'sunrgbd':\n # SUN RGB-D has a train-val split\n train_dataset = SUNRGBDData(\n root_path=data_path, split='train', use_v1=use_v1)\n val_dataset = SUNRGBDData(\n root_path=data_path, split='val', use_v1=use_v1)\n else:\n # ScanNet has a train-val-test split\n train_dataset = ScanNetData(root_path=data_path, split='train')\n val_dataset = ScanNetData(root_path=data_path, split='val')\n test_dataset = ScanNetData(root_path=data_path, split='test')\n test_filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_test.pkl')\n\n infos_train = train_dataset.get_infos(\n num_workers=workers, has_label=True)\n mmcv.dump(infos_train, train_filename, 'pkl')\n print(f'{pkl_prefix} info train file is saved to {train_filename}')\n\n infos_val = val_dataset.get_infos(num_workers=workers, has_label=True)\n mmcv.dump(infos_val, val_filename, 'pkl')\n print(f'{pkl_prefix} info val file is saved to {val_filename}')\n\n if pkl_prefix == 'scannet':\n infos_test = test_dataset.get_infos(\n num_workers=workers, has_label=False)\n mmcv.dump(infos_test, test_filename, 'pkl')\n print(f'{pkl_prefix} info test file is saved to {test_filename}')\n\n # generate infos for the semantic segmentation task\n # e.g. 
re-sampled scene indexes and label weights\n # scene indexes are used to re-sample rooms with different number of points\n # label weights are used to balance classes with different number of points\n if pkl_prefix == 'scannet':\n # label weight computation function is adopted from\n # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24\n train_dataset = ScanNetSegData(\n data_root=data_path,\n ann_file=train_filename,\n split='train',\n num_points=8192,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n # TODO: do we need to generate on val set?\n val_dataset = ScanNetSegData(\n data_root=data_path,\n ann_file=val_filename,\n split='val',\n num_points=8192,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n # no need to generate for test set\n train_dataset.get_seg_infos()\n val_dataset.get_seg_infos()\n else:\n # S3DIS doesn't have a fixed train-val split\n # it has 6 areas instead, so we generate info file for each of them\n # in training, we will use dataset to wrap different areas\n splits = [f'Area_{i}' for i in [1, 2, 3, 4, 5, 6]]\n for split in splits:\n dataset = S3DISData(root_path=data_path, split=split)\n info = dataset.get_infos(num_workers=workers, has_label=True)\n filename = os.path.join(save_path,\n f'{pkl_prefix}_infos_{split}.pkl')\n mmcv.dump(info, filename, 'pkl')\n print(f'{pkl_prefix} info {split} file is saved to {filename}')\n seg_dataset = S3DISSegData(\n data_root=data_path,\n ann_file=filename,\n split=split,\n num_points=4096,\n label_weight_func=lambda x: 1.0 / np.log(1.2 + x))\n seg_dataset.get_seg_infos()\n", "path": "tools/data_converter/indoor_converter.py"}]} | 1,997 | 144 |
gh_patches_debug_36598 | rasdani/github-patches | git_diff | getredash__redash-1944 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redash Permissions not working for some use cases
### Issue Summary
Currently, when a query's owner grants another user permission on that query, the user is still unable to perform the following tasks:
* change data source
* schedule the query
* add and save new visualisation
I believe the user should have the ability to do all the things that the owner could do once permission has been granted.
### Technical details:
* Redash Version: 1.0.3
* Browser/OS: Chrome
* How did you install Redash: AWS using the AMI
</issue>
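To state the expected behaviour precisely, here is a small self-contained sketch of the access rule the reporter describes: a user may modify a query if they are an admin, the owner, or have been granted modify access on that object. The dictionaries and the `grants` mapping below are illustrative stand-ins, not Redash's actual data model; the repository's own `can_modify` helper in `redash/permissions.py` (shown further down) expresses the same rule.

```python
# Toy model of "admin OR owner OR explicitly granted modify access".
# All structures here are illustrative; they only demonstrate the rule.
def can_modify(user, obj, grants):
    """grants maps (object_id, user_id) to a set of access types, e.g. {'modify'}."""
    if 'admin' in user['permissions']:
        return True
    if obj['user_id'] == user['id']:
        return True
    return 'modify' in grants.get((obj['id'], user['id']), set())


if __name__ == '__main__':
    owner = {'id': 1, 'permissions': []}
    invitee = {'id': 2, 'permissions': []}
    stranger = {'id': 3, 'permissions': []}
    query = {'id': 10, 'user_id': 1}
    grants = {(10, 2): {'modify'}}

    assert can_modify(owner, query, grants)         # owner keeps full rights
    assert can_modify(invitee, query, grants)       # granted user can now edit too
    assert not can_modify(stranger, query, grants)  # everyone else is still blocked
    print('permission rule behaves as described in the issue')
```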
<code>
[start of redash/handlers/visualizations.py]
1 import json
2 from flask import request
3
4 from redash import models
5 from redash.permissions import require_permission, require_admin_or_owner
6 from redash.handlers.base import BaseResource, get_object_or_404
7
8
9 class VisualizationListResource(BaseResource):
10 @require_permission('edit_query')
11 def post(self):
12 kwargs = request.get_json(force=True)
13
14 query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)
15 require_admin_or_owner(query.user_id)
16
17 kwargs['options'] = json.dumps(kwargs['options'])
18 kwargs['query_rel'] = query
19
20 vis = models.Visualization(**kwargs)
21 models.db.session.add(vis)
22 models.db.session.commit()
23 d = vis.to_dict(with_query=False)
24 return d
25
26
27 class VisualizationResource(BaseResource):
28 @require_permission('edit_query')
29 def post(self, visualization_id):
30 vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
31 require_admin_or_owner(vis.query_rel.user_id)
32
33 kwargs = request.get_json(force=True)
34 if 'options' in kwargs:
35 kwargs['options'] = json.dumps(kwargs['options'])
36
37 kwargs.pop('id', None)
38 kwargs.pop('query_id', None)
39
40 self.update_model(vis, kwargs)
41 d = vis.to_dict(with_query=False)
42 models.db.session.commit()
43 return d
44
45 @require_permission('edit_query')
46 def delete(self, visualization_id):
47 vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
48 require_admin_or_owner(vis.query_rel.user_id)
49 models.db.session.delete(vis)
50 models.db.session.commit()
51
[end of redash/handlers/visualizations.py]
[start of redash/permissions.py]
1 from flask_login import current_user
2 from flask_restful import abort
3 import functools
4 from funcy import flatten
5
6 view_only = True
7 not_view_only = False
8
9 ACCESS_TYPE_VIEW = 'view'
10 ACCESS_TYPE_MODIFY = 'modify'
11 ACCESS_TYPE_DELETE = 'delete'
12
13 ACCESS_TYPES = (ACCESS_TYPE_VIEW, ACCESS_TYPE_MODIFY, ACCESS_TYPE_DELETE)
14
15
16 def has_access(object_groups, user, need_view_only):
17 if 'admin' in user.permissions:
18 return True
19
20 matching_groups = set(object_groups.keys()).intersection(user.group_ids)
21
22 if not matching_groups:
23 return False
24
25 required_level = 1 if need_view_only else 2
26
27 group_level = 1 if all(flatten([object_groups[group] for group in matching_groups])) else 2
28
29 return required_level <= group_level
30
31
32 def require_access(object_groups, user, need_view_only):
33 if not has_access(object_groups, user, need_view_only):
34 abort(403)
35
36
37 class require_permissions(object):
38 def __init__(self, permissions):
39 self.permissions = permissions
40
41 def __call__(self, fn):
42 @functools.wraps(fn)
43 def decorated(*args, **kwargs):
44 has_permissions = current_user.has_permissions(self.permissions)
45
46 if has_permissions:
47 return fn(*args, **kwargs)
48 else:
49 abort(403)
50
51 return decorated
52
53
54 def require_permission(permission):
55 return require_permissions((permission,))
56
57
58 def require_admin(fn):
59 return require_permission('admin')(fn)
60
61
62 def require_super_admin(fn):
63 return require_permission('super_admin')(fn)
64
65
66 def has_permission_or_owner(permission, object_owner_id):
67 return int(object_owner_id) == current_user.id or current_user.has_permission(permission)
68
69
70 def is_admin_or_owner(object_owner_id):
71 return has_permission_or_owner('admin', object_owner_id)
72
73
74 def require_permission_or_owner(permission, object_owner_id):
75 if not has_permission_or_owner(permission, object_owner_id):
76 abort(403)
77
78
79 def require_admin_or_owner(object_owner_id):
80 if not is_admin_or_owner(object_owner_id):
81 abort(403, message="You don't have permission to edit this resource.")
82
83
84 def can_modify(obj, user):
85 return is_admin_or_owner(obj.user_id) or user.has_access(obj, ACCESS_TYPE_MODIFY)
86
87
88 def require_object_modify_permission(obj, user):
89 if not can_modify(obj, user):
90 abort(403)
91
[end of redash/permissions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/handlers/visualizations.py b/redash/handlers/visualizations.py
--- a/redash/handlers/visualizations.py
+++ b/redash/handlers/visualizations.py
@@ -1,9 +1,12 @@
import json
+
from flask import request
from redash import models
-from redash.permissions import require_permission, require_admin_or_owner
from redash.handlers.base import BaseResource, get_object_or_404
+from redash.permissions import (require_admin_or_owner,
+ require_object_modify_permission,
+ require_permission)
class VisualizationListResource(BaseResource):
@@ -12,7 +15,7 @@
kwargs = request.get_json(force=True)
query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)
- require_admin_or_owner(query.user_id)
+ require_object_modify_permission(query, self.current_user)
kwargs['options'] = json.dumps(kwargs['options'])
kwargs['query_rel'] = query
@@ -28,7 +31,7 @@
@require_permission('edit_query')
def post(self, visualization_id):
vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
- require_admin_or_owner(vis.query_rel.user_id)
+ require_object_modify_permission(vis.query_rel, self.current_user)
kwargs = request.get_json(force=True)
if 'options' in kwargs:
@@ -45,6 +48,6 @@
@require_permission('edit_query')
def delete(self, visualization_id):
vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)
- require_admin_or_owner(vis.query_rel.user_id)
+ require_object_modify_permission(vis.query_rel, self.current_user)
models.db.session.delete(vis)
models.db.session.commit()
diff --git a/redash/permissions.py b/redash/permissions.py
--- a/redash/permissions.py
+++ b/redash/permissions.py
@@ -1,6 +1,7 @@
+import functools
+
from flask_login import current_user
from flask_restful import abort
-import functools
from funcy import flatten
view_only = True
| {"golden_diff": "diff --git a/redash/handlers/visualizations.py b/redash/handlers/visualizations.py\n--- a/redash/handlers/visualizations.py\n+++ b/redash/handlers/visualizations.py\n@@ -1,9 +1,12 @@\n import json\n+\n from flask import request\n \n from redash import models\n-from redash.permissions import require_permission, require_admin_or_owner\n from redash.handlers.base import BaseResource, get_object_or_404\n+from redash.permissions import (require_admin_or_owner,\n+ require_object_modify_permission,\n+ require_permission)\n \n \n class VisualizationListResource(BaseResource):\n@@ -12,7 +15,7 @@\n kwargs = request.get_json(force=True)\n \n query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)\n- require_admin_or_owner(query.user_id)\n+ require_object_modify_permission(query, self.current_user)\n \n kwargs['options'] = json.dumps(kwargs['options'])\n kwargs['query_rel'] = query\n@@ -28,7 +31,7 @@\n @require_permission('edit_query')\n def post(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n- require_admin_or_owner(vis.query_rel.user_id)\n+ require_object_modify_permission(vis.query_rel, self.current_user)\n \n kwargs = request.get_json(force=True)\n if 'options' in kwargs:\n@@ -45,6 +48,6 @@\n @require_permission('edit_query')\n def delete(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n- require_admin_or_owner(vis.query_rel.user_id)\n+ require_object_modify_permission(vis.query_rel, self.current_user)\n models.db.session.delete(vis)\n models.db.session.commit()\ndiff --git a/redash/permissions.py b/redash/permissions.py\n--- a/redash/permissions.py\n+++ b/redash/permissions.py\n@@ -1,6 +1,7 @@\n+import functools\n+\n from flask_login import current_user\n from flask_restful import abort\n-import functools\n from funcy import flatten\n \n view_only = True\n", "issue": "Redash Permissions not working for some use cases\n### Issue Summary\r\n\r\nCurrently, when query owner grants permission to another user for a query, the user is still unable to perform the following tasks:\r\n\r\n* change data source\r\n* schedule the query\r\n* add and save new visualisation\r\n\r\nI believe the user should have the ability to do all the things that the owner could do once permission has been granted.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 1.0.3\r\n* Browser/OS: Chrome\r\n* How did you install Redash: AWS using the AMI\r\n\n", "before_files": [{"content": "import json\nfrom flask import request\n\nfrom redash import models\nfrom redash.permissions import require_permission, require_admin_or_owner\nfrom redash.handlers.base import BaseResource, get_object_or_404\n\n\nclass VisualizationListResource(BaseResource):\n @require_permission('edit_query')\n def post(self):\n kwargs = request.get_json(force=True)\n\n query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop('query_id'), self.current_org)\n require_admin_or_owner(query.user_id)\n\n kwargs['options'] = json.dumps(kwargs['options'])\n kwargs['query_rel'] = query\n\n vis = models.Visualization(**kwargs)\n models.db.session.add(vis)\n models.db.session.commit()\n d = vis.to_dict(with_query=False)\n return d\n\n\nclass VisualizationResource(BaseResource):\n @require_permission('edit_query')\n def post(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, 
self.current_org)\n require_admin_or_owner(vis.query_rel.user_id)\n\n kwargs = request.get_json(force=True)\n if 'options' in kwargs:\n kwargs['options'] = json.dumps(kwargs['options'])\n\n kwargs.pop('id', None)\n kwargs.pop('query_id', None)\n\n self.update_model(vis, kwargs)\n d = vis.to_dict(with_query=False)\n models.db.session.commit()\n return d\n\n @require_permission('edit_query')\n def delete(self, visualization_id):\n vis = get_object_or_404(models.Visualization.get_by_id_and_org, visualization_id, self.current_org)\n require_admin_or_owner(vis.query_rel.user_id)\n models.db.session.delete(vis)\n models.db.session.commit()\n", "path": "redash/handlers/visualizations.py"}, {"content": "from flask_login import current_user\nfrom flask_restful import abort\nimport functools\nfrom funcy import flatten\n\nview_only = True\nnot_view_only = False\n\nACCESS_TYPE_VIEW = 'view'\nACCESS_TYPE_MODIFY = 'modify'\nACCESS_TYPE_DELETE = 'delete'\n\nACCESS_TYPES = (ACCESS_TYPE_VIEW, ACCESS_TYPE_MODIFY, ACCESS_TYPE_DELETE)\n\n\ndef has_access(object_groups, user, need_view_only):\n if 'admin' in user.permissions:\n return True\n\n matching_groups = set(object_groups.keys()).intersection(user.group_ids)\n\n if not matching_groups:\n return False\n\n required_level = 1 if need_view_only else 2\n\n group_level = 1 if all(flatten([object_groups[group] for group in matching_groups])) else 2\n\n return required_level <= group_level\n\n\ndef require_access(object_groups, user, need_view_only):\n if not has_access(object_groups, user, need_view_only):\n abort(403)\n\n\nclass require_permissions(object):\n def __init__(self, permissions):\n self.permissions = permissions\n\n def __call__(self, fn):\n @functools.wraps(fn)\n def decorated(*args, **kwargs):\n has_permissions = current_user.has_permissions(self.permissions)\n\n if has_permissions:\n return fn(*args, **kwargs)\n else:\n abort(403)\n\n return decorated\n\n\ndef require_permission(permission):\n return require_permissions((permission,))\n\n\ndef require_admin(fn):\n return require_permission('admin')(fn)\n\n\ndef require_super_admin(fn):\n return require_permission('super_admin')(fn)\n\n\ndef has_permission_or_owner(permission, object_owner_id):\n return int(object_owner_id) == current_user.id or current_user.has_permission(permission)\n\n\ndef is_admin_or_owner(object_owner_id):\n return has_permission_or_owner('admin', object_owner_id)\n\n\ndef require_permission_or_owner(permission, object_owner_id):\n if not has_permission_or_owner(permission, object_owner_id):\n abort(403)\n\n\ndef require_admin_or_owner(object_owner_id):\n if not is_admin_or_owner(object_owner_id):\n abort(403, message=\"You don't have permission to edit this resource.\")\n\n\ndef can_modify(obj, user):\n return is_admin_or_owner(obj.user_id) or user.has_access(obj, ACCESS_TYPE_MODIFY)\n\n\ndef require_object_modify_permission(obj, user):\n if not can_modify(obj, user):\n abort(403)\n", "path": "redash/permissions.py"}]} | 1,863 | 497 |
gh_patches_debug_3019 | rasdani/github-patches | git_diff | rucio__rucio-4790 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix setup_webui script
Motivation
----------
The script has a wrong import that needs to be fixed.
</issue>
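Judging from `setup_webui.py` below and the package layout it already lists, the wrong entry appears to be `rucio.web.flask.common`, which is not a child of any listed package, while the other web UI modules live under `rucio.web.ui.flask`. The sketch below shows the corrected list together with a small consistency check; it is an illustration of the intended value, not the actual patch.

```python
# Corrected package list (assumed layout: the Flask helpers live under
# rucio.web.ui.flask.common). The loop checks that every dotted path has its
# parent package listed, a check the old 'rucio.web.flask.common' entry fails.
packages = [
    'rucio',
    'rucio.web',
    'rucio.web.ui',
    'rucio.web.ui.flask',
    'rucio.web.ui.flask.common',  # previously 'rucio.web.flask.common'
]

for pkg in packages:
    parent = pkg.rsplit('.', 1)[0]
    assert pkg == 'rucio' or parent in packages, f'orphan package path: {pkg}'
print('package list is internally consistent')
```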
<code>
[start of setup_webui.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015-2021 CERN
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 # Authors:
17 # - Vincent Garonne <[email protected]>, 2015-2017
18 # - Martin Barisits <[email protected]>, 2016-2021
19 # - Benedikt Ziemons <[email protected]>, 2021
20
21 import os
22 import sys
23
24 from setuptools import setup
25
26
27 if sys.version_info < (3, 6):
28 print('ERROR: Rucio WebUI requires at least Python 3.6 to run.')
29 sys.exit(1)
30
31 try:
32 from setuputil import get_rucio_version
33 except ImportError:
34 sys.path.append(os.path.abspath(os.path.dirname(__file__)))
35 from setuputil import get_rucio_version
36
37 name = 'rucio-webui'
38 packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']
39 data_files = []
40 description = "Rucio WebUI Package"
41
42 setup(
43 name=name,
44 version=get_rucio_version(),
45 packages=packages,
46 package_dir={'': 'lib'},
47 data_files=None,
48 include_package_data=True,
49 scripts=None,
50 author="Rucio",
51 author_email="[email protected]",
52 description=description,
53 license="Apache License, Version 2.0",
54 url="https://rucio.cern.ch/",
55 python_requires=">=3.6, <4",
56 classifiers=[
57 'Development Status :: 5 - Production/Stable',
58 'License :: OSI Approved :: Apache Software License',
59 'Intended Audience :: Information Technology',
60 'Intended Audience :: System Administrators',
61 'Operating System :: POSIX :: Linux',
62 'Natural Language :: English',
63 'Programming Language :: Python',
64 'Programming Language :: Python :: 3',
65 'Programming Language :: Python :: 3.6',
66 'Programming Language :: Python :: 3.7',
67 'Programming Language :: Python :: 3.8',
68 'Programming Language :: Python :: 3.9',
69 'Environment :: No Input/Output (Daemon)', ],
70 install_requires=['rucio>=1.2.5', ],
71 )
72
[end of setup_webui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup_webui.py b/setup_webui.py
--- a/setup_webui.py
+++ b/setup_webui.py
@@ -35,7 +35,7 @@
from setuputil import get_rucio_version
name = 'rucio-webui'
-packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']
+packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common']
data_files = []
description = "Rucio WebUI Package"
| {"golden_diff": "diff --git a/setup_webui.py b/setup_webui.py\n--- a/setup_webui.py\n+++ b/setup_webui.py\n@@ -35,7 +35,7 @@\n from setuputil import get_rucio_version\n \n name = 'rucio-webui'\n-packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']\n+packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common']\n data_files = []\n description = \"Rucio WebUI Package\"\n", "issue": "Fix setup_webui script\nMotivation\r\n----------\r\nScript has a wrong import, needs to be fixed.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne <[email protected]>, 2015-2017\n# - Martin Barisits <[email protected]>, 2016-2021\n# - Benedikt Ziemons <[email protected]>, 2021\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nif sys.version_info < (3, 6):\n print('ERROR: Rucio WebUI requires at least Python 3.6 to run.')\n sys.exit(1)\n\ntry:\n from setuputil import get_rucio_version\nexcept ImportError:\n sys.path.append(os.path.abspath(os.path.dirname(__file__)))\n from setuputil import get_rucio_version\n\nname = 'rucio-webui'\npackages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']\ndata_files = []\ndescription = \"Rucio WebUI Package\"\n\nsetup(\n name=name,\n version=get_rucio_version(),\n packages=packages,\n package_dir={'': 'lib'},\n data_files=None,\n include_package_data=True,\n scripts=None,\n author=\"Rucio\",\n author_email=\"[email protected]\",\n description=description,\n license=\"Apache License, Version 2.0\",\n url=\"https://rucio.cern.ch/\",\n python_requires=\">=3.6, <4\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'Operating System :: POSIX :: Linux',\n 'Natural Language :: English',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Environment :: No Input/Output (Daemon)', ],\n install_requires=['rucio>=1.2.5', ],\n)\n", "path": "setup_webui.py"}]} | 1,321 | 142 |
gh_patches_debug_26431 | rasdani/github-patches | git_diff | tensorflow__tfx-91 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Import errors when trying to run Chicago Taxi on Dataflow
As in issue [#47](https://github.com/tensorflow/tfx/issues/47), I still have a problem with running CTE on Dataflow. When I use the code with no modifications, the error from the previous issue persists; it seems that somehow the `try-except` around the imports doesn't do its job.
When I changed the code to include only the relative import in my fork [here](https://github.com/mwalenia/tfx/tree/import-fix), the problem disappeared, but another one manifested.
This time, there's a problem with importing `estimator` from tensorflow somewhere in the dependencies. Stacktrace:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 773, in run
self._load_main_session(self.local_staging_directory)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session
pickler.load_session(session_file)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 269, in load_session
return dill.load_session(file_path)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 410, in load_session
module = unpickler.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce
value = func(*args)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 828, in _import_module
return getattr(__import__(module, None, None, [obj]), obj)
File "/usr/local/lib/python2.7/dist-packages/trainer/taxi.py", line 19, in <module>
from tensorflow_transform import coders as tft_coders
File "/usr/local/lib/python2.7/dist-packages/tensorflow_transform/__init__.py", line 19, in <module>
from tensorflow_transform.analyzers import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow_transform/analyzers.py", line 39, in <module>
from tensorflow_transform import tf_utils
File "/usr/local/lib/python2.7/dist-packages/tensorflow_transform/tf_utils.py", line 24, in <module>
from tensorflow.contrib.proto.python.ops import encode_proto_op
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/__init__.py", line 48, in <module>
from tensorflow.contrib import distribute
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/__init__.py", line 34, in <module>
from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in <module>
from tensorflow.contrib.tpu.python.ops import tpu_ops
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/__init__.py", line 73, in <module>
from tensorflow.contrib.tpu.python.tpu.keras_support import tpu_model as keras_to_tpu_model
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py", line 71, in <module>
from tensorflow.python.estimator import model_fn as model_fn_lib
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/__init__.py", line 25, in <module>
import tensorflow.python.estimator.estimator_lib
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator_lib.py", line 22, in <module>
from tensorflow.python.estimator.canned.baseline import BaselineClassifier
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/canned/baseline.py", line 50, in <module>
from tensorflow.python.estimator import estimator
ImportError: cannot import name estimator
```
Is there anything I can do to fix this?
</issue>
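The traceback above is consistent with mismatched TensorFlow-family wheels being staged to the Dataflow workers (an older `tensorflow` pinned next to newer `tensorflow-transform` and `tensorflow-data-validation` releases). A hedged sketch of the usual remedy follows: keep `setup.py` on a single release train that has been tested together instead of pinning exact, diverging versions. The pins shown are illustrative and must be verified against a combination known to work.

```python
# Sketch of a setup.py that keeps TensorFlow and the TFX libraries on one
# release train. Version numbers are illustrative, not a guaranteed-good set.
import setuptools

TF_VERSION = '1.13.1'
BEAM_VERSION = '2.12.0'

setuptools.setup(
    name='tfx_chicago_taxi_example',   # placeholder name
    version='0.0.1',
    packages=setuptools.find_packages(),
    install_requires=[
        'apache-beam[gcp]>=' + BEAM_VERSION,
        'tensorflow>=' + TF_VERSION,
        'tensorflow-data-validation>=0.13.1,<0.14',
        'tensorflow-metadata>=0.13.1,<0.14',
        'tensorflow-model-analysis>=0.13.2,<0.14',
        'tensorflow-transform>=0.13.0,<0.14',
    ],
)
```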
<code>
[start of tfx/examples/chicago_taxi/setup.py]
1 # Copyright 2019 Google LLC. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Setup dependencies for local and cloud deployment."""
15 import setuptools
16
17 # LINT.IfChange
18 TF_VERSION = '1.12.0'
19 # LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)
20
21 # LINT.IfChange
22 BEAM_VERSION = '2.11.0'
23 # LINT.ThenChange(setup_beam_on_flink.sh)
24
25 if __name__ == '__main__':
26 setuptools.setup(
27 name='tfx_chicago_taxi',
28 version='0.12.0',
29 packages=setuptools.find_packages(),
30 install_requires=[
31 'apache-beam[gcp]==' + BEAM_VERSION,
32 'jupyter==1.0',
33 'numpy==1.14.5',
34 'protobuf==3.6.1',
35 'tensorflow==' + TF_VERSION,
36 'tensorflow-data-validation==0.12.0',
37 'tensorflow-metadata==0.12.1',
38 'tensorflow-model-analysis==0.12.1',
39 'tensorflow-serving-api==1.12.0',
40 'tensorflow-transform==0.12.0',
41 ],
42 python_requires='>=2.7,<3')
43
[end of tfx/examples/chicago_taxi/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tfx/examples/chicago_taxi/setup.py b/tfx/examples/chicago_taxi/setup.py
--- a/tfx/examples/chicago_taxi/setup.py
+++ b/tfx/examples/chicago_taxi/setup.py
@@ -15,28 +15,29 @@
import setuptools
# LINT.IfChange
-TF_VERSION = '1.12.0'
+TF_VERSION = '1.13.1'
# LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)
# LINT.IfChange
-BEAM_VERSION = '2.11.0'
+BEAM_VERSION = '2.12.0'
# LINT.ThenChange(setup_beam_on_flink.sh)
if __name__ == '__main__':
setuptools.setup(
name='tfx_chicago_taxi',
- version='0.12.0',
+ version='0.13.0',
packages=setuptools.find_packages(),
install_requires=[
- 'apache-beam[gcp]==' + BEAM_VERSION,
- 'jupyter==1.0',
- 'numpy==1.14.5',
- 'protobuf==3.6.1',
- 'tensorflow==' + TF_VERSION,
- 'tensorflow-data-validation==0.12.0',
- 'tensorflow-metadata==0.12.1',
- 'tensorflow-model-analysis==0.12.1',
- 'tensorflow-serving-api==1.12.0',
- 'tensorflow-transform==0.12.0',
+ 'apache-beam[gcp]>=' + BEAM_VERSION,
+ 'jupyter>=1.0,<2',
+ 'notebook>=5.7.8,<5.8',
+ 'numpy>=1.14.5,<2',
+ 'protobuf>=3.7.0,<3.8.0',
+ 'tensorflow>=' + TF_VERSION,
+ 'tensorflow-data-validation>=0.13.1,<0.14',
+ 'tensorflow-metadata>=0.13.1,<0.14',
+ 'tensorflow-model-analysis>=0.13.2,<0.14',
+ 'tensorflow-serving-api>=1.13.0,<1.14',
+ 'tensorflow-transform>=0.13.0,<0.14',
],
- python_requires='>=2.7,<3')
+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,<4',)
| {"golden_diff": "diff --git a/tfx/examples/chicago_taxi/setup.py b/tfx/examples/chicago_taxi/setup.py\n--- a/tfx/examples/chicago_taxi/setup.py\n+++ b/tfx/examples/chicago_taxi/setup.py\n@@ -15,28 +15,29 @@\n import setuptools\n \n # LINT.IfChange\n-TF_VERSION = '1.12.0'\n+TF_VERSION = '1.13.1'\n # LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)\n \n # LINT.IfChange\n-BEAM_VERSION = '2.11.0'\n+BEAM_VERSION = '2.12.0'\n # LINT.ThenChange(setup_beam_on_flink.sh)\n \n if __name__ == '__main__':\n setuptools.setup(\n name='tfx_chicago_taxi',\n- version='0.12.0',\n+ version='0.13.0',\n packages=setuptools.find_packages(),\n install_requires=[\n- 'apache-beam[gcp]==' + BEAM_VERSION,\n- 'jupyter==1.0',\n- 'numpy==1.14.5',\n- 'protobuf==3.6.1',\n- 'tensorflow==' + TF_VERSION,\n- 'tensorflow-data-validation==0.12.0',\n- 'tensorflow-metadata==0.12.1',\n- 'tensorflow-model-analysis==0.12.1',\n- 'tensorflow-serving-api==1.12.0',\n- 'tensorflow-transform==0.12.0',\n+ 'apache-beam[gcp]>=' + BEAM_VERSION,\n+ 'jupyter>=1.0,<2',\n+ 'notebook>=5.7.8,<5.8',\n+ 'numpy>=1.14.5,<2',\n+ 'protobuf>=3.7.0,<3.8.0',\n+ 'tensorflow>=' + TF_VERSION,\n+ 'tensorflow-data-validation>=0.13.1,<0.14',\n+ 'tensorflow-metadata>=0.13.1,<0.14',\n+ 'tensorflow-model-analysis>=0.13.2,<0.14',\n+ 'tensorflow-serving-api>=1.13.0,<1.14',\n+ 'tensorflow-transform>=0.13.0,<0.14',\n ],\n- python_requires='>=2.7,<3')\n+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,<4',)\n", "issue": "Import errors when trying to run Chicago Taxi on Dataflow\nSimilarly as in issue [#47](https://github.com/tensorflow/tfx/issues/47), I still have a problem with running CTE on Dataflow. When I use the code with no modifications, the error from previous issue persists - it seems that somehow the `try-except` around the imports doesn't do its job.\r\n\r\nWhen I changed the code to include only the relative import in my fork [here](https://github.com/mwalenia/tfx/tree/import-fix), the problem disappeared, but another one manifested.\r\n\r\nThis time, there's a problem with importing `estimator` from tensorflow somewhere in the dependencies. 
Stacktrace:\r\n\r\n```Traceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py\", line 773, in run\r\n self._load_main_session(self.local_staging_directory)\r\n File \"/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py\", line 489, in _load_main_session\r\n pickler.load_session(session_file)\r\n File \"/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py\", line 269, in load_session\r\n return dill.load_session(file_path)\r\n File \"/usr/local/lib/python2.7/dist-packages/dill/_dill.py\", line 410, in load_session\r\n module = unpickler.load()\r\n File \"/usr/lib/python2.7/pickle.py\", line 864, in load\r\n dispatch[key](self)\r\n File \"/usr/lib/python2.7/pickle.py\", line 1139, in load_reduce\r\n value = func(*args)\r\n File \"/usr/local/lib/python2.7/dist-packages/dill/_dill.py\", line 828, in _import_module\r\n return getattr(__import__(module, None, None, [obj]), obj)\r\n File \"/usr/local/lib/python2.7/dist-packages/trainer/taxi.py\", line 19, in <module>\r\n from tensorflow_transform import coders as tft_coders\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow_transform/__init__.py\", line 19, in <module>\r\n from tensorflow_transform.analyzers import *\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow_transform/analyzers.py\", line 39, in <module>\r\n from tensorflow_transform import tf_utils\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow_transform/tf_utils.py\", line 24, in <module>\r\n from tensorflow.contrib.proto.python.ops import encode_proto_op\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/__init__.py\", line 48, in <module>\r\n from tensorflow.contrib import distribute\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/__init__.py\", line 34, in <module>\r\n from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/distribute/python/tpu_strategy.py\", line 27, in <module>\r\n from tensorflow.contrib.tpu.python.ops import tpu_ops\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/__init__.py\", line 73, in <module>\r\n from tensorflow.contrib.tpu.python.tpu.keras_support import tpu_model as keras_to_tpu_model\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py\", line 71, in <module>\r\n from tensorflow.python.estimator import model_fn as model_fn_lib\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/__init__.py\", line 25, in <module>\r\n import tensorflow.python.estimator.estimator_lib\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator_lib.py\", line 22, in <module>\r\n from tensorflow.python.estimator.canned.baseline import BaselineClassifier\r\n File \"/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/canned/baseline.py\", line 50, in <module>\r\n from tensorflow.python.estimator import estimator\r\nImportError: cannot import name estimator\r\n```\r\n\r\nIs there anything I can do to fix this? \n", "before_files": [{"content": "# Copyright 2019 Google LLC. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Setup dependencies for local and cloud deployment.\"\"\"\nimport setuptools\n\n# LINT.IfChange\nTF_VERSION = '1.12.0'\n# LINT.ThenChange(train_mlengine.sh, start_model_server_mlengine.sh)\n\n# LINT.IfChange\nBEAM_VERSION = '2.11.0'\n# LINT.ThenChange(setup_beam_on_flink.sh)\n\nif __name__ == '__main__':\n setuptools.setup(\n name='tfx_chicago_taxi',\n version='0.12.0',\n packages=setuptools.find_packages(),\n install_requires=[\n 'apache-beam[gcp]==' + BEAM_VERSION,\n 'jupyter==1.0',\n 'numpy==1.14.5',\n 'protobuf==3.6.1',\n 'tensorflow==' + TF_VERSION,\n 'tensorflow-data-validation==0.12.0',\n 'tensorflow-metadata==0.12.1',\n 'tensorflow-model-analysis==0.12.1',\n 'tensorflow-serving-api==1.12.0',\n 'tensorflow-transform==0.12.0',\n ],\n python_requires='>=2.7,<3')\n", "path": "tfx/examples/chicago_taxi/setup.py"}]} | 1,981 | 568 |
gh_patches_debug_18748 | rasdani/github-patches | git_diff | microsoft__PubSec-Info-Assistant-356 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Text Enrichment function not quoting blob paths correctly
We have some files with percentage (%) symbols in them, which appear to cause an issue when getting to the Text Enrichment stage of the Function App due to the way the `get_blob_and_sas` function works. Example file name: `Unemployment rate back up to 3.7% in October _ Australian Bureau of Statistics.pdf`
I would suggest replacing the code that manually substitutes spaces (below) with a proper URL quoting function like `blob_path = urllib.parse.quote(blob_path)`
https://github.com/microsoft/PubSec-Info-Assistant/blob/7fa4561652211b023965d4522b2bfd7168af4060/functions/shared_code/utilities_helper.py#L52
</issue>
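A small runnable illustration of the suggested change: `urllib.parse.quote` percent-encodes spaces and literal `%` characters while leaving `/` separators intact, whereas the current manual space replacement leaves `%` unescaped and yields an invalid blob URL for file names like the one above. The container prefix in the example is made up.

```python
# Demonstrates why quote() is safer than replacing spaces by hand: the literal
# "%" in the file name must become "%25" before it is embedded in a URL.
import urllib.parse

blob_path = ('upload/Unemployment rate back up to 3.7% in October _ '
             'Australian Bureau of Statistics.pdf')

manual = blob_path.replace(' ', '%20')   # current behaviour: '%' is left as-is
quoted = urllib.parse.quote(blob_path)   # suggested fix: encodes '%' and spaces, keeps '/'

print(manual)
print(quoted)
assert '%25' in quoted and '%25' not in manual
```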
<code>
[start of functions/shared_code/utilities_helper.py]
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 import os
5 import logging
6 from datetime import datetime, timedelta
7 from azure.storage.blob import generate_blob_sas, BlobSasPermissions
8
9 class UtilitiesHelper:
10 """ Helper class for utility functions"""
11 def __init__(self,
12 azure_blob_storage_account,
13 azure_blob_storage_endpoint,
14 azure_blob_storage_key
15 ):
16 self.azure_blob_storage_account = azure_blob_storage_account
17 self.azure_blob_storage_endpoint = azure_blob_storage_endpoint
18 self.azure_blob_storage_key = azure_blob_storage_key
19
20 def get_filename_and_extension(self, path):
21 """ Function to return the file name & type"""
22 # Split the path into base and extension
23 base_name = os.path.basename(path)
24 segments = path.split("/")
25 directory = "/".join(segments[1:-1]) + "/"
26 if directory == "/":
27 directory = ""
28 file_name, file_extension = os.path.splitext(base_name)
29 return file_name, file_extension, directory
30
31 def get_blob_and_sas(self, blob_path):
32 """ Function to retrieve the uri and sas token for a given blob in azure storage"""
33
34 # Get path and file name minus the root container
35 separator = "/"
36 file_path_w_name_no_cont = separator.join(
37 blob_path.split(separator)[1:])
38
39 container_name = separator.join(
40 blob_path.split(separator)[0:1])
41
42 # Gen SAS token
43 sas_token = generate_blob_sas(
44 account_name=self.azure_blob_storage_account,
45 container_name=container_name,
46 blob_name=file_path_w_name_no_cont,
47 account_key=self.azure_blob_storage_key,
48 permission=BlobSasPermissions(read=True),
49 expiry=datetime.utcnow() + timedelta(hours=1)
50 )
51 source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'
52 source_blob_path = source_blob_path.replace(" ", "%20")
53 logging.info("Path and SAS token for file in azure storage are now generated \n")
54 return source_blob_path
[end of functions/shared_code/utilities_helper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/functions/shared_code/utilities_helper.py b/functions/shared_code/utilities_helper.py
--- a/functions/shared_code/utilities_helper.py
+++ b/functions/shared_code/utilities_helper.py
@@ -3,6 +3,7 @@
import os
import logging
+import urllib.parse
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions
@@ -48,7 +49,7 @@
permission=BlobSasPermissions(read=True),
expiry=datetime.utcnow() + timedelta(hours=1)
)
+ blob_path = urllib.parse.quote(blob_path)
source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'
- source_blob_path = source_blob_path.replace(" ", "%20")
logging.info("Path and SAS token for file in azure storage are now generated \n")
return source_blob_path
\ No newline at end of file
| {"golden_diff": "diff --git a/functions/shared_code/utilities_helper.py b/functions/shared_code/utilities_helper.py\n--- a/functions/shared_code/utilities_helper.py\n+++ b/functions/shared_code/utilities_helper.py\n@@ -3,6 +3,7 @@\n \n import os\n import logging\n+import urllib.parse\n from datetime import datetime, timedelta\n from azure.storage.blob import generate_blob_sas, BlobSasPermissions\n \n@@ -48,7 +49,7 @@\n permission=BlobSasPermissions(read=True),\n expiry=datetime.utcnow() + timedelta(hours=1)\n )\n+ blob_path = urllib.parse.quote(blob_path)\n source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'\n- source_blob_path = source_blob_path.replace(\" \", \"%20\")\n logging.info(\"Path and SAS token for file in azure storage are now generated \\n\")\n return source_blob_path\n\\ No newline at end of file\n", "issue": "Text Enrichment function not quoting blob paths correctly\nWe have some files with percentage (%) symbols in them, which appear to cause an issue when getting to the Text Enrichment stage of the Function App due to the way the `get_blob_and_sas` function works. Example file name: `Unemployment rate back up to 3.7% in October _ Australian Bureau of Statistics.pdf`\r\n\r\nI would suggest replacing the code that manually substitutes spaces (below) with a proper URL quoting function like `blob_path = urllib.parse.quote(blob_path)`\r\n\r\nhttps://github.com/microsoft/PubSec-Info-Assistant/blob/7fa4561652211b023965d4522b2bfd7168af4060/functions/shared_code/utilities_helper.py#L52\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nimport os\nimport logging\nfrom datetime import datetime, timedelta\nfrom azure.storage.blob import generate_blob_sas, BlobSasPermissions\n\nclass UtilitiesHelper:\n \"\"\" Helper class for utility functions\"\"\"\n def __init__(self,\n azure_blob_storage_account,\n azure_blob_storage_endpoint,\n azure_blob_storage_key\n ):\n self.azure_blob_storage_account = azure_blob_storage_account\n self.azure_blob_storage_endpoint = azure_blob_storage_endpoint\n self.azure_blob_storage_key = azure_blob_storage_key\n \n def get_filename_and_extension(self, path):\n \"\"\" Function to return the file name & type\"\"\"\n # Split the path into base and extension\n base_name = os.path.basename(path)\n segments = path.split(\"/\")\n directory = \"/\".join(segments[1:-1]) + \"/\"\n if directory == \"/\":\n directory = \"\"\n file_name, file_extension = os.path.splitext(base_name)\n return file_name, file_extension, directory\n \n def get_blob_and_sas(self, blob_path):\n \"\"\" Function to retrieve the uri and sas token for a given blob in azure storage\"\"\"\n\n # Get path and file name minus the root container\n separator = \"/\"\n file_path_w_name_no_cont = separator.join(\n blob_path.split(separator)[1:])\n \n container_name = separator.join(\n blob_path.split(separator)[0:1])\n\n # Gen SAS token\n sas_token = generate_blob_sas(\n account_name=self.azure_blob_storage_account,\n container_name=container_name,\n blob_name=file_path_w_name_no_cont,\n account_key=self.azure_blob_storage_key,\n permission=BlobSasPermissions(read=True),\n expiry=datetime.utcnow() + timedelta(hours=1)\n )\n source_blob_path = f'{self.azure_blob_storage_endpoint}{blob_path}?{sas_token}'\n source_blob_path = source_blob_path.replace(\" \", \"%20\")\n logging.info(\"Path and SAS token for file in azure storage are now generated \\n\")\n return source_blob_path", "path": 
"functions/shared_code/utilities_helper.py"}]} | 1,254 | 201 |
gh_patches_debug_27874 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-7673 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Any way to filter on tags for Cognito identity-pool or user-pool?
### Discussed in https://github.com/orgs/cloud-custodian/discussions/7616
Originally posted by **stepkirk**, August 5, 2022:
We normally enforce tags on AWS resources by using Custodian to look for certain required tags on a resource and then, if the tags don't exist or aren't in the correct format, we mark the resource for deletion after a certain grace period. With the Cognito identity-pool and user-pool resources, it doesn't look like we can check for tags the normal way and it doesn't look like we can mark a resource for later deletion. Is that true or am I missing something?
Any plans to add tagging/marking support in the future for these Cognito resources?
</issue>
<code>
[start of c7n/resources/cognito.py]
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 from botocore.exceptions import ClientError
4
5 from c7n.actions import BaseAction
6 from c7n.manager import resources
7 from c7n.query import QueryResourceManager, TypeInfo
8 from c7n.utils import local_session, type_schema
9
10
11 @resources.register('identity-pool')
12 class CognitoIdentityPool(QueryResourceManager):
13
14 class resource_type(TypeInfo):
15 service = 'cognito-identity'
16 enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})
17 detail_spec = (
18 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)
19 id = 'IdentityPoolId'
20 name = 'IdentityPoolName'
21 arn_type = "identitypool"
22 cfn_type = 'AWS::Cognito::IdentityPool'
23
24
25 @CognitoIdentityPool.action_registry.register('delete')
26 class DeleteIdentityPool(BaseAction):
27 """Action to delete cognito identity pool
28
29 It is recommended to use a filter to avoid unwanted deletion of pools
30
31 :example:
32
33 .. code-block:: yaml
34
35 policies:
36 - name: identity-pool-delete
37 resource: identity-pool
38 actions:
39 - delete
40 """
41
42 schema = type_schema('delete')
43 permissions = ("cognito-identity:DeleteIdentityPool",)
44
45 def process(self, pools):
46 with self.executor_factory(max_workers=2) as w:
47 list(w.map(self.process_pool, pools))
48
49 def process_pool(self, pool):
50 client = local_session(
51 self.manager.session_factory).client('cognito-identity')
52 try:
53 client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])
54 except ClientError as e:
55 self.log.exception(
56 "Exception deleting identity pool:\n %s" % e)
57
58
59 @resources.register('user-pool')
60 class CognitoUserPool(QueryResourceManager):
61
62 class resource_type(TypeInfo):
63 service = "cognito-idp"
64 enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})
65 detail_spec = (
66 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')
67 id = 'Id'
68 name = 'Name'
69 arn_type = "userpool"
70 cfn_type = 'AWS::Cognito::UserPool'
71
72
73 @CognitoUserPool.action_registry.register('delete')
74 class DeleteUserPool(BaseAction):
75 """Action to delete cognito user pool
76
77 It is recommended to use a filter to avoid unwanted deletion of pools
78
79 :example:
80
81 .. code-block:: yaml
82
83 policies:
84 - name: user-pool-delete
85 resource: user-pool
86 actions:
87 - delete
88 """
89
90 schema = type_schema('delete')
91 permissions = ("cognito-idp:DeleteUserPool",)
92
93 def process(self, pools):
94 with self.executor_factory(max_workers=2) as w:
95 list(w.map(self.process_pool, pools))
96
97 def process_pool(self, pool):
98 client = local_session(
99 self.manager.session_factory).client('cognito-idp')
100 try:
101 client.delete_user_pool(UserPoolId=pool['Id'])
102 except ClientError as e:
103 self.log.exception(
104 "Exception deleting user pool:\n %s" % e)
105
[end of c7n/resources/cognito.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py
--- a/c7n/resources/cognito.py
+++ b/c7n/resources/cognito.py
@@ -4,10 +4,21 @@
from c7n.actions import BaseAction
from c7n.manager import resources
-from c7n.query import QueryResourceManager, TypeInfo
+from c7n.query import QueryResourceManager, TypeInfo, DescribeSource
+from c7n.tags import universal_augment
from c7n.utils import local_session, type_schema
+class DescribeIdentityPool(DescribeSource):
+ def augment(self, resources):
+ return universal_augment(self.manager, resources)
+
+
+class DescribeUserPool(DescribeSource):
+ def augment(self, resources):
+ return universal_augment(self.manager, resources)
+
+
@resources.register('identity-pool')
class CognitoIdentityPool(QueryResourceManager):
@@ -20,6 +31,11 @@
name = 'IdentityPoolName'
arn_type = "identitypool"
cfn_type = 'AWS::Cognito::IdentityPool'
+ universal_taggable = object()
+
+ source_mapping = {
+ 'describe': DescribeIdentityPool,
+ }
@CognitoIdentityPool.action_registry.register('delete')
@@ -69,6 +85,10 @@
arn_type = "userpool"
cfn_type = 'AWS::Cognito::UserPool'
+ source_mapping = {
+ 'describe': DescribeUserPool,
+ }
+
@CognitoUserPool.action_registry.register('delete')
class DeleteUserPool(BaseAction):
| {"golden_diff": "diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py\n--- a/c7n/resources/cognito.py\n+++ b/c7n/resources/cognito.py\n@@ -4,10 +4,21 @@\n \n from c7n.actions import BaseAction\n from c7n.manager import resources\n-from c7n.query import QueryResourceManager, TypeInfo\n+from c7n.query import QueryResourceManager, TypeInfo, DescribeSource\n+from c7n.tags import universal_augment\n from c7n.utils import local_session, type_schema\n \n \n+class DescribeIdentityPool(DescribeSource):\n+ def augment(self, resources):\n+ return universal_augment(self.manager, resources)\n+\n+\n+class DescribeUserPool(DescribeSource):\n+ def augment(self, resources):\n+ return universal_augment(self.manager, resources)\n+\n+\n @resources.register('identity-pool')\n class CognitoIdentityPool(QueryResourceManager):\n \n@@ -20,6 +31,11 @@\n name = 'IdentityPoolName'\n arn_type = \"identitypool\"\n cfn_type = 'AWS::Cognito::IdentityPool'\n+ universal_taggable = object()\n+\n+ source_mapping = {\n+ 'describe': DescribeIdentityPool,\n+ }\n \n \n @CognitoIdentityPool.action_registry.register('delete')\n@@ -69,6 +85,10 @@\n arn_type = \"userpool\"\n cfn_type = 'AWS::Cognito::UserPool'\n \n+ source_mapping = {\n+ 'describe': DescribeUserPool,\n+ }\n+\n \n @CognitoUserPool.action_registry.register('delete')\n class DeleteUserPool(BaseAction):\n", "issue": "Any way to filter on tags for Cognito identity-pool or user-pool?\n### Discussed in https://github.com/orgs/cloud-custodian/discussions/7616\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **stepkirk** August 5, 2022</sup>\r\nWe normally enforce tags on AWS resources by using Custodian to look for certain required tags on a resource and then, if the tags don't exist or aren't in the correct format, we mark the resource for deletion after a certain grace period. With the Cognito identity-pool and user-pool resources, it doesn't look like we can check for tags the normal way and it doesn't look like we can mark a resource for later deletion. Is that true or am I missing something?\r\n\r\nAny plans to add tagging/marking support in the future for these Cognito resources?</div>\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nfrom botocore.exceptions import ClientError\n\nfrom c7n.actions import BaseAction\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\n\n\[email protected]('identity-pool')\nclass CognitoIdentityPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'cognito-identity'\n enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n arn_type = \"identitypool\"\n cfn_type = 'AWS::Cognito::IdentityPool'\n\n\[email protected]_registry.register('delete')\nclass DeleteIdentityPool(BaseAction):\n \"\"\"Action to delete cognito identity pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: identity-pool-delete\n resource: identity-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-identity:DeleteIdentityPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-identity')\n try:\n client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting identity pool:\\n %s\" % e)\n\n\[email protected]('user-pool')\nclass CognitoUserPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"cognito-idp\"\n enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n name = 'Name'\n arn_type = \"userpool\"\n cfn_type = 'AWS::Cognito::UserPool'\n\n\[email protected]_registry.register('delete')\nclass DeleteUserPool(BaseAction):\n \"\"\"Action to delete cognito user pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: user-pool-delete\n resource: user-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-idp:DeleteUserPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-idp')\n try:\n client.delete_user_pool(UserPoolId=pool['Id'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting user pool:\\n %s\" % e)\n", "path": "c7n/resources/cognito.py"}]} | 1,670 | 356 |
gh_patches_debug_200 | rasdani/github-patches | git_diff | scrapy__scrapy-1566 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
signals docs are confusing
It seems it is not explained anywhere in the Scrapy docs how to connect a callback to a signal.
http://doc.scrapy.org/en/latest/topics/signals.html tells:
> You can connect to signals (or send your own) through the [Signals API](http://doc.scrapy.org/en/latest/topics/api.html#topics-api-signals).
But if you follow this link you get docs for scrapy.signalmanager.SignalManager - that's fine, but it is not explained where to get a SignalManager instance from.
There is an example in Extension docs (http://doc.scrapy.org/en/latest/topics/extensions.html#sample-extension), but
a) this is just an example;
b) it is not explained that crawler.signals is a SignalManager instance;
c) this example is neither in Signals docs nor in SignalManager docs.
There is also a bit of information here: http://doc.scrapy.org/en/latest/topics/api.html#scrapy.crawler.Crawler.signals, but
a) it is not linked from either the Signals docs or the SignalManager docs, so you can't find it if you don't know about it already;
b) it is not explained that crawler.signals is the only way to access signals.
So in the end users may get some luck connecting signals if they start from Crawler docs, but almost no luck if they start from Signals docs.
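For illustration, a minimal sketch of the kind of example that would answer this (the spider and callback names are made up); `crawler.signals` is the `SignalManager` instance being discussed:

```python
from scrapy import Spider, signals


class ExampleSpider(Spider):
    name = "example"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # crawler.signals is the SignalManager instance
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider

    def spider_closed(self, spider):
        # runs when the spider_closed signal is sent
        spider.logger.info("Spider closed: %s", spider.name)
```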
</issue>
<code>
[start of scrapy/utils/misc.py]
1 """Helper functions which doesn't fit anywhere else"""
2 import re
3 import hashlib
4 from importlib import import_module
5 from pkgutil import iter_modules
6
7 import six
8 from w3lib.html import replace_entities
9
10 from scrapy.utils.python import flatten, to_unicode
11 from scrapy.item import BaseItem
12
13
14 _ITERABLE_SINGLE_VALUES = dict, BaseItem, six.text_type, bytes
15
16
17 def arg_to_iter(arg):
18 """Convert an argument to an iterable. The argument can be a None, single
19 value, or an iterable.
20
21 Exception: if arg is a dict, [arg] will be returned
22 """
23 if arg is None:
24 return []
25 elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):
26 return arg
27 else:
28 return [arg]
29
30
31 def load_object(path):
32 """Load an object given its absolute object path, and return it.
33
34 object can be a class, function, variable or an instance.
35 path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'
36 """
37
38 try:
39 dot = path.rindex('.')
40 except ValueError:
41 raise ValueError("Error loading object '%s': not a full path" % path)
42
43 module, name = path[:dot], path[dot+1:]
44 mod = import_module(module)
45
46 try:
47 obj = getattr(mod, name)
48 except AttributeError:
49 raise NameError("Module '%s' doesn't define any object named '%s'" % (module, name))
50
51 return obj
52
53
54 def walk_modules(path):
55 """Loads a module and all its submodules from the given module path and
56 returns them. If *any* module throws an exception while importing, that
57 exception is thrown back.
58
59 For example: walk_modules('scrapy.utils')
60 """
61
62 mods = []
63 mod = import_module(path)
64 mods.append(mod)
65 if hasattr(mod, '__path__'):
66 for _, subpath, ispkg in iter_modules(mod.__path__):
67 fullpath = path + '.' + subpath
68 if ispkg:
69 mods += walk_modules(fullpath)
70 else:
71 submod = import_module(fullpath)
72 mods.append(submod)
73 return mods
74
75
76 def extract_regex(regex, text, encoding='utf-8'):
77 """Extract a list of unicode strings from the given text/encoding using the following policies:
78
79 * if the regex contains a named group called "extract" that will be returned
80 * if the regex contains multiple numbered groups, all those will be returned (flattened)
81 * if the regex doesn't contain any group the entire regex matching is returned
82 """
83
84 if isinstance(regex, six.string_types):
85 regex = re.compile(regex, re.UNICODE)
86
87 try:
88 strings = [regex.search(text).group('extract')] # named group
89 except:
90 strings = regex.findall(text) # full regex or numbered groups
91 strings = flatten(strings)
92
93 if isinstance(text, six.text_type):
94 return [replace_entities(s, keep=['lt', 'amp']) for s in strings]
95 else:
96 return [replace_entities(to_unicode(s, encoding), keep=['lt', 'amp'])
97 for s in strings]
98
99
100 def md5sum(file):
101 """Calculate the md5 checksum of a file-like object without reading its
102 whole content in memory.
103
104 >>> from io import BytesIO
105 >>> md5sum(BytesIO(b'file content to hash'))
106 '784406af91dd5a54fbb9c84c2236595a'
107 """
108 m = hashlib.md5()
109 while True:
110 d = file.read(8096)
111 if not d:
112 break
113 m.update(d)
114 return m.hexdigest()
115
116 def rel_has_nofollow(rel):
117 """Return True if link rel attribute has nofollow type"""
118 return True if rel is not None and 'nofollow' in rel.split() else False
119
120
[end of scrapy/utils/misc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/utils/misc.py b/scrapy/utils/misc.py
--- a/scrapy/utils/misc.py
+++ b/scrapy/utils/misc.py
@@ -1,4 +1,4 @@
-"""Helper functions which doesn't fit anywhere else"""
+"""Helper functions which don't fit anywhere else"""
import re
import hashlib
from importlib import import_module
| {"golden_diff": "diff --git a/scrapy/utils/misc.py b/scrapy/utils/misc.py\n--- a/scrapy/utils/misc.py\n+++ b/scrapy/utils/misc.py\n@@ -1,4 +1,4 @@\n-\"\"\"Helper functions which doesn't fit anywhere else\"\"\"\n+\"\"\"Helper functions which don't fit anywhere else\"\"\"\n import re\n import hashlib\n from importlib import import_module\n", "issue": "signals docs are confusing\nIt seems it is not explained how to connect a callback to a singnal anywhere in Scrapy docs.\n\nhttp://doc.scrapy.org/en/latest/topics/signals.html tells:\n\n> You can connect to signals (or send your own) through the [Signals API](http://doc.scrapy.org/en/latest/topics/api.html#topics-api-signals).\n\nBut if you follow this link you get docs for scrapy.signalmanager.SignalManager - that's fine, but it is not explained where to get a SignalManager instance from.\n\nThere is an example in Extension docs (http://doc.scrapy.org/en/latest/topics/extensions.html#sample-extension), but\n\na) this is just an example;\nb) it is not explained that crawler.signals is a SignalManager instance;\nc) this example is neither in Signals docs nor in SignalManager docs.\n\nThere is also a bit of information here: http://doc.scrapy.org/en/latest/topics/api.html#scrapy.crawler.Crawler.signals, but\n\na) it is not linked to neither from Signal docs nor from SignalManager, so you can't find it if you don't know about it already;\nb) it is not explained that crawler.signals is the only way to access signals.\n\nSo in the end users may get some luck connecting signals if they start from Crawler docs, but almost no luck if they start from Signals docs.\n\n", "before_files": [{"content": "\"\"\"Helper functions which doesn't fit anywhere else\"\"\"\nimport re\nimport hashlib\nfrom importlib import import_module\nfrom pkgutil import iter_modules\n\nimport six\nfrom w3lib.html import replace_entities\n\nfrom scrapy.utils.python import flatten, to_unicode\nfrom scrapy.item import BaseItem\n\n\n_ITERABLE_SINGLE_VALUES = dict, BaseItem, six.text_type, bytes\n\n\ndef arg_to_iter(arg):\n \"\"\"Convert an argument to an iterable. The argument can be a None, single\n value, or an iterable.\n\n Exception: if arg is a dict, [arg] will be returned\n \"\"\"\n if arg is None:\n return []\n elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):\n return arg\n else:\n return [arg]\n\n\ndef load_object(path):\n \"\"\"Load an object given its absolute object path, and return it.\n\n object can be a class, function, variable or an instance.\n path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'\n \"\"\"\n\n try:\n dot = path.rindex('.')\n except ValueError:\n raise ValueError(\"Error loading object '%s': not a full path\" % path)\n\n module, name = path[:dot], path[dot+1:]\n mod = import_module(module)\n\n try:\n obj = getattr(mod, name)\n except AttributeError:\n raise NameError(\"Module '%s' doesn't define any object named '%s'\" % (module, name))\n\n return obj\n\n\ndef walk_modules(path):\n \"\"\"Loads a module and all its submodules from the given module path and\n returns them. If *any* module throws an exception while importing, that\n exception is thrown back.\n\n For example: walk_modules('scrapy.utils')\n \"\"\"\n\n mods = []\n mod = import_module(path)\n mods.append(mod)\n if hasattr(mod, '__path__'):\n for _, subpath, ispkg in iter_modules(mod.__path__):\n fullpath = path + '.' 
+ subpath\n if ispkg:\n mods += walk_modules(fullpath)\n else:\n submod = import_module(fullpath)\n mods.append(submod)\n return mods\n\n\ndef extract_regex(regex, text, encoding='utf-8'):\n \"\"\"Extract a list of unicode strings from the given text/encoding using the following policies:\n\n * if the regex contains a named group called \"extract\" that will be returned\n * if the regex contains multiple numbered groups, all those will be returned (flattened)\n * if the regex doesn't contain any group the entire regex matching is returned\n \"\"\"\n\n if isinstance(regex, six.string_types):\n regex = re.compile(regex, re.UNICODE)\n\n try:\n strings = [regex.search(text).group('extract')] # named group\n except:\n strings = regex.findall(text) # full regex or numbered groups\n strings = flatten(strings)\n\n if isinstance(text, six.text_type):\n return [replace_entities(s, keep=['lt', 'amp']) for s in strings]\n else:\n return [replace_entities(to_unicode(s, encoding), keep=['lt', 'amp'])\n for s in strings]\n\n\ndef md5sum(file):\n \"\"\"Calculate the md5 checksum of a file-like object without reading its\n whole content in memory.\n\n >>> from io import BytesIO\n >>> md5sum(BytesIO(b'file content to hash'))\n '784406af91dd5a54fbb9c84c2236595a'\n \"\"\"\n m = hashlib.md5()\n while True:\n d = file.read(8096)\n if not d:\n break\n m.update(d)\n return m.hexdigest()\n\ndef rel_has_nofollow(rel):\n \"\"\"Return True if link rel attribute has nofollow type\"\"\"\n return True if rel is not None and 'nofollow' in rel.split() else False\n \n", "path": "scrapy/utils/misc.py"}]} | 1,927 | 78 |
gh_patches_debug_29045 | rasdani/github-patches | git_diff | litestar-org__litestar-2864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: OpenAPI schema generation fails due to same operation IDs
### Description
If two routes with the same path but different methods are defined, then OpenAPI generation fails because both of them end up with the same operation ID. After running `git bisect`, #2805 seems to have introduced this.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import get, post
from litestar.app import Litestar
from litestar.testing import create_test_client
@post("/")
async def post_handler() -> None:
...
@get("/")
async def get_handler() -> None:
...
with create_test_client([post_handler, get_handler]) as client:
response = client.get("/schema/openapi.json")
assert response.status_code == 200
```
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
HEAD
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/2863">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of litestar/_openapi/plugin.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from litestar._openapi.datastructures import OpenAPIContext
6 from litestar._openapi.path_item import create_path_item_for_route
7 from litestar.exceptions import ImproperlyConfiguredException
8 from litestar.plugins import InitPluginProtocol
9 from litestar.plugins.base import ReceiveRoutePlugin
10 from litestar.routes import HTTPRoute
11
12 if TYPE_CHECKING:
13 from litestar.app import Litestar
14 from litestar.config.app import AppConfig
15 from litestar.openapi.config import OpenAPIConfig
16 from litestar.openapi.spec import OpenAPI
17 from litestar.routes import BaseRoute
18
19
20 class OpenAPIPlugin(InitPluginProtocol, ReceiveRoutePlugin):
21 __slots__ = (
22 "app",
23 "included_routes",
24 "_openapi_config",
25 "_openapi_schema",
26 )
27
28 def __init__(self, app: Litestar) -> None:
29 self.app = app
30 self.included_routes: list[HTTPRoute] = []
31 self._openapi_config: OpenAPIConfig | None = None
32 self._openapi_schema: OpenAPI | None = None
33
34 def _build_openapi_schema(self) -> OpenAPI:
35 openapi = self.openapi_config.to_openapi_schema()
36 context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)
37 openapi.paths = {
38 route.path_format or "/": create_path_item_for_route(context, route) for route in self.included_routes
39 }
40 openapi.components.schemas = context.schema_registry.generate_components_schemas()
41 return openapi
42
43 def provide_openapi(self) -> OpenAPI:
44 if not self._openapi_schema:
45 self._openapi_schema = self._build_openapi_schema()
46 return self._openapi_schema
47
48 def on_app_init(self, app_config: AppConfig) -> AppConfig:
49 if app_config.openapi_config:
50 self._openapi_config = app_config.openapi_config
51 app_config.route_handlers.append(self.openapi_config.openapi_controller)
52 return app_config
53
54 @property
55 def openapi_config(self) -> OpenAPIConfig:
56 if not self._openapi_config:
57 raise ImproperlyConfiguredException("OpenAPIConfig not initialized")
58 return self._openapi_config
59
60 def receive_route(self, route: BaseRoute) -> None:
61 if not isinstance(route, HTTPRoute):
62 return
63
64 if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):
65 # Force recompute the schema if a new route is added
66 self._openapi_schema = None
67 self.included_routes.append(route)
68
[end of litestar/_openapi/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/_openapi/plugin.py b/litestar/_openapi/plugin.py
--- a/litestar/_openapi/plugin.py
+++ b/litestar/_openapi/plugin.py
@@ -27,7 +27,7 @@
def __init__(self, app: Litestar) -> None:
self.app = app
- self.included_routes: list[HTTPRoute] = []
+ self.included_routes: dict[str, HTTPRoute] = {}
self._openapi_config: OpenAPIConfig | None = None
self._openapi_schema: OpenAPI | None = None
@@ -35,7 +35,8 @@
openapi = self.openapi_config.to_openapi_schema()
context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)
openapi.paths = {
- route.path_format or "/": create_path_item_for_route(context, route) for route in self.included_routes
+ route.path_format or "/": create_path_item_for_route(context, route)
+ for route in self.included_routes.values()
}
openapi.components.schemas = context.schema_registry.generate_components_schemas()
return openapi
@@ -64,4 +65,4 @@
if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):
# Force recompute the schema if a new route is added
self._openapi_schema = None
- self.included_routes.append(route)
+ self.included_routes[route.path] = route
| {"golden_diff": "diff --git a/litestar/_openapi/plugin.py b/litestar/_openapi/plugin.py\n--- a/litestar/_openapi/plugin.py\n+++ b/litestar/_openapi/plugin.py\n@@ -27,7 +27,7 @@\n \n def __init__(self, app: Litestar) -> None:\n self.app = app\n- self.included_routes: list[HTTPRoute] = []\n+ self.included_routes: dict[str, HTTPRoute] = {}\n self._openapi_config: OpenAPIConfig | None = None\n self._openapi_schema: OpenAPI | None = None\n \n@@ -35,7 +35,8 @@\n openapi = self.openapi_config.to_openapi_schema()\n context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)\n openapi.paths = {\n- route.path_format or \"/\": create_path_item_for_route(context, route) for route in self.included_routes\n+ route.path_format or \"/\": create_path_item_for_route(context, route)\n+ for route in self.included_routes.values()\n }\n openapi.components.schemas = context.schema_registry.generate_components_schemas()\n return openapi\n@@ -64,4 +65,4 @@\n if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):\n # Force recompute the schema if a new route is added\n self._openapi_schema = None\n- self.included_routes.append(route)\n+ self.included_routes[route.path] = route\n", "issue": "Bug: OpenAPI schema generation fails due to same operation IDs\n### Description\n\nIf two routes with the same path, but different methods are defined then the OpenAPI generation fails due to both of them having the same value for operation ID. After running `git bisect`, #2805 seems to have introduced this.\n\n### URL to code causing the issue\n\n_No response_\n\n### MCVE\n\n```python\nfrom litestar import get, post\r\nfrom litestar.app import Litestar\r\nfrom litestar.testing import create_test_client\r\n\r\n\r\n@post(\"/\")\r\nasync def post_handler() -> None:\r\n ...\r\n\r\n\r\n@get(\"/\")\r\nasync def get_handler() -> None:\r\n ...\r\n\r\n\r\nwith create_test_client([post_handler, get_handler]) as client:\r\n response = client.get(\"/schema/openapi.json\")\r\n\r\n assert response.status_code == 200\n```\n\n\n### Steps to reproduce\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Litestar Version\n\nHEAD\n\n### Platform\n\n- [ ] Linux\n- [ ] Mac\n- [ ] Windows\n- [ ] Other (Please specify in the description above)\n\n<!-- POLAR PLEDGE BADGE START -->\n---\n> [!NOTE] \n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\n>\n> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)\n> * If you would like to see an issue prioritized, make a pledge towards it!\n> * We receive the pledge once the issue is completed & verified\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2863\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2863/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom 
litestar._openapi.datastructures import OpenAPIContext\nfrom litestar._openapi.path_item import create_path_item_for_route\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.plugins import InitPluginProtocol\nfrom litestar.plugins.base import ReceiveRoutePlugin\nfrom litestar.routes import HTTPRoute\n\nif TYPE_CHECKING:\n from litestar.app import Litestar\n from litestar.config.app import AppConfig\n from litestar.openapi.config import OpenAPIConfig\n from litestar.openapi.spec import OpenAPI\n from litestar.routes import BaseRoute\n\n\nclass OpenAPIPlugin(InitPluginProtocol, ReceiveRoutePlugin):\n __slots__ = (\n \"app\",\n \"included_routes\",\n \"_openapi_config\",\n \"_openapi_schema\",\n )\n\n def __init__(self, app: Litestar) -> None:\n self.app = app\n self.included_routes: list[HTTPRoute] = []\n self._openapi_config: OpenAPIConfig | None = None\n self._openapi_schema: OpenAPI | None = None\n\n def _build_openapi_schema(self) -> OpenAPI:\n openapi = self.openapi_config.to_openapi_schema()\n context = OpenAPIContext(openapi_config=self.openapi_config, plugins=self.app.plugins.openapi)\n openapi.paths = {\n route.path_format or \"/\": create_path_item_for_route(context, route) for route in self.included_routes\n }\n openapi.components.schemas = context.schema_registry.generate_components_schemas()\n return openapi\n\n def provide_openapi(self) -> OpenAPI:\n if not self._openapi_schema:\n self._openapi_schema = self._build_openapi_schema()\n return self._openapi_schema\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n if app_config.openapi_config:\n self._openapi_config = app_config.openapi_config\n app_config.route_handlers.append(self.openapi_config.openapi_controller)\n return app_config\n\n @property\n def openapi_config(self) -> OpenAPIConfig:\n if not self._openapi_config:\n raise ImproperlyConfiguredException(\"OpenAPIConfig not initialized\")\n return self._openapi_config\n\n def receive_route(self, route: BaseRoute) -> None:\n if not isinstance(route, HTTPRoute):\n return\n\n if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):\n # Force recompute the schema if a new route is added\n self._openapi_schema = None\n self.included_routes.append(route)\n", "path": "litestar/_openapi/plugin.py"}]} | 1,757 | 340 |
gh_patches_debug_3371 | rasdani/github-patches | git_diff | e2nIEE__pandapower-1661 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pandapower.networks: nets have wrong order of columns
Example for net = nw.case24_ieee_rts():
```python
net.bus.head()
Out[43]:
in_service max_vm_pu min_vm_pu name type vn_kv zone
0 True 1.1 0.9 a b 138.0 1.0
1 True 1.1 0.9 b b 138.0 1.0
2 True 1.1 0.9 c b 138.0 1.0
3 True 1.1 0.9 d b 138.0 1.0
4 True 1.1 0.9 e b 138.0 1.0
```
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2022 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6 from setuptools import setup, find_packages
7 import re
8
9 with open('README.rst', 'rb') as f:
10 install = f.read().decode('utf-8')
11
12 with open('CHANGELOG.rst', 'rb') as f:
13 changelog = f.read().decode('utf-8')
14
15 classifiers = [
16 'Development Status :: 5 - Production/Stable',
17 'Environment :: Console',
18 'Intended Audience :: Developers',
19 'Intended Audience :: Education',
20 'Intended Audience :: Science/Research',
21 'License :: OSI Approved :: BSD License',
22 'Natural Language :: English',
23 'Operating System :: OS Independent',
24 'Programming Language :: Python',
25 'Programming Language :: Python :: 3']
26
27 with open('.github/workflows/github_test_action.yml', 'rb') as f:
28 lines = f.read().decode('utf-8')
29 versions = set(re.findall('3.[7-9]', lines)) | set(re.findall('3.1[0-9]', lines))
30 for version in sorted(versions):
31 classifiers.append('Programming Language :: Python :: %s' % version)
32
33 long_description = '\n\n'.join((install, changelog))
34
35 setup(
36 name='pandapower',
37 version='2.10.1',
38 author='Leon Thurner, Alexander Scheidler',
39 author_email='[email protected], [email protected]',
40 description='An easy to use open source tool for power system modeling, analysis and optimization with a high degree of automation.',
41 long_description=long_description,
42 long_description_content_type='text/x-rst',
43 url='http://www.pandapower.org',
44 license='BSD',
45 install_requires=["pandas>=1.0",
46 "networkx>=2.5",
47 "scipy",
48 "numpy>=0.11",
49 "packaging",
50 "tqdm",
51 "deepdiff"],
52 extras_require={
53 "docs": ["numpydoc", "sphinx", "sphinx_rtd_theme"],
54 "plotting": ["plotly", "matplotlib", "python-igraph", "geopandas"],
55 # "shapely", "pyproj" are depedencies of geopandas and so already available;
56 # "base64", "hashlib", "zlib" produce installing problems, so they are not included
57 "test": ["pytest", "pytest-xdist"],
58 "performance": ["ortools"], # , "lightsim2grid"],
59 "fileio": ["xlsxwriter", "openpyxl", "cryptography", "geopandas"],
60 # "fiona" is a depedency of geopandas and so already available
61 "converter": ["matpowercaseframes"],
62 "all": ["numpydoc", "sphinx", "sphinx_rtd_theme",
63 "plotly", "matplotlib", "python-igraph", "geopandas",
64 "pytest", "pytest-xdist",
65 "ortools", # lightsim2grid,
66 "xlsxwriter", "openpyxl", "cryptography",
67 "matpowercaseframes"
68 ]}, # "shapely", "pyproj", "fiona" are depedencies of geopandas and so already available
69 # "hashlib", "zlib", "base64" produce installing problems, so it is not included
70 packages=find_packages(),
71 include_package_data=True,
72 classifiers=classifiers
73 )
74
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@
install_requires=["pandas>=1.0",
"networkx>=2.5",
"scipy",
- "numpy>=0.11",
+ "numpy",
"packaging",
"tqdm",
"deepdiff"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,7 +45,7 @@\n install_requires=[\"pandas>=1.0\",\n \"networkx>=2.5\",\n \"scipy\",\n- \"numpy>=0.11\",\n+ \"numpy\",\n \"packaging\",\n \"tqdm\",\n \"deepdiff\"],\n", "issue": "pandapower.networks: nets have wrong order of columns\nExample for net = nw.case24_ieee_rts():\r\n\r\n```python\r\nnet.bus.head()\r\nOut[43]: \r\n in_service max_vm_pu min_vm_pu name type vn_kv zone\r\n0 True 1.1 0.9 a b 138.0 1.0\r\n1 True 1.1 0.9 b b 138.0 1.0\r\n2 True 1.1 0.9 c b 138.0 1.0\r\n3 True 1.1 0.9 d b 138.0 1.0\r\n4 True 1.1 0.9 e b 138.0 1.0\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2022 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\nfrom setuptools import setup, find_packages\nimport re\n\nwith open('README.rst', 'rb') as f:\n install = f.read().decode('utf-8')\n\nwith open('CHANGELOG.rst', 'rb') as f:\n changelog = f.read().decode('utf-8')\n\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3']\n\nwith open('.github/workflows/github_test_action.yml', 'rb') as f:\n lines = f.read().decode('utf-8')\n versions = set(re.findall('3.[7-9]', lines)) | set(re.findall('3.1[0-9]', lines))\n for version in sorted(versions):\n classifiers.append('Programming Language :: Python :: %s' % version)\n\nlong_description = '\\n\\n'.join((install, changelog))\n\nsetup(\n name='pandapower',\n version='2.10.1',\n author='Leon Thurner, Alexander Scheidler',\n author_email='[email protected], [email protected]',\n description='An easy to use open source tool for power system modeling, analysis and optimization with a high degree of automation.',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='http://www.pandapower.org',\n license='BSD',\n install_requires=[\"pandas>=1.0\",\n \"networkx>=2.5\",\n \"scipy\",\n \"numpy>=0.11\",\n \"packaging\",\n \"tqdm\",\n \"deepdiff\"],\n extras_require={\n \"docs\": [\"numpydoc\", \"sphinx\", \"sphinx_rtd_theme\"],\n \"plotting\": [\"plotly\", \"matplotlib\", \"python-igraph\", \"geopandas\"],\n # \"shapely\", \"pyproj\" are depedencies of geopandas and so already available;\n # \"base64\", \"hashlib\", \"zlib\" produce installing problems, so they are not included\n \"test\": [\"pytest\", \"pytest-xdist\"],\n \"performance\": [\"ortools\"], # , \"lightsim2grid\"],\n \"fileio\": [\"xlsxwriter\", \"openpyxl\", \"cryptography\", \"geopandas\"],\n # \"fiona\" is a depedency of geopandas and so already available\n \"converter\": [\"matpowercaseframes\"],\n \"all\": [\"numpydoc\", \"sphinx\", \"sphinx_rtd_theme\",\n \"plotly\", \"matplotlib\", \"python-igraph\", \"geopandas\",\n \"pytest\", \"pytest-xdist\",\n \"ortools\", # lightsim2grid,\n \"xlsxwriter\", \"openpyxl\", \"cryptography\",\n \"matpowercaseframes\"\n ]}, # \"shapely\", \"pyproj\", \"fiona\" are depedencies of geopandas and so already available\n # \"hashlib\", \"zlib\", \"base64\" produce installing problems, so it is not included\n packages=find_packages(),\n include_package_data=True,\n 
classifiers=classifiers\n)\n", "path": "setup.py"}]} | 1,697 | 89 |
gh_patches_debug_20828 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-619 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reading goal status doesn't set plurals correctly
When someone is only planning to read 1 book, the status should say "1 book" not "1 books"
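A minimal sketch of the pluralization being asked for, using Django's `ngettext` (the helper name is made up; the actual fix may build the text elsewhere, e.g. in a template):

```python
from django.utils.translation import ngettext


def goal_status_text(count: int, year: int) -> str:
    # "set a goal to read 1 book in 2021" vs. "set a goal to read 5 books in 2021"
    return ngettext(
        "set a goal to read %(count)d book in %(year)d",
        "set a goal to read %(count)d books in %(year)d",
        count,
    ) % {"count": count, "year": year}
```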
</issue>
<code>
[start of bookwyrm/views/goal.py]
1 ''' non-interactive pages '''
2 from django.contrib.auth.decorators import login_required
3 from django.http import HttpResponseNotFound
4 from django.shortcuts import redirect
5 from django.template.response import TemplateResponse
6 from django.utils.decorators import method_decorator
7 from django.views import View
8
9 from bookwyrm import forms, models
10 from bookwyrm.status import create_generated_note
11 from .helpers import get_user_from_username, object_visible_to_user
12
13
14 # pylint: disable= no-self-use
15 @method_decorator(login_required, name='dispatch')
16 class Goal(View):
17 ''' track books for the year '''
18 def get(self, request, username, year):
19 ''' reading goal page '''
20 user = get_user_from_username(username)
21 year = int(year)
22 goal = models.AnnualGoal.objects.filter(
23 year=year, user=user
24 ).first()
25 if not goal and user != request.user:
26 return HttpResponseNotFound()
27
28 if goal and not object_visible_to_user(request.user, goal):
29 return HttpResponseNotFound()
30
31 data = {
32 'title': '%s\'s %d Reading' % (user.display_name, year),
33 'goal_form': forms.GoalForm(instance=goal),
34 'goal': goal,
35 'user': user,
36 'year': year,
37 'is_self': request.user == user,
38 }
39 return TemplateResponse(request, 'goal.html', data)
40
41
42 def post(self, request, username, year):
43 ''' update or create an annual goal '''
44 user = get_user_from_username(username)
45 if user != request.user:
46 return HttpResponseNotFound()
47
48 year = int(year)
49 goal = models.AnnualGoal.objects.filter(
50 year=year, user=request.user
51 ).first()
52 form = forms.GoalForm(request.POST, instance=goal)
53 if not form.is_valid():
54 data = {
55 'title': '%s\'s %d Reading' % (request.user.display_name, year),
56 'goal_form': form,
57 'goal': goal,
58 'year': year,
59 }
60 return TemplateResponse(request, 'goal.html', data)
61 goal = form.save()
62
63 if request.POST.get('post-status'):
64 # create status, if appropraite
65 create_generated_note(
66 request.user,
67 'set a goal to read %d books in %d' % (goal.goal, goal.year),
68 privacy=goal.privacy
69 )
70
71 return redirect(request.headers.get('Referer', '/'))
72
[end of bookwyrm/views/goal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/goal.py b/bookwyrm/views/goal.py
--- a/bookwyrm/views/goal.py
+++ b/bookwyrm/views/goal.py
@@ -2,6 +2,7 @@
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseNotFound
from django.shortcuts import redirect
+from django.template.loader import get_template
from django.template.response import TemplateResponse
from django.utils.decorators import method_decorator
from django.views import View
@@ -62,9 +63,10 @@
if request.POST.get('post-status'):
# create status, if appropraite
+ template = get_template('snippets/generated_status/goal.html')
create_generated_note(
request.user,
- 'set a goal to read %d books in %d' % (goal.goal, goal.year),
+ template.render({'goal': goal, 'user': request.user}).strip(),
privacy=goal.privacy
)
| {"golden_diff": "diff --git a/bookwyrm/views/goal.py b/bookwyrm/views/goal.py\n--- a/bookwyrm/views/goal.py\n+++ b/bookwyrm/views/goal.py\n@@ -2,6 +2,7 @@\n from django.contrib.auth.decorators import login_required\n from django.http import HttpResponseNotFound\n from django.shortcuts import redirect\n+from django.template.loader import get_template\n from django.template.response import TemplateResponse\n from django.utils.decorators import method_decorator\n from django.views import View\n@@ -62,9 +63,10 @@\n \n if request.POST.get('post-status'):\n # create status, if appropraite\n+ template = get_template('snippets/generated_status/goal.html')\n create_generated_note(\n request.user,\n- 'set a goal to read %d books in %d' % (goal.goal, goal.year),\n+ template.render({'goal': goal, 'user': request.user}).strip(),\n privacy=goal.privacy\n )\n", "issue": "Reading goal status doesn't set plurals correctly\nWhen someone is only planning to read 1 book, the status should say \"1 book\" not \"1 books\"\n", "before_files": [{"content": "''' non-interactive pages '''\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponseNotFound\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.status import create_generated_note\nfrom .helpers import get_user_from_username, object_visible_to_user\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name='dispatch')\nclass Goal(View):\n ''' track books for the year '''\n def get(self, request, username, year):\n ''' reading goal page '''\n user = get_user_from_username(username)\n year = int(year)\n goal = models.AnnualGoal.objects.filter(\n year=year, user=user\n ).first()\n if not goal and user != request.user:\n return HttpResponseNotFound()\n\n if goal and not object_visible_to_user(request.user, goal):\n return HttpResponseNotFound()\n\n data = {\n 'title': '%s\\'s %d Reading' % (user.display_name, year),\n 'goal_form': forms.GoalForm(instance=goal),\n 'goal': goal,\n 'user': user,\n 'year': year,\n 'is_self': request.user == user,\n }\n return TemplateResponse(request, 'goal.html', data)\n\n\n def post(self, request, username, year):\n ''' update or create an annual goal '''\n user = get_user_from_username(username)\n if user != request.user:\n return HttpResponseNotFound()\n\n year = int(year)\n goal = models.AnnualGoal.objects.filter(\n year=year, user=request.user\n ).first()\n form = forms.GoalForm(request.POST, instance=goal)\n if not form.is_valid():\n data = {\n 'title': '%s\\'s %d Reading' % (request.user.display_name, year),\n 'goal_form': form,\n 'goal': goal,\n 'year': year,\n }\n return TemplateResponse(request, 'goal.html', data)\n goal = form.save()\n\n if request.POST.get('post-status'):\n # create status, if appropraite\n create_generated_note(\n request.user,\n 'set a goal to read %d books in %d' % (goal.goal, goal.year),\n privacy=goal.privacy\n )\n\n return redirect(request.headers.get('Referer', '/'))\n", "path": "bookwyrm/views/goal.py"}]} | 1,228 | 210 |
gh_patches_debug_17133 | rasdani/github-patches | git_diff | python-poetry__poetry-7547 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
add -e/--executable to poetry env info to get the python executable path
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] I have searched the [FAQ](https://python-poetry.org/docs/faq/) and general [documentation](https://python-poetry.org/docs/) and believe that my question is not already covered.
## Feature Request
In addition to the already present `-p/--path` option, add a `-e/--executable` option to return the Python executable path.
My use case: I'm starting to use Taskfile and poetry on some projects; these projects are developed on both Linux and Windows.
I would like to avoid having to install tools such as mypy in the virtual environment, since they can be run from the outside (this also allows me to have faster CI; I have set up a custom Docker image with all the tools needed).
mypy in particular needs to know the exact path of the Python executable to work (passed via the `--python-executable` option), so having a new `poetry env info --executable` option that outputs that path would solve my issue in a cross-platform fashion.
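A sketch of the workflow this would enable, assuming the requested `--executable` flag exists; the helper below is hypothetical, while mypy's `--python-executable` flag is real:

```python
import subprocess


def run_mypy(paths: list[str]) -> int:
    # Ask poetry for the venv's interpreter (the proposed "poetry env info --executable"),
    # then point mypy at it so mypy itself can run outside the virtualenv.
    python_path = subprocess.run(
        ["poetry", "env", "info", "--executable"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return subprocess.call(["mypy", f"--python-executable={python_path}", *paths])
```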
</issue>
<code>
[start of src/poetry/console/commands/env/info.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from cleo.helpers import option
6
7 from poetry.console.commands.command import Command
8
9
10 if TYPE_CHECKING:
11 from poetry.utils.env import Env
12
13
14 class EnvInfoCommand(Command):
15 name = "env info"
16 description = "Displays information about the current environment."
17
18 options = [option("path", "p", "Only display the environment's path.")]
19
20 def handle(self) -> int:
21 from poetry.utils.env import EnvManager
22
23 env = EnvManager(self.poetry).get()
24
25 if self.option("path"):
26 if not env.is_venv():
27 return 1
28
29 self.line(str(env.path))
30
31 return 0
32
33 self._display_complete_info(env)
34 return 0
35
36 def _display_complete_info(self, env: Env) -> None:
37 env_python_version = ".".join(str(s) for s in env.version_info[:3])
38 self.line("")
39 self.line("<b>Virtualenv</b>")
40 listing = [
41 f"<info>Python</info>: <comment>{env_python_version}</>",
42 f"<info>Implementation</info>: <comment>{env.python_implementation}</>",
43 (
44 "<info>Path</info>: "
45 f" <comment>{env.path if env.is_venv() else 'NA'}</>"
46 ),
47 (
48 "<info>Executable</info>: "
49 f" <comment>{env.python if env.is_venv() else 'NA'}</>"
50 ),
51 ]
52 if env.is_venv():
53 listing.append(
54 "<info>Valid</info>: "
55 f" <{'comment' if env.is_sane() else 'error'}>{env.is_sane()}</>"
56 )
57 self.line("\n".join(listing))
58
59 self.line("")
60
61 system_env = env.parent_env
62 python = ".".join(str(v) for v in system_env.version_info[:3])
63 self.line("<b>System</b>")
64 self.line(
65 "\n".join(
66 [
67 f"<info>Platform</info>: <comment>{env.platform}</>",
68 f"<info>OS</info>: <comment>{env.os}</>",
69 f"<info>Python</info>: <comment>{python}</>",
70 f"<info>Path</info>: <comment>{system_env.path}</>",
71 f"<info>Executable</info>: <comment>{system_env.python}</>",
72 ]
73 )
74 )
75
[end of src/poetry/console/commands/env/info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/poetry/console/commands/env/info.py b/src/poetry/console/commands/env/info.py
--- a/src/poetry/console/commands/env/info.py
+++ b/src/poetry/console/commands/env/info.py
@@ -15,7 +15,12 @@
name = "env info"
description = "Displays information about the current environment."
- options = [option("path", "p", "Only display the environment's path.")]
+ options = [
+ option("path", "p", "Only display the environment's path."),
+ option(
+ "executable", "e", "Only display the environment's python executable path."
+ ),
+ ]
def handle(self) -> int:
from poetry.utils.env import EnvManager
@@ -30,6 +35,14 @@
return 0
+ if self.option("executable"):
+ if not env.is_venv():
+ return 1
+
+ self.line(str(env.python))
+
+ return 0
+
self._display_complete_info(env)
return 0
| {"golden_diff": "diff --git a/src/poetry/console/commands/env/info.py b/src/poetry/console/commands/env/info.py\n--- a/src/poetry/console/commands/env/info.py\n+++ b/src/poetry/console/commands/env/info.py\n@@ -15,7 +15,12 @@\n name = \"env info\"\n description = \"Displays information about the current environment.\"\n \n- options = [option(\"path\", \"p\", \"Only display the environment's path.\")]\n+ options = [\n+ option(\"path\", \"p\", \"Only display the environment's path.\"),\n+ option(\n+ \"executable\", \"e\", \"Only display the environment's python executable path.\"\n+ ),\n+ ]\n \n def handle(self) -> int:\n from poetry.utils.env import EnvManager\n@@ -30,6 +35,14 @@\n \n return 0\n \n+ if self.option(\"executable\"):\n+ if not env.is_venv():\n+ return 1\n+\n+ self.line(str(env.python))\n+\n+ return 0\n+\n self._display_complete_info(env)\n return 0\n", "issue": "add -e/--executable to poetry env info to get the python executable path\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] I have searched the [FAQ](https://python-poetry.org/docs/faq/) and general [documentation](https://python-poetry.org/docs/) and believe that my question is not already covered.\r\n\r\n## Feature Request\r\n\r\nin addition to the already present `-p/--path` option, add a `-e/--execuatble` option to return the python executable path.\r\n\r\nMy use case: I'm starting to use Taskfile and poetry on some projects; these project are developed on both linux and windows;\r\n\r\nI would like to avoid having to install tools such as mypy in the virtual environment, since they can be run from the outside (this also allows me to have faster CI, I have set up a custom docker image with all the tools needed).\r\n\r\nmypy in particular wants to know the exact path of the python executable to work (passed as `--python-executable` option), so having a new `poetry env info --executable` option that outputs the python path would solve my issue in a cross-platform fashion.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom cleo.helpers import option\n\nfrom poetry.console.commands.command import Command\n\n\nif TYPE_CHECKING:\n from poetry.utils.env import Env\n\n\nclass EnvInfoCommand(Command):\n name = \"env info\"\n description = \"Displays information about the current environment.\"\n\n options = [option(\"path\", \"p\", \"Only display the environment's path.\")]\n\n def handle(self) -> int:\n from poetry.utils.env import EnvManager\n\n env = EnvManager(self.poetry).get()\n\n if self.option(\"path\"):\n if not env.is_venv():\n return 1\n\n self.line(str(env.path))\n\n return 0\n\n self._display_complete_info(env)\n return 0\n\n def _display_complete_info(self, env: Env) -> None:\n env_python_version = \".\".join(str(s) for s in env.version_info[:3])\n self.line(\"\")\n self.line(\"<b>Virtualenv</b>\")\n listing = [\n f\"<info>Python</info>: <comment>{env_python_version}</>\",\n f\"<info>Implementation</info>: <comment>{env.python_implementation}</>\",\n (\n \"<info>Path</info>: \"\n f\" <comment>{env.path if env.is_venv() else 'NA'}</>\"\n ),\n (\n \"<info>Executable</info>: \"\n f\" <comment>{env.python if env.is_venv() else 'NA'}</>\"\n ),\n ]\n if env.is_venv():\n listing.append(\n \"<info>Valid</info>: \"\n f\" <{'comment' if env.is_sane() else 'error'}>{env.is_sane()}</>\"\n )\n self.line(\"\\n\".join(listing))\n\n self.line(\"\")\n\n system_env 
= env.parent_env\n python = \".\".join(str(v) for v in system_env.version_info[:3])\n self.line(\"<b>System</b>\")\n self.line(\n \"\\n\".join(\n [\n f\"<info>Platform</info>: <comment>{env.platform}</>\",\n f\"<info>OS</info>: <comment>{env.os}</>\",\n f\"<info>Python</info>: <comment>{python}</>\",\n f\"<info>Path</info>: <comment>{system_env.path}</>\",\n f\"<info>Executable</info>: <comment>{system_env.python}</>\",\n ]\n )\n )\n", "path": "src/poetry/console/commands/env/info.py"}]} | 1,492 | 246 |
gh_patches_debug_38758 | rasdani/github-patches | git_diff | python-discord__site-1104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider dropping deploy preview support for redirects app
Do we need previews of the legacy redirects?
If not, we may be able to remove a lot of code from the redirects app.
</issue>
<code>
[start of pydis_site/apps/redirect/urls.py]
1 import dataclasses
2 import re
3
4 import yaml
5 from django import conf
6 from django.http import HttpResponse
7 from django.urls import URLPattern, path
8 from django_distill import distill_path
9
10 from pydis_site import settings
11 from pydis_site.apps.content import urls as pages_urls
12 from pydis_site.apps.redirect.views import CustomRedirectView
13 from pydis_site.apps.resources import urls as resources_urls
14
15 app_name = "redirect"
16
17
18 __PARAMETER_REGEX = re.compile(r"<\w+:\w+>")
19 REDIRECT_TEMPLATE = "<meta http-equiv=\"refresh\" content=\"0; URL={url}\"/>"
20
21
22 @dataclasses.dataclass(frozen=True)
23 class Redirect:
24 """Metadata about a redirect route."""
25
26 original_path: str
27 redirect_route: str
28 redirect_arguments: tuple[str] = tuple()
29
30 prefix_redirect: bool = False
31
32
33 def map_redirect(name: str, data: Redirect) -> list[URLPattern]:
34 """Return a pattern using the Redirects app, or a static HTML redirect for static builds."""
35 if not settings.STATIC_BUILD:
36 # Normal dynamic redirect
37 return [path(
38 data.original_path,
39 CustomRedirectView.as_view(
40 pattern_name=data.redirect_route,
41 static_args=tuple(data.redirect_arguments),
42 prefix_redirect=data.prefix_redirect
43 ),
44 name=name
45 )]
46
47 # Create static HTML redirects for static builds
48 new_app_name = data.redirect_route.split(":")[0]
49
50 if __PARAMETER_REGEX.search(data.original_path):
51 # Redirects for paths which accept parameters
52 # We generate an HTML redirect file for all possible entries
53 paths = []
54
55 class RedirectFunc:
56 def __init__(self, new_url: str, _name: str):
57 self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))
58 self.__qualname__ = _name
59
60 def __call__(self, *args, **kwargs):
61 return self.result
62
63 if new_app_name == resources_urls.app_name:
64 items = resources_urls.get_all_resources()
65 elif new_app_name == pages_urls.app_name:
66 items = pages_urls.get_all_pages()
67 else:
68 raise ValueError(f"Unknown app in redirect: {new_app_name}")
69
70 for item in items:
71 entry = next(iter(item.values()))
72
73 # Replace dynamic redirect with concrete path
74 concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)
75 new_redirect = f"/{new_app_name}/{entry}"
76 pattern_name = f"{name}_{entry}"
77
78 paths.append(distill_path(
79 concrete_path,
80 RedirectFunc(new_redirect, pattern_name),
81 name=pattern_name
82 ))
83
84 return paths
85
86 redirect_path_name = "pages" if new_app_name == "content" else new_app_name
87 if len(data.redirect_arguments) > 0:
88 redirect_arg = data.redirect_arguments[0]
89 else:
90 redirect_arg = "resources/"
91 new_redirect = f"/{redirect_path_name}/{redirect_arg}"
92
93 if new_redirect == "/resources/resources/":
94 new_redirect = "/resources/"
95
96 return [distill_path(
97 data.original_path,
98 lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),
99 name=name,
100 )]
101
102
103 urlpatterns = []
104 for _name, _data in yaml.safe_load(conf.settings.REDIRECTIONS_PATH.read_text()).items():
105 urlpatterns.extend(map_redirect(_name, Redirect(**_data)))
106
[end of pydis_site/apps/redirect/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pydis_site/apps/redirect/urls.py b/pydis_site/apps/redirect/urls.py
--- a/pydis_site/apps/redirect/urls.py
+++ b/pydis_site/apps/redirect/urls.py
@@ -3,14 +3,9 @@
import yaml
from django import conf
-from django.http import HttpResponse
from django.urls import URLPattern, path
-from django_distill import distill_path
-from pydis_site import settings
-from pydis_site.apps.content import urls as pages_urls
from pydis_site.apps.redirect.views import CustomRedirectView
-from pydis_site.apps.resources import urls as resources_urls
app_name = "redirect"
@@ -31,72 +26,15 @@
def map_redirect(name: str, data: Redirect) -> list[URLPattern]:
- """Return a pattern using the Redirects app, or a static HTML redirect for static builds."""
- if not settings.STATIC_BUILD:
- # Normal dynamic redirect
- return [path(
- data.original_path,
- CustomRedirectView.as_view(
- pattern_name=data.redirect_route,
- static_args=tuple(data.redirect_arguments),
- prefix_redirect=data.prefix_redirect
- ),
- name=name
- )]
-
- # Create static HTML redirects for static builds
- new_app_name = data.redirect_route.split(":")[0]
-
- if __PARAMETER_REGEX.search(data.original_path):
- # Redirects for paths which accept parameters
- # We generate an HTML redirect file for all possible entries
- paths = []
-
- class RedirectFunc:
- def __init__(self, new_url: str, _name: str):
- self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))
- self.__qualname__ = _name
-
- def __call__(self, *args, **kwargs):
- return self.result
-
- if new_app_name == resources_urls.app_name:
- items = resources_urls.get_all_resources()
- elif new_app_name == pages_urls.app_name:
- items = pages_urls.get_all_pages()
- else:
- raise ValueError(f"Unknown app in redirect: {new_app_name}")
-
- for item in items:
- entry = next(iter(item.values()))
-
- # Replace dynamic redirect with concrete path
- concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)
- new_redirect = f"/{new_app_name}/{entry}"
- pattern_name = f"{name}_{entry}"
-
- paths.append(distill_path(
- concrete_path,
- RedirectFunc(new_redirect, pattern_name),
- name=pattern_name
- ))
-
- return paths
-
- redirect_path_name = "pages" if new_app_name == "content" else new_app_name
- if len(data.redirect_arguments) > 0:
- redirect_arg = data.redirect_arguments[0]
- else:
- redirect_arg = "resources/"
- new_redirect = f"/{redirect_path_name}/{redirect_arg}"
-
- if new_redirect == "/resources/resources/":
- new_redirect = "/resources/"
-
- return [distill_path(
+ """Return a pattern using the Redirects app."""
+ return [path(
data.original_path,
- lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),
- name=name,
+ CustomRedirectView.as_view(
+ pattern_name=data.redirect_route,
+ static_args=tuple(data.redirect_arguments),
+ prefix_redirect=data.prefix_redirect
+ ),
+ name=name
)]
| {"golden_diff": "diff --git a/pydis_site/apps/redirect/urls.py b/pydis_site/apps/redirect/urls.py\n--- a/pydis_site/apps/redirect/urls.py\n+++ b/pydis_site/apps/redirect/urls.py\n@@ -3,14 +3,9 @@\n \n import yaml\n from django import conf\n-from django.http import HttpResponse\n from django.urls import URLPattern, path\n-from django_distill import distill_path\n \n-from pydis_site import settings\n-from pydis_site.apps.content import urls as pages_urls\n from pydis_site.apps.redirect.views import CustomRedirectView\n-from pydis_site.apps.resources import urls as resources_urls\n \n app_name = \"redirect\"\n \n@@ -31,72 +26,15 @@\n \n \n def map_redirect(name: str, data: Redirect) -> list[URLPattern]:\n- \"\"\"Return a pattern using the Redirects app, or a static HTML redirect for static builds.\"\"\"\n- if not settings.STATIC_BUILD:\n- # Normal dynamic redirect\n- return [path(\n- data.original_path,\n- CustomRedirectView.as_view(\n- pattern_name=data.redirect_route,\n- static_args=tuple(data.redirect_arguments),\n- prefix_redirect=data.prefix_redirect\n- ),\n- name=name\n- )]\n-\n- # Create static HTML redirects for static builds\n- new_app_name = data.redirect_route.split(\":\")[0]\n-\n- if __PARAMETER_REGEX.search(data.original_path):\n- # Redirects for paths which accept parameters\n- # We generate an HTML redirect file for all possible entries\n- paths = []\n-\n- class RedirectFunc:\n- def __init__(self, new_url: str, _name: str):\n- self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))\n- self.__qualname__ = _name\n-\n- def __call__(self, *args, **kwargs):\n- return self.result\n-\n- if new_app_name == resources_urls.app_name:\n- items = resources_urls.get_all_resources()\n- elif new_app_name == pages_urls.app_name:\n- items = pages_urls.get_all_pages()\n- else:\n- raise ValueError(f\"Unknown app in redirect: {new_app_name}\")\n-\n- for item in items:\n- entry = next(iter(item.values()))\n-\n- # Replace dynamic redirect with concrete path\n- concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)\n- new_redirect = f\"/{new_app_name}/{entry}\"\n- pattern_name = f\"{name}_{entry}\"\n-\n- paths.append(distill_path(\n- concrete_path,\n- RedirectFunc(new_redirect, pattern_name),\n- name=pattern_name\n- ))\n-\n- return paths\n-\n- redirect_path_name = \"pages\" if new_app_name == \"content\" else new_app_name\n- if len(data.redirect_arguments) > 0:\n- redirect_arg = data.redirect_arguments[0]\n- else:\n- redirect_arg = \"resources/\"\n- new_redirect = f\"/{redirect_path_name}/{redirect_arg}\"\n-\n- if new_redirect == \"/resources/resources/\":\n- new_redirect = \"/resources/\"\n-\n- return [distill_path(\n+ \"\"\"Return a pattern using the Redirects app.\"\"\"\n+ return [path(\n data.original_path,\n- lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),\n- name=name,\n+ CustomRedirectView.as_view(\n+ pattern_name=data.redirect_route,\n+ static_args=tuple(data.redirect_arguments),\n+ prefix_redirect=data.prefix_redirect\n+ ),\n+ name=name\n )]\n", "issue": "Consider dropping deploy preview support for redirects app\nDo we need previews of the legacy redirects?\n\nIf not, we may be able to remove a lot of code from the redirects app.\n", "before_files": [{"content": "import dataclasses\nimport re\n\nimport yaml\nfrom django import conf\nfrom django.http import HttpResponse\nfrom django.urls import URLPattern, path\nfrom django_distill import distill_path\n\nfrom pydis_site import settings\nfrom pydis_site.apps.content import urls as pages_urls\nfrom 
pydis_site.apps.redirect.views import CustomRedirectView\nfrom pydis_site.apps.resources import urls as resources_urls\n\napp_name = \"redirect\"\n\n\n__PARAMETER_REGEX = re.compile(r\"<\\w+:\\w+>\")\nREDIRECT_TEMPLATE = \"<meta http-equiv=\\\"refresh\\\" content=\\\"0; URL={url}\\\"/>\"\n\n\[email protected](frozen=True)\nclass Redirect:\n \"\"\"Metadata about a redirect route.\"\"\"\n\n original_path: str\n redirect_route: str\n redirect_arguments: tuple[str] = tuple()\n\n prefix_redirect: bool = False\n\n\ndef map_redirect(name: str, data: Redirect) -> list[URLPattern]:\n \"\"\"Return a pattern using the Redirects app, or a static HTML redirect for static builds.\"\"\"\n if not settings.STATIC_BUILD:\n # Normal dynamic redirect\n return [path(\n data.original_path,\n CustomRedirectView.as_view(\n pattern_name=data.redirect_route,\n static_args=tuple(data.redirect_arguments),\n prefix_redirect=data.prefix_redirect\n ),\n name=name\n )]\n\n # Create static HTML redirects for static builds\n new_app_name = data.redirect_route.split(\":\")[0]\n\n if __PARAMETER_REGEX.search(data.original_path):\n # Redirects for paths which accept parameters\n # We generate an HTML redirect file for all possible entries\n paths = []\n\n class RedirectFunc:\n def __init__(self, new_url: str, _name: str):\n self.result = HttpResponse(REDIRECT_TEMPLATE.format(url=new_url))\n self.__qualname__ = _name\n\n def __call__(self, *args, **kwargs):\n return self.result\n\n if new_app_name == resources_urls.app_name:\n items = resources_urls.get_all_resources()\n elif new_app_name == pages_urls.app_name:\n items = pages_urls.get_all_pages()\n else:\n raise ValueError(f\"Unknown app in redirect: {new_app_name}\")\n\n for item in items:\n entry = next(iter(item.values()))\n\n # Replace dynamic redirect with concrete path\n concrete_path = __PARAMETER_REGEX.sub(entry, data.original_path)\n new_redirect = f\"/{new_app_name}/{entry}\"\n pattern_name = f\"{name}_{entry}\"\n\n paths.append(distill_path(\n concrete_path,\n RedirectFunc(new_redirect, pattern_name),\n name=pattern_name\n ))\n\n return paths\n\n redirect_path_name = \"pages\" if new_app_name == \"content\" else new_app_name\n if len(data.redirect_arguments) > 0:\n redirect_arg = data.redirect_arguments[0]\n else:\n redirect_arg = \"resources/\"\n new_redirect = f\"/{redirect_path_name}/{redirect_arg}\"\n\n if new_redirect == \"/resources/resources/\":\n new_redirect = \"/resources/\"\n\n return [distill_path(\n data.original_path,\n lambda *args: HttpResponse(REDIRECT_TEMPLATE.format(url=new_redirect)),\n name=name,\n )]\n\n\nurlpatterns = []\nfor _name, _data in yaml.safe_load(conf.settings.REDIRECTIONS_PATH.read_text()).items():\n urlpatterns.extend(map_redirect(_name, Redirect(**_data)))\n", "path": "pydis_site/apps/redirect/urls.py"}]} | 1,518 | 774 |
gh_patches_debug_7175 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-2743 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove defunct entry_points
These scripts no longer exist. We should remove the entry_points.
* [insights.tools.generate_api_config](https://github.com/RedHatInsights/insights-core/blob/master/setup.py#L23)
* [insights.tools.perf](https://github.com/RedHatInsights/insights-core/blob/master/setup.py#L24)
</issue>
<code>
[start of setup.py]
1 import os
2 import sys
3 from setuptools import setup, find_packages
4
5 __here__ = os.path.dirname(os.path.abspath(__file__))
6
7 package_info = dict.fromkeys(["RELEASE", "COMMIT", "VERSION", "NAME"])
8
9 for name in package_info:
10 with open(os.path.join(__here__, "insights", name)) as f:
11 package_info[name] = f.read().strip()
12
13 entry_points = {
14 'console_scripts': [
15 'insights-collect = insights.collect:main',
16 'insights-run = insights:main',
17 'insights = insights.command_parser:main',
18 'insights-cat = insights.tools.cat:main',
19 'insights-dupkeycheck = insights.tools.dupkeycheck:main',
20 'insights-inspect = insights.tools.insights_inspect:main',
21 'insights-info = insights.tools.query:main',
22 'insights-ocpshell= insights.ocpshell:main',
23 'gen_api = insights.tools.generate_api_config:main',
24 'insights-perf = insights.tools.perf:main',
25 'client = insights.client:run',
26 'mangle = insights.util.mangle:main'
27 ]
28 }
29
30 runtime = set([
31 'six',
32 'requests',
33 'redis',
34 'cachecontrol',
35 'cachecontrol[redis]',
36 'cachecontrol[filecache]',
37 'defusedxml',
38 'lockfile',
39 'jinja2',
40 ])
41
42 if (sys.version_info < (2, 7)):
43 runtime.add('pyyaml>=3.10,<=3.13')
44 else:
45 runtime.add('pyyaml')
46
47
48 def maybe_require(pkg):
49 try:
50 __import__(pkg)
51 except ImportError:
52 runtime.add(pkg)
53
54
55 maybe_require("importlib")
56 maybe_require("argparse")
57
58
59 client = set([
60 'requests'
61 ])
62
63 develop = set([
64 'futures==3.0.5',
65 'wheel',
66 ])
67
68 docs = set([
69 'Sphinx<=3.0.2',
70 'nbsphinx',
71 'sphinx_rtd_theme',
72 'ipython',
73 'colorama',
74 'jinja2',
75 'Pygments'
76 ])
77
78 testing = set([
79 'coverage==4.3.4',
80 'pytest==3.0.6',
81 'pytest-cov==2.4.0',
82 'mock==2.0.0',
83 ])
84
85 cluster = set([
86 'ansible',
87 'pandas',
88 'colorama',
89 ])
90
91 openshift = set([
92 'openshift'
93 ])
94
95 linting = set([
96 'flake8==2.6.2',
97 ])
98
99 optional = set([
100 'python-cjson',
101 'python-logstash',
102 'python-statsd',
103 'watchdog',
104 ])
105
106 if __name__ == "__main__":
107 # allows for runtime modification of rpm name
108 name = os.environ.get("INSIGHTS_CORE_NAME", package_info["NAME"])
109
110 setup(
111 name=name,
112 version=package_info["VERSION"],
113 description="Insights Core is a data collection and analysis framework",
114 long_description=open("README.rst").read(),
115 url="https://github.com/redhatinsights/insights-core",
116 author="Red Hat, Inc.",
117 author_email="[email protected]",
118 packages=find_packages(),
119 install_requires=list(runtime),
120 package_data={'': ['LICENSE']},
121 license='Apache 2.0',
122 extras_require={
123 'develop': list(runtime | develop | client | docs | linting | testing | cluster),
124 'develop26': list(runtime | develop | client | linting | testing | cluster),
125 'client': list(runtime | client),
126 'client-develop': list(runtime | develop | client | linting | testing),
127 'cluster': list(runtime | cluster),
128 'openshift': list(runtime | openshift),
129 'optional': list(optional),
130 'docs': list(docs),
131 'linting': list(linting | client),
132 'testing': list(testing | client)
133 },
134 classifiers=[
135 'Development Status :: 5 - Production/Stable',
136 'Intended Audience :: Developers',
137 'Natural Language :: English',
138 'License :: OSI Approved :: Apache Software License',
139 'Programming Language :: Python',
140 'Programming Language :: Python :: 2.6',
141 'Programming Language :: Python :: 2.7',
142 'Programming Language :: Python :: 3.3',
143 'Programming Language :: Python :: 3.4',
144 'Programming Language :: Python :: 3.5',
145 'Programming Language :: Python :: 3.6'
146 ],
147 entry_points=entry_points,
148 include_package_data=True
149 )
150
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,8 +20,6 @@
'insights-inspect = insights.tools.insights_inspect:main',
'insights-info = insights.tools.query:main',
'insights-ocpshell= insights.ocpshell:main',
- 'gen_api = insights.tools.generate_api_config:main',
- 'insights-perf = insights.tools.perf:main',
'client = insights.client:run',
'mangle = insights.util.mangle:main'
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,8 +20,6 @@\n 'insights-inspect = insights.tools.insights_inspect:main',\n 'insights-info = insights.tools.query:main',\n 'insights-ocpshell= insights.ocpshell:main',\n- 'gen_api = insights.tools.generate_api_config:main',\n- 'insights-perf = insights.tools.perf:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n", "issue": "Remove defunct entry_points\nThese scripts no longer exist. We should remove the entry_points.\r\n\r\n* [insights.tools.generate_api_config](https://github.com/RedHatInsights/insights-core/blob/master/setup.py#L23)\r\n* [insights.tools.perf](https://github.com/RedHatInsights/insights-core/blob/master/setup.py#L24)\n", "before_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-collect = insights.collect:main',\n 'insights-run = insights:main',\n 'insights = insights.command_parser:main',\n 'insights-cat = insights.tools.cat:main',\n 'insights-dupkeycheck = insights.tools.dupkeycheck:main',\n 'insights-inspect = insights.tools.insights_inspect:main',\n 'insights-info = insights.tools.query:main',\n 'insights-ocpshell= insights.ocpshell:main',\n 'gen_api = insights.tools.generate_api_config:main',\n 'insights-perf = insights.tools.perf:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'six',\n 'requests',\n 'redis',\n 'cachecontrol',\n 'cachecontrol[redis]',\n 'cachecontrol[filecache]',\n 'defusedxml',\n 'lockfile',\n 'jinja2',\n])\n\nif (sys.version_info < (2, 7)):\n runtime.add('pyyaml>=3.10,<=3.13')\nelse:\n runtime.add('pyyaml')\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests'\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx<=3.0.2',\n 'nbsphinx',\n 'sphinx_rtd_theme',\n 'ipython',\n 'colorama',\n 'jinja2',\n 'Pygments'\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\ncluster = set([\n 'ansible',\n 'pandas',\n 'colorama',\n])\n\nopenshift = set([\n 'openshift'\n])\n\nlinting = set([\n 'flake8==2.6.2',\n])\n\noptional = set([\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': list(runtime | develop | client | docs | linting | testing | cluster),\n 'develop26': list(runtime | develop | client | linting | testing | cluster),\n 'client': list(runtime | client),\n 
'client-develop': list(runtime | develop | client | linting | testing),\n 'cluster': list(runtime | cluster),\n 'openshift': list(runtime | openshift),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}]} | 1,954 | 129 |
gh_patches_debug_26911 | rasdani/github-patches | git_diff | pyca__cryptography-1424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need zope.interface.verify.verifyObject for ABCs.
Every object that claims to provide some interface should then use it in a unit test.
</issue>
<code>
[start of cryptography/utils.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 import sys
17
18
19 DeprecatedIn06 = DeprecationWarning
20
21
22 def register_interface(iface):
23 def register_decorator(klass):
24 iface.register(klass)
25 return klass
26 return register_decorator
27
28
29 def read_only_property(name):
30 return property(lambda self: getattr(self, name))
31
32
33 def bit_length(x):
34 if sys.version_info >= (2, 7):
35 return x.bit_length()
36 else:
37 return len(bin(x)) - (2 + (x <= 0))
38
[end of cryptography/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cryptography/utils.py b/cryptography/utils.py
--- a/cryptography/utils.py
+++ b/cryptography/utils.py
@@ -13,6 +13,8 @@
from __future__ import absolute_import, division, print_function
+import abc
+import inspect
import sys
@@ -21,6 +23,7 @@
def register_interface(iface):
def register_decorator(klass):
+ verify_interface(iface, klass)
iface.register(klass)
return klass
return register_decorator
@@ -30,6 +33,30 @@
return property(lambda self: getattr(self, name))
+class InterfaceNotImplemented(Exception):
+ pass
+
+
+def verify_interface(iface, klass):
+ for method in iface.__abstractmethods__:
+ if not hasattr(klass, method):
+ raise InterfaceNotImplemented(
+ "{0} is missing a {1!r} method".format(klass, method)
+ )
+ if isinstance(getattr(iface, method), abc.abstractproperty):
+ # Can't properly verify these yet.
+ continue
+ spec = inspect.getargspec(getattr(iface, method))
+ actual = inspect.getargspec(getattr(klass, method))
+ if spec != actual:
+ raise InterfaceNotImplemented(
+ "{0}.{1}'s signature differs from the expected. Expected: "
+ "{2!r}. Received: {3!r}".format(
+ klass, method, spec, actual
+ )
+ )
+
+
def bit_length(x):
if sys.version_info >= (2, 7):
return x.bit_length()
| {"golden_diff": "diff --git a/cryptography/utils.py b/cryptography/utils.py\n--- a/cryptography/utils.py\n+++ b/cryptography/utils.py\n@@ -13,6 +13,8 @@\n \n from __future__ import absolute_import, division, print_function\n \n+import abc\n+import inspect\n import sys\n \n \n@@ -21,6 +23,7 @@\n \n def register_interface(iface):\n def register_decorator(klass):\n+ verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n@@ -30,6 +33,30 @@\n return property(lambda self: getattr(self, name))\n \n \n+class InterfaceNotImplemented(Exception):\n+ pass\n+\n+\n+def verify_interface(iface, klass):\n+ for method in iface.__abstractmethods__:\n+ if not hasattr(klass, method):\n+ raise InterfaceNotImplemented(\n+ \"{0} is missing a {1!r} method\".format(klass, method)\n+ )\n+ if isinstance(getattr(iface, method), abc.abstractproperty):\n+ # Can't properly verify these yet.\n+ continue\n+ spec = inspect.getargspec(getattr(iface, method))\n+ actual = inspect.getargspec(getattr(klass, method))\n+ if spec != actual:\n+ raise InterfaceNotImplemented(\n+ \"{0}.{1}'s signature differs from the expected. Expected: \"\n+ \"{2!r}. Received: {3!r}\".format(\n+ klass, method, spec, actual\n+ )\n+ )\n+\n+\n def bit_length(x):\n if sys.version_info >= (2, 7):\n return x.bit_length()\n", "issue": "Need zope.interface.verify.verifyObject for ABCs.\nEvery object that claims to provide some interface should then use it in a unit test.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport sys\n\n\nDeprecatedIn06 = DeprecationWarning\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n iface.register(klass)\n return klass\n return register_decorator\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef bit_length(x):\n if sys.version_info >= (2, 7):\n return x.bit_length()\n else:\n return len(bin(x)) - (2 + (x <= 0))\n", "path": "cryptography/utils.py"}]} | 869 | 358 |
gh_patches_debug_3588 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3753 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show only relevant updates in typeahead on Akvo pages
Currently, all updates can be searched for on partner site updates typeahead.
</issue>
<code>
[start of akvo/rest/views/typeahead.py]
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 from django.conf import settings
10 from rest_framework.decorators import api_view
11 from rest_framework.response import Response
12
13 from akvo.codelists.models import Country, Version
14 from akvo.rest.serializers import (TypeaheadCountrySerializer,
15 TypeaheadOrganisationSerializer,
16 TypeaheadProjectSerializer,
17 TypeaheadProjectUpdateSerializer,
18 TypeaheadKeywordSerializer,)
19 from akvo.rsr.models import Organisation, Project, ProjectUpdate
20 from akvo.rsr.views.project import _project_directory_coll
21
22
23 def rejig(queryset, serializer):
24 """Rearrange & add queryset count to the response data."""
25 return {
26 'count': queryset.count(),
27 'results': serializer.data
28 }
29
30
31 @api_view(['GET'])
32 def typeahead_country(request):
33 iati_version = Version.objects.get(code=settings.IATI_VERSION)
34 countries = Country.objects.filter(version=iati_version)
35 return Response(
36 rejig(countries, TypeaheadCountrySerializer(countries, many=True))
37 )
38
39
40 @api_view(['GET'])
41 def typeahead_organisation(request):
42 page = request.rsr_page
43 if request.GET.get('partners', '0') == '1' and page:
44 organisations = page.partners()
45 else:
46 # Project editor - all organizations
47 organisations = Organisation.objects.all()
48
49 organisations = organisations.values('id', 'name', 'long_name')
50
51 return Response(
52 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
53 many=True))
54 )
55
56
57 @api_view(['GET'])
58 def typeahead_user_organisations(request):
59 user = request.user
60 is_admin = user.is_active and (user.is_superuser or user.is_admin)
61 organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()
62 return Response(
63 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
64 many=True))
65 )
66
67
68 @api_view(['GET'])
69 def typeahead_keyword(request):
70 page = request.rsr_page
71 keywords = page.keywords.all() if page else None
72 if keywords:
73 return Response(
74 rejig(keywords, TypeaheadKeywordSerializer(keywords, many=True))
75 )
76 # No keywords on rsr.akvo.org
77 return Response({})
78
79
80 @api_view(['GET'])
81 def typeahead_project(request):
82 """Return the typeaheads for projects.
83
84 Without any query parameters, it returns the info for all the projects in
85 the current context -- changes depending on whether we are on a partner
86 site, or the RSR site.
87
88 If a published query parameter is passed, only projects that have been
89 published are returned.
90
91 NOTE: The unauthenticated user gets information about all the projects when
92 using this API endpoint. More permission checking will need to be added,
93 if the amount of data being returned is changed.
94
95 """
96 if request.GET.get('published', '0') == '0':
97 # Project editor - organization projects, all
98 page = request.rsr_page
99 projects = page.all_projects() if page else Project.objects.all()
100 else:
101 # Search bar - organization projects, published
102 projects = _project_directory_coll(request)
103
104 projects = projects.exclude(title='')
105 return Response(
106 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
107 )
108
109
110 @api_view(['GET'])
111 def typeahead_user_projects(request):
112 user = request.user
113 is_admin = user.is_active and (user.is_superuser or user.is_admin)
114 if is_admin:
115 projects = Project.objects.all()
116 else:
117 projects = user.approved_organisations().all_projects()
118 projects = projects.exclude(title='')
119 return Response(
120 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
121 )
122
123
124 @api_view(['GET'])
125 def typeahead_impact_projects(request):
126 user = request.user
127 projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()
128 projects = projects.published().filter(is_impact_project=True).order_by('title')
129
130 return Response(
131 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
132 )
133
134
135 @api_view(['GET'])
136 def typeahead_projectupdate(request):
137 updates = ProjectUpdate.objects.all()
138 return Response(
139 rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))
140 )
141
[end of akvo/rest/views/typeahead.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py
--- a/akvo/rest/views/typeahead.py
+++ b/akvo/rest/views/typeahead.py
@@ -134,7 +134,8 @@
@api_view(['GET'])
def typeahead_projectupdate(request):
- updates = ProjectUpdate.objects.all()
+ page = request.rsr_page
+ updates = page.updates() if page else ProjectUpdate.objects.all()
return Response(
rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))
)
| {"golden_diff": "diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py\n--- a/akvo/rest/views/typeahead.py\n+++ b/akvo/rest/views/typeahead.py\n@@ -134,7 +134,8 @@\n \n @api_view(['GET'])\n def typeahead_projectupdate(request):\n- updates = ProjectUpdate.objects.all()\n+ page = request.rsr_page\n+ updates = page.updates() if page else ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "issue": "Show only relevant updates in typeahead on Akvo pages\nCurrently, all updates can be searched for on partner site updates typeahead. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django.conf import settings\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\nfrom akvo.codelists.models import Country, Version\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer,\n TypeaheadKeywordSerializer,)\nfrom akvo.rsr.models import Organisation, Project, ProjectUpdate\nfrom akvo.rsr.views.project import _project_directory_coll\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n iati_version = Version.objects.get(code=settings.IATI_VERSION)\n countries = Country.objects.filter(version=iati_version)\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n page = request.rsr_page\n if request.GET.get('partners', '0') == '1' and page:\n organisations = page.partners()\n else:\n # Project editor - all organizations\n organisations = Organisation.objects.all()\n\n organisations = organisations.values('id', 'name', 'long_name')\n\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_keyword(request):\n page = request.rsr_page\n keywords = page.keywords.all() if page else None\n if keywords:\n return Response(\n rejig(keywords, TypeaheadKeywordSerializer(keywords, many=True))\n )\n # No keywords on rsr.akvo.org\n return Response({})\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n \"\"\"Return the typeaheads for projects.\n\n Without any query parameters, it returns the info for all the projects in\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n\n If a published query parameter is passed, only projects that have been\n published are returned.\n\n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. 
More permission checking will need to be added,\n if the amount of data being returned is changed.\n\n \"\"\"\n if request.GET.get('published', '0') == '0':\n # Project editor - organization projects, all\n page = request.rsr_page\n projects = page.all_projects() if page else Project.objects.all()\n else:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects = projects.published().filter(is_impact_project=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}]} | 1,884 | 130 |