| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.71k-9.01k | stringlengths 151-4.94k | stringlengths 465-11.3k | int64 557-2.05k | int64 48-1.02k |
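The rows below can also be consumed programmatically. A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub under the ID implied by the `source` column and exposes a `train` split (both assumptions, not stated in this dump):

```python
from datasets import load_dataset

# Dataset ID taken from the `source` column; the split name is an assumption.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])   # e.g. gh_patches_debug_4124, PlasmaPy__PlasmaPy-541
print(row["num_tokens_prompt"], row["num_tokens_diff"])
print(row["prompt"][:300])  # issue text + partial code base + output-format instructions
```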
gh_patches_debug_4124 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Archive version 0.1.1 on Zenodo and get a DOI
There is a partnership between [Zenodo](https://zenodo.org/) and GitHub that allows Zenodo to archive releases and [make code citable](https://guides.github.com/activities/citable-code/). Zenodo can then mint a digital object identifier (DOI) that would make that version of PlasmaPy citable. We can also get a persistent DOI that would always refer to the most recent version. We should make archiving our release on Zenodo part of our regular release process. [SunPy](https://doi.org/10.5281/zenodo.591887) has done this already.
- [x] Link the PlasmaPy organization/repository to Zenodo
- [ ] Archive version 0.1 on Zenodo and get a persistent DOI and a release DOI
- [ ] Put a badge on our main README.md for our DOI
- [ ] Put the persistent DOI in our docs and on our main website, along with instructions on how to cite the code
- [ ] Document instructions on how to put a release on Zenodo in our release guide.
</issue>
<code>
[start of plasmapy/__init__.py]
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2
3 # Packages may add whatever they like to this file, but
4 # should keep this content at the top.
5 # ----------------------------------------------------------------------------
6 from ._base_init import *
7 # ----------------------------------------------------------------------------
8
9 # Enforce Python version check during package import.
10 # This is the same check as the one at the top of setup.py
11 import sys
12
13 __name__ = "plasmapy"
14
15 __doc__ = ("A community-developed and community-driven open source "
16 "core Python package for plasma physics.")
17
18
19 class UnsupportedPythonError(Exception):
20 pass
21
22
23 if sys.version_info < tuple((int(val) for val in "3.6".split('.'))):
24 raise UnsupportedPythonError("plasmapy does not support Python < {}".format(3.6))
25
26 if not _ASTROPY_SETUP_:
27 # For egg_info test builds to pass, put package imports here.
28 from . import atomic
29 from . import classes
30 from . import constants
31 from . import diagnostics
32 from . import mathematics
33 from . import physics
34 from . import utils
35
36 def online_help(query):
37 """
38 Search the online PlasmaPy documentation for the given query from plasmapy.org
39 Opens the results in the default web browser.
40 Requires an active Internet connection.
41 Redirects to Astropy.units in case of query 'unit' or 'units'
42
43 Parameters
44 ----------
45 query : str
46 The search query.
47 """
48 from urllib.parse import urlencode
49 import webbrowser
50
51 url = ('http://docs.plasmapy.org/en/stable/search.html?'
52 '{0}&check_keywords=yes&area=default').format(urlencode({'q': query}))
53
54 if(query.lower() in ('unit', 'units')):
55 url = 'http://docs.astropy.org/en/stable/units/'
56
57 webbrowser.open(url)
58
59 __citation__ = """@misc{plasmapy_community_2018_1238132,
60 author = {PlasmaPy Community and
61 Murphy, Nicholas A. and
62 Leonard, Andrew J. and
63 Sta\'nczak, Dominik and
64 Kozlowski, Pawel M. and
65 Langendorf, Samuel J. and
66 Haggerty, Colby C. and
67 Beckers, Jasper P. and
68 Mumford, Stuart J. and
69 Parashar, Tulasi N. and
70 Huang, Yi-Min},
71 title = {{PlasmaPy: an open source community-developed
72 Python package for plasma physics}},
73 month = apr,
74 year = 2018,
75 doi = {10.5281/zenodo.1238132},
76 url = {https://doi.org/10.5281/zenodo.1238132}
77 }"""
78
[end of plasmapy/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plasmapy/__init__.py b/plasmapy/__init__.py
--- a/plasmapy/__init__.py
+++ b/plasmapy/__init__.py
@@ -57,7 +57,7 @@
webbrowser.open(url)
__citation__ = """@misc{plasmapy_community_2018_1238132,
- author = {PlasmaPy Community and
+ author = {{PlasmaPy Community} and
Murphy, Nicholas A. and
Leonard, Andrew J. and
Sta\'nczak, Dominik and
| {"golden_diff": "diff --git a/plasmapy/__init__.py b/plasmapy/__init__.py\n--- a/plasmapy/__init__.py\n+++ b/plasmapy/__init__.py\n@@ -57,7 +57,7 @@\n webbrowser.open(url)\n \n __citation__ = \"\"\"@misc{plasmapy_community_2018_1238132,\n- author = {PlasmaPy Community and\n+ author = {{PlasmaPy Community} and\n Murphy, Nicholas A. and\n Leonard, Andrew J. and\n Sta\\'nczak, Dominik and\n", "issue": "Archive version 0.1.1 on Zenodo and get a DOI\nThere is a partnership between [Zenodo](https://zenodo.org/) and GitHub that allows Zenodo to archive releases and [make code citable](https://guides.github.com/activities/citable-code/). Zenodo can then mint a digital object identifier (DOI) that would make that version of PlasmaPy citable. We can also get a persistent doi that would alway refers to the most recent version. We should make archiving our release on Zenodo part of our regular release process. [SunPy](https://doi.org/10.5281/zenodo.591887) has done this already. \r\n\r\n- [x] Link the PlasmaPy organization/repository to Zenodo\r\n- [ ] Archive version 0.1 on Zenodo and get a persistent DOI and a release DOI\r\n- [ ] Put a badge on our main README.md for our DOI\r\n- [ ] Put the persistent DOI in our docs and on our main website, along with instructions on how to cite the code\r\n- [ ] Document instructions on how to put a release on Zenodo in our release guide.\r\n\r\n\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\n# Packages may add whatever they like to this file, but\n# should keep this content at the top.\n# ----------------------------------------------------------------------------\nfrom ._base_init import *\n# ----------------------------------------------------------------------------\n\n# Enforce Python version check during package import.\n# This is the same check as the one at the top of setup.py\nimport sys\n\n__name__ = \"plasmapy\"\n\n__doc__ = (\"A community-developed and community-driven open source \"\n \"core Python package for plasma physics.\")\n\n\nclass UnsupportedPythonError(Exception):\n pass\n\n\nif sys.version_info < tuple((int(val) for val in \"3.6\".split('.'))):\n raise UnsupportedPythonError(\"plasmapy does not support Python < {}\".format(3.6))\n\nif not _ASTROPY_SETUP_:\n # For egg_info test builds to pass, put package imports here.\n from . import atomic\n from . import classes\n from . import constants\n from . import diagnostics\n from . import mathematics\n from . import physics\n from . import utils\n\ndef online_help(query):\n \"\"\"\n Search the online PlasmaPy documentation for the given query from plasmapy.org\n Opens the results in the default web browser.\n Requires an active Internet connection.\n Redirects to Astropy.units in case of query 'unit' or 'units'\n\n Parameters\n ----------\n query : str\n The search query.\n \"\"\"\n from urllib.parse import urlencode\n import webbrowser\n\n url = ('http://docs.plasmapy.org/en/stable/search.html?'\n '{0}&check_keywords=yes&area=default').format(urlencode({'q': query}))\n\n if(query.lower() in ('unit', 'units')):\n url = 'http://docs.astropy.org/en/stable/units/'\n\n webbrowser.open(url)\n\n__citation__ = \"\"\"@misc{plasmapy_community_2018_1238132,\n author = {PlasmaPy Community and\n Murphy, Nicholas A. and\n Leonard, Andrew J. and\n Sta\\'nczak, Dominik and\n Kozlowski, Pawel M. and\n Langendorf, Samuel J. and\n Haggerty, Colby C. and\n Beckers, Jasper P. and\n Mumford, Stuart J. and\n Parashar, Tulasi N. and\n Huang, Yi-Min},\n title = {{PlasmaPy: an open source community-developed \n Python package for plasma physics}},\n month = apr,\n year = 2018,\n doi = {10.5281/zenodo.1238132},\n url = {https://doi.org/10.5281/zenodo.1238132}\n}\"\"\"\n", "path": "plasmapy/__init__.py"}]} | 1,576 | 143 |
gh_patches_debug_12051 | rasdani/github-patches | git_diff | learningequality__kolibri-7269 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update profile modal displays to super admin account created with setup wizard
### Observed behavior
_(Wouldn't even know this might be an issue if I hadn't been reviewing super admin Gherkins all past week :blush:)_
According to the [Gherkin scenario](https://github.com/learningequality/kolibri/blob/release-v0.13.x/integration_testing/features/learner/learner-profile-update-notification.feature#L22), this should not appear:

### Expected behavior
No profile update modal for the super admin.
### User-facing consequences
Annoyed super admin.
### Errors and logs
…
### Steps to reproduce
1. Install Kolibri
2. Go through the setup wizard
3. Go to Learn
### Context
* Kolibri version: 0.14.0b6, DEB installer
* Operating system: Ubuntu 16.04
* Browser: both Firefox and Chrome
</issue>
<code>
[start of kolibri/core/device/utils.py]
1 """
2 Do all imports of the device settings model inside the function scope here,
3 so as to allow these functions to be easily imported without worrying about
4 circular imports.
5 """
6 from django.db.utils import OperationalError
7 from django.db.utils import ProgrammingError
8
9 LANDING_PAGE_SIGN_IN = "sign-in"
10 LANDING_PAGE_LEARN = "learn"
11
12 APP_KEY_COOKIE_NAME = "app_key_cookie"
13
14
15 class DeviceNotProvisioned(Exception):
16 pass
17
18
19 no_default_value = object()
20
21
22 def get_device_setting(setting, default=no_default_value):
23 from .models import DeviceSettings
24
25 try:
26 device_settings = DeviceSettings.objects.get()
27 if device_settings is None:
28 raise DeviceSettings.DoesNotExist
29 return getattr(device_settings, setting)
30 except (DeviceSettings.DoesNotExist, OperationalError, ProgrammingError):
31 if default is not no_default_value:
32 return default
33 raise DeviceNotProvisioned
34
35
36 def device_provisioned():
37 return get_device_setting("is_provisioned", False)
38
39
40 def is_landing_page(landing_page):
41 return get_device_setting("landing_page", LANDING_PAGE_SIGN_IN) == landing_page
42
43
44 def allow_guest_access():
45 if get_device_setting("allow_guest_access", False):
46 return True
47
48 return is_landing_page(LANDING_PAGE_LEARN)
49
50
51 def allow_learner_unassigned_resource_access():
52 if get_device_setting("allow_learner_unassigned_resource_access", True):
53 return True
54
55 return is_landing_page(LANDING_PAGE_LEARN)
56
57
58 def allow_peer_unlisted_channel_import():
59 return get_device_setting("allow_peer_unlisted_channel_import", False)
60
61
62 def allow_other_browsers_to_connect():
63 return get_device_setting("allow_other_browsers_to_connect", True)
64
65
66 def set_device_settings(**kwargs):
67 from .models import DeviceSettings
68
69 try:
70 device_settings = DeviceSettings.objects.get()
71 for key, value in kwargs.items():
72 setattr(device_settings, key, value)
73 device_settings.save()
74 except DeviceSettings.DoesNotExist:
75 raise DeviceNotProvisioned
76
77
78 def create_superuser(user_data, facility):
79 from .models import DevicePermissions
80 from kolibri.core.auth.models import FacilityUser
81 from django.core.exceptions import ValidationError
82
83 username = user_data.get("username")
84 password = user_data.get("password")
85 full_name = user_data.get("full_name")
86
87 # Code copied from FacilityUserModelManager (create_superuser method doesn't work)
88 if FacilityUser.objects.filter(
89 username__iexact=username, facility=facility
90 ).exists():
91 raise ValidationError("An account with that username already exists")
92
93 superuser = FacilityUser.objects.create(
94 full_name=full_name or username,
95 username=username,
96 password=password,
97 facility=facility,
98 )
99
100 superuser.full_clean()
101 superuser.set_password(password)
102 superuser.save()
103
104 # make the user a facility admin
105 facility.add_admin(superuser)
106
107 # make the user into a superuser on this device
108 DevicePermissions.objects.create(
109 user=superuser, is_superuser=True, can_manage_content=True
110 )
111 return superuser
112
113
114 def provision_device(device_name=None, **kwargs):
115 from .models import DeviceSettings
116
117 device_settings, _ = DeviceSettings.objects.get_or_create(defaults=kwargs)
118 if device_name is not None:
119 device_settings.name = device_name
120 device_settings.is_provisioned = True
121 device_settings.save()
122
123
124 def valid_app_key(app_key):
125 from .models import DeviceAppKey
126
127 return app_key == DeviceAppKey.get_app_key()
128
129
130 def valid_app_key_on_request(request):
131 return APP_KEY_COOKIE_NAME in request.COOKIES and valid_app_key(
132 request.COOKIES.get(APP_KEY_COOKIE_NAME)
133 )
134
135
136 def set_app_key_on_response(response):
137 from .models import DeviceAppKey
138
139 response.set_cookie(APP_KEY_COOKIE_NAME, DeviceAppKey.get_app_key())
140
[end of kolibri/core/device/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/core/device/utils.py b/kolibri/core/device/utils.py
--- a/kolibri/core/device/utils.py
+++ b/kolibri/core/device/utils.py
@@ -90,11 +90,15 @@
).exists():
raise ValidationError("An account with that username already exists")
+ # gender and birth_year are set to "DEFERRED", since superusers do not
+ # need to provide this and are not nudged to update profile on Learn page
superuser = FacilityUser.objects.create(
full_name=full_name or username,
username=username,
password=password,
facility=facility,
+ gender="DEFERRED",
+ birth_year="DEFERRED",
)
superuser.full_clean()
| {"golden_diff": "diff --git a/kolibri/core/device/utils.py b/kolibri/core/device/utils.py\n--- a/kolibri/core/device/utils.py\n+++ b/kolibri/core/device/utils.py\n@@ -90,11 +90,15 @@\n ).exists():\n raise ValidationError(\"An account with that username already exists\")\n \n+ # gender and birth_year are set to \"DEFERRED\", since superusers do not\n+ # need to provide this and are not nudged to update profile on Learn page\n superuser = FacilityUser.objects.create(\n full_name=full_name or username,\n username=username,\n password=password,\n facility=facility,\n+ gender=\"DEFERRED\",\n+ birth_year=\"DEFERRED\",\n )\n \n superuser.full_clean()\n", "issue": "Update profile modal displays to super admin account created with setup wizard \n### Observed behavior\r\n_(Wouldn't even know this might be an issue if I hadn't been reviewing super admin Gherkins all past week :blush:)_\r\n\r\nAccording to the [Gherkin scenario](https://github.com/learningequality/kolibri/blob/release-v0.13.x/integration_testing/features/learner/learner-profile-update-notification.feature#L22), this should not appear:\r\n\r\n\r\n\r\n### Expected behavior\r\nNo profile update modal for the super admin.\r\n\r\n### User-facing consequences\r\nAnnoyed super admin.\r\n\r\n### Errors and logs\r\n\u2026\r\n\r\n### Steps to reproduce\r\n1. Install Kolibri\r\n2. Go through the setup wizard\r\n3. Go to Learn \r\n\r\n### Context\r\n * Kolibri version: 0.14.0b6, DEB installer\r\n * Operating system: Ubuntu 16.04\r\n * Browser: both Firefox and Chrome\r\n\n", "before_files": [{"content": "\"\"\"\nDo all imports of the device settings model inside the function scope here,\nso as to allow these functions to be easily imported without worrying about\ncircular imports.\n\"\"\"\nfrom django.db.utils import OperationalError\nfrom django.db.utils import ProgrammingError\n\nLANDING_PAGE_SIGN_IN = \"sign-in\"\nLANDING_PAGE_LEARN = \"learn\"\n\nAPP_KEY_COOKIE_NAME = \"app_key_cookie\"\n\n\nclass DeviceNotProvisioned(Exception):\n pass\n\n\nno_default_value = object()\n\n\ndef get_device_setting(setting, default=no_default_value):\n from .models import DeviceSettings\n\n try:\n device_settings = DeviceSettings.objects.get()\n if device_settings is None:\n raise DeviceSettings.DoesNotExist\n return getattr(device_settings, setting)\n except (DeviceSettings.DoesNotExist, OperationalError, ProgrammingError):\n if default is not no_default_value:\n return default\n raise DeviceNotProvisioned\n\n\ndef device_provisioned():\n return get_device_setting(\"is_provisioned\", False)\n\n\ndef is_landing_page(landing_page):\n return get_device_setting(\"landing_page\", LANDING_PAGE_SIGN_IN) == landing_page\n\n\ndef allow_guest_access():\n if get_device_setting(\"allow_guest_access\", False):\n return True\n\n return is_landing_page(LANDING_PAGE_LEARN)\n\n\ndef allow_learner_unassigned_resource_access():\n if get_device_setting(\"allow_learner_unassigned_resource_access\", True):\n return True\n\n return is_landing_page(LANDING_PAGE_LEARN)\n\n\ndef allow_peer_unlisted_channel_import():\n return get_device_setting(\"allow_peer_unlisted_channel_import\", False)\n\n\ndef allow_other_browsers_to_connect():\n return get_device_setting(\"allow_other_browsers_to_connect\", True)\n\n\ndef set_device_settings(**kwargs):\n from .models import DeviceSettings\n\n try:\n device_settings = DeviceSettings.objects.get()\n for key, value in kwargs.items():\n setattr(device_settings, key, value)\n device_settings.save()\n except DeviceSettings.DoesNotExist:\n raise 
DeviceNotProvisioned\n\n\ndef create_superuser(user_data, facility):\n from .models import DevicePermissions\n from kolibri.core.auth.models import FacilityUser\n from django.core.exceptions import ValidationError\n\n username = user_data.get(\"username\")\n password = user_data.get(\"password\")\n full_name = user_data.get(\"full_name\")\n\n # Code copied from FacilityUserModelManager (create_superuser method doesn't work)\n if FacilityUser.objects.filter(\n username__iexact=username, facility=facility\n ).exists():\n raise ValidationError(\"An account with that username already exists\")\n\n superuser = FacilityUser.objects.create(\n full_name=full_name or username,\n username=username,\n password=password,\n facility=facility,\n )\n\n superuser.full_clean()\n superuser.set_password(password)\n superuser.save()\n\n # make the user a facility admin\n facility.add_admin(superuser)\n\n # make the user into a superuser on this device\n DevicePermissions.objects.create(\n user=superuser, is_superuser=True, can_manage_content=True\n )\n return superuser\n\n\ndef provision_device(device_name=None, **kwargs):\n from .models import DeviceSettings\n\n device_settings, _ = DeviceSettings.objects.get_or_create(defaults=kwargs)\n if device_name is not None:\n device_settings.name = device_name\n device_settings.is_provisioned = True\n device_settings.save()\n\n\ndef valid_app_key(app_key):\n from .models import DeviceAppKey\n\n return app_key == DeviceAppKey.get_app_key()\n\n\ndef valid_app_key_on_request(request):\n return APP_KEY_COOKIE_NAME in request.COOKIES and valid_app_key(\n request.COOKIES.get(APP_KEY_COOKIE_NAME)\n )\n\n\ndef set_app_key_on_response(response):\n from .models import DeviceAppKey\n\n response.set_cookie(APP_KEY_COOKIE_NAME, DeviceAppKey.get_app_key())\n", "path": "kolibri/core/device/utils.py"}]} | 1,975 | 170 |
gh_patches_debug_5861 | rasdani/github-patches | git_diff | google__turbinia-929 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add --storage_file parameter to log2timeline
Add the `--storage_file` parameter, as the new plaso version will not work without it anymore.
https://github.com/google/turbinia/blob/23a97d9d826cbcc51e6b5dfd50d85251506bf242/turbinia/workers/plaso.py#L121
</issue>
<code>
[start of turbinia/workers/plaso.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import logging
21
22 from turbinia import config
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.evidence import PlasoFile
25 from turbinia.workers import TurbiniaTask
26 from turbinia.lib import file_helpers
27
28
29 class PlasoTask(TurbiniaTask):
30 """Task to run Plaso (log2timeline)."""
31
32 # Plaso requires the Disk to be attached, but doesn't require it be mounted.
33 REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]
34
35 TASK_CONFIG = {
36 # 'none' as indicated in the options for status_view within
37 # the Plaso documentation
38 'status_view': 'none',
39 'hashers': 'all',
40 'partitions': 'all',
41 'vss_stores': 'none',
42 'artifact_filters': None,
43 'file_filter': None,
44 'yara_rules': None
45 }
46
47 def build_plaso_command(self, base_command, conf):
48 """Builds a typical plaso command, contains logic specific to log2timeline.
49
50 Args:
51 base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)
52 conf (dict): Dynamic config containing the parameters for the command.
53
54 Returns:
55 String for valid Log2timeline command.
56 """
57 self.result.log(
58 'Generating Plaso command line from arguments: {0!s}'.format(conf),
59 level=logging.DEBUG)
60 cmd = [base_command]
61 for k, v in conf.items():
62 cli_args = [
63 'status_view', 'hashers', 'partitions', 'vss_stores',
64 'artifact_filters', 'file_filter', 'yara_rules'
65 ]
66 if (k not in cli_args or not v):
67 continue
68 prepend = '-'
69 if len(k) > 1:
70 prepend = '--'
71 if k == 'file_filter':
72 file_path = file_helpers.write_list_to_temp_file(
73 v, preferred_dir=self.tmp_dir)
74 cmd.extend(['-f', file_path])
75 elif k == 'yara_rules':
76 file_path = file_helpers.write_str_to_temp_file(
77 v, preferred_dir=self.tmp_dir)
78 cmd.extend(['--yara_rules', file_path])
79 elif isinstance(v, list):
80 cmd.extend([prepend + k, ','.join(v)])
81 elif isinstance(v, bool):
82 cmd.append(prepend + k)
83 elif isinstance(v, str):
84 cmd.extend([prepend + k, v])
85 return cmd
86
87 def run(self, evidence, result):
88 """Task that process data with Plaso.
89
90 Args:
91 evidence (Evidence object): The evidence we will process.
92 result (TurbiniaTaskResult): The object to place task results into.
93
94 Returns:
95 TurbiniaTaskResult object.
96 """
97
98 config.LoadConfig()
99
100 # Write plaso file into tmp_dir because sqlite has issues with some shared
101 # filesystems (e.g NFS).
102 plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))
103 plaso_evidence = PlasoFile(source_path=plaso_file)
104 plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))
105
106 cmd = self.build_plaso_command('log2timeline.py', self.task_config)
107
108 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
109 cmd.append('-d')
110
111 if evidence.credentials:
112 for credential_type, credential_data in evidence.credentials:
113 cmd.extend([
114 '--credential', '{0:s}:{1:s}'.format(
115 credential_type, credential_data)
116 ])
117
118 cmd.extend(['--temporary_directory', self.tmp_dir])
119 cmd.extend(['--logfile', plaso_log])
120 cmd.extend(['--unattended'])
121 cmd.extend([plaso_file, evidence.local_path])
122
123 result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
124 self.execute(
125 cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],
126 close=True)
127
128 return result
129
[end of turbinia/workers/plaso.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py
--- a/turbinia/workers/plaso.py
+++ b/turbinia/workers/plaso.py
@@ -118,7 +118,8 @@
cmd.extend(['--temporary_directory', self.tmp_dir])
cmd.extend(['--logfile', plaso_log])
cmd.extend(['--unattended'])
- cmd.extend([plaso_file, evidence.local_path])
+ cmd.extend(['--storage_file', plaso_file])
+ cmd.extend([evidence.local_path])
result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
self.execute(
| {"golden_diff": "diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py\n--- a/turbinia/workers/plaso.py\n+++ b/turbinia/workers/plaso.py\n@@ -118,7 +118,8 @@\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n- cmd.extend([plaso_file, evidence.local_path])\n+ cmd.extend(['--storage_file', plaso_file])\n+ cmd.extend([evidence.local_path])\n \n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n", "issue": "Add --storage_file parameter to log2timeline\nAdd `--storage_file` parameter as new plaso version will not work without this anymore.\r\n\r\nhttps://github.com/google/turbinia/blob/23a97d9d826cbcc51e6b5dfd50d85251506bf242/turbinia/workers/plaso.py#L121\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [state.ATTACHED, state.DECOMPRESSED]\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'partitions': 'all',\n 'vss_stores': 'none',\n 'artifact_filters': None,\n 'file_filter': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. 
log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'partitions', 'vss_stores',\n 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend([plaso_file, evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}]} | 1,956 | 154 |
gh_patches_debug_22297 | rasdani/github-patches | git_diff | canonical__snapcraft-4427 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`snapcraft remote-build --launchpad-timeout` does not work
### Bug Description
The argument `--launchpad-timeout` for remote-build stopped being accepted when snapcraft 7 was released.
Scope of work:
1. Add `--launchpad-timeout` as an argparse argument in `snapcraft/commands/remote.py`
2. Test that it gets passed to the new and fallback remote builders.
### To Reproduce
`snapcraft remote-build --launchpad-timeout <seconds>`
### Environment
n/a
### snapcraft.yaml
```shell
n/a
```
### Relevant log output
```shell
Usage: snapcraft [options] command [args]...
Try 'snapcraft remote-build -h' for help.
Error: unrecognized arguments: --launchpad-timeout 3600
```
### Additional context
_No response_
</issue>
<code>
[start of snapcraft/commands/remote.py]
1 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
2 #
3 # Copyright 2022 Canonical Ltd.
4 #
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License version 3 as
7 # published by the Free Software Foundation.
8 #
9 # This program is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with this program. If not, see <http://www.gnu.org/licenses/>.
16
17 """Snapcraft remote build command."""
18
19 import argparse
20 import os
21 import textwrap
22
23 from craft_cli import BaseCommand, emit
24 from craft_cli.helptexts import HIDDEN
25 from overrides import overrides
26
27 from snapcraft.legacy_cli import run_legacy
28 from snapcraft.parts.lifecycle import get_snap_project, process_yaml
29 from snapcraft.utils import confirm_with_user
30 from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError
31
32 _CONFIRMATION_PROMPT = (
33 "All data sent to remote builders will be publicly available. "
34 "Are you sure you want to continue?"
35 )
36
37
38 class RemoteBuildCommand(BaseCommand):
39 """Command passthrough for the remote-build command."""
40
41 name = "remote-build"
42 help_msg = "Dispatch a snap for remote build"
43 overview = textwrap.dedent(
44 """
45 Command remote-build sends the current project to be built
46 remotely. After the build is complete, packages for each
47 architecture are retrieved and will be available in the
48 local filesystem.
49
50 If not specified in the snapcraft.yaml file, the list of
51 architectures to build can be set using the --build-on option.
52 If both are specified, an error will occur.
53
54 Interrupted remote builds can be resumed using the --recover
55 option, followed by the build number informed when the remote
56 build was originally dispatched. The current state of the
57 remote build for each architecture can be checked using the
58 --status option."""
59 )
60
61 @overrides
62 def fill_parser(self, parser: argparse.ArgumentParser) -> None:
63 parser.add_argument(
64 "--recover", action="store_true", help="recover an interrupted build"
65 )
66 parser.add_argument(
67 "--status", action="store_true", help="display remote build status"
68 )
69 parser_target = parser.add_mutually_exclusive_group()
70 parser_target.add_argument(
71 "--build-on",
72 metavar="arch",
73 nargs="+",
74 help=HIDDEN,
75 )
76 parser_target.add_argument(
77 "--build-for",
78 metavar="arch",
79 nargs="+",
80 help="architecture to build for",
81 )
82 parser.add_argument(
83 "--build-id", metavar="build-id", help="specific build id to retrieve"
84 )
85 parser.add_argument(
86 "--launchpad-accept-public-upload",
87 action="store_true",
88 help="acknowledge that uploaded code will be publicly available.",
89 )
90
91 @overrides
92 def run(self, parsed_args):
93 if os.getenv("SUDO_USER") and os.geteuid() == 0:
94 emit.message(
95 "Running with 'sudo' may cause permission errors and is discouraged."
96 )
97
98 emit.message(
99 "snapcraft remote-build is experimental and is subject to change - use with caution."
100 )
101
102 if parsed_args.build_on:
103 emit.message("Use --build-for instead of --build-on")
104 parsed_args.build_for = parsed_args.build_on
105
106 if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(
107 _CONFIRMATION_PROMPT
108 ):
109 raise AcceptPublicUploadError()
110
111 snap_project = get_snap_project()
112 # TODO proper core22 support would mean we need to load the project
113 # yaml_data = process_yaml(snap_project.project_file)
114 # for now, only log explicitly that we are falling back to legacy to
115 # remote build for core22
116 process_yaml(snap_project.project_file)
117
118 emit.debug(
119 "core22 not yet supported in new code base: re-executing into legacy for remote-build"
120 )
121 run_legacy()
122
[end of snapcraft/commands/remote.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py
--- a/snapcraft/commands/remote.py
+++ b/snapcraft/commands/remote.py
@@ -55,7 +55,13 @@
option, followed by the build number informed when the remote
build was originally dispatched. The current state of the
remote build for each architecture can be checked using the
- --status option."""
+ --status option.
+
+ To set a timeout on the remote-build command, use the option
+ ``--launchpad-timeout=<seconds>``. The timeout is local, so the build on
+ launchpad will continue even if the local instance of snapcraft is
+ interrupted or times out.
+ """
)
@overrides
@@ -87,6 +93,13 @@
action="store_true",
help="acknowledge that uploaded code will be publicly available.",
)
+ parser.add_argument(
+ "--launchpad-timeout",
+ type=int,
+ default=0,
+ metavar="<seconds>",
+ help="Time in seconds to wait for launchpad to build.",
+ )
@overrides
def run(self, parsed_args):
| {"golden_diff": "diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py\n--- a/snapcraft/commands/remote.py\n+++ b/snapcraft/commands/remote.py\n@@ -55,7 +55,13 @@\n option, followed by the build number informed when the remote\n build was originally dispatched. The current state of the\n remote build for each architecture can be checked using the\n- --status option.\"\"\"\n+ --status option.\n+\n+ To set a timeout on the remote-build command, use the option\n+ ``--launchpad-timeout=<seconds>``. The timeout is local, so the build on\n+ launchpad will continue even if the local instance of snapcraft is\n+ interrupted or times out.\n+ \"\"\"\n )\n \n @overrides\n@@ -87,6 +93,13 @@\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n+ parser.add_argument(\n+ \"--launchpad-timeout\",\n+ type=int,\n+ default=0,\n+ metavar=\"<seconds>\",\n+ help=\"Time in seconds to wait for launchpad to build.\",\n+ )\n \n @overrides\n def run(self, parsed_args):\n", "issue": "`snapcraft remote-build --launchpad-timeout` does not work\n### Bug Description\n\nThe argument `--launchpad-timeout` for remote-build stopped being accepted when snapcraft 7 was release.\r\n\r\nScope of work:\r\n1. Add `--launchpad-timeout` as an argparse argument in `snapcraft/commands/remote.py`\r\n2. Test that it gets passed to the new and fallback remote builders.\n\n### To Reproduce\n\n`snapcraft remote-build --launchpad-timeout <seconds>`\n\n### Environment\n\nn/a\n\n### snapcraft.yaml\n\n```shell\nn/a\n```\n\n\n### Relevant log output\n\n```shell\nUsage: snapcraft [options] command [args]...\r\nTry 'snapcraft remote-build -h' for help.\r\n\r\nError: unrecognized arguments: --launchpad-timeout 3600\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright 2022 Canonical Ltd.\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Snapcraft remote build command.\"\"\"\n\nimport argparse\nimport os\nimport textwrap\n\nfrom craft_cli import BaseCommand, emit\nfrom craft_cli.helptexts import HIDDEN\nfrom overrides import overrides\n\nfrom snapcraft.legacy_cli import run_legacy\nfrom snapcraft.parts.lifecycle import get_snap_project, process_yaml\nfrom snapcraft.utils import confirm_with_user\nfrom snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n\n_CONFIRMATION_PROMPT = (\n \"All data sent to remote builders will be publicly available. \"\n \"Are you sure you want to continue?\"\n)\n\n\nclass RemoteBuildCommand(BaseCommand):\n \"\"\"Command passthrough for the remote-build command.\"\"\"\n\n name = \"remote-build\"\n help_msg = \"Dispatch a snap for remote build\"\n overview = textwrap.dedent(\n \"\"\"\n Command remote-build sends the current project to be built\n remotely. 
After the build is complete, packages for each\n architecture are retrieved and will be available in the\n local filesystem.\n\n If not specified in the snapcraft.yaml file, the list of\n architectures to build can be set using the --build-on option.\n If both are specified, an error will occur.\n\n Interrupted remote builds can be resumed using the --recover\n option, followed by the build number informed when the remote\n build was originally dispatched. The current state of the\n remote build for each architecture can be checked using the\n --status option.\"\"\"\n )\n\n @overrides\n def fill_parser(self, parser: argparse.ArgumentParser) -> None:\n parser.add_argument(\n \"--recover\", action=\"store_true\", help=\"recover an interrupted build\"\n )\n parser.add_argument(\n \"--status\", action=\"store_true\", help=\"display remote build status\"\n )\n parser_target = parser.add_mutually_exclusive_group()\n parser_target.add_argument(\n \"--build-on\",\n metavar=\"arch\",\n nargs=\"+\",\n help=HIDDEN,\n )\n parser_target.add_argument(\n \"--build-for\",\n metavar=\"arch\",\n nargs=\"+\",\n help=\"architecture to build for\",\n )\n parser.add_argument(\n \"--build-id\", metavar=\"build-id\", help=\"specific build id to retrieve\"\n )\n parser.add_argument(\n \"--launchpad-accept-public-upload\",\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n\n @overrides\n def run(self, parsed_args):\n if os.getenv(\"SUDO_USER\") and os.geteuid() == 0:\n emit.message(\n \"Running with 'sudo' may cause permission errors and is discouraged.\"\n )\n\n emit.message(\n \"snapcraft remote-build is experimental and is subject to change - use with caution.\"\n )\n\n if parsed_args.build_on:\n emit.message(\"Use --build-for instead of --build-on\")\n parsed_args.build_for = parsed_args.build_on\n\n if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(\n _CONFIRMATION_PROMPT\n ):\n raise AcceptPublicUploadError()\n\n snap_project = get_snap_project()\n # TODO proper core22 support would mean we need to load the project\n # yaml_data = process_yaml(snap_project.project_file)\n # for now, only log explicitly that we are falling back to legacy to\n # remote build for core22\n process_yaml(snap_project.project_file)\n\n emit.debug(\n \"core22 not yet supported in new code base: re-executing into legacy for remote-build\"\n )\n run_legacy()\n", "path": "snapcraft/commands/remote.py"}]} | 1,895 | 276 |
gh_patches_debug_10009 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5886 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AZURE_234 condition is incorrect
https://github.com/bridgecrewio/checkov/blob/dc6a7cd84c5e006c289f2710b960b7be96a29fae/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py#L20C110-L20C118
The condition used in this check is being triggered for all `azurerm_security_center_subscription_pricing` resources with **any** `resource_type`. For example,
```
resource "azurerm_security_center_subscription_pricing" "mdc_srvrs" {
tier = "Standard"
resource_type = "VirtualMachines"
subplan = "P2"
}
```
Would raise the `CKV_AZURE_234` finding. For any other `resource_type` we get a failure.
</issue>
<code>
[start of checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py]
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.common.models.enums import CheckCategories, CheckResult
6 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
7
8
9 class AzureDefenderDisabledForResManager(BaseResourceCheck):
10 def __init__(self) -> None:
11 name = "Ensure that Azure Defender for cloud is set to On for Resource Manager"
12 id = "CKV_AZURE_234"
13 supported_resources = ("azurerm_security_center_subscription_pricing",)
14 categories = (CheckCategories.GENERAL_SECURITY,)
15 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
16
17 def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
18 return (
19 CheckResult.PASSED
20 if conf.get("resource_type", [""])[0].lower() == "arm" and conf.get("tier", [""])[0].lower() == "standard"
21 else CheckResult.FAILED
22 )
23
24 def get_evaluated_keys(self) -> list[str]:
25 return ["resource_type", "tier"]
26
27
28 check = AzureDefenderDisabledForResManager()
29
[end of checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py
--- a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py
+++ b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py
@@ -16,9 +16,9 @@
def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
return (
- CheckResult.PASSED
- if conf.get("resource_type", [""])[0].lower() == "arm" and conf.get("tier", [""])[0].lower() == "standard"
- else CheckResult.FAILED
+ CheckResult.FAILED
+ if conf.get("resource_type", [""])[0].lower() == "arm" and conf.get("tier", [""])[0].lower() != "standard"
+ else CheckResult.PASSED
)
def get_evaluated_keys(self) -> list[str]:
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py\n--- a/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py\n+++ b/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py\n@@ -16,9 +16,9 @@\n \n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n return (\n- CheckResult.PASSED\n- if conf.get(\"resource_type\", [\"\"])[0].lower() == \"arm\" and conf.get(\"tier\", [\"\"])[0].lower() == \"standard\"\n- else CheckResult.FAILED\n+ CheckResult.FAILED\n+ if conf.get(\"resource_type\", [\"\"])[0].lower() == \"arm\" and conf.get(\"tier\", [\"\"])[0].lower() != \"standard\"\n+ else CheckResult.PASSED\n )\n \n def get_evaluated_keys(self) -> list[str]:\n", "issue": "CKV_AZURE_234 condition is incorrect\nhttps://github.com/bridgecrewio/checkov/blob/dc6a7cd84c5e006c289f2710b960b7be96a29fae/checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py#L20C110-L20C118\r\n\r\nThe condition used in this check is being triggered for all `azurerm_security_center_subscription_pricing` resources with **any** `resource_type`. For example, \r\n\r\n```\r\nresource \"azurerm_security_center_subscription_pricing\" \"mdc_srvrs\" {\r\n tier = \"Standard\"\r\n resource_type = \"VirtualMachines\"\r\n subplan = \"P2\"\r\n```\r\n\r\nWould raise the `CKV_AZURE_234` finding. For any other `resource_type` we get a failure.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass AzureDefenderDisabledForResManager(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure that Azure Defender for cloud is set to On for Resource Manager\"\n id = \"CKV_AZURE_234\"\n supported_resources = (\"azurerm_security_center_subscription_pricing\",)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n return (\n CheckResult.PASSED\n if conf.get(\"resource_type\", [\"\"])[0].lower() == \"arm\" and conf.get(\"tier\", [\"\"])[0].lower() == \"standard\"\n else CheckResult.FAILED\n )\n\n def get_evaluated_keys(self) -> list[str]:\n return [\"resource_type\", \"tier\"]\n\n\ncheck = AzureDefenderDisabledForResManager()\n", "path": "checkov/terraform/checks/resource/azure/AzureDefenderDisabledForResManager.py"}]} | 1,070 | 243 |
gh_patches_debug_4237 | rasdani/github-patches | git_diff | great-expectations__great_expectations-2600 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG with dependency: SQLAlchemy 1.4.0 for user installation
**Describe the bug**
SQLAlchemy `>= 1.4.0` breaks the imports in
https://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/great_expectations/dataset/sqlalchemy_dataset.py#L42
This isn't showing up in the tests since the tests [pin the version](https://github.com/great-expectations/great_expectations/pull/2547) from above whereas the user installation doesn't
https://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/setup.py#L25
The stack trace is from trying to scaffold a new suite is:
```
...
File ".../great_expectations/data_context/data_context.py", line 1421, in get_batch
return self._get_batch_v2(
File ".../great_expectations/data_context/data_context.py", line 1147, in _get_batch_v2
return validator.get_dataset()
File ".../great_expectations/validator/validator.py", line 1431, in get_dataset
return self.expectation_engine(
File ".../great_expectations/dataset/sqlalchemy_dataset.py", line 508, in __init__
self._table = sa.Table(table_name, sa.MetaData(), schema=schema)
AttributeError: 'NoneType' object has no attribute 'Table'
```
As a workaround until the code is upgraded to work with `1.4.0`, the version installed should be `<1.4.0`.
</issue>
<code>
[start of setup.py]
1 from setuptools import find_packages, setup
2
3 import versioneer
4
5 # Parse requirements.txt
6 with open("requirements.txt") as f:
7 required = f.read().splitlines()
8
9 # try:
10 # import pypandoc
11 # long_description = pypandoc.convert_file('README.md', 'rst')
12 # except (IOError, ImportError):
13 long_description = "Always know what to expect from your data. (See https://github.com/great-expectations/great_expectations for full description)."
14
15 config = {
16 "description": "Always know what to expect from your data.",
17 "author": "The Great Expectations Team",
18 "url": "https://github.com/great-expectations/great_expectations",
19 "author_email": "[email protected]",
20 "version": versioneer.get_version(),
21 "cmdclass": versioneer.get_cmdclass(),
22 "install_requires": required,
23 "extras_require": {
24 "spark": ["pyspark>=2.3.2"],
25 "sqlalchemy": ["sqlalchemy>=1.2"],
26 "airflow": ["apache-airflow[s3]>=1.9.0", "boto3>=1.7.3"],
27 "gcp": [
28 "google-cloud>=0.34.0",
29 "google-cloud-storage>=1.28.0",
30 "google-cloud-secret-manager>=1.0.0",
31 "pybigquery==0.4.15",
32 ],
33 "redshift": ["psycopg2>=2.8"],
34 "s3": ["boto3>=1.14"],
35 "aws_secrets": ["boto3>=1.8.7"],
36 "azure_secrets": ["azure-identity>=1.0.0", "azure-keyvault-secrets>=4.0.0"],
37 "snowflake": ["snowflake-sqlalchemy>=1.2"],
38 },
39 "packages": find_packages(exclude=["contrib*", "docs*", "tests*", "examples*"]),
40 "entry_points": {
41 "console_scripts": ["great_expectations=great_expectations.cli:main"]
42 },
43 "name": "great_expectations",
44 "long_description": long_description,
45 "license": "Apache-2.0",
46 "keywords": "data science testing pipeline data quality dataquality validation datavalidation",
47 "include_package_data": True,
48 "classifiers": [
49 "Development Status :: 4 - Beta",
50 "Intended Audience :: Developers",
51 "Intended Audience :: Science/Research",
52 "Intended Audience :: Other Audience",
53 "Topic :: Scientific/Engineering",
54 "Topic :: Software Development",
55 "Topic :: Software Development :: Testing",
56 "License :: OSI Approved :: Apache Software License",
57 "Programming Language :: Python :: 3",
58 "Programming Language :: Python :: 3.6",
59 "Programming Language :: Python :: 3.7",
60 "Programming Language :: Python :: 3.8",
61 ],
62 }
63
64 setup(**config)
65
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,7 +22,7 @@
"install_requires": required,
"extras_require": {
"spark": ["pyspark>=2.3.2"],
- "sqlalchemy": ["sqlalchemy>=1.2"],
+ "sqlalchemy": ["sqlalchemy>=1.2,<1.4.0"],
"airflow": ["apache-airflow[s3]>=1.9.0", "boto3>=1.7.3"],
"gcp": [
"google-cloud>=0.34.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,7 +22,7 @@\n \"install_requires\": required,\n \"extras_require\": {\n \"spark\": [\"pyspark>=2.3.2\"],\n- \"sqlalchemy\": [\"sqlalchemy>=1.2\"],\n+ \"sqlalchemy\": [\"sqlalchemy>=1.2,<1.4.0\"],\n \"airflow\": [\"apache-airflow[s3]>=1.9.0\", \"boto3>=1.7.3\"],\n \"gcp\": [\n \"google-cloud>=0.34.0\",\n", "issue": "BUG with dependency: SQLAlchemy 1.4.0 for user installation\n**Describe the bug**\r\n\r\nSQLAlchemy `>= 1.4.0` breaks the imports in \r\nhttps://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/great_expectations/dataset/sqlalchemy_dataset.py#L42\r\n\r\nThis isn't showing up in the tests since the tests [pin the version](https://github.com/great-expectations/great_expectations/pull/2547) from above whereas the user installation doesn't\r\nhttps://github.com/great-expectations/great_expectations/blob/18234477693306e5e3845e69a1be78a85c639295/setup.py#L25\r\n\r\nThe stack trace is from trying to scaffold a new suite is:\r\n```\r\n...\r\n File \".../great_expectations/data_context/data_context.py\", line 1421, in get_batch\r\n return self._get_batch_v2(\r\n File \".../great_expectations/data_context/data_context.py\", line 1147, in _get_batch_v2\r\n return validator.get_dataset()\r\n File \".../great_expectations/validator/validator.py\", line 1431, in get_dataset\r\n return self.expectation_engine(\r\n File \".../great_expectations/dataset/sqlalchemy_dataset.py\", line 508, in __init__\r\n self._table = sa.Table(table_name, sa.MetaData(), schema=schema)\r\nAttributeError: 'NoneType' object has no attribute 'Table'\r\n```\r\n\r\nAs a work around until the code is upgraded to work with `1.4.0` the version installed should be `<1.4.0`.\n", "before_files": [{"content": "from setuptools import find_packages, setup\n\nimport versioneer\n\n# Parse requirements.txt\nwith open(\"requirements.txt\") as f:\n required = f.read().splitlines()\n\n# try:\n# import pypandoc\n# long_description = pypandoc.convert_file('README.md', 'rst')\n# except (IOError, ImportError):\nlong_description = \"Always know what to expect from your data. 
(See https://github.com/great-expectations/great_expectations for full description).\"\n\nconfig = {\n \"description\": \"Always know what to expect from your data.\",\n \"author\": \"The Great Expectations Team\",\n \"url\": \"https://github.com/great-expectations/great_expectations\",\n \"author_email\": \"[email protected]\",\n \"version\": versioneer.get_version(),\n \"cmdclass\": versioneer.get_cmdclass(),\n \"install_requires\": required,\n \"extras_require\": {\n \"spark\": [\"pyspark>=2.3.2\"],\n \"sqlalchemy\": [\"sqlalchemy>=1.2\"],\n \"airflow\": [\"apache-airflow[s3]>=1.9.0\", \"boto3>=1.7.3\"],\n \"gcp\": [\n \"google-cloud>=0.34.0\",\n \"google-cloud-storage>=1.28.0\",\n \"google-cloud-secret-manager>=1.0.0\",\n \"pybigquery==0.4.15\",\n ],\n \"redshift\": [\"psycopg2>=2.8\"],\n \"s3\": [\"boto3>=1.14\"],\n \"aws_secrets\": [\"boto3>=1.8.7\"],\n \"azure_secrets\": [\"azure-identity>=1.0.0\", \"azure-keyvault-secrets>=4.0.0\"],\n \"snowflake\": [\"snowflake-sqlalchemy>=1.2\"],\n },\n \"packages\": find_packages(exclude=[\"contrib*\", \"docs*\", \"tests*\", \"examples*\"]),\n \"entry_points\": {\n \"console_scripts\": [\"great_expectations=great_expectations.cli:main\"]\n },\n \"name\": \"great_expectations\",\n \"long_description\": long_description,\n \"license\": \"Apache-2.0\",\n \"keywords\": \"data science testing pipeline data quality dataquality validation datavalidation\",\n \"include_package_data\": True,\n \"classifiers\": [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Other Audience\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Testing\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n}\n\nsetup(**config)\n", "path": "setup.py"}]} | 1,711 | 142 |
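
The golden patch above simply caps the dependency in `setup.py`. A minimal sketch of the same idea expressed as an import-time guard; the guard function and its message are illustrative assumptions, not great_expectations' actual code, though `sqlalchemy.__version__` is a real attribute (this requires SQLAlchemy to be installed):

```python
# Hypothetical fail-fast guard for the "<1.4.0" workaround: raise a clear
# error instead of the confusing AttributeError from the stack trace above.
import sqlalchemy


def check_sqlalchemy_version() -> None:
    # Compare only the (major, minor) components of the version string.
    major_minor = tuple(int(part) for part in sqlalchemy.__version__.split(".")[:2])
    if major_minor >= (1, 4):
        raise RuntimeError(
            f"sqlalchemy {sqlalchemy.__version__} is not supported; "
            "install 'sqlalchemy>=1.2,<1.4.0'"
        )


check_sqlalchemy_version()
```
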
gh_patches_debug_26787 | rasdani/github-patches | git_diff | conan-io__conan-center-index-7037 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[request] rapidcheck/20210702
### Package Details
* Package Name/Version: **rapidcheck/20210702**
The above-mentioned version fixes issue #5205 but is not yet available as a recipe. Please add this version.
</issue>
<code>
[start of recipes/rapidcheck/all/conanfile.py]
1 from conans import CMake, ConanFile, tools
2 from conans.errors import ConanInvalidConfiguration
3 import os
4 import textwrap
5
6 required_conan_version = ">=1.33.0"
7
8
9 class RapidcheckConan(ConanFile):
10 name = "rapidcheck"
11 description = "QuickCheck clone for C++ with the goal of being simple to use with as little boilerplate as possible"
12 url = "https://github.com/conan-io/conan-center-index"
13 homepage = "https://github.com/emil-e/rapidcheck"
14 license = "BSD-2-Clause"
15 topics = "quickcheck", "testing", "property-testing"
16 exports_sources = "CMakeLists.txt"
17 generators = "cmake"
18 settings = "os", "arch", "compiler", "build_type"
19 options = {
20 "shared": [True, False],
21 "fPIC": [True, False],
22 "enable_rtti": [True, False],
23 }
24 default_options = {
25 "shared": False,
26 "fPIC": True,
27 "enable_rtti": True,
28 }
29
30 _cmake = None
31
32 @property
33 def _source_subfolder(self):
34 return "source_subfolder"
35
36 @property
37 def _build_subfolder(self):
38 return "build_subfolder"
39
40 def config_options(self):
41 if self.settings.os == "Windows":
42 del self.options.fPIC
43
44 def configure(self):
45 if self.options.shared:
46 del self.options.fPIC
47
48 def validate(self):
49 if self.settings.compiler.get_safe("cppstd"):
50 tools.check_min_cppstd(self, 11)
51 if self.settings.compiler == "Visual Studio" and self.options.shared:
52 raise ConanInvalidConfiguration("shared is not supported using Visual Studio")
53
54 def source(self):
55 tools.get(**self.conan_data["sources"][self.version],
56 destination=self._source_subfolder, strip_root=True)
57
58 def _configure_cmake(self):
59 if self._cmake:
60 return self._cmake
61 self._cmake = CMake(self)
62 self._cmake.definitions["RC_ENABLE_RTTI"] = self.options.enable_rtti
63 self._cmake.definitions["RC_ENABLE_TESTS"] = False
64 self._cmake.definitions["RC_ENABLE_EXAMPLES"] = False
65 self._cmake.configure(build_folder=self._build_subfolder)
66 return self._cmake
67
68 def build(self):
69 cmake = self._configure_cmake()
70 cmake.build()
71
72 def package(self):
73 self.copy(pattern="LICENSE*", src=self._source_subfolder, dst="licenses")
74 cmake = self._configure_cmake()
75 cmake.install()
76 tools.rmdir(os.path.join(self.package_folder, "share"))
77 self._create_cmake_module_alias_targets(
78 os.path.join(self.package_folder, self._module_file_rel_path),
79 {"rapidcheck": "rapidcheck::rapidcheck"}
80 )
81
82 @staticmethod
83 def _create_cmake_module_alias_targets(module_file, targets):
84 content = ""
85 for alias, aliased in targets.items():
86 content += textwrap.dedent("""\
87 if(TARGET {aliased} AND NOT TARGET {alias})
88 add_library({alias} INTERFACE IMPORTED)
89 set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})
90 endif()
91 """.format(alias=alias, aliased=aliased))
92 tools.save(module_file, content)
93
94 @property
95 def _module_subfolder(self):
96 return os.path.join("lib", "cmake")
97
98 @property
99 def _module_file_rel_path(self):
100 return os.path.join(self._module_subfolder,
101 "conan-official-{}-targets.cmake".format(self.name))
102
103 def package_info(self):
104 self.cpp_info.names["cmake_find_package"] = "rapidcheck"
105 self.cpp_info.names["cmake_find_package_multi"] = "rapidcheck"
106 self.cpp_info.builddirs.append(self._module_subfolder)
107 self.cpp_info.build_modules["cmake_find_package"] = [self._module_file_rel_path]
108 self.cpp_info.build_modules["cmake_find_package_multi"] = [self._module_file_rel_path]
109 self.cpp_info.libs = ["rapidcheck"]
110 if tools.Version(self.version) < "20201218":
111 if self.options.enable_rtti:
112 self.cpp_info.defines.append("RC_USE_RTTI")
113 else:
114 if not self.options.enable_rtti:
115 self.cpp_info.defines.append("RC_DONT_USE_RTTI")
116
[end of recipes/rapidcheck/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/rapidcheck/all/conanfile.py b/recipes/rapidcheck/all/conanfile.py
--- a/recipes/rapidcheck/all/conanfile.py
+++ b/recipes/rapidcheck/all/conanfile.py
@@ -51,6 +51,9 @@
if self.settings.compiler == "Visual Studio" and self.options.shared:
raise ConanInvalidConfiguration("shared is not supported using Visual Studio")
+ if 'cci' not in self.version:
+ self.output.warn("This version has been deprecated in favor of '{}/cci.{}'".format(self.name, self.version))
+
def source(self):
tools.get(**self.conan_data["sources"][self.version],
destination=self._source_subfolder, strip_root=True)
@@ -107,7 +110,11 @@
self.cpp_info.build_modules["cmake_find_package"] = [self._module_file_rel_path]
self.cpp_info.build_modules["cmake_find_package_multi"] = [self._module_file_rel_path]
self.cpp_info.libs = ["rapidcheck"]
- if tools.Version(self.version) < "20201218":
+ # Remove after 9473 is merged.
+ version = self.version
+ if version.startswith("cci."):
+ version = version[4:]
+ if version < "20201218":
if self.options.enable_rtti:
self.cpp_info.defines.append("RC_USE_RTTI")
else:
| {"golden_diff": "diff --git a/recipes/rapidcheck/all/conanfile.py b/recipes/rapidcheck/all/conanfile.py\n--- a/recipes/rapidcheck/all/conanfile.py\n+++ b/recipes/rapidcheck/all/conanfile.py\n@@ -51,6 +51,9 @@\n if self.settings.compiler == \"Visual Studio\" and self.options.shared:\n raise ConanInvalidConfiguration(\"shared is not supported using Visual Studio\")\n \n+ if 'cci' not in self.version:\n+ self.output.warn(\"This version has been deprecated in favor of '{}/cci.{}'\".format(self.name, self.version))\n+\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n@@ -107,7 +110,11 @@\n self.cpp_info.build_modules[\"cmake_find_package\"] = [self._module_file_rel_path]\n self.cpp_info.build_modules[\"cmake_find_package_multi\"] = [self._module_file_rel_path]\n self.cpp_info.libs = [\"rapidcheck\"]\n- if tools.Version(self.version) < \"20201218\":\n+ # Remove after 9473 is merged.\n+ version = self.version\n+ if version.startswith(\"cci.\"):\n+ version = version[4:]\n+ if version < \"20201218\":\n if self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_USE_RTTI\")\n else:\n", "issue": "[request] rapidcheck/20210702\n### Package Details\r\n * Package Name/Version: **rapidcheck/20210702**\r\n\r\nThe above-mentioned version solves issue #5205 and is not yet available as a recipe. Please add this version.\r\n\n", "before_files": [{"content": "from conans import CMake, ConanFile, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport textwrap\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass RapidcheckConan(ConanFile):\n name = \"rapidcheck\"\n description = \"QuickCheck clone for C++ with the goal of being simple to use with as little boilerplate as possible\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/emil-e/rapidcheck\"\n license = \"BSD-2-Clause\"\n topics = \"quickcheck\", \"testing\", \"property-testing\"\n exports_sources = \"CMakeLists.txt\"\n generators = \"cmake\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"enable_rtti\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"enable_rtti\": True,\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n tools.check_min_cppstd(self, 11)\n if self.settings.compiler == \"Visual Studio\" and self.options.shared:\n raise ConanInvalidConfiguration(\"shared is not supported using Visual Studio\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"RC_ENABLE_RTTI\"] = self.options.enable_rtti\n self._cmake.definitions[\"RC_ENABLE_TESTS\"] = False\n self._cmake.definitions[\"RC_ENABLE_EXAMPLES\"] = False\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE*\", 
src=self._source_subfolder, dst=\"licenses\")\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n self._create_cmake_module_alias_targets(\n os.path.join(self.package_folder, self._module_file_rel_path),\n {\"rapidcheck\": \"rapidcheck::rapidcheck\"}\n )\n\n @staticmethod\n def _create_cmake_module_alias_targets(module_file, targets):\n content = \"\"\n for alias, aliased in targets.items():\n content += textwrap.dedent(\"\"\"\\\n if(TARGET {aliased} AND NOT TARGET {alias})\n add_library({alias} INTERFACE IMPORTED)\n set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})\n endif()\n \"\"\".format(alias=alias, aliased=aliased))\n tools.save(module_file, content)\n\n @property\n def _module_subfolder(self):\n return os.path.join(\"lib\", \"cmake\")\n\n @property\n def _module_file_rel_path(self):\n return os.path.join(self._module_subfolder,\n \"conan-official-{}-targets.cmake\".format(self.name))\n\n def package_info(self):\n self.cpp_info.names[\"cmake_find_package\"] = \"rapidcheck\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"rapidcheck\"\n self.cpp_info.builddirs.append(self._module_subfolder)\n self.cpp_info.build_modules[\"cmake_find_package\"] = [self._module_file_rel_path]\n self.cpp_info.build_modules[\"cmake_find_package_multi\"] = [self._module_file_rel_path]\n self.cpp_info.libs = [\"rapidcheck\"]\n if tools.Version(self.version) < \"20201218\":\n if self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_USE_RTTI\")\n else:\n if not self.options.enable_rtti:\n self.cpp_info.defines.append(\"RC_DONT_USE_RTTI\")\n", "path": "recipes/rapidcheck/all/conanfile.py"}]} | 1,839 | 330 |
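
The interesting part of the golden diff is the version normalization: it strips a `cci.` prefix before comparing date-stamped versions as strings. A standalone sketch of that comparison; the helper function is an illustration distilled from the recipe, and the lexicographic `<` is safe here only because the stamps are fixed-width `YYYYMMDD` strings:

```python
# Standalone sketch of the version handling added in the golden diff.
def rtti_define(version: str, enable_rtti: bool) -> str:
    """Return the preprocessor define the recipe would export, or ''."""
    if version.startswith("cci."):
        version = version[4:]  # "cci.20210702" -> "20210702"
    if version < "20201218":  # fixed-width date stamps compare correctly as strings
        return "RC_USE_RTTI" if enable_rtti else ""
    return "" if enable_rtti else "RC_DONT_USE_RTTI"


assert rtti_define("cci.20210702", enable_rtti=False) == "RC_DONT_USE_RTTI"
assert rtti_define("20200131", enable_rtti=True) == "RC_USE_RTTI"
```
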
gh_patches_debug_4594 | rasdani/github-patches | git_diff | google__turbinia-1054 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FileArtifactExtraction Tasks failing
From a recent run:
```
* FileArtifactExtractionTask: image_export.py failed for artifact TomcatFiles - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact SshdConfigFile - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact RedisConfigFile - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact JupyterConfigFile - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact ApacheAccessLogs - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact NginxAccessLogs - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact GKEDockerContainerLogs - local_path not provided.
* FileArtifactExtractionTask: image_export.py failed for artifact LinuxScheduleFiles - local_path not provided.
```
</issue>
<code>
[start of turbinia/workers/artifact.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20
21 from turbinia import config
22 from turbinia.evidence import ExportedFileArtifact
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.workers import TurbiniaTask
25
26
27 class FileArtifactExtractionTask(TurbiniaTask):
28 """Task to run image_export (log2timeline)."""
29
30 REQUIRED_STATES = [state.ATTACHED]
31
32 def __init__(self, artifact_name='FileArtifact'):
33 super(FileArtifactExtractionTask, self).__init__()
34 self.artifact_name = artifact_name
35
36 def run(self, evidence, result):
37 """Extracts artifacts using Plaso image_export.py.
38
39 Args:
40 evidence (Evidence object): The evidence we will process.
41 result (TurbiniaTaskResult): The object to place task results into.
42
43 Returns:
44 TurbiniaTaskResult object.
45 """
46 config.LoadConfig()
47
48 export_directory = os.path.join(self.output_dir, 'export')
49 image_export_log = os.path.join(
50 self.output_dir, '{0:s}.log'.format(self.id))
51
52 cmd = [
53 'sudo',
54 'image_export.py',
55 '--no-hashes',
56 '--logfile',
57 image_export_log,
58 '-w',
59 export_directory,
60 '--partitions',
61 'all',
62 '--volumes',
63 'all',
64 '--unattended',
65 '--artifact_filters',
66 self.artifact_name,
67 ]
68 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
69 cmd.append('-d')
70
71 if evidence.credentials:
72 for credential_type, credential_data in evidence.credentials:
73 cmd.extend([
74 '--credential', '{0:s}:{1:s}'.format(
75 credential_type, credential_data)
76 ])
77
78 # Path to the source image/directory.
79 cmd.append(evidence.local_path)
80 if not evidence.local_path:
81 result.log('Tried to run image_export without local_path')
82 result.close(
83 self, False,
84 'image_export.py failed for artifact {0:s} - local_path not provided.'
85 .format(self.artifact_name))
86 return result
87
88 result.log('Running image_export as [{0:s}]'.format(' '.join(cmd)))
89
90 ret, _ = self.execute(cmd, result, log_files=[image_export_log])
91 if ret:
92 result.close(
93 self, False, 'image_export.py failed for artifact {0:s}.'.format(
94 self.artifact_name))
95 return result
96
97 for dirpath, _, filenames in os.walk(export_directory):
98 for filename in filenames:
99 exported_artifact = ExportedFileArtifact(
100 artifact_name=self.artifact_name, source_path=os.path.join(
101 dirpath, filename))
102 result.log('Adding artifact {0:s}'.format(filename))
103 result.add_evidence(exported_artifact, evidence.config)
104
105 result.close(
106 self, True, 'Extracted {0:d} new {1:s} artifacts'.format(
107 len(result.evidence), self.artifact_name))
108
109 return result
110
[end of turbinia/workers/artifact.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/workers/artifact.py b/turbinia/workers/artifact.py
--- a/turbinia/workers/artifact.py
+++ b/turbinia/workers/artifact.py
@@ -27,7 +27,7 @@
class FileArtifactExtractionTask(TurbiniaTask):
"""Task to run image_export (log2timeline)."""
- REQUIRED_STATES = [state.ATTACHED]
+ REQUIRED_STATES = [state.ATTACHED, state.CONTAINER_MOUNTED]
def __init__(self, artifact_name='FileArtifact'):
super(FileArtifactExtractionTask, self).__init__()
| {"golden_diff": "diff --git a/turbinia/workers/artifact.py b/turbinia/workers/artifact.py\n--- a/turbinia/workers/artifact.py\n+++ b/turbinia/workers/artifact.py\n@@ -27,7 +27,7 @@\n class FileArtifactExtractionTask(TurbiniaTask):\n \"\"\"Task to run image_export (log2timeline).\"\"\"\n \n- REQUIRED_STATES = [state.ATTACHED]\n+ REQUIRED_STATES = [state.ATTACHED, state.CONTAINER_MOUNTED]\n \n def __init__(self, artifact_name='FileArtifact'):\n super(FileArtifactExtractionTask, self).__init__()\n", "issue": "FileArtifactExtraction Tasks failing\nFrom a recent run:\r\n```\r\n* FileArtifactExtractionTask: image_export.py failed for artifact TomcatFiles - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact SshdConfigFile - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact RedisConfigFile - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact JupyterConfigFile - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact ApacheAccessLogs - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact NginxAccessLogs - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact GKEDockerContainerLogs - local_path not provided.\r\n* FileArtifactExtractionTask: image_export.py failed for artifact LinuxScheduleFiles - local_path not provided.\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\n\nfrom turbinia import config\nfrom turbinia.evidence import ExportedFileArtifact\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.workers import TurbiniaTask\n\n\nclass FileArtifactExtractionTask(TurbiniaTask):\n \"\"\"Task to run image_export (log2timeline).\"\"\"\n\n REQUIRED_STATES = [state.ATTACHED]\n\n def __init__(self, artifact_name='FileArtifact'):\n super(FileArtifactExtractionTask, self).__init__()\n self.artifact_name = artifact_name\n\n def run(self, evidence, result):\n \"\"\"Extracts artifacts using Plaso image_export.py.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n config.LoadConfig()\n\n export_directory = os.path.join(self.output_dir, 'export')\n image_export_log = os.path.join(\n self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = [\n 'sudo',\n 'image_export.py',\n '--no-hashes',\n '--logfile',\n image_export_log,\n '-w',\n export_directory,\n '--partitions',\n 'all',\n '--volumes',\n 'all',\n '--unattended',\n '--artifact_filters',\n self.artifact_name,\n ]\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in 
evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n # Path to the source image/directory.\n cmd.append(evidence.local_path)\n if not evidence.local_path:\n result.log('Tried to run image_export without local_path')\n result.close(\n self, False,\n 'image_export.py failed for artifact {0:s} - local_path not provided.'\n .format(self.artifact_name))\n return result\n\n result.log('Running image_export as [{0:s}]'.format(' '.join(cmd)))\n\n ret, _ = self.execute(cmd, result, log_files=[image_export_log])\n if ret:\n result.close(\n self, False, 'image_export.py failed for artifact {0:s}.'.format(\n self.artifact_name))\n return result\n\n for dirpath, _, filenames in os.walk(export_directory):\n for filename in filenames:\n exported_artifact = ExportedFileArtifact(\n artifact_name=self.artifact_name, source_path=os.path.join(\n dirpath, filename))\n result.log('Adding artifact {0:s}'.format(filename))\n result.add_evidence(exported_artifact, evidence.config)\n\n result.close(\n self, True, 'Extracted {0:d} new {1:s} artifacts'.format(\n len(result.evidence), self.artifact_name))\n\n return result\n", "path": "turbinia/workers/artifact.py"}]} | 1,765 | 141 |
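
The one-line fix adds `CONTAINER_MOUNTED` to `REQUIRED_STATES`, so the worker mounts the evidence (which is what populates `local_path`) before the task runs. A toy model of that state-gating idea; everything here except the `EvidenceState` member names quoted above is an illustrative assumption, not Turbinia's real worker API:

```python
# Toy model: the worker only "mounts" evidence (populating local_path)
# when the task declares that it requires the mounted state.
from enum import Enum, auto


class EvidenceState(Enum):
    ATTACHED = auto()
    CONTAINER_MOUNTED = auto()


class Evidence:
    def __init__(self):
        self.local_path = None

    def prepare(self, required_states):
        self.local_path = None
        if EvidenceState.CONTAINER_MOUNTED in required_states:
            self.local_path = "/mnt/evidence"  # illustrative mount point


evidence = Evidence()
evidence.prepare([EvidenceState.ATTACHED])  # old REQUIRED_STATES
assert evidence.local_path is None          # -> "local_path not provided"
evidence.prepare([EvidenceState.ATTACHED, EvidenceState.CONTAINER_MOUNTED])
assert evidence.local_path == "/mnt/evidence"  # -> image_export.py can run
```
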
gh_patches_debug_34580 | rasdani/github-patches | git_diff | DDMAL__CantusDB-118 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Source create page layout could be improved
At `source-create/`, the form is not taking up all the horizontal space. This should be an easy fix by changing the column width in Bootstrap.
It would be better if we kept the same layout as the old Cantus. For example, the first three fields should be on the same row. This would make the form look more compact.
Plus, the width of the fields should be adjusted according to the expected content. For example, the `summary` field should be wider than `date`.
The look is not the most important thing, but in this case, a little bit more polishing can go a long way.
</issue>
<code>
[start of django/cantusdb_project/main_app/models/source.py]
1 from django.db import models
2 from main_app.models import BaseModel
3 from django.contrib.auth import get_user_model
4
5
6 class Source(BaseModel):
7 cursus_choices = [("Monastic", "Monastic"), ("Secular", "Secular")]
8 source_status_choices = [
9 (
10 "Editing process (not all the fields have been proofread)",
11 "Editing process (not all the fields have been proofread)",
12 ),
13 ("Published / Complete", "Published / Complete"),
14 ("Published / Proofread pending", "Published / Proofread pending"),
15 ("Unpublished / Editing process", "Unpublished / Editing process"),
16 ("Unpublished / Indexing process", "Unpublished / Indexing process"),
17 ("Unpublished / Proofread pending", "Unpublished / Proofread pending"),
18 ("Unpublished / Proofreading process", "Unpublished / Proofreading process"),
19 ]
20
21 # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list
22 public = models.BooleanField(blank=True, null=True)
23 # sources with visible=False can be accessed by typing in the url, but do not appear in source list
24 visible = models.BooleanField(blank=True, null=True)
25 title = models.CharField(
26 max_length=255,
27 help_text="Full Manuscript Identification (City, Archive, Shelf-mark)",
28 )
29 # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark
30 # it is a human-readable ID for a source
31 siglum = models.CharField(max_length=63, null=True, blank=True)
32 # the RISM siglum uniquely identifies a library or holding institution
33 rism_siglum = models.ForeignKey(
34 "RismSiglum", on_delete=models.PROTECT, null=True, blank=True,
35 )
36 provenance = models.ForeignKey(
37 "Provenance",
38 on_delete=models.PROTECT,
39 help_text="If the origin is unknown, select a location where the source was "
40 "used later in its lifetime and provide details in the "
41 '"Provenance notes" field.',
42 null=True,
43 blank=True,
44 )
45 provenance_notes = models.TextField(
46 blank=True,
47 null=True,
48 help_text="More exact indication of the provenance (if necessary)",
49 )
50 full_source = models.BooleanField(blank=True, null=True)
51 date = models.CharField(
52 blank=True,
53 null=True,
54 max_length=63,
55 help_text='Date of the manuscript (e.g. "1200s", "1300-1350", etc.)',
56 )
57 century = models.ManyToManyField("Century", related_name="sources")
58 notation = models.ManyToManyField("Notation", related_name="sources")
59 cursus = models.CharField(
60 blank=True, null=True, choices=cursus_choices, max_length=63
61 )
62 # TODO: Fill this field up with JSON info when I have access to the Users
63 current_editors = models.ManyToManyField(get_user_model(), related_name="sources_edited")
64 inventoried_by = models.ManyToManyField(
65 "Indexer", related_name="sources_inventoried"
66 )
67 full_text_entered_by = models.ManyToManyField(
68 "Indexer", related_name="entered_full_text_for_sources"
69 )
70 melodies_entered_by = models.ManyToManyField(
71 "Indexer", related_name="entered_melody_for_sources"
72 )
73 proofreaders = models.ManyToManyField("Indexer", related_name="proofread_sources")
74 other_editors = models.ManyToManyField("Indexer", related_name="edited_sources")
75 segment = models.ForeignKey(
76 "Segment", on_delete=models.PROTECT, blank=True, null=True
77 )
78 source_status = models.CharField(blank=True, null=True, max_length=255)
79 complete_inventory = models.BooleanField(blank=True, null=True)
80 summary = models.TextField(blank=True, null=True)
81 liturgical_occasions = models.TextField(blank=True, null=True)
82 description = models.TextField(blank=True, null=True)
83 selected_bibliography = models.TextField(blank=True, null=True)
84 image_link = models.URLField(blank=True, null=True)
85 indexing_notes = models.TextField(blank=True, null=True)
86 indexing_date = models.TextField(blank=True, null=True)
87 json_info = models.JSONField(blank=True, null=True)
88 fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)
89 dact_id = models.CharField(max_length=15, blank=True, null=True)
90
91 def number_of_chants(self) -> int:
92 """Returns the number of Chants and Sequences in this Source."""
93 return self.chant_set.count() + self.sequence_set.count()
94
95 def number_of_melodies(self) -> int:
96 """Returns the number of Chants in this Source that have melodies."""
97 return self.chant_set.filter(volpiano__isnull=False).count()
98
99 def __str__(self):
100 return self.title
101
[end of django/cantusdb_project/main_app/models/source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py
--- a/django/cantusdb_project/main_app/models/source.py
+++ b/django/cantusdb_project/main_app/models/source.py
@@ -28,7 +28,12 @@
)
# the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark
# it is a human-readable ID for a source
- siglum = models.CharField(max_length=63, null=True, blank=True)
+ siglum = models.CharField(
+ max_length=63,
+ null=True,
+ blank=True,
+ help_text="RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).",
+ )
# the RISM siglum uniquely identifies a library or holding institution
rism_siglum = models.ForeignKey(
"RismSiglum", on_delete=models.PROTECT, null=True, blank=True,
@@ -61,6 +66,9 @@
)
# TODO: Fill this field up with JSON info when I have access to the Users
current_editors = models.ManyToManyField(get_user_model(), related_name="sources_edited")
+ # created_by = models.ForeignKey(
+ # get_user_model(), related_name="sources_created", on_delete=models.PROTECT, blank=True, null=True
+ # )
inventoried_by = models.ManyToManyField(
"Indexer", related_name="sources_inventoried"
)
@@ -81,7 +89,11 @@
liturgical_occasions = models.TextField(blank=True, null=True)
description = models.TextField(blank=True, null=True)
selected_bibliography = models.TextField(blank=True, null=True)
- image_link = models.URLField(blank=True, null=True)
+ image_link = models.URLField(
+ blank=True,
+ null=True,
+ help_text='HTTP link to the image gallery of the source.',
+ )
indexing_notes = models.TextField(blank=True, null=True)
indexing_date = models.TextField(blank=True, null=True)
json_info = models.JSONField(blank=True, null=True)
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py\n--- a/django/cantusdb_project/main_app/models/source.py\n+++ b/django/cantusdb_project/main_app/models/source.py\n@@ -28,7 +28,12 @@\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n- siglum = models.CharField(max_length=63, null=True, blank=True)\n+ siglum = models.CharField(\n+ max_length=63, \n+ null=True, \n+ blank=True,\n+ help_text=\"RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).\",\n+ )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n@@ -61,6 +66,9 @@\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_edited\")\n+ # created_by = models.ForeignKey(\n+ # get_user_model(), related_name=\"sources_created\", on_delete=models.PROTECT, blank=True, null=True\n+ # )\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n@@ -81,7 +89,11 @@\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n- image_link = models.URLField(blank=True, null=True)\n+ image_link = models.URLField(\n+ blank=True, \n+ null=True,\n+ help_text='HTTP link to the image gallery of the source.',\n+ )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n", "issue": "Source create page layout could be improved\nAt `source-create/`, the form is not taking up all the horizontal space. This should be an easy fix by changing the column width in Bootstrap. \r\nIt would be better if we keep the same layout as the old Cantus. For example, the first three fields should be on the same row. This will make the form look more compact. \r\nPlus, the width of the fields should be adjusted according to the expected content. For example, the `summary` field should be wider than \"date\". \r\nThe look is not the most important thing, but in this case, a little bit more polishing can go a long way. 
\n", "before_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n ]\n\n # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list\n public = models.BooleanField(blank=True, null=True)\n # sources with visible=False can be accessed by typing in the url, but do not appear in source list\n visible = models.BooleanField(blank=True, null=True)\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(max_length=63, null=True, blank=True)\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. 
\"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\")\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\")\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_edited\")\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n full_text_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_full_text_for_sources\"\n )\n melodies_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_melody_for_sources\"\n )\n proofreaders = models.ManyToManyField(\"Indexer\", related_name=\"proofread_sources\")\n other_editors = models.ManyToManyField(\"Indexer\", related_name=\"edited_sources\")\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(blank=True, null=True)\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n def number_of_chants(self) -> int:\n \"\"\"Returns the number of Chants and Sequences in this Source.\"\"\"\n return self.chant_set.count() + self.sequence_set.count()\n\n def number_of_melodies(self) -> int:\n \"\"\"Returns the number of Chants in this Source that have melodies.\"\"\"\n return self.chant_set.filter(volpiano__isnull=False).count()\n\n def __str__(self):\n return self.title\n ", "path": "django/cantusdb_project/main_app/models/source.py"}]} | 1,939 | 497 |
gh_patches_debug_30399 | rasdani/github-patches | git_diff | aws-powertools__powertools-lambda-python-504 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: aws scalar type for AWSDateTime should include milliseconds
**What were you trying to accomplish?**
## Expected Behavior
AWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.
## Current Behavior
AWSDateTime does not include the millisecond part `sss`.
## Possible Solution
Generate timestamps that include the milliseconds.
## Steps to Reproduce (for bugs)
```python3
> print(aws_datetime())
2021-07-02T17:09:47Z
```
## Environment
* **Powertools version used**: 1.17.0
* **Packaging format (Layers, PyPi)**: PyPi
* **AWS Lambda function runtime:** Python 3.8
</issue>
<code>
[start of aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py]
1 import datetime
2 import time
3 import uuid
4
5
6 def _formatted_time(now: datetime.date, fmt: str, timezone_offset: int) -> str:
7 """String formatted time with optional timezone offset
8
9 Parameters
10 ----------
11 now : datetime.date
12 Current datetime with zero timezone offset
13 fmt : str
14 Data format before adding timezone offset
15 timezone_offset : int
16 Timezone offset in hours, defaults to 0
17 Returns
18 -------
19 str
20 Returns string formatted time with optional timezone offset
21 """
22 if timezone_offset == 0:
23 return now.strftime(fmt + "Z")
24
25 now = now + datetime.timedelta(hours=timezone_offset)
26 fmt += "+" if timezone_offset > 0 else "-"
27 fmt += str(abs(timezone_offset)).zfill(2)
28 fmt += ":00:00"
29
30 return now.strftime(fmt)
31
32
33 def make_id() -> str:
34 """ID - A unique identifier for an object. This scalar is serialized like a String but isn't meant to be
35 human-readable."""
36 return str(uuid.uuid4())
37
38
39 def aws_date(timezone_offset: int = 0) -> str:
40 """AWSDate - An extended ISO 8601 date string in the format YYYY-MM-DD.
41
42 Parameters
43 ----------
44 timezone_offset : int
45 Timezone offset, defaults to 0
46
47 Returns
48 -------
49 str
50 Returns current time as AWSDate scalar string with optional timezone offset
51 """
52 return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%d", timezone_offset)
53
54
55 def aws_time(timezone_offset: int = 0) -> str:
56 """AWSTime - An extended ISO 8601 time string in the format hh:mm:ss.sss.
57
58 Parameters
59 ----------
60 timezone_offset : int
61 Timezone offset, defaults to 0
62
63 Returns
64 -------
65 str
66 Returns current time as AWSTime scalar string with optional timezone offset
67 """
68 return _formatted_time(datetime.datetime.utcnow(), "%H:%M:%S", timezone_offset)
69
70
71 def aws_datetime(timezone_offset: int = 0) -> str:
72 """AWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.
73
74 Parameters
75 ----------
76 timezone_offset : int
77 Timezone offset, defaults to 0
78
79 Returns
80 -------
81 str
82 Returns current time as AWSDateTime scalar string with optional timezone offset
83 """
84 return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%dT%H:%M:%S", timezone_offset)
85
86
87 def aws_timestamp() -> int:
88 """AWSTimestamp - An integer value representing the number of seconds before or after 1970-01-01-T00:00Z."""
89 return int(time.time())
90
[end of aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py
--- a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py
+++ b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py
@@ -19,15 +19,21 @@
str
Returns string formatted time with optional timezone offset
"""
- if timezone_offset == 0:
- return now.strftime(fmt + "Z")
+ if timezone_offset != 0:
+ now = now + datetime.timedelta(hours=timezone_offset)
+
+ datetime_str = now.strftime(fmt)
+ if fmt.endswith(".%f"):
+ datetime_str = datetime_str[:-3]
- now = now + datetime.timedelta(hours=timezone_offset)
- fmt += "+" if timezone_offset > 0 else "-"
- fmt += str(abs(timezone_offset)).zfill(2)
- fmt += ":00:00"
+ if timezone_offset == 0:
+ postfix = "Z"
+ else:
+ postfix = "+" if timezone_offset > 0 else "-"
+ postfix += str(abs(timezone_offset)).zfill(2)
+ postfix += ":00:00"
- return now.strftime(fmt)
+ return datetime_str + postfix
def make_id() -> str:
@@ -65,7 +71,7 @@
str
Returns current time as AWSTime scalar string with optional timezone offset
"""
- return _formatted_time(datetime.datetime.utcnow(), "%H:%M:%S", timezone_offset)
+ return _formatted_time(datetime.datetime.utcnow(), "%H:%M:%S.%f", timezone_offset)
def aws_datetime(timezone_offset: int = 0) -> str:
@@ -81,7 +87,7 @@
str
Returns current time as AWSDateTime scalar string with optional timezone offset
"""
- return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%dT%H:%M:%S", timezone_offset)
+ return _formatted_time(datetime.datetime.utcnow(), "%Y-%m-%dT%H:%M:%S.%f", timezone_offset)
def aws_timestamp() -> int:
| {"golden_diff": "diff --git a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py\n--- a/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py\n+++ b/aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py\n@@ -19,15 +19,21 @@\n str\n Returns string formatted time with optional timezone offset\n \"\"\"\n- if timezone_offset == 0:\n- return now.strftime(fmt + \"Z\")\n+ if timezone_offset != 0:\n+ now = now + datetime.timedelta(hours=timezone_offset)\n+\n+ datetime_str = now.strftime(fmt)\n+ if fmt.endswith(\".%f\"):\n+ datetime_str = datetime_str[:-3]\n \n- now = now + datetime.timedelta(hours=timezone_offset)\n- fmt += \"+\" if timezone_offset > 0 else \"-\"\n- fmt += str(abs(timezone_offset)).zfill(2)\n- fmt += \":00:00\"\n+ if timezone_offset == 0:\n+ postfix = \"Z\"\n+ else:\n+ postfix = \"+\" if timezone_offset > 0 else \"-\"\n+ postfix += str(abs(timezone_offset)).zfill(2)\n+ postfix += \":00:00\"\n \n- return now.strftime(fmt)\n+ return datetime_str + postfix\n \n \n def make_id() -> str:\n@@ -65,7 +71,7 @@\n str\n Returns current time as AWSTime scalar string with optional timezone offset\n \"\"\"\n- return _formatted_time(datetime.datetime.utcnow(), \"%H:%M:%S\", timezone_offset)\n+ return _formatted_time(datetime.datetime.utcnow(), \"%H:%M:%S.%f\", timezone_offset)\n \n \n def aws_datetime(timezone_offset: int = 0) -> str:\n@@ -81,7 +87,7 @@\n str\n Returns current time as AWSDateTime scalar string with optional timezone offset\n \"\"\"\n- return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%dT%H:%M:%S\", timezone_offset)\n+ return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%dT%H:%M:%S.%f\", timezone_offset)\n \n \n def aws_timestamp() -> int:\n", "issue": "bug: aws scalar type for AWSDateTime should include milliseconds\n**What were you trying to accomplish?**\r\n\r\n## Expected Behavior\r\n\r\nAWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.\r\n\r\n## Current Behavior\r\n\r\nAWSDateTime is not including the millisecond part `sss`.\r\n\r\n## Possible Solution\r\n\r\nGenerate timestamps to include the milliseconds\r\n\r\n## Steps to Reproduce (for bugs)\r\n\r\n```python3\r\n> print(aws_datetime())\r\n2021-07-02T17:09:47Z\r\n````\r\n\r\n## Environment\r\n\r\n* **Powertools version used**: 1.17.0\r\n* **Packaging format (Layers, PyPi)**: PyPi\r\n* **AWS Lambda function runtime:** Python 3.8\r\n\n", "before_files": [{"content": "import datetime\nimport time\nimport uuid\n\n\ndef _formatted_time(now: datetime.date, fmt: str, timezone_offset: int) -> str:\n \"\"\"String formatted time with optional timezone offset\n\n Parameters\n ----------\n now : datetime.date\n Current datetime with zero timezone offset\n fmt : str\n Data format before adding timezone offset\n timezone_offset : int\n Timezone offset in hours, defaults to 0\n Returns\n -------\n str\n Returns string formatted time with optional timezone offset\n \"\"\"\n if timezone_offset == 0:\n return now.strftime(fmt + \"Z\")\n\n now = now + datetime.timedelta(hours=timezone_offset)\n fmt += \"+\" if timezone_offset > 0 else \"-\"\n fmt += str(abs(timezone_offset)).zfill(2)\n fmt += \":00:00\"\n\n return now.strftime(fmt)\n\n\ndef make_id() -> str:\n \"\"\"ID - A unique identifier for an object. 
This scalar is serialized like a String but isn't meant to be\n human-readable.\"\"\"\n return str(uuid.uuid4())\n\n\ndef aws_date(timezone_offset: int = 0) -> str:\n \"\"\"AWSDate - An extended ISO 8601 date string in the format YYYY-MM-DD.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSDate scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%d\", timezone_offset)\n\n\ndef aws_time(timezone_offset: int = 0) -> str:\n \"\"\"AWSTime - An extended ISO 8601 time string in the format hh:mm:ss.sss.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSTime scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%H:%M:%S\", timezone_offset)\n\n\ndef aws_datetime(timezone_offset: int = 0) -> str:\n \"\"\"AWSDateTime - An extended ISO 8601 date and time string in the format YYYY-MM-DDThh:mm:ss.sssZ.\n\n Parameters\n ----------\n timezone_offset : int\n Timezone offset, defaults to 0\n\n Returns\n -------\n str\n Returns current time as AWSDateTime scalar string with optional timezone offset\n \"\"\"\n return _formatted_time(datetime.datetime.utcnow(), \"%Y-%m-%dT%H:%M:%S\", timezone_offset)\n\n\ndef aws_timestamp() -> int:\n \"\"\"AWSTimestamp - An integer value representing the number of seconds before or after 1970-01-01-T00:00Z.\"\"\"\n return int(time.time())\n", "path": "aws_lambda_powertools/utilities/data_classes/appsync/scalar_types_utils.py"}]} | 1,515 | 489 |
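
The trick in this patch is that `strftime("%f")` yields six-digit microseconds, so trimming the last three characters leaves the milliseconds AWSDateTime expects. A self-contained sketch of that behavior for the zero-offset case; the helper name and fixed input are illustrative, the slicing mirrors the `datetime_str[:-3]` line in the golden diff:

```python
# Demo of the ".%f then drop 3 digits" trick from the patch above.
import datetime


def aws_datetime_utc(now: datetime.datetime) -> str:
    # "%f" formats microseconds as 6 digits; [:-3] keeps milliseconds only.
    return now.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"


stamp = aws_datetime_utc(datetime.datetime(2021, 7, 2, 17, 9, 47, 123456))
assert stamp == "2021-07-02T17:09:47.123Z"
```
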
gh_patches_debug_32440 | rasdani/github-patches | git_diff | OpenMined__PySyft-3591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Calling syft.grid.register() twice should throw an informative error
**Is your feature request related to a problem? Please describe.**
If you call syft.grid.register() twice in the same Python runtime, it should raise an error explaining that you can't do this and that the user should restart the Python runtime and try again.
</issue>
<code>
[start of syft/grid/__init__.py]
1 from .network import Network
2 import sys
3 import uuid
4
5 DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
6
7
8 def register(**kwargs):
9 """ Add this process as a new peer registering it in the grid network.
10
11 Returns:
12 peer: Peer Network instance.
13 """
14 try:
15 if not kwargs:
16 args = {"max_size": None, "timeout": 444, "url": DEFAULT_NETWORK_URL}
17 else:
18 args = kwargs
19
20 peer_id = str(uuid.uuid4())
21 sys.stdout.write(
22 "Connecting to OpenGrid (" + "\033[94m" + args["url"] + "\033[0m" + ") ... "
23 )
24
25 peer = Network(peer_id, **args)
26
27 sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
28 sys.stdout.write("Peer ID: " + peer_id + "\n")
29
30 sys.stdout.write(
31 "\033[93m" + "DISCLAIMER" + "\033[0m"
32 ":"
33 + "\033[1m"
34 + " OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\n"
35 + "\033[0m"
36 )
37
38 sys.stdout.write("Where to get help: \n")
39 sys.stdout.write(
40 " - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\n"
41 )
42 sys.stdout.write(
43 " - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\n"
44 )
45 sys.stdout.write(
46 " - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
47 )
48 peer.start()
49 return peer
50 except Exception as e:
51 sys.stdout.write("\033[91m" + "FAIL" + "\033[0m" + "\n")
52 sys.stdout.write("You were not able to register your node.\n")
53
[end of syft/grid/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py
--- a/syft/grid/__init__.py
+++ b/syft/grid/__init__.py
@@ -4,13 +4,25 @@
DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
+_registered_peer = None
+
def register(**kwargs):
""" Add this process as a new peer registering it in the grid network.
-
+
Returns:
peer: Peer Network instance.
"""
+ global _registered_peer
+
+ if isinstance(_registered_peer, Network):
+ sys.stdout.write(
+ "\033[93m" + "WARNING" + "\033[0m"
+ ":" + f" You are already a registered peer!\n{_registered_peer}\n"
+ )
+
+ return _registered_peer
+
try:
if not kwargs:
args = {"max_size": None, "timeout": 444, "url": DEFAULT_NETWORK_URL}
@@ -22,7 +34,7 @@
"Connecting to OpenGrid (" + "\033[94m" + args["url"] + "\033[0m" + ") ... "
)
- peer = Network(peer_id, **args)
+ _registered_peer = Network(peer_id, **args)
sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
sys.stdout.write("Peer ID: " + peer_id + "\n")
@@ -45,8 +57,11 @@
sys.stdout.write(
" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
)
- peer.start()
- return peer
+
+ _registered_peer.start()
+
+ return _registered_peer
+
except Exception as e:
sys.stdout.write("\033[91m" + "FAIL" + "\033[0m" + "\n")
sys.stdout.write("You were not able to register your node.\n")
| {"golden_diff": "diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py\n--- a/syft/grid/__init__.py\n+++ b/syft/grid/__init__.py\n@@ -4,13 +4,25 @@\n \n DEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n \n+_registered_peer = None\n+\n \n def register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n- \n+\n Returns:\n peer: Peer Network instance.\n \"\"\"\n+ global _registered_peer\n+\n+ if isinstance(_registered_peer, Network):\n+ sys.stdout.write(\n+ \"\\033[93m\" + \"WARNING\" + \"\\033[0m\"\n+ \":\" + f\" You are already a registered peer!\\n{_registered_peer}\\n\"\n+ )\n+\n+ return _registered_peer\n+\n try:\n if not kwargs:\n args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n@@ -22,7 +34,7 @@\n \"Connecting to OpenGrid (\" + \"\\033[94m\" + args[\"url\"] + \"\\033[0m\" + \") ... \"\n )\n \n- peer = Network(peer_id, **args)\n+ _registered_peer = Network(peer_id, **args)\n \n sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n@@ -45,8 +57,11 @@\n sys.stdout.write(\n \" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n )\n- peer.start()\n- return peer\n+\n+ _registered_peer.start()\n+\n+ return _registered_peer\n+\n except Exception as e:\n sys.stdout.write(\"\\033[91m\" + \"FAIL\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"You were not able to register your node.\\n\")\n", "issue": "Calling syft.grid.register() twice should throw an informative error\n**Is your feature request related to a problem? Please describe.**\r\nIf you call syft.grid.register() twice in the same python runtime it should raise an error describing that you can't do this - that they shoudl restart the python runtime and try again.\n", "before_files": [{"content": "from .network import Network\nimport sys\nimport uuid\n\nDEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n\n\ndef register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n \n Returns:\n peer: Peer Network instance.\n \"\"\"\n try:\n if not kwargs:\n args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n else:\n args = kwargs\n\n peer_id = str(uuid.uuid4())\n sys.stdout.write(\n \"Connecting to OpenGrid (\" + \"\\033[94m\" + args[\"url\"] + \"\\033[0m\" + \") ... \"\n )\n\n peer = Network(peer_id, **args)\n\n sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n\n sys.stdout.write(\n \"\\033[93m\" + \"DISCLAIMER\" + \"\\033[0m\"\n \":\"\n + \"\\033[1m\"\n + \" OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\\n\"\n + \"\\033[0m\"\n )\n\n sys.stdout.write(\"Where to get help: \\n\")\n sys.stdout.write(\n \" - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\\n\"\n )\n sys.stdout.write(\n \" - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\\n\"\n )\n sys.stdout.write(\n \" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n )\n peer.start()\n return peer\n except Exception as e:\n sys.stdout.write(\"\\033[91m\" + \"FAIL\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"You were not able to register your node.\\n\")\n", "path": "syft/grid/__init__.py"}]} | 1,208 | 499 |
gh_patches_debug_48578 | rasdani/github-patches | git_diff | openai__gym-2683 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
py.typed not bundled in release
The latest pypi package [gym==0.23.0](https://pypi.org/project/gym/0.23.0/) does not include `py.typed`, resulting in failed `mypy` checks.
Reproduce by `pip install gym` and noting the missing file or downloading the zip from pypi (zip on GH contains the file).
</issue>
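For context, `py.typed` is the PEP 561 marker file that tells type checkers a package ships inline type information, and it must be included inside the built distribution. A minimal sketch of the packaging side, assuming a setuptools project with a hypothetical package name:

```python
# Illustrative setup.py excerpt (hypothetical package name "mypkg").
from setuptools import setup

setup(
    name="mypkg",
    packages=["mypkg"],
    # Ship the PEP 561 marker inside the package so installed wheels
    # expose it to mypy.
    package_data={"mypkg": ["py.typed"]},
    zip_safe=False,
)
```

Depending on the setuptools version, an sdist may also need a matching `MANIFEST.in` entry.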
<code>
[start of setup.py]
1 import os.path
2 import sys
3 import itertools
4
5 from setuptools import find_packages, setup
6
7 # Don't import gym module here, since deps may not be installed
8 sys.path.insert(0, os.path.join(os.path.dirname(__file__), "gym"))
9 from version import VERSION
10
11 # Environment-specific dependencies.
12 extras = {
13 "atari": ["ale-py~=0.7.4"],
14 "accept-rom-license": ["autorom[accept-rom-license]~=0.4.2"],
15 "box2d": ["box2d-py==2.3.5", "pygame==2.1.0"],
16 "classic_control": ["pygame==2.1.0"],
17 "mujoco": ["mujoco_py>=1.50, <2.0"],
18 "toy_text": ["pygame==2.1.0", "scipy>=1.4.1"],
19 "other": ["lz4>=3.1.0", "opencv-python>=3.0"],
20 }
21
22 # Meta dependency groups.
23 nomujoco_blacklist = set(["mujoco", "accept-rom-license", "atari"])
24 nomujoco_groups = set(extras.keys()) - nomujoco_blacklist
25
26 extras["nomujoco"] = list(
27 itertools.chain.from_iterable(map(lambda group: extras[group], nomujoco_groups))
28 )
29
30
31 all_blacklist = set(["accept-rom-license"])
32 all_groups = set(extras.keys()) - all_blacklist
33
34 extras["all"] = list(
35 itertools.chain.from_iterable(map(lambda group: extras[group], all_groups))
36 )
37
38 setup(
39 name="gym",
40 version=VERSION,
41 description="Gym: A universal API for reinforcement learning environments",
42 url="https://www.gymlibrary.ml/",
43 author="Gym Community",
44 author_email="[email protected]",
45 license="MIT",
46 packages=[package for package in find_packages() if package.startswith("gym")],
47 zip_safe=False,
48 install_requires=[
49 "numpy>=1.18.0",
50 "cloudpickle>=1.2.0",
51 "importlib_metadata>=4.10.0; python_version < '3.10'",
52 "gym_notices>=0.0.4",
53 ],
54 extras_require=extras,
55 package_data={
56 "gym": [
57 "envs/mujoco/assets/*.xml",
58 "envs/classic_control/assets/*.png",
59 "envs/toy_text/font/*.ttf",
60 "envs/toy_text/img/*.png",
61 ]
62 },
63 tests_require=["pytest", "mock"],
64 python_requires=">=3.7",
65 classifiers=[
66 "Programming Language :: Python :: 3",
67 "Programming Language :: Python :: 3.7",
68 "Programming Language :: Python :: 3.8",
69 "Programming Language :: Python :: 3.9",
70 "Programming Language :: Python :: 3.10",
71 ],
72 )
73
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -58,6 +58,7 @@
"envs/classic_control/assets/*.png",
"envs/toy_text/font/*.ttf",
"envs/toy_text/img/*.png",
+ "py.typed",
]
},
tests_require=["pytest", "mock"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -58,6 +58,7 @@\n \"envs/classic_control/assets/*.png\",\n \"envs/toy_text/font/*.ttf\",\n \"envs/toy_text/img/*.png\",\n+ \"py.typed\",\n ]\n },\n tests_require=[\"pytest\", \"mock\"],\n", "issue": "py.typed not bundled in release\nThe latest pypi package [gym==0.23.0](https://pypi.org/project/gym/0.23.0/) does not include `py.typed`, resulting in failed `mypy` checks.\r\n\r\nReproduce by `pip install gym` and noting the missing file or downloading the zip from pypi (zip on GH contains the file).\n", "before_files": [{"content": "import os.path\nimport sys\nimport itertools\n\nfrom setuptools import find_packages, setup\n\n# Don't import gym module here, since deps may not be installed\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), \"gym\"))\nfrom version import VERSION\n\n# Environment-specific dependencies.\nextras = {\n \"atari\": [\"ale-py~=0.7.4\"],\n \"accept-rom-license\": [\"autorom[accept-rom-license]~=0.4.2\"],\n \"box2d\": [\"box2d-py==2.3.5\", \"pygame==2.1.0\"],\n \"classic_control\": [\"pygame==2.1.0\"],\n \"mujoco\": [\"mujoco_py>=1.50, <2.0\"],\n \"toy_text\": [\"pygame==2.1.0\", \"scipy>=1.4.1\"],\n \"other\": [\"lz4>=3.1.0\", \"opencv-python>=3.0\"],\n}\n\n# Meta dependency groups.\nnomujoco_blacklist = set([\"mujoco\", \"accept-rom-license\", \"atari\"])\nnomujoco_groups = set(extras.keys()) - nomujoco_blacklist\n\nextras[\"nomujoco\"] = list(\n itertools.chain.from_iterable(map(lambda group: extras[group], nomujoco_groups))\n)\n\n\nall_blacklist = set([\"accept-rom-license\"])\nall_groups = set(extras.keys()) - all_blacklist\n\nextras[\"all\"] = list(\n itertools.chain.from_iterable(map(lambda group: extras[group], all_groups))\n)\n\nsetup(\n name=\"gym\",\n version=VERSION,\n description=\"Gym: A universal API for reinforcement learning environments\",\n url=\"https://www.gymlibrary.ml/\",\n author=\"Gym Community\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n packages=[package for package in find_packages() if package.startswith(\"gym\")],\n zip_safe=False,\n install_requires=[\n \"numpy>=1.18.0\",\n \"cloudpickle>=1.2.0\",\n \"importlib_metadata>=4.10.0; python_version < '3.10'\",\n \"gym_notices>=0.0.4\",\n ],\n extras_require=extras,\n package_data={\n \"gym\": [\n \"envs/mujoco/assets/*.xml\",\n \"envs/classic_control/assets/*.png\",\n \"envs/toy_text/font/*.ttf\",\n \"envs/toy_text/img/*.png\",\n ]\n },\n tests_require=[\"pytest\", \"mock\"],\n python_requires=\">=3.7\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n)\n", "path": "setup.py"}]} | 1,392 | 87 |
gh_patches_debug_24910 | rasdani/github-patches | git_diff | pypi__warehouse-3457 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Trending Projects are not updated
I think in the past month that I've been looking at pypi.org (and even before that), the "Trending Projects" haven't changed.
</issue>
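The eventual fix (golden diff below) migrates off `run_sync_query`, which was deprecated and later removed in newer google-cloud-bigquery releases; a broken query path is one plausible reason the scheduled task silently stopped refreshing results. A sketch of the newer client pattern, with an illustrative table name:

```python
# Sketch, assuming google-cloud-bigquery where Client.query() returns a job
# whose .result() yields rows; the SQL and table name are illustrative.
from google.cloud import bigquery

client = bigquery.Client()
job = client.query("SELECT project, zscore FROM `my_dataset.trending`")
zscores = {row["project"]: row["zscore"] for row in job.result()}
```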
<code>
[start of warehouse/packaging/tasks.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from warehouse import tasks
14 from warehouse.cache.origin import IOriginCache
15 from warehouse.packaging.models import Project
16
17
18 @tasks.task(ignore_result=True, acks_late=True)
19 def compute_trending(request):
20 bq = request.find_service(name="gcloud.bigquery")
21 query = bq.run_sync_query(
22 """ SELECT project,
23 IF(
24 STDDEV(downloads) > 0,
25 (todays_downloads - AVG(downloads))/STDDEV(downloads),
26 NULL
27 ) as zscore
28 FROM (
29 SELECT project,
30 date,
31 downloads,
32 FIRST_VALUE(downloads) OVER (
33 PARTITION BY project
34 ORDER BY DATE DESC
35 ROWS BETWEEN UNBOUNDED PRECEDING
36 AND UNBOUNDED FOLLOWING
37 ) as todays_downloads
38 FROM (
39 SELECT file.project as project,
40 DATE(timestamp) AS date,
41 COUNT(*) as downloads
42 FROM `{table}`
43 WHERE _TABLE_SUFFIX BETWEEN
44 FORMAT_DATE(
45 "%Y%m%d",
46 DATE_ADD(CURRENT_DATE(), INTERVAL -31 day))
47 AND
48 FORMAT_DATE(
49 "%Y%m%d",
50 DATE_ADD(CURRENT_DATE(), INTERVAL -1 day))
51 GROUP BY file.project, date
52 )
53 )
54 GROUP BY project, todays_downloads
55 HAVING SUM(downloads) >= 5000
56 ORDER BY zscore DESC
57 """.format(table=request.registry.settings["warehouse.trending_table"])
58 )
59 query.use_legacy_sql = False
60 query.run()
61
62 zscores = {}
63 page_token = None
64 while True:
65 rows, total_rows, page_token = query.fetch_data(
66 max_results=1000,
67 page_token=page_token,
68 )
69
70 zscores.update(dict(rows))
71
72 if not page_token:
73 break
74
75 # We're going to "reset" all of our zscores to a steady state where they
76 # are all equal to ``None``. The next query will then set any that have a
77 # value back to the expected value.
78 (request.db.query(Project)
79 .filter(Project.zscore != None) # noqa
80 .update({Project.zscore: None}))
81
82 # We need to convert the normalized name that we get out of BigQuery and
83 # turn it into the primary key of the Project object and construct a list
84 # of primary key: new zscore, including a default of None if the item isn't
85 # in the result set.
86 query = request.db.query(Project.name, Project.normalized_name).all()
87 to_update = [
88 {"name": name, "zscore": zscores[normalized_name]}
89 for name, normalized_name in query
90 if normalized_name in zscores
91 ]
92
93 # Reflect out updated ZScores into the database.
94 request.db.bulk_update_mappings(Project, to_update)
95
96 # Trigger a purge of the trending surrogate key.
97 try:
98 cacher = request.find_service(IOriginCache)
99 except ValueError:
100 pass
101 else:
102 cacher.purge(["trending"])
103
[end of warehouse/packaging/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/packaging/tasks.py b/warehouse/packaging/tasks.py
--- a/warehouse/packaging/tasks.py
+++ b/warehouse/packaging/tasks.py
@@ -18,7 +18,7 @@
@tasks.task(ignore_result=True, acks_late=True)
def compute_trending(request):
bq = request.find_service(name="gcloud.bigquery")
- query = bq.run_sync_query(
+ query = bq.query(
""" SELECT project,
IF(
STDDEV(downloads) > 0,
@@ -56,21 +56,11 @@
ORDER BY zscore DESC
""".format(table=request.registry.settings["warehouse.trending_table"])
)
- query.use_legacy_sql = False
- query.run()
zscores = {}
- page_token = None
- while True:
- rows, total_rows, page_token = query.fetch_data(
- max_results=1000,
- page_token=page_token,
- )
-
- zscores.update(dict(rows))
-
- if not page_token:
- break
+ for row in query.result():
+ row = dict(row)
+ zscores[row["project"]] = row["zscore"]
# We're going to "reset" all of our zscores to a steady state where they
# are all equal to ``None``. The next query will then set any that have a
| {"golden_diff": "diff --git a/warehouse/packaging/tasks.py b/warehouse/packaging/tasks.py\n--- a/warehouse/packaging/tasks.py\n+++ b/warehouse/packaging/tasks.py\n@@ -18,7 +18,7 @@\n @tasks.task(ignore_result=True, acks_late=True)\n def compute_trending(request):\n bq = request.find_service(name=\"gcloud.bigquery\")\n- query = bq.run_sync_query(\n+ query = bq.query(\n \"\"\" SELECT project,\n IF(\n STDDEV(downloads) > 0,\n@@ -56,21 +56,11 @@\n ORDER BY zscore DESC\n \"\"\".format(table=request.registry.settings[\"warehouse.trending_table\"])\n )\n- query.use_legacy_sql = False\n- query.run()\n \n zscores = {}\n- page_token = None\n- while True:\n- rows, total_rows, page_token = query.fetch_data(\n- max_results=1000,\n- page_token=page_token,\n- )\n-\n- zscores.update(dict(rows))\n-\n- if not page_token:\n- break\n+ for row in query.result():\n+ row = dict(row)\n+ zscores[row[\"project\"]] = row[\"zscore\"]\n \n # We're going to \"reset\" all of our zscores to a steady state where they\n # are all equal to ``None``. The next query will then set any that have a\n", "issue": "Trending Projects are not updated\nI think in the past month that I've been looking at pypi.org (and even before that), the \"Trending Projects\" haven't changed.\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom warehouse import tasks\nfrom warehouse.cache.origin import IOriginCache\nfrom warehouse.packaging.models import Project\n\n\[email protected](ignore_result=True, acks_late=True)\ndef compute_trending(request):\n bq = request.find_service(name=\"gcloud.bigquery\")\n query = bq.run_sync_query(\n \"\"\" SELECT project,\n IF(\n STDDEV(downloads) > 0,\n (todays_downloads - AVG(downloads))/STDDEV(downloads),\n NULL\n ) as zscore\n FROM (\n SELECT project,\n date,\n downloads,\n FIRST_VALUE(downloads) OVER (\n PARTITION BY project\n ORDER BY DATE DESC\n ROWS BETWEEN UNBOUNDED PRECEDING\n AND UNBOUNDED FOLLOWING\n ) as todays_downloads\n FROM (\n SELECT file.project as project,\n DATE(timestamp) AS date,\n COUNT(*) as downloads\n FROM `{table}`\n WHERE _TABLE_SUFFIX BETWEEN\n FORMAT_DATE(\n \"%Y%m%d\",\n DATE_ADD(CURRENT_DATE(), INTERVAL -31 day))\n AND\n FORMAT_DATE(\n \"%Y%m%d\",\n DATE_ADD(CURRENT_DATE(), INTERVAL -1 day))\n GROUP BY file.project, date\n )\n )\n GROUP BY project, todays_downloads\n HAVING SUM(downloads) >= 5000\n ORDER BY zscore DESC\n \"\"\".format(table=request.registry.settings[\"warehouse.trending_table\"])\n )\n query.use_legacy_sql = False\n query.run()\n\n zscores = {}\n page_token = None\n while True:\n rows, total_rows, page_token = query.fetch_data(\n max_results=1000,\n page_token=page_token,\n )\n\n zscores.update(dict(rows))\n\n if not page_token:\n break\n\n # We're going to \"reset\" all of our zscores to a steady state where they\n # are all equal to ``None``. 
The next query will then set any that have a\n # value back to the expected value.\n (request.db.query(Project)\n .filter(Project.zscore != None) # noqa\n .update({Project.zscore: None}))\n\n # We need to convert the normalized name that we get out of BigQuery and\n # turn it into the primary key of the Project object and construct a list\n # of primary key: new zscore, including a default of None if the item isn't\n # in the result set.\n query = request.db.query(Project.name, Project.normalized_name).all()\n to_update = [\n {\"name\": name, \"zscore\": zscores[normalized_name]}\n for name, normalized_name in query\n if normalized_name in zscores\n ]\n\n # Reflect out updated ZScores into the database.\n request.db.bulk_update_mappings(Project, to_update)\n\n # Trigger a purge of the trending surrogate key.\n try:\n cacher = request.find_service(IOriginCache)\n except ValueError:\n pass\n else:\n cacher.purge([\"trending\"])\n", "path": "warehouse/packaging/tasks.py"}]} | 1,555 | 315 |
gh_patches_debug_5434 | rasdani/github-patches | git_diff | secdev__scapy-4403 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scapy overrides platform
Scapy exports the platform name, and could override the platform module. This is likely the issue: https://github.com/secdev/scapy/blob/b0506a1e22321eba41d5c21d26bba418de04bc8f/scapy/consts.py#L10
Here is an example:
```shell
python issue.py
<class 'str'>
<class 'module'>
```
```python
import platform
from scapy.all import *
print(type(platform))
import platform
print(type(platform))
```
</issue>
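The underlying mechanism: `from scapy.all import *` re-exports every public top-level name, including the `platform` string imported in `consts.py`, which then shadows the stdlib module in the caller's namespace. Declaring `__all__` (as the golden diff below does) restricts what a star-import leaks. A self-contained sketch with an illustrative module name:

```python
# consts_sketch.py -- illustrative module, not scapy's real file.
from sys import platform

# Without __all__, "from consts_sketch import *" would also export the
# `platform` string and shadow the stdlib platform module for the importer.
__all__ = ["LINUX"]

LINUX = platform.startswith("linux")
```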
<code>
[start of scapy/consts.py]
1 # SPDX-License-Identifier: GPL-2.0-only
2 # This file is part of Scapy
3 # See https://scapy.net/ for more information
4 # Copyright (C) Philippe Biondi <[email protected]>
5
6 """
7 This file contains constants
8 """
9
10 from sys import byteorder, platform, maxsize
11 import platform as platform_lib
12
13 LINUX = platform.startswith("linux")
14 OPENBSD = platform.startswith("openbsd")
15 FREEBSD = "freebsd" in platform
16 NETBSD = platform.startswith("netbsd")
17 DARWIN = platform.startswith("darwin")
18 SOLARIS = platform.startswith("sunos")
19 WINDOWS = platform.startswith("win32")
20 WINDOWS_XP = platform_lib.release() == "XP"
21 BSD = DARWIN or FREEBSD or OPENBSD or NETBSD
22 # See https://docs.python.org/3/library/platform.html#cross-platform
23 IS_64BITS = maxsize > 2**32
24 BIG_ENDIAN = byteorder == 'big'
25 # LOOPBACK_NAME moved to conf.loopback_name
26
[end of scapy/consts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scapy/consts.py b/scapy/consts.py
--- a/scapy/consts.py
+++ b/scapy/consts.py
@@ -10,6 +10,20 @@
from sys import byteorder, platform, maxsize
import platform as platform_lib
+__all__ = [
+ "LINUX",
+ "OPENBSD",
+ "FREEBSD",
+ "NETBSD",
+ "DARWIN",
+ "SOLARIS",
+ "WINDOWS",
+ "WINDOWS_XP",
+ "BSD",
+ "IS_64BITS",
+ "BIG_ENDIAN",
+]
+
LINUX = platform.startswith("linux")
OPENBSD = platform.startswith("openbsd")
FREEBSD = "freebsd" in platform
| {"golden_diff": "diff --git a/scapy/consts.py b/scapy/consts.py\n--- a/scapy/consts.py\n+++ b/scapy/consts.py\n@@ -10,6 +10,20 @@\n from sys import byteorder, platform, maxsize\n import platform as platform_lib\n \n+__all__ = [\n+ \"LINUX\",\n+ \"OPENBSD\",\n+ \"FREEBSD\",\n+ \"NETBSD\",\n+ \"DARWIN\",\n+ \"SOLARIS\",\n+ \"WINDOWS\",\n+ \"WINDOWS_XP\",\n+ \"BSD\",\n+ \"IS_64BITS\",\n+ \"BIG_ENDIAN\",\n+]\n+\n LINUX = platform.startswith(\"linux\")\n OPENBSD = platform.startswith(\"openbsd\")\n FREEBSD = \"freebsd\" in platform\n", "issue": "Scapy overides platform\nScapy exports the platform name, and could override the platform module. This is likely the issue: https://github.com/secdev/scapy/blob/b0506a1e22321eba41d5c21d26bba418de04bc8f/scapy/consts.py#L10\r\n\r\nHere are the example:\r\n\r\n```shell\r\npython issue.py \r\n<class 'str'>\r\n<class 'module'>\r\n```\r\n\r\n```python\r\nimport platform\r\nfrom scapy.all import *\r\nprint(type(platform))\r\n\r\nimport platform\r\nprint(type(platform))\r\n```\n", "before_files": [{"content": "# SPDX-License-Identifier: GPL-2.0-only\n# This file is part of Scapy\n# See https://scapy.net/ for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n\n\"\"\"\nThis file contains constants\n\"\"\"\n\nfrom sys import byteorder, platform, maxsize\nimport platform as platform_lib\n\nLINUX = platform.startswith(\"linux\")\nOPENBSD = platform.startswith(\"openbsd\")\nFREEBSD = \"freebsd\" in platform\nNETBSD = platform.startswith(\"netbsd\")\nDARWIN = platform.startswith(\"darwin\")\nSOLARIS = platform.startswith(\"sunos\")\nWINDOWS = platform.startswith(\"win32\")\nWINDOWS_XP = platform_lib.release() == \"XP\"\nBSD = DARWIN or FREEBSD or OPENBSD or NETBSD\n# See https://docs.python.org/3/library/platform.html#cross-platform\nIS_64BITS = maxsize > 2**32\nBIG_ENDIAN = byteorder == 'big'\n# LOOPBACK_NAME moved to conf.loopback_name\n", "path": "scapy/consts.py"}]} | 921 | 169 |
gh_patches_debug_22411 | rasdani/github-patches | git_diff | wagtail__wagtail-730 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow use of MPO formatted JPEG images
Just tried loading some JPEG images into a website and was given an error "Not a valid JPEG image please use blah blah".
The images were from my Nikon D3300, which seems to create JPEG files in MPO format. This format is supported by Pillow, but Wagtail is blocking them from being uploaded. I disabled the format validation and everything seemed to work fine.
</issue>
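Background: cameras like the D3300 write Multi-Picture Object (MPO) files, which are JPEG streams with extra segments; Pillow opens them but reports `image.format == "MPO"`, so a strict format-vs-extension comparison rejects them. A sketch of the normalisation the fix applies (illustrative helper name):

```python
# Illustrative helper; mirrors the idea in the golden diff below.
from PIL import Image


def effective_format(path):
    with Image.open(path) as img:
        fmt = img.format.upper()
    # Pillow labels multi-picture JPEGs as MPO; treat them as JPEG
    # for extension-validation purposes.
    return "JPEG" if fmt == "MPO" else fmt
```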
<code>
[start of wagtail/wagtailimages/fields.py]
1 import os
2
3 from PIL import Image
4
5 from django.forms.fields import ImageField
6 from django.core.exceptions import ValidationError
7 from django.utils.translation import ugettext_lazy as _
8 from django.template.defaultfilters import filesizeformat
9 from django.conf import settings
10
11
12 ALLOWED_EXTENSIONS = ['gif', 'jpg', 'jpeg', 'png']
13 SUPPORTED_FORMATS_TEXT = _("GIF, JPEG, PNG")
14
15 INVALID_IMAGE_ERROR = _(
16 "Not a supported image format. Supported formats: %s."
17 ) % SUPPORTED_FORMATS_TEXT
18
19 INVALID_IMAGE_KNOWN_FORMAT_ERROR = _(
20 "Not a valid %s image."
21 )
22
23 MAX_UPLOAD_SIZE = getattr(settings, 'WAGTAILIMAGES_MAX_UPLOAD_SIZE', 10 * 1024 * 1024)
24
25 if MAX_UPLOAD_SIZE is not None:
26 MAX_UPLOAD_SIZE_TEXT = filesizeformat(MAX_UPLOAD_SIZE)
27
28 FILE_TOO_LARGE_ERROR = _(
29 "This file is too big. Maximum filesize %s."
30 ) % (MAX_UPLOAD_SIZE_TEXT, )
31
32 FILE_TOO_LARGE_KNOWN_SIZE_ERROR = _(
33 "This file is too big (%%s). Maximum filesize %s."
34 ) % (MAX_UPLOAD_SIZE_TEXT, )
35
36 IMAGE_FIELD_HELP_TEXT = _(
37 "Supported formats: %s. Maximum filesize: %s."
38 ) % (SUPPORTED_FORMATS_TEXT, MAX_UPLOAD_SIZE_TEXT, )
39 else:
40 MAX_UPLOAD_SIZE_TEXT = ""
41 FILE_TOO_LARGE_ERROR = ""
42 FILE_TOO_LARGE_KNOWN_SIZE_ERROR = ""
43
44 IMAGE_FIELD_HELP_TEXT = _(
45 "Supported formats: %s."
46 ) % (SUPPORTED_FORMATS_TEXT, )
47
48
49 class WagtailImageField(ImageField):
50 default_error_messages = {
51 'invalid_image': INVALID_IMAGE_ERROR,
52 'invalid_image_known_format': INVALID_IMAGE_KNOWN_FORMAT_ERROR,
53 'file_too_large': FILE_TOO_LARGE_KNOWN_SIZE_ERROR,
54 }
55
56 def __init__(self, *args, **kwargs):
57 super(WagtailImageField, self).__init__(*args, **kwargs)
58
59 self.help_text = IMAGE_FIELD_HELP_TEXT
60
61 def check_image_file_format(self, f):
62 # Check file extension
63 extension = os.path.splitext(f.name)[1].lower()[1:]
64
65 if extension not in ALLOWED_EXTENSIONS:
66 raise ValidationError(self.error_messages['invalid_image'], code='invalid_image')
67
68 if hasattr(f, 'image'):
69 # Django 1.8 annotates the file object with the PIL image
70 image = f.image
71 elif not f.closed:
72 # Open image file
73 file_position = f.tell()
74 f.seek(0)
75
76 try:
77 image = Image.open(f)
78 except IOError:
79 # Uploaded file is not even an image file (or corrupted)
80 raise ValidationError(self.error_messages['invalid_image_known_format'],
81 code='invalid_image_known_format')
82
83 f.seek(file_position)
84 else:
85 # Couldn't get the PIL image, skip checking the internal file format
86 return
87
88 image_format = extension
89 if extension == 'jpg':
90 image_format = 'jpeg'
91
92 # Check that the internal format matches the extension
93 # It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out
94 if image.format.upper() != image_format.upper():
95 raise ValidationError(self.error_messages['invalid_image_known_format'] % (
96 image_format.upper()
97 ), code='invalid_image_known_format')
98
99 def check_image_file_size(self, f):
100 # Upload size checking can be disabled by setting max upload size to None
101 if MAX_UPLOAD_SIZE is None:
102 return
103
104 # Check the filesize
105 if f.size > MAX_UPLOAD_SIZE:
106 raise ValidationError(self.error_messages['file_too_large'] % (
107 filesizeformat(f.size),
108 ), code='file_too_large')
109
110 def to_python(self, data):
111 f = super(WagtailImageField, self).to_python(data)
112
113 if f is not None:
114 self.check_image_file_size(f)
115 self.check_image_file_format(f)
116
117 return f
118
[end of wagtail/wagtailimages/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailimages/fields.py b/wagtail/wagtailimages/fields.py
--- a/wagtail/wagtailimages/fields.py
+++ b/wagtail/wagtailimages/fields.py
@@ -85,15 +85,19 @@
# Couldn't get the PIL image, skip checking the internal file format
return
- image_format = extension
- if extension == 'jpg':
- image_format = 'jpeg'
+ image_format = extension.upper()
+ if image_format == 'JPG':
+ image_format = 'JPEG'
+
+ internal_image_format = image.format.upper()
+ if internal_image_format == 'MPO':
+ internal_image_format = 'JPEG'
# Check that the internal format matches the extension
# It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out
- if image.format.upper() != image_format.upper():
+ if internal_image_format != image_format:
raise ValidationError(self.error_messages['invalid_image_known_format'] % (
- image_format.upper()
+ image_format,
), code='invalid_image_known_format')
def check_image_file_size(self, f):
| {"golden_diff": "diff --git a/wagtail/wagtailimages/fields.py b/wagtail/wagtailimages/fields.py\n--- a/wagtail/wagtailimages/fields.py\n+++ b/wagtail/wagtailimages/fields.py\n@@ -85,15 +85,19 @@\n # Couldn't get the PIL image, skip checking the internal file format\n return\n \n- image_format = extension\n- if extension == 'jpg':\n- image_format = 'jpeg'\n+ image_format = extension.upper()\n+ if image_format == 'JPG':\n+ image_format = 'JPEG'\n+\n+ internal_image_format = image.format.upper()\n+ if internal_image_format == 'MPO':\n+ internal_image_format = 'JPEG'\n \n # Check that the internal format matches the extension\n # It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out\n- if image.format.upper() != image_format.upper():\n+ if internal_image_format != image_format:\n raise ValidationError(self.error_messages['invalid_image_known_format'] % (\n- image_format.upper()\n+ image_format,\n ), code='invalid_image_known_format')\n \n def check_image_file_size(self, f):\n", "issue": "Allow use of MPO formatted JPEG images\nJust tried loading some JPEG images into a website and was given an error \"Not a valid JPEG image please use blah blah\".\n\nThe images were from my Nikon D3300 which seems to create JPEG files in MPO format. This format is supported by Pillow but Wagtail is blocking them from being uploaded. I disabled the format validation and everything seemed to work fine.\n\n", "before_files": [{"content": "import os\n\nfrom PIL import Image\n\nfrom django.forms.fields import ImageField\nfrom django.core.exceptions import ValidationError\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.template.defaultfilters import filesizeformat\nfrom django.conf import settings\n\n\nALLOWED_EXTENSIONS = ['gif', 'jpg', 'jpeg', 'png']\nSUPPORTED_FORMATS_TEXT = _(\"GIF, JPEG, PNG\")\n\nINVALID_IMAGE_ERROR = _(\n \"Not a supported image format. Supported formats: %s.\"\n) % SUPPORTED_FORMATS_TEXT\n\nINVALID_IMAGE_KNOWN_FORMAT_ERROR = _(\n \"Not a valid %s image.\"\n)\n\nMAX_UPLOAD_SIZE = getattr(settings, 'WAGTAILIMAGES_MAX_UPLOAD_SIZE', 10 * 1024 * 1024)\n\nif MAX_UPLOAD_SIZE is not None:\n MAX_UPLOAD_SIZE_TEXT = filesizeformat(MAX_UPLOAD_SIZE)\n\n FILE_TOO_LARGE_ERROR = _(\n \"This file is too big. Maximum filesize %s.\"\n ) % (MAX_UPLOAD_SIZE_TEXT, )\n\n FILE_TOO_LARGE_KNOWN_SIZE_ERROR = _(\n \"This file is too big (%%s). Maximum filesize %s.\"\n ) % (MAX_UPLOAD_SIZE_TEXT, )\n\n IMAGE_FIELD_HELP_TEXT = _(\n \"Supported formats: %s. 
Maximum filesize: %s.\"\n ) % (SUPPORTED_FORMATS_TEXT, MAX_UPLOAD_SIZE_TEXT, )\nelse:\n MAX_UPLOAD_SIZE_TEXT = \"\"\n FILE_TOO_LARGE_ERROR = \"\"\n FILE_TOO_LARGE_KNOWN_SIZE_ERROR = \"\"\n\n IMAGE_FIELD_HELP_TEXT = _(\n \"Supported formats: %s.\"\n ) % (SUPPORTED_FORMATS_TEXT, )\n\n\nclass WagtailImageField(ImageField):\n default_error_messages = {\n 'invalid_image': INVALID_IMAGE_ERROR,\n 'invalid_image_known_format': INVALID_IMAGE_KNOWN_FORMAT_ERROR,\n 'file_too_large': FILE_TOO_LARGE_KNOWN_SIZE_ERROR,\n }\n\n def __init__(self, *args, **kwargs):\n super(WagtailImageField, self).__init__(*args, **kwargs)\n\n self.help_text = IMAGE_FIELD_HELP_TEXT\n\n def check_image_file_format(self, f):\n # Check file extension\n extension = os.path.splitext(f.name)[1].lower()[1:]\n\n if extension not in ALLOWED_EXTENSIONS:\n raise ValidationError(self.error_messages['invalid_image'], code='invalid_image')\n\n if hasattr(f, 'image'):\n # Django 1.8 annotates the file object with the PIL image\n image = f.image\n elif not f.closed:\n # Open image file\n file_position = f.tell()\n f.seek(0)\n\n try:\n image = Image.open(f)\n except IOError:\n # Uploaded file is not even an image file (or corrupted)\n raise ValidationError(self.error_messages['invalid_image_known_format'],\n code='invalid_image_known_format')\n\n f.seek(file_position)\n else:\n # Couldn't get the PIL image, skip checking the internal file format\n return\n\n image_format = extension\n if extension == 'jpg':\n image_format = 'jpeg'\n\n # Check that the internal format matches the extension\n # It is possible to upload PSD files if their extension is set to jpg, png or gif. This should catch them out\n if image.format.upper() != image_format.upper():\n raise ValidationError(self.error_messages['invalid_image_known_format'] % (\n image_format.upper()\n ), code='invalid_image_known_format')\n\n def check_image_file_size(self, f):\n # Upload size checking can be disabled by setting max upload size to None\n if MAX_UPLOAD_SIZE is None:\n return\n\n # Check the filesize\n if f.size > MAX_UPLOAD_SIZE:\n raise ValidationError(self.error_messages['file_too_large'] % (\n filesizeformat(f.size),\n ), code='file_too_large')\n\n def to_python(self, data):\n f = super(WagtailImageField, self).to_python(data)\n\n if f is not None:\n self.check_image_file_size(f)\n self.check_image_file_format(f)\n\n return f\n", "path": "wagtail/wagtailimages/fields.py"}]} | 1,754 | 272 |
gh_patches_debug_14539 | rasdani/github-patches | git_diff | cloudtools__troposphere-1589 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for DataLocationResource & TableWithColumnsResource in AWS::LakeFormation::Permissions (2020, Jan 16 update)
Waiting for the doc to be updated.
</issue>
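Once the new property classes land (golden diff below), usage would look roughly like this; the resource title, ARN, and field values are illustrative:

```python
# Illustrative usage sketch of the properties added by the fix;
# assumes the patched troposphere.lakeformation module.
from troposphere.lakeformation import (
    ColumnWildcard, DataLakePrincipal, Permissions, Resource,
    TableWithColumnsResource,
)

perm = Permissions(
    "AnalystTablePermissions",
    DataLakePrincipal=DataLakePrincipal(
        DataLakePrincipalIdentifier="arn:aws:iam::123456789012:role/analyst",
    ),
    Resource=Resource(
        TableWithColumnsResource=TableWithColumnsResource(
            DatabaseName="sales",
            Name="orders",
            ColumnWildcard=ColumnWildcard(ExcludedColumnNames=["ssn"]),
        ),
    ),
    Permissions=["SELECT"],
)
```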
<code>
[start of troposphere/lakeformation.py]
1 # Copyright (c) 2012-2019, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5 #
6 # *** Do not modify - this file is autogenerated ***
7 # Resource specification version: 5.3.0
8
9
10 from . import AWSObject
11 from . import AWSProperty
12
13
14 class Admins(AWSProperty):
15 props = {
16 }
17
18
19 class DataLakeSettings(AWSObject):
20 resource_type = "AWS::LakeFormation::DataLakeSettings"
21
22 props = {
23 'Admins': (Admins, False),
24 }
25
26
27 class DataLakePrincipal(AWSProperty):
28 props = {
29 'DataLakePrincipalIdentifier': (basestring, False),
30 }
31
32
33 class DatabaseResource(AWSProperty):
34 props = {
35 'Name': (basestring, False),
36 }
37
38
39 class TableResource(AWSProperty):
40 props = {
41 'DatabaseName': (basestring, False),
42 'Name': (basestring, False),
43 }
44
45
46 class Resource(AWSProperty):
47 props = {
48 'DatabaseResource': (DatabaseResource, False),
49 'TableResource': (TableResource, False),
50 }
51
52
53 class Permissions(AWSObject):
54 resource_type = "AWS::LakeFormation::Permissions"
55
56 props = {
57 'DataLakePrincipal': (DataLakePrincipal, True),
58 'Permissions': ([basestring], False),
59 'PermissionsWithGrantOption': ([basestring], False),
60 'Resource': (Resource, True),
61 }
62
[end of troposphere/lakeformation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/troposphere/lakeformation.py b/troposphere/lakeformation.py
--- a/troposphere/lakeformation.py
+++ b/troposphere/lakeformation.py
@@ -43,10 +43,33 @@
}
+class DataLocationResource(AWSProperty):
+ props = {
+ 'S3Resource': (basestring, False),
+ }
+
+
+class ColumnWildcard(AWSProperty):
+ props = {
+ 'ExcludedColumnNames': ([basestring], False),
+ }
+
+
+class TableWithColumnsResource(AWSProperty):
+ props = {
+ 'ColumnNames': ([basestring], False),
+ 'ColumnWildcard': (ColumnWildcard, False),
+ 'DatabaseName': (basestring, False),
+ 'Name': (basestring, False),
+ }
+
+
class Resource(AWSProperty):
props = {
'DatabaseResource': (DatabaseResource, False),
+ 'DataLocationResource': (DataLocationResource, False),
'TableResource': (TableResource, False),
+ 'TableWithColumnsResource': (TableWithColumnsResource, False),
}
| {"golden_diff": "diff --git a/troposphere/lakeformation.py b/troposphere/lakeformation.py\n--- a/troposphere/lakeformation.py\n+++ b/troposphere/lakeformation.py\n@@ -43,10 +43,33 @@\n }\n \n \n+class DataLocationResource(AWSProperty):\n+ props = {\n+ 'S3Resource': (basestring, False),\n+ }\n+\n+\n+class ColumnWildcard(AWSProperty):\n+ props = {\n+ 'ExcludedColumnNames': ([basestring], False),\n+ }\n+\n+\n+class TableWithColumnsResource(AWSProperty):\n+ props = {\n+ 'ColumnNames': ([basestring], False),\n+ 'ColumnWildcard': (ColumnWildcard, False),\n+ 'DatabaseName': (basestring, False),\n+ 'Name': (basestring, False),\n+ }\n+\n+\n class Resource(AWSProperty):\n props = {\n 'DatabaseResource': (DatabaseResource, False),\n+ 'DataLocationResource': (DataLocationResource, False),\n 'TableResource': (TableResource, False),\n+ 'TableWithColumnsResource': (TableWithColumnsResource, False),\n }\n", "issue": "Add support for DataLocationResource & TableWithColumnsResource in AWS::LakeFormation::Permissions (2020, Jan 16 update)\nwaiting for the doc to be updated\n", "before_files": [{"content": "# Copyright (c) 2012-2019, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n#\n# *** Do not modify - this file is autogenerated ***\n# Resource specification version: 5.3.0\n\n\nfrom . import AWSObject\nfrom . import AWSProperty\n\n\nclass Admins(AWSProperty):\n props = {\n }\n\n\nclass DataLakeSettings(AWSObject):\n resource_type = \"AWS::LakeFormation::DataLakeSettings\"\n\n props = {\n 'Admins': (Admins, False),\n }\n\n\nclass DataLakePrincipal(AWSProperty):\n props = {\n 'DataLakePrincipalIdentifier': (basestring, False),\n }\n\n\nclass DatabaseResource(AWSProperty):\n props = {\n 'Name': (basestring, False),\n }\n\n\nclass TableResource(AWSProperty):\n props = {\n 'DatabaseName': (basestring, False),\n 'Name': (basestring, False),\n }\n\n\nclass Resource(AWSProperty):\n props = {\n 'DatabaseResource': (DatabaseResource, False),\n 'TableResource': (TableResource, False),\n }\n\n\nclass Permissions(AWSObject):\n resource_type = \"AWS::LakeFormation::Permissions\"\n\n props = {\n 'DataLakePrincipal': (DataLakePrincipal, True),\n 'Permissions': ([basestring], False),\n 'PermissionsWithGrantOption': ([basestring], False),\n 'Resource': (Resource, True),\n }\n", "path": "troposphere/lakeformation.py"}]} | 1,022 | 253 |
gh_patches_debug_31851 | rasdani/github-patches | git_diff | coala__coala-2865 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add more strings to Constants
Our Constants are located in coala/coalib/misc/Constants.py, and there we hold options for `TRUE_STRINGS`. I think we should expand these true strings with more options, such as `yep`, `ja`, or even `hell yeah` (who knows what the user might come up with). Feel free to add your own suggestions if you think they fit.
</issue>
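For context on how such lists are typically consumed, matching is usually done by lower-cased membership; the helper below is illustrative only and is not coala's actual conversion API:

```python
# Illustrative helper, not coala's real code.
def parse_bool(value, true_strings, false_strings):
    v = value.strip().lower()
    if v in true_strings:
        return True
    if v in false_strings:
        return False
    raise ValueError("cannot interpret {!r} as a boolean".format(value))
```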
<code>
[start of coalib/misc/Constants.py]
1 # -*- coding: utf-8 -*-
2
3 import appdirs
4 import os
5 import re
6
7 # Start ignoring PyImportSortBear, PyLintBear as BUS_NAME is imported as a
8 # constant from other files.
9 from coalib import BUS_NAME
10 from coalib import VERSION
11 # Stop ignoring
12
13
14 THIS_IS_A_BUG = ('This is a bug. We are sorry for the inconvenience. '
15 'Please contact the developers for assistance.')
16
17 CRASH_MESSAGE = ('An unknown error occurred. This is a bug. We are '
18 'sorry for the inconvenience. Please contact the '
19 'developers for assistance. During execution of '
20 'coala an exception was raised. This should never '
21 'happen. When asked for, the following information '
22 'may help investigating:')
23
24 VERSION_CONFLICT_MESSAGE = ('There is a conflict in the version of a '
25 'dependency you have installed and the '
26 'requirements of coala. This may be resolved by '
27 'creating a separate virtual environment for '
28 'coala or running `pip install "%s"`. Be aware '
29 'that the latter solution might break other '
30 'python packages that depend on the currently '
31 'installed version.')
32
33 OBJ_NOT_ACCESSIBLE = '{} is not accessible and will be ignored!'
34
35 TRUE_STRINGS = ['1',
36 'on',
37 'y',
38 'yes',
39 'yeah',
40 'sure',
41 'true',
42 'definitely',
43 'yup',
44 'right',
45 'aye',
46 'positive']
47
48 FALSE_STRINGS = ['0',
49 'off',
50 'n',
51 'no',
52 'nope',
53 'nah',
54 'false',
55 'wrong',
56 'none',
57 'nay',
58 'negative']
59
60 # This string contains many unicode characters to challenge tests.
61 COMPLEX_TEST_STRING = ('4 r34l ch4ll3n63: 123 ÄÖü ABc @€¥ §&% {[( ←↓→↑ '
62 'ĦŊħ ß°^ \\\n\u2192')
63
64 # Path to the coalib directory
65 coalib_root = os.path.join(os.path.dirname(__file__),
66 os.path.pardir)
67
68 # Path to the language definition files
69 language_definitions = os.path.join(coalib_root,
70 'bearlib',
71 'languages',
72 'definitions')
73
74 system_coafile = os.path.join(coalib_root, 'default_coafile')
75
76 user_coafile = os.path.join(os.path.expanduser('~'), '.coarc')
77
78 default_coafile = '.coafile'
79
80 USER_DATA_DIR = appdirs.user_data_dir('coala', version=VERSION)
81
82 GLOBBING_SPECIAL_CHARS = '()[]|?*'
83
84 URL_REGEX = re.compile(
85 r'^(?:(?:http|ftp)[s]?://)?' # scheme
86 r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+' # domain name
87 r'(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'
88 r'localhost|' # OR localhost
89 r'(?:\d{1,3}\.){3}\d{1,3})' # OR an ip
90 r'(?::\d+)?' # optional port number
91 r'(?:/?|[/?]\S+)$', # path
92 re.IGNORECASE)
93
[end of coalib/misc/Constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/coalib/misc/Constants.py b/coalib/misc/Constants.py
--- a/coalib/misc/Constants.py
+++ b/coalib/misc/Constants.py
@@ -34,27 +34,69 @@
TRUE_STRINGS = ['1',
'on',
+ 'okay',
+ 'ok',
+ 'okey-dokey',
'y',
'yes',
'yeah',
+ 'yea',
+ 'ya',
+ 'ye',
+ 'yessir',
'sure',
'true',
+ 'tru',
+ 'uh-huh',
'definitely',
'yup',
+ 'yep',
'right',
'aye',
+ 'alright',
+ 'alrighty',
+ 'hell yeah',
+ 'affirmative',
+ 'certainly',
+ 'definitely',
+ 'absolutely',
+ 'roger',
+ 'righto',
+ 'ja',
+ 'da',
+ 'si',
+ 'oui',
+ 'amen',
+ 'totally',
+ '10-4',
'positive']
FALSE_STRINGS = ['0',
'off',
'n',
'no',
+ 'nix',
'nope',
+ 'nop',
'nah',
+ 'nay',
'false',
+ 'uh-uh',
'wrong',
'none',
'nay',
+ 'hell no',
+ 'fat chance',
+ 'not a chance in hell',
+ 'not in a million years',
+ 'out of the question',
+ 'no siree',
+ 'no way',
+ 'nein',
+ 'njet',
+ 'nee',
+ 'non',
+ 'hakuna',
'negative']
# This string contains many unicode characters to challenge tests.
| {"golden_diff": "diff --git a/coalib/misc/Constants.py b/coalib/misc/Constants.py\n--- a/coalib/misc/Constants.py\n+++ b/coalib/misc/Constants.py\n@@ -34,27 +34,69 @@\n \n TRUE_STRINGS = ['1',\n 'on',\n+ 'okay',\n+ 'ok',\n+ 'okey-dokey',\n 'y',\n 'yes',\n 'yeah',\n+ 'yea',\n+ 'ya',\n+ 'ye',\n+ 'yessir',\n 'sure',\n 'true',\n+ 'tru',\n+ 'uh-huh',\n 'definitely',\n 'yup',\n+ 'yep',\n 'right',\n 'aye',\n+ 'alright',\n+ 'alrighty',\n+ 'hell yeah',\n+ 'affirmative',\n+ 'certainly',\n+ 'definitely',\n+ 'absolutely',\n+ 'roger',\n+ 'righto',\n+ 'ja',\n+ 'da',\n+ 'si',\n+ 'oui',\n+ 'amen',\n+ 'totally',\n+ '10-4',\n 'positive']\n \n FALSE_STRINGS = ['0',\n 'off',\n 'n',\n 'no',\n+ 'nix',\n 'nope',\n+ 'nop',\n 'nah',\n+ 'nay',\n 'false',\n+ 'uh-uh',\n 'wrong',\n 'none',\n 'nay',\n+ 'hell no',\n+ 'fat chance',\n+ 'not a chance in hell',\n+ 'not in a million years',\n+ 'out of the question',\n+ 'no siree',\n+ 'no way',\n+ 'nein',\n+ 'njet',\n+ 'nee',\n+ 'non',\n+ 'hakuna',\n 'negative']\n \n # This string contains many unicode characters to challenge tests.\n", "issue": "Add more strings to Constants\nOur Constants are located in coala/coalib/misc/Constants.py and there we hold options for `TRUE_STRINGS`. I think we should expand these true strings with more options, such as : `yep`,`ja` or even `hell yeah` (who knows what the user might come up with). Feel free to add your own suggestions if you think they fit.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport appdirs\nimport os\nimport re\n\n# Start ignoring PyImportSortBear, PyLintBear as BUS_NAME is imported as a\n# constant from other files.\nfrom coalib import BUS_NAME\nfrom coalib import VERSION\n# Stop ignoring\n\n\nTHIS_IS_A_BUG = ('This is a bug. We are sorry for the inconvenience. '\n 'Please contact the developers for assistance.')\n\nCRASH_MESSAGE = ('An unknown error occurred. This is a bug. We are '\n 'sorry for the inconvenience. Please contact the '\n 'developers for assistance. During execution of '\n 'coala an exception was raised. This should never '\n 'happen. When asked for, the following information '\n 'may help investigating:')\n\nVERSION_CONFLICT_MESSAGE = ('There is a conflict in the version of a '\n 'dependency you have installed and the '\n 'requirements of coala. This may be resolved by '\n 'creating a separate virtual environment for '\n 'coala or running `pip install \"%s\"`. 
Be aware '\n 'that the latter solution might break other '\n 'python packages that depend on the currently '\n 'installed version.')\n\nOBJ_NOT_ACCESSIBLE = '{} is not accessible and will be ignored!'\n\nTRUE_STRINGS = ['1',\n 'on',\n 'y',\n 'yes',\n 'yeah',\n 'sure',\n 'true',\n 'definitely',\n 'yup',\n 'right',\n 'aye',\n 'positive']\n\nFALSE_STRINGS = ['0',\n 'off',\n 'n',\n 'no',\n 'nope',\n 'nah',\n 'false',\n 'wrong',\n 'none',\n 'nay',\n 'negative']\n\n# This string contains many unicode characters to challenge tests.\nCOMPLEX_TEST_STRING = ('4 r34l ch4ll3n63: 123 \u00c4\u00d6\u00fc ABc @\u20ac\u00a5 \u00a7&% {[( \u2190\u2193\u2192\u2191 '\n '\u0126\u014a\u0127 \u00df\u00b0^ \\\\\\n\\u2192')\n\n# Path to the coalib directory\ncoalib_root = os.path.join(os.path.dirname(__file__),\n os.path.pardir)\n\n# Path to the language definition files\nlanguage_definitions = os.path.join(coalib_root,\n 'bearlib',\n 'languages',\n 'definitions')\n\nsystem_coafile = os.path.join(coalib_root, 'default_coafile')\n\nuser_coafile = os.path.join(os.path.expanduser('~'), '.coarc')\n\ndefault_coafile = '.coafile'\n\nUSER_DATA_DIR = appdirs.user_data_dir('coala', version=VERSION)\n\nGLOBBING_SPECIAL_CHARS = '()[]|?*'\n\nURL_REGEX = re.compile(\n r'^(?:(?:http|ftp)[s]?://)?' # scheme\n r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+' # domain name\n r'(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)|'\n r'localhost|' # OR localhost\n r'(?:\\d{1,3}\\.){3}\\d{1,3})' # OR an ip\n r'(?::\\d+)?' # optional port number\n r'(?:/?|[/?]\\S+)$', # path\n re.IGNORECASE)\n", "path": "coalib/misc/Constants.py"}]} | 1,559 | 430 |
gh_patches_debug_37713 | rasdani/github-patches | git_diff | coala__coala-964 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
glob/collecting: Accept strings too
The methods there want a list, but `glob("string")` makes sense too (and currently behaves very strangely); we should support that.
</issue>
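The conventional fix for this kind of API is to normalise a lone string into a one-element list at the top of each entry point, which is what the golden diff below does. A minimal sketch with an illustrative body:

```python
# Sketch of the accept-a-string-too pattern.
def icollect(file_paths):
    if isinstance(file_paths, str):
        file_paths = [file_paths]
    for file_path in file_paths:
        yield file_path  # the real implementation expands globs here
```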
<code>
[start of coalib/collecting/Collectors.py]
1 import os
2
3 from coalib.collecting.Importers import iimport_objects
4 from coalib.misc.Decorators import yield_once
5 from coalib.misc.i18n import _
6 from coalib.parsing.Globbing import iglob
7
8
9 def _yield_if_right_kind(bear_class, kinds):
10 try:
11 if bear_class.kind() in kinds:
12 yield bear_class
13 except NotImplementedError:
14 pass
15
16
17 def _import_bears(file_path, kinds):
18 # recursive imports:
19 for bear_list in iimport_objects(file_path,
20 names='__additional_bears__',
21 types=list):
22 for bear_class in bear_list:
23 for valid_bear_class in _yield_if_right_kind(bear_class, kinds):
24 yield valid_bear_class
25 # normal import
26 for bear_class in iimport_objects(file_path,
27 attributes='kind',
28 local=True):
29 for valid_bear_class in _yield_if_right_kind(bear_class, kinds):
30 yield valid_bear_class
31
32
33 @yield_once
34 def icollect(file_paths):
35 """
36 Evaluate globs in file paths and return all matching files.
37
38 :param file_paths: list of file paths that can include globs
39 :return: iterator that yields paths of all matching files
40 """
41 for file_path in file_paths:
42 for match in iglob(file_path):
43 yield match
44
45
46 def collect_files(file_paths):
47 """
48 Evaluate globs in file paths and return all matching files
49
50 :param file_paths: list of file paths that can include globs
51 :return: list of paths of all matching files
52 """
53 return list(filter(os.path.isfile, icollect(file_paths)))
54
55
56 def collect_dirs(dir_paths):
57 """
58 Evaluate globs in directory paths and return all matching directories
59
60 :param dir_paths: list of file paths that can include globs
61 :return: list of paths of all matching directories
62 """
63 return list(filter(os.path.isdir, icollect(dir_paths)))
64
65
66 @yield_once
67 def icollect_bears(bear_dirs, bear_names, kinds, log_printer):
68 """
69 Collect all bears from bear directories that have a matching kind.
70
71 :param bear_dirs: directories that can contain bears
72 :param bear_names: names of bears
73 :param kinds: list of bear kinds to be collected
74 :param log_printer: log_printer to handle logging
75 :return: iterator that yields bear classes
76 """
77 for bear_dir in filter(os.path.isdir, icollect(bear_dirs)):
78 for bear_name in bear_names:
79 for matching_file in iglob(
80 os.path.join(bear_dir, bear_name + '.py')):
81
82 try:
83 for bear in _import_bears(matching_file, kinds):
84 yield bear
85 except:
86 log_printer.warn(_("Unable to collect bears from {file}. "
87 "Probably the file is malformed or "
88 "the module code raises an exception.")
89 .format(file=matching_file))
90
91
92 def collect_bears(bear_dirs, bear_names, kinds, log_printer):
93 """
94 Collect all bears from bear directories that have a matching kind.
95
96 :param bear_dirs: directories that can contain bears
97 :param bear_names: names of bears
98 :param kinds: list of bear kinds to be collected
99 :param log_printer: log_printer to handle logging
100 :return: list of matching bear classes
101 """
102 return list(icollect_bears(bear_dirs, bear_names, kinds, log_printer))
103
[end of coalib/collecting/Collectors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/coalib/collecting/Collectors.py b/coalib/collecting/Collectors.py
--- a/coalib/collecting/Collectors.py
+++ b/coalib/collecting/Collectors.py
@@ -35,9 +35,12 @@
"""
Evaluate globs in file paths and return all matching files.
- :param file_paths: list of file paths that can include globs
+ :param file_paths: file path or list of such that can include globs
:return: iterator that yields paths of all matching files
"""
+ if isinstance(file_paths, str):
+ file_paths = [file_paths]
+
for file_path in file_paths:
for match in iglob(file_path):
yield match
@@ -47,7 +50,7 @@
"""
Evaluate globs in file paths and return all matching files
- :param file_paths: list of file paths that can include globs
+ :param file_paths: file path or list of such that can include globs
:return: list of paths of all matching files
"""
return list(filter(os.path.isfile, icollect(file_paths)))
@@ -57,7 +60,7 @@
"""
Evaluate globs in directory paths and return all matching directories
- :param dir_paths: list of file paths that can include globs
+ :param dir_paths: file path or list of such that can include globs
:return: list of paths of all matching directories
"""
return list(filter(os.path.isdir, icollect(dir_paths)))
@@ -68,7 +71,7 @@
"""
Collect all bears from bear directories that have a matching kind.
- :param bear_dirs: directories that can contain bears
+ :param bear_dirs: directory name or list of such that can contain bears
:param bear_names: names of bears
:param kinds: list of bear kinds to be collected
:param log_printer: log_printer to handle logging
@@ -93,7 +96,7 @@
"""
Collect all bears from bear directories that have a matching kind.
- :param bear_dirs: directories that can contain bears
+ :param bear_dirs: directory name or list of such that can contain bears
:param bear_names: names of bears
:param kinds: list of bear kinds to be collected
:param log_printer: log_printer to handle logging
| {"golden_diff": "diff --git a/coalib/collecting/Collectors.py b/coalib/collecting/Collectors.py\n--- a/coalib/collecting/Collectors.py\n+++ b/coalib/collecting/Collectors.py\n@@ -35,9 +35,12 @@\n \"\"\"\n Evaluate globs in file paths and return all matching files.\n \n- :param file_paths: list of file paths that can include globs\n+ :param file_paths: file path or list of such that can include globs\n :return: iterator that yields paths of all matching files\n \"\"\"\n+ if isinstance(file_paths, str):\n+ file_paths = [file_paths]\n+\n for file_path in file_paths:\n for match in iglob(file_path):\n yield match\n@@ -47,7 +50,7 @@\n \"\"\"\n Evaluate globs in file paths and return all matching files\n \n- :param file_paths: list of file paths that can include globs\n+ :param file_paths: file path or list of such that can include globs\n :return: list of paths of all matching files\n \"\"\"\n return list(filter(os.path.isfile, icollect(file_paths)))\n@@ -57,7 +60,7 @@\n \"\"\"\n Evaluate globs in directory paths and return all matching directories\n \n- :param dir_paths: list of file paths that can include globs\n+ :param dir_paths: file path or list of such that can include globs\n :return: list of paths of all matching directories\n \"\"\"\n return list(filter(os.path.isdir, icollect(dir_paths)))\n@@ -68,7 +71,7 @@\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n \n- :param bear_dirs: directories that can contain bears\n+ :param bear_dirs: directory name or list of such that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n@@ -93,7 +96,7 @@\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n \n- :param bear_dirs: directories that can contain bears\n+ :param bear_dirs: directory name or list of such that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n", "issue": "glob/collecting: Accept strings too\nthe methods there want a list but `glob(\"string\")` makes sense too (and behaves very strangely), we should support that.\n\n", "before_files": [{"content": "import os\n\nfrom coalib.collecting.Importers import iimport_objects\nfrom coalib.misc.Decorators import yield_once\nfrom coalib.misc.i18n import _\nfrom coalib.parsing.Globbing import iglob\n\n\ndef _yield_if_right_kind(bear_class, kinds):\n try:\n if bear_class.kind() in kinds:\n yield bear_class\n except NotImplementedError:\n pass\n\n\ndef _import_bears(file_path, kinds):\n # recursive imports:\n for bear_list in iimport_objects(file_path,\n names='__additional_bears__',\n types=list):\n for bear_class in bear_list:\n for valid_bear_class in _yield_if_right_kind(bear_class, kinds):\n yield valid_bear_class\n # normal import\n for bear_class in iimport_objects(file_path,\n attributes='kind',\n local=True):\n for valid_bear_class in _yield_if_right_kind(bear_class, kinds):\n yield valid_bear_class\n\n\n@yield_once\ndef icollect(file_paths):\n \"\"\"\n Evaluate globs in file paths and return all matching files.\n\n :param file_paths: list of file paths that can include globs\n :return: iterator that yields paths of all matching files\n \"\"\"\n for file_path in file_paths:\n for match in iglob(file_path):\n yield match\n\n\ndef collect_files(file_paths):\n \"\"\"\n Evaluate globs in file paths and return all matching files\n\n :param file_paths: list of file paths 
that can include globs\n :return: list of paths of all matching files\n \"\"\"\n return list(filter(os.path.isfile, icollect(file_paths)))\n\n\ndef collect_dirs(dir_paths):\n \"\"\"\n Evaluate globs in directory paths and return all matching directories\n\n :param dir_paths: list of file paths that can include globs\n :return: list of paths of all matching directories\n \"\"\"\n return list(filter(os.path.isdir, icollect(dir_paths)))\n\n\n@yield_once\ndef icollect_bears(bear_dirs, bear_names, kinds, log_printer):\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n\n :param bear_dirs: directories that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n :return: iterator that yields bear classes\n \"\"\"\n for bear_dir in filter(os.path.isdir, icollect(bear_dirs)):\n for bear_name in bear_names:\n for matching_file in iglob(\n os.path.join(bear_dir, bear_name + '.py')):\n\n try:\n for bear in _import_bears(matching_file, kinds):\n yield bear\n except:\n log_printer.warn(_(\"Unable to collect bears from {file}. \"\n \"Probably the file is malformed or \"\n \"the module code raises an exception.\")\n .format(file=matching_file))\n\n\ndef collect_bears(bear_dirs, bear_names, kinds, log_printer):\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n\n :param bear_dirs: directories that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n :return: list of matching bear classes\n \"\"\"\n return list(icollect_bears(bear_dirs, bear_names, kinds, log_printer))\n", "path": "coalib/collecting/Collectors.py"}]} | 1,539 | 554 |
gh_patches_debug_11204 | rasdani/github-patches | git_diff | svthalia__concrexit-1736 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'Event' object has no attribute 'title_en'
Sentry Issue: [CONCREXIT-6D](https://sentry.io/organizations/thalia/issues/2465590057/?referrer=github_integration)
```
AttributeError: 'Event' object has no attribute 'title_en'
(9 additional frame(s) were not displayed)
...
File "rest_framework/mixins.py", line 68, in update
self.perform_update(serializer)
File "pizzas/api/v1/viewsets.py", line 105, in perform_update
self._update_payment(
File "pizzas/api/v1/viewsets.py", line 114, in _update_payment
order.payment = create_payment(order, processed_by, payment_type)
File "payments/services.py", line 67, in create_payment
notes=payable.payment_notes,
File "pizzas/payables.py", line 21, in payment_notes
f"Food order by {self.model.member_name} "
```
</issue>
<code>
[start of website/pizzas/payables.py]
1 from django.template.defaultfilters import date
2
3 from payments import Payable, payables
4 from pizzas.models import FoodOrder
5 from pizzas.services import can_change_order
6
7
8 class FoodOrderPayable(Payable):
9 @property
10 def payment_amount(self):
11 return self.model.product.price
12
13 @property
14 def payment_topic(self):
15 start_date = date(self.model.food_event.start, "Y-m-d")
16 return f"Food {self.model.food_event.event.title_en} [{start_date}]"
17
18 @property
19 def payment_notes(self):
20 return (
21 f"Food order by {self.model.member_name} "
22 f"for {self.model.food_event.event.title_en}"
23 )
24
25 @property
26 def payment_payer(self):
27 return self.model.member
28
29 def can_manage_payment(self, member):
30 return can_change_order(member, self.model.food_event)
31
32
33 def register():
34 payables.register(FoodOrder, FoodOrderPayable)
35
[end of website/pizzas/payables.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/pizzas/payables.py b/website/pizzas/payables.py
--- a/website/pizzas/payables.py
+++ b/website/pizzas/payables.py
@@ -13,13 +13,13 @@
@property
def payment_topic(self):
start_date = date(self.model.food_event.start, "Y-m-d")
- return f"Food {self.model.food_event.event.title_en} [{start_date}]"
+ return f"Food {self.model.food_event.event.title} [{start_date}]"
@property
def payment_notes(self):
return (
f"Food order by {self.model.member_name} "
- f"for {self.model.food_event.event.title_en}"
+ f"for {self.model.food_event.event.title}"
)
@property
| {"golden_diff": "diff --git a/website/pizzas/payables.py b/website/pizzas/payables.py\n--- a/website/pizzas/payables.py\n+++ b/website/pizzas/payables.py\n@@ -13,13 +13,13 @@\n @property\n def payment_topic(self):\n start_date = date(self.model.food_event.start, \"Y-m-d\")\n- return f\"Food {self.model.food_event.event.title_en} [{start_date}]\"\n+ return f\"Food {self.model.food_event.event.title} [{start_date}]\"\n \n @property\n def payment_notes(self):\n return (\n f\"Food order by {self.model.member_name} \"\n- f\"for {self.model.food_event.event.title_en}\"\n+ f\"for {self.model.food_event.event.title}\"\n )\n \n @property\n", "issue": "AttributeError: 'Event' object has no attribute 'title_en'\nSentry Issue: [CONCREXIT-6D](https://sentry.io/organizations/thalia/issues/2465590057/?referrer=github_integration)\n\n```\nAttributeError: 'Event' object has no attribute 'title_en'\n(9 additional frame(s) were not displayed)\n...\n File \"rest_framework/mixins.py\", line 68, in update\n self.perform_update(serializer)\n File \"pizzas/api/v1/viewsets.py\", line 105, in perform_update\n self._update_payment(\n File \"pizzas/api/v1/viewsets.py\", line 114, in _update_payment\n order.payment = create_payment(order, processed_by, payment_type)\n File \"payments/services.py\", line 67, in create_payment\n notes=payable.payment_notes,\n File \"pizzas/payables.py\", line 21, in payment_notes\n f\"Food order by {self.model.member_name} \"\n```\n", "before_files": [{"content": "from django.template.defaultfilters import date\n\nfrom payments import Payable, payables\nfrom pizzas.models import FoodOrder\nfrom pizzas.services import can_change_order\n\n\nclass FoodOrderPayable(Payable):\n @property\n def payment_amount(self):\n return self.model.product.price\n\n @property\n def payment_topic(self):\n start_date = date(self.model.food_event.start, \"Y-m-d\")\n return f\"Food {self.model.food_event.event.title_en} [{start_date}]\"\n\n @property\n def payment_notes(self):\n return (\n f\"Food order by {self.model.member_name} \"\n f\"for {self.model.food_event.event.title_en}\"\n )\n\n @property\n def payment_payer(self):\n return self.model.member\n\n def can_manage_payment(self, member):\n return can_change_order(member, self.model.food_event)\n\n\ndef register():\n payables.register(FoodOrder, FoodOrderPayable)\n", "path": "website/pizzas/payables.py"}]} | 1,033 | 181 |
gh_patches_debug_21261 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-62 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UnicodeEncodeError when the default value is used and contains non-ASCII characters.
The error occurs when the user accepts the default unicode string (illustrated after this issue block).
Code:
```
if PY3:
cookiecutter_dict[key] = new_val
else:
cookiecutter_dict[key] = new_val.decode('utf-8')
```
Everything is okay in Python 3, but `new_val` is already unicode in 2.x.
</issue>
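To make the failure mode concrete: on Python 2, calling `.decode('utf-8')` on a value that is already `unicode` first performs an implicit `.encode('ascii')`, and that hidden step is what raises. The sketch below runs the equivalent step explicitly, and it fails the same way on any Python version:

```python
# -*- coding: utf-8 -*-
val = u'héllo'              # the default value, already unicode

try:
    val.encode('ascii')     # the implicit step py2's unicode.decode performs
except UnicodeEncodeError as exc:
    print(exc)              # 'ascii' codec can't encode character ...
```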
<code>
[start of cookiecutter/prompt.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.prompt
6 ---------------------
7
8 Functions for prompting the user for project info.
9 """
10
11 from __future__ import unicode_literals
12 import sys
13
14 PY3 = sys.version > '3'
15 if PY3:
16 iteritems = lambda d: iter(d.items())
17 else:
18 input = raw_input
19 iteritems = lambda d: d.iteritems()
20
21 def prompt_for_config(context):
22 """
23 Prompts the user to enter new config, using context as a source for the
24 field names and sample values.
25 """
26 cookiecutter_dict = {}
27
28 for key, val in iteritems(context['cookiecutter']):
29 prompt = "{0} (default is \"{1}\")? ".format(key, val)
30 new_val = input(prompt.encode('utf-8'))
31 new_val = new_val.strip()
32
33 if new_val == '':
34 new_val = val
35
36 if PY3:
37 cookiecutter_dict[key] = new_val
38 else:
39 cookiecutter_dict[key] = new_val.decode('utf-8')
40 return cookiecutter_dict
41
42
43 def query_yes_no(question, default="yes"):
44 """
45 Ask a yes/no question via `raw_input()` and return their answer.
46
47 :param question: A string that is presented to the user.
48 :param default: The presumed answer if the user just hits <Enter>.
49 It must be "yes" (the default), "no" or None (meaning
50 an answer is required of the user).
51
52 The "answer" return value is one of "yes" or "no".
53
54 Adapted from
55 http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input
56 http://code.activestate.com/recipes/577058/
57
58 """
59 valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False}
60 if default is None:
61 prompt = " [y/n] "
62 elif default == "yes":
63 prompt = " [Y/n] "
64 elif default == "no":
65 prompt = " [y/N] "
66 else:
67 raise ValueError("invalid default answer: '%s'" % default)
68
69 while True:
70 sys.stdout.write(question + prompt)
71 choice = input().lower()
72
73 if default is not None and choice == '':
74 return valid[default]
75 elif choice in valid:
76 return valid[choice]
77 else:
78 sys.stdout.write("Please respond with 'yes' or 'no' "
79 "(or 'y' or 'n').\n")
80
[end of cookiecutter/prompt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -18,6 +18,7 @@
input = raw_input
iteritems = lambda d: d.iteritems()
+
def prompt_for_config(context):
"""
Prompts the user to enter new config, using context as a source for the
@@ -27,16 +28,18 @@
for key, val in iteritems(context['cookiecutter']):
prompt = "{0} (default is \"{1}\")? ".format(key, val)
- new_val = input(prompt.encode('utf-8'))
+
+ if PY3:
+ new_val = input(prompt.encode('utf-8'))
+ else:
+ new_val = input(prompt.encode('utf-8')).decode('utf-8')
+
new_val = new_val.strip()
if new_val == '':
new_val = val
- if PY3:
- cookiecutter_dict[key] = new_val
- else:
- cookiecutter_dict[key] = new_val.decode('utf-8')
+ cookiecutter_dict[key] = new_val
return cookiecutter_dict
| {"golden_diff": "diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -18,6 +18,7 @@\n input = raw_input\n iteritems = lambda d: d.iteritems()\n \n+\n def prompt_for_config(context):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n@@ -27,16 +28,18 @@\n \n for key, val in iteritems(context['cookiecutter']):\n prompt = \"{0} (default is \\\"{1}\\\")? \".format(key, val)\n- new_val = input(prompt.encode('utf-8'))\n+\n+ if PY3:\n+ new_val = input(prompt.encode('utf-8'))\n+ else:\n+ new_val = input(prompt.encode('utf-8')).decode('utf-8')\n+\n new_val = new_val.strip()\n \n if new_val == '':\n new_val = val\n \n- if PY3:\n- cookiecutter_dict[key] = new_val\n- else:\n- cookiecutter_dict[key] = new_val.decode('utf-8')\n+ cookiecutter_dict[key] = new_val\n return cookiecutter_dict\n", "issue": "UnicodeEncodeError when the defualt value is used and contains non ascii characters.\nError occurs when the user uses the default unicode string.\n\nCode:\n\n```\nif PY3:\n cookiecutter_dict[key] = new_val\nelse:\n cookiecutter_dict[key] = new_val.decode('utf-8')\n```\n\nEverything is okay in Python 3, but `new_val` is already unicode in 2.x.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport sys\n\nPY3 = sys.version > '3'\nif PY3:\n iteritems = lambda d: iter(d.items())\nelse:\n input = raw_input\n iteritems = lambda d: d.iteritems()\n\ndef prompt_for_config(context):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n \"\"\"\n cookiecutter_dict = {}\n\n for key, val in iteritems(context['cookiecutter']):\n prompt = \"{0} (default is \\\"{1}\\\")? \".format(key, val)\n new_val = input(prompt.encode('utf-8'))\n new_val = new_val.strip()\n\n if new_val == '':\n new_val = val\n\n if PY3:\n cookiecutter_dict[key] = new_val\n else:\n cookiecutter_dict[key] = new_val.decode('utf-8')\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default=\"yes\"):\n \"\"\"\n Ask a yes/no question via `raw_input()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {\"yes\": True, \"y\": True, \"ye\": True, \"no\": False, \"n\": False}\n if default is None:\n prompt = \" [y/n] \"\n elif default == \"yes\":\n prompt = \" [Y/n] \"\n elif default == \"no\":\n prompt = \" [y/N] \"\n else:\n raise ValueError(\"invalid default answer: '%s'\" % default)\n\n while True:\n sys.stdout.write(question + prompt)\n choice = input().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write(\"Please respond with 'yes' or 'no' \"\n \"(or 'y' or 'n').\\n\")\n", "path": "cookiecutter/prompt.py"}]} | 1,349 | 282 |
gh_patches_debug_32478 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2080 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
format_baggage does not escape non-ascii in baggage keys
https://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L100
The C++ implementation does this, and it looks like the Python test strings already rely on URL encoding (a round-trip sketch follows this issue block).
https://github.com/open-telemetry/opentelemetry-cpp/blob/61d3c5e318830d10a0859befa046aa4847593764/api/include/opentelemetry/baggage/baggage.h#L174
</issue>
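As a hedged illustration (not the library's actual code), percent-encoding has to cover keys as well as values for a baggage header to survive a round trip through `split`:

```python
from urllib.parse import quote_plus, unquote_plus

entries = {"usér": "välue", "plain": "a b"}
header = ",".join(
    quote_plus(str(k)) + "=" + quote_plus(str(v)) for k, v in entries.items()
)
print(header)  # us%C3%A9r=v%C3%A4lue,plain=a+b

parsed = dict(
    (unquote_plus(k), unquote_plus(v))
    for k, v in (pair.split("=", 1) for pair in header.split(","))
)
print(parsed == entries)  # True
```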
<code>
[start of opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import typing
16 import urllib.parse
17
18 from opentelemetry import baggage
19 from opentelemetry.context import get_current
20 from opentelemetry.context.context import Context
21 from opentelemetry.propagators import textmap
22
23
24 class W3CBaggagePropagator(textmap.TextMapPropagator):
25 """Extracts and injects Baggage which is used to annotate telemetry."""
26
27 _MAX_HEADER_LENGTH = 8192
28 _MAX_PAIR_LENGTH = 4096
29 _MAX_PAIRS = 180
30 _BAGGAGE_HEADER_NAME = "baggage"
31
32 def extract(
33 self,
34 carrier: textmap.CarrierT,
35 context: typing.Optional[Context] = None,
36 getter: textmap.Getter = textmap.default_getter,
37 ) -> Context:
38 """Extract Baggage from the carrier.
39
40 See
41 `opentelemetry.propagators.textmap.TextMapPropagator.extract`
42 """
43
44 if context is None:
45 context = get_current()
46
47 header = _extract_first_element(
48 getter.get(carrier, self._BAGGAGE_HEADER_NAME)
49 )
50
51 if not header or len(header) > self._MAX_HEADER_LENGTH:
52 return context
53
54 baggage_entries = header.split(",")
55 total_baggage_entries = self._MAX_PAIRS
56 for entry in baggage_entries:
57 if total_baggage_entries <= 0:
58 return context
59 total_baggage_entries -= 1
60 if len(entry) > self._MAX_PAIR_LENGTH:
61 continue
62 try:
63 name, value = entry.split("=", 1)
64 except Exception: # pylint: disable=broad-except
65 continue
66 context = baggage.set_baggage(
67 urllib.parse.unquote(name).strip(),
68 urllib.parse.unquote(value).strip(),
69 context=context,
70 )
71
72 return context
73
74 def inject(
75 self,
76 carrier: textmap.CarrierT,
77 context: typing.Optional[Context] = None,
78 setter: textmap.Setter = textmap.default_setter,
79 ) -> None:
80 """Injects Baggage into the carrier.
81
82 See
83 `opentelemetry.propagators.textmap.TextMapPropagator.inject`
84 """
85 baggage_entries = baggage.get_all(context=context)
86 if not baggage_entries:
87 return
88
89 baggage_string = _format_baggage(baggage_entries)
90 setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)
91
92 @property
93 def fields(self) -> typing.Set[str]:
94 """Returns a set with the fields set in `inject`."""
95 return {self._BAGGAGE_HEADER_NAME}
96
97
98 def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
99 return ",".join(
100 key + "=" + urllib.parse.quote_plus(str(value))
101 for key, value in baggage_entries.items()
102 )
103
104
105 def _extract_first_element(
106 items: typing.Optional[typing.Iterable[textmap.CarrierT]],
107 ) -> typing.Optional[textmap.CarrierT]:
108 if items is None:
109 return None
110 return next(iter(items), None)
111
[end of opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
@@ -13,9 +13,9 @@
# limitations under the License.
#
import typing
-import urllib.parse
+from urllib.parse import quote_plus, unquote_plus
-from opentelemetry import baggage
+from opentelemetry.baggage import get_all, set_baggage
from opentelemetry.context import get_current
from opentelemetry.context.context import Context
from opentelemetry.propagators import textmap
@@ -63,9 +63,9 @@
name, value = entry.split("=", 1)
except Exception: # pylint: disable=broad-except
continue
- context = baggage.set_baggage(
- urllib.parse.unquote(name).strip(),
- urllib.parse.unquote(value).strip(),
+ context = set_baggage(
+ unquote_plus(name).strip(),
+ unquote_plus(value).strip(),
context=context,
)
@@ -82,7 +82,7 @@
See
`opentelemetry.propagators.textmap.TextMapPropagator.inject`
"""
- baggage_entries = baggage.get_all(context=context)
+ baggage_entries = get_all(context=context)
if not baggage_entries:
return
@@ -97,7 +97,7 @@
def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
return ",".join(
- key + "=" + urllib.parse.quote_plus(str(value))
+ quote_plus(str(key)) + "=" + quote_plus(str(value))
for key, value in baggage_entries.items()
)
| {"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n@@ -13,9 +13,9 @@\n # limitations under the License.\n #\n import typing\n-import urllib.parse\n+from urllib.parse import quote_plus, unquote_plus\n \n-from opentelemetry import baggage\n+from opentelemetry.baggage import get_all, set_baggage\n from opentelemetry.context import get_current\n from opentelemetry.context.context import Context\n from opentelemetry.propagators import textmap\n@@ -63,9 +63,9 @@\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n- context = baggage.set_baggage(\n- urllib.parse.unquote(name).strip(),\n- urllib.parse.unquote(value).strip(),\n+ context = set_baggage(\n+ unquote_plus(name).strip(),\n+ unquote_plus(value).strip(),\n context=context,\n )\n \n@@ -82,7 +82,7 @@\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n- baggage_entries = baggage.get_all(context=context)\n+ baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n \n@@ -97,7 +97,7 @@\n \n def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n- key + \"=\" + urllib.parse.quote_plus(str(value))\n+ quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n", "issue": "format_baggage does not escape non-ascii in baggage keys\nhttps://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L100\r\n\r\nThe cpp implementation does this, and it looks like in python test strings there is url encoding.\r\nhttps://github.com/open-telemetry/opentelemetry-cpp/blob/61d3c5e318830d10a0859befa046aa4847593764/api/include/opentelemetry/baggage/baggage.h#L174\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nimport urllib.parse\n\nfrom opentelemetry import baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n 
getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if total_baggage_entries <= 0:\n return context\n total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = baggage.set_baggage(\n urllib.parse.unquote(name).strip(),\n urllib.parse.unquote(value).strip(),\n context=context,\n )\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = baggage.get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n key + \"=\" + urllib.parse.quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}]} | 1,761 | 420 |
gh_patches_debug_66455 | rasdani/github-patches | git_diff | pyca__cryptography-8319 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect docstrings in x25519 and x448 `.public_key()` methods
See:
https://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x25519.py#L60-L64
https://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x448.py#L60-L64
In both instances, the method does not return serialised bytes, but a public key object. The full [generated documentation](https://cryptography.io/en/latest/hazmat/primitives/asymmetric/x25519/#cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.public_key) is correct, as are the Ed* docstrings.
</issue>
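The distinction is easy to demonstrate with the library's public API (a short sketch; the printed length reflects X448's 56-byte raw keys):

```python
from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey
from cryptography.hazmat.primitives import serialization

private_key = X448PrivateKey.generate()
public_key = private_key.public_key()        # an X448PublicKey object, not bytes
raw = public_key.public_bytes(               # *this* is the serialized form
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
print(type(public_key).__name__, len(raw))   # X448PublicKey 56
```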
<code>
[start of src/cryptography/hazmat/primitives/asymmetric/x448.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5
6 import abc
7
8 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
9 from cryptography.hazmat.primitives import _serialization
10
11
12 class X448PublicKey(metaclass=abc.ABCMeta):
13 @classmethod
14 def from_public_bytes(cls, data: bytes) -> "X448PublicKey":
15 from cryptography.hazmat.backends.openssl.backend import backend
16
17 if not backend.x448_supported():
18 raise UnsupportedAlgorithm(
19 "X448 is not supported by this version of OpenSSL.",
20 _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,
21 )
22
23 return backend.x448_load_public_bytes(data)
24
25 @abc.abstractmethod
26 def public_bytes(
27 self,
28 encoding: _serialization.Encoding,
29 format: _serialization.PublicFormat,
30 ) -> bytes:
31 """
32 The serialized bytes of the public key.
33 """
34
35
36 class X448PrivateKey(metaclass=abc.ABCMeta):
37 @classmethod
38 def generate(cls) -> "X448PrivateKey":
39 from cryptography.hazmat.backends.openssl.backend import backend
40
41 if not backend.x448_supported():
42 raise UnsupportedAlgorithm(
43 "X448 is not supported by this version of OpenSSL.",
44 _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,
45 )
46 return backend.x448_generate_key()
47
48 @classmethod
49 def from_private_bytes(cls, data: bytes) -> "X448PrivateKey":
50 from cryptography.hazmat.backends.openssl.backend import backend
51
52 if not backend.x448_supported():
53 raise UnsupportedAlgorithm(
54 "X448 is not supported by this version of OpenSSL.",
55 _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,
56 )
57
58 return backend.x448_load_private_bytes(data)
59
60 @abc.abstractmethod
61 def public_key(self) -> X448PublicKey:
62 """
63 The serialized bytes of the public key.
64 """
65
66 @abc.abstractmethod
67 def private_bytes(
68 self,
69 encoding: _serialization.Encoding,
70 format: _serialization.PrivateFormat,
71 encryption_algorithm: _serialization.KeySerializationEncryption,
72 ) -> bytes:
73 """
74 The serialized bytes of the private key.
75 """
76
77 @abc.abstractmethod
78 def exchange(self, peer_public_key: X448PublicKey) -> bytes:
79 """
80 Performs a key exchange operation using the provided peer's public key.
81 """
82
[end of src/cryptography/hazmat/primitives/asymmetric/x448.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/hazmat/primitives/asymmetric/x448.py b/src/cryptography/hazmat/primitives/asymmetric/x448.py
--- a/src/cryptography/hazmat/primitives/asymmetric/x448.py
+++ b/src/cryptography/hazmat/primitives/asymmetric/x448.py
@@ -60,7 +60,7 @@
@abc.abstractmethod
def public_key(self) -> X448PublicKey:
"""
- The serialized bytes of the public key.
+ Returns the public key associated with this private key
"""
@abc.abstractmethod
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/asymmetric/x448.py b/src/cryptography/hazmat/primitives/asymmetric/x448.py\n--- a/src/cryptography/hazmat/primitives/asymmetric/x448.py\n+++ b/src/cryptography/hazmat/primitives/asymmetric/x448.py\n@@ -60,7 +60,7 @@\n @abc.abstractmethod\n def public_key(self) -> X448PublicKey:\n \"\"\"\n- The serialized bytes of the public key.\n+ Returns the public key associated with this private key\n \"\"\"\n \n @abc.abstractmethod\n", "issue": "Incorrect docstrings in x25519 and x448 `.public_key()` methods\nSee:\r\n\r\nhttps://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x25519.py#L60-L64\r\n\r\nhttps://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x448.py#L60-L64\r\n\r\nIn both instances, the method does not return serialised bytes, but a public key object. The full [generated documentation](https://cryptography.io/en/latest/hazmat/primitives/asymmetric/x25519/#cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.public_key) is correct, as are the Ed* docstrings.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\n\nimport abc\n\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import _serialization\n\n\nclass X448PublicKey(metaclass=abc.ABCMeta):\n @classmethod\n def from_public_bytes(cls, data: bytes) -> \"X448PublicKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n return backend.x448_load_public_bytes(data)\n\n @abc.abstractmethod\n def public_bytes(\n self,\n encoding: _serialization.Encoding,\n format: _serialization.PublicFormat,\n ) -> bytes:\n \"\"\"\n The serialized bytes of the public key.\n \"\"\"\n\n\nclass X448PrivateKey(metaclass=abc.ABCMeta):\n @classmethod\n def generate(cls) -> \"X448PrivateKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n return backend.x448_generate_key()\n\n @classmethod\n def from_private_bytes(cls, data: bytes) -> \"X448PrivateKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n return backend.x448_load_private_bytes(data)\n\n @abc.abstractmethod\n def public_key(self) -> X448PublicKey:\n \"\"\"\n The serialized bytes of the public key.\n \"\"\"\n\n @abc.abstractmethod\n def private_bytes(\n self,\n encoding: _serialization.Encoding,\n format: _serialization.PrivateFormat,\n encryption_algorithm: _serialization.KeySerializationEncryption,\n ) -> bytes:\n \"\"\"\n The serialized bytes of the private key.\n \"\"\"\n\n @abc.abstractmethod\n def exchange(self, peer_public_key: X448PublicKey) -> bytes:\n \"\"\"\n Performs a key exchange operation using the provided peer's public key.\n \"\"\"\n", "path": 
"src/cryptography/hazmat/primitives/asymmetric/x448.py"}]} | 1,523 | 137 |
gh_patches_debug_38411 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-5275 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for EFS Lifecycles
According to this, it's not an available action: https://cloudcustodian.io/docs/aws/resources/efs.html
Console and CLI instructions from AWS are here: https://docs.aws.amazon.com/efs/latest/ug/enable-lifecycle-management.html
</issue>
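For reference, a minimal sketch of the underlying AWS call such an action would need to wrap - this assumes boto3 with valid AWS credentials, and the file-system id below is hypothetical:

```python
import boto3

client = boto3.client("efs")

# Enable lifecycle management: move idle files to Infrequent Access
client.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # hypothetical id
    LifecyclePolicies=[{"TransitionToIA": "AFTER_7_DAYS"}],
)

# Disabling is the same call with an empty policy list
client.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[],
)
```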
<code>
[start of c7n/resources/efs.py]
1 # Copyright 2015-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 from c7n.actions import Action
17 from c7n.filters.kms import KmsRelatedFilter
18 from c7n.manager import resources
19 from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter
20 from c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo
21 from c7n.tags import universal_augment
22 from c7n.utils import local_session, type_schema, get_retry
23
24
25 @resources.register('efs')
26 class ElasticFileSystem(QueryResourceManager):
27
28 class resource_type(TypeInfo):
29 service = 'efs'
30 enum_spec = ('describe_file_systems', 'FileSystems', None)
31 id = 'FileSystemId'
32 name = 'Name'
33 date = 'CreationTime'
34 dimension = 'FileSystemId'
35 arn_type = 'file-system'
36 permission_prefix = arn_service = 'elasticfilesystem'
37 filter_name = 'FileSystemId'
38 filter_type = 'scalar'
39 universal_taggable = True
40
41 augment = universal_augment
42
43
44 @resources.register('efs-mount-target')
45 class ElasticFileSystemMountTarget(ChildResourceManager):
46
47 class resource_type(TypeInfo):
48 service = 'efs'
49 parent_spec = ('efs', 'FileSystemId', None)
50 enum_spec = ('describe_mount_targets', 'MountTargets', None)
51 permission_prefix = 'elasticfilesystem'
52 name = id = 'MountTargetId'
53 filter_name = 'MountTargetId'
54 filter_type = 'scalar'
55 arn = False
56
57
58 @ElasticFileSystemMountTarget.filter_registry.register('subnet')
59 class Subnet(SubnetFilter):
60
61 RelatedIdsExpression = "SubnetId"
62
63
64 @ElasticFileSystemMountTarget.filter_registry.register('security-group')
65 class SecurityGroup(SecurityGroupFilter):
66
67 efs_group_cache = None
68
69 RelatedIdsExpression = ""
70
71 def get_related_ids(self, resources):
72
73 if self.efs_group_cache:
74 group_ids = set()
75 for r in resources:
76 group_ids.update(
77 self.efs_group_cache.get(r['MountTargetId'], ()))
78 return list(group_ids)
79
80 client = local_session(self.manager.session_factory).client('efs')
81 groups = {}
82 group_ids = set()
83 retry = get_retry(('Throttled',), 12)
84
85 for r in resources:
86 groups[r['MountTargetId']] = retry(
87 client.describe_mount_target_security_groups,
88 MountTargetId=r['MountTargetId'])['SecurityGroups']
89 group_ids.update(groups[r['MountTargetId']])
90
91 self.efs_group_cache = groups
92 return list(group_ids)
93
94
95 @ElasticFileSystem.filter_registry.register('kms-key')
96 class KmsFilter(KmsRelatedFilter):
97 """
98 Filter a resource by its associcated kms key and optionally the aliasname
99 of the kms key by using 'c7n:AliasName'
100
101 :example:
102
103 .. code-block:: yaml
104
105 policies:
106 - name: efs-kms-key-filters
107 resource: efs
108 filters:
109 - type: kms-key
110 key: c7n:AliasName
111 value: "^(alias/aws/)"
112 op: regex
113 """
114 RelatedIdsExpression = 'KmsKeyId'
115
116
117 @ElasticFileSystem.action_registry.register('delete')
118 class Delete(Action):
119
120 schema = type_schema('delete')
121 permissions = ('elasticfilesystem:DescribeMountTargets',
122 'elasticfilesystem:DeleteMountTarget',
123 'elasticfilesystem:DeleteFileSystem')
124
125 def process(self, resources):
126 client = local_session(self.manager.session_factory).client('efs')
127 self.unmount_filesystems(resources)
128 retry = get_retry(('FileSystemInUse',), 12)
129 for r in resources:
130 retry(client.delete_file_system, FileSystemId=r['FileSystemId'])
131
132 def unmount_filesystems(self, resources):
133 client = local_session(self.manager.session_factory).client('efs')
134 for r in resources:
135 if not r['NumberOfMountTargets']:
136 continue
137 for t in client.describe_mount_targets(
138 FileSystemId=r['FileSystemId'])['MountTargets']:
139 client.delete_mount_target(MountTargetId=t['MountTargetId'])
140
[end of c7n/resources/efs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/c7n/resources/efs.py b/c7n/resources/efs.py
--- a/c7n/resources/efs.py
+++ b/c7n/resources/efs.py
@@ -13,13 +13,15 @@
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
-from c7n.actions import Action
+from c7n.actions import Action, BaseAction
+from c7n.exceptions import PolicyValidationError
from c7n.filters.kms import KmsRelatedFilter
from c7n.manager import resources
from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter
from c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo
from c7n.tags import universal_augment
from c7n.utils import local_session, type_schema, get_retry
+from .aws import shape_validate
@resources.register('efs')
@@ -137,3 +139,57 @@
for t in client.describe_mount_targets(
FileSystemId=r['FileSystemId'])['MountTargets']:
client.delete_mount_target(MountTargetId=t['MountTargetId'])
+
+
[email protected]_registry.register('configure-lifecycle-policy')
+class ConfigureLifecycle(BaseAction):
+ """Enable/disable lifecycle policy for efs.
+
+ :example:
+
+ .. code-block:: yaml
+
+ policies:
+ - name: efs-apply-lifecycle
+ resource: efs
+ actions:
+ - type: configure-lifecycle-policy
+ state: enable
+ rules:
+ - 'TransitionToIA': 'AFTER_7_DAYS'
+
+ """
+ schema = type_schema(
+ 'configure-lifecycle-policy',
+ state={'enum': ['enable', 'disable']},
+ rules={
+ 'type': 'array',
+ 'items': {'type': 'object'}},
+ required=['state'])
+
+ permissions = ('elasticfilesystem:PutLifecycleConfiguration',)
+ shape = 'PutLifecycleConfigurationRequest'
+
+ def validate(self):
+ if self.data.get('state') == 'enable' and 'rules' not in self.data:
+ raise PolicyValidationError(
+ 'rules are required to enable lifecycle configuration %s' % (self.manager.data))
+ if self.data.get('state') == 'disable' and 'rules' in self.data:
+ raise PolicyValidationError(
+ 'rules not required to disable lifecycle configuration %s' % (self.manager.data))
+ if self.data.get('rules'):
+ attrs = {}
+ attrs['LifecyclePolicies'] = self.data['rules']
+ attrs['FileSystemId'] = 'PolicyValidator'
+ return shape_validate(attrs, self.shape, 'efs')
+
+ def process(self, resources):
+ client = local_session(self.manager.session_factory).client('efs')
+ op_map = {'enable': self.data.get('rules'), 'disable': []}
+ for r in resources:
+ try:
+ client.put_lifecycle_configuration(
+ FileSystemId=r['FileSystemId'],
+ LifecyclePolicies=op_map.get(self.data.get('state')))
+ except client.exceptions.FileSystemNotFound:
+ continue
| {"golden_diff": "diff --git a/c7n/resources/efs.py b/c7n/resources/efs.py\n--- a/c7n/resources/efs.py\n+++ b/c7n/resources/efs.py\n@@ -13,13 +13,15 @@\n # limitations under the License.\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n-from c7n.actions import Action\n+from c7n.actions import Action, BaseAction\n+from c7n.exceptions import PolicyValidationError\n from c7n.filters.kms import KmsRelatedFilter\n from c7n.manager import resources\n from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter\n from c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo\n from c7n.tags import universal_augment\n from c7n.utils import local_session, type_schema, get_retry\n+from .aws import shape_validate\n \n \n @resources.register('efs')\n@@ -137,3 +139,57 @@\n for t in client.describe_mount_targets(\n FileSystemId=r['FileSystemId'])['MountTargets']:\n client.delete_mount_target(MountTargetId=t['MountTargetId'])\n+\n+\[email protected]_registry.register('configure-lifecycle-policy')\n+class ConfigureLifecycle(BaseAction):\n+ \"\"\"Enable/disable lifecycle policy for efs.\n+\n+ :example:\n+\n+ .. code-block:: yaml\n+\n+ policies:\n+ - name: efs-apply-lifecycle\n+ resource: efs\n+ actions:\n+ - type: configure-lifecycle-policy\n+ state: enable\n+ rules:\n+ - 'TransitionToIA': 'AFTER_7_DAYS'\n+\n+ \"\"\"\n+ schema = type_schema(\n+ 'configure-lifecycle-policy',\n+ state={'enum': ['enable', 'disable']},\n+ rules={\n+ 'type': 'array',\n+ 'items': {'type': 'object'}},\n+ required=['state'])\n+\n+ permissions = ('elasticfilesystem:PutLifecycleConfiguration',)\n+ shape = 'PutLifecycleConfigurationRequest'\n+\n+ def validate(self):\n+ if self.data.get('state') == 'enable' and 'rules' not in self.data:\n+ raise PolicyValidationError(\n+ 'rules are required to enable lifecycle configuration %s' % (self.manager.data))\n+ if self.data.get('state') == 'disable' and 'rules' in self.data:\n+ raise PolicyValidationError(\n+ 'rules not required to disable lifecycle configuration %s' % (self.manager.data))\n+ if self.data.get('rules'):\n+ attrs = {}\n+ attrs['LifecyclePolicies'] = self.data['rules']\n+ attrs['FileSystemId'] = 'PolicyValidator'\n+ return shape_validate(attrs, self.shape, 'efs')\n+\n+ def process(self, resources):\n+ client = local_session(self.manager.session_factory).client('efs')\n+ op_map = {'enable': self.data.get('rules'), 'disable': []}\n+ for r in resources:\n+ try:\n+ client.put_lifecycle_configuration(\n+ FileSystemId=r['FileSystemId'],\n+ LifecyclePolicies=op_map.get(self.data.get('state')))\n+ except client.exceptions.FileSystemNotFound:\n+ continue\n", "issue": "Support for EFS Lifecycles\nAccording to this, it's not an available action: https://cloudcustodian.io/docs/aws/resources/efs.html\r\n\r\nConsole and CLI instructions from AWS are here: https://docs.aws.amazon.com/efs/latest/ug/enable-lifecycle-management.html\n", "before_files": [{"content": "# Copyright 2015-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations 
under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.actions import Action\nfrom c7n.filters.kms import KmsRelatedFilter\nfrom c7n.manager import resources\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter\nfrom c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo\nfrom c7n.tags import universal_augment\nfrom c7n.utils import local_session, type_schema, get_retry\n\n\[email protected]('efs')\nclass ElasticFileSystem(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'efs'\n enum_spec = ('describe_file_systems', 'FileSystems', None)\n id = 'FileSystemId'\n name = 'Name'\n date = 'CreationTime'\n dimension = 'FileSystemId'\n arn_type = 'file-system'\n permission_prefix = arn_service = 'elasticfilesystem'\n filter_name = 'FileSystemId'\n filter_type = 'scalar'\n universal_taggable = True\n\n augment = universal_augment\n\n\[email protected]('efs-mount-target')\nclass ElasticFileSystemMountTarget(ChildResourceManager):\n\n class resource_type(TypeInfo):\n service = 'efs'\n parent_spec = ('efs', 'FileSystemId', None)\n enum_spec = ('describe_mount_targets', 'MountTargets', None)\n permission_prefix = 'elasticfilesystem'\n name = id = 'MountTargetId'\n filter_name = 'MountTargetId'\n filter_type = 'scalar'\n arn = False\n\n\[email protected]_registry.register('subnet')\nclass Subnet(SubnetFilter):\n\n RelatedIdsExpression = \"SubnetId\"\n\n\[email protected]_registry.register('security-group')\nclass SecurityGroup(SecurityGroupFilter):\n\n efs_group_cache = None\n\n RelatedIdsExpression = \"\"\n\n def get_related_ids(self, resources):\n\n if self.efs_group_cache:\n group_ids = set()\n for r in resources:\n group_ids.update(\n self.efs_group_cache.get(r['MountTargetId'], ()))\n return list(group_ids)\n\n client = local_session(self.manager.session_factory).client('efs')\n groups = {}\n group_ids = set()\n retry = get_retry(('Throttled',), 12)\n\n for r in resources:\n groups[r['MountTargetId']] = retry(\n client.describe_mount_target_security_groups,\n MountTargetId=r['MountTargetId'])['SecurityGroups']\n group_ids.update(groups[r['MountTargetId']])\n\n self.efs_group_cache = groups\n return list(group_ids)\n\n\[email protected]_registry.register('kms-key')\nclass KmsFilter(KmsRelatedFilter):\n \"\"\"\n Filter a resource by its associcated kms key and optionally the aliasname\n of the kms key by using 'c7n:AliasName'\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: efs-kms-key-filters\n resource: efs\n filters:\n - type: kms-key\n key: c7n:AliasName\n value: \"^(alias/aws/)\"\n op: regex\n \"\"\"\n RelatedIdsExpression = 'KmsKeyId'\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('elasticfilesystem:DescribeMountTargets',\n 'elasticfilesystem:DeleteMountTarget',\n 'elasticfilesystem:DeleteFileSystem')\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n self.unmount_filesystems(resources)\n retry = get_retry(('FileSystemInUse',), 12)\n for r in resources:\n retry(client.delete_file_system, FileSystemId=r['FileSystemId'])\n\n def unmount_filesystems(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n for r in resources:\n if not r['NumberOfMountTargets']:\n continue\n for t in client.describe_mount_targets(\n FileSystemId=r['FileSystemId'])['MountTargets']:\n client.delete_mount_target(MountTargetId=t['MountTargetId'])\n", "path": "c7n/resources/efs.py"}]} | 1,941 | 696 |
gh_patches_debug_28943 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1020 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace links in the challenge copying command
People might use full url links in the page HTML, use a regex to replace the links using the new challenge short name.
</issue>
<code>
[start of app/grandchallenge/challenges/management/commands/copy_challenge.py]
1 from django.core.management import BaseCommand, CommandError
2
3 from grandchallenge.challenges.models import Challenge
4 from grandchallenge.pages.models import Page
5
6
7 class Command(BaseCommand):
8 help = "Creates a copy of a challenge"
9
10 challenge_fields = [
11 "creator",
12 "description",
13 "educational",
14 "disclaimer",
15 "require_participant_review",
16 "use_registration_page",
17 "registration_page_text",
18 "use_evaluation",
19 "logo",
20 "banner",
21 ]
22
23 challenge_m2m_fields = [
24 "task_types",
25 "modalities",
26 "structures",
27 ]
28
29 config_fields = [
30 "use_teams",
31 "score_title",
32 "score_jsonpath",
33 "score_error_jsonpath",
34 "score_default_sort",
35 "score_decimal_places",
36 "extra_results_columns",
37 "scoring_method_choice",
38 "result_display_choice",
39 "allow_submission_comments",
40 "display_submission_comments",
41 "supplementary_file_choice",
42 "supplementary_file_label",
43 "supplementary_file_help_text",
44 "show_supplementary_file_link",
45 "publication_url_choice",
46 "show_publication_url",
47 "daily_submission_limit",
48 "submission_page_html",
49 "auto_publish_new_results",
50 "display_all_metrics",
51 "submission_join_key",
52 ]
53
54 page_fields = [
55 "title",
56 "permission_lvl",
57 "order",
58 "display_title",
59 "hidden",
60 "html",
61 ]
62
63 def add_arguments(self, parser):
64 parser.add_argument("source", type=str)
65 parser.add_argument("dest", type=str)
66
67 def handle(self, *args, **options):
68 src_name = options.pop("source")
69 dest_name = options.pop("dest")
70
71 if src_name.lower() == dest_name.lower():
72 raise CommandError("Source and dest names must be different")
73
74 src_challenge = Challenge.objects.get(short_name__iexact=src_name)
75 dest_challenge = self._create_new_challenge(
76 src_challenge=src_challenge, dest_name=dest_name
77 )
78
79 self._copy_m2m_fields(
80 src_challenge=src_challenge, dest_challenge=dest_challenge
81 )
82 self._copy_evaluation_config(
83 src_challenge=src_challenge, dest_challenge=dest_challenge
84 )
85 self._copy_pages(
86 src_challenge=src_challenge, dest_challenge=dest_challenge
87 )
88 self._copy_admins(
89 src_challenge=src_challenge, dest_challenge=dest_challenge
90 )
91
92 def _create_new_challenge(self, *, src_challenge, dest_name):
93 new_challenge = Challenge(
94 short_name=dest_name,
95 **{f: getattr(src_challenge, f) for f in self.challenge_fields},
96 )
97 new_challenge.full_clean()
98 new_challenge.save()
99 return new_challenge
100
101 def _copy_m2m_fields(self, *, src_challenge, dest_challenge):
102 for f in self.challenge_m2m_fields:
103 src_m2m = getattr(src_challenge, f)
104 dest_m2m = getattr(dest_challenge, f)
105 dest_m2m.set(src_m2m.all())
106
107 def _copy_evaluation_config(self, *, src_challenge, dest_challenge):
108 src_config = src_challenge.evaluation_config
109 dest_config = dest_challenge.evaluation_config
110
111 for attr in self.config_fields:
112 setattr(dest_config, attr, getattr(src_config, attr))
113
114 dest_config.save()
115
116 def _copy_pages(self, *, src_challenge, dest_challenge):
117 src_pages = src_challenge.page_set.all()
118
119 for src_page in src_pages:
120 Page.objects.create(
121 challenge=dest_challenge,
122 **{f: getattr(src_page, f) for f in self.page_fields},
123 )
124
125 def _copy_admins(self, *, src_challenge, dest_challenge):
126 for u in src_challenge.get_admins():
127 dest_challenge.add_admin(u)
128
[end of app/grandchallenge/challenges/management/commands/copy_challenge.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/grandchallenge/challenges/management/commands/copy_challenge.py b/app/grandchallenge/challenges/management/commands/copy_challenge.py
--- a/app/grandchallenge/challenges/management/commands/copy_challenge.py
+++ b/app/grandchallenge/challenges/management/commands/copy_challenge.py
@@ -1,3 +1,6 @@
+import re
+
+from django.contrib.sites.models import Site
from django.core.management import BaseCommand, CommandError
from grandchallenge.challenges.models import Challenge
@@ -57,7 +60,6 @@
"order",
"display_title",
"hidden",
- "html",
]
def add_arguments(self, parser):
@@ -113,12 +115,24 @@
dest_config.save()
+ def _substitute_urls(self, html, domain, old, new):
+ quote_replace = r"href='([^']*)'"
+ regex = fr'href="[^/]*//{old}.{domain}([^""]*)"'
+ html = re.sub(quote_replace, r'href="\1"', html)
+ return re.sub(regex, fr'href="https://{new}.{domain}\1"', html,)
+
def _copy_pages(self, *, src_challenge, dest_challenge):
src_pages = src_challenge.page_set.all()
+ site = Site.objects.get_current()
+ domain = site.domain
+ old = src_challenge.short_name
+ new = dest_challenge.short_name
+
for src_page in src_pages:
Page.objects.create(
challenge=dest_challenge,
+ html=self._substitute_urls(src_page.html, domain, old, new),
**{f: getattr(src_page, f) for f in self.page_fields},
)
| {"golden_diff": "diff --git a/app/grandchallenge/challenges/management/commands/copy_challenge.py b/app/grandchallenge/challenges/management/commands/copy_challenge.py\n--- a/app/grandchallenge/challenges/management/commands/copy_challenge.py\n+++ b/app/grandchallenge/challenges/management/commands/copy_challenge.py\n@@ -1,3 +1,6 @@\n+import re\n+\n+from django.contrib.sites.models import Site\n from django.core.management import BaseCommand, CommandError\n \n from grandchallenge.challenges.models import Challenge\n@@ -57,7 +60,6 @@\n \"order\",\n \"display_title\",\n \"hidden\",\n- \"html\",\n ]\n \n def add_arguments(self, parser):\n@@ -113,12 +115,24 @@\n \n dest_config.save()\n \n+ def _substitute_urls(self, html, domain, old, new):\n+ quote_replace = r\"href='([^']*)'\"\n+ regex = fr'href=\"[^/]*//{old}.{domain}([^\"\"]*)\"'\n+ html = re.sub(quote_replace, r'href=\"\\1\"', html)\n+ return re.sub(regex, fr'href=\"https://{new}.{domain}\\1\"', html,)\n+\n def _copy_pages(self, *, src_challenge, dest_challenge):\n src_pages = src_challenge.page_set.all()\n \n+ site = Site.objects.get_current()\n+ domain = site.domain\n+ old = src_challenge.short_name\n+ new = dest_challenge.short_name\n+\n for src_page in src_pages:\n Page.objects.create(\n challenge=dest_challenge,\n+ html=self._substitute_urls(src_page.html, domain, old, new),\n **{f: getattr(src_page, f) for f in self.page_fields},\n )\n", "issue": "Replace links in the challenge copying command\nPeople might use full url links in the page HTML, use a regex to replace the links using the new challenge short name.\n", "before_files": [{"content": "from django.core.management import BaseCommand, CommandError\n\nfrom grandchallenge.challenges.models import Challenge\nfrom grandchallenge.pages.models import Page\n\n\nclass Command(BaseCommand):\n help = \"Creates a copy of a challenge\"\n\n challenge_fields = [\n \"creator\",\n \"description\",\n \"educational\",\n \"disclaimer\",\n \"require_participant_review\",\n \"use_registration_page\",\n \"registration_page_text\",\n \"use_evaluation\",\n \"logo\",\n \"banner\",\n ]\n\n challenge_m2m_fields = [\n \"task_types\",\n \"modalities\",\n \"structures\",\n ]\n\n config_fields = [\n \"use_teams\",\n \"score_title\",\n \"score_jsonpath\",\n \"score_error_jsonpath\",\n \"score_default_sort\",\n \"score_decimal_places\",\n \"extra_results_columns\",\n \"scoring_method_choice\",\n \"result_display_choice\",\n \"allow_submission_comments\",\n \"display_submission_comments\",\n \"supplementary_file_choice\",\n \"supplementary_file_label\",\n \"supplementary_file_help_text\",\n \"show_supplementary_file_link\",\n \"publication_url_choice\",\n \"show_publication_url\",\n \"daily_submission_limit\",\n \"submission_page_html\",\n \"auto_publish_new_results\",\n \"display_all_metrics\",\n \"submission_join_key\",\n ]\n\n page_fields = [\n \"title\",\n \"permission_lvl\",\n \"order\",\n \"display_title\",\n \"hidden\",\n \"html\",\n ]\n\n def add_arguments(self, parser):\n parser.add_argument(\"source\", type=str)\n parser.add_argument(\"dest\", type=str)\n\n def handle(self, *args, **options):\n src_name = options.pop(\"source\")\n dest_name = options.pop(\"dest\")\n\n if src_name.lower() == dest_name.lower():\n raise CommandError(\"Source and dest names must be different\")\n\n src_challenge = Challenge.objects.get(short_name__iexact=src_name)\n dest_challenge = self._create_new_challenge(\n src_challenge=src_challenge, dest_name=dest_name\n )\n\n self._copy_m2m_fields(\n 
src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_evaluation_config(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_pages(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_admins(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n\n def _create_new_challenge(self, *, src_challenge, dest_name):\n new_challenge = Challenge(\n short_name=dest_name,\n **{f: getattr(src_challenge, f) for f in self.challenge_fields},\n )\n new_challenge.full_clean()\n new_challenge.save()\n return new_challenge\n\n def _copy_m2m_fields(self, *, src_challenge, dest_challenge):\n for f in self.challenge_m2m_fields:\n src_m2m = getattr(src_challenge, f)\n dest_m2m = getattr(dest_challenge, f)\n dest_m2m.set(src_m2m.all())\n\n def _copy_evaluation_config(self, *, src_challenge, dest_challenge):\n src_config = src_challenge.evaluation_config\n dest_config = dest_challenge.evaluation_config\n\n for attr in self.config_fields:\n setattr(dest_config, attr, getattr(src_config, attr))\n\n dest_config.save()\n\n def _copy_pages(self, *, src_challenge, dest_challenge):\n src_pages = src_challenge.page_set.all()\n\n for src_page in src_pages:\n Page.objects.create(\n challenge=dest_challenge,\n **{f: getattr(src_page, f) for f in self.page_fields},\n )\n\n def _copy_admins(self, *, src_challenge, dest_challenge):\n for u in src_challenge.get_admins():\n dest_challenge.add_admin(u)\n", "path": "app/grandchallenge/challenges/management/commands/copy_challenge.py"}]} | 1,685 | 387 |
gh_patches_debug_58008 | rasdani/github-patches | git_diff | marshmallow-code__webargs-385 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix compatibility with Falcon 2.0
Tests are currently failing when Falcon 2.0.0 is installed.
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 import sys
3 import re
4 from setuptools import setup, find_packages
5
6 INSTALL_REQUIRES = ["marshmallow>=2.15.2"]
7 if sys.version_info[0] < 3:
8 INSTALL_REQUIRES.append("simplejson>=2.1.0")
9
10 FRAMEWORKS = [
11 "Flask>=0.12.2",
12 "Django>=1.11.16",
13 "bottle>=0.12.13",
14 "tornado>=4.5.2",
15 "pyramid>=1.9.1",
16 "webapp2>=3.0.0b1",
17 "falcon>=1.4.0",
18 'aiohttp>=3.0.0; python_version >= "3.5"',
19 ]
20 EXTRAS_REQUIRE = {
21 "frameworks": FRAMEWORKS,
22 "tests": [
23 "pytest",
24 "mock",
25 "webtest==2.0.32",
26 'webtest-aiohttp==2.0.0; python_version >= "3.5"',
27 'pytest-aiohttp>=0.3.0; python_version >= "3.5"',
28 ]
29 + FRAMEWORKS,
30 "lint": [
31 'mypy==0.650; python_version >= "3.5"',
32 "flake8==3.6.0",
33 'flake8-bugbear==18.8.0; python_version >= "3.5"',
34 "pre-commit==1.13.0",
35 ],
36 }
37 EXTRAS_REQUIRE["dev"] = EXTRAS_REQUIRE["tests"] + EXTRAS_REQUIRE["lint"] + ["tox"]
38
39
40 def find_version(fname):
41 """Attempts to find the version number in the file names fname.
42 Raises RuntimeError if not found.
43 """
44 version = ""
45 with open(fname, "r") as fp:
46 reg = re.compile(r'__version__ = [\'"]([^\'"]*)[\'"]')
47 for line in fp:
48 m = reg.match(line)
49 if m:
50 version = m.group(1)
51 break
52 if not version:
53 raise RuntimeError("Cannot find version information")
54 return version
55
56
57 def read(fname):
58 with open(fname) as fp:
59 content = fp.read()
60 return content
61
62
63 setup(
64 name="webargs",
65 version=find_version("src/webargs/__init__.py"),
66 description=(
67 "Declarative parsing and validation of HTTP request objects, "
68 "with built-in support for popular web frameworks, including "
69 "Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp."
70 ),
71 long_description=read("README.rst"),
72 author="Steven Loria",
73 author_email="[email protected]",
74 url="https://github.com/marshmallow-code/webargs",
75 packages=find_packages("src"),
76 package_dir={"": "src"},
77 install_requires=INSTALL_REQUIRES,
78 extras_require=EXTRAS_REQUIRE,
79 license="MIT",
80 zip_safe=False,
81 keywords=(
82 "webargs",
83 "http",
84 "flask",
85 "django",
86 "bottle",
87 "tornado",
88 "aiohttp",
89 "webapp2",
90 "request",
91 "arguments",
92 "validation",
93 "parameters",
94 "rest",
95 "api",
96 "marshmallow",
97 ),
98 classifiers=[
99 "Development Status :: 5 - Production/Stable",
100 "Intended Audience :: Developers",
101 "License :: OSI Approved :: MIT License",
102 "Natural Language :: English",
103 "Programming Language :: Python :: 2",
104 "Programming Language :: Python :: 2.7",
105 "Programming Language :: Python :: 3",
106 "Programming Language :: Python :: 3.5",
107 "Programming Language :: Python :: 3.6",
108 "Programming Language :: Python :: 3.7",
109 "Topic :: Internet :: WWW/HTTP :: Dynamic Content",
110 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
111 ],
112 test_suite="tests",
113 project_urls={
114 "Changelog": "https://webargs.readthedocs.io/en/latest/changelog.html",
115 "Issues": "https://github.com/marshmallow-code/webargs/issues",
116 "Funding": "https://opencollective.com/marshmallow",
117 "Tidelift": "https://tidelift.com/subscription/pkg/pypi-webargs?utm_source=pypi-marshmallow&utm_medium=pypi", # noqa
118 },
119 )
120
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,7 +14,7 @@
"tornado>=4.5.2",
"pyramid>=1.9.1",
"webapp2>=3.0.0b1",
- "falcon>=1.4.0",
+ "falcon>=1.4.0,<2.0",
'aiohttp>=3.0.0; python_version >= "3.5"',
]
EXTRAS_REQUIRE = {
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,7 +14,7 @@\n \"tornado>=4.5.2\",\n \"pyramid>=1.9.1\",\n \"webapp2>=3.0.0b1\",\n- \"falcon>=1.4.0\",\n+ \"falcon>=1.4.0,<2.0\",\n 'aiohttp>=3.0.0; python_version >= \"3.5\"',\n ]\n EXTRAS_REQUIRE = {\n", "issue": "Fix compatibility with Falcon 2.0\nTests are currently failing when Falcon 2.0.0 is installed.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport sys\nimport re\nfrom setuptools import setup, find_packages\n\nINSTALL_REQUIRES = [\"marshmallow>=2.15.2\"]\nif sys.version_info[0] < 3:\n INSTALL_REQUIRES.append(\"simplejson>=2.1.0\")\n\nFRAMEWORKS = [\n \"Flask>=0.12.2\",\n \"Django>=1.11.16\",\n \"bottle>=0.12.13\",\n \"tornado>=4.5.2\",\n \"pyramid>=1.9.1\",\n \"webapp2>=3.0.0b1\",\n \"falcon>=1.4.0\",\n 'aiohttp>=3.0.0; python_version >= \"3.5\"',\n]\nEXTRAS_REQUIRE = {\n \"frameworks\": FRAMEWORKS,\n \"tests\": [\n \"pytest\",\n \"mock\",\n \"webtest==2.0.32\",\n 'webtest-aiohttp==2.0.0; python_version >= \"3.5\"',\n 'pytest-aiohttp>=0.3.0; python_version >= \"3.5\"',\n ]\n + FRAMEWORKS,\n \"lint\": [\n 'mypy==0.650; python_version >= \"3.5\"',\n \"flake8==3.6.0\",\n 'flake8-bugbear==18.8.0; python_version >= \"3.5\"',\n \"pre-commit==1.13.0\",\n ],\n}\nEXTRAS_REQUIRE[\"dev\"] = EXTRAS_REQUIRE[\"tests\"] + EXTRAS_REQUIRE[\"lint\"] + [\"tox\"]\n\n\ndef find_version(fname):\n \"\"\"Attempts to find the version number in the file names fname.\n Raises RuntimeError if not found.\n \"\"\"\n version = \"\"\n with open(fname, \"r\") as fp:\n reg = re.compile(r'__version__ = [\\'\"]([^\\'\"]*)[\\'\"]')\n for line in fp:\n m = reg.match(line)\n if m:\n version = m.group(1)\n break\n if not version:\n raise RuntimeError(\"Cannot find version information\")\n return version\n\n\ndef read(fname):\n with open(fname) as fp:\n content = fp.read()\n return content\n\n\nsetup(\n name=\"webargs\",\n version=find_version(\"src/webargs/__init__.py\"),\n description=(\n \"Declarative parsing and validation of HTTP request objects, \"\n \"with built-in support for popular web frameworks, including \"\n \"Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp.\"\n ),\n long_description=read(\"README.rst\"),\n author=\"Steven Loria\",\n author_email=\"[email protected]\",\n url=\"https://github.com/marshmallow-code/webargs\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n install_requires=INSTALL_REQUIRES,\n extras_require=EXTRAS_REQUIRE,\n license=\"MIT\",\n zip_safe=False,\n keywords=(\n \"webargs\",\n \"http\",\n \"flask\",\n \"django\",\n \"bottle\",\n \"tornado\",\n \"aiohttp\",\n \"webapp2\",\n \"request\",\n \"arguments\",\n \"validation\",\n \"parameters\",\n \"rest\",\n \"api\",\n \"marshmallow\",\n ),\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Internet :: WWW/HTTP :: Dynamic Content\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n test_suite=\"tests\",\n project_urls={\n \"Changelog\": \"https://webargs.readthedocs.io/en/latest/changelog.html\",\n \"Issues\": \"https://github.com/marshmallow-code/webargs/issues\",\n 
\"Funding\": \"https://opencollective.com/marshmallow\",\n \"Tidelift\": \"https://tidelift.com/subscription/pkg/pypi-webargs?utm_source=pypi-marshmallow&utm_medium=pypi\", # noqa\n },\n)\n", "path": "setup.py"}]} | 1,792 | 124 |
gh_patches_debug_19737 | rasdani/github-patches | git_diff | biopython__biopython-3247 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Should we remove/replace Bio.Alphabet?
Filing this meta-issue, since we don't seem to have a single place discussing this on GitHub.
**My (likely biased) summary from those discussions is no one likes the current alphabet system, and most people ignore it.**
Biopython has a complicated hard to use legacy alphabet system in ``Bio.Alphabet`` which is used as a typing system (e.g. can't do reverse-complement on a protein), can store some useful information like the expected letters, if gapped and the gap character (although only one), and a stop codon symbol (although only one).
The objects in ``Bio.Alphabet`` cover three-letter alphabets as well as the more commonly used one-letter alphabets, although the ``Seq`` object effectively assumes the later only. Three-letter alphabets can be used with the array-based ``MutableSeq`` object, but it is very fragile and many things break - thus #1681.
Note we do not (currently) validate the expected letters when making a sequence object with an alphabet with an explicit set of expected letters - #1040.
Discussion on #1674 (hiding alphabets) has meant we now hide the alphabet in the ``Seq`` object representation if it is the default alphabet. Discussion there, and on #1681, and on the mailing list suggests going further and *removing* ``Bio.Alphabet`` entirely.
Note that removal is not as simple as it might sound - it will require some delicate modifications, for example several of the parsers in ``Bio.SeqIO`` use the alphabet to store the sequence type (important metadata in more than one file format).
I personally do like the typing system aspect of the alphabet system, but think we only need a much simpler DNA/RNA/nucleotide/protein/other system (more like an enum). I started looking at that on this branch which aimed to break as little existing code as possible: https://github.com/peterjc/biopython/tree/alpha_lite
</issue>
<code>
[start of Bio/Alphabet/__init__.py]
1 # Copyright 2000-2002 by Andrew Dalke.
2 # Revisions copyright 2007-2010 by Peter Cock.
3 # All rights reserved.
4 #
5 # This file is part of the Biopython distribution and governed by your
6 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
7 # Please see the LICENSE file that should have been included as part of this
8 # package.
9 """Alphabets were previously used to declare sequence type and letters (OBSOLETE).
10
11 The design of Bio.Aphabet included a number of historic design choices
12 which, with the benefit of hindsight, were regretable. Bio.Alphabet was
13 therefore removed from Biopython in release 1.78. Instead, the molecule type is
14 included as an annotation on SeqRecords where appropriate.
15
16 Please see
17 https://github.com/biopython/biopython/issues/3156
18 for examples showing how to transition from Bio.Alphabet to molecule type
19 annotations.
20 """
21
22 raise ImportError(
23 "Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information."
24 )
25
[end of Bio/Alphabet/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Bio/Alphabet/__init__.py b/Bio/Alphabet/__init__.py
--- a/Bio/Alphabet/__init__.py
+++ b/Bio/Alphabet/__init__.py
@@ -13,12 +13,10 @@
therefore removed from Biopython in release 1.78. Instead, the molecule type is
included as an annotation on SeqRecords where appropriate.
-Please see
-https://github.com/biopython/biopython/issues/3156
-for examples showing how to transition from Bio.Alphabet to molecule type
-annotations.
+Please see https://biopython.org/wiki/Alphabet for examples showing how to
+transition from Bio.Alphabet to molecule type annotations.
"""
raise ImportError(
- "Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information."
+ "Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://biopython.org/wiki/Alphabet for more information."
)
| {"golden_diff": "diff --git a/Bio/Alphabet/__init__.py b/Bio/Alphabet/__init__.py\n--- a/Bio/Alphabet/__init__.py\n+++ b/Bio/Alphabet/__init__.py\n@@ -13,12 +13,10 @@\n therefore removed from Biopython in release 1.78. Instead, the molecule type is\n included as an annotation on SeqRecords where appropriate.\n \n-Please see\n-https://github.com/biopython/biopython/issues/3156\n-for examples showing how to transition from Bio.Alphabet to molecule type\n-annotations.\n+Please see https://biopython.org/wiki/Alphabet for examples showing how to\n+transition from Bio.Alphabet to molecule type annotations.\n \"\"\"\n \n raise ImportError(\n- \"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information.\"\n+ \"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://biopython.org/wiki/Alphabet for more information.\"\n )\n", "issue": "Should we remove/replace Bio.Alphabet?\nFiling this meta-issue, since we don't seem to have a single place discussing this on GitHub.\r\n\r\n**My (likely biased) summary from those discussions is no one likes the current alphabet system, and most people ignore it.**\r\n\r\nBiopython has a complicated hard to use legacy alphabet system in ``Bio.Alphabet`` which is used as a typing system (e.g. can't do reverse-complement on a protein), can store some useful information like the expected letters, if gapped and the gap character (although only one), and a stop codon symbol (although only one).\r\n\r\nThe objects in ``Bio.Alphabet`` cover three-letter alphabets as well as the more commonly used one-letter alphabets, although the ``Seq`` object effectively assumes the later only. Three-letter alphabets can be used with the array-based ``MutableSeq`` object, but it is very fragile and many things break - thus #1681.\r\n\r\nNote we do not (currently) validate the expected letters when making a sequence object with an alphabet with an explicit set of expected letters - #1040.\r\n\r\nDiscussion on #1674 (hiding alphabets) has meant we now hide the alphabet in the ``Seq`` object representation if it is the default alphabet. Discussion there, and on #1681, and on the mailing list suggests going further and *removing* ``Bio.Alphabet`` entirely.\r\n\r\nNote that removal is not as simple as it might sound - it will require some delicate modifications, for example several of the parsers in ``Bio.SeqIO`` use the alphabet to store the sequence type (important metadata in more than one file format).\r\n\r\nI personally do like the typing system aspect of the alphabet system, but think we only need a much simpler DNA/RNA/nucleotide/protein/other system (more like an enum). 
I started looking at that on this branch which aimed to break as little existing code as possible: https://github.com/peterjc/biopython/tree/alpha_lite\r\n\n", "before_files": [{"content": "# Copyright 2000-2002 by Andrew Dalke.\n# Revisions copyright 2007-2010 by Peter Cock.\n# All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Alphabets were previously used to declare sequence type and letters (OBSOLETE).\n\nThe design of Bio.Aphabet included a number of historic design choices\nwhich, with the benefit of hindsight, were regretable. Bio.Alphabet was\ntherefore removed from Biopython in release 1.78. Instead, the molecule type is\nincluded as an annotation on SeqRecords where appropriate.\n\nPlease see\nhttps://github.com/biopython/biopython/issues/3156\nfor examples showing how to transition from Bio.Alphabet to molecule type\nannotations.\n\"\"\"\n\nraise ImportError(\n \"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information.\"\n)\n", "path": "Bio/Alphabet/__init__.py"}]} | 1,319 | 330 |
gh_patches_debug_33949 | rasdani/github-patches | git_diff | xonsh__xonsh-522 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature-request: Static configuration file as a command line option
I suggest that an option to the commandline --config_file is added that allows the user to specify a file location for a static configuration file. (http://xonsh.org/xonshconfig.html)
This would allow portable usage of xonsh, e.g. install python and xonsh on a usb and bring the shell on a stick.
</issue>
<code>
[start of xonsh/main.py]
1 # -*- coding: utf-8 -*-
2 """The main xonsh script."""
3 import os
4 import sys
5 import shlex
6 import signal
7 import builtins
8 import subprocess
9 from argparse import ArgumentParser, Namespace
10 from contextlib import contextmanager
11
12 from xonsh import __version__
13 from xonsh.shell import Shell
14 from xonsh.pretty import pprint
15 from xonsh.jobs import ignore_sigtstp
16
17 parser = ArgumentParser(description='xonsh')
18 parser.add_argument('-V', '--version',
19 action='version',
20 version='/'.join(('xonsh', __version__)),
21 help='show version information and exit')
22 parser.add_argument('-c',
23 help="Run a single command and exit",
24 dest='command',
25 required=False,
26 default=None)
27 parser.add_argument('-i',
28 help='force running in interactive mode',
29 dest='force_interactive',
30 action='store_true',
31 default=False)
32 parser.add_argument('-l',
33 help='run as a login shell',
34 dest='login',
35 action='store_true',
36 default=False)
37 parser.add_argument('--no-rc',
38 help="Do not load the .xonshrc file",
39 dest='norc',
40 action='store_true',
41 default=False)
42 parser.add_argument('-D',
43 dest='defines',
44 help='define an environment variable, in the form of '
45 '-DNAME=VAL. May be used many times.',
46 metavar='ITEM',
47 nargs='*',
48 default=None)
49 parser.add_argument('--shell-type',
50 help='What kind of shell should be used. '
51 'Possible options: readline, prompt_toolkit. '
52 'Warning! If set this overrides $SHELL_TYPE variable.',
53 dest='shell_type',
54 choices=('readline', 'prompt_toolkit'),
55 default=None)
56 parser.add_argument('file',
57 metavar='script-file',
58 help='If present, execute the script in script-file'
59 ' and exit',
60 nargs='?',
61 default=None)
62 parser.add_argument('args',
63 metavar='args',
64 help='Additional arguments to the script specified'
65 ' by script-file',
66 nargs='*',
67 default=[])
68
69
70 def _pprint_displayhook(value):
71 if value is not None:
72 builtins._ = value
73 pprint(value)
74
75
76 def premain(argv=None):
77 """Setup for main xonsh entry point, returns parsed arguments."""
78 args = parser.parse_args(argv)
79 shell_kwargs = {'shell_type': args.shell_type}
80 if args.norc:
81 shell_kwargs['ctx'] = {}
82 setattr(sys, 'displayhook', _pprint_displayhook)
83 shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)
84 from xonsh import imphooks
85 env = builtins.__xonsh_env__
86 if args.defines is not None:
87 env.update([x.split('=', 1) for x in args.defines])
88 if args.login:
89 env['XONSH_LOGIN'] = True
90 env['XONSH_INTERACTIVE'] = False
91 return args
92
93
94 def main(argv=None):
95 """Main entry point for xonsh cli."""
96 args = premain(argv)
97 env = builtins.__xonsh_env__
98 shell = builtins.__xonsh_shell__
99 if args.command is not None:
100 # run a single command and exit
101 shell.default(args.command)
102 elif args.file is not None:
103 # run a script contained in a file
104 if os.path.isfile(args.file):
105 with open(args.file) as f:
106 code = f.read()
107 code = code if code.endswith('\n') else code + '\n'
108 env['ARGS'] = [args.file] + args.args
109 code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)
110 shell.execer.exec(code, mode='exec', glbs=shell.ctx)
111 else:
112 print('xonsh: {0}: No such file or directory.'.format(args.file))
113 elif not sys.stdin.isatty() and not args.force_interactive:
114 # run a script given on stdin
115 code = sys.stdin.read()
116 code = code if code.endswith('\n') else code + '\n'
117 code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)
118 shell.execer.exec(code, mode='exec', glbs=shell.ctx)
119 else:
120 # otherwise, enter the shell
121 env['XONSH_INTERACTIVE'] = True
122 ignore_sigtstp()
123 shell.cmdloop()
124 postmain(args)
125
126
127 def postmain(args=None):
128 """Teardown for main xonsh entry point, accepts parsed arguments."""
129 del builtins.__xonsh_shell__
130
131
132 @contextmanager
133 def main_context(argv=None):
134 """Generator that runs pre- and post-main() functions. This has two iterations.
135 The first yields the shell. The second returns None but cleans
136 up the shell.
137 """
138 args = premain(argv)
139 yield builtins.__xonsh_shell__
140 postmain(args)
141
142
143
144 if __name__ == '__main__':
145 main()
146
[end of xonsh/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/main.py b/xonsh/main.py
--- a/xonsh/main.py
+++ b/xonsh/main.py
@@ -2,11 +2,8 @@
"""The main xonsh script."""
import os
import sys
-import shlex
-import signal
import builtins
-import subprocess
-from argparse import ArgumentParser, Namespace
+from argparse import ArgumentParser, ArgumentTypeError
from contextlib import contextmanager
from xonsh import __version__
@@ -14,6 +11,18 @@
from xonsh.pretty import pprint
from xonsh.jobs import ignore_sigtstp
+def path_argument(s):
+ """Return a path only if the path is actually legal
+
+ This is very similar to argparse.FileType, except that it doesn't return
+ an open file handle, but rather simply validates the path."""
+
+ s = os.path.abspath(os.path.expanduser(s))
+ if not os.path.isfile(s):
+ raise ArgumentTypeError('"%s" must be a valid path to a file' % s)
+ return s
+
+
parser = ArgumentParser(description='xonsh')
parser.add_argument('-V', '--version',
action='version',
@@ -34,6 +43,10 @@
dest='login',
action='store_true',
default=False)
+parser.add_argument('--config-path',
+ help='specify a custom static configuration file',
+ dest='config_path',
+ type=path_argument)
parser.add_argument('--no-rc',
help="Do not load the .xonshrc file",
dest='norc',
@@ -79,6 +92,8 @@
shell_kwargs = {'shell_type': args.shell_type}
if args.norc:
shell_kwargs['ctx'] = {}
+ if args.config_path:
+ shell_kwargs['ctx']= {'XONSHCONFIG': args.config_path}
setattr(sys, 'displayhook', _pprint_displayhook)
shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)
from xonsh import imphooks
| {"golden_diff": "diff --git a/xonsh/main.py b/xonsh/main.py\n--- a/xonsh/main.py\n+++ b/xonsh/main.py\n@@ -2,11 +2,8 @@\n \"\"\"The main xonsh script.\"\"\"\n import os\n import sys\n-import shlex\n-import signal\n import builtins\n-import subprocess\n-from argparse import ArgumentParser, Namespace\n+from argparse import ArgumentParser, ArgumentTypeError\n from contextlib import contextmanager\n \n from xonsh import __version__\n@@ -14,6 +11,18 @@\n from xonsh.pretty import pprint\n from xonsh.jobs import ignore_sigtstp\n \n+def path_argument(s):\n+ \"\"\"Return a path only if the path is actually legal\n+\n+ This is very similar to argparse.FileType, except that it doesn't return\n+ an open file handle, but rather simply validates the path.\"\"\"\n+\n+ s = os.path.abspath(os.path.expanduser(s))\n+ if not os.path.isfile(s):\n+ raise ArgumentTypeError('\"%s\" must be a valid path to a file' % s)\n+ return s\n+\n+\n parser = ArgumentParser(description='xonsh')\n parser.add_argument('-V', '--version',\n action='version',\n@@ -34,6 +43,10 @@\n dest='login',\n action='store_true',\n default=False)\n+parser.add_argument('--config-path',\n+ help='specify a custom static configuration file',\n+ dest='config_path',\n+ type=path_argument)\n parser.add_argument('--no-rc',\n help=\"Do not load the .xonshrc file\",\n dest='norc',\n@@ -79,6 +92,8 @@\n shell_kwargs = {'shell_type': args.shell_type}\n if args.norc:\n shell_kwargs['ctx'] = {}\n+ if args.config_path:\n+ shell_kwargs['ctx']= {'XONSHCONFIG': args.config_path}\n setattr(sys, 'displayhook', _pprint_displayhook)\n shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)\n from xonsh import imphooks\n", "issue": "Feature-request: Static configuration file as a command line option\nI suggest that an option to the commandline --config_file is added that allows the user to specify a file location for a static configuration file. (http://xonsh.org/xonshconfig.html)\n\nThis would allow portable usage of xonsh, e.g. install python and xonsh on a usb and bring the shell on a stick. \n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"The main xonsh script.\"\"\"\nimport os\nimport sys\nimport shlex\nimport signal\nimport builtins\nimport subprocess\nfrom argparse import ArgumentParser, Namespace\nfrom contextlib import contextmanager\n\nfrom xonsh import __version__\nfrom xonsh.shell import Shell\nfrom xonsh.pretty import pprint\nfrom xonsh.jobs import ignore_sigtstp\n\nparser = ArgumentParser(description='xonsh')\nparser.add_argument('-V', '--version',\n action='version',\n version='/'.join(('xonsh', __version__)),\n help='show version information and exit')\nparser.add_argument('-c',\n help=\"Run a single command and exit\",\n dest='command',\n required=False,\n default=None)\nparser.add_argument('-i',\n help='force running in interactive mode',\n dest='force_interactive',\n action='store_true',\n default=False)\nparser.add_argument('-l',\n help='run as a login shell',\n dest='login',\n action='store_true',\n default=False)\nparser.add_argument('--no-rc',\n help=\"Do not load the .xonshrc file\",\n dest='norc',\n action='store_true',\n default=False)\nparser.add_argument('-D',\n dest='defines',\n help='define an environment variable, in the form of '\n '-DNAME=VAL. May be used many times.',\n metavar='ITEM',\n nargs='*',\n default=None)\nparser.add_argument('--shell-type',\n help='What kind of shell should be used. '\n 'Possible options: readline, prompt_toolkit. '\n 'Warning! 
If set this overrides $SHELL_TYPE variable.',\n dest='shell_type',\n choices=('readline', 'prompt_toolkit'),\n default=None)\nparser.add_argument('file',\n metavar='script-file',\n help='If present, execute the script in script-file'\n ' and exit',\n nargs='?',\n default=None)\nparser.add_argument('args',\n metavar='args',\n help='Additional arguments to the script specified'\n ' by script-file',\n nargs='*',\n default=[])\n\n\ndef _pprint_displayhook(value):\n if value is not None:\n builtins._ = value\n pprint(value)\n\n\ndef premain(argv=None):\n \"\"\"Setup for main xonsh entry point, returns parsed arguments.\"\"\"\n args = parser.parse_args(argv)\n shell_kwargs = {'shell_type': args.shell_type}\n if args.norc:\n shell_kwargs['ctx'] = {}\n setattr(sys, 'displayhook', _pprint_displayhook)\n shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)\n from xonsh import imphooks\n env = builtins.__xonsh_env__\n if args.defines is not None:\n env.update([x.split('=', 1) for x in args.defines])\n if args.login:\n env['XONSH_LOGIN'] = True\n env['XONSH_INTERACTIVE'] = False\n return args\n\n\ndef main(argv=None):\n \"\"\"Main entry point for xonsh cli.\"\"\"\n args = premain(argv)\n env = builtins.__xonsh_env__\n shell = builtins.__xonsh_shell__\n if args.command is not None:\n # run a single command and exit\n shell.default(args.command)\n elif args.file is not None:\n # run a script contained in a file\n if os.path.isfile(args.file):\n with open(args.file) as f:\n code = f.read()\n code = code if code.endswith('\\n') else code + '\\n'\n env['ARGS'] = [args.file] + args.args\n code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)\n shell.execer.exec(code, mode='exec', glbs=shell.ctx)\n else:\n print('xonsh: {0}: No such file or directory.'.format(args.file))\n elif not sys.stdin.isatty() and not args.force_interactive:\n # run a script given on stdin\n code = sys.stdin.read()\n code = code if code.endswith('\\n') else code + '\\n'\n code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)\n shell.execer.exec(code, mode='exec', glbs=shell.ctx)\n else:\n # otherwise, enter the shell\n env['XONSH_INTERACTIVE'] = True\n ignore_sigtstp()\n shell.cmdloop()\n postmain(args)\n\n\ndef postmain(args=None):\n \"\"\"Teardown for main xonsh entry point, accepts parsed arguments.\"\"\"\n del builtins.__xonsh_shell__\n\n\n@contextmanager\ndef main_context(argv=None):\n \"\"\"Generator that runs pre- and post-main() functions. This has two iterations.\n The first yields the shell. The second returns None but cleans\n up the shell.\n \"\"\"\n args = premain(argv)\n yield builtins.__xonsh_shell__\n postmain(args)\n\n\n\nif __name__ == '__main__':\n main()\n", "path": "xonsh/main.py"}]} | 2,034 | 454 |
gh_patches_debug_23517 | rasdani/github-patches | git_diff | hydroshare__hydroshare-2550 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't add user via admin panel
</issue>
<code>
[start of hs_core/admin.py]
1 from mezzanine.pages.admin import PageAdmin
2 from django.contrib.gis import admin
3 from django.contrib.contenttypes.admin import GenericTabularInline
4 from .models import *
5
6 class InlineResourceFiles(GenericTabularInline):
7 model = ResourceFile
8
9 class GenericResourceAdmin(PageAdmin):
10 inlines = PageAdmin.inlines + [InlineResourceFiles]
11
12 admin.site.register(GenericResource, GenericResourceAdmin)
13
[end of hs_core/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hs_core/admin.py b/hs_core/admin.py
--- a/hs_core/admin.py
+++ b/hs_core/admin.py
@@ -1,12 +1,35 @@
-from mezzanine.pages.admin import PageAdmin
+from django import forms
+from django.contrib.auth.admin import UserAdmin
+from django.contrib.auth.forms import UserCreationForm
+from django.contrib.auth.models import User
from django.contrib.gis import admin
from django.contrib.contenttypes.admin import GenericTabularInline
+from django.utils.translation import ugettext_lazy as _
+
+from mezzanine.pages.admin import PageAdmin
+
from .models import *
+
+class UserCreationFormExtended(UserCreationForm):
+ def __init__(self, *args, **kwargs):
+ super(UserCreationFormExtended, self).__init__(*args, **kwargs)
+ self.fields['email'] = forms.EmailField(label=_("E-mail"), max_length=75)
+
+UserAdmin.add_form = UserCreationFormExtended
+UserAdmin.add_fieldsets = (
+ (None, {
+ 'classes': ('wide',),
+ 'fields': ('email', 'username', 'password1', 'password2',)
+ }),
+)
+
class InlineResourceFiles(GenericTabularInline):
model = ResourceFile
class GenericResourceAdmin(PageAdmin):
inlines = PageAdmin.inlines + [InlineResourceFiles]
+admin.site.unregister(User)
+admin.site.register(User, UserAdmin)
admin.site.register(GenericResource, GenericResourceAdmin)
| {"golden_diff": "diff --git a/hs_core/admin.py b/hs_core/admin.py\n--- a/hs_core/admin.py\n+++ b/hs_core/admin.py\n@@ -1,12 +1,35 @@\n-from mezzanine.pages.admin import PageAdmin\n+from django import forms\n+from django.contrib.auth.admin import UserAdmin\n+from django.contrib.auth.forms import UserCreationForm\n+from django.contrib.auth.models import User\n from django.contrib.gis import admin\n from django.contrib.contenttypes.admin import GenericTabularInline\n+from django.utils.translation import ugettext_lazy as _\n+\n+from mezzanine.pages.admin import PageAdmin\n+\n from .models import *\n \n+\n+class UserCreationFormExtended(UserCreationForm):\n+ def __init__(self, *args, **kwargs):\n+ super(UserCreationFormExtended, self).__init__(*args, **kwargs)\n+ self.fields['email'] = forms.EmailField(label=_(\"E-mail\"), max_length=75)\n+\n+UserAdmin.add_form = UserCreationFormExtended\n+UserAdmin.add_fieldsets = (\n+ (None, {\n+ 'classes': ('wide',),\n+ 'fields': ('email', 'username', 'password1', 'password2',)\n+ }),\n+)\n+\n class InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n \n class GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n \n+admin.site.unregister(User)\n+admin.site.register(User, UserAdmin)\n admin.site.register(GenericResource, GenericResourceAdmin)\n", "issue": "Can't add user via admin panel\n\n", "before_files": [{"content": "from mezzanine.pages.admin import PageAdmin\nfrom django.contrib.gis import admin\nfrom django.contrib.contenttypes.admin import GenericTabularInline\nfrom .models import *\n\nclass InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n\nclass GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n\nadmin.site.register(GenericResource, GenericResourceAdmin)\n", "path": "hs_core/admin.py"}]} | 645 | 329 |
gh_patches_debug_14239 | rasdani/github-patches | git_diff | RUCAIBox__RecBole-692 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[🐛BUG] In case_study.py, full_sort_topk raises an error when the input contains only one user id
Code
```python3
import torch
import pandas as pd
from recbole.model.general_recommender.bpr import BPR
from recbole.config import Config
from recbole.data import create_dataset, data_preparation
from recbole.utils.case_study import full_sort_topk
param_dict = {
'use_gpu': False
}
# load the BPR model
bpr_model_path = "D:\\source\\recbole-0.2.0\\app\\ex\\saved\\BPR-Jan-18-2021_14-03-52.pth"
bpr_config = Config(model='BPR',
dataset='ml-100k',
config_dict=param_dict)
dataset = create_dataset(bpr_config)
train_data, valid_data, test_data = data_preparation(bpr_config, dataset)
bpr_model = BPR(bpr_config, train_data)
checkpoint = torch.load(bpr_model_path)
bpr_model.load_state_dict(checkpoint['state_dict'])
bpr_model.eval()
uid_series = dataset.token2id(dataset.uid_field, ['200']) # user ids in the raw dataset, converted to the internal index ids used during training
full_sort_topk(uid_series, bpr_model, test_data, 10)
```
Error message
Traceback (most recent call last):
File "D:/source/recbole-0.2.0/app/ex/bpr_predict_ml100k.py", line 33, in <module>
full_sort_topk(uid_series, bpr_model, test_data, 10)
File "D:\source\recbole-0.2.0\recbole\utils\case_study.py", line 87, in full_sort_topk
scores = full_sort_scores(uid_series, model, test_data)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "D:\source\recbole-0.2.0\recbole\utils\case_study.py", line 45, in full_sort_scores
history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
</issue>
<code>
[start of recbole/utils/case_study.py]
1 # @Time : 2020/12/25
2 # @Author : Yushuo Chen
3 # @Email : [email protected]
4
5 # UPDATE
6 # @Time : 2020/12/25
7 # @Author : Yushuo Chen
8 # @email : [email protected]
9
10 """
11 recbole.utils.case_study
12 #####################################
13 """
14
15 import numpy as np
16 import torch
17
18 from recbole.data.dataloader.general_dataloader import GeneralFullDataLoader
19 from recbole.data.dataloader.sequential_dataloader import SequentialFullDataLoader
20
21
22 @torch.no_grad()
23 def full_sort_scores(uid_series, model, test_data):
24 """Calculate the scores of all items for each user in uid_series.
25
26 Note:
27 The score of [pad] and history items will be set into -inf.
28
29 Args:
30 uid_series (numpy.ndarray): User id series
31 model (AbstractRecommender): Model to predict
32 test_data (AbstractDataLoader): The test_data of model
33
34 Returns:
35 torch.Tensor: the scores of all items for each user in uid_series.
36 """
37 uid_field = test_data.dataset.uid_field
38 dataset = test_data.dataset
39 model.eval()
40
41 if isinstance(test_data, GeneralFullDataLoader):
42 index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)
43 input_interaction = test_data.user_df[index]
44 history_item = test_data.uid2history_item[input_interaction[uid_field]]
45 history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])
46 history_col = torch.cat(list(history_item))
47 history_index = history_row, history_col
48 elif isinstance(test_data, SequentialFullDataLoader):
49 index = np.isin(test_data.uid_list, uid_series)
50 input_interaction = test_data.augmentation(
51 test_data.item_list_index[index], test_data.target_index[index], test_data.item_list_length[index]
52 )
53 history_index = None
54 else:
55 raise NotImplementedError
56
57 # Get scores of all items
58 try:
59 scores = model.full_sort_predict(input_interaction)
60 except NotImplementedError:
61 input_interaction = input_interaction.repeat(dataset.item_num)
62 input_interaction.update(test_data.get_item_feature().repeat(len(uid_series)))
63 scores = model.predict(input_interaction)
64
65 scores = scores.view(-1, dataset.item_num)
66 scores[:, 0] = -np.inf # set scores of [pad] to -inf
67 if history_index is not None:
68 scores[history_index] = -np.inf # set scores of history items to -inf
69
70 return scores
71
72
73 def full_sort_topk(uid_series, model, test_data, k):
74 """Calculate the top-k items' scores and ids for each user in uid_series.
75
76 Args:
77 uid_series (numpy.ndarray): User id series
78 model (AbstractRecommender): Model to predict
79 test_data (AbstractDataLoader): The test_data of model
80 k (int): The top-k items.
81
82 Returns:
83 tuple:
84 - topk_scores (torch.Tensor): The scores of topk items.
85 - topk_index (torch.Tensor): The index of topk items, which is also the internal ids of items.
86 """
87 scores = full_sort_scores(uid_series, model, test_data)
88 return torch.topk(scores, k)
89
[end of recbole/utils/case_study.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recbole/utils/case_study.py b/recbole/utils/case_study.py
--- a/recbole/utils/case_study.py
+++ b/recbole/utils/case_study.py
@@ -41,7 +41,7 @@
if isinstance(test_data, GeneralFullDataLoader):
index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)
input_interaction = test_data.user_df[index]
- history_item = test_data.uid2history_item[input_interaction[uid_field]]
+ history_item = test_data.uid2history_item[input_interaction[uid_field].numpy()]
history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])
history_col = torch.cat(list(history_item))
history_index = history_row, history_col
| {"golden_diff": "diff --git a/recbole/utils/case_study.py b/recbole/utils/case_study.py\n--- a/recbole/utils/case_study.py\n+++ b/recbole/utils/case_study.py\n@@ -41,7 +41,7 @@\n if isinstance(test_data, GeneralFullDataLoader):\n index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)\n input_interaction = test_data.user_df[index]\n- history_item = test_data.uid2history_item[input_interaction[uid_field]]\n+ history_item = test_data.uid2history_item[input_interaction[uid_field].numpy()]\n history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])\n history_col = torch.cat(list(history_item))\n history_index = history_row, history_col\n", "issue": "[\ud83d\udc1bBUG] case_study.py \u4e2d\uff0c \u8f93\u5165\u7684\u7528\u6237id\u53ea\u6709\u4e00\u4e2a\u65f6\uff0c full_sort_topk \u62a5\u9519\n\u4ee3\u7801\r\n```python3\r\nimport torch\r\nimport pandas as pd\r\n\r\nfrom recbole.model.general_recommender.bpr import BPR\r\nfrom recbole.config import Config\r\nfrom recbole.data import create_dataset, data_preparation\r\nfrom recbole.utils.case_study import full_sort_topk\r\n\r\nparam_dict = {\r\n 'use_gpu': False\r\n}\r\n\r\n# \u52a0\u8f7d BPR \u6a21\u578b\r\nbpr_model_path = \"D:\\\\source\\\\recbole-0.2.0\\\\app\\\\ex\\\\saved\\\\BPR-Jan-18-2021_14-03-52.pth\"\r\nbpr_config = Config(model='BPR',\r\n dataset='ml-100k',\r\n config_dict=param_dict)\r\ndataset = create_dataset(bpr_config)\r\ntrain_data, valid_data, test_data = data_preparation(bpr_config, dataset)\r\n\r\nbpr_model = BPR(bpr_config, train_data)\r\ncheckpoint = torch.load(bpr_model_path)\r\nbpr_model.load_state_dict(checkpoint['state_dict'])\r\nbpr_model.eval()\r\n\r\nuid_series = dataset.token2id(dataset.uid_field, ['200']) # \u539f\u59cb\u6570\u636e\u96c6\u4e2d\u7684\u7528\u6237id\uff0c\u53d8\u6362\u4e3a\u8bad\u7ec3\u5185\u90e8\u4f7f\u7528\u7684\u7d22\u5f15id\r\n\r\nfull_sort_topk(uid_series, bpr_model, test_data, 10)\r\n```\r\n\r\n\u62a5\u9519\u4fe1\u606f\r\nTraceback (most recent call last):\r\n File \"D:/source/recbole-0.2.0/app/ex/bpr_predict_ml100k.py\", line 33, in <module>\r\n full_sort_topk(uid_series, bpr_model, test_data, 10)\r\n File \"D:\\source\\recbole-0.2.0\\recbole\\utils\\case_study.py\", line 87, in full_sort_topk\r\n scores = full_sort_scores(uid_series, model, test_data)\r\n File \"D:\\Anaconda3\\envs\\pytorch\\lib\\site-packages\\torch\\autograd\\grad_mode.py\", line 26, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"D:\\source\\recbole-0.2.0\\recbole\\utils\\case_study.py\", line 45, in full_sort_scores\r\n history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])\r\nRuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated\n", "before_files": [{"content": "# @Time : 2020/12/25\n# @Author : Yushuo Chen\n# @Email : [email protected]\n\n# UPDATE\n# @Time : 2020/12/25\n# @Author : Yushuo Chen\n# @email : [email protected]\n\n\"\"\"\nrecbole.utils.case_study\n#####################################\n\"\"\"\n\nimport numpy as np\nimport torch\n\nfrom recbole.data.dataloader.general_dataloader import GeneralFullDataLoader\nfrom recbole.data.dataloader.sequential_dataloader import SequentialFullDataLoader\n\n\[email protected]_grad()\ndef full_sort_scores(uid_series, model, test_data):\n \"\"\"Calculate the scores of all items for each user in uid_series.\n\n Note:\n The score of [pad] and history items will be set into -inf.\n\n Args:\n uid_series (numpy.ndarray): User id series\n model 
(AbstractRecommender): Model to predict\n test_data (AbstractDataLoader): The test_data of model\n\n Returns:\n torch.Tensor: the scores of all items for each user in uid_series.\n \"\"\"\n uid_field = test_data.dataset.uid_field\n dataset = test_data.dataset\n model.eval()\n\n if isinstance(test_data, GeneralFullDataLoader):\n index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)\n input_interaction = test_data.user_df[index]\n history_item = test_data.uid2history_item[input_interaction[uid_field]]\n history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])\n history_col = torch.cat(list(history_item))\n history_index = history_row, history_col\n elif isinstance(test_data, SequentialFullDataLoader):\n index = np.isin(test_data.uid_list, uid_series)\n input_interaction = test_data.augmentation(\n test_data.item_list_index[index], test_data.target_index[index], test_data.item_list_length[index]\n )\n history_index = None\n else:\n raise NotImplementedError\n\n # Get scores of all items\n try:\n scores = model.full_sort_predict(input_interaction)\n except NotImplementedError:\n input_interaction = input_interaction.repeat(dataset.item_num)\n input_interaction.update(test_data.get_item_feature().repeat(len(uid_series)))\n scores = model.predict(input_interaction)\n\n scores = scores.view(-1, dataset.item_num)\n scores[:, 0] = -np.inf # set scores of [pad] to -inf\n if history_index is not None:\n scores[history_index] = -np.inf # set scores of history items to -inf\n\n return scores\n\n\ndef full_sort_topk(uid_series, model, test_data, k):\n \"\"\"Calculate the top-k items' scores and ids for each user in uid_series.\n\n Args:\n uid_series (numpy.ndarray): User id series\n model (AbstractRecommender): Model to predict\n test_data (AbstractDataLoader): The test_data of model\n k (int): The top-k items.\n\n Returns:\n tuple:\n - topk_scores (torch.Tensor): The scores of topk items.\n - topk_index (torch.Tensor): The index of topk items, which is also the internal ids of items.\n \"\"\"\n scores = full_sort_scores(uid_series, model, test_data)\n return torch.topk(scores, k)\n", "path": "recbole/utils/case_study.py"}]} | 1,989 | 176 |
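The one-token fix above (appending `.numpy()`) matters because the history lookup misbehaves when the uid field is a length-1 tensor. Below is a minimal, self-contained sketch of the intended indexing behaviour; the object-array layout of `uid2history_item` is an assumption made for illustration, not code taken from the record.

```python
import numpy as np
import torch

# Assumed stand-in for test_data.uid2history_item: an object array mapping
# each internal uid to a 1-D tensor of that user's history item ids.
uid2history_item = np.empty(3, dtype=object)
uid2history_item[:] = [torch.tensor([1, 2]), torch.tensor([3]), torch.tensor([4, 5])]

uid_field = torch.tensor([2])  # a single user id, as stored in an Interaction

# Indexing with the raw tensor can collapse the result for a single user,
# producing the zero-dimensional tensors that torch.cat rejects in the
# reported traceback; converting to NumPy first keeps a 1-element array.
history_item = uid2history_item[uid_field.numpy()]

history_row = torch.cat([torch.full_like(h, i) for i, h in enumerate(history_item)])
history_col = torch.cat(list(history_item))
print(history_row, history_col)  # tensor([0, 0]) tensor([4, 5])
```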
gh_patches_debug_57934 | rasdani/github-patches | git_diff | scrapy__scrapy-1905 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
response.body is duplicate
Access the [text page(not mine)](http://files.qidian.com/Author4/3615059/88542882.txt) by browsers or wget and you will find the response content is not duplicate, but scrapy's `response.body` is duplicate. I had tried set the scrapy's headers same as a real brower's, but it is still duplicate.
Just use the follow sample code, and you will find the issue.
```
scrapy shell "http://files.qidian.com/Author4/3615059/88542882.txt"
```
Sorry for my bad english.
</issue>
<code>
[start of scrapy/utils/gz.py]
1 import struct
2
3 try:
4 from cStringIO import StringIO as BytesIO
5 except ImportError:
6 from io import BytesIO
7 from gzip import GzipFile
8
9 import six
10 import re
11
12 # - Python>=3.5 GzipFile's read() has issues returning leftover
13 # uncompressed data when input is corrupted
14 # (regression or bug-fix compared to Python 3.4)
15 # - read1(), which fetches data before raising EOFError on next call
16 # works here but is only available from Python>=3.3
17 # - scrapy does not support Python 3.2
18 # - Python 2.7 GzipFile works fine with standard read() + extrabuf
19 if six.PY2:
20 def read1(gzf, size=-1):
21 return gzf.read(size)
22 else:
23 def read1(gzf, size=-1):
24 return gzf.read1(size)
25
26
27 def gunzip(data):
28 """Gunzip the given data and return as much data as possible.
29
30 This is resilient to CRC checksum errors.
31 """
32 f = GzipFile(fileobj=BytesIO(data))
33 output = b''
34 chunk = b'.'
35 while chunk:
36 try:
37 chunk = read1(f, 8196)
38 output += chunk
39 except (IOError, EOFError, struct.error):
40 # complete only if there is some data, otherwise re-raise
41 # see issue 87 about catching struct.error
42 # some pages are quite small so output is '' and f.extrabuf
43 # contains the whole page content
44 if output or getattr(f, 'extrabuf', None):
45 try:
46 output += f.extrabuf
47 finally:
48 break
49 else:
50 raise
51 return output
52
53 _is_gzipped = re.compile(br'^application/(x-)?gzip\b', re.I).search
54 _is_octetstream = re.compile(br'^(application|binary)/octet-stream\b', re.I).search
55
56 def is_gzipped(response):
57 """Return True if the response is gzipped, or False otherwise"""
58 ctype = response.headers.get('Content-Type', b'')
59 cenc = response.headers.get('Content-Encoding', b'').lower()
60 return (_is_gzipped(ctype) or
61 (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))
62
[end of scrapy/utils/gz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/utils/gz.py b/scrapy/utils/gz.py
--- a/scrapy/utils/gz.py
+++ b/scrapy/utils/gz.py
@@ -43,7 +43,7 @@
# contains the whole page content
if output or getattr(f, 'extrabuf', None):
try:
- output += f.extrabuf
+ output += f.extrabuf[-f.extrasize:]
finally:
break
else:
| {"golden_diff": "diff --git a/scrapy/utils/gz.py b/scrapy/utils/gz.py\n--- a/scrapy/utils/gz.py\n+++ b/scrapy/utils/gz.py\n@@ -43,7 +43,7 @@\n # contains the whole page content\n if output or getattr(f, 'extrabuf', None):\n try:\n- output += f.extrabuf\n+ output += f.extrabuf[-f.extrasize:]\n finally:\n break\n else:\n", "issue": "response.body is duplicate\nAccess the [text page(not mine)](http://files.qidian.com/Author4/3615059/88542882.txt) by browsers or wget and you will find the response content is not duplicate, but scrapy's `response.body` is duplicate. I had tried set the scrapy's headers same as a real brower's, but it is still duplicate.\n\nJust use the follow sample code, and you will find the issue.\n\n```\nscrapy shell \"http://files.qidian.com/Author4/3615059/88542882.txt\"\n```\n\nSorry for my bad english. \n\n", "before_files": [{"content": "import struct\n\ntry:\n from cStringIO import StringIO as BytesIO\nexcept ImportError:\n from io import BytesIO\nfrom gzip import GzipFile\n\nimport six\nimport re\n\n# - Python>=3.5 GzipFile's read() has issues returning leftover\n# uncompressed data when input is corrupted\n# (regression or bug-fix compared to Python 3.4)\n# - read1(), which fetches data before raising EOFError on next call\n# works here but is only available from Python>=3.3\n# - scrapy does not support Python 3.2\n# - Python 2.7 GzipFile works fine with standard read() + extrabuf\nif six.PY2:\n def read1(gzf, size=-1):\n return gzf.read(size)\nelse:\n def read1(gzf, size=-1):\n return gzf.read1(size)\n\n\ndef gunzip(data):\n \"\"\"Gunzip the given data and return as much data as possible.\n\n This is resilient to CRC checksum errors.\n \"\"\"\n f = GzipFile(fileobj=BytesIO(data))\n output = b''\n chunk = b'.'\n while chunk:\n try:\n chunk = read1(f, 8196)\n output += chunk\n except (IOError, EOFError, struct.error):\n # complete only if there is some data, otherwise re-raise\n # see issue 87 about catching struct.error\n # some pages are quite small so output is '' and f.extrabuf\n # contains the whole page content\n if output or getattr(f, 'extrabuf', None):\n try:\n output += f.extrabuf\n finally:\n break\n else:\n raise\n return output\n\n_is_gzipped = re.compile(br'^application/(x-)?gzip\\b', re.I).search\n_is_octetstream = re.compile(br'^(application|binary)/octet-stream\\b', re.I).search\n\ndef is_gzipped(response):\n \"\"\"Return True if the response is gzipped, or False otherwise\"\"\"\n ctype = response.headers.get('Content-Type', b'')\n cenc = response.headers.get('Content-Encoding', b'').lower()\n return (_is_gzipped(ctype) or\n (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))\n", "path": "scrapy/utils/gz.py"}]} | 1,320 | 108 |
gh_patches_debug_36747 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-1037 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move forms from `api/forms.py` to the `accounts/forms.py`
## Idea summary
There are several account/profile forms defined in [`api/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/api/forms.py). Those forms should be moved to [`accounts/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/accounts/forms.py) or deleted if they are duplicate code.
**Update:** the code in `api/forms.py` is actually redundant, so may simply be deleted.
## Task
The steps to complete this task are:
- [x] [fork this repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) and clone it to your local computer
- [x] set up a local development environment as [outlined in our Contributing Guide](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/CONTRIBUTING.md#development)
- [x] delete the file `api/forms.py`
- [x] commit your changes
- [x] push your code to GitHub
- [x] [open a pull request](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) against the `main` branch in this repository
</issue>
<code>
[start of project/api/forms.py]
1 from django import forms
2 from django.core.files.images import get_image_dimensions
3 from django.contrib.auth import get_user_model
4 from accounts.models import Profile
5
6
7 class UpdatePassword(forms.ModelForm):
8 """
9 Form for updating User Password
10 """
11
12 class Meta:
13 model = get_user_model()
14 fields = ["password", "verify"]
15
16 password = forms.CharField(
17 label="Password",
18 widget=forms.PasswordInput(
19 attrs={
20 "class": "form-control",
21 "placeholder": "Password",
22 "required": "True",
23 }
24 ),
25 )
26 verify = forms.CharField(
27 label="Password Verify",
28 widget=forms.PasswordInput(
29 attrs={
30 "class": "form-control",
31 "placeholder": "Password Verify",
32 "required": "True",
33 }
34 ),
35 help_text="Please retype your password.",
36 )
37
38 def clean(self):
39 """
40 Verifies that the passwords match
41 """
42 clean_data = super(UpdatePassword, self).clean()
43 if "password" in clean_data and "verify" in clean_data:
44 if clean_data["password"] != clean_data["verify"]:
45 raise forms.ValidationError("Passwords don't match.")
46 else:
47 raise forms.ValidationError("Both password fields need to be filled out.")
48 return clean_data
49
50
51 class UpdateProfileImage(forms.ModelForm):
52 """
53 Form for updating profile image
54 """
55
56 class Meta:
57 model = Profile
58 fields = ["profile_image"]
59
60 profile_image = forms.ImageField()
61
62 def clean_profile_image(self):
63 """
64 This function is used to make sure that profile images follow Civiwiki standards.
65
66 Requirements:
67 - Height cannot exceed 960px
68 - Width cannot exceed 1280px
69 - Image must be (jpg, jpeg, pjeg, png)
70 - File size cannot exceed 2MB
71 """
72 profile_image = self.cleaned_data["profile_image"]
73
74 try:
75 w, h = get_image_dimensions(profile_image)
76
77 # validate dimensions
78 max_height = 960
79 max_width = 1280
80 if w > max_width or h > max_height:
81 raise forms.ValidationError(
82 u"Please use an image that is {w} x {h} pixels or smaller.".format(
83 w=max_width, h=max_height
84 )
85 )
86
87 # validate content type
88 main, sub = profile_image.content_type.split("/")
89 if not (main == "image" and sub in ["jpg", "jpeg", "pjpeg", "png"]):
90 raise forms.ValidationError(u"Please use a JPEG or PNG image.")
91
92 # validate file size
93 if len(profile_image) > (2000 * 1024):
94 raise forms.ValidationError(
95 "Profile image file size may not exceed 2MB."
96 )
97
98 except AttributeError:
99 pass
100
101 return profile_image
102
[end of project/api/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/project/api/forms.py b/project/api/forms.py
deleted file mode 100644
--- a/project/api/forms.py
+++ /dev/null
@@ -1,101 +0,0 @@
-from django import forms
-from django.core.files.images import get_image_dimensions
-from django.contrib.auth import get_user_model
-from accounts.models import Profile
-
-
-class UpdatePassword(forms.ModelForm):
- """
- Form for updating User Password
- """
-
- class Meta:
- model = get_user_model()
- fields = ["password", "verify"]
-
- password = forms.CharField(
- label="Password",
- widget=forms.PasswordInput(
- attrs={
- "class": "form-control",
- "placeholder": "Password",
- "required": "True",
- }
- ),
- )
- verify = forms.CharField(
- label="Password Verify",
- widget=forms.PasswordInput(
- attrs={
- "class": "form-control",
- "placeholder": "Password Verify",
- "required": "True",
- }
- ),
- help_text="Please retype your password.",
- )
-
- def clean(self):
- """
- Verifies that the passwords match
- """
- clean_data = super(UpdatePassword, self).clean()
- if "password" in clean_data and "verify" in clean_data:
- if clean_data["password"] != clean_data["verify"]:
- raise forms.ValidationError("Passwords don't match.")
- else:
- raise forms.ValidationError("Both password fields need to be filled out.")
- return clean_data
-
-
-class UpdateProfileImage(forms.ModelForm):
- """
- Form for updating profile image
- """
-
- class Meta:
- model = Profile
- fields = ["profile_image"]
-
- profile_image = forms.ImageField()
-
- def clean_profile_image(self):
- """
- This function is used to make sure that profile images follow Civiwiki standards.
-
- Requirements:
- - Height cannot exceed 960px
- - Width cannot exceed 1280px
- - Image must be (jpg, jpeg, pjeg, png)
- - File size cannot exceed 2MB
- """
- profile_image = self.cleaned_data["profile_image"]
-
- try:
- w, h = get_image_dimensions(profile_image)
-
- # validate dimensions
- max_height = 960
- max_width = 1280
- if w > max_width or h > max_height:
- raise forms.ValidationError(
- u"Please use an image that is {w} x {h} pixels or smaller.".format(
- w=max_width, h=max_height
- )
- )
-
- # validate content type
- main, sub = profile_image.content_type.split("/")
- if not (main == "image" and sub in ["jpg", "jpeg", "pjpeg", "png"]):
- raise forms.ValidationError(u"Please use a JPEG or PNG image.")
-
- # validate file size
- if len(profile_image) > (2000 * 1024):
- raise forms.ValidationError(
- "Profile image file size may not exceed 2MB."
- )
-
- except AttributeError:
- pass
-
- return profile_image
| {"golden_diff": "diff --git a/project/api/forms.py b/project/api/forms.py\ndeleted file mode 100644\n--- a/project/api/forms.py\n+++ /dev/null\n@@ -1,101 +0,0 @@\n-from django import forms\n-from django.core.files.images import get_image_dimensions\n-from django.contrib.auth import get_user_model\n-from accounts.models import Profile\n-\n-\n-class UpdatePassword(forms.ModelForm):\n- \"\"\"\n- Form for updating User Password\n- \"\"\"\n-\n- class Meta:\n- model = get_user_model()\n- fields = [\"password\", \"verify\"]\n-\n- password = forms.CharField(\n- label=\"Password\",\n- widget=forms.PasswordInput(\n- attrs={\n- \"class\": \"form-control\",\n- \"placeholder\": \"Password\",\n- \"required\": \"True\",\n- }\n- ),\n- )\n- verify = forms.CharField(\n- label=\"Password Verify\",\n- widget=forms.PasswordInput(\n- attrs={\n- \"class\": \"form-control\",\n- \"placeholder\": \"Password Verify\",\n- \"required\": \"True\",\n- }\n- ),\n- help_text=\"Please retype your password.\",\n- )\n-\n- def clean(self):\n- \"\"\"\n- Verifies that the passwords match\n- \"\"\"\n- clean_data = super(UpdatePassword, self).clean()\n- if \"password\" in clean_data and \"verify\" in clean_data:\n- if clean_data[\"password\"] != clean_data[\"verify\"]:\n- raise forms.ValidationError(\"Passwords don't match.\")\n- else:\n- raise forms.ValidationError(\"Both password fields need to be filled out.\")\n- return clean_data\n-\n-\n-class UpdateProfileImage(forms.ModelForm):\n- \"\"\"\n- Form for updating profile image\n- \"\"\"\n-\n- class Meta:\n- model = Profile\n- fields = [\"profile_image\"]\n-\n- profile_image = forms.ImageField()\n-\n- def clean_profile_image(self):\n- \"\"\"\n- This function is used to make sure that profile images follow Civiwiki standards.\n-\n- Requirements:\n- - Height cannot exceed 960px\n- - Width cannot exceed 1280px\n- - Image must be (jpg, jpeg, pjeg, png)\n- - File size cannot exceed 2MB\n- \"\"\"\n- profile_image = self.cleaned_data[\"profile_image\"]\n-\n- try:\n- w, h = get_image_dimensions(profile_image)\n-\n- # validate dimensions\n- max_height = 960\n- max_width = 1280\n- if w > max_width or h > max_height:\n- raise forms.ValidationError(\n- u\"Please use an image that is {w} x {h} pixels or smaller.\".format(\n- w=max_width, h=max_height\n- )\n- )\n-\n- # validate content type\n- main, sub = profile_image.content_type.split(\"/\")\n- if not (main == \"image\" and sub in [\"jpg\", \"jpeg\", \"pjpeg\", \"png\"]):\n- raise forms.ValidationError(u\"Please use a JPEG or PNG image.\")\n-\n- # validate file size\n- if len(profile_image) > (2000 * 1024):\n- raise forms.ValidationError(\n- \"Profile image file size may not exceed 2MB.\"\n- )\n-\n- except AttributeError:\n- pass\n-\n- return profile_image\n", "issue": "Move forms from `api/forms.py` to the `accounts/forms.py`\n## Idea summary\r\n\r\nThere are several account/profile forms defined in [`api/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/api/forms.py). 
Those forms should be moved to [`accounts/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/accounts/forms.py) or deleted if they are duplicate code.\r\n\r\n**Update:** the code in `api/forms.py` is actually redundant, so may simply be deleted.\r\n\r\n## Task\r\n\r\nThe steps to complete this task are:\r\n\r\n- [x] [fork this repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) and clone it to your local computer\r\n- [x] set up a local development environment as [outlined in our Contributing Guide](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/CONTRIBUTING.md#development)\r\n- [x] delete the file `api/forms.py`\r\n- [x] commit your changes\r\n- [x] push your code to GitHub\r\n- [x] [open a pull request](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) against the `main` branch in this repository\n", "before_files": [{"content": "from django import forms\nfrom django.core.files.images import get_image_dimensions\nfrom django.contrib.auth import get_user_model\nfrom accounts.models import Profile\n\n\nclass UpdatePassword(forms.ModelForm):\n \"\"\"\n Form for updating User Password\n \"\"\"\n\n class Meta:\n model = get_user_model()\n fields = [\"password\", \"verify\"]\n\n password = forms.CharField(\n label=\"Password\",\n widget=forms.PasswordInput(\n attrs={\n \"class\": \"form-control\",\n \"placeholder\": \"Password\",\n \"required\": \"True\",\n }\n ),\n )\n verify = forms.CharField(\n label=\"Password Verify\",\n widget=forms.PasswordInput(\n attrs={\n \"class\": \"form-control\",\n \"placeholder\": \"Password Verify\",\n \"required\": \"True\",\n }\n ),\n help_text=\"Please retype your password.\",\n )\n\n def clean(self):\n \"\"\"\n Verifies that the passwords match\n \"\"\"\n clean_data = super(UpdatePassword, self).clean()\n if \"password\" in clean_data and \"verify\" in clean_data:\n if clean_data[\"password\"] != clean_data[\"verify\"]:\n raise forms.ValidationError(\"Passwords don't match.\")\n else:\n raise forms.ValidationError(\"Both password fields need to be filled out.\")\n return clean_data\n\n\nclass UpdateProfileImage(forms.ModelForm):\n \"\"\"\n Form for updating profile image\n \"\"\"\n\n class Meta:\n model = Profile\n fields = [\"profile_image\"]\n\n profile_image = forms.ImageField()\n\n def clean_profile_image(self):\n \"\"\"\n This function is used to make sure that profile images follow Civiwiki standards.\n\n Requirements:\n - Height cannot exceed 960px\n - Width cannot exceed 1280px\n - Image must be (jpg, jpeg, pjeg, png)\n - File size cannot exceed 2MB\n \"\"\"\n profile_image = self.cleaned_data[\"profile_image\"]\n\n try:\n w, h = get_image_dimensions(profile_image)\n\n # validate dimensions\n max_height = 960\n max_width = 1280\n if w > max_width or h > max_height:\n raise forms.ValidationError(\n u\"Please use an image that is {w} x {h} pixels or smaller.\".format(\n w=max_width, h=max_height\n )\n )\n\n # validate content type\n main, sub = profile_image.content_type.split(\"/\")\n if not (main == \"image\" and sub in [\"jpg\", \"jpeg\", \"pjpeg\", \"png\"]):\n raise forms.ValidationError(u\"Please use a JPEG or PNG image.\")\n\n # validate file size\n if len(profile_image) > (2000 * 1024):\n raise forms.ValidationError(\n \"Profile image file size may not exceed 2MB.\"\n )\n\n except AttributeError:\n pass\n\n return profile_image\n", "path": "project/api/forms.py"}]} | 1,633 | 746 |
gh_patches_debug_1240 | rasdani/github-patches | git_diff | mindsdb__lightwood-603 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
:wrench: Add default logging level environment variable
## Task
Add a `LIGHTWOOD_LOG` environment variable that controls the default logging level for lightwood. It should be possible to set values for it so that `DEBUG`, `INFO`, `WARNING`, `ERROR` and `CRITICAL` are all possible options. The logger lightwood uses is declared and exported [here](https://github.com/mindsdb/lightwood/blob/stable/lightwood/helpers/log.py).
## Steps :male_detective: :female_detective:
- Fork the Lightwood repository, checkout the `staging` branch and from it create a new one.
- Implement the necessary changes.
- Check that only the appropriate logs are getting through. For this, you can run any of the integration tests, like [`test_boston_housing`](https://github.com/mindsdb/lightwood/blob/stable/tests/integration/basic/test_boston_housing.py), and analyze the output.
- Make the PR and address any comments that reviewers might make.
## Additional rewards :1st_place_medal:
Each documentation PR brings :one: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/
</issue>
<code>
[start of lightwood/helpers/log.py]
1 import logging
2 import os
3
4
5 def initialize_log():
6 pid = os.getpid()
7 logging.basicConfig()
8 log = logging.getLogger(f'lightwood-{pid}')
9 log.setLevel(logging.DEBUG)
10 return log
11
12
13 log = initialize_log()
14
[end of lightwood/helpers/log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lightwood/helpers/log.py b/lightwood/helpers/log.py
--- a/lightwood/helpers/log.py
+++ b/lightwood/helpers/log.py
@@ -6,7 +6,8 @@
pid = os.getpid()
logging.basicConfig()
log = logging.getLogger(f'lightwood-{pid}')
- log.setLevel(logging.DEBUG)
+ log_level = os.environ.get('LIGHTWOOD_LOG', 'DEBUG')
+ log.setLevel(log_level)
return log
| {"golden_diff": "diff --git a/lightwood/helpers/log.py b/lightwood/helpers/log.py\n--- a/lightwood/helpers/log.py\n+++ b/lightwood/helpers/log.py\n@@ -6,7 +6,8 @@\n pid = os.getpid()\n logging.basicConfig()\n log = logging.getLogger(f'lightwood-{pid}')\n- log.setLevel(logging.DEBUG)\n+ log_level = os.environ.get('LIGHTWOOD_LOG', 'DEBUG')\n+ log.setLevel(log_level)\n return log\n", "issue": ":wrench: Add default logging level environment variable\n## Task\r\n\r\nAdd a `LIGHTWOOD_LOG` environment variable that controls the default logging level for lightwood. It should be possible to set values for it so that `DEBUG`, `INFO`, `WARNING`, `ERROR` and `CRITICAL` are all possible options. The logger lightwood uses is declared and exported [here](https://github.com/mindsdb/lightwood/blob/stable/lightwood/helpers/log.py).\r\n\r\n## Steps :male_detective: :female_detective: \r\n\r\n- Fork the Lightwood repository, checkout the `staging` branch and from it create a new one.\r\n- Implement the necessary changes.\r\n- Check that only the appropriate logs are getting through. For this, you can run any of the integration tests, like [`test_boston_housing`](https://github.com/mindsdb/lightwood/blob/stable/tests/integration/basic/test_boston_housing.py), and analyze the output.\r\n- Make the PR and address any comments that reviewers might make.\r\n\r\n## Additional rewards :1st_place_medal: \r\n\r\nEach documentation PR brings :one: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/\n", "before_files": [{"content": "import logging\nimport os\n\n\ndef initialize_log():\n pid = os.getpid()\n logging.basicConfig()\n log = logging.getLogger(f'lightwood-{pid}')\n log.setLevel(logging.DEBUG)\n return log\n\n\nlog = initialize_log()\n", "path": "lightwood/helpers/log.py"}]} | 892 | 100 |
gh_patches_debug_2089 | rasdani/github-patches | git_diff | OpenMined__PySyft-4708 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Windows to CI
## Description
Add windows to the CI tests as a separate step for say python 3.8 and torch==1.6.0 initially just to get things working. Then if it works expand to all versions to see any potential issues.
## Definition of Done
This ticket is done when we know what does and doesn't run on Windows in CI from the current "fast" tests and the new "slow" tests. Post a screenshot and link to CI here when it's running.
</issue>
<code>
[start of src/syft/lib/torch/__init__.py]
1 # stdlib
2 from typing import Dict
3 from typing import Union
4
5 # third party
6 from packaging import version
7 import torch
8
9 # syft relative
10 from . import parameter # noqa: 401
11 from . import uppercase_tensor # noqa: 401
12 from ...ast.globals import Globals
13 from .allowlist import allowlist
14
15 TORCH_VERSION = version.parse(torch.__version__)
16
17
18 def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:
19 if isinstance(support_dict, str):
20 return support_dict
21 else:
22 return support_dict["return_type"]
23
24
25 def version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:
26 if isinstance(support_dict, str):
27 return True
28 else:
29 return TORCH_VERSION >= version.parse(support_dict["min_version"])
30
31
32 def create_torch_ast() -> Globals:
33 ast = Globals()
34
35 # most methods work in all versions and have a single return type
36 # for the more complicated ones we pass a dict with keys like return_type and
37 # min_version
38 for method, return_type_name_or_dict in allowlist.items():
39 if version_supported(support_dict=return_type_name_or_dict):
40 return_type = get_return_type(support_dict=return_type_name_or_dict)
41 if return_type == "unknown":
42 # this allows us to import them for testing
43 continue
44 ast.add_path(
45 path=method, framework_reference=torch, return_type_name=return_type
46 )
47 # add all the torch.nn.Parameter hooks
48 if method.startswith("torch.Tensor."):
49 method = method.replace("torch.Tensor.", "torch.nn.Parameter.")
50 return_type = return_type.replace("torch.Tensor", "torch.nn.Parameter")
51 ast.add_path(
52 path=method, framework_reference=torch, return_type_name=return_type
53 )
54 else:
55 print(f"Skipping torch.{method} not supported in {TORCH_VERSION}")
56
57 for klass in ast.classes:
58 klass.create_pointer_class()
59 klass.create_send_method()
60 klass.create_serialization_methods()
61 klass.create_storable_object_attr_convenience_methods()
62 return ast
63
[end of src/syft/lib/torch/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py
--- a/src/syft/lib/torch/__init__.py
+++ b/src/syft/lib/torch/__init__.py
@@ -12,7 +12,7 @@
from ...ast.globals import Globals
from .allowlist import allowlist
-TORCH_VERSION = version.parse(torch.__version__)
+TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:
| {"golden_diff": "diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py\n--- a/src/syft/lib/torch/__init__.py\n+++ b/src/syft/lib/torch/__init__.py\n@@ -12,7 +12,7 @@\n from ...ast.globals import Globals\n from .allowlist import allowlist\n \n-TORCH_VERSION = version.parse(torch.__version__)\n+TORCH_VERSION = version.parse(torch.__version__.split(\"+\")[0])\n \n \n def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:\n", "issue": "Add Windows to CI\n## Description\r\nAdd windows to the CI tests as a separate step for say python 3.8 and torch==1.6.0 initially just to get things working. Then if it works expand to all versions to see any potential issues.\r\n\r\n## Definition of Done\r\nThis ticket is done when we know what does and doesn't run on Windows in CI from the current \"fast\" tests and the new \"slow\" tests. Post a screenshot and link to CI here when it's running.\n", "before_files": [{"content": "# stdlib\nfrom typing import Dict\nfrom typing import Union\n\n# third party\nfrom packaging import version\nimport torch\n\n# syft relative\nfrom . import parameter # noqa: 401\nfrom . import uppercase_tensor # noqa: 401\nfrom ...ast.globals import Globals\nfrom .allowlist import allowlist\n\nTORCH_VERSION = version.parse(torch.__version__)\n\n\ndef get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:\n if isinstance(support_dict, str):\n return support_dict\n else:\n return support_dict[\"return_type\"]\n\n\ndef version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:\n if isinstance(support_dict, str):\n return True\n else:\n return TORCH_VERSION >= version.parse(support_dict[\"min_version\"])\n\n\ndef create_torch_ast() -> Globals:\n ast = Globals()\n\n # most methods work in all versions and have a single return type\n # for the more complicated ones we pass a dict with keys like return_type and\n # min_version\n for method, return_type_name_or_dict in allowlist.items():\n if version_supported(support_dict=return_type_name_or_dict):\n return_type = get_return_type(support_dict=return_type_name_or_dict)\n if return_type == \"unknown\":\n # this allows us to import them for testing\n continue\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n # add all the torch.nn.Parameter hooks\n if method.startswith(\"torch.Tensor.\"):\n method = method.replace(\"torch.Tensor.\", \"torch.nn.Parameter.\")\n return_type = return_type.replace(\"torch.Tensor\", \"torch.nn.Parameter\")\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n else:\n print(f\"Skipping torch.{method} not supported in {TORCH_VERSION}\")\n\n for klass in ast.classes:\n klass.create_pointer_class()\n klass.create_send_method()\n klass.create_serialization_methods()\n klass.create_storable_object_attr_convenience_methods()\n return ast\n", "path": "src/syft/lib/torch/__init__.py"}]} | 1,232 | 132 |
gh_patches_debug_37129 | rasdani/github-patches | git_diff | streamlink__streamlink-2912 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] CDNBG - multiple issues
## Bug Report
1. BITelevision should be removed from the plugin and/or wiki/info pages, as it no longer exists.
2. Inlife.bg shouldn't be listed as supported under CDNbg.
3. Tvbulgare.bg should be listed as supported in inlife.bg's place (latter shares the former's stream)
4. Mu-vi.tv gives an error.
5. CDNBG should cover VTK - the national military channel
6. Kanal3's livestream is not found.
7. CDNBG should cover Cherno More - the regional channel for Varna, Bulgaria.
### Reproduction steps / Explicit stream URLs to test
1. https://bitelevision.com/ is not a thing anymore.
2. Inlife.bg can't be opened and shouldn't be listed - it is a 'media partner' that restreams https://tvbulgare.bg/, which could be put in as a replacement for it.
3. https://tvbulgare.bg/ - No playable streams found.
4. http://mu-vi.tv/LiveStreams/pages/Live.aspx - Error: Unable to open URL.
5. https://www.armymedia.bg/
6. https://kanal3.bg/live
7. https://www.chernomore.bg/
</issue>
<code>
[start of src/streamlink/plugins/cdnbg.py]
1 import logging
2 import re
3
4 from streamlink.compat import urlparse
5 from streamlink.plugin import Plugin
6 from streamlink.plugin.api import useragents
7 from streamlink.plugin.api import validate
8 from streamlink.stream import HLSStream
9 from streamlink.utils import update_scheme
10
11 log = logging.getLogger(__name__)
12
13
14 class CDNBG(Plugin):
15 url_re = re.compile(r"""
16 https?://(?:www\.)?(?:
17 tv\.bnt\.bg/\w+(?:/\w+)?|
18 bitelevision\.com/live|
19 nova\.bg/live|
20 kanal3\.bg/live|
21 bgonair\.bg/tvonline|
22 inlife\.bg|
23 mmtvmusic\.com/live|
24 mu-vi\.tv/LiveStreams/pages/Live\.aspx|
25 videochanel\.bstv\.bg|
26 live\.bstv\.bg|
27 bloombergtv.bg/video
28 )/?
29 """, re.VERBOSE)
30 iframe_re = re.compile(r"iframe .*?src=\"((?:https?(?::|:))?//(?:\w+\.)?cdn.bg/live[^\"]+)\"", re.DOTALL)
31 sdata_re = re.compile(r"sdata\.src.*?=.*?(?P<q>[\"'])(?P<url>http.*?)(?P=q)")
32 hls_file_re = re.compile(r"(src|file): (?P<q>[\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)")
33 hls_src_re = re.compile(r"video src=(?P<url>http[^ ]+m3u8[^ ]*)")
34
35 stream_schema = validate.Schema(
36 validate.any(
37 validate.all(validate.transform(sdata_re.search), validate.get("url")),
38 validate.all(validate.transform(hls_file_re.search), validate.get("url")),
39 validate.all(validate.transform(hls_src_re.search), validate.get("url")),
40 )
41 )
42
43 @classmethod
44 def can_handle_url(cls, url):
45 return cls.url_re.match(url) is not None
46
47 def find_iframe(self, res):
48 p = urlparse(self.url)
49 for url in self.iframe_re.findall(res.text):
50 if "googletagmanager" not in url:
51 url = url.replace(":", ":")
52 if url.startswith("//"):
53 return "{0}:{1}".format(p.scheme, url)
54 else:
55 return url
56
57 def _get_streams(self):
58 self.session.http.headers.update({"User-Agent": useragents.CHROME})
59 res = self.session.http.get(self.url)
60 iframe_url = self.find_iframe(res)
61
62 if iframe_url:
63 log.debug("Found iframe: {0}", iframe_url)
64 res = self.session.http.get(iframe_url, headers={"Referer": self.url})
65 stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))
66 log.warning("SSL Verification disabled.")
67 return HLSStream.parse_variant_playlist(self.session,
68 stream_url,
69 verify=False)
70
71
72 __plugin__ = CDNBG
73
[end of src/streamlink/plugins/cdnbg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/cdnbg.py b/src/streamlink/plugins/cdnbg.py
--- a/src/streamlink/plugins/cdnbg.py
+++ b/src/streamlink/plugins/cdnbg.py
@@ -15,16 +15,15 @@
url_re = re.compile(r"""
https?://(?:www\.)?(?:
tv\.bnt\.bg/\w+(?:/\w+)?|
- bitelevision\.com/live|
nova\.bg/live|
- kanal3\.bg/live|
bgonair\.bg/tvonline|
- inlife\.bg|
mmtvmusic\.com/live|
mu-vi\.tv/LiveStreams/pages/Live\.aspx|
- videochanel\.bstv\.bg|
live\.bstv\.bg|
- bloombergtv.bg/video
+ bloombergtv.bg/video|
+ armymedia.bg|
+ chernomore.bg|
+ i.cdn.bg/live/
)/?
""", re.VERBOSE)
iframe_re = re.compile(r"iframe .*?src=\"((?:https?(?::|:))?//(?:\w+\.)?cdn.bg/live[^\"]+)\"", re.DOTALL)
@@ -44,23 +43,26 @@
def can_handle_url(cls, url):
return cls.url_re.match(url) is not None
- def find_iframe(self, res):
- p = urlparse(self.url)
- for url in self.iframe_re.findall(res.text):
- if "googletagmanager" not in url:
- url = url.replace(":", ":")
- if url.startswith("//"):
- return "{0}:{1}".format(p.scheme, url)
+ def find_iframe(self, url):
+ self.session.http.headers.update({"User-Agent": useragents.CHROME})
+ res = self.session.http.get(self.url)
+ p = urlparse(url)
+ for iframe_url in self.iframe_re.findall(res.text):
+ if "googletagmanager" not in iframe_url:
+ log.debug("Found iframe: {0}", iframe_url)
+ iframe_url = iframe_url.replace(":", ":")
+ if iframe_url.startswith("//"):
+ return "{0}:{1}".format(p.scheme, iframe_url)
else:
- return url
+ return iframe_url
def _get_streams(self):
- self.session.http.headers.update({"User-Agent": useragents.CHROME})
- res = self.session.http.get(self.url)
- iframe_url = self.find_iframe(res)
+ if "i.cdn.bg/live/" in self.url:
+ iframe_url = self.url
+ else:
+ iframe_url = self.find_iframe(self.url)
if iframe_url:
- log.debug("Found iframe: {0}", iframe_url)
res = self.session.http.get(iframe_url, headers={"Referer": self.url})
stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))
log.warning("SSL Verification disabled.")
| {"golden_diff": "diff --git a/src/streamlink/plugins/cdnbg.py b/src/streamlink/plugins/cdnbg.py\n--- a/src/streamlink/plugins/cdnbg.py\n+++ b/src/streamlink/plugins/cdnbg.py\n@@ -15,16 +15,15 @@\n url_re = re.compile(r\"\"\"\n https?://(?:www\\.)?(?:\n tv\\.bnt\\.bg/\\w+(?:/\\w+)?|\n- bitelevision\\.com/live|\n nova\\.bg/live|\n- kanal3\\.bg/live|\n bgonair\\.bg/tvonline|\n- inlife\\.bg|\n mmtvmusic\\.com/live|\n mu-vi\\.tv/LiveStreams/pages/Live\\.aspx|\n- videochanel\\.bstv\\.bg|\n live\\.bstv\\.bg|\n- bloombergtv.bg/video\n+ bloombergtv.bg/video|\n+ armymedia.bg|\n+ chernomore.bg|\n+ i.cdn.bg/live/\n )/?\n \"\"\", re.VERBOSE)\n iframe_re = re.compile(r\"iframe .*?src=\\\"((?:https?(?::|:))?//(?:\\w+\\.)?cdn.bg/live[^\\\"]+)\\\"\", re.DOTALL)\n@@ -44,23 +43,26 @@\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n \n- def find_iframe(self, res):\n- p = urlparse(self.url)\n- for url in self.iframe_re.findall(res.text):\n- if \"googletagmanager\" not in url:\n- url = url.replace(\":\", \":\")\n- if url.startswith(\"//\"):\n- return \"{0}:{1}\".format(p.scheme, url)\n+ def find_iframe(self, url):\n+ self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n+ res = self.session.http.get(self.url)\n+ p = urlparse(url)\n+ for iframe_url in self.iframe_re.findall(res.text):\n+ if \"googletagmanager\" not in iframe_url:\n+ log.debug(\"Found iframe: {0}\", iframe_url)\n+ iframe_url = iframe_url.replace(\":\", \":\")\n+ if iframe_url.startswith(\"//\"):\n+ return \"{0}:{1}\".format(p.scheme, iframe_url)\n else:\n- return url\n+ return iframe_url\n \n def _get_streams(self):\n- self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n- res = self.session.http.get(self.url)\n- iframe_url = self.find_iframe(res)\n+ if \"i.cdn.bg/live/\" in self.url:\n+ iframe_url = self.url\n+ else:\n+ iframe_url = self.find_iframe(self.url)\n \n if iframe_url:\n- log.debug(\"Found iframe: {0}\", iframe_url)\n res = self.session.http.get(iframe_url, headers={\"Referer\": self.url})\n stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))\n log.warning(\"SSL Verification disabled.\")\n", "issue": "[bug] CDNBG - multiple issues\n## Bug Report\r\n1. BITelevision should be removed from the plugin and/or wiki/info pages, as it no longer exists.\r\n2. Inlife.bg shouldn't be listed as supported under CDNbg.\r\n3. Tvbulgare.bg should be listed as supported in inlife.bg's place (latter shares the former's stream)\r\n4. Mu-vi.tv gives an error.\r\n5. CDNBG should cover VTK - the national military channel\r\n6. Kanal3's livestream is not found.\r\n7. CDNBG should cover Cherno More - the regional channel for Varna, Bulgaria.\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n1. https://bitelevision.com/ is not a thing anymore.\r\n2. Inlife.bg can't be opened and shouldn't be listed - it is a 'media partner' that restreams https://tvbulgare.bg/, which could be put in as a replacement for it.\r\n3. https://tvbulgare.bg/ - No playable streams found.\r\n4. http://mu-vi.tv/LiveStreams/pages/Live.aspx - Error: Unable to open URL.\r\n5. https://www.armymedia.bg/\r\n6. https://kanal3.bg/live\r\n7. 
https://www.chernomore.bg/\n", "before_files": [{"content": "import logging\nimport re\n\nfrom streamlink.compat import urlparse\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\nclass CDNBG(Plugin):\n url_re = re.compile(r\"\"\"\n https?://(?:www\\.)?(?:\n tv\\.bnt\\.bg/\\w+(?:/\\w+)?|\n bitelevision\\.com/live|\n nova\\.bg/live|\n kanal3\\.bg/live|\n bgonair\\.bg/tvonline|\n inlife\\.bg|\n mmtvmusic\\.com/live|\n mu-vi\\.tv/LiveStreams/pages/Live\\.aspx|\n videochanel\\.bstv\\.bg|\n live\\.bstv\\.bg|\n bloombergtv.bg/video\n )/?\n \"\"\", re.VERBOSE)\n iframe_re = re.compile(r\"iframe .*?src=\\\"((?:https?(?::|:))?//(?:\\w+\\.)?cdn.bg/live[^\\\"]+)\\\"\", re.DOTALL)\n sdata_re = re.compile(r\"sdata\\.src.*?=.*?(?P<q>[\\\"'])(?P<url>http.*?)(?P=q)\")\n hls_file_re = re.compile(r\"(src|file): (?P<q>[\\\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)\")\n hls_src_re = re.compile(r\"video src=(?P<url>http[^ ]+m3u8[^ ]*)\")\n\n stream_schema = validate.Schema(\n validate.any(\n validate.all(validate.transform(sdata_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_file_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_src_re.search), validate.get(\"url\")),\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def find_iframe(self, res):\n p = urlparse(self.url)\n for url in self.iframe_re.findall(res.text):\n if \"googletagmanager\" not in url:\n url = url.replace(\":\", \":\")\n if url.startswith(\"//\"):\n return \"{0}:{1}\".format(p.scheme, url)\n else:\n return url\n\n def _get_streams(self):\n self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n res = self.session.http.get(self.url)\n iframe_url = self.find_iframe(res)\n\n if iframe_url:\n log.debug(\"Found iframe: {0}\", iframe_url)\n res = self.session.http.get(iframe_url, headers={\"Referer\": self.url})\n stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))\n log.warning(\"SSL Verification disabled.\")\n return HLSStream.parse_variant_playlist(self.session,\n stream_url,\n verify=False)\n\n\n__plugin__ = CDNBG\n", "path": "src/streamlink/plugins/cdnbg.py"}]} | 1,615 | 671 |
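The new `i.cdn.bg/live/` branch in this patch means direct player URLs skip iframe discovery entirely. A hedged, self-contained sketch of that control flow follows; `find_iframe_url()` is a simplified stand-in for the plugin's `find_iframe()` (the real one also normalises HTML-escaped colons and fetches the page itself), and the example URL is a placeholder, not a live stream id.

```python
import re

IFRAME_RE = re.compile(
    r'iframe .*?src="((?:https?:)?//(?:\w+\.)?cdn\.bg/live[^"]+)"', re.DOTALL
)


def find_iframe_url(page_html, scheme="https"):
    # Simplified stand-in for CDNBG.find_iframe(): pick the embedded cdn.bg
    # player, skip tag-manager noise, and fix scheme-relative URLs.
    for url in IFRAME_RE.findall(page_html):
        if "googletagmanager" not in url:
            return "{0}:{1}".format(scheme, url) if url.startswith("//") else url


def resolve_player_url(url, page_html=""):
    # Mirrors the patched _get_streams() branch: direct CDN links pass through.
    if "i.cdn.bg/live/" in url:
        return url
    return find_iframe_url(page_html)


print(resolve_player_url("https://i.cdn.bg/live/placeholder"))  # used as-is
```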
gh_patches_debug_9053 | rasdani/github-patches | git_diff | encode__httpx-234 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in the 0.7.0 packaging
I've tried to upgrade to 0.7.0 and it exploded with a
```
$ poetry update
Updating dependencies
Resolving dependencies... (1.0s)
Package operations: 0 installs, 1 update, 0 removals
- Updating httpx (0.6.8 -> 0.7.0)
[EnvCommandError]
Command ['/Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python', '-m', 'pip', 'install', '--no-deps', '-U', 'httpx==0.7.0'] errored with the following return code 1, and output:
Collecting httpx==0.7.0
Using cached https://files.pythonhosted.org/packages/12/b3/fdd6e528a3385e2149ad42cc4e9b54e326d532e3e79a86e7cfdaea45723e/httpx-0.7.0.tar.gz
ERROR: Command errored out with exit status 1:
command: /Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'"'"'; __file__='"'"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info
cwd: /private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py", line 45, in <module>
long_description=get_long_description(),
File "/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py", line 26, in get_long_description
with open("CHANGELOG.md", encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'CHANGELOG.md'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import re
5 from pathlib import Path
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 version = Path(package, "__version__.py").read_text()
15 return re.search("__version__ = ['\"]([^'\"]+)['\"]", version).group(1)
16
17
18 def get_long_description():
19 """
20 Return the README.
21 """
22 long_description = ""
23 with open("README.md", encoding="utf8") as f:
24 long_description += f.read()
25 long_description += "\n\n"
26 with open("CHANGELOG.md", encoding="utf8") as f:
27 long_description += f.read()
28 return long_description
29
30
31 def get_packages(package):
32 """
33 Return root package and all sub-packages.
34 """
35 return [str(path.parent) for path in Path(package).glob("**/__init__.py")]
36
37
38 setup(
39 name="httpx",
40 python_requires=">=3.6",
41 version=get_version("httpx"),
42 url="https://github.com/encode/httpx",
43 license="BSD",
44 description="The next generation HTTP client.",
45 long_description=get_long_description(),
46 long_description_content_type="text/markdown",
47 author="Tom Christie",
48 author_email="[email protected]",
49 package_data={"httpx": ["py.typed"]},
50 packages=get_packages("httpx"),
51 install_requires=[
52 "certifi",
53 "chardet==3.*",
54 "h11==0.8.*",
55 "h2==3.*",
56 "hstspreload",
57 "idna==2.*",
58 "rfc3986==1.*",
59 ],
60 classifiers=[
61 "Development Status :: 3 - Alpha",
62 "Environment :: Web Environment",
63 "Intended Audience :: Developers",
64 "License :: OSI Approved :: BSD License",
65 "Operating System :: OS Independent",
66 "Topic :: Internet :: WWW/HTTP",
67 "Programming Language :: Python :: 3",
68 "Programming Language :: Python :: 3.6",
69 "Programming Language :: Python :: 3.7",
70 "Programming Language :: Python :: 3.8",
71 ],
72 )
73
[end of setup.py]
[start of httpx/__version__.py]
1 __title__ = "httpx"
2 __description__ = "A next generation HTTP client, for Python 3."
3 __version__ = "0.7.0"
4
[end of httpx/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/__version__.py b/httpx/__version__.py
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.7.0"
+__version__ = "0.7.1"
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -48,6 +48,7 @@
author_email="[email protected]",
package_data={"httpx": ["py.typed"]},
packages=get_packages("httpx"),
+ include_package_data=True,
install_requires=[
"certifi",
"chardet==3.*",
| {"golden_diff": "diff --git a/httpx/__version__.py b/httpx/__version__.py\n--- a/httpx/__version__.py\n+++ b/httpx/__version__.py\n@@ -1,3 +1,3 @@\n __title__ = \"httpx\"\n __description__ = \"A next generation HTTP client, for Python 3.\"\n-__version__ = \"0.7.0\"\n+__version__ = \"0.7.1\"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -48,6 +48,7 @@\n author_email=\"[email protected]\",\n package_data={\"httpx\": [\"py.typed\"]},\n packages=get_packages(\"httpx\"),\n+ include_package_data=True,\n install_requires=[\n \"certifi\",\n \"chardet==3.*\",\n", "issue": "Bug in the 0.7.0 packaging\nI've tried to upgrade to 0.7.0 and it exploded with a\r\n\r\n```\r\n$ poetry update\r\nUpdating dependencies\r\nResolving dependencies... (1.0s)\r\n\r\n\r\nPackage operations: 0 installs, 1 update, 0 removals\r\n\r\n - Updating httpx (0.6.8 -> 0.7.0)\r\n\r\n[EnvCommandError]\r\nCommand ['/Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python', '-m', 'pip', 'install', '--no-deps', '-U', 'httpx==0.7.0'] errored with the following return code 1, and output:\r\nCollecting httpx==0.7.0\r\n Using cached https://files.pythonhosted.org/packages/12/b3/fdd6e528a3385e2149ad42cc4e9b54e326d532e3e79a86e7cfdaea45723e/httpx-0.7.0.tar.gz\r\n ERROR: Command errored out with exit status 1:\r\n command: /Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'\"'\"'; __file__='\"'\"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base pip-egg-info\r\n cwd: /private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/\r\n Complete output (7 lines):\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py\", line 45, in <module>\r\n long_description=get_long_description(),\r\n File \"/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py\", line 26, in get_long_description\r\n with open(\"CHANGELOG.md\", encoding=\"utf8\") as f:\r\n FileNotFoundError: [Errno 2] No such file or directory: 'CHANGELOG.md'\r\n ----------------------------------------\r\nERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n version = Path(package, \"__version__.py\").read_text()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", version).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n long_description = \"\"\n with open(\"README.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n long_description += \"\\n\\n\"\n with open(\"CHANGELOG.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n return long_description\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all 
sub-packages.\n \"\"\"\n return [str(path.parent) for path in Path(package).glob(\"**/__init__.py\")]\n\n\nsetup(\n name=\"httpx\",\n python_requires=\">=3.6\",\n version=get_version(\"httpx\"),\n url=\"https://github.com/encode/httpx\",\n license=\"BSD\",\n description=\"The next generation HTTP client.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n package_data={\"httpx\": [\"py.typed\"]},\n packages=get_packages(\"httpx\"),\n install_requires=[\n \"certifi\",\n \"chardet==3.*\",\n \"h11==0.8.*\",\n \"h2==3.*\",\n \"hstspreload\",\n \"idna==2.*\",\n \"rfc3986==1.*\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}, {"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.7.0\"\n", "path": "httpx/__version__.py"}]} | 1,964 | 184 |
gh_patches_debug_19085 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-811 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use <time> element for dates
This way screen readers (and other ATs) know that it should be read as a date.
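A trimmed-down sketch of what such a template tag could look like; the name `html_date` and its format defaults are taken from the patch shown further below, but the extra-HTML-attribute handling is omitted here for brevity:

```python
from django import template
from django.template import defaultfilters
from django.utils.safestring import mark_safe

register = template.Library()


@register.simple_tag()
def html_date(value, displayfmt=None, datetimefmt='c'):
    """Wrap a formatted date in a <time> element so assistive
    technologies announce it as a date."""
    displaydate = defaultfilters.date(value, displayfmt)
    datetime_value = defaultfilters.date(value, datetimefmt)
    return mark_safe('<time datetime="%s">%s</time>'
                     % (datetime_value, displaydate))
```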
</issue>
<code>
[start of meinberlin/apps/contrib/templatetags/contrib_tags.py]
1 from django import template
2 from django.template.loader import render_to_string
3
4 register = template.Library()
5
6
7 @register.assignment_tag
8 def include_template_string(template, **kwargs):
9 rendered_template = render_to_string(template, kwargs)
10 return str(rendered_template)
11
12
13 @register.assignment_tag
14 def combined_url_parameter(request_query_dict, **kwargs):
15 combined_query_dict = request_query_dict.copy()
16 for key in kwargs:
17 combined_query_dict.setlist(key, [kwargs[key]])
18 encoded_parameter = '?' + combined_query_dict.urlencode()
19 return encoded_parameter
20
21
22 @register.assignment_tag
23 def filter_has_perm(perm, user, objects):
24 """Filter a list of objects based on user permissions."""
25 if not hasattr(user, 'has_perm'):
26 # If the swapped user model does not support permissions, all objects
27 # will be returned. This is taken from rules.templatetags.has_perm.
28 return objects
29 else:
30 return [obj for obj in objects if user.has_perm(perm, obj)]
31
32
33 @register.filter
34 def percentage(value, max_value):
35 return round(value / max_value * 100)
36
37
38 @register.assignment_tag
39 def project_tile_image(project):
40 return project.tile_image or project.image or None
41
42
43 @register.assignment_tag
44 def project_tile_image_copyright(project):
45 if project.tile_image:
46 return project.tile_image_copyright
47 elif project.image:
48 return project.image_copyright
49 else:
50 return None
51
[end of meinberlin/apps/contrib/templatetags/contrib_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/contrib/templatetags/contrib_tags.py b/meinberlin/apps/contrib/templatetags/contrib_tags.py
--- a/meinberlin/apps/contrib/templatetags/contrib_tags.py
+++ b/meinberlin/apps/contrib/templatetags/contrib_tags.py
@@ -1,5 +1,8 @@
from django import template
+from django.forms.utils import flatatt
+from django.template import defaultfilters
from django.template.loader import render_to_string
+from django.utils.safestring import mark_safe
register = template.Library()
@@ -48,3 +51,18 @@
return project.image_copyright
else:
return None
+
+
[email protected]_tag()
+def html_date(value, displayfmt=None, datetimefmt='c', **kwargs):
+ """Format a date and wrap it in a html <time> element.
+
+ Additional html attributes may be provided as kwargs (e.g. 'class').
+ """
+ displaydate = defaultfilters.date(value, displayfmt)
+ datetime = defaultfilters.date(value, datetimefmt)
+ attribs = flatatt(kwargs)
+ result = '<time %s datetime="%s">%s</time>' % (attribs,
+ datetime,
+ displaydate)
+ return mark_safe(result)
| {"golden_diff": "diff --git a/meinberlin/apps/contrib/templatetags/contrib_tags.py b/meinberlin/apps/contrib/templatetags/contrib_tags.py\n--- a/meinberlin/apps/contrib/templatetags/contrib_tags.py\n+++ b/meinberlin/apps/contrib/templatetags/contrib_tags.py\n@@ -1,5 +1,8 @@\n from django import template\n+from django.forms.utils import flatatt\n+from django.template import defaultfilters\n from django.template.loader import render_to_string\n+from django.utils.safestring import mark_safe\n \n register = template.Library()\n \n@@ -48,3 +51,18 @@\n return project.image_copyright\n else:\n return None\n+\n+\[email protected]_tag()\n+def html_date(value, displayfmt=None, datetimefmt='c', **kwargs):\n+ \"\"\"Format a date and wrap it in a html <time> element.\n+\n+ Additional html attributes may be provided as kwargs (e.g. 'class').\n+ \"\"\"\n+ displaydate = defaultfilters.date(value, displayfmt)\n+ datetime = defaultfilters.date(value, datetimefmt)\n+ attribs = flatatt(kwargs)\n+ result = '<time %s datetime=\"%s\">%s</time>' % (attribs,\n+ datetime,\n+ displaydate)\n+ return mark_safe(result)\n", "issue": "Use <time> element for dates\nThis way screen readers (and other ATs) know that it should be read as a date.\n", "before_files": [{"content": "from django import template\nfrom django.template.loader import render_to_string\n\nregister = template.Library()\n\n\[email protected]_tag\ndef include_template_string(template, **kwargs):\n rendered_template = render_to_string(template, kwargs)\n return str(rendered_template)\n\n\[email protected]_tag\ndef combined_url_parameter(request_query_dict, **kwargs):\n combined_query_dict = request_query_dict.copy()\n for key in kwargs:\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n\n\[email protected]_tag\ndef filter_has_perm(perm, user, objects):\n \"\"\"Filter a list of objects based on user permissions.\"\"\"\n if not hasattr(user, 'has_perm'):\n # If the swapped user model does not support permissions, all objects\n # will be returned. This is taken from rules.templatetags.has_perm.\n return objects\n else:\n return [obj for obj in objects if user.has_perm(perm, obj)]\n\n\[email protected]\ndef percentage(value, max_value):\n return round(value / max_value * 100)\n\n\[email protected]_tag\ndef project_tile_image(project):\n return project.tile_image or project.image or None\n\n\[email protected]_tag\ndef project_tile_image_copyright(project):\n if project.tile_image:\n return project.tile_image_copyright\n elif project.image:\n return project.image_copyright\n else:\n return None\n", "path": "meinberlin/apps/contrib/templatetags/contrib_tags.py"}]} | 990 | 297 |
gh_patches_debug_356 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-4 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing import for types breaking LocalDataCluster
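The symptom is a `NameError` when `read_attributes_raw` runs, because `types` is referenced without ever being imported; the one-line remedy, matching the patch below, is:

```python
import zigpy.types as types

# After the import, expressions like the one in read_attributes_raw work:
attribute_ids = [types.uint16_t(a) for a in (0x0000, 0x0004)]
```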
</issue>
<code>
[start of zhaquirks/__init__.py]
1 import importlib
2 import pkgutil
3 from zigpy.quirks import CustomCluster
4 from zigpy.util import ListenableMixin
5
6 UNKNOWN = 'Unknown'
7
8
9 class Bus(ListenableMixin):
10
11 def __init__(self, *args, **kwargs):
12 super().__init__(*args, **kwargs)
13 self._listeners = {}
14
15
16 class LocalDataCluster(CustomCluster):
17
18 def __init__(self, *args, **kwargs):
19 super().__init__(*args, **kwargs)
20
21 async def read_attributes_raw(self, attributes, manufacturer=None):
22 attributes = [types.uint16_t(a) for a in attributes]
23 v = [self._attr_cache.get(attr) for attr in attributes]
24 return v
25
26 def _update_attribute(self, attrid, value):
27 super()._update_attribute(attrid, value)
28
29
30 class EventableCluster(CustomCluster):
31
32 def __init__(self, *args, **kwargs):
33 super().__init__(*args, **kwargs)
34
35 def handle_cluster_request(self, tsn, command_id, args):
36 super().handle_cluster_request(tsn, command_id, args)
37 if self.server_commands is not None and\
38 self.server_commands.get(command_id) is not None:
39 self.listener_event(
40 'zha_send_event',
41 self,
42 self.server_commands.get(command_id)[0],
43 args
44 )
45
46 def _update_attribute(self, attrid, value):
47 super()._update_attribute(attrid, value)
48 self.listener_event(
49 'zha_send_event',
50 self,
51 'attribute_updated',
52 {
53 'attribute_id': attrid,
54 'attribute_name': self.attributes.get(attrid, [UNKNOWN])[0],
55 'value': value
56 }
57 )
58
59 name = __name__
60 path = __path__
61 for importer, modname, ispkg in pkgutil.walk_packages(
62 path=path,
63 prefix=name +'.'
64 ):
65 importlib.import_module(modname)
66
[end of zhaquirks/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zhaquirks/__init__.py b/zhaquirks/__init__.py
--- a/zhaquirks/__init__.py
+++ b/zhaquirks/__init__.py
@@ -2,6 +2,7 @@
import pkgutil
from zigpy.quirks import CustomCluster
from zigpy.util import ListenableMixin
+import zigpy.types as types
UNKNOWN = 'Unknown'
| {"golden_diff": "diff --git a/zhaquirks/__init__.py b/zhaquirks/__init__.py\n--- a/zhaquirks/__init__.py\n+++ b/zhaquirks/__init__.py\n@@ -2,6 +2,7 @@\n import pkgutil\n from zigpy.quirks import CustomCluster\n from zigpy.util import ListenableMixin\n+import zigpy.types as types\n \n UNKNOWN = 'Unknown'\n", "issue": "Missing import for types breaking LocalDataCluster\n\n", "before_files": [{"content": "import importlib\nimport pkgutil\nfrom zigpy.quirks import CustomCluster\nfrom zigpy.util import ListenableMixin\n\nUNKNOWN = 'Unknown'\n\n\nclass Bus(ListenableMixin):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._listeners = {}\n\n\nclass LocalDataCluster(CustomCluster):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n async def read_attributes_raw(self, attributes, manufacturer=None):\n attributes = [types.uint16_t(a) for a in attributes]\n v = [self._attr_cache.get(attr) for attr in attributes]\n return v\n\n def _update_attribute(self, attrid, value):\n super()._update_attribute(attrid, value)\n\n\nclass EventableCluster(CustomCluster):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def handle_cluster_request(self, tsn, command_id, args):\n super().handle_cluster_request(tsn, command_id, args)\n if self.server_commands is not None and\\\n self.server_commands.get(command_id) is not None:\n self.listener_event(\n 'zha_send_event',\n self,\n self.server_commands.get(command_id)[0],\n args\n )\n\n def _update_attribute(self, attrid, value):\n super()._update_attribute(attrid, value)\n self.listener_event(\n 'zha_send_event',\n self,\n 'attribute_updated',\n {\n 'attribute_id': attrid,\n 'attribute_name': self.attributes.get(attrid, [UNKNOWN])[0],\n 'value': value\n }\n )\n\nname = __name__\npath = __path__\nfor importer, modname, ispkg in pkgutil.walk_packages(\n path=path,\n prefix=name +'.'\n ):\n importlib.import_module(modname)\n", "path": "zhaquirks/__init__.py"}]} | 1,092 | 91 |
gh_patches_debug_597 | rasdani/github-patches | git_diff | pex-tool__pex-1610 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.66
On the docket:
+ [x] Support specifying foreign platforms in full detail. #1597
+ [x] Respect PEX_ROOT in PEXEnvironment.mount. #1599
+ [x] Be able to see what .pex file is run from the list of system processes #1604
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.65"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.65"
+__version__ = "2.1.66"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.65\"\n+__version__ = \"2.1.66\"\n", "issue": "Release 2.1.66\nOn the docket:\r\n+ [x] Support specifying foreign platforms in full detail. #1597\r\n+ [x] Respect PEX_ROOT in PEXEnvironment.mount. #1599 \r\n+ [x] Be able to see what .pex file is run from the list of system processes #1604 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.65\"\n", "path": "pex/version.py"}]} | 664 | 97 |
gh_patches_debug_12361 | rasdani/github-patches | git_diff | carpentries__amy-2333 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Prepare AMY staging instance for actual use
New test AMY server is running, but it lacks some features from the other server.
- [x] Run fixtures (should be accompanied by #2239)
- [x] Scaffold non-admin users for AMY database
- [ ] Add default admin user (a sketch of an idempotent approach follows below)
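For the last item, the management command can be made safe to re-run by checking for an existing account first; roughly, mirroring the patch below, the body of the command becomes:

```python
from workshops.models import Person

username = "admin"
if Person.objects.filter(username=username).exists():
    print("Admin user exists, quitting.")
else:
    Person.objects.create_superuser(
        username=username,
        personal="admin",
        family="admin",
        email="[email protected]",
        password="admin",
    )
```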
</issue>
<code>
[start of amy/workshops/management/commands/create_superuser.py]
1 from django.core.management.base import BaseCommand, CommandError
2
3 from communityroles.models import CommunityRole, CommunityRoleConfig
4 from workshops.models import Person
5
6
7 class Command(BaseCommand):
8 args = "no arguments"
9 help = 'Create a superuser called "admin" with password "admin".'
10
11 def handle(self, *args, **options):
12 try:
13 admin = Person.objects.create_superuser(
14 username="admin",
15 personal="admin",
16 family="admin",
17 email="[email protected]",
18 password="admin",
19 )
20 print("Created admin user")
21
22 role_config = CommunityRoleConfig.objects.get(name="instructor")
23 CommunityRole.objects.create(
24 config=role_config,
25 person=admin,
26 )
27 print("Assigned Instructor community role to admin user")
28
29 except Exception as e:
30 raise CommandError("Failed to create admin: {0}".format(str(e)))
31
[end of amy/workshops/management/commands/create_superuser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/amy/workshops/management/commands/create_superuser.py b/amy/workshops/management/commands/create_superuser.py
--- a/amy/workshops/management/commands/create_superuser.py
+++ b/amy/workshops/management/commands/create_superuser.py
@@ -9,9 +9,15 @@
help = 'Create a superuser called "admin" with password "admin".'
def handle(self, *args, **options):
+ username = "admin"
+
+ if Person.objects.filter(username=username).exists():
+ print("Admin user exists, quitting.")
+ return
+
try:
admin = Person.objects.create_superuser(
- username="admin",
+ username=username,
personal="admin",
family="admin",
email="[email protected]",
| {"golden_diff": "diff --git a/amy/workshops/management/commands/create_superuser.py b/amy/workshops/management/commands/create_superuser.py\n--- a/amy/workshops/management/commands/create_superuser.py\n+++ b/amy/workshops/management/commands/create_superuser.py\n@@ -9,9 +9,15 @@\n help = 'Create a superuser called \"admin\" with password \"admin\".'\n \n def handle(self, *args, **options):\n+ username = \"admin\"\n+\n+ if Person.objects.filter(username=username).exists():\n+ print(\"Admin user exists, quitting.\")\n+ return\n+\n try:\n admin = Person.objects.create_superuser(\n- username=\"admin\",\n+ username=username,\n personal=\"admin\",\n family=\"admin\",\n email=\"[email protected]\",\n", "issue": "Prepare AMY staging instance for actual use\nNew test AMY server is running, but it lacks some features from the other server.\r\n\r\n- [x] Run fixtures (should be accompanied by #2239)\r\n- [x] Scaffold non-admin users for AMY database\r\n- [ ] Add default admin user\n", "before_files": [{"content": "from django.core.management.base import BaseCommand, CommandError\n\nfrom communityroles.models import CommunityRole, CommunityRoleConfig\nfrom workshops.models import Person\n\n\nclass Command(BaseCommand):\n args = \"no arguments\"\n help = 'Create a superuser called \"admin\" with password \"admin\".'\n\n def handle(self, *args, **options):\n try:\n admin = Person.objects.create_superuser(\n username=\"admin\",\n personal=\"admin\",\n family=\"admin\",\n email=\"[email protected]\",\n password=\"admin\",\n )\n print(\"Created admin user\")\n\n role_config = CommunityRoleConfig.objects.get(name=\"instructor\")\n CommunityRole.objects.create(\n config=role_config,\n person=admin,\n )\n print(\"Assigned Instructor community role to admin user\")\n\n except Exception as e:\n raise CommandError(\"Failed to create admin: {0}\".format(str(e)))\n", "path": "amy/workshops/management/commands/create_superuser.py"}]} | 854 | 171 |
gh_patches_debug_4328 | rasdani/github-patches | git_diff | pytorch__TensorRT-2311 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upstream Dynamo Backend to Torch
- Add a hook in the Torch repo to secure the namespace `"tensorrt"` and have it point to `"torch_tensorrt"` (see the registration sketch after this list)
- Add necessary imports and skipped tests
- Raise a PR in Torch to add this functionality
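The first bullet amounts to registering the existing backend under a second name. In isolation the registration looks roughly like this; the `"tensorrt"` alias line is exactly what the patch below adds, while the function body is elided here (the real backend is shown in the module above):

```python
import torch._dynamo as td


@td.register_backend(name="tensorrt")        # new alias for the namespace
@td.register_backend(name="torch_tensorrt")  # existing registration
def torch_tensorrt_backend(gm, sample_inputs, **kwargs):
    ...
```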
</issue>
<code>
[start of py/torch_tensorrt/dynamo/backend/backends.py]
1 from __future__ import annotations
2
3 import logging
4 from functools import partial
5 from typing import Any, Callable, Sequence
6
7 import torch
8 import torch._dynamo as td
9 from torch._functorch.aot_autograd import aot_module_simplified, make_boxed_compiler
10 from torch_tensorrt.dynamo import CompilationSettings
11 from torch_tensorrt.dynamo.compile import compile_module
12 from torch_tensorrt.dynamo.lowering._decompositions import get_decompositions
13 from torch_tensorrt.dynamo.lowering._pre_aot_lowering import pre_aot_substitutions
14 from torch_tensorrt.dynamo.utils import parse_dynamo_kwargs
15
16 logger = logging.getLogger(__name__)
17
18
19 @td.register_backend(name="torch_tensorrt") # type: ignore[misc]
20 def torch_tensorrt_backend(
21 gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any
22 ) -> torch.nn.Module:
23 # Set log level at the top of compilation (torch_tensorrt.dynamo)
24 if (
25 (
26 "options" in kwargs
27 and "debug" in kwargs["options"]
28 and kwargs["options"]["debug"]
29 )
30 or ("debug" in kwargs and kwargs["debug"])
31 ) and logger.parent:
32 logger.parent.setLevel(logging.DEBUG)
33
34 DEFAULT_BACKEND = aot_torch_tensorrt_aten_backend
35
36 compiled_mod: torch.nn.Module = DEFAULT_BACKEND(gm, sample_inputs, **kwargs)
37 return compiled_mod
38
39
40 @td.register_backend(name="aot_torch_tensorrt_aten") # type: ignore[misc]
41 def aot_torch_tensorrt_aten_backend(
42 gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any
43 ) -> torch.nn.Module:
44 settings = parse_dynamo_kwargs(kwargs)
45
46 custom_backend = partial(
47 _pretraced_backend,
48 settings=settings,
49 )
50
51 # Perform Pre-AOT Lowering for Module-Level Replacement
52 gm = pre_aot_substitutions(gm)
53
54 # Invoke AOTAutograd to translate operators to aten
55 return aot_module_simplified(
56 gm,
57 sample_inputs,
58 fw_compiler=make_boxed_compiler(custom_backend),
59 decompositions=get_decompositions(settings.enable_experimental_decompositions),
60 )
61
62
63 def _pretraced_backend(
64 gm: torch.fx.GraphModule,
65 sample_inputs: Sequence[torch.Tensor],
66 settings: CompilationSettings = CompilationSettings(),
67 ) -> torch.fx.GraphModule | Callable[..., Any]:
68 """Helper function to manage translation of traced FX module to TRT engines
69
70 Args:
71 module: FX GraphModule to convert
72 inputs: Inputs to the module
73 settings: Compilation settings
74 Returns:
75 Compiled FX GraphModule
76 """
77 try:
78 logger.debug("Post-AOT Autograd graph:\n" + str(gm.graph))
79
80 trt_compiled = compile_module(
81 gm,
82 sample_inputs,
83 settings=settings,
84 )
85 return trt_compiled
86 except AssertionError:
87 if not settings.pass_through_build_failures:
88 logger.warning(
89 "TRT conversion failed on the subgraph. See trace above. "
90 + "Returning GraphModule forward instead.",
91 exc_info=True,
92 )
93 return gm.forward
94 else:
95 logger.critical(
96 "Halting compilation on build failure since "
97 + "pass_through_build_failures was specified as True. "
98 + "To return the default Torch implementation and avoid "
99 + "halting compilation on engine build failures, "
100 + "specify pass_through_build_failures=False."
101 )
102 raise
103
[end of py/torch_tensorrt/dynamo/backend/backends.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/py/torch_tensorrt/dynamo/backend/backends.py b/py/torch_tensorrt/dynamo/backend/backends.py
--- a/py/torch_tensorrt/dynamo/backend/backends.py
+++ b/py/torch_tensorrt/dynamo/backend/backends.py
@@ -16,6 +16,7 @@
logger = logging.getLogger(__name__)
[email protected]_backend(name="tensorrt") # type: ignore[misc]
@td.register_backend(name="torch_tensorrt") # type: ignore[misc]
def torch_tensorrt_backend(
gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any
| {"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/backend/backends.py b/py/torch_tensorrt/dynamo/backend/backends.py\n--- a/py/torch_tensorrt/dynamo/backend/backends.py\n+++ b/py/torch_tensorrt/dynamo/backend/backends.py\n@@ -16,6 +16,7 @@\n logger = logging.getLogger(__name__)\n \n \[email protected]_backend(name=\"tensorrt\") # type: ignore[misc]\n @td.register_backend(name=\"torch_tensorrt\") # type: ignore[misc]\n def torch_tensorrt_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n", "issue": "Upstream Dynamo Backend to Torch\n- Add a hook in the Torch repo to secure the namespace `\"tensorrt\"` and have it point to `\"torch_tensorrt\"`\r\n- Add necessary imports and skipped tests\r\n- Raise a PR in Torch to add this functionality\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nfrom functools import partial\nfrom typing import Any, Callable, Sequence\n\nimport torch\nimport torch._dynamo as td\nfrom torch._functorch.aot_autograd import aot_module_simplified, make_boxed_compiler\nfrom torch_tensorrt.dynamo import CompilationSettings\nfrom torch_tensorrt.dynamo.compile import compile_module\nfrom torch_tensorrt.dynamo.lowering._decompositions import get_decompositions\nfrom torch_tensorrt.dynamo.lowering._pre_aot_lowering import pre_aot_substitutions\nfrom torch_tensorrt.dynamo.utils import parse_dynamo_kwargs\n\nlogger = logging.getLogger(__name__)\n\n\[email protected]_backend(name=\"torch_tensorrt\") # type: ignore[misc]\ndef torch_tensorrt_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n) -> torch.nn.Module:\n # Set log level at the top of compilation (torch_tensorrt.dynamo)\n if (\n (\n \"options\" in kwargs\n and \"debug\" in kwargs[\"options\"]\n and kwargs[\"options\"][\"debug\"]\n )\n or (\"debug\" in kwargs and kwargs[\"debug\"])\n ) and logger.parent:\n logger.parent.setLevel(logging.DEBUG)\n\n DEFAULT_BACKEND = aot_torch_tensorrt_aten_backend\n\n compiled_mod: torch.nn.Module = DEFAULT_BACKEND(gm, sample_inputs, **kwargs)\n return compiled_mod\n\n\[email protected]_backend(name=\"aot_torch_tensorrt_aten\") # type: ignore[misc]\ndef aot_torch_tensorrt_aten_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n) -> torch.nn.Module:\n settings = parse_dynamo_kwargs(kwargs)\n\n custom_backend = partial(\n _pretraced_backend,\n settings=settings,\n )\n\n # Perform Pre-AOT Lowering for Module-Level Replacement\n gm = pre_aot_substitutions(gm)\n\n # Invoke AOTAutograd to translate operators to aten\n return aot_module_simplified(\n gm,\n sample_inputs,\n fw_compiler=make_boxed_compiler(custom_backend),\n decompositions=get_decompositions(settings.enable_experimental_decompositions),\n )\n\n\ndef _pretraced_backend(\n gm: torch.fx.GraphModule,\n sample_inputs: Sequence[torch.Tensor],\n settings: CompilationSettings = CompilationSettings(),\n) -> torch.fx.GraphModule | Callable[..., Any]:\n \"\"\"Helper function to manage translation of traced FX module to TRT engines\n\n Args:\n module: FX GraphModule to convert\n inputs: Inputs to the module\n settings: Compilation settings\n Returns:\n Compiled FX GraphModule\n \"\"\"\n try:\n logger.debug(\"Post-AOT Autograd graph:\\n\" + str(gm.graph))\n\n trt_compiled = compile_module(\n gm,\n sample_inputs,\n settings=settings,\n )\n return trt_compiled\n except AssertionError:\n if not settings.pass_through_build_failures:\n logger.warning(\n \"TRT conversion failed on the 
subgraph. See trace above. \"\n + \"Returning GraphModule forward instead.\",\n exc_info=True,\n )\n return gm.forward\n else:\n logger.critical(\n \"Halting compilation on build failure since \"\n + \"pass_through_build_failures was specified as True. \"\n + \"To return the default Torch implementation and avoid \"\n + \"halting compilation on engine build failures, \"\n + \"specify pass_through_build_failures=False.\"\n )\n raise\n", "path": "py/torch_tensorrt/dynamo/backend/backends.py"}]} | 1,564 | 140 |
gh_patches_debug_16886 | rasdani/github-patches | git_diff | urllib3__urllib3-2335 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add decode_content parameter to top-level APIs
Like the title says, add `decode_content` to the top-level API `urllib3.request()`.
See https://github.com/urllib3/urllib3/commit/ddf7361ac0467431a2f3df6ba346c9c506c29d56 for an example.
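Once the parameter is threaded through, as in the patch below, a caller can opt out of automatic decompression at the top level; hypothetical usage:

```python
import urllib3

# Keep the body exactly as the server sent it (e.g. still gzip-compressed)
# instead of letting urllib3 decode it transparently.
resp = urllib3.request("GET", "https://example.com/", decode_content=False)
print(resp.headers.get("Content-Encoding"))
```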
</issue>
<code>
[start of src/urllib3/__init__.py]
1 """
2 Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more
3 """
4
5 # Set default logging handler to avoid "No handler found" warnings.
6 import logging
7 import warnings
8 from logging import NullHandler
9 from typing import Mapping, Optional, Type, Union
10
11 from . import exceptions
12 from ._collections import HTTPHeaderDict
13 from ._version import __version__
14 from .connection import _TYPE_BODY
15 from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url
16 from .filepost import _TYPE_FIELDS, encode_multipart_formdata
17 from .poolmanager import PoolManager, ProxyManager, proxy_from_url
18 from .response import BaseHTTPResponse, HTTPResponse
19 from .util.request import make_headers
20 from .util.retry import Retry
21 from .util.timeout import Timeout
22
23 __author__ = "Andrey Petrov ([email protected])"
24 __license__ = "MIT"
25 __version__ = __version__
26
27 __all__ = (
28 "HTTPConnectionPool",
29 "HTTPHeaderDict",
30 "HTTPSConnectionPool",
31 "PoolManager",
32 "ProxyManager",
33 "HTTPResponse",
34 "Retry",
35 "Timeout",
36 "add_stderr_logger",
37 "connection_from_url",
38 "disable_warnings",
39 "encode_multipart_formdata",
40 "make_headers",
41 "proxy_from_url",
42 "request",
43 )
44
45 logging.getLogger(__name__).addHandler(NullHandler())
46
47
48 def add_stderr_logger(level: int = logging.DEBUG) -> logging.StreamHandler:
49 """
50 Helper for quickly adding a StreamHandler to the logger. Useful for
51 debugging.
52
53 Returns the handler after adding it.
54 """
55 # This method needs to be in this __init__.py to get the __name__ correct
56 # even if urllib3 is vendored within another package.
57 logger = logging.getLogger(__name__)
58 handler = logging.StreamHandler()
59 handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
60 logger.addHandler(handler)
61 logger.setLevel(level)
62 logger.debug("Added a stderr logging handler to logger: %s", __name__)
63 return handler
64
65
66 # ... Clean up.
67 del NullHandler
68
69
70 # All warning filters *must* be appended unless you're really certain that they
71 # shouldn't be: otherwise, it's very hard for users to use most Python
72 # mechanisms to silence them.
73 # SecurityWarning's always go off by default.
74 warnings.simplefilter("always", exceptions.SecurityWarning, append=True)
75 # InsecurePlatformWarning's don't vary between requests, so we keep it default.
76 warnings.simplefilter("default", exceptions.InsecurePlatformWarning, append=True)
77 # SNIMissingWarnings should go off only once.
78 warnings.simplefilter("default", exceptions.SNIMissingWarning, append=True)
79
80
81 def disable_warnings(category: Type[Warning] = exceptions.HTTPWarning) -> None:
82 """
83 Helper for quickly disabling all urllib3 warnings.
84 """
85 warnings.simplefilter("ignore", category)
86
87
88 _DEFAULT_POOL = PoolManager()
89
90
91 def request(
92 method: str,
93 url: str,
94 *,
95 body: Optional[_TYPE_BODY] = None,
96 fields: Optional[_TYPE_FIELDS] = None,
97 headers: Optional[Mapping[str, str]] = None,
98 preload_content: Optional[bool] = True,
99 redirect: Optional[bool] = True,
100 retries: Optional[Union[Retry, bool, int]] = None,
101 timeout: Optional[Union[Timeout, float, int]] = 3,
102 ) -> BaseHTTPResponse:
103 """
104 A convenience, top-level request method. It uses a module-global ``PoolManager`` instance.
105 Therefore, its side effects could be shared across dependencies relying on it.
106 To avoid side effects create a new ``PoolManager`` instance and use it instead.
107 The method does not accept low-level ``**urlopen_kw`` keyword arguments.
108 """
109
110 return _DEFAULT_POOL.request(
111 method,
112 url,
113 body=body,
114 fields=fields,
115 headers=headers,
116 preload_content=preload_content,
117 redirect=redirect,
118 retries=retries,
119 timeout=timeout,
120 )
121
[end of src/urllib3/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/urllib3/__init__.py b/src/urllib3/__init__.py
--- a/src/urllib3/__init__.py
+++ b/src/urllib3/__init__.py
@@ -96,6 +96,7 @@
fields: Optional[_TYPE_FIELDS] = None,
headers: Optional[Mapping[str, str]] = None,
preload_content: Optional[bool] = True,
+ decode_content: Optional[bool] = True,
redirect: Optional[bool] = True,
retries: Optional[Union[Retry, bool, int]] = None,
timeout: Optional[Union[Timeout, float, int]] = 3,
@@ -114,6 +115,7 @@
fields=fields,
headers=headers,
preload_content=preload_content,
+ decode_content=decode_content,
redirect=redirect,
retries=retries,
timeout=timeout,
| {"golden_diff": "diff --git a/src/urllib3/__init__.py b/src/urllib3/__init__.py\n--- a/src/urllib3/__init__.py\n+++ b/src/urllib3/__init__.py\n@@ -96,6 +96,7 @@\n fields: Optional[_TYPE_FIELDS] = None,\n headers: Optional[Mapping[str, str]] = None,\n preload_content: Optional[bool] = True,\n+ decode_content: Optional[bool] = True,\n redirect: Optional[bool] = True,\n retries: Optional[Union[Retry, bool, int]] = None,\n timeout: Optional[Union[Timeout, float, int]] = 3,\n@@ -114,6 +115,7 @@\n fields=fields,\n headers=headers,\n preload_content=preload_content,\n+ decode_content=decode_content,\n redirect=redirect,\n retries=retries,\n timeout=timeout,\n", "issue": "Add decode_content parameter to top-level APIs\nLike the title says, add `decode_content` to the top-level APIs `urllib3.request()`.\r\n\r\nSee https://github.com/urllib3/urllib3/commit/ddf7361ac0467431a2f3df6ba346c9c506c29d56 for an example.\n", "before_files": [{"content": "\"\"\"\nPython HTTP library with thread-safe connection pooling, file post support, user friendly, and more\n\"\"\"\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nimport logging\nimport warnings\nfrom logging import NullHandler\nfrom typing import Mapping, Optional, Type, Union\n\nfrom . import exceptions\nfrom ._collections import HTTPHeaderDict\nfrom ._version import __version__\nfrom .connection import _TYPE_BODY\nfrom .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url\nfrom .filepost import _TYPE_FIELDS, encode_multipart_formdata\nfrom .poolmanager import PoolManager, ProxyManager, proxy_from_url\nfrom .response import BaseHTTPResponse, HTTPResponse\nfrom .util.request import make_headers\nfrom .util.retry import Retry\nfrom .util.timeout import Timeout\n\n__author__ = \"Andrey Petrov ([email protected])\"\n__license__ = \"MIT\"\n__version__ = __version__\n\n__all__ = (\n \"HTTPConnectionPool\",\n \"HTTPHeaderDict\",\n \"HTTPSConnectionPool\",\n \"PoolManager\",\n \"ProxyManager\",\n \"HTTPResponse\",\n \"Retry\",\n \"Timeout\",\n \"add_stderr_logger\",\n \"connection_from_url\",\n \"disable_warnings\",\n \"encode_multipart_formdata\",\n \"make_headers\",\n \"proxy_from_url\",\n \"request\",\n)\n\nlogging.getLogger(__name__).addHandler(NullHandler())\n\n\ndef add_stderr_logger(level: int = logging.DEBUG) -> logging.StreamHandler:\n \"\"\"\n Helper for quickly adding a StreamHandler to the logger. Useful for\n debugging.\n\n Returns the handler after adding it.\n \"\"\"\n # This method needs to be in this __init__.py to get the __name__ correct\n # even if urllib3 is vendored within another package.\n logger = logging.getLogger(__name__)\n handler = logging.StreamHandler()\n handler.setFormatter(logging.Formatter(\"%(asctime)s %(levelname)s %(message)s\"))\n logger.addHandler(handler)\n logger.setLevel(level)\n logger.debug(\"Added a stderr logging handler to logger: %s\", __name__)\n return handler\n\n\n# ... 
Clean up.\ndel NullHandler\n\n\n# All warning filters *must* be appended unless you're really certain that they\n# shouldn't be: otherwise, it's very hard for users to use most Python\n# mechanisms to silence them.\n# SecurityWarning's always go off by default.\nwarnings.simplefilter(\"always\", exceptions.SecurityWarning, append=True)\n# InsecurePlatformWarning's don't vary between requests, so we keep it default.\nwarnings.simplefilter(\"default\", exceptions.InsecurePlatformWarning, append=True)\n# SNIMissingWarnings should go off only once.\nwarnings.simplefilter(\"default\", exceptions.SNIMissingWarning, append=True)\n\n\ndef disable_warnings(category: Type[Warning] = exceptions.HTTPWarning) -> None:\n \"\"\"\n Helper for quickly disabling all urllib3 warnings.\n \"\"\"\n warnings.simplefilter(\"ignore\", category)\n\n\n_DEFAULT_POOL = PoolManager()\n\n\ndef request(\n method: str,\n url: str,\n *,\n body: Optional[_TYPE_BODY] = None,\n fields: Optional[_TYPE_FIELDS] = None,\n headers: Optional[Mapping[str, str]] = None,\n preload_content: Optional[bool] = True,\n redirect: Optional[bool] = True,\n retries: Optional[Union[Retry, bool, int]] = None,\n timeout: Optional[Union[Timeout, float, int]] = 3,\n) -> BaseHTTPResponse:\n \"\"\"\n A convenience, top-level request method. It uses a module-global ``PoolManager`` instance.\n Therefore, its side effects could be shared across dependencies relying on it.\n To avoid side effects create a new ``PoolManager`` instance and use it instead.\n The method does not accept low-level ``**urlopen_kw`` keyword arguments.\n \"\"\"\n\n return _DEFAULT_POOL.request(\n method,\n url,\n body=body,\n fields=fields,\n headers=headers,\n preload_content=preload_content,\n redirect=redirect,\n retries=retries,\n timeout=timeout,\n )\n", "path": "src/urllib3/__init__.py"}]} | 1,761 | 204 |
gh_patches_debug_24800 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1493 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need help with generating GUID/UUID values for context variables
* Cookiecutter version: 1.6
* Template project url: none
* Python version: 3.7 (virtual env created using win python 3.7 x64)
* Operating System: Windows 10, 64 Bit
### Description:
First off many thanks for creating this project !
Here is some context of what I am trying to do and where I need some guidance
* I am trying to use CC to generate a new Visual Studio 2017 solution and project files with a particular folder/file organization that I like
* I was able to get most of it working except for the below:
* Parts of the above project and solution files involve generating several unique GUIDs
* my first approach was creating a `pre_gen_project.py` inside the `hooks` folder and updating/creating new variables that could be added to the ones loaded from `cookiecutter.json` or entered by the user
* I was, however, blocked as I could not figure out how to access the context being used by CC and the jinja2 engine
* I proceeded to go over the many issues on github and found some related ones like the following: #60, #102, #180, #288, but no clear answer on how to achieve what I'd like
* I also followed some other issues that suggested creating a custom jinja2 extension/filter (#944) but I couldn't figure out how or where to put them in the template folder so that cookiecutter.exe could identify and pick them up
* Lastly, I also tried going over the CC source code and tried to create a new executable from my script (similar to `cli.py`) that passes the GUIDs via the `extra_context` to `cookiecutter.main(...)`, but ran into some other problems that I am still trying to figure out

Appreciate any pointers on how I can inject GUID values for the context variables; a sketch of one possible approach follows below
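One workable answer, and essentially what the patch below adds, is a small Jinja2 extension that exposes a `uuid4()` global to templates, so `{{ uuid4() }}` yields a fresh GUID during generation:

```python
import uuid

from jinja2.ext import Extension


class UUIDExtension(Extension):
    """Expose uuid4() to templates so `{{ uuid4() }}` yields a fresh GUID."""

    def __init__(self, environment):
        super(UUIDExtension, self).__init__(environment)

        def uuid4():
            return str(uuid.uuid4())

        environment.globals.update(uuid4=uuid4)
```

The environment side of the change is then just appending `'cookiecutter.extensions.UUIDExtension'` to the default extension list, as the patch shows.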
</issue>
<code>
[start of cookiecutter/extensions.py]
1 """Jinja2 extensions."""
2 import json
3 import string
4 from secrets import choice
5
6 from jinja2.ext import Extension
7 from slugify import slugify as pyslugify
8
9
10 class JsonifyExtension(Extension):
11 """Jinja2 extension to convert a Python object to JSON."""
12
13 def __init__(self, environment):
14 """Initialize the extension with the given environment."""
15 super(JsonifyExtension, self).__init__(environment)
16
17 def jsonify(obj):
18 return json.dumps(obj, sort_keys=True, indent=4)
19
20 environment.filters['jsonify'] = jsonify
21
22
23 class RandomStringExtension(Extension):
24 """Jinja2 extension to create a random string."""
25
26 def __init__(self, environment):
27 """Jinja2 Extension Constructor."""
28 super(RandomStringExtension, self).__init__(environment)
29
30 def random_ascii_string(length, punctuation=False):
31 if punctuation:
32 corpus = "".join((string.ascii_letters, string.punctuation))
33 else:
34 corpus = string.ascii_letters
35 return "".join(choice(corpus) for _ in range(length))
36
37 environment.globals.update(random_ascii_string=random_ascii_string)
38
39
40 class SlugifyExtension(Extension):
41 """Jinja2 Extension to slugify string."""
42
43 def __init__(self, environment):
44 """Jinja2 Extension constructor."""
45 super(SlugifyExtension, self).__init__(environment)
46
47 def slugify(value, **kwargs):
48 """Slugifies the value."""
49 return pyslugify(value, **kwargs)
50
51 environment.filters['slugify'] = slugify
52
[end of cookiecutter/extensions.py]
[start of cookiecutter/environment.py]
1 """Jinja2 environment and extensions loading."""
2 from jinja2 import Environment, StrictUndefined
3
4 from cookiecutter.exceptions import UnknownExtension
5
6
7 class ExtensionLoaderMixin(object):
8 """Mixin providing sane loading of extensions specified in a given context.
9
10 The context is being extracted from the keyword arguments before calling
11 the next parent class in line of the child.
12 """
13
14 def __init__(self, **kwargs):
15 """Initialize the Jinja2 Environment object while loading extensions.
16
17 Does the following:
18
19 1. Establishes default_extensions (currently just a Time feature)
20 2. Reads extensions set in the cookiecutter.json _extensions key.
21 3. Attempts to load the extensions. Provides useful error if fails.
22 """
23 context = kwargs.pop('context', {})
24
25 default_extensions = [
26 'cookiecutter.extensions.JsonifyExtension',
27 'cookiecutter.extensions.RandomStringExtension',
28 'cookiecutter.extensions.SlugifyExtension',
29 'jinja2_time.TimeExtension',
30 ]
31 extensions = default_extensions + self._read_extensions(context)
32
33 try:
34 super(ExtensionLoaderMixin, self).__init__(extensions=extensions, **kwargs)
35 except ImportError as err:
36 raise UnknownExtension('Unable to load extension: {}'.format(err))
37
38 def _read_extensions(self, context):
39 """Return list of extensions as str to be passed on to the Jinja2 env.
40
41 If context does not contain the relevant info, return an empty
42 list instead.
43 """
44 try:
45 extensions = context['cookiecutter']['_extensions']
46 except KeyError:
47 return []
48 else:
49 return [str(ext) for ext in extensions]
50
51
52 class StrictEnvironment(ExtensionLoaderMixin, Environment):
53 """Create strict Jinja2 environment.
54
55 Jinja2 environment will raise error on undefined variable in template-
56 rendering context.
57 """
58
59 def __init__(self, **kwargs):
60 """Set the standard Cookiecutter StrictEnvironment.
61
62 Also loading extensions defined in cookiecutter.json's _extensions key.
63 """
64 super(StrictEnvironment, self).__init__(undefined=StrictUndefined, **kwargs)
65
[end of cookiecutter/environment.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/environment.py b/cookiecutter/environment.py
--- a/cookiecutter/environment.py
+++ b/cookiecutter/environment.py
@@ -26,6 +26,7 @@
'cookiecutter.extensions.JsonifyExtension',
'cookiecutter.extensions.RandomStringExtension',
'cookiecutter.extensions.SlugifyExtension',
+ 'cookiecutter.extensions.UUIDExtension',
'jinja2_time.TimeExtension',
]
extensions = default_extensions + self._read_extensions(context)
diff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py
--- a/cookiecutter/extensions.py
+++ b/cookiecutter/extensions.py
@@ -1,6 +1,7 @@
"""Jinja2 extensions."""
import json
import string
+import uuid
from secrets import choice
from jinja2.ext import Extension
@@ -49,3 +50,17 @@
return pyslugify(value, **kwargs)
environment.filters['slugify'] = slugify
+
+
+class UUIDExtension(Extension):
+ """Jinja2 Extension to generate uuid4 string."""
+
+ def __init__(self, environment):
+ """Jinja2 Extension constructor."""
+ super(UUIDExtension, self).__init__(environment)
+
+ def uuid4():
+ """Generate UUID4."""
+ return str(uuid.uuid4())
+
+ environment.globals.update(uuid4=uuid4)
| {"golden_diff": "diff --git a/cookiecutter/environment.py b/cookiecutter/environment.py\n--- a/cookiecutter/environment.py\n+++ b/cookiecutter/environment.py\n@@ -26,6 +26,7 @@\n 'cookiecutter.extensions.JsonifyExtension',\n 'cookiecutter.extensions.RandomStringExtension',\n 'cookiecutter.extensions.SlugifyExtension',\n+ 'cookiecutter.extensions.UUIDExtension',\n 'jinja2_time.TimeExtension',\n ]\n extensions = default_extensions + self._read_extensions(context)\ndiff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py\n--- a/cookiecutter/extensions.py\n+++ b/cookiecutter/extensions.py\n@@ -1,6 +1,7 @@\n \"\"\"Jinja2 extensions.\"\"\"\n import json\n import string\n+import uuid\n from secrets import choice\n \n from jinja2.ext import Extension\n@@ -49,3 +50,17 @@\n return pyslugify(value, **kwargs)\n \n environment.filters['slugify'] = slugify\n+\n+\n+class UUIDExtension(Extension):\n+ \"\"\"Jinja2 Extension to generate uuid4 string.\"\"\"\n+\n+ def __init__(self, environment):\n+ \"\"\"Jinja2 Extension constructor.\"\"\"\n+ super(UUIDExtension, self).__init__(environment)\n+\n+ def uuid4():\n+ \"\"\"Generate UUID4.\"\"\"\n+ return str(uuid.uuid4())\n+\n+ environment.globals.update(uuid4=uuid4)\n", "issue": "Need help with generating GUID/UUID values for context variables\n* Cookiecutter version: 1.6\r\n* Template project url: none\r\n* Python version: 3.7 (virtual env created using win python 3.7 x64)\r\n* Operating System: Windows 10, 64 Bit\r\n\r\n### Description:\r\n\r\nFirst off many thanks for creating this project !\r\nHere is some context of what I am trying to do and where I need some guidance\r\n* I am trying to use CC to generate new a Visual Studio 2017 solution and project files with a particular folder/file organization that I like\r\n* I was able to most of it working but for the below:\r\n* Parts of the above project, solution files involves generating several unique GUIDs\r\n* my first approach was creating a `pre_gen_project.py` inside the `hooks` folder and update/create new variables that could be added to the ones loaded from `cookiecutter.json` or entered by the user\r\n* I was however blocked as I could not figure out how to access the context being used by CC and the jinja2 engine \r\n* I proceeded to go over the many issues on github and found some related ones like the following: #60, #102, #180, #288 but no clear answer on how to achieve what I'd like\r\n* I also followed some others issues that suggested creating custom jinja2 extension/filter (#944) but I couldnt figure out how or where to put them in the template folder so the cookiecutter.exe can identify them and pick them up\r\n* Lastly, I also tried going over the CC source code and tried to create a new executable from my script (similar to `cli.py`) that passes the guids via the `extra_context` to `cookiecutter.main(...)` but ran into some other problems that I am still trying to figure out\r\n\r\nAppreciate any pointers on how I can inject GUID values for the context variables\n", "before_files": [{"content": "\"\"\"Jinja2 extensions.\"\"\"\nimport json\nimport string\nfrom secrets import choice\n\nfrom jinja2.ext import Extension\nfrom slugify import slugify as pyslugify\n\n\nclass JsonifyExtension(Extension):\n \"\"\"Jinja2 extension to convert a Python object to JSON.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Initialize the extension with the given environment.\"\"\"\n super(JsonifyExtension, self).__init__(environment)\n\n def jsonify(obj):\n return json.dumps(obj, 
sort_keys=True, indent=4)\n\n environment.filters['jsonify'] = jsonify\n\n\nclass RandomStringExtension(Extension):\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension Constructor.\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n\n def random_ascii_string(length, punctuation=False):\n if punctuation:\n corpus = \"\".join((string.ascii_letters, string.punctuation))\n else:\n corpus = string.ascii_letters\n return \"\".join(choice(corpus) for _ in range(length))\n\n environment.globals.update(random_ascii_string=random_ascii_string)\n\n\nclass SlugifyExtension(Extension):\n \"\"\"Jinja2 Extension to slugify string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension constructor.\"\"\"\n super(SlugifyExtension, self).__init__(environment)\n\n def slugify(value, **kwargs):\n \"\"\"Slugifies the value.\"\"\"\n return pyslugify(value, **kwargs)\n\n environment.filters['slugify'] = slugify\n", "path": "cookiecutter/extensions.py"}, {"content": "\"\"\"Jinja2 environment and extensions loading.\"\"\"\nfrom jinja2 import Environment, StrictUndefined\n\nfrom cookiecutter.exceptions import UnknownExtension\n\n\nclass ExtensionLoaderMixin(object):\n \"\"\"Mixin providing sane loading of extensions specified in a given context.\n\n The context is being extracted from the keyword arguments before calling\n the next parent class in line of the child.\n \"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Initialize the Jinja2 Environment object while loading extensions.\n\n Does the following:\n\n 1. Establishes default_extensions (currently just a Time feature)\n 2. Reads extensions set in the cookiecutter.json _extensions key.\n 3. Attempts to load the extensions. Provides useful error if fails.\n \"\"\"\n context = kwargs.pop('context', {})\n\n default_extensions = [\n 'cookiecutter.extensions.JsonifyExtension',\n 'cookiecutter.extensions.RandomStringExtension',\n 'cookiecutter.extensions.SlugifyExtension',\n 'jinja2_time.TimeExtension',\n ]\n extensions = default_extensions + self._read_extensions(context)\n\n try:\n super(ExtensionLoaderMixin, self).__init__(extensions=extensions, **kwargs)\n except ImportError as err:\n raise UnknownExtension('Unable to load extension: {}'.format(err))\n\n def _read_extensions(self, context):\n \"\"\"Return list of extensions as str to be passed on to the Jinja2 env.\n\n If context does not contain the relevant info, return an empty\n list instead.\n \"\"\"\n try:\n extensions = context['cookiecutter']['_extensions']\n except KeyError:\n return []\n else:\n return [str(ext) for ext in extensions]\n\n\nclass StrictEnvironment(ExtensionLoaderMixin, Environment):\n \"\"\"Create strict Jinja2 environment.\n\n Jinja2 environment will raise error on undefined variable in template-\n rendering context.\n \"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Set the standard Cookiecutter StrictEnvironment.\n\n Also loading extensions defined in cookiecutter.json's _extensions key.\n \"\"\"\n super(StrictEnvironment, self).__init__(undefined=StrictUndefined, **kwargs)\n", "path": "cookiecutter/environment.py"}]} | 1,969 | 312 |
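A quick illustrative check for the record above (not part of the dataset): once the `UUIDExtension` from the golden diff is registered among the default extensions, any template string can call `uuid4()` as a Jinja2 global. The snippet assumes the patched cookiecutter.

```python
# Hedged sketch: requires the patched cookiecutter with UUIDExtension installed.
from cookiecutter.environment import StrictEnvironment

env = StrictEnvironment(context={"cookiecutter": {}})
# Renders a fresh UUID4 string on every call, suitable for GUID fields
# in generated Visual Studio solution/project files.
print(env.from_string("{{ uuid4() }}").render())
```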
gh_patches_debug_8560 | rasdani/github-patches | git_diff | uccser__cs-unplugged-197 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove topic query from homepage
When the homepage is loaded, a database query is performed. This is currently not needed and should be removed.
</issue>
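A hedged regression-test sketch for the fix (not taken from the repository; the root URL mapping is an assumption):

```python
# Hypothetical Django test: assertNumQueries fails if rendering the homepage
# triggers any database query at all.
from django.test import TestCase

class HomepageQueryTest(TestCase):
    def test_homepage_renders_without_queries(self):
        with self.assertNumQueries(0):
            self.client.get("/")
```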
<code>
[start of csunplugged/general/views.py]
1 from django.views.generic import TemplateView
2
3
4 class GeneralIndexView(TemplateView):
5 template_name = 'general/index.html'
6
7 def get_context_data(self, **kwargs):
8 # TODO: Investigate if importing model from another
9 # app is sensible/best approach.
10 from topics.models import Topic
11 context = super(GeneralIndexView, self).get_context_data(**kwargs)
12 context['total_topics'] = Topic.objects.count()
13 return context
14
15
16 class GeneralAboutView(TemplateView):
17 template_name = 'general/about.html'
18
[end of csunplugged/general/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/csunplugged/general/views.py b/csunplugged/general/views.py
--- a/csunplugged/general/views.py
+++ b/csunplugged/general/views.py
@@ -4,14 +4,6 @@
class GeneralIndexView(TemplateView):
template_name = 'general/index.html'
- def get_context_data(self, **kwargs):
- # TODO: Investigate if importing model from another
- # app is sensible/best approach.
- from topics.models import Topic
- context = super(GeneralIndexView, self).get_context_data(**kwargs)
- context['total_topics'] = Topic.objects.count()
- return context
-
class GeneralAboutView(TemplateView):
template_name = 'general/about.html'
| {"golden_diff": "diff --git a/csunplugged/general/views.py b/csunplugged/general/views.py\n--- a/csunplugged/general/views.py\n+++ b/csunplugged/general/views.py\n@@ -4,14 +4,6 @@\n class GeneralIndexView(TemplateView):\n template_name = 'general/index.html'\n \n- def get_context_data(self, **kwargs):\n- # TODO: Investigate if importing model from another\n- # app is sensible/best approach.\n- from topics.models import Topic\n- context = super(GeneralIndexView, self).get_context_data(**kwargs)\n- context['total_topics'] = Topic.objects.count()\n- return context\n-\n \n class GeneralAboutView(TemplateView):\n template_name = 'general/about.html'\n", "issue": "Remove topic query from homepage\nWhen the homepage is loaded, a database query is performed. This is currently not needed and should be removed.\nRemove topic query from homepage\nWhen the homepage is loaded, a database query is performed. This is currently not needed and should be removed.\n", "before_files": [{"content": "from django.views.generic import TemplateView\n\n\nclass GeneralIndexView(TemplateView):\n template_name = 'general/index.html'\n\n def get_context_data(self, **kwargs):\n # TODO: Investigate if importing model from another\n # app is sensible/best approach.\n from topics.models import Topic\n context = super(GeneralIndexView, self).get_context_data(**kwargs)\n context['total_topics'] = Topic.objects.count()\n return context\n\n\nclass GeneralAboutView(TemplateView):\n template_name = 'general/about.html'\n", "path": "csunplugged/general/views.py"}]} | 740 | 166 |
gh_patches_debug_35530 | rasdani/github-patches | git_diff | hylang__hy-2214 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
curses and SIGWINCH do not work properly with Hy.
After importing curses under hy, curses is unable to detect the size of the terminal as it is resized.
This manifests as curses.LINES and curses.COLS not being updated, stdscr.getmaxyx not working and so on.
However, the workaround of launching the hy program from python with:
```
import hy
import curses
from main import main_event_loop
if __name__ == "__main__":
curses.wrapper(main_event_loop, ...)
```
allows curses to dynamically detect the size of the terminal.
I therefore conclude that the problem is with the hy binary. My (limited) understanding, acquired while tracking down the source of this problem, is that curses uses the SIGWINCH signal, so perhaps that is a place to look.
Void linux x86 64bit, python 3.9.0
Freebsd 12.2, python 3.8.6
hy 0.19.0 (reported by pip) installed from git master branch via pip
</issue>
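The fix that eventually landed (see the diff at the end of this record) defers the `readline` import, since importing CPython's `readline` eagerly can interfere with terminal handling (https://bugs.python.org/issue2675). A minimal sketch of that lazy-import pattern:

```python
# Sketch only: the module-level name stays None until a REPL actually needs it.
readline = None

def init_readline():
    global readline
    try:
        import readline  # binds the module-level name via the `global` above
    except ImportError:
        pass  # readline is optional; completion is simply disabled without it
```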
<code>
[start of hy/completer.py]
1 import contextlib
2 import os
3 import re
4 import sys
5 import builtins
6
7 import hy.macros
8 import hy.compiler
9
10
11 docomplete = True
12
13 try:
14 import readline
15 except AttributeError as e:
16 # https://github.com/pyreadline/pyreadline/issues/65
17 if "module 'collections' has no attribute 'Callable'" in str(e):
18 docomplete = False
19 else:
20 raise
21 except ImportError:
22 docomplete = False
23
24 if docomplete:
25 if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
26 readline_bind = "bind ^I rl_complete"
27 else:
28 readline_bind = "tab: complete"
29
30
31 class Completer:
32
33 def __init__(self, namespace={}):
34 if not isinstance(namespace, dict):
35 raise TypeError('namespace must be a dictionary')
36 self.namespace = namespace
37 self.path = [builtins.__dict__,
38 namespace]
39
40 namespace.setdefault('__macros__', {})
41
42 self.path.append(namespace['__macros__'])
43
44 def attr_matches(self, text):
45 # Borrowed from IPython's completer
46 m = re.match(r"(\S+(\.[\w-]+)*)\.([\w-]*)$", text)
47
48 if m:
49 expr, attr = m.group(1, 3)
50 attr = attr.replace("-", "_")
51 expr = expr.replace("-", "_")
52 else:
53 return []
54
55 try:
56 obj = eval(expr, self.namespace)
57 words = dir(obj)
58 except Exception:
59 return []
60
61 n = len(attr)
62 matches = []
63 for w in words:
64 if w[:n] == attr:
65 matches.append("{}.{}".format(
66 expr.replace("_", "-"), w.replace("_", "-")))
67 return matches
68
69 def global_matches(self, text):
70 matches = []
71 for p in self.path:
72 for k in p.keys():
73 if isinstance(k, str):
74 k = k.replace("_", "-")
75 if k.startswith(text):
76 matches.append(k)
77 return matches
78
79 def complete(self, text, state):
80 if "." in text:
81 matches = self.attr_matches(text)
82 else:
83 matches = self.global_matches(text)
84 try:
85 return matches[state]
86 except IndexError:
87 return None
88
89
90 @contextlib.contextmanager
91 def completion(completer=None):
92 delims = "()[]{} "
93 if not completer:
94 completer = Completer()
95
96 if docomplete:
97 readline.set_completer(completer.complete)
98 readline.set_completer_delims(delims)
99
100 history = os.environ.get(
101 "HY_HISTORY", os.path.expanduser("~/.hy-history"))
102 readline.parse_and_bind("set blink-matching-paren on")
103
104 try:
105 readline.read_history_file(history)
106 except OSError:
107 pass
108
109 readline.parse_and_bind(readline_bind)
110
111 try:
112 yield
113 finally:
114 if docomplete:
115 try:
116 readline.write_history_file(history)
117 except OSError:
118 pass
119
[end of hy/completer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hy/completer.py b/hy/completer.py
--- a/hy/completer.py
+++ b/hy/completer.py
@@ -8,24 +8,19 @@
import hy.compiler
-docomplete = True
-
-try:
- import readline
-except AttributeError as e:
- # https://github.com/pyreadline/pyreadline/issues/65
- if "module 'collections' has no attribute 'Callable'" in str(e):
- docomplete = False
- else:
- raise
-except ImportError:
- docomplete = False
-
-if docomplete:
- if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
- readline_bind = "bind ^I rl_complete"
- else:
- readline_bind = "tab: complete"
+# Lazily import `readline` to work around
+# https://bugs.python.org/issue2675#msg265564
+readline = None
+def init_readline():
+ global readline
+ try:
+ import readline
+ except AttributeError as e:
+ # https://github.com/pyreadline/pyreadline/issues/65
+ if "module 'collections' has no attribute 'Callable'" not in str(e):
+ raise
+ except ImportError:
+ pass
class Completer:
@@ -86,33 +81,42 @@
except IndexError:
return None
-
@contextlib.contextmanager
def completion(completer=None):
delims = "()[]{} "
+
+ init_readline()
+ if not readline:
+ # We have nothing to do. Act like a null context manager.
+ yield
+ return
+
if not completer:
completer = Completer()
- if docomplete:
- readline.set_completer(completer.complete)
- readline.set_completer_delims(delims)
+ if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
+ readline_bind = "bind ^I rl_complete"
+ else:
+ readline_bind = "tab: complete"
- history = os.environ.get(
- "HY_HISTORY", os.path.expanduser("~/.hy-history"))
- readline.parse_and_bind("set blink-matching-paren on")
+ readline.set_completer(completer.complete)
+ readline.set_completer_delims(delims)
- try:
- readline.read_history_file(history)
- except OSError:
- pass
+ history = os.environ.get(
+ "HY_HISTORY", os.path.expanduser("~/.hy-history"))
+ readline.parse_and_bind("set blink-matching-paren on")
- readline.parse_and_bind(readline_bind)
+ try:
+ readline.read_history_file(history)
+ except OSError:
+ pass
+
+ readline.parse_and_bind(readline_bind)
try:
yield
finally:
- if docomplete:
- try:
- readline.write_history_file(history)
- except OSError:
- pass
+ try:
+ readline.write_history_file(history)
+ except OSError:
+ pass
| {"golden_diff": "diff --git a/hy/completer.py b/hy/completer.py\n--- a/hy/completer.py\n+++ b/hy/completer.py\n@@ -8,24 +8,19 @@\n import hy.compiler\n \n \n-docomplete = True\n-\n-try:\n- import readline\n-except AttributeError as e:\n- # https://github.com/pyreadline/pyreadline/issues/65\n- if \"module 'collections' has no attribute 'Callable'\" in str(e):\n- docomplete = False\n- else:\n- raise\n-except ImportError:\n- docomplete = False\n-\n-if docomplete:\n- if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n- readline_bind = \"bind ^I rl_complete\"\n- else:\n- readline_bind = \"tab: complete\"\n+# Lazily import `readline` to work around\n+# https://bugs.python.org/issue2675#msg265564\n+readline = None\n+def init_readline():\n+ global readline\n+ try:\n+ import readline\n+ except AttributeError as e:\n+ # https://github.com/pyreadline/pyreadline/issues/65\n+ if \"module 'collections' has no attribute 'Callable'\" not in str(e):\n+ raise\n+ except ImportError:\n+ pass\n \n \n class Completer:\n@@ -86,33 +81,42 @@\n except IndexError:\n return None\n \n-\n @contextlib.contextmanager\n def completion(completer=None):\n delims = \"()[]{} \"\n+\n+ init_readline()\n+ if not readline:\n+ # We have nothing to do. Act like a null context manager.\n+ yield\n+ return\n+\n if not completer:\n completer = Completer()\n \n- if docomplete:\n- readline.set_completer(completer.complete)\n- readline.set_completer_delims(delims)\n+ if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n+ readline_bind = \"bind ^I rl_complete\"\n+ else:\n+ readline_bind = \"tab: complete\"\n \n- history = os.environ.get(\n- \"HY_HISTORY\", os.path.expanduser(\"~/.hy-history\"))\n- readline.parse_and_bind(\"set blink-matching-paren on\")\n+ readline.set_completer(completer.complete)\n+ readline.set_completer_delims(delims)\n \n- try:\n- readline.read_history_file(history)\n- except OSError:\n- pass\n+ history = os.environ.get(\n+ \"HY_HISTORY\", os.path.expanduser(\"~/.hy-history\"))\n+ readline.parse_and_bind(\"set blink-matching-paren on\")\n \n- readline.parse_and_bind(readline_bind)\n+ try:\n+ readline.read_history_file(history)\n+ except OSError:\n+ pass\n+\n+ readline.parse_and_bind(readline_bind)\n \n try:\n yield\n finally:\n- if docomplete:\n- try:\n- readline.write_history_file(history)\n- except OSError:\n- pass\n+ try:\n+ readline.write_history_file(history)\n+ except OSError:\n+ pass\n", "issue": "curses and SIGWINCH do not work properly with Hy.\nAfter importing curses under hy, curses is unable to detect the size of the terminal as it is resized.\r\nThis manifests as curses.LINES and curses.COLS not being updated, stdscr.getmaxyx not working and so on.\r\n\r\nHowever, the workaround of launching the hy program from python with:\r\n```\r\nimport hy\r\nimport curses\r\nfrom main import main_event_loop\r\n\r\nif __name__ == \"__main__\":\r\n curses.wrapper(main_event_loop, ...)\r\n```\r\nallows curses to dynamically detect the size of the terminal.\r\n\r\nI conclude therefore the problem is with the hy binary. 
My (limited) understanding acquired during tracking down the source of this problem is that curses uses the SIGWINCH signal, so perhaps that is a place to look.\r\n\r\nVoid linux x86 64bit, python 3.9.0\r\nFreebsd 12.2, python 3.8.6\r\nhy 0.19.0 (reported by pip) installed from git master branch via pip\n", "before_files": [{"content": "import contextlib\nimport os\nimport re\nimport sys\nimport builtins\n\nimport hy.macros\nimport hy.compiler\n\n\ndocomplete = True\n\ntry:\n import readline\nexcept AttributeError as e:\n # https://github.com/pyreadline/pyreadline/issues/65\n if \"module 'collections' has no attribute 'Callable'\" in str(e):\n docomplete = False\n else:\n raise\nexcept ImportError:\n docomplete = False\n\nif docomplete:\n if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n readline_bind = \"bind ^I rl_complete\"\n else:\n readline_bind = \"tab: complete\"\n\n\nclass Completer:\n\n def __init__(self, namespace={}):\n if not isinstance(namespace, dict):\n raise TypeError('namespace must be a dictionary')\n self.namespace = namespace\n self.path = [builtins.__dict__,\n namespace]\n\n namespace.setdefault('__macros__', {})\n\n self.path.append(namespace['__macros__'])\n\n def attr_matches(self, text):\n # Borrowed from IPython's completer\n m = re.match(r\"(\\S+(\\.[\\w-]+)*)\\.([\\w-]*)$\", text)\n\n if m:\n expr, attr = m.group(1, 3)\n attr = attr.replace(\"-\", \"_\")\n expr = expr.replace(\"-\", \"_\")\n else:\n return []\n\n try:\n obj = eval(expr, self.namespace)\n words = dir(obj)\n except Exception:\n return []\n\n n = len(attr)\n matches = []\n for w in words:\n if w[:n] == attr:\n matches.append(\"{}.{}\".format(\n expr.replace(\"_\", \"-\"), w.replace(\"_\", \"-\")))\n return matches\n\n def global_matches(self, text):\n matches = []\n for p in self.path:\n for k in p.keys():\n if isinstance(k, str):\n k = k.replace(\"_\", \"-\")\n if k.startswith(text):\n matches.append(k)\n return matches\n\n def complete(self, text, state):\n if \".\" in text:\n matches = self.attr_matches(text)\n else:\n matches = self.global_matches(text)\n try:\n return matches[state]\n except IndexError:\n return None\n\n\[email protected]\ndef completion(completer=None):\n delims = \"()[]{} \"\n if not completer:\n completer = Completer()\n\n if docomplete:\n readline.set_completer(completer.complete)\n readline.set_completer_delims(delims)\n\n history = os.environ.get(\n \"HY_HISTORY\", os.path.expanduser(\"~/.hy-history\"))\n readline.parse_and_bind(\"set blink-matching-paren on\")\n\n try:\n readline.read_history_file(history)\n except OSError:\n pass\n\n readline.parse_and_bind(readline_bind)\n\n try:\n yield\n finally:\n if docomplete:\n try:\n readline.write_history_file(history)\n except OSError:\n pass\n", "path": "hy/completer.py"}]} | 1,659 | 710 |
gh_patches_debug_511 | rasdani/github-patches | git_diff | python-gitlab__python-gitlab-1437 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing API code owner approval for protected branches
## Summary
The branch manager is missing an attribute implementation of `code_owner_approval_required` as documented in [GitLab API documentation](https://docs.gitlab.com/ce/api/protected_branches.html#protect-repository-branches)
## Expected Behavior
`ProjectProtectedBranchManager.code_owner_approval_required` should be implemented to mirror the API as documented:
Attribute | Type | Required | Description
-- | -- | -- | --
code_owner_approval_required | boolean | no | Prevent pushes to this branch if it matches an item in the CODEOWNERS file. (defaults: false)
## Actual Behavior
`code_owner_approval_required` is not available as attribute in `ProjectProtectedBranchManager`.
</issue>
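Once the manager accepts the attribute (as the patch below does), usage would look roughly like this; the host, token, and project path are placeholders, not values from a released python-gitlab example:

```python
# Hedged sketch against the patched python-gitlab.
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="<token>")
project = gl.projects.get("group/project")
project.protectedbranches.create({
    "name": "main",
    "code_owner_approval_required": True,  # reject pushes matching CODEOWNERS
})
```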
<code>
[start of gitlab/v4/objects/branches.py]
1 from gitlab import cli
2 from gitlab import exceptions as exc
3 from gitlab.base import RequiredOptional, RESTManager, RESTObject
4 from gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin
5
6
7 __all__ = [
8 "ProjectBranch",
9 "ProjectBranchManager",
10 "ProjectProtectedBranch",
11 "ProjectProtectedBranchManager",
12 ]
13
14
15 class ProjectBranch(ObjectDeleteMixin, RESTObject):
16 _id_attr = "name"
17
18 @cli.register_custom_action(
19 "ProjectBranch", tuple(), ("developers_can_push", "developers_can_merge")
20 )
21 @exc.on_http_error(exc.GitlabProtectError)
22 def protect(self, developers_can_push=False, developers_can_merge=False, **kwargs):
23 """Protect the branch.
24
25 Args:
26 developers_can_push (bool): Set to True if developers are allowed
27 to push to the branch
28 developers_can_merge (bool): Set to True if developers are allowed
29 to merge to the branch
30 **kwargs: Extra options to send to the server (e.g. sudo)
31
32 Raises:
33 GitlabAuthenticationError: If authentication is not correct
34 GitlabProtectError: If the branch could not be protected
35 """
36 id = self.get_id().replace("/", "%2F")
37 path = "%s/%s/protect" % (self.manager.path, id)
38 post_data = {
39 "developers_can_push": developers_can_push,
40 "developers_can_merge": developers_can_merge,
41 }
42 self.manager.gitlab.http_put(path, post_data=post_data, **kwargs)
43 self._attrs["protected"] = True
44
45 @cli.register_custom_action("ProjectBranch")
46 @exc.on_http_error(exc.GitlabProtectError)
47 def unprotect(self, **kwargs):
48 """Unprotect the branch.
49
50 Args:
51 **kwargs: Extra options to send to the server (e.g. sudo)
52
53 Raises:
54 GitlabAuthenticationError: If authentication is not correct
55 GitlabProtectError: If the branch could not be unprotected
56 """
57 id = self.get_id().replace("/", "%2F")
58 path = "%s/%s/unprotect" % (self.manager.path, id)
59 self.manager.gitlab.http_put(path, **kwargs)
60 self._attrs["protected"] = False
61
62
63 class ProjectBranchManager(NoUpdateMixin, RESTManager):
64 _path = "/projects/%(project_id)s/repository/branches"
65 _obj_cls = ProjectBranch
66 _from_parent_attrs = {"project_id": "id"}
67 _create_attrs = RequiredOptional(required=("branch", "ref"))
68
69
70 class ProjectProtectedBranch(ObjectDeleteMixin, RESTObject):
71 _id_attr = "name"
72
73
74 class ProjectProtectedBranchManager(NoUpdateMixin, RESTManager):
75 _path = "/projects/%(project_id)s/protected_branches"
76 _obj_cls = ProjectProtectedBranch
77 _from_parent_attrs = {"project_id": "id"}
78 _create_attrs = RequiredOptional(
79 required=("name",),
80 optional=(
81 "push_access_level",
82 "merge_access_level",
83 "unprotect_access_level",
84 "allowed_to_push",
85 "allowed_to_merge",
86 "allowed_to_unprotect",
87 ),
88 )
89
[end of gitlab/v4/objects/branches.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gitlab/v4/objects/branches.py b/gitlab/v4/objects/branches.py
--- a/gitlab/v4/objects/branches.py
+++ b/gitlab/v4/objects/branches.py
@@ -84,5 +84,6 @@
"allowed_to_push",
"allowed_to_merge",
"allowed_to_unprotect",
+ "code_owner_approval_required",
),
)
| {"golden_diff": "diff --git a/gitlab/v4/objects/branches.py b/gitlab/v4/objects/branches.py\n--- a/gitlab/v4/objects/branches.py\n+++ b/gitlab/v4/objects/branches.py\n@@ -84,5 +84,6 @@\n \"allowed_to_push\",\n \"allowed_to_merge\",\n \"allowed_to_unprotect\",\n+ \"code_owner_approval_required\",\n ),\n )\n", "issue": "Missing API code owner approval for protected branches\n## Summary\r\n\r\nThe branch manager is missing an attribute implementation of `code_owner_approval_required` as documented in [GitLab API documentation](https://docs.gitlab.com/ce/api/protected_branches.html#protect-repository-branches)\r\n\r\n## Expected Behavior\r\n\r\n`ProjectProtectedBranchManager.code_owner_approval_required` should be implemented to mirror the API as documented:\r\n\r\nAttribute | Type | Required | Description\r\n-- | -- | -- | --\r\ncode_owner_approval_required | boolean | no | Prevent pushes to this branch if it matches an item in the\u00a0CODEOWNERS\u00a0file. (defaults: false)\r\n\r\n## Actual Behavior\r\n\r\n`code_owner_approval_required` is not available as attribute in `ProjectProtectedBranchManager`.\r\n\n", "before_files": [{"content": "from gitlab import cli\nfrom gitlab import exceptions as exc\nfrom gitlab.base import RequiredOptional, RESTManager, RESTObject\nfrom gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin\n\n\n__all__ = [\n \"ProjectBranch\",\n \"ProjectBranchManager\",\n \"ProjectProtectedBranch\",\n \"ProjectProtectedBranchManager\",\n]\n\n\nclass ProjectBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n @cli.register_custom_action(\n \"ProjectBranch\", tuple(), (\"developers_can_push\", \"developers_can_merge\")\n )\n @exc.on_http_error(exc.GitlabProtectError)\n def protect(self, developers_can_push=False, developers_can_merge=False, **kwargs):\n \"\"\"Protect the branch.\n\n Args:\n developers_can_push (bool): Set to True if developers are allowed\n to push to the branch\n developers_can_merge (bool): Set to True if developers are allowed\n to merge to the branch\n **kwargs: Extra options to send to the server (e.g. sudo)\n\n Raises:\n GitlabAuthenticationError: If authentication is not correct\n GitlabProtectError: If the branch could not be protected\n \"\"\"\n id = self.get_id().replace(\"/\", \"%2F\")\n path = \"%s/%s/protect\" % (self.manager.path, id)\n post_data = {\n \"developers_can_push\": developers_can_push,\n \"developers_can_merge\": developers_can_merge,\n }\n self.manager.gitlab.http_put(path, post_data=post_data, **kwargs)\n self._attrs[\"protected\"] = True\n\n @cli.register_custom_action(\"ProjectBranch\")\n @exc.on_http_error(exc.GitlabProtectError)\n def unprotect(self, **kwargs):\n \"\"\"Unprotect the branch.\n\n Args:\n **kwargs: Extra options to send to the server (e.g. 
sudo)\n\n Raises:\n GitlabAuthenticationError: If authentication is not correct\n GitlabProtectError: If the branch could not be unprotected\n \"\"\"\n id = self.get_id().replace(\"/\", \"%2F\")\n path = \"%s/%s/unprotect\" % (self.manager.path, id)\n self.manager.gitlab.http_put(path, **kwargs)\n self._attrs[\"protected\"] = False\n\n\nclass ProjectBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/%(project_id)s/repository/branches\"\n _obj_cls = ProjectBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(required=(\"branch\", \"ref\"))\n\n\nclass ProjectProtectedBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n\nclass ProjectProtectedBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/%(project_id)s/protected_branches\"\n _obj_cls = ProjectProtectedBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(\n required=(\"name\",),\n optional=(\n \"push_access_level\",\n \"merge_access_level\",\n \"unprotect_access_level\",\n \"allowed_to_push\",\n \"allowed_to_merge\",\n \"allowed_to_unprotect\",\n ),\n )\n", "path": "gitlab/v4/objects/branches.py"}]} | 1,550 | 92 |
gh_patches_debug_29411 | rasdani/github-patches | git_diff | cloudtools__troposphere-836 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add ResourceLifecycleConfig to AWS::ElasticBeanstalk::Application
[AWS::ElasticBeanstalk::Application](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk.html)
Use the ResourceLifecycleConfig property to define lifecycle settings for resources that belong to the application, and the service role that Elastic Beanstalk assumes in order to apply lifecycle settings.
</issue>
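A hedged usage sketch of the requested property, written against the classes the patch at the end of this record introduces (`MaxCountRule`, `ApplicationVersionLifecycleConfig`, `ApplicationResourceLifecycleConfig`); the role ARN is a placeholder:

```python
from troposphere import Template
from troposphere.elasticbeanstalk import (
    Application, ApplicationResourceLifecycleConfig,
    ApplicationVersionLifecycleConfig, MaxCountRule,
)

t = Template()
t.add_resource(Application(
    "MyApp",
    Description="Application with version lifecycle rules",
    ResourceLifecycleConfig=ApplicationResourceLifecycleConfig(
        ServiceRole="arn:aws:iam::123456789012:role/aws-elasticbeanstalk-service-role",
        VersionLifecycleConfig=ApplicationVersionLifecycleConfig(
            # Keep at most 10 application versions; delete old bundles from S3.
            MaxCountRule=MaxCountRule(
                Enabled=True, MaxCount=10, DeleteSourceFromS3=True),
        ),
    ),
))
print(t.to_json())
```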
<code>
[start of troposphere/elasticbeanstalk.py]
1 # Copyright (c) 2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Tags
7
8
9 WebServer = "WebServer"
10 Worker = "Worker"
11 WebServerType = "Standard"
12 WorkerType = "SQS/HTTP"
13
14
15 class SourceBundle(AWSProperty):
16 props = {
17 'S3Bucket': (basestring, True),
18 'S3Key': (basestring, True),
19 }
20
21
22 class SourceConfiguration(AWSProperty):
23 props = {
24 'ApplicationName': (basestring, True),
25 'TemplateName': (basestring, True),
26 }
27
28
29 class OptionSettings(AWSProperty):
30 props = {
31 'Namespace': (basestring, True),
32 'OptionName': (basestring, True),
33 'Value': (basestring, True),
34 }
35
36
37 class Application(AWSObject):
38 resource_type = "AWS::ElasticBeanstalk::Application"
39
40 props = {
41 'ApplicationName': (basestring, False),
42 'Description': (basestring, False),
43 }
44
45
46 class ApplicationVersion(AWSObject):
47 resource_type = "AWS::ElasticBeanstalk::ApplicationVersion"
48
49 props = {
50 'ApplicationName': (basestring, True),
51 'Description': (basestring, False),
52 'SourceBundle': (SourceBundle, False),
53 }
54
55
56 class ConfigurationTemplate(AWSObject):
57 resource_type = "AWS::ElasticBeanstalk::ConfigurationTemplate"
58
59 props = {
60 'ApplicationName': (basestring, True),
61 'Description': (basestring, False),
62 'EnvironmentId': (basestring, False),
63 'OptionSettings': ([OptionSettings], False),
64 'SolutionStackName': (basestring, False),
65 'SourceConfiguration': (SourceConfiguration, False),
66 }
67
68
69 def validate_tier_name(name):
70 valid_names = [WebServer, Worker]
71 if name not in valid_names:
72 raise ValueError('Tier name needs to be one of %r' % valid_names)
73 return name
74
75
76 def validate_tier_type(tier_type):
77 valid_types = [WebServerType, WorkerType]
78 if tier_type not in valid_types:
79 raise ValueError('Tier type needs to be one of %r' % valid_types)
80 return tier_type
81
82
83 class Tier(AWSProperty):
84 props = {
85 'Name': (validate_tier_name, False),
86 'Type': (validate_tier_type, False),
87 'Version': (basestring, False),
88 }
89
90
91 class Environment(AWSObject):
92 resource_type = "AWS::ElasticBeanstalk::Environment"
93
94 props = {
95 'ApplicationName': (basestring, True),
96 'CNAMEPrefix': (basestring, False),
97 'Description': (basestring, False),
98 'EnvironmentName': (basestring, False),
99 'OptionSettings': ([OptionSettings], False),
100 'SolutionStackName': (basestring, False),
101 'Tags': (Tags, False),
102 'TemplateName': (basestring, False),
103 'Tier': (Tier, False),
104 'VersionLabel': (basestring, False),
105 }
106
[end of troposphere/elasticbeanstalk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/troposphere/elasticbeanstalk.py b/troposphere/elasticbeanstalk.py
--- a/troposphere/elasticbeanstalk.py
+++ b/troposphere/elasticbeanstalk.py
@@ -4,7 +4,7 @@
# See LICENSE file for full license.
from . import AWSObject, AWSProperty, Tags
-
+from .validators import boolean, integer
WebServer = "WebServer"
Worker = "Worker"
@@ -12,6 +12,29 @@
WorkerType = "SQS/HTTP"
+class MaxAgeRule(AWSProperty):
+ props = {
+ 'DeleteSourceFromS3': (boolean, False),
+ 'Enabled': (boolean, False),
+ 'MaxAgeInDays': (integer, False),
+ }
+
+
+class MaxCountRule(AWSProperty):
+ props = {
+ 'DeleteSourceFromS3': (boolean, False),
+ 'Enabled': (boolean, False),
+ 'MaxCount': (integer, False),
+ }
+
+
+class ApplicationVersionLifecycleConfig(AWSProperty):
+ props = {
+ 'MaxAgeRule': (MaxAgeRule, False),
+ 'MaxCountRule': (MaxCountRule, False),
+ }
+
+
class SourceBundle(AWSProperty):
props = {
'S3Bucket': (basestring, True),
@@ -26,6 +49,13 @@
}
+class ApplicationResourceLifecycleConfig(AWSProperty):
+ props = {
+ 'ServiceRole': (basestring, False),
+ 'VersionLifecycleConfig': (ApplicationVersionLifecycleConfig, False),
+ }
+
+
class OptionSettings(AWSProperty):
props = {
'Namespace': (basestring, True),
@@ -40,6 +70,7 @@
props = {
'ApplicationName': (basestring, False),
'Description': (basestring, False),
+ 'ResourceLifecycleConfig': (ApplicationResourceLifecycleConfig, False),
}
| {"golden_diff": "diff --git a/troposphere/elasticbeanstalk.py b/troposphere/elasticbeanstalk.py\n--- a/troposphere/elasticbeanstalk.py\n+++ b/troposphere/elasticbeanstalk.py\n@@ -4,7 +4,7 @@\n # See LICENSE file for full license.\n \n from . import AWSObject, AWSProperty, Tags\n-\n+from .validators import boolean, integer\n \n WebServer = \"WebServer\"\n Worker = \"Worker\"\n@@ -12,6 +12,29 @@\n WorkerType = \"SQS/HTTP\"\n \n \n+class MaxAgeRule(AWSProperty):\n+ props = {\n+ 'DeleteSourceFromS3': (boolean, False),\n+ 'Enabled': (boolean, False),\n+ 'MaxAgeInDays': (integer, False),\n+ }\n+\n+\n+class MaxCountRule(AWSProperty):\n+ props = {\n+ 'DeleteSourceFromS3': (boolean, False),\n+ 'Enabled': (boolean, False),\n+ 'MaxCount': (integer, False),\n+ }\n+\n+\n+class ApplicationVersionLifecycleConfig(AWSProperty):\n+ props = {\n+ 'MaxAgeRule': (MaxAgeRule, False),\n+ 'MaxCountRule': (MaxCountRule, False),\n+ }\n+\n+\n class SourceBundle(AWSProperty):\n props = {\n 'S3Bucket': (basestring, True),\n@@ -26,6 +49,13 @@\n }\n \n \n+class ApplicationResourceLifecycleConfig(AWSProperty):\n+ props = {\n+ 'ServiceRole': (basestring, False),\n+ 'VersionLifecycleConfig': (ApplicationVersionLifecycleConfig, False),\n+ }\n+\n+\n class OptionSettings(AWSProperty):\n props = {\n 'Namespace': (basestring, True),\n@@ -40,6 +70,7 @@\n props = {\n 'ApplicationName': (basestring, False),\n 'Description': (basestring, False),\n+ 'ResourceLifecycleConfig': (ApplicationResourceLifecycleConfig, False),\n }\n", "issue": "Add ResourceLifecycleConfig to AWS::ElasticBeanstalk::Application\n[AWS::ElasticBeanstalk::Application](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk.html)\r\nUse the ResourceLifecycleConfig property to define lifecycle settings for resources that belong to the application, and the service role that Elastic Beanstalk assumes in order to apply lifecycle settings.\n", "before_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSObject, AWSProperty, Tags\n\n\nWebServer = \"WebServer\"\nWorker = \"Worker\"\nWebServerType = \"Standard\"\nWorkerType = \"SQS/HTTP\"\n\n\nclass SourceBundle(AWSProperty):\n props = {\n 'S3Bucket': (basestring, True),\n 'S3Key': (basestring, True),\n }\n\n\nclass SourceConfiguration(AWSProperty):\n props = {\n 'ApplicationName': (basestring, True),\n 'TemplateName': (basestring, True),\n }\n\n\nclass OptionSettings(AWSProperty):\n props = {\n 'Namespace': (basestring, True),\n 'OptionName': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass Application(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Application\"\n\n props = {\n 'ApplicationName': (basestring, False),\n 'Description': (basestring, False),\n }\n\n\nclass ApplicationVersion(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ApplicationVersion\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'SourceBundle': (SourceBundle, False),\n }\n\n\nclass ConfigurationTemplate(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ConfigurationTemplate\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'EnvironmentId': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'SourceConfiguration': (SourceConfiguration, False),\n }\n\n\ndef validate_tier_name(name):\n valid_names = [WebServer, Worker]\n if name not in valid_names:\n raise ValueError('Tier name needs to be one of %r' % valid_names)\n return name\n\n\ndef validate_tier_type(tier_type):\n valid_types = [WebServerType, WorkerType]\n if tier_type not in valid_types:\n raise ValueError('Tier type needs to be one of %r' % valid_types)\n return tier_type\n\n\nclass Tier(AWSProperty):\n props = {\n 'Name': (validate_tier_name, False),\n 'Type': (validate_tier_type, False),\n 'Version': (basestring, False),\n }\n\n\nclass Environment(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Environment\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'CNAMEPrefix': (basestring, False),\n 'Description': (basestring, False),\n 'EnvironmentName': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'Tags': (Tags, False),\n 'TemplateName': (basestring, False),\n 'Tier': (Tier, False),\n 'VersionLabel': (basestring, False),\n }\n", "path": "troposphere/elasticbeanstalk.py"}]} | 1,521 | 439 |
gh_patches_debug_19869 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add suppress_instrumentation flag in context for Metrics
Similar to the [logic](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py#L205) in SpanProcessors, this needs to be done in Metrics to avoid duplicated telemetry when using the HTTP ext or other packages relying on this flag.
</issue>
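A minimal sketch of the suppression pattern the span exporter already uses, applied around a metrics export call; `exporter` stands for any metrics exporter instance, and the key name matches the one used by the tracing SDK:

```python
from opentelemetry.context import attach, detach, get_value, set_value

def export_without_reentry(exporter, records):
    token = attach(set_value("suppress_instrumentation", True))
    try:
        exporter.export(records)  # instrumented HTTP calls made here see the flag
    finally:
        detach(token)

# An instrumentation hook can then check the flag before recording telemetry:
def should_instrument():
    return not get_value("suppress_instrumentation")
```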
<code>
[start of opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import atexit
16 import threading
17
18
19 class PushController(threading.Thread):
20 """A push based controller, used for exporting.
21
22 Uses a worker thread that periodically collects metrics for exporting,
23 exports them and performs some post-processing.
24 """
25
26 daemon = True
27
28 def __init__(self, meter, exporter, interval, shutdown_on_exit=True):
29 super().__init__()
30 self.meter = meter
31 self.exporter = exporter
32 self.interval = interval
33 self.finished = threading.Event()
34 self._atexit_handler = None
35 if shutdown_on_exit:
36 self._atexit_handler = atexit.register(self.shutdown)
37 self.start()
38
39 def run(self):
40 while not self.finished.wait(self.interval):
41 self.tick()
42
43 def shutdown(self):
44 self.finished.set()
45 self.exporter.shutdown()
46 if self._atexit_handler is not None:
47 atexit.unregister(self._atexit_handler)
48 self._atexit_handler = None
49
50 def tick(self):
51 # Collect all of the meter's metrics to be exported
52 self.meter.collect()
53 # Export the given metrics in the batcher
54 self.exporter.export(self.meter.batcher.checkpoint_set())
55 # Perform post-exporting logic based on batcher configuration
56 self.meter.batcher.finished_collection()
57
[end of opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py
@@ -15,6 +15,8 @@
import atexit
import threading
+from opentelemetry.context import attach, detach, set_value
+
class PushController(threading.Thread):
"""A push based controller, used for exporting.
@@ -50,7 +52,9 @@
def tick(self):
# Collect all of the meter's metrics to be exported
self.meter.collect()
+ token = attach(set_value("suppress_instrumentation", True))
# Export the given metrics in the batcher
self.exporter.export(self.meter.batcher.checkpoint_set())
+ detach(token)
# Perform post-exporting logic based on batcher configuration
self.meter.batcher.finished_collection()
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py\n@@ -15,6 +15,8 @@\n import atexit\n import threading\n \n+from opentelemetry.context import attach, detach, set_value\n+\n \n class PushController(threading.Thread):\n \"\"\"A push based controller, used for exporting.\n@@ -50,7 +52,9 @@\n def tick(self):\n # Collect all of the meter's metrics to be exported\n self.meter.collect()\n+ token = attach(set_value(\"suppress_instrumentation\", True))\n # Export the given metrics in the batcher\n self.exporter.export(self.meter.batcher.checkpoint_set())\n+ detach(token)\n # Perform post-exporting logic based on batcher configuration\n self.meter.batcher.finished_collection()\n", "issue": "Add suppress_instrumentation flag in context for Metrics\nSimilar to [logic](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py#L205) in SpanProcessors, this need to be done in Metrics to avoid duplicated telemetry when using Http ext or other packages relying on this\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport atexit\nimport threading\n\n\nclass PushController(threading.Thread):\n \"\"\"A push based controller, used for exporting.\n\n Uses a worker thread that periodically collects metrics for exporting,\n exports them and performs some post-processing.\n \"\"\"\n\n daemon = True\n\n def __init__(self, meter, exporter, interval, shutdown_on_exit=True):\n super().__init__()\n self.meter = meter\n self.exporter = exporter\n self.interval = interval\n self.finished = threading.Event()\n self._atexit_handler = None\n if shutdown_on_exit:\n self._atexit_handler = atexit.register(self.shutdown)\n self.start()\n\n def run(self):\n while not self.finished.wait(self.interval):\n self.tick()\n\n def shutdown(self):\n self.finished.set()\n self.exporter.shutdown()\n if self._atexit_handler is not None:\n atexit.unregister(self._atexit_handler)\n self._atexit_handler = None\n\n def tick(self):\n # Collect all of the meter's metrics to be exported\n self.meter.collect()\n # Export the given metrics in the batcher\n self.exporter.export(self.meter.batcher.checkpoint_set())\n # Perform post-exporting logic based on batcher configuration\n self.meter.batcher.finished_collection()\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py"}]} | 1,144 | 225 |
gh_patches_debug_8934 | rasdani/github-patches | git_diff | vispy__vispy-1595 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Load STL files into vispy
Hi there, I think I found a bug in vispy/vispy/io/mesh.py in col 42:
mesh = load_stl(fname)
when I try to import a *.stl file with read_mesh(fname), an error occurred like this:
File "D:\Python3.5\lib\site-packages\vispy\io\mesh.py", line 43, in read_mesh
mesh = load_stl(fname)
File "D:\Python3.5\lib\site-packages\vispy\io\stl.py", line 43, in load_stl
file_pos = file_obj.tell()
AttributeError: 'str' object has no attribute 'tell'
By changing col 42 to `mesh = trimesh.load(fname)`, the problem is solved!
</issue>
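A short reproduction sketch of the root cause: `load_stl` expects an open binary file object (something with a `.tell()` method), while `read_mesh` hands it a filename string. Opening the file first avoids the `AttributeError`; the path below is a placeholder:

```python
from vispy.io.stl import load_stl

fname = "model.stl"  # placeholder path

with open(fname, mode="rb") as file_obj:
    mesh = load_stl(file_obj)  # dict-like result, per the fix in this record
```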
<code>
[start of vispy/io/mesh.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """ Reading and writing of data like images and meshes.
6 """
7
8 from os import path as op
9
10 from .wavefront import WavefrontReader, WavefrontWriter
11 from .stl import load_stl
12
13
14 def read_mesh(fname):
15 """Read mesh data from file.
16
17 Parameters
18 ----------
19 fname : str
20 File name to read. Format will be inferred from the filename.
21 Currently only '.obj' and '.obj.gz' are supported.
22
23 Returns
24 -------
25 vertices : array
26 Vertices.
27 faces : array | None
28 Triangle face definitions.
29 normals : array
30 Normals for the mesh.
31 texcoords : array | None
32 Texture coordinates.
33 """
34 # Check format
35 fmt = op.splitext(fname)[1].lower()
36 if fmt == '.gz':
37 fmt = op.splitext(op.splitext(fname)[0])[1].lower()
38
39 if fmt in ('.obj'):
40 return WavefrontReader.read(fname)
41 elif fmt in ('.stl'):
42 mesh = load_stl(fname)
43 vertices = mesh.vertices
44 faces = mesh.faces
45 normals = mesh.face_normals
46 texcoords = None
47 return vertices, faces, normals, texcoords
48 elif not format:
49 raise ValueError('read_mesh needs could not determine format.')
50 else:
51 raise ValueError('read_mesh does not understand format %s.' % fmt)
52
53
54 def write_mesh(fname, vertices, faces, normals, texcoords, name='',
55 format='obj', overwrite=False, reshape_faces=True):
56 """ Write mesh data to file.
57
58 Parameters
59 ----------
60 fname : str
61 Filename to write. Must end with ".obj" or ".gz".
62 vertices : array
63 Vertices.
64 faces : array | None
65 Triangle face definitions.
66 normals : array
67 Normals for the mesh.
68 texcoords : array | None
69 Texture coordinates.
70 name : str
71 Name of the object.
72 format : str
73 Currently only "obj" is supported.
74 overwrite : bool
75 If the file exists, overwrite it.
76 reshape_faces : bool
77 Reshape the `faces` array to (Nf, 3). Set to `False`
78 if you need to write a mesh with non triangular faces.
79 """
80 # Check file
81 if op.isfile(fname) and not overwrite:
82 raise IOError('file "%s" exists, use overwrite=True' % fname)
83
84 # Check format
85 if format not in ('obj'):
86 raise ValueError('Only "obj" format writing currently supported')
87 WavefrontWriter.write(fname, vertices, faces,
88 normals, texcoords, name, reshape_faces)
89
[end of vispy/io/mesh.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vispy/io/mesh.py b/vispy/io/mesh.py
--- a/vispy/io/mesh.py
+++ b/vispy/io/mesh.py
@@ -39,10 +39,11 @@
if fmt in ('.obj'):
return WavefrontReader.read(fname)
elif fmt in ('.stl'):
- mesh = load_stl(fname)
- vertices = mesh.vertices
- faces = mesh.faces
- normals = mesh.face_normals
+ file_obj = open(fname, mode='rb')
+ mesh = load_stl(file_obj)
+ vertices = mesh['vertices']
+ faces = mesh['faces']
+ normals = mesh['face_normals']
texcoords = None
return vertices, faces, normals, texcoords
elif not format:
| {"golden_diff": "diff --git a/vispy/io/mesh.py b/vispy/io/mesh.py\n--- a/vispy/io/mesh.py\n+++ b/vispy/io/mesh.py\n@@ -39,10 +39,11 @@\n if fmt in ('.obj'):\n return WavefrontReader.read(fname)\n elif fmt in ('.stl'):\n- mesh = load_stl(fname)\n- vertices = mesh.vertices\n- faces = mesh.faces\n- normals = mesh.face_normals\n+ file_obj = open(fname, mode='rb')\n+ mesh = load_stl(file_obj)\n+ vertices = mesh['vertices']\n+ faces = mesh['faces']\n+ normals = mesh['face_normals']\n texcoords = None\n return vertices, faces, normals, texcoords\n elif not format:\n", "issue": "Load STL files into vispy\nHi there, I think I found a bug in vispy/vispy/io/mesh.py in col 42:\r\nmesh = load_stl(fname)\r\nwhen I try to import a *.stl file by read_mesh(fname), an error occured like this: \r\n File \"D:\\Python3.5\\lib\\site-packages\\vispy\\io\\mesh.py\", line 43, in read_mesh\r\n mesh = load_stl(fname)\r\n File \"D:\\Python3.5\\lib\\site-packages\\vispy\\io\\stl.py\", line 43, in load_stl\r\n file_pos = file_obj.tell()\r\nAttributeError: 'str' object has no attribute 'tell'\r\nby change col42 into :mesh = trimesh.load(fname), problem soved!\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\" Reading and writing of data like images and meshes.\n\"\"\"\n\nfrom os import path as op\n\nfrom .wavefront import WavefrontReader, WavefrontWriter\nfrom .stl import load_stl\n\n\ndef read_mesh(fname):\n \"\"\"Read mesh data from file.\n\n Parameters\n ----------\n fname : str\n File name to read. Format will be inferred from the filename.\n Currently only '.obj' and '.obj.gz' are supported.\n\n Returns\n -------\n vertices : array\n Vertices.\n faces : array | None\n Triangle face definitions.\n normals : array\n Normals for the mesh.\n texcoords : array | None\n Texture coordinates.\n \"\"\"\n # Check format\n fmt = op.splitext(fname)[1].lower()\n if fmt == '.gz':\n fmt = op.splitext(op.splitext(fname)[0])[1].lower()\n\n if fmt in ('.obj'):\n return WavefrontReader.read(fname)\n elif fmt in ('.stl'):\n mesh = load_stl(fname)\n vertices = mesh.vertices\n faces = mesh.faces\n normals = mesh.face_normals\n texcoords = None\n return vertices, faces, normals, texcoords\n elif not format:\n raise ValueError('read_mesh needs could not determine format.')\n else:\n raise ValueError('read_mesh does not understand format %s.' % fmt)\n\n\ndef write_mesh(fname, vertices, faces, normals, texcoords, name='',\n format='obj', overwrite=False, reshape_faces=True):\n \"\"\" Write mesh data to file.\n\n Parameters\n ----------\n fname : str\n Filename to write. Must end with \".obj\" or \".gz\".\n vertices : array\n Vertices.\n faces : array | None\n Triangle face definitions.\n normals : array\n Normals for the mesh.\n texcoords : array | None\n Texture coordinates.\n name : str\n Name of the object.\n format : str\n Currently only \"obj\" is supported.\n overwrite : bool\n If the file exists, overwrite it.\n reshape_faces : bool\n Reshape the `faces` array to (Nf, 3). 
Set to `False`\n if you need to write a mesh with non triangular faces.\n \"\"\"\n # Check file\n if op.isfile(fname) and not overwrite:\n raise IOError('file \"%s\" exists, use overwrite=True' % fname)\n\n # Check format\n if format not in ('obj'):\n raise ValueError('Only \"obj\" format writing currently supported')\n WavefrontWriter.write(fname, vertices, faces,\n normals, texcoords, name, reshape_faces)\n", "path": "vispy/io/mesh.py"}]} | 1,491 | 180 |
gh_patches_debug_12156 | rasdani/github-patches | git_diff | nltk__nltk-3022 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nltk.chat.chatbot() endless loop
When I type `import nltk` followed by `nltk.chat.chatbots()`, it lists/asks which one I want to talk to, and then endlessly scrolls the following: ` Enter a number in the range 1-5: Error: bad chatbot number`, in both Jupyter and Spyder.
</issue>
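A hedged sketch of the failure mode: under Jupyter or Spyder, `sys.stdin` is not the interactive console, so `readline()` returns `""` immediately and the loop spins forever. Using `input()`, which these front ends intercept, blocks for the user as intended:

```python
def pick_bot(botcount):
    # input() raises EOFError instead of silently returning "" when stdin
    # is exhausted, so the loop cannot spin unattended.
    while True:
        choice = input(f"\nEnter a number in the range 1-{botcount}: ").strip()
        if choice.isdigit() and 1 <= int(choice) <= botcount:
            return int(choice) - 1
        print(" Error: bad chatbot number")
```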
<code>
[start of nltk/chat/__init__.py]
1 # Natural Language Toolkit: Chatbots
2 #
3 # Copyright (C) 2001-2022 NLTK Project
4 # Authors: Steven Bird <[email protected]>
5 # URL: <https://www.nltk.org/>
6 # For license information, see LICENSE.TXT
7
8 # Based on an Eliza implementation by Joe Strout <[email protected]>,
9 # Jeff Epler <[email protected]> and Jez Higgins <[email protected]>.
10
11 """
12 A class for simple chatbots. These perform simple pattern matching on sentences
13 typed by users, and respond with automatically generated sentences.
14
15 These chatbots may not work using the windows command line or the
16 windows IDLE GUI.
17 """
18
19 from nltk.chat.eliza import eliza_chat
20 from nltk.chat.iesha import iesha_chat
21 from nltk.chat.rude import rude_chat
22 from nltk.chat.suntsu import suntsu_chat
23 from nltk.chat.util import Chat
24 from nltk.chat.zen import zen_chat
25
26 bots = [
27 (eliza_chat, "Eliza (psycho-babble)"),
28 (iesha_chat, "Iesha (teen anime junky)"),
29 (rude_chat, "Rude (abusive bot)"),
30 (suntsu_chat, "Suntsu (Chinese sayings)"),
31 (zen_chat, "Zen (gems of wisdom)"),
32 ]
33
34
35 def chatbots():
36 import sys
37
38 print("Which chatbot would you like to talk to?")
39 botcount = len(bots)
40 for i in range(botcount):
41 print(" %d: %s" % (i + 1, bots[i][1]))
42 while True:
43 print("\nEnter a number in the range 1-%d: " % botcount, end=" ")
44 choice = sys.stdin.readline().strip()
45 if choice.isdigit() and (int(choice) - 1) in range(botcount):
46 break
47 else:
48 print(" Error: bad chatbot number")
49
50 chatbot = bots[int(choice) - 1][0]
51 chatbot()
52
[end of nltk/chat/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nltk/chat/__init__.py b/nltk/chat/__init__.py
--- a/nltk/chat/__init__.py
+++ b/nltk/chat/__init__.py
@@ -33,15 +33,12 @@
def chatbots():
- import sys
-
print("Which chatbot would you like to talk to?")
botcount = len(bots)
for i in range(botcount):
print(" %d: %s" % (i + 1, bots[i][1]))
while True:
- print("\nEnter a number in the range 1-%d: " % botcount, end=" ")
- choice = sys.stdin.readline().strip()
+ choice = input(f"\nEnter a number in the range 1-{botcount}: ").strip()
if choice.isdigit() and (int(choice) - 1) in range(botcount):
break
else:
| {"golden_diff": "diff --git a/nltk/chat/__init__.py b/nltk/chat/__init__.py\n--- a/nltk/chat/__init__.py\n+++ b/nltk/chat/__init__.py\n@@ -33,15 +33,12 @@\n \n \n def chatbots():\n- import sys\n-\n print(\"Which chatbot would you like to talk to?\")\n botcount = len(bots)\n for i in range(botcount):\n print(\" %d: %s\" % (i + 1, bots[i][1]))\n while True:\n- print(\"\\nEnter a number in the range 1-%d: \" % botcount, end=\" \")\n- choice = sys.stdin.readline().strip()\n+ choice = input(f\"\\nEnter a number in the range 1-{botcount}: \").strip()\n if choice.isdigit() and (int(choice) - 1) in range(botcount):\n break\n else:\n", "issue": "nltk.chat.chatbot() endless loop\nWhen I type `import nltk` followed by `nltk.chat.chatbots()`, it lists/asks which one I want to talk to, and then endlessly scrolls the following: ` Enter a number in the range 1-5: Error: bad chatbot number`, in both Jupyter and Spyder.\n", "before_files": [{"content": "# Natural Language Toolkit: Chatbots\n#\n# Copyright (C) 2001-2022 NLTK Project\n# Authors: Steven Bird <[email protected]>\n# URL: <https://www.nltk.org/>\n# For license information, see LICENSE.TXT\n\n# Based on an Eliza implementation by Joe Strout <[email protected]>,\n# Jeff Epler <[email protected]> and Jez Higgins <[email protected]>.\n\n\"\"\"\nA class for simple chatbots. These perform simple pattern matching on sentences\ntyped by users, and respond with automatically generated sentences.\n\nThese chatbots may not work using the windows command line or the\nwindows IDLE GUI.\n\"\"\"\n\nfrom nltk.chat.eliza import eliza_chat\nfrom nltk.chat.iesha import iesha_chat\nfrom nltk.chat.rude import rude_chat\nfrom nltk.chat.suntsu import suntsu_chat\nfrom nltk.chat.util import Chat\nfrom nltk.chat.zen import zen_chat\n\nbots = [\n (eliza_chat, \"Eliza (psycho-babble)\"),\n (iesha_chat, \"Iesha (teen anime junky)\"),\n (rude_chat, \"Rude (abusive bot)\"),\n (suntsu_chat, \"Suntsu (Chinese sayings)\"),\n (zen_chat, \"Zen (gems of wisdom)\"),\n]\n\n\ndef chatbots():\n import sys\n\n print(\"Which chatbot would you like to talk to?\")\n botcount = len(bots)\n for i in range(botcount):\n print(\" %d: %s\" % (i + 1, bots[i][1]))\n while True:\n print(\"\\nEnter a number in the range 1-%d: \" % botcount, end=\" \")\n choice = sys.stdin.readline().strip()\n if choice.isdigit() and (int(choice) - 1) in range(botcount):\n break\n else:\n print(\" Error: bad chatbot number\")\n\n chatbot = bots[int(choice) - 1][0]\n chatbot()\n", "path": "nltk/chat/__init__.py"}]} | 1,159 | 201 |
gh_patches_debug_27761 | rasdani/github-patches | git_diff | kserve__kserve-2216 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incompatible versions for google protos required by kserve and dependencies
/kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
Trying to install the kserve module, kserve requires `googleapis-common-protos==1.53.0`, but some dependencies of kserve require `googleapis-common-protos<2.0dev,>=1.56.2`, and hence kserve cannot be installed.
**What did you expect to happen:**
The version of protos required by kserve should be updated to be compatible with its dependencies
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Istio Version:
- Knative Version:
- KFServing Version:
- Kubeflow version:
- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]
- Minikube version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
</issue>
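A quick way to confirm the clash (a sketch; the report does not name the exact dependency that requests `>=1.56.2`) is to test the pinned version against the conflicting specifier with the `packaging` library:

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    pin = Version("1.53.0")                      # what kserve pins
    needed = SpecifierSet(">=1.56.2,<2.0.dev0")  # what its dependencies request
    print(pin in needed)                         # False -> pip cannot satisfy both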
<code>
[start of python/kserve/setup.py]
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import setuptools
16
17 TESTS_REQUIRES = [
18 'pytest',
19 'pytest-xdist',
20 'pytest-cov',
21 'pytest-asyncio',
22 'pytest-tornasync',
23 'mypy'
24 ]
25
26 with open('requirements.txt') as f:
27 REQUIRES = f.readlines()
28
29 setuptools.setup(
30 name='kserve',
31 version='0.8.0',
32 author="The KServe Authors",
33 author_email='[email protected], [email protected], [email protected]',
34 license="Apache License Version 2.0",
35 url="https://github.com/kserve/kserve/tree/master/python/kserve",
36 description="KServe Python SDK",
37 long_description="Python SDK for KServe Server and Client.",
38 python_requires='>=3.6',
39 packages=[
40 'kserve',
41 'kserve.api',
42 'kserve.constants',
43 'kserve.models',
44 'kserve.handlers',
45 'kserve.utils',
46 ],
47 package_data={'': ['requirements.txt']},
48 include_package_data=True,
49 zip_safe=False,
50 classifiers=[
51 'Intended Audience :: Developers',
52 'Intended Audience :: Education',
53 'Intended Audience :: Science/Research',
54 'Programming Language :: Python :: 3',
55 'Programming Language :: Python :: 3.6',
56 'Programming Language :: Python :: 3.7',
57 "License :: OSI Approved :: Apache Software License",
58 "Operating System :: OS Independent",
59 'Topic :: Scientific/Engineering',
60 'Topic :: Scientific/Engineering :: Artificial Intelligence',
61 'Topic :: Software Development',
62 'Topic :: Software Development :: Libraries',
63 'Topic :: Software Development :: Libraries :: Python Modules',
64 ],
65 install_requires=REQUIRES,
66 tests_require=TESTS_REQUIRES,
67 extras_require={'test': TESTS_REQUIRES}
68 )
69
[end of python/kserve/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kserve/setup.py b/python/kserve/setup.py
--- a/python/kserve/setup.py
+++ b/python/kserve/setup.py
@@ -28,14 +28,14 @@
setuptools.setup(
name='kserve',
- version='0.8.0',
+ version='0.9.0rc0',
author="The KServe Authors",
author_email='[email protected], [email protected], [email protected]',
license="Apache License Version 2.0",
url="https://github.com/kserve/kserve/tree/master/python/kserve",
description="KServe Python SDK",
long_description="Python SDK for KServe Server and Client.",
- python_requires='>=3.6',
+ python_requires='>=3.7',
packages=[
'kserve',
'kserve.api',
@@ -52,8 +52,9 @@
'Intended Audience :: Education',
'Intended Audience :: Science/Research',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
'Topic :: Scientific/Engineering',
| {"golden_diff": "diff --git a/python/kserve/setup.py b/python/kserve/setup.py\n--- a/python/kserve/setup.py\n+++ b/python/kserve/setup.py\n@@ -28,14 +28,14 @@\n \n setuptools.setup(\n name='kserve',\n- version='0.8.0',\n+ version='0.9.0rc0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n- python_requires='>=3.6',\n+ python_requires='>=3.7',\n packages=[\n 'kserve',\n 'kserve.api',\n@@ -52,8 +52,9 @@\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n+ 'Programming Language :: Python :: 3.9',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n", "issue": "Incompatible versions for google protos required by kserve and dependencies\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\n[A clear and concise description of what the bug is.]\r\nTrying to install the kserve module, kserve requires `googleapis-common-protos==1.53.0`, but some dependencies of kserve require `googleapis-common-protos<2.0dev,>=1.56.2`, and hence kserve cannot be installed.\r\n\r\n\r\n**What did you expect to happen:**\r\nThe version of protos required by kserve should be updated to be compatible with its dependencies\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version:\r\n- Kubeflow version:\r\n- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. 
from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nTESTS_REQUIRES = [\n 'pytest',\n 'pytest-xdist',\n 'pytest-cov',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open('requirements.txt') as f:\n REQUIRES = f.readlines()\n\nsetuptools.setup(\n name='kserve',\n version='0.8.0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n python_requires='>=3.6',\n packages=[\n 'kserve',\n 'kserve.api',\n 'kserve.constants',\n 'kserve.models',\n 'kserve.handlers',\n 'kserve.utils',\n ],\n package_data={'': ['requirements.txt']},\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRES,\n extras_require={'test': TESTS_REQUIRES}\n)\n", "path": "python/kserve/setup.py"}]} | 1,439 | 317 |
gh_patches_debug_23345 | rasdani/github-patches | git_diff | elastic__apm-agent-python-958 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Expand k8s pod ID discovery regex
Implementing elastic/apm#344
</issue>
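For context, illustrative cgroup paths (assumed from common kubelet layouts, with shortened pod UIDs; they are not quoted from the linked issue). The current pattern handles the first two shapes, while the third, which lacks an intermediate QoS slice, is the kind of path the expansion needs to cover:

    samples = [
        "/kubepods/burstable/pod90d81341-92de-11e7",                                  # cgroupfs driver
        "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d81341_92de_11e7.slice",  # systemd + QoS slice
        "/kubepods.slice/kubepods-pod90d81341_92de_11e7.slice",                       # systemd, no QoS slice
    ]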
<code>
[start of elasticapm/utils/cgroup.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 import os
32 import re
33
34 CGROUP_PATH = "/proc/self/cgroup"
35
36 SYSTEMD_SCOPE_SUFFIX = ".scope"
37
38 kubepods_regexp = re.compile(
39 r"(?:^/kubepods[\S]*/pod([^/]+)$)|(?:^/kubepods\.slice/kubepods-[^/]+\.slice/kubepods-[^/]+-pod([^/]+)\.slice$)"
40 )
41
42 container_id_regexp = re.compile(
43 "^(?:[0-9a-f]{64}|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4,})$", re.IGNORECASE
44 )
45
46
47 def get_cgroup_container_metadata():
48 """
49 Reads docker/kubernetes metadata (container id, pod id) from /proc/self/cgroup
50
51 The result is a nested dictionary with the detected IDs, e.g.
52
53 {
54 "container": {"id": "2227daf62df6694645fee5df53c1f91271546a9560e8600a525690ae252b7f63"},
55 "pod": {"uid": "90d81341_92de_11e7_8cf2_507b9d4141fa"}
56 }
57
58 :return: a dictionary with the detected ids or {}
59 """
60 if not os.path.exists(CGROUP_PATH):
61 return {}
62 with open(CGROUP_PATH) as f:
63 return parse_cgroups(f) or {}
64
65
66 def parse_cgroups(filehandle):
67 """
68 Reads lines from a file handle and tries to parse docker container IDs and kubernetes Pod IDs.
69
70 See tests.utils.docker_tests.test_cgroup_parsing for a set of test cases
71
72 :param filehandle:
73 :return: nested dictionary or None
74 """
75 for line in filehandle:
76 parts = line.strip().split(":")
77 if len(parts) != 3:
78 continue
79 cgroup_path = parts[2]
80
81 # Depending on the filesystem driver used for cgroup
82 # management, the paths in /proc/pid/cgroup will have
83 # one of the following formats in a Docker container:
84 #
85 # systemd: /system.slice/docker-<container-ID>.scope
86 # cgroupfs: /docker/<container-ID>
87 #
88 # In a Kubernetes pod, the cgroup path will look like:
89 #
90 # systemd:/kubepods.slice/kubepods-<QoS-class>.slice/kubepods-<QoS-class>-pod<pod-UID>.slice/<container-iD>.scope
91 # cgroupfs:/kubepods/<QoS-class>/pod<pod-UID>/<container-iD>
92
93 directory, container_id = os.path.split(cgroup_path)
94 if container_id.endswith(SYSTEMD_SCOPE_SUFFIX):
95 container_id = container_id[: -len(SYSTEMD_SCOPE_SUFFIX)]
96 if "-" in container_id:
97 container_id = container_id.split("-", 1)[1]
98 kubepods_match = kubepods_regexp.match(directory)
99 if kubepods_match:
100 pod_id = kubepods_match.group(1)
101 if not pod_id:
102 pod_id = kubepods_match.group(2)
103 if pod_id:
104 pod_id = pod_id.replace("_", "-")
105 return {"container": {"id": container_id}, "kubernetes": {"pod": {"uid": pod_id}}}
106 elif container_id_regexp.match(container_id):
107 return {"container": {"id": container_id}}
108
[end of elasticapm/utils/cgroup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/utils/cgroup.py b/elasticapm/utils/cgroup.py
--- a/elasticapm/utils/cgroup.py
+++ b/elasticapm/utils/cgroup.py
@@ -36,7 +36,7 @@
SYSTEMD_SCOPE_SUFFIX = ".scope"
kubepods_regexp = re.compile(
- r"(?:^/kubepods[\S]*/pod([^/]+)$)|(?:^/kubepods\.slice/kubepods-[^/]+\.slice/kubepods-[^/]+-pod([^/]+)\.slice$)"
+ r"(?:^/kubepods[\S]*/pod([^/]+)$)|(?:^/kubepods\.slice/(kubepods-[^/]+\.slice/)?kubepods[^/]*-pod([^/]+)\.slice$)"
)
container_id_regexp = re.compile(
@@ -97,9 +97,9 @@
container_id = container_id.split("-", 1)[1]
kubepods_match = kubepods_regexp.match(directory)
if kubepods_match:
- pod_id = kubepods_match.group(1)
+ pod_id = kubepods_match.group(1) # if first part of kubepods_regexp matched
if not pod_id:
- pod_id = kubepods_match.group(2)
+ pod_id = kubepods_match.group(3) # if second part of kubepods_regexp matched
if pod_id:
pod_id = pod_id.replace("_", "-")
return {"container": {"id": container_id}, "kubernetes": {"pod": {"uid": pod_id}}}
| {"golden_diff": "diff --git a/elasticapm/utils/cgroup.py b/elasticapm/utils/cgroup.py\n--- a/elasticapm/utils/cgroup.py\n+++ b/elasticapm/utils/cgroup.py\n@@ -36,7 +36,7 @@\n SYSTEMD_SCOPE_SUFFIX = \".scope\"\n \n kubepods_regexp = re.compile(\n- r\"(?:^/kubepods[\\S]*/pod([^/]+)$)|(?:^/kubepods\\.slice/kubepods-[^/]+\\.slice/kubepods-[^/]+-pod([^/]+)\\.slice$)\"\n+ r\"(?:^/kubepods[\\S]*/pod([^/]+)$)|(?:^/kubepods\\.slice/(kubepods-[^/]+\\.slice/)?kubepods[^/]*-pod([^/]+)\\.slice$)\"\n )\n \n container_id_regexp = re.compile(\n@@ -97,9 +97,9 @@\n container_id = container_id.split(\"-\", 1)[1]\n kubepods_match = kubepods_regexp.match(directory)\n if kubepods_match:\n- pod_id = kubepods_match.group(1)\n+ pod_id = kubepods_match.group(1) # if first part of kubepods_regexp matched\n if not pod_id:\n- pod_id = kubepods_match.group(2)\n+ pod_id = kubepods_match.group(3) # if second part of kubepods_regexp matched\n if pod_id:\n pod_id = pod_id.replace(\"_\", \"-\")\n return {\"container\": {\"id\": container_id}, \"kubernetes\": {\"pod\": {\"uid\": pod_id}}}\n", "issue": "Expand k8s pod ID discovery regex\nImplementing elastic/apm#344\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport os\nimport re\n\nCGROUP_PATH = \"/proc/self/cgroup\"\n\nSYSTEMD_SCOPE_SUFFIX = \".scope\"\n\nkubepods_regexp = re.compile(\n r\"(?:^/kubepods[\\S]*/pod([^/]+)$)|(?:^/kubepods\\.slice/kubepods-[^/]+\\.slice/kubepods-[^/]+-pod([^/]+)\\.slice$)\"\n)\n\ncontainer_id_regexp = re.compile(\n \"^(?:[0-9a-f]{64}|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4,})$\", re.IGNORECASE\n)\n\n\ndef get_cgroup_container_metadata():\n \"\"\"\n Reads docker/kubernetes metadata (container id, pod id) from /proc/self/cgroup\n\n The result is a nested dictionary with the detected IDs, e.g.\n\n {\n \"container\": {\"id\": \"2227daf62df6694645fee5df53c1f91271546a9560e8600a525690ae252b7f63\"},\n \"pod\": {\"uid\": \"90d81341_92de_11e7_8cf2_507b9d4141fa\"}\n }\n\n :return: a dictionary with the detected ids or {}\n \"\"\"\n if not os.path.exists(CGROUP_PATH):\n return {}\n with open(CGROUP_PATH) as f:\n return parse_cgroups(f) or {}\n\n\ndef parse_cgroups(filehandle):\n \"\"\"\n Reads lines from a file handle and tries to parse docker container IDs and kubernetes Pod IDs.\n\n See tests.utils.docker_tests.test_cgroup_parsing for a set of test cases\n\n :param filehandle:\n :return: nested dictionary or None\n \"\"\"\n for line in filehandle:\n parts = line.strip().split(\":\")\n if len(parts) != 3:\n continue\n cgroup_path = parts[2]\n\n # Depending on the filesystem driver used for cgroup\n # management, the paths in /proc/pid/cgroup will have\n # one of the following formats in a Docker container:\n #\n # systemd: /system.slice/docker-<container-ID>.scope\n # cgroupfs: /docker/<container-ID>\n #\n # In a Kubernetes pod, the cgroup path will look like:\n #\n # systemd:/kubepods.slice/kubepods-<QoS-class>.slice/kubepods-<QoS-class>-pod<pod-UID>.slice/<container-iD>.scope\n # cgroupfs:/kubepods/<QoS-class>/pod<pod-UID>/<container-iD>\n\n directory, container_id = os.path.split(cgroup_path)\n if container_id.endswith(SYSTEMD_SCOPE_SUFFIX):\n container_id = container_id[: -len(SYSTEMD_SCOPE_SUFFIX)]\n if \"-\" in container_id:\n container_id = container_id.split(\"-\", 1)[1]\n kubepods_match = kubepods_regexp.match(directory)\n if kubepods_match:\n pod_id = kubepods_match.group(1)\n if not pod_id:\n pod_id = kubepods_match.group(2)\n if pod_id:\n pod_id = pod_id.replace(\"_\", \"-\")\n return {\"container\": {\"id\": container_id}, \"kubernetes\": {\"pod\": {\"uid\": pod_id}}}\n elif container_id_regexp.match(container_id):\n return {\"container\": {\"id\": container_id}}\n", "path": "elasticapm/utils/cgroup.py"}]} | 1,935 | 384 |
gh_patches_debug_13515 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3267 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add openAPI Specification for UI related databases endpoint
Generate an OpenAPI spec for the `databases` endpoint corresponding to the UI
</issue>
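One possible shape for the change (a sketch only; any endpoint prefixes beyond the UI one named in the issue are assumptions): keep an explicit allow-list of path prefixes so the UI namespace can be whitelisted alongside the DB one. `str.startswith` accepts a tuple, which keeps the filter to a single test per endpoint:

    ALLOWED_PREFIXES = (
        "/api/db/v0/databases/",
        "/api/ui/v0/databases/",  # the UI endpoint this issue asks to expose
    )

    def custom_preprocessing_hook(endpoints):
        return [
            (path, path_regex, method, callback)
            for (path, path_regex, method, callback) in endpoints
            if path.startswith(ALLOWED_PREFIXES)
        ]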
<code>
[start of config/settings/openapi.py]
1 def custom_preprocessing_hook(endpoints):
2 filtered = []
3 for (path, path_regex, method, callback) in endpoints:
4 # Remove all but DRF API endpoints
5 if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/") or path.startswith("/api/db/v0/tables/"):
6 filtered.append((path, path_regex, method, callback))
7 return filtered
8
9
10 def remove_url_prefix_hook(result, **kwargs):
11 # Remove namespace and version URL prefix from the operation Id of the generated API schema
12 for path, path_info in result['paths'].items():
13 for method, operation in path_info.items():
14 operation_id = operation.get('operationId')
15 if operation_id:
16 if path.startswith('/api/db/v0/'):
17 operation['operationId'] = operation_id.replace('db_v0_', '')
18 elif path.startswith('/api/ui/v0/'):
19 operation['operationId'] = operation_id.replace('ui_v0_', '')
20
21 return result
22
[end of config/settings/openapi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/config/settings/openapi.py b/config/settings/openapi.py
--- a/config/settings/openapi.py
+++ b/config/settings/openapi.py
@@ -1,9 +1,14 @@
def custom_preprocessing_hook(endpoints):
- filtered = []
- for (path, path_regex, method, callback) in endpoints:
- # Remove all but DRF API endpoints
- if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/") or path.startswith("/api/db/v0/tables/"):
- filtered.append((path, path_regex, method, callback))
+ prefixes = [
+ "/api/db/v0/databases/",
+ "/api/db/v0/data_files/",
+ "/api/db/v0/schemas/",
+ "/api/db/v0/tables/",
+ "/api/db/v0/links/",
+ "/api/db/v0/queries/",
+ "/api/ui/v0/databases/"
+ ]
+ filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]
return filtered
| {"golden_diff": "diff --git a/config/settings/openapi.py b/config/settings/openapi.py\n--- a/config/settings/openapi.py\n+++ b/config/settings/openapi.py\n@@ -1,9 +1,14 @@\n def custom_preprocessing_hook(endpoints):\n- filtered = []\n- for (path, path_regex, method, callback) in endpoints:\n- # Remove all but DRF API endpoints\n- if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\") or path.startswith(\"/api/db/v0/tables/\"):\n- filtered.append((path, path_regex, method, callback))\n+ prefixes = [\n+ \"/api/db/v0/databases/\",\n+ \"/api/db/v0/data_files/\",\n+ \"/api/db/v0/schemas/\",\n+ \"/api/db/v0/tables/\",\n+ \"/api/db/v0/links/\",\n+ \"/api/db/v0/queries/\",\n+ \"/api/ui/v0/databases/\"\n+ ]\n+ filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]\n return filtered\n", "issue": "Add openAPI Specification for UI related databases endpoint\nGenerate openAPI spec for `databases` endpoint corresponding to UI\n", "before_files": [{"content": "def custom_preprocessing_hook(endpoints):\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\") or path.startswith(\"/api/db/v0/tables/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}]} | 822 | 263 |
gh_patches_debug_26362 | rasdani/github-patches | git_diff | watchdogpolska__feder-329 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Autocomplete for JST in MonitoringFilter
</issue>
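JST is the Polish abbreviation for local-government units (jednostka samorządu terytorialnego), handled here by the `teryt` app. A sketch of the direction the fix could take: cascading django-autocomplete-light widgets, each forwarding its parent unit. The filter classes match the diff further down; treat the URL names as assumptions:

    import django_filters
    from dal import autocomplete
    from teryt_tree.dal_ext.filters import (
        VoivodeshipFilter, CountyFilter, CommunityFilter)

    class MonitoringFilter(django_filters.FilterSet):
        voivodeship = VoivodeshipFilter(
            widget=autocomplete.ModelSelect2(url='teryt:voivodeship-autocomplete'))
        county = CountyFilter(
            widget=autocomplete.ModelSelect2(url='teryt:county-autocomplete',
                                             forward=['voivodeship']))
        community = CommunityFilter(
            widget=autocomplete.ModelSelect2(url='teryt:community-autocomplete',
                                             forward=['county']))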
<code>
[start of feder/monitorings/models.py]
1 from itertools import groupby
2
3 import reversion
4 from autoslug.fields import AutoSlugField
5 from django.conf import settings
6 from django.contrib.auth import get_user_model
7 from django.core.urlresolvers import reverse
8 from django.db import models
9 from django.utils.translation import ugettext_lazy as _
10 from guardian.models import GroupObjectPermissionBase, UserObjectPermissionBase
11 from model_utils.models import TimeStampedModel
12
13 from .validators import validate_template_syntax
14
15 _('Monitorings index')
16 _('Can add Monitoring')
17 _('Can change Monitoring')
18 _('Can delete Monitoring')
19
20 NOTIFY_HELP = _("Notify about new alerts person who can view alerts")
21
22
23 class MonitoringQuerySet(models.QuerySet):
24 def with_case_count(self):
25 return self.annotate(case_count=models.Count('case'))
26
27
28 @reversion.register()
29 class Monitoring(TimeStampedModel):
30 perm_model = 'monitoringuserobjectpermission'
31 name = models.CharField(verbose_name=_("Name"), max_length=50)
32 slug = AutoSlugField(populate_from='name', verbose_name=_("Slug"), unique=True)
33 user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_("User"))
34 description = models.TextField(verbose_name=_("Description"), blank=True)
35 subject = models.CharField(verbose_name=_("Subject"), max_length=80)
36 template = models.TextField(verbose_name=_("Template"),
37 help_text=_("Use {{EMAIL}} for insert reply address"),
38 validators=[validate_template_syntax])
39 email_footer = models.TextField(default='',
40 verbose_name=_("Email footer"),
41 help_text=_("Footer for sent mail and replies"))
42 notify_alert = models.BooleanField(default=True,
43 verbose_name=_("Notify about alerts"),
44 help_text=NOTIFY_HELP)
45 objects = MonitoringQuerySet.as_manager()
46
47 class Meta:
48 verbose_name = _("Monitoring")
49 verbose_name_plural = _("Monitoring")
50 ordering = ['created', ]
51 permissions = (
52 ('add_questionary', _('Can add questionary')),
53 ('change_questionary', _('Can change questionary')),
54 ('delete_questionary', _('Can delete questionary')),
55 ('add_case', _('Can add case')),
56 ('change_case', _('Can change case')),
57 ('delete_case', _('Can delete case')),
58 ('add_task', _('Can add task')),
59 ('change_task', _('Can change task')),
60 ('delete_task', _('Can delete task')),
61 ('add_letter', _('Can add letter')),
62 ('reply', _('Can reply')),
63 ('add_draft', _('Add reply draft')),
64 ('change_letter', _('Can change task')),
65 ('delete_letter', _('Can delete letter')),
66 ('view_alert', _('Can view alert')),
67 ('change_alert', _('Can change alert')),
68 ('delete_alert', _('Can delete alert')),
69 ('manage_perm', _('Can manage perms')),
70 ('select_survey', _('Can select answer')),
71 ('view_log', _('Can view logs')),
72 )
73
74 def __unicode__(self):
75 return self.name
76
77 def get_users_with_perm(self, perm=None):
78 qs = get_user_model().objects.filter(**{self.perm_model + '__content_object': self})
79 if perm:
80 qs = qs.filter(**{self.perm_model + '__permission__codename': perm})
81 return qs.distinct().all()
82
83 def get_absolute_url(self):
84 return reverse('monitorings:details', kwargs={'slug': self.slug})
85
86 def permission_map(self):
87 dataset = (self.monitoringuserobjectpermission_set.select_related('permission', 'user').
88 order_by('permission').all())
89 user_list = {x.user for x in dataset}
90
91 def index_generate():
92 grouped = groupby(dataset, lambda x: x.permission)
93 for perm, users in grouped:
94 user_perm_list = [x.user for x in users]
95 yield perm, [(perm, (user in user_perm_list)) for user in user_list]
96
97 return user_list, index_generate()
98
99
100 class MonitoringUserObjectPermission(UserObjectPermissionBase):
101 content_object = models.ForeignKey(Monitoring)
102
103
104 class MonitoringGroupObjectPermission(GroupObjectPermissionBase):
105 content_object = models.ForeignKey(Monitoring)
106
[end of feder/monitorings/models.py]
[start of feder/monitorings/filters.py]
1 # -*- coding: utf-8 -*-
2 import django_filters
3 from dal import autocomplete
4 from django.contrib.auth import get_user_model
5 from django.db.models import Count
6 from django.utils.translation import ugettext_lazy as _
7
8 from .models import Monitoring
9
10
11 class MonitoringFilter(django_filters.FilterSet):
12 created = django_filters.DateRangeFilter(label=_("Creation date"))
13
14 def __init__(self, *args, **kwargs):
15 super(MonitoringFilter, self).__init__(*args, **kwargs)
16 self.filters['name'].lookup_expr = 'icontains'
17 self.filters['name'].label = _("Name")
18 # Limit users select to which have any cases
19 qs = (get_user_model().objects.
20 annotate(case_count=Count('case')).
21 filter(case_count__gt=0).all())
22 self.filters['user'].extra['queryset'] = qs
23 self.filters['user'].widget = autocomplete.ModelSelect2(url='users:autocomplete')
24
25 class Meta:
26 model = Monitoring
27 fields = ['name', 'user', 'created']
28 order_by = ['created', '-created', '-case_count']
29 order_by = [
30 ('created', _('Creation date (ascending)')),
31 ('-created', _('Creation date (descending)')),
32 ('case_count', _('Cases count (ascending)')),
33 ('-case_count', _('Cases count (descending)')),
34 ]
35
[end of feder/monitorings/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/feder/monitorings/filters.py b/feder/monitorings/filters.py
--- a/feder/monitorings/filters.py
+++ b/feder/monitorings/filters.py
@@ -6,10 +6,22 @@
from django.utils.translation import ugettext_lazy as _
from .models import Monitoring
+from teryt_tree.dal_ext.filters import VoivodeshipFilter, CountyFilter, CommunityFilter
class MonitoringFilter(django_filters.FilterSet):
created = django_filters.DateRangeFilter(label=_("Creation date"))
+ voivodeship = VoivodeshipFilter(
+ widget=autocomplete.ModelSelect2(url='teryt:voivodeship-autocomplete')
+ )
+ county = CountyFilter(
+ widget=autocomplete.ModelSelect2(url='teryt:county-autocomplete',
+ forward=['voivodeship'])
+ )
+ community = CommunityFilter(
+ widget=autocomplete.ModelSelect2(url='teryt:community-autocomplete',
+ forward=['county'])
+ )
def __init__(self, *args, **kwargs):
super(MonitoringFilter, self).__init__(*args, **kwargs)
diff --git a/feder/monitorings/models.py b/feder/monitorings/models.py
--- a/feder/monitorings/models.py
+++ b/feder/monitorings/models.py
@@ -24,6 +24,9 @@
def with_case_count(self):
return self.annotate(case_count=models.Count('case'))
+ def area(self, jst):
+ return self.filter(case__institution__jst__tree_id=jst.tree_id,
+ case__institution__jst__lft__range=(jst.lft, jst.rght))
@reversion.register()
class Monitoring(TimeStampedModel):
| {"golden_diff": "diff --git a/feder/monitorings/filters.py b/feder/monitorings/filters.py\n--- a/feder/monitorings/filters.py\n+++ b/feder/monitorings/filters.py\n@@ -6,10 +6,22 @@\n from django.utils.translation import ugettext_lazy as _\n \n from .models import Monitoring\n+from teryt_tree.dal_ext.filters import VoivodeshipFilter, CountyFilter, CommunityFilter\n \n \n class MonitoringFilter(django_filters.FilterSet):\n created = django_filters.DateRangeFilter(label=_(\"Creation date\"))\n+ voivodeship = VoivodeshipFilter(\n+ widget=autocomplete.ModelSelect2(url='teryt:voivodeship-autocomplete')\n+ )\n+ county = CountyFilter(\n+ widget=autocomplete.ModelSelect2(url='teryt:county-autocomplete',\n+ forward=['voivodeship'])\n+ )\n+ community = CommunityFilter(\n+ widget=autocomplete.ModelSelect2(url='teryt:community-autocomplete',\n+ forward=['county'])\n+ )\n \n def __init__(self, *args, **kwargs):\n super(MonitoringFilter, self).__init__(*args, **kwargs)\ndiff --git a/feder/monitorings/models.py b/feder/monitorings/models.py\n--- a/feder/monitorings/models.py\n+++ b/feder/monitorings/models.py\n@@ -24,6 +24,9 @@\n def with_case_count(self):\n return self.annotate(case_count=models.Count('case'))\n \n+ def area(self, jst):\n+ return self.filter(case__institution__jst__tree_id=jst.tree_id,\n+ case__institution__jst__lft__range=(jst.lft, jst.rght))\n \n @reversion.register()\n class Monitoring(TimeStampedModel):\n", "issue": "Autocomplete dla JST w MonitoringFilter\n\n", "before_files": [{"content": "from itertools import groupby\n\nimport reversion\nfrom autoslug.fields import AutoSlugField\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom guardian.models import GroupObjectPermissionBase, UserObjectPermissionBase\nfrom model_utils.models import TimeStampedModel\n\nfrom .validators import validate_template_syntax\n\n_('Monitorings index')\n_('Can add Monitoring')\n_('Can change Monitoring')\n_('Can delete Monitoring')\n\nNOTIFY_HELP = _(\"Notify about new alerts person who can view alerts\")\n\n\nclass MonitoringQuerySet(models.QuerySet):\n def with_case_count(self):\n return self.annotate(case_count=models.Count('case'))\n\n\[email protected]()\nclass Monitoring(TimeStampedModel):\n perm_model = 'monitoringuserobjectpermission'\n name = models.CharField(verbose_name=_(\"Name\"), max_length=50)\n slug = AutoSlugField(populate_from='name', verbose_name=_(\"Slug\"), unique=True)\n user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_(\"User\"))\n description = models.TextField(verbose_name=_(\"Description\"), blank=True)\n subject = models.CharField(verbose_name=_(\"Subject\"), max_length=80)\n template = models.TextField(verbose_name=_(\"Template\"),\n help_text=_(\"Use {{EMAIL}} for insert reply address\"),\n validators=[validate_template_syntax])\n email_footer = models.TextField(default='',\n verbose_name=_(\"Email footer\"),\n help_text=_(\"Footer for sent mail and replies\"))\n notify_alert = models.BooleanField(default=True,\n verbose_name=_(\"Notify about alerts\"),\n help_text=NOTIFY_HELP)\n objects = MonitoringQuerySet.as_manager()\n\n class Meta:\n verbose_name = _(\"Monitoring\")\n verbose_name_plural = _(\"Monitoring\")\n ordering = ['created', ]\n permissions = (\n ('add_questionary', _('Can add questionary')),\n ('change_questionary', _('Can change questionary')),\n ('delete_questionary', _('Can 
delete questionary')),\n ('add_case', _('Can add case')),\n ('change_case', _('Can change case')),\n ('delete_case', _('Can delete case')),\n ('add_task', _('Can add task')),\n ('change_task', _('Can change task')),\n ('delete_task', _('Can delete task')),\n ('add_letter', _('Can add letter')),\n ('reply', _('Can reply')),\n ('add_draft', _('Add reply draft')),\n ('change_letter', _('Can change task')),\n ('delete_letter', _('Can delete letter')),\n ('view_alert', _('Can view alert')),\n ('change_alert', _('Can change alert')),\n ('delete_alert', _('Can delete alert')),\n ('manage_perm', _('Can manage perms')),\n ('select_survey', _('Can select answer')),\n ('view_log', _('Can view logs')),\n )\n\n def __unicode__(self):\n return self.name\n\n def get_users_with_perm(self, perm=None):\n qs = get_user_model().objects.filter(**{self.perm_model + '__content_object': self})\n if perm:\n qs = qs.filter(**{self.perm_model + '__permission__codename': perm})\n return qs.distinct().all()\n\n def get_absolute_url(self):\n return reverse('monitorings:details', kwargs={'slug': self.slug})\n\n def permission_map(self):\n dataset = (self.monitoringuserobjectpermission_set.select_related('permission', 'user').\n order_by('permission').all())\n user_list = {x.user for x in dataset}\n\n def index_generate():\n grouped = groupby(dataset, lambda x: x.permission)\n for perm, users in grouped:\n user_perm_list = [x.user for x in users]\n yield perm, [(perm, (user in user_perm_list)) for user in user_list]\n\n return user_list, index_generate()\n\n\nclass MonitoringUserObjectPermission(UserObjectPermissionBase):\n content_object = models.ForeignKey(Monitoring)\n\n\nclass MonitoringGroupObjectPermission(GroupObjectPermissionBase):\n content_object = models.ForeignKey(Monitoring)\n", "path": "feder/monitorings/models.py"}, {"content": "# -*- coding: utf-8 -*-\nimport django_filters\nfrom dal import autocomplete\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Count\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import Monitoring\n\n\nclass MonitoringFilter(django_filters.FilterSet):\n created = django_filters.DateRangeFilter(label=_(\"Creation date\"))\n\n def __init__(self, *args, **kwargs):\n super(MonitoringFilter, self).__init__(*args, **kwargs)\n self.filters['name'].lookup_expr = 'icontains'\n self.filters['name'].label = _(\"Name\")\n # Limit users select to which have any cases\n qs = (get_user_model().objects.\n annotate(case_count=Count('case')).\n filter(case_count__gt=0).all())\n self.filters['user'].extra['queryset'] = qs\n self.filters['user'].widget = autocomplete.ModelSelect2(url='users:autocomplete')\n\n class Meta:\n model = Monitoring\n fields = ['name', 'user', 'created']\n order_by = ['created', '-created', '-case_count']\n order_by = [\n ('created', _('Creation date (ascending)')),\n ('-created', _('Creation date (descending)')),\n ('case_count', _('Cases count (ascending)')),\n ('-case_count', _('Cases count (descending)')),\n ]\n", "path": "feder/monitorings/filters.py"}]} | 1,978 | 390 |
gh_patches_debug_29515 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1512 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Resources from third party add-ons are not being included in compiled plone-legacy bundle
It seems JS resources registered in Plone 5 using the old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even though I can see the JS resources listed in `default.js`, the source code is not present.
If I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.
</issue>
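A plausible diagnosis (hedged; the mechanism is only confirmed by the fix shown later in this record): `get_resource()` goes straight to `unrestrictedTraverse()`, so a resource that exists as a customized copy in the persistent `++plone++` override directory, which is where compiled bundles live, is never consulted. A sketch of the missing lookup, where `get_override_directory()` is the helper introduced by the eventual patch and the final `return` is a simplified fallback:

    def get_resource(context, path):
        if path.startswith('++plone++'):
            overrides = get_override_directory(context)
            filepath = path[9:]
            if overrides.isFile(filepath):
                return overrides.readFile(filepath)  # customized copy wins
        return context.unrestrictedTraverse(path)    # simplified fallback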
<code>
[start of Products/CMFPlone/resources/browser/combine.py]
1 import re
2 from zExceptions import NotFound
3 from Acquisition import aq_base
4 from datetime import datetime
5 from plone.registry.interfaces import IRegistry
6 from plone.resource.file import FilesystemFile
7 from plone.resource.interfaces import IResourceDirectory
8 from Products.CMFPlone.interfaces import IBundleRegistry
9 from Products.CMFPlone.interfaces.resources import (
10 OVERRIDE_RESOURCE_DIRECTORY_NAME,
11 )
12 from StringIO import StringIO
13 from zope.component import getUtility
14 from zope.component import queryUtility
15
16 PRODUCTION_RESOURCE_DIRECTORY = "production"
17
18
19 def get_production_resource_directory():
20 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
21 if persistent_directory is None:
22 return ''
23 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
24 try:
25 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
26 except NotFound:
27 return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
28 timestamp = production_folder.readFile('timestamp.txt')
29 return "%s/++unique++%s" % (
30 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
31
32
33 def get_resource(context, path):
34 resource = context.unrestrictedTraverse(path)
35 if isinstance(resource, FilesystemFile):
36 (directory, sep, filename) = path.rpartition('/')
37 return context.unrestrictedTraverse(directory).readFile(filename)
38 else:
39 if hasattr(aq_base(resource), 'GET'):
40 # for FileResource
41 return resource.GET()
42 else:
43 # any BrowserView
44 return resource()
45
46
47 def write_js(context, folder, meta_bundle):
48 registry = getUtility(IRegistry)
49 resources = []
50
51 # default resources
52 if meta_bundle == 'default' and registry.records.get(
53 'plone.resources/jquery.js'
54 ):
55 resources.append(get_resource(context,
56 registry.records['plone.resources/jquery.js'].value))
57 resources.append(get_resource(context,
58 registry.records['plone.resources.requirejs'].value))
59 resources.append(get_resource(context,
60 registry.records['plone.resources.configjs'].value))
61
62 # bundles
63 bundles = registry.collectionOfInterface(
64 IBundleRegistry, prefix="plone.bundles", check=False)
65 for bundle in bundles.values():
66 if bundle.merge_with == meta_bundle:
67 resources.append(get_resource(context, bundle.jscompilation))
68
69 fi = StringIO()
70 for script in resources:
71 fi.write(script + '\n')
72 folder.writeFile(meta_bundle + ".js", fi)
73
74
75 def write_css(context, folder, meta_bundle):
76 registry = getUtility(IRegistry)
77 resources = []
78
79 bundles = registry.collectionOfInterface(
80 IBundleRegistry, prefix="plone.bundles", check=False)
81 for bundle in bundles.values():
82 if bundle.merge_with == meta_bundle:
83 css = get_resource(context, bundle.csscompilation)
84 # Preserve relative urls:
85 # we prefix with '../'' any url not starting with '/'
86 # or http: or data:
87 css = re.sub(
88 r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",
89 r'\1../',
90 css)
91 resources.append(css)
92
93 fi = StringIO()
94 for script in resources:
95 fi.write(script + '\n')
96 folder.writeFile(meta_bundle + ".css", fi)
97
98
99 def combine_bundles(context):
100 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
101 if persistent_directory is None:
102 return
103 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
104 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
105 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
106 if PRODUCTION_RESOURCE_DIRECTORY not in container:
107 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
108 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
109
110 # store timestamp
111 fi = StringIO()
112 fi.write(datetime.now().isoformat())
113 production_folder.writeFile("timestamp.txt", fi)
114
115 # generate new combined bundles
116 write_js(context, production_folder, 'default')
117 write_js(context, production_folder, 'logged-in')
118 write_css(context, production_folder, 'default')
119 write_css(context, production_folder, 'logged-in')
120
[end of Products/CMFPlone/resources/browser/combine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -31,6 +31,14 @@
def get_resource(context, path):
+ if path.startswith('++plone++'):
+ # ++plone++ resources can be customized, we return their override
+ # value if any
+ overrides = get_override_directory(context)
+ filepath = path[9:]
+ if overrides.isFile(filepath):
+ return overrides.readFile(filepath)
+
resource = context.unrestrictedTraverse(path)
if isinstance(resource, FilesystemFile):
(directory, sep, filename) = path.rpartition('/')
@@ -96,13 +104,17 @@
folder.writeFile(meta_bundle + ".css", fi)
-def combine_bundles(context):
+def get_override_directory(context):
persistent_directory = queryUtility(IResourceDirectory, name="persistent")
if persistent_directory is None:
return
if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+
+
+def combine_bundles(context):
+ container = get_override_directory(context)
if PRODUCTION_RESOURCE_DIRECTORY not in container:
container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
| {"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -31,6 +31,14 @@\n \n \n def get_resource(context, path):\n+ if path.startswith('++plone++'):\n+ # ++plone++ resources can be customized, we return their override\n+ # value if any\n+ overrides = get_override_directory(context)\n+ filepath = path[9:]\n+ if overrides.isFile(filepath):\n+ return overrides.readFile(filepath)\n+\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n@@ -96,13 +104,17 @@\n folder.writeFile(meta_bundle + \".css\", fi)\n \n \n-def combine_bundles(context):\n+def get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+\n+\n+def combine_bundles(context):\n+ container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n", "issue": "Resources from third party add-ons are not being included in compiled plone-legacy bundle\nSeems JS resources registered in Plone 5 using old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even as I can see the JS resources listed in `default.js`, the source code is not present.\n\nIf I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.\n\n", "before_files": [{"content": "import re\nfrom zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, 
meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n css = get_resource(context, bundle.csscompilation)\n # Preserve relative urls:\n # we prefix with '../'' any url not starting with '/'\n # or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n r'\\1../',\n css)\n resources.append(css)\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef combine_bundles(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]} | 1,765 | 340 |
gh_patches_debug_24757 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-7635 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-2479] replace all instances of set-output and node16
Details in https://github.com/dbt-labs/actions/issues/39.
### Acceptance Criteria
- [ ] Verified there are no workflows to update
_or_
- [ ] removed all uses of `set-output` - either directly or by updating any marketplace actions we reference
- [ ] removed all references to node16 - either directly or by updating any marketplace actions we reference
- [ ] backport changes
</issue>
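For background on the deprecation itself, the replacement mechanism can be sketched roughly as follows. This is a hedged sketch, not code from this repository; `set_output` is a hypothetical helper, while `GITHUB_OUTPUT` is the environment variable the Actions runner actually provides:

```python
import os

# Modern replacement for the deprecated `::set-output` workflow command:
# append `name=value` lines to the file named by GITHUB_OUTPUT.
def set_output(name: str, value: str) -> None:
    with open(os.environ["GITHUB_OUTPUT"], "a", encoding="utf-8") as fh:
        fh.write(f"{name}={value}\n")  # trailing newline separates entries

set_output("latest", "True")
set_output("minor_latest", "True")
```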
<code>
[start of .github/actions/latest-wrangler/main.py]
1 import os
2 import sys
3 import requests
4 from distutils.util import strtobool
5 from typing import Union
6 from packaging.version import parse, Version
7
8 if __name__ == "__main__":
9
10 # get inputs
11 package = os.environ["INPUT_PACKAGE"]
12 new_version = parse(os.environ["INPUT_NEW_VERSION"])
13 gh_token = os.environ["INPUT_GH_TOKEN"]
14 halt_on_missing = strtobool(os.environ.get("INPUT_HALT_ON_MISSING", "False"))
15
16 # get package metadata from github
17 package_request = requests.get(
18 f"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions",
19 auth=("", gh_token),
20 )
21 package_meta = package_request.json()
22
23 # Log info if we don't get a 200
24 if package_request.status_code != 200:
25 print(f"Call to GH API failed: {package_request.status_code} {package_meta['message']}")
26
27 # Make an early exit if there is no matching package in github
28 if package_request.status_code == 404:
29 if halt_on_missing:
30 sys.exit(1)
31 else:
32 # everything is the latest if the package doesn't exist
33 print(f"::set-output name=latest::{True}")
34 print(f"::set-output name=minor_latest::{True}")
35 sys.exit(0)
36
37 # TODO: verify package meta is "correct"
38 # https://github.com/dbt-labs/dbt-core/issues/4640
39
40 # map versions and tags
41 version_tag_map = {
42 version["id"]: version["metadata"]["container"]["tags"] for version in package_meta
43 }
44
45 # is pre-release
46 pre_rel = True if any(x in str(new_version) for x in ["a", "b", "rc"]) else False
47
48 # semver of current latest
49 for version, tags in version_tag_map.items():
50 if "latest" in tags:
51 # N.B. This seems counterintuitive, but we expect any version tagged
52 # 'latest' to have exactly three associated tags:
53 # latest, major.minor.latest, and major.minor.patch.
54 # Subtracting everything that contains the string 'latest' gets us
55 # the major.minor.patch which is what's needed for comparison.
56 current_latest = parse([tag for tag in tags if "latest" not in tag][0])
57 else:
58 current_latest = False
59
60 # semver of current_minor_latest
61 for version, tags in version_tag_map.items():
62 if f"{new_version.major}.{new_version.minor}.latest" in tags:
63 # Similar to above, only now we expect exactly two tags:
64 # major.minor.patch and major.minor.latest
65 current_minor_latest = parse([tag for tag in tags if "latest" not in tag][0])
66 else:
67 current_minor_latest = False
68
69 def is_latest(
70 pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]
71 ) -> bool:
72 """Determine if a given contaier should be tagged 'latest' based on:
73 - it's pre-release status
74 - it's version
75 - the version of a previously identified container tagged 'latest'
76
77 :param pre_rel: Wether or not the version of the new container is a pre-release
78 :param new_version: The version of the new container
79 :param remote_latest: The version of the previously identified container that's
80 already tagged latest or False
81 """
82 # is a pre-release = not latest
83 if pre_rel:
84 return False
85 # + no latest tag found = is latest
86 if not remote_latest:
87 return True
88 # + if remote version is lower than current = is latest, else not latest
89 return True if remote_latest <= new_version else False
90
91 latest = is_latest(pre_rel, new_version, current_latest)
92 minor_latest = is_latest(pre_rel, new_version, current_minor_latest)
93
94 print(f"::set-output name=latest::{latest}")
95 print(f"::set-output name=minor_latest::{minor_latest}")
96
[end of .github/actions/latest-wrangler/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/.github/actions/latest-wrangler/main.py b/.github/actions/latest-wrangler/main.py
--- a/.github/actions/latest-wrangler/main.py
+++ b/.github/actions/latest-wrangler/main.py
@@ -28,11 +28,12 @@
if package_request.status_code == 404:
if halt_on_missing:
sys.exit(1)
- else:
- # everything is the latest if the package doesn't exist
- print(f"::set-output name=latest::{True}")
- print(f"::set-output name=minor_latest::{True}")
- sys.exit(0)
+ # everything is the latest if the package doesn't exist
+ github_output = os.environ.get("GITHUB_OUTPUT")
+ with open(github_output, "at", encoding="utf-8") as gh_output:
+ gh_output.write("latest=True")
+ gh_output.write("minor_latest=True")
+ sys.exit(0)
# TODO: verify package meta is "correct"
# https://github.com/dbt-labs/dbt-core/issues/4640
@@ -91,5 +92,7 @@
latest = is_latest(pre_rel, new_version, current_latest)
minor_latest = is_latest(pre_rel, new_version, current_minor_latest)
- print(f"::set-output name=latest::{latest}")
- print(f"::set-output name=minor_latest::{minor_latest}")
+ github_output = os.environ.get("GITHUB_OUTPUT")
+ with open(github_output, "at", encoding="utf-8") as gh_output:
+ gh_output.write(f"latest={latest}")
+ gh_output.write(f"minor_latest={minor_latest}")
| {"golden_diff": "diff --git a/.github/actions/latest-wrangler/main.py b/.github/actions/latest-wrangler/main.py\n--- a/.github/actions/latest-wrangler/main.py\n+++ b/.github/actions/latest-wrangler/main.py\n@@ -28,11 +28,12 @@\n if package_request.status_code == 404:\n if halt_on_missing:\n sys.exit(1)\n- else:\n- # everything is the latest if the package doesn't exist\n- print(f\"::set-output name=latest::{True}\")\n- print(f\"::set-output name=minor_latest::{True}\")\n- sys.exit(0)\n+ # everything is the latest if the package doesn't exist\n+ github_output = os.environ.get(\"GITHUB_OUTPUT\")\n+ with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n+ gh_output.write(\"latest=True\")\n+ gh_output.write(\"minor_latest=True\")\n+ sys.exit(0)\n \n # TODO: verify package meta is \"correct\"\n # https://github.com/dbt-labs/dbt-core/issues/4640\n@@ -91,5 +92,7 @@\n latest = is_latest(pre_rel, new_version, current_latest)\n minor_latest = is_latest(pre_rel, new_version, current_minor_latest)\n \n- print(f\"::set-output name=latest::{latest}\")\n- print(f\"::set-output name=minor_latest::{minor_latest}\")\n+ github_output = os.environ.get(\"GITHUB_OUTPUT\")\n+ with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n+ gh_output.write(f\"latest={latest}\")\n+ gh_output.write(f\"minor_latest={minor_latest}\")\n", "issue": "[CT-2479] replace all instances of set-output and node16\nDetails in https://github.com/dbt-labs/actions/issues/39.\r\n\r\n### Acceptance Criteria\r\n- [ ] Verified there are no workflows to update\r\n_or_\r\n- [ ] removed all uses of `set-output` - either directly or up updating any marketplace actions we reference\r\n- [ ] removed all references to node16 - either directly or up updating any marketplace actions we reference\r\n- [ ] backport changes\n", "before_files": [{"content": "import os\nimport sys\nimport requests\nfrom distutils.util import strtobool\nfrom typing import Union\nfrom packaging.version import parse, Version\n\nif __name__ == \"__main__\":\n\n # get inputs\n package = os.environ[\"INPUT_PACKAGE\"]\n new_version = parse(os.environ[\"INPUT_NEW_VERSION\"])\n gh_token = os.environ[\"INPUT_GH_TOKEN\"]\n halt_on_missing = strtobool(os.environ.get(\"INPUT_HALT_ON_MISSING\", \"False\"))\n\n # get package metadata from github\n package_request = requests.get(\n f\"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions\",\n auth=(\"\", gh_token),\n )\n package_meta = package_request.json()\n\n # Log info if we don't get a 200\n if package_request.status_code != 200:\n print(f\"Call to GH API failed: {package_request.status_code} {package_meta['message']}\")\n\n # Make an early exit if there is no matching package in github\n if package_request.status_code == 404:\n if halt_on_missing:\n sys.exit(1)\n else:\n # everything is the latest if the package doesn't exist\n print(f\"::set-output name=latest::{True}\")\n print(f\"::set-output name=minor_latest::{True}\")\n sys.exit(0)\n\n # TODO: verify package meta is \"correct\"\n # https://github.com/dbt-labs/dbt-core/issues/4640\n\n # map versions and tags\n version_tag_map = {\n version[\"id\"]: version[\"metadata\"][\"container\"][\"tags\"] for version in package_meta\n }\n\n # is pre-release\n pre_rel = True if any(x in str(new_version) for x in [\"a\", \"b\", \"rc\"]) else False\n\n # semver of current latest\n for version, tags in version_tag_map.items():\n if \"latest\" in tags:\n # N.B. 
This seems counterintuitive, but we expect any version tagged\n # 'latest' to have exactly three associated tags:\n # latest, major.minor.latest, and major.minor.patch.\n # Subtracting everything that contains the string 'latest' gets us\n # the major.minor.patch which is what's needed for comparison.\n current_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_latest = False\n\n # semver of current_minor_latest\n for version, tags in version_tag_map.items():\n if f\"{new_version.major}.{new_version.minor}.latest\" in tags:\n # Similar to above, only now we expect exactly two tags:\n # major.minor.patch and major.minor.latest\n current_minor_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_minor_latest = False\n\n def is_latest(\n pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]\n ) -> bool:\n \"\"\"Determine if a given contaier should be tagged 'latest' based on:\n - it's pre-release status\n - it's version\n - the version of a previously identified container tagged 'latest'\n\n :param pre_rel: Wether or not the version of the new container is a pre-release\n :param new_version: The version of the new container\n :param remote_latest: The version of the previously identified container that's\n already tagged latest or False\n \"\"\"\n # is a pre-release = not latest\n if pre_rel:\n return False\n # + no latest tag found = is latest\n if not remote_latest:\n return True\n # + if remote version is lower than current = is latest, else not latest\n return True if remote_latest <= new_version else False\n\n latest = is_latest(pre_rel, new_version, current_latest)\n minor_latest = is_latest(pre_rel, new_version, current_minor_latest)\n\n print(f\"::set-output name=latest::{latest}\")\n print(f\"::set-output name=minor_latest::{minor_latest}\")\n", "path": ".github/actions/latest-wrangler/main.py"}]} | 1,723 | 378 |
gh_patches_debug_53536 | rasdani/github-patches | git_diff | quantumlib__Cirq-2374 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cirq should ship a py.typed file
[PEP 561](https://www.python.org/dev/peps/pep-0561/) says that any packages that ship with type information should have a py.typed file in their package. Otherwise, type checkers like mypy can't find Cirq. (FWIW I just did `touch ~/.virtualenvs/.../cirq/py.typed`, and then mypy type-checks the file correctly.)
Other than that, Cirq seems pretty awesome so far :ok_hand:.
</issue>
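As context for the fix, a PEP 561-compliant setup usually only needs the empty marker file shipped as package data. The snippet below is a generic sketch (the package name is hypothetical), not the actual Cirq setup script:

```python
from setuptools import setup

# Minimal PEP 561 sketch: `py.typed` is an empty marker file placed inside
# the package directory so type checkers read the inline annotations.
setup(
    name="mypkg",                          # hypothetical package
    packages=["mypkg"],
    package_data={"mypkg": ["py.typed"]},  # ship the marker in wheels/sdists
    zip_safe=False,                        # mypy cannot read types from zipped eggs
)
```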
<code>
[start of setup.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17 from setuptools import find_packages, setup
18
19 # This reads the __version__ variable from cirq/_version.py
20 __version__ = ''
21 exec(open('cirq/_version.py').read())
22
23 name = 'cirq'
24
25 description = ('A framework for creating, editing, and invoking '
26 'Noisy Intermediate Scale Quantum (NISQ) circuits.')
27
28 # README file as long_description.
29 long_description = io.open('README.rst', encoding='utf-8').read()
30
31 # If CIRQ_DEV_VERSION is set then we use cirq-dev as the name of the package
32 # and update the version to this value.
33 if 'CIRQ_DEV_VERSION' in os.environ:
34 name = 'cirq-dev'
35 __version__ = os.environ['CIRQ_DEV_VERSION']
36 long_description = (
37 "**This is a development version of Cirq and may be "
38 "unstable.**\n\n**For the latest stable release of Cirq "
39 "see**\n`here <https://pypi.org/project/cirq>`__.\n\n" +
40 long_description)
41
42 # Read in requirements
43 requirements = open('requirements.txt').readlines()
44 requirements = [r.strip() for r in requirements]
45 contrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()
46 contrib_requirements = [r.strip() for r in contrib_requirements]
47 dev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()
48 dev_requirements = [r.strip() for r in dev_requirements]
49
50 cirq_packages = ['cirq'] + [
51 'cirq.' + package for package in find_packages(where='cirq')
52 ]
53
54 # Sanity check
55 assert __version__, 'Version string cannot be empty'
56
57 setup(name=name,
58 version=__version__,
59 url='http://github.com/quantumlib/cirq',
60 author='The Cirq Developers',
61 author_email='[email protected]',
62 python_requires=('>=3.6.0'),
63 install_requires=requirements,
64 extras_require={
65 'contrib': contrib_requirements,
66 'dev_env': dev_requirements + contrib_requirements,
67 },
68 license='Apache 2',
69 description=description,
70 long_description=long_description,
71 packages=cirq_packages,
72 package_data={
73 'cirq.api.google.v1': ['*.proto'],
74 'cirq.api.google.v2': ['*.proto'],
75 'cirq.google.api.v1': ['*.proto'],
76 'cirq.google.api.v2': ['*.proto'],
77 })
78
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -70,6 +70,7 @@
long_description=long_description,
packages=cirq_packages,
package_data={
+ 'cirq': ['py.typed'],
'cirq.api.google.v1': ['*.proto'],
'cirq.api.google.v2': ['*.proto'],
'cirq.google.api.v1': ['*.proto'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -70,6 +70,7 @@\n long_description=long_description,\n packages=cirq_packages,\n package_data={\n+ 'cirq': ['py.typed'],\n 'cirq.api.google.v1': ['*.proto'],\n 'cirq.api.google.v2': ['*.proto'],\n 'cirq.google.api.v1': ['*.proto'],\n", "issue": "Cirq should ship a py.typed file\n[PEP 561](https://www.python.org/dev/peps/pep-0561/) says that any packages that ship with type information should have a py.typed file in their package. Otherwise, type checkers like mypy can't find Cirq. (FWIW I just did `touch ~/.virtualenvs/.../cirq/py.typed`, and then mypy type-checks the file correctly.)\r\n\r\nOther than that, Cirq seems pretty awesome so far :ok_hand:.\nCirq should ship a py.typed file\n[PEP 561](https://www.python.org/dev/peps/pep-0561/) says that any packages that ship with type information should have a py.typed file in their package. Otherwise, type checkers like mypy can't find Cirq. (FWIW I just did `touch ~/.virtualenvs/.../cirq/py.typed`, and then mypy type-checks the file correctly.)\r\n\r\nOther than that, Cirq seems pretty awesome so far :ok_hand:.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\nfrom setuptools import find_packages, setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\nexec(open('cirq/_version.py').read())\n\nname = 'cirq'\n\ndescription = ('A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.')\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# If CIRQ_DEV_VERSION is set then we use cirq-dev as the name of the package\n# and update the version to this value.\nif 'CIRQ_DEV_VERSION' in os.environ:\n name = 'cirq-dev'\n __version__ = os.environ['CIRQ_DEV_VERSION']\n long_description = (\n \"**This is a development version of Cirq and may be \"\n \"unstable.**\\n\\n**For the latest stable release of Cirq \"\n \"see**\\n`here <https://pypi.org/project/cirq>`__.\\n\\n\" +\n long_description)\n\n# Read in requirements\nrequirements = open('requirements.txt').readlines()\nrequirements = [r.strip() for r in requirements]\ncontrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()\ncontrib_requirements = [r.strip() for r in contrib_requirements]\ndev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()\ndev_requirements = [r.strip() for r in dev_requirements]\n\ncirq_packages = ['cirq'] + [\n 'cirq.' 
+ package for package in find_packages(where='cirq')\n]\n\n# Sanity check\nassert __version__, 'Version string cannot be empty'\n\nsetup(name=name,\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires=('>=3.6.0'),\n install_requires=requirements,\n extras_require={\n 'contrib': contrib_requirements,\n 'dev_env': dev_requirements + contrib_requirements,\n },\n license='Apache 2',\n description=description,\n long_description=long_description,\n packages=cirq_packages,\n package_data={\n 'cirq.api.google.v1': ['*.proto'],\n 'cirq.api.google.v2': ['*.proto'],\n 'cirq.google.api.v1': ['*.proto'],\n 'cirq.google.api.v2': ['*.proto'],\n })\n", "path": "setup.py"}]} | 1,586 | 100 |
gh_patches_debug_8566 | rasdani/github-patches | git_diff | great-expectations__great_expectations-1229 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Great Expectations is still marked as Python 2 compatible
It looks like running `pip install great_expectations==0.9.7` in a Python 2 environment starts working, before failing when pulling `marshmallow`. This is expected since this PR: https://github.com/great-expectations/great_expectations/pull/1187 but on PyPI, GE is still marked as Python 2 compatible because of the `setup.py` file.
I'm opening a PR that fixes this in a sec, but feel free to close if I'm missing something! :)
</issue>
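Worth noting while reading the fix: PyPI trove classifiers are purely informational, and it is the `python_requires` metadata that actually makes a Python 2 `pip` skip a release. A generic sketch of the belt-and-braces combination (package name and version floor are illustrative assumptions):

```python
from setuptools import setup

setup(
    name="example-package",       # hypothetical name
    python_requires=">=3.6",      # pip on Python 2 refuses to install this
    classifiers=[                 # advertise only the supported versions
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
    ],
)
```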
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 import versioneer
3
4 # Parse requirements.txt
5 with open('requirements.txt') as f:
6 required = f.read().splitlines()
7
8 #try:
9 # import pypandoc
10 # long_description = pypandoc.convert_file('README.md', 'rst')
11 #except (IOError, ImportError):
12 long_description = 'Always know what to expect from your data. (See https://github.com/great-expectations/great_expectations for full description).'
13
14 config = {
15 'description': 'Always know what to expect from your data.',
16 'author': 'The Great Expectations Team',
17 'url': 'https://github.com/great-expectations/great_expectations',
18 'author_email': '[email protected]',
19 'version': versioneer.get_version(),
20 'cmdclass': versioneer.get_cmdclass(),
21 'install_requires': required,
22 'extras_require': {
23 'spark': ['pyspark>=2.3.2'],
24 'sqlalchemy': ['sqlalchemy>=1.2'],
25 'airflow': ['apache-airflow[s3]>=1.9.0', 'boto3>=1.7.3']
26 },
27 'packages': find_packages(exclude=['docs', 'tests', 'examples']),
28 'entry_points': {
29 'console_scripts': ['great_expectations=great_expectations.cli:main']
30 },
31 'name': 'great_expectations',
32 'long_description': long_description,
33 'license': 'Apache-2.0',
34 'keywords': 'data science testing pipeline data quality dataquality validation datavalidation',
35 'include_package_data': True,
36 'classifiers': [
37 'Development Status :: 4 - Beta',
38 'Intended Audience :: Developers',
39 'Intended Audience :: Science/Research',
40 'Intended Audience :: Other Audience',
41 'Topic :: Scientific/Engineering',
42 'Topic :: Software Development',
43 'Topic :: Software Development :: Testing',
44 'License :: OSI Approved :: Apache Software License',
45 'Programming Language :: Python :: 2',
46 'Programming Language :: Python :: 2.7',
47 'Programming Language :: Python :: 3',
48 'Programming Language :: Python :: 3.6',
49 'Programming Language :: Python :: 3.7',
50 ]
51 }
52
53 setup(**config)
54
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,8 +42,6 @@
'Topic :: Software Development',
'Topic :: Software Development :: Testing',
'License :: OSI Approved :: Apache Software License',
- 'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,8 +42,6 @@\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Testing',\n 'License :: OSI Approved :: Apache Software License',\n- 'Programming Language :: Python :: 2',\n- 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n", "issue": "Great Expectations is still marked as Python 2 compatible\nIt looks like running `pip install great_expectations==0.9.7` in a Python 2 environment starts working, before failing when pulling `marshmallow`. This is expected since this PR: https://github.com/great-expectations/great_expectations/pull/1187 but on PyPI, GE is still marked as Python 2 compatible because of the `setup.py` file.\r\n\r\nI'm opening a PR that fixes this in a sec, but feel free to close if I'm missing something! :)\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\n\n# Parse requirements.txt\nwith open('requirements.txt') as f:\n required = f.read().splitlines()\n\n#try:\n# import pypandoc\n# long_description = pypandoc.convert_file('README.md', 'rst')\n#except (IOError, ImportError):\nlong_description = 'Always know what to expect from your data. (See https://github.com/great-expectations/great_expectations for full description).'\n\nconfig = {\n 'description': 'Always know what to expect from your data.',\n 'author': 'The Great Expectations Team',\n 'url': 'https://github.com/great-expectations/great_expectations',\n 'author_email': '[email protected]',\n 'version': versioneer.get_version(),\n 'cmdclass': versioneer.get_cmdclass(),\n 'install_requires': required,\n 'extras_require': {\n 'spark': ['pyspark>=2.3.2'],\n 'sqlalchemy': ['sqlalchemy>=1.2'],\n 'airflow': ['apache-airflow[s3]>=1.9.0', 'boto3>=1.7.3']\n },\n 'packages': find_packages(exclude=['docs', 'tests', 'examples']),\n 'entry_points': {\n 'console_scripts': ['great_expectations=great_expectations.cli:main']\n },\n 'name': 'great_expectations',\n 'long_description': long_description,\n 'license': 'Apache-2.0',\n 'keywords': 'data science testing pipeline data quality dataquality validation datavalidation',\n 'include_package_data': True,\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Other Audience',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Testing',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ]\n}\n\nsetup(**config)\n", "path": "setup.py"}]} | 1,249 | 118 |
gh_patches_debug_6345 | rasdani/github-patches | git_diff | carpentries__amy-743 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Escaped tags in every revision page
*(screenshots omitted: the revision pages render diff markup as escaped HTML text instead of formatted tags)*
</issue>
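The symptom in those screenshots is Django's template autoescaping applied to a tag that returns raw HTML. A minimal sketch of the underlying pattern (the tag name here is hypothetical):

```python
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.simple_tag
def diff_html(fragment):
    # Django autoescapes simple_tag return values by default; mark the
    # generated diff markup as safe only because trusted code produced it.
    return mark_safe(fragment)
```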
<code>
[start of workshops/templatetags/diff.py]
1 from django import template
2
3 from reversion.helpers import generate_patch_html
4
5 register = template.Library()
6
7
8 @register.simple_tag
9 def semantic_diff(left, right, field):
10 return generate_patch_html(left, right, field, cleanup='semantic')
11
[end of workshops/templatetags/diff.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/workshops/templatetags/diff.py b/workshops/templatetags/diff.py
--- a/workshops/templatetags/diff.py
+++ b/workshops/templatetags/diff.py
@@ -1,4 +1,5 @@
from django import template
+from django.utils.safestring import mark_safe
from reversion.helpers import generate_patch_html
@@ -7,4 +8,4 @@
@register.simple_tag
def semantic_diff(left, right, field):
- return generate_patch_html(left, right, field, cleanup='semantic')
+ return mark_safe(generate_patch_html(left, right, field, cleanup='semantic'))
| {"golden_diff": "diff --git a/workshops/templatetags/diff.py b/workshops/templatetags/diff.py\n--- a/workshops/templatetags/diff.py\n+++ b/workshops/templatetags/diff.py\n@@ -1,4 +1,5 @@\n from django import template\n+from django.utils.safestring import mark_safe\n \n from reversion.helpers import generate_patch_html\n \n@@ -7,4 +8,4 @@\n \n @register.simple_tag\n def semantic_diff(left, right, field):\n- return generate_patch_html(left, right, field, cleanup='semantic')\n+ return mark_safe(generate_patch_html(left, right, field, cleanup='semantic'))\n", "issue": "Escaped tags in every revision page\n\n\nEscaped tags in every revision page\n\n\n", "before_files": [{"content": "from django import template\n\nfrom reversion.helpers import generate_patch_html\n\nregister = template.Library()\n\n\[email protected]_tag\ndef semantic_diff(left, right, field):\n return generate_patch_html(left, right, field, cleanup='semantic')\n", "path": "workshops/templatetags/diff.py"}]} | 787 | 148 |
gh_patches_debug_5199 | rasdani/github-patches | git_diff | PrefectHQ__prefect-1165 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot raise a skip signal with a result
I am filing this issue at the suggestion of @cicdw after a conversation on Gitter.
I came up with the following use case: a task that raises a skip signal with a result because its logic has detected that there is no work to do and the result is already calculated somewhere. I could just return it, but it would be useful for me to know that the _heavy_ part of the task did not actually execute.
An example of the use case would be:
```python
from prefect import task, Flow
from prefect.engine import signals
@task
def test_skipped():
raise signals.SKIP('skipping', result=5)
f = Flow("test", tasks=[test_skipped])
flow_state = f.run()
```
which fails because of how the `PrefectStateSignal` constructor handles its initialization:
```
Traceback (most recent call last):
File ".../prefect/engine/signals.py", line 27, in __init__
result=self, message=message, *args, **kwargs
TypeError: type object got multiple values for keyword argument 'result'
```
Chris suggested the following workaround, which works correctly, but still pointed out that the case above should work.
```python
from prefect import task, Flow
from prefect.engine.runner import ENDRUN
from prefect.engine.state import Skipped
@task
def test_skipped():
skip = Skipped("skipping", result=5)
raise ENDRUN(state=skip)
f = Flow("test", tasks=[test_skipped])
flow_state = f.run()
flow_state.result[test_skipped].result # 5
```
</issue>
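The traceback reduces to a plain Python rule: a keyword passed both explicitly and via `**kwargs` raises `TypeError`. A standalone sketch of the gotcha and the usual `setdefault` escape hatch (function and placeholder names are hypothetical):

```python
def build_state(message=None, **kwargs):
    # Writing `state_cls(result=self, message=message, **kwargs)` would raise
    # "got multiple values for keyword argument 'result'" whenever the caller
    # also passes result=... . setdefault keeps the default but lets the
    # caller's value win.
    kwargs.setdefault("result", "<the signal itself>")
    return {"message": message, **kwargs}

assert build_state("skipping", result=5)["result"] == 5
assert build_state("skipping")["result"] == "<the signal itself>"
```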
<code>
[start of src/prefect/engine/signals.py]
1 """
2 These Exceptions, when raised, are used to signal state changes when tasks or flows are running. Signals
3 are used in TaskRunners and FlowRunners as a way of communicating the changes in states.
4 """
5
6 from prefect.engine import state
7 from prefect.utilities.exceptions import PrefectError
8
9
10 class PrefectStateSignal(PrefectError):
11 """
12 Create a new PrefectStateSignal object.
13
14 Args:
15 - message (Any, optional): Defaults to `None`. A message about the signal.
16 - *args (Any, optional): additional arguments to pass to this Signal's
17 associated state constructor
18 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
19 associated state constructor
20 """
21
22 _state_cls = state.State
23
24 def __init__(self, message: str = None, *args, **kwargs): # type: ignore
25 super().__init__(message) # type: ignore
26 self.state = self._state_cls( # type: ignore
27 result=self, message=message, *args, **kwargs
28 )
29
30
31 class FAIL(PrefectStateSignal):
32 """
33 Indicates that a task failed.
34
35 Args:
36 - message (Any, optional): Defaults to `None`. A message about the signal.
37 - *args (Any, optional): additional arguments to pass to this Signal's
38 associated state constructor
39 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
40 associated state constructor
41 """
42
43 _state_cls = state.Failed
44
45
46 class TRIGGERFAIL(FAIL):
47 """
48 Indicates that a task trigger failed.
49
50 Args:
51 - message (Any, optional): Defaults to `None`. A message about the signal.
52 - *args (Any, optional): additional arguments to pass to this Signal's
53 associated state constructor
54 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
55 associated state constructor
56 """
57
58 _state_cls = state.TriggerFailed
59
60
61 class SUCCESS(PrefectStateSignal):
62 """
63 Indicates that a task succeeded.
64
65 Args:
66 - message (Any, optional): Defaults to `None`. A message about the signal.
67 - *args (Any, optional): additional arguments to pass to this Signal's
68 associated state constructor
69 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
70 associated state constructor
71 """
72
73 _state_cls = state.Success
74
75
76 class RETRY(PrefectStateSignal):
77 """
78 Used to indicate that a task should be retried.
79
80 Args:
81 - message (Any, optional): Defaults to `None`. A message about the signal.
82 - *args (Any, optional): additional arguments to pass to this Signal's
83 associated state constructor
84 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
85 associated state constructor
86 """
87
88 _state_cls = state.Retrying
89
90
91 class SKIP(PrefectStateSignal):
92 """
93 Indicates that a task was skipped. By default, downstream tasks will
94 act as if skipped tasks succeeded.
95
96 Args:
97 - message (Any, optional): Defaults to `None`. A message about the signal.
98 - *args (Any, optional): additional arguments to pass to this Signal's
99 associated state constructor
100 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
101 associated state constructor
102 """
103
104 _state_cls = state.Skipped
105
106
107 class PAUSE(PrefectStateSignal):
108 """
109 Indicates that a task should not run and wait for manual execution.
110
111 Args:
112 - message (Any, optional): Defaults to `None`. A message about the signal.
113 - *args (Any, optional): additional arguments to pass to this Signal's
114 associated state constructor
115 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
116 associated state constructor
117 """
118
119 _state_cls = state.Paused
120
[end of src/prefect/engine/signals.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/engine/signals.py b/src/prefect/engine/signals.py
--- a/src/prefect/engine/signals.py
+++ b/src/prefect/engine/signals.py
@@ -23,8 +23,9 @@
def __init__(self, message: str = None, *args, **kwargs): # type: ignore
super().__init__(message) # type: ignore
+ kwargs.setdefault("result", self)
self.state = self._state_cls( # type: ignore
- result=self, message=message, *args, **kwargs
+ message=message, *args, **kwargs
)
| {"golden_diff": "diff --git a/src/prefect/engine/signals.py b/src/prefect/engine/signals.py\n--- a/src/prefect/engine/signals.py\n+++ b/src/prefect/engine/signals.py\n@@ -23,8 +23,9 @@\n \n def __init__(self, message: str = None, *args, **kwargs): # type: ignore\n super().__init__(message) # type: ignore\n+ kwargs.setdefault(\"result\", self)\n self.state = self._state_cls( # type: ignore\n- result=self, message=message, *args, **kwargs\n+ message=message, *args, **kwargs\n )\n", "issue": "Cannot raise a skip signal with a result\nI am filing an issue by suggestion of @cicdw after a conversation on gitter.\r\nI came up with the following use case: a task that raises a skip signal with a result because its logic has detected that there is no work to do and the result is already calculated somewhere. I could just return it, but it would be useful for me to know that the _heavy_ part of the task did not actually execute.\r\n\r\nAn example of the use case would be:\r\n\r\n```python\r\nfrom prefect import task, Flow\r\nfrom prefect.engine import signals\r\n\r\n@task\r\ndef test_skipped():\r\n raise signals.SKIP('skipping', result=5)\r\n\r\nf = Flow(\"test\", tasks=[test_skipped])\r\nflow_state = f.run()\r\n```\r\n\r\nwhich fails because of how the `PrefectStateSignal` constructor handles its initialization:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".../prefect/engine/signals.py\", line 27, in __init__\r\n result=self, message=message, *args, **kwargs\r\nTypeError: type object got multiple values for keyword argument 'result'\r\n```\r\n\r\nChris suggested the following workaround, which works correctly, but still pointed out that the case above should work.\r\n\r\n```python\r\nfrom prefect import task, Flow\r\nfrom prefect.engine.runner import ENDRUN\r\nfrom prefect.engine.state import Skipped\r\n\r\n@task\r\ndef test_skipped():\r\n skip = Skipped(\"skipping\", result=5)\r\n raise ENDRUN(state=skip)\r\n\r\nf = Flow(\"test\", tasks=[test_skipped])\r\nflow_state = f.run()\r\n\r\nflow_state.result[test_skipped].result # 5\r\n```\n", "before_files": [{"content": "\"\"\"\nThese Exceptions, when raised, are used to signal state changes when tasks or flows are running. Signals\nare used in TaskRunners and FlowRunners as a way of communicating the changes in states.\n\"\"\"\n\nfrom prefect.engine import state\nfrom prefect.utilities.exceptions import PrefectError\n\n\nclass PrefectStateSignal(PrefectError):\n \"\"\"\n Create a new PrefectStateSignal object.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.State\n\n def __init__(self, message: str = None, *args, **kwargs): # type: ignore\n super().__init__(message) # type: ignore\n self.state = self._state_cls( # type: ignore\n result=self, message=message, *args, **kwargs\n )\n\n\nclass FAIL(PrefectStateSignal):\n \"\"\"\n Indicates that a task failed.\n\n Args:\n - message (Any, optional): Defaults to `None`. 
A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Failed\n\n\nclass TRIGGERFAIL(FAIL):\n \"\"\"\n Indicates that a task trigger failed.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.TriggerFailed\n\n\nclass SUCCESS(PrefectStateSignal):\n \"\"\"\n Indicates that a task succeeded.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Success\n\n\nclass RETRY(PrefectStateSignal):\n \"\"\"\n Used to indicate that a task should be retried.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Retrying\n\n\nclass SKIP(PrefectStateSignal):\n \"\"\"\n Indicates that a task was skipped. By default, downstream tasks will\n act as if skipped tasks succeeded.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Skipped\n\n\nclass PAUSE(PrefectStateSignal):\n \"\"\"\n Indicates that a task should not run and wait for manual execution.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Paused\n", "path": "src/prefect/engine/signals.py"}]} | 2,013 | 147 |
gh_patches_debug_15255 | rasdani/github-patches | git_diff | chainer__chainer-1421 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SerialIterator's shuffle does not work under certain batch sizes
When we give `shuffle=True` to `SerialIterator`, re-shuffling after an epoch is skipped if `len(dataset)` is divisible by `batch_size`.

https://github.com/pfnet/chainer/blob/master/chainer/iterators/serial_iterator.py#L65
Variable `_order` is only re-shuffled when `rest` > 0, so it is never re-shuffled when `len(dataset)` is divisible by `batch_size` (in which case `rest` == 0).
(If it is okay, I'm interested in working on this issue at the development meeting tomorrow.)
</issue>
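To see the boundary condition concretely, here is a toy sketch (not the iterator itself) of why a divisible dataset size never reaches the shuffle call:

```python
import numpy

order = numpy.random.permutation(6)     # pretend len(dataset) == 6
batch_size = 3                          # 6 % 3 == 0
current_position = 3                    # last batch of the epoch
i_end = current_position + batch_size   # 6, exactly N at the epoch boundary
rest = i_end - 6                        # rest == 0
if rest > 0:                            # never true when sizes divide evenly
    numpy.random.shuffle(order)         # so the re-shuffle is skipped forever
```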
<code>
[start of chainer/iterators/serial_iterator.py]
1 from __future__ import division
2
3 import numpy
4
5 from chainer.dataset import iterator
6
7
8 class SerialIterator(iterator.Iterator):
9
10 """Dataset iterator that serially reads the examples.
11
12 This is a simple implementation of :class:`~chainer.dataset.Iterator`
13 that just visits each example in either the order of indexes or a shuffled
14 order.
15
16 To avoid unintentional performance degradation, the ``shuffle`` option is
17 set to ``True`` by default. For validation, it is better to set it to
18 ``False`` when the underlying dataset supports fast slicing. If the
19 order of examples has an important meaning and the updater depends on the
20 original order, this option should be set to ``False``.
21
22 Args:
23 dataset: Dataset to iterate.
24 batch_size (int): Number of examples within each batch.
25 repeat (bool): If ``True``, it infinitely loops over the dataset.
26 Otherwise, it stops iteration at the end of the first epoch.
27 shuffle (bool): If ``True``, the order of examples is shuffled at the
28 beginning of each epoch. Otherwise, examples are extracted in the
29 order of indexes.
30
31 """
32 def __init__(self, dataset, batch_size, repeat=True, shuffle=True):
33 self.dataset = dataset
34 self.batch_size = batch_size
35 self._repeat = repeat
36 if shuffle:
37 self._order = numpy.random.permutation(len(dataset))
38 else:
39 self._order = None
40
41 self.current_position = 0
42 self.epoch = 0
43 self.is_new_epoch = False
44
45 def __next__(self):
46 if not self._repeat and self.epoch > 0:
47 raise StopIteration
48
49 i = self.current_position
50 i_end = i + self.batch_size
51 N = len(self.dataset)
52
53 if self._order is None:
54 batch = self.dataset[i:i_end]
55 else:
56 batch = [self.dataset[index] for index in self._order[i:i_end]]
57
58 if i_end >= N:
59 if self._repeat:
60 rest = i_end - N
61 if rest > 0:
62 if self._order is None:
63 batch += list(self.dataset[:rest])
64 else:
65 numpy.random.shuffle(self._order)
66 batch += [self.dataset[index]
67 for index in self._order[:rest]]
68 self.current_position = rest
69 else:
70 self.current_position = N
71
72 self.epoch += 1
73 self.is_new_epoch = True
74 else:
75 self.is_new_epoch = False
76 self.current_position = i_end
77
78 return batch
79
80 next = __next__
81
82 @property
83 def epoch_detail(self):
84 return self.epoch + self.current_position / len(self.dataset)
85
86 def serialize(self, serializer):
87 self.current_position = serializer('current_position',
88 self.current_position)
89 self.epoch = serializer('epoch', self.epoch)
90 self.is_new_epoch = serializer('is_new_epoch', self.is_new_epoch)
91 if self._order is not None:
92 serializer('_order', self._order)
93
[end of chainer/iterators/serial_iterator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/iterators/serial_iterator.py b/chainer/iterators/serial_iterator.py
--- a/chainer/iterators/serial_iterator.py
+++ b/chainer/iterators/serial_iterator.py
@@ -58,11 +58,12 @@
if i_end >= N:
if self._repeat:
rest = i_end - N
+ if self._order is not None:
+ numpy.random.shuffle(self._order)
if rest > 0:
if self._order is None:
batch += list(self.dataset[:rest])
else:
- numpy.random.shuffle(self._order)
batch += [self.dataset[index]
for index in self._order[:rest]]
self.current_position = rest
| {"golden_diff": "diff --git a/chainer/iterators/serial_iterator.py b/chainer/iterators/serial_iterator.py\n--- a/chainer/iterators/serial_iterator.py\n+++ b/chainer/iterators/serial_iterator.py\n@@ -58,11 +58,12 @@\n if i_end >= N:\n if self._repeat:\n rest = i_end - N\n+ if self._order is not None:\n+ numpy.random.shuffle(self._order)\n if rest > 0:\n if self._order is None:\n batch += list(self.dataset[:rest])\n else:\n- numpy.random.shuffle(self._order)\n batch += [self.dataset[index]\n for index in self._order[:rest]]\n self.current_position = rest\n", "issue": "SerialIterator's shuffle does not work under certain batch sizes\nWhen we give `shuffle=True` to `SerialIterator`, re-shuffling after an epoch is skipped if `len(dataset)` is divisible by `batch_size`.\n\n\n\nhttps://github.com/pfnet/chainer/blob/master/chainer/iterators/serial_iterator.py#L65\n\nVariable `_order` is never re-shuffled if `rest` > 0 (i.e., `len(dataset)` is divisible by `batch_size`).\n\n(If it is okay, I'm interested in working on this issue at the development meeting tomorrow.)\n\n", "before_files": [{"content": "from __future__ import division\n\nimport numpy\n\nfrom chainer.dataset import iterator\n\n\nclass SerialIterator(iterator.Iterator):\n\n \"\"\"Dataset iterator that serially reads the examples.\n\n This is a simple implementation of :class:`~chainer.dataset.Iterator`\n that just visits each example in either the order of indexes or a shuffled\n order.\n\n To avoid unintentional performance degradation, the ``shuffle`` option is\n set to ``True`` by default. For validation, it is better to set it to\n ``False`` when the underlying dataset supports fast slicing. If the\n order of examples has an important meaning and the updater depends on the\n original order, this option should be set to ``False``.\n\n Args:\n dataset: Dataset to iterate.\n batch_size (int): Number of examples within each batch.\n repeat (bool): If ``True``, it infinitely loops over the dataset.\n Otherwise, it stops iteration at the end of the first epoch.\n shuffle (bool): If ``True``, the order of examples is shuffled at the\n beginning of each epoch. 
Otherwise, examples are extracted in the\n order of indexes.\n\n \"\"\"\n def __init__(self, dataset, batch_size, repeat=True, shuffle=True):\n self.dataset = dataset\n self.batch_size = batch_size\n self._repeat = repeat\n if shuffle:\n self._order = numpy.random.permutation(len(dataset))\n else:\n self._order = None\n\n self.current_position = 0\n self.epoch = 0\n self.is_new_epoch = False\n\n def __next__(self):\n if not self._repeat and self.epoch > 0:\n raise StopIteration\n\n i = self.current_position\n i_end = i + self.batch_size\n N = len(self.dataset)\n\n if self._order is None:\n batch = self.dataset[i:i_end]\n else:\n batch = [self.dataset[index] for index in self._order[i:i_end]]\n\n if i_end >= N:\n if self._repeat:\n rest = i_end - N\n if rest > 0:\n if self._order is None:\n batch += list(self.dataset[:rest])\n else:\n numpy.random.shuffle(self._order)\n batch += [self.dataset[index]\n for index in self._order[:rest]]\n self.current_position = rest\n else:\n self.current_position = N\n\n self.epoch += 1\n self.is_new_epoch = True\n else:\n self.is_new_epoch = False\n self.current_position = i_end\n\n return batch\n\n next = __next__\n\n @property\n def epoch_detail(self):\n return self.epoch + self.current_position / len(self.dataset)\n\n def serialize(self, serializer):\n self.current_position = serializer('current_position',\n self.current_position)\n self.epoch = serializer('epoch', self.epoch)\n self.is_new_epoch = serializer('is_new_epoch', self.is_new_epoch)\n if self._order is not None:\n serializer('_order', self._order)\n", "path": "chainer/iterators/serial_iterator.py"}]} | 1,562 | 164 |
gh_patches_debug_12636 | rasdani/github-patches | git_diff | Mailu__Mailu-2630 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker container crashes if IPv6 is disabled at the system level.
If `listen [::]` appears anywhere in the configs but IPv6 is disabled at the host system level and in Docker, the process crashes, and consequently the Docker container crashes as well.
One can work around this by manually exec-ing into each container and editing the configs, but that is not very convenient.
docker exec mailu_front_1 sed -i '/listen \[/d' /conf/nginx.conf
docker exec mailu_front_1 sed -i '/listen \[/d' /etc/nginx/nginx.conf
docker exec mailu_front_1 sed -i '/listen \[/d' /etc/nginx/http.d/default.conf
docker restart mailu_front_1
docker restart mailu_webdav_1 && docker exec -it mailu_webdav_1 sed -i 's/hosts =.*\[::\].*/hosts = 0.0.0.0:5232/g' /radicale.conf && docker restart mailu_webdav_1
Can you add a container launch option to remove listen [::] from configs?
</issue>
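A conditional-template approach would avoid the manual `sed` edits. The sketch below is one possible shape of such a launch option; the `ENABLE_IPV6` variable is a hypothetical toggle, not an existing Mailu setting:

```python
import os

def listen_directives(port: int) -> str:
    # Emit the IPv6 listener only when the (hypothetical) toggle is on, so
    # hosts with IPv6 disabled never see `listen [::]` in generated configs.
    lines = [f"listen {port};"]
    if os.environ.get("ENABLE_IPV6", "true").lower() == "true":
        lines.append(f"listen [::]:{port};")
    return "\n".join(lines)

print(listen_directives(80))
```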
<code>
[start of core/admin/start.py]
1 #!/usr/bin/env python3
2
3 import os
4 import logging as log
5 import sys
6 from socrate import system
7
8 os.system("chown mailu:mailu -R /dkim")
9 os.system("find /data | grep -v /fetchmail | xargs -n1 chown mailu:mailu")
10 system.drop_privs_to('mailu')
11
12 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "INFO"))
13 system.set_env(['SECRET'])
14
15 os.system("flask mailu advertise")
16 os.system("flask db upgrade")
17
18 account = os.environ.get("INITIAL_ADMIN_ACCOUNT")
19 domain = os.environ.get("INITIAL_ADMIN_DOMAIN")
20 password = os.environ.get("INITIAL_ADMIN_PW")
21
22 if account is not None and domain is not None and password is not None:
23 mode = os.environ.get("INITIAL_ADMIN_MODE", default="ifmissing")
24 log.info("Creating initial admin account %s@%s with mode %s", account, domain, mode)
25 os.system("flask mailu admin %s %s '%s' --mode %s" % (account, domain, password, mode))
26
27 def test_DNS():
28 import dns.resolver
29 import dns.exception
30 import dns.flags
31 import dns.rdtypes
32 import dns.rdatatype
33 import dns.rdataclass
34 import time
35 # DNS stub configured to do DNSSEC enabled queries
36 resolver = dns.resolver.Resolver()
37 resolver.use_edns(0, dns.flags.DO, 1232)
38 resolver.flags = dns.flags.AD | dns.flags.RD
39 nameservers = resolver.nameservers
40 for ns in nameservers:
41 resolver.nameservers=[ns]
42 while True:
43 try:
44 result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
45 except Exception as e:
46 log.critical("Your DNS resolver at %s is not working (%s). Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation", ns, e)
47 else:
48 if result.response.flags & dns.flags.AD:
49 break
50 log.critical("Your DNS resolver at %s isn't doing DNSSEC validation; Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation.", ns)
51 time.sleep(5)
52
53 test_DNS()
54
55 start_command=" ".join([
56 "gunicorn",
57 f"--threads {str(os.cpu_count())}",
58 "-b :80",
59 "--logger-class mailu.Logger",
60 "--worker-tmp-dir /dev/shm",
61 "--access-logfile -" if (log.root.level<=log.INFO) else "",
62 "--error-logfile -",
63 "--preload",
64 "'mailu:create_app()'"])
65
66 os.system(start_command)
67
[end of core/admin/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/start.py b/core/admin/start.py
--- a/core/admin/start.py
+++ b/core/admin/start.py
@@ -52,15 +52,21 @@
test_DNS()
-start_command=" ".join([
- "gunicorn",
- f"--threads {str(os.cpu_count())}",
- "-b :80",
+cmdline = [
+ "gunicorn",
+ "--threads", f"{os.cpu_count()}",
+ # If SUBNET6 is defined, gunicorn must listen on IPv6 as well as IPv4
+ "-b", f"{'[::]' if os.environ.get('SUBNET6') else ''}:80",
"--logger-class mailu.Logger",
"--worker-tmp-dir /dev/shm",
- "--access-logfile -" if (log.root.level<=log.INFO) else "",
- "--error-logfile -",
- "--preload",
- "'mailu:create_app()'"])
+ "--error-logfile", "-",
+ "--preload"
+]
-os.system(start_command)
+# logging
+if log.root.level <= log.INFO:
+ cmdline.extend(["--access-logfile", "-"])
+
+cmdline.append("'mailu:create_app()'")
+
+os.system(" ".join(cmdline))
| {"golden_diff": "diff --git a/core/admin/start.py b/core/admin/start.py\n--- a/core/admin/start.py\n+++ b/core/admin/start.py\n@@ -52,15 +52,21 @@\n \n test_DNS()\n \n-start_command=\" \".join([\n- \"gunicorn\",\n- f\"--threads {str(os.cpu_count())}\",\n- \"-b :80\",\n+cmdline = [\n+\t\"gunicorn\",\n+\t\"--threads\", f\"{os.cpu_count()}\",\n+\t# If SUBNET6 is defined, gunicorn must listen on IPv6 as well as IPv4\n+\t\"-b\", f\"{'[::]' if os.environ.get('SUBNET6') else ''}:80\",\n \"--logger-class mailu.Logger\",\n \"--worker-tmp-dir /dev/shm\",\n- \"--access-logfile -\" if (log.root.level<=log.INFO) else \"\",\n- \"--error-logfile -\",\n- \"--preload\",\n- \"'mailu:create_app()'\"])\n+\t\"--error-logfile\", \"-\",\n+\t\"--preload\"\n+]\n \n-os.system(start_command)\n+# logging\n+if log.root.level <= log.INFO:\n+\tcmdline.extend([\"--access-logfile\", \"-\"])\n+\n+cmdline.append(\"'mailu:create_app()'\")\n+\n+os.system(\" \".join(cmdline))\n", "issue": "Docker container crashes if IPv6 is disabled at the system level.\nIf listen [::] is found somewhere in the configs, but IPv6 is disabled at the host system level and in the docker, then the process crashes, and, accordingly, the docker container also crashes.\r\n\r\nThis can be manually climbed into each container, corrected, but it is not very convenient.\r\n\r\ndocker exec mailu_front_1 sed -i '/listen \\[/d' /conf/nginx.conf\r\ndocker exec mailu_front_1 sed -i '/listen \\[/d' /etc/nginx/nginx.conf\r\ndocker exec mailu_front_1 sed -i '/listen \\[/d' /etc/nginx/http.d/default.conf\r\ndocker restart mailu_front_1\r\n\r\ndocker restart mailu_webdav_1 && docker exec -it mailu_webdav_1 sed -i 's/hosts =.*\\[::\\].*/hosts = 0.0.0.0:5232/g' /radicale.conf && docker restart mailu_webdav_1\r\n\r\n\r\nCan you add a container launch option to remove listen [::] from configs?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport logging as log\nimport sys\nfrom socrate import system\n\nos.system(\"chown mailu:mailu -R /dkim\")\nos.system(\"find /data | grep -v /fetchmail | xargs -n1 chown mailu:mailu\")\nsystem.drop_privs_to('mailu')\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\nsystem.set_env(['SECRET'])\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n log.info(\"Creating initial admin account %s@%s with mode %s\", account, domain, mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\ndef test_DNS():\n import dns.resolver\n import dns.exception\n import dns.flags\n import dns.rdtypes\n import dns.rdatatype\n import dns.rdataclass\n import time\n # DNS stub configured to do DNSSEC enabled queries\n resolver = dns.resolver.Resolver()\n resolver.use_edns(0, dns.flags.DO, 1232)\n resolver.flags = dns.flags.AD | dns.flags.RD\n nameservers = resolver.nameservers\n for ns in nameservers:\n resolver.nameservers=[ns]\n while True:\n try:\n result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). 
Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation\", ns, e)\n else:\n if result.response.flags & dns.flags.AD:\n break\n log.critical(\"Your DNS resolver at %s isn't doing DNSSEC validation; Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation.\", ns)\n time.sleep(5)\n\ntest_DNS()\n\nstart_command=\" \".join([\n \"gunicorn\",\n f\"--threads {str(os.cpu_count())}\",\n \"-b :80\",\n \"--logger-class mailu.Logger\",\n \"--worker-tmp-dir /dev/shm\",\n \"--access-logfile -\" if (log.root.level<=log.INFO) else \"\",\n \"--error-logfile -\",\n \"--preload\",\n \"'mailu:create_app()'\"])\n\nos.system(start_command)\n", "path": "core/admin/start.py"}]} | 1,539 | 280 |
gh_patches_debug_3119 | rasdani/github-patches | git_diff | Kinto__kinto-186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow POST on buckets using the id_generator or the id provided in the data.
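For context, the request boils down to accepting both of these payload shapes on `POST /buckets` (a sketch; the shapes are inferred from the resource definitions below rather than from verified API docs):

```python
# Explicit id supplied in the data...
payload_with_id = {"data": {"id": "blog"}}
# ...or no id at all, so the collection's id_generator (NameGenerator) assigns one.
payload_without_id = {"data": {}}
```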
</issue>
<code>
[start of kinto/views/buckets.py]
1 from six import text_type
2 from uuid import UUID
3
4 from pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,
5 HTTPException)
6 from pyramid.security import NO_PERMISSION_REQUIRED
7 from pyramid.view import view_config
8
9 from cliquet import resource
10 from cliquet.utils import hmac_digest, build_request, reapply_cors
11
12 from kinto.views import NameGenerator
13
14
15 def create_bucket(request, bucket_id):
16 """Create a bucket if it doesn't exists."""
17 bucket_put = (request.method.lower() == 'put' and
18 request.path.endswith('buckets/default'))
19
20 if not bucket_put:
21 subrequest = build_request(request, {
22 'method': 'PUT',
23 'path': '/buckets/%s' % bucket_id,
24 'body': {"data": {}},
25 'headers': {'If-None-Match': '*'.encode('utf-8')}
26 })
27
28 try:
29 request.invoke_subrequest(subrequest)
30 except HTTPPreconditionFailed:
31 # The bucket already exists
32 pass
33
34
35 def create_collection(request, bucket_id):
36 subpath = request.matchdict.get('subpath')
37 if subpath and subpath.startswith('collections/'):
38 collection_id = subpath.split('/')[1]
39 collection_put = (request.method.lower() == 'put' and
40 request.path.endswith(collection_id))
41 if not collection_put:
42 subrequest = build_request(request, {
43 'method': 'PUT',
44 'path': '/buckets/%s/collections/%s' % (
45 bucket_id, collection_id),
46 'body': {"data": {}},
47 'headers': {'If-None-Match': '*'.encode('utf-8')}
48 })
49 try:
50 request.invoke_subrequest(subrequest)
51 except HTTPPreconditionFailed:
52 # The collection already exists
53 pass
54
55
56 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
57 @view_config(route_name='default_bucket_collection',
58 permission=NO_PERMISSION_REQUIRED)
59 def default_bucket(request):
60 if request.method.lower() == 'options':
61 path = request.path.replace('default', 'unknown')
62 subrequest = build_request(request, {
63 'method': 'OPTIONS',
64 'path': path
65 })
66 return request.invoke_subrequest(subrequest)
67
68 if getattr(request, 'prefixed_userid', None) is None:
69 raise HTTPForbidden # Pass through the forbidden_view_config
70
71 settings = request.registry.settings
72 hmac_secret = settings['cliquet.userid_hmac_secret']
73 # Build the user unguessable bucket_id UUID from its user_id
74 digest = hmac_digest(hmac_secret, request.prefixed_userid)
75 bucket_id = text_type(UUID(digest[:32]))
76 path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)
77 querystring = request.url[(request.url.index(request.path) +
78 len(request.path)):]
79
80 # Make sure bucket exists
81 create_bucket(request, bucket_id)
82
83 # Make sure the collection exists
84 create_collection(request, bucket_id)
85
86 subrequest = build_request(request, {
87 'method': request.method,
88 'path': path + querystring,
89 'body': request.body
90 })
91
92 try:
93 response = request.invoke_subrequest(subrequest)
94 except HTTPException as error:
95 response = reapply_cors(subrequest, error)
96 return response
97
98
99 @resource.register(name='bucket',
100 collection_methods=('GET',),
101 collection_path='/buckets',
102 record_path='/buckets/{{id}}')
103 class Bucket(resource.ProtectedResource):
104 permissions = ('read', 'write', 'collection:create', 'group:create')
105
106 def __init__(self, *args, **kwargs):
107 super(Bucket, self).__init__(*args, **kwargs)
108 self.collection.id_generator = NameGenerator()
109
110 def get_parent_id(self, request):
111 # Buckets are not isolated by user, unlike Cliquet resources.
112 return ''
113
114 def delete(self):
115 result = super(Bucket, self).delete()
116
117 # Delete groups.
118 storage = self.collection.storage
119 parent_id = '/buckets/%s' % self.record_id
120 storage.delete_all(collection_id='group',
121 parent_id=parent_id,
122 with_deleted=False)
123 storage.purge_deleted(collection_id='group',
124 parent_id=parent_id)
125
126 # Delete collections.
127 deleted = storage.delete_all(collection_id='collection',
128 parent_id=parent_id,
129 with_deleted=False)
130 storage.purge_deleted(collection_id='collection',
131 parent_id=parent_id)
132
133 # Delete records.
134 id_field = self.collection.id_field
135 for collection in deleted:
136 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
137 collection[id_field])
138 storage.delete_all(collection_id='record',
139 parent_id=parent_id,
140 with_deleted=False)
141 storage.purge_deleted(collection_id='record', parent_id=parent_id)
142
143 return result
144
[end of kinto/views/buckets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -97,7 +97,7 @@
@resource.register(name='bucket',
- collection_methods=('GET',),
+ collection_methods=('GET', 'POST'),
collection_path='/buckets',
record_path='/buckets/{{id}}')
class Bucket(resource.ProtectedResource):
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -97,7 +97,7 @@\n \n \n @resource.register(name='bucket',\n- collection_methods=('GET',),\n+ collection_methods=('GET', 'POST'),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\n class Bucket(resource.ProtectedResource):\n", "issue": "Allow POST on buckets using the id_generator or the id provided in the data.\n\n", "before_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,\n HTTPException)\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request, reapply_cors\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict.get('subpath')\n if subpath and subpath.startswith('collections/'):\n collection_id = subpath.split('/')[1]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\n@view_config(route_name='default_bucket_collection',\n permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n try:\n response = request.invoke_subrequest(subrequest)\n except HTTPException as error:\n response = reapply_cors(subrequest, error)\n return response\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n 
record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='group',\n parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]} | 1,929 | 97 |
gh_patches_debug_42915 | rasdani/github-patches | git_diff | TheAlgorithms__Python-1403 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`Head` and `temp` names should change
Hi,
In your [Linked List implementation](https://github.com/TheAlgorithms/Python/blob/master/data_structures/linked_list/singly_linked_list.py), I think `temp` is wrongly spelled as `tamp`. The code works, but for readability purposes every `tamp` should be replaced by `temp`.
Also, I find it strange to name the head attribute with a capital `Head`. Generally, capitalization in Python is reserved for class names, not class attributes or methods. If you think the code should be more *Pythonic*, please consider changing every `Head` to `head` in the class attributes of the linked list.
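A quick sketch of the convention being asked for:

```python
class LinkedList:            # CapWords is conventional for class names
    def __init__(self):
        self.head = None     # attributes and locals stay lowercase

temp = LinkedList().head    # a temporary is 'temp', never 'tamp'
```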
</issue>
<code>
[start of data_structures/linked_list/singly_linked_list.py]
1 class Node: # create a Node
2 def __init__(self, data):
3 self.data = data # given data
4 self.next = None # given next to None
5
6
7 class Linked_List:
8 def __init__(self):
9 self.Head = None # Initialize Head to None
10
11 def insert_tail(self, data):
12 if self.Head is None:
13 self.insert_head(data) # If this is first node, call insert_head
14 else:
15 temp = self.Head
16 while temp.next != None: # traverse to last node
17 temp = temp.next
18 temp.next = Node(data) # create node & link to tail
19
20 def insert_head(self, data):
21 newNod = Node(data) # create a new node
22 if self.Head != None:
23 newNod.next = self.Head # link newNode to head
24 self.Head = newNod # make NewNode as Head
25
26 def printList(self): # print every node data
27 tamp = self.Head
28 while tamp is not None:
29 print(tamp.data)
30 tamp = tamp.next
31
32 def delete_head(self): # delete from head
33 temp = self.Head
34 if self.Head != None:
35 self.Head = self.Head.next
36 temp.next = None
37 return temp
38
39 def delete_tail(self): # delete from tail
40 tamp = self.Head
41 if self.Head != None:
42 if self.Head.next is None: # if Head is the only Node in the Linked List
43 self.Head = None
44 else:
45 while tamp.next.next is not None: # find the 2nd last element
46 tamp = tamp.next
47 tamp.next, tamp = (
48 None,
49 tamp.next,
50 ) # (2nd last element).next = None and tamp = last element
51 return tamp
52
53 def isEmpty(self):
54 return self.Head is None # Return if Head is none
55
56 def reverse(self):
57 prev = None
58 current = self.Head
59
60 while current:
61 # Store the current node's next node.
62 next_node = current.next
63 # Make the current node's next point backwards
64 current.next = prev
65 # Make the previous node be the current node
66 prev = current
67 # Make the current node the next node (to progress iteration)
68 current = next_node
69 # Return prev in order to put the head at the end
70 self.Head = prev
71
72
73 def main():
74 A = Linked_List()
75 print("Inserting 1st at Head")
76 a1 = input()
77 A.insert_head(a1)
78 print("Inserting 2nd at Head")
79 a2 = input()
80 A.insert_head(a2)
81 print("\nPrint List : ")
82 A.printList()
83 print("\nInserting 1st at Tail")
84 a3 = input()
85 A.insert_tail(a3)
86 print("Inserting 2nd at Tail")
87 a4 = input()
88 A.insert_tail(a4)
89 print("\nPrint List : ")
90 A.printList()
91 print("\nDelete Head")
92 A.delete_head()
93 print("Delete Tail")
94 A.delete_tail()
95 print("\nPrint List : ")
96 A.printList()
97 print("\nReverse Linked List")
98 A.reverse()
99 print("\nPrint List : ")
100 A.printList()
101
102
103 if __name__ == "__main__":
104 main()
105
[end of data_structures/linked_list/singly_linked_list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py
--- a/data_structures/linked_list/singly_linked_list.py
+++ b/data_structures/linked_list/singly_linked_list.py
@@ -6,56 +6,56 @@
class Linked_List:
def __init__(self):
- self.Head = None # Initialize Head to None
+ self.head = None # Initialize head to None
def insert_tail(self, data):
- if self.Head is None:
+ if self.head is None:
self.insert_head(data) # If this is first node, call insert_head
else:
- temp = self.Head
+ temp = self.head
while temp.next != None: # traverse to last node
temp = temp.next
temp.next = Node(data) # create node & link to tail
def insert_head(self, data):
newNod = Node(data) # create a new node
- if self.Head != None:
- newNod.next = self.Head # link newNode to head
- self.Head = newNod # make NewNode as Head
+ if self.head != None:
+ newNod.next = self.head # link newNode to head
+ self.head = newNod # make NewNode as head
def printList(self): # print every node data
- tamp = self.Head
- while tamp is not None:
- print(tamp.data)
- tamp = tamp.next
+ temp = self.head
+ while temp is not None:
+ print(temp.data)
+ temp = temp.next
def delete_head(self): # delete from head
- temp = self.Head
- if self.Head != None:
- self.Head = self.Head.next
+ temp = self.head
+ if self.head != None:
+ self.head = self.head.next
temp.next = None
return temp
def delete_tail(self): # delete from tail
- tamp = self.Head
- if self.Head != None:
- if self.Head.next is None: # if Head is the only Node in the Linked List
- self.Head = None
+ temp = self.head
+ if self.head != None:
+ if self.head.next is None: # if head is the only Node in the Linked List
+ self.head = None
else:
- while tamp.next.next is not None: # find the 2nd last element
- tamp = tamp.next
- tamp.next, tamp = (
+ while temp.next.next is not None: # find the 2nd last element
+ temp = temp.next
+ temp.next, temp = (
None,
- tamp.next,
- ) # (2nd last element).next = None and tamp = last element
- return tamp
+ temp.next,
+ ) # (2nd last element).next = None and temp = last element
+ return temp
def isEmpty(self):
- return self.Head is None # Return if Head is none
+ return self.head is None # Return if head is none
def reverse(self):
prev = None
- current = self.Head
+ current = self.head
while current:
# Store the current node's next node.
@@ -67,15 +67,15 @@
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
- self.Head = prev
+ self.head = prev
def main():
A = Linked_List()
- print("Inserting 1st at Head")
+ print("Inserting 1st at head")
a1 = input()
A.insert_head(a1)
- print("Inserting 2nd at Head")
+ print("Inserting 2nd at head")
a2 = input()
A.insert_head(a2)
print("\nPrint List : ")
@@ -88,7 +88,7 @@
A.insert_tail(a4)
print("\nPrint List : ")
A.printList()
- print("\nDelete Head")
+ print("\nDelete head")
A.delete_head()
print("Delete Tail")
A.delete_tail()
| {"golden_diff": "diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py\n--- a/data_structures/linked_list/singly_linked_list.py\n+++ b/data_structures/linked_list/singly_linked_list.py\n@@ -6,56 +6,56 @@\n \n class Linked_List:\n def __init__(self):\n- self.Head = None # Initialize Head to None\n+ self.head = None # Initialize head to None\n \n def insert_tail(self, data):\n- if self.Head is None:\n+ if self.head is None:\n self.insert_head(data) # If this is first node, call insert_head\n else:\n- temp = self.Head\n+ temp = self.head\n while temp.next != None: # traverse to last node\n temp = temp.next\n temp.next = Node(data) # create node & link to tail\n \n def insert_head(self, data):\n newNod = Node(data) # create a new node\n- if self.Head != None:\n- newNod.next = self.Head # link newNode to head\n- self.Head = newNod # make NewNode as Head\n+ if self.head != None:\n+ newNod.next = self.head # link newNode to head\n+ self.head = newNod # make NewNode as head\n \n def printList(self): # print every node data\n- tamp = self.Head\n- while tamp is not None:\n- print(tamp.data)\n- tamp = tamp.next\n+ temp = self.head\n+ while temp is not None:\n+ print(temp.data)\n+ temp = temp.next\n \n def delete_head(self): # delete from head\n- temp = self.Head\n- if self.Head != None:\n- self.Head = self.Head.next\n+ temp = self.head\n+ if self.head != None:\n+ self.head = self.head.next\n temp.next = None\n return temp\n \n def delete_tail(self): # delete from tail\n- tamp = self.Head\n- if self.Head != None:\n- if self.Head.next is None: # if Head is the only Node in the Linked List\n- self.Head = None\n+ temp = self.head\n+ if self.head != None:\n+ if self.head.next is None: # if head is the only Node in the Linked List\n+ self.head = None\n else:\n- while tamp.next.next is not None: # find the 2nd last element\n- tamp = tamp.next\n- tamp.next, tamp = (\n+ while temp.next.next is not None: # find the 2nd last element\n+ temp = temp.next\n+ temp.next, temp = (\n None,\n- tamp.next,\n- ) # (2nd last element).next = None and tamp = last element\n- return tamp\n+ temp.next,\n+ ) # (2nd last element).next = None and temp = last element\n+ return temp\n \n def isEmpty(self):\n- return self.Head is None # Return if Head is none\n+ return self.head is None # Return if head is none\n \n def reverse(self):\n prev = None\n- current = self.Head\n+ current = self.head\n \n while current:\n # Store the current node's next node.\n@@ -67,15 +67,15 @@\n # Make the current node the next node (to progress iteration)\n current = next_node\n # Return prev in order to put the head at the end\n- self.Head = prev\n+ self.head = prev\n \n \n def main():\n A = Linked_List()\n- print(\"Inserting 1st at Head\")\n+ print(\"Inserting 1st at head\")\n a1 = input()\n A.insert_head(a1)\n- print(\"Inserting 2nd at Head\")\n+ print(\"Inserting 2nd at head\")\n a2 = input()\n A.insert_head(a2)\n print(\"\\nPrint List : \")\n@@ -88,7 +88,7 @@\n A.insert_tail(a4)\n print(\"\\nPrint List : \")\n A.printList()\n- print(\"\\nDelete Head\")\n+ print(\"\\nDelete head\")\n A.delete_head()\n print(\"Delete Tail\")\n A.delete_tail()\n", "issue": "`Head` and `temp` names should change\nHi,\r\n\r\nIn your [Linked List implementation](https://github.com/TheAlgorithms/Python/blob/master/data_structures/linked_list/singly_linked_list.py), I think `temp` is wrongly spelled as `tamp`. 
The code works but for readability purpose all `tamp` should be replaced by `temp`.\r\n\r\nAlso, I find it strange to name the `head` with a capital `Head`. Generally, capitalization in Python is saved for Class names, not class attributes or methods. If you think the code should be more *Pythonic*, please consider changing all `Head` to `head` in the class attributes for Linked List.\r\n\r\n\n", "before_files": [{"content": "class Node: # create a Node\n def __init__(self, data):\n self.data = data # given data\n self.next = None # given next to None\n\n\nclass Linked_List:\n def __init__(self):\n self.Head = None # Initialize Head to None\n\n def insert_tail(self, data):\n if self.Head is None:\n self.insert_head(data) # If this is first node, call insert_head\n else:\n temp = self.Head\n while temp.next != None: # traverse to last node\n temp = temp.next\n temp.next = Node(data) # create node & link to tail\n\n def insert_head(self, data):\n newNod = Node(data) # create a new node\n if self.Head != None:\n newNod.next = self.Head # link newNode to head\n self.Head = newNod # make NewNode as Head\n\n def printList(self): # print every node data\n tamp = self.Head\n while tamp is not None:\n print(tamp.data)\n tamp = tamp.next\n\n def delete_head(self): # delete from head\n temp = self.Head\n if self.Head != None:\n self.Head = self.Head.next\n temp.next = None\n return temp\n\n def delete_tail(self): # delete from tail\n tamp = self.Head\n if self.Head != None:\n if self.Head.next is None: # if Head is the only Node in the Linked List\n self.Head = None\n else:\n while tamp.next.next is not None: # find the 2nd last element\n tamp = tamp.next\n tamp.next, tamp = (\n None,\n tamp.next,\n ) # (2nd last element).next = None and tamp = last element\n return tamp\n\n def isEmpty(self):\n return self.Head is None # Return if Head is none\n\n def reverse(self):\n prev = None\n current = self.Head\n\n while current:\n # Store the current node's next node.\n next_node = current.next\n # Make the current node's next point backwards\n current.next = prev\n # Make the previous node be the current node\n prev = current\n # Make the current node the next node (to progress iteration)\n current = next_node\n # Return prev in order to put the head at the end\n self.Head = prev\n\n\ndef main():\n A = Linked_List()\n print(\"Inserting 1st at Head\")\n a1 = input()\n A.insert_head(a1)\n print(\"Inserting 2nd at Head\")\n a2 = input()\n A.insert_head(a2)\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nInserting 1st at Tail\")\n a3 = input()\n A.insert_tail(a3)\n print(\"Inserting 2nd at Tail\")\n a4 = input()\n A.insert_tail(a4)\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nDelete Head\")\n A.delete_head()\n print(\"Delete Tail\")\n A.delete_tail()\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nReverse Linked List\")\n A.reverse()\n print(\"\\nPrint List : \")\n A.printList()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "data_structures/linked_list/singly_linked_list.py"}]} | 1,646 | 972 |
gh_patches_debug_38558 | rasdani/github-patches | git_diff | hylang__hy-1431 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
repl shouldn't crash
```Hy
=> (defmacro bad [] `(macro-error 'x ""))
<function <lambda> at 0x000001D01D0ED7B8>
=> (bad)
Traceback (most recent call last):
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 99, in runsource
ast_callback)
File "c:\users\me\documents\github\hy\hy\importer.py", line 198, in hy_eval
eval(ast_compile(_ast, "<eval_body>", "exec"), namespace)
File "<eval_body>", line 1, in <module>
hy.errors.HyMacroExpansionError: <exception str() failed>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\ME\workspace\hy36-gilch\Scripts\hy-script.py", line 11, in <module>
load_entry_point('hy', 'console_scripts', 'hy')()
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 346, in hy_main
sys.exit(cmdline_handler("hy", sys.argv))
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 341, in cmdline_handler
return run_repl(spy=options.spy, output_fn=options.repl_output_fn)
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 236, in run_repl
os=platform.system()
File "C:\Users\ME\AppData\Local\Programs\Python\Python36\lib\code.py", line 233, in interact
more = self.push(line)
File "C:\Users\ME\AppData\Local\Programs\Python\Python36\lib\code.py", line 259, in push
more = self.runsource(source, self.filename)
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 105, in runsource
print(e, file=sys.stderr)
File "c:\users\me\documents\github\hy\hy\errors.py", line 46, in __str__
line = self.expression.start_line
AttributeError: 'HySymbol' object has no attribute 'start_line'
```
The repl should report errors, but not exit.
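One defensive shape for `HyTypeError.__str__` that would avoid the secondary crash (a sketch of the guard only, not necessarily the final fix):

```python
def __str__(self):
    # Fall back gracefully when the expression carries no position info.
    has_position = all(
        getattr(self.expression, attr, None) is not None
        for attr in ("start_line", "start_column", "end_column"))
    if not has_position:
        return '  File "%s", unknown location\n%s: %s' % (
            self.filename, self.__class__.__name__, self.message)
    ...  # otherwise keep the existing positioned rendering
```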
</issue>
<code>
[start of hy/errors.py]
1 # -*- encoding: utf-8 -*-
2 # Copyright 2017 the authors.
3 # This file is part of Hy, which is free software licensed under the Expat
4 # license. See the LICENSE.
5
6 import traceback
7
8 from clint.textui import colored
9
10
11 class HyError(Exception):
12 """
13 Generic Hy error. All internal Exceptions will be subclassed from this
14 Exception.
15 """
16 pass
17
18
19 class HyCompileError(HyError):
20 def __init__(self, exception, traceback=None):
21 self.exception = exception
22 self.traceback = traceback
23
24 def __str__(self):
25 if isinstance(self.exception, HyTypeError):
26 return str(self.exception)
27 if self.traceback:
28 tb = "".join(traceback.format_tb(self.traceback)).strip()
29 else:
30 tb = "No traceback available. 😟"
31 return("Internal Compiler Bug 😱\n⤷ %s: %s\nCompilation traceback:\n%s"
32 % (self.exception.__class__.__name__,
33 self.exception, tb))
34
35
36 class HyTypeError(TypeError):
37 def __init__(self, expression, message):
38 super(HyTypeError, self).__init__(message)
39 self.expression = expression
40 self.message = message
41 self.source = None
42 self.filename = None
43
44 def __str__(self):
45
46 line = self.expression.start_line
47 start = self.expression.start_column
48 end = self.expression.end_column
49
50 source = []
51 if self.source is not None:
52 source = self.source.split("\n")[line-1:self.expression.end_line]
53
54 if line == self.expression.end_line:
55 length = end - start
56 else:
57 length = len(source[0]) - start
58
59 result = ""
60
61 result += ' File "%s", line %d, column %d\n\n' % (self.filename,
62 line,
63 start)
64
65 if len(source) == 1:
66 result += ' %s\n' % colored.red(source[0])
67 result += ' %s%s\n' % (' '*(start-1),
68 colored.green('^' + '-'*(length-1) + '^'))
69 if len(source) > 1:
70 result += ' %s\n' % colored.red(source[0])
71 result += ' %s%s\n' % (' '*(start-1),
72 colored.green('^' + '-'*length))
73 if len(source) > 2: # write the middle lines
74 for line in source[1:-1]:
75 result += ' %s\n' % colored.red("".join(line))
76 result += ' %s\n' % colored.green("-"*len(line))
77
78 # write the last line
79 result += ' %s\n' % colored.red("".join(source[-1]))
80 result += ' %s\n' % colored.green('-'*(end-1) + '^')
81
82 result += colored.yellow("%s: %s\n\n" %
83 (self.__class__.__name__,
84 self.message.encode('utf-8')))
85
86 return result
87
88
89 class HyMacroExpansionError(HyTypeError):
90 pass
91
92
93 class HyIOError(HyError, IOError):
94 """
95 Trivial subclass of IOError and HyError, to distinguish between
96 IOErrors raised by Hy itself as opposed to Hy programs.
97 """
98 pass
99
[end of hy/errors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hy/errors.py b/hy/errors.py
--- a/hy/errors.py
+++ b/hy/errors.py
@@ -43,41 +43,47 @@
def __str__(self):
- line = self.expression.start_line
- start = self.expression.start_column
- end = self.expression.end_column
-
- source = []
- if self.source is not None:
- source = self.source.split("\n")[line-1:self.expression.end_line]
-
- if line == self.expression.end_line:
- length = end - start
- else:
- length = len(source[0]) - start
-
result = ""
- result += ' File "%s", line %d, column %d\n\n' % (self.filename,
- line,
- start)
-
- if len(source) == 1:
- result += ' %s\n' % colored.red(source[0])
- result += ' %s%s\n' % (' '*(start-1),
- colored.green('^' + '-'*(length-1) + '^'))
- if len(source) > 1:
- result += ' %s\n' % colored.red(source[0])
- result += ' %s%s\n' % (' '*(start-1),
- colored.green('^' + '-'*length))
- if len(source) > 2: # write the middle lines
- for line in source[1:-1]:
- result += ' %s\n' % colored.red("".join(line))
- result += ' %s\n' % colored.green("-"*len(line))
-
- # write the last line
- result += ' %s\n' % colored.red("".join(source[-1]))
- result += ' %s\n' % colored.green('-'*(end-1) + '^')
+ if all(getattr(self.expression, x, None) is not None
+ for x in ("start_line", "start_column", "end_column")):
+
+ line = self.expression.start_line
+ start = self.expression.start_column
+ end = self.expression.end_column
+
+ source = []
+ if self.source is not None:
+ source = self.source.split("\n")[line-1:self.expression.end_line]
+
+ if line == self.expression.end_line:
+ length = end - start
+ else:
+ length = len(source[0]) - start
+
+ result += ' File "%s", line %d, column %d\n\n' % (self.filename,
+ line,
+ start)
+
+ if len(source) == 1:
+ result += ' %s\n' % colored.red(source[0])
+ result += ' %s%s\n' % (' '*(start-1),
+ colored.green('^' + '-'*(length-1) + '^'))
+ if len(source) > 1:
+ result += ' %s\n' % colored.red(source[0])
+ result += ' %s%s\n' % (' '*(start-1),
+ colored.green('^' + '-'*length))
+ if len(source) > 2: # write the middle lines
+ for line in source[1:-1]:
+ result += ' %s\n' % colored.red("".join(line))
+ result += ' %s\n' % colored.green("-"*len(line))
+
+ # write the last line
+ result += ' %s\n' % colored.red("".join(source[-1]))
+ result += ' %s\n' % colored.green('-'*(end-1) + '^')
+
+ else:
+ result += ' File "%s", unknown location\n' % self.filename
result += colored.yellow("%s: %s\n\n" %
(self.__class__.__name__,
| {"golden_diff": "diff --git a/hy/errors.py b/hy/errors.py\n--- a/hy/errors.py\n+++ b/hy/errors.py\n@@ -43,41 +43,47 @@\n \n def __str__(self):\n \n- line = self.expression.start_line\n- start = self.expression.start_column\n- end = self.expression.end_column\n-\n- source = []\n- if self.source is not None:\n- source = self.source.split(\"\\n\")[line-1:self.expression.end_line]\n-\n- if line == self.expression.end_line:\n- length = end - start\n- else:\n- length = len(source[0]) - start\n-\n result = \"\"\n \n- result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n- line,\n- start)\n-\n- if len(source) == 1:\n- result += ' %s\\n' % colored.red(source[0])\n- result += ' %s%s\\n' % (' '*(start-1),\n- colored.green('^' + '-'*(length-1) + '^'))\n- if len(source) > 1:\n- result += ' %s\\n' % colored.red(source[0])\n- result += ' %s%s\\n' % (' '*(start-1),\n- colored.green('^' + '-'*length))\n- if len(source) > 2: # write the middle lines\n- for line in source[1:-1]:\n- result += ' %s\\n' % colored.red(\"\".join(line))\n- result += ' %s\\n' % colored.green(\"-\"*len(line))\n-\n- # write the last line\n- result += ' %s\\n' % colored.red(\"\".join(source[-1]))\n- result += ' %s\\n' % colored.green('-'*(end-1) + '^')\n+ if all(getattr(self.expression, x, None) is not None\n+ for x in (\"start_line\", \"start_column\", \"end_column\")):\n+\n+ line = self.expression.start_line\n+ start = self.expression.start_column\n+ end = self.expression.end_column\n+\n+ source = []\n+ if self.source is not None:\n+ source = self.source.split(\"\\n\")[line-1:self.expression.end_line]\n+\n+ if line == self.expression.end_line:\n+ length = end - start\n+ else:\n+ length = len(source[0]) - start\n+\n+ result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n+ line,\n+ start)\n+\n+ if len(source) == 1:\n+ result += ' %s\\n' % colored.red(source[0])\n+ result += ' %s%s\\n' % (' '*(start-1),\n+ colored.green('^' + '-'*(length-1) + '^'))\n+ if len(source) > 1:\n+ result += ' %s\\n' % colored.red(source[0])\n+ result += ' %s%s\\n' % (' '*(start-1),\n+ colored.green('^' + '-'*length))\n+ if len(source) > 2: # write the middle lines\n+ for line in source[1:-1]:\n+ result += ' %s\\n' % colored.red(\"\".join(line))\n+ result += ' %s\\n' % colored.green(\"-\"*len(line))\n+\n+ # write the last line\n+ result += ' %s\\n' % colored.red(\"\".join(source[-1]))\n+ result += ' %s\\n' % colored.green('-'*(end-1) + '^')\n+\n+ else:\n+ result += ' File \"%s\", unknown location\\n' % self.filename\n \n result += colored.yellow(\"%s: %s\\n\\n\" %\n (self.__class__.__name__,\n", "issue": "repl shouldn't crash\n```Hy\r\n=> (defmacro bad [] `(macro-error 'x \"\"))\r\n<function <lambda> at 0x000001D01D0ED7B8>\r\n=> (bad)\r\nTraceback (most recent call last):\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 99, in runsource\r\n ast_callback)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\importer.py\", line 198, in hy_eval\r\n eval(ast_compile(_ast, \"<eval_body>\", \"exec\"), namespace)\r\n File \"<eval_body>\", line 1, in <module>\r\nhy.errors.HyMacroExpansionError: <exception str() failed>\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\ME\\workspace\\hy36-gilch\\Scripts\\hy-script.py\", line 11, in <module>\r\n load_entry_point('hy', 'console_scripts', 'hy')()\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 346, in hy_main\r\n sys.exit(cmdline_handler(\"hy\", 
sys.argv))\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 341, in cmdline_handler\r\n return run_repl(spy=options.spy, output_fn=options.repl_output_fn)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 236, in run_repl\r\n os=platform.system()\r\n File \"C:\\Users\\ME\\AppData\\Local\\Programs\\Python\\Python36\\lib\\code.py\", line 233, in interact\r\n more = self.push(line)\r\n File \"C:\\Users\\ME\\AppData\\Local\\Programs\\Python\\Python36\\lib\\code.py\", line 259, in push\r\n more = self.runsource(source, self.filename)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 105, in runsource\r\n print(e, file=sys.stderr)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\errors.py\", line 46, in __str__\r\n line = self.expression.start_line\r\nAttributeError: 'HySymbol' object has no attribute 'start_line'\r\n```\r\nThe repl should report errors, but not exit.\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n# Copyright 2017 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nimport traceback\n\nfrom clint.textui import colored\n\n\nclass HyError(Exception):\n \"\"\"\n Generic Hy error. All internal Exceptions will be subclassed from this\n Exception.\n \"\"\"\n pass\n\n\nclass HyCompileError(HyError):\n def __init__(self, exception, traceback=None):\n self.exception = exception\n self.traceback = traceback\n\n def __str__(self):\n if isinstance(self.exception, HyTypeError):\n return str(self.exception)\n if self.traceback:\n tb = \"\".join(traceback.format_tb(self.traceback)).strip()\n else:\n tb = \"No traceback available. \ud83d\ude1f\"\n return(\"Internal Compiler Bug \ud83d\ude31\\n\u2937 %s: %s\\nCompilation traceback:\\n%s\"\n % (self.exception.__class__.__name__,\n self.exception, tb))\n\n\nclass HyTypeError(TypeError):\n def __init__(self, expression, message):\n super(HyTypeError, self).__init__(message)\n self.expression = expression\n self.message = message\n self.source = None\n self.filename = None\n\n def __str__(self):\n\n line = self.expression.start_line\n start = self.expression.start_column\n end = self.expression.end_column\n\n source = []\n if self.source is not None:\n source = self.source.split(\"\\n\")[line-1:self.expression.end_line]\n\n if line == self.expression.end_line:\n length = end - start\n else:\n length = len(source[0]) - start\n\n result = \"\"\n\n result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n line,\n start)\n\n if len(source) == 1:\n result += ' %s\\n' % colored.red(source[0])\n result += ' %s%s\\n' % (' '*(start-1),\n colored.green('^' + '-'*(length-1) + '^'))\n if len(source) > 1:\n result += ' %s\\n' % colored.red(source[0])\n result += ' %s%s\\n' % (' '*(start-1),\n colored.green('^' + '-'*length))\n if len(source) > 2: # write the middle lines\n for line in source[1:-1]:\n result += ' %s\\n' % colored.red(\"\".join(line))\n result += ' %s\\n' % colored.green(\"-\"*len(line))\n\n # write the last line\n result += ' %s\\n' % colored.red(\"\".join(source[-1]))\n result += ' %s\\n' % colored.green('-'*(end-1) + '^')\n\n result += colored.yellow(\"%s: %s\\n\\n\" %\n (self.__class__.__name__,\n self.message.encode('utf-8')))\n\n return result\n\n\nclass HyMacroExpansionError(HyTypeError):\n pass\n\n\nclass HyIOError(HyError, IOError):\n \"\"\"\n Trivial subclass of IOError and HyError, to distinguish between\n IOErrors raised by Hy itself as opposed to Hy programs.\n \"\"\"\n 
pass\n", "path": "hy/errors.py"}]} | 1,999 | 868 |
gh_patches_debug_1599 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2214 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AZURE_80 - dotnet_framework_version with v6.0 fails
**Describe the issue**
Currently, .NET 6.0 is the latest LTS version, but CKV_AZURE_80 still expects the latest version to be v5.0.
**Examples**
```
resource "azurerm_app_service" "searchApi" {
...
site_config {
dotnet_framework_version = "v6.0"
}
}
```
There should be no warning for CKV_AZURE_80 with the above configuration.
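In terms of the check itself, the expectation presumably just moves forward one version (a sketch mirroring the check's current shape):

```python
def get_expected_value(self):
    # .NET 6.0 is the current LTS release, so the check should accept it.
    return "v6.0"
```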
**Version (please complete the following information):**
- Checkov Version 2.0.717
</issue>
<code>
[start of checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py]
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class AppServiceDotnetFrameworkVersion(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure that 'Net Framework' version is the latest, if used as a part of the web app"
8 id = "CKV_AZURE_80"
9 supported_resources = ['azurerm_app_service']
10 categories = [CheckCategories.GENERAL_SECURITY]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "site_config/0/dotnet_framework_version"
15
16 def get_expected_value(self):
17 return "v5.0"
18
19
20 check = AppServiceDotnetFrameworkVersion()
21
[end of checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py
--- a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py
+++ b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py
@@ -14,7 +14,7 @@
return "site_config/0/dotnet_framework_version"
def get_expected_value(self):
- return "v5.0"
+ return "v6.0"
check = AppServiceDotnetFrameworkVersion()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py\n--- a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py\n+++ b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py\n@@ -14,7 +14,7 @@\n return \"site_config/0/dotnet_framework_version\"\n \n def get_expected_value(self):\n- return \"v5.0\"\n+ return \"v6.0\"\n \n \n check = AppServiceDotnetFrameworkVersion()\n", "issue": "CKV_AZURE_80 - dotnet_framework_version with v6.0 fails\n**Describe the issue**\r\nCurrently .NET 6.0 is the latest LTS version. However, CKV_AZURE_80 expects that latest version is v5.0.\r\n\r\n**Examples**\r\n```\r\nresource \"azurerm_app_service\" \"searchApi\" {\r\n ...\r\n site_config {\r\n dotnet_framework_version = \"v6.0\"\r\n }\r\n}\r\n```\r\nThere should be no warning for CKV_AZURE_80 with the above configuration.\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.0.717\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AppServiceDotnetFrameworkVersion(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that 'Net Framework' version is the latest, if used as a part of the web app\"\n id = \"CKV_AZURE_80\"\n supported_resources = ['azurerm_app_service']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"site_config/0/dotnet_framework_version\"\n\n def get_expected_value(self):\n return \"v5.0\"\n\n\ncheck = AppServiceDotnetFrameworkVersion()\n", "path": "checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py"}]} | 912 | 138 |
gh_patches_debug_36974 | rasdani/github-patches | git_diff | pulp__pulpcore-2315 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clean up TaskReservedResources/task-table at migration to new-tasking-system
See https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.
Migration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py
This wants to be cherry-picked into 3.14/3.15/3.16 (after which the offending table no longer exists).
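The purge itself is small; roughly this shape inside the migration (a sketch, with model names taken from the migration below):

```python
def purge_reservedresources(apps, schema_editor):
    # Remove any leftover reservation rows before their tables go away.
    apps.get_model('core', 'TaskReservedResource').objects.all().delete()
    apps.get_model('core', 'ReservedResource').objects.all().delete()
```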
</issue>
<code>
[start of pulpcore/app/migrations/0064_add_new_style_task_columns.py]
1 # Generated by Django 2.2.20 on 2021-04-27 07:51
2
3 import django.contrib.postgres.fields
4 import django.contrib.postgres.fields.jsonb
5 from django.db import migrations, models
6
7
8 def copy_reserved_resources_record(apps, schema_editor):
9 Task = apps.get_model('core', 'Task')
10 for task in Task.objects.iterator():
11 task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
12 task.save()
13
14
15 def noop(apps, schema_editor):
16 pass
17
18
19 class Migration(migrations.Migration):
20
21 dependencies = [
22 ('core', '0063_repository_retained_versions'),
23 ]
24
25 operations = [
26 migrations.AddField(
27 model_name='task',
28 name='args',
29 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
30 ),
31 migrations.AddField(
32 model_name='task',
33 name='kwargs',
34 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
35 ),
36 migrations.AddField(
37 model_name='task',
38 name='_reserved_resources_record',
39 field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),
40 ),
41 migrations.AlterField(
42 model_name='task',
43 name='_resource_job_id',
44 field=models.UUIDField(null=True),
45 ),
46 migrations.AlterField(
47 model_name='progressreport',
48 name='state',
49 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),
50 ),
51 migrations.AlterField(
52 model_name='task',
53 name='state',
54 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),
55 ),
56 migrations.AddIndex(
57 model_name='task',
58 index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),
59 ),
60 migrations.RunPython(
61 code=copy_reserved_resources_record,
62 reverse_code=noop,
63 ),
64 migrations.RemoveField(
65 model_name='taskreservedresourcerecord',
66 name='resource',
67 ),
68 migrations.RemoveField(
69 model_name='taskreservedresourcerecord',
70 name='task',
71 ),
72 migrations.DeleteModel(
73 name='ReservedResourceRecord',
74 ),
75 migrations.DeleteModel(
76 name='TaskReservedResourceRecord',
77 ),
78 migrations.RenameField(
79 model_name='task',
80 old_name='_reserved_resources_record',
81 new_name='reserved_resources_record',
82 ),
83 ]
84
[end of pulpcore/app/migrations/0064_add_new_style_task_columns.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py
+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
@@ -4,16 +4,34 @@
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
+TASK_BATCH_SIZE = 1000
+
def copy_reserved_resources_record(apps, schema_editor):
Task = apps.get_model('core', 'Task')
- for task in Task.objects.iterator():
+
+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.
+ # When we hit 1K tasks, go to the db for the batch.
+ # Make sure to update the final batch!
+ tasks = []
+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):
task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
- task.save()
+ tasks.append(task)
+ if len(tasks) == TASK_BATCH_SIZE:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+ tasks.clear()
+
+ # Update last set of tasks
+ if len(tasks) > 0:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+
+def purge_reservedresources(apps, schema_editor):
+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')
+ TaskReservedResource.objects.all().delete()
-def noop(apps, schema_editor):
- pass
+ ReservedResource = apps.get_model('core', 'ReservedResource')
+ ReservedResource.objects.all().delete()
class Migration(migrations.Migration):
@@ -23,6 +41,12 @@
]
operations = [
+ # Purge any ReservedResource entries - if there are any, they're orphans
+ migrations.RunPython(
+ code=purge_reservedresources,
+ reverse_code=migrations.RunPython.noop,
+ ),
+ # Update entities for the new task-system
migrations.AddField(
model_name='task',
name='args',
@@ -59,7 +83,7 @@
),
migrations.RunPython(
code=copy_reserved_resources_record,
- reverse_code=noop,
+ reverse_code=migrations.RunPython.noop,
),
migrations.RemoveField(
model_name='taskreservedresourcerecord',
@@ -80,4 +104,5 @@
old_name='_reserved_resources_record',
new_name='reserved_resources_record',
),
+
]
| {"golden_diff": "diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n@@ -4,16 +4,34 @@\n import django.contrib.postgres.fields.jsonb\n from django.db import migrations, models\n \n+TASK_BATCH_SIZE = 1000\n+\n \n def copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n- for task in Task.objects.iterator():\n+\n+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.\n+ # When we hit 1K tasks, go to the db for the batch.\n+ # Make sure to update the final batch!\n+ tasks = []\n+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n- task.save()\n+ tasks.append(task)\n+ if len(tasks) == TASK_BATCH_SIZE:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+ tasks.clear()\n+\n+ # Update last set of tasks\n+ if len(tasks) > 0:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+\n \n+def purge_reservedresources(apps, schema_editor):\n+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')\n+ TaskReservedResource.objects.all().delete()\n \n-def noop(apps, schema_editor):\n- pass\n+ ReservedResource = apps.get_model('core', 'ReservedResource')\n+ ReservedResource.objects.all().delete()\n \n \n class Migration(migrations.Migration):\n@@ -23,6 +41,12 @@\n ]\n \n operations = [\n+ # Purge any ReservedResource entries - if there are any, they're orphans\n+ migrations.RunPython(\n+ code=purge_reservedresources,\n+ reverse_code=migrations.RunPython.noop,\n+ ),\n+ # Update entities for the new task-system\n migrations.AddField(\n model_name='task',\n name='args',\n@@ -59,7 +83,7 @@\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n- reverse_code=noop,\n+ reverse_code=migrations.RunPython.noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n@@ -80,4 +104,5 @@\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n+\n ]\n", "issue": "Clean up TaskReservedResources/task-table at migration to new-tasking-system\nSee https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.\r\n\r\nMigration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py\r\n\r\nThis wants to be cherrypicked into 3.14/15/16 (after which the offending table no longer exists)\n", "before_files": [{"content": "# Generated by Django 2.2.20 on 2021-04-27 07:51\n\nimport django.contrib.postgres.fields\nimport django.contrib.postgres.fields.jsonb\nfrom django.db import migrations, models\n\n\ndef copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n for task in Task.objects.iterator():\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n task.save()\n\n\ndef noop(apps, schema_editor):\n pass\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0063_repository_retained_versions'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='task',\n name='args',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n model_name='task',\n name='kwargs',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n 
),\n migrations.AddField(\n model_name='task',\n name='_reserved_resources_record',\n field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),\n ),\n migrations.AlterField(\n model_name='task',\n name='_resource_job_id',\n field=models.UUIDField(null=True),\n ),\n migrations.AlterField(\n model_name='progressreport',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),\n ),\n migrations.AlterField(\n model_name='task',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),\n ),\n migrations.AddIndex(\n model_name='task',\n index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n reverse_code=noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='resource',\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='task',\n ),\n migrations.DeleteModel(\n name='ReservedResourceRecord',\n ),\n migrations.DeleteModel(\n name='TaskReservedResourceRecord',\n ),\n migrations.RenameField(\n model_name='task',\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n ]\n", "path": "pulpcore/app/migrations/0064_add_new_style_task_columns.py"}]} | 1,427 | 601 |
gh_patches_debug_35139 | rasdani/github-patches | git_diff | spotify__luigi-1744 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HdfsTarget commands fail when many targets are instantiated
I've recently added an existence check to a large mapreduce task for which some input files may be missing. With a large enough set of inputs, it will fail every time. I've simplified it to the following code:
``` python
from luigi.contrib.hdfs import HdfsTarget
many_targets = [HdfsTarget('/') for _ in range(2000)]
all(target.exists() for target in many_targets)
```
This will break if I use any value past 1000 or so. Here the client uses snakebite. For a more direct triggering, we can also do
``` python
from snakebite.client import AutoConfigClient
clients = [AutoConfigClient() for _ in range(10000)]
all(client.test('/', exists=True) for client in clients)
```
In either case, the bug goes away if I use a generator expression rather than a list comprehension. The problem is that when I'm dealing with objects coming out of luigi calls like input_hadoop, it's too late for me to decide between lists and iterators. I can code around this by instantiating all of my HdfsTargets with the same client, but I'm not sure this is safe. It could also be fixed in luigi if we had get_autoconfig_client return the same object each time. Is there any reason this wouldn't work?
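One way to get that behavior without changing call sites is to memoize the client, for example per thread (a sketch only; whether a shared client is safe across threads and processes is exactly the open question):

```python
import threading

from snakebite.client import AutoConfigClient

_CLIENT_CACHE = threading.local()


def get_autoconfig_client(client_factory=AutoConfigClient):
    """Build the configured client once per thread and reuse it afterwards."""
    try:
        return _CLIENT_CACHE.client
    except AttributeError:
        _CLIENT_CACHE.client = client_factory()
        return _CLIENT_CACHE.client
```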
</issue>
<code>
[start of luigi/contrib/hdfs/clients.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2012-2015 Spotify AB
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 #
17
18 """
19 The implementations of the hdfs clients. The hadoop cli client and the
20 snakebite client.
21 """
22
23
24 from luigi.contrib.hdfs import config as hdfs_config
25 from luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client
26 from luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client
27 from luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients
28 import luigi.contrib.target
29 import logging
30
31 logger = logging.getLogger('luigi-interface')
32
33
34 def get_autoconfig_client():
35 """
36 Creates the client as specified in the `luigi.cfg` configuration.
37 """
38 configured_client = hdfs_config.get_configured_hdfs_client()
39 if configured_client == "webhdfs":
40 return hdfs_webhdfs_client.WebHdfsClient()
41 if configured_client == "snakebite":
42 return hdfs_snakebite_client.SnakebiteHdfsClient()
43 if configured_client == "snakebite_with_hadoopcli_fallback":
44 return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),
45 hdfs_hadoopcli_clients.create_hadoopcli_client()])
46 if configured_client == "hadoopcli":
47 return hdfs_hadoopcli_clients.create_hadoopcli_client()
48 raise Exception("Unknown hdfs client " + configured_client)
49
50
51 def _with_ac(method_name):
52 def result(*args, **kwargs):
53 return getattr(get_autoconfig_client(), method_name)(*args, **kwargs)
54 return result
55
56 exists = _with_ac('exists')
57 rename = _with_ac('rename')
58 remove = _with_ac('remove')
59 mkdir = _with_ac('mkdir')
60 listdir = _with_ac('listdir')
61
[end of luigi/contrib/hdfs/clients.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/luigi/contrib/hdfs/clients.py b/luigi/contrib/hdfs/clients.py
--- a/luigi/contrib/hdfs/clients.py
+++ b/luigi/contrib/hdfs/clients.py
@@ -19,33 +19,42 @@
The implementations of the hdfs clients. The hadoop cli client and the
snakebite client.
"""
-
+import logging
+import threading
from luigi.contrib.hdfs import config as hdfs_config
from luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client
from luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client
from luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients
import luigi.contrib.target
-import logging
logger = logging.getLogger('luigi-interface')
+_AUTOCONFIG_CLIENT = threading.local()
+
-def get_autoconfig_client():
+def get_autoconfig_client(client_cache=_AUTOCONFIG_CLIENT):
"""
Creates the client as specified in the `luigi.cfg` configuration.
"""
- configured_client = hdfs_config.get_configured_hdfs_client()
- if configured_client == "webhdfs":
- return hdfs_webhdfs_client.WebHdfsClient()
- if configured_client == "snakebite":
- return hdfs_snakebite_client.SnakebiteHdfsClient()
- if configured_client == "snakebite_with_hadoopcli_fallback":
- return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),
- hdfs_hadoopcli_clients.create_hadoopcli_client()])
- if configured_client == "hadoopcli":
- return hdfs_hadoopcli_clients.create_hadoopcli_client()
- raise Exception("Unknown hdfs client " + configured_client)
+ try:
+ return client_cache.client
+ except AttributeError:
+ configured_client = hdfs_config.get_configured_hdfs_client()
+ if configured_client == "webhdfs":
+ client_cache.client = hdfs_webhdfs_client.WebHdfsClient()
+ elif configured_client == "snakebite":
+ client_cache.client = hdfs_snakebite_client.SnakebiteHdfsClient()
+ elif configured_client == "snakebite_with_hadoopcli_fallback":
+ client_cache.client = luigi.contrib.target.CascadingClient([
+ hdfs_snakebite_client.SnakebiteHdfsClient(),
+ hdfs_hadoopcli_clients.create_hadoopcli_client(),
+ ])
+ elif configured_client == "hadoopcli":
+ client_cache.client = hdfs_hadoopcli_clients.create_hadoopcli_client()
+ else:
+ raise Exception("Unknown hdfs client " + configured_client)
+ return client_cache.client
def _with_ac(method_name):
| {"golden_diff": "diff --git a/luigi/contrib/hdfs/clients.py b/luigi/contrib/hdfs/clients.py\n--- a/luigi/contrib/hdfs/clients.py\n+++ b/luigi/contrib/hdfs/clients.py\n@@ -19,33 +19,42 @@\n The implementations of the hdfs clients. The hadoop cli client and the\n snakebite client.\n \"\"\"\n-\n+import logging\n+import threading\n \n from luigi.contrib.hdfs import config as hdfs_config\n from luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client\n from luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client\n from luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients\n import luigi.contrib.target\n-import logging\n \n logger = logging.getLogger('luigi-interface')\n \n+_AUTOCONFIG_CLIENT = threading.local()\n+\n \n-def get_autoconfig_client():\n+def get_autoconfig_client(client_cache=_AUTOCONFIG_CLIENT):\n \"\"\"\n Creates the client as specified in the `luigi.cfg` configuration.\n \"\"\"\n- configured_client = hdfs_config.get_configured_hdfs_client()\n- if configured_client == \"webhdfs\":\n- return hdfs_webhdfs_client.WebHdfsClient()\n- if configured_client == \"snakebite\":\n- return hdfs_snakebite_client.SnakebiteHdfsClient()\n- if configured_client == \"snakebite_with_hadoopcli_fallback\":\n- return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),\n- hdfs_hadoopcli_clients.create_hadoopcli_client()])\n- if configured_client == \"hadoopcli\":\n- return hdfs_hadoopcli_clients.create_hadoopcli_client()\n- raise Exception(\"Unknown hdfs client \" + configured_client)\n+ try:\n+ return client_cache.client\n+ except AttributeError:\n+ configured_client = hdfs_config.get_configured_hdfs_client()\n+ if configured_client == \"webhdfs\":\n+ client_cache.client = hdfs_webhdfs_client.WebHdfsClient()\n+ elif configured_client == \"snakebite\":\n+ client_cache.client = hdfs_snakebite_client.SnakebiteHdfsClient()\n+ elif configured_client == \"snakebite_with_hadoopcli_fallback\":\n+ client_cache.client = luigi.contrib.target.CascadingClient([\n+ hdfs_snakebite_client.SnakebiteHdfsClient(),\n+ hdfs_hadoopcli_clients.create_hadoopcli_client(),\n+ ])\n+ elif configured_client == \"hadoopcli\":\n+ client_cache.client = hdfs_hadoopcli_clients.create_hadoopcli_client()\n+ else:\n+ raise Exception(\"Unknown hdfs client \" + configured_client)\n+ return client_cache.client\n \n \n def _with_ac(method_name):\n", "issue": "HdfsTarget commands fail when many targets are instantiated\nI've recently added an existence check to a large mapreduce task for which some input files may be missing. With a large enough set of inputs, it will fail every time. I've simplified it to the following code:\n\n``` python\nfrom luigi.contrib.hdfs import HdfsTarget\n\nmany_targets = [HdfsTarget('/') for _ in range(2000)]\nall(target.exists() for target in many_targets)\n```\n\nThis will break if I use any past 1000 or so. Here the client uses snakebite. For a more direct triggering, we can also do\n\n``` python\nfrom snakebite.client import AutoConfigClient\n\nclients = [AutoConfigClient() for _ in range(10000)]\nall(client.test('/', exists=True) for client in clients)\n```\n\nIn either case, the bug goes away if I use a generator expression rather than a list comprehension. The problem is that when I'm dealing with objects coming out of luigi calls like input_hadoop, it's too late for me to decide between lists and iterators. I can code around this by instantiating all of my HdfsTargets with the same client, but I'm not sure this is safe. 
It could also be fixed in luigi if we had get_autoconfig_client return the same object each time. Is there any reason this wouldn't work?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2012-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nThe implementations of the hdfs clients. The hadoop cli client and the\nsnakebite client.\n\"\"\"\n\n\nfrom luigi.contrib.hdfs import config as hdfs_config\nfrom luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client\nfrom luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client\nfrom luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients\nimport luigi.contrib.target\nimport logging\n\nlogger = logging.getLogger('luigi-interface')\n\n\ndef get_autoconfig_client():\n \"\"\"\n Creates the client as specified in the `luigi.cfg` configuration.\n \"\"\"\n configured_client = hdfs_config.get_configured_hdfs_client()\n if configured_client == \"webhdfs\":\n return hdfs_webhdfs_client.WebHdfsClient()\n if configured_client == \"snakebite\":\n return hdfs_snakebite_client.SnakebiteHdfsClient()\n if configured_client == \"snakebite_with_hadoopcli_fallback\":\n return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),\n hdfs_hadoopcli_clients.create_hadoopcli_client()])\n if configured_client == \"hadoopcli\":\n return hdfs_hadoopcli_clients.create_hadoopcli_client()\n raise Exception(\"Unknown hdfs client \" + configured_client)\n\n\ndef _with_ac(method_name):\n def result(*args, **kwargs):\n return getattr(get_autoconfig_client(), method_name)(*args, **kwargs)\n return result\n\nexists = _with_ac('exists')\nrename = _with_ac('rename')\nremove = _with_ac('remove')\nmkdir = _with_ac('mkdir')\nlistdir = _with_ac('listdir')\n", "path": "luigi/contrib/hdfs/clients.py"}]} | 1,463 | 607 |
gh_patches_debug_2493 | rasdani/github-patches | git_diff | freedomofpress__securedrop-359 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
securedrop_init script in Tails doesn't work right if you run it twice
It appends torrc-additions to torrc multiple times, and it should just append it once.
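One idempotent variant, sketched: keep a pristine copy of torrc and always rebuild the live file from backup plus additions, so re-running the script overwrites rather than re-appends (path variables as in the script quoted below):

```python
def apply_torrc_additions(path_torrc, path_torrc_backup, torrc_additions):
    # Rebuild torrc from the pristine backup instead of appending to the
    # live file; running this any number of times yields the same result.
    with open(path_torrc_backup) as f:
        torrc = f.read()
    with open(path_torrc, 'w') as f:
        f.write(torrc + torrc_additions)
```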
</issue>
<code>
[start of tails_files/securedrop_init.py]
1 #!/usr/bin/env python
2
3 import os, sys, subprocess
4
5 if __name__ == '__main__':
6 # check for root
7 if not os.geteuid()==0:
8 sys.exit('You need to run this as root')
9
10 # paths
11 path_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'
12 path_torrc_backup = '/etc/tor/torrc.bak'
13 path_torrc = '/etc/tor/torrc'
14
15 # load torrc_additions
16 if os.path.isfile(path_torrc_additions):
17 torrc_additions = open(path_torrc_additions).read()
18 else:
19 sys.exit('Error opening {0} for reading'.format(path_torrc_additions));
20
21 # load torrc
22 if os.path.isfile(path_torrc_backup):
23 torrc = open(path_torrc_backup).read()
24 else:
25 if os.path.isfile(path_torrc):
26 torrc = open(path_torrc).read()
27 else:
28 sys.exit('Error opening {0} for reading'.format(path_torrc));
29
30 # save a backup
31 open(path_torrc_backup, 'w').write(torrc)
32
33 # append the additions
34 open(path_torrc, 'a').write(torrc_additions)
35
36 # reload tor
37 subprocess.call(['/usr/sbin/service', 'tor', 'reload'])
38
39 # success
40 subprocess.call(['/usr/bin/sudo', '-u', 'amnesia', '/usr/bin/notify-send', 'Updated torrc', 'You can now connect to your SecureDrop document interface']);
41
42
[end of tails_files/securedrop_init.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tails_files/securedrop_init.py b/tails_files/securedrop_init.py
--- a/tails_files/securedrop_init.py
+++ b/tails_files/securedrop_init.py
@@ -31,7 +31,7 @@
open(path_torrc_backup, 'w').write(torrc)
# append the additions
- open(path_torrc, 'a').write(torrc_additions)
+ open(path_torrc, 'w').write(torrc+torrc_additions)
# reload tor
subprocess.call(['/usr/sbin/service', 'tor', 'reload'])
| {"golden_diff": "diff --git a/tails_files/securedrop_init.py b/tails_files/securedrop_init.py\n--- a/tails_files/securedrop_init.py\n+++ b/tails_files/securedrop_init.py\n@@ -31,7 +31,7 @@\n open(path_torrc_backup, 'w').write(torrc)\n \n # append the additions\n- open(path_torrc, 'a').write(torrc_additions)\n+ open(path_torrc, 'w').write(torrc+torrc_additions)\n \n # reload tor\n subprocess.call(['/usr/sbin/service', 'tor', 'reload'])\n", "issue": "securedrop_init script in Tails doesn't work right if you run it twice\nIt appends torrc-additions to torrc multiple times, and it should just append it once.\n\nsecuredrop_init script in Tails doesn't work right if you run it twice\nIt appends torrc-additions to torrc multiple times, and it should just append it once.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os, sys, subprocess\n\nif __name__ == '__main__':\n # check for root\n if not os.geteuid()==0:\n sys.exit('You need to run this as root')\n\n # paths\n path_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'\n path_torrc_backup = '/etc/tor/torrc.bak'\n path_torrc = '/etc/tor/torrc'\n\n # load torrc_additions\n if os.path.isfile(path_torrc_additions):\n torrc_additions = open(path_torrc_additions).read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc_additions));\n\n # load torrc\n if os.path.isfile(path_torrc_backup):\n torrc = open(path_torrc_backup).read()\n else:\n if os.path.isfile(path_torrc):\n torrc = open(path_torrc).read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc));\n\n # save a backup\n open(path_torrc_backup, 'w').write(torrc)\n\n # append the additions\n open(path_torrc, 'a').write(torrc_additions)\n\n # reload tor\n subprocess.call(['/usr/sbin/service', 'tor', 'reload'])\n\n # success\n subprocess.call(['/usr/bin/sudo', '-u', 'amnesia', '/usr/bin/notify-send', 'Updated torrc', 'You can now connect to your SecureDrop document interface']);\n\n", "path": "tails_files/securedrop_init.py"}]} | 1,052 | 139 |
gh_patches_debug_9703 | rasdani/github-patches | git_diff | searx__searx-487 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'module' object has no attribute 'old_where'
I updated my searx instance today, and got the following error:
```
Traceback (most recent call last):
File "/usr/local/searx/searx/__init__.py", line 55, in <module>
environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
AttributeError: 'module' object has no attribute 'old_where'
```
I updated the dependencies with `pip install --upgrade -r requirements.txt` before running searx.
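A defensive guard around the attribute would be enough to keep older certifi releases working while tolerating ones that no longer ship `old_where()` (a sketch of the idea, not necessarily the final fix):

```python
from os import environ

import certifi

# old_where() only exists in some certifi releases; guard the access so an
# upgraded certifi no longer crashes the import.
if hasattr(certifi, 'old_where'):
    environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
```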
</issue>
<code>
[start of searx/__init__.py]
1 '''
2 searx is free software: you can redistribute it and/or modify
3 it under the terms of the GNU Affero General Public License as published by
4 the Free Software Foundation, either version 3 of the License, or
5 (at your option) any later version.
6
7 searx is distributed in the hope that it will be useful,
8 but WITHOUT ANY WARRANTY; without even the implied warranty of
9 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 GNU Affero General Public License for more details.
11
12 You should have received a copy of the GNU Affero General Public License
13 along with searx. If not, see < http://www.gnu.org/licenses/ >.
14
15 (C) 2013- by Adam Tauber, <[email protected]>
16 '''
17
18 import certifi
19 import logging
20 from os import environ
21 from os.path import realpath, dirname, join, abspath
22 from ssl import OPENSSL_VERSION_INFO, OPENSSL_VERSION
23 try:
24 from yaml import load
25 except:
26 from sys import exit, stderr
27 stderr.write('[E] install pyyaml\n')
28 exit(2)
29
30 searx_dir = abspath(dirname(__file__))
31 engine_dir = dirname(realpath(__file__))
32
33 # if possible set path to settings using the
34 # enviroment variable SEARX_SETTINGS_PATH
35 if 'SEARX_SETTINGS_PATH' in environ:
36 settings_path = environ['SEARX_SETTINGS_PATH']
37 # otherwise using default path
38 else:
39 settings_path = join(searx_dir, 'settings.yml')
40
41 # load settings
42 with open(settings_path) as settings_yaml:
43 settings = load(settings_yaml)
44
45 if settings.get('general', {}).get('debug'):
46 logging.basicConfig(level=logging.DEBUG)
47 else:
48 logging.basicConfig(level=logging.WARNING)
49
50 logger = logging.getLogger('searx')
51
52 # Workaround for openssl versions <1.0.2
53 # https://github.com/certifi/python-certifi/issues/26
54 if OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):
55 environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
56 logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))
57
58 logger.info('Initialisation done')
59
[end of searx/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/__init__.py b/searx/__init__.py
--- a/searx/__init__.py
+++ b/searx/__init__.py
@@ -52,7 +52,8 @@
# Workaround for openssl versions <1.0.2
# https://github.com/certifi/python-certifi/issues/26
if OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):
- environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
+ if hasattr(certifi, 'old_where'):
+ environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))
logger.info('Initialisation done')
| {"golden_diff": "diff --git a/searx/__init__.py b/searx/__init__.py\n--- a/searx/__init__.py\n+++ b/searx/__init__.py\n@@ -52,7 +52,8 @@\n # Workaround for openssl versions <1.0.2\n # https://github.com/certifi/python-certifi/issues/26\n if OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):\n- environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\n+ if hasattr(certifi, 'old_where'):\n+ environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\n logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))\n \n logger.info('Initialisation done')\n", "issue": "AttributeError: 'module' object has no attribute 'old_where'\nI updated my searx instance today, and got the following error:\n\n```\nTraceback (most recent call last):\n File \"/usr/local/searx/searx/__init__.py\", line 55, in <module>\n environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\nAttributeError: 'module' object has no attribute 'old_where'\n```\n\nI updated the dependencies with `pip install --upgrade -r requirements.txt` before running searx.\n\n", "before_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2013- by Adam Tauber, <[email protected]>\n'''\n\nimport certifi\nimport logging\nfrom os import environ\nfrom os.path import realpath, dirname, join, abspath\nfrom ssl import OPENSSL_VERSION_INFO, OPENSSL_VERSION\ntry:\n from yaml import load\nexcept:\n from sys import exit, stderr\n stderr.write('[E] install pyyaml\\n')\n exit(2)\n\nsearx_dir = abspath(dirname(__file__))\nengine_dir = dirname(realpath(__file__))\n\n# if possible set path to settings using the\n# enviroment variable SEARX_SETTINGS_PATH\nif 'SEARX_SETTINGS_PATH' in environ:\n settings_path = environ['SEARX_SETTINGS_PATH']\n# otherwise using default path\nelse:\n settings_path = join(searx_dir, 'settings.yml')\n\n# load settings\nwith open(settings_path) as settings_yaml:\n settings = load(settings_yaml)\n\nif settings.get('general', {}).get('debug'):\n logging.basicConfig(level=logging.DEBUG)\nelse:\n logging.basicConfig(level=logging.WARNING)\n\nlogger = logging.getLogger('searx')\n\n# Workaround for openssl versions <1.0.2\n# https://github.com/certifi/python-certifi/issues/26\nif OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):\n environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\n logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))\n\nlogger.info('Initialisation done')\n", "path": "searx/__init__.py"}]} | 1,250 | 184 |
gh_patches_debug_42563 | rasdani/github-patches | git_diff | litestar-org__litestar-1474 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This assumption does not hold in general, especially in any kind of virtual filesystem (e.g. a zipped package). I think this check should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
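A hypothetical sketch of the relaxation (field set reduced for illustration; the real config has more fields): type `directories` as plain paths or strings and leave existence checks to the configured file system at runtime:

```python
from pathlib import Path
from typing import List, Union

from pydantic import BaseModel


class StaticFilesConfig(BaseModel):
    path: str
    # Plain paths instead of pydantic's DirectoryPath: no on-disk existence
    # check, so entries may point into zips or other virtual filesystems.
    directories: List[Union[Path, str]]
```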
</issue>
<code>
[start of litestar/constants.py]
1 from inspect import Signature
2
3 from pydantic.fields import Undefined
4
5 from litestar.types import Empty
6
7 DEFAULT_ALLOWED_CORS_HEADERS = {"Accept", "Accept-Language", "Content-Language", "Content-Type"}
8 DEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB
9 HTTP_DISCONNECT = "http.disconnect"
10 HTTP_RESPONSE_BODY = "http.response.body"
11 HTTP_RESPONSE_START = "http.response.start"
12 ONE_MEGABYTE = 1024 * 1024
13 OPENAPI_NOT_INITIALIZED = "Litestar has not been instantiated with OpenAPIConfig"
14 REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}
15 RESERVED_KWARGS = {"state", "headers", "cookies", "request", "socket", "data", "query", "scope", "body"}
16 SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
17 SCOPE_STATE_NAMESPACE = "__litestar__"
18 SCOPE_STATE_RESPONSE_COMPRESSED = "response_compressed"
19 SKIP_VALIDATION_NAMES = {"request", "socket", "scope", "receive", "send"}
20 UNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}
21 WEBSOCKET_CLOSE = "websocket.close"
22 WEBSOCKET_DISCONNECT = "websocket.disconnect"
23
[end of litestar/constants.py]
[start of litestar/response/redirect.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any, Literal
4 from urllib.parse import quote
5
6 from litestar.constants import REDIRECT_STATUS_CODES
7 from litestar.enums import MediaType
8 from litestar.exceptions import ImproperlyConfiguredException
9 from litestar.response.base import Response
10 from litestar.status_codes import HTTP_307_TEMPORARY_REDIRECT
11
12 __all__ = ("RedirectResponse",)
13
14
15 if TYPE_CHECKING:
16 from litestar.background_tasks import BackgroundTask, BackgroundTasks
17 from litestar.types import ResponseCookies
18
19
20 class RedirectResponse(Response[Any]):
21 """A redirect response."""
22
23 def __init__(
24 self,
25 url: str,
26 *,
27 status_code: Literal[301, 302, 303, 307, 308] = HTTP_307_TEMPORARY_REDIRECT,
28 background: BackgroundTask | BackgroundTasks | None = None,
29 headers: dict[str, Any] | None = None,
30 cookies: ResponseCookies | None = None,
31 encoding: str = "utf-8",
32 ) -> None:
33 """Initialize the response.
34
35 Args:
36 url: A url to redirect to.
37 status_code: An HTTP status code. The status code should be one of 301, 302, 303, 307 or 308,
38 otherwise an exception will be raised.
39 background: A background task or tasks to be run after the response is sent.
40 headers: A string keyed dictionary of response headers. Header keys are insensitive.
41 cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response
42 ``Set-Cookie`` header.
43 encoding: The encoding to be used for the response headers.
44
45 Raises:
46 ImproperlyConfiguredException: If status code is not a redirect status code.
47 """
48 if status_code not in REDIRECT_STATUS_CODES:
49 raise ImproperlyConfiguredException(
50 f"{status_code} is not a valid for this response. "
51 f"Redirect responses should have one of "
52 f"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}"
53 )
54 super().__init__(
55 background=background,
56 content=b"",
57 cookies=cookies,
58 headers={**(headers or {}), "location": quote(url, safe="/#%[]=:;$&()+,!?*@'~")},
59 media_type=MediaType.TEXT,
60 status_code=status_code,
61 encoding=encoding,
62 )
63
[end of litestar/response/redirect.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/constants.py b/litestar/constants.py
--- a/litestar/constants.py
+++ b/litestar/constants.py
@@ -2,6 +2,7 @@
from pydantic.fields import Undefined
+from litestar.enums import MediaType
from litestar.types import Empty
DEFAULT_ALLOWED_CORS_HEADERS = {"Accept", "Accept-Language", "Content-Language", "Content-Type"}
@@ -12,6 +13,7 @@
ONE_MEGABYTE = 1024 * 1024
OPENAPI_NOT_INITIALIZED = "Litestar has not been instantiated with OpenAPIConfig"
REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}
+REDIRECT_ALLOWED_MEDIA_TYPES = {MediaType.TEXT, MediaType.HTML}
RESERVED_KWARGS = {"state", "headers", "cookies", "request", "socket", "data", "query", "scope", "body"}
SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
SCOPE_STATE_NAMESPACE = "__litestar__"
diff --git a/litestar/response/redirect.py b/litestar/response/redirect.py
--- a/litestar/response/redirect.py
+++ b/litestar/response/redirect.py
@@ -3,7 +3,7 @@
from typing import TYPE_CHECKING, Any, Literal
from urllib.parse import quote
-from litestar.constants import REDIRECT_STATUS_CODES
+from litestar.constants import REDIRECT_ALLOWED_MEDIA_TYPES, REDIRECT_STATUS_CODES
from litestar.enums import MediaType
from litestar.exceptions import ImproperlyConfiguredException
from litestar.response.base import Response
@@ -29,6 +29,7 @@
headers: dict[str, Any] | None = None,
cookies: ResponseCookies | None = None,
encoding: str = "utf-8",
+ media_type: str | MediaType = MediaType.TEXT,
) -> None:
"""Initialize the response.
@@ -41,9 +42,11 @@
cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response
``Set-Cookie`` header.
encoding: The encoding to be used for the response headers.
+ media_type: A value for the response ``Content-Type`` header.
+
Raises:
- ImproperlyConfiguredException: If status code is not a redirect status code.
+ ImproperlyConfiguredException: Either if status code is not a redirect status code or media type is not supported.
"""
if status_code not in REDIRECT_STATUS_CODES:
raise ImproperlyConfiguredException(
@@ -51,12 +54,18 @@
f"Redirect responses should have one of "
f"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}"
)
+ if media_type not in REDIRECT_ALLOWED_MEDIA_TYPES:
+ raise ImproperlyConfiguredException(
+ f"{media_type} media type is not supported yet. "
+ f"Media type should be one of "
+ f"the following values: {', '.join([str(s) for s in REDIRECT_ALLOWED_MEDIA_TYPES])}"
+ )
super().__init__(
background=background,
content=b"",
cookies=cookies,
headers={**(headers or {}), "location": quote(url, safe="/#%[]=:;$&()+,!?*@'~")},
- media_type=MediaType.TEXT,
+ media_type=media_type,
status_code=status_code,
encoding=encoding,
)
| {"golden_diff": "diff --git a/litestar/constants.py b/litestar/constants.py\n--- a/litestar/constants.py\n+++ b/litestar/constants.py\n@@ -2,6 +2,7 @@\n \n from pydantic.fields import Undefined\n \n+from litestar.enums import MediaType\n from litestar.types import Empty\n \n DEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\n@@ -12,6 +13,7 @@\n ONE_MEGABYTE = 1024 * 1024\n OPENAPI_NOT_INITIALIZED = \"Litestar has not been instantiated with OpenAPIConfig\"\n REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\n+REDIRECT_ALLOWED_MEDIA_TYPES = {MediaType.TEXT, MediaType.HTML}\n RESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\n SCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\n SCOPE_STATE_NAMESPACE = \"__litestar__\"\ndiff --git a/litestar/response/redirect.py b/litestar/response/redirect.py\n--- a/litestar/response/redirect.py\n+++ b/litestar/response/redirect.py\n@@ -3,7 +3,7 @@\n from typing import TYPE_CHECKING, Any, Literal\n from urllib.parse import quote\n \n-from litestar.constants import REDIRECT_STATUS_CODES\n+from litestar.constants import REDIRECT_ALLOWED_MEDIA_TYPES, REDIRECT_STATUS_CODES\n from litestar.enums import MediaType\n from litestar.exceptions import ImproperlyConfiguredException\n from litestar.response.base import Response\n@@ -29,6 +29,7 @@\n headers: dict[str, Any] | None = None,\n cookies: ResponseCookies | None = None,\n encoding: str = \"utf-8\",\n+ media_type: str | MediaType = MediaType.TEXT,\n ) -> None:\n \"\"\"Initialize the response.\n \n@@ -41,9 +42,11 @@\n cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response\n ``Set-Cookie`` header.\n encoding: The encoding to be used for the response headers.\n+ media_type: A value for the response ``Content-Type`` header.\n+\n \n Raises:\n- ImproperlyConfiguredException: If status code is not a redirect status code.\n+ ImproperlyConfiguredException: Either if status code is not a redirect status code or media type is not supported.\n \"\"\"\n if status_code not in REDIRECT_STATUS_CODES:\n raise ImproperlyConfiguredException(\n@@ -51,12 +54,18 @@\n f\"Redirect responses should have one of \"\n f\"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}\"\n )\n+ if media_type not in REDIRECT_ALLOWED_MEDIA_TYPES:\n+ raise ImproperlyConfiguredException(\n+ f\"{media_type} media type is not supported yet. \"\n+ f\"Media type should be one of \"\n+ f\"the following values: {', '.join([str(s) for s in REDIRECT_ALLOWED_MEDIA_TYPES])}\"\n+ )\n super().__init__(\n background=background,\n content=b\"\",\n cookies=cookies,\n headers={**(headers or {}), \"location\": quote(url, safe=\"/#%[]=:;$&()+,!?*@'~\")},\n- media_type=MediaType.TEXT,\n+ media_type=media_type,\n status_code=status_code,\n encoding=encoding,\n )\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). 
I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from inspect import Signature\n\nfrom pydantic.fields import Undefined\n\nfrom litestar.types import Empty\n\nDEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\nDEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB\nHTTP_DISCONNECT = \"http.disconnect\"\nHTTP_RESPONSE_BODY = \"http.response.body\"\nHTTP_RESPONSE_START = \"http.response.start\"\nONE_MEGABYTE = 1024 * 1024\nOPENAPI_NOT_INITIALIZED = \"Litestar has not been instantiated with OpenAPIConfig\"\nREDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\nRESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\nSCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\nSCOPE_STATE_NAMESPACE = \"__litestar__\"\nSCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\nSKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\nUNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}\nWEBSOCKET_CLOSE = \"websocket.close\"\nWEBSOCKET_DISCONNECT = \"websocket.disconnect\"\n", "path": "litestar/constants.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Literal\nfrom urllib.parse import quote\n\nfrom litestar.constants import REDIRECT_STATUS_CODES\nfrom litestar.enums import MediaType\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.response.base import Response\nfrom litestar.status_codes import HTTP_307_TEMPORARY_REDIRECT\n\n__all__ = (\"RedirectResponse\",)\n\n\nif TYPE_CHECKING:\n from litestar.background_tasks import BackgroundTask, BackgroundTasks\n from litestar.types import ResponseCookies\n\n\nclass RedirectResponse(Response[Any]):\n \"\"\"A redirect response.\"\"\"\n\n def __init__(\n self,\n url: str,\n *,\n status_code: Literal[301, 302, 303, 307, 308] = HTTP_307_TEMPORARY_REDIRECT,\n background: BackgroundTask | BackgroundTasks | None = None,\n headers: dict[str, Any] | None = None,\n cookies: ResponseCookies | None = None,\n encoding: str = \"utf-8\",\n ) -> None:\n \"\"\"Initialize the response.\n\n Args:\n url: A url to redirect to.\n status_code: An HTTP status code. The status code should be one of 301, 302, 303, 307 or 308,\n otherwise an exception will be raised.\n background: A background task or tasks to be run after the response is sent.\n headers: A string keyed dictionary of response headers. Header keys are insensitive.\n cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response\n ``Set-Cookie`` header.\n encoding: The encoding to be used for the response headers.\n\n Raises:\n ImproperlyConfiguredException: If status code is not a redirect status code.\n \"\"\"\n if status_code not in REDIRECT_STATUS_CODES:\n raise ImproperlyConfiguredException(\n f\"{status_code} is not a valid for this response. 
\"\n f\"Redirect responses should have one of \"\n f\"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}\"\n )\n super().__init__(\n background=background,\n content=b\"\",\n cookies=cookies,\n headers={**(headers or {}), \"location\": quote(url, safe=\"/#%[]=:;$&()+,!?*@'~\")},\n media_type=MediaType.TEXT,\n status_code=status_code,\n encoding=encoding,\n )\n", "path": "litestar/response/redirect.py"}]} | 1,701 | 775 |
gh_patches_debug_7722 | rasdani/github-patches | git_diff | googleapis__python-bigquery-624 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
google.auth.exceptions.TransportError is not being retried
Hi,
Recently I faced an error; could you please consider treating this exception as retryable as well? I have hit this error in one of our production systems:
https://github.com/googleapis/python-storage/issues/414
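Concretely, the request amounts to adding the auth transport error to the retryable set, roughly like this sketch (mirroring the module quoted below):

```python
from google.api_core import exceptions
from google.auth import exceptions as auth_exceptions
import requests.exceptions

_UNSTRUCTURED_RETRYABLE_TYPES = (
    ConnectionError,
    exceptions.TooManyRequests,
    exceptions.InternalServerError,
    exceptions.BadGateway,
    requests.exceptions.ConnectionError,
    # Token refresh can fail transiently with TransportError; retry it too.
    auth_exceptions.TransportError,
)
```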
</issue>
<code>
[start of google/cloud/bigquery/retry.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from google.api_core import exceptions
16 from google.api_core import retry
17 import requests.exceptions
18
19
20 _RETRYABLE_REASONS = frozenset(
21 ["rateLimitExceeded", "backendError", "internalError", "badGateway"]
22 )
23
24 _UNSTRUCTURED_RETRYABLE_TYPES = (
25 ConnectionError,
26 exceptions.TooManyRequests,
27 exceptions.InternalServerError,
28 exceptions.BadGateway,
29 requests.exceptions.ConnectionError,
30 )
31
32
33 def _should_retry(exc):
34 """Predicate for determining when to retry.
35
36 We retry if and only if the 'reason' is 'backendError'
37 or 'rateLimitExceeded'.
38 """
39 if not hasattr(exc, "errors") or len(exc.errors) == 0:
40 # Check for unstructured error returns, e.g. from GFE
41 return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)
42
43 reason = exc.errors[0]["reason"]
44 return reason in _RETRYABLE_REASONS
45
46
47 DEFAULT_RETRY = retry.Retry(predicate=_should_retry)
48 """The default retry object.
49
50 Any method with a ``retry`` parameter will be retried automatically,
51 with reasonable defaults. To disable retry, pass ``retry=None``.
52 To modify the default retry behavior, call a ``with_XXX`` method
53 on ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,
54 pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.
55 """
56
[end of google/cloud/bigquery/retry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py
--- a/google/cloud/bigquery/retry.py
+++ b/google/cloud/bigquery/retry.py
@@ -14,6 +14,7 @@
from google.api_core import exceptions
from google.api_core import retry
+from google.auth import exceptions as auth_exceptions
import requests.exceptions
@@ -27,6 +28,7 @@
exceptions.InternalServerError,
exceptions.BadGateway,
requests.exceptions.ConnectionError,
+ auth_exceptions.TransportError,
)
| {"golden_diff": "diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py\n--- a/google/cloud/bigquery/retry.py\n+++ b/google/cloud/bigquery/retry.py\n@@ -14,6 +14,7 @@\n \n from google.api_core import exceptions\n from google.api_core import retry\n+from google.auth import exceptions as auth_exceptions\n import requests.exceptions\n \n \n@@ -27,6 +28,7 @@\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ConnectionError,\n+ auth_exceptions.TransportError,\n )\n", "issue": "google.auth.exceptions.TransportError is not being retried\nHi, \r\n\r\nRecently i faced an error can you please consider using this exception as a retry one also. Since i have faced this error in one of our production system\r\n\r\nhttps://github.com/googleapis/python-storage/issues/414\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ConnectionError,\n)\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py"}]} | 1,140 | 119 |
gh_patches_debug_5757 | rasdani/github-patches | git_diff | edgedb__edgedb-7149 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ALTER MODULE foo RENAME TO bar gives ISE
We should produce a parse error or an unimplemented message.
We could actually support it, but it might be kind of hairy to do, since in the data model modules really don't *do* anything; they *just* lay claim to a name. (And DDL isn't *that* important anyway; renaming a module might actually work in SDL...)
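A minimal sketch of the desired behavior, using stand-in classes — the real fix would hook into the project's schema-command machinery, so every name here is a placeholder:

```python
# Hypothetical stand-ins: reject `ALTER MODULE foo RENAME TO bar;`
# with a clear, user-facing error instead of an internal server error.
class SchemaError(Exception):
    """Stand-in for edb.errors.SchemaError."""

class RenameModule:
    """Stand-in for the command handling ALTER MODULE ... RENAME TO."""
    def apply(self, schema, context):
        raise SchemaError("renaming modules is not supported")

try:
    RenameModule().apply(schema=None, context=None)
except SchemaError as exc:
    print(exc)  # -> renaming modules is not supported
```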
</issue>
<code>
[start of edb/schema/modules.py]
1 #
2 # This source file is part of the EdgeDB open source project.
3 #
4 # Copyright 2008-present MagicStack Inc. and the EdgeDB authors.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19
20 from __future__ import annotations
21
22
23 from edb import errors
24
25 from edb.edgeql import ast as qlast
26 from edb.edgeql import qltypes
27
28 from . import annos as s_anno
29 from . import delta as sd
30 from . import name as sn
31 from . import objects as so
32 from . import schema as s_schema
33
34 RESERVED_MODULE_NAMES = {
35 'super',
36 }
37
38
39 class Module(
40 s_anno.AnnotationSubject,
41 so.Object, # Help reflection figure out the right db MRO
42 qlkind=qltypes.SchemaObjectClass.MODULE,
43 data_safe=False,
44 ):
45 # N.B: Modules are not "qualified" objects, even though they can
46 # be nested (because they might *not* be nested) and we arrange
47 # for their names to always be represented with an UnqualName.
48 pass
49
50
51 class ModuleCommandContext(sd.ObjectCommandContext[Module]):
52 pass
53
54
55 class ModuleCommand(
56 sd.ObjectCommand[Module],
57 context_class=ModuleCommandContext,
58 ):
59
60 def _validate_legal_command(
61 self,
62 schema: s_schema.Schema,
63 context: sd.CommandContext,
64 ) -> None:
65 super()._validate_legal_command(schema, context)
66
67 last = str(self.classname)
68 first = last
69 enclosing = None
70 if '::' in str(self.classname):
71 first, _, _ = str(self.classname).partition('::')
72 enclosing, _, last = str(self.classname).rpartition('::')
73 if not schema.has_module(enclosing):
74 raise errors.UnknownModuleError(
75 f'module {enclosing!r} is not in this schema')
76
77 if last in RESERVED_MODULE_NAMES:
78 raise errors.SchemaDefinitionError(
79 f"module {last!r} is a reserved module name")
80
81 if (
82 not context.stdmode and not context.testmode
83 and sn.UnqualName(first) in s_schema.STD_MODULES
84 ):
85 raise errors.SchemaDefinitionError(
86 f'cannot {self._delta_action} {self.get_verbosename()}: '
87 f'module {first} is read-only',
88 span=self.span)
89
90
91 class CreateModule(ModuleCommand, sd.CreateObject[Module]):
92 astnode = qlast.CreateModule
93
94
95 class AlterModule(ModuleCommand, sd.AlterObject[Module]):
96 astnode = qlast.AlterModule
97
98
99 class DeleteModule(ModuleCommand, sd.DeleteObject[Module]):
100 astnode = qlast.DropModule
101
102 def _validate_legal_command(
103 self,
104 schema: s_schema.Schema,
105 context: sd.CommandContext,
106 ) -> None:
107 super()._validate_legal_command(schema, context)
108
109 # For now, we disallow deleting non-empty modules.
110
111 # Modules aren't actually stored with any direct linkage
112 # to the objects in them, so explicitly search for objects
113 # in the module (excluding the module itself).
114 has_objects = bool(any(schema.get_objects(
115 included_modules=[self.classname],
116 excluded_items=[self.classname],
117 )))
118
119 if has_objects:
120 vn = self.scls.get_verbosename(schema)
121 raise errors.SchemaError(
122 f'cannot drop {vn} because it is not empty'
123 )
124
[end of edb/schema/modules.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/edb/schema/modules.py b/edb/schema/modules.py
--- a/edb/schema/modules.py
+++ b/edb/schema/modules.py
@@ -96,6 +96,19 @@
astnode = qlast.AlterModule
+class RenameModule(ModuleCommand, sd.RenameObject[Module]):
+
+ def apply(
+ self,
+ schema: s_schema.Schema,
+ context: sd.CommandContext,
+ ) -> s_schema.Schema:
+ raise errors.SchemaError(
+ f'renaming modules is not supported',
+ span=self.span,
+ )
+
+
class DeleteModule(ModuleCommand, sd.DeleteObject[Module]):
astnode = qlast.DropModule
| {"golden_diff": "diff --git a/edb/schema/modules.py b/edb/schema/modules.py\n--- a/edb/schema/modules.py\n+++ b/edb/schema/modules.py\n@@ -96,6 +96,19 @@\n astnode = qlast.AlterModule\n \n \n+class RenameModule(ModuleCommand, sd.RenameObject[Module]):\n+\n+ def apply(\n+ self,\n+ schema: s_schema.Schema,\n+ context: sd.CommandContext,\n+ ) -> s_schema.Schema:\n+ raise errors.SchemaError(\n+ f'renaming modules is not supported',\n+ span=self.span,\n+ )\n+\n+\n class DeleteModule(ModuleCommand, sd.DeleteObject[Module]):\n astnode = qlast.DropModule\n", "issue": "ALTER MODULE foo RENAME TO bar gives ISE\nWe should produce a parse error or an unimplemented message.\r\n\r\nWe could actually support it, but it might actually be kind of hairy to do, since in the data model modules really don't *do* anything, they *just* lay claim to a name. (And DDL isn't *that* important anyway; renaming a module might actually work in SDL...)\n", "before_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\n\n\nfrom edb import errors\n\nfrom edb.edgeql import ast as qlast\nfrom edb.edgeql import qltypes\n\nfrom . import annos as s_anno\nfrom . import delta as sd\nfrom . import name as sn\nfrom . import objects as so\nfrom . 
import schema as s_schema\n\nRESERVED_MODULE_NAMES = {\n 'super',\n}\n\n\nclass Module(\n s_anno.AnnotationSubject,\n so.Object, # Help reflection figure out the right db MRO\n qlkind=qltypes.SchemaObjectClass.MODULE,\n data_safe=False,\n):\n # N.B: Modules are not \"qualified\" objects, even though they can\n # be nested (because they might *not* be nested) and we arrange\n # for their names to always be represented with an UnqualName.\n pass\n\n\nclass ModuleCommandContext(sd.ObjectCommandContext[Module]):\n pass\n\n\nclass ModuleCommand(\n sd.ObjectCommand[Module],\n context_class=ModuleCommandContext,\n):\n\n def _validate_legal_command(\n self,\n schema: s_schema.Schema,\n context: sd.CommandContext,\n ) -> None:\n super()._validate_legal_command(schema, context)\n\n last = str(self.classname)\n first = last\n enclosing = None\n if '::' in str(self.classname):\n first, _, _ = str(self.classname).partition('::')\n enclosing, _, last = str(self.classname).rpartition('::')\n if not schema.has_module(enclosing):\n raise errors.UnknownModuleError(\n f'module {enclosing!r} is not in this schema')\n\n if last in RESERVED_MODULE_NAMES:\n raise errors.SchemaDefinitionError(\n f\"module {last!r} is a reserved module name\")\n\n if (\n not context.stdmode and not context.testmode\n and sn.UnqualName(first) in s_schema.STD_MODULES\n ):\n raise errors.SchemaDefinitionError(\n f'cannot {self._delta_action} {self.get_verbosename()}: '\n f'module {first} is read-only',\n span=self.span)\n\n\nclass CreateModule(ModuleCommand, sd.CreateObject[Module]):\n astnode = qlast.CreateModule\n\n\nclass AlterModule(ModuleCommand, sd.AlterObject[Module]):\n astnode = qlast.AlterModule\n\n\nclass DeleteModule(ModuleCommand, sd.DeleteObject[Module]):\n astnode = qlast.DropModule\n\n def _validate_legal_command(\n self,\n schema: s_schema.Schema,\n context: sd.CommandContext,\n ) -> None:\n super()._validate_legal_command(schema, context)\n\n # For now, we disallow deleting non-empty modules.\n\n # Modules aren't actually stored with any direct linkage\n # to the objects in them, so explicitly search for objects\n # in the module (excluding the module itself).\n has_objects = bool(any(schema.get_objects(\n included_modules=[self.classname],\n excluded_items=[self.classname],\n )))\n\n if has_objects:\n vn = self.scls.get_verbosename(schema)\n raise errors.SchemaError(\n f'cannot drop {vn} because it is not empty'\n )\n", "path": "edb/schema/modules.py"}]} | 1,736 | 154 |
gh_patches_debug_21809 | rasdani/github-patches | git_diff | prowler-cloud__prowler-2639 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: The check 'Potential secret found in EC2 instance * User Data.' does not include the line numbers where the secrets were found
### Steps to Reproduce
The check 'Potential secret found in EC2 instance * User Data.' does not show the line numbers, whereas 'Potential secret found in variables of ECS task definition' does. Why is that?
The results of a check that does not point precisely at the line are frustrating: you do not know exactly where the scanner found the secret or how many secrets were found.
The same issue will arise if you need to troubleshoot the scanner.
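For illustration, a sketch of the information detect-secrets already exposes — the file name and message format here are placeholders, while the output shape matches `SecretsCollection.json()`:

```python
# Sketch: SecretsCollection.json() maps each scanned file to findings
# that already carry line numbers, so the report can include them.
from detect_secrets import SecretsCollection
from detect_secrets.settings import default_settings

secrets = SecretsCollection()
with default_settings():
    secrets.scan_file("user_data.txt")  # placeholder path

output = secrets.json()
# e.g. {"user_data.txt": [{"type": "Secret Keyword", "line_number": 3, ...}]}
for filename, findings in output.items():
    details = ", ".join(
        f"{f['type']} on line {f['line_number']}" for f in findings
    )
    print(f"Potential secret found -> {details}.")
```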
### Expected behavior
The numbers of the lines containing secrets are included in the finding's description.
### Actual Result with Screenshots or Logs
-
### How did you install Prowler?
Docker (docker pull toniblyx/prowler)
### Environment Resource
Fargate
### OS used
--
### Prowler version
3
### Pip version
--
### Context
_No response_
</issue>
<code>
[start of prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py]
1 import os
2 import tempfile
3 import zlib
4 from base64 import b64decode
5
6 from detect_secrets import SecretsCollection
7 from detect_secrets.settings import default_settings
8
9 from prowler.lib.check.models import Check, Check_Report_AWS
10 from prowler.providers.aws.services.ec2.ec2_client import ec2_client
11
12
13 class ec2_instance_secrets_user_data(Check):
14 def execute(self):
15 findings = []
16 for instance in ec2_client.instances:
17 if instance.state != "terminated":
18 report = Check_Report_AWS(self.metadata())
19 report.region = instance.region
20 report.resource_id = instance.id
21 report.resource_arn = instance.arn
22 report.resource_tags = instance.tags
23 if instance.user_data:
24 temp_user_data_file = tempfile.NamedTemporaryFile(delete=False)
25 user_data = b64decode(instance.user_data)
26 if user_data[0:2] == b"\x1f\x8b": # GZIP magic number
27 user_data = zlib.decompress(
28 user_data, zlib.MAX_WBITS | 32
29 ).decode("utf-8")
30 else:
31 user_data = user_data.decode("utf-8")
32
33 temp_user_data_file.write(
34 bytes(user_data, encoding="raw_unicode_escape")
35 )
36 temp_user_data_file.close()
37 secrets = SecretsCollection()
38 with default_settings():
39 secrets.scan_file(temp_user_data_file.name)
40
41 if secrets.json():
42 report.status = "FAIL"
43 report.status_extended = f"Potential secret found in EC2 instance {instance.id} User Data."
44 else:
45 report.status = "PASS"
46 report.status_extended = (
47 f"No secrets found in EC2 instance {instance.id} User Data."
48 )
49
50 os.remove(temp_user_data_file.name)
51 else:
52 report.status = "PASS"
53 report.status_extended = f"No secrets found in EC2 instance {instance.id} since User Data is empty."
54
55 findings.append(report)
56
57 return findings
58
[end of prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py
--- a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py
+++ b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py
@@ -38,9 +38,19 @@
with default_settings():
secrets.scan_file(temp_user_data_file.name)
- if secrets.json():
+ detect_secrets_output = secrets.json()
+ if detect_secrets_output:
+ secrets_string = ", ".join(
+ [
+ f"{secret['type']} on line {secret['line_number']}"
+ for secret in detect_secrets_output[
+ temp_user_data_file.name
+ ]
+ ]
+ )
report.status = "FAIL"
- report.status_extended = f"Potential secret found in EC2 instance {instance.id} User Data."
+ report.status_extended = f"Potential secret found in EC2 instance {instance.id} User Data -> {secrets_string}."
+
else:
report.status = "PASS"
report.status_extended = (
| {"golden_diff": "diff --git a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py\n--- a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py\n+++ b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py\n@@ -38,9 +38,19 @@\n with default_settings():\n secrets.scan_file(temp_user_data_file.name)\n \n- if secrets.json():\n+ detect_secrets_output = secrets.json()\n+ if detect_secrets_output:\n+ secrets_string = \", \".join(\n+ [\n+ f\"{secret['type']} on line {secret['line_number']}\"\n+ for secret in detect_secrets_output[\n+ temp_user_data_file.name\n+ ]\n+ ]\n+ )\n report.status = \"FAIL\"\n- report.status_extended = f\"Potential secret found in EC2 instance {instance.id} User Data.\"\n+ report.status_extended = f\"Potential secret found in EC2 instance {instance.id} User Data -> {secrets_string}.\"\n+\n else:\n report.status = \"PASS\"\n report.status_extended = (\n", "issue": "[Bug]: The check 'Potential secret found in EC2 instance * User Data.' does not include the line numbers where the secrets were found\n### Steps to Reproduce\n\nThe check 'Potential secret found in EC2 instance * User Data.' does not show the line numbers, whereas 'Potential secret found in variables of ECS task definition' does. Why is it so?\r\n\r\nThe results of check without precise pointing at the line are frustrating: you do not know where exactly the scanner found the secret and how many secrets were found.\r\n\r\nSame issue will rise if you need to troubleshoot the scanner.\n\n### Expected behavior\n\nNumbers of lines with secrets are included in issue description.\n\n### Actual Result with Screenshots or Logs\n\n-\n\n### How did you install Prowler?\n\nDocker (docker pull toniblyx/prowler)\n\n### Environment Resource\n\nFargate\n\n### OS used\n\n--\n\n### Prowler version\n\n3\n\n### Pip version\n\n--\n\n### Context\n\n_No response_\n", "before_files": [{"content": "import os\nimport tempfile\nimport zlib\nfrom base64 import b64decode\n\nfrom detect_secrets import SecretsCollection\nfrom detect_secrets.settings import default_settings\n\nfrom prowler.lib.check.models import Check, Check_Report_AWS\nfrom prowler.providers.aws.services.ec2.ec2_client import ec2_client\n\n\nclass ec2_instance_secrets_user_data(Check):\n def execute(self):\n findings = []\n for instance in ec2_client.instances:\n if instance.state != \"terminated\":\n report = Check_Report_AWS(self.metadata())\n report.region = instance.region\n report.resource_id = instance.id\n report.resource_arn = instance.arn\n report.resource_tags = instance.tags\n if instance.user_data:\n temp_user_data_file = tempfile.NamedTemporaryFile(delete=False)\n user_data = b64decode(instance.user_data)\n if user_data[0:2] == b\"\\x1f\\x8b\": # GZIP magic number\n user_data = zlib.decompress(\n user_data, zlib.MAX_WBITS | 32\n ).decode(\"utf-8\")\n else:\n user_data = user_data.decode(\"utf-8\")\n\n temp_user_data_file.write(\n bytes(user_data, encoding=\"raw_unicode_escape\")\n )\n temp_user_data_file.close()\n secrets = SecretsCollection()\n with default_settings():\n secrets.scan_file(temp_user_data_file.name)\n\n if secrets.json():\n report.status = \"FAIL\"\n report.status_extended = f\"Potential secret found in EC2 instance {instance.id} User Data.\"\n else:\n report.status = \"PASS\"\n report.status_extended = (\n 
f\"No secrets found in EC2 instance {instance.id} User Data.\"\n )\n\n os.remove(temp_user_data_file.name)\n else:\n report.status = \"PASS\"\n report.status_extended = f\"No secrets found in EC2 instance {instance.id} since User Data is empty.\"\n\n findings.append(report)\n\n return findings\n", "path": "prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py"}]} | 1,297 | 288 |
gh_patches_debug_32978 | rasdani/github-patches | git_diff | sunpy__sunpy-2770 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Solar Cycle Gallery example out of date
The example includes the following text
> For this example we will use the SunPy sample data, if you want the current data, delete the argument to the create function. i.e. noaa = lc.NOAAIndicesLightCurve.create()
This text is inline and therefore not checked during the build, so it was not caught. This should be fixed, and this behavior should be discouraged.
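For reference, a sketch of what the corrected wording should steer readers toward — this mirrors the code already in the example; fetching truly current data would go through `sunpy.net` and is not shown here:

```python
# Sketch: build the NOAA TimeSeries objects from SunPy's bundled
# sample data, exactly as the gallery example itself does.
import sunpy.timeseries as ts
from sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES

noaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
noaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')
```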
</issue>
<code>
[start of examples/plotting/solar_cycle_example.py]
1 """
2 ===============
3 The Solar Cycle
4 ===============
5
6 This example shows the current and possible next solar cycle.
7 """
8 from __future__ import print_function, division
9
10 import datetime
11 import matplotlib.pyplot as plt
12
13 import sunpy.timeseries as ts
14 from sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES
15
16 ###############################################################################
17 # For this example we will use the SunPy sample data, if you want the current
18 # data, delete the argument to the ``create`` function. i.e.
19 # ``noaa = lc.NOAAIndicesLightCurve.create()``
20
21 noaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
22 noaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')
23
24 ###############################################################################
25 # Next lets grab the data again to create a new data structure that we will
26 # shift by 12 years to simulate the next solar cycle. We will truncate the
27 # data to only plot what is necessary.
28
29 noaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
30 noaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))
31 noaa2 = noaa2.truncate('2021/04/01', '2030/01/01')
32
33 ###############################################################################
34 # Finally lets plot everything together with some arbitrary range for the
35 # strength of the next solar cycle.
36
37 plt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')
38 plt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],
39 color='grey', label='Near-term Prediction')
40 plt.fill_between(noaa_predict.data.index, noaa_predict.data['sunspot low'],
41 noaa_predict.data['sunspot high'], alpha=0.3, color='grey')
42
43 plt.fill_between(noaa2.data.index, noaa2.data['sunspot RI smooth']*0.4,
44 noaa2.data['sunspot RI smooth']*1.3, alpha=0.3, color='grey',
45 label='Next Cycle Predict')
46 plt.ylim(0)
47 plt.text('2011-01-01', 120, 'Cycle 24', fontsize=16)
48 plt.text('2024-01-01', 120, 'Cycle 25', fontsize=16)
49 plt.ylabel('Sunspot Number')
50 plt.xlabel('Year')
51 plt.legend(loc=2, framealpha=0.5)
52 plt.show()
53
[end of examples/plotting/solar_cycle_example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/plotting/solar_cycle_example.py b/examples/plotting/solar_cycle_example.py
--- a/examples/plotting/solar_cycle_example.py
+++ b/examples/plotting/solar_cycle_example.py
@@ -14,25 +14,25 @@
from sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES
###############################################################################
-# For this example we will use the SunPy sample data, if you want the current
-# data, delete the argument to the ``create`` function. i.e.
-# ``noaa = lc.NOAAIndicesLightCurve.create()``
+# For this example we will use the SunPy sample data. This code snippet grabs
+# the most current NOAA solar cycle data as a ``TimeSeries``
+# (see :ref:`timeseries_code_ref`).
noaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
noaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')
###############################################################################
-# Next lets grab the data again to create a new data structure that we will
-# shift by 12 years to simulate the next solar cycle. We will truncate the
-# data to only plot what is necessary.
+# Next, we grab a new copy of the data and shift it forward 12 years to
+# simulate the next solar cycle. We will also truncate the data to ensure
+# that we only plot what is necessary.
noaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
noaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))
noaa2 = noaa2.truncate('2021/04/01', '2030/01/01')
###############################################################################
-# Finally lets plot everything together with some arbitrary range for the
-# strength of the next solar cycle.
+# Finally, we plot both ``noaa`` and ``noaa2`` together, with an arbitrary
+# range for the strength of the next solar cycle.
plt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')
plt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],
| {"golden_diff": "diff --git a/examples/plotting/solar_cycle_example.py b/examples/plotting/solar_cycle_example.py\n--- a/examples/plotting/solar_cycle_example.py\n+++ b/examples/plotting/solar_cycle_example.py\n@@ -14,25 +14,25 @@\n from sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES\n \n ###############################################################################\n-# For this example we will use the SunPy sample data, if you want the current\n-# data, delete the argument to the ``create`` function. i.e.\n-# ``noaa = lc.NOAAIndicesLightCurve.create()``\n+# For this example we will use the SunPy sample data. This code snippet grabs\n+# the most current NOAA solar cycle data as a ``TimeSeries``\n+# (see :ref:`timeseries_code_ref`).\n \n noaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\n noaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')\n \n ###############################################################################\n-# Next lets grab the data again to create a new data structure that we will\n-# shift by 12 years to simulate the next solar cycle. We will truncate the\n-# data to only plot what is necessary.\n+# Next, we grab a new copy of the data and shift it forward 12 years to\n+# simulate the next solar cycle. We will also truncate the data to ensure\n+# that we only plot what is necessary.\n \n noaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\n noaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))\n noaa2 = noaa2.truncate('2021/04/01', '2030/01/01')\n \n ###############################################################################\n-# Finally lets plot everything together with some arbitrary range for the\n-# strength of the next solar cycle.\n+# Finally, we plot both ``noaa`` and ``noaa2`` together, with an arbitrary\n+# range for the strength of the next solar cycle.\n \n plt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')\n plt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],\n", "issue": "Solar Cycle Gallery example out of date\nThe example includes the following text \r\n\r\n> For this example we will use the SunPy sample data, if you want the current data, delete the argument to the create function. i.e. noaa = lc.NOAAIndicesLightCurve.create()\r\n\r\nThis text is inline and therefore not checked during build so was not caught. This should be fixed and this behavior should be discouraged.\n", "before_files": [{"content": "\"\"\"\n===============\nThe Solar Cycle\n===============\n\nThis example shows the current and possible next solar cycle.\n\"\"\"\nfrom __future__ import print_function, division\n\nimport datetime\nimport matplotlib.pyplot as plt\n\nimport sunpy.timeseries as ts\nfrom sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES\n\n###############################################################################\n# For this example we will use the SunPy sample data, if you want the current\n# data, delete the argument to the ``create`` function. i.e.\n# ``noaa = lc.NOAAIndicesLightCurve.create()``\n\nnoaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\nnoaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')\n\n###############################################################################\n# Next lets grab the data again to create a new data structure that we will\n# shift by 12 years to simulate the next solar cycle. 
We will truncate the\n# data to only plot what is necessary.\n\nnoaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\nnoaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))\nnoaa2 = noaa2.truncate('2021/04/01', '2030/01/01')\n\n###############################################################################\n# Finally lets plot everything together with some arbitrary range for the\n# strength of the next solar cycle.\n\nplt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')\nplt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],\n color='grey', label='Near-term Prediction')\nplt.fill_between(noaa_predict.data.index, noaa_predict.data['sunspot low'],\n noaa_predict.data['sunspot high'], alpha=0.3, color='grey')\n\nplt.fill_between(noaa2.data.index, noaa2.data['sunspot RI smooth']*0.4,\n noaa2.data['sunspot RI smooth']*1.3, alpha=0.3, color='grey',\n label='Next Cycle Predict')\nplt.ylim(0)\nplt.text('2011-01-01', 120, 'Cycle 24', fontsize=16)\nplt.text('2024-01-01', 120, 'Cycle 25', fontsize=16)\nplt.ylabel('Sunspot Number')\nplt.xlabel('Year')\nplt.legend(loc=2, framealpha=0.5)\nplt.show()\n", "path": "examples/plotting/solar_cycle_example.py"}]} | 1,285 | 505 |
gh_patches_debug_42513 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-57 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change PS to run in graph mode.
Under multi-threading, if the PS enables eager mode, then all threads in the same process end up in eager mode. Eager mode can run graphs, but not the other way around.
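A minimal sketch of the graph-mode pattern the fix moves to, using the TF1-style APIs this code base targets; the optimizer choice and values are illustrative:

```python
# Sketch: build the update op once with a placeholder, then feed
# gradients through a Session. This avoids tf.enable_eager_execution(),
# which would put every thread in the process into eager mode.
import tensorflow as tf
from tensorflow.python.ops import array_ops

var = tf.Variable([1.0, 2.0], name='w')
grad_ph = array_ops.placeholder(dtype=tf.float32)
opt = tf.train.GradientDescentOptimizer(0.1)
apply_op = opt.apply_gradients([(grad_ph, var)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(apply_op, feed_dict={grad_ph: [0.5, 0.5]})
    print(sess.run(var))  # -> [0.95, 1.95]
```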
</issue>
<code>
[start of python/elasticdl/tflib/ps/ps.py]
1 import threading
2 import queue
3 import numpy as np
4 import tensorflow.contrib.eager as tfe
5 import tensorflow as tf
6 tf.enable_eager_execution()
7
8
9 class ParameterServer(object):
10 def __init__(self, optimizer, vars):
11 self._opt = optimizer
12 self._vars = {}
13 for k, v in vars.items():
14 if (not isinstance(v, np.ndarray)
15 or v.dtype not in (np.float32, np.float64)):
16 raise ValueError(
17 'Initial value for variable %s is not of float type ndarray' %
18 k)
19 self._vars[k] = tfe.Variable(v, name=k)
20 self._step = 0
21 self._grad_q = queue.Queue()
22 self._lock = threading.Lock()
23 self._runner = threading.Thread(target=self._run, name='ps-runner')
24 self._exiting = False
25 self._min_step_cv = threading.Condition()
26
27 def pull(self, names=None, min_step=0, blocking=True, timeout=None):
28 with self._min_step_cv:
29 self._min_step_cv.wait_for(
30 lambda: not blocking or min_step <= self._step,
31 timeout=timeout)
32 with self._lock:
33 if min_step > self._step:
34 raise LookupError(
35 'Required step is not ready yet: %s' %
36 min_step)
37 if names:
38 res = {k: self._vars[k].numpy() for k in names}
39 else:
40 res = {k: v.numpy() for k, v in self._vars.items()}
41 return self._step, res
42
43 def push(self, base_step, sub_step, grads):
44 with self._lock:
45 if base_step > self._step:
46 raise ValueError(
47 'Illegal base step %s, parameter server step is %s' %
48 (base_step, self._step))
49
50 if sub_step < 0:
51 raise ValueError('Illegal sub step %s' % sub_step)
52
53 for k, g in grads.items():
54 v = self._vars[k]
55 if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:
56 raise ValueError('Incompatible gradient for variable %s' % k)
57 # TODO(l.zou): use @dataclass when python 3.7 is available.
58 self._grad_q.put((base_step, sub_step, grads))
59
60 def _compute(self, grads):
61 grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
62 with self._lock:
63 self._opt.apply_gradients(grads_vars)
64 with self._min_step_cv:
65 self._step += 1
66 self._min_step_cv.notify_all()
67
68 def _run(self):
69 while not self._exiting:
70 # TODO(l.zou): How to properly accumulate and decay grads?
71 try:
72 base_step, sub_step, grads = self._grad_q.get(timeout=1.0)
73 self._compute(grads)
74 except queue.Empty:
75 pass
76
77 def start(self):
78 self._runner.start()
79
80 def join(self):
81 self._exiting = True
82 self._runner.join()
83
[end of python/elasticdl/tflib/ps/ps.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/elasticdl/tflib/ps/ps.py b/python/elasticdl/tflib/ps/ps.py
--- a/python/elasticdl/tflib/ps/ps.py
+++ b/python/elasticdl/tflib/ps/ps.py
@@ -1,22 +1,12 @@
import threading
import queue
import numpy as np
-import tensorflow.contrib.eager as tfe
import tensorflow as tf
-tf.enable_eager_execution()
+from tensorflow.python.ops import array_ops
class ParameterServer(object):
def __init__(self, optimizer, vars):
- self._opt = optimizer
- self._vars = {}
- for k, v in vars.items():
- if (not isinstance(v, np.ndarray)
- or v.dtype not in (np.float32, np.float64)):
- raise ValueError(
- 'Initial value for variable %s is not of float type ndarray' %
- k)
- self._vars[k] = tfe.Variable(v, name=k)
self._step = 0
self._grad_q = queue.Queue()
self._lock = threading.Lock()
@@ -24,6 +14,23 @@
self._exiting = False
self._min_step_cv = threading.Condition()
+ self._grads_vars = {}
+ for k, v in vars.items():
+ if (not isinstance(v, np.ndarray)
+ or v.dtype not in (np.float32, np.float64)):
+ raise ValueError(
+ 'Initial value for variable %s is not of float type ndarray' %
+ k)
+ # TODO: In graph mode we don't need to keep track of variables by ourselves.
+ self._grads_vars[k] = (array_ops.placeholder(dtype=v.dtype), tf.Variable(v, name=k))
+
+ self._opt = optimizer
+ self._apply_grad_op = self._opt.apply_gradients(self._grads_vars.values())
+
+ self._sess = tf.Session()
+ init_op = tf.global_variables_initializer()
+ self._sess.run(init_op)
+
def pull(self, names=None, min_step=0, blocking=True, timeout=None):
with self._min_step_cv:
self._min_step_cv.wait_for(
@@ -35,9 +42,9 @@
'Required step is not ready yet: %s' %
min_step)
if names:
- res = {k: self._vars[k].numpy() for k in names}
+ res = {k: self._grads_vars[k][1].eval(self._sess) for k in names}
else:
- res = {k: v.numpy() for k, v in self._vars.items()}
+ res = {k: v[1].eval(self._sess) for k, v in self._grads_vars.items()}
return self._step, res
def push(self, base_step, sub_step, grads):
@@ -51,16 +58,16 @@
raise ValueError('Illegal sub step %s' % sub_step)
for k, g in grads.items():
- v = self._vars[k]
+ v = self._grads_vars[k][1]
if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:
raise ValueError('Incompatible gradient for variable %s' % k)
# TODO(l.zou): use @dataclass when python 3.7 is available.
self._grad_q.put((base_step, sub_step, grads))
def _compute(self, grads):
- grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
with self._lock:
- self._opt.apply_gradients(grads_vars)
+ feed_dict = {self._grads_vars[k][0]:v for k, v in grads.items()}
+ self._sess.run(self._apply_grad_op, feed_dict=feed_dict)
with self._min_step_cv:
self._step += 1
self._min_step_cv.notify_all()
@@ -80,3 +87,4 @@
def join(self):
self._exiting = True
self._runner.join()
+ self._sess.close()
| {"golden_diff": "diff --git a/python/elasticdl/tflib/ps/ps.py b/python/elasticdl/tflib/ps/ps.py\n--- a/python/elasticdl/tflib/ps/ps.py\n+++ b/python/elasticdl/tflib/ps/ps.py\n@@ -1,22 +1,12 @@\n import threading\n import queue\n import numpy as np\n-import tensorflow.contrib.eager as tfe\n import tensorflow as tf\n-tf.enable_eager_execution()\n+from tensorflow.python.ops import array_ops\n \n \n class ParameterServer(object):\n def __init__(self, optimizer, vars):\n- self._opt = optimizer\n- self._vars = {}\n- for k, v in vars.items():\n- if (not isinstance(v, np.ndarray)\n- or v.dtype not in (np.float32, np.float64)):\n- raise ValueError(\n- 'Initial value for variable %s is not of float type ndarray' %\n- k)\n- self._vars[k] = tfe.Variable(v, name=k)\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n@@ -24,6 +14,23 @@\n self._exiting = False\n self._min_step_cv = threading.Condition()\n \n+ self._grads_vars = {}\n+ for k, v in vars.items():\n+ if (not isinstance(v, np.ndarray)\n+ or v.dtype not in (np.float32, np.float64)):\n+ raise ValueError(\n+ 'Initial value for variable %s is not of float type ndarray' %\n+ k)\n+ # TODO: In graph mode we don't need to keep track of variables by ourselves.\n+ self._grads_vars[k] = (array_ops.placeholder(dtype=v.dtype), tf.Variable(v, name=k))\n+\n+ self._opt = optimizer\n+ self._apply_grad_op = self._opt.apply_gradients(self._grads_vars.values())\n+\n+ self._sess = tf.Session()\n+ init_op = tf.global_variables_initializer()\n+ self._sess.run(init_op)\n+\n def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n with self._min_step_cv:\n self._min_step_cv.wait_for(\n@@ -35,9 +42,9 @@\n 'Required step is not ready yet: %s' %\n min_step)\n if names:\n- res = {k: self._vars[k].numpy() for k in names}\n+ res = {k: self._grads_vars[k][1].eval(self._sess) for k in names}\n else:\n- res = {k: v.numpy() for k, v in self._vars.items()}\n+ res = {k: v[1].eval(self._sess) for k, v in self._grads_vars.items()}\n return self._step, res\n \n def push(self, base_step, sub_step, grads):\n@@ -51,16 +58,16 @@\n raise ValueError('Illegal sub step %s' % sub_step)\n \n for k, g in grads.items():\n- v = self._vars[k]\n+ v = self._grads_vars[k][1]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n \n def _compute(self, grads):\n- grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n- self._opt.apply_gradients(grads_vars)\n+ feed_dict = {self._grads_vars[k][0]:v for k, v in grads.items()}\n+ self._sess.run(self._apply_grad_op, feed_dict=feed_dict)\n with self._min_step_cv:\n self._step += 1\n self._min_step_cv.notify_all()\n@@ -80,3 +87,4 @@\n def join(self):\n self._exiting = True\n self._runner.join()\n+ self._sess.close()\n", "issue": "Change PS to run in graph mode.\n\u5728multi-thread\u4e0b\u9762\uff0c \u5982\u679cps enable eager mode, \u5728\u540c\u4e00\u4e2aprocess\u4e0b\u9762\uff0c\u6240\u6709thread\u90fd\u662f\u5728eager mode\u4e0b\u9762\u4e86\u3002 eager mode \u53ef\u4ee5run graph, \u53cd\u4e4b\u5219\u4e0d\u884c\u3002\n", "before_files": [{"content": "import threading\nimport queue\nimport numpy as np\nimport tensorflow.contrib.eager as tfe\nimport tensorflow as tf\ntf.enable_eager_execution()\n\n\nclass ParameterServer(object):\n def __init__(self, optimizer, vars):\n self._opt = optimizer\n self._vars = 
{}\n for k, v in vars.items():\n if (not isinstance(v, np.ndarray)\n or v.dtype not in (np.float32, np.float64)):\n raise ValueError(\n 'Initial value for variable %s is not of float type ndarray' %\n k)\n self._vars[k] = tfe.Variable(v, name=k)\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n self._min_step_cv = threading.Condition()\n\n def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n with self._min_step_cv:\n self._min_step_cv.wait_for(\n lambda: not blocking or min_step <= self._step,\n timeout=timeout)\n with self._lock:\n if min_step > self._step:\n raise LookupError(\n 'Required step is not ready yet: %s' %\n min_step)\n if names:\n res = {k: self._vars[k].numpy() for k in names}\n else:\n res = {k: v.numpy() for k, v in self._vars.items()}\n return self._step, res\n\n def push(self, base_step, sub_step, grads):\n with self._lock:\n if base_step > self._step:\n raise ValueError(\n 'Illegal base step %s, parameter server step is %s' %\n (base_step, self._step))\n\n if sub_step < 0:\n raise ValueError('Illegal sub step %s' % sub_step)\n\n for k, g in grads.items():\n v = self._vars[k]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n\n def _compute(self, grads):\n grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n self._opt.apply_gradients(grads_vars)\n with self._min_step_cv:\n self._step += 1\n self._min_step_cv.notify_all()\n\n def _run(self):\n while not self._exiting:\n # TODO(l.zou): How to properly accumulate and decay grads?\n try:\n base_step, sub_step, grads = self._grad_q.get(timeout=1.0)\n self._compute(grads)\n except queue.Empty:\n pass\n\n def start(self):\n self._runner.start()\n\n def join(self):\n self._exiting = True\n self._runner.join()\n", "path": "python/elasticdl/tflib/ps/ps.py"}]} | 1,431 | 927 |
gh_patches_debug_36975 | rasdani/github-patches | git_diff | pulp__pulpcore-2318 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clean up TaskReservedResources/task-table at migration to new-tasking-system
See https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.
Migration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py
This wants to be cherry-picked into 3.14/3.15/3.16 (after which the offending table no longer exists).
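For illustration, a sketch of the kind of data-migration step the report asks for; the model names follow the report, and the real migration must run this before the schema operations that drop these tables:

```python
# Sketch: purge orphaned reserved-resource rows via RunPython.
from django.db import migrations

def purge_reservedresources(apps, schema_editor):
    TaskReservedResource = apps.get_model('core', 'TaskReservedResource')
    TaskReservedResource.objects.all().delete()
    ReservedResource = apps.get_model('core', 'ReservedResource')
    ReservedResource.objects.all().delete()

operations = [
    migrations.RunPython(
        code=purge_reservedresources,
        reverse_code=migrations.RunPython.noop,
    ),
    # ... existing schema operations follow ...
]
```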
</issue>
<code>
[start of pulpcore/app/migrations/0064_add_new_style_task_columns.py]
1 # Generated by Django 2.2.20 on 2021-04-27 07:51
2
3 import django.contrib.postgres.fields
4 import django.contrib.postgres.fields.jsonb
5 from django.db import migrations, models
6
7
8 def copy_reserved_resources_record(apps, schema_editor):
9 Task = apps.get_model('core', 'Task')
10 for task in Task.objects.iterator():
11 task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
12 task.save()
13
14
15 def noop(apps, schema_editor):
16 pass
17
18
19 class Migration(migrations.Migration):
20
21 dependencies = [
22 ('core', '0063_repository_retained_versions'),
23 ]
24
25 operations = [
26 migrations.AddField(
27 model_name='task',
28 name='args',
29 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
30 ),
31 migrations.AddField(
32 model_name='task',
33 name='kwargs',
34 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
35 ),
36 migrations.AddField(
37 model_name='task',
38 name='_reserved_resources_record',
39 field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),
40 ),
41 migrations.AlterField(
42 model_name='task',
43 name='_resource_job_id',
44 field=models.UUIDField(null=True),
45 ),
46 migrations.AlterField(
47 model_name='progressreport',
48 name='state',
49 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),
50 ),
51 migrations.AlterField(
52 model_name='task',
53 name='state',
54 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),
55 ),
56 migrations.AddIndex(
57 model_name='task',
58 index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),
59 ),
60 migrations.RunPython(
61 code=copy_reserved_resources_record,
62 reverse_code=noop,
63 ),
64 migrations.RemoveField(
65 model_name='taskreservedresourcerecord',
66 name='resource',
67 ),
68 migrations.RemoveField(
69 model_name='taskreservedresourcerecord',
70 name='task',
71 ),
72 migrations.DeleteModel(
73 name='ReservedResourceRecord',
74 ),
75 migrations.DeleteModel(
76 name='TaskReservedResourceRecord',
77 ),
78 migrations.RenameField(
79 model_name='task',
80 old_name='_reserved_resources_record',
81 new_name='reserved_resources_record',
82 ),
83 ]
84
[end of pulpcore/app/migrations/0064_add_new_style_task_columns.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py
+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
@@ -4,16 +4,34 @@
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
+TASK_BATCH_SIZE = 1000
+
def copy_reserved_resources_record(apps, schema_editor):
Task = apps.get_model('core', 'Task')
- for task in Task.objects.iterator():
+
+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.
+ # When we hit 1K tasks, go to the db for the batch.
+ # Make sure to update the final batch!
+ tasks = []
+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):
task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
- task.save()
+ tasks.append(task)
+ if len(tasks) == TASK_BATCH_SIZE:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+ tasks.clear()
+
+ # Update last set of tasks
+ if len(tasks) > 0:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+
+def purge_reservedresources(apps, schema_editor):
+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')
+ TaskReservedResource.objects.all().delete()
-def noop(apps, schema_editor):
- pass
+ ReservedResource = apps.get_model('core', 'ReservedResource')
+ ReservedResource.objects.all().delete()
class Migration(migrations.Migration):
@@ -23,6 +41,12 @@
]
operations = [
+ # Purge any ReservedResource entries - if there are any, they're orphans
+ migrations.RunPython(
+ code=purge_reservedresources,
+ reverse_code=migrations.RunPython.noop,
+ ),
+ # Update entities for the new task-system
migrations.AddField(
model_name='task',
name='args',
@@ -59,7 +83,7 @@
),
migrations.RunPython(
code=copy_reserved_resources_record,
- reverse_code=noop,
+ reverse_code=migrations.RunPython.noop,
),
migrations.RemoveField(
model_name='taskreservedresourcerecord',
@@ -80,4 +104,5 @@
old_name='_reserved_resources_record',
new_name='reserved_resources_record',
),
+
]
| {"golden_diff": "diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n@@ -4,16 +4,34 @@\n import django.contrib.postgres.fields.jsonb\n from django.db import migrations, models\n \n+TASK_BATCH_SIZE = 1000\n+\n \n def copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n- for task in Task.objects.iterator():\n+\n+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.\n+ # When we hit 1K tasks, go to the db for the batch.\n+ # Make sure to update the final batch!\n+ tasks = []\n+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n- task.save()\n+ tasks.append(task)\n+ if len(tasks) == TASK_BATCH_SIZE:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+ tasks.clear()\n+\n+ # Update last set of tasks\n+ if len(tasks) > 0:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+\n \n+def purge_reservedresources(apps, schema_editor):\n+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')\n+ TaskReservedResource.objects.all().delete()\n \n-def noop(apps, schema_editor):\n- pass\n+ ReservedResource = apps.get_model('core', 'ReservedResource')\n+ ReservedResource.objects.all().delete()\n \n \n class Migration(migrations.Migration):\n@@ -23,6 +41,12 @@\n ]\n \n operations = [\n+ # Purge any ReservedResource entries - if there are any, they're orphans\n+ migrations.RunPython(\n+ code=purge_reservedresources,\n+ reverse_code=migrations.RunPython.noop,\n+ ),\n+ # Update entities for the new task-system\n migrations.AddField(\n model_name='task',\n name='args',\n@@ -59,7 +83,7 @@\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n- reverse_code=noop,\n+ reverse_code=migrations.RunPython.noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n@@ -80,4 +104,5 @@\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n+\n ]\n", "issue": "Clean up TaskReservedResources/task-table at migration to new-tasking-system\nSee https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.\r\n\r\nMigration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py\r\n\r\nThis wants to be cherrypicked into 3.14/15/16 (after which the offending table no longer exists)\n", "before_files": [{"content": "# Generated by Django 2.2.20 on 2021-04-27 07:51\n\nimport django.contrib.postgres.fields\nimport django.contrib.postgres.fields.jsonb\nfrom django.db import migrations, models\n\n\ndef copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n for task in Task.objects.iterator():\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n task.save()\n\n\ndef noop(apps, schema_editor):\n pass\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0063_repository_retained_versions'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='task',\n name='args',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n model_name='task',\n name='kwargs',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n 
),\n migrations.AddField(\n model_name='task',\n name='_reserved_resources_record',\n field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),\n ),\n migrations.AlterField(\n model_name='task',\n name='_resource_job_id',\n field=models.UUIDField(null=True),\n ),\n migrations.AlterField(\n model_name='progressreport',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),\n ),\n migrations.AlterField(\n model_name='task',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),\n ),\n migrations.AddIndex(\n model_name='task',\n index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n reverse_code=noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='resource',\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='task',\n ),\n migrations.DeleteModel(\n name='ReservedResourceRecord',\n ),\n migrations.DeleteModel(\n name='TaskReservedResourceRecord',\n ),\n migrations.RenameField(\n model_name='task',\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n ]\n", "path": "pulpcore/app/migrations/0064_add_new_style_task_columns.py"}]} | 1,427 | 601 |
gh_patches_debug_17883 | rasdani/github-patches | git_diff | encode__httpx-2803 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change the type of `Extensions` from `Mapping` to `MutableMapping`.
### Discussed in https://github.com/encode/httpx/discussions/2793
Originally posted by **karosis88**, July 28, 2023.
I'm working on a library that implements HTTP Caching for httpx and httpcore (it provides transports and connection pools), and I'd like to add an extension that simply indicates whether or not the response was taken from the cache.
Unfortunately, the type of the extensions mapping is `Mapping`, so this is an error for mypy:
```python
response = httpx.Response(200)
response.extensions['my_custom_extension'] = 'something'
```
OUTPUT
```
error: Unsupported target for indexed assignment ("Mapping[str, Any]") [index]
```
The solution is to simply change the extension type from `Mapping` to `MutableMapping`, allowing us to add custom extensions after the response has been created.
[See also this pr](https://github.com/karosis88/hishel/pull/4)</div>
---
I believe the only change needed is in the "_models.py" file.
</issue>
<code>
[start of httpx/_types.py]
1 """
2 Type definitions for type checking purposes.
3 """
4
5 import ssl
6 from http.cookiejar import CookieJar
7 from typing import (
8 IO,
9 TYPE_CHECKING,
10 Any,
11 AsyncIterable,
12 AsyncIterator,
13 Callable,
14 Dict,
15 Iterable,
16 Iterator,
17 List,
18 Mapping,
19 NamedTuple,
20 Optional,
21 Sequence,
22 Tuple,
23 Union,
24 )
25
26 if TYPE_CHECKING: # pragma: no cover
27 from ._auth import Auth # noqa: F401
28 from ._config import Proxy, Timeout # noqa: F401
29 from ._models import Cookies, Headers, Request # noqa: F401
30 from ._urls import URL, QueryParams # noqa: F401
31
32
33 PrimitiveData = Optional[Union[str, int, float, bool]]
34
35 RawURL = NamedTuple(
36 "RawURL",
37 [
38 ("raw_scheme", bytes),
39 ("raw_host", bytes),
40 ("port", Optional[int]),
41 ("raw_path", bytes),
42 ],
43 )
44
45 URLTypes = Union["URL", str]
46
47 QueryParamTypes = Union[
48 "QueryParams",
49 Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],
50 List[Tuple[str, PrimitiveData]],
51 Tuple[Tuple[str, PrimitiveData], ...],
52 str,
53 bytes,
54 ]
55
56 HeaderTypes = Union[
57 "Headers",
58 Mapping[str, str],
59 Mapping[bytes, bytes],
60 Sequence[Tuple[str, str]],
61 Sequence[Tuple[bytes, bytes]],
62 ]
63
64 CookieTypes = Union["Cookies", CookieJar, Dict[str, str], List[Tuple[str, str]]]
65
66 CertTypes = Union[
67 # certfile
68 str,
69 # (certfile, keyfile)
70 Tuple[str, Optional[str]],
71 # (certfile, keyfile, password)
72 Tuple[str, Optional[str], Optional[str]],
73 ]
74 VerifyTypes = Union[str, bool, ssl.SSLContext]
75 TimeoutTypes = Union[
76 Optional[float],
77 Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],
78 "Timeout",
79 ]
80 ProxiesTypes = Union[URLTypes, "Proxy", Dict[URLTypes, Union[None, URLTypes, "Proxy"]]]
81
82 AuthTypes = Union[
83 Tuple[Union[str, bytes], Union[str, bytes]],
84 Callable[["Request"], "Request"],
85 "Auth",
86 ]
87
88 RequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
89 ResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
90 ResponseExtensions = Mapping[str, Any]
91
92 RequestData = Mapping[str, Any]
93
94 FileContent = Union[IO[bytes], bytes, str]
95 FileTypes = Union[
96 # file (or bytes)
97 FileContent,
98 # (filename, file (or bytes))
99 Tuple[Optional[str], FileContent],
100 # (filename, file (or bytes), content_type)
101 Tuple[Optional[str], FileContent, Optional[str]],
102 # (filename, file (or bytes), content_type, headers)
103 Tuple[Optional[str], FileContent, Optional[str], Mapping[str, str]],
104 ]
105 RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
106
107 RequestExtensions = Mapping[str, Any]
108
109
110 class SyncByteStream:
111 def __iter__(self) -> Iterator[bytes]:
112 raise NotImplementedError(
113 "The '__iter__' method must be implemented."
114 ) # pragma: no cover
115 yield b"" # pragma: no cover
116
117 def close(self) -> None:
118 """
119 Subclasses can override this method to release any network resources
120 after a request/response cycle is complete.
121 """
122
123
124 class AsyncByteStream:
125 async def __aiter__(self) -> AsyncIterator[bytes]:
126 raise NotImplementedError(
127 "The '__aiter__' method must be implemented."
128 ) # pragma: no cover
129 yield b"" # pragma: no cover
130
131 async def aclose(self) -> None:
132 pass
133
[end of httpx/_types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/_types.py b/httpx/_types.py
--- a/httpx/_types.py
+++ b/httpx/_types.py
@@ -16,6 +16,7 @@
Iterator,
List,
Mapping,
+ MutableMapping,
NamedTuple,
Optional,
Sequence,
@@ -87,7 +88,7 @@
RequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
ResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
-ResponseExtensions = Mapping[str, Any]
+ResponseExtensions = MutableMapping[str, Any]
RequestData = Mapping[str, Any]
@@ -104,7 +105,7 @@
]
RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
-RequestExtensions = Mapping[str, Any]
+RequestExtensions = MutableMapping[str, Any]
class SyncByteStream:
| {"golden_diff": "diff --git a/httpx/_types.py b/httpx/_types.py\n--- a/httpx/_types.py\n+++ b/httpx/_types.py\n@@ -16,6 +16,7 @@\n Iterator,\n List,\n Mapping,\n+ MutableMapping,\n NamedTuple,\n Optional,\n Sequence,\n@@ -87,7 +88,7 @@\n \n RequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\n ResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\n-ResponseExtensions = Mapping[str, Any]\n+ResponseExtensions = MutableMapping[str, Any]\n \n RequestData = Mapping[str, Any]\n \n@@ -104,7 +105,7 @@\n ]\n RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n \n-RequestExtensions = Mapping[str, Any]\n+RequestExtensions = MutableMapping[str, Any]\n \n \n class SyncByteStream:\n", "issue": "Change the type of `Extensions` from `Mapping` to `MutableMapping`.\n### Discussed in https://github.com/encode/httpx/discussions/2793\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **karosis88** July 28, 2023</sup>\r\nI'm working on a library that implements HTTP Caching for httpx and httpcore (it provides transports and connection pools), and I'd like to add an extension that simply indicates whether or not the response was taken from the cache.\r\n\r\nUnfortunately, the type of extension is Mapping, so this is an error for mypy.\r\n\r\n\r\n```python\r\nresponse = httpx.Response(200)\r\nresponse.extensions['my_custom_extension'] = 'something'\r\n```\r\n\r\nOUTPUT \r\n```\r\nerror: Unsupported target for indexed assignment (\"Mapping[Str, Any]\") [index]\r\n```\r\n\r\nThe solution is to simply change the extension type from `Mapping` to `MutableMapping`, allowing us to add custom extensions after the response has been created.\r\n\r\n[See also this pr](https://github.com/karosis88/hishel/pull/4)</div>\r\n\r\n---\r\n\r\nI believe the only change needed is in the \"_models.py\" file.\n", "before_files": [{"content": "\"\"\"\nType definitions for type checking purposes.\n\"\"\"\n\nimport ssl\nfrom http.cookiejar import CookieJar\nfrom typing import (\n IO,\n TYPE_CHECKING,\n Any,\n AsyncIterable,\n AsyncIterator,\n Callable,\n Dict,\n Iterable,\n Iterator,\n List,\n Mapping,\n NamedTuple,\n Optional,\n Sequence,\n Tuple,\n Union,\n)\n\nif TYPE_CHECKING: # pragma: no cover\n from ._auth import Auth # noqa: F401\n from ._config import Proxy, Timeout # noqa: F401\n from ._models import Cookies, Headers, Request # noqa: F401\n from ._urls import URL, QueryParams # noqa: F401\n\n\nPrimitiveData = Optional[Union[str, int, float, bool]]\n\nRawURL = NamedTuple(\n \"RawURL\",\n [\n (\"raw_scheme\", bytes),\n (\"raw_host\", bytes),\n (\"port\", Optional[int]),\n (\"raw_path\", bytes),\n ],\n)\n\nURLTypes = Union[\"URL\", str]\n\nQueryParamTypes = Union[\n \"QueryParams\",\n Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],\n List[Tuple[str, PrimitiveData]],\n Tuple[Tuple[str, PrimitiveData], ...],\n str,\n bytes,\n]\n\nHeaderTypes = Union[\n \"Headers\",\n Mapping[str, str],\n Mapping[bytes, bytes],\n Sequence[Tuple[str, str]],\n Sequence[Tuple[bytes, bytes]],\n]\n\nCookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str], List[Tuple[str, str]]]\n\nCertTypes = Union[\n # certfile\n str,\n # (certfile, keyfile)\n Tuple[str, Optional[str]],\n # (certfile, keyfile, password)\n Tuple[str, Optional[str], Optional[str]],\n]\nVerifyTypes = Union[str, bool, ssl.SSLContext]\nTimeoutTypes = Union[\n Optional[float],\n Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],\n \"Timeout\",\n]\nProxiesTypes = 
Union[URLTypes, \"Proxy\", Dict[URLTypes, Union[None, URLTypes, \"Proxy\"]]]\n\nAuthTypes = Union[\n Tuple[Union[str, bytes], Union[str, bytes]],\n Callable[[\"Request\"], \"Request\"],\n \"Auth\",\n]\n\nRequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\nResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\nResponseExtensions = Mapping[str, Any]\n\nRequestData = Mapping[str, Any]\n\nFileContent = Union[IO[bytes], bytes, str]\nFileTypes = Union[\n # file (or bytes)\n FileContent,\n # (filename, file (or bytes))\n Tuple[Optional[str], FileContent],\n # (filename, file (or bytes), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n # (filename, file (or bytes), content_type, headers)\n Tuple[Optional[str], FileContent, Optional[str], Mapping[str, str]],\n]\nRequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n\nRequestExtensions = Mapping[str, Any]\n\n\nclass SyncByteStream:\n def __iter__(self) -> Iterator[bytes]:\n raise NotImplementedError(\n \"The '__iter__' method must be implemented.\"\n ) # pragma: no cover\n yield b\"\" # pragma: no cover\n\n def close(self) -> None:\n \"\"\"\n Subclasses can override this method to release any network resources\n after a request/response cycle is complete.\n \"\"\"\n\n\nclass AsyncByteStream:\n async def __aiter__(self) -> AsyncIterator[bytes]:\n raise NotImplementedError(\n \"The '__aiter__' method must be implemented.\"\n ) # pragma: no cover\n yield b\"\" # pragma: no cover\n\n async def aclose(self) -> None:\n pass\n", "path": "httpx/_types.py"}]} | 1,943 | 206 |
gh_patches_debug_6089 | rasdani/github-patches | git_diff | encode__starlette-1459 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Raising Exceptions in sub-applications routes
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Let's start with this PR: #1262
It's about preventing `anyio.ExceptionGroup` from being raised by views under a `BaseHTTPMiddleware`. That PR resolves the problem with a nonlocal variable that stores our exception. But in the case of sub-applications, it does not work.
As far as I can see (FYI, I am not good at asyncio), in the case below we reach and read a response before we raise an exception and store it in our nonlocal variable:
fragment of `BaseHTTPMiddleware.__call__`
```python
async def call_next(request: Request) -> Response:
app_exc: typing.Optional[Exception] = None
send_stream, recv_stream = anyio.create_memory_object_stream()
async def coro() -> None:
nonlocal app_exc
async with send_stream:
try:
                await self.app(scope, request.receive, send_stream.send)
except Exception as exc:
app_exc = exc
task_group.start_soon(coro)
try:
message = await recv_stream.receive()
except anyio.EndOfStream:
if app_exc is not None:
raise app_exc
raise RuntimeError("No response returned.")
...
response = StreamingResponse(
status_code=message["status"], content=body_stream()
)
response.raw_headers = message["headers"]
return response
```
At this moment, when the `except anyio.EndOfStream:` branch could catch it, the exception has still not been raised and stored.
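
One possible direction — a sketch only, not necessarily the final fix — is to consult the stored exception again after the first message has arrived, not only in the `EndOfStream` branch. The helper below stands in for the tail of `call_next()`; `recv_stream`, `app_exc` and `message` are the names that are nonlocal/local inside `BaseHTTPMiddleware.__call__`:

```python
import typing

from starlette.responses import StreamingResponse

async def finish_call_next(recv_stream, app_exc, message):
    async def body_stream() -> typing.AsyncGenerator[bytes, None]:
        async with recv_stream:
            async for msg in recv_stream:
                assert msg["type"] == "http.response.body"
                yield msg.get("body", b"")

    # Re-check the stored exception here too, so errors raised by
    # sub-applications are surfaced instead of being silently lost.
    if app_exc is not None:
        raise app_exc

    response = StreamingResponse(status_code=message["status"], content=body_stream())
    response.raw_headers = message["headers"]
    return response
```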
### Steps to reproduce the bug
```python
import httpx
import pytest
from fastapi import FastAPI, APIRouter
from starlette.middleware import Middleware
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response
from starlette.routing import Route
class SomeError(Exception):
pass
class SomeMiddleware(BaseHTTPMiddleware):
async def dispatch(
self, request: Request, call_next: RequestResponseEndpoint
) -> Response:
return await call_next(request)
# Drop (or use not BaseHTTPMiddleware based) middleware and test works fine
app = FastAPI(middleware=[Middleware(SomeMiddleware), ])
async def simple_route(request: Request):
raise SomeError
another_router = APIRouter(
routes=[Route('/simple-route/', simple_route, methods=['GET'])]
)
sub_app = FastAPI()
sub_app.include_router(another_router)
app.router.mount(f'/api', sub_app)
@pytest.mark.asyncio
async def test_simple_route():
async with httpx.AsyncClient(app=app) as client:
with pytest.raises(SomeError):
await client.get("http://testserver/api/simple-route/")
```
### Expected behavior
An exception was raised and caught by pytest exception
### Actual behavior
An exception wasn't raised
### Debugging material
_No response_
### Environment
macOS Monterey 12.0.1, starlette 0.17.1, Python 3.9.9
### Additional context
_No response_
</issue>
<code>
[start of starlette/middleware/base.py]
1 import typing
2
3 import anyio
4
5 from starlette.requests import Request
6 from starlette.responses import Response, StreamingResponse
7 from starlette.types import ASGIApp, Receive, Scope, Send
8
9 RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
10 DispatchFunction = typing.Callable[
11 [Request, RequestResponseEndpoint], typing.Awaitable[Response]
12 ]
13
14
15 class BaseHTTPMiddleware:
16 def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:
17 self.app = app
18 self.dispatch_func = self.dispatch if dispatch is None else dispatch
19
20 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
21 if scope["type"] != "http":
22 await self.app(scope, receive, send)
23 return
24
25 async def call_next(request: Request) -> Response:
26 app_exc: typing.Optional[Exception] = None
27 send_stream, recv_stream = anyio.create_memory_object_stream()
28
29 async def coro() -> None:
30 nonlocal app_exc
31
32 async with send_stream:
33 try:
34 await self.app(scope, request.receive, send_stream.send)
35 except Exception as exc:
36 app_exc = exc
37
38 task_group.start_soon(coro)
39
40 try:
41 message = await recv_stream.receive()
42 except anyio.EndOfStream:
43 if app_exc is not None:
44 raise app_exc
45 raise RuntimeError("No response returned.")
46
47 assert message["type"] == "http.response.start"
48
49 async def body_stream() -> typing.AsyncGenerator[bytes, None]:
50 async with recv_stream:
51 async for message in recv_stream:
52 assert message["type"] == "http.response.body"
53 yield message.get("body", b"")
54
55 response = StreamingResponse(
56 status_code=message["status"], content=body_stream()
57 )
58 response.raw_headers = message["headers"]
59 return response
60
61 async with anyio.create_task_group() as task_group:
62 request = Request(scope, receive=receive)
63 response = await self.dispatch_func(request, call_next)
64 await response(scope, receive, send)
65 task_group.cancel_scope.cancel()
66
67 async def dispatch(
68 self, request: Request, call_next: RequestResponseEndpoint
69 ) -> Response:
70 raise NotImplementedError() # pragma: no cover
71
[end of starlette/middleware/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py
--- a/starlette/middleware/base.py
+++ b/starlette/middleware/base.py
@@ -52,6 +52,9 @@
assert message["type"] == "http.response.body"
yield message.get("body", b"")
+ if app_exc is not None:
+ raise app_exc
+
response = StreamingResponse(
status_code=message["status"], content=body_stream()
)
| {"golden_diff": "diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py\n--- a/starlette/middleware/base.py\n+++ b/starlette/middleware/base.py\n@@ -52,6 +52,9 @@\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n \n+ if app_exc is not None:\n+ raise app_exc\n+\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n", "issue": "Raising Exceptions in sub-applications routes\n### Checklist\r\n\r\n- [X] The bug is reproducible against the latest release or `master`.\r\n- [X] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\nLet's start with this PR: #1262\r\n\r\nIt's about preventing raise `anyio.ExceptionGroup` in views under a `BaseHTTPMiddleware`. PR resolve that problem with nonlocal variable that stores our exception. But in the case of sub-applications, it does not work. \r\n\r\nAs I can see (fyi I am not good at asyncio), in the case below, we reach and read a response before we raise an exception and store it to our nonlocal variable:\r\n\r\nfragment of `BaseHTTPMiddleware.__call__`\r\n```python\r\nasync def call_next(request: Request) -> Response:\r\n app_exc: typing.Optional[Exception] = None\r\n send_stream, recv_stream = anyio.create_memory_object_stream()\r\n\r\n async def coro() -> None:\r\n nonlocal app_exc\r\n\r\n async with send_stream:\r\n try:\r\n task = await self.app(scope, request.receive, send_stream.send)\r\n except Exception as exc:\r\n app_exc = exc\r\n\r\n task_group.start_soon(coro)\r\n\r\n try:\r\n message = await recv_stream.receive()\r\n except anyio.EndOfStream:\r\n if app_exc is not None:\r\n raise app_exc\r\n raise RuntimeError(\"No response returned.\")\r\n \r\n ...\r\n response = StreamingResponse(\r\n status_code=message[\"status\"], content=body_stream()\r\n )\r\n response.raw_headers = message[\"headers\"]\r\n return response\r\n```\r\n\r\nin this moment: `except anyio.EndOfStream:` exception still no raised.\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nimport httpx\r\nimport pytest\r\nfrom fastapi import FastAPI, APIRouter\r\nfrom starlette.middleware import Middleware\r\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\r\nfrom starlette.requests import Request\r\nfrom starlette.responses import Response\r\nfrom starlette.routing import Route\r\n\r\n\r\nclass SomeError(Exception):\r\n pass\r\n\r\n\r\nclass SomeMiddleware(BaseHTTPMiddleware):\r\n async def dispatch(\r\n self, request: Request, call_next: RequestResponseEndpoint\r\n ) -> Response:\r\n return await call_next(request)\r\n\r\n# Drop (or use not BaseHTTPMiddleware based) middleware and test works fine\r\napp = FastAPI(middleware=[Middleware(SomeMiddleware), ])\r\n\r\n\r\nasync def simple_route(request: Request):\r\n raise SomeError\r\n\r\n\r\nanother_router = APIRouter(\r\n routes=[Route('/simple-route/', simple_route, methods=['GET'])]\r\n)\r\nsub_app = FastAPI()\r\nsub_app.include_router(another_router)\r\napp.router.mount(f'/api', sub_app)\r\n\r\n\r\[email protected]\r\nasync def test_simple_route():\r\n async with httpx.AsyncClient(app=app) as client:\r\n with pytest.raises(SomeError):\r\n await client.get(\"http://testserver/api/simple-route/\")\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nAn exception was raised and caught by pytest exception\r\n\r\n### Actual behavior\r\n\r\nAn exception wasn't raised\r\n\r\n### Debugging material\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\nmacOS Monterey 
12.0.1, starlette 0.17.1, Python 3.9.9\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "import typing\n\nimport anyio\n\nfrom starlette.requests import Request\nfrom starlette.responses import Response, StreamingResponse\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\n\n\nclass BaseHTTPMiddleware:\n def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n async def call_next(request: Request) -> Response:\n app_exc: typing.Optional[Exception] = None\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def coro() -> None:\n nonlocal app_exc\n\n async with send_stream:\n try:\n await self.app(scope, request.receive, send_stream.send)\n except Exception as exc:\n app_exc = exc\n\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n if app_exc is not None:\n raise app_exc\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n async with anyio.create_task_group() as task_group:\n request = Request(scope, receive=receive)\n response = await self.dispatch_func(request, call_next)\n await response(scope, receive, send)\n task_group.cancel_scope.cancel()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n", "path": "starlette/middleware/base.py"}]} | 1,859 | 108 |
gh_patches_debug_5943 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-5320 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Configuration: User-Arguments are mandatory
### Description
See the following Screenshot.

I don't think user args should be mandatory, since the documentation says "additionally specify arguments".
### Steps to reproduce
1. Click on "Configuration"
2. Fill in some Values
3. Click "Save"
4. User-Args are mandatory
In order to save a Configuration, the user currently has to enter an empty JSON object, which is not very user-friendly.
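
One way to relax this on the backend — a sketch of the serializer change I'd expect, against the field shown in `src/paperless/serialisers.py` below:

```python
from rest_framework import serializers

from paperless.models import ApplicationConfiguration


class ApplicationConfigurationSerializer(serializers.ModelSerializer):
    # Allow the field to be omitted or null instead of requiring valid JSON.
    user_args = serializers.JSONField(binary=True, allow_null=True)

    def run_validation(self, data):
        # Treat an empty string submitted by the UI as "no user args".
        if "user_args" in data and data["user_args"] == "":
            data["user_args"] = None
        return super().run_validation(data)

    class Meta:
        model = ApplicationConfiguration
        fields = "__all__"
```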
### Webserver logs
```bash
not applicable
```
### Browser logs
_No response_
### Paperless-ngx version
2.3.2
### Host OS
Synology DSM 6.2.X
### Installation method
Docker - official image
### Browser
Chrome
### Configuration changes
none of interest
### Other
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
</issue>
<code>
[start of src/paperless/serialisers.py]
1 import logging
2
3 from django.contrib.auth.models import Group
4 from django.contrib.auth.models import Permission
5 from django.contrib.auth.models import User
6 from rest_framework import serializers
7
8 from paperless.models import ApplicationConfiguration
9
10 logger = logging.getLogger("paperless.settings")
11
12
13 class ObfuscatedUserPasswordField(serializers.Field):
14 """
15 Sends *** string instead of password in the clear
16 """
17
18 def to_representation(self, value):
19 return "**********" if len(value) > 0 else ""
20
21 def to_internal_value(self, data):
22 return data
23
24
25 class UserSerializer(serializers.ModelSerializer):
26 password = ObfuscatedUserPasswordField(required=False)
27 user_permissions = serializers.SlugRelatedField(
28 many=True,
29 queryset=Permission.objects.all(),
30 slug_field="codename",
31 required=False,
32 )
33 inherited_permissions = serializers.SerializerMethodField()
34
35 class Meta:
36 model = User
37 fields = (
38 "id",
39 "username",
40 "email",
41 "password",
42 "first_name",
43 "last_name",
44 "date_joined",
45 "is_staff",
46 "is_active",
47 "is_superuser",
48 "groups",
49 "user_permissions",
50 "inherited_permissions",
51 )
52
53 def get_inherited_permissions(self, obj):
54 return obj.get_group_permissions()
55
56 def update(self, instance, validated_data):
57 if "password" in validated_data:
58 if len(validated_data.get("password").replace("*", "")) > 0:
59 instance.set_password(validated_data.get("password"))
60 instance.save()
61 validated_data.pop("password")
62 super().update(instance, validated_data)
63 return instance
64
65 def create(self, validated_data):
66 groups = None
67 if "groups" in validated_data:
68 groups = validated_data.pop("groups")
69 user_permissions = None
70 if "user_permissions" in validated_data:
71 user_permissions = validated_data.pop("user_permissions")
72 password = None
73 if (
74 "password" in validated_data
75 and len(validated_data.get("password").replace("*", "")) > 0
76 ):
77 password = validated_data.pop("password")
78 user = User.objects.create(**validated_data)
79 # set groups
80 if groups:
81 user.groups.set(groups)
82 # set permissions
83 if user_permissions:
84 user.user_permissions.set(user_permissions)
85 # set password
86 if password:
87 user.set_password(password)
88 user.save()
89 return user
90
91
92 class GroupSerializer(serializers.ModelSerializer):
93 permissions = serializers.SlugRelatedField(
94 many=True,
95 queryset=Permission.objects.all(),
96 slug_field="codename",
97 )
98
99 class Meta:
100 model = Group
101 fields = (
102 "id",
103 "name",
104 "permissions",
105 )
106
107
108 class ProfileSerializer(serializers.ModelSerializer):
109 email = serializers.EmailField(allow_null=False)
110 password = ObfuscatedUserPasswordField(required=False, allow_null=False)
111 auth_token = serializers.SlugRelatedField(read_only=True, slug_field="key")
112
113 class Meta:
114 model = User
115 fields = (
116 "email",
117 "password",
118 "first_name",
119 "last_name",
120 "auth_token",
121 )
122
123
124 class ApplicationConfigurationSerializer(serializers.ModelSerializer):
125 user_args = serializers.JSONField(binary=True)
126
127 class Meta:
128 model = ApplicationConfiguration
129 fields = "__all__"
130
[end of src/paperless/serialisers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/paperless/serialisers.py b/src/paperless/serialisers.py
--- a/src/paperless/serialisers.py
+++ b/src/paperless/serialisers.py
@@ -122,7 +122,12 @@
class ApplicationConfigurationSerializer(serializers.ModelSerializer):
- user_args = serializers.JSONField(binary=True)
+ user_args = serializers.JSONField(binary=True, allow_null=True)
+
+ def run_validation(self, data):
+ if "user_args" in data and data["user_args"] == "":
+ data["user_args"] = None
+ return super().run_validation(data)
class Meta:
model = ApplicationConfiguration
| {"golden_diff": "diff --git a/src/paperless/serialisers.py b/src/paperless/serialisers.py\n--- a/src/paperless/serialisers.py\n+++ b/src/paperless/serialisers.py\n@@ -122,7 +122,12 @@\n \n \n class ApplicationConfigurationSerializer(serializers.ModelSerializer):\n- user_args = serializers.JSONField(binary=True)\n+ user_args = serializers.JSONField(binary=True, allow_null=True)\n+\n+ def run_validation(self, data):\n+ if \"user_args\" in data and data[\"user_args\"] == \"\":\n+ data[\"user_args\"] = None\n+ return super().run_validation(data)\n \n class Meta:\n model = ApplicationConfiguration\n", "issue": "Configuration: User-Arguments are mandatory\n### Description\r\n\r\nSee the following Screenshot. \r\n\r\n\r\n\r\nI don't think, user args should be mandatory, since the documentation says \"additionally specify arguments\".\r\n\r\n### Steps to reproduce\r\n\r\n1. Click on \"Configuration\"\r\n2. Fill in some Values\r\n3. Click \"Save\"\r\n4. User-Args are mandatory\r\n\r\nIn order to save a Configuration, the user now has to enter an empty json which is not very user-friendly.\r\n\r\n### Webserver logs\r\n\r\n```bash\r\nnot applicable\r\n```\r\n\r\n\r\n### Browser logs\r\n\r\n_No response_\r\n\r\n### Paperless-ngx version\r\n\r\n2.3.2\r\n\r\n### Host OS\r\n\r\nSynology DSM 6.2.X\r\n\r\n### Installation method\r\n\r\nDocker - official image\r\n\r\n### Browser\r\n\r\nChrome\r\n\r\n### Configuration changes\r\n\r\nnone of interest\r\n\r\n### Other\r\n\r\n_No response_\r\n\r\n### Please confirm the following\r\n\r\n- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.\r\n- [X] I have already searched for relevant existing issues and discussions before opening this report.\r\n- [X] I have updated the title field above with a concise description.\n", "before_files": [{"content": "import logging\n\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.auth.models import User\nfrom rest_framework import serializers\n\nfrom paperless.models import ApplicationConfiguration\n\nlogger = logging.getLogger(\"paperless.settings\")\n\n\nclass ObfuscatedUserPasswordField(serializers.Field):\n \"\"\"\n Sends *** string instead of password in the clear\n \"\"\"\n\n def to_representation(self, value):\n return \"**********\" if len(value) > 0 else \"\"\n\n def to_internal_value(self, data):\n return data\n\n\nclass UserSerializer(serializers.ModelSerializer):\n password = ObfuscatedUserPasswordField(required=False)\n user_permissions = serializers.SlugRelatedField(\n many=True,\n queryset=Permission.objects.all(),\n slug_field=\"codename\",\n required=False,\n )\n inherited_permissions = serializers.SerializerMethodField()\n\n class Meta:\n model = User\n fields = (\n \"id\",\n \"username\",\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"date_joined\",\n \"is_staff\",\n \"is_active\",\n \"is_superuser\",\n \"groups\",\n \"user_permissions\",\n \"inherited_permissions\",\n )\n\n def get_inherited_permissions(self, obj):\n return obj.get_group_permissions()\n\n def update(self, instance, validated_data):\n if \"password\" in validated_data:\n if len(validated_data.get(\"password\").replace(\"*\", \"\")) > 0:\n instance.set_password(validated_data.get(\"password\"))\n instance.save()\n validated_data.pop(\"password\")\n super().update(instance, validated_data)\n return instance\n\n def create(self, validated_data):\n groups = None\n if \"groups\" in 
validated_data:\n groups = validated_data.pop(\"groups\")\n user_permissions = None\n if \"user_permissions\" in validated_data:\n user_permissions = validated_data.pop(\"user_permissions\")\n password = None\n if (\n \"password\" in validated_data\n and len(validated_data.get(\"password\").replace(\"*\", \"\")) > 0\n ):\n password = validated_data.pop(\"password\")\n user = User.objects.create(**validated_data)\n # set groups\n if groups:\n user.groups.set(groups)\n # set permissions\n if user_permissions:\n user.user_permissions.set(user_permissions)\n # set password\n if password:\n user.set_password(password)\n user.save()\n return user\n\n\nclass GroupSerializer(serializers.ModelSerializer):\n permissions = serializers.SlugRelatedField(\n many=True,\n queryset=Permission.objects.all(),\n slug_field=\"codename\",\n )\n\n class Meta:\n model = Group\n fields = (\n \"id\",\n \"name\",\n \"permissions\",\n )\n\n\nclass ProfileSerializer(serializers.ModelSerializer):\n email = serializers.EmailField(allow_null=False)\n password = ObfuscatedUserPasswordField(required=False, allow_null=False)\n auth_token = serializers.SlugRelatedField(read_only=True, slug_field=\"key\")\n\n class Meta:\n model = User\n fields = (\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"auth_token\",\n )\n\n\nclass ApplicationConfigurationSerializer(serializers.ModelSerializer):\n user_args = serializers.JSONField(binary=True)\n\n class Meta:\n model = ApplicationConfiguration\n fields = \"__all__\"\n", "path": "src/paperless/serialisers.py"}]} | 1,835 | 151 |
gh_patches_debug_34233 | rasdani/github-patches | git_diff | bokeh__bokeh-10360 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
webdriver failing to find installed firefox/geckodriver
Can't get the export examples running on Binder even though everything is installed and on the `PATH`:
<img width="1130" alt="Screen Shot 2020-06-02 at 8 23 16 PM" src="https://user-images.githubusercontent.com/1078448/83592297-e9459c80-a50e-11ea-90d1-7189fcc93af0.png">
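
For what it's worth, wiring the paths explicitly is one thing to try — a sketch; whether this actually helps on Binder is an assumption on my part:

```python
from shutil import which

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument("--headless")

# Point selenium at the geckodriver binary explicitly instead of
# relying on its own lookup (selenium 3.x executable_path argument).
driver = webdriver.Firefox(
    options=options,
    executable_path=which("geckodriver"),
)
```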
</issue>
<code>
[start of bokeh/io/webdriver.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 '''
8
9 '''
10
11 #-----------------------------------------------------------------------------
12 # Boilerplate
13 #-----------------------------------------------------------------------------
14 import logging # isort:skip
15 log = logging.getLogger(__name__)
16
17 #-----------------------------------------------------------------------------
18 # Imports
19 #-----------------------------------------------------------------------------
20
21 from ..util.dependencies import import_required # isort:skip
22 import_required("selenium.webdriver",
23 "To use bokeh.io image export functions you need selenium "
24 "('conda install selenium' or 'pip install selenium')")
25
26 # Standard library imports
27 import atexit
28 import shutil
29 from os.path import devnull
30 from typing import List, Optional
31
32 # External imports
33 from selenium import webdriver
34 from selenium.webdriver.remote.webdriver import WebDriver
35 from typing_extensions import Literal
36
37 #-----------------------------------------------------------------------------
38 # Globals and constants
39 #-----------------------------------------------------------------------------
40
41 DriverKind = Literal["firefox", "chromium"]
42
43 __all__ = (
44 'webdriver_control',
45 )
46
47 #-----------------------------------------------------------------------------
48 # General API
49 #-----------------------------------------------------------------------------
50
51 #-----------------------------------------------------------------------------
52 # Dev API
53 #-----------------------------------------------------------------------------
54
55 def create_firefox_webdriver() -> WebDriver:
56 from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
57 binary = FirefoxBinary(_detect("firefox"))
58 options = webdriver.firefox.options.Options()
59 options.add_argument("--headless")
60 return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)
61
62 def create_chromium_webdriver() -> WebDriver:
63 options = webdriver.chrome.options.Options()
64 options.add_argument("--headless")
65 options.add_argument("--hide-scrollbars")
66 options.add_argument("--force-device-scale-factor=1")
67 options.add_argument("--force-color-profile=srgb")
68 return webdriver.Chrome(options=options)
69
70 #-----------------------------------------------------------------------------
71 # Private API
72 #-----------------------------------------------------------------------------
73
74 def _detect(executable: str) -> Optional[str]:
75 return shutil.which(executable)
76
77 def _try_create_firefox_webdriver() -> Optional[WebDriver]:
78 try:
79 return create_firefox_webdriver()
80 except Exception:
81 return None
82
83 def _try_create_chromium_webdriver() -> Optional[WebDriver]:
84 try:
85 return create_chromium_webdriver()
86 except Exception:
87 return None
88
89 class _WebdriverState:
90 '''
91
92 '''
93
94 reuse: bool
95 kind: Optional[DriverKind]
96
97 current: Optional[WebDriver]
98 _drivers: List[WebDriver]
99
100 def __init__(self, *, kind: Optional[DriverKind] = None, reuse: bool = True):
101 self.kind = kind
102 self.reuse = reuse
103 self.current = None
104 self._drivers = set()
105
106 def terminate(self, driver: WebDriver) -> None:
107 self._drivers.remove(driver)
108 driver.quit()
109
110 def reset(self) -> None:
111 if self.current is not None:
112 self.terminate(self.current)
113 self.current = None
114
115 def get(self) -> WebDriver:
116 if not self.reuse or self.current is None:
117 self.reset()
118 self.current = self.create()
119 return self.current
120
121 def create(self, kind: Optional[DriverKind] = None) -> WebDriver:
122 driver = self._create(kind)
123 self._drivers.add(driver)
124 return driver
125
126 def _create(self, kind: Optional[DriverKind]) -> WebDriver:
127 driver_kind = kind or self.kind
128
129 if driver_kind is None:
130 driver = _try_create_chromium_webdriver()
131 if driver is not None:
132 self.kind = "chromium"
133 return driver
134
135 driver = _try_create_firefox_webdriver()
136 if driver is not None:
137 self.kind = "firefox"
138 return driver
139
140 raise RuntimeError("Neither firefox and geckodriver nor a variant of chromium browser and " \
141 "chromedriver are available on system PATH. You can install the former " \
142 "with 'conda install -c conda-forge firefox geckodriver'.")
143 elif driver_kind == "chromium":
144 return create_chromium_webdriver()
145 elif driver_kind == "firefox":
146 return create_firefox_webdriver()
147 else:
148 raise ValueError(f"'{driver_kind}' is not a recognized webdriver kind")
149
150 def cleanup(self) -> None:
151 self.reset()
152 for driver in list(self._drivers):
153 self.terminate(driver)
154
155 #-----------------------------------------------------------------------------
156 # Code
157 #-----------------------------------------------------------------------------
158
159 webdriver_control = _WebdriverState()
160
161 atexit.register(lambda: webdriver_control.cleanup())
162
[end of bokeh/io/webdriver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py
--- a/bokeh/io/webdriver.py
+++ b/bokeh/io/webdriver.py
@@ -25,8 +25,9 @@
# Standard library imports
import atexit
-import shutil
-from os.path import devnull
+import os
+from os.path import devnull, dirname, isfile, join
+from shutil import which
from typing import List, Optional
# External imports
@@ -53,11 +54,38 @@
#-----------------------------------------------------------------------------
def create_firefox_webdriver() -> WebDriver:
+ firefox = which("firefox")
+ if firefox is None:
+ raise RuntimeError("firefox is not installed or not present on PATH")
+
+ geckodriver = which("geckodriver")
+ if geckodriver is None:
+ raise RuntimeError("geckodriver is not installed or not present on PATH")
+
+ firefox_paths = [
+ join(dirname(firefox), "FirefoxApp", "firefox"),
+ join(dirname(firefox), "FirefoxApp", "Contents", "MacOS", "firefox"),
+ ]
+
+ for firefox_path in firefox_paths:
+ if _is_executable(firefox_path):
+ binary_path = firefox_path
+ break
+ else:
+ binary_path = firefox
+
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
- binary = FirefoxBinary(_detect("firefox"))
+ binary = FirefoxBinary(binary_path)
+
options = webdriver.firefox.options.Options()
options.add_argument("--headless")
- return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)
+
+ return webdriver.Firefox(
+ options=options,
+ firefox_binary=binary,
+ executable_path=geckodriver,
+ service_log_path=devnull,
+ )
def create_chromium_webdriver() -> WebDriver:
options = webdriver.chrome.options.Options()
@@ -71,8 +99,8 @@
# Private API
#-----------------------------------------------------------------------------
-def _detect(executable: str) -> Optional[str]:
- return shutil.which(executable)
+def _is_executable(path: str) -> bool:
+ return isfile(path) and os.access(path, os.X_OK)
def _try_create_firefox_webdriver() -> Optional[WebDriver]:
try:
| {"golden_diff": "diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py\n--- a/bokeh/io/webdriver.py\n+++ b/bokeh/io/webdriver.py\n@@ -25,8 +25,9 @@\n \n # Standard library imports\n import atexit\n-import shutil\n-from os.path import devnull\n+import os\n+from os.path import devnull, dirname, isfile, join\n+from shutil import which\n from typing import List, Optional\n \n # External imports\n@@ -53,11 +54,38 @@\n #-----------------------------------------------------------------------------\n \n def create_firefox_webdriver() -> WebDriver:\n+ firefox = which(\"firefox\")\n+ if firefox is None:\n+ raise RuntimeError(\"firefox is not installed or not present on PATH\")\n+\n+ geckodriver = which(\"geckodriver\")\n+ if geckodriver is None:\n+ raise RuntimeError(\"geckodriver is not installed or not present on PATH\")\n+\n+ firefox_paths = [\n+ join(dirname(firefox), \"FirefoxApp\", \"firefox\"),\n+ join(dirname(firefox), \"FirefoxApp\", \"Contents\", \"MacOS\", \"firefox\"),\n+ ]\n+\n+ for firefox_path in firefox_paths:\n+ if _is_executable(firefox_path):\n+ binary_path = firefox_path\n+ break\n+ else:\n+ binary_path = firefox\n+\n from selenium.webdriver.firefox.firefox_binary import FirefoxBinary\n- binary = FirefoxBinary(_detect(\"firefox\"))\n+ binary = FirefoxBinary(binary_path)\n+\n options = webdriver.firefox.options.Options()\n options.add_argument(\"--headless\")\n- return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)\n+\n+ return webdriver.Firefox(\n+ options=options,\n+ firefox_binary=binary,\n+ executable_path=geckodriver,\n+ service_log_path=devnull,\n+ )\n \n def create_chromium_webdriver() -> WebDriver:\n options = webdriver.chrome.options.Options()\n@@ -71,8 +99,8 @@\n # Private API\n #-----------------------------------------------------------------------------\n \n-def _detect(executable: str) -> Optional[str]:\n- return shutil.which(executable)\n+def _is_executable(path: str) -> bool:\n+ return isfile(path) and os.access(path, os.X_OK)\n \n def _try_create_firefox_webdriver() -> Optional[WebDriver]:\n try:\n", "issue": "webdriver failing to find installed firefox/geckodriver\nCan't get export examples running on binder even though everything is installed and on the path:\r\n\r\n<img width=\"1130\" alt=\"Screen Shot 2020-06-02 at 8 23 16 PM\" src=\"https://user-images.githubusercontent.com/1078448/83592297-e9459c80-a50e-11ea-90d1-7189fcc93af0.png\">\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n'''\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom ..util.dependencies import import_required # isort:skip\nimport_required(\"selenium.webdriver\",\n \"To use bokeh.io image export functions you need selenium \"\n \"('conda install selenium' or 'pip install selenium')\")\n\n# Standard library imports\nimport atexit\nimport shutil\nfrom 
os.path import devnull\nfrom typing import List, Optional\n\n# External imports\nfrom selenium import webdriver\nfrom selenium.webdriver.remote.webdriver import WebDriver\nfrom typing_extensions import Literal\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\nDriverKind = Literal[\"firefox\", \"chromium\"]\n\n__all__ = (\n 'webdriver_control',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\ndef create_firefox_webdriver() -> WebDriver:\n from selenium.webdriver.firefox.firefox_binary import FirefoxBinary\n binary = FirefoxBinary(_detect(\"firefox\"))\n options = webdriver.firefox.options.Options()\n options.add_argument(\"--headless\")\n return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)\n\ndef create_chromium_webdriver() -> WebDriver:\n options = webdriver.chrome.options.Options()\n options.add_argument(\"--headless\")\n options.add_argument(\"--hide-scrollbars\")\n options.add_argument(\"--force-device-scale-factor=1\")\n options.add_argument(\"--force-color-profile=srgb\")\n return webdriver.Chrome(options=options)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\ndef _detect(executable: str) -> Optional[str]:\n return shutil.which(executable)\n\ndef _try_create_firefox_webdriver() -> Optional[WebDriver]:\n try:\n return create_firefox_webdriver()\n except Exception:\n return None\n\ndef _try_create_chromium_webdriver() -> Optional[WebDriver]:\n try:\n return create_chromium_webdriver()\n except Exception:\n return None\n\nclass _WebdriverState:\n '''\n\n '''\n\n reuse: bool\n kind: Optional[DriverKind]\n\n current: Optional[WebDriver]\n _drivers: List[WebDriver]\n\n def __init__(self, *, kind: Optional[DriverKind] = None, reuse: bool = True):\n self.kind = kind\n self.reuse = reuse\n self.current = None\n self._drivers = set()\n\n def terminate(self, driver: WebDriver) -> None:\n self._drivers.remove(driver)\n driver.quit()\n\n def reset(self) -> None:\n if self.current is not None:\n self.terminate(self.current)\n self.current = None\n\n def get(self) -> WebDriver:\n if not self.reuse or self.current is None:\n self.reset()\n self.current = self.create()\n return self.current\n\n def create(self, kind: Optional[DriverKind] = None) -> WebDriver:\n driver = self._create(kind)\n self._drivers.add(driver)\n return driver\n\n def _create(self, kind: Optional[DriverKind]) -> WebDriver:\n driver_kind = kind or self.kind\n\n if driver_kind is None:\n driver = _try_create_chromium_webdriver()\n if driver is not None:\n self.kind = \"chromium\"\n return driver\n\n driver = _try_create_firefox_webdriver()\n if driver is not None:\n self.kind = \"firefox\"\n return driver\n\n raise RuntimeError(\"Neither firefox and geckodriver nor a variant of chromium browser and \" \\\n \"chromedriver are available on system PATH. 
You can install the former \" \\\n \"with 'conda install -c conda-forge firefox geckodriver'.\")\n elif driver_kind == \"chromium\":\n return create_chromium_webdriver()\n elif driver_kind == \"firefox\":\n return create_firefox_webdriver()\n else:\n raise ValueError(f\"'{driver_kind}' is not a recognized webdriver kind\")\n\n def cleanup(self) -> None:\n self.reset()\n for driver in list(self._drivers):\n self.terminate(driver)\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\nwebdriver_control = _WebdriverState()\n\natexit.register(lambda: webdriver_control.cleanup())\n", "path": "bokeh/io/webdriver.py"}]} | 2,011 | 531 |
gh_patches_debug_38225 | rasdani/github-patches | git_diff | medtagger__MedTagger-506 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check for sent Label in E2E Tests
## Current Behavior
There are no checks on Labels sent to the REST API.
## Expected Behavior
E2E tests should also check whether the Label was properly created on the backend side.
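
Something along these lines, for example — the endpoint, fixture names and payload fields here are assumptions for illustration, not MedTagger's actual API:

```python
def test_sending_label_creates_it_on_backend(api_client, label_payload):
    # Send the Label as the E2E test already does today.
    response = api_client.post('/api/v1/labels', json=label_payload)
    assert response.status_code == 201

    # New part: fetch it back and verify it was actually persisted.
    label_id = response.json()['label_id']
    stored = api_client.get(f'/api/v1/labels/{label_id}')
    assert stored.status_code == 200
    assert stored.json()['elements'] == label_payload['elements']
```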
</issue>
<code>
[start of backend/medtagger/config.py]
1 """Module responsible for reading data from application configuration."""
2 import os
3 from typing import Any
4
5
6 class AppConfiguration:
7 """Class that represents application configuration."""
8
9 def __init__(self) -> None:
10 """Initialize application configuration."""
11 pass
12
13 @staticmethod
14 def get(namespace: str, key: str, fallback: Any = None) -> Any:
15 """Return value of a given configuration entry.
16
17 :param namespace: name of a namespace for given entry
18 :param key: key for which it should return value from given namespace
19 :param fallback: default value returned if key was not found
20 :return: value for given entry
21 """
22 variable_name = 'MEDTAGGER__' + namespace.upper() + '_' + key.upper()
23 return os.environ.get(variable_name, fallback)
24
25 @staticmethod
26 def getint(namespace: str, key: str, fallback: int = 0) -> int:
27 """Return integer value for given key in namespace."""
28 return int(AppConfiguration.get(namespace, key, fallback))
29
30 @staticmethod
31 def getboolean(namespace: str, key: str, fallback: bool = False) -> bool:
32 """Return boolean value for given key in namespace."""
33 return bool(AppConfiguration.getint(namespace, key, fallback))
34
[end of backend/medtagger/config.py]
[start of backend/medtagger/api/exceptions.py]
1 """Exceptions used across whole API."""
2 from medtagger.exceptions import MedTaggerException
3
4
5 class BaseHTTPException(MedTaggerException):
6 """Base class for all HTTP Exceptions."""
7
8 pass
9
10
11 class UnauthorizedException(BaseHTTPException):
12 """Exception designed to use once there was an authorization error during business logic processing."""
13
14 pass
15
16
17 class NotFoundException(BaseHTTPException):
18 """Exception designed to use while the object that user was looking for could not be found."""
19
20 pass
21
22
23 class InvalidArgumentsException(BaseHTTPException):
24 """Exception designed to use with invalid arguments (400 status code)."""
25
26 pass
27
28
29 class AccessForbiddenException(BaseHTTPException):
30 """Exception designed to use while the user does not have a privilege to perform action."""
31
32 pass
33
[end of backend/medtagger/api/exceptions.py]
[start of backend/medtagger/exceptions.py]
1 """All available Exceptions for whole project."""
2
3
4 class MedTaggerException(Exception):
5 """Base class for all HTTP Exceptions."""
6
7 pass
8
9
10 class UnsupportedActionException(MedTaggerException):
11 """Exception for unsupported Action."""
12
13 pass
14
15
16 class InvalidResponseException(MedTaggerException):
17 """Exception for invalid Response."""
18
19 pass
20
21
22 class InternalErrorException(MedTaggerException):
23 """Exception designed to use to indicate internal errors (like DB/Storage error)."""
24
25 pass
26
[end of backend/medtagger/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/medtagger/api/exceptions.py b/backend/medtagger/api/exceptions.py
--- a/backend/medtagger/api/exceptions.py
+++ b/backend/medtagger/api/exceptions.py
@@ -5,28 +5,28 @@
class BaseHTTPException(MedTaggerException):
"""Base class for all HTTP Exceptions."""
- pass
+ pass # pylint: disable=unnecessary-pass
class UnauthorizedException(BaseHTTPException):
"""Exception designed to use once there was an authorization error during business logic processing."""
- pass
+ pass # pylint: disable=unnecessary-pass
class NotFoundException(BaseHTTPException):
"""Exception designed to use while the object that user was looking for could not be found."""
- pass
+ pass # pylint: disable=unnecessary-pass
class InvalidArgumentsException(BaseHTTPException):
"""Exception designed to use with invalid arguments (400 status code)."""
- pass
+ pass # pylint: disable=unnecessary-pass
class AccessForbiddenException(BaseHTTPException):
"""Exception designed to use while the user does not have a privilege to perform action."""
- pass
+ pass # pylint: disable=unnecessary-pass
diff --git a/backend/medtagger/config.py b/backend/medtagger/config.py
--- a/backend/medtagger/config.py
+++ b/backend/medtagger/config.py
@@ -6,10 +6,6 @@
class AppConfiguration:
"""Class that represents application configuration."""
- def __init__(self) -> None:
- """Initialize application configuration."""
- pass
-
@staticmethod
def get(namespace: str, key: str, fallback: Any = None) -> Any:
"""Return value of a given configuration entry.
diff --git a/backend/medtagger/exceptions.py b/backend/medtagger/exceptions.py
--- a/backend/medtagger/exceptions.py
+++ b/backend/medtagger/exceptions.py
@@ -4,22 +4,22 @@
class MedTaggerException(Exception):
"""Base class for all HTTP Exceptions."""
- pass
+ pass # pylint: disable=unnecessary-pass
class UnsupportedActionException(MedTaggerException):
"""Exception for unsupported Action."""
- pass
+ pass # pylint: disable=unnecessary-pass
class InvalidResponseException(MedTaggerException):
"""Exception for invalid Response."""
- pass
+ pass # pylint: disable=unnecessary-pass
class InternalErrorException(MedTaggerException):
"""Exception designed to use to indicate internal errors (like DB/Storage error)."""
- pass
+ pass # pylint: disable=unnecessary-pass
| {"golden_diff": "diff --git a/backend/medtagger/api/exceptions.py b/backend/medtagger/api/exceptions.py\n--- a/backend/medtagger/api/exceptions.py\n+++ b/backend/medtagger/api/exceptions.py\n@@ -5,28 +5,28 @@\n class BaseHTTPException(MedTaggerException):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class UnauthorizedException(BaseHTTPException):\n \"\"\"Exception designed to use once there was an authorization error during business logic processing.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class NotFoundException(BaseHTTPException):\n \"\"\"Exception designed to use while the object that user was looking for could not be found.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class InvalidArgumentsException(BaseHTTPException):\n \"\"\"Exception designed to use with invalid arguments (400 status code).\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class AccessForbiddenException(BaseHTTPException):\n \"\"\"Exception designed to use while the user does not have a privilege to perform action.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\ndiff --git a/backend/medtagger/config.py b/backend/medtagger/config.py\n--- a/backend/medtagger/config.py\n+++ b/backend/medtagger/config.py\n@@ -6,10 +6,6 @@\n class AppConfiguration:\n \"\"\"Class that represents application configuration.\"\"\"\n \n- def __init__(self) -> None:\n- \"\"\"Initialize application configuration.\"\"\"\n- pass\n-\n @staticmethod\n def get(namespace: str, key: str, fallback: Any = None) -> Any:\n \"\"\"Return value of a given configuration entry.\ndiff --git a/backend/medtagger/exceptions.py b/backend/medtagger/exceptions.py\n--- a/backend/medtagger/exceptions.py\n+++ b/backend/medtagger/exceptions.py\n@@ -4,22 +4,22 @@\n class MedTaggerException(Exception):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class UnsupportedActionException(MedTaggerException):\n \"\"\"Exception for unsupported Action.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class InvalidResponseException(MedTaggerException):\n \"\"\"Exception for invalid Response.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class InternalErrorException(MedTaggerException):\n \"\"\"Exception designed to use to indicate internal errors (like DB/Storage error).\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n", "issue": "Check for sent Label in E2E Tests\n## Current Behavior\r\n\r\nThere are no checks on Labels sent to REST API.\r\n\r\n## Expected Behavior\r\n\r\nE2E Tests should also check if Label on the backend side was properly created.\n", "before_files": [{"content": "\"\"\"Module responsible for reading data from application configuration.\"\"\"\nimport os\nfrom typing import Any\n\n\nclass AppConfiguration:\n \"\"\"Class that represents application configuration.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Initialize application configuration.\"\"\"\n pass\n\n @staticmethod\n def get(namespace: str, key: str, fallback: Any = None) -> Any:\n \"\"\"Return value of a given configuration entry.\n\n :param namespace: name of a namespace for given entry\n :param key: key for which it should return value from given namespace\n :param fallback: default value returned if key was not found\n :return: value for given entry\n \"\"\"\n variable_name = 'MEDTAGGER__' + namespace.upper() + '_' + key.upper()\n 
return os.environ.get(variable_name, fallback)\n\n @staticmethod\n def getint(namespace: str, key: str, fallback: int = 0) -> int:\n \"\"\"Return integer value for given key in namespace.\"\"\"\n return int(AppConfiguration.get(namespace, key, fallback))\n\n @staticmethod\n def getboolean(namespace: str, key: str, fallback: bool = False) -> bool:\n \"\"\"Return boolean value for given key in namespace.\"\"\"\n return bool(AppConfiguration.getint(namespace, key, fallback))\n", "path": "backend/medtagger/config.py"}, {"content": "\"\"\"Exceptions used across whole API.\"\"\"\nfrom medtagger.exceptions import MedTaggerException\n\n\nclass BaseHTTPException(MedTaggerException):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n\n pass\n\n\nclass UnauthorizedException(BaseHTTPException):\n \"\"\"Exception designed to use once there was an authorization error during business logic processing.\"\"\"\n\n pass\n\n\nclass NotFoundException(BaseHTTPException):\n \"\"\"Exception designed to use while the object that user was looking for could not be found.\"\"\"\n\n pass\n\n\nclass InvalidArgumentsException(BaseHTTPException):\n \"\"\"Exception designed to use with invalid arguments (400 status code).\"\"\"\n\n pass\n\n\nclass AccessForbiddenException(BaseHTTPException):\n \"\"\"Exception designed to use while the user does not have a privilege to perform action.\"\"\"\n\n pass\n", "path": "backend/medtagger/api/exceptions.py"}, {"content": "\"\"\"All available Exceptions for whole project.\"\"\"\n\n\nclass MedTaggerException(Exception):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n\n pass\n\n\nclass UnsupportedActionException(MedTaggerException):\n \"\"\"Exception for unsupported Action.\"\"\"\n\n pass\n\n\nclass InvalidResponseException(MedTaggerException):\n \"\"\"Exception for invalid Response.\"\"\"\n\n pass\n\n\nclass InternalErrorException(MedTaggerException):\n \"\"\"Exception designed to use to indicate internal errors (like DB/Storage error).\"\"\"\n\n pass\n", "path": "backend/medtagger/exceptions.py"}]} | 1,341 | 603 |
gh_patches_debug_61378 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1288 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
inplace operation in pairwise_cosine_similarity
## 🐛 Bug
Hello!
The `x` and `y` values are modified in place by the `pairwise_cosine_similarity` function.
This is not documented and may cause bugs that are difficult to track down.
Thank you.
### To Reproduce
```python
import torch
from torchmetrics.functional import pairwise_cosine_similarity
x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)
print("Result:", pairwise_cosine_similarity(x, y))
print("X:", x)
print("Y:", y)
"""Out[0]
Result: tensor([[0.5547, 0.8682],
[0.5145, 0.8437],
[0.5300, 0.8533]])
X: tensor([[0.5547, 0.8321],
[0.5145, 0.8575],
[0.5300, 0.8480]])
Y: tensor([[1.0000, 0.0000],
[0.8944, 0.4472]])
"""
```
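
Until this is fixed, one workaround is to pass clones so the caller's tensors stay intact — a minimal sketch using only standard PyTorch (`Tensor.clone`); nothing here relies on torchmetrics internals:
```python
import torch
from torchmetrics.functional import pairwise_cosine_similarity

x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)

# Pass clones so the in-place normalization inside the metric mutates
# the copies rather than the caller's tensors.
result = pairwise_cosine_similarity(x.clone(), y.clone())
print(x)  # unchanged: tensor([[2., 3.], [3., 5.], [5., 8.]])
```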
### Environment
torchmetrics==0.10.0
</issue>
<code>
[start of src/torchmetrics/functional/pairwise/cosine.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Optional
15
16 import torch
17 from torch import Tensor
18 from typing_extensions import Literal
19
20 from torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix
21 from torchmetrics.utilities.compute import _safe_matmul
22
23
24 def _pairwise_cosine_similarity_update(
25 x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None
26 ) -> Tensor:
27 """Calculates the pairwise cosine similarity matrix.
28
29 Args:
30 x: tensor of shape ``[N,d]``
31 y: tensor of shape ``[M,d]``
32 zero_diagonal: determines if the diagonal of the distance matrix should be set to zero
33 """
34 x, y, zero_diagonal = _check_input(x, y, zero_diagonal)
35
36 norm = torch.norm(x, p=2, dim=1)
37 x /= norm.unsqueeze(1)
38 norm = torch.norm(y, p=2, dim=1)
39 y /= norm.unsqueeze(1)
40
41 distance = _safe_matmul(x, y)
42 if zero_diagonal:
43 distance.fill_diagonal_(0)
44 return distance
45
46
47 def pairwise_cosine_similarity(
48 x: Tensor,
49 y: Optional[Tensor] = None,
50 reduction: Literal["mean", "sum", "none", None] = None,
51 zero_diagonal: Optional[bool] = None,
52 ) -> Tensor:
53 r"""Calculates pairwise cosine similarity:
54
55 .. math::
56 s_{cos}(x,y) = \frac{<x,y>}{||x|| \cdot ||y||}
57 = \frac{\sum_{d=1}^D x_d \cdot y_d }{\sqrt{\sum_{d=1}^D x_i^2} \cdot \sqrt{\sum_{d=1}^D y_i^2}}
58
59 If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise
60 between the rows of :math:`x` and :math:`y`.
61 If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.
62
63 Args:
64 x: Tensor with shape ``[N, d]``
65 y: Tensor with shape ``[M, d]``, optional
66 reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`
67 (applied along column dimension) or `'none'`, `None` for no reduction
68 zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only :math:`x` is given
69 this defaults to ``True`` else if :math:`y` is also given it defaults to ``False``
70
71 Returns:
72 A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix
73
74 Example:
75 >>> import torch
76 >>> from torchmetrics.functional import pairwise_cosine_similarity
77 >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
78 >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)
79 >>> pairwise_cosine_similarity(x, y)
80 tensor([[0.5547, 0.8682],
81 [0.5145, 0.8437],
82 [0.5300, 0.8533]])
83 >>> pairwise_cosine_similarity(x)
84 tensor([[0.0000, 0.9989, 0.9996],
85 [0.9989, 0.0000, 0.9998],
86 [0.9996, 0.9998, 0.0000]])
87 """
88 distance = _pairwise_cosine_similarity_update(x, y, zero_diagonal)
89 return _reduce_distance_matrix(distance, reduction)
90
[end of src/torchmetrics/functional/pairwise/cosine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/functional/pairwise/cosine.py b/src/torchmetrics/functional/pairwise/cosine.py
--- a/src/torchmetrics/functional/pairwise/cosine.py
+++ b/src/torchmetrics/functional/pairwise/cosine.py
@@ -34,9 +34,9 @@
x, y, zero_diagonal = _check_input(x, y, zero_diagonal)
norm = torch.norm(x, p=2, dim=1)
- x /= norm.unsqueeze(1)
+ x = x / norm.unsqueeze(1)
norm = torch.norm(y, p=2, dim=1)
- y /= norm.unsqueeze(1)
+ y = y / norm.unsqueeze(1)
distance = _safe_matmul(x, y)
if zero_diagonal:
| {"golden_diff": "diff --git a/src/torchmetrics/functional/pairwise/cosine.py b/src/torchmetrics/functional/pairwise/cosine.py\n--- a/src/torchmetrics/functional/pairwise/cosine.py\n+++ b/src/torchmetrics/functional/pairwise/cosine.py\n@@ -34,9 +34,9 @@\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n \n norm = torch.norm(x, p=2, dim=1)\n- x /= norm.unsqueeze(1)\n+ x = x / norm.unsqueeze(1)\n norm = torch.norm(y, p=2, dim=1)\n- y /= norm.unsqueeze(1)\n+ y = y / norm.unsqueeze(1)\n \n distance = _safe_matmul(x, y)\n if zero_diagonal:\n", "issue": "inplace operation in pairwise_cosine_similarity\n## \ud83d\udc1b Bug\r\nHello !\r\nThe x, y values are modified inplace in the `pairwise_cosine_similarity` function. \r\nThis is not documented and may cause bugs that are difficult to find. \r\nThank you.\r\n\r\n### To Reproduce\r\n\r\n```python\r\nimport torch\r\nfrom torchmetrics.functional import pairwise_cosine_similarity\r\nx = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)\r\ny = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)\r\nprint(\"Result:\", pairwise_cosine_similarity(x, y))\r\nprint(\"X:\", x)\r\nprint(\"Y:\", y)\r\n\"\"\"Out[0]\r\nResult: tensor([[0.5547, 0.8682],\r\n [0.5145, 0.8437],\r\n [0.5300, 0.8533]])\r\nX: tensor([[0.5547, 0.8321],\r\n [0.5145, 0.8575],\r\n [0.5300, 0.8480]])\r\nY: tensor([[1.0000, 0.0000],\r\n [0.8944, 0.4472]])\r\n\"\"\"\r\n```\r\n\r\n### Environment\r\ntorchmetrics==0.10.0\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\nfrom typing_extensions import Literal\n\nfrom torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix\nfrom torchmetrics.utilities.compute import _safe_matmul\n\n\ndef _pairwise_cosine_similarity_update(\n x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None\n) -> Tensor:\n \"\"\"Calculates the pairwise cosine similarity matrix.\n\n Args:\n x: tensor of shape ``[N,d]``\n y: tensor of shape ``[M,d]``\n zero_diagonal: determines if the diagonal of the distance matrix should be set to zero\n \"\"\"\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n\n norm = torch.norm(x, p=2, dim=1)\n x /= norm.unsqueeze(1)\n norm = torch.norm(y, p=2, dim=1)\n y /= norm.unsqueeze(1)\n\n distance = _safe_matmul(x, y)\n if zero_diagonal:\n distance.fill_diagonal_(0)\n return distance\n\n\ndef pairwise_cosine_similarity(\n x: Tensor,\n y: Optional[Tensor] = None,\n reduction: Literal[\"mean\", \"sum\", \"none\", None] = None,\n zero_diagonal: Optional[bool] = None,\n) -> Tensor:\n r\"\"\"Calculates pairwise cosine similarity:\n\n .. 
math::\n s_{cos}(x,y) = \\frac{<x,y>}{||x|| \\cdot ||y||}\n = \\frac{\\sum_{d=1}^D x_d \\cdot y_d }{\\sqrt{\\sum_{d=1}^D x_i^2} \\cdot \\sqrt{\\sum_{d=1}^D y_i^2}}\n\n If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise\n between the rows of :math:`x` and :math:`y`.\n If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.\n\n Args:\n x: Tensor with shape ``[N, d]``\n y: Tensor with shape ``[M, d]``, optional\n reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`\n (applied along column dimension) or `'none'`, `None` for no reduction\n zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only :math:`x` is given\n this defaults to ``True`` else if :math:`y` is also given it defaults to ``False``\n\n Returns:\n A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix\n\n Example:\n >>> import torch\n >>> from torchmetrics.functional import pairwise_cosine_similarity\n >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)\n >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)\n >>> pairwise_cosine_similarity(x, y)\n tensor([[0.5547, 0.8682],\n [0.5145, 0.8437],\n [0.5300, 0.8533]])\n >>> pairwise_cosine_similarity(x)\n tensor([[0.0000, 0.9989, 0.9996],\n [0.9989, 0.0000, 0.9998],\n [0.9996, 0.9998, 0.0000]])\n \"\"\"\n distance = _pairwise_cosine_similarity_update(x, y, zero_diagonal)\n return _reduce_distance_matrix(distance, reduction)\n", "path": "src/torchmetrics/functional/pairwise/cosine.py"}]} | 2,045 | 186 |
gh_patches_debug_1038 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-341 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Individually run API tests don't build tables database
## Description
Running an individual test in `mathesar` that doesn't use the `engine` or `test_db` fixture will not have the tables database built for it. As a result, many tests will error when trying to access the tables database.
## Expected behavior
The tables database should always be built.
## To Reproduce
Run any test in `mathesar` that doesn't use `engine` or `test_db`. Ex:
```
docker exec mathesar_web_1 pytest mathesar/tests/views/api/test_schema_api.py::test_schema_update
```
## Additional context
Introduced by the changes in #329, since `pytest-django` no longer creates the tables database for us.
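
A likely direction for the fix — a sketch of standard pytest behavior, not necessarily the final patch — is to mark the session fixture `autouse` so it runs for every collected test, whether or not the test requests it:
```python
import pytest

@pytest.fixture(scope="session", autouse=True)
def test_db():
    """Build the tables database once per session, for every test."""
    _create_tables_db()      # hypothetical setup helper (CREATE DATABASE ...)
    yield "mathesar_db_test"
    _drop_tables_db()        # hypothetical teardown helper (DROP DATABASE ...)
```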
</issue>
<code>
[start of conftest.py]
1 """
2 This file should provide utilities for setting up test DBs and the like. It's
3 intended to be the containment zone for anything specific about the testing
4 environment (e.g., the login info for the Postgres instance for testing)
5 """
6 import pytest
7 from sqlalchemy import create_engine, text
8 from config.settings import DATABASES
9
10 TEST_DB = "mathesar_db_test"
11
12
13 @pytest.fixture(scope="session")
14 def test_db_name():
15 return TEST_DB
16
17
18 @pytest.fixture(scope="session")
19 def test_db():
20 superuser_engine = _get_superuser_engine()
21 with superuser_engine.connect() as conn:
22 conn.execution_options(isolation_level="AUTOCOMMIT")
23 conn.execute(text(f"DROP DATABASE IF EXISTS {TEST_DB} WITH (FORCE)"))
24 conn.execute(text(f"CREATE DATABASE {TEST_DB}"))
25 yield TEST_DB
26 with superuser_engine.connect() as conn:
27 conn.execution_options(isolation_level="AUTOCOMMIT")
28 conn.execute(text(f"DROP DATABASE {TEST_DB} WITH (FORCE)"))
29
30
31 @pytest.fixture(scope="session")
32 def engine(test_db):
33 return create_engine(
34 _get_connection_string(
35 DATABASES["default"]["USER"],
36 DATABASES["default"]["PASSWORD"],
37 DATABASES["default"]["HOST"],
38 test_db,
39 ),
40 future=True,
41 )
42
43
44 def _get_superuser_engine():
45 return create_engine(
46 _get_connection_string(
47 username=DATABASES["default"]["USER"],
48 password=DATABASES["default"]["PASSWORD"],
49 hostname=DATABASES["default"]["HOST"],
50 database=DATABASES["default"]["NAME"],
51 ),
52 future=True,
53 )
54
55
56 def _get_connection_string(username, password, hostname, database):
57 return f"postgresql://{username}:{password}@{hostname}/{database}"
58
[end of conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -15,7 +15,7 @@
return TEST_DB
[email protected](scope="session")
[email protected](scope="session", autouse=True)
def test_db():
superuser_engine = _get_superuser_engine()
with superuser_engine.connect() as conn:
| {"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -15,7 +15,7 @@\n return TEST_DB\n \n \[email protected](scope=\"session\")\[email protected](scope=\"session\", autouse=True)\n def test_db():\n superuser_engine = _get_superuser_engine()\n with superuser_engine.connect() as conn:\n", "issue": "Individually run API tests don't build tables database\n## Description\r\nRunning a individual test in `mathesar` that doesn't use the `engine` or `test_db` fixture will not have the tables databases built for the test. As a result, many will error when trying to access the tables database.\r\n\r\n## Expected behavior\r\nThe tables database should always be built.\r\n\r\n## To Reproduce\r\nRun any test in `mathesar` that doesn't use `engine` or `test_db`. Ex:\r\n```\r\ndocker exec mathesar_web_1 pytest mathesar/tests/views/api/test_schema_api.py::test_schema_update\r\n```\r\n\r\n## Additional context\r\nIntroduced due to the changes in #329, since `pytest-django` no longer creates the tables db for us.\r\n\n", "before_files": [{"content": "\"\"\"\nThis file should provide utilities for setting up test DBs and the like. It's\nintended to be the containment zone for anything specific about the testing\nenvironment (e.g., the login info for the Postgres instance for testing)\n\"\"\"\nimport pytest\nfrom sqlalchemy import create_engine, text\nfrom config.settings import DATABASES\n\nTEST_DB = \"mathesar_db_test\"\n\n\[email protected](scope=\"session\")\ndef test_db_name():\n return TEST_DB\n\n\[email protected](scope=\"session\")\ndef test_db():\n superuser_engine = _get_superuser_engine()\n with superuser_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE IF EXISTS {TEST_DB} WITH (FORCE)\"))\n conn.execute(text(f\"CREATE DATABASE {TEST_DB}\"))\n yield TEST_DB\n with superuser_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE {TEST_DB} WITH (FORCE)\"))\n\n\[email protected](scope=\"session\")\ndef engine(test_db):\n return create_engine(\n _get_connection_string(\n DATABASES[\"default\"][\"USER\"],\n DATABASES[\"default\"][\"PASSWORD\"],\n DATABASES[\"default\"][\"HOST\"],\n test_db,\n ),\n future=True,\n )\n\n\ndef _get_superuser_engine():\n return create_engine(\n _get_connection_string(\n username=DATABASES[\"default\"][\"USER\"],\n password=DATABASES[\"default\"][\"PASSWORD\"],\n hostname=DATABASES[\"default\"][\"HOST\"],\n database=DATABASES[\"default\"][\"NAME\"],\n ),\n future=True,\n )\n\n\ndef _get_connection_string(username, password, hostname, database):\n return f\"postgresql://{username}:{password}@{hostname}/{database}\"\n", "path": "conftest.py"}]} | 1,180 | 91 |
gh_patches_debug_23746 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-3114 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tcp_message script not working
Hi,
I tried to run the TCP message replacement script from the documentation, but it doesn't seem to work. I don't know whether this is an issue with the documented script or with mitmproxy.
The script was unchanged.
##### Steps to reproduce the problem:
1. mitmdump --mode transparent --tcp-host ".*" -k -s examples/complex/tcp_message.py

```
Loading script: examples/tcp_message.py
Proxy server listening at http://*:8080
192.168.1.241:37604: clientconnect
::ffff:192.168.1.241:37604: Certificate verification error for None: hostname 'no-hostname' doesn't match either of '*.local.org', 'local.org'
::ffff:192.168.1.241:37604: Ignoring server verification error, continuing with connection
Addon error: Traceback (most recent call last):
  File "examples/tcp_message.py", line 16, in tcp_message
    modified_msg = tcp_msg.message.replace("foo", "bar")
AttributeError: 'TCPFlow' object has no attribute 'message'
192.168.1.241:37604 -> tcp -> 10.0.0.2:5443
Addon error: Traceback (most recent call last):
  File "examples/tcp_message.py", line 16, in tcp_message
    modified_msg = tcp_msg.message.replace("foo", "bar")
AttributeError: 'TCPFlow' object has no attribute 'message'
192.168.1.241:37604 <- tcp <- 10.0.0.2:5443
```
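
The traceback suggests the example still targets the pre-3.0 hook signature. Below is a sketch of the addon written against the 3.x TCP API, where the hook receives a `tcp.TCPFlow` and the newest payload is `flow.messages[-1]`; treat the attribute names as assumptions to verify against the 3.0 docs:
```python
from mitmproxy import ctx, tcp


def tcp_message(flow: tcp.TCPFlow):
    message = flow.messages[-1]  # the most recent TCP message on this flow
    message.content = message.content.replace(b"foo", b"bar")  # payloads are bytes
    ctx.log.info("[tcp_message] from {}: {!r}".format(
        "client" if message.from_client else "server", message.content))
```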
##### System information
mitmdump --version
Mitmproxy: 3.0.4
Python: 3.6.0
OpenSSL: OpenSSL 1.1.0h 27 Mar 2018
Platform: Linux-3.19.0-65-generic-x86_64-with-debian-jessie-sid
</issue>
<code>
[start of examples/complex/tcp_message.py]
1 """
2 tcp_message Inline Script Hook API Demonstration
3 ------------------------------------------------
4
5 * modifies packets containing "foo" to "bar"
6 * prints various details for each packet.
7
8 example cmdline invocation:
9 mitmdump -T --host --tcp ".*" -q -s examples/tcp_message.py
10 """
11 from mitmproxy.utils import strutils
12 from mitmproxy import ctx
13
14
15 def tcp_message(tcp_msg):
16 modified_msg = tcp_msg.message.replace("foo", "bar")
17
18 is_modified = False if modified_msg == tcp_msg.message else True
19 tcp_msg.message = modified_msg
20
21 ctx.log.info(
22 "[tcp_message{}] from {} {} to {} {}:\r\n{}".format(
23 " (modified)" if is_modified else "",
24 "client" if tcp_msg.sender == tcp_msg.client_conn else "server",
25 tcp_msg.sender.address,
26 "server" if tcp_msg.receiver == tcp_msg.server_conn else "client",
27 tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))
28 )
29
[end of examples/complex/tcp_message.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/complex/tcp_message.py b/examples/complex/tcp_message.py
--- a/examples/complex/tcp_message.py
+++ b/examples/complex/tcp_message.py
@@ -6,23 +6,22 @@
* prints various details for each packet.
example cmdline invocation:
-mitmdump -T --host --tcp ".*" -q -s examples/tcp_message.py
+mitmdump --rawtcp --tcp-host ".*" -s examples/complex/tcp_message.py
"""
from mitmproxy.utils import strutils
from mitmproxy import ctx
+from mitmproxy import tcp
-def tcp_message(tcp_msg):
- modified_msg = tcp_msg.message.replace("foo", "bar")
-
- is_modified = False if modified_msg == tcp_msg.message else True
- tcp_msg.message = modified_msg
+def tcp_message(flow: tcp.TCPFlow):
+ message = flow.messages[-1]
+ old_content = message.content
+ message.content = old_content.replace(b"foo", b"bar")
ctx.log.info(
- "[tcp_message{}] from {} {} to {} {}:\r\n{}".format(
- " (modified)" if is_modified else "",
- "client" if tcp_msg.sender == tcp_msg.client_conn else "server",
- tcp_msg.sender.address,
- "server" if tcp_msg.receiver == tcp_msg.server_conn else "client",
- tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))
+ "[tcp_message{}] from {} to {}:\n{}".format(
+ " (modified)" if message.content != old_content else "",
+ "client" if message.from_client else "server",
+ "server" if message.from_client else "client",
+ strutils.bytes_to_escaped_str(message.content))
)
| {"golden_diff": "diff --git a/examples/complex/tcp_message.py b/examples/complex/tcp_message.py\n--- a/examples/complex/tcp_message.py\n+++ b/examples/complex/tcp_message.py\n@@ -6,23 +6,22 @@\n * prints various details for each packet.\n \n example cmdline invocation:\n-mitmdump -T --host --tcp \".*\" -q -s examples/tcp_message.py\n+mitmdump --rawtcp --tcp-host \".*\" -s examples/complex/tcp_message.py\n \"\"\"\n from mitmproxy.utils import strutils\n from mitmproxy import ctx\n+from mitmproxy import tcp\n \n \n-def tcp_message(tcp_msg):\n- modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\n-\n- is_modified = False if modified_msg == tcp_msg.message else True\n- tcp_msg.message = modified_msg\n+def tcp_message(flow: tcp.TCPFlow):\n+ message = flow.messages[-1]\n+ old_content = message.content\n+ message.content = old_content.replace(b\"foo\", b\"bar\")\n \n ctx.log.info(\n- \"[tcp_message{}] from {} {} to {} {}:\\r\\n{}\".format(\n- \" (modified)\" if is_modified else \"\",\n- \"client\" if tcp_msg.sender == tcp_msg.client_conn else \"server\",\n- tcp_msg.sender.address,\n- \"server\" if tcp_msg.receiver == tcp_msg.server_conn else \"client\",\n- tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))\n+ \"[tcp_message{}] from {} to {}:\\n{}\".format(\n+ \" (modified)\" if message.content != old_content else \"\",\n+ \"client\" if message.from_client else \"server\",\n+ \"server\" if message.from_client else \"client\",\n+ strutils.bytes_to_escaped_str(message.content))\n )\n", "issue": "tcp_message script not working\nHi,\r\n\r\nI tried to execute the TCP message replace script from the doc but it seems is not working. I don't know if this is a issue with the doc script or with mitmproxy.\r\n\r\nThe script was unchanged.\r\n\r\n##### Steps to reproduce the problem:\r\n\r\n1. mitmdump --mode transparent --tcp-host \".*\" -k -s examples/complex/tcp_message.py\r\n\r\nLoading script: examples/tcp_message.py\r\nProxy server listening at http://*:8080\r\n192.168.1.241:37604: clientconnect\r\n::ffff:192.168.1.241:37604: Certificate verification error for None: hostname 'no-hostname' doesn't match either of '*.local.org', 'local.org'\r\n::ffff:192.168.1.241:37604: Ignoring server verification error, continuing with connection\r\nAddon error: Traceback (most recent call last):\r\n File \"examples/tcp_message.py\", line 16, in tcp_message\r\n modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\r\nAttributeError: 'TCPFlow' object has no attribute 'message'\r\n\r\n192.168.1.241:37604 -> tcp -> 10.0.0.2:5443\r\nAddon error: Traceback (most recent call last):\r\n File \"examples/tcp_message.py\", line 16, in tcp_message\r\n modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\r\nAttributeError: 'TCPFlow' object has no attribute 'message'\r\n\r\n192.168.1.241:37604 <- tcp <- 10.0.0.2:5443\r\n\r\n##### System information\r\n\r\n<!-- Paste the output of \"mitmproxy --version\" here. -->\r\n\r\nmitmdump --version\r\nMitmproxy: 3.0.4 \r\nPython: 3.6.0\r\nOpenSSL: OpenSSL 1.1.0h 27 Mar 2018\r\nPlatform: Linux-3.19.0-65-generic-x86_64-with-debian-jessie-sid\r\n\r\n<!-- Please use the mitmproxy forums (https://discourse.mitmproxy.org/) for support/how-to questions. Thanks! 
:) -->\r\n\n", "before_files": [{"content": "\"\"\"\ntcp_message Inline Script Hook API Demonstration\n------------------------------------------------\n\n* modifies packets containing \"foo\" to \"bar\"\n* prints various details for each packet.\n\nexample cmdline invocation:\nmitmdump -T --host --tcp \".*\" -q -s examples/tcp_message.py\n\"\"\"\nfrom mitmproxy.utils import strutils\nfrom mitmproxy import ctx\n\n\ndef tcp_message(tcp_msg):\n modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\n\n is_modified = False if modified_msg == tcp_msg.message else True\n tcp_msg.message = modified_msg\n\n ctx.log.info(\n \"[tcp_message{}] from {} {} to {} {}:\\r\\n{}\".format(\n \" (modified)\" if is_modified else \"\",\n \"client\" if tcp_msg.sender == tcp_msg.client_conn else \"server\",\n tcp_msg.sender.address,\n \"server\" if tcp_msg.receiver == tcp_msg.server_conn else \"client\",\n tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))\n )\n", "path": "examples/complex/tcp_message.py"}]} | 1,322 | 388 |
gh_patches_debug_758 | rasdani/github-patches | git_diff | vllm-project__vllm-2337 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[v0.2.7] Release Tracker
**ETA**: Jan 3rd - 4th
## Major changes
TBD
## PRs to be merged before the release
- [x] #2221
- [ ] ~~#2293~~ (deferred)
</issue>
<code>
[start of vllm/__init__.py]
1 """vLLM: a high-throughput and memory-efficient inference engine for LLMs"""
2
3 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
4 from vllm.engine.async_llm_engine import AsyncLLMEngine
5 from vllm.engine.llm_engine import LLMEngine
6 from vllm.engine.ray_utils import initialize_cluster
7 from vllm.entrypoints.llm import LLM
8 from vllm.outputs import CompletionOutput, RequestOutput
9 from vllm.sampling_params import SamplingParams
10
11 __version__ = "0.2.6"
12
13 __all__ = [
14 "LLM",
15 "SamplingParams",
16 "RequestOutput",
17 "CompletionOutput",
18 "LLMEngine",
19 "EngineArgs",
20 "AsyncLLMEngine",
21 "AsyncEngineArgs",
22 "initialize_cluster",
23 ]
24
[end of vllm/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vllm/__init__.py b/vllm/__init__.py
--- a/vllm/__init__.py
+++ b/vllm/__init__.py
@@ -8,7 +8,7 @@
from vllm.outputs import CompletionOutput, RequestOutput
from vllm.sampling_params import SamplingParams
-__version__ = "0.2.6"
+__version__ = "0.2.7"
__all__ = [
"LLM",
| {"golden_diff": "diff --git a/vllm/__init__.py b/vllm/__init__.py\n--- a/vllm/__init__.py\n+++ b/vllm/__init__.py\n@@ -8,7 +8,7 @@\n from vllm.outputs import CompletionOutput, RequestOutput\n from vllm.sampling_params import SamplingParams\n \n-__version__ = \"0.2.6\"\n+__version__ = \"0.2.7\"\n \n __all__ = [\n \"LLM\",\n", "issue": "[v0.2.7] Release Tracker\n**ETA**: Jan 3rd - 4th\r\n\r\n## Major changes\r\n\r\nTBD\r\n\r\n## PRs to be merged before the release\r\n\r\n- [x] #2221 \r\n- [ ] ~~#2293~~ (deferred)\n", "before_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.2.6\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}]} | 820 | 109 |
gh_patches_debug_7314 | rasdani/github-patches | git_diff | chainer__chainer-552 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
where doesn't work with int16 and float64 on cuda6.5
On CUDA 6.5 only, this code doesn't work:
```
x = cupy.array([1,2,3], dtype=cupy.int16)
y = cupy.array([1,2,3], dtype=cupy.float64)
c = cupy.array([1,0,1], dtype=cupy.bool_)
cupy.where(c, x, y)
```
Other combinations, such as (int16, float32) and (int32, float64), work correctly.
This may be a bug in CUDA 6.5 that was fixed in 7.0.
Note that `cupy.where(c, y, x)` does work.
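
Until the ufunc gains an explicit kernel for this pairing, one workaround is to promote the narrower operand yourself so dispatch lands on a signature that already works — a sketch using only the NumPy-compatible `astype`:
```python
import cupy

x = cupy.array([1, 2, 3], dtype=cupy.int16)
y = cupy.array([1, 2, 3], dtype=cupy.float64)
c = cupy.array([1, 0, 1], dtype=cupy.bool_)

# Promote int16 -> float64 explicitly so the existing '?dd->d' kernel is used.
out = cupy.where(c, x.astype(cupy.float64), y)
```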
</issue>
<code>
[start of cupy/sorting/search.py]
1 from cupy import elementwise
2 from cupy import reduction
3
4
5 def argmax(a, axis=None, dtype=None, out=None, keepdims=False):
6 """Returns the indices of the maximum along an axis.
7
8 Args:
9 a (cupy.ndarray): Array to take argmax.
10 axis (int): Along which axis to find the maximum. ``a`` is flattened by
11 default.
12 dtype: Data type specifier.
13 out (cupy.ndarray): Output array.
14 keepdims (bool): If True, the axis ``axis`` is preserved as an axis of
15 length one.
16
17 Returns:
18 cupy.ndarray: The indices of the maximum of ``a`` along an axis.
19
20 .. seealso:: :func:`numpy.argmax`
21
22 """
23 return reduction.argmax(a, axis=axis, dtype=dtype, out=out,
24 keepdims=keepdims)
25
26
27 # TODO(okuta): Implement nanargmax
28
29
30 def argmin(a, axis=None, dtype=None, out=None, keepdims=False):
31 """Returns the indices of the minimum along an axis.
32
33 Args:
34 a (cupy.ndarray): Array to take argmin.
35 axis (int): Along which axis to find the minimum. ``a`` is flattened by
36 default.
37 dtype: Data type specifier.
38 out (cupy.ndarray): Output array.
39 keepdims (bool): If True, the axis ``axis`` is preserved as an axis of
40 length one.
41
42 Returns:
43 cupy.ndarray: The indices of the minimum of ``a`` along an axis.
44
45 .. seealso:: :func:`numpy.argmin`
46
47 """
48 return reduction.argmin(a, axis=axis, dtype=dtype, out=out,
49 keepdims=keepdims)
50
51
52 # TODO(okuta): Implement nanargmin
53
54
55 # TODO(okuta): Implement argwhere
56
57
58 # TODO(okuta): Implement nonzero
59
60
61 # TODO(okuta): Implement flatnonzero
62
63
64 def where(condition, x=None, y=None):
65 """Return elements, either from x or y, depending on condition.
66
67 .. note::
68
69 Currently Cupy doesn't support ``where(condition)``, that Numpy
70 supports.
71
72 Args:
73 condition (cupy.ndarray): When True, take x, otherwise take y.
74 x (cupy.ndarray): Values from which to choose on ``True``.
75 y (cupy.ndarray): Values from which to choose on ``False``.
76
77 Returns:
78 cupy.ndarray: Each element of output contains elements of ``x`` when
79 ``condition`` is ``True``, otherwise elements of ``y``.
80
81 """
82
83 missing = (x is None, y is None).count(True)
84
85 if missing == 1:
86 raise ValueError("Must provide both 'x' and 'y' or neither.")
87 if missing == 2:
88 # TODO(unno): return nonzero(cond)
89 return NotImplementedError()
90
91 return _where_ufunc(condition.astype('?'), x, y)
92
93 _where_ufunc = elementwise.create_ufunc(
94 'cupy_where',
95 ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',
96 '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),
97 'out0 = in0 ? in1 : in2')
98
99
100 # TODO(okuta): Implement searchsorted
101
102
103 # TODO(okuta): Implement extract
104
[end of cupy/sorting/search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/sorting/search.py b/cupy/sorting/search.py
--- a/cupy/sorting/search.py
+++ b/cupy/sorting/search.py
@@ -93,7 +93,12 @@
_where_ufunc = elementwise.create_ufunc(
'cupy_where',
('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',
- '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),
+ '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',
+ # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it
+ # works).
+ # See issue #551.
+ '?hd->d', '?Hd->d',
+ '?dd->d'),
'out0 = in0 ? in1 : in2')
| {"golden_diff": "diff --git a/cupy/sorting/search.py b/cupy/sorting/search.py\n--- a/cupy/sorting/search.py\n+++ b/cupy/sorting/search.py\n@@ -93,7 +93,12 @@\n _where_ufunc = elementwise.create_ufunc(\n 'cupy_where',\n ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',\n- '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),\n+ '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',\n+ # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it\n+ # works).\n+ # See issue #551.\n+ '?hd->d', '?Hd->d',\n+ '?dd->d'),\n 'out0 = in0 ? in1 : in2')\n", "issue": "where doesn't work with int16 and float64 on cuda6.5\nOn only cuda 6.5, this code doesn't work:\n\n```\nx = cupy.array([1,2,3], dtype=cupy.int16)\ny = cupy.array([1,2,3], dtype=cupy.float64)\nc = cupy.array([1,0,1], dtype=cupy.bool_)\ncupy.where(c, x, y)\n```\n\nOther combinations such as (int16, float32) and (int32, float64) correctly work.\nMaybe this is a bug on cuda 6.5, and fixed on 7.0.\n\nNote that `cupy.where(c, y, x)` can work.\n\n", "before_files": [{"content": "from cupy import elementwise\nfrom cupy import reduction\n\n\ndef argmax(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the maximum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmax.\n axis (int): Along which axis to find the maximum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If True, the axis ``axis`` is preserved as an axis of\n length one.\n\n Returns:\n cupy.ndarray: The indices of the maximum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmax`\n\n \"\"\"\n return reduction.argmax(a, axis=axis, dtype=dtype, out=out,\n keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmax\n\n\ndef argmin(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the minimum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmin.\n axis (int): Along which axis to find the minimum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If True, the axis ``axis`` is preserved as an axis of\n length one.\n\n Returns:\n cupy.ndarray: The indices of the minimum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmin`\n\n \"\"\"\n return reduction.argmin(a, axis=axis, dtype=dtype, out=out,\n keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmin\n\n\n# TODO(okuta): Implement argwhere\n\n\n# TODO(okuta): Implement nonzero\n\n\n# TODO(okuta): Implement flatnonzero\n\n\ndef where(condition, x=None, y=None):\n \"\"\"Return elements, either from x or y, depending on condition.\n\n .. 
note::\n\n Currently Cupy doesn't support ``where(condition)``, that Numpy\n supports.\n\n Args:\n condition (cupy.ndarray): When True, take x, otherwise take y.\n x (cupy.ndarray): Values from which to choose on ``True``.\n y (cupy.ndarray): Values from which to choose on ``False``.\n\n Returns:\n cupy.ndarray: Each element of output contains elements of ``x`` when\n ``condition`` is ``True``, otherwise elements of ``y``.\n\n \"\"\"\n\n missing = (x is None, y is None).count(True)\n\n if missing == 1:\n raise ValueError(\"Must provide both 'x' and 'y' or neither.\")\n if missing == 2:\n # TODO(unno): return nonzero(cond)\n return NotImplementedError()\n\n return _where_ufunc(condition.astype('?'), x, y)\n\n_where_ufunc = elementwise.create_ufunc(\n 'cupy_where',\n ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',\n '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),\n 'out0 = in0 ? in1 : in2')\n\n\n# TODO(okuta): Implement searchsorted\n\n\n# TODO(okuta): Implement extract\n", "path": "cupy/sorting/search.py"}]} | 1,675 | 247 |
gh_patches_debug_603 | rasdani/github-patches | git_diff | pex-tool__pex-1761 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.87
On the docket:
+ [ ] A relative --tmpdir foils pex3 lock create. #1758
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.86"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.86"
+__version__ = "2.1.87"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.86\"\n+__version__ = \"2.1.87\"\n", "issue": "Release 2.1.87\nOn the docket:\r\n+ [ ] A relative --tmpdir foils pex3 lock create. #1758\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.86\"\n", "path": "pex/version.py"}]} | 619 | 97 |
gh_patches_debug_6507 | rasdani/github-patches | git_diff | aws__aws-sam-cli-2007 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect inline help in "sam local generate-event" command
### Describe your idea/feature/enhancement
Using the CLI, I had some problems with the inline help of the "sam local generate-event" command. I was trying to pipe the event generated by that command into "sam local invoke", and it failed. The part of the inline help that is incorrect is this:
```
{...} After you generate a sample event, you can use it to test your Lambda function locally
$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id> {...}
```
In the web documentation here (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-generate-event.html) the help is correct:
```
After you generate a sample event, you can use it to test your Lambda function locally
sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>
```
### Proposal
Replace the incorrect help text with the correct one.
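
Concretely, this is a one-line change to the tail of `HELP_TEXT` in `samcli/commands/local/generate_event/cli.py`. A sketch of the corrected fragment (`HELP_TEXT_TAIL` is an illustrative name, not part of the codebase; the wording should mirror the web documentation):
```python
# Corrected final lines of the help string: pipe the generated event via `-e -`.
HELP_TEXT_TAIL = """\
After you generate a sample event, you can use it to test your Lambda function locally
$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>
"""
```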
</issue>
<code>
[start of samcli/commands/local/generate_event/cli.py]
1 """
2 Sets up the cli for generate-event
3 """
4
5 import click
6
7 from samcli.cli.main import pass_context
8 from samcli.commands.local.generate_event.event_generation import GenerateEventCommand
9
10 HELP_TEXT = """
11 You can use this command to generate sample payloads from different event sources
12 such as S3, API Gateway, and SNS. These payloads contain the information that the
13 event sources send to your Lambda functions.\n
14 \b
15 Generate the event that S3 sends to your Lambda function when a new object is uploaded
16 $ sam local generate-event s3 [put/delete]\n
17 \b
18 You can even customize the event by adding parameter flags. To find which flags apply to your command,
19 run:\n
20 $ sam local generate-event s3 [put/delete] --help\n
21 Then you can add in those flags that you wish to customize using\n
22 $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\n
23 \b
24 After you generate a sample event, you can use it to test your Lambda function locally
25 $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>
26 """
27
28
29 @click.command(name="generate-event", cls=GenerateEventCommand, help=HELP_TEXT)
30 @pass_context
31 def cli(self):
32 """
33 Generate an event for one of the services listed below:
34 """
35
[end of samcli/commands/local/generate_event/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/samcli/commands/local/generate_event/cli.py b/samcli/commands/local/generate_event/cli.py
--- a/samcli/commands/local/generate_event/cli.py
+++ b/samcli/commands/local/generate_event/cli.py
@@ -22,7 +22,7 @@
$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\n
\b
After you generate a sample event, you can use it to test your Lambda function locally
-$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>
+$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>
"""
| {"golden_diff": "diff --git a/samcli/commands/local/generate_event/cli.py b/samcli/commands/local/generate_event/cli.py\n--- a/samcli/commands/local/generate_event/cli.py\n+++ b/samcli/commands/local/generate_event/cli.py\n@@ -22,7 +22,7 @@\n $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\\n\n \\b\n After you generate a sample event, you can use it to test your Lambda function locally\n-$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>\n+$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>\n \"\"\"\n", "issue": "Incorrect inline help in \"sam local generate-event\" command\n<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). -->\r\n\r\n### Describe your idea/feature/enhancement\r\n\r\nUsing the CLI I had some problems with the inline help of the command \"sam local generate-event\". I was trying to pipe the event generated by that command with \"sam local invoke\" and it failed. The part of the inline help that it is incorrect is this:\r\n\r\n`{...} After you generate a sample event, you can use it to test your Lambda function locally\r\n $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id> {...}`\r\n\r\nIn the web documentation here (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-generate-event.html) the help is correct:\r\n\r\n`After you generate a sample event, you can use it to test your Lambda function locally\r\nsam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>`\r\n### Proposal\r\n\r\nReplace the incorrect help by the correct one.\r\n\n", "before_files": [{"content": "\"\"\"\nSets up the cli for generate-event\n\"\"\"\n\nimport click\n\nfrom samcli.cli.main import pass_context\nfrom samcli.commands.local.generate_event.event_generation import GenerateEventCommand\n\nHELP_TEXT = \"\"\"\nYou can use this command to generate sample payloads from different event sources\nsuch as S3, API Gateway, and SNS. These payloads contain the information that the\nevent sources send to your Lambda functions.\\n\n\\b\nGenerate the event that S3 sends to your Lambda function when a new object is uploaded\n$ sam local generate-event s3 [put/delete]\\n\n\\b\nYou can even customize the event by adding parameter flags. To find which flags apply to your command,\nrun:\\n\n$ sam local generate-event s3 [put/delete] --help\\n\nThen you can add in those flags that you wish to customize using\\n\n$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\\n\n\\b\nAfter you generate a sample event, you can use it to test your Lambda function locally\n$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>\n\"\"\"\n\n\[email protected](name=\"generate-event\", cls=GenerateEventCommand, help=HELP_TEXT)\n@pass_context\ndef cli(self):\n \"\"\"\n Generate an event for one of the services listed below:\n \"\"\"\n", "path": "samcli/commands/local/generate_event/cli.py"}]} | 1,166 | 177 |
gh_patches_debug_29651 | rasdani/github-patches | git_diff | facebookresearch__hydra-2520 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document hydra.utils.{get_class,get_method}
Users would benefit from documentation for the following:
[`hydra.utils.get_class`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L21)
[`hydra.utils.get_method`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L32)
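
For reference, a usage sketch of the two helpers (the `my_module` names are placeholders, not a real package):
```python
from hydra.utils import get_class, get_method

# Both resolve a dotted path via hydra's locate helper; get_class additionally
# checks that the target is a class, get_method that it is callable.
MyClass = get_class("my_module.MyClass")
my_func = get_method("my_module.my_function")

instance = MyClass()
result = my_func()
```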
</issue>
<code>
[start of hydra/utils.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 import logging.config
4 import os
5 from pathlib import Path
6 from typing import Any, Callable
7
8 import hydra._internal.instantiate._instantiate2
9 import hydra.types
10 from hydra._internal.utils import _locate
11 from hydra.core.hydra_config import HydraConfig
12
13 log = logging.getLogger(__name__)
14
15 # Instantiation related symbols
16 instantiate = hydra._internal.instantiate._instantiate2.instantiate
17 call = instantiate
18 ConvertMode = hydra.types.ConvertMode
19
20
21 def get_class(path: str) -> type:
22 try:
23 cls = _locate(path)
24 if not isinstance(cls, type):
25 raise ValueError(
26 f"Located non-class of type '{type(cls).__name__}'"
27 + f" while loading '{path}'"
28 )
29 return cls
30 except Exception as e:
31 log.error(f"Error initializing class at {path}: {e}")
32 raise e
33
34
35 def get_method(path: str) -> Callable[..., Any]:
36 try:
37 obj = _locate(path)
38 if not callable(obj):
39 raise ValueError(
40 f"Located non-callable of type '{type(obj).__name__}'"
41 + f" while loading '{path}'"
42 )
43 cl: Callable[..., Any] = obj
44 return cl
45 except Exception as e:
46 log.error(f"Error getting callable at {path} : {e}")
47 raise e
48
49
50 # Alias for get_method
51 get_static_method = get_method
52
53
54 def get_original_cwd() -> str:
55 """
56 :return: the original working directory the Hydra application was launched from
57 """
58 if not HydraConfig.initialized():
59 raise ValueError(
60 "get_original_cwd() must only be used after HydraConfig is initialized"
61 )
62 ret = HydraConfig.get().runtime.cwd
63 assert ret is not None and isinstance(ret, str)
64 return ret
65
66
67 def to_absolute_path(path: str) -> str:
68 """
69 converts the specified path to be absolute path.
70 if the input path is relative, it's interpreted as relative to the original working directory
71 if it's absolute, it's returned as is
72 :param path: path to convert
73 :return:
74 """
75 p = Path(path)
76 if not HydraConfig.initialized():
77 base = Path(os.getcwd())
78 else:
79 base = Path(get_original_cwd())
80 if p.is_absolute():
81 ret = p
82 else:
83 ret = base / p
84 return str(ret)
85
[end of hydra/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/utils.py b/hydra/utils.py
--- a/hydra/utils.py
+++ b/hydra/utils.py
@@ -19,6 +19,14 @@
def get_class(path: str) -> type:
+ """
+ Look up a class based on a dotpath.
+ Fails if the path does not point to a class.
+
+ >>> import my_module
+ >>> from hydra.utils import get_class
+ >>> assert get_class("my_module.MyClass") is my_module.MyClass
+ """
try:
cls = _locate(path)
if not isinstance(cls, type):
@@ -28,11 +36,19 @@
)
return cls
except Exception as e:
- log.error(f"Error initializing class at {path}: {e}")
+ log.error(f"Error getting class at {path}: {e}")
         raise e
 
 
 def get_method(path: str) -> Callable[..., Any]:
+ """
+ Look up a callable based on a dotpath.
+ Fails if the path does not point to a callable object.
+
+ >>> import my_module
+ >>> from hydra.utils import get_method
+ >>> assert get_method("my_module.my_function") is my_module.my_function
+ """
try:
obj = _locate(path)
if not callable(obj):
@@ -51,6 +67,22 @@
 get_static_method = get_method
 
 
+def get_object(path: str) -> Any:
+ """
+ Look up a callable based on a dotpath.
+
+ >>> import my_module
+ >>> from hydra.utils import get_object
+ >>> assert get_object("my_module.my_object") is my_module.my_object
+ """
+ try:
+ obj = _locate(path)
+ return obj
+ except Exception as e:
+ log.error(f"Error getting object at {path} : {e}")
+ raise e
+
+
def get_original_cwd() -> str:
"""
:return: the original working directory the Hydra application was launched from
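A quick usage sketch of the helpers patched above — the dotted paths are stdlib examples chosen for illustration, not part of Hydra:
```python
from hydra.utils import get_class, get_method, to_absolute_path

# Dotpath lookups; both raise (and log) if the target is missing
# or is not the expected kind of object.
ordered_dict_cls = get_class("collections.OrderedDict")
join_fn = get_method("os.path.join")

# Relative paths resolve against the launch directory once HydraConfig
# is initialized; absolute paths are returned unchanged.
print(to_absolute_path("data/train.csv"))
print(to_absolute_path("/etc/hosts"))
```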
| {"golden_diff": "diff --git a/hydra/utils.py b/hydra/utils.py\n--- a/hydra/utils.py\n+++ b/hydra/utils.py\n@@ -19,6 +19,14 @@\n \n \n def get_class(path: str) -> type:\n+ \"\"\"\n+ Look up a class based on a dotpath.\n+ Fails if the path does not point to a class.\n+\n+ >>> import my_module\n+ >>> from hydra.utils import get_class\n+ >>> assert get_class(\"my_module.MyClass\") is my_module.MyClass\n+ \"\"\"\n try:\n cls = _locate(path)\n if not isinstance(cls, type):\n@@ -28,11 +36,19 @@\n )\n return cls\n except Exception as e:\n- log.error(f\"Error initializing class at {path}: {e}\")\n+ log.error(f\"Error getting class at {path}: {e}\")\n raise e\n \n \n def get_method(path: str) -> Callable[..., Any]:\n+ \"\"\"\n+ Look up a callable based on a dotpath.\n+ Fails if the path does not point to a callable object.\n+\n+ >>> import my_module\n+ >>> from hydra.utils import get_method\n+ >>> assert get_method(\"my_module.my_function\") is my_module.my_function\n+ \"\"\"\n try:\n obj = _locate(path)\n if not callable(obj):\n@@ -51,6 +67,22 @@\n get_static_method = get_method\n \n \n+def get_object(path: str) -> Any:\n+ \"\"\"\n+ Look up a callable based on a dotpath.\n+\n+ >>> import my_module\n+ >>> from hydra.utils import get_object\n+ >>> assert get_object(\"my_module.my_object\") is my_module.my_object\n+ \"\"\"\n+ try:\n+ obj = _locate(path)\n+ return obj\n+ except Exception as e:\n+ log.error(f\"Error getting object at {path} : {e}\")\n+ raise e\n+\n+\n def get_original_cwd() -> str:\n \"\"\"\n :return: the original working directory the Hydra application was launched from\n", "issue": "Document hydra.utils.{get_class,get_method}\nUsers would benefit from documentation for the following:\r\n[`hydra.utils.get_class`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L21)\r\n[`hydra.utils.get_method`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L32)\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\n\nimport logging.config\nimport os\nfrom pathlib import Path\nfrom typing import Any, Callable\n\nimport hydra._internal.instantiate._instantiate2\nimport hydra.types\nfrom hydra._internal.utils import _locate\nfrom hydra.core.hydra_config import HydraConfig\n\nlog = logging.getLogger(__name__)\n\n# Instantiation related symbols\ninstantiate = hydra._internal.instantiate._instantiate2.instantiate\ncall = instantiate\nConvertMode = hydra.types.ConvertMode\n\n\ndef get_class(path: str) -> type:\n try:\n cls = _locate(path)\n if not isinstance(cls, type):\n raise ValueError(\n f\"Located non-class of type '{type(cls).__name__}'\"\n + f\" while loading '{path}'\"\n )\n return cls\n except Exception as e:\n log.error(f\"Error initializing class at {path}: {e}\")\n raise e\n\n\ndef get_method(path: str) -> Callable[..., Any]:\n try:\n obj = _locate(path)\n if not callable(obj):\n raise ValueError(\n f\"Located non-callable of type '{type(obj).__name__}'\"\n + f\" while loading '{path}'\"\n )\n cl: Callable[..., Any] = obj\n return cl\n except Exception as e:\n log.error(f\"Error getting callable at {path} : {e}\")\n raise e\n\n\n# Alias for get_method\nget_static_method = get_method\n\n\ndef get_original_cwd() -> str:\n \"\"\"\n :return: the original working directory the Hydra application was launched from\n \"\"\"\n if not HydraConfig.initialized():\n raise ValueError(\n \"get_original_cwd() must only be used after HydraConfig is initialized\"\n )\n ret = HydraConfig.get().runtime.cwd\n assert ret is not None and isinstance(ret, str)\n return ret\n\n\ndef to_absolute_path(path: str) -> str:\n \"\"\"\n converts the specified path to be absolute path.\n if the input path is relative, it's interpreted as relative to the original working directory\n if it's absolute, it's returned as is\n :param path: path to convert\n :return:\n \"\"\"\n p = Path(path)\n if not HydraConfig.initialized():\n base = Path(os.getcwd())\n else:\n base = Path(get_original_cwd())\n if p.is_absolute():\n ret = p\n else:\n ret = base / p\n return str(ret)\n", "path": "hydra/utils.py"}]} | 1,385 | 466 |
gh_patches_debug_28659 | rasdani/github-patches | git_diff | searxng__searxng-2303 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: seznam(CZ) ignored in search results
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Versions of SearX(NG):**
2022.11.25-1314c1c5, vanilla
2022.11.27-90b429bb, vanilla
2022.11.28, vanilla
2.3.7+20221122, fork
1.1.0-37-4d9586e2, vanilla
**How did you install SearX(NG)?**
I didn't. I tried several public instances and all suffer from the same issue.
**What happened?**
When searching for some Czech terms, like "seznam.cz", there is never the "seznam(CZ)" tag under individual search results. When trying to limit the search to the seznam(CZ) engine:
`!szn seznam.cz`
I'm getting an error:
`Sorry!we didn't find any results. Please use another query or search in more categories.`
When trying to limit the search to bing using the "!bi" keyword, it works as expected (all search results have the bing tag under them).
**How to reproduce:**
`!szn <search term>`
**Expected behavior:**
When limiting the search to the seznam(CZ) engine, all search results should have the "seznam(CZ)" tag under them.
When searching w/o limiting, "seznam(CZ)" tag should be mixed in with other tags among search results.
**Screenshots:**

**Additional context:**
The bug is not limited to public instances running the latest docker image (2022.11.25-1314c1c5); it also occurs with forks, with upstream searX instances, and with NG versions newer than the dockerized images from Docker Hub.
https://searx.be/
https://searx.tiekoetter.com/
https://searx.mistli.net/
https://search.privacyguides.net/
https://searx.webheberg.info/
https://spot.murena.io/
https://darmarit.org/searx/
</issue>
<code>
[start of searx/engines/seznam.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 Seznam
4 """
5
6 from urllib.parse import urlencode
7 from lxml import html
8 from searx.network import get
9 from searx.exceptions import SearxEngineAccessDeniedException
10 from searx.utils import (
11 extract_text,
12 eval_xpath_list,
13 eval_xpath_getindex,
14 eval_xpath,
15 )
16
17 # about
18 about = {
19 "website": "https://www.seznam.cz/",
20 "wikidata_id": "Q3490485",
21 "official_api_documentation": "https://api.sklik.cz/",
22 "use_official_api": False,
23 "require_api_key": False,
24 "results": "HTML",
25 "language": "cz",
26 }
27
28 categories = ['general', 'web']
29 base_url = 'https://search.seznam.cz/'
30
31
32 def request(query, params):
33 response_index = get(base_url, headers=params['headers'], raise_for_httperror=True)
34 dom = html.fromstring(response_index.text)
35
36 url_params = {
37 'q': query,
38 'oq': query,
39 }
40 for e in eval_xpath_list(dom, '//input[@type="hidden"]'):
41 name = e.get('name')
42 value = e.get('value')
43 url_params[name] = value
44
45 params['url'] = base_url + '?' + urlencode(url_params)
46 params['cookies'] = response_index.cookies
47 return params
48
49
50 def response(resp):
51 if resp.url.path.startswith('/verify'):
52 raise SearxEngineAccessDeniedException()
53
54 results = []
55
56 dom = html.fromstring(resp.content.decode())
57 for result_element in eval_xpath_list(dom, '//div[@data-dot="results"]/div'):
58 result_data = eval_xpath_getindex(result_element, './/div[contains(@class, "bec586")]', 0, default=None)
59 if result_data is None:
60 continue
61 title_element = eval_xpath_getindex(result_element, './/h3/a', 0)
62 results.append(
63 {
64 'url': title_element.get('href'),
65 'title': extract_text(title_element),
66 'content': extract_text(eval_xpath(result_data, './/div[@class="_3eded7"]')),
67 }
68 )
69
70 return results
71
[end of searx/engines/seznam.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/seznam.py b/searx/engines/seznam.py
--- a/searx/engines/seznam.py
+++ b/searx/engines/seznam.py
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
-"""
- Seznam
+# lint: pylint
+"""Seznam
+
"""
from urllib.parse import urlencode
@@ -11,7 +12,6 @@
extract_text,
eval_xpath_list,
eval_xpath_getindex,
- eval_xpath,
 )
 
# about
@@ -54,8 +54,12 @@
     results = []
 
dom = html.fromstring(resp.content.decode())
- for result_element in eval_xpath_list(dom, '//div[@data-dot="results"]/div'):
- result_data = eval_xpath_getindex(result_element, './/div[contains(@class, "bec586")]', 0, default=None)
+ for result_element in eval_xpath_list(
+ dom, '//div[@id="searchpage-root"]//div[@class="Layout--left"]/div[@class="f2c528"]'
+ ):
+ result_data = eval_xpath_getindex(
+ result_element, './/div[@class="c8774a" or @class="e69e8d a11657"]', 0, default=None
+ )
if result_data is None:
continue
title_element = eval_xpath_getindex(result_element, './/h3/a', 0)
@@ -63,7 +67,7 @@
{
'url': title_element.get('href'),
'title': extract_text(title_element),
- 'content': extract_text(eval_xpath(result_data, './/div[@class="_3eded7"]')),
+ 'content': extract_text(result_data),
}
)
| {"golden_diff": "diff --git a/searx/engines/seznam.py b/searx/engines/seznam.py\n--- a/searx/engines/seznam.py\n+++ b/searx/engines/seznam.py\n@@ -1,6 +1,7 @@\n # SPDX-License-Identifier: AGPL-3.0-or-later\n-\"\"\"\n- Seznam\n+# lint: pylint\n+\"\"\"Seznam\n+\n \"\"\"\n \n from urllib.parse import urlencode\n@@ -11,7 +12,6 @@\n extract_text,\n eval_xpath_list,\n eval_xpath_getindex,\n- eval_xpath,\n )\n \n # about\n@@ -54,8 +54,12 @@\n results = []\n \n dom = html.fromstring(resp.content.decode())\n- for result_element in eval_xpath_list(dom, '//div[@data-dot=\"results\"]/div'):\n- result_data = eval_xpath_getindex(result_element, './/div[contains(@class, \"bec586\")]', 0, default=None)\n+ for result_element in eval_xpath_list(\n+ dom, '//div[@id=\"searchpage-root\"]//div[@class=\"Layout--left\"]/div[@class=\"f2c528\"]'\n+ ):\n+ result_data = eval_xpath_getindex(\n+ result_element, './/div[@class=\"c8774a\" or @class=\"e69e8d a11657\"]', 0, default=None\n+ )\n if result_data is None:\n continue\n title_element = eval_xpath_getindex(result_element, './/h3/a', 0)\n@@ -63,7 +67,7 @@\n {\n 'url': title_element.get('href'),\n 'title': extract_text(title_element),\n- 'content': extract_text(eval_xpath(result_data, './/div[@class=\"_3eded7\"]')),\n+ 'content': extract_text(result_data),\n }\n )\n", "issue": "Bug: seznam(CZ) ignored in search results\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Versions of SearX(NG):**\r\n2022.11.25-1314c1c5, vanilla\r\n2022.11.27-90b429bb, vanilla\r\n2022.11.28, vanilla\r\n2.3.7+20221122, fork\r\n1.1.0-37-4d9586e2, vanilla\r\n\r\n**How did you install SearX(NG)?**\r\nI didn't. I tried several public instances and all suffer from the same issue.\r\n\r\n**What happened?**\r\nWhen searching for some Czech terms, like \"seznam.cz\", there is never the \"seznam(CZ)\" tag under individual search results. When trying to limit the search to the seznam(CZ) engine:\r\n`!szn seznam.cz`\r\nI'm getting an error:\r\n`Sorry!we didn't find any results. Please use another query or search in more categories.`\r\nWhen trying to limit the search to bing using \"!bi\" keyword, it works as expected (all search results have the bing tag under them).\r\n\r\n**How to reproduce:**\r\n`!szn <search term>`\r\n\r\n**Expected behavior:**\r\nWhen limiting searching to the seznam(CZ) engine, all search results should have the \"seznam(CZ)\" tag under them.\r\nWhen searching w/o limiting, \"seznam(CZ)\" tag should be mixed in with other tags among search results.\r\n\r\n**Screenshots:**\r\n\r\n\r\n**Additional context:**\r\nBesides public instances w/ the latest docker image (2022.11.25-1314c1c5). 
It neither works w/ forks, upstream searX instances nor higher NG versions than those utilizing dockerized images from docker hub.\r\nhttps://searx.be/\r\nhttps://searx.tiekoetter.com/\r\nhttps://searx.mistli.net/\r\nhttps://search.privacyguides.net/\r\nhttps://searx.webheberg.info/\r\nhttps://spot.murena.io/\r\nhttps://darmarit.org/searx/\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Seznam\n\"\"\"\n\nfrom urllib.parse import urlencode\nfrom lxml import html\nfrom searx.network import get\nfrom searx.exceptions import SearxEngineAccessDeniedException\nfrom searx.utils import (\n extract_text,\n eval_xpath_list,\n eval_xpath_getindex,\n eval_xpath,\n)\n\n# about\nabout = {\n \"website\": \"https://www.seznam.cz/\",\n \"wikidata_id\": \"Q3490485\",\n \"official_api_documentation\": \"https://api.sklik.cz/\",\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": \"HTML\",\n \"language\": \"cz\",\n}\n\ncategories = ['general', 'web']\nbase_url = 'https://search.seznam.cz/'\n\n\ndef request(query, params):\n response_index = get(base_url, headers=params['headers'], raise_for_httperror=True)\n dom = html.fromstring(response_index.text)\n\n url_params = {\n 'q': query,\n 'oq': query,\n }\n for e in eval_xpath_list(dom, '//input[@type=\"hidden\"]'):\n name = e.get('name')\n value = e.get('value')\n url_params[name] = value\n\n params['url'] = base_url + '?' + urlencode(url_params)\n params['cookies'] = response_index.cookies\n return params\n\n\ndef response(resp):\n if resp.url.path.startswith('/verify'):\n raise SearxEngineAccessDeniedException()\n\n results = []\n\n dom = html.fromstring(resp.content.decode())\n for result_element in eval_xpath_list(dom, '//div[@data-dot=\"results\"]/div'):\n result_data = eval_xpath_getindex(result_element, './/div[contains(@class, \"bec586\")]', 0, default=None)\n if result_data is None:\n continue\n title_element = eval_xpath_getindex(result_element, './/h3/a', 0)\n results.append(\n {\n 'url': title_element.get('href'),\n 'title': extract_text(title_element),\n 'content': extract_text(eval_xpath(result_data, './/div[@class=\"_3eded7\"]')),\n }\n )\n\n return results\n", "path": "searx/engines/seznam.py"}]} | 1,723 | 425 |
gh_patches_debug_17634 | rasdani/github-patches | git_diff | liqd__a4-opin-726 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use small images for the user avatar in the moderators list
</issue>
<code>
[start of euth/users/serializers.py]
1 from rest_framework import serializers
2
3 from .models import User
4
5
6 class UserSerializer(serializers.ModelSerializer):
7 avatar = serializers.ImageField()
8
9 class Meta:
10 model = User
11 fields = ('id', 'username', 'avatar', 'default_avatar')
12 read_only_fields = ('id', 'username', 'avatar', 'default_avatar')
13
14
15 # mails should not be exposed in API, so there is a separate one for this
16 class UserWithMailSerializer(UserSerializer):
17 class Meta(UserSerializer.Meta):
18 fields = ('id', 'username', 'avatar', 'default_avatar', 'email')
19 read_only_fields = ('id', 'username', 'avatar', 'default_avatar',
20 'email')
21
[end of euth/users/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/users/serializers.py b/euth/users/serializers.py
--- a/euth/users/serializers.py
+++ b/euth/users/serializers.py
@@ -1,16 +1,22 @@
+from easy_thumbnails.files import get_thumbnailer
 from rest_framework import serializers
 
 from .models import User
 
 
 class UserSerializer(serializers.ModelSerializer):
-    avatar = serializers.ImageField()
+    avatar = serializers.SerializerMethodField()
 
     class Meta:
         model = User
         fields = ('id', 'username', 'avatar', 'default_avatar')
         read_only_fields = ('id', 'username', 'avatar', 'default_avatar')
 
+    def get_avatar(self, obj):
+        if obj.avatar:
+            image = get_thumbnailer(obj.avatar)['avatar_small']
+            return image.url
+
 
# mails should not be exposed in API, so there is a separate one for this
class UserWithMailSerializer(UserSerializer):
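The `'avatar_small'` key looked up by `get_thumbnailer` is an easy-thumbnails alias that the project is assumed to define in its Django settings; a minimal sketch of such a definition (the dimensions are an assumption — only the alias name must match):
```python
# settings.py -- hypothetical easy-thumbnails alias for the serializer above.
THUMBNAIL_ALIASES = {
    '': {
        'avatar_small': {'size': (60, 60), 'crop': True},
    },
}
```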
| {"golden_diff": "diff --git a/euth/users/serializers.py b/euth/users/serializers.py\n--- a/euth/users/serializers.py\n+++ b/euth/users/serializers.py\n@@ -1,16 +1,22 @@\n+from easy_thumbnails.files import get_thumbnailer\n from rest_framework import serializers\n \n from .models import User\n \n \n class UserSerializer(serializers.ModelSerializer):\n- avatar = serializers.ImageField()\n+ avatar = serializers.SerializerMethodField()\n \n class Meta:\n model = User\n fields = ('id', 'username', 'avatar', 'default_avatar')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar')\n \n+ def get_avatar(self, obj):\n+ if obj.avatar:\n+ image = get_thumbnailer(obj.avatar)['avatar_small']\n+ return image.url\n+\n \n # mails should not be exposed in API, so there is a separate one for this\n class UserWithMailSerializer(UserSerializer):\n", "issue": "use small images in user avatar in moderators list\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom .models import User\n\n\nclass UserSerializer(serializers.ModelSerializer):\n avatar = serializers.ImageField()\n\n class Meta:\n model = User\n fields = ('id', 'username', 'avatar', 'default_avatar')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar')\n\n\n# mails should not be exposed in API, so there is a separate one for this\nclass UserWithMailSerializer(UserSerializer):\n class Meta(UserSerializer.Meta):\n fields = ('id', 'username', 'avatar', 'default_avatar', 'email')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar',\n 'email')\n", "path": "euth/users/serializers.py"}]} | 728 | 206 |
gh_patches_debug_647 | rasdani/github-patches | git_diff | pex-tool__pex-2095 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.129
On the docket:
+ [x] Pex resolves VCS and local project requirements from locks incorrectly. #2092
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.128"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
 # Licensed under the Apache License, Version 2.0 (see LICENSE).
 
-__version__ = "2.1.128"
+__version__ = "2.1.129"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.128\"\n+__version__ = \"2.1.129\"\n", "issue": "Release 2.1.129\nOn the docket:\r\n+ [x] Pex resolves VCS and local project requirements from locks incorrectly. #2092\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.128\"\n", "path": "pex/version.py"}]} | 622 | 99 |
gh_patches_debug_13708 | rasdani/github-patches | git_diff | bokeh__bokeh-8492 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in range_tool example
There is a small typo "range_rool" in the range_tool.py example. I would like to use this issue to create my first pull request and see how the process works.
</issue>
<code>
[start of examples/plotting/file/range_tool.py]
1 import numpy as np
2
3 from bokeh.io import show
4 from bokeh.layouts import column
5 from bokeh.models import ColumnDataSource, RangeTool
6 from bokeh.plotting import figure
7 from bokeh.sampledata.stocks import AAPL
8
9 dates = np.array(AAPL['date'], dtype=np.datetime64)
10 source = ColumnDataSource(data=dict(date=dates, close=AAPL['adj_close']))
11
12 p = figure(plot_height=300, plot_width=800, tools="", toolbar_location=None,
13 x_axis_type="datetime", x_axis_location="above",
14 background_fill_color="#efefef", x_range=(dates[1500], dates[2500]))
15
16 p.line('date', 'close', source=source)
17 p.yaxis.axis_label = 'Price'
18
19 select = figure(title="Drag the middle and edges of the selection box to change the range above",
20 plot_height=130, plot_width=800, y_range=p.y_range,
21 x_axis_type="datetime", y_axis_type=None,
22 tools="", toolbar_location=None, background_fill_color="#efefef")
23
24 range_rool = RangeTool(x_range=p.x_range)
25 range_rool.overlay.fill_color = "navy"
26 range_rool.overlay.fill_alpha = 0.2
27
28 select.line('date', 'close', source=source)
29 select.ygrid.grid_line_color = None
30 select.add_tools(range_rool)
31 select.toolbar.active_multi = range_rool
32
33 show(column(p, select))
34
[end of examples/plotting/file/range_tool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/plotting/file/range_tool.py b/examples/plotting/file/range_tool.py
--- a/examples/plotting/file/range_tool.py
+++ b/examples/plotting/file/range_tool.py
@@ -21,13 +21,13 @@
                 x_axis_type="datetime", y_axis_type=None,
                 tools="", toolbar_location=None, background_fill_color="#efefef")
 
-range_rool = RangeTool(x_range=p.x_range)
-range_rool.overlay.fill_color = "navy"
-range_rool.overlay.fill_alpha = 0.2
+range_tool = RangeTool(x_range=p.x_range)
+range_tool.overlay.fill_color = "navy"
+range_tool.overlay.fill_alpha = 0.2
 
 select.line('date', 'close', source=source)
 select.ygrid.grid_line_color = None
-select.add_tools(range_rool)
-select.toolbar.active_multi = range_rool
+select.add_tools(range_tool)
+select.toolbar.active_multi = range_tool
 
 show(column(p, select))
| {"golden_diff": "diff --git a/examples/plotting/file/range_tool.py b/examples/plotting/file/range_tool.py\n--- a/examples/plotting/file/range_tool.py\n+++ b/examples/plotting/file/range_tool.py\n@@ -21,13 +21,13 @@\n x_axis_type=\"datetime\", y_axis_type=None,\n tools=\"\", toolbar_location=None, background_fill_color=\"#efefef\")\n \n-range_rool = RangeTool(x_range=p.x_range)\n-range_rool.overlay.fill_color = \"navy\"\n-range_rool.overlay.fill_alpha = 0.2\n+range_tool = RangeTool(x_range=p.x_range)\n+range_tool.overlay.fill_color = \"navy\"\n+range_tool.overlay.fill_alpha = 0.2\n \n select.line('date', 'close', source=source)\n select.ygrid.grid_line_color = None\n-select.add_tools(range_rool)\n-select.toolbar.active_multi = range_rool\n+select.add_tools(range_tool)\n+select.toolbar.active_multi = range_tool\n \n show(column(p, select))\n", "issue": "Typo in range_tool example\nThere is a small typo \"range_rool\" in the range_tool.py example. I would like to use this issue to create my first pull request and see how the process works.\n", "before_files": [{"content": "import numpy as np\n\nfrom bokeh.io import show\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, RangeTool\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.stocks import AAPL\n\ndates = np.array(AAPL['date'], dtype=np.datetime64)\nsource = ColumnDataSource(data=dict(date=dates, close=AAPL['adj_close']))\n\np = figure(plot_height=300, plot_width=800, tools=\"\", toolbar_location=None,\n x_axis_type=\"datetime\", x_axis_location=\"above\",\n background_fill_color=\"#efefef\", x_range=(dates[1500], dates[2500]))\n\np.line('date', 'close', source=source)\np.yaxis.axis_label = 'Price'\n\nselect = figure(title=\"Drag the middle and edges of the selection box to change the range above\",\n plot_height=130, plot_width=800, y_range=p.y_range,\n x_axis_type=\"datetime\", y_axis_type=None,\n tools=\"\", toolbar_location=None, background_fill_color=\"#efefef\")\n\nrange_rool = RangeTool(x_range=p.x_range)\nrange_rool.overlay.fill_color = \"navy\"\nrange_rool.overlay.fill_alpha = 0.2\n\nselect.line('date', 'close', source=source)\nselect.ygrid.grid_line_color = None\nselect.add_tools(range_rool)\nselect.toolbar.active_multi = range_rool\n\nshow(column(p, select))\n", "path": "examples/plotting/file/range_tool.py"}]} | 967 | 220 |
gh_patches_debug_62436 | rasdani/github-patches | git_diff | comic__grand-challenge.org-3379 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Server error page won't render because of missing context
If a view throws a 500 error, the 500.html should get rendered. We recently updated the template to inherit from base.html, and now it will not render anymore because it is missing context variables (the 500 view is by default passed an empty context).
I'm unsure whether we should update the 500 view to add the missing context, or go back to not inheriting from base.html for the error views.
</issue>
<code>
[start of app/config/urls/challenge_subdomain.py]
1 from django.conf import settings
2 from django.urls import include, path
3 from django.views.generic import TemplateView
4
5 from grandchallenge.challenges.views import ChallengeUpdate
6
7 urlpatterns = [
8 path(
9 "robots.txt",
10 TemplateView.as_view(
11 template_name="robots.txt", content_type="text/plain"
12 ),
13 name="subdomain_robots_txt",
14 ),
15 path(
16 "evaluation/",
17 include("grandchallenge.evaluation.urls", namespace="evaluation"),
18 ),
19 path("teams/", include("grandchallenge.teams.urls", namespace="teams")),
20 path(
21 "participants/",
22 include("grandchallenge.participants.urls", namespace="participants"),
23 ),
24 path("admins/", include("grandchallenge.admins.urls", namespace="admins")),
25 path("update/", ChallengeUpdate.as_view(), name="challenge-update"),
26 path("summernote/", include("django_summernote.urls")),
27 path("", include("grandchallenge.pages.urls", namespace="pages")),
28 ]
29
30 if settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:
31 import debug_toolbar
32
33 urlpatterns = [
34 path("__debug__/", include(debug_toolbar.urls))
35 ] + urlpatterns
36
[end of app/config/urls/challenge_subdomain.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/config/urls/challenge_subdomain.py b/app/config/urls/challenge_subdomain.py
--- a/app/config/urls/challenge_subdomain.py
+++ b/app/config/urls/challenge_subdomain.py
@@ -4,6 +4,9 @@
 
 from grandchallenge.challenges.views import ChallengeUpdate
 
+handler500 = "grandchallenge.core.views.handler500"
+
+
urlpatterns = [
path(
"robots.txt",
| {"golden_diff": "diff --git a/app/config/urls/challenge_subdomain.py b/app/config/urls/challenge_subdomain.py\n--- a/app/config/urls/challenge_subdomain.py\n+++ b/app/config/urls/challenge_subdomain.py\n@@ -4,6 +4,9 @@\n \n from grandchallenge.challenges.views import ChallengeUpdate\n \n+handler500 = \"grandchallenge.core.views.handler500\"\n+\n+\n urlpatterns = [\n path(\n \"robots.txt\",\n", "issue": "Server error page won't render because of missing context\nIf a view throws a 500 error, the 500.html should get rendered. We recently updated the template to inherit from base.html, and now it will not render anymore because it is missing context variables (the 500 view is by default passed an empty context). \r\n\r\nI'm unsure if we should update the 500 view and add the missing context or if we should go back to not inheriting from base.html for the error views? \r\n\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.urls import include, path\nfrom django.views.generic import TemplateView\n\nfrom grandchallenge.challenges.views import ChallengeUpdate\n\nurlpatterns = [\n path(\n \"robots.txt\",\n TemplateView.as_view(\n template_name=\"robots.txt\", content_type=\"text/plain\"\n ),\n name=\"subdomain_robots_txt\",\n ),\n path(\n \"evaluation/\",\n include(\"grandchallenge.evaluation.urls\", namespace=\"evaluation\"),\n ),\n path(\"teams/\", include(\"grandchallenge.teams.urls\", namespace=\"teams\")),\n path(\n \"participants/\",\n include(\"grandchallenge.participants.urls\", namespace=\"participants\"),\n ),\n path(\"admins/\", include(\"grandchallenge.admins.urls\", namespace=\"admins\")),\n path(\"update/\", ChallengeUpdate.as_view(), name=\"challenge-update\"),\n path(\"summernote/\", include(\"django_summernote.urls\")),\n path(\"\", include(\"grandchallenge.pages.urls\", namespace=\"pages\")),\n]\n\nif settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:\n import debug_toolbar\n\n urlpatterns = [\n path(\"__debug__/\", include(debug_toolbar.urls))\n ] + urlpatterns\n", "path": "app/config/urls/challenge_subdomain.py"}]} | 955 | 99 |
gh_patches_debug_22478 | rasdani/github-patches | git_diff | python-pillow__Pillow-3588 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No n_frames, or bad value for n_frames
When I feed the test file flower2.jpg into this code (from #1630)
```python
im = Image.open( fn )
imgcnt = im.n_frames
colors = im.getcolors( im.width * im.height )
if args.hist:
for cnt, col in colors:
allcolors[ col ] += cnt
for iz in range( 1, imgcnt ):
im = Image.open( fn ) # does getcolors implicitly close????
# without the open, get "seek of closed
# file" error on line below.
im.seek( iz )
colors = im.getcolors( im.width * im.height )
for cnt, col in colors:
allcolors[ col ] += cnt
```
I get "AttributeError: n_frames"
But other .jpg files do not get that error... this one: http://nevcal.com/temporary/20151110-105826gl.jpg has no problem with the attribute error on that line, but it gets a value of 2, apparently handles the seek OK, but dies in the second call to getcolors, with "OSError: image file is truncated (0 bytes not processed)".
</issue>
<code>
[start of src/PIL/MpoImagePlugin.py]
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # MPO file handling
6 #
7 # See "Multi-Picture Format" (CIPA DC-007-Translation 2009, Standard of the
8 # Camera & Imaging Products Association)
9 #
10 # The multi-picture object combines multiple JPEG images (with a modified EXIF
11 # data format) into a single file. While it can theoretically be used much like
12 # a GIF animation, it is commonly used to represent 3D photographs and is (as
13 # of this writing) the most commonly used format by 3D cameras.
14 #
15 # History:
16 # 2014-03-13 Feneric Created
17 #
18 # See the README file for information on usage and redistribution.
19 #
20
21 from . import Image, JpegImagePlugin
22
23 # __version__ is deprecated and will be removed in a future version. Use
24 # PIL.__version__ instead.
25 __version__ = "0.1"
26
27
28 def _accept(prefix):
29 return JpegImagePlugin._accept(prefix)
30
31
32 def _save(im, fp, filename):
33 # Note that we can only save the current frame at present
34 return JpegImagePlugin._save(im, fp, filename)
35
36
37 ##
38 # Image plugin for MPO images.
39
40 class MpoImageFile(JpegImagePlugin.JpegImageFile):
41
42 format = "MPO"
43 format_description = "MPO (CIPA DC-007)"
44 _close_exclusive_fp_after_loading = False
45
46 def _open(self):
47 self.fp.seek(0) # prep the fp in order to pass the JPEG test
48 JpegImagePlugin.JpegImageFile._open(self)
49 self.mpinfo = self._getmp()
50 self.__framecount = self.mpinfo[0xB001]
51 self.__mpoffsets = [mpent['DataOffset'] + self.info['mpoffset']
52 for mpent in self.mpinfo[0xB002]]
53 self.__mpoffsets[0] = 0
54 # Note that the following assertion will only be invalid if something
55 # gets broken within JpegImagePlugin.
56 assert self.__framecount == len(self.__mpoffsets)
57 del self.info['mpoffset'] # no longer needed
58 self.__fp = self.fp # FIXME: hack
59 self.__fp.seek(self.__mpoffsets[0]) # get ready to read first frame
60 self.__frame = 0
61 self.offset = 0
62 # for now we can only handle reading and individual frame extraction
63 self.readonly = 1
64
65 def load_seek(self, pos):
66 self.__fp.seek(pos)
67
68 @property
69 def n_frames(self):
70 return self.__framecount
71
72 @property
73 def is_animated(self):
74 return self.__framecount > 1
75
76 def seek(self, frame):
77 if not self._seek_check(frame):
78 return
79 self.fp = self.__fp
80 self.offset = self.__mpoffsets[frame]
81 self.tile = [
82 ("jpeg", (0, 0) + self.size, self.offset, (self.mode, ""))
83 ]
84 self.__frame = frame
85
86 def tell(self):
87 return self.__frame
88
89 def _close__fp(self):
90 try:
91 if self.__fp != self.fp:
92 self.__fp.close()
93 except AttributeError:
94 pass
95 finally:
96 self.__fp = None
97
98
99 # ---------------------------------------------------------------------
100 # Registry stuff
101
102 # Note that since MPO shares a factory with JPEG, we do not need to do a
103 # separate registration for it here.
104 # Image.register_open(MpoImageFile.format,
105 # JpegImagePlugin.jpeg_factory, _accept)
106 Image.register_save(MpoImageFile.format, _save)
107
108 Image.register_extension(MpoImageFile.format, ".mpo")
109
110 Image.register_mime(MpoImageFile.format, "image/mpo")
111
[end of src/PIL/MpoImagePlugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/PIL/MpoImagePlugin.py b/src/PIL/MpoImagePlugin.py
--- a/src/PIL/MpoImagePlugin.py
+++ b/src/PIL/MpoImagePlugin.py
@@ -18,7 +18,8 @@
# See the README file for information on usage and redistribution.
 #
 
-from . import Image, JpegImagePlugin
+from . import Image, ImageFile, JpegImagePlugin
+from ._binary import i16be as i16
 
# __version__ is deprecated and will be removed in a future version. Use
# PIL.__version__ instead.
@@ -78,6 +79,20 @@
return
self.fp = self.__fp
self.offset = self.__mpoffsets[frame]
+
+ self.fp.seek(self.offset + 2) # skip SOI marker
+ if "parsed_exif" in self.info:
+ del self.info["parsed_exif"]
+ if i16(self.fp.read(2)) == 0xFFE1: # APP1
+ n = i16(self.fp.read(2))-2
+ self.info["exif"] = ImageFile._safe_read(self.fp, n)
+
+ exif = self._getexif()
+ if 40962 in exif and 40963 in exif:
+ self._size = (exif[40962], exif[40963])
+ elif "exif" in self.info:
+ del self.info["exif"]
+
self.tile = [
("jpeg", (0, 0) + self.size, self.offset, (self.mode, ""))
]
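Independently of this plugin fix, scripts like the one in the report can defend against both symptoms — treat `n_frames` as optional, and opt into Pillow's documented tolerance for truncated streams:
```python
from PIL import Image, ImageFile

# Documented Pillow switch for the "image file is truncated" OSError;
# use with care, since it also hides genuinely corrupt files.
ImageFile.LOAD_TRUNCATED_IMAGES = True

im = Image.open(fn)
for iz in range(getattr(im, "n_frames", 1)):  # plain JPEGs have no n_frames
    im.seek(iz)
    colors = im.getcolors(im.width * im.height)
```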
| {"golden_diff": "diff --git a/src/PIL/MpoImagePlugin.py b/src/PIL/MpoImagePlugin.py\n--- a/src/PIL/MpoImagePlugin.py\n+++ b/src/PIL/MpoImagePlugin.py\n@@ -18,7 +18,8 @@\n # See the README file for information on usage and redistribution.\n #\n \n-from . import Image, JpegImagePlugin\n+from . import Image, ImageFile, JpegImagePlugin\n+from ._binary import i16be as i16\n \n # __version__ is deprecated and will be removed in a future version. Use\n # PIL.__version__ instead.\n@@ -78,6 +79,20 @@\n return\n self.fp = self.__fp\n self.offset = self.__mpoffsets[frame]\n+\n+ self.fp.seek(self.offset + 2) # skip SOI marker\n+ if \"parsed_exif\" in self.info:\n+ del self.info[\"parsed_exif\"]\n+ if i16(self.fp.read(2)) == 0xFFE1: # APP1\n+ n = i16(self.fp.read(2))-2\n+ self.info[\"exif\"] = ImageFile._safe_read(self.fp, n)\n+\n+ exif = self._getexif()\n+ if 40962 in exif and 40963 in exif:\n+ self._size = (exif[40962], exif[40963])\n+ elif \"exif\" in self.info:\n+ del self.info[\"exif\"]\n+\n self.tile = [\n (\"jpeg\", (0, 0) + self.size, self.offset, (self.mode, \"\"))\n ]\n", "issue": "No n_frames, or bad value for n_frames\nWhen I feed the test file flower2.jpg into this code (from #1630)\r\n\r\n```python\r\nim = Image.open( fn )\r\nimgcnt = im.n_frames\r\ncolors = im.getcolors( im.width * im.height )\r\nif args.hist:\r\n for cnt, col in colors:\r\n allcolors[ col ] += cnt\r\n for iz in range( 1, imgcnt ):\r\n im = Image.open( fn ) # does getcolors implicitly close????\r\n # without the open, get \"seek of closed\r\n # file\" error on line below.\r\n im.seek( iz )\r\n colors = im.getcolors( im.width * im.height )\r\n for cnt, col in colors:\r\n allcolors[ col ] += cnt\r\n```\r\n\r\nI get \"AttributeError: n_frames\"\r\n\r\nBut other .jpg files do not get that error... this one: http://nevcal.com/temporary/20151110-105826gl.jpg has no problem with the attribute error on that line, but it gets a value of 2, apparently handles the seek OK, but dies in the second call to getcolors, with \"OSError: image file is truncated (0 bytes not processed)\".\r\n\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# MPO file handling\n#\n# See \"Multi-Picture Format\" (CIPA DC-007-Translation 2009, Standard of the\n# Camera & Imaging Products Association)\n#\n# The multi-picture object combines multiple JPEG images (with a modified EXIF\n# data format) into a single file. While it can theoretically be used much like\n# a GIF animation, it is commonly used to represent 3D photographs and is (as\n# of this writing) the most commonly used format by 3D cameras.\n#\n# History:\n# 2014-03-13 Feneric Created\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom . import Image, JpegImagePlugin\n\n# __version__ is deprecated and will be removed in a future version. 
Use\n# PIL.__version__ instead.\n__version__ = \"0.1\"\n\n\ndef _accept(prefix):\n return JpegImagePlugin._accept(prefix)\n\n\ndef _save(im, fp, filename):\n # Note that we can only save the current frame at present\n return JpegImagePlugin._save(im, fp, filename)\n\n\n##\n# Image plugin for MPO images.\n\nclass MpoImageFile(JpegImagePlugin.JpegImageFile):\n\n format = \"MPO\"\n format_description = \"MPO (CIPA DC-007)\"\n _close_exclusive_fp_after_loading = False\n\n def _open(self):\n self.fp.seek(0) # prep the fp in order to pass the JPEG test\n JpegImagePlugin.JpegImageFile._open(self)\n self.mpinfo = self._getmp()\n self.__framecount = self.mpinfo[0xB001]\n self.__mpoffsets = [mpent['DataOffset'] + self.info['mpoffset']\n for mpent in self.mpinfo[0xB002]]\n self.__mpoffsets[0] = 0\n # Note that the following assertion will only be invalid if something\n # gets broken within JpegImagePlugin.\n assert self.__framecount == len(self.__mpoffsets)\n del self.info['mpoffset'] # no longer needed\n self.__fp = self.fp # FIXME: hack\n self.__fp.seek(self.__mpoffsets[0]) # get ready to read first frame\n self.__frame = 0\n self.offset = 0\n # for now we can only handle reading and individual frame extraction\n self.readonly = 1\n\n def load_seek(self, pos):\n self.__fp.seek(pos)\n\n @property\n def n_frames(self):\n return self.__framecount\n\n @property\n def is_animated(self):\n return self.__framecount > 1\n\n def seek(self, frame):\n if not self._seek_check(frame):\n return\n self.fp = self.__fp\n self.offset = self.__mpoffsets[frame]\n self.tile = [\n (\"jpeg\", (0, 0) + self.size, self.offset, (self.mode, \"\"))\n ]\n self.__frame = frame\n\n def tell(self):\n return self.__frame\n\n def _close__fp(self):\n try:\n if self.__fp != self.fp:\n self.__fp.close()\n except AttributeError:\n pass\n finally:\n self.__fp = None\n\n\n# ---------------------------------------------------------------------\n# Registry stuff\n\n# Note that since MPO shares a factory with JPEG, we do not need to do a\n# separate registration for it here.\n# Image.register_open(MpoImageFile.format,\n# JpegImagePlugin.jpeg_factory, _accept)\nImage.register_save(MpoImageFile.format, _save)\n\nImage.register_extension(MpoImageFile.format, \".mpo\")\n\nImage.register_mime(MpoImageFile.format, \"image/mpo\")\n", "path": "src/PIL/MpoImagePlugin.py"}]} | 1,887 | 377 |
gh_patches_debug_58134 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-382 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Order of poll answer choices mixed up after saving
The order of poll answer choices is mixed up after saving. Restoring the original order is not possible:

</issue>
<code>
[start of apps/polls/models.py]
1 from django.contrib.contenttypes.fields import GenericRelation
2 from django.db import models
3
4 from adhocracy4.comments import models as comment_models
5 from adhocracy4.models.base import UserGeneratedContentModel
6 from adhocracy4.modules import models as module_models
7
8 from . import validators
9
10
11 class Poll(module_models.Item):
12 comments = GenericRelation(comment_models.Comment,
13 related_query_name='poll',
14 object_id_field='object_pk')
15
16
17 class Question(models.Model):
18 label = models.CharField(max_length=255)
19 weight = models.SmallIntegerField()
20
21 poll = models.ForeignKey(
22 'Poll',
23 on_delete=models.CASCADE,
24 related_name='questions'
25 )
26
27 def user_choices_list(self, user):
28 if not user.is_authenticated():
29 return []
30
31 return self.choices\
32 .filter(votes__creator=user)\
33 .values_list('id', flat=True)
34
35 def __str__(self):
36 return self.label
37
38 class Meta:
39 ordering = ['weight']
40
41
42 class ChoiceQuerySet(models.QuerySet):
43
44 def annotate_vote_count(self):
45 return self.annotate(
46 vote_count=models.Count(
47 'votes'
48 )
49 )
50
51
52 class Choice(models.Model):
53 label = models.CharField(max_length=255)
54
55 question = models.ForeignKey(
56 'Question',
57 on_delete=models.CASCADE,
58 related_name='choices',
59 )
60
61 objects = ChoiceQuerySet.as_manager()
62
63 def __str__(self):
64 return '%s @%s' % (self.label, self.question)
65
66
67 class Vote(UserGeneratedContentModel):
68 choice = models.ForeignKey(
69 'Choice',
70 on_delete=models.CASCADE,
71 related_name='votes'
72 )
73
74 def validate_unique(self, exclude=None):
75 super(Vote, self).validate_unique(exclude)
76 validators.single_vote_per_user(self.creator,
77 self.choice.question,
78 self.pk)
79
80 # Make Vote instances behave like items for rule checking
81 @property
82 def module(self):
83 self.choice.question.poll.module
84
85 @property
86 def project(self):
87 return self.module.project
88
89 def __str__(self):
90 return '%s: %s' % (self.creator, self.choice)
91
[end of apps/polls/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/polls/models.py b/apps/polls/models.py
--- a/apps/polls/models.py
+++ b/apps/polls/models.py
@@ -60,6 +60,9 @@
 
     objects = ChoiceQuerySet.as_manager()
 
+    class Meta:
+        ordering = ['id']
+
     def __str__(self):
         return '%s @%s' % (self.label, self.question)
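Without an explicit `Meta.ordering`, the database is free to return `Choice` rows in whatever order it likes, which is what shuffled the redisplayed choices; pinning the ordering to `id` makes related querysets come back in creation order. A quick way to observe the effect with the models above:
```python
# After the fix, the related queryset is deterministic:
question = Question.objects.first()
labels = list(question.choices.values_list('label', flat=True))
# `labels` now always reflects the order the choices were created in,
# rather than whatever order the database happened to emit.
```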
| {"golden_diff": "diff --git a/apps/polls/models.py b/apps/polls/models.py\n--- a/apps/polls/models.py\n+++ b/apps/polls/models.py\n@@ -60,6 +60,9 @@\n \n objects = ChoiceQuerySet.as_manager()\n \n+ class Meta:\n+ ordering = ['id']\n+\n def __str__(self):\n return '%s @%s' % (self.label, self.question)\n", "issue": "Order of poll answer choices mixed up after saving\nThe order of poll answer choices is mixed up after saving. Restoring original order is not possible:\r\n\r\n\n", "before_files": [{"content": "from django.contrib.contenttypes.fields import GenericRelation\nfrom django.db import models\n\nfrom adhocracy4.comments import models as comment_models\nfrom adhocracy4.models.base import UserGeneratedContentModel\nfrom adhocracy4.modules import models as module_models\n\nfrom . import validators\n\n\nclass Poll(module_models.Item):\n comments = GenericRelation(comment_models.Comment,\n related_query_name='poll',\n object_id_field='object_pk')\n\n\nclass Question(models.Model):\n label = models.CharField(max_length=255)\n weight = models.SmallIntegerField()\n\n poll = models.ForeignKey(\n 'Poll',\n on_delete=models.CASCADE,\n related_name='questions'\n )\n\n def user_choices_list(self, user):\n if not user.is_authenticated():\n return []\n\n return self.choices\\\n .filter(votes__creator=user)\\\n .values_list('id', flat=True)\n\n def __str__(self):\n return self.label\n\n class Meta:\n ordering = ['weight']\n\n\nclass ChoiceQuerySet(models.QuerySet):\n\n def annotate_vote_count(self):\n return self.annotate(\n vote_count=models.Count(\n 'votes'\n )\n )\n\n\nclass Choice(models.Model):\n label = models.CharField(max_length=255)\n\n question = models.ForeignKey(\n 'Question',\n on_delete=models.CASCADE,\n related_name='choices',\n )\n\n objects = ChoiceQuerySet.as_manager()\n\n def __str__(self):\n return '%s @%s' % (self.label, self.question)\n\n\nclass Vote(UserGeneratedContentModel):\n choice = models.ForeignKey(\n 'Choice',\n on_delete=models.CASCADE,\n related_name='votes'\n )\n\n def validate_unique(self, exclude=None):\n super(Vote, self).validate_unique(exclude)\n validators.single_vote_per_user(self.creator,\n self.choice.question,\n self.pk)\n\n # Make Vote instances behave like items for rule checking\n @property\n def module(self):\n self.choice.question.poll.module\n\n @property\n def project(self):\n return self.module.project\n\n def __str__(self):\n return '%s: %s' % (self.creator, self.choice)\n", "path": "apps/polls/models.py"}]} | 1,298 | 92 |
gh_patches_debug_11332 | rasdani/github-patches | git_diff | certbot__certbot-5383 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Apache SSL cipher settings are old, no ChaCha20
The Nginx plugin's `options-ssl-nginx.conf` file uses Mozilla's current intermediate SSL cipher configuration.
The Apache plugin probably did too... except it hasn't been updated since 2014: 2faacc1b43786edd5386305f9cffec376b5a5d26
Should Certbot's Apache settings be updated?
The main difference is that the new configuration adds ChaCha20 cipher suites. (It also removes a few things.)
Should this wait until after further documentation/feature improvements in #4830?
If so, how about a stopgap patch to add ChaCha20 without removing anything? (A way to inspect what such a stopgap string would enable is sketched after the list below.)
Differences with OpenSSL 1.0.2 or 1.1.0:
* Adds ChaCha20. (Yay!)
* Adds newer 3DES cipher suites with key exchange/signature algorithms other than RSA/RSA. (Probably doesn't matter: anything that supports ECDHE-ECDSA ought to support a better cipher than 3DES.) (Edit: Mozilla suggests that EDH-RSA 3DES is useful, though.)
* Removes AES-CCM. (OpenSSL 1.1. Probably only included inadvertently.)
* Removes Camellia. (Perfectly nice cipher, but everybody uses AES.)
* Removes some static DH and SRP key exchange cipher suites. (Probably only included by accident, mostly or entirely ignored by servers and clients.)
* Changes the order of some things.
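One way to inspect what a candidate stopgap string would actually enable, using only the standard library — the cipher string below is a hypothetical example, not Certbot's shipped configuration; run it against an OpenSSL 1.1 build to see the ChaCha20 suites listed first:
```python
import ssl

# Hypothetical stopgap: ChaCha20 suites first, existing ECDHE/DHE AES kept.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+CHACHA20:ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES")

for cipher in ctx.get_ciphers():  # expands to the concrete suite list
    print(cipher["name"])
```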
</issue>
<code>
[start of certbot-apache/certbot_apache/constants.py]
1 """Apache plugin constants."""
2 import pkg_resources
3
4
5 MOD_SSL_CONF_DEST = "options-ssl-apache.conf"
6 """Name of the mod_ssl config file as saved in `IConfig.config_dir`."""
7
8
9 UPDATED_MOD_SSL_CONF_DIGEST = ".updated-options-ssl-apache-conf-digest.txt"
10 """Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`."""
11
12 ALL_SSL_OPTIONS_HASHES = [
13 '2086bca02db48daf93468332543c60ac6acdb6f0b58c7bfdf578a5d47092f82a',
14 '4844d36c9a0f587172d9fa10f4f1c9518e3bcfa1947379f155e16a70a728c21a',
15 '5a922826719981c0a234b1fbcd495f3213e49d2519e845ea0748ba513044b65b',
16 '4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',
17 'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',
18 'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',
19 ]
20 """SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC"""
21
22 AUGEAS_LENS_DIR = pkg_resources.resource_filename(
23 "certbot_apache", "augeas_lens")
24 """Path to the Augeas lens directory"""
25
26 REWRITE_HTTPS_ARGS = [
27 "^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[L,NE,R=permanent]"]
28 """Apache version<2.3.9 rewrite rule arguments used for redirections to
29 https vhost"""
30
31 REWRITE_HTTPS_ARGS_WITH_END = [
32 "^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[END,NE,R=permanent]"]
33 """Apache version >= 2.3.9 rewrite rule arguments used for redirections to
34 https vhost"""
35
36 OLD_REWRITE_HTTPS_ARGS = [
37 ["^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[L,QSA,R=permanent]"],
38 ["^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[END,QSA,R=permanent]"]]
39
40 HSTS_ARGS = ["always", "set", "Strict-Transport-Security",
41 "\"max-age=31536000\""]
42 """Apache header arguments for HSTS"""
43
44 UIR_ARGS = ["always", "set", "Content-Security-Policy",
45 "upgrade-insecure-requests"]
46
47 HEADER_ARGS = {"Strict-Transport-Security": HSTS_ARGS,
48 "Upgrade-Insecure-Requests": UIR_ARGS}
49
[end of certbot-apache/certbot_apache/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/certbot-apache/certbot_apache/constants.py b/certbot-apache/certbot_apache/constants.py
--- a/certbot-apache/certbot_apache/constants.py
+++ b/certbot-apache/certbot_apache/constants.py
@@ -16,6 +16,8 @@
'4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',
'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',
'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',
+ '80720bd171ccdc2e6b917ded340defae66919e4624962396b992b7218a561791',
+ 'c0c022ea6b8a51ecc8f1003d0a04af6c3f2bc1c3ce506b3c2dfc1f11ef931082',
]
"""SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC"""
| {"golden_diff": "diff --git a/certbot-apache/certbot_apache/constants.py b/certbot-apache/certbot_apache/constants.py\n--- a/certbot-apache/certbot_apache/constants.py\n+++ b/certbot-apache/certbot_apache/constants.py\n@@ -16,6 +16,8 @@\n '4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',\n 'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',\n 'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',\n+ '80720bd171ccdc2e6b917ded340defae66919e4624962396b992b7218a561791',\n+ 'c0c022ea6b8a51ecc8f1003d0a04af6c3f2bc1c3ce506b3c2dfc1f11ef931082',\n ]\n \"\"\"SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC\"\"\"\n", "issue": "Apache SSL cipher settings are old, no ChaCha20\nThe Nginx plugin's `options-ssl-nginx.conf` file uses Mozilla's current intermediate SSL cipher configuration.\r\n\r\nThe Apache plugin probably did too... except it hasn't been updated since 2014: 2faacc1b43786edd5386305f9cffec376b5a5d26\r\n\r\nShould Certbot's Apache settings be updated?\r\n\r\nThe main difference is that the new configuration adds ChaCha20 cipher suites. (It also removes a few things.)\r\n\r\nShould this wait until after further documentation/feature improvements in #4830?\r\n\r\nIf so, how about a stopgap patch to add ChaCha20 without removing anything?\r\n\r\nDifferences with OpenSSL 1.0.2 or 1.1.0:\r\n\r\n* Adds ChaCha20. (Yay!)\r\n* Adds newer 3DES cipher suites with key exchange/signature algorithms other than RSA/RSA. (Probably doesn't matter. Anything that supports ECDHE-ECDSA ought to support a better cipher than 3DES. ) (Edit: Mozilla suggests that EDH-RSA 3DES is useful, though,.)\r\n* Removes AES-CCM. (OpenSSL 1.1. Probably only included inadvertently.)\r\n* Removes Camellia. (Perfectly nice cipher, but everybody uses AES.)\r\n* Removes some static DH and SRP key exchange cipher suites. 
(Probably only included by accident, mostly or entirely ignored by servers and clients.)\r\n* Changes the order of some things.\n", "before_files": [{"content": "\"\"\"Apache plugin constants.\"\"\"\nimport pkg_resources\n\n\nMOD_SSL_CONF_DEST = \"options-ssl-apache.conf\"\n\"\"\"Name of the mod_ssl config file as saved in `IConfig.config_dir`.\"\"\"\n\n\nUPDATED_MOD_SSL_CONF_DIGEST = \".updated-options-ssl-apache-conf-digest.txt\"\n\"\"\"Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`.\"\"\"\n\nALL_SSL_OPTIONS_HASHES = [\n '2086bca02db48daf93468332543c60ac6acdb6f0b58c7bfdf578a5d47092f82a',\n '4844d36c9a0f587172d9fa10f4f1c9518e3bcfa1947379f155e16a70a728c21a',\n '5a922826719981c0a234b1fbcd495f3213e49d2519e845ea0748ba513044b65b',\n '4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',\n 'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',\n 'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',\n]\n\"\"\"SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC\"\"\"\n\nAUGEAS_LENS_DIR = pkg_resources.resource_filename(\n \"certbot_apache\", \"augeas_lens\")\n\"\"\"Path to the Augeas lens directory\"\"\"\n\nREWRITE_HTTPS_ARGS = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[L,NE,R=permanent]\"]\n\"\"\"Apache version<2.3.9 rewrite rule arguments used for redirections to\nhttps vhost\"\"\"\n\nREWRITE_HTTPS_ARGS_WITH_END = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,NE,R=permanent]\"]\n\"\"\"Apache version >= 2.3.9 rewrite rule arguments used for redirections to\n https vhost\"\"\"\n\nOLD_REWRITE_HTTPS_ARGS = [\n [\"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[L,QSA,R=permanent]\"],\n [\"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,QSA,R=permanent]\"]]\n\nHSTS_ARGS = [\"always\", \"set\", \"Strict-Transport-Security\",\n \"\\\"max-age=31536000\\\"\"]\n\"\"\"Apache header arguments for HSTS\"\"\"\n\nUIR_ARGS = [\"always\", \"set\", \"Content-Security-Policy\",\n \"upgrade-insecure-requests\"]\n\nHEADER_ARGS = {\"Strict-Transport-Security\": HSTS_ARGS,\n \"Upgrade-Insecure-Requests\": UIR_ARGS}\n", "path": "certbot-apache/certbot_apache/constants.py"}]} | 1,748 | 393 |
gh_patches_debug_12635 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3113 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'set_logger_provider' no longer importable from 'opentelemetry.sdk._logs'
A week ago I was able to run the example for logging in https://github.com/open-telemetry/opentelemetry-python/tree/main/docs/examples/logs.
**Steps to reproduce**
I install either version via:
```bash
pip install opentelemetry-exporter-otlp==1.15.0
```
or
```bash
pip install opentelemetry-exporter-otlp==1.14.0
```
Then run the following code:
```python
from opentelemetry.sdk._logs import set_logger_provider
```
**What is the expected behavior?**
Should import without any issues.
**What is the actual behavior?**
```python
ImportError: cannot import name 'set_logger_provider' from 'opentelemetry.sdk._logs' (/usr/local/lib/python3.10/dist-packages/opentelemetry/sdk/_logs/__init__.py)
```
**Additional context**
I found that when I installed `opentelemetry-exporter-otlp==1.14.0`, `opentelemetry-sdk==1.15.0` was installed as well. When I installed `opentelemetry-sdk==1.14.0`, everything works fine again.
</issue>
<code>
[start of docs/examples/logs/example.py]
1 import logging
2
3 from opentelemetry import trace
4 from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
5 OTLPLogExporter,
6 )
7 from opentelemetry.sdk._logs import (
8 LoggerProvider,
9 LoggingHandler,
10 set_logger_provider,
11 )
12 from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
13 from opentelemetry.sdk.resources import Resource
14 from opentelemetry.sdk.trace import TracerProvider
15 from opentelemetry.sdk.trace.export import (
16 BatchSpanProcessor,
17 ConsoleSpanExporter,
18 )
19
20 trace.set_tracer_provider(TracerProvider())
21 trace.get_tracer_provider().add_span_processor(
22 BatchSpanProcessor(ConsoleSpanExporter())
23 )
24
25 logger_provider = LoggerProvider(
26 resource=Resource.create(
27 {
28 "service.name": "shoppingcart",
29 "service.instance.id": "instance-12",
30 }
31 ),
32 )
33 set_logger_provider(logger_provider)
34
35 exporter = OTLPLogExporter(insecure=True)
36 logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
37 handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
38
39 # Attach OTLP handler to root logger
40 logging.getLogger().addHandler(handler)
41
42 # Log directly
43 logging.info("Jackdaws love my big sphinx of quartz.")
44
45 # Create different namespaced loggers
46 logger1 = logging.getLogger("myapp.area1")
47 logger2 = logging.getLogger("myapp.area2")
48
49 logger1.debug("Quick zephyrs blow, vexing daft Jim.")
50 logger1.info("How quickly daft jumping zebras vex.")
51 logger2.warning("Jail zesty vixen who grabbed pay from quack.")
52 logger2.error("The five boxing wizards jump quickly.")
53
54
55 # Trace context correlation
56 tracer = trace.get_tracer(__name__)
57 with tracer.start_as_current_span("foo"):
58 # Do something
59 logger2.error("Hyderabad, we have a major problem.")
60
61 logger_provider.shutdown()
62
[end of docs/examples/logs/example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/examples/logs/example.py b/docs/examples/logs/example.py
--- a/docs/examples/logs/example.py
+++ b/docs/examples/logs/example.py
@@ -1,14 +1,11 @@
import logging
from opentelemetry import trace
+from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
OTLPLogExporter,
)
-from opentelemetry.sdk._logs import (
- LoggerProvider,
- LoggingHandler,
- set_logger_provider,
-)
+from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
| {"golden_diff": "diff --git a/docs/examples/logs/example.py b/docs/examples/logs/example.py\n--- a/docs/examples/logs/example.py\n+++ b/docs/examples/logs/example.py\n@@ -1,14 +1,11 @@\n import logging\n \n from opentelemetry import trace\n+from opentelemetry._logs import set_logger_provider\n from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (\n OTLPLogExporter,\n )\n-from opentelemetry.sdk._logs import (\n- LoggerProvider,\n- LoggingHandler,\n- set_logger_provider,\n-)\n+from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler\n from opentelemetry.sdk._logs.export import BatchLogRecordProcessor\n from opentelemetry.sdk.resources import Resource\n from opentelemetry.sdk.trace import TracerProvider\n", "issue": "'set_logger_provider' no longer importable from 'opentelemetry.sdk._logs'\nA week ago I was able to run the example for logging in https://github.com/open-telemetry/opentelemetry-python/tree/main/docs/examples/logs.\r\n\r\n**Steps to reproduce**\r\nI install either version via:\r\n```bash\r\npip install opentelemetry-exporter-otlp==1.15.0\r\n```\r\nor\r\n```bash\r\npip install opentelemetry-exporter-otlp==1.14.0\r\n```\r\nThen run the following code:\r\n```python\r\nfrom opentelemetry.sdk._logs import set_logger_provider\r\n```\r\n\r\n**What is the expected behavior?**\r\nShould import without any issues.\r\n\r\n**What is the actual behavior?**\r\n```python\r\nImportError: cannot import name 'set_logger_provider' from 'opentelemetry.sdk._logs' (/usr/local/lib/python3.10/dist-packages/opentelemetry/sdk/_logs/__init__.py)\r\n```\r\n\r\n\r\n**Additional context**\r\nI found that when I installed `opentelemetry-exporter-otlp==1.14.0`, `opentelemetry-sdk==1.15.0` was installed as well. When I installed `opentelemetry-sdk==1.14.0`, everything works fine again.\r\n\n", "before_files": [{"content": "import logging\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.otlp.proto.grpc._log_exporter import (\n OTLPLogExporter,\n)\nfrom opentelemetry.sdk._logs import (\n LoggerProvider,\n LoggingHandler,\n set_logger_provider,\n)\nfrom opentelemetry.sdk._logs.export import BatchLogRecordProcessor\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import (\n BatchSpanProcessor,\n ConsoleSpanExporter,\n)\n\ntrace.set_tracer_provider(TracerProvider())\ntrace.get_tracer_provider().add_span_processor(\n BatchSpanProcessor(ConsoleSpanExporter())\n)\n\nlogger_provider = LoggerProvider(\n resource=Resource.create(\n {\n \"service.name\": \"shoppingcart\",\n \"service.instance.id\": \"instance-12\",\n }\n ),\n)\nset_logger_provider(logger_provider)\n\nexporter = OTLPLogExporter(insecure=True)\nlogger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))\nhandler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)\n\n# Attach OTLP handler to root logger\nlogging.getLogger().addHandler(handler)\n\n# Log directly\nlogging.info(\"Jackdaws love my big sphinx of quartz.\")\n\n# Create different namespaced loggers\nlogger1 = logging.getLogger(\"myapp.area1\")\nlogger2 = logging.getLogger(\"myapp.area2\")\n\nlogger1.debug(\"Quick zephyrs blow, vexing daft Jim.\")\nlogger1.info(\"How quickly daft jumping zebras vex.\")\nlogger2.warning(\"Jail zesty vixen who grabbed pay from quack.\")\nlogger2.error(\"The five boxing wizards jump quickly.\")\n\n\n# Trace context correlation\ntracer = trace.get_tracer(__name__)\nwith 
tracer.start_as_current_span(\"foo\"):\n # Do something\n logger2.error(\"Hyderabad, we have a major problem.\")\n\nlogger_provider.shutdown()\n", "path": "docs/examples/logs/example.py"}]} | 1,323 | 166 |
gh_patches_debug_31186 | rasdani/github-patches | git_diff | facebookresearch__habitat-lab-272 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix contains method in habitat space classes
## 🐛 Bug
The `contains` method for the `EmptySpace`, `ActionSpace`, and `ListSpace` classes in `habitat.core.spaces` all contain errors. Specifically:
1. `EmptySpace.contains(x)` should return `True` if `x is None`.
2. `ActionSpace.contains(x)` improperly handles invalid keys and invalid `x["action"]` values.
3. `ListSpace.contains(x)` improperly checks the length of `x`.
## Command
`EmptySpace.contains()`
`ActionSpace.contains()`
`ListSpace.contains()`
## To Reproduce
Code to reproduce the behavior:
```python
>>> import gym
>>> from habitat.core.spaces import EmptySpace, ActionSpace, ListSpace
>>>
>>> space = EmptySpace()
>>> print(space.contains(None)) # should be True
False
>>>
>>> space = ActionSpace({
... "move": gym.spaces.Dict({
... "position": gym.spaces.Discrete(2),
... "velocity": gym.spaces.Discrete(3)
... }),
... "move_forward": EmptySpace(),
... })
>>> space.contains({'action': 'move'}) # should be False
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/habitat/habitat-api/habitat/core/spaces.py", line 64, in contains
if not self.spaces[x["action"]].contains(x["action_args"]):
KeyError: 'action_args'
>>>
>>> space.contains({'action': None}) # should be false
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/habitat/habitat-api/habitat/core/spaces.py", line 64, in contains
if not self.spaces[x["action"]].contains(x["action_args"]):
KeyError: None
>>>
>>> space = ListSpace(gym.spaces.Discrete(2), 5, 10)
>>> print(space.contains([0, 1, 0, 1])) # should be True
False
```
## Expected behavior
The `contains` methods should properly return `True`, `False`, `False`, and `True` in the four cases above, respectively.
## Additional context
There is also an error in example code in the docs for the `ActionSpace` class.
</issue>
<code>
[start of habitat/core/spaces.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from collections import OrderedDict
8 from typing import Sized
9
10 import gym
11 from gym import Space
12
13
14 class EmptySpace(Space):
15 """
16 A ``gym.Space`` that reflects arguments space for action that doesn't have
17 arguments. Needed for consistency and always samples `None` value.
18 """
19
20 def sample(self):
21 return None
22
23 def contains(self, x):
24 return False
25
26
27 class ActionSpace(gym.spaces.Dict):
28 """
29 A dictionary of ``EmbodiedTask`` actions and their argument spaces.
30
31 .. code:: py
32
33 self.observation_space = spaces.ActionSpace(
34 "move": spaces.Dict({
35 "position": spaces.Discrete(2),
36 "velocity": spaces.Discrete(3)
37 },
38 "move_forward": EmptySpace,
39 )
40 )
41 """
42
43 def __init__(self, spaces):
44 if isinstance(spaces, dict):
45 self.spaces = OrderedDict(sorted(list(spaces.items())))
46 if isinstance(spaces, list):
47 self.spaces = OrderedDict(spaces)
48 self.actions_select = gym.spaces.Discrete(len(self.spaces))
49
50 @property
51 def n(self):
52 return len(self.spaces)
53
54 def sample(self):
55 action_index = self.actions_select.sample()
56 return {
57 "action": list(self.spaces.keys())[action_index],
58 "action_args": list(self.spaces.values())[action_index].sample(),
59 }
60
61 def contains(self, x):
62 if not isinstance(x, dict) and {"action", "action_args"} not in x:
63 return False
64 if not self.spaces[x["action"]].contains(x["action_args"]):
65 return False
66 return True
67
68 def __repr__(self):
69 return (
70 "ActionSpace("
71 + ", ".join([k + ":" + str(s) for k, s in self.spaces.items()])
72 + ")"
73 )
74
75
76 class ListSpace(Space):
77 """
78 A ``gym.Space`` that describes a list of other Space. Used to describe
79 list of tokens ids, vectors and etc.
80
81 .. code:: py
82
83 observation_space = ListSpace(spaces.Discrete(
84 dataset.question_vocab.get_size()))
85 """
86
87 def __init__(self, space, min_seq_length=0, max_seq_length=1 << 15):
88 self.min_seq_length = min_seq_length
89 self.max_seq_length = max_seq_length
90 self.space = space
91 self.length_select = gym.spaces.Discrete(
92 max_seq_length - min_seq_length
93 )
94
95 def sample(self):
96 seq_length = self.length_select.sample() + self.min_seq_length
97 return [self.space.sample() for _ in range(seq_length)]
98
99 def contains(self, x):
100 if not isinstance(x, Sized):
101 return False
102
103 if self.min_seq_length <= len(x) <= self.max_seq_length:
104 return False
105
106 return all([self.space.contains(el) for el in x])
107
108 def __repr__(self):
109 return (
110 f"ListSpace({self.space}, min_seq_length="
111 f"{self.min_seq_length}, max_seq_length={self.max_seq_length})"
112 )
113
[end of habitat/core/spaces.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/habitat/core/spaces.py b/habitat/core/spaces.py
--- a/habitat/core/spaces.py
+++ b/habitat/core/spaces.py
@@ -21,8 +21,13 @@
return None
def contains(self, x):
+ if x is None:
+ return True
return False
+ def __repr__(self):
+ return "EmptySpace()"
+
class ActionSpace(gym.spaces.Dict):
"""
@@ -30,14 +35,13 @@
.. code:: py
- self.observation_space = spaces.ActionSpace(
+ self.observation_space = spaces.ActionSpace({
"move": spaces.Dict({
"position": spaces.Discrete(2),
"velocity": spaces.Discrete(3)
- },
- "move_forward": EmptySpace,
- )
- )
+ }),
+ "move_forward": EmptySpace(),
+ })
"""
def __init__(self, spaces):
@@ -59,9 +63,11 @@
}
def contains(self, x):
- if not isinstance(x, dict) and {"action", "action_args"} not in x:
+ if not isinstance(x, dict) or "action" not in x:
+ return False
+ if x["action"] not in self.spaces:
return False
- if not self.spaces[x["action"]].contains(x["action_args"]):
+ if not self.spaces[x["action"]].contains(x.get("action_args", None)):
return False
return True
@@ -100,7 +106,7 @@
if not isinstance(x, Sized):
return False
- if self.min_seq_length <= len(x) <= self.max_seq_length:
+ if not (self.min_seq_length <= len(x) <= self.max_seq_length):
return False
return all([self.space.contains(el) for el in x])
| {"golden_diff": "diff --git a/habitat/core/spaces.py b/habitat/core/spaces.py\n--- a/habitat/core/spaces.py\n+++ b/habitat/core/spaces.py\n@@ -21,8 +21,13 @@\n return None\n \n def contains(self, x):\n+ if x is None:\n+ return True\n return False\n \n+ def __repr__(self):\n+ return \"EmptySpace()\"\n+\n \n class ActionSpace(gym.spaces.Dict):\n \"\"\"\n@@ -30,14 +35,13 @@\n \n .. code:: py\n \n- self.observation_space = spaces.ActionSpace(\n+ self.observation_space = spaces.ActionSpace({\n \"move\": spaces.Dict({\n \"position\": spaces.Discrete(2),\n \"velocity\": spaces.Discrete(3)\n- },\n- \"move_forward\": EmptySpace,\n- )\n- )\n+ }),\n+ \"move_forward\": EmptySpace(),\n+ })\n \"\"\"\n \n def __init__(self, spaces):\n@@ -59,9 +63,11 @@\n }\n \n def contains(self, x):\n- if not isinstance(x, dict) and {\"action\", \"action_args\"} not in x:\n+ if not isinstance(x, dict) or \"action\" not in x:\n+ return False\n+ if x[\"action\"] not in self.spaces:\n return False\n- if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\n+ if not self.spaces[x[\"action\"]].contains(x.get(\"action_args\", None)):\n return False\n return True\n \n@@ -100,7 +106,7 @@\n if not isinstance(x, Sized):\n return False\n \n- if self.min_seq_length <= len(x) <= self.max_seq_length:\n+ if not (self.min_seq_length <= len(x) <= self.max_seq_length):\n return False\n \n return all([self.space.contains(el) for el in x])\n", "issue": "Fix contains method in habitat space classes\n## \ud83d\udc1b Bug\r\n\r\nThe `contains` method for the `EmptySpace`, `ActionSpace`, and `ListSpace` classes in `habitat.core.spaces` all contain errors. Specifically:\r\n1. `EmptySpace.contains(x)` should return `True` if `x is None`.\r\n2. `ActionSpace.contains(x)` improperly handles invalid keys and invalid `x[\"action\"]` values.\r\n3. `ListSpace.contains(x)` improperly checks the length of `x`.\r\n\r\n## Command\r\n\r\n`EmptySpace.contains()`\r\n`ActionSpace.contains()`\r\n`ListSpace.contains()`\r\n\r\n## To Reproduce\r\n\r\nCode to reproduce the behavior:\r\n\r\n```python\r\n>>> import gym\r\n>>> from habitat.core.spaces import EmptySpace, ActionSpace, ListSpace\r\n>>>\r\n>>> space = EmptySpace()\r\n>>> print(space.contains(None)) # should be True\r\nFalse\r\n>>>\r\n>>> space = ActionSpace({\r\n... \"move\": gym.spaces.Dict({\r\n... \"position\": gym.spaces.Discrete(2),\r\n... \"velocity\": gym.spaces.Discrete(3)\r\n... }),\r\n... \"move_forward\": EmptySpace(),\r\n... 
})\r\n>>> space.contains({'action': 'move'}) # should be False\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/mnt/habitat/habitat-api/habitat/core/spaces.py\", line 64, in contains\r\n if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\r\nKeyError: 'action_args'\r\n>>>\r\n>>> space.contains({'action': None}) # should be false\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/mnt/habitat/habitat-api/habitat/core/spaces.py\", line 64, in contains\r\n if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\r\nKeyError: None\r\n>>>\r\n>>> space = ListSpace(gym.spaces.Discrete(2), 5, 10)\r\n>>> print(space.contains([0, 1, 0, 1])) # should be True\r\nFalse\r\n```\r\n\r\n## Expected behavior\r\n\r\nThe `contains` methods should properly return `True`, `False`, `False`, and `True` in the four cases above, respectively.\r\n\r\n## Additional context\r\n\r\nThere is also an error in example code in the docs for the `ActionSpace` class.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom collections import OrderedDict\nfrom typing import Sized\n\nimport gym\nfrom gym import Space\n\n\nclass EmptySpace(Space):\n \"\"\"\n A ``gym.Space`` that reflects arguments space for action that doesn't have\n arguments. Needed for consistency ang always samples `None` value.\n \"\"\"\n\n def sample(self):\n return None\n\n def contains(self, x):\n return False\n\n\nclass ActionSpace(gym.spaces.Dict):\n \"\"\"\n A dictionary of ``EmbodiedTask`` actions and their argument spaces.\n\n .. code:: py\n\n self.observation_space = spaces.ActionSpace(\n \"move\": spaces.Dict({\n \"position\": spaces.Discrete(2),\n \"velocity\": spaces.Discrete(3)\n },\n \"move_forward\": EmptySpace,\n )\n )\n \"\"\"\n\n def __init__(self, spaces):\n if isinstance(spaces, dict):\n self.spaces = OrderedDict(sorted(list(spaces.items())))\n if isinstance(spaces, list):\n self.spaces = OrderedDict(spaces)\n self.actions_select = gym.spaces.Discrete(len(self.spaces))\n\n @property\n def n(self):\n return len(self.spaces)\n\n def sample(self):\n action_index = self.actions_select.sample()\n return {\n \"action\": list(self.spaces.keys())[action_index],\n \"action_args\": list(self.spaces.values())[action_index].sample(),\n }\n\n def contains(self, x):\n if not isinstance(x, dict) and {\"action\", \"action_args\"} not in x:\n return False\n if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\n return False\n return True\n\n def __repr__(self):\n return (\n \"ActionSpace(\"\n + \", \".join([k + \":\" + str(s) for k, s in self.spaces.items()])\n + \")\"\n )\n\n\nclass ListSpace(Space):\n \"\"\"\n A ``gym.Space`` that describes a list of other Space. Used to describe\n list of tokens ids, vectors and etc.\n\n .. 
code:: py\n\n observation_space = ListSpace(spaces.Discrete(\n dataset.question_vocab.get_size()))\n \"\"\"\n\n def __init__(self, space, min_seq_length=0, max_seq_length=1 << 15):\n self.min_seq_length = min_seq_length\n self.max_seq_length = max_seq_length\n self.space = space\n self.length_select = gym.spaces.Discrete(\n max_seq_length - min_seq_length\n )\n\n def sample(self):\n seq_length = self.length_select.sample() + self.min_seq_length\n return [self.space.sample() for _ in range(seq_length)]\n\n def contains(self, x):\n if not isinstance(x, Sized):\n return False\n\n if self.min_seq_length <= len(x) <= self.max_seq_length:\n return False\n\n return all([self.space.contains(el) for el in x])\n\n def __repr__(self):\n return (\n f\"ListSpace({self.space}, min_seq_length=\"\n f\"{self.min_seq_length}, max_seq_length={self.max_seq_length})\"\n )\n", "path": "habitat/core/spaces.py"}]} | 2,018 | 435 |
gh_patches_debug_20970 | rasdani/github-patches | git_diff | fidals__shopelectro-453 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove a category from the feed
From the gm.yml feed, remove the category https://www.shopelectro.ru/catalog/categories/usiliteli-zvuka-dlia-slaboslyshashchikh/
Remove it from this feed only.
</issue>
<code>
[start of shopelectro/management/commands/price.py]
1 import os
2
3 from django.conf import settings
4 from django.core.management.base import BaseCommand
5 from django.template.loader import render_to_string
6 from django.urls import reverse
7
8 from shopelectro.models import Product, Category
9
10
11 class Command(BaseCommand):
12 """Generate yml file for a given vendor (YM or price.ru)."""
13
14 # Online market services, that works with our prices.
15 # Dict keys - url targets for every service
16 TARGETS = {
17 'YM': 'yandex.yml',
18 'priceru': 'priceru.xml',
19 'GM': 'gm.yml',
20 'SE78': 'se78.yml',
21 }
22 # price files will be stored at this dir
23 BASE_DIR = settings.ASSETS_DIR
24
25 IGNORED_CATEGORIES = [
26 'Измерительные приборы', 'Новогодние вращающиеся светодиодные лампы',
27 'Новогодние лазерные проекторы', 'MP3- колонки', 'Беспроводные звонки',
28 'Радиоприёмники', 'Фонари', 'Отвертки', 'Весы электронные портативные',
29 ]
30
31 def create_prices(self):
32 for target in self.TARGETS.items():
33 self.generate_yml(*target)
34
35 def handle(self, *args, **options):
36 self.create_prices()
37
38 @classmethod
39 def get_context_for_yml(cls, utm):
40 """Create context dictionary for rendering files."""
41 def put_utm(product):
42 """Put UTM attribute to product."""
43 utm_marks = [
44 ('utm_source', utm),
45 ('utm_medium', 'cpc'),
46 ('utm_content', product.get_root_category().page.slug),
47 ('utm_term', str(product.vendor_code)),
48 ]
49
50 url = reverse('product', args=(product.vendor_code,))
51 utm_mark_query = '&'.join('{}={}'.format(k, v) for k, v in utm_marks)
52 product.utm_url = '{}{}?{}'.format(settings.BASE_URL, url, utm_mark_query)
53
54 product.prepared_params = list(
55 filter(
56 lambda x: x[0].name != 'Производитель',
57 product.params
58 )
59 )
60
61 return product
62
63 def put_crumbs(product): # Ignore PyDocStyleBear
64 """Crumbs for google merchant. https://goo.gl/b0UJQp"""
65 product.crumbs = ' > '.join(
66 product.page.get_ancestors_fields('h1', include_self=False)[1:]
67 )
68 return product
69
70 def filter_categories(utm):
71 categories_to_exclude = (
72 Category.objects
73 .filter(name__in=cls.IGNORED_CATEGORIES)
74 .get_descendants(include_self=True)
75 )
76
77 result_categories = Category.objects.exclude(id__in=categories_to_exclude)
78
79 if utm == 'YM':
80 """
81 Yandex Market feed requires items in some categories to have pictures
82 To simplify filtering we are excluding all categories
83 which don't contain at least one product with picture
84 """
85 result_categories = result_categories.get_categories_tree_with_pictures()
86
87 return result_categories
88
89 def prepare_products(categories_, utm):
90 """Filter product list and patch it for rendering."""
91 products_except_others = (
92 Product.objects
93 .select_related('page')
94 .prefetch_related('category')
95 .prefetch_related('page__images')
96 .filter(category__in=categories_, price__gt=0, page__is_active=True)
97 )
98
99 if utm == 'YM':
100 """
101 Yandex Market feed requires items in some categories to have pictures
102 To simplify filtering we are excluding all products without pictures
103 """
104 products_except_others = (
105 products_except_others
106 .filter(page__images__isnull=False)
107 .distinct()
108 )
109
110 result_products = [
111 put_crumbs(put_utm(product))
112 for product in products_except_others
113 ]
114
115 return result_products
116
117 categories = (
118 filter_categories(utm) if utm != 'SE78'
119 else Category.objects.all()
120 )
121
122 products = prepare_products(categories, utm)
123
124 return {
125 'base_url': settings.BASE_URL,
126 'categories': categories,
127 'products': products,
128 'shop': settings.SHOP,
129 'utm': utm,
130 }
131
132 @classmethod
133 def generate_yml(cls, utm, file_name):
134 """Generate yml file."""
135 file_to_write = os.path.join(cls.BASE_DIR, file_name)
136 context = cls.get_context_for_yml(utm)
137
138 with open(file_to_write, 'w', encoding='utf-8') as file:
139 file.write(render_to_string('prices/price.yml', context).strip())
140
141 return '{} generated...'.format(file_name)
142
[end of shopelectro/management/commands/price.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py
--- a/shopelectro/management/commands/price.py
+++ b/shopelectro/management/commands/price.py
@@ -1,3 +1,4 @@
+from collections import defaultdict
import os
from django.conf import settings
@@ -28,6 +29,10 @@
'Радиоприёмники', 'Фонари', 'Отвертки', 'Весы электронные портативные',
]
+ IGNORED_CATEGORIES_BY_TARGET = defaultdict(list, {
+ 'GM': ['Усилители звука для слабослышащих'],
+ })
+
def create_prices(self):
for target in self.TARGETS.items():
self.generate_yml(*target)
@@ -71,6 +76,7 @@
categories_to_exclude = (
Category.objects
.filter(name__in=cls.IGNORED_CATEGORIES)
+ .filter(name__in=cls.IGNORED_CATEGORIES_BY_TARGET[utm])
.get_descendants(include_self=True)
)
| {"golden_diff": "diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py\n--- a/shopelectro/management/commands/price.py\n+++ b/shopelectro/management/commands/price.py\n@@ -1,3 +1,4 @@\n+from collections import defaultdict\n import os\n \n from django.conf import settings\n@@ -28,6 +29,10 @@\n '\u0420\u0430\u0434\u0438\u043e\u043f\u0440\u0438\u0451\u043c\u043d\u0438\u043a\u0438', '\u0424\u043e\u043d\u0430\u0440\u0438', '\u041e\u0442\u0432\u0435\u0440\u0442\u043a\u0438', '\u0412\u0435\u0441\u044b \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0435 \u043f\u043e\u0440\u0442\u0430\u0442\u0438\u0432\u043d\u044b\u0435',\n ]\n \n+ IGNORED_CATEGORIES_BY_TARGET = defaultdict(list, {\n+ 'GM': ['\u0423\u0441\u0438\u043b\u0438\u0442\u0435\u043b\u0438 \u0437\u0432\u0443\u043a\u0430 \u0434\u043b\u044f \u0441\u043b\u0430\u0431\u043e\u0441\u043b\u044b\u0448\u0430\u0449\u0438\u0445'],\n+ })\n+\n def create_prices(self):\n for target in self.TARGETS.items():\n self.generate_yml(*target)\n@@ -71,6 +76,7 @@\n categories_to_exclude = (\n Category.objects\n .filter(name__in=cls.IGNORED_CATEGORIES)\n+ .filter(name__in=cls.IGNORED_CATEGORIES_BY_TARGET[utm])\n .get_descendants(include_self=True)\n )\n", "issue": "\u0423\u0431\u0435\u0440\u0438 \u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u044e \u0438\u0437 \u0444\u0438\u0434\u0430\n\u0418\u0437 \u0444\u0438\u0434\u0430 gm.yml - \u0443\u0431\u0438\u0440\u0430\u0439 \u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u044e https://www.shopelectro.ru/catalog/categories/usiliteli-zvuka-dlia-slaboslyshashchikh/\r\n\u0423\u0431\u0440\u0430\u0442\u044c \u0442\u043e\u043b\u044c\u043a\u043e \u0438\u0437 \u044d\u0442\u043e\u0433\u043e \u0444\u0438\u0434\u0430. \n", "before_files": [{"content": "import os\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\n\nfrom shopelectro.models import Product, Category\n\n\nclass Command(BaseCommand):\n \"\"\"Generate yml file for a given vendor (YM or price.ru).\"\"\"\n\n # Online market services, that works with our prices.\n # Dict keys - url targets for every service\n TARGETS = {\n 'YM': 'yandex.yml',\n 'priceru': 'priceru.xml',\n 'GM': 'gm.yml',\n 'SE78': 'se78.yml',\n }\n # price files will be stored at this dir\n BASE_DIR = settings.ASSETS_DIR\n\n IGNORED_CATEGORIES = [\n '\u0418\u0437\u043c\u0435\u0440\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0435 \u043f\u0440\u0438\u0431\u043e\u0440\u044b', '\u041d\u043e\u0432\u043e\u0433\u043e\u0434\u043d\u0438\u0435 \u0432\u0440\u0430\u0449\u0430\u044e\u0449\u0438\u0435\u0441\u044f \u0441\u0432\u0435\u0442\u043e\u0434\u0438\u043e\u0434\u043d\u044b\u0435 \u043b\u0430\u043c\u043f\u044b',\n '\u041d\u043e\u0432\u043e\u0433\u043e\u0434\u043d\u0438\u0435 \u043b\u0430\u0437\u0435\u0440\u043d\u044b\u0435 \u043f\u0440\u043e\u0435\u043a\u0442\u043e\u0440\u044b', 'MP3- \u043a\u043e\u043b\u043e\u043d\u043a\u0438', '\u0411\u0435\u0441\u043f\u0440\u043e\u0432\u043e\u0434\u043d\u044b\u0435 \u0437\u0432\u043e\u043d\u043a\u0438',\n '\u0420\u0430\u0434\u0438\u043e\u043f\u0440\u0438\u0451\u043c\u043d\u0438\u043a\u0438', '\u0424\u043e\u043d\u0430\u0440\u0438', '\u041e\u0442\u0432\u0435\u0440\u0442\u043a\u0438', '\u0412\u0435\u0441\u044b \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0435 \u043f\u043e\u0440\u0442\u0430\u0442\u0438\u0432\u043d\u044b\u0435',\n ]\n\n def create_prices(self):\n for target in 
self.TARGETS.items():\n self.generate_yml(*target)\n\n def handle(self, *args, **options):\n self.create_prices()\n\n @classmethod\n def get_context_for_yml(cls, utm):\n \"\"\"Create context dictionary for rendering files.\"\"\"\n def put_utm(product):\n \"\"\"Put UTM attribute to product.\"\"\"\n utm_marks = [\n ('utm_source', utm),\n ('utm_medium', 'cpc'),\n ('utm_content', product.get_root_category().page.slug),\n ('utm_term', str(product.vendor_code)),\n ]\n\n url = reverse('product', args=(product.vendor_code,))\n utm_mark_query = '&'.join('{}={}'.format(k, v) for k, v in utm_marks)\n product.utm_url = '{}{}?{}'.format(settings.BASE_URL, url, utm_mark_query)\n\n product.prepared_params = list(\n filter(\n lambda x: x[0].name != '\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c',\n product.params\n )\n )\n\n return product\n\n def put_crumbs(product): # Ignore PyDocStyleBear\n \"\"\"Crumbs for google merchant. https://goo.gl/b0UJQp\"\"\"\n product.crumbs = ' > '.join(\n product.page.get_ancestors_fields('h1', include_self=False)[1:]\n )\n return product\n\n def filter_categories(utm):\n categories_to_exclude = (\n Category.objects\n .filter(name__in=cls.IGNORED_CATEGORIES)\n .get_descendants(include_self=True)\n )\n\n result_categories = Category.objects.exclude(id__in=categories_to_exclude)\n\n if utm == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures\n To simplify filtering we are excluding all categories\n which don't contain at least one product with picture\n \"\"\"\n result_categories = result_categories.get_categories_tree_with_pictures()\n\n return result_categories\n\n def prepare_products(categories_, utm):\n \"\"\"Filter product list and patch it for rendering.\"\"\"\n products_except_others = (\n Product.objects\n .select_related('page')\n .prefetch_related('category')\n .prefetch_related('page__images')\n .filter(category__in=categories_, price__gt=0, page__is_active=True)\n )\n\n if utm == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures\n To simplify filtering we are excluding all products without pictures\n \"\"\"\n products_except_others = (\n products_except_others\n .filter(page__images__isnull=False)\n .distinct()\n )\n\n result_products = [\n put_crumbs(put_utm(product))\n for product in products_except_others\n ]\n\n return result_products\n\n categories = (\n filter_categories(utm) if utm != 'SE78'\n else Category.objects.all()\n )\n\n products = prepare_products(categories, utm)\n\n return {\n 'base_url': settings.BASE_URL,\n 'categories': categories,\n 'products': products,\n 'shop': settings.SHOP,\n 'utm': utm,\n }\n\n @classmethod\n def generate_yml(cls, utm, file_name):\n \"\"\"Generate yml file.\"\"\"\n file_to_write = os.path.join(cls.BASE_DIR, file_name)\n context = cls.get_context_for_yml(utm)\n\n with open(file_to_write, 'w', encoding='utf-8') as file:\n file.write(render_to_string('prices/price.yml', context).strip())\n\n return '{} generated...'.format(file_name)\n", "path": "shopelectro/management/commands/price.py"}]} | 1,995 | 259 |